Bedovyy committed · Commit 40b8452 (verified) · Parent: 65ed264

Update README.md

Files changed (1)
  1. README.md +9 -8
README.md CHANGED
@@ -39,12 +39,19 @@ extra_gated_fields:
  Country: country
  I agree to use this model for non-commercial use ONLY: checkbox
  base_model:
- - CohereForAI/c4ai-command-a-03-2025
+ - CohereLabs/c4ai-command-a-03-2025
  ---
 
 
  # GPTQ quantization of Command-A
 
+
+ I made this for running on vLLM.
+
+ Non-english performance may be significantly dropped. Recommend to set `temperature` to 0.6~0.8.
+
+ ## Quantization method
+
  Quantized using
 
  - Tool: [GPTQModel 2.3.0-dev (bafda24)](https://github.com/ModelCloud/GPTQModel/commit/bafda2489f9582b2cdbf6a5fb8b242aa38a02cda).
@@ -64,8 +71,7 @@ calibration_dataset = load_dataset(
      "allenai/c4",
      data_files={"train": "en/c4-train.00001-of-01024.json.gz"},
      split="train",
- ).select(range(2048))["text"]
-
+ ).select(range(2048))["text"]
 
  quant_config = QuantizeConfig(
      bits=4,
@@ -87,11 +93,6 @@ model.quantize(
  model.save(quant_path)
  ```
 
- I made this for running on vLLM.
-
- Non-english performance may be significantly dropped. Recommend to set `temperature` to 0.6~0.8.
-
-
  ---
 
  # **Model Card for C4AI Command A**
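The updated README states that this quantization was made for running on vLLM and recommends a `temperature` of 0.6~0.8. Below is a minimal inference sketch under those assumptions; the repository id, GPU count, and prompt are illustrative placeholders and are not part of the commit.

```python
# Minimal vLLM inference sketch for a GPTQ-quantized Command-A checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Bedovyy/c4ai-command-a-03-2025-GPTQ",  # placeholder repo id or local path
    quantization="gptq",      # vLLM can also auto-detect this from the checkpoint config
    tensor_parallel_size=4,   # adjust to your GPU count; Command-A is a large model
)

# The README recommends temperature in the 0.6~0.8 range.
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)

outputs = llm.generate(
    ["Summarize what GPTQ quantization does in two sentences."],
    params,
)
print(outputs[0].outputs[0].text)
```

The same sampling settings apply when serving the model through `vllm serve`; only the quantized checkpoint path and the temperature recommendation come from the README itself.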