mlx.core.dequantize#
- dequantize(w: array, /, scales: array, biases: Optional[array] = None, group_size: Optional[int] = None, bits: Optional[int] = None, mode: str = 'affine', dtype: Optional[Dtype] = None, *, stream: Union[None, Stream, Device] = None) → array#
Dequantize the matrix w using quantization parameters.

- Parameters:
  - w (array) – Matrix to be dequantized
  - scales (array) – The scales to use per group_size elements of w.
  - biases (array, optional) – The biases to use per group_size elements of w. Default: None.
  - group_size (int, optional) – The size of the group in w that shares a scale and bias. See supported values and defaults in the table of quantization modes. Default: None.
  - bits (int, optional) – The number of bits occupied by each element of w in the quantized array. See supported values and defaults in the table of quantization modes. Default: None.
  - dtype (Dtype, optional) – The data type of the dequantized output. If None, the return type is inferred from the scales and biases when possible and otherwise defaults to bfloat16. Default: None.
  - mode (str, optional) – The quantization mode. Default: "affine".
- Returns:
  The dequantized version of w
- Return type:
  array
Notes

The currently supported quantization modes are "affine", "mxfp4", "mxfp8", and "nvfp4".

For affine quantization, given the notation in quantize(), we compute \(w_i\) from \(\hat{w_i}\) and the corresponding scale \(s\) and bias \(\beta\) as follows:

\[w_i = s \hat{w_i} + \beta\]
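The affine formula above can be illustrated with a small NumPy sketch. This is not the mlx.core implementation (which operates on packed, bit-compressed arrays); it is a minimal model assuming unpacked integer values where each group of group_size elements of w shares one scale and one bias:

```python
import numpy as np

def dequantize_affine(w_hat, scales, biases, group_size=32):
    """Sketch of affine dequantization: w_i = s * w_hat_i + beta.

    w_hat:  integer array of quantized values, shape (..., n)
    scales, biases: one value per group, shape (..., n // group_size)
    """
    *lead, n = w_hat.shape
    # View the last axis as (num_groups, group_size) so each group
    # broadcasts against its own scale and bias.
    groups = w_hat.reshape(*lead, n // group_size, group_size).astype(np.float32)
    s = scales[..., None]   # one scale per group
    b = biases[..., None]   # one bias per group
    return (s * groups + b).reshape(*lead, n)

# Round-trip check on a single group of 4 values with 4-bit quantization:
x = np.array([0.0, 0.5, 1.0, 1.5], dtype=np.float32)
s = (x.max() - x.min()) / 15.0          # 4 bits -> 16 levels
b = x.min()
w_hat = np.round((x - b) / s).astype(np.int32)
print(dequantize_affine(w_hat, np.array([s]), np.array([b]), group_size=4))
```

Because the example values land exactly on quantization levels, the round trip recovers them without error; in general dequantization only approximates the original matrix to within the step size s.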