module owlite.calibrators
function calibrate
```python
calibrate(
    model: GraphModule | DataParallel | DistributedDataParallel
) -> CalibrationContext
```
Calibration is performed using the supplied data within a `with` statement. `owlite.calibrate` performs Post-Training Quantization (PTQ) calibration on a model converted with `OwLite.convert`. Calibration is required to preserve the model's accuracy by carefully selecting the quantization hyperparameters (the scale and zero-point). PTQ calibration typically requires only a small subset of the training data.

Please review the Calibrator documentation for technical details.
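Because only a small subset of the training data is needed, a calibration set can be as simple as the first few batches of the training loader. The helper below is a hypothetical sketch (`calibration_batches` is not part of OwLite) showing one way to slice such a subset from any iterable loader:

```python
from itertools import islice

def calibration_batches(dataloader, num_batches=32):
    # Hypothetical helper: PTQ calibration needs only a handful of
    # batches, so take the first `num_batches` from the loader.
    return list(islice(dataloader, num_batches))

# Works with any iterable, e.g. a torch DataLoader or a plain range.
subset = calibration_batches(range(1000), num_batches=32)
```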
Args:
- `model` (`GraphModuleOrDataParallel`): `GraphModule` or `DataParallel` model to calibrate.

Returns:
- `CalibrationContext`
Usage

`owlite.calibrate` returns an `owlite.CalibrationContext` object that can be used with a `with` statement to perform calibration. The `CalibrationContext` prepares the model for calibration and updates the model's fake quantizers after calibration is complete.
Example

```python
with owlite.calibrate(model):
    for i, data in enumerate(trainloader):
        model(*data)  # feed data to the model and collect statistics from it
    # step sizes and zero-points of the fake quantizers are calculated here

# use the model outside of the block after the calibration
torch.save(model.state_dict(), "calibrated_model.pth")
```
In this example, `owlite.calibrate` creates an `owlite.CalibrationContext` that manages the calibration process. The training data fetched from `trainloader` are then fed to the model to perform calibration.
Note that you should continue writing your code outside of the `with` block, since the fake quantizers in the model are updated as the `with` block exits.
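The update-on-exit behavior can be illustrated with a minimal, self-contained sketch. None of these classes belong to OwLite; they assume a simple min-max calibrator that collects statistics inside the `with` block and computes the step size and zero-point only when the block exits:

```python
class FakeQuantizerSketch:
    """Hypothetical stand-in for a fake quantizer (not OwLite's API)."""

    def __init__(self, num_bits: int = 8):
        self.num_bits = num_bits
        self.min_val = float("inf")
        self.max_val = float("-inf")
        self.step_size = None   # filled in when calibration finishes
        self.zero_point = None

    def observe(self, values):
        # During calibration, only collect running min/max statistics.
        self.min_val = min(self.min_val, min(values))
        self.max_val = max(self.max_val, max(values))


class CalibrationContextSketch:
    """Computes quantization parameters as the `with` block exits."""

    def __init__(self, quantizer: FakeQuantizerSketch):
        self.quantizer = quantizer

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        q = self.quantizer
        qmax = 2 ** q.num_bits - 1  # e.g. 255 for 8-bit quantization
        q.step_size = (q.max_val - q.min_val) / qmax
        q.zero_point = round(-q.min_val / q.step_size)
        return False  # do not suppress exceptions


q = FakeQuantizerSketch()
with CalibrationContextSketch(q):
    q.observe([-1.0, 0.5, 3.0])  # inside the block: statistics only
# after the block: q.step_size and q.zero_point are now set
```

This mirrors why the model should only be used after the block: inside it, the quantizer parameters are still unset.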
class CalibrationContext
ContextManager for calibration.
method __init__
```python
__init__(model: GraphModule | DataParallel | DistributedDataParallel)
```
Updated: 2024-06-13T23:42:41