Tag Archives: quantization

Compressing VGG for Style Transfer

[Image comparison: style transfer output at 32-bit float (no quantization), 8-bit, 7-bit, 6-bit, 5-bit, 4-bit, 3-bit, 2-bit, and 1-bit quantization]

I recently implemented pastiche—discussed in a prior post—for applying neural style transfer. I encountered a size limit when uploading the library to PyPI, as a package cannot exceed … Continue reading
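The bit depths above suggest uniform weight quantization. As a minimal sketch (not necessarily the method the pastiche library uses), n-bit quantization can map each float weight to one of 2^n evenly spaced levels between the tensor's min and max, which is what shrinks the stored model:

```python
import numpy as np

def quantize(weights, bits):
    """Uniformly quantize a float array to 2**bits levels, then dequantize.

    Storing only the integer codes plus (w_min, scale) is what reduces
    the on-disk size; the return value here is the dequantized
    approximation used at inference time.
    """
    levels = 2 ** bits
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / (levels - 1)
    codes = np.round((weights - w_min) / scale)  # integer codes in [0, levels-1]
    return codes * scale + w_min

# Hypothetical example: quantize a small weight vector to 3 bits (8 levels).
w = np.linspace(-1.0, 1.0, 9).astype(np.float32)
w_q = quantize(w, 3)
```

At 8 bits the reconstruction error per weight is at most half a quantization step, which is why the 8-bit result above is typically indistinguishable from the 32-bit original, while 1-bit collapses every weight to one of two values.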
