BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction

Jan 1, 2021 · Yuhang Li\*, Ruihao Gong\*, Xu Tan, Yang Yang, Peng Hu, Qi Zhang, Fengwei Yu, Wei Wang, Shi Gu (\* equal contribution)

Type: Conference paper
Publication: International Conference on Learning Representations