BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction

Yuhang Li*, Ruihao Gong*, Xu Tan, Yang Yang, Peng Hu, Qi Zhang, Fengwei Yu, Wei Wang, Shi Gu (* equal contribution)

Conference paper, International Conference on Learning Representations (ICLR), 2021.