-
Wukong-Reader: Multi-modal Pre-training for Fine-grained Visual Document Understanding
Haoli Bai, Zhiguang Liu, Xiaojun Meng, Wentao Li, Shuang Liu, Nian Xie, Rongfu Zheng, Liangwei Wang, Lu Hou, Jiansheng Wei, Xin Jiang, Qun Liu
To appear in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), July 2023.
-
mCLIP: Multilingual CLIP via Cross-lingual Transfer
Guanhua Chen, Lu Hou, Yun Chen, Wenliang Dai, Lifeng Shang, Xin Jiang, Qun Liu, Jia Pan and Wenping Wang
To appear in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), July 2023.
-
Structured Pruning for Efficient Generative Pre-trained Language Models
Chaofan Tao, Lu Hou, Haoli Bai, Jiansheng Wei, Xin Jiang, Qun Liu, Ping Luo and Ngai Wong
To appear in Findings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL Findings), July 2023.
-
LiteVL: Efficient Video-Language Learning with Enhanced Spatial-Temporal Modeling
Dongsheng Chen, Chaofan Tao, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), Dec 2022.
-
Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark [pytorch code][MindSpore code][website]
Jiaxi Gu, Xiaojun Meng, Guansong Lu, Lu Hou, Minzhe Niu, Xiaodan Liang, Lewei Yao, Runhui Huang, Wei Zhang, Xin Jiang, Chunjing Xu, Hang Xu
Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks, Nov 2022.
-
Towards Efficient Post-training Quantization of Pre-trained Language Models
Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Irwin King, Michael R Lyu
Proceedings of the Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS), Nov 2022.
-
Compression of Generative Pre-trained Language Models via Quantization
Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, Ngai Wong
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), Outstanding Paper Award, May 2022.
-
Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation
Wenliang Dai, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Pascale Fung
Findings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL Findings), May 2022.
-
FILIP: Fine-grained Interactive Language-Image Pre-Training [checkpoints]
Lewei Yao*, Runhui Huang*, Lu Hou*, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, Chunjing Xu
Proceedings of the Tenth International Conference on Learning Representations (ICLR), Apr 2022.
-
Improved OOD Generalization via Adversarial Training and Pretraining
Mingyang Yi, Lu Hou, Jiacheng Sun, Lifeng Shang, Xin Jiang, Qun Liu, Zhiming Ma
Proceedings of the Thirty-eighth International Conference on Machine Learning (ICML), Jul 2021.
-
BinaryBERT: Pushing the Limit of BERT Quantization [code]
Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, Irwin King
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL), Aug 2021.
-
GhostBERT: Generate More Features with Cheap Operations for BERT
Zhiqi Huang, Lu Hou, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL), Oral, Aug 2021.
-
Reweighting Augmented Samples by Minimizing the Maximal Expected Loss
Mingyang Yi, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Zhi-Ming Ma
Proceedings of the Ninth International Conference on Learning Representations (ICLR), May 2021.
-
DynaBERT: Dynamic BERT with Adaptive Width and Depth [code]
Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu
Proceedings of the Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS), Spotlight (4.07%), Dec 2020.
-
TernaryBERT: Distillation-aware Ultra-low Bit BERT [pytorch code][mindspore code]
Wei Zhang*, Lu Hou*, Yichun Yin*, Lifeng Shang, Xiao Chen, Xin Jiang, Qun Liu
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Nov 2020.
-
Normalization Helps Training of Quantized LSTM [code]
Lu Hou, Jinhua Zhu, James T. Kwok, Fei Gao, Tao Qin, Tie-Yan Liu
Proceedings of the Thirty-third Conference on Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada, Dec 2019.
-
Analysis of Quantized Models
Lu Hou, Ruiliang Zhang, James T. Kwok
Proceedings of the Seventh International Conference on Learning Representations (ICLR), New Orleans, Louisiana, USA, May 2019.
-
Loss-aware Weight Quantization of Deep Networks [code]
Lu Hou, James T. Kwok
Proceedings of the Sixth International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, Apr 2018.
-
Loss-aware Binarization of Deep Networks [code]
Lu Hou, Quanming Yao, James T. Kwok
Proceedings of the Fifth International Conference on Learning Representations (ICLR), Toulon, France, Apr 2017.
-
Efficient Learning of Timeseries Shapelets [code]
Lu Hou, James T. Kwok, Jacek M. Zurada
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI), pp.1209-1215, Phoenix, AZ, USA, Feb 2016.