Video details

USENIX Security '20 - TextShield: Robust Text Classification Based on Multimodal Embedding and Neural Machine Translation

Security
09.15.2020
English

TextShield: Robust Text Classification Based on Multimodal Embedding and Neural Machine Translation
Jinfeng Li, Zhejiang University, Alibaba Group; Tianyu Du, Zhejiang University; Shouling Ji, Zhejiang University, Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies; Rong Zhang and Quan Lu, Alibaba Group; Min Yang, Fudan University; Ting Wang, Pennsylvania State University
Text-based toxic content detection is an important tool for reducing harmful interactions in online social media environments. Yet, its underlying mechanism, deep learning-based text classification (DLTC), is inherently vulnerable to maliciously crafted adversarial texts. To mitigate such vulnerabilities, intensive research has been conducted on strengthening English-based DLTC models. However, the existing defenses are not effective for Chinese-based DLTC models, due to the unique sparseness, diversity, and variation of the Chinese language.
In this paper, we bridge this striking gap by presenting TextShield, a new adversarial defense framework specifically designed for Chinese-based DLTC models. TextShield differs from previous work in several key aspects: (i) generic – it applies to any Chinese-based DLTC model without requiring re-training; (ii) robust – it significantly reduces the attack success rate even under the setting of adaptive attacks; and (iii) accurate – it has little impact on the performance of DLTC models over legitimate inputs. Extensive evaluations show that it outperforms both existing methods and industry-leading platforms. Future work will explore its applicability to broader practical tasks.
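For readers who want a concrete picture of the kind of pipeline the title describes, the following is a minimal illustrative sketch only, not the authors' implementation: it assumes a correction step (standing in for the paper's neural machine translation component) applied before a classifier that fuses semantic, glyph, and phonetic character embeddings. All module names, shapes, and the identity "correct" placeholder are hypothetical.

# Hypothetical sketch of a TextShield-style inference pipeline (not the authors' code).
import torch
import torch.nn as nn


class MultimodalClassifier(nn.Module):
    """Toy classifier that fuses three character-level embedding views."""

    def __init__(self, vocab_size: int, embed_dim: int = 64, num_classes: int = 2):
        super().__init__()
        # One embedding table per modality; a real system would derive glyph and
        # phonetic features from dedicated encoders rather than plain lookup tables.
        self.semantic = nn.Embedding(vocab_size, embed_dim)
        self.glyph = nn.Embedding(vocab_size, embed_dim)
        self.phonetic = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(3 * embed_dim, embed_dim, batch_first=True)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Concatenate the three embedding views per character, then encode the sequence.
        fused = torch.cat(
            [self.semantic(token_ids), self.glyph(token_ids), self.phonetic(token_ids)],
            dim=-1,
        )
        _, hidden = self.encoder(fused)
        return self.head(hidden[-1])


def correct(token_ids: torch.Tensor) -> torch.Tensor:
    """Placeholder for a translation-style model mapping possibly adversarial text
    to corrected text; here it is simply the identity."""
    return token_ids


def classify(model: MultimodalClassifier, token_ids: torch.Tensor) -> torch.Tensor:
    # Defense pipeline: correct the input first, then classify the corrected text.
    return model(correct(token_ids)).argmax(dim=-1)


if __name__ == "__main__":
    model = MultimodalClassifier(vocab_size=5000)
    dummy = torch.randint(0, 5000, (2, 20))  # batch of 2 sequences, 20 tokens each
    print(classify(model, dummy))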
View the full USENIX Security '20 program at https://www.usenix.org/conference/usenixsecurity20/technical-sessions