TY - GEN
T1 - Linguistic fingerprints of internet censorship: The case of Sina Weibo
T2 - 34th AAAI Conference on Artificial Intelligence, AAAI 2020
AU - Ng, Kei Yin
AU - Feldman, Anna
AU - Peng, Jing
N1 - Publisher Copyright:
Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2020
Y1 - 2020
AB - This paper studies how the linguistic components of blogposts collected from Sina Weibo, a Chinese microblogging platform, might affect the blogposts’ likelihood of being censored. Our results are consistent with King et al. (2013)’s Collective Action Potential (CAP) theory, which states that a blogpost’s potential to cause a riot or assembly in real life is the key determinant of whether it gets censored. Although there is no definitive measure of this construct, the linguistic features that we identify as discriminatory accord with the CAP theory. We build a classifier that significantly outperforms non-expert humans in predicting whether a blogpost will be censored. The crowdsourcing results suggest that while humans tend to see censored blogposts as more controversial and more likely to trigger real-life action than their uncensored counterparts, they generally cannot make a better guess than our model when it comes to ‘reading the mind’ of the censors in deciding whether a blogpost should be censored. We do not claim that censorship is determined solely by linguistic features; many other factors contribute to censorship decisions. The focus of the present paper is on the linguistic form of blogposts. Our work suggests that it is possible to use the linguistic properties of social media posts to automatically predict whether they will be censored.
UR - http://www.scopus.com/inward/record.url?scp=85106043520&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85106043520
T3 - AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
SP - 446
EP - 453
BT - AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
PB - AAAI Press
Y2 - 7 February 2020 through 12 February 2020
ER -