A team of researchers from the University of Southern California (USA) has analyzed the presence of hate speech on the social network X (formerly Twitter) from its purchase by Elon Musk in October 2022 until June 2023. They find that this type of racist, homophobic and transphobic speech increased by approximately 50% over this period. In addition, the presence of bots and fake accounts did not decrease, contrary to Musk's own promises. The results are published in the journal PLOS ONE.

Daniel Gayo Avello
Full Professor at the University of Oviedo in the area of “Computer Languages and Systems”
The article analyzes the prevalence of hate speech and 'fake' accounts on Twitter (now X) following its purchase by Elon Musk.
To do so, the authors analyzed data from early 2022 to mid-2023. Elon Musk's entrance into the Twitter offices carrying a sink, on October 26, 2022, visually marked his takeover of the platform; the dataset therefore covers the situation on the platform well before the purchase and leaves more than enough time for Musk's changes to the platform and its staff, as well as his troll-in-chief attitude, to 'permeate' among users.
The authors of the study found a significant increase in hate speech manifested as racism, homophobia and transphobia, along with increased user interaction with such content.
The study shows no reduction in the activity of fake accounts (such as bots); the authors even point out that it may have increased.
The results contradict many of Elon Musk's claims regarding the reduction in 'fake' account activity after he took over the company and implemented his changes.
Overall, the study is solid given the enormous limitations on studying Twitter since its purchase by Elon Musk, and the fact that Kristina Lerman is among the authors is a guarantee of quality. The longitudinal approach followed, even if it does not allow causality claims (i.e., we cannot 'blame' Elon Musk for the new problems on Twitter, nor claim he intended them), does clearly show a change for the worse in the discourse and toxicity of the platform after the purchase.
The data capture was done with the Twitter API for Academic Research, which ensures systematic and transparent data collection within the limitations set by the company itself. It should be noted that academic access ceased around September 21, 2023; it is possible that this is one of the last extensive studies to be conducted with the academic API and that, as of today, a study of this nature is basically impossible to conduct.
As far as the methods used to detect hate speech or 'fake' accounts are concerned, they are reasonable, in the sense that any researcher facing this type of study would use similar approaches. Of course, this does not mean that they are 'bulletproof'. Using a dictionary to find hate speech tweets and then running them through the Perspective API (a toxic content detection service) is reasonable, but not 100% accurate. That is, there is bound to be hate speech that has slipped through the cracks, and there may be a presumably small percentage of merely rude or profanity-laced texts that have been flagged as hate speech.
All in all, the approach is reasonable and I am sure that, in aggregate, the indicators of increased hate speech are correct.
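To make that two-stage pipeline more concrete, the sketch below shows, in broad strokes, how a lexicon filter can be combined with the Perspective API's toxicity score. It is only an illustration under my own assumptions (placeholder lexicon terms, minimal error handling, the 0.7 cutoff the paper reports), not the authors' actual code.

```python
import requests

# Placeholder lexicon; the study used a much larger hate-speech dictionary.
LEXICON = {"slur_a", "slur_b"}

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"


def lexicon_match(text: str) -> bool:
    """Stage 1: keep only tweets containing at least one lexicon term."""
    tokens = {t.strip(".,!?:;\"'").lower() for t in text.split()}
    return bool(tokens & LEXICON)


def toxicity_score(text: str, api_key: str) -> float:
    """Stage 2: score a candidate tweet with the Perspective API (TOXICITY attribute)."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def is_hate_speech(text: str, api_key: str, threshold: float = 0.7) -> bool:
    # 0.7 is the cutoff the paper uses for clearly hateful content.
    return lexicon_match(text) and toxicity_score(text, api_key) >= threshold
```

Text that passes the lexicon filter but scores below the cutoff is exactly the merely rude content the pipeline is meant to discard, while hateful content phrased without any lexicon term never reaches the second stage; this is the slippage described above.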
As far as the detection of 'fake' accounts is concerned, an approach based on detecting coordinated campaigns is the most reasonable according to recent literature, rather than simplistically trying to determine whether or not an individual account is a bot.
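As an illustration of what campaign-level detection can look like (not necessarily the metrics used in the paper), a common approach in the literature is to flag pairs of accounts whose behavior overlaps far more than chance would allow, for example accounts that retweet almost exactly the same set of tweets. A minimal sketch, with thresholds chosen purely for illustration:

```python
from collections import defaultdict
from itertools import combinations


def coordinated_pairs(retweets, min_shared=10, min_jaccard=0.8):
    """Flag account pairs whose sets of retweeted tweets overlap suspiciously.

    retweets: iterable of (account_id, retweeted_tweet_id) pairs.
    Returns a list of (account_a, account_b, jaccard_similarity).
    """
    by_account = defaultdict(set)
    for account, tweet_id in retweets:
        by_account[account].add(tweet_id)

    flagged = []
    for a, b in combinations(sorted(by_account), 2):
        shared = by_account[a] & by_account[b]
        if len(shared) < min_shared:
            continue
        jaccard = len(shared) / len(by_account[a] | by_account[b])
        if jaccard >= min_jaccard:
            flagged.append((a, b, jaccard))
    return flagged
```

Clusters built from such pairs point to coordinated campaigns as a whole, which sidesteps the much harder question of whether any single account, taken in isolation, is a bot.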
The work is broadly consistent with previous research focused on the same question (namely, changes in Twitter's toxicity following Musk's purchase) in two ways:
- It shows an increase in hate speech (though not as extreme as, for example, Hickey et al., 'Auditing Elon Musk's impact on hate speech and bots'). The authors of this study argue that the difference arises because Hickey et al. included in their lexicon the word “retarded”, which is now used as an insult by right-wing users but which, according to the authors, should not be considered hate speech toward any minority group.
- The activity of inauthentic accounts has not noticeably decreased and, in fact, may have increased (particularly among accounts posting about cryptocurrencies).
As far as the interesting contributions of this paper are concerned:
- The time period analyzed is much longer, both before the purchase by Elon Musk and during his tenure as CEO of the company; studies of a social nature (and this is certainly one) are more robust when the periods under study are longer.
- The study of inauthentic accounts uses different metrics for their detection, which offers greater robustness to the findings.
- Hate speech is not analyzed monolithically but in different dimensions (racism, homophobia and transphobia), finding that transphobic speech is the one that experienced the greatest growth (note: Elon Musk has made many comments that have been interpreted as transphobic).
- It openly confronts Elon Musk's claims with the researchers' findings.
- It takes into account external events that may have played some role in the changes in hate speech (e.g., a beer commercial starring a trans woman).
As far as implications are concerned:
- It allows one to argue that having content moderation staff (Elon Musk fired most of them, I seem to recall) has a positive impact on the platform by reasonably keeping hate speech at bay.
- It empirically confirms the 'feelings' (and decisions) of many people and organizations that toxicity and hate on Twitter have increased and make it an unhealthy platform where one has to weigh whether to stay or leave.
- In the absence of other studies focusing on the increase of this type of discourse (racist, homophobic and transphobic) in other media (e.g., the traditional press), in legislative chambers (e.g., US Congress or Senate proceedings), or even in hate crime figures, one can only make 'guesses' about the real-world impact of this increase. However, the rise of hate speech on social media can influence the public agenda and offline attitudes, which should raise concern about the potential impact (and harm) on individuals from the groups targeted by this hate.
Regarding limitations:
- Only English-language content was analyzed, so nothing can be said about the rise of hate speech or inauthentic activity in other languages and cultures. The choice of language, however, is reasonable given that, with hindsight, the purchase of Twitter could be related to the U.S. election. In this regard, it should be noted that, although the majority of tweets in English were probably from US users, users from other countries were also present.
- The analysis of hate speech focused on clearly hateful speech (toxicity according to the Perspective API greater than or equal to 0.7); the authors themselves point out that the study of more subtle or veiled hate speech remains to be done.
- The study of coordinated and inauthentic activity could be refined.
- The study relies heavily on third-party tools such as the Perspective API or Botometer (the latter has been openly criticized).
- Apparently, the authors themselves encountered changes and limitations in academic access while conducting their study (see page 6 of the article).
However, I must insist that many of the limitations are not inherent to this study but to any study conducted on Twitter and are consequently unavoidable.
All in all, I think it is a robust, interesting study that makes a strong case about hate speech and toxicity on Twitter in the U.S. post-Elon Musk buyout.
Amalia Álvarez Benjumea
Researcher at the Institute of Public Goods and Policies of the CSIC (IPP-CSIC)
This study is an advance over previous research because it analyzes a longer period on X since Elon Musk's purchase of Twitter, allowing the researchers to compare content on the platform with an equivalent period prior to the purchase. Overall, the results show that hate speech increased on X after the purchase of the platform, and did so across different dimensions, such as racism, homophobia and transphobia, along with increased engagement with these messages. However, the major limitation of the study is that it cannot establish a direct causal relationship between this increase in hate and the purchase of the platform or the changes in its moderation policies, as it lacks an adequate comparison group. The results quantify the increase in hate messages, but this growth may be due to specific events, a pull effect attracting new users, or changes in the composition of X's user base.
In addition, the authors use the number of likes on hate posts as an indicator of engagement, but given that policies on who can view and like posts changed during the period analyzed, the comparison is not entirely accurate. There are also limitations in the methodology used to detect hateful content: the study uses a dictionary-based method which, while useful for analyzing large volumes of data, has significant problems. On the one hand, this approach relies on predefined lists of words considered offensive, which can generate false positives when terms are matched out of context (e.g., a word that is offensive in one context but not in another). On the other hand, there may be false negatives, since language is flexible and changes over time; hatred may be expressed subtly or through euphemisms that are not in the word list used for classification. Still, the study is relevant because it shows a dramatic increase in the number of messages with hateful content on the platform.
Walter Quattrociocchi
Director of the Laboratory of Data and Complexity for Society at the University of Rome La Sapienza (Italy)
This study analyzes trends in hate speech on X and suggests an increase after its acquisition by Elon Musk. It uses lexicon-based automatic classification tools and engagement metrics, which are common in computational social science but have inherent limitations.
Key contributions:
- The study provides a useful temporal perspective on changes in content moderation and its possible effects on the prevalence of hate speech.
- It adds to the ongoing debate on the role of platform policies in shaping online discourse.
Broader context and limitations:
- Robust research has shown that online toxicity follows persistent patterns across different platforms and time periods, driven more by the dynamics of human interaction than by platform-specific policies.
- Automated classifiers, such as those used in this study, can introduce biases and have issues with contextual nuances, which can lead to misclassification of discourse.
- Correlation does not imply causation: although the study identifies an increase in hate speech, attributing it directly to the Musk takeover requires a more controlled experimental design.
- The analysis is limited to English-language content, which may not reflect global trends on the platform.
Final thoughts:
While this study contributes to the debate on platform governance and moderation, broader research suggests that toxic behaviors are remarkably stable over time, regardless of changes in platform-specific policies. Future studies should incorporate multiplatform and multilingual perspectives to fully understand the dynamics of hate speech in online spaces.