We discuss several of the key social impacts of social media, and why they matter to investors.  

Key points

  • The negative effects of social media, viewed from a social perspective, leave the industry vulnerable to increasing regulation and changing consumer perceptions.
  • While social-media censorship may be confined to certain regions, it not only risks giving people a warped perception of the world, but also incurs significant costs for the platforms that operate in those regions.
  • Perhaps less widely understood are the impacts of content moderation: the workers who carry it out are exposed to potentially distressing content, with attendant mental-health risks.
  • Other issues, such as data protection, show how inadequate safeguards can have costly financial implications for social-media companies.

Almost 4.6 billion people worldwide use social media – just under 60% of the global population.1 Social media clearly plays a significant role in our lives, but there are a number of social impacts that we believe should be important considerations for investors: not only do these have implications for doing business responsibly, but they may also carry financial risks.

Social-media censorship

Social-media censorship involves the suppression of speech, public communication or other information. While we tend not to see censorship on Western social-media platforms, it is prolific in other parts of the world, including China, Russia and Iran. The key drivers of censorship in these regions are generally information control and the protection of domestic firms. In some countries, governments have the power to remove or censor online content that criticises the state, and to block access to foreign sites. There is a clear motivation for platforms to comply with censorship requirements: those that do not risk fines and the loss of their licence to operate. The social impact of this censorship, however, is that people risk developing a warped world view, given that they are exposed only to government-censored content.

In contrast, content removal is far more difficult to achieve on US social-media platforms, making it practically impossible for regimes in markets dominated by US firms to engage in social-media censorship.

For those social-media sites that do comply with censorship requirements, there are significant costs involved: the direct cost of employing people to conduct the censorship, plus the indirect cost of providing the additional resources they need to carry out their jobs. Moreover, because these companies are known to engage in pervasive censorship, they are often perceived negatively by consumers in regions not subject to censorship, which diminishes their prospects of moving into lucrative new markets.

Content moderation

Content moderation is the process of vetting content posted on a site to ensure that it abides by site policy and provides a safe and healthy environment for users. The process entails applying pre-set rules to monitor content; anything that does not satisfy the guidelines is flagged and removed. The reasons are varied, and include violence, offensiveness, extremism, nudity, hate speech and copyright infringement. While moderation is typically carried out using artificial intelligence (AI), human review is still needed where software cannot automatically interpret and flag certain content, and this exposes the moderator to potentially distressing subject matter.

A 2019 study by Google researchers confirmed the mental-health risks associated with content moderation, noting “an increasing awareness and recognition that beyond mere unpleasantness, long-term or extensive viewing of such disturbing content can incur significant health consequences for those engaged in such tasks”.2

After coming under significant scrutiny, companies like Facebook are changing the way they treat employees involved in content moderation, offering better salaries, access to counsellors and other benefits to make the job more manageable. However, many outsourced third-party workers do not have access to these benefits.

We believe that, in order to do business responsibly, social-media companies will need to ensure they provide proper care for these workers, whether directly employed or outsourced. While this is likely to require significant investment, failing to take such measures could lead to legal action against the company, resulting in legal costs as well as reputational damage. This was demonstrated in 2021, when more than 10,000 former and current moderators won an $85 million settlement against Facebook for failing to protect them from psychological injuries. The amount included a $52 million fund for continuing mental-health treatment.3

Bias and fake news

The aim of social media is generally to keep users on the site, so that the platform can harvest data and generate advertising revenue. To do this, social-media sites typically use algorithms designed to hold attention and promote the most trending topics, making them inherently biased. In addition, ‘fake news’ is becoming a major problem, as is the newer issue of ‘deepfakes’ (the use of AI to create convincing fake video or audio). The circulation of this misleading content can be detrimental to public opinion and to the advancement of political and social issues. One example was the 2016 US presidential election, where the volume of fake news on social media caused significant public concern.

Ensuring informational integrity can be difficult, but improving transparency around these algorithmic ‘black boxes’, into which there is currently no visibility, may be a key step towards understanding how safe these sites are. The complexity of the algorithms also makes it difficult for investors to make informed decisions. From an investment perspective, we therefore think it is important that social-media platforms can demonstrate how they ensure informational integrity and combat the distribution and spread of fake news.

Data protection

With the rise of big data, it is crucial that companies do not exploit or unfairly farm user data. The core principles of data protection include integrity, confidentiality, transparency and accuracy. The risk for multinational platforms is that differences in regional data-protection laws can leave specific demographics open to exploitation. Firms may need to enforce platform-wide policies that meet the highest international standards to ensure that all users are sufficiently protected.

As investors, it is important that we understand companies’ data-protection practices and ensure they meet these high standards. The abuse and leakage of data can cause severe harm to the victims, as well as significant reputational and financial damage to the firm, depending on the severity of the breach. Even with minor breaches, the erosion of trust between consumers and producers can have a material financial impact.

In April 2018, the Cambridge Analytica scandal made headlines, with 87 million Facebook users having their personal data improperly obtained by the political data-analytics firm.4 As a result, Facebook’s market capitalisation fell by 18% in just over a week, and the company received a $5bn fine a year later.5 This example highlights how important it is for investors to be aware of platforms’ data-protection policies, given the significant financial ramifications that weak data protection can entail.

Increasing regulation and changing consumer perceptions

The negative effects of social media from a social perspective, as discussed, leave the industry vulnerable to increasing regulation and changing consumer perceptions. While we have not discussed the effects of social media on mental health here, owing to the complexity of that issue, it is another material consideration which may also have financial implications for social-media platforms.

Risks relating to content moderation and censorship may be concentrated in certain regions, whereas risks around bias and fake news appear universal, affecting democratic and autocratic states alike. Likewise, following the Cambridge Analytica scandal, many consumers may be aware that their personal data could be exploited by platforms, but we expect many would be shocked by the role of content moderators, the material they must view, and the impact this has on them.

We believe this is a prime example of how, through active management, we can aim to better understand a material social risk, which is in turn factored into our investment decision-making processes, with the intention of being forward- rather than backward-looking. This analysis can also inform and enable more detailed engagements on the topic, helping our conversations with companies, and our understanding of how risks are managed, to move beyond a high-level view.


  1. Statista. Number of social media users worldwide from 2018 to 2022, with forecasts from 2023 to 2027. Accessed 14 September 2022: https://www.statista.com/statistics/278414/number-of-worldwide-social-network-users/
  2. Sowmya Karunakaran and Rashmi Ramakrishan (Google Inc.). Testing Stylistic Interventions to Reduce Emotional Impact of Content Moderation Workers. 2019.
  3. Daniel Wiessner. Reuters. Judge OKs $85 mln settlement of Facebook moderators’ PTSD claims. 23 July 2021.
  4. BBC. Facebook ‘to be fined $5bn over Cambridge Analytica scandal’. 13 July 2019.
  5. Bloomberg. Facebook Stock’s Familiar Crisis Cycle: Decline, Rebound, Repeat. 6 October 2021.


Newton responsible investment team





