Some potential risks can be managed, but difficult questions remain.

Key Points

  • Companies across the technology sector have seen renewed investor interest as their ties to artificial intelligence (AI) have boosted share prices.
  • As AI’s disruptive potential emerges, public policies and regulations will need to be formulated to address the technology’s potential adverse effects.
  • We discuss key areas of risk that our responsible investment team believes are important to consider when evaluating AI adoption, and that we believe companies should address as they advance AI.

Artificial intelligence (AI) has become much more prominent in technology solutions, from search engines, personal shopping, fraud prevention and maps to facial recognition and autonomous vehicles. But in recent months, AI has drawn more attention than ever before, taking the world by storm. Like the industrial revolution before it, the AI revolution has both positive and adverse societal implications.

On one hand, OpenAI’s ChatGPT reached 100 million monthly active users within two months of its release, making it the fastest-growing consumer application in history. On the other hand, within four months of its launch, some countries had banned its use and others were moving to restrict it. Opinion among prominent tech leaders is divided. Philanthropist and Microsoft founder Bill Gates is calling AI the “biggest innovation since the user-friendly computer” and referring to the current era as the age of AI. Conversely, Elon Musk and Apple co-founder Steve Wozniak have called for a six-month pause on the development of advanced AI systems, citing profound risks to society and humanity. Since every innovation brings both benefits and drawbacks, society should remain alert as it adopts these new systems. We believe advanced AI could bring about a profound change to life on earth and should be planned for and managed with commensurate care and resources.

Most agree that AI can transform the productivity and GDP potential of the global economy. PwC estimates that AI could contribute up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined; of this, $6.6 trillion is likely to come from increased productivity and $9.1 trillion from consumption-side effects.[1] Companies across software, internet, and semiconductors have seen renewed investor interest as their ties to the technology have boosted share prices. However, one should also consider the risk factors, particularly social ones, that arise from widespread adoption of AI. In our view, AI will help companies achieve numerous sustainability objectives such as mitigating climate change, redefining biodiversity protection and monitoring, and improving health care and education systems, and it may lead to improved company financial performance. However, we also see risk factors that environmental, social and governance (ESG) investors will need to consider in exploring AI-driven opportunities.

Recently, big tech companies have been cutting staff from teams dedicated to evaluating ethical issues around deploying AI, raising concerns about the safety of the new technology as it becomes widely adopted across consumer products. Below, we outline key areas of risk that our responsible investment team believes are important to consider when evaluating AI adoption and that companies should address as they advance AI.

Unemployment and Income Inequality

Like many revolutionary technologies before it, AI is likely to eliminate some job categories. Many professions face significant risk of job loss, with the most at-risk including telemarketers and a variety of post-secondary teachers. Goldman Sachs estimates that one fifth of manual jobs, or 300 million jobs globally, are at risk.[2] McKinsey estimates AI automation will displace between 400 and 800 million jobs, requiring as many as 375 million people to switch job categories.[3] A report from OpenAI and the University of Pennsylvania states that 80% of the US workforce could have at least 10% of their work tasks affected, while around 19% of workers may see at least 50% of their tasks impacted by the introduction of AI tools like ChatGPT.[4]

In tandem with these job losses, the earning power of middle- and lower-income workers is likely to suffer, as the roles most exposed to AI are concentrated among mid-level employees and upper middle-management personnel. This is likely to widen already stark income and wealth inequality by creating a new wave of billionaire tech barons at the same time that it pushes many employees out of higher-paying jobs. In the past, automation mainly affected factory jobs, but AI is expected to hurt mid-level, white-collar jobs more than lower-paying, physically intensive jobs.

The good news is that worker displacement from automation has historically been offset by the creation of new jobs. The emergence of new occupations following technological innovation accounts for the vast majority of long-run employment growth. The combination of significant labor cost savings, new job creation, and higher productivity for non-displaced workers raises the possibility of a productivity boom that increases economic growth substantially, although the timing of such a boom is hard to predict.

Bias

Generative AI models can perpetuate and even amplify existing biases in the data used to train them. For example, a model trained on a biased dataset of news articles might generate text that reflects those biases. In addition, if the people training the algorithm are not from a range of diverse backgrounds, they may not be able to account for certain biases or experiences that are relevant to ESG issues. This could perpetuate harmful stereotypes and discrimination. Hiring and loan approval are clear examples. For instance, Amazon shut down its AI recruiting tool after a year of use when developers learned that it was penalizing women: about 60% of the candidates chosen by the tool were male, a result of patterns in Amazon’s historical recruiting data.
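
The mechanism is simple to illustrate. The following minimal Python sketch (a hypothetical toy example, not Amazon’s actual system) “trains” a résumé scorer by counting keywords among past hires; because the historical data is skewed, the scorer penalizes terms associated with an under-represented group even when candidates are otherwise comparable.

```python
from collections import Counter

# Hypothetical historical hires (toy data): keyword lists from past résumés.
# The history is skewed, so terms tied to the under-represented group are rare.
historical_hires = [
    ["engineering", "hackathon", "captain"],
    ["engineering", "chess", "captain"],
    ["engineering", "hackathon", "football"],
    ["engineering", "football", "captain"],
    ["engineering", "womens", "chess"],  # only one past hire mentions "womens"
]

# "Training": weight each keyword by how often it appears among past hires.
keyword_weights = Counter(kw for resume in historical_hires for kw in resume)

def score(resume):
    """Score a résumé by summing the learned keyword weights."""
    return sum(keyword_weights.get(kw, 0) for kw in resume)

# Two otherwise comparable candidates; the second uses a keyword that is rare
# in the skewed history, so the model ranks that candidate lower.
print(score(["engineering", "hackathon", "captain"]))  # 5 + 2 + 3 = 10
print(score(["engineering", "hackathon", "womens"]))   # 5 + 2 + 1 = 8
```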

Misinformation

‘Hallucination’, the tendency of a model to generate an incorrect answer with confidence, is a major source of misinformation risk. Generative AI can also be used to create deepfakes or other manipulated content, which can be used to spread misinformation or cause harm. Generative AI employs machine-learning techniques to generate new information, which may result in inaccuracies. These models are trained on publicly available data, and Section 230 of US law[5] shields internet platforms hosting third-party content from liability for what those third parties post. Training models on this largely unvetted public information may therefore amplify misinformation.

Additionally, a pre-trained language model lacks the ability to update and adapt to new information. Recently, language models have become more skilled at speaking persuasively and eloquently, but this advanced proficiency has also increased their potential to spread false information or even fabricate statements.

Safety Concerns/Accountability

Autonomous machines are already in use, and embedding AI tools in them is expected to become more common, yet much ambiguity remains about liability and accountability for their decisions. Healthcare should get a good boost from autonomous AI-powered diagnostic tools, but if bad actors combine AI technology with synthetic biology, serious problems could ensue: in theory, AI systems could enable individuals to synthesize viruses that were previously beyond their reach, potentially leading to a very dangerous pandemic. Automation could also be extended to weapons, with a weapon independently identifying and engaging targets based on programmed constraints and descriptions. The democratization of AI can be concerning, to put it mildly.

Data Privacy and Cyber Security

AI can empower bad actors by making harmful instruments such as malware, phishing and identity-based ransomware attacks easier to produce. In general, these threats could have broad-based implications for cybersecurity, particularly for email security, identity security, and threat detection. Generative AI models can also be used to create realistic synthetic data, raising concerns about the protection of individuals’ privacy. The data collected for AI technologies are meant to train models for a good purpose, but they can be used in ways that violate the privacy of the data owners. For instance, energy usage data can be collected and used to help residential customers be more energy efficient and lower their bills, yet the same data can also be used to derive personal information such as the occupation and religion of the residents. On the security side, data associated with many grid assets are sensitive; therefore, access to and usage of those data require strict controls.
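
To make the energy-data example concrete, the following minimal Python sketch (using hypothetical, simplified data) shows how hourly smart-meter readings collected for efficiency advice can also reveal a lifestyle detail the customer never volunteered, such as whether anyone is usually home during working hours.

```python
# Hypothetical average consumption (kWh) for each hour of a weekday.
hourly_kwh = [0.3, 0.3, 0.3, 0.3, 0.4, 0.6, 1.1, 1.3,   # 00:00-07:59
              0.4, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3,   # 08:00-15:59
              0.5, 0.9, 1.4, 1.6, 1.3, 1.0, 0.6, 0.4]   # 16:00-23:59

daytime = hourly_kwh[9:17]   # 09:00-16:59 (typical working hours)
overnight = hourly_kwh[0:6]  # 00:00-05:59 (baseline: appliances only)

avg_day = sum(daytime) / len(daytime)
avg_night = sum(overnight) / len(overnight)

# If daytime usage barely exceeds the overnight baseline, the home is probably
# empty during working hours, hinting at the occupants' schedule and occupation.
likely_out_during_day = avg_day < 1.3 * avg_night
print(f"Daytime avg {avg_day:.2f} kWh vs overnight {avg_night:.2f} kWh; "
      f"likely out during working hours: {likely_out_during_day}")
```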

Intellectual Property Theft

Beyond compliance obligations, there is currently little stopping companies from using AI-generated content, which raises questions about who holds the rights to that content and how those rights can be exploited. The issue of copyright centers on three main questions: Should works created by AI be eligible for copyright protection? Who would have ownership rights over the created content? Can copyrighted material be used to train AI models? Artists, writers, musicians and other creative professionals worry about the threat that AI technology poses to their original work. With no precedent in place, it is still early days for legal recourse, which is limiting large-scale enterprise adoption.

The US Copyright Office has stated that there must be a human-created component to any AI output before someone can claim copyright. In other words, generating something using AI is not enough to claim copyright; there needs to be evidence of human input somewhere. This remains a gray area, and no specific laws are yet in place around the topic.

Artificial General Intelligence (AGI) / Singularity

A machine capable of human-level understanding could pose a threat to humanity, and such development may need to be regulated. Although most AI experts do not expect the singularity (the erasing of the boundary between humanity and AI) any time soon, and not before 2060 at the earliest, ethical considerations will come more sharply into focus as AI capabilities increase.

When people discuss AI, they mostly mean narrow AI systems (or ‘weak AI’), which are designed to handle a single or limited task. AGI, by contrast, is the form of artificial intelligence we see in science-fiction books and movies: machines able to understand or learn any intellectual task that a human being can. Since experts do not expect this to happen in the real world before 2060, there is time to construct a regulatory framework to ensure it is developed safely.

Loss of Social Connection/Values

Automation and the use of AI tools tend to erode social connection. With AI, the gap between how we interact on social media and how we actually connect is likely to grow even wider, and people may lose some of their human values as a result.

Impact on Educational Systems

Generative AI raises many questions about its social implications for education. It has the potential to drastically reshape educational systems, and its introduction to classrooms has elicited both excitement and distress. On the positive side, AI can offer students a more personalized learning experience by providing feedback and recommendations tailored to individual needs and abilities, which can help to keep students engaged and motivated and can lead to improved academic performance. Conversely, generative AI has drawn backlash over its potential negative impact on student learning.

Conclusion

Given its potential for enormous benefits, AI has taken a prominent place in the business world and society. However, these benefits could come at a price, and there is much speculation about how widespread use of AI will affect society as a whole. The larger concerns include job losses and the amplification of misinformation and disinformation, as well as the worry that AI could be used deliberately to do harm. AI is one of the most fundamentally revolutionary technologies in human history, but its transformative capacity demands respect and caution. If AI proves to be as transformative as it appears it will be, its effects could be both virtuous and, unfortunately, nefarious. But technology has always been accompanied by a fear of the unknown.

We do believe that implementing a comprehensive, responsible AI program should help companies minimize these risks. This includes the policies, governance, processes, tools and broader cultural change needed to make sure AI systems are built and implemented in a manner consistent with organizational values and norms. When properly implemented, a responsible AI program should reduce the frequency of failures by identifying and mitigating issues before a system is deployed. And while failures may still occur, their severity should be lower, causing less harm to individuals and society. Rather than waiting until they scale their AI efforts, companies should focus on responsible AI from the start. This can ensure the right controls are in place to minimize the risk of scaling AI and, as an added benefit, it can also increase the business value of the AI systems.


[1] Source: PwC, ‘Sizing the prize: What’s the real value of AI for your business and how can you capitalise?’ https://www.pwc.com/gx/en/issues/analytics/assets/pwc-ai-analysis-sizing-the-prize-report.pdf

[2] Source: Forbes, ‘Goldman Sachs Predicts 300 Million Jobs Will Be Lost or Degraded by Artificial Intelligence.’ https://www.forbes.com/sites/jackkelly/2023/03/31/goldman-sachs-predicts-300-million-jobs-will-be-lost-or-degraded-by-artificial-intelligence/?sh=7753f801782b

[3] Source: McKinsey, ‘Jobs Lost, Jobs Gained: What the Future of Work Will Mean for Jobs, Skills, and Wages.’ https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages

[4] Source: Eloundou et al., ‘GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.’ https://arxiv.org/abs/2303.10130

[5] Section 230 is a section of Title 47 of the United States Code, enacted as part of the Communications Decency Act of 1996 (Title V of the Telecommunications Act of 1996).

Authors

Onkar Jagtap

Responsible investment analyst*


PAST PERFORMANCE IS NOT NECESSARILY INDICATIVE OF FUTURE RESULTS. Any reference to a specific security, country or sector should not be construed as a recommendation to buy or sell this security, country or sector. Please note that strategy holdings and positioning are subject to change without notice. Newton manages a variety of investment strategies. How ESG considerations are assessed or integrated into Newton’s strategies depends on the asset classes and/or the particular strategy involved. ESG may not be considered for each individual investment and, where ESG is considered, other attributes of an investment may outweigh ESG considerations when making investment decisions. ESG considerations do not form part of the research process for Newton's small cap and multi-asset solutions strategies. For additional Important Information, click on the link below.

Important information

For Institutional Clients Only. Issued by Newton Investment Management North America LLC ("NIMNA" or the "Firm"). NIMNA is a registered investment adviser with the US Securities and Exchange Commission ("SEC") and subsidiary of The Bank of New York Mellon Corporation ("BNY Mellon"). The Firm was established in 2021, comprised of equity and multi-asset teams from an affiliate, Mellon Investments Corporation. The Firm is part of the group of affiliated companies that individually or collectively provide investment advisory services under the brand "Newton" or "Newton Investment Management". Newton currently includes NIMNA and Newton Investment Management Ltd ("NIM") and Newton Investment Management Japan Limited ("NIMJ").

Material in this publication is for general information only. The opinions expressed in this document are those of Newton and should not be construed as investment advice or recommendations for any purchase or sale of any specific security or commodity. Certain information contained herein is based on outside sources believed to be reliable, but its accuracy is not guaranteed.

Statements are current as of the date of the material only. Any forward-looking statements speak only as of the date they are made, and are subject to numerous assumptions, risks, and uncertainties, which change over time. Actual results could differ materially from those anticipated in forward-looking statements. No investment strategy or risk management technique can guarantee returns or eliminate risk in any market environment and past performance is no indication of future performance.

Information about the indices shown here is provided to allow for comparison of the performance of the strategy to that of certain well-known and widely recognized indices. There is no representation that such index is an appropriate benchmark for such comparison.

This material (or any portion thereof) may not be copied or distributed without Newton’s prior written approval.

In Canada, NIMNA is availing itself of the International Adviser Exemption (IAE) in the following Provinces: Alberta, British Columbia, Manitoba and Ontario and the foreign commodity trading advisor exemption in Ontario. The IAE is in compliance with National Instrument 31-103, Registration Requirements, Exemptions and Ongoing Registrant Obligations.
