Some potential risks can be managed, but difficult questions remain.
- Companies across the technology sector have seen renewed investor interest as their ties to artificial intelligence (AI) have boosted share prices.
- As the disruptive potential of AI becomes clear, public policies and regulations will need to be formulated to address the potential adverse effects of the technology.
- We discuss key areas of risk that our responsible investment team believes are important to consider when evaluating AI adoption, and that we believe companies should address as they advance AI.
Artificial intelligence (AI) has become much more prominent in technology solutions, especially in search engines, personal shopping, fraud prevention, maps, facial recognition, and autonomous vehicles, to name a few. In recent months, however, AI has drawn more attention than ever before and has taken the world by storm. Like the industrial revolution, the AI revolution has both positive and adverse societal implications.
On one hand, two months after the release of OpenAI's ChatGPT, it reached 100 million monthly active users, making it the fastest-growing consumer application in history. On the other hand, four months after its launch, some countries have banned its use and others are in the process of restricting it. Opinion among prominent tech leaders is divided. Philanthropist and Microsoft founder Bill Gates calls AI the "biggest innovation since the user-friendly computer" and refers to the current era as the age of AI. Conversely, Elon Musk and Apple co-founder Steve Wozniak are asking for a six-month pause on the development of advanced AI systems, citing profound risks to society and humanity. Since every innovation brings its own positives and negatives, society should stay alert while adopting such changes. We believe advanced AI could bring about a profound change to life on earth and should be planned for and managed with commensurate care and resources.
Most agree that AI can transform the productivity and GDP potential of the global economy. PwC estimates that AI could contribute up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined. Of this, $6.6 trillion is likely to come from increased productivity and $9.1 trillion is likely to come from consumption-side effects. Companies across software, internet, and semiconductors have seen renewed investor interest as their ties to the technology have boosted share prices. However, one should also consider risk factors arising from widespread adoption of AI, especially those related to social risk. In our view, AI will help companies to achieve numerous sustainability objectives like mitigating climate change, redefining biodiversity protection and monitoring, and improving health care and education systems, and it may lead to improved company financial performance. However, we also see risk factors that environmental, social and governance (ESG) investors will need to consider in exploring AI-driven opportunities.
Recently, big technology companies have been cutting personnel from teams dedicated to evaluating ethical issues around deploying AI, raising concerns about the safety of the new technology as it becomes widely adopted across consumer products. Below we outline key areas of risk that our responsible investment team believes are important to consider when evaluating AI adoption, and that companies should address as they advance AI.
Unemployment and income inequality
Like many revolutionary technologies before it, AI is likely to eliminate some job categories. Many professions face significant risk of job loss, with the most at-risk professions including telemarketers and a variety of post-secondary teachers. Goldman Sachs estimates that one fifth of manual jobs, or 300 million jobs globally, are at risk. McKinsey estimates AI automation will displace between 400 and 800 million jobs, requiring as many as 375 million people to switch job categories. A report from OpenAI and the University of Pennsylvania states that 80% of the US workforce could have at least 10% of their work tasks affected, while around 19% of workers may see at least 50% of their tasks affected by the introduction of AI tools like ChatGPT.
In tandem with these job losses, the earning capability of middle- and lower-class workers will be affected, as job losses are disproportionately concentrated among mid-level employees and upper middle-management personnel. This is likely to widen already stark income and wealth inequality, creating a new wave of billionaire tech barons at the same time as it pushes many employees out of higher-paying jobs. In the past, automation mainly affected factory jobs, but AI is expected to hurt mid-level, white-collar jobs more than lower-paying, physically intensive jobs.
The good news is that worker displacement from automation has historically been offset by the creation of new jobs. The emergence of new occupations following technological innovation accounts for most of the long-run employment growth. The combination of significant labour cost savings, new job creation, and higher productivity for non-displaced workers raises the possibility of a productivity boom that increases economic growth substantially, although the timing of such a boom is hard to predict.
Bias and discrimination
Generative AI models can perpetuate and even amplify existing biases in the data used to train them. For example, a model trained on a biased dataset of news articles might generate text that reflects those biases. In addition, if the people training the algorithm do not come from a range of diverse backgrounds, they may not be able to account for certain biases or experiences that are relevant to ESG issues. This could perpetuate harmful stereotypes and discrimination. Hiring and loan approvals are prominent examples. For instance, Amazon shut down its AI recruiting tool after a year of use when developers learned that the tool was penalising women: about 60% of the candidates chosen by the tool were male, a result of patterns in Amazon's historical recruitment data.
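The mechanism can be sketched with a toy example (hypothetical data, not Amazon's actual system): a naive model that scores candidates against historical outcomes simply inherits whatever skew those outcomes contain.

```python
# Illustrative sketch with invented data: past hiring favoured one group,
# so a model fitted to that history reproduces the same skew.

# Historical records: (group, hired) - the data itself is biased.
history = ([("male", True)] * 60 + [("male", False)] * 20
           + [("female", True)] * 15 + [("female", False)] * 25)

def historical_hire_rate(records, group):
    """Fraction of candidates in a group who were hired in the data."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# A model that scores candidates by their group's historical hire rate
# learns the bias rather than candidate merit.
print(round(historical_hire_rate(history, "male"), 2))    # 0.75
print(round(historical_hire_rate(history, "female"), 2))  # 0.38
```

The point of the sketch is that no explicit rule discriminates: the disparity comes entirely from the training data, which is why auditing datasets matters as much as auditing model code.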
Misinformation
'Hallucination' (the potential to generate an incorrect answer with confidence) is a significant misinformation risk. Generative AI could also create deepfakes or other manipulated content, which can be used to spread misinformation or cause harm. Generative AI employs machine-learning techniques to generate new information, which may result in inaccuracies. These AI models are built on publicly available data, and Section 230 of US law provides that internet platforms hosting third-party content are not liable for what those third parties post. Training models on publicly available information may therefore amplify misinformation.
Additionally, a pre-trained language model lacks the ability to update and adapt to new information. Language models have recently become more skilled at speaking persuasively and eloquently, but this advanced proficiency has also brought the potential to spread false information or even to create false statements.
Autonomy and accountability
We have already seen examples of autonomous machines, and as AI tools become embedded in them more commonly, much ambiguity remains about liability and accountability in decision-making. The healthcare industry should get a good boost from autonomous AI-powered diagnostic tools, but if bad actors combine AI technology with synthetic biology, serious problems could ensue. In theory, bad actors may be able to use AI systems to synthesise viruses that were previously unfeasible for individuals to create, potentially leading to a very dangerous pandemic. Automation could also be extended to weapons: for instance, a weapon could independently identify and engage targets based on programmed constraints and descriptions. The democratisation of AI can be concerning, to put it mildly.
Data privacy and cyber security
AI can empower bad actors with harmful instruments such as malware, phishing, and identity-based ransomware attacks. In general, these threats could have broad-based implications for cybersecurity, particularly for email security, identity security, and threat detection. Generative AI models can be used to create realistic synthetic data, raising concerns about the protection of individuals' privacy. The data collected for AI technologies are meant to train models for a good purpose but can be used in ways that violate the privacy of the data owners. For instance, energy-usage data can be collected and used to help residential customers be more energy efficient and lower their bills, but the same data can also be used to derive personal information such as the occupation and religion of the residents. On the security side, data associated with many grid assets are prone to security concerns; therefore, access to and usage of the data require strict controls.
Intellectual property theft
Beyond compliance considerations, there is currently little stopping companies from using AI-generated content. This raises questions about who holds the rights to that content and how those rights can be exploited. The issue of copyright centres on three main questions: Should works created by AI be eligible for copyright protection? Who would have ownership rights over the created content? Can copyrighted, generated data be used for training purposes? Artists, writers, musicians and other creative professionals worry about the threat to their original work from AI technology. With no precedent in place, it is still early days for legal recourse, which limits large-scale enterprise adoption.
The US Copyright Office released a statement saying that there must be a human-created component to any AI output before someone can claim copyright. This means that generating something using AI is not enough to claim copyright; there needs to be some evidence of human input somewhere. This is still a grey area, and no laws are in place around this topic.
Artificial general intelligence (AGI) / singularity
A machine capable of human-level understanding could pose a threat to humanity, and such development may need to be regulated. Although most AI experts do not expect the singularity (the erasing of the boundary between humanity and AI) any time soon (not before 2060 at the earliest), ethical considerations will come more sharply into focus as AI capabilities increase.
When people discuss AI, they mostly mean narrow AI systems (or 'weak AI'), which are designed to handle a single or limited task. AGI, on the other hand, is the form of artificial intelligence we see in science-fiction books and movies: machines that can understand or learn any intellectual task that a human being can. Since experts do not expect this to happen in the real world before 2060, there is time to construct a regulatory framework to ensure it is developed safely.
Loss of social connection/values
Automation and the use of AI tools can erode social connection. With AI, the disconnect between how we interact on social media and how we connect in person will grow even wider, and people may lose some of their human values as a result.
Impact on educational systems
Generative AI raises many questions about its social implications for educational systems. It has the potential to drastically reshape them, and the introduction of the technology to classrooms has elicited both excitement and distress. On the positive side, AI can offer students a more personalised learning experience by providing feedback and recommendations tailored to individual needs and abilities. This can help keep students engaged and motivated and can lead to improved academic performance. Conversely, generative AI has created a backlash over its potential negative impact on student learning.
Given its potential for enormous benefits, AI has taken a prominent place in the business world and society. However, these benefits could come at a price, and there is much speculation about how widespread use of AI will affect society as a whole. Larger concerns around AI include job losses and the spread of misinformation and disinformation. Another widespread concern is that it could be used to do harm. AI is one of the most fundamentally revolutionary technologies in human history, and its transformative capacity demands respect and caution. If AI turns out to be as transformative as it appears, it can be transformative for both virtuous and, unfortunately, nefarious ends. But technology has always been accompanied by a fear of the unknown.
We do believe that implementing a comprehensive, responsible AI programme should help companies minimise these risks. Such a programme includes the policies, governance, processes, tools and broader cultural change needed to ensure AI systems are built and implemented in a manner consistent with organisational values and norms. When properly implemented, a responsible AI programme should reduce the frequency of failures by identifying and mitigating issues before a system is deployed. And while failures may still occur, their severity can be lower, creating less harm to individuals and society. Instead of waiting to scale their AI efforts, companies should start early and focus on responsible AI. This helps ensure the right controls are in place to minimise the risk of scaling AI and, as an added benefit, can also increase the business value of AI systems.
Source: Sizing the prize: What’s the real value of AI for your business and how can you capitalise? https://www.pwc.com/gx/en/issues/analytics/assets/pwc-ai-analysis-sizing-the-prize-report.pdf
Section 230 is a section of Title 47 of the United States Code that was enacted as part of the Communications Decency Act of 1996, which is Title V of the Telecommunications Act of 1996.
This is a financial promotion. These opinions should not be construed as investment or other advice and are subject to change. This material is for information purposes only. This material is for professional investors only. Any reference to a specific security, country or sector should not be construed as a recommendation to buy or sell investments in those securities, countries or sectors. Please note that holdings and positioning are subject to change without notice. Newton manages a variety of investment strategies. Whether and how ESG considerations are assessed or integrated into Newton’s strategies depends on the asset classes and/or the particular strategy involved, as well as the research and investment approach of each Newton firm. ESG may not be considered for each individual investment and, where ESG is considered, other attributes of an investment may outweigh ESG considerations when making investment decisions.
This material is for Australian wholesale clients only and is not intended for distribution to, nor should it be relied upon by, retail clients. This information has not been prepared to take into account the investment objectives, financial objectives or particular needs of any particular person. Before making an investment decision you should carefully consider, with or without the assistance of a financial adviser, whether such an investment strategy is appropriate in light of your particular investment needs, objectives and financial circumstances.
Newton Investment Management Limited is exempt from the requirement to hold an Australian financial services licence in respect of the financial services it provides to wholesale clients in Australia and is authorised and regulated by the Financial Conduct Authority of the UK under UK laws, which differ from Australian laws.
Newton Investment Management Limited (Newton) is authorised and regulated in the UK by the Financial Conduct Authority (FCA), 12 Endeavour Square, London, E20 1JN. Newton is providing financial services to wholesale clients in Australia in reliance on ASIC Corporations (Repeal and Transitional) Instrument 2016/396, a copy of which is on the website of the Australian Securities and Investments Commission, www.asic.gov.au. The instrument exempts entities that are authorised and regulated in the UK by the FCA, such as Newton, from the need to hold an Australian financial services license under the Corporations Act 2001 for certain financial services provided to Australian wholesale clients on certain conditions. Financial services provided by Newton are regulated by the FCA under the laws and regulatory requirements of the United Kingdom, which are different to the laws applying in Australia.