As the sophistication of machine learning and artificial intelligence (AI) continues to grow, there is a wide spectrum of views regarding the potential consequences of this Fourth Industrial Revolution. Current levels of concern are so high that AI has been identified by Cambridge University as one of the few existential risks facing humanity.
At Newton, we keep hearing two narratives about AI. One is that it will bring significant benefits to humanity by automating complex tasks with greater accuracy than is currently possible, improving productivity and freeing up humans to solve more complex tasks or pursue other interests. The other is more dystopian and depicts mass job losses, a need for universal basic income, and robot takeover. At the moment, the truth is that either outcome is possible. The actual outcome is likely to depend on how governments and companies choose to govern and implement principles relating to the ethical design of AI.
What are AI and machine learning?
Broadly speaking, AI is a branch of computer science that builds machines capable of intelligent behaviour, while machine learning is the science of getting computers to learn and improve from experience without being explicitly programmed. Although AI and machine learning have been around for 40-50 years, these topics have grown in importance over the last five years owing to the huge drop in computing costs and widespread adoption of the internet.
What will the impacts be?
Historically, automation has affected the labour market in three ways. First, new technologies lead to a direct substitution of jobs currently performed by workers (the displacement effect); second, there is a complementary increase in jobs and tasks necessary to use, run and supervise the new machines (the skill-complementarity effect); and, third, there is a demand effect both from lower prices and a general increase in disposable income in the economy owing to higher productivity (the productivity effect).
The possibility of mass job losses is disputed by some analysts, who point to the fact that innovation has always led to creative economic destruction, where one industry rises as another falls. As engineering jobs in car manufacturing were automated and became redundant, new demand for labour grew in internet design and, more recently, phone apps. Owing to a lack of formalised global employment data collection, the net effect is difficult to track, but over the long term global employment rates have remained stable. This indicates that, despite population growth and previous rounds of automation, sufficient new jobs have been created.
Others, however, point to the unprecedented job losses that AI will cause and highlight that the jobs created will not be able to compensate. A McKinsey report predicted that up to 30% of the workforce (800 million jobs) could be lost to automation by 2030. Andrew Yang, a 2020 US presidential candidate and education entrepreneur, is running his campaign on the basis that universal basic income will be required to compensate for millions of job losses in trucking, manufacturing and retail that simply won’t be replaced. The resulting lack of jobs will reduce disposable income and negatively affect corporate revenues, thereby dramatically changing our economy and society.
So what do responsible AI and machine learning look like?
Roy Amara, the late American researcher and former president of the Institute for the Future, coined what is now known as Amara’s law: we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. Ensuring that future risks are considered in relation to today’s decision-making is at the heart of sustainability, and a core part of our integrated approach to environmental, social and governance (ESG) analysis.
From a responsible-investment perspective, we want the companies that we invest in on behalf of our clients to adopt an approach to AI that incorporates ethical design principles. This should help reduce the risk of regulatory action, union disputes or revenue-reducing consumer backlash related to negative AI or machine-learning outcomes. We think a starting point for a sustainable approach to AI includes:
- Understanding the long-term social and environmental consequences if the technology is rolled out on a large scale
- Having a plan to minimise the negative social and environmental consequences and maximise the benefits
- A clear determination of who bears the risk of responsibility if the outcome of AI decisions is harmful
In addition, we believe AI developments should incorporate:
- Clear processes for monitoring and managing unintended or irrational results
- Clear documentation of the reasoning behind AI decision-making
- Controls to identify and correct bias and discrimination
- The ability to override or turn off the AI decision-making process
Finally, as more data is gathered for input into decision-making, principles should be agreed in advance as to whether information can be used again, forgotten or used for other purposes.
There are clearly significant potential benefits – and investment opportunities – from AI and machine learning, but, as with all technological advancements, proper planning using established risk-management techniques should help avoid unintended consequences and/or negative outcomes.
This is a financial promotion. Any reference to a specific security, country or sector should not be construed as a recommendation to buy or sell investments in those countries or sectors. Please note that holdings and positioning are subject to change without notice.
This material is for Australian wholesale clients only and is not intended for distribution to, nor should it be relied upon by, retail clients. This information has not been prepared to take into account the investment objectives, financial objectives or particular needs of any particular person. Before making an investment decision you should carefully consider, with or without the assistance of a financial adviser, whether such an investment strategy is appropriate in light of your particular investment needs, objectives and financial circumstances.
Newton Investment Management Limited is exempt from the requirement to hold an Australian financial services licence in respect of the financial services it provides to wholesale clients in Australia and is authorised and regulated by the Financial Conduct Authority of the UK under UK laws, which differ from Australian laws.
Newton Investment Management Limited (Newton) is authorised and regulated in the UK by the Financial Conduct Authority (FCA), 12 Endeavour Square, London, E20 1JN. Newton is providing financial services to wholesale clients in Australia in reliance on ASIC Corporations (Repeal and Transitional) Instrument 2016/396, a copy of which is on the website of the Australian Securities and Investments Commission, www.asic.gov.au. The instrument exempts entities that are authorised and regulated in the UK by the FCA, such as Newton, from the need to hold an Australian financial services license under the Corporations Act 2001 for certain financial services provided to Australian wholesale clients on certain conditions. Financial services provided by Newton are regulated by the FCA under the laws and regulatory requirements of the United Kingdom, which are different to the laws applying in Australia.