In our March edition of Double Take, we delved into the ever-evolving world of generative AI. Since the launch of OpenAI’s ChatGPT earlier this year, the media has been buzzing about the cutting-edge technology and its potential to change life as we know it. Large multinational technology corporations are pouring billions of dollars into companies developing the large language models that power innovative chat-based AI systems and solutions. We turned our investigative lens to the potential risks and opportunities of investing in the early days of this nascent technology.

To better understand the evolution of the AI market and what the latest developments could mean for investors, we spoke to Rob May, an AI angel investor and entrepreneur. May likens the introduction of ChatGPT to the first iPhone in 2007, which revolutionized smartphone technology with its user-friendly interface.

There were smartphones before the iPhone, and the iPhone came out with this super intuitive interface that really brought mobile phones to the masses. But you could have had a BlackBerry before. There were all kinds of Nokia phones and Motorola phones…ChatGPT is really a user interface innovation on what was an existing OpenAI model that just made it really easy to play around with.

Rob May, AI entrepreneur and angel investor

Seemingly, ChatGPT became ubiquitous overnight. However, ChatGPT—along with most of today’s generative AI technology—was spun off from GPT-3, a large language model that launched in 2020 and represented a paradigm shift in the evolution of AI. If GPT-3 is anything like the base technology of smartphone companies back in the early 2000s, ChatGPT could be the next iPhone: a product innovation that transforms the way we live and work.

Davis Sawyer is the co-founder and chief product officer at Deeplite, a Montreal-based firm focused on developing cost-effective AI for consumer products. Sawyer explains that GPT-3 uses a deep learning algorithm that can process massive data sets, making it capable of a myriad of tasks with extraordinary levels of accuracy. The industry has dubbed AI models of this size and functionality foundation models.

They are 10,000 times bigger than any model we’ve seen before. And when you say bigger, you’ve got to think in terms of memory, so compute memory to store and train and actually have these models available…These hundred billion parameter models with hundreds of billions of sources of data have now enabled us to have more accurate models than ever, and that’s what’s precipitated this whole buzz.

Davis Sawyer, co-founder and chief product officer at Deeplite, chairperson at tinyML Foundation

While narrow artificial intelligence models are programmed to perform a single task (e.g. website chatbots and smartphone facial recognition technology), foundation models are trained with a vast amount of far-reaching data and can transfer knowledge from one task to another. This type of wide-ranging neural network can be trained once and then honed for different functions.[1]

According to May, foundation models were developed much faster than many industry experts anticipated. For this reason, he cautions prospective investors to recognize that the AI ecosystem could change considerably in the next three to five years, and even more so over the next two decades. His concern is that the companies stemming from today’s emerging technology (and employing high levels of capital expenditures) could become irrelevant as the models continue to develop. Changes in the space are happening relatively quickly, as GPT-4—the newest iteration of the large language model—was released shortly after we recorded this podcast episode.

One of the core tenets of today’s foundation models is that the amount of training data fed into the model is positively correlated with the predictive power of that model—in other words, a greater amount of training data should yield a more predictive model. In the ever-evolving AI ecosystem, May questions the staying power of that assumption.

Those of you that have children know that you don’t have to show your kids 8,000 versions of a coffee cup before they know what a coffee cup is. You show them like three coffee cups and they’re like, ‘Okay, I got it. That’s a coffee cup.’ And so, this idea that you need a lot of data to train AI models, a lot of people believe that at some point it’s going to go away. So when that happens, the foundation models could be at risk. Right now, it costs millions, maybe tens of millions of dollars, to train these ChatGPT (models) and millions of dollars a month to operate, but that might not be the case in a few years.

Rob May

May cautions against investing at this stage, owing to the rapid speed at which AI is developing. He believes that, eventually, training a large language model may not be as expensive or labor-intensive. Currently, the power, infrastructure, cloud-computing capabilities, data centers and bandwidth required to train a large language model are estimated to cost at least $10 million. Furthermore, this large cost does not even begin to cover the steep expenses associated with one of the most critical elements of AI: inferencing. Inferencing is the process by which a computer taps into all of the intelligence gathered and stored in the training phase to process and analyze new data (e.g. a ChatGPT query).

Sawyer says that creating a successful model is a bit of a double-edged sword in this regard. As user adoption increases, the collective costs associated with inferencing could become unbounded. Some companies are beginning to amortize training costs, aiming to ultimately pass off inferencing expenses to customers. Until that becomes scalable, Sawyer says that the reality of operating generative models is expensive regardless of end-user, and success really comes down to resources.

The bottom line—and this has been the dirty secret for some time, that’s now being thrown into the public sphere—is that whoever has the most computers wins, and I mean this exactly as it sounds. And this is true of not just consumer applications, like generating copy for a marketing product or marketing campaign. It also has implications for defense. It has implications for spatial and satellite imagery, really any source of information.

Davis Sawyer

Big players with immense financial wherewithal, such as Microsoft, Google, Meta and Amazon, are inherently advantaged, though most AI development is playing out in private markets. According to May, there are plenty of viable investment avenues in this space, although he cautions that none are without risk. The AI environment is dynamic and continuously evolving, and the recent release of GPT-4 underscores the importance of patience for AI companies and investors alike.

May is constructive on AI use cases in specific industries, such as health care and finance, which have lots of existing data and potential efficiencies. May is most optimistic about a hybrid approach—combining vertically integrated AI stacks, which could optimize certain workflows and change cost structures, with careful management that does not rely entirely on a costly algorithm that could be made redundant tomorrow. In other words, he believes a measured approach that contemplates gradual changes with the use of AI is the safest option for companies and investors at this stage.

Through his work at Deeplite, Sawyer has a great deal of experience helping industrial customers realize cost efficiencies by deploying AI models into consumer products. He points to the role of AI in the explosion of the smart home and surveillance marketplace, specifically residential home security. Home security cameras now offer a range of capabilities that were previously not possible without AI—for instance, a home security camera’s ability to distinguish between the motion of the family dog outside the door and that of a raccoon rummaging through the trash. His key takeaway, though, is how accessible AI-enhanced products have become.

The commodification of this intelligence, if you will, is a really powerful force. As someone in the know, let’s say, or in the research domain, you might think, ‘Well, that feature’s been doable for some time.’ But it’s not until that feature is cost-effective do you see it proliferate.

Davis Sawyer

Successful implementation of smaller-scale AI has led to the proliferation of reasonably priced consumer products. Sawyer is optimistic that the cost-intensive large language models of today may have similar and even more sophisticated applications in the future. For investors, this still-nascent stage of generative AI may require patience, but the speed at which it continues to evolve indicates that the future could be bright for this impressive new frontier of foundation models.

Subscribe to “Double Take” on your podcast app of choice or view the Investing in Generative AI and Shrinking Machine Learning episode pages to listen in your browser.


[1] Techopedia. As of April 5, 2023. https://www.techopedia.com/definition/34826/foundation-model#:~:text=Unlike%20narrow%20artificial%20intelligence%20(narrow,from%20one%20task%20to%20another.

Authors

Jack Encarnacao

Research analyst, investigative, Specialist Research team

Raphael J. Lewis

Head of specialist research

PAST PERFORMANCE IS NOT NECESSARILY INDICATIVE OF FUTURE RESULTS. Any reference to a specific security, country or sector should not be construed as a recommendation to buy or sell this security, country or sector. Please note that strategy holdings and positioning are subject to change without notice. For additional Important Information, click on the link below.

Important information

For Institutional Clients Only. Issued by Newton Investment Management North America LLC ("NIMNA" or the "Firm"). NIMNA is a registered investment adviser with the US Securities and Exchange Commission ("SEC") and subsidiary of The Bank of New York Mellon Corporation ("BNY Mellon"). The Firm was established in 2021, comprised of equity and multi-asset teams from an affiliate, Mellon Investments Corporation. The Firm is part of the group of affiliated companies that individually or collectively provide investment advisory services under the brand "Newton" or "Newton Investment Management". Newton currently includes NIMNA and Newton Investment Management Ltd ("NIM") and Newton Investment Management Japan Limited ("NIMJ").

Material in this publication is for general information only. The opinions expressed in this document are those of Newton and should not be construed as investment advice or recommendations for any purchase or sale of any specific security or commodity. Certain information contained herein is based on outside sources believed to be reliable, but its accuracy is not guaranteed.

Statements are current as of the date of the material only. Any forward-looking statements speak only as of the date they are made, and are subject to numerous assumptions, risks, and uncertainties, which change over time. Actual results could differ materially from those anticipated in forward-looking statements. No investment strategy or risk management technique can guarantee returns or eliminate risk in any market environment and past performance is no indication of future performance.

Information about the indices shown here is provided to allow for comparison of the performance of the strategy to that of certain well-known and widely recognized indices. There is no representation that such index is an appropriate benchmark for such comparison.

This material (or any portion thereof) may not be copied or distributed without Newton’s prior written approval.

In Canada, NIMNA is availing itself of the International Adviser Exemption (IAE) in the following Provinces: Alberta, British Columbia, Manitoba and Ontario and the foreign commodity trading advisor exemption in Ontario. The IAE is in compliance with National Instrument 31-103, Registration Requirements, Exemptions and Ongoing Registrant Obligations.