What American Technology Companies Are Thinking About AI

A vast collection of notable quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

Meanwhile, the latest earnings season for the US stock market is coming to its tail-end. I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. Here they are, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management thinks AI is a massive platform shift

Well, why don’t I start, Justin, with AI. This is certainly the biggest revolution in tech since I came to Silicon Valley. It’s certainly as big of a platform shift as the Internet, and many people think it might be even bigger.

Airbnb’s management thinks of foundational models as the highways, and what Airbnb is interested in is building the cars on the highways; in other words, the company is interested in tuning the model

And I’ll give you kind of a bit of an overview of how we think about AI. So all of this is going to be built on the base model. The base models, the large language models, think of those as GPT-4. Google has a couple of base models, Microsoft has OpenAI’s, and there’s Anthropic. These are like major infrastructure investments. Some of these models might cost tens of billions of dollars towards the compute power. And so think of that as essentially like building a highway. It’s a major infrastructure project. And we’re not going to do that. We’re not an infrastructure company. But we’re going to build the cars on the highway. In other words, we’re going to design the interface and the tuning of the model on top of AI, on top of the base model. So on top of the base model is the tuning of the model. And the tuning of the model is going to be based on the customer data you have.
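To make the highway-and-cars analogy concrete, here is a minimal sketch of what tuning a model on top of a base model can look like, using the open-source Hugging Face stack. The base model (GPT-2), the data file and the hyperparameters are placeholder assumptions for illustration, not anything Airbnb has disclosed:

```python
# A minimal sketch of "cars on the highway": start from a pretrained base
# model (the highway) and fine-tune it on a company's own data (the car).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Proprietary customer data, one example per line (placeholder path).
data = load_dataset("text", data_files={"train": "customer_data.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = data.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1),
    train_dataset=tokenized["train"],
    # The causal-LM collator copies inputs to labels so the Trainer can compute loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```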

Airbnb’s management thinks AI can be used to help the company learn more about its users and build a much better way to match accommodation options with the profile of a user

If you were to ask a question to ChatGPT, and if I were to ask a question to ChatGPT, we’re both going to get pretty much the same answer. And the reason both of us are going to get pretty close to the same answer is because ChatGPT doesn’t know that it’s between you and I, doesn’t know anything about us. Now this is totally fine for many questions, like how far is it from this destination to that destination. But it turns out that a lot of questions in travel aren’t really search questions. They’re matching questions. In other words, they’re questions where the answer depends on who you are and what your preferences are. So for example, I think that going forward, Airbnb is going to be pretty different. Instead of asking you questions like where are you going and when are you going, I want us to build a robust profile about you, learn more about you and ask you 2 bigger and more fundamental questions: who are you? And what do you want?

Airbnb’s management wants to use AI to build a global travel community and world-class personalised travel concierge

And ultimately, what I think Airbnb is building is not just a service or a product. But what we are in the largest sense is a global travel community. And the role of Airbnb and that travel community is to be the ultimate host. Think of us with AI as building the ultimate AI concierge that could understand you. And we could build these world-class interfaces, tune our model. Unlike most other travel companies, we know a lot more about our guests and hosts. This is partly why we’re investing in the Host Passport. We want to continue to learn more about people. And then our job is to match you to accommodations, other travel services and eventually things beyond travel. So that’s the big vision of where we’re going to go. I think it’s an incredibly expanding opportunity.

Airbnb’s management thinks that AI can help level the playing field in terms of the service Airbnb provides versus that of hotels

One of the strengths of Airbnb is that Airbnb’s offering is one of a kind. The problem with Airbnb is our service is also one of a kind. And so therefore, historically less consistent than a hotel. I think AI can level the playing field from a service perspective relative to hotels because hotels have a front desk, Airbnb doesn’t. But we have literally millions of people staying on Airbnb every night. And imagine they call customer service. We have agents that have to adjudicate between 70 different user policies. Some of these are as many as 100 pages long. What AI is going to do is be able to give us better service, cheaper and faster by augmenting the agents. And I think this is going to be something that is a huge transformation.
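A hypothetical sketch of what augmenting the agents could look like: embed the policy documents once, then surface the passages most relevant to a customer’s issue so an agent no longer has to search dozens of long policies by hand. The policy snippets and library choice below are illustrative assumptions, not Airbnb’s actual system:

```python
# Hedged sketch: semantic retrieval over support policies to assist agents.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

policy_passages = [  # invented example passages, not real policies
    "Guests may cancel within 48 hours of booking for a full refund.",
    "Hosts must disclose any security cameras in the listing description.",
    "Extenuating circumstances cover government-declared emergencies.",
]
policy_embeddings = model.encode(policy_passages, convert_to_tensor=True)

def relevant_policies(customer_message: str, top_k: int = 2) -> list[str]:
    """Return the policy passages most similar to the customer's message."""
    query = model.encode(customer_message, convert_to_tensor=True)
    hits = util.semantic_search(query, policy_embeddings, top_k=top_k)[0]
    return [policy_passages[hit["corpus_id"]] for hit in hits]

print(relevant_policies("I need to cancel, there was a flood in my city"))
```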

Airbnb’s management thinks that AI can help improve the productivity of its developers

The final thing I’ll say is developer productivity and productivity of our workforce generally. I think our employees could easily be, especially our developers, 30% more productive in the short to medium term, and this will allow significantly greater throughput through tools like GitHub’s Copilot. 

Alphabet (NASDAQ: GOOG)

Alphabet’s management thinks AI will unlock new experiences in Search as it evolves

As it evolves, we’ll unlock entirely new experiences in Search and beyond just as camera, voice and translation technologies have all opened entirely new categories of queries and exploration.

AI has been foundational for Alphabet’s digital advertising business for over a decade

AI has also been foundational to our ads business for over a decade. Products like Performance Max use the full power of Google’s AI to help advertisers find untapped and incremental conversion opportunities. 

Alphabet’s management is focused on making AI safe

And as we continue to bring AI to our products, our AI principles and the highest standards of information integrity remain at the core of all our work. As one example, our Perspective API helps to identify and reduce the amount of toxic text that language models train on, with significant benefits for information quality. This is designed to help ensure the safety of generative AI applications before they are released to the public.

Examples of Alphabet bringing generative AI to customers of its cloud computing service

We are bringing our generative AI advances to our cloud customers across our cloud portfolio. Our PaLM generative AI models and Vertex AI platform are helping Behavox to identify insider threats, Oxbotica to test its autonomous vehicles and Lightricks to quickly develop text-to-image features. In Workspace, our new generative AI features are making content creation and collaboration even easier for customers like Standard Industries and Lyft. This builds on our popular AI-powered Workspace tools, Smart Canvas and Translation Hub, used by more than 9 million paying customers. Our product leadership also extends to data analytics, which provides customers the ability to consolidate their data and understand it better using AI. New advances in our data cloud enable Ulta Beauty to scale new digital and omnichannel experiences while focusing on customer loyalty; Shopify to bring better search results and personalization using AI; and Mercedes-Benz to bring new products to market more quickly. We have introduced generative AI to identify and prioritize cyber threats, automate security workflows and response and help scale cybersecurity teams. Our cloud cybersecurity products helped protect over 30,000 companies, including innovative brands like Broadcom and Europe’s Telepass.
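For a sense of what cloud customers actually consume here, this is roughly what calling a PaLM text model through the Vertex AI SDK looks like; the project ID, model version and prompt are placeholder assumptions:

```python
# Hedged sketch of generating text with a PaLM model on Vertex AI.
import vertexai
from vertexai.language_models import TextGenerationModel

# Assumes a GCP project with the Vertex AI API enabled (placeholder values).
vertexai.init(project="my-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison@001")  # PaLM text model
response = model.predict(
    "Write a short product description for a handmade ceramic mug.",
    temperature=0.2,
    max_output_tokens=256,
)
print(response.text)
```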

The cost of computing when integrating LLMs (large language models) to Google Search is something Alphabet’s management has been thinking about 

On the cost side, we have always — cost of compute has always been a consideration for us. And if anything, I think it’s something we have developed extensive experience over many, many years. And so for us, it’s a nature of habit to constantly drive efficiencies in hardware, software and models across our fleet. And so this is not new. If anything, the sharper the technology curve is, we get excited by it, because I think we have built world-class capabilities in taking that and then driving down cost sequentially and then deploying it at scale across the world. So I think we’ll take all that into account in terms of how we drive innovation here, but I’m comfortable with how we’ll approach it.

Alphabet’s management does not seem concerned with any potential revenue-impact from integrating LLMs into Google’s core Search product

So first of all, throughout the years, as we have gone through many, many shifts in Search, and as we’ve evolved Search, I think we’ve always had a strong grounded approach in terms of how we evolve ads as well. And we do that in a way that makes sense and provides value to users. The fundamental drivers here are people are looking for relevant information. And in commercial categories, they find ads to be highly relevant and valuable. And so that’s what drives this virtuous cycle. And I don’t think the underpinnings change: users want relevant commercial information, they want choice in what they look at; even in areas where we are summarizing and answering, et cetera, users want choice. We care about sending traffic. Advertisers want to reach users. And so all those dynamics, I think, which have long served us well, remain. And as I said, we’ll be iterating and testing as we go. And I feel comfortable we’ll be able to drive innovation here like we’ve always done.

Amazon (NASDAQ: AMZN)

Amazon’s management thinks that the AI boom will drive significant growth in data consumption and products in the cloud

And I also think that there are a lot of folks that don’t realize the amount of nonconsumption right now that’s going to happen and be spent in the cloud with the advent of large language models and generative AI. I think so many customer experiences are going to be reinvented and invented that haven’t existed before. And that’s all going to be spent, in my opinion, on the cloud.

Amazon has been investing in machine learning for more than two decades, and has been investing large sums of capital to build its own LLMs for several years

I think when you think about machine learning, it’s useful to remember that we have had a pretty substantial investment in machine learning for 25-plus years in Amazon. It’s deeply ingrained in virtually everything we do. It fuels our personalized e-commerce recommendations. It drives the pick paths in our fulfillment centers. We have it in our Go stores. We have it in our Prime Air, our drones. It’s obviously in Alexa. And then AWS, we have 25-plus machine learning services where we have the broadest machine learning functionality and customer base by a fair bit. And so it is deeply ingrained in our heritage…

…We’ve been investing in building in our own large language models for several years, and we have a very large investment across the company. 

Amazon’s management decided to build chips – Trainium for training and Inferentia for inference – that have great price and performance because LLMs are going to run on compute, which depends on chips (particularly GPUs, or graphics processing units) and GPUs are scarce; Amazon’s management also thinks that a lot of machine learning training will be taking place on AWS

If you think about maybe the bottom layer here, is that all of the large language models are going to run on compute. And the key to that compute is going to be the chips that’s in that compute. And to date, I think a lot of the chips there, particularly GPUs, which are optimized for this type of workload, they’re expensive and they’re scarce. It’s hard to find enough capacity. And so in AWS, we’ve been working for several years on building customized machine learning chips, and we built a chip that’s specialized for training, machine learning training, which we call Trainium, and a chip that’s specialized for inference or the predictions that come from the model called Inferentia. The reality, by the way, is that most people are spending most of their time and money on the training. But as these models graduate to production, where they’re in the apps, all the spend is going to be in inference. So they both matter a lot. And if you look at — we just released our second versions of both Trainium and Inferentia. And the combination of price and performance that you can get from those chips is pretty differentiated and very significant. So we think that a lot of that machine learning training and inference will run on AWS.

Amazon’s management thinks that most companies that want to use AI are not interested in building their own foundational models because it takes a lot of resources; Amazon has the resources to build foundational models, and is providing the foundational models to customers who can then customise the models

And if you look at the really significant leading large language models, they take many years to build and many billions of dollars to build. And there will be a small number of companies that want to invest that time and money, and we’ll be one of them in Amazon. But most companies don’t. And so what most companies really want and what they tell AWS is that they’d like to use one of those foundational models and then have the ability to customize it for their own proprietary data and their own needs and customer experience. And they want to do it in a way where they don’t leak their unique IP to the broader generalized model. And that’s what Bedrock is, which we just announced a week ago or so. It’s a managed foundational model service where people can run foundational models from Amazon, which we’re exposing ourselves, which we call Titan. Or they can run it from leading large language model providers like AI21 and Anthropic and Stability AI. And they can run those models, take the baseline, customize them for their own purposes and then be able to run it with the same security and privacy and all the features they use for the rest of their applications in AWS. That’s very compelling for customers.
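Bedrock has since become generally available; as a rough sketch of the developer experience described above, invoking Amazon’s own Titan model through the Bedrock runtime looks something like this. The model ID, request shape and prompt follow the API as it later shipped and should be treated as assumptions, with AWS credentials already configured:

```python
# Hedged sketch of calling a managed foundation model via Amazon Bedrock.
import json
import boto3

# Bedrock exposes foundation models behind a single managed runtime API.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "inputText": "Summarize our return policy in one sentence.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
})

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # Amazon's Titan text model
    body=body,
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```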

Every single one of Amazon’s businesses is building on top of LLMs

Every single one of our businesses inside Amazon are building on top of large language models to reinvent our customer experiences, and you’ll see it in every single one of our businesses, stores, advertising, devices, entertainment and devices, which was your specific question, is a good example of that.

ASML (NASDAQ: ASML)

ASML’s management sees that mature semiconductor technologies are actually needed even in AI systems

So I think this is something people underestimate how significant the demand in the mid-critical and the mature semiconductor space is. And it will just grow double digit, whether it’s automotive, whether it’s the energy transition, whether it’s just the entire industrial products area, where is the — well, those are the sensors that we actually need as an integral component of the AI systems. This is where the mid-critical and the mature semiconductor space is very important and needs to grow.

Block (NYSE: SQ)

Block’s management is focused on three technology trends, one of which is AI

The three trends we’re focused on: Number one is artificial intelligence; number two is open protocols; and number three is the global south. Consider how many times you’ve heard the term AI or GPT in the earnings calls just this quarter versus all quarters in history prior. This trend seems to be moving faster than anyone can comprehend or get a handle on. Everyone feels like they’re on their back foot and struggling to catch up. Utilizing machine learning is something we’ve always employed at Block, and the recent acceleration in availability of tools is something we’re eager to implement across all of our products and services. We see this first as a way to create efficiencies, both internally and for our customers. And we see many opportunities to apply these technologies to create entirely new features for our customers. More and more effort in the world will shift to creative endeavors as AI continues to automate mechanical tasks away.

Datadog (NASDAQ: DDOG)

Datadog’s management thinks AI can make software developers more productive in terms of generating more code; as a result, the complexity of a company’s technology will also increase, which will make observability and the troubleshooting of software products more important

First, from a market perspective, over the long term, we believe AI will significantly expand our opportunity in observability and beyond. We see that massive improvements in developer productivity will allow individuals to write more applications and to do so faster than ever before. And as with past productivity increases, we think this will further shift value from writing code to observing, managing, fixing and securing live applications…

… Longer term, I think we can all glimpse at the future where productivity for everyone, including software engineers, increases dramatically. And the way we see that as a business is, our job is to help our customers absorb the complexity of the applications they’ve built so they can understand and modify them, run them, secure them. And we think that the more productivity there is, the more people can write in the same amount of time, the less they understand the software they produce, the more they need us, and the more value it sends our way. So that’s what makes us very confident in the long term here…

…And we — the way this has played out in the past typically is you just end up generating more stuff and more mess. So basically, if one person can produce 10x more, you end up with 10x more stuff and that person will still not understand everything they’ve produced. So the way we imagine the future is companies are going to deliver a lot more functionality to their users a lot faster. They’re going to solve a lot more problems in software. But they won’t have as tight an understanding from their engineering team as to what it is they’ve built and how they built it and what might break and what might be the corner cases that don’t work and things like that. And that’s consistent with what we can see people building with a copilot today and things like that.

Etsy (NASDAQ: ETSY)

Etsy’s management thinks that AI can greatly improve the search-experience for customers who are looking for specific products

We’ve been at the cutting edge of search technology for the past several years, and while we use large language models today, we couldn’t be more excited about the potential of newer large language models and generative AI to further accelerate the transformation of Etsy’s user experience. Even with all our enhancements, Etsy search today is still key-word driven and text based and essentially the result is a grid with many thousands of listings. We’ve gotten better at reading the tea leaves, but it’s still a repetitive cycle of query, result, reformulation. In the future we expect search on Etsy to utilize more natural language and multimodal approaches. Rather than manipulating key words, our search engines will enable us to ask the right question at the right time to show the buyer a curated set of results that can be so much better than it is today. We’re investigating additional search engine technologies to identify attributes of an item, multi-label learning models for instant search, graph neural networks and so much more, which will be used in combination with our other search engine technologies. It’s our belief that Etsy will benefit from generative AI and other advances in search technology as much or perhaps even more so than others…

When you run a search at Etsy, we already use multiple machine learning techniques. So I don’t think generative AI replaces everything we’re doing, but it’s another tool that will be really powerful. And there are times when having a conversation instead of entering a query and then getting a bunch of search results and then going back and reformulating your query and then getting a bunch of search results, that’s not always very satisfying. And being able to say, no, I meant more like this. How about this? I’d like something that has this style and have that feel like more of a conversation, I think that can be a better experience a lot of the time. And I think in particular for Etsy where we don’t have a catalog, it might be particularly powerful.
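As a sketch of the difference between query reformulation and conversation, the loop below keeps the whole dialogue in context, so a follow-up like “more like this” refines the previous answer instead of starting a new keyword search. The model name and prompts are illustrative assumptions; this is not Etsy’s implementation:

```python
# Hedged sketch of conversational product discovery with an LLM.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [{
    "role": "system",
    "content": "You are a shopping assistant. Suggest items, then refine "
               "your suggestions based on the buyer's follow-up feedback.",
}]

# Each turn is appended to the history, so the second message refines the
# first set of suggestions rather than starting a fresh search.
for turn in [
    "I'm looking for a gift for a friend who loves gardening",
    "More handmade, less mass-produced, and under $50",
]:
    messages.append({"role": "user", "content": turn})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(answer)
```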

Fiverr (NYSE: FVRR) 

Fiverr’s management thinks that the proliferation of AI services will not diminish the demand for freelancers, but it will lead to a bifurcation in the fates of freelancers between those who embrace AI, and those who don’t

We haven’t seen AI negatively impact our business. On the contrary, the categories we open to address AI-related services are booming. The number of AI-related gigs has increased over tenfold and buyer searches for AI have soared over 1,000% compared to 6 months ago, indicating a strong demand and validating our efforts to stay ahead of the curve in this rapidly evolving technological landscape. We are witnessing the increasing need for human skills to deploy and implement AI technologies, which we believe will enable greater productivity and improved quality of work when human talent is augmented by AI capabilities. In the long run, we don’t anticipate AI development to displace the need for human talent. We believe AI won’t replace our sellers; rather sellers using AI will outcompete those who don’t…

…In terms of your question about AI, you’re right, it’s very hard to understand what categories or how categories might be influenced. I think that there’s one principle that we’ve — that I’ve shared in my opening remarks, which I think is very important, and this is how we view this, which is that AI technology is not going to displace our sellers, but sellers who have a better grasp and better usage of AI are going to outcompete those who don’t. And this is not really different than any meaningful advancement within technology, and we’ve seen that in recent years. Every time there’s a cool new technology or device or form factor that sellers need to become professional at, those who become professional first are those who are actually winning. And we’re seeing the same here. So I don’t think that this is a different case. It’s just different professions, which, by the way, is super exciting.

Fiverr’s management thinks that AI-produced work will still need a human touch

Furthermore, while AI-generated content can be well constructed, it is all based on existing human-created content. To generate novel and authentic content, human input remains vital. Additionally, verifying and editing the AI-generated content, which often contains inaccuracies, requires human expertise and effort. That’s why we have seen categories such as fact-checking or AI content editing flourish on our marketplace in recent months.

Mastercard (NYSE: MA)

Mastercard’s management thinks AI is a foundational technology for the company

For us we’ve been using AI for the better part of the last decade. So it’s embedded in a whole range of our products…

…So you’ll find it embedded in a range of our products, including generative AI. So we have used generative AI technology, particularly in creating data sets that allow us to compare and find threats in the cybersecurity space. You will find AI in our personalization products. So there’s a whole range of things that set us apart. We use this as foundational technology. And internally, you can see increasingly so, that generative AI might be a good solution for us when it comes to customer service propositions and so forth.

MercadoLibre (NASDAQ: MELI)

MercadoLibre is utilising AI within its products and services, in areas such as customer-service and product-discovery

In terms of AI, I think as most companies, we do see some very relevant short- to midterm positive impact in terms of engineering productivity. And we are also increasing the amount of work being done on what elements of the consumer-facing experiences we can deploy AI on. I think the focus right now is on some of the more obvious use cases: improving and streamlining customer service and interactions with reps, improving workflows for reps through AI-assisted workflow tools, and then deploying AI to help deliver better search and discovery, in terms of better finding products on our website and better understanding specifications of products, where existing LLMs are quite efficient. And then beyond that, I think there’s a lot of work going on, and we hope to come up with other innovative forms of AI that we can place into the consumer-facing experience. But the ones I just mentioned are the ones that we’re currently working on the most.

Meta Platforms (NASDAQ: META)

Meta’s work in AI has driven significant improvements in (a) the quality of content seen by users of its services and (b) the monetisation of its services

Our investment in recommendations and ranking systems has driven a lot of the results that we’re seeing today across our discovery engine, reels and ads. Along with surfacing content from friends and family, now more than 20% of content in your Facebook and Instagram Feeds are recommended by AI from people, groups or accounts that you don’t follow. Across all of Instagram, that’s about 40% of the content that you see. Since we launched Reels, AI recommendations have driven a more than 24% increase in time spent on Instagram. Our AI work is also improving monetization. Reels monetization efficiency is up over 30% on Instagram and over 40% on Facebook quarter-over-quarter. Daily revenue from Advantage+ shopping campaigns is up 7x in the last 6 months.

Meta’s management is focused on open-sourcing Meta’s AI models because they think going open-source will benefit the company by letting it make use of improvements to the models contributed by the open-source community

Our approach to AI and our infrastructure has always been fairly open. We open source many of our state-of-the-art models, so people can experiment and build with them. This quarter, we released our LLaMA LLM to researchers. It has 65 billion parameters but outperforms larger models and has proven quite popular. We’ve also open sourced 3 other groundbreaking visual models along with their training data and model weights, Segment Anything, DINOv2 and our Animated Drawings tool, and we’ve gotten some positive feedback on all of those as well…

…And the reason why I think why we do this is that unlike some of the other companies in the space, we’re not selling a cloud computing service where we try to keep the different software infrastructure that we’re building proprietary. For us, it’s way better if the industry standardizes on the basic tools that we’re using, and therefore, we can benefit from the improvements that others make and others’ use of those tools can, in some cases, like Open Compute, drive down the costs of those things, which make our business more efficient, too. So I think to some degree, we’re just playing a different game on the infrastructure than companies like Google or Microsoft or Amazon, and that creates different incentives for us. So overall, I think that that’s going to lead us to do more work in terms of open sourcing some of the lower-level models and tools, but of course, a lot of the product work itself is going to be specific and integrated with the things that we do. So it’s not that everything we do is going to be open. Obviously, a bunch of this needs to be developed in a way that creates unique value for our products. But I think in terms of the basic models, I would expect us to be pushing and helping to build out an open ecosystem here, which I think is something that’s going to be important.

Meta’s management thinks the company now has enough computing infrastructure to do leading AI-related work after spending significant sums of money over the past few years to build that out

A couple of years ago, I asked our infra teams to put together ambitious plans to build out enough capacity to support not only our existing products but also enough buffer capacity for major new products as well. And this has been the main driver of our increased CapEx spending over the past couple of years. Now at this point, we are no longer behind in building out our AI infrastructure, and to the contrary, we now have the capacity to do leading work in this space at scale. 

Meta’s management is focused on using AI to improve its advertising services

We remain focused on continuing to improve ads ranking and measurement with our ongoing AI investments while also leveraging AI to power increased automation for advertisers through products like Advantage+ shopping, which continues to gain adoption and receive positive feedback from advertisers. These investments will help us develop and deploy privacy-enhancing technologies and build new innovative tools that make it easier for businesses to not only find the right audience for their ad but also optimize and eventually develop their ad creative.

Meta’s management thinks that generative AI can be a very useful tool for advertisers, but they’re still early in the stage of understanding what generative AI is really capable of

Although there aren’t that many details that I’m going to share at this point, more of this will come in focus as we start shipping more of these things over the coming months. But I do think that there’s a big opportunity here. You asked specifically about advertisers, but I think it’s going to also help create more engaging experiences, which should create more engagement, and that, by itself, creates more opportunities for advertisers. But then I think that there’s a bunch of opportunities on the visual side to help advertisers create different creative. We don’t have all the tools to do that yet, but over time we will eventually get there. So we’ve always strived to just have an advertiser just be able to tell us what their objective is and then have us be able to do as much of the work as possible for them, and now being able to do more of the creative work there ourselves for those who want that, I think, could be a very exciting opportunity…

…And then the third bucket is really around CapEx investments now to support gen AI. And this is an emerging opportunity for us. We’re still in the beginning stages of understanding the various applications and possible use cases. And I do think this may represent a significant investment opportunity for us that is earlier on the return curve relative to some of the other AI work that we’ve done. And it’s a little too early to say how this is going to impact our overall capital intensity in the near term.

Meta’s management also thinks that generative AI can be a very useful way for companies to have high-quality chatbots interacting with customers

I also think that there’s going to be a very interesting convergence between some of the AI agents in messaging and business messaging, where, right now, we see a lot of the places where business messaging is most successful are places where a lot of businesses can afford to basically have people answering a lot of questions for people and engaging with them in chat. And obviously, once you light up the ability for tens of millions of small businesses to have AI agents acting on their behalf, you’ll have way more businesses that can afford to have someone engaging in chat with customers.

Microsoft (NASDAQ: MSFT)

Microsoft’s management thinks there is a generational shift in online search happening now because of AI

As we look towards a future where chat becomes a new way for people to seek information, consumers have real choice in business model and modalities with Azure-powered chat entry points across Bing, Edge, Windows and OpenAI’s ChatGPT. We look forward to continuing this journey in what is a generational shift in the largest software category, search.

Because of Microsoft’s partnership with OpenAI, Microsoft Azure is now exposed to new AI-related workloads that it previously was not

Because of some of the work we’ve done in AI even in the last couple of quarters, we are now seeing conversations we never had, whether it’s coming through you and just OpenAI’s API, right, if you think about the consumer tech companies that are all spinning, essentially, i.e. the readers, because they have gone to OpenAI and are using their API. These were not customers of Azure at all. Second, even Azure OpenAI API customers are all new, and the workload conversations, whether it’s B2C conversations in financial services or drug discovery on another side, these are all new workloads that we really were not in the game in the past, whereas we now are. 

Microsoft’s management has plans to monetise all the different AI-copilots that it is introducing to its various products

Overall, we do plan to monetize a separate set of meters across all of the tech stack, whether they’re consumption meters or per user subscriptions. The Copilot that’s priced and out there is GitHub Copilot. That’s a good example of incrementally how we monetize the prices that are out there, and others are to be priced because they’re in preview mode. But you can expect us to do what we’ve done with GitHub Copilot pretty much across the board.

Microsoft’s management expects the company to lower the cost of compute for AI workloads over time

And so we have many knobs that will continuously — continue to drive optimization across it. And you see it even in the — even for a given generation of a large model, where we started in the cost footprint to where we end in the cost footprint changes in the period of a quarter. So you can expect us to do what we have done over the decade plus with the public cloud to bring the benefits of, I would say, continuous optimization of our COGS to a diverse set of workloads.

Microsoft’s management has not been waiting – and is not waiting – for AI-related regulations to show up – instead, they are thinking hard about unintended consequences from Day 1 and have built those concerns into the engineering process

So overall, we’ve taken the approach that we are not waiting for regulation to show up. We are taking an approach where the unintended consequences of any new technology is something that from day 1, we think about as first class and build into our engineering process, all the safeguards. So for example, in 2016 is when we put out the AI principles, we translated the AI principles into a set of internal standards that then are further translated into an implementation process that then we hold ourselves to internal audit essentially. So that’s the framework we have. We have a Chief AI Officer who is sort of responsible for both thinking of what the standards are and then the people who even help us internally audit our following of the process. And so we feel very, very good in terms of us being able to create trust in the systems we put out there. And so we will obviously engage with any regulation that comes up in any jurisdiction. But quite honestly, we think that the more there is any form of trust as a differentiated position in AI, I think we stand to gain from that.

Nvidia (NASDAQ: NVDA)

Cloud service providers (CSPs) are racing to deploy Nvidia’s chips for AI-related work

First, CSPs around the world are racing to deploy our flagship Hopper and Ampere architecture GPUs to meet the surge in interest from both enterprise and consumer AI applications for training and inference. Multiple CSPs announced the availability of H100 on their platforms, including private previews at Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure, upcoming offerings at AWS and general availability at emerging GPU-specialized cloud providers like CoreWeave and Lambda. In addition to enterprise AI adoption, these CSPs are serving strong demand for H100 from generative AI pioneers.

Nvidia’s management sees consumer internet companies as being at the forefront of adopting AI

Second, consumer Internet companies are also at the forefront of adopting generative AI and deep-learning-based recommendation systems, driving strong growth. For example, Meta has now deployed its H100-powered Grand Teton AI supercomputer for its AI production and research teams.

Nvidia’s management is seeing companies in industries such as automotive, financial services, healthcare, and telecom adopt AI rapidly

Third, enterprise demand for AI and accelerated computing is strong. We are seeing momentum in verticals such as automotive, financial services, health care and telecom where AI and accelerated computing are quickly becoming integral to customers’ innovation road maps and competitive positioning. For example, Bloomberg announced it has a 50 billion-parameter model, BloombergGPT, to help with financial natural language processing tasks such as sentiment analysis, named entity recognition, news classification and question answering. Auto insurance company CCC Intelligent Solutions is using AI for estimating repairs. And AT&T is working with us on AI to improve fleet dispatches so their field technicians can better serve customers. Among other enterprise customers using NVIDIA AI are Deloitte for logistics and customer service, and Amgen for drug discovery and protein engineering.

Nvidia is making it easy for companies to deploy AI technology

And with the launch of DGX Cloud through our partnership with Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure, we deliver the promise of NVIDIA DGX to customers from the cloud. Whether the customers deploy DGX on-prem or via DGX Cloud, they get access to NVIDIA AI software, including NVIDIA Base Command, end-to-end AI frameworks and pretrained models. We provide them with the blueprint for building and operating AI, spanning our expertise across systems, algorithms, data processing and training methods. We also announced NVIDIA AI Foundations, which are model foundry services available on DGX Cloud that enable businesses to build, refine and operate custom large language models and generative AI models trained with their own proprietary data created for unique domain-specific tasks. They include NVIDIA NeMo for large language models, NVIDIA Picasso for images, video and 3D, and NVIDIA BioNeMo for life sciences. Each service has 6 elements: pretrained models, frameworks for data processing and curation, proprietary knowledge-based vector databases, systems for fine-tuning, aligning and guardrailing, optimized inference engines, and support from NVIDIA experts to help enterprises fine-tune models for their custom use cases.

Nvidia’s management thinks that the advent of AI will drive a shift towards accelerated computing in data centers

Now let me talk about the bigger picture and why the entire world’s data centers are moving toward accelerated computing. It’s been known for some time, and you’ve heard me talk about it, that accelerated computing is a full-stack challenge. But if you can successfully do it in a large number of application domains, which has taken us 15 years, sufficiently that almost the entire data center’s major applications could be accelerated, you could reduce the amount of energy consumed and the amount of cost for a data center substantially, by an order of magnitude. It costs a lot of money to do it because you have to do all the software and everything and you have to build all the systems and so on and so forth, but we’ve been at it for 15 years.

And what happened is when generative AI came along, it triggered a killer app for this computing platform that’s been in preparation for some time. And so now we see ourselves in 2 simultaneous transitions. The world’s $1 trillion of data centers is populated nearly entirely by CPUs today. And I — $1 trillion, $250 billion a year, it’s growing of course. But over the last 4 years, call it $1 trillion worth of infrastructure installed, and it’s all completely based on CPUs and dumb NICs. It’s basically unaccelerated.

In the future, it’s fairly clear now with this — with generative AI becoming the primary workload of most of the world’s data centers generating information, it is very clear now that — and the fact that accelerated computing is so energy efficient, that the budget of a data center will shift very dramatically towards accelerated computing, and you’re seeing that now. We’re going through that moment right now as we speak, while the world’s data center CapEx budget is limited. But at the same time, we’re seeing incredible orders to retool the world’s data centers. And so I think you’re starting — you’re seeing the beginning of, call it, a 10-year transition to basically recycle or reclaim the world’s data centers and build it out as accelerated computing. You have a pretty dramatic shift in the spend of a data center from traditional computing and to accelerated computing with SmartNICs, smart switches, of course, GPUs and the workload is going to be predominantly generative AI…

…The second part is that generative AI is a large-scale problem, and it’s a data center scale problem. It’s another way of thinking that the computer is the data center or the data center is the computer. It’s not the chip. It’s the data center, and it’s never happened like us before. And in this particular environment, your networking operating system, your distributed computing engines, your understanding of the architecture of the networking gear, the switches and the computing systems, the computing fabric, that entire system is your computer, and that’s what you’re trying to operate. And so in order to get the best performance, you have to understand full stack and understand data center scale. And that’s what accelerated computing is.

Nvidia’s management thinks that the training of AI models will be an always-on process

 You’re never done with training. You’re always — every time you deploy, you’re collecting new data. When you collect new data, you train with the new data. And so you’re never done training. You’re never done producing and processing a vector database that augments the large language model. You’re never done with vectorizing all of the collected structured, unstructured data that you have. And so whether you’re building a recommender system, a large language model, a vector database, these are probably the 3 major applications of — the 3 core engines, if you will, of the future of computing as well as a bunch of other stuff. But obviously, these are very — 3 very important ones. They are always, always running.
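A sketch of that always-on loop, with stand-in functions for the parts Nvidia leaves to each company; the data-collection and retraining steps below are hypothetical placeholders, and the vector index stands in for the “vector database”:

```python
# Hedged sketch of the "never done training" cycle: deploy, collect new data,
# vectorize it to augment the LLM, and fold it into the next training run.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
index = faiss.IndexFlatIP(encoder.get_sentence_embedding_dimension())

def collect_new_data() -> list[str]:
    # Placeholder: in production this pulls fresh logs, documents, feedback.
    return ["Example document collected after the latest deployment."]

def retrain(corpus: list[str]) -> None:
    # Placeholder: schedule the next fine-tuning run on the new data.
    pass

# In production this loop never ends; it is bounded here only for the demo.
for cycle in range(3):
    new_docs = collect_new_data()
    vectors = encoder.encode(new_docs, normalize_embeddings=True)
    index.add(np.asarray(vectors, dtype="float32"))  # augment the vector DB
    retrain(new_docs)
```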

When it comes to inference – or the generation of an output – there’s a lot more that goes into it than just the AI models

The other thing that’s important is these are models, but they’re connected ultimately to applications. And the applications could have image in, video out, video in, text out, image in, proteins out, text in, 3D out, video in, in the future, 3D graphics out. So the input and the output requires a lot of pre and postprocessing. The pre and postprocessing can’t be ignored. And this is one of the things that most of the specialized chip arguments fall apart. And it’s because the length — the model itself is only, call it, 25% of the data — of the overall processing of inference. The rest of it is about preprocessing, postprocessing, security, decoding, all kinds of things like that.
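The shape of this argument is easiest to see as a pipeline: the model call is one stage among several, and every stage consumes compute. Below is a deliberately toy sketch of that structure; the stages are simplified stand-ins, not a real serving stack:

```python
# Toy sketch: end-to-end inference = preprocess + model + postprocess.
def preprocess(raw: str) -> list[int]:
    # Decode and validate the input, tokenize it, run safety filters, etc.
    return [ord(c) for c in raw]  # toy "tokenizer"

def run_model(tokens: list[int]) -> list[int]:
    # The model forward pass: per the quote, only ~25% of inference processing.
    return list(reversed(tokens))  # toy stand-in for the model

def postprocess(tokens: list[int]) -> str:
    # Detokenize, re-encode the output, apply guardrails, decode media, etc.
    return "".join(chr(t) for t in tokens)

def serve(request: str) -> str:
    # The serving cost is the sum of all three stages, not the model alone.
    return postprocess(run_model(preprocess(request)))

print(serve("hello"))
```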

Paycom Software (NYSE: PAYC)

Paycom’s management thinks AI is definitely going to have a major impact in the payroll and HCM (human capital management) industry

I definitely think it’ll be relevant. You can use AI for multiple things. There are areas that you can use it for that are better than others. There are front-end things you can use it for directly with the client. There are back-end things that you can use it for that a client may never see. And so when you’re talking about AI, it has many uses, some of which are front end and some back end. And I don’t want to talk specifically about what exactly we’re using it for already internally and what our opportunities would be into the future. But in answer to your question, yes, I do think that over time, AI is going to be a thing in our industry.

PayPal (NASDAQ: PYPL)

PayPal has been working with AI (in fraud and risk management) for several years, and management thinks generative AI and other forms of AI will be useful in the online payments industry

For several years, we’ve been at the forefront of advanced forms of machine learning and AI to combat fraud and to implement our sophisticated risk management programs. With the new advances of generative AI, we will also be able to accelerate our productivity initiatives. We expect AI will enable us to meaningfully lower our costs for years to come. Furthermore, we believe that AI, combined with our unique scale and sets of data, will drive not only efficiencies, but will also drive a differentiated and unique set of value propositions for our merchants and consumers…

…And we are now beginning to experiment with the first generation of what we call AI-powered checkout, which looks at the full checkout experience, not just the PayPal checkout experience, but the full checkout experience for our merchants…

…There’s no question that AI is going to impact almost every function inside of PayPal, whether it be our front office, back office, marketing, legal, engineering, you name it. AI will have an impact and allow us to not just lower cost, but have higher performance and do things better. It’s not about trade-offs; it’s about doing both.

Shopify (NASDAQ: SHOP)

Shopify’s management thinks the advent of AI makes a copilot for entrepreneurship possible

But now we are at the dawn of the AI era and the new capabilities that are unlocked by that are unprecedented. Shopify has the privilege of being amongst the companies with the best chances of using AI to help our customers. A copilot for entrepreneurship is now possible. Our main quest demands from us to build the best thing that is now possible, and that has just changed entirely.

Shopify recently launched an AI-powered shopping assistant that is powered by OpenAI’s ChatGPT

We also — you’re also seeing — we announced a couple of weeks ago, Shop at AI, which is what I think is the coolest shopping concierge on the planet, whereby you as a consumer can use Shop at AI and you can browse through hundreds of millions of products and you can say things like I want to have a barbecue and here’s the theme and it will suggest great products, and you can buy it right in line right through the shopping concierge.  

Shopify has been using AI to help its merchants write product descriptions so that merchants can better focus on taking care of their customers

For example, the task of writing product descriptions is now made meaningfully easier by injecting AI into that process. And what does that — the end result of that is merchants spend less time writing product descriptions and more time making beautiful products and communicating and engaging with their customers.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management sees demand in most end-markets as being mostly soft, but AI-related demand is growing

We observed the PC and smartphone market continue to be soft at the present time, while automotive demand is holding steady for TSMC, though it is showing signs of softening into the second half of 2023. I’m talking about automotive. On the other hand, we have recently observed incremental upside in AI-related demand.

TSMC’s management thinks it’s a little too early to tell how big the semiconductor market can grow into because of AI, but they do see a positive trend

We certainly have observed an incremental increase in AI-related demand. It will also help the ongoing inventory digestion. The trend is very positive for TSMC. But today, if you ask me to quantitatively say how much the amount increases or what is the dollar content in the server, it is too early to say. It still continues to develop. And ChatGPT right now reinforces the already strong conviction that we have in HPC and AI as a structural megatrend for TSMC’s business growth in the future. Whether this has been included in our previous statement that we have a 15% to 20% CAGR, the answer is probably partly yes, because we have already taken several [AI applications] into our consideration. But ChatGPT, a large language model, is a new application, and we haven’t really put a number for it into our CAGR. But it definitely, as I said, really reinforces our already strong conviction that HPC and AI will give us much higher opportunities in the future…

…We did see some positive signs of people paying much more attention to AI applications, especially the ChatGPT area. However, as I said, quantitatively, we don’t have enough data to sum it up and see what the contribution is and what percentage of TSMC’s business it represents. But we remain confident that this trend is definitely positive for TSMC.

TSMC’s management sees most of the AI work performed today as being focused on training, and expects it to flip to inference in the future; either way, high-performance semiconductors will still be needed for AI-related work

Right now, most of the AI concentrate or focus on training. And in the future, it will be inference. But let me say that, no matter what kind of application, they need to use a very high-performance semiconductor component, and that actually is TSMC’s advantage. So we expect that semiconductor content starting from a data center for [indiscernible] to device and edge device or those kind of things, put all together, they need very high-speed computing with very power-efficient ones. And so we expect it will add to TSMC’s business a lot.

Tencent (NASDAQ: TCEHY)

Tencent is using AI to deliver more relevant ads to users of its services

We upgraded our machine learning advertising platform to deliver higher conversions for advertisers. For example, we help our advertisers dynamically feature their most relevant products inside their advertisements by applying our deep learning model to the standard product unit attributes we have aggregated within our SPU database. 

Tencent’s management thinks there will be a proliferation of AI models – both foundational as well as vertical – from both established companies as well as startups

So in terms of going forward, we do believe that, number one, there are going to be many models in the market going forward. For the large companies, I think each one of us would have a foundation model. And the model will be supporting our own use cases as well as offered to the market both on a 2C basis as well as on a 2B basis. And at the same time, there will be many start-ups, which will be creating their own models, some of them may be general foundation models, some of them may be more industry and vertical models, and they will be coming with new applications. I think overall, it’s going to be a very vibrant industry from a model availability perspective.

Tencent’s management thinks AI can help improve the quality of UGC (user-generated content)

In terms of the user-to-user interaction type of services, like social network and short video network and games and long-form content, there will be a lot of usages that help to increase the quality of content, the efficiency at which the content is created, as well as lowering the cost of content creation. And that will be a net benefit to these applications.

Tencent’s management thinks China’s government is supportive of innovation in AI

Now in terms of — you asked about regulation. Well, the government’s general stance is that it’s supportive of innovation, but the industry has to be regulated. And I think this is not something that’s specific to China; it’s the same around the world. If you look at the U.S., there’s a lot of public discussion about having regulation, and even the founder of OpenAI has been testifying and asking for regulation in the industry. So I think that is something which is necessary, but we feel that under the right regulation and regulatory framework, the government’s stance is supportive of innovation and the industry will actually have room for healthy growth.

Tesla (NASDAQ: TSLA)

Tesla’s management thinks data will be incredibly valuable when building out AI services, especially in self-driving

Regarding Autopilot and Full Self-Driving. We’ve now crossed over 150 million miles driven by Full Self-Driving beta, and this number is growing exponentially. We’re — I mean, this is a data advantage that really no one else has. Those who understand AI will understand the importance of data — of training data and how fundamental that is to achieving an incredible outcome. So yes, we’re also very focused on improving our neural net training capabilities, as it is one of the main limiting factors of achieving full autonomy.

Tesla’s management thinks the company’s supercomputer project, Dojo, could significantly improve the cost of training AI models

So we’re continuing to simultaneously make significant purchases of NVIDIA GPUs and also putting a lot of effort into Dojo, which we believe has the potential for an order of magnitude improvement in the cost of training. 

The Trade Desk (NASDAQ: TTD)

Trade Desk’s management thinks that generative AI is only as good as the data that it has been trained on

ChatGPT is an amazing technology, but its usefulness is conditioned on the quality of the dataset it is pointed at. Regurgitating bad data, bad opinions or fake news, or AI-generated deepfakes, for example, will be a problem that all generative AI will likely be dealing with for decades to come. We believe many of the novel AI use cases in the market today will face challenges with monetization and copyright and data integrity or truth and scale.

Trade Desk has very high-quality advertising data at scale (it’s handling 10 million ad requests per second) so management thinks that the company can excel by applying generative AI to its data

By contrast, we are so excited about our position in the advertising ecosystem when it comes to AI. We look at over 10 million ad requests every second. Those requests, in sum, represent a very robust and very unique dataset with incredible integrity. We can point generative AI at that dataset with confidence for years to come. We know that our size, our dataset size and integrity, our profitability and our team will make Koa and generative AI a promising part of our future.

Trade Desk’s management sees AI bringing positive impacts to many areas of the company’s business, such as generating code faster, generating creatives faster, and helping clients learn programmatic advertising faster

In the future, you’ll also hear us talk about other applications of AI in our business. These include generating code faster; changing the way customers understand and interact with their own data; generating new and more targeted creatives, especially for video and CTV; and using virtual assistants to shorten the learning curve that comes with the complicated world of programmatic advertising by optimizing the documentation process and making it more engaging.

Visa (NYSE: V)

Visa, which is in the digital payments industry, has a long history of working with AI and management sees AI as an important component of what the company does

I’ll just mention that we have a long history developing and using predictive AI and deep learning. We were one of the pioneers of applied predictive AI. We have an enormous data set that we’ve architected to be utilized at scale by hundreds of different AI and ML services that people use all across Visa. We use it to run our company more effectively. We use it to serve our clients more effectively. And this will continue to be a big part of what we do.

Visa’s management thinks generative AI can take the company’s current AI services to the next level

As you transition to generative AI, this is where — we see this as an opportunity to take our current AI services to the next level. We are, kind of as a platform, experimenting with a lot of the new capabilities that are available. We’ve got people all over the company that are tinkering and dreaming and thinking and doing testing and figuring out ways that we could use generative AI to transform how we do what we do, which is deliver simple, safe and easy-to-use payment solutions. And we’re also spending a fair bit of time thinking about how generative AI will change the way that sellers sell and the way we all buy and shop. So that is — it’s a big area of opportunity that we’re looking at in many different ways across the company.

Wix (NASDAQ: WIX)

Wix’s management thinks AI can reduce a lot of friction for users in creating websites

First, our goal at Wix is to reduce friction. The easier it is for our users to build websites, the better Wix is. We have proven this many times before, through the development of software and products, including AI. As we make it easier for our users to achieve their goals, their satisfaction goes up, conversion goes up, user retention goes up, monetization goes up and the value of Wix grows…

…  Today, new emerging AI technologies create an even bigger opportunity to reduce friction in more areas that were almost impossible to solve a few years ago and further increase the value of our platform. We believe this opportunity will result in an increased addressable market and even more satisfied users. 

Wix’s management thinks that running an e-commerce website requires much more than AI alone, and that even if AI can one day automate every layer, that day is still very far away

The second important point is that there is a huge amount of complexity in software, even with websites, and it’s growing. Even if AI could code a fully functional e-commerce website, for example — which I believe we are still very far from — there is still a need for the site to be deployed to a server, to run the code, to make sure the code continues to work, to manage and maintain a database for when someone wants to buy something, to manage security, to ship the products, to partner with payment gateways, and many more things. So even if you have something that can build pages and content and code…you still need so much more. This gets to my third and final point, which is that even in the far future, if AI is able to automate all of these layers, it will have to disrupt a lot of the software industry, including database management, server management and cloud computing. I believe we are very far from that and that before then, there will be many more opportunities for Wix to leverage AI and create value for our users.

Zoom Video Communications (NASDAQ: ZM)

Zoom management’s approach to AI is federated, empowering, and responsible

We outlined our approach to AI: to drive forward solutions that are federated, empowering, and responsible. Federated means flexible and customizable to businesses’ unique scenarios and nomenclature. Empowering refers to building solutions that improve individual and team productivity as well as enhance the customer experience. And responsible means customer control of their data, with an emphasis on privacy, security, trust, and safety.

Zoom recently made a strategic investment in Anthropic and management will be integrating Anthropic’s AI assistant feature across Zoom’s product portfolio

Last week, we announced our strategic investment in Anthropic, an AI safety and research company working to build reliable, interpretable, and steerable AI systems. Our partnership with Anthropic further boosts our federated approach to AI by allowing Anthropic’s AI assistant, Claude, to be integrated across Zoom’s entire platform. We plan to begin by layering Claude into our Contact Center portfolio, which includes Zoom Contact Center, Zoom Virtual Agent, and the now-in-beta Zoom Workforce Engagement Management. With Claude guiding agents toward trustworthy resolutions and empowering self-service for end users, companies will be able to take customer relationships to the next level.

Zoom’s management thinks that having AI models is important, but it’s even more important to fine-tune them based on proprietary data

Having said that, there are two things that are really important. One is the model: OpenAI has a model, and Anthropic, Facebook, and Google do as well. But the most important thing is how to leverage these models and fine-tune them based on your proprietary data. That is extremely important when it comes to collaboration and communication. Take a Zoom employee, for example. We have so many meetings; every day our sales team is on Zoom calls with customers. We have accumulated a lot of internal meeting data. How to fine-tune the model with that data is very important.
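As a concrete illustration of what “fine-tuning on proprietary data” can mean, here is a minimal sketch using the open-source Hugging Face `transformers` and `datasets` libraries. The base model (`gpt2`) and the transcript file name are illustrative assumptions, not anything Zoom has disclosed.

```python
# Minimal sketch: fine-tune an open language model on proprietary
# meeting transcripts (one transcript per line in a hypothetical file).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # stand-in for whatever base model is actually licensed
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("text", data_files={"train": "meeting_transcripts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="meeting-model",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The design point the quote makes is that the value lies in the training data (the transcripts), not in this boilerplate, which is the same for any base model.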

Examples of good AI use cases in Zoom’s platform

We also look at our core meeting platform: meeting summaries are extremely important. We also have our team chat solution, and generative AI can help compose chat messages. Remember, last year we launched email and calendar as well. How do we leverage generative AI to understand the context, bring all the information relevant to you, and help you generate the message? When you send a reply to customers or prospects, whether it’s a chat message or an e-mail, we can leverage generative AI to generate it. There are a lot of areas. Say you are late to a meeting and join 10 minutes in. You really want to understand what has happened. Can you get a quick summary of the past 10 minutes? With generative AI, you can get that as well.
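Since Zoom plans to integrate Claude (see the Anthropic quote above), here is a minimal sketch of the “catch-up summary” idea using Anthropic’s Python SDK. The model name, prompt, and transcript are illustrative assumptions; this is not Zoom’s actual integration.

```python
# Minimal sketch, not Zoom's integration: ask Claude to summarize the
# first minutes of a meeting for a late joiner. Requires `pip install anthropic`
# and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

# Hypothetical transcript of the meeting so far.
transcript = (
    "Alice: Let's review the Q3 roadmap.\n"
    "Bob: The contact-center beta is the top priority.\n"
    "Alice: Agreed. We also need to close the hiring plan by Friday.\n"
)

response = client.messages.create(
    model="claude-3-haiku-20240307",  # illustrative; use any available model
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": "Summarize this meeting so far in three bullet points "
                   "for someone who just joined:\n\n" + transcript,
    }],
)
print(response.content[0].text)
```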

Zoom’s management thinks there are multiple ways to monetise AI

In terms of how to monetize generative AI: first of all, take Zoom IQ for Sales, for example. That’s a new service that targets sales departments, and the technology behind it is based on generative AI, so we can monetize it. Also, even before the popularity of generative AI, we had a live transcription feature, and that’s not a free feature; it is a paid feature behind the paywall. And a lot of good features, take the Zoom meeting summary for example, are for enterprise customers. SMB customers that did not deploy Zoom One may not get those features. That’s another reason for us to monetize. I think there are multiple ways to monetize, yes.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, ASML, Datadog, Etsy, Fiverr, Mastercard, MercadoLibre, Meta Platforms, Microsoft, Paycom Software, PayPal, Shopify, TSMC, Tencent, Tesla, The Trade Desk, Visa, Wix, Zoom. Holdings are subject to change at any time.