What We’re Reading (Week Ending 19 February 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 19 February 2023:

1. The AI Bubble of 2023 – Joshua Brown

In December, ChatGPT began to spread like wildfire on social media. A handful of art-related AI programs like DALL-E 2 also began to proliferate on Instagram and some of the more image-oriented sites, but ChatGPT captured the imaginations (and nightmares) of the chattering class like nothing else we’ve ever seen.

Wall Street has begun to take notice of the AI theme for the stock market. It should be noted that trading programs based on earlier versions of AI have been around for decades, so the concept is a very comfortable one among analysts, traders and bankers at traditional firms. But now that there is retail investor interest in riding the wave, you’re going to see the assembly line lurch into action very rapidly. The switch has already been thrown. They’re pulling up their overalls and rolling up their sleeves. Funds, products, IPOs and strategies are being formulated in the dozens as we speak. This will hit the hundreds before we’re through. It’s merely stage one…

…I want to lay out a few of the things you’re about to see, so that when they happen, you understand that this is nothing new and all part of the ancient rhythm of the markets. An ebb and flow that’s been with us from the first sales of the South Sea Company’s stock in London, or the Dutch East India Company’s share offerings, or the bubbles in canal stocks in the early 1800s, the railroad stocks in the late 1800s, and the oil and steel ventures of the early 1900s. We repeat this over and over again, always with the temporary amnesia that allows us to forget how this cycle usually ends – a small handful of winners, and lots of ruin, rancor and recrimination for everyone else.

Let’s get into these items:

1. Bubbles do not occur out of thin air or for no reason. There’s always a kernel of truth around which the mania coalesces. This is what makes them so irresistible and frustrating to fight against…

2. Twitter will be filled with charlatans, promoters and people who do not have your best interests in mind…

3. The people who make money in AI stocks will go after the conservative investors who have missed out or stayed on the sideline. If you’re a value investor or a bank CEO or some other paragon of the established order on Wall Street, you’re going to want to avoid walking in front of an open microphone and blurting out an opinion on this stuff. It’s going to come back to haunt you…

4. In the beginning, there are not enough stocks to go around. Have a look at the chart below. These are the three pure-plays in AI that currently trade publicly. BigBear.ai has government contracts for artificial intelligence (legitimacy!). C3.ai has the right ticker symbol (AI, nailed it!) and SoundHound has the term “ai” in its name plus a backlog of about $300 million worth of projects for corporate customers in the space (customer service phone calls, conversational AI that replaces human interaction, etc). Their market caps are small and their business models unproven, but there are no alternatives. Retail investors can’t call up Silicon Valley and order themselves up some shares of the next wave of AI startups. They must content themselves with what is on the menu today…

5. The ETFs are not going to suffice here. They are loaded up with traditional tech stocks like semiconductor companies and software companies and robotics and automated driving and all sorts of stuff that is AI-related or AI-adjacent or AI-scented, but is not quite in the eye of the hurricane. You can find a full list of ETFs at VettaFi that have something to do with AI. Most of them are loaded with large cap Nasdaq names where AI is just a small (but growing) part of their business. By this logic, IBM is an AI stock…

8. The machinery is cranking up. I mentioned the assembly line above. Here’s how Wall Street works: Sell the people what they want to buy, when they want to buy it, and if a little of a good thing is good, then a lot of a good thing is great. When the ducks are quacking, you feed them. That’s how we ended up with one thousand SPACs and two thousand IPOs and 10,000 cryptocurrencies. Because Old Man Thirst is one of nature’s most reliable, renewable resources. The old men are thirsty to capitalize on what the young men are capitalizing on, so they will be stuffed with AI IPOs and AI ETFs until their livers are turned into foie gras.

2. David Ha — AI & Evolution: Learning to do More with Less (EP.146) – Jim O’Shaughnessy and David Ha

Jim O’Shaughnessy:

They’re willing to say to me, “What’s a large language model? And why is everyone so excited by ChatGPT and large language models? And what’s the difference between a large language model and something like Stable Diffusion, and what’s a multimodal type thing?” So if you wouldn’t mind and would indulge me, could you give us just a little bit of a tutorial? Most of our watchers and listeners are quite bright, so you don’t have to dumb it down, but large language models are great for certain purposes. They’re not so great for other purposes. Same with generative models. So if you wouldn’t mind, just a 101 on large language models versus the other models that we’re working with.

David Ha:

Yeah, sure. Let’s take a step back before we talk about large language models, language models, or just models in particular. At the end of the day, these models are statistical models. They’re prediction models. They model the statistics of the world and the data we feed them to train them. Like we were talking about von Neumann earlier, if we step back even further, a couple of centuries, to when these models started, when people started to use them for things… Bayes, for example; I think he was a priest or a prior.

Jim O’Shaughnessy:

Yeah. You’re right. That’s right. He was.

David Ha:

Yeah. Yeah. And he had to figure out a way to model how long people would live so that widows and orphans could receive an annuity to live from. So even back at that time, there was the concept of a model, the idea that you cannot predict everything with absolute certainty. You have to model the statistics from a bunch of data. So the simplest model that people used a few hundred years ago was: okay, you have a bunch of data, and you want to model, say, height given age, or weight given age, so you fit a linear model to it. It’s not going to be perfect, but here’s a bunch of data points; through those points, you draw the best line that fits them and you observe some uncertainty around it. That is a prediction model. Given X, you predict what Y likely is, with some uncertainty.
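To make the idea concrete, here is a minimal sketch (our own illustration, not from the podcast) of the kind of prediction model Ha is describing: fit a straight line to some made-up height-versus-age data and report the spread of the residuals as a rough error bar. The numbers and variable names are invented for the example.

```python
import numpy as np

# Toy data: ages (years) and heights (cm) -- made-up numbers for illustration.
ages = np.array([2, 4, 6, 8, 10, 12, 14], dtype=float)
heights = np.array([86, 102, 115, 127, 138, 149, 160], dtype=float)

# Fit a linear model: height ≈ slope * age + intercept.
slope, intercept = np.polyfit(ages, heights, deg=1)

# Quantify uncertainty as the standard deviation of the residuals.
predictions = slope * ages + intercept
residual_std = np.std(heights - predictions)

# Predict for a new age, with a rough error bar.
new_age = 9.0
print(f"Predicted height at age {new_age}: "
      f"{slope * new_age + intercept:.1f} ± {residual_std:.1f} cm")
```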

David Ha:

So that uncertainty is very important because it’s an admission of the fact that you don’t know what you’re doing, but this is your very best guess and this is the error bar that you’re going to get. And that’s basically the foundation of machine learning: you have data, you try to find the relationships within the data sets, and you make a prediction model with an uncertainty. Machine learning started taking off when we had larger sets of data. We no longer have a hundred or a thousand samples of human height versus weight or age versus weight; we have a hundred million samples of all sorts of different characteristics. Then we can no longer think in one dimension. Once you extend beyond one dimension, you get two dimensions, you get a plane, and beyond that a hyperplane; you’re thinking about tens of thousands of dimensions.

David Ha:

You’re fitting models of tens of thousands of dimensions, and they may not be linear models; they could be models with curvature, or, as is very common in statistics, sigmoidal models, where you’re not predicting the thing itself, you’re predicting the probability of that thing happening, like a zero-or-one threshold. As for machine learning, I think the advent of deep learning really became popular around exactly 10 years ago, when researchers, some of my previous colleagues and Geoff Hinton, demonstrated that these neural network models can be trained to understand the statistical properties of really large data sets. Back then you had a data set developed at Stanford by Fei-Fei Li’s group called ImageNet, and for a while it was all handcrafted, traditional computer vision methods. But what deep learning showed is that you can have less handcrafting of features: just give the model the data and let it learn the rules by itself in order to get the best prediction accuracy.
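As a small illustration of the sigmoidal models Ha mentions, the sketch below (our own, with invented numbers, and assuming scikit-learn is available) fits a logistic regression that predicts the probability of a yes/no outcome rather than the outcome itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data (invented): hours of study vs. whether an exam was passed (0/1).
hours = np.array([[0.5], [1.0], [1.5], [2.0], [2.5], [3.0], [3.5], [4.0]])
passed = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# A sigmoidal (logistic) model predicts the probability of the outcome,
# not the outcome itself.
model = LogisticRegression().fit(hours, passed)

for h in (1.0, 2.2, 3.5):
    p = model.predict_proba([[h]])[0, 1]
    print(f"{h:.1f} hours of study -> estimated pass probability {p:.2f}")
```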

David Ha:

So I think from 10 years ago you started to see the rise from simple linear regression or logistic regression models to deep neural network models. Rather than having a pure linear or logistic regression, you can have more curvature in hyper-dimensional space. This is something that neural networks do. And in a sense, maybe some people think this is what our brain does as well. We have a hundred billion neurons, and after a certain number of neurons, these phenomena emerge; I’m going to talk about that next. So we’re at the stage where neural network models are starting to be really good at prediction, because they can model lots of data.

David Ha:

Then the interesting thing is, sure, you can train things on prediction or even on things like translation; if you have paired English-to-French samples, you can do that. But what if you train a model to predict itself, without any labels? That’s really interesting, because one of the limitations we have is that labeling data is a daunting task and requires a lot of thought, but self-labeling is free. Anything on the internet, the label is itself, right? So there are two broad types of models that are popular now. There are language models that generate sequences of data, and there are things like image models, Stable Diffusion, where you generate an image. These operate on a very similar principle, but for a language model, you can take a large corpus of text from the internet, and the interesting thing is that all you need to do is train the model to simply predict what the next character or the next word is going to be, to predict the probability distribution of the next word.
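Here is a toy illustration of that next-character objective (our own sketch, not from the podcast): a model that simply counts which character tends to follow which in a tiny invented corpus. Real language models learn these statistics with billions of neural-network parameters, but the training signal is the same in spirit.

```python
from collections import Counter, defaultdict

# Tiny corpus; real models train on vast swathes of internet text.
corpus = "the cat sat on the mat. the cat ate the rat."

# Count, for each character, the distribution of the character that follows it.
next_char_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_char_counts[current][following] += 1

def predict_next(char):
    """Return the most likely next character and its estimated probability."""
    counts = next_char_counts[char]
    total = sum(counts.values())
    best, freq = counts.most_common(1)[0]
    return best, freq / total

char, prob = predict_next("t")
print(f"After 't', the model predicts {char!r} with probability {prob:.2f}")
```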

David Ha:

And with such a very simple objective, as you scale the model, as you scale the size and the number of neurons, you get interesting emerging capabilities as well. Before, maybe back in 2015 or ’16, when I was playing around with language models, you could feed one all of Shakespeare and it would blab out something that sounds like Shakespeare.

David Ha:

But in the next few years, once people scaled up the number of parameters from 5 million, to a hundred million, to a billion, to a hundred billion, with this same simple objective you can now interact with the model. You can actually feed in, “This is what I’m going to say,” and the model takes that as an input as if it had said it, predicts the next character, and gives you some feedback on that. And I think this is very interesting, because it is an emergent phenomenon. We didn’t design the model to have these chat functions; this capability has simply emerged from scale.

David Ha:

And the same on the image side as well. For images, there are data sets that map the description of an image to the image itself, and text-to-image models can go from a text input into some representation of that input, with the objective of generating an image that encapsulates what the text prompt is. And once we have enough images… I remember when I started, everyone was just generating tiny images of 10 classes: cats, dogs, airplanes, cars, digits and so on. They’re not very general. You can only generate so much.

David Ha:

But once you have a large enough data distribution, you can start generating novel things, for example a Formula 1 race car that looks like a strawberry, and it’ll do that. This understanding of concepts is emergent. So I think that’s what I want to get at: you start off with very simple statistical models, but as you increase the scale of the model while keeping the objectives quite simple, you get these emergent capabilities that were not planned but simply emerge from training on that objective.

David Ha:

It’s similar for us. As a researcher, I think both of us are interested in things like civilization and development. We ourselves only have a very simple optimization objective: to survive and maybe to pass on our genes to our descendants. And somehow, out of this simple objective, human civilization has emerged with all its goodness. I find it fascinating that we have a parallel universe where you have a simple objective of, “Let’s predict the next character,” and you get this vast understanding. So yeah, I think that’s the high-level description of what’s going on and what we could see from these principles.

Jim O’Shaughnessy:

One of the quotes that you like, and that I also love, compares emergence to engineering: “Bridges are designed to be indifferent to their environment and withstand fluctuations,” whereas emergent-type models are far more adaptive and far more similar to complex adaptive systems, which I’m fascinated by, and which I know we both are, since chatting with you offline.

Jim O’Shaughnessy:

And one of the things that you made me start thinking about a lot was the idea of resource constraints. And you used our own evolution, as you just mentioned: we have two primary objective functions, to live and to pass our genes on. And out of those two simple objective functions we tried to maximize came this incredible world of 8 billion sentient beings. So I love the connections between emergence and what’s happening right now in all of these models. But I also wonder, are these causative or are they correlative, do you think?

David Ha:

That’s a good question. It’s tricky. Let’s take a step back and say your job as an engineer is to design a system to identify whether an image is a cat or not a cat. Before neural networks or machine learning, you would have to come up with all of these rules for figuring it out: okay, let’s put in a whisker detector, let’s check whether it has two eyes, whether it’s a full cat with the body or just the head of the cat, and so on.

David Ha:

With these expert systems back in the ’70s or ’80s, you would maybe have come up with 2,000 rules. That is an example of a hand-engineered bridge rather than an emergent system. An emergent system would be: okay, here are a million pictures of cats, figure out what a cat is. And it’ll do that.

David Ha:

So it is very tricky, because there’s also the question of correlation versus causation in this. That’s why neural networks are also a double-edged sword: sometimes your model might treat a correlation as causation. There are some examples from ImageNet classification that I find hilarious. There’s a general category inside ImageNet called ants, like the insect.

David Ha:

Some models, I think, optimized so hard for state-of-the-art accuracy that if people fed in an image of the corner of a wall at home, the model would classify it as an ant, because maybe there were lots of training examples of ants around the corner of a wall; that’s where they hang out. So it would just think this is an ant because, well, that’s what the data is suggesting.

David Ha:

So I think one of the arguments is, well, maybe there’s not enough data to suggest that; there aren’t enough examples for it to do that. But this is debatable as well. Maybe purely scaling up the data and having the rules learned is not the only way forward. One of my hypotheses is that it’s a combination. In some work I did before, there’s a paper called ‘Weight Agnostic Neural Networks’, where I tried to find…

David Ha:

My collaborators and I did a project where we trained neural networks without training the neural networks. We only searched for the architecture of the neural networks. But that architecture still had to work, not great, but still had to more or less work, even if we randomized the parameters of the model. So you actually try to find a neural network architecture that works for some task even when the parameters are randomized.
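As a rough, simplified sketch of what evaluating a “weight agnostic” network might look like (our own illustration; the toy task and wiring below are invented and are not the paper’s actual setup), the idea is to score a fixed architecture by how well it works when every connection shares a single weight value, so that any credit goes to the wiring rather than to trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (invented for illustration): is the first input larger than the second?
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] > X[:, 1]).astype(float)

# A fixed, hand-wired architecture: 2 inputs -> 2 hidden units -> 1 output.
# The signed masks stand in for the evolved wiring; no weights are trained.
hidden_mask = np.array([[1.0, 1.0],
                        [-1.0, 0.0]])   # hidden unit 0 sees x0 - x1; unit 1 sees x0
output_mask = np.array([1.0, 0.0])      # the output reads hidden unit 0 only

def evaluate(shared_weight):
    """Accuracy when every connection uses the same shared weight value."""
    hidden = np.tanh(X @ (hidden_mask * shared_weight))
    output = hidden @ (output_mask * shared_weight)
    preds = (output > 0).astype(float)
    return (preds == y).mean()

# Score the architecture across several weight values: if it performs well
# for all of them, the credit belongs to the wiring, not to any trained weight.
for w in (-2.0, -0.5, 0.5, 2.0):
    print(f"shared weight {w:+.1f}: accuracy {evaluate(w):.2f}")
```

Because the wiring itself encodes the comparison, the accuracy stays high across all the shared-weight values, which is the property such a search is looking for.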

David Ha:

And I think one of the intuitions, or parallels, or inspirations for that research is the structure of our own brains. It’s not random, like a neural network with no structure; there’s a lot of structure in the brain and in how it develops. And that architecture is optimized for the particular task of survival on Earth. We would not survive at the bottom of the ocean or on another planet, only on Earth. We’re not general intelligences. We’re very good narrow intelligences for Earth.

Jim O’Shaughnessy:

Lines to Earth, right?

David Ha:

Yeah, exactly. So, getting back to causation versus correlation: a lot of the causal structures could be learned, but sometimes, maybe through evolution, there’s an outer loop, how the system is defined or how the rules were set up for it to learn, that may ultimately influence what it can learn and the understanding it can reach. For example, when people train these large language models now, in addition to doing things with language, they can also generate computer code.

3. Brunello Cucinelli – Om Malik and Brunello Cucinelli

The self-made billionaire greeted me at the door as if I was his long-lost friend. I felt as if I had known him all of my life, just hadn’t met him. I had bought two of his sweaters almost seven years ago, when I had lost a lot of weight (which I have since regained), but his clothes aren’t really part of my wardrobe. And yet I have admired them, as well as his stores and his ethics.

For example, he gives 20 percent of his company’s profits to his charitable foundation in the name of “human dignity” and pays his workers wages that are 20 percent higher than the industry standard, mostly because it allows his company to encourage and continue the Italian craftsman traditions. Cucinelli also pays for an artisan’s school in Solomeo: Young people are free to work either at his company or for another Italian company. The on-campus cafe is way more beautiful than Google Cafe or Facebook’s facilities. And the pasta is really heavenly…

…Om Malik: I’ve been reading about you, and I have been fascinated by your progress and more importantly how you have conducted your business. Where did you find the inspiration to follow this path?

Brunello Cucinelli: From the teary eyes of my father. When we were living in the countryside, the atmosphere, the ambiance — life was good. We were just farmers, nothing special. Then he went to work in a factory. He was being humiliated and offended, and he was doing a hard job. He would not complain about the hardship or the tiny wages he received, but what he did say was, “What have I done evil to God to be subject to such humiliation?”

Basically, what is human dignity made of? If we work together, say, and, even with one look, I make you understand that you are worth nothing and I look down on you, I have killed you. But if I give you regards and respect — out of esteem, responsibility is spawned. Then out of responsibility comes creativity, because every human being has an amount of genius in them. Man needs dignity even more than he needs bread.

[In the past, people] didn’t know anything about their employer. My father or my brother didn’t know if their employer had a villa on the sea. Whereas with Google Maps, I can see where your house is. That’s where the world is becoming new. Mankind is becoming more ethical, but it is not happening because man has decided to become better than he was 100 years ago. It’s because we know we live in a glass house where everybody can see.

In order to be credible, you must be authentic and true. Twenty years ago, something might be written about you in a newspaper. Then this newspaper would be scrapped, and that would be it. But now your statement stays [online] for the next 20 to 50 years — who knows how long for. To be credible, you must be consistent in the way you behave. Someone can say to you, “Listen, two years ago, you said something different.” In a split second, they know. That’s where lies that wonderful future for mankind…

…Om: Now we have a world that is changing. The idea of “brand” is kind of amorphous, and you don’t really know who stands behind that brand. I wonder if you have any thoughts about it.

Brunello: I wanted the brand to have my face. I wanted the product to convey the culture, life, lifestyle, dignity of work. We are a listed company, and I wanted to manufacture a product with dignity. I wanted a profit with dignity. Because the press all talk about the moral ethics of profit. Why can’t we have a dignified profit then?

Would you buy something from someone if you knew that the person, by making this product, has harmed or damaged mankind? No, you would not buy it. You wouldn’t even buy it if you knew that the company had staggering profits. Our cashmere blazer costs $3,000 retail, but the profit must be dignified. It needs to respect the raw material producer, then the artisans, then those working for the company. The consumer also needs to be respected. Everything must be balanced.

We need a new form of capitalism, a contemporary form of capitalism. I would like to add “humanistic” to that equation.

Don’t you feel that over the last two or three years? Don’t you smell it? There is an awareness raising, a civil, ethical point of view. The idea of community, dignity. Yes, it’s a strong sensation…

…Om: You once said that running a company is simple. I wanted to know more about that. I want to learn the business principles that other people, other entrepreneurs, can learn from you.

Brunello: You must believe in the human being, because the creativity of a company — Let’s say you have a company with 1,000 people. Maybe we were told that there are only two or three genius people in the 1,000. But I think that if you have 1,000 people, you have 1,000 geniuses. They’re just different kinds of genius and a different degree of intensity.

We hold a meeting here with all the staff every two months. Everybody takes part in it. Even the person with the humblest tasks knows exactly what was the latest shop we opened. Everything is based on esteem, and esteem then generates creativity.

Everything is visible, when things go well and also when they go less well. When we are sad, when we are worried, when we are happy: If you show all these different moods, then you are credible. That’s why I say this is simple.

Om: Right now you’re a publicly traded company, but you yourself have a more a philosophical bent. How do you reconcile the need of the stock market with your outlook on the world?

Brunello: Finance is now going back to working alongside industry while respecting their mutual roles. In the last 20 years, finance dealt too much with industry, and industry dealt too much with finance. Whereas I myself, I’m an industrialist. I don’t know anything about finance. If you invest in me, you invest in an industry. I like it even better if you call it an artisanal industry.

As for my business plans, I have three-year business plans and 30-year business plans but also three-centuries business plans. I think that this is another good breakthrough in the world.

I haven’t come across one single investor who asked me to target a higher growth. Generally speaking, we pay our suppliers and staff 20 percent more than the average on the market. No investor ever asked, “Why don’t you reduce their wages? They’re too high.” I’m confident, because finance will become contemporary and modern too…

…Om: I’m fascinated that you have such deep passion for philosophy. I wonder how it has helped you as a businessperson.

Brunello: In everything, really. For example, take Marcus Aurelius, the emperor. In any possible mood that you might be in, you read a sentence by him and you feel better. Any philosopher helps you to raise your head and the world will look better. Respect the human being, and that will be better. Hadrian the emperor said, “I never met anyone who after being paid a compliment did not feel better.”

The true way to nurture your soul is philosophy. The true malaise of the human being — no matter whether Italian, American, Chinese — is the malaise of your soul, the uneasiness of your soul. This is stronger now than when my father was young or my grandfather.

I would like to try to somehow cure this malaise of the soul, even with the young people working for my company, because at the end of the day, you can be wealthy and still feel the same way. I know many people who own a fortune. The other day, a very loaded person said to me, “I’d love to be more serene.” This is true for rich people, poor people.

There are three things you cannot buy. Fitness: You have to keep fit, whether you’re rich or not. Diet: You cannot pay someone to be on a diet for you. I think that diet is the biggest sacrifice in my life. Then, looking after your soul. No one can possibly treat your soul but you yourself. This is something you can do through culture and philosophy.

Marcus Aurelius says, “You should go with the flow of mankind, you should live as if it was the last day of your life, plan as if you were to live forever,” and then he also adds, “You should be at rest, at peace, you should give yourself some peace.” Saint Benedict adds, “The sun should never set on our rage. Let’s go to sleep at peace with mankind.”

Let’s try looking after our soul while working. Do you know that we work 11 percent of our life? We can’t have everything revolve around work. Unfortunately, now in Italy, it is hip and chic to say, “I am so tired and exhausted by work.” My father was tired because he was farming the land. He would say, “I need some sleep, I need some rest,” but he did not have this kind of feeling.

This is the great kind of treatment that we have to follow on a daily basis. Philosophy prescribed this treatment to me. I don’t know if you know Boethius, who lived in 520 AD. He was King Theodoric’s right-hand man. Theodoric condemned Boethius to death. He resorts to philosophy for help. Philosophy turns up as a woman, not very young, but with alert eyes. She says to Boethius, “What are you complaining about in your life? You have had this, this, this and that.” This is part of man.

Alexander the Great conquered a country. The tyrant cut the noses off the people there. It’s just the way it is. It’s part of life. I do not feel anxiety. What am I supposed to say here? You see, I think that philosophy really is part of human life.

4. Control, Complexity and Politics: Deconstructing the Adani Affair! – Aswath Damodaran 

In sum, I am willing to believe that the Adani Group has played fast and loose with exchange listing rules, that it has used intra-party transactions to make itself look more credit-worthy than it truly is and that even if it has not manipulated its stock price directly, it has used the surge in its market capitalization to its advantage, especially when raising fresh capital. As for the institutions involved, which include banks, regulatory authorities and LIC, I have learned not to attribute to venality or corruption that which can be attributed to inertia and indifference.

It is possible that Hindenburg was indulging in hyperbole when it described Adani as “the biggest con” in history. A con game, to me, has no substance at its core, and its only objective is to fool other people and part them from their money. Adani, notwithstanding all of its flaws, is a competent player in a business (infrastructure) which, especially in India, is filled with frauds and incompetents. A more nuanced version of the Adani story is that the family group has exploited the seams and weakest links in the India story to its advantage, and that there are lessons for the nation as a whole as it looks towards what it hopes will be its decade of growth.

  • First, in spite of the broadening of India’s economy, it remains dependent on family group businesses, some public and many private, for its sustenance and growth. While there is much that is good in family businesses, the desire for control, sometimes at all costs, can damage not just these businesses but also operate as a drag on the economy. Family businesses, especially those that are growth-focused, need to be more willing to look outside the family for good management and executive talent.
  • Second, Indian stock markets are still dominated by momentum traders, and while that is not unusual, there is a bias towards bullish momentum over its bearish counterpart. In short, when traders, with no good fundamental rationale, push up stock prices, they are lauded as heroes and winners, but when they, even with good reason, sell stocks, they are considered pariahs. The restrictions on naked short selling, contained in this SEBI addendum, capture that perspective, and it does mean that when companies or traders prop up stock prices, for good or bad reasons, the pushback is inadequate.
  • Third, I believe that stock market regulators in India are driven by the best of intentions, but so much of what they do seems to be focused on protecting retail investors from their own mistakes. While I understand the urge, it is worth remembering that the retail investors in India who are most likely to be caught up in trading scams and squeezes are the ones who seek them out in the first place, and that the best lessons about risk are learnt by letting them lose their money for overreaching.
  • Fourth, Indian banks have always felt more comfortable lending to family businesses than to stand-alone enterprises, for two reasons. The first is that the bankers and family group members are often members of the same social networks, making it difficult for the former to be objective lenders. The second is the perception, perhaps misplaced, that a family’s worries about reputation and societal standing will lead it to step in and pay off the loans of a family group business, even if that business is unable to. It is easy to inveigh against the crony relationships between banks and their borrowers, but it will take far more than a central banking edict or harshly worded journalistic pieces to change decades of learned behavior.

5. Beijing Needs to Junk Its Economic Playbook – Zongyuan Zoe Liu

Chinese household consumption was a solid growth driver supporting nearly 40 percent of Chinese GDP over the past two decades. China’s rising consumer class was willing to spend more on aspirational goods, confident that their incomes would continue to grow. They were right: The Chinese economy maintained an average of 9 percent annual GDP growth rate for nearly two decades between 2000 and 2019. As a group of Gallup researchers observed using data from a 10-year nationwide survey of the Chinese people, about 3.5 percent of Chinese households had annual incomes of 30,000 yuan (about $3,800) in 1997. This number skyrocketed to more than 12 percent in just five years. Researchers found a continued strong consumer appetite for both must-have items and discretionary fun.

Until roughly 2017, household consumption growth never lost steam. Yet during Chinese President Xi Jinping’s second term, Chinese households experienced the worst slowdown in consumption growth in a generation, dropping from 6.7 percent during Xi’s first term to 4 percent during his second term—considerably slower than GDP growth. Although the nationwide lockdowns and supply chain disruptions have certainly contributed to the downturn in consumption, the Chinese government’s regulatory crackdown on the tech industry combined with China’s worsening external environment have also fueled an unemployment crisis, especially among young people…

…All of these credit expansions with record-breaking exports only generated 3 percent growth in 2022 but at a mounting cost. The result of a proactive fiscal policy for over a decade since 2008 is that about a quarter of Chinese provinces will spend more than half of their fiscal revenue on debt repayment by 2025, as former Chinese Finance Minister Lou Jiwei warned. Previous credit expansion schemes also aimed to support major corporations, not to boost private consumption or provide household support. As a result, Chinese household income growth and consumption growth fell behind GDP growth. Although the U.S. government’s pandemic relief measures were also primarily targeted at corporations rather than households, many American households received greatly increased unemployment insurance as a cushion. However, this option was unavailable for the hardest-hit millions of unemployed migrant workers and recent college graduates in China…

…One way to interpret these policy announcements is that they collectively signal that Chinese policymakers have recognized the urgency of correcting China’s underconsumption problem. If this is true, then this year could be a watershed moment as the government pivots toward prioritizing household consumption over exports, which has been China’s canonical growth strategy since 1978.

But changing the course of government priorities in China, especially ones deeply mixed with local government finances, can be a slow and tangled process at best. And even if Chinese leaders genuinely attempt to prioritize consumption, they face two primary challenges: financial repression and household balance sheet deterioration.

Since former Chinese leader Deng Xiaoping, three generations of Chinese leaders have established a system of financial repression that suppresses consumption, forces savings, and prioritizes export and state-led investments. At the operational center of China’s repressive financial system are state-owned commercial banks, whose primary customers are state-owned enterprises and which have little experience with relationship banking. Take the episode in 2022, when Chinese banks offered loans to companies and then allowed them to deposit the funds at the same interest rate, or the time when Chinese banks inflated their loan numbers by swapping bills with one another to meet regulatory requirements for corporate lending. Both are sad evidence that the only type of lending Chinese banks know how to do, and are allowed to do in the current system, is lending to enterprises; when demand from enterprises is weak, Chinese banks are incapable of channeling credit to anyone else, especially consumers.

The balance sheet of the average Chinese household has gotten increasingly dire over the last 15 years. Household net asset growth has decelerated since 2010, a problem that worsened during the pandemic. A report by Zhongtai Securities, a Chinese securities service firm, estimated that between 2011 and 2019, Chinese household net asset growth rates dropped to around 13 percent from close to 20 percent before 2008. During the pandemic, household net asset growth sank below 10 percent.

Most of this wealth is concentrated in the country’s increasingly shaky property sector. An urban household balance sheet survey conducted by the People’s Bank of China in 2019 showed that housing was roughly 70 percent of household assets, with mortgage loans accounting for 75.9 percent of total household debt. This level of indebtedness was comparable to the United States in the run-up to the 2008 subprime crisis and the burst of the real estate and stock market bubble in Japan in the 1980s.

6. Investors: The one thing separating excellent from competent – Simon Evan-Cook

All great investors, past and present, are specialists, not generalists. They’re laser focused on doing one thing, and doing that one thing really well.

Warren Buffett, for example, finds great companies and holds them while they do great stuff. He knows what he wants (to make lots of money) and how to achieve it (hold great companies). So he ignores what everyone else is doing, and focuses on that.

The rub is that, no matter what your one thing is, it won’t work each and every year. This means there will be years when everyone else — the market — looks better than you (even Buffett — he’s had plenty of years like that).

Take 2020, the pandemic year. If your one thing was finding small turnaround stories (another perfectly good way of making lots of money), then 2020 was a nightmare: The share prices of big, obvious companies like Google, Amazon and Facebook rocketed, while your carefully selected recovery stocks cratered.

So, doing your one thing in 2020 made you look like a moron. I mean, wasn’t it obvious? It’s Amazon! We’re all locked in our homes! Amazon does home delivery, dummy!..

…Now, if I (or you) pick managers who say they only do one thing, but stop doing it after a tough year or two, I’m stuffed. It means I’m spending too much time exposed to their one thing when it’s not working, and not enough time when it is…

…This is why I’m drawn to Deck-Chair dude. He’s comfortable being different to everyone else. Like Buffett, he knows what he wants, and he knows how to get it. So he’s ignoring everyone else, and their disapproving glares, and focusing on doing his one thing. So, as long as neither of us buckles (and we’re both good at our respective one things, which work over the long term), we’ll do OK.

But there’s more to it than that…

This would have been quicker to write (and read) if there was a word to describe this super-trait.

But there isn’t. Not in English, and not that I know of, anyway (let me know if there is).

There are plenty that come close, but none hit the nail on the head.

‘Disagreeable’ is a word I’ve seen used in this context. And while it’s partly right, it’s also an exclusively negative word that entails being unpleasant or bad-tempered. And that’s not it.

‘Contrarian’ is another often-used term. But this implies someone who always does the opposite to everyone else. Whereas I’m talking about being prepared to do it when necessary, but also being content to run with the crowd when that’s the right thing to do.

To experience the missing word for yourself, try to think of a term that describes this trait in its most heroic form. Like Rosa Parks, for example, when she defied racist rules and social norms to sit in the ‘wrong’ part of the bus.

‘Contrarian’ doesn’t cut it: She wasn’t breaking all laws, just that one. And ‘disagreeable’ in her case is downright offensive.

‘Stubborn’? ‘Dogged’? ‘Pugnacious’? Sure; these are characteristics she displayed, but do they define her? I don’t think so.

Clearly she was ‘brave’, but that’s too broad a term to describe her act of deliberately breaking an unfair law and social order: You can be brave by upholding a good rule as well as breaking a bad one.

On paper ‘Conscientious’ comes close: “Being controlled by one’s inner sense of what is right”, says the dictionary. That works, but recently it’s come to mean someone who’s quietly hard-working — “a conscientious worker” — and that’s well wide of the mark.

So, until I’m told otherwise, I’ve had to create a new word: ‘Bellitious’. A mash-up of ‘belligerent’ and ‘conscientious’, which describes someone who can be belligerent when doing what their conscience tells them is right.

7. Who Owns the Generative AI Platform? – Matt Bornstein, Guido Appenzeller, and Martin Casado

Infrastructure is, in other words, a lucrative, durable, and seemingly defensible layer in the stack. The big questions to answer for infra companies include:

  • Holding onto stateless workloads. Nvidia GPUs are the same wherever you rent them. Most AI workloads are stateless, in the sense that model inference does not require attached databases or storage (other than for the model weights themselves). This means that AI workloads may be more portable across clouds than traditional application workloads. How, in this context, can cloud providers create stickiness and prevent customers from jumping to the cheapest option?
  • Surviving the end of chip scarcity. Pricing for cloud providers, and for Nvidia itself, has been supported by scarce supplies of the most desirable GPUs. One provider told us that the list price for A100s has actually increased since launch, which is highly unusual for compute hardware. When this supply constraint is eventually removed, through increased production and/or adoption of new hardware platforms, how will this impact cloud providers?
  • Can a challenger cloud break through? We are strong believers that vertical clouds will take market share from the Big 3 with more specialized offerings. In AI so far, challengers have carved out meaningful traction through moderate technical differentiation and the support of Nvidia, for whom the incumbent cloud providers are both the biggest customers and emerging competitors. The long-term question is: will this be enough to overcome the scale advantages of the Big 3?…

…There don’t appear, today, to be any systemic moats in generative AI. As a first-order approximation, applications lack strong product differentiation because they use similar models; models face unclear long-term differentiation because they are trained on similar datasets with similar architectures; cloud providers lack deep technical differentiation because they run the same GPUs; and even the hardware companies manufacture their chips at the same fabs.

There are, of course, the standard moats: scale moats (“I have or can raise more money than you!”), supply-chain moats (“I have the GPUs, you don’t!”), ecosystem moats (“Everyone uses my software already!”), algorithmic moats (“We’re more clever than you!”), distribution moats (“I already have a sales team and more customers than you!”) and data pipeline moats (“I’ve crawled more of the internet than you!”). But none of these moats tend to be durable over the long term. And it’s too early to tell if strong, direct network effects are taking hold in any layer of the stack.

Based on the available data, it’s just not clear if there will be a long-term, winner-take-all dynamic in generative AI.

This is weird. But to us, it’s good news. The potential size of this market is hard to grasp — somewhere between all software and all human endeavors — so we expect many, many players and healthy competition at all levels of the stack. We also expect both horizontal and vertical companies to succeed, with the best approach dictated by end-markets and end-users. For example, if the primary differentiation in the end-product is the AI itself, it’s likely that verticalization (i.e. tightly coupling the user-facing app to the home-grown model) will win out. Whereas if the AI is part of a larger, long-tail feature set, then it’s more likely horizontalization will occur. Of course, we should also see the building of more traditional moats over time — and we may even see new types of moats take hold.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, and Meta Platforms (parent of Facebook). Holdings are subject to change at any time.