What We’re Reading (Week Ending 12 February 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 12 February 2023:

1. The race of the AI labs heats up – The Economist

But almost all recent breakthroughs in big AI globally have come from giant companies, because they have the computing power (see chart 2), and because this is a rare area where results of basic research can be rapidly incorporated into products. Amazon, whose AI powers its Alexa voice assistant, and Meta, which made waves recently when one of its models beat human players at “Diplomacy”, a strategy board game, respectively produce two-thirds and four-fifths as much AI research as Stanford University, a bastion of computer-science eggheads. Alphabet and Microsoft churn out considerably more, and that is not including DeepMind, Google Research’s sister lab which the parent company acquired in 2014, and the Microsoft-affiliated OpenAI (see chart 3).

Expert opinion varies on who is actually ahead on the merits. The Chinese labs, for example, appear to have a big lead in the subdiscipline of computer vision, which involves analysing images, where they are responsible for the largest share of the most highly cited papers. According to a ranking devised by Microsoft, the top five computer-vision teams in the world are all Chinese. The BAAI has also built what it says is the world’s biggest natural-language model, Wu Dao 2.0. Meta’s “Diplomacy” player, Cicero, gets kudos for its use of strategic reasoning and deception against human opponents. DeepMind’s models have beaten human champions at Go, a notoriously difficult board game, and can predict the shape of proteins, a long-standing challenge in the life sciences.

Jaw-dropping feats, all. When it comes to the sort of AI that is all the rage thanks to ChatGPT, though, the big battle is between Microsoft and Alphabet. To see whose tech is superior, The Economist has put both firms’ AIs through their paces. With the help of an engineer at Google, we asked ChatGPT, based on an OpenAI model called GPT-3.5, and Google’s yet-to-be-launched chatbot, built upon one called LaMDA, a set of questions. These included ten problems from an American maths competition (“Find the number of ordered pairs of prime numbers that sum to 60”) and ten reading questions from America’s SAT school-leavers’ exam (“Read the passage and determine which choice best describes what happens in it”). To spice things up, we also asked each model for dating advice (“Given the following conversation from a dating app, what is the best way to ask someone out on a first date?”).

Neither AI emerged as clearly superior. Google’s was slightly better at maths, answering five questions correctly, compared with three for ChatGPT. Their dating advice was uneven: fed some real exchanges in a dating app, each gave specific suggestions on one occasion, and platitudes such as “be open minded” and “communicate effectively” on another. ChatGPT, meanwhile, answered nine SAT questions correctly compared with seven for its Google rival. It also appeared more responsive to our feedback and got a few questions right on a second try. On January 30th OpenAI announced an update to ChatGPT improving its maths abilities. When we fed the two AIs another ten questions, LaMDA again outperformed by two points. But when given a second chance ChatGPT tied.

The reason that, at least so far, no model enjoys an unassailable advantage is that AI knowledge diffuses quickly. Researchers from competing labs “all hang out with each other”, says David Ha of Stability AI. Many, like Mr Ha, who used to work at Google, move between organisations, bringing expertise and experience with them. Moreover, since the best AI brains are scientists at heart, they often made their defection to the private sector conditional on a continued ability to publish their research and present results at conferences. That is partly why Google made public big advances including the “transformer”, a key building block in AI models, giving its rivals a leg-up. (The “T” in ChatGPT stands for transformer.) As a result of all this, reckons Yann LeCun, Meta’s top AI boffin, “Nobody is ahead of anybody else by more than two to six months.”

2. What’s a Blockchain? – Technically

Like SQL databases, blockchains are a way to store data. SQL databases store data in rows and are great for storing “facts” like how many likes an Instagram post has or the contents of this article.

Unlike SQL, blockchains take a state-machine based approach to storing this data. A state machine models reality in a different way than the spreadsheet-like relational databases that you’re used to.

A state-machine describes ways of moving between states via actions. For a simple example, think about a button. The button has two states: pressed (on), or not pressed (off). When you take the action of pressing the button, the state changes from off to on (and vice versa).

On the blockchain, an action like “button pressed” is called a transaction. And in the context of cryptocurrencies like Bitcoin, the most common type of transaction is moving value from one entity to another. In state 1 (the beginning), Alice has $7 and Bob has $1. Then a transaction happens, where Alice gives Bob $2. Now, in the subsequent state 2, Alice has $5 and Bob has $3.
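To make the state-machine idea concrete, here is a minimal Python sketch of the Alice-and-Bob example (our own illustration, not code from the article): a state is just a snapshot of balances, and a transaction is the action that moves the system from one state to the next.

```python
# State 1: a snapshot of everyone's balances at the beginning.
state_1 = {"Alice": 7, "Bob": 1}

def apply_transaction(state, sender, receiver, amount):
    """Apply one transaction (an 'action') and return the next state."""
    if state.get(sender, 0) < amount:
        raise ValueError(f"{sender} does not have {amount} to send")
    next_state = dict(state)  # treat each state as an immutable snapshot
    next_state[sender] -= amount
    next_state[receiver] = next_state.get(receiver, 0) + amount
    return next_state

# The transaction: Alice gives Bob $2.
state_2 = apply_transaction(state_1, "Alice", "Bob", 2)
print(state_2)  # {'Alice': 5, 'Bob': 3}
```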

These transactions are grouped into blocks. Blocks are then chained together, forming a blockchain! And voila, you understand the blockchain.

Blocks depend on the previous state, which is determined by the previous block, and so on. In essence, the blockchain is a sequential list of changes (blocks) made to the initial state. 

On a blockchain, the current state is never explicitly represented because only transactions are stored. In other words, there’s no record saying “Bob currently at this moment has $x.” This is different from how SQL databases store data, where the only thing stored is the current state (and perhaps some history). As with all things technology, there are tradeoffs to this approach:

Since blockchains don’t store the state, it takes time to calculate it by running through previous transactions, whereas SQL databases always have access to the current state. 

On the other hand, blockchains automatically store history via transactions while SQL databases don’t keep a record of the history at all. Storing history is powerful because it enables blockchains to be transparent: it’s easy to see how we got to the current state on a blockchain. This key property enables a lot of cool use cases, which we’ll cover in the next section.
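A minimal sketch of that tradeoff, continuing the toy example above (again our own illustration, with made-up blocks): because the chain stores only transactions, the current balances have to be recomputed by replaying every transaction from the initial state, while the full history comes along for free.

```python
# The 'chain' here is just an ordered list of blocks,
# and each block is an ordered list of transactions.
genesis_state = {"Alice": 7, "Bob": 1}
blockchain = [
    [("Alice", "Bob", 2)],                       # block 1
    [("Bob", "Alice", 1), ("Alice", "Bob", 4)],  # block 2
]

def current_state(initial_state, chain):
    """Derive the latest balances by replaying every transaction in order."""
    state = dict(initial_state)
    for block in chain:
        for sender, receiver, amount in block:
            state[sender] -= amount
            state[receiver] = state.get(receiver, 0) + amount
    return state

print(current_state(genesis_state, blockchain))  # {'Alice': 2, 'Bob': 6}
```

A SQL table, by contrast, would hold only the final balances and would need a separate audit log to answer “how did we get here?”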

3. TIP520: Investing Through Post-Bubble Markets w/ Jamie Catherwood – Trey Lockerbie and Jamie Catherwood

[00:01:45] Trey Lockerbie: I know that many have compared today to the 1970s, but I figured you might have different perspectives and possibly draw comparisons to other periods in time that resemble what we’re seeing today. 

[00:01:55] Jamie Catherwood: Yeah, so anyone that’s familiar with my work will know that I like to look at things before the 1970s. 

[00:02:06] Jamie Catherwood: I’m talking about the 1870s. In all seriousness, if you want to read about it, my colleague at O’Shaughnessy Asset Management, Ehren Stanhope, a member of our research team and a client portfolio manager, wrote a great paper called “The Great Inflation.” You can find it on our website, osam.com, where he walks through the similarities and, more importantly, the key differences between the 1970s and today, and why this is not like the 1970s Great Inflation.

[00:02:38] Jamie Catherwood: But to actually answer your question, I would say that the period I find most interesting in terms of a parallel to today would be the 1920s, which I’m sure most people know by now. Honestly, since COVID started, I’ve found the similarities in progression and timeline between the early 1910s and 1920s and today really interesting.

[00:03:02] Jamie Catherwood: Because while we obviously, knock on wood, haven’t had a world war today, it looked like we might be following that path too when the Russia-Ukraine conflict started. But thankfully, so far that has been avoided. A hundred years ago, you had a pandemic with the Spanish flu. After that, you had a wave of summer protests around race called the Red Summer of 1919, which was similar to the George Floyd Black Lives Matter summer of protests and demonstrations.

[00:03:33] Jamie Catherwood: And then you had a reopening where things were really kind of speculative and surging to make up for the pent-up demand that had built up while we were all locked down, which also occurred coming out of World War I and the Spanish flu a hundred years ago. But then in 1920-1921, you had a really sharp and severe recession, which was very short.

[00:03:55] Jamie Catherwood: But again, it was a problem of, in that case, rampant inflation very quickly turning into rampant deflation. It was an interesting period, but then after that is when you got the roaring twenties. People tend to skip over that recession when they talk about the roaring twenties – the decade that came out of the pandemic.

[00:04:14] Jamie Catherwood: And then we had a recession, and then we had the roaring twenties. And so today, the parallels are pretty obvious: we had a pandemic, we had the George Floyd summer, and then we had the recession. And now the question is, are we going to keep following roughly in line with the twenties? By that I mean, are we experiencing, or on the precipice of experiencing, a true roaring twenties of our own?

[00:04:40] Jamie Catherwood: Or is it going to be something different, where the economy takes longer to rebuild and truly get back to pre-COVID levels? Time will tell, but in terms of similarities, I think there are few periods that have so much in common…

…[00:05:22] Trey Lockerbie: In the article you just mentioned, and we’ll be sure to add it to the show notes so that our listeners can find it, there’s a quote that I wanted to emphasize. I think it summarizes it pretty well. It says, “As we dive into the impact on equity markets, there does not appear to be a link between high inflation and lower equity returns, most likely associated with the compression in valuations that occurs, as it did during the Great Inflation.

[00:05:45] Trey Lockerbie: That said, certain factors like value, momentum, and shareholder yield historically hold up quite well in moderate to high inflation regimes.” So I thought that was a really interesting point. I think a lot of people assume there is a high correlation between inflation and the performance of stocks, so it’s interesting to dig in a bit more. Can you highlight anything else on that subject: assets or sectors that actually do perform well, or factors that are best to focus on, during periods like this?

[00:06:19] Jamie Catherwood: Yeah, so factors in general tend to hold up very well in inflationary regimes. In addition to this paper by Ehren, which goes through and shows the returns across different factors in different inflation regimes since 1926, there is a great paper by JP Morgan aptly titled “The Best Strategies for Inflationary Times”, pretty to the point. In that paper, which I think came out two years ago at this point, they looked at essentially eight high-inflation regimes, starting, I think, with the one coming out of World War II. There have been about eight main inflation regimes since then, and they look at how different assets, investing styles, and sectors performed in each of those regimes.

[00:07:08] Jamie Catherwood: And then also on average. And so they found that across those eight regimes, from a factor standpoint, momentum was the best-performing factor across all inflation regimes, and the size factor was the worst. For sectors, energy was the best sector across all eight inflation regimes, and consumer durables, so like consumer staples, was the worst-performing sector by some margin. It’s a really interesting paper, and it was interesting to see that momentum was the highest performer in their research….

…[00:13:55] Trey Lockerbie: You know, FTX is, I think, around 8 billion. But it’s still huge, right? And a lot of people were very surprised to see them file for bankruptcy essentially overnight. So it brought the word bankruptcy to mind for me. I was kind of curious about this, so I wanted to learn a little bit about the history of bankruptcy.

[00:14:12] Trey Lockerbie: I would look to you or someone like you to share something about, you know, where the term bankruptcy comes from.

[00:14:18] Jamie Catherwood: Essentially, back in the 14th century in Italy, the bankers at that time were conducting their business and transactions off of a bench. A bench is what they called it, but it really looked more like a big table.

[00:14:34] Jamie Catherwood: But for all intents and purposes, it was this bench that they would sit on. They had the table, and that’s where they would basically sit, in the squares in Italy. So you know, you can picture somewhere like Venice, with all these Venetian bankers sitting out in a courtyard, doing their banking from this table.

[00:14:48] Jamie Catherwood: If a banker went insolvent, though, and could not continue lending out money or meeting his payments, then to signal and kind of shame that banker publicly, and to let people know that he was insolvent and had gone bust, the authorities or other bankers would break that person’s bench in half as a kind of public signal.

[00:15:09] Jamie Catherwood: Like, this guy literally blew up. He broke his bench in half. He’s insolvent. And, sorry to any Italian listeners, brace yourselves: the Italian phrase at that time was banca rotta. That meant a broken bench. And so obviously you can see how, over time, banca rotta goes from broken bench to bankruptcy.

[00:15:34] Jamie Catherwood: So banca rotta, bankruptcy: that’s where we get the term bankrupt from, because it goes back to broken benches. When a banker went insolvent, they smashed his bench. And so a broken bench equals bankruptcy….

… [00:36:32] Trey Lockerbie: SBF, obviously still in the news, was once compared to JP Morgan for bailing out a lot of crypto companies, which is also kind of interesting leading up to, you know, the demise, let’s say, of FTX. Talk to us about the Panic of 1907 and why this comparison to JP Morgan is being made.

[00:36:50] Jamie Catherwood: So it’s really interesting; in hindsight, these are the kinds of comparisons that turn out to be not so great for people. Because I think he was also called, like, the next Warren Buffett. But yeah, the 1907 Panic was a really interesting one.

[00:37:03] Jamie Catherwood: A large reason why it started was actually from a year earlier, in April 1906, with the San Francisco earthquake. Quick history: it’s kind of a quirk of that period that over 50% of fire insurance companies in San Francisco were British, which becomes very important. Because in April 1906 the San Francisco earthquake happened, and what a lot of people I think don’t know is that it wasn’t actually the earthquake that did the most damage. It was the fires, because essentially the earthquake took out the city’s water mains. And so an earthquake happens, it hits a bunch of pipes and whatever, and it causes fires. But then, because the city’s water mains had been taken out, there was no water to put out the fires.

[00:37:50] Jamie Catherwood: And so for four straight days, the whole city just burned. Something like 20,000 blocks were destroyed, and between 30 and, like, 70% of the San Francisco population, which I know is a huge range, went into homelessness because of that fire. I mean, even if it’s just 30%, that’s still a lot of people. And at the time there was no earthquake insurance.

[00:38:12] Jamie Catherwood: And so people who had had their house destroyed by the earthquake, but whose house didn’t catch on fire, had no real way to get an insurance payout, because the damage was just from the earthquake. But if they did have fire insurance, what a lot of people started doing was literally just setting their house on fire, because there was no earthquake insurance.

[00:38:31] Jamie Catherwood: So they knew, like, if we’re gonna get anything out of this, it’s by lighting our house on fire and then saying the earthquake caused our house to catch on fire. But this is important because, again, as over 50% of the fire insurance companies in San Francisco were British, when this event happened, British fire insurance firms suddenly were on the hook to pay out a lot of money.

[00:38:58] Jamie Catherwood: And so what happened was Britain ended up sending the equivalent of 13% of the nation’s gold supply to San Francisco on ships, because these firms needed to pay out so much money. And after Britain sends out 13% of its gold supply, they hike up their rates and really contract their market, because they’re trying to bring gold back over to London after depleting its reserves so much.

[00:39:32] Jamie Catherwood: And so this had knock-on effects for global markets, specifically in New York, because this was happening at a time of year when financial markets were already kind of fragile because of seasonal funding and capital needs around agriculture. And so, even though it seems like an unrelated event, this earthquake had knock-on effects because it really kind of tightened up markets.

[00:39:55] Jamie Catherwood: And then alongside that, you have the Knickerbocker Trust Company and all these other sketchy trust companies that were highly levered and taking a lot of risk on speculative stocks. So markets were already kind of fragile because of the San Francisco earthquake issue, and then alongside that, you had a failed corner of the copper market and then the collapse of the Knickerbocker Trust Company and all these other trust companies.

[00:40:20] Jamie Catherwood: And at the time, we didn’t have a Federal Reserve. And so JP Morgan, the person, ended up basically acting like the Federal Reserve and as a lender of last resort, providing capital and doing deals with companies and individuals that needed help, because there wasn’t really another place for them to turn.

[00:40:40] Jamie Catherwood: So basically what ended up happening was the government realized we can’t continue to rely on a single person, you know, to bail us out of future crises. That panic also highlighted the downsides of relying on gold as the base of your monetary system, because something like an earthquake, with a lot of British fire insurance firms on the hook, led to a lot of gold needing to be moved, causing financial markets to tighten and become more fragile.

[00:41:12] Jamie Catherwood: It just really highlighted how susceptible the gold standard was to these types of shocks. And so that, and the need for a Federal Reserve or some type of central bank, were really two of the lasting impacts from the 1907 panic, because it just really highlighted: you know, if JP Morgan dies, what are we gonna do?

[00:41:30] Jamie Catherwood: So it led to the creation of the Federal Reserve in 1913. So yeah, the panic of 1907 is kind of like the last pre-Fed real panic.

4. How Gautam Adani Made (and Could Lose) a $147 Billion Fortune – Stacy Meichtry, Shan Li, Krishna Pokharel, and Weilun Soon

AHMEDABAD, India—Gautam Adani is ubiquitous in this country.

His name is plastered on roadside billboards and on the airports and shipping docks he operates. His power plants light Mumbai office towers and irrigate rural fields, fueled by coal he imports from mines as far away as Australia. He recently expanded into defense and media.

So when U.S. short seller Hindenburg Research alleged last week that the Adani Group—the energy and infrastructure conglomerate he controls—was engaged in wide-ranging fraud, the fallout was widespread and severe. His companies’ stocks and bonds plunged, leaving investors with billions of dollars in losses and igniting a bitter fight that the company cast as an assault on the nation itself. On Wednesday, Mr. Adani’s flagship company, Adani Enterprises, canceled a stock sale of up to $2.5 billion.

The Adani Group denied the short seller’s allegations, describing the report as “a calculated attack on India, the independence, integrity and quality of Indian institutions, and the growth story and ambition of India.” Hindenburg shot back that Adani’s rebuttal stoked nationalist sentiment without adequately addressing the issues the firm had raised…

…Hindenburg’s allegations have shaken what many Indians call the Gujarat model of economic growth—a reference to the home state of both Messrs. Adani and Modi. The approach has involved using large government subsidies to fund infrastructure construction by private firms such as Mr. Adani’s.

The opposition Congress party has used the Hindenburg report to cast the Adani Group as an oligarch enabled by the Modi government.

“It says a lot about what corporate India is like,” said Hemindra Hazari, a Mumbai-based analyst who specializes in the Indian capital markets. Investors, he said, “are clearly very shaken up.”…

…Mr. Adani is involved in the Modi government’s plans to pivot the economy from fossil fuels to cleaner sources of energy such as wind and solar power. He has vowed to build three factories to make solar modules, wind turbines and hydrogen electrolyzers, part of his plan to invest $70 billion in cleaner technologies over the next decade. He is developing a vast solar farm in India’s northwestern desert…

…Mr. Adani returned to Ahmedabad in the early 1980s to work with his older brother, Mahasukh, who had acquired a plastics maker. He worked there as an importer, procuring raw materials for the firm’s factories. The family later founded Adani Exports, sending goods such as toothpaste and shoe polish to global markets.

In the early 1990s, India fell into an economic crisis fueled in part by the economy’s reliance on imports. The government secured an emergency loan from the International Monetary Fund and embarked on a sweeping privatization drive.

Adani Exports began buying land at Mundra Port, which was owned by the state of Gujarat. Mundra’s unusually deep waters made it ideal for docking massive ships, and its position along the Arabian Sea made it an effective gateway for Asian goods to travel west…

…Mr. Adani later proposed forming a joint venture with the state of Gujarat, which still owned land in the area, to further develop the port. Gujarat’s government approved the venture.

In 2001, after climbing the ranks of the Hindu nationalist Bharatiya Janata Party, Mr. Modi was appointed chief minister of Gujarat’s government. He helped its economy grow by providing incentives to attract businesses such as auto manufacturers, upgrading the electricity supply and improving irrigation for farmers.

Under Mr. Modi, the Gujarat government sold the state’s stake in the port joint venture to Adani Exports for two billion rupees, about $24 million at today’s exchange rate, according to a 2014 report by the federal government auditor.

“The development of Mundra Port which was envisaged as a joint sector port turned out to be a private sector port for which competitive bidding was not followed,” the report said.

Mr. Adani built a rail line to the port, making it the first in India connected to the national rail system. That allowed Mr. Adani to turbocharge the movement of goods through Mundra. The central government designated the port a special economic zone, providing another incentive to do business there.

India lacked abundant supplies of fossil fuels, so Mr. Adani began importing coal from Indonesia and Australia. He built a giant conveyor belt in Mundra to carry coal from the dock toward a nearby Adani power plant. Electricity generated at the plant moved over Adani transmission lines to cities and towns hundreds of miles away.

“I proudly say that we had a very good experience with the Modi government,” Mr. Adani said in the recent TV interview, referring to Mr. Modi’s Gujarat administration.

Mundra became India’s largest private port, which allowed Mr. Modi to brandish his pro-business credentials as he prepared to run for prime minister. A Hindu nationalist, Mr. Modi tapped into the frustrations of a generation of Indians who had climbed out of poverty but didn’t reach the middle class because of slowing growth and a lack of employment.

After Mr. Modi won, his government sought to further accelerate economic growth. That included a plan to privatize the operation of six airports. Companies in the bidding weren’t required to have any experience in building or operating airports. Mr. Adani won all six contracts, making his company India’s largest airport operator…

Adani Group’s expansion into new businesses such as data centers, copper refining and hydrogen drew it into capital-intensive sectors, where analysts say its companies have limited experience. Much of that expansion was funded by debt. Analysts have said many projects aren’t expected to turn a profit for a few years.

Debt-research firm CreditSights published a report in August describing Adani Group as “deeply overleveraged.” Adani Green Energy had a debt-to-equity ratio of 2,023% at the end of the fiscal year ended March 31, 2022, the report said, while Adani Transmission’s was 272%.

The report warned that if one of the conglomerate’s companies became financially distressed, it could negatively affect the stock prices or valuations of others.

Adani Group said in September the debt ratios of its companies “continue to be healthy and are in line with industry benchmarks,” adding that the companies have consistently reduced their debt loads.

In November, Adani Enterprises announced plans for a large stock sale, aiming to raise as much as $2.5 billion. It said some of the funds would be used to repay debt and fund capital expenditures for green-energy projects, expressway construction and airport improvements.

Three days before the public offering began last Friday, the Hindenburg report was released, sending shares of Adani companies plummeting.

5. Sunday Reads #170: Lemon markets, dark forests, and a firehose of malicious garbage – Jitha Thathachari

One thing I’ve been saying often is: when it’s 10x easier to fake it than to make it, fakes will always outnumber the truth. We saw it in the crypto summer of 2021, when all you needed was to create a token and you’d get mass adoption. A paper found that 98% of tokens launched on Uniswap were scams.

The general principle is: When it’s easy to showcase a veneer of “work” without doing the work itself, then 99% of the work you see will not be real.

When it’s easy to generate content without writing it yourself, then 99% of content will be AI-generated. And if 99% of content is AI-generated, you’re better off assuming that 100% is AI-generated. When you see any content online, the default assumption will be: this has been written by an AI.

This won’t happen tomorrow. It might not happen for the next three years. But inevitably, it will happen. The Internet will become “a market for lemons”.

“A market for lemons” is a thought experiment that shows how a market degrades in the presence of information asymmetry.

From Wikipedia:

Suppose buyers cannot distinguish between a high-quality car (a “peach”) and a “lemon”. Then they are only willing to pay a fixed price for a car that averages the value of a “peach” and “lemon” together.

But sellers know whether they hold a peach or a lemon. Given the fixed price at which buyers will buy, sellers will sell only when they hold “lemons”. And they will leave the market when they hold “peaches” (as the value of a good car as per the seller will be higher than what the buyer is willing to pay).

Eventually, as enough sellers of “peaches” leave the market, the average willingness-to-pay of buyers will decrease (since the average quality of cars on the market decreased), leading even more sellers of high-quality cars to leave the market through a positive feedback loop.

Thus the uninformed buyer’s price creates an adverse selection problem that drives the high-quality cars out of the market.

This is how a market collapses.

Soon, everything that’s for sale is garbage. Nobody has any incentive to put anything other than garbage up for sale. Why would they, when they cannot prove that they’re selling the real thing?…
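A back-of-the-envelope simulation makes the feedback loop visible. This is our own sketch with made-up quality numbers, not anything from the article: buyers offer the average quality of cars still on the market, sellers whose cars are worth more than that offer withdraw, and the offer ratchets downward until only lemons remain.

```python
# Each number is a car's true quality, known only to its seller.
cars = [100, 90, 80, 70, 60, 50, 40, 30, 20, 10]

market = list(cars)
while market:
    offer = sum(market) / len(market)              # buyers pay the average quality
    remaining = [q for q in market if q <= offer]  # "peaches" above the offer exit
    if len(remaining) == len(market):
        break                                      # nobody else leaves; market settles
    market = remaining
    print(f"offer={offer:.1f}, cars left={market}")

print("Surviving market:", market)  # only the worst lemon is left for sale
```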

…Coming back to generative AI, what we see will be similar. As instant “fake content” becomes more and more like “real content” that takes hours to painstakingly produce, the outcome is clear: The Internet will become, slowly and then suddenly, completely fake. It will become a market for lemons. So what does this mean for how we use the Internet?

Lars Doucet talks about this in AI: Market for Lemons and the Great Logging Off

The internet gets clogged with piles of semi-intelligent spam, breaking the default assumption that the “person” you’re talking to is human.

The default assumption will be that anything you see is fake. You think this is hyperbole? You don’t think this can happen? Well, then ask yourself: When did you last pick up a phone call from an unknown number? 20 years ago, you’d pick up a call from any number. It was almost always a real person, whom you wanted to speak to or who had something useful to tell you. Today, an unknown number is always a robocaller, a scammer, or a telemarketer. You really really don’t want to speak to them.

Why won’t the same thing happen with the Internet?…

…To paraphrase Lars, what happens when fake content becomes 100x easier to create? What happens when every social network is chock-full of bots, drowning your feed in utter gibberish? What happens when 99% of the people you interact with on Instagram are fake? What happens when 99% of the people you play chess against online are “fake” humans? What happens when they defeat you within 20 moves every single time? What happens when every profile you right-swipe on Tinder is a bot that’s about to scam you? What happens when the Internet becomes a never-ending firehose of malicious garbage?

This is what happens: You start logging off the Internet…. and logging in to more curated, closed communities. No more talking to fake people on Twitter or Facebook. No more using Google for search. Instead, everything happens in closed Slack or Discord communities. Invite-only social networks where a curated set of people talk to each other.

Maggie Appleton talks about this scenario, in The Expanding Dark Forest.

The “Dark Forest” is originally a term from astronomy. It’s a hypothesis for why we haven’t found any aliens yet, despite searching for decades. First proposed in 1983, it became popular with Liu Cixin’s Three-Body Problem trilogy.

Summarizing from Wikipedia:

The dark forest hypothesis is the idea that many alien civilizations exist throughout the universe, but are both silent and paranoid.

In this framing, it is presumed that any space-faring civilization would view any other intelligent life as an inevitable threat, and thus destroy any nascent life that makes its presence known. As a result, the electromagnetic spectrum would be relatively silent, without evidence of any intelligent alien life, as in a “dark forest”…filled with “armed hunter(s) stalking through the trees like a ghost”.

6. The generative AI revolution has begun—how did we get here? – Huang Haomiao

You may be familiar with the latest happenings in the world of AI. You’ve seen the prize-winning artwork, heard the interviews between dead people, and read about the protein-folding breakthroughs. But these new AI systems aren’t just producing cool demos in research labs. They’re quickly being turned into practical tools and real commercial products that anyone can use.

There’s a reason all of this has come at once. The breakthroughs are all underpinned by a new class of AI models that are more flexible and powerful than anything that has come before. Because they were first used for language tasks like answering questions and writing essays, they’re often known as large language models (LLMs). OpenAI’s GPT3, Google’s BERT, and so on are all LLMs.

But these models are extremely flexible and adaptable. The same mathematical structures have been so useful in computer vision, biology, and more that some researchers have taken to calling them “foundation models” to better articulate their role in modern AI.

Where did these foundation models come from, and how have they broken out beyond language to drive so much of what we see in AI today?

There’s a holy trinity in machine learning: models, data, and compute. Models are algorithms that take inputs and produce outputs. Data refers to the examples the algorithms are trained on. To learn something, there must be enough data with enough richness that the algorithms can produce useful output. Models must be flexible enough to capture the complexity in the data. And finally, there has to be enough computing power to run the algorithms.

The first modern AI revolution took place with deep learning in 2012, when solving computer vision problems with convolutional neural networks (CNNs) took off. CNNs are similar in structure to the brain’s visual cortex. They’ve been around since the 1990s but weren’t yet practical due to their intense computing power requirements.

In 2006, though, Nvidia released CUDA, a programming language that allowed for the use of GPUs as general-purpose supercomputers. In 2009, Stanford AI researchers introduced ImageNet, a collection of labeled images used to train computer vision algorithms. In 2012, AlexNet combined CNNs trained on GPUs with ImageNet data to create the best visual classifier the world had ever seen. Deep learning and AI exploded from there.

CNNs, the ImageNet data set, and GPUs were a magic combination that unlocked tremendous progress in computer vision. 2012 set off a boom of excitement around deep learning and spawned whole industries, like those involved in autonomous driving. But we quickly learned there were limits to that generation of deep learning. CNNs were great for vision, but other areas didn’t have their model breakthrough. One huge gap was in natural language processing (NLP)—i.e., getting computers to understand and work with normal human language rather than code.

The problem of understanding and working with language is fundamentally different from that of working with images. Processing language requires working with sequences of words, where order matters. A cat is a cat no matter where it is in an image, but there’s a big difference between “this reader is learning about AI” and “AI is learning about this reader.”

Until recently, researchers relied on models like recurrent neural networks (RNNs) and long short-term memory (LSTM) to process and analyze data in time. These models were effective at recognizing short sequences, like spoken words from short phrases, but they struggled to handle longer sentences and paragraphs. The memory of these models was just not sophisticated enough to capture the complexity and richness of ideas and concepts that arise when sentences are combined into paragraphs and essays. They were great for simple Siri- and Alexa-style voice assistants but not for much else.

Getting the right training data was another challenge. ImageNet was a collection of one hundred thousand labeled images that required significant human effort to generate, mostly by grad students and Amazon Mechanical Turk workers. And ImageNet was actually inspired by and modeled on an older project called WordNet, which tried to create a labeled data set for English vocabulary. While there is no shortage of text on the Internet, creating a meaningful data set to teach a computer to work with human language beyond individual words is incredibly time-consuming. And the labels you create for one application on the same data might not apply to another task.

You want to be able to do two things. First, you want to train on unlabeled data, meaning text that didn’t require a human to mark down details about what it is. You also want to work with truly massive amounts of text and data, taking advantage of the breakthroughs in GPUs and parallel computing in the same way that convolutional network models did. At that point, you can go beyond the sentence-level processing that the RNN and LSTM models were limited to.

In other words, the big breakthrough in computer vision was data and compute catching up to a model that had already existed. AI in natural language was waiting for a new model that could take advantage of the compute and data that already existed.

The big breakthrough was a model from Google called “the transformer.” The researchers at Google were working on a very specific natural language problem: translation. Translation is tricky; word order obviously matters, but it changes in different languages. For example, in Japanese, verbs come after the objects they act on. In English, senpai notices you; in Japanese, senpai you notices. And, of course, French is why the International Association Football Federation is FIFA and not IAFF.

An AI model that can learn and work with this kind of problem needs to handle order in a very flexible way. The old models—LSTMs and RNNs—had word order implicitly built into the models. Processing an input sequence of words meant feeding them into the model in order. A model knew what word went first because that’s the word it saw first. Transformers instead handled sequence order numerically, with every word assigned a number. This is called “positional encoding.” So to the model, the sentence “I love AI; I wish AI loved me” looks something like (I 1) (love 2) (AI 3) (; 4) (I 5) (wish 6) (AI 7) (loved 8) (me 9).
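Here is a tiny sketch of that idea, assuming nothing beyond the description above: each token is simply paired with its position index. (Real transformers turn these indices into vectors, such as the sinusoidal encodings of the original transformer paper, but the principle is the same: order becomes a number the model can see.)

```python
def positionally_encode(sentence):
    """Pair each token with its 1-based position, as the article describes."""
    tokens = sentence.replace(";", " ;").split()
    return [(token, position) for position, token in enumerate(tokens, start=1)]

print(positionally_encode("I love AI; I wish AI loved me"))
# [('I', 1), ('love', 2), ('AI', 3), (';', 4), ('I', 5), ('wish', 6),
#  ('AI', 7), ('loved', 8), ('me', 9)]
```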

Using positional encoding was the first breakthrough. The second was something called “multi-headed attention.” When it comes to spitting out a sequence of output words after being fed a sequence of input words, the model isn’t limited to just following the strict order of input. Instead, it’s designed so that it can look ahead or back at the input sequence (attention) and at different parts of the input sequence (multi-headed) and figure out what’s most relevant to the output.

The transformer model effectively took the problem of translation from a vector representation of words—taking in words in sequence and spitting out words one after another—and made it more like a matrix representation, where the model can look at the entire sequence of the input and determine what’s relevant to which part of the output.
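As a rough illustration of the attention mechanism described above, here is a minimal NumPy sketch of single-head scaled dot-product attention (our own example; a real transformer stacks many such heads with learned projection matrices): every output position computes a weighted mix of every input position, which is exactly the “look anywhere in the sequence” behaviour the article describes.

```python
import numpy as np

def attention(queries, keys, values):
    """Scaled dot-product attention: each query attends to every key/value pair."""
    d_k = keys.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)        # relevance of each input position
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over input positions
    return weights @ values                         # weighted mix of the inputs

# Toy example: 4 input positions, 3-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
print(attention(x, x, x).shape)  # (4, 3): one output vector per position
```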

Transformers were a breakthrough for translation, but they were also exactly the right model for solving many language problems.

They were perfect for working with GPUs because they could process big chunks of words in parallel instead of one at a time. Moreover, the transformer is a model that takes in one ordered sequence of symbols—in this case, words (technically fragments of words, called “tokens”)—and then spits out another ordered sequence: words in another language.

And translation doesn’t require complicated labeling of the data. You simply give the computer input text in one language and output text in another. You can even train the model to fill in the blanks to guess what comes next if it’s fed a particular sequence of text. This lets the model learn all kinds of patterns without requiring explicit labeling.

Of course, you don’t have to have English as the input and Japanese as the output. You can also translate between English and English! Think about many of the common language AI tasks, like summarizing a long essay into a few short paragraphs, reading a customer’s review of a product and deciding if it was positive or negative, or even something as complex as taking a story prompt and turning it into a compelling essay. These problems can all be structured as translating one chunk of English to another.

The big breakthrough in language models, in other words, was discovering an amazing model for translation and then figuring out how to turn general language tasks into translation problems.

So now we have an AI model that lets us do two critical things. First, we can train by fill-in-the-blanks, which means we don’t have to label all the training data. We can also take entire passages of text—whole books, even—and run them in the model.

We don’t have to tell the computer which lines of text are about Harry Potter and which are about Hermione. We don’t have to explain that Harry is a boy and Hermione is a girl and define boy and girl. We just need to randomly blank out strings like “Harry” and “Hermione” and “he” and “she,” train the computer to fill in the blanks, and in the process of correcting it, the AI will learn not just what text references which character but how to match nouns and subjects in general. And because we can run the data in GPUs, we can start scaling up the models to much larger sizes than before and work with bigger passages of text.
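A minimal sketch of that fill-in-the-blank setup (again our own illustration, not code from the article): randomly blank out some tokens and keep the originals as prediction targets. Production training pipelines do this at enormous scale, but the core idea fits in a few lines.

```python
import random

def mask_tokens(text, mask_rate=0.15, mask_token="[MASK]"):
    """Randomly blank out tokens; the originals become the training targets."""
    tokens = text.split()
    masked, targets = [], {}
    for i, token in enumerate(tokens):
        if random.random() < mask_rate:
            masked.append(mask_token)
            targets[i] = token  # the model is trained to fill this back in
        else:
            masked.append(token)
    return " ".join(masked), targets

random.seed(3)
sentence = "Harry looked at Hermione and she smiled back at him"
masked, targets = mask_tokens(sentence, mask_rate=0.3)
print(masked)   # the sentence with some tokens replaced by [MASK]
print(targets)  # position -> original token, i.e. the answers to learn
```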

We finally have the model breakthrough that lets us take advantage of the vast amount of unstructured text data on the Internet and all the GPUs we have. OpenAI pushed this approach with GPT2 and then GPT3. GPT stands for “generative pre-trained transformer.” The “generative” part is obvious—the models are designed to spit out new words in response to inputs of words. And “pre-trained” means they’re trained using this fill-in-the-blank method on massive amounts of text….

…Computer vision before deep learning was a slog. Think for a moment about how you, as a person, might recognize a face. The whole is made up of the parts; your mind looks for shapes that look like eyes and a mouth and determines how combinations of those shapes fit together in the pattern of a face.

Computer vision research used to be a manual effort of trying to replicate this process. Researchers would toil away looking for the right building blocks and patterns (called “features”) and then try to figure out how to combine them into patterns. My favorite example of this is the Viola-Jones face detector, which worked by recognizing that faces tend to fall into a pattern of a bright forehead and nose in a T-shape, with two dark areas under them.

Deep learning started to change all of this. Instead of researchers manually creating and working with image features by hand, the AI models would learn the features themselves—and also how those features combine into objects like faces and cars and animals. To draw an analogy to language, it’s as if the models were learning a “language” of vision; the “vocabulary” of lines, shapes, and patterns formed the basic building blocks, and they were combined higher up in the network according to rules that served as a “grammar.” And with vast amounts of data, the deep learning models were better than any human researcher.

This was immensely powerful because it gave computers a scalable way to learn rules over images. But it wasn’t yet enough. These models were going in one direction—they could learn to map pixels to categories of objects to drop them into buckets and say, “these pixels show a cat; these pixels show a dog”—but they couldn’t go in the other direction. They were like a tourist who memorizes some stock phrases and vocabulary but doesn’t really understand how to translate between the two languages.

You can probably see where we’re going.

7. The Retreat of the Amateur Investors – Gunjan Banerji

Amateur trader Omar Ghias says he amassed roughly $1.5 million as stocks surged during the early part of the pandemic, gripped by a speculative fervor that cascaded across all markets.

As his gains swelled, so did his spending on everything from sports betting and bars to luxury cars. He says he also borrowed heavily to amplify his positions.

When the party ended, his fortune evaporated thanks to some wrong-way bets and his excessive spending. To support himself, he says he now works at a deli in Las Vegas that pays him roughly $14 an hour plus tips, and he sells area timeshares. He says he no longer has any money invested in the market.

“I’m starting from zero,” said Mr. Ghias, who is 25…

…Some investors have exited the market. They include Mr. Ghias, the 25-year-old amateur trader who watched the value of his stock portfolio swing wildly during the early stages of the pandemic.

Mr. Ghias says his first exposure to investing happened as a teenager growing up in the suburbs of Chicago, where his guitar teacher would monitor stocks by phone. He and that guitar teacher say they would discuss everything from penny stocks to pot stocks to shares of larger companies. When he got to high school, he started trading with some of his own money in between jobs. He says he sometimes cut class in high school and college to trade.

Once the pandemic began, he gravitated to stocks and funds tracking the performance of metals as well as options, which allow investors to buy or sell shares at a certain price. He used these to generate income or profit from stock volatility. He also borrowed from his brokerage firms to amplify his positions, a tactic known as leverage.

In 2021, he started increasing that leverage, his brokerage statements show. He often turned to trades tied to the Invesco QQQ Trust, a popular fund tracking the tech-heavy Nasdaq-100 index, while continuing to bet heavily on metals. At times, he dabbled in options tied to hot stocks such as Tesla Inc. and Apple.

At one point, his leverage amounted to more than $1 million, brokerage statements reviewed by The Wall Street Journal show. By around June 2021, according to those brokerage statements, his portfolio was worth roughly $1.5 million.

“I really started treating the market like a casino,” Mr. Ghias said…

…In late 2021, he placed one of his biggest bets. The Fed’s Mr. Powell had warned he was about to pull back the central bank’s easy-money policies, opening the door to tapering its monthly asset purchases. The plans threatened to inject a jolt of turbulence into a market that had been ascending to fresh records for much of the year.

Mr. Ghias says he thought the Fed was bluffing and made a speculative investment that he thought would benefit from an accommodative central bank, expecting prices of silver and gold to rally and help a portfolio that included a large position in Hecla Mining Co., statements show. He says he also added a bearish position tied to the Nasdaq.

The trade didn’t work, he says, and a broker demanded he post more money to fund his losses. By the end of the year, according to his statements, he had lost more than $300,000 in one account even as the S&P notched a gain of 27%.

“That was my breaking point,” Mr. Ghias said.

In 2022, he says he started taking even more risks trading options and betting on sports in hopes of making some of the money back. One big strategy was to gamble on the direction of the S&P 500 by buying and selling options contracts tied to that index that often expired the same day, brokerage statements show.

Mr. Ghias traded S&P 500 options at all hours, sometimes around midnight, placing some trades worth hundreds of thousands of dollars, brokerage statements show. For example, if he had a hunch that the S&P 500 would keep tumbling the next day, extending losses from its overnight session, he might sell options contracts that would profit from a steeper plunge. At times, he was left with losses from such trades, his statements show.

“That just put me in a really bad mental state,” Mr. Ghias said. “I began chasing losses.”


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Apple, Meta Platforms (parent of Facebook), Microsoft, and Tesla. Holdings are subject to change at any time.