What We’re Reading (Week Ending 10 March 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been consistently sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 10 March 2024:

1. Flawed Valuations Threaten $1.7 Trillion Private Credit Boom – Silas Brown, Laura Benitez, John Sage, Kat Hidalgo, and Ellen Schneider

The meteoric rise of private credit funds has been powered by a simple pitch to the insurers and pensions who manage people’s money over decades: Invest in our loans and avoid the price gyrations of rival types of corporate finance. The loans will trade so rarely — in many cases, never — that their value will stay steady, letting backers enjoy bountiful and stress-free returns. This irresistible proposal has transformed a Wall Street backwater into a $1.7 trillion market.

Now, though, cracks in that edifice are starting to appear.

Central bankers’ rapid-fire rate hikes over the past two years have strained the finances of corporate borrowers, making it hard for many of them to keep up with interest payments. Suddenly, a prime virtue of private credit — letting these funds decide themselves what their loans are worth rather than exposing them to public markets — is looking like one of its greatest potential flaws.

Data compiled by Bloomberg and fixed-income specialist Solve, as well as conversations with dozens of market participants, highlight how some private-fund managers have barely budged on where they “mark” certain loans even as rivals who own the same debt have slashed its value.

In one loan to Magenta Buyer, the issuing vehicle of a cybersecurity company, the highest mark from a private lender at the end of September was 79 cents, showing how much it would expect to recoup for each dollar lent. The lowest mark was 46 cents, deep in distressed territory. HDT, an aerospace supplier, was valued on the same date between 85 cents and 49 cents…

…“As interest rates have risen, so has the riskiness of borrowers,” Lee Foulger, the Bank of England’s director of financial stability, strategy and risk, said in a recent speech. “Lagged or opaque valuations could increase the chance of an abrupt reassessment of risks or of sharp and correlated falls in value, particularly if further shocks materialize.”…

…Some market participants wonder, however, whether the fog around pricing suits investors just fine. Several fund managers, who requested anonymity when speaking for fear of endangering client relationships, say rather than wanting more disclosure, many backers share the desire to keep marks steady — prompting concerns about a code of silence between lenders and the insurers, sovereign wealth funds and pensions who’ve piled into the asset class.

One executive at a top European insurer says investors could face a nasty reckoning at the end of a loan’s term, when they can’t avoid booking any value shortfall. A fund manager who worked at one of the world’s biggest pension schemes, and who also wanted to remain anonymous, says valuations of private loan investments were tied to his team’s bonuses, and outside evaluators were given inconsistent access to information.

The thinly traded nature of this market may make it nigh-on impossible for most outsiders to get a clear picture of what these assets are worth, but red flags are easier to spot. Take the recent spike in so-called “payment in kind” (or PIK) deals, where a company chooses to defer interest payments to its direct lender and promises to make up for it in its final loan settlement.

This option of kicking the can down the road is often used by lower-rated borrowers and while it doesn’t necessarily signal distress, it does cause anxiety about what it might be obscuring…
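To make the PIK mechanics concrete, here is a minimal sketch in Python (our own illustration with hypothetical loan terms; none of these numbers come from the Bloomberg article):

```python
# Payment-in-kind (PIK): instead of paying cash interest, the borrower lets
# the coupon accrue onto principal, so the lender's claim compounds and the
# whole amount only comes due at final settlement.
def pik_balance(principal: float, rate: float, periods: int) -> float:
    """Loan balance after `periods` periods with all interest paid in kind."""
    for _ in range(periods):
        principal *= 1 + rate  # deferred coupon is capitalized, not paid in cash
    return principal

# Hypothetical $100m loan at 12%, PIK'd for 3 years: the lender is owed
# roughly $140m at settlement, all of it riding on one future payment.
print(round(pik_balance(100e6, 0.12, 3) / 1e6, 1))  # -> 140.5
```

That compounding claim is why consistently high marks on PIK loans invite scrutiny: the valuation leans entirely on a larger payment arriving later, from a borrower that already could not pay cash interest.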

…According to Solve, about three-quarters of PIK loans were valued at more than 95 cents on the dollar at the end of September. “This raises questions about how portfolio companies struggling with interest servicing are valued so high,” says Eugene Grinberg, the fintech’s cofounder.

An equally perplexing sign is the number of private funds that own publicly traded loans yet still value them much more highly than where the same loan is quoted in the public market.

In a recent example, Carlyle Group Inc.’s direct-lending arm helped provide a “second lien” junior loan to a US lawn-treatment specialist, TruGreen, marking the debt at 95 cents on the dollar in its filing at the end of September. The debt, which is publicly traded, was priced at about 70 cents by a mutual fund at the time…

…Thrasio is an e-commerce business whose loan valuations have been almost as varied as the panoply of product brands that it sells on Amazon, which runs from insect traps and pillows to cocktail shakers and radio-controlled monster trucks.

As the company has struggled lately, its lenders have been divided on its prospects. Bain Capital and Oaktree Capital Management priced its loans at 65 cents and 79 cents respectively at the close of September. Two BlackRock Inc. funds didn’t even agree with each other: one valued its loan at 71 cents, the other at 75 cents. Monroe Capital was chief optimist, marking the debt at 84 cents. Goldman Sachs Group Inc.’s asset management arm had it at 59 cents.

The Wall Street bank seems to have made the shrewder call. Thrasio filed for Chapter 11 on Wednesday as part of a debt restructuring deal and one of its public loans is quoted well below 50 cents, according to market participants. Oaktree lowered its mark to 60 cents in December…

…Distressed companies do throw up some especially surprising values. Progrexion, a credit-services provider, filed for bankruptcy in June after losing a long-running lawsuit against the US Consumer Financial Protection Bureau. Its bankruptcy court filing estimated that creditors at the front of the queue would get back 89% of their money. Later that month its New York-based lender Prospect Capital Corp. marked the senior debt at 100 cents…

…For private credit’s many champions, the criticism’s overblown. Fund managers argue that they don’t need to be as brutal on marking down prices because direct loans usually involve only one or a handful of lenders, giving them much more control during tough times. In their eyes, the beauty of this asset class is that they don’t have to jump every time there’s a bump in the road…

…Direct lenders also use far less borrowed money than bank rivals, giving regulators some comfort that any market blowup could be contained. They typically lock in cash they get from investors for much longer periods than banks, and they don’t tap customer deposits to pay for their risky lending. They tend to have better creditor protections, too. 

2. An Interview with Nat Friedman and Daniel Gross Reasoning About AI – Ben Thompson, Nat Friedman, and Daniel Gross

The other release, I think around the same day, was Groq released a demo of using their processor online. This is about the processor, it’s not about the model. They’re using Mistral and Llama as the available models, but the speed is truly remarkable. It strikes me as a big deal, not because of what it says about Groq — that’s a different question, and actually I’m curious about you guys’ points of view on some questions there — but because I’ve been saying for a long time that there is a user experience issue when it comes to AI, and a lot of the use cases we’re talking about, where, because it is human-like, the vastness of the uncanny valley is very large and basically any friction in that experience matters way more than it matters with a phone. With a phone, when you’re pulling it out of your pocket or you’re sitting in front of your device, you’re never not aware that you’re using a phone or that you’re using a computer. It’s never like, “Wow, I thought I was talking to a human, I was actually talking on my phone.” No, that’s never going to happen, and so you actually have way more latitude for user experience friction. However, when it comes to AI, the fact that it can sound like a human means speed matters, and it matters hugely. The reason why I thought that demo was a big deal was, again, the business prospects of Groq aside, it was tangible that, yes, this is the right thesis. Speed actually makes an astronomical difference, and it felt like validation of a view that I had on that.

DG: Yeah, I think we have pretty fast response times from our minds, I think the brain runs at a pretty high hertz, and depending on the mood that you’re in, there’s alpha, beta, gamma, but at the end of the day we perceive reality very quickly and we hadn’t quite had an experience where something was that instant and that fast and that fluid. But I think that’s only the beginning, to be honest, and someone’s going to have to do the hard work of actually taking that concept, be it on Groq’s hardware or somewhere else, and turning it into something that’s very polished and refined, a product that can handle interruptions, that sort of thing.

But once someone does that, if I had to guess, if we try to project forward in the next podcast or the one after that, what is the big new thing? It’s just this idea that we’re going to move into a more agentic world of models, where what we have now is very Precambrian. You go to chat.openai.com and you put in a bunch of words and some words come out, and at the end of the day the model is rhyming more than it’s thinking, and it’s a little slow. I think the next era is to have actual agents do tasks for you on the Internet and converse with you at human speed, and I think the economy and market prices don’t factor this in at all.

Well, this is the reason to be optimistic about Groq. If you actually pencil out the cost of their systems, part of the reason why it’s so fast is every individual chip has a very small amount of SRAM, which keeps the data in place and is super expensive, but it’s deterministic (they know exactly where the data is), but that means they need big systems to have enough memory. That means they would need a large market to develop. So they’re pushing this cost per token idea, but you have to have just an astronomical amount of tokens moving through the system for that pricing to make sense. My sense though is speed actually matters so much that this is a use case unlocker.

NF: It’s a user interface unlocker too. With slow model outputs, you were forced to have this streaming tokenization, the stream of tokens basically coming at you. Speed has always been a feature, and I think in many ways this is just a reminder of a perennial rule of user interface design, which is that speed matters, latency matters. It’s a funny thing, because users usually don’t ask for it, but they just sense that they prefer the thing that’s snappy and they choose it over the thing that’s sluggish.

And I think that difference is, like I said, that much bigger for these sorts of models.

NF: But in this case I think it unlocks new types of UI, whereas previously you had to sit there and watch the model just stream tokens at you.

This is where you can actually talk to it and it feels normal. It doesn’t feel weird.

NF: Yeah. Well, it also actually, I think, feels more superhuman in a way, because you can get a whole essay in seconds and you can get a book in minutes and there’s a way in which the superhuman feeling is stronger, but also I think you could have the model, for example, if you’re willing to spend the money, it’s more reasonable to have the model explore several paths and maybe it’s going to try ten things and pick the one that works best because it can do it very quickly…
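Thompson’s cost-per-token point above lends itself to a quick back-of-envelope model. The sketch below is our own, with entirely hypothetical numbers (the system cost, lifetime, and throughput are invented for illustration); the only point is that amortized hardware cost per token falls in proportion to utilization, which is why a fixed-cost, SRAM-heavy system needs an astronomical token volume for its pricing to work:

```python
# Amortized hardware cost per million tokens for a fixed-cost system.
def cost_per_million_tokens(system_cost_usd: float, lifetime_years: float,
                            tokens_per_second: float, utilization: float) -> float:
    seconds = lifetime_years * 365 * 24 * 3600
    tokens_served = tokens_per_second * utilization * seconds
    return system_cost_usd / tokens_served * 1e6

# Hypothetical system: $10m of hardware, 4-year life, 500k tokens/s peak.
for util in (0.05, 0.25, 0.75):
    print(f"{util:.0%} utilization: "
          f"${cost_per_million_tokens(10e6, 4, 500_000, util):.3f}/M tokens")
# Cost per million tokens falls ~15x as utilization rises from 5% to 75%,
# so the economics only pencil out once enough demand fills the machine.
```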

Groq is really interesting because they’ve been around for a long time. Jonathan Ross, the founder, invented the TPU at Google and then set out to do it better in a certain respect. I think they almost died, and then LLMs came along and suddenly they have this architecture that seems to work well. Again, you have this workload that, under the surface, is quite deterministic, which maps well to their approach.

You mentioned the scaling bit, Daniel. I think one of the questions that goes with this about chip design in general is at what point does it make sense to specialize even more than the GPU? The GPU is much more specialized than a CPU, but it’s still general purpose, and that comes with real costs when it comes to things like latency and things like that. Do these go hand in hand? If it actually is the case that scale is the answer to almost every problem, does that mean the opportunity for a more specialized architecture has arrived maybe sooner than we expected?

DG: I think so. And we are sitting here, I think, before the era of AI ASICs [application-specific integrated circuits]. Maybe Groq is a little early to it because it’s been around for a little longer, but if I had to guess, this is a big part of the future.

I think one of the main things that’s changed, I remember calling Jonathan the day after Llama came out, and I told him the industry is going to finally standardize around something where you can show people how great you are, because previously his issue was, he was parading around a bunch of these benchmarks and people had a tough time translating that into something that was so economically valuable they’d reconfigure their entire architecture for a specialized chip. It wasn’t just Jonathan, it was that whole era of your 2016, ’17 AI companies. What happened was really that Meta created a standard by open sourcing Llama and everyone started thinking in terms of token output per second, basically. That became a standard you could perform against and, much more importantly, measure your balance sheet by.

AI companies go through two cycles. When they train their models, they’re fairly margin-insensitive, I think; they just want the best GPUs, they don’t want to take any risk. You’re spending $300 million, you just want your model to “tape out” properly, and then if you find product market fit, you switch to this inference era. Now in the inference era, you’re ultimately staring at your COGS, and you’re staring at your COGS every month and you’re thinking, “Gosh, we’re paying so much per hour, per GPU, whatnot. It makes total sense for us to allocate five engineers and re-architect towards this completely different alien platform.” It’s an ASIC effectively; people would be upset if I called their chips ASICs, but you get the idea.

Well, it’s more of that category than a GPU, yes.

DG: It’s a dedicated chip and it makes total sense to do that because you’re just staring at your COGS. It’s sort of like how much would you be willing to architect your infrastructure as a fintech company if you could lower your interchange rate? Well, the answer is usually a lot and the Nvidia margin is a kind of interchange rate for tokens, and you’re very much willing to do the work and the schlep for custom architecture if it works in a way that people just weren’t willing to do in 2017 because very few companies had revenue coming in.

The inference was smaller than the training market.

DG: The only people who had this, by the way, were the advertising companies, Meta and Google, and they had their own chips.

So I think ultimately what’s happened is that you’re now able to monetize these models in a way where you can do the mental math to yourself about why it makes sense to rewrite them for a custom architecture. And if I had to guess, Nvidia’s dominance in training, as far as I can tell, remains as strong as ever. Over time, I don’t necessarily know that they’ll lose share, but the pie will grow, and the inference pie is going to grow to some of these ASICs. To some extent it already has, of course, with the TPU, and Meta has its own internal custom inference chips, and that’s going to grow, I think, over time because it just makes economic sense to do so…

…There seems to be a groundswell of robotic foundation models that are coming, where we haven’t yet had this GPT-3 moment of robotics where you have a couple of hands on a desk and it can tie a shoe or it can decorate a cake or put a Lego together and do all those things relatively well or in a way that feels like the beginnings of robotic intelligence, but it seems like that’s coming in the next 12 or 18 months. We will see those demonstrations.

What’s enabling it is this belief in scaling and a few breakthroughs on the model architecture side, and what’s holding it back is data. You don’t have the common crawl of robotic data; you can’t scrape the Internet for robotic instruction data. So all the effort is going into collecting those data sets, and the early demonstrations are really impressive, and they do involve local learned models for things like motion and kinematics and balance and stuff like that in some cases.

Is data going to be a real differentiator in that there’s going to be fights for exclusive data sets, or will it become a commodity where everyone realizes the way you actually differentiate is with the product and it’s actually to everyone’s benefit to have access to the best data sets and there’ll be more collective action?

NF: I think this is a really good question. If it had happened a few years ago, I think it would’ve been much more likely that there’d be common data sets. There are a few open robotic data sets, but they’re pretty small and pretty low quality, and now that we’re already in the AI gold rush, it seems likely that the really expensive project of collecting a bunch of data, whether that’s through teleoperations or something else, will happen inside funded companies, either big companies or smaller ones.

Does this apply to data generally, just because maybe theoretically it’d be best for everyone to adopt a collective, high-minded approach where the product is how we’re actually going to differentiate, but right now the stakes are so high that everyone’s like, “Nope, my data, I’m not going to share”?

NF: The walls are going up, definitely the shutters are down on data; it used to be easier to scrape websites than it is today. Scraping has gotten harder, generally; you see that across the board. So I think companies that at one point didn’t view the content of all their UGC as an asset now suddenly do. They say, “Wait, we’ve got this big data set that can be trained on.”…

…NF: The bet on long context is very important, and we think that being able to not just retrieve out of but reason over huge amounts of information is super important. I mean, it’s partly a human ability. We have episodic memory and we have procedural memory and the ability to retain skills or memories over time, and there’s been an open question: “How are models going to do this? How are they going to develop episodic or procedural memory?” And you can do both in the context.

In the context, you can put episodes in that the model will remember and you can put skills in, as Google actually demonstrated by teaching it new languages inside a single prompt and then asking it to use those skills. So this has been a big missing skill, this may not be the final way it shows up in AI systems, but it’s a new way that we can do this that I think is incredibly meaningful.

You can also do superhuman things as well. Reason over huge code bases, show it hours of security footage and ask it to draw correlations across that. I do think it’s amazing and a real breakthrough, and it’s clear that Google has figured something out here, and they have a bit of a secret and we’ve all been looking for clues and poring over the literature to figure out what it is. But this is a real axis of differentiation.

Well, that’s the big question in my mind: how much of this is model and how much of this is infrastructure? Because there was a presentation they did at their enterprise event last year, and it’s weird, I can’t find it anywhere; I spent hours looking for it last week when I was writing about 1.5. But I very tangibly remember it. They were talking about this sort of sharding capability, where we know about sharding in the context of databases, and the problems it solves and the challenges it presents, but they were talking about sharding in the context of, I think, training. But it seems like they’re doing sharding in the context of inference, where they have this ability to distribute the workload, not just across chips, not just across clusters, but at least in theory, across data centers, which introduces huge challenges as far as being constrained by the speed of light.

Google’s networking capabilities have always been well known, but I’m not sure it’s been appreciated how that could be brought to bear on these issues. And Daniel, you talked about how far you can push a sparse model, doing a mixture-of-experts sort of approach and spreading it out. It’s the exact opposite of Groq. Groq is massively serial, super fast. What if we can spread it out all over the place, and because the use case is tolerant of latency, we can just take that all the way to the extreme? And it feels like only Google could do what Gemini 1.5 is right now, and it doesn’t feel like anyone else is even close.

DG: Do you think anyone else is close, Nat?

NF: Well, we know of one company that has this also.

DG: Yeah.

NF: Daniel and I made an investment last week in a company called Magic that has a very good, very efficient, extremely long, longer than Gemini, context that’s working. To be honest with you, we thought there was only one company that had this, now we know there were two…
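As an aside on the sparse, mixture-of-experts approach raised above: the reason such models tolerate being spread across hardware is that each token activates only a few experts. Here is a toy, self-contained sketch (our own illustration; it says nothing about how Gemini or Magic are actually implemented):

```python
# Toy mixture-of-experts forward pass for a single token: the router picks
# top-k experts, and only those experts do any work. The idle experts could
# live on other chips, clusters, or (latency permitting) other data centers.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

router = rng.normal(size=(d_model, n_experts))            # routing weights
experts = rng.normal(size=(n_experts, d_model, d_model))  # one dense layer per expert

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token to its top-k experts and gate-mix their outputs."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]                      # chosen experts
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()
    # Only the chosen experts run; the other n_experts - top_k sit idle.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)  # (16,): same output shape, ~top_k/n_experts the FLOPs
```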

The reason why Gemini as it shipped feels so distasteful is it feels like bad faith. It’s very blatantly on the tin: “We’re not actually doing our best job to give you an answer.” It’s just straightforward, and it feels like an aspect where we would forgive an AI screwing up; we’ve been forgiving OpenAI all along, and they had some early episodes where there were clearly slants put on, and they’ve worked through that. But it felt like in good faith, “We’re doing our best here.” Gemini doesn’t feel like it’s in good faith, and maybe it was an accident that it feels that way, but it crossed a line of perception that just seems very problematic.

How did this happen? How did we get a product like this from a company that is supposedly too scared to ship and they ended up finally shipping and then it’s just a disaster?

NF: Well, I think you’re right. I think one reason they should get a little less leeway than OpenAI did is that they saw what came before them, and they learned nothing from the precedents. DALL-E 2 had its own sort of crazy woke image creation problem that they had to adjust and tune and learn from, and that was all forgivable because they were pioneering, and ChatGPT has been through this as well, and so Google should have seen all that and learned from it and done better.

It’s such a great point. This is a big advantage of going first: you get more grace.

NF: You do, you get more grace, because no one’s ever solved these problems before. But Google definitely didn’t come first and still made mistakes that feel like 2021 mistakes, 2022 mistakes, and that’s much less forgivable.

How did it happen? I mean, I think culture’s a very big component. You wrote about that, and it’s clear that it was very difficult for anyone at Google to raise their hand and say, “Hey, I don’t think we should ship in this form, we should probably do something about this.”

Then, we’ve heard from people at Google that this is not likely to be something that was a deep problem in the model training itself, but a decision that was made in the productization by someone who came later. So, there’s probably a set of system prompts or templates or something like that that are imposing a set of rules and guidance on the models that the raw internal models don’t have.

I think this is the challenge. Google’s always had this funny word they use for shipping products, which is what they call externalization, I always thought that was a very culturally-indicative piece of jargon from Google, because it kind of captures in a way, the way Google thinks of itself. They develop breakthrough technologies internally and then they externalize the magic, and it’s not a product-first thinking, it’s not even a customer-first thinking, it’s a technology-first thinking. I think that’s where the mistake is here, in the externalization, in the process of putting it out there.

So in a way that makes it easy to fix: there’s probably a single file that could be edited that would improve things a lot. In another way, editing that file might mean going through layers of product people and policy people who will potentially have a lot to say about that. In the gulf between the brilliant minds creating the models and the users, there’s someone in the middle, and that’s where the challenge lies.

How exactly do you think this is happening, Daniel? Is it that there’s the level from the data, there’s the model, there’s the RLHF [Reinforcement Learning from Human Feedback] process, there’s the prompt, where are things going sideways here?

DG: Well, we were having a good conversation about this earlier. I mean, traditionally there’s, I think, a few things people misunderstand a little bit. Pre-training and fine-tuning a model are not distinct ideas; they’re sort of the same thing. Fine-tuning is just more pre-training at the end. As you train models, and this is something I think we believe but now see backed by a lot of science, the ordering of the information is extremely important. Because look, for figuring out basic things like how to properly punctuate a sentence, whatever, you could figure that out either way. But for higher sensitivity things, the aesthetic of the model, the political preferences of the model, the areas that are not totally binary, it turns out that the ordering of how you show the information matters a lot.

In my head, I always imagine it like you’re trying to draw a very tight bed sheet over a bed, and that’s your embedding space. You pull the bed sheet at the upper right-hand corner and the bottom left-hand corner pops off, and you do that and then the top right-hand corner pops off; that’s sort of what you’re doing. You’re trying to align this high dimensional space to a particular set of mathematical values, and at some point you’re never going to have a perfect answer or a loss of zero. So the ordering matters, and fine-tuning is traditionally just more pre-training done at the end.

I think that’s where the liberal leanings of the OpenAI ChatGPT model originally came from. I think it was a relatively innocuous byproduct: the model becomes very sensitive to those final data points you show it, and it’s very easy to accidentally bias those data points. For example, if you have just a few words in the internal software where you’re giving the human graders prompts in terms of what tokens they should be writing into the model, those words can bias them, and if the graders can see the results of other graders, you have these reflexive processes. It’s like a resonant frequency, and very quickly it compounds. Errors compound over time. I actually think you could end up, without really thinking it through, with a model that’s slightly left-leaning; a lot of the online text is slightly left-leaning…
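Gross’s claim that data ordering matters can be demonstrated on even the simplest online learner. The toy sketch below (our own; it is not how any production model is trained) runs one pass of SGD logistic regression over identical data in two different orders and ends with different weights, because the examples seen last pull hardest:

```python
# Same data, different order, different final weights: a one-pass online
# learner with a finite learning rate is dominated by whatever it saw last.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data: two overlapping Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def sgd_logistic(X, y, order, lr=0.5):
    """One pass of online logistic regression; `order` fixes the sample order."""
    w = np.zeros(X.shape[1])
    for i in order:
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))
        w -= lr * (p - y[i]) * X[i]  # per-sample gradient of the log loss
    return w

w_class0_last = sgd_logistic(X, y, np.argsort(y)[::-1])  # class 0 seen last
w_class1_last = sgd_logistic(X, y, np.argsort(y))        # class 1 seen last
print(w_class0_last, w_class1_last)  # noticeably different decision boundaries
```

The same recency effect, Gross argues, is why the final fine-tuning data a model sees is so easy to bias by accident.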

…I think the piece of information that’s most interesting is the fact that Google lacked a very basic process. This is your point: maybe people thought, or maybe people didn’t even think, before they launched it. I’m thinking a lot about that famous Steve Jobs interview where he says, “The problem with Microsoft is they just have no taste.” I think the unexpected thing about AI, and we’ve talked about it on this podcast, but I don’t think it’s been generally expected, is that fine-tuning a model is just as aesthetic an art as making a beautiful landing page for your website.

So in hindsight, it shouldn’t be that surprising that the Borg that built the interfaces of GCP also produced very robotic models; that’s the same thing. And it also should not be surprising to us that Mistral, which is a French company with French culture and now French products, was able to produce a model that, to their credit (I mean, it’s not the smartest), is by far the most obedient and has by far the most neutral political tone, at least in my anecdotal testing.

Well, actually, I want to get to Mistral in a moment, but Nat, what does Google do now?

DG: Other than call you?

NF: (laughing) Yeah, I mean I think this is a leadership challenge. There’s a missing editor here and there’s a missing product editor and a missing person with good taste and judgment who gives a damn and has the authority to overrule anyone in the company and make sure the right thing goes out the door. I do think leadership changes have to happen; culture is the hardest type of change to make in a company. You could do strategy change, you could do product change, you could do operational change. Culture change is the one that’s just super difficult, and it can only happen with leadership. We either need to see dramatically different behavior from Google leadership or we need to see dramatically different leaders.

3. TIP611: The Bear Case For China w/ Kyle Bass – Clay Finck and Kyle Bass

[00:06:59] Clay Finck: One of the things that sort of struck me in preparing for this conversation is that much of the information that various institutions have used to gauge what’s happening in China has actually been cut off by the CCP and is no longer available.

[00:07:14] Clay Finck: So why have such moves been made by the CCP? We know they like to control data and information flow. And how are you able to get accurate information on what’s happening in China and really make sense of it?

[00:07:28] Kyle Bass: No one has accurate data on China except the Chinese Communist Party. They do. And they used to; they began to adhere to Western standards and they put together data aggregators that collected both micro and macro level data.

[00:07:40] Kyle Bass: And so they had a Bloomberg of China called Wind, and there were four or five others. And they were actually pretty good, but if you dug into the data, if you looked at the Chinese Customs Bureau data for imports and exports, and you looked at the customs data that was in the Wind database for the year before they recently cut it off, it was off by 200 billion dollars.

[00:08:02] Kyle Bass: Not 2 billion dollars, 200 billion dollars. Then you think: trade with the US is what, 650 billion? So to be off by 200 billion, that just means someone’s really cooking the books. We all knew that Chinese data had low fidelity, and now there just isn’t Chinese data anymore.

[00:08:22] Kyle Bass: As of March of 2023, they severed all of those links to U.S. research universities, to the Fed, to Wall Street writ large, and that data is no longer allowed out of the mainland. Only mainland data readers, call them, can see it, and they’re not allowed to share it unless the party approves it. So do you think you’re getting the truth? Probably not. And they were reporting youth unemployment until they actually reported that it was over 20%.

[00:08:47] Kyle Bass: And then they say, we’re not going to report that anymore. If you read some Chinese scholars while that was going on, one of the top scholars at one of the top universities in China said it looks like it’s 46 percent, and then they silenced him…

…[00:12:13] Kyle Bass: They’d rather pretend those things aren’t bad. And I’ll take you to an October 2023 Reuters release where the People’s Bank of China, which is the regulator, call it the Chinese Fed, that regulates their banking system, issued an edict. It concerned the local government financing bonds that exist in the marketplace in China; it’s a 13 trillion dollar equivalent market, a monster market in China.

[00:12:39] Kyle Bass: It’s all about how the local governments fund themselves by selling real estate. They sell real estate to pay their debts, and they issue debt to gather even more funding. And that 13 trillion dollar market is in default. 80 percent of those bonds are not paying. Those local governments can’t pay because there’s no real estate bid, because every public developer in China is in default.

[00:13:00] Kyle Bass: When you think about what the PBOC said in October of 23, they said to the banks, if you own the debt or you own those bonds, you can just say they’re current and it won’t affect your ratings in our annual reviews of the banks. We’re just going to pretend that the market’s paying. Just think about that for a second.

[00:13:17] Kyle Bass: Clay, a 13 trillion dollar market is in a complete state of default, and we’re just not going to talk about it…

…[00:14:44] Kyle Bass: We really haven’t sanctioned anything or anyone when you really look at this. I know we’re going to try to get serious, but going back to what they’re doing in their legal system, in January of 2020, China updated its foreign investment law, giving Beijing the power and the ability to nationalize foreign assets or investments.

[00:15:03] Kyle Bass: Under special circumstances, which include war; that’s their words, not mine. That began in January of 2020. That’s super interesting because that’s when COVID emanated from the city of Wuhan. So that’s when they began their legal movements in the system. In June of 2021, they issued a new counter foreign sanctions law.

[00:15:24] Kyle Bass: For foreign sovereigns that were sanctioning anyone in China, they were saying that if Chinese corporate interests, or international corporate interests that have business in China, adhere to foreign sanctions that are punitive on China, then China can just nationalize their interests, imprison the expats that live there, and basically turn their companies off.

[00:15:49] Kyle Bass: Basically they were countering foreign sanctions by saying, we’ll just shut off all of your business here in China and we’ll take everything that you’ve got. That happened in June 2021. In April of 2023, Chinese lawmakers passed a new update to their anti-espionage legislation. If you remember, that’s when they were raiding U.S. due diligence firms.

[00:16:06] Kyle Bass: They raided three or four firms, they arrested everyone, they took all of the computers, and the due diligence firms were just doing business due diligence on potential acquisitions and management teams; everything that companies like Bain or McKinsey or these others do when they get hired to do due diligence. That became illegal, and that had a chilling effect…

…[00:19:55] Clay Finck: In light of those laws that you mentioned that were passed around COVID and ever since COVID, I actually ran across this chart that showed data from the State Administration of Foreign Exchange. It showed that China’s inbound foreign direct investment has essentially collapsed.

[00:20:10] Clay Finck: This data shows it was north of 300 billion just prior to COVID, and then in 2023 it was around 33 billion. Does that data sound accurate to you?

[00:20:19] Kyle Bass: That’s right. And there’s a caveat to that data where they don’t asterisk it and don’t tell you this, but it’s actually wildly negative. And let me explain to you how.

[00:20:27] Kyle Bass: If you are a corporate interest in the U.S., or a multinational, and you have business in China (Tesla’s got business in China, there are plenty of multinationals that have business there, Chevron has business there), the profits you make in China get put in a Chinese bank, and China never lets them out.

[00:20:45] Kyle Bass: So I know many multinational companies that have hired friends of mine to try to get their money out. And China just, pardon the pun, gives them a bunch of red tape and won’t allow the money out. Every dollar that’s made by a multinational in China, if it stays in the bank through the end of the year, it’s counted as foreign direct investment into China.

[00:21:06] Kyle Bass: When you look at the FDI numbers, they’ll always be overstated until they nationalize everything, right? Multinational profits in China are automatically FDI. And I think that’s also a lens that we need to be thinking about looking at things through. It is a complete collapse of FDI, by the way, Clay…

…[00:29:20] Clay Finck: So in addition to what’s happening here, in relation to Taiwan, China definitely seems to be going through a financial crisis of their own, which you’ve touched on plenty here. And a lot of data has pointed towards an economic contraction, but they actually reported GDP growth of 5.3 percent in 2023.

[00:29:38] Clay Finck: And real estate is definitely a big part of China’s economy. So what are you seeing in their real estate market, and how does this play into the bigger picture?

[00:29:50] Kyle Bass: The data that’s actually being released, again, whether there’s proper fidelity in the data, nobody knows. Clearly it’s suspect, but Hong Kong’s real estate is down over 25%.

[00:30:01] Kyle Bass: Again, since China took over, that’s the largest decline ever. And that’s just a harbinger of more to come. And by the way, that’s the reported number. We know the real numbers are much worse, and we have a couple of anecdotes from people that we know that have traded in that market and been forced to trade in the real estate market there.

[00:30:22] Kyle Bass: And it’s much worse than people think it is. But when you think about China, you mentioned that Chinese real estate is vital to their GDP: it’s somewhere between 33 percent and 40 percent of their GDP. It’s 70 percent of their net worth. And it was the primary driver of the Chinese miracle of their GDP growth.

[00:30:41] Kyle Bass: And imagine if you allowed reckless speculation in your real estate markets. Your GDP grows, all the ancillary services grow. Everyone technically gets wealthier and wealthier. The banks lend into it. Their banking system is three and a half times the size of their GDP. The U.S. banking system going into the financial crisis was one times our GDP.

[00:31:02] Kyle Bass: And you know how badly we screwed this up back in 2008. And if you include non-banks like Fannie and Freddie and other financials, we’re about 1.7 times. They’re three and a half times levered to their GDP. 

4. Off the Run: Piero Sraffa and Imperial Japanese Government Bonds – Irwin Union

For the better part of 70 years, rumours have followed the Italian economist Piero Sraffa. Long the subject of speculation, it has been asserted that in the dying days of the Second World War, Sraffa heavily bought defaulted Imperial Japanese Government bonds, which, following the Treaty of San Francisco, were eventually honoured in full.

Though several authors have offered differing accounts of what Sraffa was purported to have done, till now, no person has been able to offer a satisfying and granular account of events…

Two credible accounts of Sraffa’s investments survive… 

…The second comes from the historian Norman Stone:

The economist Piero Sraffa, editor of the correspondence of David Ricardo and re-floater of Marx’s sunken theory of surplus value, took two economic decisions in his life. He bought Japanese bonds in 1945, and he swapped them in 1960 for gold, dying a very rich man.

…Luckily, recent events, including the opening of Sraffa’s archive at Trinity College, afford new insight into what Sraffa did, when he did it, and, indeed, how he did it…

…Following her entry into the Second World War, Japan began to default on most of her external obligations in, as best as can be figured, mid-1941.

At the outbreak of the war, a number of Imperial Japanese Government bonds were listed on the London Stock Exchange. These securities were issued in the United Kingdom, denominated in British Pounds and were obligations that Japan had entered into under British law.

Japan could refuse to acknowledge them, but could not inflate them away, nor strike them out by fiat. And so they remained outstanding, with an ongoing market made, all through the war and into the peace that followed; shielded from the worst problems of the immediate post war Japanese economy by dint of their denomination in sterling and their legal domicile.

Following her 1941 default, the bonds, already on the ropes prior to the war, collapsed completely…

…Among the items in Sraffa’s archive at Trinity College are two remarkable sets of papers.

The first is a series of trading receipts issued by the London branch of the Swiss Bank Corporation. These receipts run from 1946 to 1951, and cover Sraffa’s trading of Imperial Japanese Government Bonds, as well as some miscellaneous securities (City of Wilno, Poland at 3.25 of par and Estonian bonds at 6 of par, as well as some common stock.)

The second is a series of letters received by Sraffa from an unnamed Swiss organisation who custodied gold bullion for him.

It’s reasonable to conjecture that this was also the Swiss Bank Corporation, though it’s impossible to know, as the letters are so discreet as to carry no letterhead or distinguishing detail of any kind. These letters give us an inventory of Sraffa’s bullion holdings in Switzerland as of 1975, and broadly corroborate Stone’s assertion that Sraffa swapped out of bonds into gold bullion.

From the set of trading receipts, we can, with only a few minor adjustments, build a chronology of Sraffa’s trading, and, thus, a simulated portfolio of his holdings. This portfolio can then be priced using collected price data.

As of 1960, we can substitute the simulated portfolio of bonds for gold and then continue to price the portfolio all the way through to 1983.

Of course, there are wrinkles, discussed vide infra, and so it should be understood that the best that can be done is speculation about Sraffa’s actual record.

Nonetheless, we can get somewhere close to reality, and enough detail is provided for the reader to make her own back of the envelope adjustments and calculations as desired.

I first collected monthly price data for the period from 1946 to 1951 (the period in which Sraffa was actively trading) and six-monthly data from 1929 to 1960.

With this data in hand, we can begin to unravel the question of how and what Sraffa accomplished.

Sraffa’s receipts show that between 1946 and 1951, he traded quite frequently, realising capital gains and recycling his proceeds into other issues. However, in late 1951 Sraffa halted his trading altogether.

From here, for the purposes of simulating his record, we assume that the portfolio remained static until 1960. 

Sraffa’s final trades consolidate his holdings into the 1899 bond. This issue bore one of the earliest maturity dates…

…On the 9th of March, 1946, as Sraffa was likely contemplating his first purchases, the Financial Times ran a front page story titled Japan Bonds’ Bleak Outlook: Chancellor Reaffirms Gloomy View. The article reported on comments made by the Chancellor of the Exchequer in the House of Commons the previous day, wherein he had stated that:

[…] in the case of British bondholders at large, and in general, I will do my utmost to see that they get fair play. There is nothing new in that, but why humbug Japanese bondholders into believing that they have anything but the very dimmest and remotest chance of recovering anything of their investments?

Following the Chancellor’s remarks, the bonds sold off by approximately 20%…

…Reading the financial papers of the time, one finds a veritable feast of views on the Japanese loans expressed in articles, opinion pieces and letters to the editor. Indeed, the letters to the editor in particular functioned as a sort of clearinghouse for opinion and query. It’s not a stretch to compare these exchanges to those that happen on message boards and social media today.

Though the full record is too voluminous to feature in full, it is also so information dense that it forms a vital part of any study of the securities.

We learn some extraordinary facts from these articles and letters. For instance, as early as late 1946 through January 1947, it was being stated that interest on the defaulted bonds had been paid into sinking funds during the war.

One stock which tended to be overlooked when the market was active was the Tokyo Five Percent, 1912. Like Japanese Government Stocks, the interest has been set aside for bondholders in Tokyo throughout the period of the war and after, and Japanese nationals have been paid.

Any question of transfer to British bondholders awaits the signing of the Peace Treaty and the unfreezing of the yen-sterling exchange; the latter process can hardly be a quick one.

Japanese Bonds Speculation – Lex – Financial Times – 27/1/47

We also learn that the amount needed to make British bondholders whole was relatively de minimis. This is because Japanese citizens, for reasons not apparent, owned most of the sterling issues. Japanese citizens’ holdings were compulsorily converted into yen-denominated bonds in 1943, presumably due to strains on Japan’s foreign exchange balances, leaving only the rump holdings of foreign owners intact.

A correspondent has lately received a cable from the Far East which has bearing on my note of yesterday on Japanese bonds. The cable reads as follows:

“Japanese Sterling Bonds interest paid all Japanese holders in Japan at former rates of exchange until March, 1942. Foreign nationals in Japan paid interest into special custody account. After March, 1943, Japanese owned compulsorily converted into yen bonds. No payments made of interest against unconverted bonds, but still being made on converted.”

That puts the position in a nutshell. Whatever the peace treaty may have to say on the matter, it is a fact, as is pointed out by my correspondent, that the default in interest due to British and Allied holders of Japanese sterling bonds not resident in Japan would not need a large sum to wipe out, as the Japanese always held the larger part of the sterling bonds. Lex

Japanese Post Script – Lex – Financial Times – 28/1/47

We also learn of Japan’s wish to join the United Nations and apply for membership of the IMF.

[…] 6) The goodwill of the Japanese since the end of hostilities, and the expressed desire of the Japanese Government to join the United Nations as soon as permissible after the signing of the Peace Treaty. An intention to apply for membership to the International Monetary Fund once the Peace Treaty has been signed has also been indicated.

Letters to the Editor – Financial Times – 19/4/47

In the following letter, the author, a former resident of Japan, argues that the settlement of the debt would allow Japan to reestablish herself with foreign lenders at negligible cost.

Having spent several years in the service of the Japanese Government and having always kept in close touch with financial circles in that country, I have no hesitation in endorsing the view expressed by one of your readers a few weeks ago, namely, that the bonds in question are the best three-year lock-up on the market to-day, or as “Lex” remarked in your issue dated 2nd January: “If I were asked to name a good speculative long-shot for 1947, I think Japanese bonds would be as strong a starter as any.”

[…] Finally, the amount of Japan’s foreign indebtedness is infinitesimal, and the Government is fully alive to the fact that by meeting its commitments it is reestablishing its financial credits abroad at a very small cost.

Japan Bonds and Reparations

Letter to the Editor – Financial Times – 21/5/47

And then, on the 23rd of December, 1947, there is what can only be described as an extraordinary letter from William Teeling, a member of the House of Commons. This letter is worth inclusion in full.

Sir, -There has been much comment in your paper and elsewhere recently on the widening interest in all Japanese loans. Yesterday (Friday) afternoon I told a number of business men in the City interested in Japan what I know about these loans, and I feel that it is only fair that everyone should know, since contact with Japan and the Japanese is so difficult.

I have just returned as a member of a Parliamentary delegation which spent six weeks in the Far East, and while in Tokyo I made it my business to inquire about these loans which interest so many people here.

The Finance Minister in the present Japanese Coalition Cabinet told me that all interest accrued on the Japanese bonds would definitely be paid when peace with America has been signed. He could not say yet at what rate, but it would definitely not be at the rate when war broke out. He added that even during the war bondholders in Switzerland for certain loans were paid and he assured me that money has all the time been set aside in Tokyo for this purpose.

This was confirmed to me at a later meeting with heads of Japanese business firms and banks at which meeting the Foreign Secretary, Mr. Ashida, was also present. Mr. Ashida explained to me that new loans from America were essential and therefore Japan must keep up her reputation for meeting her debts and would pay off her earlier loans.

Reparations officials confirmed that the sums outstanding are small and could be repaid. The American officials concerned told me that a rate for the repayment of all debts will shortly be fixed and will definitely take into account the present depreciation of the yen.

But when will peace be signed? I only know that America was waiting for the recent Four Power Conference to break down before going ahead on a separate peace with Japan, and Great Britain will reluctantly support her as it is the only solution, but it will mean the strengthening of Japan and that means more loans.

William Teeling. House of Commons, S.W.1.

Letters to the Editor – Financial Times – 23/12/47…

…On the 23rd of August, 1949, we learn that Japan’s total external debt was then $323mm USD with approximately $80mm USD of unpaid interest thereon. We also learn that British claims totalled approximately £62mm GBP.

Kaneschichi Masuda, Japanese chief Cabinet Secretary, said here today that he was unable to reveal any practical plans whereby Japan’s foreign bond commitments could be met.

[…]

He said that $323m. worth of bonds were held by foreigners, on which $80m. in interest had accumulated. British subscribers held about £62m. of this amount.

Japan and Bond Repayment – Financial Times – 23/8/49

However, it was not so cut and dried. By 1951, the mood had soured, and the question of reparations, long simmering, had become acute. In April, Teeling again wrote to the Times, this time expressing concern about the lack of progress and the possible outcomes for British bondholders.

At question was whether reparations would rank ahead of foreign bondholders, and whether reparations might exhaust Japan’s capacity to make foreign bondholders whole, irrespective of her desire to do so.

Then, on the 13th of August, news of formal recognition by the Japanese Government of her prewar debts was published in the Financial Times.

Japan will not be restricted militarily, politically or economically under the draft peace treaty published yesterday by Britain and the United States.

Japan affirms its liability for the pre-war external debt of the Japanese State, and for debts of corporate bodies subsequently declared to be liabilities of the Japanese State, and expresses its intention to enter on negotiations at an early date with its creditors with respect to the resumption of payments on those debts.

It will facilitate negotiations in respect to private pre-war claims and obligations; and will facilitate the transfer of sums accordingly.

Japanese bonds were active on the London Stock Exchange yesterday. Prices rose sharply at the opening and were up to £5 higher at one time. Following publication of the terms of the draft treaty there was, however, considerable profit taking. As a result, closing prices well after hours were £4 below the best.

Japan Recognises Debt Liability; Prepared for Talks on Payments – Financial Times – 13/8/51

The formal end of hostilities between Japan and the Allied powers came in September, 1951, with the signing of the Treaty of San Francisco. With the treaty formalised, Japan was now able to turn to the issue of settling her defaulted foreign obligations.

In March, 1952, the Financial Times reported that the Japanese Government was placing £20mm GBP on deposit in London as a goodwill gesture.

The Treasury announces that the Japanese Foreign Exchange Control Board is arranging to deposit with the Bank of England £20m. as a token of good will towards the holders of Japanese sterling bonds.

The initiative for this move was taken by the Japanese Foreign Minister. When necessary formalities have been completed, the sum will be deposited and will remain with the Bank of England for two years.

During that period, it will be available for any payments by Japan to her creditors in connection with a settlement of her sterling bond indebtedness.

Japan to Deposit £20m. in London – Financial Times – 29/3/52

The front page of the 29 September issue of the Financial Times read Japan to Pay Full Interest Arrears, and detailed the terms agreed upon in New York.

After negotiations lasting nearly two and a half months, agreement has been reached in New York on the treatment of Japan’s bonded debt to Britain and the United States. It is a settlement that goes a very long way to meeting British claims. The service on outstanding issues is to be resumed forthwith. Interest arrears that have piled up since the Pearl Harbour affair brought Japan into the war are to be met in full, though at a time lag of ten years from the due dates. There is a similar arrangement for the treatment of repayment obligations. Moreover, the currency clauses included in a number of the debts under discussion at the conference are to be substantially honoured. The Japanese have, in short, committed themselves to do what they said they would do before the conference began.

Contractual Terms – Financial Times – 29/9/52

On the 24th of November, the Times published the full terms of the settlement.

Briefly, the terms provided for the extension of maturities by ten and fifteen years, a catch-up payment generally equal to a single coupon, and the amortisation of accumulated defaulted coupons by the payment of one current and one defaulted coupon for each payment period until all defaulted coupons had been settled. This, in effect, doubled the coupon of each bond for a discrete period…
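To see what those terms meant for a holder, here is a small sketch of the resumed cash flows (our own illustration with hypothetical bond terms, not figures from the settlement itself): one current coupon plus one defaulted coupon per period until the arrears are cleared:

```python
# Cash flows on a defaulted bond after service resumes under the settlement
# terms described above: the coupon is effectively doubled until all
# accumulated arrears have been amortised, then reverts to the normal rate.
def settlement_cashflows(face=100.0, coupon_rate=0.05,
                         arrears_coupons=10, periods=30):
    coupon = face * coupon_rate
    flows, owed = [], arrears_coupons
    for _ in range(periods):
        paid = coupon + (coupon if owed > 0 else 0.0)  # current + one arrear
        owed = max(owed - 1, 0)
        flows.append(paid)
    return flows

print(settlement_cashflows()[:12])
# -> 10.0 for the first ten periods (doubled coupon), then 5.0 thereafter
```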

…With firm details of the restructuring of the loans, we can now model the post-1951 evolution of Sraffa’s portfolio through to 1960. I assume that Sraffa allowed his coupons to accumulate in cash, rather than reinvesting them.

With this account curve in hand, we can now model his swap to gold bullion in 1960.

At the end of 1960, Sraffa’s simulated account had a value of £52,676.

At year end 1960, a kg of gold bullion cost £404.46. Thus, assuming no frictions, we find that Sraffa swapped his bonds and cash for ~ 130 kg of gold bullion.

With this, we now have a complete simulated account curve for the entire period.

According to these calculations, Sraffa compounded his initial simulated outlay of £8000 cash into £1,105,839, a multiple of 138 times, or 13.97% per annum over approximately 38 years.
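As a sanity check on the figures above (our own arithmetic, using only the numbers quoted in the essay):

```python
# Back-of-envelope check of the gold swap and the compounding arithmetic.
initial, final = 8_000.0, 1_105_839.0        # simulated outlay and end value, GBP
account_1960, gold_per_kg = 52_676.0, 404.46

kg_gold = account_1960 / gold_per_kg         # ~130.2 kg, matching the ~130 kg above
multiple = final / initial                   # ~138.2x, matching "138 times"
cagr = multiple ** (1 / 38) - 1              # ~13.9% p.a. over a flat 38 years;
                                             # the quoted 13.97% implies a holding
                                             # period slightly under 38 years
print(f"{kg_gold:.1f} kg, {multiple:.1f}x, {cagr:.2%} per annum")
```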

5. Thoughts on Ben Graham’s “Unpopular Large Caps”: A Still-Effective Strategy – John Huber

In the spirit of Graham’s categories, I recently gave a presentation to Saber investors during our latest client Zoom call with an overview of the three main categories of our investments: 1) Core operating businesses that we hope can compound value for a decade+, 2) Time Arbitrage (similar to Ben Graham’s Unpopular Large Caps), and 3) Bargains.

This “Category 2” provides a frequent enough flow of ideas thanks to a very simple fact: stocks fluctuate much more than true business values do…

…I’ve written about the concept of “investment edge” on numerous occasions (see: What is Your Edge?), and how in today’s world, information has become easier to get and thus more of a commodity. But this information access, along with other technologies, has caused our attention spans to become shorter and shorter, which I think has diminished our patience and our time horizons. We want results now. This has created a “time arbitrage” opportunity, and I expect this will only gain strength as time horizons and patience levels continue to shorten.

Past examples of Category 2 ideas would include Apple in 2016 when pessimism surrounding the next iPhone cycle and worries about Apple’s competition caused the stock to fall below 10 P/E, Verisign when worries about government intervention into its pricing practices caused the stock to fall to multiyear valuation lows, or large banks like BAC and JPM in 2015-2016 when the market was expecting and fearing a difficult economy (and larger loan losses). More recent examples of mispriced large caps might include large cap tech stocks in 2022: AMZN fell 50% in 2022 and rose 80% in 2023, and that was mild compared to what happened at numerous other mega cap stocks. The valuation levels fluctuate far more than business values.

To be clear, there is always a legitimate negative fundamental case to be made when stocks get mispriced, but I think the majority of the time these concerns tend to be focused on the short term. Amazon overinvested in warehouse capacity because it overestimated the growth in online retail sales, but was this going to negatively impact Amazon’s long-term moat? (I would argue that in one sense it actually further entrenched their moat, making it very difficult for other retailers with lesser capacity to offer the same experience of low cost and speed of delivery: another large online marketplace with ambitions to enter the logistics space ended up throwing in the towel during this period.) Sometimes these short-term difficulties end up being long-term beneficial for the “unpopular large caps”, and the great thing about this category of investment is you get to acquire a stake in these better-positioned large companies when their stocks are depressed.

JPM is a recent example of a Category 2 idea as well: the stock traded down under 8 P/E in the summer of 2022 when recession fears were prevalent (similar to what happened to bank stocks in 2016).

I think Jamie Dimon had some great advice on the right mindset last year when he said (paraphrasing): “in 20 years, the world’s stock market capitalization will be much higher, the assets in the banking system will be higher, corporate earning power will be higher, the dollar volume of merger transactions will be higher, global payment volume will be higher.” The implication is JPM has a durable moat and thus is positioned to take a cut of all of that business. Earnings might decline in the near term, but what matters to business values is the long-term free cash flows that it earns over time.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Apple, Meta Platforms, Microsoft, and Tesla. Holdings are subject to change at any time.