What We’re Reading (Week Ending 23 October 2022)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 23 October 2022:

1. A Rare Interview with Phil Fisher Following the 1987 Crash – Conor Macneil 

If that’s what’s coming, how does one protect oneself against this hyperinflation?

I made as deep a study as I could of what happened after World War I in France, where there was lots of inflation, and in Germany, where there was inflation into infinity. And in both countries the same thing happened. If you bought the very best stocks, according to my definition–not just any stocks–you were still darned uncomfortable during that period of the spiraling inflation. But when the inflation was over, you came out of it with about 80% of the real purchasing power intact. If I can come out of it with 80% of my present assets in real money, and my people can do that, that is fine. Until then I’m keeping a fair amount in Treasury bills. Timing these things is so damnably difficult. I don’t want to be the smart guy with too much cash because I think a big break is coming. Nor do I want, once it comes, to spend too long getting myself ready. When you’re not sure, you hedge. Very roughly, I have between 65% and 68% in the four stocks I really like, between 20% and 25% in cash and equivalents, and the balance in the five stocks that are in the grooming stage…

What do you look for in a core stock?

They are all low-cost producers; they are all either world leaders in their fields or can fully measure up to another of my yardsticks, the Japanese competition. They all now have promising new products, and they all have managements of above-average capabilities by a wide margin.

You place a lot of emphasis on management, don’t you?

Getting to know the management of a company is like getting married. You never really know the girl until you live with her. Until you’ve lived with a management, you don’t really know them to that same degree…

What’s the single most important lesson to be learned from your career as an investor?

It is just appalling the nerve strain people put themselves under trying to buy something today and sell it tomorrow. It’s a small-win proposition. If you are a truly long-range investor, of which I am practically a vanishing breed, the profits are so tremendously greater. One of my early clients made a remark that, while it is factually correct, is completely unrealistic when he said, “Nobody ever went broke taking a profit.” Well, it is true that you don’t go broke taking a profit, but that assumes you will make a profit on everything you do. It doesn’t allow for the mistakes you’re bound to make in the investment business. Funny thing is, I know plenty of guys who consider themselves to be long-term investors but who are still perfectly happy to trade in and out and back into their favourite stocks.

Some years ago I was the adviser to a profit-sharing trust for a large commodities dealer. I bought for them–I think the stock has been split 15 times since then–a block of Texas Instruments at $14 a share. When the stock got up to $28, the pressure got so strong (“Well, why don’t we sell half of it, so as to get our bait back?”) I had all I could do to hold them until it got to $35. Then the same argument: “Phil, sell some of it; we can buy it back when it gets down again.”

That is a totally ridiculous argument. Either this is a better investment than another one or a worse one. Getting your bait back is just a question of psychological comfort. It doesn’t have anything to do with whether it is the right move or not. But, at any rate, we did that. The stock subsequently went above $250 within two or three years. Then it had a wide open break and fell to the mid-50s. But it didn’t go down to $35.

What turned you off in the short term?

Let me go back to the 1930s. The company I really started my business on was FMC Corp., then called Food Machinery. Two-thirds of its business was in selling to fruit and vegetable canners. So I started learning a fair amount about the canning business. Three different times in the Thirties I bought California Packing–that’s the Del Monte line–at a low price, when the outlook for canning looked poor, and sold it at a high price. I also bought, for any client who I could get to buy it, as much Food Machinery stock as they would let me.

Then in 1940 or 1941 I reviewed the bidding and found that the effort I had put into the timing of buying and selling California Packing shares considerably exceeded the time I had spent learning about and watching Food Machinery stock. Yet already by 1940 my profits in Food Machinery dwarfed the ins and outs of California Packing. That episode finally made me decide not to follow the almost accepted policy at the time that you should buy low and sell high and make a profit and bring it in. This just isn’t valid.

Warren Buffett once said his investment philosophy was 85% Ben Graham, 15% Phil Fisher. What’s the difference between Grahamism and Fisherism?

There are two fundamental approaches to investment. There’s the approach Ben Graham pioneered, which is to find something intrinsically so cheap that there is little chance of it having a big decline. He’s got financial safeguards to that. It isn’t going to go down much, and sooner or later value will come into it. Then there is my approach, which is to find something so good–if you don’t pay too much for it–that it will have very, very large growth. The advantage is that a bigger percentage of my stocks is apt to perform in a smaller period of time–although it has taken several years for some of these to even start, and you’re bound to make some mistakes at it. [But] when a stock is really unusual, it makes the bulk of its moves in a relatively short period of time.

The disadvantage of Ben Graham’s approach, as he preached it, is it is such a good method that practically everybody knows it and has picked up the things that meet his formula. I don’t want to say that mine is the only formula for success. But I think, and I may be conceited about this, that I started my business before the term growth stock was thought of.

2. Alien Truth – Paul Graham

If there were intelligent beings elsewhere in the universe, they’d share certain truths in common with us. The truths of mathematics would be the same, because they’re true by definition. Ditto for the truths of physics; the mass of a carbon atom would be the same on their planet. But I think we’d share other truths with aliens besides the truths of math and physics, and that it would be worthwhile to think about what these might be.

For example, I think we’d share the principle that a controlled experiment testing some hypothesis entitles us to have proportionally increased belief in it. It seems fairly likely, too, that it would be true for aliens that one can get better at something by practicing. We’d probably share Occam’s razor. There doesn’t seem anything specifically human about any of these ideas.

We can only guess, of course. We can’t say for sure what forms intelligent life might take. Nor is it my goal here to explore that question, interesting though it is. The point of the idea of alien truth is not that it gives us a way to speculate about what forms intelligent life might take, but that it gives us a threshold, or more precisely a target, for truth. If you’re trying to find the most general truths short of those of math or physics, then presumably they’ll be those we’d share in common with other forms of intelligent life.

Alien truth will work best as a heuristic if we err on the side of generosity. If an idea might plausibly be relevant to aliens, that’s enough. Justice, for example. I wouldn’t want to bet that all intelligent beings would understand the concept of justice, but I wouldn’t want to bet against it either.   

3. Nvidia CEO Jensen Huang: ‘The semiconductor industry is near the limit’ – Max A. Cherney and Jensen Huang

Artificial intelligence is one of the most transformative technologies that the world’s ever known. We can apply intelligence to problems at an extraordinary scale. Humans have great intelligence, but we can only read so much information and wrap that intelligence around so much data. And artificial intelligence, especially with today’s computing scale, could solve problems that no humans could possibly imagine wrapping their arms around. This instrument [AI] is available for the world’s largest technology companies that apply it for all kinds of interesting, very important problems like shopping and music recommendation and things like that.

But we need to put this technology in the hands of scientists, so they can apply it to the most important and pressing challenges. Most universities don’t have the budget. And it’s really quite a shame that most universities today still haven’t come to grips with the idea that in order to advance the most important fields of science, you need a new type of instrument — just like we needed radio telescopes, just like we needed particle accelerators. We need instruments to advance science.

And in this new form, in this new world of scientific discovery, where principal methods, theoretical methods are still very important, but data-driven methods are vitally important. And this data-driven method is really about inferring from sensor information: How to predict physics, and in order to do this you need a large instrument, and that large instrument [today] is a computer, and most universities just don’t have the budgets for the scientists. They have the budget for the buildings, but they don’t have budgets for computers.

The semiconductor industry is near the limit. It’s near the limit in the sense that we can keep shrinking transistors but we can’t shrink atoms — until we discover the same particle that Ant Man discovered. Our transistors are going to find limits and we’re at atomic scales. And so [this problem] is a place where material science is really going to come in handy.

A great deal of the semiconductor industry is going to be governed by the advances of material sciences, and the material sciences today is such an enormously complicated problem because things are so small, and without a technology like artificial intelligence we’re simply not going to be able to simulate the complicated combination of physics and chemistry that is happening inside these devices. And so artificial intelligence has been proven to be very effective in advancing battery design. It’s going to be very effective in discovery and has already contributed to advancing more durable and lightweight materials. And there’s no question in my mind it is going to make a contribution in advancing semiconductor physics.

4. Black Monday – October 19, 1987 – Gene Hoots

The worst day in market history was October 19, 1987 – 35 years ago tomorrow. Some analysts will mention this on market news, perhaps referring to “old timers” who remember Black Monday.  To me, it seems like only yesterday. It was the most fearful day I have experienced in my 53 years of investing…

…People always think that the current bad market is the worst ever. Historical perspective is important. And unfortunately, we get more perspective from experience than from reading history. You just had to be there!

The 1987 crash ended a five-year ‘bull’ market. The Dow Jones Industrial Average rose from 776 in August 1982 to 2,722 in August 1987. Then, in 8 weeks, the Dow declined 17%. On October 19, forever known as “Black Monday,” the Dow plummeted another 23%, the greatest loss Wall Street had ever suffered in a single day. (An equivalent one-day drop now would be about 7,000 points. That would get your attention!)

I was working in New York at the investment firm, Reich & Tang, very aware of the disaster that was occurring 3 miles south at Broad & Wall Streets. Our office was eerily quiet. About 4:30, a lady who worked in the next office came to my door, and in hushed tones, said, “The Dow is down 508 points.”  The worst day ever!

We had no idea whether the market and the economy would now freefall as they had in 1929.  Everyone was scared. After work, I walked up 5th Avenue toward our apartment. My world had just collapsed. Reich & Tang might go out of business…

…Tom Quinn, the chief investment officer at RJR Investment Management in Winston-Salem, was buying stocks like crazy that afternoon, when the rest of the world, or 99% of it, was either selling or frozen with fear. The consequences of his decision, a courageous and correct one, show how hard it is to be a contrarian who does the right thing. He later recalled:

I could have been characterized as dangerously unemotional. We were buying stocks for a new $100 million portfolio. By October 18, we had purchased $60 million of stocks, and we held $40 million in cash, planning to be fully invested by month-end. On October 19, we watched the market crash that morning.  We were experiencing pure panic in the market. This was most likely an excellent buying opportunity.  That afternoon, with the market down over 20%, we completed the buying.

This decision was logical and economically correct.  But I was threatened with termination because of this economic wisdom.  Black Swan [unexpected] events create unique opportunities to make money.  However, emotions that create the event will also constrain people from capitalizing on that event.

Some called my not getting caught up in the emotions of the day a “Herculean effort”.  It did not seem Herculean to me, just simple investment common sense.  But what I did may have been more stupid than stupendous. It almost cost me my job.

On the day following the crash, the CEO expressed shock that we were buying stocks when the market was crashing. He wanted us to move the pension fund to 100% cash. We argued that such action was inappropriate. We held a series of meetings and discussions to resolve our major differences and reached a compromise. Our investment group agreed to hedge (short) 20% of the plan assets, $370 million. [If the market continued to decline, the trade would profit. If the market advanced, the trade would lose.]

We took the short position 30 days after the crash. Ironically, that was the day the market bottomed. My theory has always been that it took 30 days for the last ERISA fiduciaries to argue, resolve their differences, and complete their liquidation of plan assets. 

Now, being a little wiser, I realize that making the most money at the lowest risk is not the primary objective of fiduciaries or their agents (advisors).  The primary objective is to cover your backside and keep your job or investment account.  It is a matter of short-term thinking rather than long term investing.

The 20% shorting of the S&P 500 Index resulted in a significant loss for the pension fund. A year later, CEO Ross Johnson put RJR ‘in play’ as a buyout.  Floyd Rodgers, a Winston-Salem Journal reporter, interviewed Johnson.  Floyd asked about the shorting incident. Ross dismissed the question, saying, “We made money on that move.” In this business, lots of people make terrible decisions and later claim they are not!  His comment still infuriates me.

5. Meta’s new AI-powered speech translation system for Hokkien pioneers a new approach for an unwritten language – Meta AI Blog

Until now, AI translation has mainly focused on written languages. Yet nearly half of the world’s 7,000+ living languages are primarily oral and do not have a standard or widely used writing system. This makes it impossible to build machine translation tools using standard techniques, which require large amounts of written text in order to train an AI model. To address this challenge, we’ve built the first AI-powered translation system for a primarily oral language, Hokkien. Hokkien is widely spoken within the Chinese diaspora but lacks a standard written form. Our technology allows Hokkien speakers to hold conversations with English speakers.

The open-sourced translation system is part of Meta’s Universal Speech Translator (UST) project, which is developing new AI methods that we hope will eventually allow real-time speech-to-speech translation across all extant languages, even primarily spoken ones. We believe spoken communication can help break down barriers and bring people together wherever they are located — even in the metaverse.

To develop this new speech-only translation system, Meta’s AI researchers had to overcome many challenges from traditional machine translation systems, including data gathering, model design, and evaluation. We have much work ahead to extend UST to more languages. But the ability to speak effortlessly to people in any language is a long-sought dream, and we’re pleased to be one step closer to achieving it. We’re open-sourcing not just our Hokkien translation models but also the evaluation datasets and research papers, so that others can reproduce and build on our work…

…Speech translation systems are usually evaluated using a metric called ASR-BLEU, which involves first transcribing the translated speech into text using automatic speech recognition (ASR), and then computing BLEU scores (a standard machine translation metric) by comparing the transcribed text with a human-translated text. However, one of the challenges of evaluating speech translations for an oral language such as Hokkien is that there is no standard writing system. In order to enable automatic evaluation, we developed a system that transcribes Hokkien speech into a standardized phonetic notation called Tâi-lô. This technique enabled us to compute a BLEU score at the syllable level and easily compare the translation quality of different approaches.
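To make the syllable-level scoring concrete, here is a minimal sketch, with made-up Tâi-lô sentences and not Meta’s actual evaluation code, of how a BLEU score can be computed once both the system output and the human reference have been transcribed into Tâi-lô and split into syllables:

```python
# A toy sketch of syllable-level BLEU over Tai-lo transcriptions.
# The sentences below are illustrative placeholders, not real evaluation data.
from sacrebleu.metrics import BLEU

def to_syllables(tailo: str) -> str:
    # Tai-lo marks syllable boundaries with spaces and hyphens, so splitting
    # on both gives one token per syllable for BLEU's n-gram counts.
    return " ".join(tailo.replace("-", " ").split())

system_output = ["gua2 beh4 khi3 hak8-hau7"]          # hypothetical system output
references    = [["gua2 beh4 khi3 hak8-hau7 ah4"]]    # one reference stream of human references

bleu = BLEU(tokenize="none")  # tokens were already split into syllables above
score = bleu.corpus_score(
    [to_syllables(s) for s in system_output],
    [[to_syllables(r) for r in stream] for stream in references],
)
print(f"Syllable-level BLEU: {score.score:.1f}")
```

Scoring at the syllable level sidesteps the lack of a standard written form: any transcription that yields consistent syllable boundaries can be compared across systems.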

In addition to developing a method for evaluating Hokkien-English speech translations, we also created the first Hokkien-English bidirectional speech-to-speech translation benchmark dataset based on a Hokkien speech corpus called Taiwanese Across Taiwan. This benchmark dataset will be open-sourced to encourage other researchers to work on Hokkien speech translation and together make further progress in the field…

…The techniques we pioneered with Hokkien can be extended to many other written and unwritten languages. To that end, we are releasing SpeechMatrix, a large corpus of speech-to-speech translations mined with Meta’s innovative data mining technique, called LASER, which will enable researchers to create their own speech-to-speech translation (S2ST) systems and build on our work.

6. An Interview With Replit Founder Amjad Masad – Ben Thompson and Amjad Masad

I really want to understand more about who your users are. One thing you read a lot about is Replit has a massively positive reputation, I would say, amongst teachers for example, where it’s just so much easier for students in a class. I think this really came to the fore during COVID when people were working from home, and you couldn’t help students get their computers set up correctly and the ability to just go to a browser and to your point, everything is there ready to use. There’s another core group, which is people like young Amjad back in Amman who don’t have access to a ton of resources but do have a computer with a browser, and Replit is accessible and it’s free to use. But then if you go to the other end of the spectrum, there are professional developers who would perhaps argue that making it hard to get set up is an excellent filter for people who are good at problem solving, and that Replit maybe makes it too easy.

I would assume your biggest market is by far on that first side of the spectrum, the people for whom the ease of use is a really killer feature. The problem is maybe all the money to be made or the big market at least today is on the opposite end of the spectrum. Do you see Replit in the long run bridging that gap? Or is this a situation where you’re going to be so easy to use and so easy to get started and then you’ll just keep building features over time that you’ll capture the next generation and you don’t need to worry about the gray beards over there saying like, “Oh, that’s trivial. I could have built that if I wanted to.”

AM: I think the answer is both, and historically it’s been both. So if you look at the PC revolution, the microcomputer, it started with kids and with hobbyists and computer clubs. What did the professionals do? They were using IBM mainframes at their companies and they looked down at everyone that used a PC. PCs didn’t have a killer app, it was VisiCalc or some of the spreadsheet apps that were the first killer apps. Apple was mostly a home computer and education computer, Microsoft was also an education company, Microsoft’s first product is BASIC, it’s a beginner programming environment. So a lot of computer revolutions, a lot of these big companies start kind of simple, start like a toy, and a lot of the initial beachhead market is hobbyists, teachers, schools and things like that, and that’s a great place to be in.

The reason Adobe has been around for a long time is because everyone’s pirating Adobe and using it at school, and it’s so embedded. These companies are so embedded in our childhood, and so we just grew up using it and I think the same thing’s going to happen with Replit. We have people start their first line of code on Replit and go all the way now to starting a job. I just tweeted about a kid who he said every line of code he’s ever written, CS student at Waterloo, has been on Replit. He just interned at Apple, and presumably he used Replit as well there. So you’re going to see a whole generation of people growing up on Replit and taking it to their jobs and I think at some point there’s going to be enough of them that also the companies are going to be paying attention and be like, “Okay, I have all these people that are very effective, that are very collaborative, that are very fast.” And by the way, maybe we’ll talk about some of our AI tools later on…

You did an excellent job of setting up a whole host of topics that I wanted to touch on. Let’s start with the collaboration bit. This is something that the web makes possible first and foremost, but how important has that been to Replit? Was that in the vision to start out with? Because you start out with this idea, “Oh, it should be super easy to get started.” Well, for a single player in part you need it to be super easy to get started because you don’t know anyone to talk to, you have no one to help you get set up. It’s almost counter to your founding story where actually now collaboration and people being able to work together is a big part of what I hear in the Replit value proposition. So tell me how you got to collaboration and why that is important for you and your value proposition now.

AM: I think the web is deeply collaborative. The idea that a resource is accessible via URL, which has the word resource in it, is a deeply collaborative concept. The idea that I can pass you a link and we’re both looking at the same thing, that is the fundamental of collaboration. So it was immediately obvious for us the moment we added saving of projects and the ability to pass around these links — the way people used to collaborate before we added multiplayer coding is that I’ll get a link, I’ll fork that, and I’ll change something, I’ll send you a link, and people will fork things thousands of times and just go back and forth between each other. Collaboration was an emergent property of the architecture of the system just by being in on the web and then collaborative coding — Google Docs sort of popularized this idea of being in real time together, and it was an obvious extension of being able to pass a URL and being able to have access to the same object and the same thing that you’re viewing the same document.

So we started working on that in 2017/2018 and turned out it’s a lot harder than we expected. Recently it became really, really good and really reliable because with Google Doc you have a server-client relationship, and it’s very easy to reason about that. With Replit you have the server that’s running your code that could actually crash at any given moment because you might write bad code. It’s also managing the state and so it is more of a distributed systems problem because any time the server can come up and down and anyone can connect at any given point, and so Replit collaboration is designed in a more distributed systems fashion. So we worked on that and we launched it in 2018. It was a bit of a slow start, and it really exploded in COVID when people realized that I can just share this thing and I can be in this document with this other person and we’re coding and it’s really a magical experience.

That being said, I think that it’s very early in code collaboration. Again, there’s a lot of different things other than writing prose. One example being, I could introduce a syntax error in one file, and you’re trying to run the program in another file and I just broke the program, how do you deal with that? We think there’s this potential hybrid system somewhere between real time and between Git that exists where you could checkout single files, or you could imagine this thing that I like to call the multiverse of a code project where you have a single code project instance but every edition is sort of a mini-fork of the project and then you can transparently go between forks or go back to master and having all that kind of resolved in real time. There’s a lot of innovation to be made on collaboration, and I think we’ll be the first to figure it out…

I did want to touch on that. One of the reasons I reached out to you, and why I’ve been interested in Replit just from afar, you mentioned Paul Graham earlier, he is one of your biggest cheerleaders for sure. He’s got a, needless to say, pretty good track record on the companies he’s pretty enthusiastic about. But then also there was this just being this web-centric, collaborative-centric, there’s a Figma, you mentioned Figma earlier, comparison there and how powerful that can be. But what really triggered reaching out now was your announcement of Ghostwriter, and this is GitHub Copilot adjacent, this idea where you can basically — you already have this multiplayer idea, you can code with someone else. I like the way you framed it where now, the other multiplayer can basically be AI and it can help you do this. Was Copilot, is that what opened your eyes to this and this possibility?

AM: In 2013, I read this paper called On The Naturalness Of Software. I actually have it on my website because I love that paper so much. So this paper — I read it fairly early on and it basically says code is like natural language. They actually have this statistical reason for why they think code can be thought of as natural language. And then, they’re like, “Okay, if it is natural language, can you apply NLP (Natural Language Processing) on it?” So in this paper, they built this n-gram model and they build this auto complete engine based on this n-gram model and it was actually — so an n-gram model is just a frequency model, right? It’s like how frequent is this word following this word? So a probability distribution for words.

It turns out, you can build a fairly sophisticated auto-complete engine just based on the n-gram. When they actually married n-gram with a more semantic engine like say, IntelliJ’s IDE engine, they were able to build a superior auto-complete engine that users found better and they had some data around that. So I was like, “Okay. Wow, this is insane.” I’ve spent all my career working with code and coding, I write code to manage code. At Codecademy, we wrote code to execute code. At Facebook, I wrote compilers. I was lead developer on Babel which is the world’s most popular JavaScript compiler. I started a JavaScript infrastructure engineering team at Facebook, so I always wrote compiler-like things and it’s very laborious.
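As a rough illustration of the n-gram idea Masad describes (a toy sketch, not the model from the paper he cites), a simple bigram frequency table over code tokens is already enough to suggest a likely next token:

```python
# A toy bigram "auto-complete" over code tokens: count how often each token
# follows the previous one, then suggest the most frequent followers.
# The training snippet and whitespace tokenisation are purely illustrative.
from collections import Counter, defaultdict

corpus = """
for i in range ( len ( items ) ) :
for j in range ( len ( rows ) ) :
if i in seen :
""".split()

following = defaultdict(Counter)
for prev, curr in zip(corpus, corpus[1:]):
    following[prev][curr] += 1

def suggest(prev_token: str, k: int = 3):
    """Return the k most frequent tokens seen after prev_token."""
    return [tok for tok, _ in following[prev_token].most_common(k)]

print(suggest("in"))      # e.g. ['range', 'seen']
print(suggest("range"))   # e.g. ['(']
```

A model this crude obviously misses semantics, which is exactly why combining the frequency signal with a semantic engine, as the paper did, produced a better auto-complete.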

Yep. A lot of busy work.

AM: A lot of busy work and also very algorithmic and semantically challenging. So when I saw this paper and saw this, “Okay, we can actually apply these statistical approaches to code,” that opened my mind. One of our first pitches, we actually talk about we want to do AI-supported code. As I was thinking about the modes that we’re going to have, data kept coming back as a mode. We’re going to have everyone’s coding experiences from their first line of code to their first job. And so, what does that give us?

So I’ve been following the space and tried a few times to do something with it. When GPT-2 came out, that was the first time we were like, “Okay, now the technology is almost there.” Actually, we also tried to acquire Tabnine, the founder of Tabnine was an OpenAI intern that sold it to some company before it grew. So when we were raising our seed round, we were trying to scrape money together to acquire this company but anyways, the technology wasn’t really there. So GPT-2 came out, we started saying, “Okay, this is probably going to enable this technology.” GPT-3 came out, I immediately started writing software for it and then we started building on GPT-3. We released explain code before Copilot and before anyone else. We released a bunch of experiments on OpenAI.

Unfortunately, the pricing model of OpenAI just didn’t make sense for us. The other thing is we are a company that knows how to optimize compute and we have, at any given point, 1 million containers running continuously. It makes us one of the bigger clouds in the world actually and so it just didn’t make sense for us to build on OpenAI because we couldn’t control latency, we couldn’t get uptime. It’s a great company, we love them, we’re partnered with them but at the end of the day, it just made sense for us to build our own. So with the Ghostwriter, we started from an open source model and we applied a ton of optimization on it and a ton of additional training and work and we built this really nice front-end UX on top of it and we’re now in closed beta. We’re going to open beta pretty soon and then you’ll be able to buy it as a Power Up on Replit next month.

7.  “Zaitech” (財テク) – O-Tone

But a handful of pundits in Japan had been well aware of a peculiar phenomenon taking place within Japan’s corporate sector: “Zaitech” (Japanese 財テク), or “Zaiteku”, which means money management. The Japanese term is a blend of 財務 (zaimu, “financial dealings”) and テクノロジー (tekunorojii, “technology”).

It describes a corporate strategy where earnings are generated through non-operating financial activities and speculation. Today, “Zaitech” is known under the label of “financial engineering” and is mainly conducted in the Western hemisphere.

In its extremes, “Zaitech” was as simple and safe as investing retained earnings in short-term bank certificates. At the other end of the spectrum, it meant borrowing money from Japanese banks or in the Eurobond market to enter highly speculative positions in a variety of financial instruments.

The heyday of “Zaitech” was in the late 1980s, right after the Plaza Accord to devalue the dollar against its major peers. Within one year of the agreement in 1985 by the G5 countries (Japan, the United States, West Germany, Britain, and France), the Yen doubled in value against the US Dollar.

Japanese exporters were hit hard. They had to lower unit prices in Yen to maintain market shares. Profit margins plummeted. To shore up profits, an increasing number of companies engaged in “Zaitech”. The Bank of Japan (BOJ) facilitated the trend by deciding to soften the blow of rapid Yen appreciation through unprecedented monetary easing.

The actual extent of “Zaitech” was unknown. According to Nomura, financial assets of Japan Inc. totalled $909.7 billion in March 1983. Three years later they had grown to $1.4 trillion. In 1986 it was rumoured that one third of Toyota Motor’s pre-tax profits was based on “Zaitech”. And, at least within Japan, it was an open secret that other prominent Japanese companies, like Sony Corp. and Sanyo Electric Co., also engaged in it.

As the amounts grew, the complexity of “Zaitech” operations also increased. Corporate Japan had long gone beyond merely speculating out of retained earnings or bank loans. Instead, it raised money via sales of dollar-denominated Eurobonds with stock warrants attached to them. Those warrants became extremely popular over time. Roughly half of them were sold within Japan. The rest were easily placed, and eagerly purchased, internationally, with very low interest rates attached to them. Basically, when converting the dollar proceeds into Yen, Japanese borrowers often had an effective interest cost of nil, or less.
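A stylised back-of-the-envelope calculation, with entirely made-up numbers and a deliberately simplified timeline, shows how a low-coupon dollar Eurobond could end up with an effective cost of nil or less in Yen terms once the Yen appreciated:

```python
# Illustrative only: a Japanese issuer sells a dollar Eurobond at a low coupon
# (thanks to the attached warrants), converts the proceeds to Yen at issue,
# and later services the dollar coupons and principal at a stronger Yen.
principal_usd = 100_000_000   # Eurobond proceeds
coupon_rate   = 0.015         # low coupon because of the equity warrants
years         = 4
fx_at_issue   = 240           # Yen per dollar when the bond is sold
fx_at_repay   = 150           # Yen per dollar after the post-Plaza appreciation

proceeds_yen  = principal_usd * fx_at_issue
# Simplification: assume all coupons and the principal are paid at the later rate.
coupons_yen   = principal_usd * coupon_rate * fx_at_repay * years
principal_yen = principal_usd * fx_at_repay

effective_cost_yen = (coupons_yen + principal_yen) - proceeds_yen
print(f"Effective cost in Yen: {effective_cost_yen:,.0f}")  # negative: the borrowing "paid" the issuer
```

The point is not the exact figures but the mechanism: cheap dollar funding plus a rapidly strengthening Yen could turn a liability into something close to free money, which is why the practice spread so quickly.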

The funds would then be put into securities that had been in a sustained uptrend, such as Japanese real estate, stocks, bonds, or even warrants and/or options. “Tokkin” funds were also popular vehicles used by corporate Japan: trust accounts with special tax advantages, often handled by young and highly aggressive “portfolio managers”, and eagerly marketed by leading Japanese investment banks like Nomura or Daiwa.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. Of all the companies mentioned, we currently have a vested interest in Adobe, Apple, Meta Platforms, and Microsoft. Holdings are subject to change at any time.