Kai-Fu Lee on the Power of A.I. to Transform Humanity
The media tends to hyperbolize technologists and boost the work that they do, creating all kinds of absurdly over-the-top titles for them. But when CBS’s 60 Minutes dubbed Kai-Fu Lee “the oracle of A.I.” earlier this year, it was actually a spot-on assessment. Lee has indeed been at the forefront of the field for more than three decades and is without question an artificial intelligence visionary. There are few people in the world who understand A.I. so astutely, especially within so many social and cultural contexts. His accolades speak volumes: In 2013, Lee was named to that year’s Time 100 list of the world’s most influential people, and this January, he was named co-chair of the World Economic Forum’s A.I. Council. His new book, A.I. Superpowers: China, Silicon Valley, and the New World Order, quickly rose to become a New York Times bestseller.
Lee’s got one extraordinary résumé: After receiving a B.S. in computer science from Columbia University in 1983, he went on to get his Ph.D. in 1988 from Carnegie Mellon, where he developed Sphinx, the first-ever speaker-independent continuous speech recognition system. In 1990, he joined Apple as a research scientist, heading up multiple R&D groups there for several years. From 1998 to 2005, he worked at Microsoft, where he established what would become Microsoft Research Asia, and later, upon returning to the U.S., he was named a vice president at the company. In 2005, he decamped to Google, resulting in a widely publicized five-month legal battle with Microsoft. Once the suit was settled, Lee helped bring Google to China, overseeing its growth and operations there for four years. Lee now runs Sinovation Ventures, a venture capital firm that invests in start-ups in China, many of them in the A.I. space. As of a year ago, according to Bloomberg, Sinovation had $2 billion in assets under management, with more than 300 companies in its portfolio.
On this episode of Time Sensitive, Lee shares with Andrew Zuckerman his fascinating story of emigrating from China to Oak Ridge, Tennessee, at age 11; why he remains rationally optimistic about A.I. (and its increasingly potent presence in our lives); and how a recent bout with cancer drastically altered his outlook on life and work.
Lee clearly and succinctly defines A.I.—including “machine intelligence,” “machine learning,” and “deep learning”—also explaining to Zuckerman the evolution and trajectory of the field at large.
As he details in his new book, A.I. Superpowers, Lee tells Zuckerman how China has risen to the top in the A.I. space, especially in the past five years—and why, as Lee puts it, “success breeds self-entitlement, and I think that’s the danger facing Silicon Valley.”
Lee shares his rationally optimistic take on A.I., arguing that rote and mechanical jobs will be replaced by A.I., freeing people up to take on work that is more empathic and creative. “A.I. is certainly not going to take over humanity,” he says, while also acknowledging security concerns and many of the darker possibilities that come with the technology.
The two discuss Lee’s upbringing in Oak Ridge, Tennessee, where he lived with his brother and sister-in-law (with his parents’ support, he moved there from China for a better high school education). Lee then talks about how he went on to attend Columbia and Carnegie Mellon before starting an illustrious career that has included prominent posts at Apple, Microsoft, and Google.
Lee opens up about how, following a Stage 4 lymphoma scare several years ago—one that he was able to recover from—he took a fresh look at his life and work, realizing that he was spending way too much time on the job and not enough time with his family and close friends. With this new perspective, he has found his life to be happier, healthier, and richer.
Follow us on Instagram (@slowdown.tv) and Twitter (@time__sensitive), and subscribe to our weekly newsletter.
ANDREW ZUCKERMAN: Kai-Fu Lee is an investor, a computer scientist, a bestselling author. He has been called “the oracle of A.I.” So thrilled to have you here. Welcome, Kai-Fu Lee.
KAI-FU LEE: Thank you!
AZ: I wanted to start with some basic stuff. When did the development of artificial intelligence really begin?
KL: The concept began almost sixty years ago, when researchers were trying to figure out how human intelligence worked and whether it could be replicated by software. But eventually, it morphed and changed. What we call A.I. today—things that work in speech recognition, and pattern recognition, and internet technologies, and robotics—is actually based on the branches of A.I. called “machine intelligence” and “machine learning.” More specifically, the one big breakthrough is called “deep learning.” And deep learning actually isn’t human intelligence at all. It’s a really big pattern-recognition engine that, when you feed it a huge amount of data along with the right decisions and outcomes, trains itself, using mathematical methods, to come up with optimized answers—whether to give a loan to someone, whether to show you an ad, whose face that is, what a person said—at superhuman accuracy. Today’s A.I., or what we call A.I., is actually very narrow and domain-specific, but incredibly capable and superhuman within very limited tasks.
AZ: Why was A.I. so stagnant for so long?
KL: Actually, the ideas of deep learning have been around for a long time. Even when I was doing my Ph.D. thesis in the eighties [at Carnegie Mellon], the algorithms were being discussed. But it turns out that deep learning needs a lot more samples, because it’s mathematically based in statistics—it needs to see many more samples than humans do. When I did my Ph.D. in the eighties, the machines weren’t fast enough and the disks were too expensive. We were using one-millionth of the computation and storage we have today. With the lower cost of storage and compute, deep learning now really works well. It’s just a different mathematical brain, if you will, that requires many more samples than humans do to learn the same concepts. But when you have so much data, it actually works better than people.
AZ: Can you describe the current moment in terms of human history? I’m not sure too many of us understand it.
KL: Well, we’re at an inflection point, when what I described will be used in every imaginable industry to create a huge amount of value. Because, while I try to be realistic and describe the limitations of A.I. and deep learning, it’s actually incredibly easy to use. It’s a tool that a smart computer-science engineer can learn in weeks—at most, months. And as these tools become easier and easier to use, A.I. will be applied to every imaginable industry. Banking, insurance, automotive, healthcare, retail—there will not be a single industry that isn’t going to be revolutionized by A.I. In some cases, industries will be disrupted by it.
Imagine the future—we don’t go to banks to get loans; we use an app to get however much money we want, at a much lower interest rate. But it will also be infused into various traditional companies. It will help retail companies manage inventory better, figure out sales forecasts. It will help convenience stores become autonomous, cashierless, and people can just go in and take things and put them in their pockets and they are charged automatically. It will be connected to all kinds of data. It will know more about us. It will know about our spending habits, and our usage history, to infer what we want. It will recommend things to us that will be much more accurate than before. It will disrupt everything.
I think this process of disruption is something that we’ve only seen a few times in history. Electricity, the Industrial Revolution, and the Internet Revolution are the three that I can think of, but this time it will be so much faster. With electricity, it took decades for the electrical grid to be built up, and then people had to invent new ways to use the electricity—air conditioners, refrigerators, and so on. It took over a century, and only now are we getting electric cars. But with A.I., these engines work on the cloud, on the internet, and you can program them and connect them with the data that’s also on the cloud. And engineers can access them. Open source also allows people to build on each other’s work. Compared to electricity, which took decades if not a century to become fully pervasive, A.I. can be pervasive in years. And this will bring tremendous value, tremendous efficiency, but also tremendous disruptions. Because it will change business practices, it will cause companies to go out of business, and it will take away people’s jobs, especially if they are routine. It’s going to be a very exciting but also a very challenging decade ahead.
AZ: But the fears that we had about trains and electricity were similar to the fears that we’re having about A.I.
KL: Yes, yes. People are always afraid of new things. When automobiles first came out, England actually passed a red flag law that said that, in order for automobiles not to scare the horses, someone had to walk in front of each automobile carrying a red flag or waving a lantern. That completely destroys the value of the car, and that was the fear that people had. Now we can’t imagine our lives without automobiles. The same will be true of the paranoia about A.I.; eventually there will be acceptance. In a few years, we probably won’t be able to imagine living our lives without A.I.
AZ: How are we going to make A.I. compatible with humanity, though? How are we going to reconcile that relationship?
KL: Well, there are many challenges. A.I. is largely a tool, so it’s not going to grow up to become a monster that wants to control us. It has no self-awareness. It has no desire to control us or manage us. In that sense, it is just a tool. It is like Excel, like Word—something for which we set the objective, and it follows. It’s not as scary as most people think. However, there are many issues that come up in order for A.I. to work well. It takes our personal information, our privacy, in order to give us convenience. Is that the tradeoff we want? And when it optimizes on a function, such as when Facebook wants to optimize the minutes we spend on the News Feed, it doesn’t look at other aspects, such as: Should it, according to responsible journalism, show us the kinds of things that might cause us to be more biased? It’s single-minded, and that can have a certain effect.
There are also security issues. A.I. can be hacked. There can be deepfakes, making it look like a celebrity or a president when it’s not. So there are many of these smaller issues—
AZ: Which you pointed out so beautifully with a President Trump deepfake in your TED Talk.
KL: Right, right. There was a company that actually made a system that talked like President Trump. We’ve also seen President Obama, videos that were completely synthesized by A.I. These are new things that we have to learn to deal with, and we’re not very good at it. We get scared, but eventually we’ll figure out solutions. Think about social networks, right? A lot of people are talking about the dangers of social networks. But I think, eventually, we will have to believe in human wisdom. Technologies were always scary in the beginning, but given time, human wisdom will figure out how we can coexist with A.I.
AZ: You made an interesting point regarding A.I. after the AlphaGo win. Can you talk to me about how you saw that experience?
KL: Yes, AlphaGo is a system built by DeepMind, Google’s lab in the U.K., and it demonstrated what people seemed to believe required human intelligence: playing the game of Go, which is much more complex than chess and was invented in China, and defeating the Chinese master who was the champion of the world. That really, in particular, woke up China, because I think the Chinese people really thought Go was the pride of Chinese culture: invented by the Chinese, and the Chinese are best at it. And here it is, a piece of European software, defeating China at it. It became kind of a Sputnik moment that caused constructive people to think, “What other uses can we find for this technology? Should we start a company based on it?” It caused the Chinese government to think, Is this an area of science in which we want to increase funding and focus? That caused a huge mindset shift in China. And, as I mentioned, A.I. is not really rocket science anymore. When all of these Chinese people who work extremely hard became determined to understand and use this engine—and create value—China rapidly caught up with the U.S. in artificial intelligence.
AZ: The point that you make about how A.I. is not human, that it doesn’t contain emotion, was so clear in what happened at the end of that game.
KL: Yeah, at the end of the game, Ke Jie, the person who was defeated, was crying, because this was the game that he loved, and here was this brilliant computational engine that he couldn’t possibly beat, no matter how he tried. But on the other hand, if you think about AlphaGo, it’s just a piece of software. It didn’t enjoy winning, it doesn’t know why it is playing the game, it felt no happiness from winning and no desire to hug a loved one. If we take a step back and think about it, machines are cold, calculating engines, and humans have love and emotion, compassion, and attachment. These are maybe the things we should think more about, rather than, Can we elevate ourselves to beat AlphaGo again?
AZ: You came out with an extraordinary book last fall called A.I. Superpowers: China, Silicon Valley, and the New World Order, and in it you describe how fast China is moving to becoming the global leader in A.I. What do you think is leading to this momentum?
KL: Well, A.I. is becoming more accessible and open-source. The big breakthrough is deep learning, and it’s reasonably well understood. While the U.S. has better researchers in A.I. than any other country, it isn’t the case that deep research capability translates directly into commercial value. Because A.I. is well understood, and no longer rocket science, what really matters are things like: Do we have entrepreneurs who will find places where A.I. adds value? Do we have engineers who can quickly train themselves and work incredibly hard to iterate and make A.I. work? Do we have large amounts of data to fuel A.I.? Do we have subsidies from government? Do we have venture capitalists with money? And do we have a large market that has that kind of data and demonstrates that kind of value once it’s proven? China happens to be strong in all of these areas.
China is by far the largest market of mobile internet users, has by far the largest amount of data, and Chinese entrepreneurs are incredibly hardworking, tenacious, and single-minded in wanting to achieve great success. China has more VC money for A.I., and the Chinese government is supporting and building infrastructure to help move this along. All of these things fused together and rapidly accelerated China to being, arguably, roughly on par with the U.S. right now. That would have seemed inconceivable in many other scientific domains, but because of the special nature of A.I., China managed to do it in the last couple of years.
AZ: There’s this general belief in this country that Silicon Valley is a cradle of invention and China is the copycat culture. But something has changed. What’s changed in the recent years that has made this belief outdated?
KL: Yeah, outdated is exactly the right word. That used to be the situation. The U.S. was, by far, ahead. China copied and learned from the master. Silicon Valley was the most creative, and certainly Silicon Valley is still the most innovative at coming up with breakthrough ideas. But with the advent of the internet, one could iterate and try many ideas. It’s no longer about the Apple or Google model of success, where a brilliant idea took years to develop and was way ahead when it came out. Now, it’s more about: Are you a flexible team that launches something quickly, lets things break, and uses feedback from users to iterate your product? Iteration became more important, in many cases, than the brilliant, original idea in creating value. If you iterate fifty times and arrive at a product that can wow people, that’s just as valuable as if you’d had a brilliant idea, closed the doors, and built something over three years.
Chinese entrepreneurs have been, in the last ten years, leveraging the size of the China market, which attracted a lot of venture capital money. They would iterate. They would work eighty to a hundred hours a week, and they would find solutions, and they would find new ways to pivot. In the beginning, it was copying the American idea completely. Then it was starting with the American idea. And then it was iterating into something better. Now, with a lot of practice, many of the Chinese entrepreneurs have become innovative. They can come up with their own ideas and use the market to test them.
In short, I think China’s ten-year miracle—moving from copycat to innovator—is basically a cycle that began with a larger market attracting more money, and then attracting great entrepreneurs who iterate and build great products, and use those products to get users and data, and use that data to train A.I., and use that A.I. to build even better products, and then use those products to grow the market. It’s a cycle that has continued, and, if you look at Chinese products today, products like TikTok, they’re being used everywhere. Even American teenagers love it.
AZ: My kids love it.
KL: Yeah, and Ant Financial is the world’s largest mobile payment processor, much larger than PayPal. China has innovative companies like VIPKid and Mobike and Pinduoduo—a twenty-five-billion-dollar company built in three years. These new innovative products are not even inspired by Silicon Valley anymore. The Chinese are able to create a parallel set of applications, in almost a parallel universe, that no longer depends on American innovation. And I think that kind of change in ten years is unthinkable, but it—
AZ: I don’t think Americans understand that if you’re in China now, you don’t own a credit card.
KL: Right, right.
AZ: What are some great examples you could tell me of—you had mentioned the scooter company—that you really love, some of these success stories?
KL: Yeah, so there’s incredible convenience when you have mobile payments in China: There’s no longer any use of cash or credit cards; everything is mobile payments. And it’s not just Apple Pay for China. This is a payment system in which anyone can pay anyone. Seven hundred million people, and anyone can pay anyone with just a few clicks—
AZ: Any amount of money.
KL: Any amount of money, down to fifteen cents at the minimum. So it makes everything easy, it makes it so easy to run a store because—
AZ: You tell a story about how you can donate to the homeless while on the street.
KL: Well, actually the homeless started having some trouble, because people didn’t have coins anymore. But now the homeless hold up a big sign that says “Scan Me,” and when you scan it, you just click to send money by mobile to the person.
AZ: That’s amazing.
KL: There were robbers who went to convenience stores, and they got less than one hundred dollars after robbing three stores, because there’s no cash anymore. Mobile payment also dramatically reduces transaction costs: credit card companies charge two or three percent, which is a tax on the entire economy, and now that’s gone because payments go direct. It also makes entrepreneurship a lot easier. New products came out. Shared bicycles that you can scan and pay for immediately. Ordering online is incredibly easy, and ordering food has become easy. In almost any Chinese city, you can order takeout from probably a thousand different restaurants near you, delivered to you in thirty minutes, and the delivery fee is only seventy cents. That’s not just—it’s a combination of refined algorithms, and A.I., and just hard work, in reducing the cost of delivery.
Life has become much more convenient in China. A famous American professor recently went to China, and she was going from meeting to meeting. She didn’t have time for dinner and was hungry. So the driver said, “Do you want me to order you something?” And she says, “Well, how is that possible? We’re stuck in traffic.” He says, “Don’t worry.” So he ordered the food to where he thought the car would be in thirty minutes. The moped driver drove next to the car, they opened the window, and he handed the food over.
That’s almost like science fiction. The convenience is incredible, and that’s happened in the last five years or so.
AZ: Do you think the sort of explosion of success in Silicon Valley affected its perspective? In short, is hubris getting in the way of innovation?
KL: Well, I think certainly success breeds self-entitlement, and I think that’s the danger facing Silicon Valley. If you think about the days of Intel and Microsoft—there was no competitor anywhere in the world. They didn’t have to localize or cater to the needs of China or other countries. It was take it or leave it. So the American companies are essentially used to monopolizing the world. And you could call that behavior hubris, you could call it self-entitlement, you could call it, Why do work when you don’t have to? But the Chinese companies are now emerging. Not only are they successful in China, but they are going overseas. TikTok, for example, is taking the world by storm, and it’s quite successful in Southeast Asia and Africa and even the U.S. Chinese companies are also more willing to customize the product for local needs. When Alibaba sells Ali Cloud in the Middle East, it will make changes in order to comply with local regulations or user preferences. But Amazon’s cloud would be reluctant to make a special version for the Middle East.
When the Chinese companies have good enough technologies and greater flexibility to customize, and greater attention to markets and users, this is when Silicon Valley needs to wake up. Otherwise they will lose business in different parts of the world.
AZ: You talk about how you took some of your colleagues on a trip to the Valley. And you noticed some things …
KL: Yes, actually, I have always taken entrepreneurs to visit Silicon Valley. The first trip was, I think, about five years ago. At that time, it was like seeing Mecca. People were saying things like “Wow, that’s Google!” and “That’s Tesla!” They were taking pictures and learning things, and didn’t notice some of the flaws of Silicon Valley. But on the last trip we took, which was about a year ago, people came back with really sobering remarks. They said, “Well, Silicon Valley is very innovative, the people were very smart, but they didn’t seem to work very hard.” That was the summary. The Chinese entrepreneurs work eighty to one hundred hours a week. When they came to Silicon Valley, yes, there was some time for relaxation. But they really wanted to get a nine p.m. meeting with some people, and no one would take that meeting. They wanted to have meetings on weekends, and very few people were open to that. They went to these companies they worshiped, Google and so on, and they found the parking lots were empty at six or seven. They found that astounding. When you’re on top of the world and have such great technologies, how could it not excite you to work incredibly hard? That drive seemed missing in Silicon Valley.
AZ: It wasn’t that way in the nineties and the early two thousands, when they were struggling. What happens when we get some success?
KL: I have worked at Apple, Microsoft, and Google. What I saw within those three companies was that when the employees were in their twenties, they were single, they were passionate and excited, the company was growing fast, and they wanted to experience that high. But when they got into their late thirties and forties, they had families and wanted work-life balance, and the companies were maturing. The stock wasn’t going up as much, and the environment wasn’t so exciting. The larger company had bureaucracy and politics, and that wasn’t so fun, so working fifty hours a week seemed good enough. Within each company, that was the case. But I do wonder, where is the next Google that works as hard as Google did in its first five years? I think that may be slowing down, too.
AZ: Why does this not happen as much in China?
KL: Well, eventually, this may happen in China. I don’t think, personally, that it’s healthy for people to work one-hundred-hour weeks. People will burn out. But I think the Chinese people, currently, resist burnout through sheer determination. They’re so determined because many of these entrepreneurs come from families that have been poor for twenty generations. And their parents expect, and hope, and push them to be successful, so that it can lift the family out of poverty and their village out of poverty. Obviously, China doesn’t have a lot of poor regions anymore, but there are still poorer villages that have high expectations. And when it’s a single-child family, that single child has two parents and four grandparents. Those six people’s one hundred percent expectation is on you to make good for the family name and bring people into the middle class. That expectation is huge.
Also, China was poor not too long ago—thirty years ago, twenty years ago. When Deng Xiaoping opened up China forty years ago, he said, “Let some people get rich first.” And this was in a country where everybody was poor. People were just rushing through the gate, hoping they could be in that first group of people, or second group of people, who made it. This became a single-minded goal for essentially all of the Chinese. That’s why there is a single-minded determination. Now, give it another thirty or forty years, as the middle class emerges, and that might be gone. But I think it’s still here, and here to stay for another decade or two.
AZ: We’ve been talking a lot about America versus China, or America and China. Why has it been so hard for U.S. companies to work within China? How has language created this breakdown in understanding?
KL: Well, the cultures are quite different, and the government regulations are different. And I think, in the early days, when China didn’t have its own domestic companies, the American companies were not only welcome but quite successful. Those were the days when IBM went in, HP went in, and Intel went in. Those were the days when Procter & Gamble and General Motors were quite successful. But over time, the Chinese companies developed their own capabilities, and the American companies didn’t customize for China, or didn’t customize enough. Then the Chinese products eventually became as good, or in many cases not as good but so much cheaper. And then, in some cases, they became better, and that changed the Chinese people’s mindset. Yet the American companies didn’t flip over to the new way of seeing the world. They were still thinking their product was better and should dominate, and they didn’t make enough exceptions. As the Chinese companies improved, the window of opportunity closed. Now there are almost no examples of American success in China, so many companies think of this as a hopeless task. It didn’t have to be, but now it really might be.
AZ: As we move into this age of A.I., where do you see some of the biggest security concerns?
KL: They’re multifold. One is that someone could hack into an A.I.’s parameters and cause the A.I. to fail. Because A.I. is a big math equation with lots of numbers, it’s not the code that gets hacked but the numbers. Imagine a hacker messes with a bank’s numbers: none of the software is changed, but a loan is given to someone it shouldn’t be given to. Or imagine, because of hacking, a face that should be recognized as a terrorist’s and lead to an arrest at the airport isn’t. Or imagine someone putting a few stickers on an automobile, as an engine of terrorist attack, so that those stickers cause the automobile to be invisible to other autonomous vehicles. Think how dangerous that can be. That’s one set of security issues: hacking into the A.I. itself.
A second set would be taking over the A.I. Just as hackers have taken over PCs and phones, they could take over A.I. Imagine when all of our cars are autonomous vehicles, and someone hacks in and turns them into cars that don’t avoid people but hit people. Imagine what a weapon of mass destruction that could be. A.I. security is also about deepfakes: How can you tell what is real and what is not? And finally, the ultimate security issue is autonomous weapons without humans in the loop. If we allow A.I. to pull the trigger, imagine how many triggers there can be, and also how smart the targeting can be. We’ve seen a video on the internet of a drone that tracks someone down using face recognition and then basically pulls the trigger, becoming a personal assassin. That isn’t arrestable—you can’t go arrest a drone. All of these issues are significant and real concerns.
AZ: There is this growing mistrust. I mean, in the last year, the tech backlash has been enormous in this country. And the government, at least here, hasn’t totally figured out how to step in. How will governments play a role in A.I. policy moving forward?
KL: Well, governments clearly have to regulate and have serious punishment for the most egregious behavior. On the other hand, they shouldn’t overdo it.
If a company sells user data to another company, that should be treated as a serious felony in order to prevent it from happening. Autonomous weapons and drones also have to be regulated. We probably just can’t let drones fly into cities. I wish there were a better solution, but that might just be what we need to give us the safety we need. Regulations [in A.I.] are needed, but at the same time, technology is often the best thing to combat technology misuse. So, can we protect a user’s privacy by coming up with technologies that transform and morph personal information into nonreversible, non-interpretable strings? Your name, address, and credit card number no longer look like what they are, but they can still be used by A.I. to make better-targeted recommendations for you. Can those kinds of technologies be used? With deepfakes, human eyes cannot tell the real video from the fake video, but A.I. forensics can.
Developing these technologies and using them in tandem with regulation is the way forward: just as viruses are fought with computer-security software, I think the same will be true here. It’s very dangerous if we think, “It’s all Silicon Valley’s fault, it’s all A.I.’s fault, it’s all technology’s fault; let’s use regulation to stop technology development.” That would be the wrong thing to do. I think technologists and policy makers should work together to use the combination of regulation and technology to combat misuse.
AZ: You have this incredibly optimistic perspective on technology. Some people think this is just a technological arms race for hearts and minds without any real substance. Do you believe that we actually have a shot at improving our lives and wellbeing, or that ultimately we’ll steamroll it with surveillance capitalism?
KL: I think we clearly have a shot because A.I. is a neutral technology. It’s how we humans use it.
My optimism comes from past technological revolutions. They’ve all led to good and bad uses, but the good ones outweigh the bad ones, and we find ways to control the bad ones and the misuse. I think there is historical evidence that human wisdom will eventually prevail. I also think that people’s beliefs about A.I. are sometimes shaped by what they see in science fiction. Science fiction always makes A.I. the villain, and makes the villain full of desire to control human beings, when A.I. is just a tool with no desire. I think we need to educate everyone to be aware that A.I. is just a really, really powerful tool. But it’s a tool that we control nevertheless. And when we start to see that, some of the fears will hopefully begin to subside.
AZ: In contrast to what you’re talking about, in the last year, we’ve heard from really influential people, like Elon Musk, saying that A.I. is going to take over humanity.
KL: A.I. is certainly not going to take over humanity. There are many influential people who are not experts in A.I. and who, understandably, draw conclusions, but they are not right. When you see headlines every week showing A.I. beating people at Go, beating doctors at lung cancer diagnosis, beating people at computer games, and doing better than people on standardized tests, you could easily draw an exponential curve in which the A.I.’s I.Q. is increasing. But what is really happening is that each is just one technology breakthrough applied to one domain at a time. Deep learning is being applied to a number of domains, and people are clever at picking domains in which it will do well. It’s not at all anywhere close to doing what a full doctor can do. It’s arguably never going to be able to do that, because that requires creativity and compassion, and those A.I. does not have.
I think these extrapolations based on a growing number of applications are not the same as an exponential increase in true technological capability. That hasn’t happened. We’ve had one big breakthrough in technology, and we have yet to see another. People who draw conclusions based on the applications are just too optimistic and extrapolating too fast about the capabilities of A.I. Now, having said that, I do think the other dangers that we talked about earlier—privacy, and security, and bias, and carelessness, and errors that could cause us to think extreme thoughts—are all serious issues, but they’re not existential issues.
AZ: And my spellcheck doesn’t work on my phone.
KL: [Laughs] Yeah, yeah, exactly.
AZ: Some basic A.I. stuff is still very early.
KL: Yeah, spellcheck should actually improve a lot. Things that we thought we had ninety-five percent right, like OCR [optical character recognition] and spellchecking, should get to 99.999 percent with A.I. and deep learning. That will get fixed.
AZ: You were born in 1961. The seventh child. And your siblings were far older than you. You were, like, the ultimate baby in the family.
KL: Yeah, my siblings were eight to twenty-five years older than me.
AZ: What were your earliest days like? What do you remember from that time?
KL: Well, my strongest recollection is of my mom. She really wanted a boy. She had all girls, and she would, on the one hand, give me whatever I wanted—whatever I wanted to eat, whatever toys I wanted. She spoiled me. On the other hand, she was extremely demanding. She insisted that I become No. 1 in every class. She would watch me as I wrote Chinese calligraphy, and she would throw away the paper when I didn’t do a good job. She would make me memorize ancient Chinese poems. If I got one word wrong, she would throw the book out of the room and make me redo it. It was a combination of being spoiled materially and being pushed incredibly hard academically.
AZ: You made this amazing decision, when you were five, that you thought you were ready to move on in school.
KL: Yeah, I wanted to skip kindergarten.
KL: It was a very small decision, but my parents were quite enlightened, too. Most parents would say “This is silly, don’t do it” or “Okay, do it.” What they said was, “In public schools, you can’t skip a grade, so the only way you can do it is to go to a private school, and they have an entrance exam. So why don’t you study for it? And if you pass it, then you can go. If not, you can’t.” That had the impact of turning the decision back to me, and making a five-year-old feel like he could control his own destiny. I think that was very empowering. Especially in Asian culture, that is quite unusual. I think that was actually a very important thing for my life, where I could be the driver of my life and control my own destiny.
AZ: What happened after you took the test?
KL: I did quite well. I got in.
AZ: There’s one story in particular from your childhood that I read about and really loved: your strategy for staying up late at night. I’d love to hear you talk about it.
KL: My strategy in college? For staying up?
AZ: No, no, when you were a little kid.
KL: Oh, oh, when I was little. [Laughs] Right. Yes, I hated sleeping, so …
AZ: Do you still hate sleeping?
KL: No, no, sleeping is the best way to help your immune system recover. But when I was little I just wanted to sleep less. So, of course, my parents would insist that I go to bed at nine, and that was way too early. To fight back, one night I went to all the clocks in the room and turned all the clocks back by an hour. That gave me an extra hour to play. It also made everyone late to work or school the next morning.
AZ: At age eleven, you made a decision to leave China. Tell me about that.
KL: Yes, my brother—who went to the U.S. to study—went back to Taiwan, and he saw that Asian exams and education were much more about rote learning, and that the American system was much more creative. So he told my parents that I could stay with him in Tennessee, and he would take care of me, and then I would be able to enjoy the best education system in the world. Again, my parents said to me, “Kai-Fu, do you want to do it? If you do, you can.” And I said yes. So I moved as an eleven-year-old to study in the U.S.
AZ: In Oak Ridge, Tennessee. What was that like? That must have been totally shocking.
KL: Oak Ridge was actually a town with a lot of scientists. So academically—
AZ: Why is that?
KL: Because of the Oak Ridge National Labs. The Manhattan Project was actually built in Oak Ridge. And then, after World War II, Oak Ridge grew and had labs in other areas, like biology and biochemistry.
Both my brother and sister-in-law were there, but nevertheless it was the South. Not everybody was a scientist. Also, I went to a Catholic school, and it was very eye-opening to me, because not only was the education great, but kids were allowed to pursue their individual dreams and choose some of their own classes. Which was unthinkable coming from Asia. Also, the principal of my school saw that I didn’t know a word of English. And she gave up her lunch hour and taught me first-grade English. I was in seventh grade, but she taught me first-grade English. After a few months, I was able to follow along in class. I had a great math teacher, and she really convinced me that I was a math genius. I’m not, but I thought I was, and that gave me an interest and a passion for working on math. In fact, next week I’ll be back in Oak Ridge visiting my school and my teachers.
AZ: Oh, fantastic. Did you notice something early on when you got there that actually gave you an advantage from the early education in Taiwan?
KL: Well, yes, Taiwan’s education was very much rote learning, so we were very good at memorizing things. Maybe that’s why some of my teachers thought I was a genius in math. You know, they would give a problem like, “What is one-seventh?” And in my head I could come up with the answer because I had memorized it. One-seventh expressed as a decimal is 0.142857. I can still, to this day, remember it because it was pounded into my head by my math teachers in Taiwan.
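[Editor’s note: The digits Lee memorized are the start of a repeating six-digit cycle, recoverable by ordinary long division. A few illustrative lines, not anything Lee himself wrote:]

```python
def decimal_digits(numerator: int, denominator: int, count: int) -> str:
    """First `count` digits after the decimal point, by long division."""
    digits = []
    remainder = numerator % denominator
    for _ in range(count):
        remainder *= 10
        digits.append(str(remainder // denominator))
        remainder %= denominator
    return "".join(digits)

# 1/7 = 0.142857 142857 ... — the six-digit block repeats forever.
one_seventh = decimal_digits(1, 7, 12)
```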
AZ: You were also really creative in business by a young age, which is interesting. Tell me a bit about your—there were two stories that I loved, the math camp story and the T-shirt company story. So start at the math camp. What happened at math camp?
KL: The T-shirt company—
AZ: That was part of the Junior Achievement—
KL: Junior Achievement.
AZ: I want to hear a bit about the math camp first.
KL: The math camp in Chicago?
KL: What story did I tell? [Laughs] Where I stole passwords?
AZ: Yeah, exactly.
KL: [Laughs] Oh, okay. Yeah, yeah, so the first time I got to play with computers was during my high-school years. I went to a University of Chicago math program, and we got to play on mainframes. And then I learned simple programming. Then I wrote a program to guess other people’s passwords. It’s a very simple program—you just iterate through combinations of characters. Most people weren’t that careful at the time. If it was, like, a three-character sequence, you could guess it. I guessed the password of a friend, I hacked into his account, and I made some funny posts that were embarrassing for him. [Laughs]
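[Editor’s note: The exhaustive guessing Lee describes can be sketched in a few lines of modern Python. This is a reconstruction of the idea, not his original mainframe program; the alphabet and target password below are made up.]

```python
from itertools import product
import string

def guess_password(check, alphabet=string.ascii_lowercase, max_len=3):
    """Try every combination of characters up to max_len until the
    check function accepts one. Feasible only for very short, simple
    passwords, which is exactly Lee's point about that era."""
    for length in range(1, max_len + 1):
        for combo in product(alphabet, repeat=length):
            candidate = "".join(combo)
            if check(candidate):
                return candidate
    return None

# Stand-in for the mainframe's login check:
secret = "cat"
found = guess_password(lambda p: p == secret)
```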
AZ: In high school, you had this opportunity to take part in a Junior Achievement club, which is, I guess, the first time you properly did any business?
KL: Yes, I think Junior Achievement is a great thing. I’m on Junior Achievement’s China board now, to help them there as well.
The entrepreneurial aspect was the best part of Junior Achievement. A bunch of kids got to build a company, and I was in there twice. The second time I was the president of the company, and then we decided to make these T-shirts to complain to the school about reduced lunch hours. We made these really cute T-shirts, with a picture of a dog that’s really long, that said “Longer Lunch.” The T-shirts sold very well, so the company made a big profit and became the Company of the Year in the region.
AZ: Did it give you sense that you wanted to take part in business later in your life?
KL: Yes, and it gave me the basic, rudimentary workings of a company—what shareholders do and how you can create shareholder value, how to do marketing, how to do sales, how to self-organize. I thought that was a lot of fun, so I wanted that to be a part of my future.
AZ: And you graduated. You were “Most Likely to Succeed” in your high school, right?
KL: [Laughs] Yes, right.
AZ: And you applied to twelve colleges.
AZ: What did you learn from that process? Where did you end up eventually?
KL: Well, I applied to colleges in a very Chinese way. You take the twelve top-ranked schools, apply to them all, and then go to the highest-ranked one you get into. It’s unimaginable that people would do that anymore, but that’s the way it worked in China. When you apply to college there, you actually take an exam, and based on your score, the highest-scoring people go to the No. 1 school, the second set goes to the No. 2 school, and so on. So that’s what I did. I got rejected by the No. 1, No. 2, No. 3, and No. 5 schools, and I ended up going to Columbia in New York City, which was great for me.
AZ: Yeah, why’d that turn out to be such a good place for you?
KL: Well, I chose Columbia just because of luck, because I didn’t get into the other schools that were ranked higher. Columbia was the highest-ranked school that I got into, but what turned out to be great was the exposure to New York City, which is as different from Tennessee as you can imagine. Also, Columbia had a program that required reading classics in philosophy and literature. The Contemporary Civilization class gave me a different foundation than a normal engineering education would have given me. It made me think more about issues like why we exist, how to deal with problems, and our responsibilities. So when I came to artificial intelligence, I didn’t just see it as an engineering problem, or a solution, or a product, or a way to make money. I thought about its implications, about job displacements, and about what our responsibilities are. How can it be a strong calling to our inner selves, and how would it help the progress of human beings? Those thoughts would not have been possible had I not gone to Columbia and read those two hundred classics. Which I really hated at the time, but they really sank into me, and became a part of me, and caused me to think with my left and right brain, to give speeches and write books in addition to working on technology, products, and business.
AZ: Which is really half of your focus: the humanities.
KL: A lot of people ask me about college choices and careers, and even though they might want to go into engineering or science, I always encourage them to consider a school like Columbia or Harvard that allows them to really balance in the humanities. Those are the things that will stay with you. You know, my older daughter wanted to be a fashion designer, but we discussed it and compromised. She ended up going to Columbia and took all the classes that I took, and then she went to Parsons. So she did both, and I think she now feels her most wonderful years were at Columbia.
KL: And her closest friends are her Columbia friends. The learning that she had at Columbia is now instrumental to her always [being] able to come up with art in a timeless way. As opposed to just [being] a very good designer or drawer of pictures.
AZ: Then you went to Carnegie Mellon for your Ph.D. work. What did you work on at Carnegie Mellon?
KL: Speech recognition. Carnegie Mellon is an amazing school for artificial intelligence. I chose speech recognition because I thought, at the time, given the state of the art and the cost of computing, it was something I could demonstrate tangible results in. It was important to me that my Ph.D. thesis was not just a theoretical piece of work, but that I could have something to show that could have practical use. At the time, I thought computer vision was harder because it required dealing with many more dimensions, while speech recognition was a lower-dimensional signal, and that was something I could really do.
I studied under Raj Reddy, who was a pioneer in speech—
AZ: Your mentor.
KL: Yeah, my mentor.
AZ: And then, after graduating from Carnegie Mellon, they wanted you to stick around.
KL: Yes, my Ph.D. thesis at the time was a breakthrough. It led to much better results than other speech recognizers. So Carnegie Mellon made an exception. Usually schools don’t want their graduating Ph.D.s to stay, because cross-pollination is better for academia. But they wanted me to stay, so I did.
AZ: And you could have gotten tenure, you had a cushy situation, but you made another choice.
KL: Yes, I went to Apple in 1990, two years after staying at Carnegie Mellon to teach.
AZ: What was it like at Apple at that time? I remember hearing it wasn’t the Apple that people often think of.
KL: Not at all. The first thing people ask me about my time at Apple is, “Did you work with Steve Jobs?” And my answer is, “I worked there between Jobs.” [Laughs] I was there after he left, and I left before he came back. It was a very dark period at Apple. Most people thought it would go out of business. Not during the time I joined, but I think it started getting in trouble in late ’91 or ’92. And then that trouble continued until Steve came back. Apple had this conflicted strategy of wanting to preserve its roots of excellence in design and also go for market share. Those were not compatible strategies. The company’s DNA was really built around the former, and twisting it to do the latter turns out to be very, very difficult.
AZ: But you had this beautiful situation. You were in this secret office.
AZ: You were largely overlooked, right? There were two significant moments during those years that I would love to hear about: how you got to the TED Talk and Good Morning America.
KL: One led right to the other. When I started, I was working on a secret project meant to be the Macintosh III. That never shipped, so I was sent back to the Advanced Technology Group, where I led the Speech and Natural Language Groups. Toward the end of ’92, John Sculley, who was CEO at the time, was getting pressure from the board to sell the company, because Apple was doing pretty poorly. The future prognosis was not good. So he decided he needed to showcase that the company had leading technology in some number of areas and use that as a selling point, to sell to companies like Philips, AT&T, Sony, and others. I became one of the chief demonstrators for John. I would go to these companies and demonstrate technologies. He also wanted to demonstrate them publicly so as to project Apple’s image, so it could sell for a better price.
When he was invited to give a TED Talk in Monterey—I think it was in ’92 or ’93—he brought me along. [Editor’s note: It was in Feb. 1990 that Sculley spoke at TED2; more information about the conference can be found here.] He gave his talk, and then I demonstrated speech recognition working on the Mac, which at that time was unthinkable, because speech recognition required a lot of computational cycles and the Mac was not very fast. But we built special hardware, a DSP-based board, that accelerated speech. It worked speaker-independently and continuously, and it responded in real time. We built very nice demos of speech controlling a number of functions on the Mac. For example, it could write checks to people, and program a VCR, and schedule meetings. It was a very compelling demo. At TED, there was sort of a who’s-who, including Professor Marvin Minsky from MIT, who was fascinated and was quoted in The Wall Street Journal about my demo. The Wall Street Journal also wrote a front-page article about Apple’s breakthrough in speech. That caused the stock to go up two or three points, and then Good Morning America saw that and invited John and me to go on the show to demonstrate the speech recognition.
AZ: Live TV, with a demo—
KL: It was live TV with a demo that crashed a lot.
AZ: So how did you deal with this situation?
KL: Well, John said, “This is live TV, so you can’t rerecord. This thing must work.” And I told him there was about a ten percent chance of crashing, because it was a board that we built, and the board was not reliable. And he said, “Well, a ten percent chance of crashing, that is too dangerous. We should just cancel it unless you can get it down to one percent.” Of course, we couldn’t change the board; it was too late. So I said we could do it, and what we did is we brought in two computers with two boards. And then there was a manual switch, so that if the first computer crashed, a human would switch over to the second computer. If one computer has a ten percent chance of crashing, the chance of both of them crashing is one percent. So I got it to what he wanted.
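[Editor’s note: The arithmetic behind the two-computer trick assumes the machines crash independently: with each at a ten percent chance, both failing is 0.1 × 0.1, or one percent. It generalizes to any number of hot spares. A small illustrative sketch:]

```python
def failure_probability(p_single: float, n_machines: int) -> float:
    """Probability that ALL n independent machines crash, i.e. the
    demo fails even with manual switchover between them."""
    return p_single ** n_machines

# Lee's Good Morning America setup: two boards, 10% crash chance each.
p_demo_fails = failure_probability(0.10, 2)  # one percent
```

A third backup board would have brought the risk down another factor of ten, to 0.1 percent, under the same independence assumption.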
AZ: [Laughs] Which is brilliant. And then you went to Microsoft.
KL: Yes, yes, at Apple it was very challenging because Apple was not making money. There were a lot of layoffs. And then I went to SGI [Silicon Graphics Inc.] after Apple, which also ran into similar problems. Drawing on my two failed company experiences, I concluded that Microsoft was the only company that I could work at, because other companies were all getting killed by Microsoft. If my ideas and research were to see the light of day, it would have to be through a platform company like Microsoft. So I went to Microsoft, and a lot of people in Silicon Valley couldn’t really believe that, because Microsoft at the time was viewed in the Valley as the evil empire.
AZ: Yeah, and it was also sort of the dark days of A.I. This was not a popular area that you were in. This was pre-deep learning.
KL: Yes, yes. I recognized, from trying to make advanced technologies work at Apple and SGI, that maybe some of these technologies needed more research before they could see the light of day. Microsoft offered me the chance to start their research lab in China, which would give me more time to work on the same technologies, with a longer-term horizon before deploying them in a wide way.
AZ: This was the beginning of you being a Chinese executive for an American company in China.
KL: That’s right.
AZ: And what was that like?
KL: Well, it was ’98, and China was really quite backwards. But I had gone back to China in 1990, and I was really impressed with the young students there. They were working incredibly hard. They would actually study under the streetlight when the dorm lights went out, and I felt—you know, I was ethnically the same as them. I was just luckier that I got to the U.S. and studied in the best schools. These kids were as smart as me and worked harder than me, and deserved more. I felt my working for Microsoft, a brilliant brand in China, could attract some of the smartest people and help them realize their potential. Microsoft Research China, which was later renamed Microsoft Research Asia, became a talent magnet that attracted somewhat unpolished, really smart, hardworking young people. We basically helped retrain them, because they weren’t trained well by the Chinese education system at the time, which was very backward. We essentially said, “Forget everything you did in your Ph.D. That was not a useful piece of work, and we are going to retrain you.” Many of the people we hired and trained are now the leaders of A.I. in China.
AZ: Wow. Then you went to Google.
KL: Yes, yes. I was at Microsoft Beijing for a couple of years, moved back to Redmond for a couple of years, and then I saw that Google was starting a China effort. I was fascinated with Google, like everyone was. At the time, there was a joke that if you didn’t get invited to Google for an interview, then you weren’t really all that smart. They were going after the smartest people. I wrote Eric Schmidt a message that said, “I heard you’re going to China, and that might be something I’m interested in.” Eric invited me for an interview, and then I got a job offer.
AZ: What was Google like in China at that time? I mean, you talk about it like it was the greatest company you ever worked for.
KL: Well, I still think Google is one of the greatest, if not the greatest, companies in the world. It has amazingly smart people and a fantastic culture. Eric and Larry [Page] and Sergey [Brin] gave me a lot of latitude. I had wanted to build Google China because it would broaden me from a research and technology background into a business executive. I would take the core technologies that were built in Silicon Valley, have my own engineering and product team, build new products that would win back the market share we had lost, and also build sales, marketing, business, and investment teams. It would be almost like a functioning company, but built on top of a brilliant platform.
AZ: Sounds amazing.
KL: It was an amazing job for me for a while.
AZ: So why did you leave?
KL: Oh, well, there were a lot of reasons. But I think the biggest, most compelling reason was that I was at Google China for four years, and toward the last year and a half, I lost most of my staff. They had left largely to do startups in China, and that really got me thinking: all these smart people—Google trained them well, they learned a lot about technology, the future, vision, what the U.S. is doing—and then they’re doing startups in China. There must be a market emerging and an entrepreneurial ecosystem. I talked to many of the people who had left. I was really lucky to have hired the smartest people, because Google was such a big brand at the time. It still is, but it was even bigger then. I saw this excitement in their eyes. They saw that China was going to be the next-biggest market and that they needed to leave now, not a moment later, in order to capture this opportunity.
I started looking into the opportunity, and it was indeed very exciting. I thought, “Hey, I want to do that, too, if this is going to blossom into the largest, most exciting market. I want a piece of that excitement and action.” And I was too old to start my own company, and probably not hands-on enough. But I thought I could be an investor, an angel, or a venture capitalist, and help nurture and help young people realize their potential and build great companies. So that was the main reason I left, in 2009, to form Sinovation Ventures, which is a venture capital firm.
AZ: Right, which you run now. And it seems that your relationship to the division of work and life has changed over time.
AZ: Was there an incident in your life that made you shift perspective on your time and how you’re spending it?
KL: Yes, absolutely. I had the Chinese work ethic. I worked eighty hours a week, easily, for most of my career. I was obsessive. Not just when I was at work, but when I went to bed. I would wake up automatically at one o’clock or two o’clock, and then automatically again at five o’clock. Then I would go check my email and make sure that, when I worked for Microsoft and Google in China, I was able to respond instantly to my bosses or my colleagues in the U.S. I would also be sending a message to my team: “Wow, our boss works so hard. We have to work hard, too.” I felt that that was very exciting and motivating, and being able to use time incredibly efficiently was very important to me. So I worked incredibly hard, and very efficiently.
I didn’t give my family the time they deserved. When my first daughter was born, I almost missed her birth because I had a big presentation to John Sculley at Apple. The big thing that changed me was when I got lymphoma, about six years ago. It was during the Sinovation years. I was working very hard—it was the first time I had a business. It was very exciting. But being diagnosed with Stage 4 lymphoma really made me rethink all the things I had strived to achieve. With possibly only hundreds of days left in my life, if the treatments were not effective, I realized that working hard was the last thing I wanted to do. I would want to spend time with the people I love. I regretted that I hadn’t done that. I wanted to work on things that I love and with people I love. Going back and doing more work was the least important thing. I had my priorities messed up. I had turned myself into a machine that was just running and running and running, every day the same.
I promised myself that if I got well, I would change my ways. Fortunately, my chemotherapy was effective, and I returned to work. I still work hard, but not eighty hours a week anymore. I also, most importantly, changed priorities so that when my family needed something, I would drop everything at work to attend to them, whether it was just, you know, a personal issue, or someone needed my help, or a graduation, or a birthday. Then, when there were no personal emergencies, or personal priorities, I could attend to work. Before, I would get all the work done, and then, when I had a moment, I would spend time with my family.
I think the thing is not that you have to work twenty hours a week now, and spend all the time with family and friends, but rather know when it’s critical and important to be by their side. When that happens, you need to make that the first priority and make work the second priority. That was something I learned the hard way. Also, during my recovery, my family—my wife, kids, sisters—they really took incredible care of me. And I saw their selflessness in how they treated me and how I was very cold and cruel in putting work as a top priority and treating them as a second priority. Those combinations made me decide I need to change my priorities. I still love my work—I still work very hard—but I think I achieve a much better balance now.
AZ: Of course you didn’t come to this the day you got your diagnosis. Having this period of time, in treatment, is maybe where you—
KL: Yes, yes, when I got my diagnosis I went through the usual denial, anger, and eventually acceptance. It was after acceptance that I had time off. Because during my treatment, my partners at work didn’t want me to spend any time at work. They wanted me to focus on my health. That gave me time to think, and rethink, and come to this realization.
AZ: Back to A.I.: What do you think needs to change about our mindset about work and productivity as we move into this new era in A.I.?
KL: I think my own illness made me realize that during the Industrial Revolution we became programmed to work hard, because the Industrial Revolution replaced artisan jobs with assembly-line jobs. It was shrewd to convince people in assembly-line jobs that if they worked hard, even though it was routine work, they would make a better life for themselves and their families. That was the kind of thinking that led to the eighty- or one-hundred-hour workweek and the kind of routine that I had, of waking up at two a.m. every morning. I became a machine, and when I saw for myself that A.I. could do all routine jobs, it was really a double wake-up call for me. I had made myself into a machine, and A.I. is the machine that will do the work meant for machines. People are meant to do something else. The epiphany is that we should really be happy, ultimately, that A.I. will take care of all routine jobs and liberate us from having to do them. We should do what we are intended to do as humans who inhabit the Earth. Whether we were put here by a maker or evolved ourselves, that’s our humanity. Our specialty is in our creativity, our ability to think strategically, our connections and compassion with other people, and our love. These are the things that I should do, and these are the things that I should help other people realize they should do. Not the routine work. Finding a way to let go of the routine work, finding what you love, and embracing that creativity or compassion—I think therein lies humanity’s hope of not only surviving A.I. and coexisting with A.I., but finding a better definition or meaning for humanity going forward.
AZ: Which was illustrated for you when you were developing an elder-care platform. Can you tell me a bit about what you noticed in that process?
KL: An entrepreneur came to me who had essentially developed a robot for taking care of the elderly. He noticed that, despite all the fancy functions he had put in, the elderly primarily used one function, which was customer service. The customer service person would come up on video, on the screen on the robot, and say, “How may I help you? Do you have trouble with your machinery?” And the elderly would say, “Oh, let me tell you about my kids.” Or, “Why didn’t my son call me today?” What we found is that people don’t want robots to take care of them. They want people to take care of them. They want their children, if possible; if not, their friends; if not, another human. The belief that robots can do compassionate work is still very far off. Maybe it’s something that will never happen, because I think we thrive on that human connection, that human touch.
AZ: There is this resounding fear about job loss, and what I love about what you put forward in A.I. Superpowers—I think what everyone got so excited about—was that, yes, there will be job loss, routine job loss, but like you’ve been saying, we’re creating an environment where empathic jobs and creative jobs will rise up. You have this concept of a Social Investment Stipend. Can you tell me a bit about that?
KL: Yes. Because A.I. will create a lot of wealth, and that wealth will concentrate in super-A.I. companies, it could be taxed. People are talking about whether that tax money could be given out as universal basic income to help people, basically, find a new beginning for themselves. But I propose something different. If you just tax the rich and give everybody cash, and hope that those who lost their jobs will find the retraining they need and get back on their feet, I think that’s naïve. Because it isn’t obvious to everyone which jobs will be displaced by A.I. and which will not.
It’s important that we provide targeted guidance on which professions you should train yourself for that will be here to stay. Rather than giving money to everyone, I think there should first be subsistence money offered to everybody, but the people who work hard at retraining themselves with a new skill are the ones who should get the reimbursement. In other words, if you’re laid off because you’re doing some repetitive job, as an assembly-line worker, factory worker, warehouse worker, cashier, customer-service agent, or so on, you will get an extra reimbursement if you take the time to retrain yourself in a skill set that will not be displaced by A.I. For example, nursing, or elderly care, or physical work like robot repair or aeronautics repair. Offering retraining and giving people a chance to find a job that will not be displaced is one aspect of the stipend.
Another aspect may be for people who don’t want another big, long career, but who just want to contribute positive, compassionate energy to society in ways that help other people. I think we should think not just about jobs as something that could be compensated but also volunteer work—people who want to spend time in an orphanage or in elderly homes. Just spending time with people, not necessarily taking care of them or bathing them, but just spending time, chatting with them, and just being there. That kind of volunteer work has, in many countries that I’ve seen, given new meaning back to people, because they feel like they are contributing positive energy to the world. They feel more fulfilled than in routine jobs, because they are bringing value to other people. Why can’t we also pay people for this kind of role of being a volunteer?
AZ: Which is beautiful and answers your large question of why we exist. We exist to create and love.
AZ: What I want to leave on is a motto of your father’s that I was very struck by: “Knowing the sun will set soon, the old horse runs faster without being whipped.” How do you feel about that right now, at this moment in life?
KL: Actually, I feel that spiritually, but I don’t feel it as an urgent thing. I feel that, as someone who has worked many, many years and gained a lot of useful experience, it would be a pity if I didn’t share that over the next decade or two in which I’ll still be working. But I think my father really felt not just an importance, but an urgency, and that caused him to work incredibly hard, even when he was in his seventies and eighties. Having had lymphoma, faced it, and realized the importance of a balanced life, I think I should pick the highest-priority, most valuable things I have to share with the world, and write a book once in a while. But not feel like I need to be doing that a hundred hours a week. I share some of my father’s desire to use the limited time to contribute to the world, but I want to do it in a prioritized, important way. Not just an urgent, hours-worked kind of way.
AZ: That’s beautiful. Thank you so much for coming on.
KL: Thanks for inviting me.
This interview was recorded in The Slowdown’s New York City studio on April 25, 2019. The transcript has been slightly condensed and edited for clarity. This episode was produced by our director of strategy and operations, Emily Queen, and sound engineer Pat McCusker.