Eric Daimler: AI, Robotics & Consciousness | SALT Talks #24

“Just because Artificial Intelligence can solve problems doesn’t mean that every problem will be solved with algorithms.”

Dr. Eric Daimler is a leading authority in Robotics and Artificial Intelligence. Eric served under the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, with the sole authority for driving the agenda for United States leadership in research, commercialization and public adoption of AI & Robotics.

“Don’t confuse a clear vision with a short time horizon.” Take the Jetsons, for example. We had a clear vision of a robot who would do all these things. Nowadays, we have a Roomba. AI will be most successful when deployed in very structured environments.

On AI and consciousness, “this will certainly not happen in the next 10-20 years, if ever. We don’t even understand our own consciousness.”


SPEAKER


Eric Daimler

Founder & CEO

Conexus

MODERATOR


Anthony Scaramucci

Founder & Managing Partner

SkyBridge

EPISODE TRANSCRIPT

John Darsie: (00:07)
Hello everyone. Welcome back to SALT Talks. My name is John Darsie. I'm the managing director of SALT, which is a global thought leadership forum at the intersection of finance, technology, and geopolitics. SALT Talks are a series of digital interviews we've been doing during the work-from-home period, in lieu of our global conference series.

John Darsie: (00:25)
Just like we do at our SALT conferences, our goal is twofold. It's to provide our audience a window into the minds of subject matter experts across the disciplines we listed, as well as to provide a platform for big, world-changing ideas. That's embodied by no one more than our guest today, Dr Eric Daimler, an expert in artificial intelligence.

John Darsie: (00:48)
Dr Daimler is a leading authority in robotics and AI, with over 20 years of experience as an entrepreneur, investor, technologist, and policy maker. He served under the Obama administration as the Presidential Innovation Fellow for AI and Robotics in the executive office of the president, as the sole authority driving the agenda for US leadership in research, commercialization, and public adoption of AI and robotics. He serves on the boards of WelWaze and Petuum, the largest AI investment by SoftBank's Vision Fund.

John Darsie: (01:19)
His newest venture, Conexus, is a groundbreaking solution for what is perhaps today's biggest information technology problem: the data deluge. As founder and CEO of Conexus, Eric is leading the patent-pending CQL platform, founded upon category theory, a revolution in mathematics, to help companies manage the overwhelming challenge of data integration and migration.

John Darsie: (01:45)
He served as an assistant professor and assistant dean at Carnegie Mellon's School of Computer Science, where he founded the university's entrepreneurial management program and helped to launch Carnegie Mellon's Silicon Valley campus. He studied at the University of Washington in Seattle, Stanford University, and Carnegie Mellon University, where he earned his PhD in computer science.

John Darsie: (02:06)
A reminder. If you have any questions for Dr Daimler during today's talk, enter them in the Q&A box at the bottom of your video screen. Now I'll turn it over to Anthony Scaramucci, who's the founder and managing partner of SkyBridge Capital, as well as the chairman of SALT, to conduct today's interview.

Anthony Scaramucci: (02:23)
Doctor, great to be with you. We had Kai-Fu Lee a month or two back, talking about AI. People in our industry, frankly, some of them are just a little confused about it. There's a whole big bandwidth of AI, from, let's say, artificial intelligence to machine learning and robotics. I was wondering, sir, how do you personally define AI?

Eric Daimler: (02:49)
Yeah, well thank you, Anthony, for having me. It's good to be here, as part of your terrific video series. Kai-Fu Lee is a friend of mine. We actually were in the same program at Carnegie Mellon University. I like him, I like his work. I like his book, AI Superpowers. I find that the way in which people think about AI in these contexts can often resemble how I would think about it as an AI researcher.

Eric Daimler: (03:19)
I've been doing AI for 30 years in various capacities. But when I was an AI researcher day to day, I would think about AI as a learning system, a subset of which is machine learning. Many people have heard of that. Then a subset of machine learning is deep learning, often popularized by the company DeepMind. Deep learning is a subset of machine learning, which is a subset of AI. There are also non-machine-learning AIs, such as semantic AI. That level of being pedantic about AI isn't terribly helpful, I find, for people who are not day-to-day AI researchers.

Eric Daimler: (04:00)
What I like to do, and probably what I'll be doing for the rest of my life, is trying to bring people into the conversation about AI, so they have purchase, a way to grab onto the conversation about what it is. How I do that is by talking about AI as a system, as the totality of a deployment: from sensing, to planning, to acting, and then learning from that experience.

Eric Daimler: (04:24)
If we go to sensing, we would think, "I'm going to collect data". This could be the billion points of the internet of things. This could be data from the LIDAR on top of my car. This could be the air quality in my room. I'm going to collect data. That then moves across a network, where it then goes into planning, processing, thinking. That's the traditional way people might think of AI.

Eric Daimler: (04:52)
If I look out at these images, and I say, "Well, is that a crosswalk? Is that a kid at a crosswalk?", then it goes into planning. I make a determination: with what degree of confidence is that a kid at a crosswalk? How confident am I in that?

Eric Daimler: (05:05)
Then I have acting, right? Sensing, planning, and then acting. I act on that experience to say, "Well, should I slow down the car? How abruptly? Or should I have a sharp left turn of the car? Or should I wait until I collect more information?". Then it loops back on the experience. Sense, plan, act, learn from the experience. That's how I think about the system of AI, as a useful definition for citizens and policy makers.
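To make that sense-plan-act-learn loop concrete, here is a minimal sketch in Python. The observation class, the toy pedestrian detector, the confidence threshold, and the "learning" rule are all hypothetical illustrations of the loop described above, not code from Conexus or any production autonomy stack.

```python
# A minimal, hypothetical sketch of the sense-plan-act-learn loop described above.
# The detector, confidence threshold, and actions are illustrative placeholders only.

import random
from dataclasses import dataclass

@dataclass
class Observation:
    is_pedestrian: bool   # what the perception step "saw"
    confidence: float     # how sure the perception step is

def sense() -> Observation:
    """Sensing: collect data (camera, LIDAR, IoT feed) and run perception."""
    return Observation(is_pedestrian=random.random() < 0.2,
                       confidence=random.uniform(0.5, 1.0))

def plan(obs: Observation, threshold: float) -> str:
    """Planning: decide what to do given the observation and its confidence."""
    if obs.is_pedestrian and obs.confidence >= threshold:
        return "brake"
    if obs.is_pedestrian:
        return "slow down and gather more data"
    return "continue"

def act(decision: str) -> None:
    """Acting: execute the chosen decision (here, just print it)."""
    print(f"action taken: {decision}")

def learn(history, threshold: float) -> float:
    """Learning: feed experience back into the system. A real deployment would
    retrain its models; this toy just lowers the braking threshold (more caution)
    when pedestrians keep appearing in recent experience."""
    recent_pedestrians = sum(1 for obs, _ in history[-10:] if obs.is_pedestrian)
    return max(0.6, threshold - 0.01 * recent_pedestrians)

threshold = 0.8
history = []
for _ in range(5):  # sense -> plan -> act -> learn, repeated
    obs = sense()
    decision = plan(obs, threshold)
    act(decision)
    history.append((obs, decision))
    threshold = learn(history, threshold)
```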

Anthony Scaramucci: (05:29)
It's very good. It's very helpful. You're the CEO of Conexus. You've got great signage right now. I just want to tell you, I'm very proud of you, sir, because you're beating John Darsie in room rating. I'm obviously in a hostage situation. If I blink twice, call the FBI for me. Darsie has all those fake objects in his room there. But you're winning the room rater -

John Darsie: (05:51)
I'm trying my best. I'm trying my best.

Anthony Scaramucci: (05:53)
Conexus, what does Conexus do? You're the CEO of the company, and the founder. How would you describe it to our listeners and viewers?

Eric Daimler: (06:01)
Yeah, and first of all, thanks for the compliments of the room. I was alerted to this when, in the COVID crisis, a new Twitter handle came up. I think it's called something like Rate My Room. You'll see people like us going on that, and people will see -

Anthony Scaramucci: (06:13)
He gave me 1/10th of a Scaramucci for my current environment, which is one out of 11. It hurt my feelings, I have to tell you that. Because you'll figure out by the end of this, I'm very shy. I'm a little bit introverted, and I'm somewhat sensitive. I'm very thin skinned, so I was hurt. But go ahead.

Eric Daimler: (06:29)
That's your reputation, yeah. Well aware. Yeah.

Anthony Scaramucci: (06:32)
Yes, so go ahead sir. What is Conexus? What does it do?

Eric Daimler: (06:36)
What does Conexus do? Yeah, Conexus. What Conexus does is guarantee data and models, so you can make better decisions. That's what we do. That's obviously very important in these AI deployments. That's what we do. We're an MIT spin-out. We like to think of ourselves as the cartography of the 21st century.

Eric Daimler: (06:57)
How maps were made, how they used to be made is, we would send a ship out. That ship may hit a rock, and then go to the bottom of the sea. Then we would send another ship out, and it wouldn't hit that rock, because we made a map of that rock. But it might hit another rock. This trial and error approach resulted in thousands of ships at the bottom of the sea.

Eric Daimler: (07:16)
That happens today, in our scientific process, and in the deployments of AI. Early on in Amazon's history, they would be trial and erroring their logistics systems. It happens today in other domains of AI. Today, when I get a dumbbell shipped from Amazon in 48 hours, I may not care about their trial and error approach to shipping or logistics. But in high consequence contexts, I really do care.

Eric Daimler: (07:45)
If I'm sending a commercial jetliner into the sky, if I'm managing a utility system, if I'm developing a COVID vaccine, these trial and error approaches can cost real lives. That's what Conexus does, is we work to guarantee the data and the models in the deployment of these systems.

Anthony Scaramucci: (08:04)
Well, it sounds very compelling. Who's your typical customer then? You don't have to give us a name of a customer. But who would call you up and say, "Okay. I would like this information"?

Eric Daimler: (08:15)
Yeah, well I can give you an example. Where we're working with a large logistics company, to find out where their personal protective equipment is. I didn't know how large some of these logistics companies could be. One was tens of thousands of employees, and their client was also tens of thousands of employees.

Eric Daimler: (08:35)
This client of our client had these ships around the world. Each one of these ships had tens of thousands of containers, the Conex boxes, the shipping containers. Inside of those was personal protective equipment, PPE. Where in the world the PPE is for our client, or our client's client, might seem like an easy question to answer in the age where I can get a dumbbell shipped from Amazon in two days. But it's actually surprisingly hard, and it can take hours, if not days, to come up with the answer, because the data has to move across these massive systems.

Eric Daimler: (09:09)
That's what we work to do right now, is make these systems more agile, more responsive. To address, "Where is my personal protective equipment? Should it go to Seoul, or Houston, or Rome?", much more quickly than how classic approaches would allow.

Anthony Scaramucci: (09:26)
Yeah, so you're making the economy more efficient. You're saving costs, saving energy. It's good for the environment. Conexus and category theory. It's a game changer. Explain to people what that is, what it means, and why it is a game changer.

Eric Daimler: (09:45)
We are based on innovations in mathematics. This is just the fundamental basis for the foundation of the firm. We're not the only ones doing it. We're just the leaders in expressing this with enterprise software. We're used to innovations in physics. Those reach the popular imagination because they've kept the continuity of Moore's Law; we see them in faster chips.

Eric Daimler: (10:09)
But what we're less appreciative of is innovations in math, these discoveries in math that have enabled transformations of our world. This is even more foundational than the nature of physics. The relational database system upon which Amazon does its business was founded and powered by Oracle, which runs a big relational database management company. Oracle could not have been started in 1977 were it not for the discovery of relational algebra in 1970.

Eric Daimler: (10:42)
We have that new type of mathematics, categorical mathematics, category theory, that powers a new way of thinking for the digital age. This will sweep away everything you know, Anthony, about math, over the next 10 to 20 years. It's a math more appropriate to the digital environment. Calculus, trigonometry, geometry, that is a math of our ancestors. It's not inappropriate, it's just going to become less appropriate. A little bit like Latin. You know, you and I are speaking in English, not Latin. We won't really mention Latin, except when it comes time to our bonus, etc.

Eric Daimler: (11:19)
Those other maths will still exist. But they'll just become less important. The math of the future, the math of the digital age, the math that will sweep away everything else, is category theory.

Anthony Scaramucci: (11:30)
Okay, and so if I had to describe it to my 20 year old son, category theory is what exactly? How is it different from trigonometry, and ...

Eric Daimler: (11:39)
It's the math of equivalence. The math of our ancestors is the math of rigid equality. Rigid equality works really well when you're building a factory and you have gears that need to interlace, or when you're looking at a farm and there are precise definitions of where the farm ends. It's much less appropriate in a digital environment.

Eric Daimler: (12:05)
The math of continuity is calculus, in waves. The math of a digital environment is in these relationships, where things are fungible. Where things are not exactly equal, but they could be related. Where you need to be transforming a view, from one view to another view.

Eric Daimler: (12:24)
I'll give you an example of this. One of the things Conexus, my company, does is work with some firms to come up with a COVID vaccine. If we look at a COVID vaccine, we see many, many drug databases that need to be brought together for analysis. When we bring together more data for analysis, we find that it's been collected in different ways, because it's been collected at different times.

Eric Daimler: (12:48)
I might see one drug database that said, "Do you have high blood pressure?". Another drug study would say, "What is your high blood pressure? What's your number?". Another might say, "You have it. What's your medicine that you take? How much medicine do you take?". The classic ways of doing this bring together that data, and they just say, "Yes, high blood pressure". That's based on the math and the systems in the classic approaches.

Eric Daimler: (13:14)
We have to bring it together in a way that maintains the fidelity for better analysis, and better decisions. To come up with vaccines that are more effective, and faster to market.
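As a rough illustration of the fidelity problem described here (a toy sketch only, not Conexus's category-theory-based CQL machinery), the Python below shows three studies recording blood pressure in different ways. A naive merge would collapse everything to a yes/no flag; a unified schema that keeps the original fields preserves the distinctions for later analysis. All field names and records are invented.

```python
# Hypothetical illustration of merging drug-study records without losing fidelity.
# Field names and records are invented; the real Conexus/CQL approach uses
# category-theoretic schema mappings, which this toy does not implement.

from typing import Optional, TypedDict

class UnifiedRecord(TypedDict):
    patient_id: str
    has_hypertension: Optional[bool]   # study A: yes/no question
    systolic_mmHg: Optional[int]       # study B: measured number
    bp_medication: Optional[str]       # study C: medication name
    bp_med_dose_mg: Optional[float]    # study C: dose

def from_study_a(rec: dict) -> UnifiedRecord:
    """Study A only asked: do you have high blood pressure?"""
    return UnifiedRecord(patient_id=rec["id"], has_hypertension=rec["high_bp"],
                         systolic_mmHg=None, bp_medication=None, bp_med_dose_mg=None)

def from_study_b(rec: dict) -> UnifiedRecord:
    """Study B recorded the number; the yes/no flag is derived, not substituted."""
    systolic = rec["systolic"]
    return UnifiedRecord(patient_id=rec["id"], has_hypertension=systolic >= 130,
                         systolic_mmHg=systolic, bp_medication=None, bp_med_dose_mg=None)

def from_study_c(rec: dict) -> UnifiedRecord:
    """Study C recorded which medication and how much."""
    return UnifiedRecord(patient_id=rec["id"], has_hypertension=True,
                         systolic_mmHg=None, bp_medication=rec["drug"],
                         bp_med_dose_mg=rec["dose_mg"])

# A naive merge would keep only has_hypertension and discard everything else;
# keeping the per-study fields preserves the numbers and dosages for analysis.
unified = [
    from_study_a({"id": "a-001", "high_bp": True}),
    from_study_b({"id": "b-042", "systolic": 151}),
    from_study_c({"id": "c-017", "drug": "lisinopril", "dose_mg": 10.0}),
]
for row in unified:
    print(row)
```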

Anthony Scaramucci: (13:27)
All right, that's a very good description. Where do you think AI is going? Do you have a personal philosophy, or an opinion on that? Obviously we have dystopian visions. Elon Musk has said this could end up like The Matrix. Kai-Fu Lee was on the opposite side of the spectrum. He didn't think it was ever going to advance to the point of full blown consciousness. Where are you on that spectrum? In terms of where the future is for AI.

Eric Daimler: (13:55)
I have many things to say about this. I spent my time working in the last administration talking to congressmen about what AI is, and generally how scared they should be about AI. This is still a very useful conversation to be had. When we think about AI, and I remember Kai-Fu Lee and I talked about this a couple of times, we both look at that spectrum, from a dystopia like The Terminator to a utopia where it'll save all of our lives, as a sort of lazy thinking.

Eric Daimler: (14:28)
Those extremes just aren't terribly helpful. Because there's a multitude of expressions in between, that we need to have a conversation about as a society. As I've said, I've been in AI for a long time, in a lot of different ways. One of which was a venture capitalist, on Sand Hill Road. In that capacity as a venture capitalist, we had an adage. Which is, "Don't confuse a clear vision with a short time horizon".

Eric Daimler: (14:57)
People do this all the time. Before I was born, way back in the '60s, there was this cartoon called The Jetsons. It's still entertaining. You can watch it today. In that cartoon, there was this robot that walked around and cleaned the house, did the dishes, vacuumed the floors, made the food. It was called Rosie the Robot. We had a clear vision back then, "This is what we want".

Eric Daimler: (15:17)
But 50, 60 years later? We have a Roomba. That's all we have, a Roomba. You had a clear vision, but you would not want to confuse that with a short time horizon. I think today, when people think of the fears, or even the hopes of AI, they can reflect on Google Glass. That was a disaster. "I know what I want. It's very clear how cool it could be. But it's a horrible disaster".

Eric Daimler: (15:40)
When I think of autonomous cars, we can think what we would like. "I would like a sofa to be going down the street while I watch Netflix". But that is not going to happen next year. Elon Musk has mentioned for the last several years that, "Next year, we'll have a fully autonomous car in every situation". But it's not going to happen next year.

Eric Daimler: (15:59)
Not to put you on the spot, but try to imagine when you think fully autonomous cars first appeared on a public street in the United States. The answer is, back when Reagan and Thatcher were in power, in the early '80s. In 1983, Carnegie Mellon University had vans. They were big vans, but vans nonetheless, driving on public streets, stopping at stop signs. They were going five miles an hour. But it's just another indication that having a clear vision doesn't at all indicate something's going to happen soon.

Eric Daimler: (16:29)
The idea about general intelligence, this idea about AI becoming conscious? I'll tell you, the consensus is that isn't going to happen in the next 10 to 20 years, if it will ever happen. My personal view is it will never happen, because we don't even understand our own consciousness. You can do a thought experiment, and I'll leave it at this. When you say, "Who's that thinking?", you then notice in your head you're saying, "Who's that thinking?". We don't have enough understanding of our own consciousness to be able to re-create that consciousness in another entity, in a machine.

Eric Daimler: (17:02)
That's my answer. These things actually move relatively slowly. What's distinct about this world, is that things don't just happen quickly. They happen abruptly. That's the distinction people should think about, with regard to artificial intelligence.

Anthony Scaramucci: (17:14)
When you mentioned 1983, doctor, I wanted to ask you, how does Madonna look exactly the same as she did in 1983? But since that's outside of your expertise, I'll ask Darsie about that later.

Eric Daimler: (17:25)
It is indeed.

Anthony Scaramucci: (17:28)
You did a great job of explaining the limitations. But let's talk about application, and talk about current application of what you're doing. What industries are going to be the ones that transform, as a result of Conexus, and a result of what you're working on right now?

Eric Daimler: (17:48)
Yeah, this is something that we can talk about for AI in general. I often am caught saying that every business is an AI business. Every business is an artificial intelligence business. That can shock people. But what I mean by that, is that physical manifestations of goods have become increasingly commoditized over the last 100 years or so. Five percent of us work on farms. A very small percentage of us work in factories. I can buy the gear to run a farm, or run a factory, in a way that I couldn't 100 years ago. We run more and more on data.

Eric Daimler: (18:27)
A friend of mine runs a company that does this sort of analysis for restaurants, or companies. I can give a context for that. When we think about where AI deployments happen right now. AI deployments happen in very structured environments. An easy one is a children's game, Tic Tac Toe. It's a very structured environment. AI can master Tic Tac Toe. Slightly more sophisticated is checkers, and then chess.

Eric Daimler: (18:54)
Then there's this other, more sophisticated board game called Go that many of us are familiar with. It's a Chinese board game, with stones on a big grid. For all intents and purposes, that game was solved about a decade ago, using that technique we talked about earlier, deep learning. But just because AI solved that problem doesn't mean that every problem will now be solved with algorithms, or that these sorts of decisions we have to make day to day are solved.

Eric Daimler: (19:23)
Back to my friend. He goes into companies looking for ways in which we can solve these ordinary problems. In a café, in this particular instance, the conclusion is, no one wants to buy the last croissant. It may seem counterintuitive to have this highfalutin technologist arrive at such a banal conclusion. But the idea is that, for whatever reason, people will buy every croissant except the last one.

Eric Daimler: (19:50)
Intuitively, you might even think that you want to run out of croissants when the store closes. Instead he said, "Well, we're not going to call that food waste. We're going to call that last croissant 'making every other customer comfortable, down to that last croissant'". This is proprietary data now. It's proprietary data to that particular restaurant, that particular cupcake retailer or factory.

Eric Daimler: (20:11)
That acquisition of data, that processing of data, and that execution of data, is proprietary information to that cupcake factory. That has more in common with an AI business than may initially appear on its face.

Anthony Scaramucci: (20:27)
It's a great example, and it's a good segue into my next question. Steve Schwarzman, the CEO of Blackstone, is saying to people, or suggesting, that businesses should increase their inventory. In the event that there are future pandemics, and future supply chain disruptions. What do you think of that? Is that a good idea for businesses? Is that something that's going to weigh them down, and make them carry too much cost on their inventory and balance sheets? What's your opinion?

Eric Daimler: (21:00)
Yeah, this ... I remember his statement. That drives me nuts. Because a problem with the COVID crisis is that we couldn't have predicted it. How can you possibly have predicted something, or planned for something, that could not have been predicted? For people just to carry -

Anthony Scaramucci: (21:20)
Well, I've got a lot of my investors that are on this call. I want you to say that seven more times. Of course you couldn't predict it. That's why we're good fundamental investors. If you get caught, you've got to ride it out. You've got to be patient. But go ahead, I'm sorry to interrupt.

Eric Daimler: (21:34)
No, it's exactly right. How you respond isn't by having more bloated inventory, or by now having 1,000 alternatives, because you actually don't know what the next problem is going to be. "You're planning for the last war" is the criticism we make in government of the defense department. What's far, far more powerful is to be a quick learner and be adaptable. That's where my firm, Conexus, works with our companies.

Eric Daimler: (22:02)
With the logistics company I mentioned earlier, we can't plan for every possible eventuality. But what we can plan for is having better visibility, better capital goods planning, better labor planning, and then more supply options, so that we can quickly respond to whatever problem may appear, in whatever timeframe. It's about responsiveness, it's about flexibility, and it's about adaptability.

Anthony Scaramucci: (22:29)
Okay. Well we're getting a ton of questions that are coming into our chat room. I'm going to turn it over to John, to have him ask you some of these questions. But I want to come back in a moment. But go ahead, John. I see the questions piling up.

John Darsie: (22:45)
Yeah. There's a question about, we mentioned that you worked as the Presidential Innovation Fellow for AI and Robotics, in the Obama White House. How did that position come about? What was it like day to day? What were you really working on there? Then how do you think, in general, we do as a government, driving research into AI and robotics? I know that places like China are much more aggressive in pushing into those areas. How are we doing? Talk a little bit about your work while you were in the White House.

Eric Daimler: (23:15)
Right. I will first say, it was a privilege. It was a privilege to be working in the Obama administration. It was actually just a privilege to be working in the federal government. I had an appreciation for the work that people do within the federal government. I was very impressed with the people with whom I had worked.

Eric Daimler: (23:34)
I mostly interacted with people in the defense department. I got a little challenge coin from the work I did with Ash Carter. I flew on a plane with Ray Mabus, and Air Force One is as cool as you think it is. The job and the people are fantastic. I had a great experience.

Anthony Scaramucci: (23:53)
See, Darsie's smiling right now, doc. Because I have flown on Air Force One. I see him laughing.

John Darsie: (23:58)
Anthony I think worked in the federal government for a couple weeks.

Anthony Scaramucci: (24:00)
I was only there for 11 days, doc. But I had three trips on Air Force One. I didn't mean to interrupt your stream and your thought there. But Darsie, you're lucky that we're not in the same room together. You'd get smacked right now. Okay, keep going, doc. I'm sorry about that.

Eric Daimler: (24:16)
It was as cool as you think it is, and the people were fantastic.

Anthony Scaramucci: (24:18)
Air Force One is as cool as you think it is, and Darsie, you haven't flown on Air Force One. I just wanted to point that out to everybody. Go ahead, doc.

John Darsie: (24:24)
I've got time.

Eric Daimler: (24:27)
The work we did, the work that I did, was in the coordination between what the president wanted to do, from the executive office, to the rest of the executive branches. Whether it was state, or defense, or treasury, or health and human services. They would all have their different uses for AI, and obviously the different uses for robotics.

Eric Daimler: (24:48)
We would come in to try to conceptualize a framework under which we could all be going down the same path. This is both a Republican and a Democratic talking point, and it shouldn't be politicized. I'm fortunate that this one area seems not to have been politicized in the new administration. We were trying to make the allocation of our resources more effective for the American people, and maintain American leadership. That was my objective, and it was my day-to-day work. I hope I get the opportunity to do it again.

Eric Daimler: (25:19)
What I think about policy is, that number one, we need to engage in this conversation. It's not enough to sit back and react to whatever technology exists. We actually have to educate ourselves. As systems have become more complex over the past 50 years, we have become more educated than our parents, and certainly our grandparents, about the technologies in our world.

Eric Daimler: (25:45)
We need to do the same with AI. Because whether we like it or not, there are a million programmers out in the world, codifying our human values into software, in ways we may not like. We need to become educated, so we can be part of that conversation.

Eric Daimler: (26:00)
Another part of policy that I recommend is that we have circuit breakers. These systems have become automated in ways that we may not appreciate. I may not want my Instagram account, and my liking of a picture of a hamburger, to somehow go through a sequence of algorithms that then affects my life insurance policy, because somebody thinks I'm now going to have high cholesterol, or what have you. That can happen, that sequence of behavior. So circuit breakers are important.

Eric Daimler: (26:28)
Related to that but distinct, we should have human auditors on AI, overlaying the black boxes that are there. There are tragic stories around black-box criminal justice sentencing systems built on AI. I don't like that. I wouldn't want to be sentenced by a black box. I want to be talking to human beings. I want to make sure the decisions represent human values.

Eric Daimler: (26:50)
Those are the three ways I think we need to be involved in policy. Whether we are actually public officials, or whether we are citizens or business people.
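As a loose illustration of the circuit-breaker and human-audit ideas above (a hypothetical sketch added for readers, not a framework from the talk), an automated decision chain can be wired so that high-consequence decisions are never executed automatically but are held for a human reviewer. The scoring rule, threshold, and insurance example are invented.

```python
# Hypothetical sketch of a "circuit breaker" on an automated decision chain.
# The scoring rule, threshold, and insurance example are invented for illustration.

from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    action: str
    consequence_score: float  # how strongly this decision affects the person (0..1)

def automated_chain(signal: dict) -> Decision:
    """Toy stand-in for a sequence of algorithms (social signal -> risk -> action)."""
    score = 0.9 if signal.get("hamburger_photos_liked", 0) > 10 else 0.1
    return Decision(subject=signal["user"],
                    action="raise_life_insurance_premium",
                    consequence_score=score)

def circuit_breaker(decision: Decision, threshold: float = 0.5) -> str:
    """High-consequence decisions are not executed automatically: they trip the
    breaker and are held for a human auditor instead."""
    if decision.consequence_score >= threshold:
        return f"HELD for human review: {decision.action} on {decision.subject}"
    return f"auto-approved: {decision.action} on {decision.subject}"

print(circuit_breaker(automated_chain({"user": "alice", "hamburger_photos_liked": 14})))
print(circuit_breaker(automated_chain({"user": "bob", "hamburger_photos_liked": 2})))
```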

John Darsie: (26:59)
Going to the pandemic for a moment, we have a question about, we actually had a very interesting panel a couple weeks ago on health tech. About how the massive amounts of data we have, and our new ability to process that data using AI and machine learning and things, is helping us in areas like radiology, oncology. As it relates to the pandemic, how can a firm like Conexus, leveraging AI tools and data, help speed up the race to find a vaccine, and generally help us confront pandemics going forward?

Eric Daimler: (27:33)
As I said earlier, we are working with companies to detect where personal protective equipment is in the world, and help bring effectiveness and agility to our logistics and shipping systems. This is a global problem, that will benefit our economy into the distant future. We're also working with companies to come up with a COVID vaccine.

Eric Daimler: (27:55)
The way we do that is, we work to guarantee data, and to guarantee the integrity of the models. Since the COVID crisis began, some 3,000 scientific papers have been published. A couple hundred of these papers had to be thrown out, because one lab at Imperial College London was found to have misinterpreted its data. Some corrupted data, essentially.

Eric Daimler: (28:22)
The problem we talked about, normalizing data and bringing data together, has found its way into our Centers for Disease Control. This feels almost criminal to me. The Centers for Disease Control, these are the experts in the United States, to which we give a lot of resources and a lot of credibility. They conflated the idea of a positive test for the virus versus a positive test for an antibody. They just merged them and said, "Positive test".

Eric Daimler: (28:50)
This speaks to a lack of numeracy, literacy in numbers, literacy in math. The point of taking math isn't to learn calculus, as in the past, or to learn category theory today. The point is to become literate with these numbers, so you don't have these sorts of confusions. But even among the scientifically literate, you have to depend on the integrity of the models and the data given to you. This is what failed the lab at Imperial College London.

Eric Daimler: (29:14)
Many, many different studies, hundreds in this particular case, had to be thrown out. That may not seem terribly tragic, until you realize the consequence is a vaccine. It costs time, it costs money. In this case, since we're developing a vaccine, it literally costs lives. That's what my firm is working to help shorten, and to bring effectiveness to.
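As a small illustration of the conflation described above (again a toy sketch with invented data, not the CDC's or Conexus's actual data model), keeping the test type as an explicit field makes it easy to report viral and antibody positives separately instead of merging them into one undifferentiated count:

```python
# Hypothetical sketch: keep viral and antibody test results distinct instead of
# collapsing them into one "positive test" count. All data are invented.

from collections import Counter
from enum import Enum

class TestType(Enum):
    VIRAL = "viral"        # detects current infection
    ANTIBODY = "antibody"  # detects past exposure

results = [
    {"type": TestType.VIRAL, "positive": True},
    {"type": TestType.ANTIBODY, "positive": True},
    {"type": TestType.VIRAL, "positive": False},
    {"type": TestType.ANTIBODY, "positive": True},
]

# The conflated count (the mistake): one number that mixes two different questions.
conflated_positives = sum(r["positive"] for r in results)

# The faithful count: positives broken out by what the test actually measures.
positives_by_type = Counter(r["type"] for r in results if r["positive"])

print("conflated positives:", conflated_positives)
print("viral positives:", positives_by_type[TestType.VIRAL])
print("antibody positives:", positives_by_type[TestType.ANTIBODY])
```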

John Darsie: (29:34)
Going back to government policy for a moment, we have a question about, again, what the United States needs to do to ensure that AI research and development capabilities don't fall behind China. When you have a centrally planned government, they have an ability to make long term bets, and to drive investment into certain areas that they think are super important strategically, long term. We maybe have lost a little bit of sight of that in the United States, just given our political cycles.

John Darsie: (30:01)
What do we need to do? Should we be more focused on US government policy, regarding AI development? To ensure that we remain either ahead, or competitive with someone like China?

Eric Daimler: (30:12)
I think there's a lot to say here, and a lot of it doesn't just involve technology. It pained me, back when I was in my PhD program, to be around foreign nationals who wanted to stay in the United States but were forced back to their home countries. They wanted to be contributing to the research in the US. We paid to train them, but then we would force them back.

Eric Daimler: (30:42)
I would have been the person with whom they were competing for a job, and I'm fine with doing that. But we forced them back. I think that's tragic, because it impairs American leadership. I obviously think that we could multiply, by an order of magnitude, the funding of research overall into these domains.

Eric Daimler: (31:00)
I'll say two things that are very important for the listeners, beyond the funding and the immigration. One is that we need to be engaged in the conversation, so that we know how to deploy the technology. My friend, Dr Kai-Fu Lee, will often talk about data, and China being the superpower of data. That's fine. But that only goes so far, because the winner will not be decided just by the accumulation of data. If that were true, where would that leave France, or Brazil, or Germany? As second-class citizens? No. It's in the implementation, in the use of AI, that a lot of the power will lie.

Eric Daimler: (31:43)
We need to be looking at this, as I will say, as a system. The totality of an AI system. We have an advantage. As a free country, and as western democracies, to be collaborating with each other. On the acquisition of data, the processing of that data with guaranteed integrity. The analysis of the data, and the execution of that data. I would invite people to look at this as a totality of a system, in order to maintain American leadership. Both commercially, and militarily.

John Darsie: (32:13)
I want to move on to robotics for a moment. We talked a lot about AI. Robotics is very closely related to AI, but we haven't dived so much into it. A question came in from the audience, what are your thoughts about the increasing collaborative robot adoption, in reaction to an overdependence on human labor, post COVID?

John Darsie: (32:34)
The notion that if we're going to expect that there's going to be pandemics and disruptions like we've seen with COVID, are we in danger of companies moving towards a more robotics driven workforce, that has a little bit more flexibility, like an accordion? Depending on economic conditions?

Eric Daimler: (32:53)
You know, this came up in conversations when I was talking to people, representatives in congress. My answer always was, we don't automate what humans fundamentally want to do. We have machines that can do a lot of things. But if we still want to do them, we can still do them. But many of the repetitive tasks for which we have built robots, are not something that people, when they're a child, say, "Gosh. I want to do that repetitive project when I grow up".

Eric Daimler: (33:28)
I invite people to consider that there are many different frameworks one might have for these technologies, including robots. One is just as an automation tool, as an augmentation. This is how you can think about it: you are not going to be replaced by a robot. You're going to be replaced by a human using a robot. You need to work with these new automation tools. Automation has been coming for hundreds of years. Learn how to work with these new tools, and how they can help you be better at the job, and then at the larger job that needs to be done.

Eric Daimler: (33:59)
There are parts of this world that are fundamentally human. That is often the part where we're interacting with each other, like now. It's the part where we're expressing empathy with each other, like now. Those are the sort of skills that probably need to be nurtured, in order to have a longer term view of what your career is, or what your children's career is.

John Darsie: (34:22)
We have a question, and this might not be in your wheelhouse, so feel free to say so. It's about the social policy that will be needed in a world where we have greater AI and robotics participation in our workforce, and maybe a stripping out of some of those types of jobs that you mentioned, ones that are very cumbersome for human beings and that get taken over by robots or machines.

John Darsie: (34:48)
In terms of things like universal basic income, or things of that nature. From a social perspective and a government perspective, what do we need to do in a world where technology replaces a lot of the human labor, that has existed for hundreds, thousands of years?

Eric Daimler: (35:03)
Yeah. You know, I think that it's useful to be contextualizing AI as something that happens abruptly, and not quickly. What's changed here isn't that we're going to have automation, and jobs are getting replaced. It's that those sort of changes used to happen over a generation, but in this world, when they happen, they'll happen abruptly.

Eric Daimler: (35:27)
Long haul trucking is often brought up as an example, because that's actually the largest job in most states. But if we look at that job, it's actually terrible. It's a terrible job. Many truckers don't even want to do that job, because they're taken away from their friends and their family. That job is actually likely to get better before it's replaced. I don't want to be encouraging my children to go into long haul trucking. But it doesn't mean that that job is going away next year.

Eric Daimler: (35:53)
I'll tell you how that's going to go. This can give people a framework to think about AI and social policy, and they can invent their own answers. Right now, driving an automated semi down the road is the easiest of these problems, but it's still a hard problem. How one company, Locomation in Pittsburgh, is solving this is that they're going to have a platoon of trucks, three trucks.

Eric Daimler: (36:15)
The first truck is actually going to have a human driver. The second truck will have drivers, but they'll be asleep. Then that truck will rotate, and another trucker will take over. Then rotate, and then take over. Instead of that truck driving 11 hours, which is the current maximum, the truck can drive 24 hours. You can go out from Philadelphia to Kansas City, and then back, in a timeframe that allows you to keep the relationships with your friends and your family.

Eric Daimler: (36:42)
That's transformational, and those sorts of transformations will be available with AI before it replaces jobs. I invite people to think a little more broadly about this. Kathleen Carley has invented a way of thinking about AI that takes into account a cyber-social system. Yesterday, Twitter got hacked. Banks get hacked all the time, or hacking attempts are made on them all the time. That's been happening for a while.

Eric Daimler: (37:11)
But what's now happening is, instead of people just attacking my bank, they're attacking me to attack my bank. "Is this thing fake? Is this thing real?" That gets manipulated now in my news feed, to continue my biases or what have you. But how will it affect me if I see videos that may be fake? It adds a whole new dimension to criminal justice, when eyewitness accounts may not be eyewitness accounts in the traditional sense. That's a new way of thinking about AI. We have a physical manifestation, a physical interaction with these technologies.

Eric Daimler: (37:42)
Then lastly, it is outside of my wheelhouse to be talking about universal basic income. But I will say, personally, I'm not necessarily a fan. That came out of Silicon Valley, where a lot of people in my neighborhood reach for these sorts of easy answers to complex public policy problems. I'd say that the earned income tax credit is a much more direct way to address folks who face job displacement.

Eric Daimler: (38:08)
I would say that early childhood education is another way to be addressing this. My personal view is that I do not want a simple solution like universal basic income in place of these more complex public policy answers to that very hard problem.

John Darsie: (38:23)
We have one last question before we let you go. It was a good segue from what you just said, about early childhood education. That's a big part. We can put resources behind research and development. We also need to create a culture, where we incentivize young people to learn about AI, and coding, and robotics, and things of that nature.

John Darsie: (38:41)
I have three young kids. Anthony has young kids as well. What should we be doing, as a society, to teach our children about artificial intelligence? What type of resources should they be consuming? We obviously have a parent that's posed this question, and it's interesting to me as well. What should we be doing with our kids, to drive interest and engagement in the world of AI and robotics?

Eric Daimler: (39:04)
I don't know if I want to drive interest in robotics or AI, but I'd love for people to participate. There's an organization called FIRST Robotics, of which I'm a really big fan. It's a nonprofit organization, and you can just use your favorite search engine to find FIRST Robotics. I find it fascinating, because they run contests for children of various ages to get involved in a project that has a fundamentally technical outcome, but requires people in different capacities.

Eric Daimler: (39:31)
What I loved about this particular competition that I saw in Florida a couple of years back was that girls were leading these teams of kids in the development of robots they had been challenged to build. They were not only developing a technical acuity, but they were feeling comfortable that there was a place for them that wasn't just the nerd in the basement, which is how I started out.

Eric Daimler: (40:01)
Other people could say, "Well, I'm the marketer. I'm the fundraiser. I'm going to go find us money from our neighborhood, to help fund for our equipment". "Well I'm the leader. I'm the organizer, to make sure that we are not just experimenting, but we're developing the product towards the solution of winning the competition".

Eric Daimler: (40:17)
These little children were developing these sensibilities. I get goosebumps even just thinking about it right now, I was so moved. That real world interaction. Giving people a place to think about the breadth of opportunity for them, inside of these systems, that's not just nerds in the basement. I think it's critically important.

Eric Daimler: (40:38)
I think math is a big deal. You might say the more math, the better. But if I were to choose, I would say, let's put aside the geometry, the trigonometry, and really the calculus. Put it aside. It's going to become like Latin. Let's focus on probability, and categorical mathematics. Category theory. That's the math of the future. That's the math that we can use. That's the math you'll use in your day to day life.

John Darsie: (41:03)
Well that's fascinating. Dr Daimler, I want to thank you for joining us. You were able to attend our SALT conference in Abu Dhabi last December, where we had a great time. I hope that we can host you at one of our future SALT conferences in person. But for now, this will do. You talked about how you're in San Francisco. That's why you're wearing a sweater in the summer. For a nerd in the basement, you're very fashionable, and you have a very well decorated basement. Thanks again for joining us. We really appreciate the time.

Anthony Scaramucci: (41:30)
I mean, you're calling him a nerd at the end of the thing? I mean, that's like the pot calling the kettle black -

John Darsie: (41:34)
He called himself a nerd, give me a break.

Anthony Scaramucci: (41:35)
I mean, it's unbelievable. But doc, what I will say, is that in the middle of a pandemic, you're helping people see through to where the opportunity is. We're about to embark upon a massive technological transformation again, of our society. Which is going to lead to unbridled economic growth and prosperity. We just have to see ourselves to the other side of it.

Eric Daimler: (41:59)
Yes.

Anthony Scaramucci: (41:59)
But in the meantime, I'm going to be auctioning off the silverware and stuff behind John Darsie. But the truth of the matter is, it's likely not actually to be silver. It's probably just plastic. But that'll be for another SALT talk, Dr Daimler. Thank-you so much for joining us.

Eric Daimler: (42:16)
Good to be here. Thank-you, Anthony. Thank-you, John.

Anthony Scaramucci: (42:17)
All right. All the best.