Transcript
WEBVTT
00:00:00.240 --> 00:00:03.810
Welcome everyone to another episode of Dynamics Corner.
00:00:03.810 --> 00:00:07.588
Is AI a necessity for the survival of humanity?
00:00:07.588 --> 00:00:08.711
That's my question.
00:00:08.711 --> 00:00:11.569
I'm your co-host, Chris, and this is Brad.
00:00:11.640 --> 00:00:15.271
This episode was recorded on December 18th, 2024.
00:00:15.271 --> 00:00:16.364
Chris, Chris, Chris.
00:00:16.364 --> 00:00:21.751
Is AI required for the survival of humanity?
00:00:21.751 --> 00:00:26.855
Is humanity creating the requirement for AI for survival?
00:00:26.855 --> 00:00:28.379
That's a good question.
00:00:28.379 --> 00:00:34.341
When it comes to AI, I have so many different questions, and there are so many points that I want to discuss about it. With us,
00:00:34.341 --> 00:00:39.441
today we had the opportunity to speak with Soren Friis Alexandersen and Christian Lenz about some of those topics.
00:00:39.441 --> 00:00:59.332
Good morning, good afternoon.
00:00:59.332 --> 00:00:59.973
How are
00:00:59.993 --> 00:01:06.298
you doing? There we go. Good day, good afternoon over the pond.
00:01:06.617 --> 00:01:07.097
How are you doing?
00:01:07.097 --> 00:01:14.457
Good morning. Well, good, good, good. I'll tell you, Soren, I love the video.
00:01:14.457 --> 00:01:15.099
What did you do?
00:01:15.099 --> 00:01:22.250
You have the nice blurred background, the soft lighting. Yeah, it's, uh...
00:01:23.531 --> 00:01:25.834
You can see great things with a great camera.
00:01:27.540 --> 00:01:30.290
It looks nice, it looks really nice, christian.
00:01:30.290 --> 00:01:30.811
How are you doing?
00:01:31.801 --> 00:01:32.945
Fine, thank you very much.
00:01:35.263 --> 00:01:38.271
Your background's good too. I like it; it's real.
00:01:38.290 --> 00:01:39.093
Back to the future.
00:01:41.061 --> 00:01:49.430
It is good, it is good. But thank you both for joining us this afternoon, this morning, this evening, whatever it may be. Been looking forward to this conversation.
00:01:49.430 --> 00:01:50.861
I was talking with Chris prior to this.
00:01:50.861 --> 00:01:54.355
This is probably the most prepared I've ever been for a discussion.
00:01:54.355 --> 00:01:56.421
How well prepared I am, we'll see.
00:01:56.421 --> 00:02:07.031
Uh, because I have a lot of things that I would like to bring up based on some individual conversations we had via either voice or via text.
00:02:07.031 --> 00:02:16.367
And before we jump into that and have that famous topic, can you tell everybody a little bit about yourself, Soren?
00:02:18.401 --> 00:02:21.485
Yes, so my name is Soren Alexandersen.
00:02:21.485 --> 00:02:32.372
I'm a product manager in the Business Central engineering team, working on finance features, basically rethinking finance with Copilot and AI.
00:02:33.980 --> 00:02:35.467
Excellent, excellent Christian.
00:02:37.701 --> 00:02:38.907
Yeah, I'm Christian.
00:02:38.907 --> 00:02:41.769
I'm a development facilitator at CDM.
00:02:41.769 --> 00:02:44.147
We're a Microsoft Business Central partner.
00:02:44.147 --> 00:02:52.259
I'm responsible for the education of my colleagues in all the new topics, all the new stuff.
00:02:52.259 --> 00:03:03.429
I've been a developer in the past, and a project manager, and now I'm taking care of taking all the information in so that it leads to good solutions for our customers.
00:03:04.651 --> 00:03:06.966
Excellent, excellent. And thank you both for joining us again.
00:03:06.966 --> 00:03:14.349
You're both veterans and I appreciate you both taking the time to speak with us, as well as your support for the podcast over the years as well.
00:03:14.349 --> 00:03:32.169
And just to get into this: I know, Soren, you work with AI and the agent portion (I'm simplifying some of the terms) within Business Central for the product group, and, you know, in our conversations you've turned me on to many things.
00:03:32.169 --> 00:03:52.675
One thing you've turned me on to was a podcast called The Only Constant, and I was pleased (I think it was maybe, at this point, a week or so ago, maybe a little bit longer) to see that there was an episode where you were a guest on that podcast, talking about AI and, you know, Business Central and ERP in particular.
00:03:52.819 --> 00:04:23.194
I mean, I think you referenced Business Central, but I think the conversation that you had was more around ERP software, and that got me thinking a lot about AI. And I know, Christian, you have a lot of comments on AI as well too. But the way you ended that with, you know, "nobody wants to do the dishes" is wonderful, which got my mind thinking about AI in detail, and what AI is doing, and how AI is shaping.
00:04:23.194 --> 00:04:34.411
You know, business; how AI is shaping how we interact socially; how AI is shaping the world. So I was hoping we could talk a little bit about AI with everyone today.
00:04:34.411 --> 00:04:39.689
So with that, what are your thoughts on AI?
00:04:39.689 --> 00:04:45.879
And also, maybe, christian, what do you think of when you hear of AI or artificial intelligence?
00:04:46.841 --> 00:04:55.971
I would say it's mostly a tool for me. Getting a little bit deeper into what it is:
00:04:55.971 --> 00:05:07.735
I'm not an AI expert, but I'm talking to people who try to elaborate on how to use AI for the good of people.
00:05:07.735 --> 00:05:41.990
For example, I had a conversation with one of those experts from Germany just a few weeks before Directions, and he told me how to make use of custom GPTs, and I got the concept and tried it a little bit. And when I got to Directions EMEA in Vienna at the beginning of November, the agents topic was everywhere, so it was Copilot and agents, and it prepared me a lot for how this concept is evolving, and how fast it is evolving.
00:05:41.990 --> 00:06:20.084
So I'm not able to catch up on everything, but I have good connections to people who are experts in this and focus on this. And the conversations with those people, not only on the technical side but also on how to make use of it and what to keep in mind when using AI, are very crucial for me to make my own assumptions and decide on the direction where we should go, as users, as partners, for our customers, and to consult our customers. And on the other side,
00:06:20.704 --> 00:06:36.913
with the evolving possibilities and capabilities of AI generating whole new interactions with people, it gets much harder to keep this barrier in mind:
00:06:36.913 --> 00:06:46.452
This is a machine doing something that I receive and this is not a human being or a living being that is interacting with me.
00:06:46.452 --> 00:07:11.649
It's really hard to have a bird's-eye view of what is really happening here, because what we have with AI is so like human interaction that it is hard not to react as a human to this human interaction, and then have an outside view of it.
00:07:11.649 --> 00:07:17.927
How can I use it, and where is it good or bad, or something like that? That moral conversation we're trying to have.
00:07:17.927 --> 00:07:26.050
But having conversations about it and thinking about it helps a lot, I think.
00:07:27.312 --> 00:07:29.141
Yeah, it does. Soren,
00:07:29.141 --> 00:07:34.533
You have quite a bit of insight into the agents and working with AI.
00:07:34.533 --> 00:07:36.949
what are your comments on AI?
00:07:38.564 --> 00:07:40.591
I think I'll start from the same perspective as Christian.
00:07:40.591 --> 00:08:05.634
For me, AI is also a tool in the sense that, when looking at this from a business perspective, you have your business desires, your business goal, your business strategy, and whatever lever you can pull to get you closer to that business goal you have,
00:08:05.634 --> 00:08:07.706
AI might be a tool you can utilize for that.
00:08:07.706 --> 00:08:12.812
It's not a hammer to hit all of the nails.
00:08:12.812 --> 00:08:14.987
I mean it's not the tool to fix them all.
00:08:14.987 --> 00:08:18.528
In some cases it's not at all the right tool.
00:08:18.528 --> 00:08:21.348
In many cases it can be a fantastic tool.
00:08:21.348 --> 00:08:29.339
So that depends a lot on the scenario. It depends a lot on the goal.
00:08:30.279 --> 00:08:38.412
I will say that I'm fortunate in the way that I don't need to know the intricate details of every new GPT model that comes out and stuff like that.
00:08:38.412 --> 00:08:44.366
So that's too far for me to go; I could do nothing else.
00:08:44.366 --> 00:08:45.946
And to your point, Christian:
00:08:45.946 --> 00:08:48.625
so you said you're not an AI expert.
00:08:48.625 --> 00:08:55.003
But I mean, by modern standards and the AI that we typically talk about these days,
00:08:55.003 --> 00:08:58.956
well, LLMs have only been out there for such a short while.
00:08:58.956 --> 00:09:01.923
Who can actually be an AI expert yet?
00:09:01.923 --> 00:09:05.533
Right, I mean, it's been out there for a couple of years.
00:09:05.740 --> 00:09:10.148
In this modern incarnation, no one is an expert at this point.
00:09:10.148 --> 00:09:19.811
I mean, you have people who know more than me and us, maybe even in this audience here, but we all try to just learn every day.
00:09:19.811 --> 00:09:22.668
I think that's how I would describe it.
00:09:22.668 --> 00:09:28.291
There's some interesting things.
00:09:28.291 --> 00:09:31.570
I mean from my perspective as a product manager.
00:09:31.570 --> 00:09:41.033
What I'm placed in this world to do is to basically rank customer opportunities and problems.
00:09:41.033 --> 00:09:43.969
That's my primary job.
00:09:43.969 --> 00:09:50.572
Whether or not AI can help solve some of those opportunities or problems? Great.
00:09:50.572 --> 00:10:03.107
So that's what I'm about to do: reassess all those things that I know about our customers, our joint customers and partners, and how AI can help those.
00:10:05.832 --> 00:10:17.572
Yeah, just when you started speaking about the dishwasher, it made me chuckle and say how can you relate that to why AI was invented?
00:10:17.572 --> 00:10:19.725
And I had to look it up.
00:10:19.725 --> 00:10:23.951
I looked up, you know why was the dishwasher invented?
00:10:23.951 --> 00:10:27.549
So I thought it was pretty interesting to share with the listeners.
00:10:27.549 --> 00:10:45.471
It was Josephine Cochrane who invented the dishwasher, and her reasoning was to protect her china dishes; she didn't want to hand wash, and she wanted to free up time.
00:10:45.471 --> 00:10:49.686
And how relatable is that with AI?
00:10:49.686 --> 00:10:58.688
It's that we want to free up our time to do other things, and use AI to do that.
00:10:58.688 --> 00:11:09.587
In this case, she had noted that she wanted to avoid hand washing; she wanted to create a machine that could wash dishes faster and more carefully than she could.
00:11:09.587 --> 00:11:28.548
So, in a sense, when AI is invented, you kind of want to have a tool, in this case an AI tool, to do other things for you, maybe better than you can, and maybe more carefully in feeding you information.
00:11:28.548 --> 00:11:30.826
I don't know, but I thought that was pretty interesting.
00:11:31.659 --> 00:11:37.187
The relatable component is there, and that makes total sense to me.
00:11:37.187 --> 00:11:49.951
That makes sense in the sense that AI is very good at paying attention to detail that a human might overlook if we're tired, or it's the end of the day, or early morning.
00:11:49.951 --> 00:12:00.184
Even so, there are so many relatable things in what you just said that apply to AI, or even just technology, I mean, and automation.
00:12:00.184 --> 00:12:04.750
It's not just AI, because IT is about automating stuff.
00:12:04.750 --> 00:12:07.667
AI just brings another level of automation.
00:12:08.590 --> 00:12:13.523
You could say it is a beneficial tool.
00:12:13.523 --> 00:12:23.152
But, Chris, to go back to your point with the invention of the dishwasher, and maybe even the invention of AI: I don't know the history of AI, and I'm not certain
00:12:23.152 --> 00:12:26.274
if you know it. I'm sure you could use AI to find the history of AI.
00:12:26.274 --> 00:12:28.277
But is AI one of those tools?
00:12:28.277 --> 00:12:35.594
I have so many thoughts around AI, and it's tough to find a way to unpack all of the comments that I have on it.
00:12:35.594 --> 00:12:44.993
But a lot of tools get created or invented without the intention of them being invented.
00:12:51.500 --> 00:12:54.567
You know, sometimes you create a tool, or you create a process, and something comes of it while you're trying to solve one problem.
00:12:54.567 --> 00:13:05.711
Then you realize that you can solve many other problems by either implementing it slightly differently or, you know, combining it with another invention or a tool that was created.
00:13:05.711 --> 00:13:07.815
So where does it end?
00:13:07.815 --> 00:13:17.462
And with AI, I think we're just... I don't know if we'll ever, or even can, understand where it will go or where it will end.
00:13:17.462 --> 00:13:20.530
We see how individuals are using it now, such as creating pictures.
00:13:21.020 --> 00:13:32.789
Right, I'm looking at some of the common uses of it outside of the analytical points of it: people creating pictures. You know, a lot of your search engines now will primarily give you the AI results of the search, which is a summary of the sources that they cite.
00:13:32.789 --> 00:13:40.701
Uh, AI gets used, you know, in that way, from, like, the language model point of view, but then AI also gets used from a technical point of view.
00:13:40.701 --> 00:13:43.807
Um, I'm also reading...
00:13:43.807 --> 00:14:06.676
I started reading, a few weeks ago, a book, Moral AI and How We Get There, which is from Pelican Books, and I think it's by Borg, Sinnott-Armstrong and Conitzer (I'm so bad with names), which also opened up my eyes to AI and how AI impacts everybody in the world.
00:14:07.941 --> 00:14:10.990
I think it creates different iterations, right with AI.
00:14:10.990 --> 00:14:18.903
You know, clearly, you see AI practically anywhere. You had mentioned,
00:14:18.903 --> 00:14:31.029
you know, creating images for you, and it started with that, and then followed with creating videos for you now, and so much more. And then, you know, uh, Soren...
00:14:31.029 --> 00:14:32.452
You know I was trying to.
00:14:32.452 --> 00:14:39.341
I mean, I was listening to your episode. Um, you know, where does AI come into play in ERP, and where does it go from there?
00:14:39.341 --> 00:14:50.448
Right, I'm sure a lot of people are going to create different iterations of AI and Copilot in Business Central, and that is what I'm excited about.
00:14:51.280 --> 00:14:57.730
We're kind of scratching the surface in ERP. What else can it do for you in the business sense?
00:14:57.730 --> 00:15:05.173
Of course, there are different AIs with M365 and all the other Microsoft ecosystem product lines.
00:15:05.173 --> 00:15:11.312
What's next for businesses, especially in the SMB space?
00:15:11.312 --> 00:15:25.951
I think it's going to create a level playing field for SMBs, to be able to compete better, where they can focus more on strategy and be more tactical in the way they do business.
00:15:25.951 --> 00:15:44.609
So that's what I'm excited about, and I think a lot of us here on this call are the, I guess, curators, and that's where we become more business consultants, in a sense, of how you would run your business utilizing all these Microsoft tools and AI.
00:15:46.251 --> 00:15:46.913
I think yeah.
00:15:46.932 --> 00:15:48.514
I think, Go ahead.
00:15:48.534 --> 00:15:48.955
Christian.
00:15:49.035 --> 00:16:02.086
Okay, I think that we see some processes done by AI or agents which we never thought would be possible without a human doing them.
00:16:02.086 --> 00:16:23.004
What was presented is really mind-blowing: what level of steps and pre-decisions AI can make, offering a better result in the process until a human needs to interact with it.
00:16:23.004 --> 00:16:26.734
And I think that will go further and further and further.
00:16:26.734 --> 00:16:54.875
What I'm thinking is: where is the point where the human says, okay, there is a new point where I have the feeling that now I have to step into this process, because the AI is not good enough? And that point, or this frontier, is pushed on and on and on, something like that.
00:16:54.875 --> 00:17:05.109
But to have this feeling, to have in mind this is the thing AI cannot do.
00:17:05.109 --> 00:17:31.765
I have to be conscious and cautious. And I think, on the one hand, with AI we can handle more processes, we can make more decisions easily; and on the other hand, the temptation is high that we just accept what the AI is prompting or offering us.
00:17:32.506 --> 00:17:36.192
I like the concept of the human in the loop.
00:17:36.192 --> 00:17:50.684
So at least the human, at some point in this process, has to say: yes, I accept what the AI is suggesting. But having more time to process,
00:17:50.684 --> 00:17:54.750
more communication, is also critical,
00:17:54.750 --> 00:17:57.434
not just clicking yes, okay, okay, okay.
00:17:57.434 --> 00:18:17.671
I think we should implement processes where we just say: okay, let's look at how we use AI here, and take a step back and say, wow, what a number of steps AI can make for us.
00:18:17.671 --> 00:18:22.891
But also just think about where it goes too far.
00:18:25.701 --> 00:18:29.584
I think that's an interesting line of thinking, christian, and I think so.
00:18:29.584 --> 00:18:44.269
Before we go deeper, let me maybe just say that some of the stuff that we talk about in this episode like, if nothing else is mentioned, these are my personal opinions and may not reflect the opinions of Microsoft.
00:18:44.269 --> 00:19:14.053
Let me sort of get into product-specific stuff: I would like to take sort of a product's-eye view on what you just said, which is, when we look at agents these days: what can an agent do, what should be the scope of a given agent, and what should be its name? So now we've released some information about the sales order agent and described how it works, actually being fairly transparent about what it intends to do and how it works, which I think is great.
00:19:14.053 --> 00:19:24.291
We actually start by drawing up the process today, before the agent.
00:19:24.291 --> 00:19:26.015
How would this process look?
00:19:26.015 --> 00:19:29.910
Where are the human interactions between which parties?
00:19:29.910 --> 00:19:32.606
Now, bring in the agent.
00:19:34.142 --> 00:19:38.548
Now, how does that human-in-the-loop flow, let's say, look?
00:19:38.548 --> 00:19:43.088
Are there places where the human actually doesn't need to be in the loop?
00:19:43.088 --> 00:19:44.946
That's the idea.
00:19:47.099 --> 00:19:49.569
Don't bring in the human unless it's really necessary or adds value.
00:19:49.569 --> 00:19:54.405
So that's the line, that's the way that we think about it, to try to really apply.
00:19:54.405 --> 00:20:07.128
You know, if that A-to-Z process can remove the human, like, can automate a piece... we've always been trying to automate stuff, right, for many years.
00:20:07.128 --> 00:20:11.330
If AI can do that better now, well, let's do that.
00:20:11.330 --> 00:20:21.127
But of course, whenever there's a risk situation, or wherever there's a situation where the human can add value to a decision, by all means, let's bring the human into the loop.
00:20:21.127 --> 00:20:28.965
So that's the way that we think about the agents and the tasks that they should perform in whatever business process.
00:20:30.440 --> 00:20:43.950
And to your point, chris, I think that the cool thing about AI in ERP, as in Business Central these days, is that it becomes super concrete.
00:20:44.500 --> 00:20:55.491
Like, we take AI from something that is very, sort of, fluffy marketing and buzzwords that we all see online, and we make it into something that's very concrete.
00:20:55.491 --> 00:21:08.573
So the philosophy is that in BC (unless, of course, you're an ISV that needs to build something on top of it, or a partner or a customer wants to add more features), AI should be ready to use out of the box.
00:21:08.573 --> 00:21:15.627
You don't have to create a new AI project for your business, for your enterprise, to start leveraging AI.
00:21:15.627 --> 00:21:25.719
No, you just use AI features that are already there, immersed into the UI among all the other feature functions in Business Central.
00:21:25.719 --> 00:21:36.250
Because many small and medium businesses don't even have the budget to do a new AI project, hire data scientists and, what have you, all these things, create their own models.
00:21:36.250 --> 00:21:38.367
No, they should have AI ready to use.
00:21:38.367 --> 00:21:41.429
So that's another piece of our philosophy.
00:21:44.702 --> 00:21:45.044
AI is...
00:21:45.044 --> 00:21:52.067
I look at it more as AI as a function, because if you have AI as a function, you can get the efficiencies.
00:21:52.067 --> 00:22:01.451
I think, to some of the comments from the conversations that we've had and the conversations that I've heard, you look for efficiencies so that you can do something else.
00:22:01.451 --> 00:22:17.826
People want to use the words "something else", or something that they feel is more productive, and let automation or AI or "robots" (I use the word in quotes) do the tasks that are mundane, or some would consider boring or repetitive.
00:22:17.826 --> 00:22:25.244
And we do use AI on a daily basis in a lot of the tools that we have.
00:22:25.244 --> 00:22:37.055
To your point, Soren, it's just embedded within the application. If you buy a vehicle, a newer vehicle now, they have lane assist, collision avoidance, all of these AI tools that you just get in your vehicle.
00:22:37.055 --> 00:22:46.291
You either turn it on or turn it off, depending upon how you'd like to drive, and it works, and it helps the function, uh, be there for you.
00:22:46.291 --> 00:22:53.289
But to kind of take a step back from, um, AI in that respect...
00:22:54.250 --> 00:22:57.016
But a couple of things come up with AI.
00:22:57.016 --> 00:22:58.321
We talk about the vehicle.
00:22:58.321 --> 00:23:00.487
Um, I'll admit I have a Tesla.
00:23:00.487 --> 00:23:18.026
I love the FSD, and I use it a lot, and it just seems to improve and improve and improve, to the point where I think sometimes it can see things (I use the word "see"), or detect things, faster than I can as a human, right? Now,
00:23:18.026 --> 00:23:22.053
AI may not be perfect, and AI makes mistakes.
00:23:22.053 --> 00:23:23.022
Humans make mistakes.
00:23:23.022 --> 00:23:29.487
Humans get into car crashes and have accidents right for some reason, and we have accepted that.
00:23:29.487 --> 00:23:38.340
But if AI has an accident, we find fault or find blame in that process, instead of understanding that.
00:23:38.340 --> 00:23:44.353
You know, in essence, nothing is perfect, because humans make mistakes too and we accept it.
00:23:44.353 --> 00:23:48.809
Why don't we accept it when AI may be a little off?
00:23:51.763 --> 00:24:15.044
That's such a great question. And the fact is, I think, right now, that to a point we don't accept it. Like, we don't give machines that same benefit of the doubt; like, if they don't work, they're crap and we throw them out. But humans, we're much more forgiving; like, we give them a second chance.
00:24:15.144 --> 00:24:24.548
And: oh, maybe I didn't teach you well enough how to do it. But that's a good point, and I love your example with the Tesla.
00:24:24.548 --> 00:24:32.667
So I also drive a Tesla, but I'm not in the US, so I can't use the full self-driving capability, so I use the what do you call it?
00:24:32.667 --> 00:24:35.039
The semi-autonomous, so it can keep me within the lane.
00:24:35.039 --> 00:24:41.746
It reacts in an instant if something drives out in front of me much faster than I can do.
00:24:41.746 --> 00:24:48.262
So I love that mix of me being in control but just being assisted by these great features.
00:24:48.262 --> 00:24:52.351
That uh makes me drive in a much safer way.
00:24:52.351 --> 00:24:56.565
Basically, uh, I'm not sure I'm a proponent of sort of full self-driving.
00:24:56.565 --> 00:25:05.800
I don't know, I'm still torn about that, but, uh, that could lead us into a good discussion as well. Um, I think you have that trust because...
00:25:05.901 --> 00:25:13.201
I'm the same way, Brad. You know, I love it, um, as I, you know, continue to use it.
00:25:13.201 --> 00:25:15.686
But in the very beginning I could not trust that thing.
00:25:15.686 --> 00:25:17.872
I had my hand on the steering wheel.
00:25:17.872 --> 00:25:22.008
Um, you know, a white knuckle on the steering wheel.
00:25:22.008 --> 00:25:30.509
But, uh, eventually I came to accept it, and I was like, oh, it does a pretty good job, uh, getting me around.
00:25:30.509 --> 00:25:31.953
Uh, am I still cautious?
00:25:31.953 --> 00:25:37.848
Absolutely, I still want to make sure that I can quickly control something if I don't believe it's doing the right thing.
00:25:38.369 --> 00:25:54.809
So I think, um, actually, my reason for not being, sort of, a full believer in full self-driving, like complete autonomy with cars, is not so much because I don't... I mean, I actually do trust the technology to a large extent.
00:25:54.809 --> 00:26:05.432
It's more because of many of the reasons that are in that book that I pitched to all of you, that Moral AI book: like, who's responsible if something goes wrong?
00:26:05.432 --> 00:26:31.601
And there's this example in the book where an Uber car (I think it was a Volvo; they were testing an Uber car with some self-driving capabilities in some state) accidentally runs over a woman who's crossing the street in an unexpected place, and it was dark, and things of that nature, and the driver wasn't paying attention. And there were all these questions about who has the responsibility for that, at the end of the day.
00:26:31.601 --> 00:26:32.644
Was it the software?
00:26:32.644 --> 00:26:35.049
Was it the driver who wasn't paying attention?
00:26:35.049 --> 00:26:41.630
Was it the government, who allowed that car to be on that road in the first place,
00:26:41.769 --> 00:26:57.825
while testing it out? All of these things. And if we can't figure that out... all those things need to be figured out first, before you allow a technology loose like that, right? And so, I wonder if we can do that.
00:26:57.825 --> 00:27:08.178
If we can... we, like, we don't have a good track record of doing that, uh.
00:27:08.178 --> 00:27:19.000
So I wonder... I'm fairly sure the technology will get us there, if we can live with the, uh, times when it doesn't work well.
00:27:19.000 --> 00:27:26.811
So what happens if a self-driving car kills 20 people per year? Or cars, multiple?
00:27:26.811 --> 00:27:28.976
Um, can we live with that?
00:27:28.976 --> 00:27:35.266
What if 20 people is a lot better than 3,000 people from human drivers? Like, yeah, that is...
00:27:35.460 --> 00:27:37.704
I think in the United States there's 1.3.
00:27:37.704 --> 00:27:39.279
I don't... don't quote me on the statistics.
00:27:39.279 --> 00:27:47.730
I think I heard it again with all these conversations about self-driving and, you know, the Moral AI book, and listening to some other tools.
00:27:47.730 --> 00:27:52.290
I think in the United States it's 1.3 million fatalities due to automobiles a year.
00:27:52.290 --> 00:27:55.409
You know I forget if it's a specific type, but it's a lot.
00:27:55.409 --> 00:28:04.817
So, to get to your point, you know, not to focus on the, you know, driving portion, because there are a lot of topics we want to talk about:
00:28:04.817 --> 00:28:07.843
Is it safer?
00:28:07.843 --> 00:28:18.050
In a sense, because you may lose 20 individuals tragically in accidents per year, right, whereas before it was a million? Because of AI...
00:28:18.050 --> 00:28:22.346
You know, I joke, and I've had conversations with Chris talking about the Tesla.
00:28:22.346 --> 00:28:28.125
I trust the FSD a lot driving around here; in particular, I trust the FSD a lot more than I trust other people.
00:28:28.125 --> 00:28:52.332
And to your point about someone losing their life tragically, crossing in the evening at an unusual place and having a collision with a vehicle: that could happen with a person driving as well. And I've driven around, and the Tesla detected something before I saw it.
00:28:52.332 --> 00:29:03.776
So the reaction time is a little bit quicker than if you're driving. And it goes to a couple of points I want to talk about, which I'll bring up too: you know, too much trust, and de-skilling.
00:29:03.776 --> 00:29:05.686
I want to make sure we get to those points.
00:29:05.686 --> 00:30:02.099
And then also, if we're looking at analytics, some, you know, harm bias as well.
00:30:02.099 --> 00:30:06.351
And then to Christian's point and even your point where the humans are involved.
00:30:06.351 --> 00:30:09.469
Are the humans even capable, with the de-skilling?
00:30:09.469 --> 00:30:14.144
Because you don't have to do those tasks anymore to monitor the AI?
00:30:14.144 --> 00:30:16.932
You know, if you look back, I'm going to go on a little tear in a moment.
00:30:17.011 --> 00:30:24.720
In education, when I was growing up, we learned a lot of math, and we did not, you know, use calculators.
00:30:24.720 --> 00:30:27.718
I don't even know when the calculator was invented, but we weren't allowed to.
00:30:27.718 --> 00:30:29.545
You know, they taught us how to use a slide rule.