Welcome to Dynamics Corner Podcast!
Episode 353: In the Dynamics Corner Chair: The Role of AI: Ethics, Insights, and a Path Forward
Dec. 24, 2024

The Role of AI in Business Processes: Ethics, Insights, and a Path Forward

💻 + 🙋 How is AI shaking up business processes, especially in ERP systems like Business Central? In the latest episode of the Dynamics Corner podcast, Kris and Brad are joined by experts Søren Friis Alexandersen and Christian Lenz as they delve into critical issues of the day. We talk about using AI to meet business goals, why it's crucial to be clear about how AI makes decisions, and the ethics of using AI. We also look at balancing the human touch with AI automation, the risk of de-skilling due to reliance on AI, and the limits of what AI can do. The big takeaway? We need to be smart about how we bring AI into business.
Here are some other topics we covered:
🪟 Transparency is key when working with AI: Always know how AI makes its decisions.
🤖 AI doesn't stand on its own: Humans + AI = better results.
🍀 Ethics matter: We need to continually reflect on the moral aspects of AI use.
👾 Know the limits: We need to be clear on what AI can and can't do and define its boundaries to keep it effective.
🫶 Societal benefits: Yes, we have concerns, but how can AI benefit social progress?


Chapters

00:00 - The Role of AI in Society

15:49 - Utilizing AI in Business Processes

25:38 - The Ethical Dilemmas of AI Autonomy

41:13 - Implications of AI Automation in Work

52:59 - Human vs AI Decision Making

01:00:25 - Impact of AI on Human Skills

01:05:06 - Future Implications of AI Dependency

01:12:41 - The Future Implications of AI

Transcript

WEBVTT

00:00:00.240 --> 00:00:03.810
Welcome everyone to another episode of Dynamics Corner.

00:00:03.810 --> 00:00:07.588
Is AI a necessity for the survival of humanity?

00:00:07.588 --> 00:00:08.711
That's my question.

00:00:08.711 --> 00:00:11.569
I'm your co-host, Chris, and this is Brad.

00:00:11.640 --> 00:00:15.271
This episode was recorded on December 18th 2024.

00:00:15.271 --> 00:00:16.364
Chris, Chris, Chris.

00:00:16.364 --> 00:00:21.751
Is AI required for the survival of humanity?

00:00:21.751 --> 00:00:26.855
Is humanity creating the requirement for AI for survival?

00:00:26.855 --> 00:00:28.379
That's a good question.

00:00:28.379 --> 00:00:34.341
When it comes to AI, I have so many different questions, and there are so many points that I want to discuss about it.

00:00:34.341 --> 00:00:39.441
With us today, we had the opportunity to speak with Søren Friis Alexandersen and Christian Lenz about some of those topics.

00:00:39.441 --> 00:00:59.332
Good morning, good afternoon.

00:00:59.332 --> 00:00:59.973
How are?

00:00:59.993 --> 00:01:06.298
you doing there? There we go. Good day, good afternoon over the pond.

00:01:06.617 --> 00:01:07.097
How are you doing?

00:01:07.097 --> 00:01:14.457
Good morning, well, good good good, I'll tell you, soren, I love the video.

00:01:14.457 --> 00:01:15.099
What did you do?

00:01:15.099 --> 00:01:22.250
You have the nice, the nice blurred background, the soft lighting yeah, it's uh.

00:01:23.531 --> 00:01:25.834
You can see great things with a great camera.

00:01:27.540 --> 00:01:30.290
It looks nice, it looks really nice, christian.

00:01:30.290 --> 00:01:30.811
How are you doing?

00:01:31.801 --> 00:01:32.945
Fine, thank you very much.

00:01:35.263 --> 00:01:38.271
Your background's good too, I like it, it's real.

00:01:38.290 --> 00:01:39.093
Back to the future.

00:01:41.061 --> 00:01:49.430
It is good, it is good. But thank you both for joining us this afternoon, this morning, this evening, whatever it may be. I've been looking forward to this conversation.

00:01:49.430 --> 00:01:50.861
I was talking with Chris prior to this.

00:01:50.861 --> 00:01:54.355
This is probably the most prepared I've ever been for a discussion.

00:01:54.355 --> 00:01:56.421
How well prepared I am we'll see.

00:01:56.421 --> 00:02:07.031
Uh, because I have a lot of things that I would like to bring up based on some individual conversations we had via either voice or via text.

00:02:07.031 --> 00:02:16.367
And before we jump into that and get to that famous topic, can you tell everybody a little bit about yourself, Soren?

00:02:18.401 --> 00:02:21.485
Yes, so my name is Soren Alexandersen.

00:02:21.485 --> 00:02:32.372
I'm a product manager in the Business Central engineering team working on finance features basically rethinking finance with co-pilot and AI.

00:02:33.980 --> 00:02:35.467
Excellent, excellent Christian.

00:02:37.701 --> 00:02:38.907
Yeah, I'm Christian.

00:02:38.907 --> 00:02:41.769
I'm a development facilitator at CDM.

00:02:41.769 --> 00:02:44.147
We're a Microsoft Business Central partner.

00:02:44.147 --> 00:02:45.612
As a development facilitator at CDM,

00:02:45.612 --> 00:02:52.259
I'm responsible for the education of my colleagues in all the new topics, all the new stuff.

00:02:52.259 --> 00:03:03.429
I've been a developer in the past and a project manager, and now I take care of taking all the information in so that it leads to good solutions for our customers.

00:03:04.651 --> 00:03:06.966
Excellent, excellent, and thank you both for joining us again.

00:03:06.966 --> 00:03:14.349
You're both veterans and I appreciate you both taking the time to speak with us, as well as your support for the podcast over the years as well.

00:03:14.349 --> 00:03:32.169
And just to get into this, I know, Soren, you work with AI and with the agent portion (I'm simplifying some of the terms) within Business Central for the product group, and, you know, in our conversations you've turned me on to many things.

00:03:32.169 --> 00:03:52.675
One thing you turned me on to was a podcast called The Only Constant, and I was pleased, I think it was maybe at this point a week or so ago, maybe a little bit longer, to see that there was an episode where you were a guest on that podcast talking about AI and, you know, Business Central, ERP in particular.

00:03:52.819 --> 00:04:23.194
I mean, I think you referenced Business Central, but I think the conversation that you had was more around ERP software, and that got me thinking a lot about AI. And I know, Christian, you have a lot of comments on AI as well, but the way you ended that with, you know, nobody wants to do the dishes, is wonderful, which got my mind thinking about AI in detail and what AI is doing and how AI is shaping.

00:04:23.194 --> 00:04:34.411
You know business, how AI is shaping how we interact socially, how AI is shaping the world, so I was hoping we could talk a little bit about AI with everyone today.

00:04:34.411 --> 00:04:39.689
So with that, what are your thoughts on AI?

00:04:39.689 --> 00:04:45.879
And also, maybe, Christian, what do you think of when you hear of AI or artificial intelligence?

00:04:46.841 --> 00:04:55.971
I would say it's mostly a tool for me. Getting a little bit deeper into what it is,

00:04:55.971 --> 00:05:07.735
I'm not an AI expert, but I'm talking to people who try to elaborate how to use AI for the good of people.

00:05:07.735 --> 00:05:41.990
For example, I had a conversation with one of those experts from Germany just a few weeks before Directions, and he told me how to make use of custom GPTs. I got the concept and tried it a little bit, and when I got to Directions EMEA in Vienna at the beginning of November, the agents topic was everywhere, so it was Copilot and agents, and it prepared me a lot for how this concept is evolving and how fast it is evolving.

00:05:41.990 --> 00:06:20.084
So I'm not able to catch up on everything, but I have good connections to people who are experts in this and focus on this, and the conversations with those people, not only on the technical side but also on how to make use of it and what to keep in mind when using AI, are very crucial for me to make my own assumptions and decide on the direction where we should go as users, as partners for our customers, and to consult our customers.

00:06:20.704 --> 00:06:36.913
On the other side, with the evolving possibilities and capabilities of AI generating whole new interactions with people, it gets much harder to keep this barrier in mind.

00:06:36.913 --> 00:06:46.452
This is a machine doing something that I receive and this is not a human being or a living being that is interacting with me.

00:06:46.452 --> 00:07:11.649
It's really hard to have a bird's-eye view of what is really happening here, because the interaction we have with AI is so like human interaction that it is hard not to react as a human to it and then have an outside view of it.

00:07:11.649 --> 00:07:17.927
How can I use it and where is it good or bad, or something like that, that moral conversation we're trying to have.

00:07:17.927 --> 00:07:26.050
But having conversations about it and thinking about it helps a lot, I think.

00:07:27.312 --> 00:07:29.141
Yeah, it does. Soren,

00:07:29.141 --> 00:07:34.533
You have quite a bit of insight into the agents and working with AI.

00:07:34.533 --> 00:07:36.949
What are your comments on AI?

00:07:38.564 --> 00:07:40.591
I think I'll start from the same perspective as Christian.

00:07:40.591 --> 00:08:05.634
For me, AI is also a tool in the sense that, when looking at this from a business perspective, you have your business desires, your business goal, your business strategy, and whatever lever you can pull to get you closer to that business goal.

00:08:05.634 --> 00:08:07.706
AI might be a tool you can utilize for that.

00:08:07.706 --> 00:08:12.812
It's not a hammer to hit all of the nails.

00:08:12.812 --> 00:08:14.987
I mean it's not the tool to fix them all.

00:08:14.987 --> 00:08:18.528
In some cases it's not at all the right tool.

00:08:18.528 --> 00:08:21.348
In many cases it can be a fantastic tool.

00:08:21.348 --> 00:08:22.632
So that depends a lot on the scenario.

00:08:22.632 --> 00:08:22.884
It depends a lot on the goal.

00:08:30.279 --> 00:08:38.412
I will say that I'm fortunate in the way that I don't need to know the intricate details of every new GPT model that comes out and stuff like that.

00:08:38.412 --> 00:08:44.366
So that's too far for me to go; otherwise I could do nothing else.

00:08:44.366 --> 00:08:45.946
And to your point, Christian.

00:08:45.946 --> 00:08:48.625
So you said you're not an AI expert.

00:08:48.625 --> 00:08:55.003
But I mean, by modern standards and the AI that we typically talk about these days,

00:08:55.003 --> 00:08:58.956
well, LLMs have only been out there for such a short while.

00:08:58.956 --> 00:09:01.923
Who can actually be an AI expert yet?

00:09:01.923 --> 00:09:05.533
Right, I mean, it's been out there for a couple of years.

00:09:05.740 --> 00:09:10.148
In this modern incarnation, no one is an expert at this point.

00:09:10.148 --> 00:09:19.811
I mean, you have people who know more than me and us, maybe given in this audience here, but we all try to just learn every day.

00:09:19.811 --> 00:09:22.668
I think that's how I would describe it.

00:09:22.668 --> 00:09:28.291
There's some interesting things.

00:09:28.291 --> 00:09:31.570
I mean from my perspective as a product manager.

00:09:31.570 --> 00:09:41.033
What I'm placed in this world to do is to basically rank customer opportunities and problems.

00:09:41.033 --> 00:09:43.969
That's my primary job.

00:09:47.245 --> 00:09:50.572
Whether or not AI can help solve some of those opportunities or problems, great.

00:09:50.572 --> 00:10:03.107
So that's what I'm about to do: reassess all those things that I know about our customers, our joint customers and partners, and how AI can help those.

00:10:05.832 --> 00:10:17.572
Yeah, just when you started speaking about the dishwasher, it made me chuckle and say how can you relate that to why AI was invented?

00:10:17.572 --> 00:10:19.725
And I had to look it up.

00:10:19.725 --> 00:10:23.951
I looked up, you know why was the dishwasher invented?

00:10:23.951 --> 00:10:27.549
So I thought it was pretty interesting to share with the listeners.

00:10:27.549 --> 00:10:45.471
It was Josephine Cochran who invented the dishwasher, and her reasoning was to protect her china dishes, she didn't want to hand wash, and to free up time.

00:10:45.471 --> 00:10:49.686
And how relatable is that with AI?

00:10:49.686 --> 00:10:58.688
We want to free up our time to do other things and use AI to do them.

00:10:58.688 --> 00:11:09.587
In this case, she had noted that, to avoid hand washing, she wanted to create a machine that could wash dishes faster and more carefully than she could.

00:11:09.587 --> 00:11:28.548
So, in a sense, when AI is invented, you kind of want to have a tool in this case an AI tool to do other things for you, maybe better than you can and maybe more carefully in feeding you information.

00:11:28.548 --> 00:11:30.826
I don't know, but I thought that was pretty interesting.

00:11:31.659 --> 00:11:37.187
The relatable component there and that makes total sense to me.

00:11:37.187 --> 00:11:49.951
That makes sense in the sense that AI is very good at paying attention to detail that a human might overlook if we're tired or it's end of the day or early morning.

00:11:49.951 --> 00:12:00.184
Even so, there are so many relatable things in what you just said that apply to AI, or even just technology, I mean, and automation.

00:12:00.184 --> 00:12:04.750
It's not just AI, because IT is about automating stuff.

00:12:04.750 --> 00:12:07.667
AI just brings another level of automation.

00:12:08.590 --> 00:12:13.523
You could say it is a beneficial tool.

00:12:13.523 --> 00:12:23.152
But, Chris, to go back to your point with the invention of the dishwasher and maybe even the invention of AI, I don't know the history of AI and I'm not certain.

00:12:23.152 --> 00:12:26.274
If you know, I'm sure you could use AI to find the history of AI.

00:12:26.274 --> 00:12:28.277
But is AI one of those tools?

00:12:28.277 --> 00:12:35.594
I have so many thoughts around AI, and it's tough to find a way to unpack all of the comments that I have on it.

00:12:35.594 --> 00:12:44.993
But a lot of tools get created or invented without the intention of them being invented.

00:12:51.500 --> 00:12:54.567
You know it's sometimes you create a tool or you create a process or something comes of it and you're trying to solve one problem.

00:12:54.567 --> 00:13:05.711
Then you realize that you can solve many other problems by either implementing it slightly differently or, you know, combining it with another invention or tool that was created.

00:13:05.711 --> 00:13:07.815
So where does it end?

00:13:07.815 --> 00:13:17.462
And with AI, I think we're just, I don't know if we'll ever, or even can, understand where it will go or where it will end.

00:13:17.462 --> 00:13:20.530
We see how individuals are using it now, such as creating pictures.

00:13:21.020 --> 00:13:32.789
Right, I'm looking at some of the common uses of it outside of the analytical parts of it, people creating pictures. You know, a lot of your search engines now will primarily give you the AI results, which is a summary of sources that they cite.

00:13:32.789 --> 00:13:40.701
Uh, AI gets used, you know, that way, from the language model point of view, but then AI also gets used from a technical point of view.

00:13:40.701 --> 00:13:43.807
Um, I'm also reading.

00:13:43.807 --> 00:14:06.676
A few weeks ago I started reading a book, Moral AI and How We Get There, which is by Pelican Books, and I think it's Borg, Sinnott-Armstrong and Conitzer, I'm so bad with names, which also opened up my eyes to AI and how AI impacts everybody in the world.

00:14:07.941 --> 00:14:10.990
I think it creates different iterations, right with AI.

00:14:10.990 --> 00:14:18.903
You know, clearly, you see AI in almost practically anywhere you had mentioned.

00:14:18.903 --> 00:14:31.029
You know creating images for you and started with that and then followed with creating videos for you now and and so much more, and then you know, uh, sorted.

00:14:31.029 --> 00:14:32.452
You know I was trying to.

00:14:32.452 --> 00:14:39.341
I mean, I was listening to your episode um, you know, where does ai come into play in erp and where does it go from there?

00:14:39.341 --> 00:14:50.448
Right, I'm sure a lot of people are going to create different iterations of AI and Copilot and Business Central, and that is where I'm excited about.

00:14:51.280 --> 00:14:57.730
We're kind of scratching the surface in the ERP and what else can it do for you in the business sense?

00:14:57.730 --> 00:15:05.173
Of course, there's different AIs with M365 and all the other Microsoft ecosystem product lines.

00:15:05.173 --> 00:15:11.312
What's next for businesses, especially in the SMB space?

00:15:11.312 --> 00:15:25.951
I think it's going to create a level playing field for SMBs to be able to compete better and where they can focus more on strategy and be more tactical in the way they do business.

00:15:25.951 --> 00:15:44.609
So that's where I'm excited about and and I think a lot of us here in this call we're the, I guess, curator and and that's where we become more of business consultants in a sense of how you would run your business utilizing all these Microsoft tools and AI.

00:15:46.251 --> 00:15:46.913
I think yeah.

00:15:46.932 --> 00:15:48.514
I think, Go ahead.

00:15:48.534 --> 00:15:48.955
Christian.

00:15:49.035 --> 00:16:02.086
Okay, I think that we see some processes done by AI or agents which we never thought would be possible without doing the human.

00:16:02.086 --> 00:16:23.004
What was presented is really mind what level of steps and pre decisions AI can make and offer a more, better result into the process until a human needs to interact to that.

00:16:23.004 --> 00:16:26.734
And I think that will go further and further and further.

00:16:26.734 --> 00:16:54.875
What I'm thinking is where is the point where the human says okay, there is a new point where I have the feeling that now I have to grab into this process because the AI is not good enough and that point is, or this frontier is, leveraged on and on and on, something like that.

00:16:54.875 --> 00:17:05.109
But to have this feeling, to have in mind this is the thing AI cannot do.

00:17:05.109 --> 00:17:31.765
I have to be conscious and cautious and I think, on the one hand side, with AI we can make more processes, we can make more decisions easily, and on the other side, the temptation is high that we just accept what the AI is prompting to us or offering us.

00:17:32.506 --> 00:17:36.192
I like the concept of the human in the loop.

00:17:36.192 --> 00:17:50.684
So at least the human at some point in this process has to say, yes, I accept what the AI is suggesting, but having more time to process.

00:17:50.684 --> 00:17:54.750
More communication is also critical.

00:17:54.750 --> 00:17:57.434
Just to click yes, okay, okay, okay.

00:17:57.434 --> 00:18:17.671
I think we should implement processes where we just say, okay, let's look at how we use AI here, and step back a little bit and say, wow, what a number of steps AI can make for us.

00:18:17.671 --> 00:18:22.891
But just think where it just goes too far.

00:18:25.701 --> 00:18:29.584
I think that's an interesting line of thinking, Christian.

00:18:29.584 --> 00:18:44.269
Before we go deeper, let me maybe just say that some of the stuff that we talk about in this episode like, if nothing else is mentioned, these are my personal opinions and may not reflect the opinions of Microsoft.

00:18:44.269 --> 00:19:14.053
Let's sort of get into product-specific stuff, but I would like to take sort of a product's-eye view on what you just said, which is: when we look at agents these days, what can an agent do, what should be the scope of a given agent, and what should be its name? So now we've released some information about the sales order agent and described how it works, actually being fairly transparent about what it intends to do and how it works, which I think is great.

00:19:14.053 --> 00:19:24.291
We actually start by drawing up the process as it is today, before the agent.

00:19:24.291 --> 00:19:26.015
How would this process look?

00:19:26.015 --> 00:19:29.910
Where are the human interactions between which parties?

00:19:29.910 --> 00:19:32.606
Now bring in the agent?

00:19:34.142 --> 00:19:38.548
Now, what does that human-in-the-loop, let's say, flow look like?

00:19:38.548 --> 00:19:43.088
Are there places where the human actually doesn't need to be in the loop?

00:19:43.088 --> 00:19:44.946
That's the idea.

00:19:47.099 --> 00:19:49.569
Don't bring in the human unless it's really necessary or adds value.

00:19:49.569 --> 00:19:54.405
So that's the line, that's the way that we think about it, to try to really apply it.

00:19:54.405 --> 00:20:07.128
You know, if that A-to-Z process can remove the human, like, can automate a piece. We've always been trying to automate stuff, right, for many years.

00:20:07.128 --> 00:20:11.330
If AI can do that better now, well, let's do that.

00:20:11.330 --> 00:20:21.127
But of course, whenever there's a risk situation or wherever there's a situation where the human can add value to a decision, by all means let's bring in the human into the loop.

00:20:21.127 --> 00:20:28.965
So that's the way that we think about the agents and the tasks that they should perform in whatever business process.

00:20:30.440 --> 00:20:43.950
And to your point, Chris, I think that the cool thing about AI in ERP, as in Business Central these days, is that it becomes super concrete.

00:20:44.500 --> 00:20:55.491
Like we take AI from something that is very sort of fluffy and marketing and buzzwords that we all see online and we make it into something that's very concrete.

00:20:55.491 --> 00:21:08.573
So the philosophy is that in BC, unless, of course, you're an ISV that needs to build something on top of it, or a partner or a customer wants to add more features, AI should be ready to use out of the box.

00:21:08.573 --> 00:21:15.627
You don't have to create a new AI project for your business, for your enterprise to start leveraging AI?

00:21:15.627 --> 00:21:25.719
No, you just use AI features that are already there, immersed into the UI and among all other feature functions in Business Central.

00:21:25.719 --> 00:21:36.250
Because many small and medium businesses don't even have the budget to do their own AI project and hire data scientists and, what have you, create their own models.

00:21:36.250 --> 00:21:38.367
No, they should have AI ready to use.

00:21:38.367 --> 00:21:41.429
So that's another piece of our philosophy.

00:21:44.702 --> 00:21:45.044
AI is.

00:21:45.044 --> 00:21:52.067
I look at that more as AI as a function, because if you have AI as a function, you can get the efficiencies.

00:21:52.067 --> 00:22:01.451
I think, to some of the comments from the conversations that we've had and the conversations that I've heard, you look for efficiencies so that you can do something else.

00:22:01.451 --> 00:22:17.826
People want to use the words something else, or something that they feel is more productive, and let automation or AI or robots, I use the word in quotes, do the tasks that are mundane or that some would consider boring or repetitive.

00:22:17.826 --> 00:22:25.244
And we do use AI on a daily basis in a lot of the tools that we have.

00:22:25.244 --> 00:22:37.055
To your point, Soren, it's just embedded within the application. If you buy a vehicle, a newer vehicle now, they have lane assistance, collision avoidance, all of these AI tools that you just get in your vehicle.

00:22:37.055 --> 00:22:46.291
You either turn it on or turn it off, depending upon how you'd like to drive, and it works, and it helps, the function is there for you.

00:22:46.291 --> 00:22:53.289
But to kind of take a step back from AI in that respect.

00:22:54.250 --> 00:22:57.016
But a couple of things that come up with AI.

00:22:57.016 --> 00:22:58.321
We talk about the vehicle.

00:22:58.321 --> 00:23:00.487
Um, I'll admit I have a Tesla.

00:23:00.487 --> 00:23:18.026
I love the FSD and I use it a lot, and it just seems to improve and improve and improve to the point where I think sometimes it can see things, I use the word see, or detect things faster than I can as a human, right.

00:23:18.026 --> 00:23:22.053
Now, AI may not be perfect, and AI makes mistakes.

00:23:22.053 --> 00:23:23.022
Humans make mistakes.

00:23:23.022 --> 00:23:29.487
Humans get into car crashes and have accidents right for some reason, and we have accepted that.

00:23:29.487 --> 00:23:38.340
But if AI has an accident, we find fault or find blame in that process, instead of understanding that.

00:23:38.340 --> 00:23:44.353
You know, in essence, nothing is perfect, because humans make mistakes too and we accept it.

00:23:44.353 --> 00:23:48.809
Why don't we accept it when AI may be a little off?

00:23:51.763 --> 00:24:15.044
That's such a great question, and the fact is, I think, right now, that to a point we don't accept it. We don't give machines that same benefit of the doubt; if they don't work, it's crap and we throw them out. But with humans, we're much more forgiving, we give them a second chance.

00:24:15.144 --> 00:24:24.548
And, oh, maybe I didn't teach you well enough how to do it, or so. But that's a good point, and I love your example with the Tesla.

00:24:24.548 --> 00:24:32.667
So I also drive a Tesla, but I'm not in the US, so I can't use the full self-driving capability, so I use the what do you call it?

00:24:32.667 --> 00:24:35.039
The semi-autonomous, so it can keep me within the lane.

00:24:35.039 --> 00:24:41.746
It reacts in an instant if something drives out in front of me much faster than I can do.

00:24:41.746 --> 00:24:48.262
So I love that mix of me being in control but just being assisted by these great features.

00:24:48.262 --> 00:24:52.351
That uh makes me drive in a much safer way.

00:24:52.351 --> 00:24:56.565
Basically, uh, I'm not sure I'm a proponent of sort of full self-driving.

00:24:56.565 --> 00:25:05.800
I don't know, I'm still torn about that, but, uh, that could lead us into a good discussion as well. I think you have that trust because, I mean,

00:25:05.901 --> 00:25:13.201
I'm the same way with Brad, you know, I love it, um, as I, you know, continue to use it.

00:25:13.201 --> 00:25:15.686
But in the very beginning I could not trust that thing.

00:25:15.686 --> 00:25:17.872
I had my hands on the steering wheel.

00:25:17.872 --> 00:25:22.008
Um, you know, white knuckles on the steering wheel.

00:25:22.008 --> 00:25:30.509
But, uh, eventually I came to accept it, and I was like, oh, it does a pretty good job getting me around.

00:25:30.509 --> 00:25:31.953
Uh, am I still cautious?

00:25:31.953 --> 00:25:37.848
Absolutely, I still want to make sure that I can quickly control something if I don't believe it's doing the right thing.

00:25:38.369 --> 00:25:54.809
So I think, actually, my reason for not being sort of a full believer in full self-driving, like complete autonomy with cars, is not so much because I don't, I mean, I actually do trust the technology to a large extent.

00:25:54.809 --> 00:26:05.432
It's more because of many of the reasons that are in that book that I pitched to all of you, that Moral AI book, like, who's responsible if something goes wrong?

00:26:05.432 --> 00:26:31.601
And there's this example in the book where an Uber car, it was a Volvo, I think, they test an Uber car with some self-driving capabilities in some state, and it accidentally runs over a woman who's crossing the street in an unexpected place, and it was dark and things of that nature, and the driver wasn't paying attention, and there were all these questions about who has the responsibility for that at the end of the day.

00:26:31.601 --> 00:26:32.644
Was it the software?

00:26:32.644 --> 00:26:35.049
Was it the driver who wasn't paying attention?

00:26:35.049 --> 00:26:41.630
Was it the, the government who allowed that car to be on that road in the first place?

00:26:41.769 --> 00:26:57.825
But while testing it out, all of these things, if we can't figure that out... all those things need to be figured out first before you allow a technology loose like that, right? And I wonder if we can do that.

00:26:57.825 --> 00:27:08.178
If we can... we don't have a good track record of doing that.

00:27:08.178 --> 00:27:19.000
So I wonder. I'm fairly sure the technology will get us there, if we can live with it when it doesn't work well.

00:27:19.000 --> 00:27:26.811
So what happens if a self-driving car kills 20 people per year, or multiple cars do?

00:27:26.811 --> 00:27:28.976
Um, can we live with that?

00:27:28.976 --> 00:27:35.266
What if 20 people is a lot better than 3,000 people from human drivers? Like, yeah, that is...

00:27:35.460 --> 00:27:37.704
I think in the United States there's 1.3.

00:27:37.704 --> 00:27:39.279
Don't quote me on the statistics.

00:27:39.279 --> 00:27:47.730
I think I heard it again with the all these conversations about self-driving and you know the Moralei book and listen to some other tools.

00:27:47.730 --> 00:27:52.290
I think in the United States there are 1.3 million fatalities due to automobiles a year.

00:27:52.290 --> 00:27:55.409
You know I forget if it's a specific type, but it's a lot.

00:27:55.409 --> 00:28:04.817
So, to get to your point, you know, not to focus on the driving portion, because there are a lot of topics we want to talk about.

00:28:04.817 --> 00:28:07.843
Is it safer?

00:28:07.843 --> 00:28:18.050
In a sense, because you may lose 20 individuals tragically in an accident per year, right, whereas before it was a million because AI?

00:28:18.050 --> 00:28:22.346
You know, I joke, and I've had conversations with Chris talking about the Tesla.

00:28:22.346 --> 00:28:28.125
I trust the FSD a lot driving around here in particular, I trust the FSD a lot more than I trust other people.

00:28:28.125 --> 00:28:52.332
And to your point of someone losing their life tragically, crossing in the evening at an unusual place and having a collision with a vehicle, that could happen with a person doing it as well, and I've driven around and the Tesla detected something before I saw it.

00:28:52.332 --> 00:29:03.776
So the reaction time is a little bit quicker than if you're driving, right. And it goes to a couple of points I want to talk about, which I'll bring up: you know, too much trust and de-skilling.

00:29:03.776 --> 00:29:05.686
I want to make sure we get to those points.

00:29:05.686 --> 00:30:02.099
And then also, if we're looking at analytics, some, you know, harm bias as well.

00:30:02.099 --> 00:30:06.351
And then to Christian's point and even your point where the humans are involved.

00:30:06.351 --> 00:30:09.469
Are the humans even capable, with the de-skilling?

00:30:09.469 --> 00:30:14.144
Because you don't have to do those tasks anymore to monitor the AI?

00:30:14.144 --> 00:30:16.932
You know, if you look back, I'm going to go on a little tear in a moment.

00:30:17.011 --> 00:30:24.720
In education, when I was growing up, we learned a lot of math and we did not, you know, use calculators.

00:30:24.720 --> 00:30:27.718
I don't even know when the calculator was invented, but we weren't allowed to.

00:30:27.718 --> 00:30:29.545
You know, they taught us how to use a slide rule.

00:30:30.487 --> 00:30:36.064
They taught us how to use, believe it or not, when I was really young, an abacus, and back then I could do math really, really well.

00:30:36.064 --> 00:30:45.845
Now, with the, you know, ease of using calculators, ease of using your phone or ease of even using AI to do math equations?

00:30:47.007 --> 00:30:49.189
can you even do math as quickly as you used to?

00:30:49.189 --> 00:30:53.294
So how can you monitor a tool that's supposed to be calculating math, for example?

00:30:54.655 --> 00:30:58.941
I think you're, I mean, you have very good points about that.

00:30:58.941 --> 00:31:06.582
Just coming back to the car for a second, because, I mean, technology will speak for itself and what it's capable of, I think.

00:31:06.582 --> 00:31:23.230
I think where we have to take some decisions that we haven't had to before is when we dial up the autonomy to 100% and the car drives completely on its own, because then you need to be able to question how does it make decisions?

00:31:23.230 --> 00:31:27.313
And get insights into how it makes decisions, based on what?

00:31:27.313 --> 00:31:32.516
Who determines how large an object has to be before the car will stop rather than run it over?

00:31:32.536 --> 00:31:45.250
So I think back in the old days in Denmark, insurance companies wouldn't cover if the object you ran over was smaller than a small dog, something like that.

00:31:45.250 --> 00:31:48.028
So who set those rules?

00:31:48.028 --> 00:31:54.568
And the same thing for the technology too: Should I just run that pheasant over, or should I stop?

00:31:54.568 --> 00:31:55.211
For the pheasant?

00:31:55.211 --> 00:31:58.266
Those kind of decisions.

00:31:58.266 --> 00:32:05.288
But if it's a human driving in control, we can always just point to the human and say, yeah, you need to follow the rules, and here they are.

00:32:05.288 --> 00:32:19.008
But if it's a machine, all kinds of things come up, and eventually, if the machine fails or we end up in some situation where there's a dilemma, who's responsible, who's accountable? Those just become very hard questions.

00:32:19.008 --> 00:32:34.553
I don't have the answer, but I think when we dial up the autonomy to that level, we need to be able to, you know, and we need to talk about, what level of transparency can I demand as a user or as a bystander or whatever?

00:32:34.553 --> 00:32:35.944
So there's just so many questions.

00:32:35.944 --> 00:32:37.328
That opens up, I think.

00:32:39.820 --> 00:32:51.772
And if you are allowed to turn off AI assistance, will you, at some point in time, when a failure occurs, be responsible for having turned that assistance off?

00:32:53.895 --> 00:32:54.896
That's a very good point.

00:32:55.361 --> 00:32:56.144
Someone could say.

00:32:56.144 --> 00:33:02.492
So you have to keep in mind that with assistance you're better.

00:33:02.492 --> 00:33:09.809
Like in the podcast episode you mentioned, a human together with a machine is better than the machine.

00:33:09.809 --> 00:33:17.813
Or the other way, you could say a human with a machine is better than another human or just a human alone.

00:33:17.813 --> 00:33:34.153
And I think at some point in time, companies who are looking for accountability and responsibility will increase the requirement that you have to turn on AI assistance.

00:33:35.480 --> 00:33:58.644
You could imagine, when you get into a car that recognizes you as a driver, your facial expression or something like that, that it can recognize whether you're able to drive or not, and then the question is, will it allow you to drive, or will it decide, no, don't touch the wheel, I will drive, or something like that.

00:33:58.644 --> 00:34:07.009
Or if something pops up you're not able to drive, I decide that for you and I won't start the engine.

00:34:07.009 --> 00:34:09.403
Will you override it or not?

00:34:09.403 --> 00:34:13.380
Those are the scenarios that pop up in my mind.

00:34:13.380 --> 00:34:21.699
And how will you decide as a human when you have something, uh, an emergency, happening?

00:34:21.699 --> 00:34:25.532
You have to drive someone to the hospital or something like that.

00:34:25.532 --> 00:34:31.389
You will override, but will the system ask is it really an emergency?

00:34:31.389 --> 00:34:32.291
Or something like that?

00:34:32.291 --> 00:34:35.028
You say I just want to do this.

00:34:35.028 --> 00:34:38.469
How are you reacting in this moment?

00:34:40.121 --> 00:34:42.168
I think that's super interesting.

00:34:42.168 --> 00:35:10.306
And coming back to the transparency thing, one of my favorite examples is if I go to the bank and I need to borrow some money. For many years, even before AI, there's been some algorithm that the bank person probably doesn't even know how it works, but they can just see a red or green light after I ask. So, okay, how much money do you want to borrow?

00:35:10.306 --> 00:35:11.842
Oh, I want to borrow 100K.

00:35:11.842 --> 00:35:13.965
No, you can't do that, sorry.

00:35:13.965 --> 00:35:16.648
Uh, machine says no, right.

00:35:16.648 --> 00:35:24.909
And, even before AI, if something is complex enough, it doesn't really matter if it's AI or not.

00:35:25.530 --> 00:35:33.251
But in these sort of life-impacting situations, do I have a right to transparency?

00:35:33.251 --> 00:35:38.130
Do I have a right to know why they say no to lending me money, for example?

00:35:38.130 --> 00:35:44.567
The same if I get rejected for a job interview based on some decision made by an algorithm or AI.

00:35:44.567 --> 00:35:52.990
These are very serious situations that will impact my life. And of course,

00:35:53.030 --> 00:36:03.253
you can't claim transparency everywhere, but I think there are some of these situations where, as humans, we do have a right to transparency and to know how these things decide.

00:36:03.253 --> 00:36:08.005
And there is a problem if the person who's conveying the information to us,

00:36:08.005 --> 00:36:11.954
the bank person, doesn't even have that insight, doesn't even know how it works.

00:36:11.954 --> 00:36:17.952
They just push the button and then the light turns red or green.

00:36:17.952 --> 00:36:45.086
So, yeah, but again, so many questions, and that's why I'm actually happy that today, I don't know if you saw it, we released a documentation article for BC about the sales order agent that, in a very detailed way, describes what this agent does, what it tries to do, what kind of data it has access to, what kind of permissions it has, all these things.

00:36:45.086 --> 00:36:51.286
I think that's a very, very transparent way of describing a piece of AI and I'm actually very, very proud of that.

00:36:51.286 --> 00:36:51.768
We're doing that.

00:36:52.730 --> 00:36:55.228
Yeah, just wanted to make that segue.

00:36:56.121 --> 00:37:06.744
Yeah, it's filling the need of humans to know how the system works or how the system makes decisions

00:37:06.744 --> 00:37:25.362
to proceed to the next step. Because I think there's a need to have a view on whether what has happened before, and has an influence on me as a human, is judged in a way that is doing good for me or not.

00:37:25.362 --> 00:37:31.634
Like your example, what is evaluated when you ask for a bank credit or something like that.

00:37:31.634 --> 00:37:49.713
And having this transparency brings us back to: yes, I have an influence where it is needed, because I can override the AI, because I can see where it makes a wrong decision or a wrong step or something like that.

00:37:49.713 --> 00:38:01.952
Like I would do when I talk to my bank account manager and say, hey, does it have the old address?

00:38:01.952 --> 00:38:03.134
I moved already.

00:38:03.134 --> 00:38:05.137
Oh no, it's not in the system.

00:38:05.137 --> 00:38:12.585
Let's change that and then make another evaluation or something like that.

00:38:12.606 --> 00:38:24.552
And I think this autonomy for us as users to keep this in play, that we can override it or we can add information, new information, in some kind of way.

00:38:31.166 --> 00:38:35.429
We can just do it when we know where this information is taken from, how old it is, and how it is processed.

00:38:35.429 --> 00:38:38.572
So I like that approach very much.

00:38:38.572 --> 00:38:58.704
I don't think every user is looking at it, but an ERP system owner, like I am in our company as well, needs to have answers to those questions for our users when we use these features. But it's true.

00:38:58.724 --> 00:39:01.568
Yeah, coming back, just to come back to the banking example again.

00:39:01.568 --> 00:39:19.751
So the bank person probably doesn't know if their AI or algorithm takes into account how many pictures they can find of me on Facebook where I'm holding a beer, like, would that be an influencing factor on whether they want to lend me money?

00:39:19.751 --> 00:39:20.922
So all these things.

00:39:20.922 --> 00:39:26.005
But we just don't have that insight and I think that's a problem in many cases.

00:39:26.005 --> 00:39:34.025
You could argue I don't know how the Tesla autopilot does its.

00:39:34.025 --> 00:39:43.889
You know whatever influences it to take decisions, but that's why I like the semi-autonomous piece of work right now.

00:39:45.512 --> 00:39:47.021
No, it is, I think.

00:39:47.021 --> 00:39:53.454
But listening to what you're saying, I do like the transparency, or at least the understanding.

00:39:53.454 --> 00:39:57.990
I like the agent approach because you have specific functions.

00:39:57.990 --> 00:40:03.932
I do like the transparency so that you understand what it does, so you know what it's making a decision on.

00:40:03.932 --> 00:40:08.472
So if you're going to trust it in a sense or you want to use the information, you have to know where it came from.

00:40:08.472 --> 00:40:14.864
AI or computers in general can process data much faster than humans.

00:40:14.864 --> 00:40:24.809
So, being able to go back to your bank credit check example, it can process much more information than a person can.

00:40:24.809 --> 00:40:36.643
I mean, a person could come up with the same results, but maybe not as quickly as a computer can, as long as that information is available to it.

00:40:36.643 --> 00:40:47.407
But I do think for certain functions the transparency needs to be there because in the case of bank credit, how can you improve your credit if you don't know what's being evaluated to maybe work on or correct that?

00:40:51.070 --> 00:40:56.561
Or, to Christian's point, there may be some misinformation in there that, you know, for whatever reason, was in there, that's impacting things, so you need to correct it.

00:40:57.043 --> 00:41:00.429
Some other things, to the point that Christian also made.

00:41:00.429 --> 00:41:04.385
You know, a human with a machine is better than a human alone.

00:41:04.385 --> 00:41:13.130
You know, potentially in some cases, because the machine can be the tool to help you do something, whatever it may be.

00:41:13.130 --> 00:41:16.076
You referenced the hammer before and I use that example a lot.

00:41:16.076 --> 00:41:17.905
You have hammers, you have screwdrivers, you have air guns.

00:41:17.905 --> 00:41:20.177
Which tools do you use to do the job?

00:41:20.177 --> 00:41:21.782
Well, it depends on what you're trying to put together.

00:41:21.782 --> 00:41:29.768
Are you doing some rough work on a house where you need to put up the frame, so maybe a hammer or an air gun will work, and if you're doing some finish work, maybe you need a screwdriver.

00:41:29.768 --> 00:41:31.510
You know, with a small screw to do something.

00:41:31.510 --> 00:41:33.954
So there does have to be a decision made.

00:41:33.954 --> 00:41:39.253
And at what point can AI make that decision versus a human make that decision?

00:41:39.253 --> 00:41:43.210
And, to your point, where do you have that human interaction?

00:41:43.210 --> 00:41:51.148
But I want to go with the human interaction of de-skilling, because if you have all these tools that we rely on.

00:41:51.168 --> 00:41:58.965
To go back to the calculator, and you know we've all been reading, you know I think we all read the same book and I think we all listened to some of the same episodes.

00:41:58.965 --> 00:42:06.548
But you look at pilots and planes with autopilots, right, same thing with someone driving a vehicle, like, do you lose the skill?

00:42:09.568 --> 00:42:10.715
You know, AI does such a large portion of flying a plane.

00:42:10.715 --> 00:42:11.519
I didn't even really think about that.

00:42:12.000 --> 00:42:15.809
You know, the most difficult or the most dangerous part is what?

00:42:15.809 --> 00:42:19.385
The taking off and landing of a plane, and that's where AI gets used the most.

00:42:19.385 --> 00:42:23.442
And then a human is in there to take over in the event that AI fails.

00:42:23.442 --> 00:42:34.148
But if the human isn't doing it often, right, even with the reaction time, okay, well, how quickly can a human react? You know, to a defense system?

00:42:34.148 --> 00:42:46.003
Same thing, you know, if you look at the Patriot missile examples, where you know the Patriot missile detects a threat in a moment and then will go up and try to, you know, disarm the threat.

00:42:46.003 --> 00:42:52.949
So at what point do we as humans lose a skill?

00:42:52.949 --> 00:43:01.637
Because we become dependent upon these tools and we may not know what to do in a situation because we lost that skill.

00:43:04.382 --> 00:43:05.085
That's a good point.

00:43:05.085 --> 00:43:07.106
Sorry, go ahead.

00:43:07.106 --> 00:43:08.181
No, it's a really good point.

00:43:09.322 --> 00:43:30.376
I like that example from, I think it was from the Moral AI book as well, where there's this example of some military people that, you know, they sit in their bunker somewhere and handle these drones day in and day out, and, because they're so autonomous, everything happens without their involvement.

00:43:30.376 --> 00:43:37.840
You know they don't need to be involved, but then suddenly a situation occurs.

00:43:37.840 --> 00:43:43.349
They need to react in sort of a split second and take a decision, and I think one of the outcomes was, you know, their manager saying that,

00:43:43.349 --> 00:43:47.514
Well, who can blame them if they take a wrong decision at that point?

00:43:47.514 --> 00:43:55.405
Because it's three hours of boredom and then it's three seconds of action.

00:43:55.405 --> 00:43:56.849
So they, they're just not feeling it.

00:43:56.969 --> 00:44:05.402
Where, to your point, right, they're being de-skilled for two hours and 57 minutes, and now there's three minutes of action where everything happens.

00:44:05.402 --> 00:44:14.003
Right, who can expect that they keep up the level of, you know, skills and what have you, if they're just not involved?

00:44:14.003 --> 00:44:15.786
So it's super interesting point.

00:44:15.786 --> 00:44:23.163
Um, yeah, so many, so many questions that it raises.

00:44:23.583 --> 00:44:31.085
Uh, this, it goes on and on and on, and it is in that Moral AI book, and it was the Patriot missile example.

00:44:31.085 --> 00:44:37.440
Because the Patriot missile had two failures, one with a British jet and one with an American jet shortly thereafter.

00:44:37.440 --> 00:44:44.094
And that's what they were talking about: how do you put human intervention in there, you know, to reconfirm a launch?

00:44:44.094 --> 00:44:47.708
Because in the event, if it's a threat, I'll use the word threat,

00:44:47.708 --> 00:44:52.438
How much time do you have to immobilize that threat?

00:44:52.438 --> 00:44:54.583
Right, you may only have a second or two.

00:44:54.583 --> 00:44:56.067
I mean, things move quickly.

00:44:56.067 --> 00:45:10.730
In the case of the Patriot missile, again, it was intended to disarm, you know, missiles that are coming at you, that are being launched, you know, over the pond, as they say, so it can take them down, and that's the point with that.

00:45:11.170 --> 00:45:14.514
And if I could step back for a second.

00:45:14.514 --> 00:45:31.289
You know, when we're having a conversation about the usefulness of AI, it's based upon the sources that it has access to and, you know, understanding where it's getting its sources from and what access it has.

00:45:31.289 --> 00:45:55.489
If you're limiting the sources that it can consume to be a better tool, are we potentially limiting its capabilities as well? Because we want to control it so much, in a sense, to where it's more focused, but are we also limiting its potential, right?

00:45:55.489 --> 00:46:00.295
Yes, so yeah, go ahead, sorry.

00:46:01.280 --> 00:46:08.568
Yeah, no, I think that's very well put and I think that's a consequence and I think that's fine.

00:46:08.568 --> 00:46:11.829
I mean, just take the sales order agent again as an example.

00:46:11.829 --> 00:46:17.047
We have guardrailed it very hard.

00:46:17.047 --> 00:46:23.349
We put many constraints up for it, so it can only do a certain number of tasks.

00:46:23.349 --> 00:46:27.083
It can only do tasks A, B and C. D, E, F

00:46:27.083 --> 00:46:27.684
it cannot do.

00:46:27.684 --> 00:46:30.449
We had to set some guardrails for what it can do.

00:46:31.721 --> 00:46:40.068
It's not just about, and I think this is a misconception, sometimes people think about agents and say, here's an agent, here are the keys to my kingdom.

00:46:40.068 --> 00:46:48.490
Now, agent, you can just do anything in this business, in this system, and user will tell you what to do or we've given you a task.

00:46:48.490 --> 00:46:51.347
That's not our approach to agents.

00:46:51.347 --> 00:46:51.748
In BC.

00:46:51.748 --> 00:46:58.329
We basically said here's an end-to-end process or a process that has sort of a natural beginning and a natural ending.

00:46:58.329 --> 00:47:06.429
In between that process you can trigger the agent in various places, but the agent has a set instruction.

00:47:08.402 --> 00:47:13.106
You receive inquiries for products and eventually you'll create a sales order.

00:47:13.106 --> 00:47:23.465
Like everything in between there could be all kinds of you know human in the loop and discussions back and forth, but that's the limit of what that agent can do and that's totally fine.

00:47:23.465 --> 00:47:24.945
It's not fully autonomous.

00:47:24.945 --> 00:47:38.409
You can't just now go and say, oh, by the way, buy more inventory for our stock, that's out of scope for it, and at that point I think that's totally fine.

00:47:38.409 --> 00:47:54.266
And it's about finding those good use cases where there is a process to be automated, where the agent can play a part, and not about just creating, let's call it, a super agent that can do anything.

00:47:54.266 --> 00:47:57.713
So I think it's a very natural development.

00:47:58.559 --> 00:48:06.360
So you don't aim for a T-shaped profile agent, like it is in many job descriptions now.

00:48:06.360 --> 00:48:11.833
You want a T-shape profile employee with a broad and deep knowledge.

00:48:11.833 --> 00:48:19.393
We as humans can develop this, but the agent approach is different.

00:48:19.393 --> 00:48:29.000
I would rather say it's not limiting the agent or the AI in its input or capabilities.

00:48:29.000 --> 00:48:33.306
It is more like going deeper, having deep knowledge.

00:48:33.306 --> 00:48:38.155
In this specific functionality, the AI agent is assisting.

00:48:38.155 --> 00:48:45.818
It can have more information, and it can go deeper than a human can.

00:48:45.878 --> 00:48:56.014
For example, I was very impressed by one AI function I had in my future leadership education.

00:48:56.179 --> 00:49:07.039
We had an alumni meeting in September and the company set up an AI agent that is behaving like a conventional business manager.

00:49:07.039 --> 00:49:43.460
Because we learn how to set up businesses differently, and when you have something new you want to introduce to an organization, you are often hit by cultural barriers. Just to train for that more, without humans, they invented an AI model where you can put your ideas in and have a conversation with someone who has traditional Tayloristic business thinking, something like that.

00:49:43.460 --> 00:50:11.226
So you can train how you put your ideas to such a person and what the reactions will be, just to train your ability to be better when you present these new ideas to a real person in a traditional organization or something like that. And it had such deep knowledge about all these methodologies and thinking and so on.

00:50:11.226 --> 00:50:31.105
I don't know who I could find to be so deep in this knowledge and have exactly this profile, this deep profile that I needed to train myself on.

00:50:31.125 --> 00:50:32.507
That is a really interesting use case.

00:50:32.527 --> 00:50:47.311
I think then it becomes about continuing a conversation about maybe a misconception or misunderstanding in the business space, because right now, you know, I've had several conversations where AI is going to solve their problems.

00:50:47.311 --> 00:51:12.512
AI is going to solve their business challenges, but, you know, from a lot of people's perspective, it's just this one entity, like it's going to solve all my business problems, whereas for us engineers, we understand that you can have a specific AI tool that would solve a specific problem or a specific process in your business.

00:51:12.512 --> 00:51:50.914
But right now a lot of people believe, like, I'm just going to install it, it's going to solve everything for me, not realizing that there are different categories for that, you know, different areas, and I think having these kinds of conversations helps, in hopes that people know it's not just a one-size-fits-all kind of solution out there. Yeah, and indeed, when you see how industrial work developed in the first phases, it's like going back to having one person just fitting a bolt or a screw or something like that.

00:51:51.054 --> 00:51:55.027
That is the agent at the moment, just one single task it can do.

00:51:55.027 --> 00:52:23.840
But it can do many, many things within this task at the moment, and what I think will take some time to develop is this T-shape, from the base of the T, to have this broad knowledge and broad capabilities out of one agent, or the development of a network of agents.

00:52:23.840 --> 00:52:31.217
So in some sessions in Vienna that was presented, the team of agents.

00:52:31.217 --> 00:52:38.067
So you have a coordinator that coordinates the agents and then brings back the proposal from the agent to the user or something like that.

00:52:38.067 --> 00:52:45.228
To the user it will look like one agent can do all of these things.

00:52:45.228 --> 00:52:46.211
That is presented.

00:52:46.211 --> 00:52:55.313
But in the deep functionality there is a team of agents and a variety of agents doing very specific things.
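As a rough sketch of that coordinator pattern: each agent handles one narrow task and a coordinator routes the request and hands a single proposal back to the user. The class and agent names below are invented for illustration; this is not how any particular product's agents are actually built.

```python
# Hypothetical sketch of a "team of agents" behind a single coordinator.
# The agent names and routing rules are illustrative, not a real product API.

class Agent:
    def __init__(self, name, skill):
        self.name = name
        self.skill = skill          # the one narrow task this agent handles

    def handle(self, request):
        # In a real system this would call an LLM or a service;
        # here we just describe what the specialist would produce.
        return f"{self.name}: proposal for '{request}' using {self.skill}"

class Coordinator:
    """Routes a user request to the right specialist and returns one proposal."""

    def __init__(self, agents):
        self.agents = agents        # e.g. {"sales": Agent(...), "finance": Agent(...)}

    def propose(self, topic, request):
        agent = self.agents.get(topic)
        if agent is None:
            return "No specialist available; hand this back to the user."
        # To the user this looks like one assistant, even though a team answers.
        return agent.handle(request)

team = {
    "sales": Agent("SalesOrderAgent", "sales order entry"),
    "purchase": Agent("PurchaseOrderAgent", "purchase order entry"),
    "finance": Agent("FinanceAgent", "ledger and payment questions"),
}

print(Coordinator(team).propose("sales", "create an order for 10 bicycles"))
```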

00:52:57.661 --> 00:52:58.682
I like that case.

00:52:58.682 --> 00:53:09.184
It goes, Chris, to your point that sometimes it's just a misunderstanding of what AI is, because I think there are so many different levels of AI, and we talked about that before.

00:53:09.184 --> 00:53:11.690
You know, what is machine learning, what are large language models.

00:53:11.690 --> 00:53:13.284
I mean, that's all in AI.

00:53:13.284 --> 00:53:16.271
A lot of things you know can fall into AI.

00:53:16.271 --> 00:53:33.583
But to the point of the agents going into ERP software, and even, Christian, to your point, maybe into an assembly line or manufacturing, I'd like the agents on the business side to work as a team of agents so they all do specific functions.

00:53:33.884 --> 00:53:46.588
To Søren's point, where do you have some repetitive tasks or some precision tasks, or even, in some cases, some skilled tasks that need to be done, and then you can chain them together.

00:53:46.588 --> 00:53:52.331
Because even if you look at an automobile, and we talked about an automobile, there isn't an automobile that just appears.

00:53:52.331 --> 00:53:57.597
You have tires, you have engines, you have batteries, you have right.

00:53:57.597 --> 00:54:03.880
The battery provides the power, the wheels provide the ability to move easily, right?

00:54:03.880 --> 00:54:05.626
The engine will give you the force to push.

00:54:05.626 --> 00:54:11.108
So putting that all together, and this is how I start to look at it, now gives you a vehicle.

00:54:11.108 --> 00:54:13.467
So it's the same thing if you're looking at ERP software.

00:54:13.467 --> 00:54:35.425
That's why, when I first heard about the agent approach when we talked some months ago, Søren, I liked that you can have an agent for sales orders, an agent for finance, an agent for purchase orders, each for a specific task; you can put them all together, use the ones you need, and then have somebody administer those agents, so you have something like an agent administrator.

00:54:35.445 --> 00:54:45.860
That is where the human comes back into the loop, because at some point you have to put these pieces together.

00:54:45.860 --> 00:54:53.934
I think at the moment, this is the user that needs to do this, but this will develop further in the future.

00:54:53.934 --> 00:55:06.481
So you have another point where the human comes in, or where you need ideas, because that is also something I learned and found very interesting.

00:55:06.481 --> 00:55:23.054
When you see an AI suggesting something to you, this feeling that this is a fit for my problem is inside your body, and at the moment you cannot put that into a machine.

00:55:23.054 --> 00:55:46.485
So whether the suggestion is right, and whether you decide to take it and use it, needs a human to make the decision, because you need the human body, the brain and everything together, seeing and perceiving this, to decide whether it is wrong or good for this use case.

00:55:48.782 --> 00:55:52.771
I think that depends a bit Christian, if I may.

00:55:52.771 --> 00:56:01.806
So there are places where, let's say, you could give one AI a problem to tackle and it will come up with some outcomes.

00:56:01.806 --> 00:56:16.755
And there could then be another AI and now I use the term loosely but another process that is only tasked with assessing the output of the first one within some criteria, within some aspects.

00:56:16.755 --> 00:56:31.608
So that one has been, let's say loosely, trained, but its only purpose is to say, okay, give me the outcome here, and then assess it with completely fresh eyes, as if it were a different person.

00:56:31.608 --> 00:56:38.449
Of course it's not a person, and we should never make it look like it's a person, but one machine can assess the other.
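A minimal sketch of that generator-and-reviewer idea, with invented function names and criteria; in practice both roles would be model calls rather than the stand-ins shown here.

```python
# Hypothetical generator/reviewer pair: the second process only scores the
# first one's output against fixed criteria, like a fresh pair of eyes.

def generate_answer(problem: str) -> str:
    # Stand-in for the first AI; in practice this would be an LLM call.
    return f"Proposed solution for: {problem}"

def review_answer(answer: str, criteria: list[str]) -> dict:
    # Stand-in for the second AI: it never solved the problem itself,
    # it only checks the answer against each criterion.
    return {c: (c.lower() in answer.lower()) for c in criteria}

problem = "Draft a reply that quotes a delivery date and a price"
answer = generate_answer(problem)
report = review_answer(answer, ["delivery date", "price"])

# A human (or a workflow rule) decides what to do with the review report.
print(answer)
print(report)   # here both checks pass; any False would mean send it back
```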

00:56:38.940 --> 00:56:51.606
Basically, that's what I'd say, to a certain degree, if we can frame the problem right. Yeah, and you had mentioned the human aspect, where someone takes over and says, you know, that's wrong.

00:56:51.606 --> 00:56:55.509
Right, like, oh, it's wrong, I know it's wrong, I'm going to take over.

00:56:55.509 --> 00:57:19.054
It reminds me of a story from a NAV implementation I did a while back where we had demand forecasting. When we introduced that to the organization, it does tons of calculations and gives you a really good output of what you need, based upon the information and data that you have.

00:57:19.054 --> 00:57:32.708
And there was this individual working for that organization who kept saying, that's not right, that's wrong, and I would ask, can you tell me why it's wrong?

00:57:32.708 --> 00:57:35.965
I'd love to know, like, how are you feeling?

00:57:35.965 --> 00:57:38.146
Like, what made you feel like it was wrong?

00:57:38.166 --> 00:57:39.331
Do you have any calculations?

00:57:39.331 --> 00:57:47.356
No, I just know it's wrong, because typically we do it this way, typically it's this number, right. But they couldn't prove it.

00:57:47.356 --> 00:57:57.445
So that's also a dangerous component, where a person could take over based on whatever they feel is wrong.

00:57:57.445 --> 00:57:59.849
Where they think it's wrong, they can also be wrong.

00:57:59.849 --> 00:58:03.682
Right, it's just like the human aspect of it.

00:58:03.682 --> 00:58:04.885
But they can.

00:58:04.885 --> 00:58:07.009
But they can.

00:58:07.028 --> 00:58:10.940
Yes, but they can, yeah.

00:58:10.940 --> 00:58:27.952
So the first time I learned more about AI, before these recent years, was some eight, nine years ago, when we did some of the classic machine learning work for some customers, and what was an eye-opener for me was that it didn't have to be a black box.

00:58:27.952 --> 00:58:30.748
So back then, let's say, you had a data set.

00:58:30.748 --> 00:58:45.347
I think the specific customer wanted to predict which of their subscribers would churn, and there was a machine learning model for that on Azure that they could use.

00:58:45.347 --> 00:59:02.067
I don't know the specific name of it, and the data guy that helped us, one of my colleagues from Microsoft back then, showed them the data, because they had their own ideas on what the influencing factors were that made consumers churn.

00:59:02.067 --> 00:59:29.090
These were magazines that they were subscribing to, and because he could do that with the machine learning tools, he could show them, based on the data and validated against their historic data, these are the actual influencing factors.

00:59:29.090 --> 00:59:31.146
They were just mind-blown.

00:59:31.280 --> 00:59:38.400
So it turned out, and I'm just paraphrasing now, that people in the western part of the country were the ones who churned the most.

00:59:38.400 --> 00:59:43.431
So the geography was the predominant influencing factor to predict churn.

00:59:43.431 --> 00:59:48.380
They were just mind-blown because they had never seen that data.

00:59:48.380 --> 00:59:51.088
They had other ideas of what it means to churn.

00:59:51.088 --> 00:59:52.431
Like to your point, Chris.

00:59:52.431 --> 01:00:02.793
But that was just so cool that we could bring that kind of transparency and say this is how the model calculates, these are the influencing factors that it has found by looking at the data.
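As a hedged illustration of that kind of transparency, a generic model's feature importances can play the role of the "influencing factors" from the story. The columns and values below are made up, and this is not the Azure service from the anecdote.

```python
# Illustrative only: a generic churn model whose feature importances act as
# the "influencing factors" discussed above. Column names are invented.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

data = pd.DataFrame({
    "region":        ["west", "west", "east", "north", "west", "east"],
    "tenure_months": [3, 5, 40, 36, 2, 28],
    "price_tier":    [2, 3, 1, 1, 3, 2],
    "churned":       [1, 1, 0, 0, 1, 0],
})

X = pd.get_dummies(data.drop(columns="churned"))   # one-hot encode region
y = data["churned"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# "These are the influencing factors the model found by looking at the data."
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
```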

01:00:02.793 --> 01:00:12.913
So I just thought that was a great example of bringing that transparency when humans, like you say, are just being stubborn and saying no, it doesn't work, it's not right.

01:00:15.219 --> 01:00:24.329
That's definitely another factor, because we've all come into those situations where that just doesn't feel right and in some cases it could be correct.

01:00:25.139 --> 01:00:26.405
But it depends on the skills.

01:00:26.405 --> 01:00:29.507
That's what I want to go back to: the skills.

01:00:29.507 --> 01:00:30.349
It's the skills.

01:00:31.260 --> 01:00:45.210
If we're going to keep creating AI tools to help us do tasks, okay, I'm going to go off on a tangent a little bit.

01:00:45.210 --> 01:00:49.431
One, how do we ensure we have the skills to monitor the AI?

01:00:49.431 --> 01:00:53.951
How do we ensure that we have the skills to perform a task?

01:00:53.951 --> 01:00:55.025
Now I understand.

01:00:55.025 --> 01:00:58.000
The dishwasher, Chris, that you talked about was invented.

01:00:58.000 --> 01:01:03.503
Now we don't have to wash dishes manually all the time to save us time to do other things.

01:01:03.503 --> 01:01:13.884
We're always building these tools to make things easier for us and, in essence, up the required skill to do a function, saying we need to work on more valuable things.

01:01:13.884 --> 01:01:16.971
Right, we shouldn't have to be clicking post all day long.

01:01:16.971 --> 01:01:20.304
Let's have the system do a few checks on a sales order.

01:01:20.304 --> 01:01:22.891
If it meets those checks, let the system post it.
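A toy version of that rule, with invented check names: auto-post only when every routine check passes, otherwise route the order to a person. Real posting logic in an ERP would live in the application itself, not in a script like this.

```python
# Invented example: only auto-post a sales order when every routine check
# passes; otherwise leave it for a human, which keeps people in the loop.

def checks(order: dict) -> dict:
    return {
        "has_customer":  bool(order.get("customer")),
        "has_lines":     len(order.get("lines", [])) > 0,
        "within_credit": order.get("total", 0) <= order.get("credit_limit", 0),
    }

def should_auto_post(order: dict) -> bool:
    return all(checks(order).values())

order = {"customer": "C-1001", "lines": [{"item": "BIKE", "qty": 2}],
         "total": 800, "credit_limit": 5000}

if should_auto_post(order):
    print("Post automatically")             # system handles the routine case
else:
    print("Route to a person for review")   # human judgment for the rest
```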

01:01:24.420 --> 01:01:30.693
But when is there a point where we lose the ability to have the skill to progress forward?

01:01:30.693 --> 01:01:41.340
And then with this, with all of these tools that help us do so much, because now that we have efficiency with tools, oftentimes it takes a reduction of personnel.

01:01:41.340 --> 01:01:43.242
I'm not trying to say people are losing their jobs.

01:01:43.242 --> 01:01:46.271
It's going to take a reduction of personnel to do a task.

01:01:46.271 --> 01:01:50.543
Therefore, relieving the dependency on others.

01:01:50.543 --> 01:01:51.465
Humans are communal.

01:01:51.465 --> 01:02:00.347
Are we getting to the point where we're going to lose skill and not be able to do some complex tasks because we rely on other tools?

01:02:01.088 --> 01:02:14.952
And if the tools are to get more complex and we need to have the skill to determine that complexity, if we miss that little middle layer of all that mundane building block stuff, how do we have the skill to do something?

01:02:14.952 --> 01:02:24.474
And two, if I can: now I see AI images, I see AI videos being created all the time.

01:02:24.474 --> 01:02:25.394
It does a great job.

01:02:25.394 --> 01:02:41.355
Before we used to rely on artists, publishers, other individuals to create that content for the videos, for brochures, pictures, images, the B-roll type stuff we'll call it.

01:02:41.355 --> 01:02:51.849
If we don't need any of that and we're doing it all ourselves, what are we doing to our ability to work together as a species, if now I can do all of it myself with fewer people?

01:02:52.219 --> 01:02:53.586
So I have many points there.

01:02:53.586 --> 01:02:56.012
One, it's the complexity of the skill.

01:02:56.012 --> 01:03:04.902
And how do we get that skill if we immediately cut out the need for it, if we no longer need someone to put the screw on that bolt?

01:03:04.902 --> 01:03:09.822
As you pointed out, Christian, we need someone to come in and be able to analyze these complex results from AI.

01:03:09.822 --> 01:03:15.773
But if nobody can learn that by doing all those tasks, what does that give us?

01:03:15.773 --> 01:03:18.704
So that's my little tangent, those two points.

01:03:19.326 --> 01:03:21.710
Yeah, no, those are great, great questions.

01:03:21.710 --> 01:03:32.465
So what you're saying is, how do we determine if this car is built right if there are no drivers left to test it, like no one has the skill to drive anymore.

01:03:32.465 --> 01:03:33.028
So how?

01:03:33.028 --> 01:03:37.103
How can they determine if this car is built up to a certain quality standard and what have you?

01:03:37.103 --> 01:03:41.811
Well, the other answer would be you don't have to because it will drive itself.

01:03:41.811 --> 01:03:56.193
But until we get that point, like in that time in between, you need someone to still be able to validate and probably for some realms of our work and jobs and society, you will always need some people to validate.

01:03:56.193 --> 01:03:56.800
So what do you do?

01:03:56.800 --> 01:04:00.666
I think those are great questions and I certainly don't have the answer to it.

01:04:01.427 --> 01:04:17.427
I would say, I've had this conversation with Brad for a couple of years. I think he and I, you know, we love where AI is going, and I pose the question about, you know, whether AI becomes a necessity for the survival of humanity.

01:04:17.427 --> 01:04:27.717
Because, as you all pointed out, eventually you'll lose some of those skills because you're so dependent.

01:04:27.717 --> 01:04:29.545
Eventually you'll lose it.

01:04:29.545 --> 01:04:34.887
And I've had tons of conversations. Right now we don't need AI.

01:04:34.887 --> 01:04:57.454
We don't need AI for the survival of humanity, but as we become more dependent, as we lose some of those skills because we're giving tedious tasks to AI, sometimes in the medical field or whatnot, it becomes a necessity in the future.

01:04:57.454 --> 01:05:03.170
It will eventually become a necessity in the future for humanity's survival, but we're forcing it.

01:05:03.170 --> 01:05:03.880
Right now we don't need it.

01:05:03.920 --> 01:05:06.740
We are forcing the dependency by losing this.

01:05:06.740 --> 01:05:16.512
I'm not saying it's right or wrong, but I'm listening to what you're saying, saying that we are going to be dependent on machines for the survival of the human race.

01:05:16.512 --> 01:05:19.547
I mean, humans have been around for how long?

01:05:23.940 --> 01:05:24.911
But we're already dependent on machines.

01:05:24.911 --> 01:05:25.329
Right, we've been there for a long time.

01:05:26.186 --> 01:05:29.023
We're forcing ourselves to be dependent upon it.

01:05:29.063 --> 01:05:35.760
That's why I use the word machine, because we force ourselves to be dependent upon that right.

01:05:35.760 --> 01:05:45.989
We force ourselves to lose the skill or use something so much that it's something that we must have to continue moving forward.

01:05:47.601 --> 01:05:49.949
Yeah, my point was that that's not new.

01:05:49.949 --> 01:05:56.871
I mean, we've done that for 50 years, forcing dependency on some machines, right?

01:05:56.871 --> 01:06:01.266
So without them we wouldn't even know where to begin to do that task.

01:06:01.266 --> 01:06:07.371
So AI is just probably accelerating that in some realms now, I think.

01:06:07.552 --> 01:06:16.653
Yeah, it is. Because, you know, humans' desire is to improve quality of life, expand our knowledge and mitigate risk.

01:06:16.653 --> 01:06:18.545
It's not improving quality of life.

01:06:18.644 --> 01:06:19.547
It's to be lazy?

01:06:19.547 --> 01:06:21.360
I hate to tell you it's.

01:06:21.360 --> 01:06:26.208
Humans take the path of least resistance, and there's a little levity in that comment.

01:06:26.208 --> 01:06:30.117
But why do we create the tools to do the things that we do?

01:06:30.117 --> 01:06:30.739
Right?

01:06:31.240 --> 01:06:41.284
We create tools to harvest fruits and vegetables from the farm, right, so we can do them quicker and easier and require less people, right?

01:06:41.284 --> 01:06:46.708
So it's not necessarily, you know, that we do it to make things better.

01:06:46.708 --> 01:06:57.659
We do it because, well, we don't want someone to have to go to the field and, you know, pick the cucumbers from the cucumber vine, right, we want, you know, they shouldn't have to do that, they should do something else.

01:06:57.659 --> 01:07:00.061
We're kind of, in my opinion, forcing ourselves to go that way.

01:07:00.061 --> 01:07:09.467
It is necessary to harvest the fruits and the vegetables and the nuts to eat, but, you know, is it necessary to have a machine do it?

01:07:09.467 --> 01:07:15.150
Well, no, we just said it would be easier, because I don't want to go out in the hot sun all day long and you know harvest.

01:07:16.331 --> 01:07:20.034
You can do the dishes by hand if you like, right yeah?

01:07:20.054 --> 01:07:22.856
If you like, yeah, if you choose to.

01:07:22.856 --> 01:07:23.436
No one wants to.

01:07:23.436 --> 01:07:24.556
No one wants to do the dishes.

01:07:24.597 --> 01:07:26.958
Trust me, I will never live in a place without a dishwasher.

01:07:26.958 --> 01:07:29.585
I mean, it's the worst that can happen.

01:07:31.163 --> 01:07:33.699
It is, and the pots and the pans forget it right.

01:07:35.827 --> 01:07:39.166
If you take this further, at some point in time...

01:07:39.166 --> 01:07:57.351
If you have a new colleague and you have to educate him or her, do you educate them to do the steps the sales order agent is doing by themselves, just to have the skill to know what they're doing?

01:07:57.351 --> 01:08:01.931
Or are you just saying, push the button?

01:08:06.864 --> 01:08:07.606
Yeah, but I think what?

01:08:07.606 --> 01:08:15.228
Eventually, as you continue to build upon these copilots in AI, you just have two ERPs that talk to each other.

01:08:15.228 --> 01:08:17.384
And then what then?

01:08:17.384 --> 01:08:20.381
Where are we then?

01:08:23.328 --> 01:08:24.149
Yeah, super interesting.

01:08:25.654 --> 01:08:27.327
I mean, who knows?

01:08:27.327 --> 01:08:32.228
I think it's so hard to predict where we'll be even just in 10 years.

01:08:34.582 --> 01:08:37.421
I don't think we'll be able to predict where we'll be even in two years.

01:08:40.000 --> 01:08:43.061
Will we ever be able to press a button Like right now?

01:08:43.061 --> 01:08:46.532
I can create video images and still images.

01:08:46.532 --> 01:08:51.368
I'm using that because a lot of people relate to that, but I can create content, create things.

01:08:51.368 --> 01:08:57.033
I've also worked with AI from programming in a sense, to create things.

01:08:57.100 --> 01:08:58.827
I was listening to a podcast the other day.

01:08:58.827 --> 01:09:04.927
In the podcast they said within 10 years, the most common programming language is going to be the human language.

01:09:04.927 --> 01:09:09.167
Because it's getting to the point where you can say create me this.

01:09:09.167 --> 01:09:15.292
It needs to do this, this and this, and an application will create it, it will do the test and produce it.

01:09:15.292 --> 01:09:17.764
You wake up in the morning and now you have an app.

01:09:17.764 --> 01:09:20.975
So it's going to get to the point where what happens now?

01:09:20.975 --> 01:09:25.774
Let's fast-forward a little bit, because you even look at GitHub Copilot for coding, right.

01:09:25.774 --> 01:09:29.286
You look at the sales agents and, to Chris's point, ERP systems can just talk to each other.

01:09:29.286 --> 01:09:30.028
What do you need to do?

01:09:30.028 --> 01:09:37.524
Is there going to be a point where that's what I was getting at where we don't need other people because we can do everything for ourselves?

01:09:37.524 --> 01:09:44.320
And then how do we survive if we don't know how to work together because we're not going to need to?

01:09:45.643 --> 01:09:49.149
That is so... yeah, I'm sorry.

01:09:49.189 --> 01:09:50.192
Sorry, now, that's so.

01:09:50.192 --> 01:10:27.583
To go to your point, how is AI going to help progress human civilization, or the species, if we're going to get to the point where we're not going to need to do anything, where we're all just going to sit in my house because I can say make me a computer, click a button, and it will be there? And that's where I come from with that other podcast show that you mentioned, where I quote James Burke when he says that we will have these nanofabricators, and that in 60 years everyone will have everything they need and will just produce it from air, water and dirt.

01:10:27.663 --> 01:10:31.391
Basically, right? So that's the end of scarcity.

01:10:31.391 --> 01:10:37.603
So all the stuff that we're thinking about right now are just temporary issues that we don't need to worry about in 100 years.

01:10:37.603 --> 01:10:40.408
So that's just impossible to even imagine.

01:10:40.408 --> 01:10:48.744
But because, as one of you said just before, we'll probably always just move the needle and figure out something else to desire, something else to do.

01:10:48.744 --> 01:10:55.947
But I think it is a good question to ask but what will we do with this productivity that we gain from AI?

01:10:55.947 --> 01:10:57.925
Where will we spend it?

01:10:57.925 --> 01:11:00.469
So now you're a company.

01:11:00.469 --> 01:11:06.729
Now you save 20% cost because you're more efficient in some processes due to AI or IT in general.

01:11:06.729 --> 01:11:09.106
What will you do with that 20%?

01:11:09.106 --> 01:11:12.868
Do you want to give your employees more time off?

01:11:12.868 --> 01:11:16.369
Do you want to buy a new private jet?

01:11:16.369 --> 01:11:16.951
I don't know.

01:11:18.844 --> 01:11:22.002
You have choices, right. But as humanity, I definitely, personally...

01:11:23.525 --> 01:11:33.002
My personal opinion is, I mean, I would welcome a future where we could work less, where we could have machines do things for us.

01:11:33.002 --> 01:11:41.332
But it requires that we have a conversation, start thinking about how will we interact in such a world where we don't have to work the same way we do today.

01:11:41.332 --> 01:11:41.600
What?

01:11:41.600 --> 01:11:43.006
What will our social lives look like?

01:11:43.006 --> 01:11:44.404
Why do we need each other?

01:11:44.404 --> 01:11:45.748
Do we need each other?

01:11:45.748 --> 01:11:48.908
We are social creatures, we are communal creatures.

01:11:48.908 --> 01:11:51.006
So, yes, I think we do.

01:11:51.006 --> 01:11:53.686
But how, what will that world look like?

01:11:53.686 --> 01:11:56.309
I think this keeps me up at night sometimes.

01:12:04.479 --> 01:12:07.583
I can't imagine, nor did I imagine, there'd be full self-driving vehicles within such a short period of time.

01:12:07.583 --> 01:12:16.655
I mean, I think, as you made a great point, Søren, I don't think anyone can know what tomorrow will be or what tomorrow will bring with this, because it's advancing so rapidly.

01:12:16.655 --> 01:12:23.546
And to go back to the points I mentioned, you talked about the podcast with James Burke, which was a great podcast as well.

01:12:23.546 --> 01:12:30.850
That was the You're Not so Smart episode I think it was 118 on connections, which talked a lot about that.

01:12:32.020 --> 01:12:33.527
And yes, it was a great episode.

01:12:33.527 --> 01:12:41.372
That's another great podcast, and a lot of this stuff is going to be building blocks that we don't even envision what it's going to build.

01:12:41.372 --> 01:12:42.865
You know, look at the history of the engine.

01:12:42.865 --> 01:12:45.069
You look at the history of a number of inventions.

01:12:45.069 --> 01:12:47.988
They were all made of small little pieces.

01:12:47.988 --> 01:12:49.405
So we're building those pieces now.

01:12:49.405 --> 01:12:53.520
But also our mind is going to need to be I use the word stimulated.

01:12:53.520 --> 01:13:00.173
If we're going to get to the point where we don't have to do anything, how are we going to entertain ourselves?

01:13:04.921 --> 01:13:11.194
We're always going to find something else right to have to do, but is there going to be a point where there is nothing else because it's all done for us?

01:13:12.042 --> 01:13:14.488
yeah, just want to comment on that one thing.

01:13:14.488 --> 01:13:20.568
You said there, like, you referenced that no one just imagined the car.

01:13:20.568 --> 01:13:29.890
You know, people did stuff, invented stuff, but suddenly some other people could build on that and invent other stuff and then eventually you had a car, right?

01:13:29.890 --> 01:13:32.488
Or anything else that we know in our life.

01:13:32.488 --> 01:13:40.826
And I think James Burke also says that innovation is what happens between the disciplines, and I really love that.

01:13:40.826 --> 01:13:42.847
I mean, just look at agents today.

01:13:42.847 --> 01:13:47.347
Like four years ago, before LLMs were such a big thing.

01:13:47.347 --> 01:13:58.908
I know they were in a very niche community, but with sort of the level of LLMs today, no one said let's invent LLMs so we could do agents.

01:13:58.908 --> 01:14:02.054
No, I mean, LLMs were invented, and now we have LLMs.

01:14:04.644 --> 01:14:10.930
Now we think, oh, now we can do this thing called agents and what else comes to mind in six months, right?

01:14:10.930 --> 01:14:17.394
So it just proves that no one has this sort of five-year plan of, oh, let's, in five years, do this and this.

01:14:17.394 --> 01:14:26.631
No, because in six months someone will have invented something that, oh, we can use that and oh, now we can build this entirely new thing.

01:14:26.631 --> 01:14:28.337
So that's what's just super.

01:14:28.457 --> 01:14:30.706
It's both super exciting, but it's also a bit scary.

01:14:30.706 --> 01:14:34.680
I mean, I can speak as a product developer.

01:14:34.680 --> 01:14:47.166
It's definitely challenged me to rethink my whole existence as a product person, because now I don't actually know my toolbox anymore.

01:14:47.166 --> 01:14:51.791
Two years ago I knew what AL could do. Great.

01:14:51.791 --> 01:14:53.868
I knew the confines of what we could build.

01:14:53.868 --> 01:14:55.887
I knew the page types in BC and stuff.

01:14:55.887 --> 01:15:00.891
So if I had a use case, I could visualize it and see how we can probably build something.

01:15:00.891 --> 01:15:05.903
If we need a new piece from the client, then we could talk to them about it and we can figure that out.

01:15:05.903 --> 01:15:12.868
But now I don't even know if we can build it until we're very close to having built it.

01:15:12.868 --> 01:15:14.082
I mean, so it's.

01:15:14.082 --> 01:15:25.832
There's so much experimentation that, yeah, we're building the airplane while we're flying it, in that sense, right? And so that also challenges our whole testing approach and testability and frameworks.

01:15:25.832 --> 01:15:36.222
But that is super exciting in itself, so it's just a mindset change, right. But it definitely challenges your product people. Oh, it definitely does.

01:15:36.564 --> 01:15:43.554
I think AI is definitely changing things and it's here to stay.

01:15:43.554 --> 01:15:44.314
I guess you could say.

01:15:44.314 --> 01:15:48.266
I'm just wondering, you know.

01:15:48.266 --> 01:15:52.033
I say I think back of a movie was it from the 80s, called Idiocracy.

01:15:52.033 --> 01:15:57.511
You know, if you haven't watched it, it's a mindless movie, but it is.

01:16:03.041 --> 01:16:08.171
It's the same type of thing where a man from the past goes into the future and you know what happens to the human species in the future and how they are.

01:16:08.171 --> 01:16:09.594
It's pretty comical.

01:16:09.594 --> 01:16:14.029
It's funny how some of these movies are circling back.

01:16:14.029 --> 01:16:15.832
Yeah, they circle back, you know with.

01:16:16.240 --> 01:16:19.929
You know, Star Trek, Star Wars. I'm wondering when we will be there.

01:16:25.856 --> 01:16:26.420
That already happened.

01:16:26.420 --> 01:16:42.115
I just hope we won't get to the state where I think you said that cartoon or that animated movie Wall-E where the people are just lying back all day and eating and their bones are deteriorating because they don't use their bones and muscles anymore.

01:16:42.115 --> 01:16:49.844
So the skeleton sort of turns into something like they just become like wobbly creatures that just lie there.

01:16:50.405 --> 01:17:19.472
Like, I don't know, seals, just consuming. What was really interesting with Back to the Future is this thing, because Doc Brown made this time machine use a banana to get the energy of 1.21 gigawatts or something like that.

01:17:19.472 --> 01:17:23.400
You don't have to wait for a thunderstorm to travel into time a bit.

01:17:23.400 --> 01:17:34.086
This idea was mind-blowing back then, and I'm dreaming of using my free time as a human to make these leaps.

01:17:34.086 --> 01:17:36.851
Because we are.

01:17:36.851 --> 01:17:50.074
We have this scarcity in resources, and even if this goes further and further and further, I assume that we don't have enough resources to provide the machine computing power to fulfill all that.

01:17:50.074 --> 01:18:05.180
I think there will be limitations at some point in time, and much of what AI frees us up for is to have ideas on how we use our resources in a way that is sustainable.

01:18:07.744 --> 01:18:08.164
I like that.

01:18:08.164 --> 01:18:33.712
I have no way to say what you fear will become true or not, but I like the idea of using whatever productivity we gain for more sort of humanity-wide purposes, and I also hope that whatever we do with technology and AI will reach a far audience and also help the people who today don't even have access to clean drinking water and things like that.

01:18:33.712 --> 01:18:40.090
So I hope AI will benefit most people and, yeah, let's see how that goes.

01:18:41.020 --> 01:18:44.529
Yeah, I think it's going to redefine human identity.

01:18:44.529 --> 01:18:44.890
Yeah.

01:18:44.939 --> 01:18:46.880
I'd like to take it further and I'd say the planet.

01:18:46.880 --> 01:18:56.594
I think, you know, with AI, I hope we gain some efficiencies, to go to your point, Christian.

01:18:56.594 --> 01:19:03.471
We can have it all sustainable so we're not so destructive, because you know the whole circle of life, as they say.

01:19:03.471 --> 01:19:09.489
You know it's important to have all of the species of animals.

01:19:09.489 --> 01:19:13.527
You know plants, water, you know anything else is on the planet.

01:19:13.527 --> 01:19:16.048
It's an entire ecosystem that needs to work together.

01:19:16.048 --> 01:19:20.625
So I'm hoping, with this AI, that's something that we get out of.

01:19:20.625 --> 01:19:31.631
It is how to become less destructive and more efficient and more sustainable, so that everything benefits, not just humans because we are heavily dependent upon everyone else.

01:19:32.320 --> 01:19:33.827
That's the moral aspect of it.

01:19:34.680 --> 01:19:52.326
So if we use it to use up all of the resources, then from a moral aspect it is bad, because it is not sustainable for us as a society and as human beings on this planet.

01:19:53.659 --> 01:20:11.777
So, as I see it, morality is a function of keeping the system alive, because we use the distinction between good and bad in the sense that it is not morally good to use up all the resources.

01:20:11.777 --> 01:20:32.470
So if we extend everything we can do with AI to the point of using up all of the resources, that is not really good, and what we can use our brains for is to think ahead about when that point in time will come and label it as bad behavior.

01:20:32.470 --> 01:20:43.689
So the discussion we are having now, and I'm very glad that you brought up this point, Søren, is that we have this discussion now to think ahead.

01:20:43.689 --> 01:20:52.548
Where will the use of AI be bad for us as a society and as human beings and for the planet?

01:20:52.548 --> 01:21:08.113
Because now is the time we can think ahead about what we have to watch out for in the next months or years, and that is the moral aspect I think we should keep in mind when we are going further with AI.

01:21:09.381 --> 01:21:12.189
I think there are so many aspects there to your point, christian.

01:21:12.189 --> 01:21:18.390
So one is, of course, as we all know, the energy consumption of AI in itself.

01:21:18.390 --> 01:21:44.391
But there's also the other side, I mean the flip side, where AI could maybe help us spotlight or shine a bright light on where can we save on energy in companies and where can AI help us, let's say, calibrate our moral compasses by shining a light on where we don't behave as well today as a species.

01:21:44.391 --> 01:21:46.547
So I think there's a flip side.

01:21:46.547 --> 01:21:54.573
I'm hoping we will make some good decisions along the way to have AI help us in that.

01:21:58.860 --> 01:22:06.051
There are so many things I could talk about with AI, and I think we'll have to schedule another discussion to have you on, because I did.

01:22:06.051 --> 01:22:26.810
I had a whole list of notes of things that I wanted to talk about when it comes to AI, not just from the ERP point of view but from the AI point of view, because, you know, after getting into the Moral AI book and listening to several podcasts about AI and humanity, there are a lot of things that I wanted to jump into.

01:22:26.810 --> 01:22:29.426
You know we talked about the de-skilling.

01:22:29.426 --> 01:22:30.569
We talked about too much trust.

01:22:30.569 --> 01:22:36.411
I'd like to get into harm and bias, and also, you know, how AI can analyze data.

01:22:37.672 --> 01:22:43.162
You know, data that everyone thinks is anonymous, because reading that Moral AI book, there were some statistics they put in there.

01:22:43.162 --> 01:22:44.203
I was kind of fascinated.

01:22:44.203 --> 01:22:54.630
Just to throw it out, there is that 87% of the United States population can be identified by their birth date, gender and their zip code.

01:22:54.630 --> 01:22:57.082
That was mind blowing.

01:22:57.082 --> 01:23:03.051
And then 99.98% of people can be identified with 15 data points.

01:23:03.051 --> 01:23:05.515
So all of this anonymous data.

01:23:05.515 --> 01:23:13.645
You know, with the data sharing that's going on, it's very easy to make many pieces of anonymous data no longer anonymous.

01:23:13.645 --> 01:23:14.646
Is what I got from that.
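One way to see why that happens, as a sketch on invented data: count how many records share the same combination of "harmless" fields; any combination that occurs only once is effectively a fingerprint.

```python
# Invented data: birth year, gender and ZIP look anonymous on their own,
# but the combination is unique for most rows, so it can re-identify people.
import pandas as pd

people = pd.DataFrame({
    "birth_year": [1980, 1980, 1991, 1991, 1975],
    "gender":     ["F", "M", "F", "F", "M"],
    "zip":        ["02139", "02139", "98052", "98052", "30301"],
})

group_sizes = people.groupby(["birth_year", "gender", "zip"]).size()
unique_share = (group_sizes == 1).sum() / len(people)

print(group_sizes)          # combinations that occur once are fingerprints
print(f"{unique_share:.0%} of rows are uniquely identified by these 3 fields")
```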

01:23:14.646 --> 01:23:15.427
Um.

01:23:15.427 --> 01:23:31.969
So all that data sharing with those points, the birth date, gender and five-digit US zip code, and again, that's in the United States, was one that shocked me, and now I understand why those questions get asked the most, because with a high probability, 87 percent, it's going to give away who you are.

01:23:32.930 --> 01:23:40.109
Maybe just for the audience watching this or listening to this.

01:23:40.109 --> 01:23:44.756
So the book that we're talking about is this one, Moral AI.

01:23:44.756 --> 01:23:46.363
I don't know if you can see it.

01:23:46.363 --> 01:23:47.345
Does it get into focus?

01:23:47.345 --> 01:23:48.509
I don't know if it does.

01:23:48.850 --> 01:23:49.612
Yeah now it does.

01:23:50.220 --> 01:23:53.680
So it's this one, Moral AI and How We Get There.

01:23:53.680 --> 01:24:08.268
It's really a great book that goes across fairness, privacy, responsibility, accountability, bias, safety, all kinds of topics, and it tries to take sort of a pro-con approach.

01:24:08.268 --> 01:24:12.868
You know, because I think maybe this is a good way to end the discussion, because I have to go.

01:24:12.868 --> 01:24:33.889
I think one cannot just say AI is all good or AI is all bad, like it depends on what you use it for and how we, how we use it and how we let it be biased or not, or how we implement fairness into algorithms, and so there's just so many things that we could talk about for an hour.

01:24:33.889 --> 01:24:39.167
But that's what this book is all about, and that's what triggered me to share it a month back.

01:24:39.167 --> 01:24:45.969
So just thank you for the, for the chance to talk about some of these things, and I'd be happy to jump on another one.

01:24:46.168 --> 01:24:50.248
Absolutely, We'll have to schedule one up, but thank you for the book recommendation.

01:24:50.248 --> 01:24:53.409
I did start reading the Moral AI book that you just mentioned.

01:24:53.409 --> 01:24:54.604
Again, it's Pelican Books.

01:24:54.604 --> 01:24:55.707
Anyone's looking for it.

01:24:55.707 --> 01:24:57.564
It's a great book.

01:24:57.564 --> 01:25:04.930
Thank you both, Søren and Christian, for taking the time to speak with us this afternoon, this morning, this evening, whatever it may be wherever you are.

01:25:04.930 --> 01:25:11.226
I know where I have the time zones and we'll definitely have to schedule to talk a little bit more about AI and some of the other aspects of AI.

01:25:11.226 --> 01:25:24.033
But if you would, before we depart, how can anyone get in contact with you to learn a little bit more about AI, a little bit more about what you do, and a little bit more about all the great things that you're doing?

01:25:26.442 --> 01:25:29.786
Søren, so the best place to find me is probably on LinkedIn.

01:25:29.786 --> 01:25:33.329
That is my only media that I participate in these days.

01:25:33.329 --> 01:25:37.134
I deleted all the other accounts and that's a topic for another discussion.

01:25:37.335 --> 01:25:38.876
It's so cleansing to do that too.

01:25:38.895 --> 01:25:44.859
Yeah, and for me it's also on LinkedIn and on Blue Sky.

01:25:44.859 --> 01:25:48.908
It's Curate Ideas. Excellent, great.

01:25:48.927 --> 01:25:49.390
Thank you both.

01:25:49.390 --> 01:25:50.966
Look forward to talking with both of you again soon.

01:25:51.086 --> 01:25:52.743
Ciao, ciao. Thanks for having us.

01:25:52.743 --> 01:25:54.609
Thank you so much. Bye, thank you guys.

01:25:57.148 --> 01:26:01.707
Thank you, Chris, for your time for another episode of In the Dynamics Corner Chair, and thank you to our guests for participating.

01:26:04.539 --> 01:26:06.046
Thank you, brad, for your time.

01:26:06.046 --> 01:26:09.510
It is a wonderful episode of Dynamics Corner Chair.

01:26:09.510 --> 01:26:13.029
I would also like to thank our guests for joining us.

01:26:13.029 --> 01:26:16.048
Thank you for all of our listeners tuning in as well.

01:26:16.048 --> 01:26:30.645
You can find Brad at dvlprlife.com, that is D-V-L-P-R-L-I-F-E dot com, and you can interact with him via Twitter, D-V-L-P-R-L-I-F-E.

01:26:30.645 --> 01:26:47.622
You can also find me at matalino.io, that is M-A-T-A-L-I-N-O dot I-O, and my Twitter handle is matalino16.

01:26:47.622 --> 01:26:49.148
And you can see those links down below in the show notes.

01:26:49.148 --> 01:26:49.828
Again, thank you everyone.

01:26:49.828 --> 01:26:50.551
Thank you and take care.


Søren Friis Alexandersen

Principal Product Manager


Christian Lenz

Organizational Designer & Development Facilitator working from CTM Computer Technik Marketing GmbH