April 8, 2025

Episode 414: Business Central is happy, and so are you: CentralQ turns 2!

In this episode of Dynamics Corner, Kris and Brad are joined by Dmitry Katson, a 20-year veteran in the Business Central ecosystem. Listen in as Dmitry shares his experience developing CentralQ, an AI-powered tool designed to enhance Business Central by leveraging a robust knowledge base. He emphasizes the tool’s ability to automatically update its knowledge daily, drawing from sources like blogs and YouTube, and its role in streamlining processes, improving user experience, and supporting AL development. Dmitry highlights challenges such as creating deterministic AI solutions and the importance of source referencing for credibility. Looking ahead, he discusses plans for CentralQ, including reasoning models, agent coordination, deep search capabilities, local language models, and page scripting for automated documentation. The conversation underscores AI’s transformative impact on development roles, shifting them toward management and architecture, and the need for AI agents to access live data while addressing user permissions.

Send us a text

Support the show

#MSDyn365BC #BusinessCentral #BC #DynamicsCorner

Follow Kris and Brad for more content:
https://matalino.io/bio
https://bprendergast.bio.link/

00:00 - Introduction to CentralQ's Birthday

03:45 - Meet Dmitry Katson: AI & BC Expert

10:15 - The Birth Story of CentralQ

19:32 - Knowledge Grounding & Source References

28:32 - CentralQ App & Page Script Integration

39:50 - CentralQ Stats: 300,000 Questions & Beyond

46:26 - AL Development with AI Tools

01:04:50 - The Future of AI in Business Central

01:19:25 - Closing Thoughts & Conference Preview

WEBVTT

00:00:00.881 --> 00:00:03.770
Welcome everyone to another episode of Dynamics Corner.

00:00:03.770 --> 00:00:06.588
It's someone's birthday and someone's turning two.

00:00:06.588 --> 00:00:09.628
I don't know who, because we're rhyming.

00:00:09.628 --> 00:00:10.984
I'm your co-host, Chris.

00:00:12.223 --> 00:00:12.885
And this is Brad.

00:00:12.885 --> 00:00:18.794
This episode was recorded on March 5th and March 6th 2025.

00:00:18.794 --> 00:00:21.868
Chris, chris, chris, I liked your rhyme.

00:00:21.868 --> 00:00:24.003
Someone is turning two.

00:00:24.003 --> 00:00:27.672
Are they blue, I wonder who?

00:00:27.672 --> 00:00:39.073
With us today, we had the opportunity to learn who is turning two, as well as a wonderful conversation about the place for AI within Business Central.

00:00:39.073 --> 00:00:43.781
With us today, we had the opportunity to speak with Dmitry Katson about CentralQ turning two.

00:00:43.781 --> 00:01:03.274
Good morning sir, hey guys, how are you doing?

00:01:03.293 --> 00:01:06.215
Good morning.

00:01:06.215 --> 00:01:06.596
Morning.

00:01:08.177 --> 00:01:10.117
Doing great, good, good.

00:01:10.117 --> 00:01:11.364
You look like you just woke up.

00:01:13.641 --> 00:01:14.405
Yes, thank you.

00:01:17.561 --> 00:01:23.253
And I've been waiting a very long time to say happy birthday to you.

00:01:23.253 --> 00:01:31.685
Well, not to you, but to your child. Yeah, which one of them? CentralQ turns two.

00:01:32.045 --> 00:01:42.766
I've been waiting to say that for months now. Yes, yes, thank you very much. It's coming, the birthday is coming. When is the exact birthday?

00:01:43.266 --> 00:01:48.555
I know we spoke with you shortly after it was out some years ago.

00:01:48.840 --> 00:02:00.813
Well, it seems like just yesterday. Yeah, I need to double-check when I first tweeted that, but it was the beginning of March, maybe the seventh or something. Oh, wow, so we're right.

00:02:00.852 --> 00:02:01.614
We are right there.

00:02:01.614 --> 00:02:03.322
We scheduled this on purpose.

00:02:03.322 --> 00:02:08.532
Yes, yes, yes, to be there at the birthday of your child.

00:02:08.532 --> 00:02:20.810
I call it and it's great, and, before we talk about your child and many other things that are around it, I like calling it your child because I think it's wonderful.

00:02:20.810 --> 00:02:23.246
Can you tell us a little bit about yourself?

00:02:26.217 --> 00:02:26.800
Yeah, so I'm Dmitry.

00:02:26.800 --> 00:02:31.290
I've been in the Business Central world for like 20 years.

00:02:31.290 --> 00:02:35.247
I'm passionate about business central and artificial intelligence.

00:02:35.247 --> 00:02:43.719
I started with ML, or machine learning, or AI, whatever you call it nowadays.

00:02:43.719 --> 00:02:47.647
I started in 2016.

00:02:48.468 --> 00:03:05.331
So it was almost like eight years ago, right, when I headed the AI department at a big partner, and I didn't know anything about that, so that's where my journey started.

00:03:05.331 --> 00:03:16.312
And then I was passionate about combining AI with Business Central for years, and I think now my mission is accomplished.

00:03:17.259 --> 00:03:19.064
Your mission is accomplished.

00:03:19.085 --> 00:03:19.948
Mission accomplished.

00:03:20.950 --> 00:03:23.460
That's great and you've been doing a lot of great things.

00:03:23.460 --> 00:03:24.411
You've been doing a lot of speaking things.

00:03:24.411 --> 00:03:29.433
You've been doing a lot of speaking sessions, presentations and yeah, and like I see you all over the place.

00:03:29.433 --> 00:03:39.665
You're very busy, not only with Business Central and CentralQ, but sometimes it seems like you're a world traveler to me. Yeah, it's, well.

00:03:39.847 --> 00:03:47.608
There are two seasons where I travel, so it's definitely Directions.

00:03:47.608 --> 00:03:55.225
So usually it's Directions Asia, as it's not far away from me, just one hour of flight.

00:03:55.225 --> 00:03:58.629
That's nice, sometimes using bike.

00:04:01.021 --> 00:04:01.805
That's even better.

00:04:01.805 --> 00:04:02.426
That's good.

00:04:03.842 --> 00:04:05.347
I think I saw a picture of you last year.

00:04:05.388 --> 00:04:15.102
You took your motorbike that's right yeah yeah, but but to be honest, yes, it's still 800 kilometers, so we prefer to use bike to go to the airport.

00:04:15.983 --> 00:04:19.086
Yeah, it would be a long ride.

00:04:19.187 --> 00:04:28.437
A long ride, yeah. And then BC TechDays and Directions EMEA.

00:04:28.437 --> 00:04:38.468
So that's my three conferences that I usually attend as a speaker, yeah, and that's where we can meet.

00:04:38.468 --> 00:04:55.372
I really hope to go this year to Directions North America, but it seems that my visa is not ready yet, so I don't think that they will issue it on time.

00:04:56.040 --> 00:05:03.673
I'm hoping that they issue it on time, because I would enjoy meeting you in person in Las Vegas this year.

00:05:04.601 --> 00:05:22.163
I know it's a long trip for you too yes, but it's still already two months of visa processing and they, you know, uh waiting okay you got like a little over three weeks left, four weeks left, so you still have time, you, you just have to.

00:05:22.242 --> 00:05:22.605
When's your?

00:05:22.605 --> 00:05:24.170
When's your cutoff day?

00:05:24.170 --> 00:05:25.115
Do you cut-off day?

00:05:25.115 --> 00:05:29.069
Whereas if you don't have a visa by a certain day, then you definitely won't be attending.

00:05:31.521 --> 00:05:32.987
I think that it's already passed.

00:05:34.627 --> 00:05:40.492
Oh man. We've got to make sure that you make it next year, then.

00:05:40.819 --> 00:05:42.206
I'm hopeful to run into you somewhere then.

00:05:42.206 --> 00:05:52.033
So you've been doing a lot of great things and for those that do not know about CentralQ, can you tell us a little bit about CentralQ briefly?

00:05:52.033 --> 00:05:54.524
And then I have a whole list of questions for you.

00:05:55.365 --> 00:06:27.283
Right, yes, so I've been doing different machine learning things before, and then I was speaking at conferences about how we can implement machine learning in Business Central. I remember that the first time I talked about this was in 2018, I think, at Directions EMEA, and I was the only weird person that talked about this at the conference. Even Microsoft didn't talk about that.

00:06:27.283 --> 00:06:42.204
And then, at Directions EMEA last year, I found that, like, in 60-70% of all the content, everyone speaks about Copilot and AI.

00:06:42.204 --> 00:06:45.093
So that's where we are.

00:06:45.093 --> 00:06:47.961
That's where I think that my mission was accomplished.

00:06:47.961 --> 00:07:05.314
But I returned back like two years ago, a little bit more, when the first ChatGPT appeared, right, and we were all mind-blown about the power of large language models.

00:07:05.314 --> 00:07:17.211
We all saw them for the first time, and what I did, actually, I think many people did: I thought, hey, great, now I can use it to help myself with Business Central.

00:07:19.100 --> 00:07:27.528
And just after some quick queries, I figured out that, no, that doesn't work.

00:07:27.528 --> 00:07:43.903
It just suggested features that don't exist, suggested code that doesn't compile, suggested routes that weren't real. It just hallucinated a lot.

00:07:43.903 --> 00:08:03.365
But I still thought that, yeah, that could be a good framework to build around and to help our community to use it to help with the business central problems.

00:08:03.365 --> 00:08:12.149
Yeah, the problem with the business central is that it's still very, you know, narrow, uh, comparing to the whole internet.

00:08:12.149 --> 00:08:15.254
Yes, so our al development is.

00:08:16.040 --> 00:08:19.024
you know it's several github repos.

00:08:19.024 --> 00:08:29.622
Compared to the millions of other repos, our documentation for Business Central is still small compared to all other products.

00:08:29.622 --> 00:08:36.034
So probably at this point, at those point of time it was GPT 3.5.

00:08:36.034 --> 00:08:39.104
It maybe knew something.

00:08:39.104 --> 00:08:48.043
But you know, the main goal of the large language models is to answer all the questions, no matter if it's correct or not.

00:08:48.043 --> 00:08:52.532
So it was just imagine the answer.

00:08:53.894 --> 00:09:03.792
However, I found, and at that period of time it was very hard, that there is still a way we can make it better.

00:09:03.792 --> 00:09:47.447
So if we just make a big knowledge base about everything that we know about Business Central in one place, and then not ask the large language model directly, but first query our knowledge base, find the potential answers, like some text that will potentially answer the user question, and then send this to the language model together with the user question, this increases the correct answers a lot.

00:09:47.447 --> 00:09:57.184
So that's what we call fact grounding, yeah, or knowledge grounding.
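
For the mechanics of the knowledge-grounding flow Dmitry describes, here is a minimal sketch in Python. It is not CentralQ's actual code: search_knowledge_base stands in for a hypothetical index over blogs, Microsoft Learn pages and YouTube transcripts, and the OpenAI chat API is used purely as an example of a large language model.

```python
# Minimal sketch of the knowledge-grounding (RAG) flow, not CentralQ's actual code.
# `search_knowledge_base` is a hypothetical helper over your own index of blogs,
# Microsoft Learn pages and YouTube transcripts.
from openai import OpenAI

client = OpenAI()

def search_knowledge_base(question: str, top_k: int = 5) -> list[dict]:
    """Return the passages most likely to answer the question,
    e.g. [{"text": "...", "url": "..."}, ...]. Vector or keyword search."""
    raise NotImplementedError  # assumption: you maintain this index yourself

def answer_with_grounding(question: str) -> str:
    passages = search_knowledge_base(question)
    context = "\n\n".join(f"[{i + 1}] {p['text']} (source: {p['url']})"
                          for i, p in enumerate(passages))
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat model works
        messages=[
            {"role": "system",
             "content": "Answer only from the numbered sources and cite them as "
                        "[n]. If the sources do not contain the answer, say so."},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```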

00:09:57.184 --> 00:10:05.296
So that's where the idea was born about hey, I think that that will work.

00:10:06.096 --> 00:10:19.374
So the next problem was that I needed to find a way to build it, because there was no exact documentation, there was nothing.

00:10:19.374 --> 00:10:24.828
So actually my, my only source of knowledge at this point of time was Twitter.

00:10:24.828 --> 00:10:39.652
So I followed some guys that also did some experimenting, chat with them, and so I built a knowledge base.

00:10:39.652 --> 00:11:04.164
I took the blogs and Microsoft Learn first, then I added YouTube at some point in time, then Twitter also as a source of knowledge, and yeah, it took like two months of building, I remember, and CentralQ was born.

00:11:05.125 --> 00:11:29.076
So CentralQ, in essence, is a large language model that's built, or it's grounded, or it has its knowledge based upon popular blogs from community members of Business Central, from the development point of view as well as from the functional point of view, the Microsoft Learn documents, which keep getting better and better, Twitter, and the YouTube videos.

00:11:29.076 --> 00:11:40.583
So anybody who uses CentralQ, similar to ChatGPT which you mentioned, which a lot of people use, will pull the knowledge from those sources to return the result.

00:11:41.424 --> 00:11:54.625
Yes, and also the problem with just a pure large language model was, and still is, that it's trained and has a cut-off knowledge date.

00:11:54.625 --> 00:12:02.332
So it's usually for the OpenAI models it's one year before.

00:12:02.332 --> 00:12:25.264
So the current models, I think they have a cut-off date like 2024, somewhere in maybe autumn, maybe summer. But as we ask about Business Central, this area is growing fast.

00:12:25.303 --> 00:12:43.105
New features appear every day. Oh yes. No, like, oh yeah, not every day, okay, but we have waves, and they appear much quicker than the large models are trained. It does seem like every day, by the way. Yes, every month we have new features.

00:12:43.105 --> 00:12:44.067
So it's, it's just like every day by the way?

00:12:44.086 --> 00:12:47.895
Yes, exactly Every month we have new features, so it's just like every day is a holiday.

00:12:47.916 --> 00:13:07.846
I guess you could say yeah so this was the second problem that I wanted to solve and the Central Queue not just have this knowledge base that is trained and used, but it updates automatically every day it but it's updates automatically every day.

00:13:07.866 --> 00:13:30.399
So we search the web for new information regarding Business Central and update this knowledge base. And, you know, this is very exciting to see, for example, when Microsoft releases the launch videos before the wave, and they are published on YouTube.

00:13:30.399 --> 00:13:36.498
So the next morning, CentralQ knows everything from all the videos.

00:13:36.498 --> 00:13:42.370
So you can just go and ask what the new features are, how they work.

00:13:42.370 --> 00:13:47.566
So the tool answers based on just what was just published.

00:13:50.288 --> 00:13:52.792
I think that's very useful.

00:13:53.352 --> 00:14:01.200
I think it's extremely useful because, as you had mentioned, there aren't a lot of sources or a collection, even with those other language models.

00:14:01.200 --> 00:14:03.230
Because Business Central, there are a large number of users using the application.

00:14:03.230 --> 00:14:05.660
We have large number of users using the application.

00:14:05.660 --> 00:14:14.128
We have a lot of members in the community, but it's still small compared to other languages and other pieces of information on the internet.

00:14:14.128 --> 00:14:23.523
So it's a great tool for anybody that uses Business Central, and it's not just development and it's not just functional, it's a combination of both.

00:14:23.523 --> 00:14:30.849
So, whether you're a developer, a user or somebody working to consult others with Business Central, it's a good tool to have.

00:14:32.395 --> 00:14:33.259
Yes exactly.

00:14:33.600 --> 00:14:46.912
And the second thing that I thought should be really mandatory, and it now became a standard in all these Copilot things, is to reference the source.

00:14:46.912 --> 00:15:04.678
So in the pure ChatGPT at that period of time, you got the answer, but, you know, you don't know if it's correct or not, so you need to double-check that, and there were no sources where you could double-check that.

00:15:04.678 --> 00:15:17.842
So that was my initial design from the beginning: hey, you not only need to get the answer but also the links to the sources where this answer was pulled from.

00:15:23.267 --> 00:15:27.669
I found this also a very widely used flow.

00:15:27.669 --> 00:15:38.136
When you ask a question in CentralQ, it gives you the answer, and then, if you want to go deeper, you just click on the link.

00:15:38.136 --> 00:15:42.221
It opens the blog, so there is more detailed information.

00:15:42.221 --> 00:15:44.163
You can just read it.

00:15:44.163 --> 00:15:57.677
And I found that around, I think, 30 or 40% of all redirects to my website are coming now from CentralQ, which is also interesting.

00:15:59.142 --> 00:16:00.145
Well, I like that.

00:16:00.145 --> 00:16:21.783
I do like that because, as we all hear, if you haven't heard AI, then I don't know where you are, and if you haven't heard AI within the last hour, I don't know where you are either, because I don't think you can go an hour without hearing AI copilot, large language model, machine learning no matter where you are on the planet you could be using it too.

00:16:21.842 --> 00:16:40.711
You just don't know maybe, maybe the the ability for users of tools such as this to validate the information, because everyone talks about how this hallucinates hallucinations where you had mentioned large language models will always give you an answer.

00:16:40.711 --> 00:16:41.961
They never return.

00:16:41.961 --> 00:16:44.365
I don't know, so it could be an incorrect answer.

00:16:44.365 --> 00:17:01.371
So, knowing that individuals are utilizing or following those links to learn more about the answers or validate the answers, it's nice to hear, instead of everybody just saying give me the answer and it creating something that may or may not even exist, and then people spread that information.

00:17:01.371 --> 00:17:19.997
So, with CentralQ, when we started talking about planning this, because we planned this a long time ago, with CentralQ turning two, you said you may have a lot of new things in store for CentralQ.

00:17:20.240 --> 00:17:20.421
Yeah.

00:17:20.421 --> 00:17:50.221
So I hoped that I would release the second version before we talked, but it's still in development mode because, well, there are some other projects that I'm doing. Oh, I understand.

00:17:51.905 --> 00:17:54.413
Yeah, but but also, uh, I think that the most important reason for me was to postpone a little bit.

00:17:54.413 --> 00:18:29.402
Many new things appeared in the AI world since, you know, my first planning, and the most important of them: now there is a new type of model, which are called reasoning models, so they don't give you the answer directly, they think about the answer first and then produce the answer.

00:18:29.402 --> 00:18:35.953
That's a little bit different type of model that I also want to implement in CentralQ.

00:18:35.953 --> 00:18:47.507
And also, the other thing is the concept of agents, which you also, I think, hear a lot about.

00:18:47.507 --> 00:19:08.541
And I started experimenting with agents, I think, in September last year, August, September, and the first agents that I showed were at Directions EMEA, and I was really blown away by this concept and how it works.

00:19:08.541 --> 00:19:31.185
So the example that I showed at Directions was that I created a team of agents. Yeah, so there was a team of agents where the goal was to ask any question in natural language and it would convert it to API

00:19:31.185 --> 00:19:32.669
calls to Business Central.

00:19:32.669 --> 00:20:03.046
Do the calls to Business Central, grab the data, and provide the answer to the user. And the problem with that, if I do it the classical way, is that in many cases, if I just ask in a simple call to the large language model, hey, take this query and convert it to the API, this API in most of the cases will not work.

00:20:03.046 --> 00:20:24.588
But if I make a team of agents, there will be one agent that will be responsible to generate the API, another agent will be responsible to call this API, and another agent will be responsible to provide the final answer, and they actually communicate with each other.

00:20:26.184 --> 00:20:28.667
So the first one generated the API, the second one called it, and it didn't work.

00:20:28.667 --> 00:20:34.487
It returned back to the first one and said hey, this didn't work, so you need to do this job better.

00:20:34.487 --> 00:20:41.380
It generated something and once again sent it to the other agent.

00:20:41.380 --> 00:20:42.673
The other agent once again said hey, this didn't work, send it to the other agent.

00:20:44.002 --> 00:20:54.488
So the first agent actually went to the knowledge base that I also connected, and searched for the information.

00:20:54.488 --> 00:20:58.909
Actually, I connected to the Jeremy's book, the whole book about the API.

00:20:58.909 --> 00:21:10.828
So it went, read the book, found the exact endpoint that potentially will work and then generated the good API.

00:21:10.828 --> 00:21:13.528
The second agent executed this API.

00:21:13.528 --> 00:21:14.109
That worked.

00:21:14.109 --> 00:21:17.786
The other agent produced the answer and it was like online.

00:21:17.786 --> 00:21:20.107
You can see their internal communication.
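
A rough sketch of that agent loop, for readers who want to see the shape of it: one agent drafts a Business Central API call, a second executes it, and failures are fed back, together with passages looked up in a knowledge base, until the call works. Every helper below is a hypothetical placeholder for a single LLM or search call; it is not the implementation shown at Directions.

```python
# Agent-team sketch: draft an API call, execute it, feed failures back until it works.
# draft_api_call / lookup_api_docs / summarize are hypothetical single LLM or search calls.
import requests

def draft_api_call(question: str, feedback: str = "", knowledge: str = "") -> str:
    """LLM call: turn the question (plus feedback/knowledge) into a BC API URL."""
    raise NotImplementedError

def lookup_api_docs(question: str, error: str) -> str:
    """Knowledge-base search: find endpoints relevant to the question and error."""
    raise NotImplementedError

def summarize(question: str, data: dict) -> str:
    """LLM call: turn the raw API response into a user-facing answer."""
    raise NotImplementedError

def answer_from_live_data(question: str, max_rounds: int = 3) -> str:
    feedback, knowledge = "", ""
    for _ in range(max_rounds):
        url = draft_api_call(question, feedback, knowledge)    # agent 1: generate
        resp = requests.get(url)                               # agent 2: execute (auth omitted)
        if resp.ok:
            return summarize(question, resp.json())            # agent 3: answer
        feedback = f"{url} failed: {resp.status_code} {resp.text[:200]}"
        knowledge = lookup_api_docs(question, feedback)        # go read the docs
    return "Could not build a working API call."
```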

00:21:22.161 --> 00:21:24.750
That is all amazing to me.

00:21:24.750 --> 00:21:29.320
It's the whole agentification.

00:21:29.320 --> 00:21:38.587
We talk about this a lot now because everybody's in it, but it's almost like having a staff that's working for you, and each one of them does a different task.

00:21:38.848 --> 00:21:40.132
So you have two features coming in.

00:21:40.132 --> 00:21:43.311
One is the reasoning right, so it's going to reason itself.

00:21:43.311 --> 00:21:45.681
It sounds like, yes, it's a kind of new feature.

00:21:45.681 --> 00:21:50.171
And in the second one you're almost adding a um, an agent coordinator.

00:21:50.171 --> 00:22:01.632
It sounds like, I just want to talk to this one thing and then it's going to pull in whatever agent I need to accomplish this task. Yes, so that's actually what I'm thinking of.

00:22:02.212 --> 00:22:08.946
Uh, because there are simple questions.

00:22:08.946 --> 00:22:11.098
So how this feature works.

00:22:11.098 --> 00:22:19.087
It will go to the my Knowledge Base, find this feature and produce the answer.

00:22:19.087 --> 00:22:21.522
That's how this works nowadays.

00:22:21.522 --> 00:22:48.145
But let's say you want to ask something like: hey, please find me the apps on AppSource that do this, compare them by something, produce an output table, maybe with some feedback from the users, and suggest the best one I can use.

00:22:48.145 --> 00:22:58.565
It's like a multi-step process, and this currently will not work using the current version of CentralQ.

00:22:58.565 --> 00:23:03.785
It will work at some point, but the answer will be limited.

00:23:03.785 --> 00:23:19.289
So I now want to serve more advanced queries with CentralQ, which I call CentralQ 2.0, which I'm working on.

00:23:19.289 --> 00:23:27.641
So that's why CentralQ turns 2, not only in years, in age, but also in the version.

00:23:27.641 --> 00:23:44.519
But yeah, I want it to be agentic, I want it to use reasoning models, and also the new thing that appears in many AI areas nowadays.

00:23:44.539 --> 00:23:47.030
It's called deep search or also deep research.

00:23:49.721 --> 00:24:17.388
So it's, um, because now I'm using, and most of these, the ChatGPT, the Perplexity, other copilots today in a simple mode, they're using like a maximum of 10 different sources, depending, because that's actually usually the limitation of one call, you know, to the large language model. But with a deep search,

00:24:17.848 --> 00:24:19.057
It's also multi-step.

00:24:19.057 --> 00:24:21.442
So we you can ask a complex query.

00:24:21.442 --> 00:24:24.617
It will break down this query into the multiple queries.

00:24:24.617 --> 00:24:32.317
It will search them one by one, then find maybe 50-70 different sources.

00:24:32.317 --> 00:24:43.762
It will understand which sources it should go and read, depending on the different evaluations.

00:24:43.762 --> 00:24:50.851
It will go read, it will find the trusted sources and then produce the answer.

00:24:50.851 --> 00:24:54.565
So usually this process takes longer.

00:24:54.565 --> 00:25:07.343
Yeah, so, because the simple question-answer in CentralQ takes about 10 seconds to the first token.

00:25:07.343 --> 00:25:14.155
The deep search, according to my experiments, nowadays it's around one minute.

00:25:14.155 --> 00:25:27.265
So it's one minute, one minute and a half, but it will go really deep and find more information and produce a more advanced answer.
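
A hedged sketch of that deep-search flow: break the question into sub-queries, search each one, score the candidate sources, read only the trusted ones, and then synthesize a single answer. Every helper here is a hypothetical placeholder for an LLM call or a search backend, not CentralQ's implementation.

```python
# Deep-search sketch: multi-step search, source scoring, then synthesis.
def split_into_subqueries(question: str) -> list[str]: ...
def web_search(query: str, limit: int = 10) -> list[str]: ...   # returns candidate URLs
def score_source(question: str, url: str) -> float: ...         # relevance/trust evaluation
def read_page(url: str) -> str: ...
def synthesize(question: str, notes: list[str]) -> str: ...

def deep_search(question: str) -> str:
    candidates: set[str] = set()
    for sub in split_into_subqueries(question):           # multi-step, not one call
        candidates.update(web_search(sub))                 # may collect 50-70 sources
    ranked = sorted(candidates, key=lambda u: score_source(question, u), reverse=True)
    notes = [read_page(u) for u in ranked[:10]]            # go deep only on the best ones
    return synthesize(question, notes)                     # slower (about a minute) but richer
```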

00:25:27.906 --> 00:25:52.111
And so, yeah, those are three things that I want to combine together, and it's, um, not very obvious, you know, how to do this. It sounds logical, it sounds wonderful, but how does a large language model, or how does the deep research, know which source to read based upon the content?

00:25:52.111 --> 00:25:54.903
And that goes back to the reasoning.

00:25:54.903 --> 00:26:00.028
I mean, I know how the human mind works with reasoning, reasoning based upon history and understanding.

00:26:00.028 --> 00:26:11.651
I still have difficulty understanding how these language models really put this information together to know.

00:26:11.974 --> 00:26:20.018
It's, to me, I mean, mind-blowing. Like, everything you said sounds great.

00:26:20.218 --> 00:26:29.465
And if I had 10 people sitting in the room that were humans working with me, I could say, okay, let's go through these sources, find the ones that are relevant for the question.

00:26:29.465 --> 00:26:35.141
Okay, let's take the pieces back and put them together, because you know that humans have reasoning in how the mind thinks.

00:26:35.141 --> 00:26:47.146
But getting a computer to do this or to getting a piece of software to do this, which is in essence what it is right, it is software, if I stand correct.

00:26:49.198 --> 00:26:49.419
Hold on.

00:26:49.419 --> 00:26:53.339
Can I recommend the fourth one as a wish?

00:26:53.339 --> 00:27:02.559
Maybe, maybe text to audio or audio to text that'd be really cool to add, or someone just have conversation with, that would be awesome to to do.

00:27:02.559 --> 00:27:11.111
I'm not trying to add more work for you, but yeah, so actually, audio-to-text is a great way.

00:27:13.277 --> 00:27:26.458
I'm personally using this with external software because I know that maybe in Windows it's already implemented by default.

00:27:26.458 --> 00:27:27.121
I'm using Mac.

00:27:27.121 --> 00:27:33.538
There is no such feature, but I'm using, you know, let me, what's called?

00:27:33.538 --> 00:27:35.977
It is called Flow.

00:27:35.977 --> 00:27:38.703
Yeah, so this software is called flow.

00:27:38.703 --> 00:27:46.288
You can just Talk to this and it will automatically transcribe and then use it in the query.

00:27:46.587 --> 00:28:24.901
Yeah, but I would also want to add, okay, the fifth feature to that is multimodal, multimodal support, which means that now I'm pulling just text from the sources. So from the blogs, it's just text; from the YouTube videos, it's a transcript; and in many cases it's not enough.

00:28:24.901 --> 00:28:28.729
Especially in the blogs, I found that very often people just paste the screenshots inside of the blog.

00:28:28.729 --> 00:28:32.705
Yeah, so they don't describe these screenshots.

00:28:32.705 --> 00:28:35.304
That's how this feature works.

00:28:35.304 --> 00:28:38.762
And then there is an image with different arrows.

00:28:39.263 --> 00:28:47.583
Yes, there is, and I actually now don't get this information, which is very important information.

00:28:50.769 --> 00:28:53.162
So I want to grab this information as well.

00:28:53.162 --> 00:28:58.641
But also, that's the back-end, so that's how to improve my knowledge base.

00:28:58.641 --> 00:29:17.464
On the other side, on the user side, I want you to be able to really just copy-paste the screenshot and send it directly to CentralQ and ask about, you know, the error, for example. This really will help to improve the answers.

00:29:17.464 --> 00:29:23.824
So, yeah, these are the five pillars that I'm working on right now.

00:29:23.824 --> 00:29:32.642
And also, yeah, so that's the area that I'm focusing on right now.

00:29:33.596 --> 00:29:36.401
That's a lot and for you to do this.

00:29:36.401 --> 00:29:39.821
You're doing this all on your own, now, correct, and in your free time.

00:29:40.684 --> 00:29:43.032
Yes, when I say free time it's.

00:29:43.134 --> 00:29:45.663
You still work with Business Central.

00:29:45.663 --> 00:29:47.342
You do all the stuff that we talked about.

00:29:47.342 --> 00:29:48.921
So when do you sleep?

00:29:51.234 --> 00:29:58.229
You see that I already wake up, so it's 6am here.

00:29:58.229 --> 00:30:00.080
Yes, yes yes.

00:30:00.080 --> 00:30:02.896
Yes, once again thank you.

00:30:06.584 --> 00:30:08.988
My day starts very early.

00:30:09.048 --> 00:30:12.713
I have more time to work on CentralQ after that.

00:30:12.733 --> 00:30:13.336
No, that's good.

00:30:13.336 --> 00:30:23.587
That's why we we said we could do this, but, as we talked about last time, you're in the future for us, so it's six in the morning, or six zero six hundred, where you are.

00:30:23.969 --> 00:30:31.737
Thursday on tomorrow for us, tomorrow yeah so I like to talk with you because I get to know what will happen tomorrow.

00:30:31.737 --> 00:30:46.648
You're doing a lot of great things with CentralQ, and another thing that has come out, again with these deep research models, is local large language models.

00:30:46.648 --> 00:30:58.382
Do you see a place for that with CentralQ, to maybe help with some of the processing or offloading some of the resources or knowledge for CentralQ?

00:31:02.336 --> 00:31:25.205
Yeah, I thought about that, but I didn't find where this can fit with the CentralQ architecture and the users right now, because I don't have, like, an app for the phone, for example.

00:31:25.205 --> 00:31:30.451
Maybe we need to do it at some point of time, but let's see.

00:31:30.451 --> 00:31:49.195
And still, it's like a web service which works on the web, which communicates with Azure OpenAI nowadays, and the whole infrastructure is in Azure.

00:31:49.195 --> 00:32:09.571
There is one thing that maybe can be useful in this case, I mean these local language models: using, I call it, private data with CentralQ.

00:32:09.571 --> 00:32:38.944
So maybe you know or not, after our previous call when we discussed the web version of CentralQ, I released the Business Central version of CentralQ, and so this is the AppSource app, which is actually a paid version, which costs like $12 per user per month, which is not a lot, I think.

00:32:41.368 --> 00:32:50.576
But with this, you have CentralQ inside of Business Central and you can upload your own documentation there.

00:32:50.576 --> 00:33:16.500
So you can upload the documentation about how your Business Central works, like the instructions about your processes, the instructions about your per-tenant extensions and all of that. And one of the nice features there is that you can use the page script, the Business Central page scripting, to record the steps.

00:33:16.500 --> 00:33:22.701
It will take the URL of this page script or a YAML file.

00:33:22.701 --> 00:33:38.143
You can export that and upload it to the CentralQ app inside of Business Central, and it will automatically produce the user manual from that and use it as internal knowledge about how your Business Central works.

00:33:38.143 --> 00:33:41.343
And you can just ask a question.
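
A minimal illustration of that page-script-to-documentation idea, not CentralQ's actual pipeline: the exported page scripting recording (YAML) is passed to a chat model as-is and turned into a numbered user guide. The model name, prompt and file name are assumptions.

```python
# Turn a Business Central page scripting recording (YAML) into user documentation.
# Illustrative only; model name and prompt are assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def page_script_to_manual(yaml_path: str) -> str:
    recording = Path(yaml_path).read_text(encoding="utf-8")   # raw YAML, passed as-is
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {"role": "system",
             "content": "Turn this Business Central page scripting recording into "
                        "step-by-step, user-facing process documentation."},
            {"role": "user", "content": recording},
        ],
    )
    return response.choices[0].message.content

# e.g. manual = page_script_to_manual("post-a-sales-invoice.yaml")  # hypothetical file
```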

00:33:42.305 --> 00:33:42.906
That's gold.

00:33:42.906 --> 00:33:54.807
We were just talking about this, right, brad, like we were just talking about like taking a page script result and then turning that into a usable guide or documentation.

00:33:54.807 --> 00:34:05.373
Especially for someone that is maybe in the middle of an implementation, documentation is usually like the last thing people create, but if you can make it easy with this tool, that's incredible.

00:34:05.393 --> 00:34:21.275
That's going to save a ton of time. I'm on the page scripting kick because I've always been big into testing, and page scripting is, in essence, a way that you can enhance your user acceptance testing, but with the way that it records it, as you had just mentioned, you can create user documentation.

00:34:21.275 --> 00:34:32.842
So now, with the CentralQ app for Business Central, not only do you get the ability to use the CentralQ knowledge base that you update daily with information from Business Central.

00:34:32.842 --> 00:34:41.706
You have the ability to upload your own private documentation, right, and that stays separate from everything else.

00:34:42.007 --> 00:34:54.101
That's just... Yeah, yeah, so this is a separate knowledge base that is per environment or per tenant, depending on your choice.

00:34:54.101 --> 00:35:07.822
So you have a dedicated knowledge ID, and all the knowledge that you upload is linked to this ID, and only you can use it.

00:35:07.822 --> 00:35:21.128
It is all very secure and you can upload the PDF files, word documents, txt files and page scripts.

00:35:21.128 --> 00:35:22.494
And there is a chat window.

00:35:22.494 --> 00:35:26.501
It's also not a question and answer.

00:35:26.501 --> 00:35:40.351
It's a chat, so you can go and chat about that, and you can decide, when you ask a question, what sources it can use.

00:35:40.351 --> 00:35:53.682
So you can decide if it can use only your private documentation and nothing else, or, in addition, it can use the whole CentralQ knowledge, or, in addition to that, it can use Microsoft Learn.

00:35:53.682 --> 00:36:02.407
So it's three big buckets of knowledge, and you decide what it can use. And yeah, you can go and install it.

00:36:05.059 --> 00:36:05.780
Chris, to your point.

00:36:05.780 --> 00:36:06.001
That's.

00:36:06.001 --> 00:36:06.943
That's where it is.

00:36:06.943 --> 00:36:13.485
Nobody wants to document a process and everybody relies on someone in the office to have that process.

00:36:13.485 --> 00:36:15.961
But something may happen where they're out.

00:36:15.961 --> 00:36:22.724
One day they go on vacation, they, for personal reasons, make a change in their career and all that information is lost.

00:36:22.724 --> 00:36:38.739
But now, with this, to be able to use page scripting to have it generate documentation and then have that documentation searchable is a huge time saving and it's gold because you can record as somebody's working, say that this is that process.

00:36:41.244 --> 00:36:57.181
I'm just even going on that too, Brad, because, for example, if your process changes, right, people processes, business processes change, and you want to go update that, you just do a page script and have it change that in your document, and then there's your updated document.

00:36:57.181 --> 00:37:04.648
Because a lot of people like business change process, your business process changes and then nobody ever updates the original document.

00:37:04.648 --> 00:37:05.389
So with this.

00:37:05.630 --> 00:37:06.811
It would make it easy.

00:37:06.811 --> 00:37:11.579
It's the original document. So with this, it would make it easy. To be honest with you,

00:37:11.579 --> 00:37:23.356
and again, as you had mentioned, it's a relatively low price for what you get, because the ability to keep the business continuity there is extremely valuable and important. CentralQ,

00:37:23.356 --> 00:37:36.514
this whole AI stuff, is such a huge time savings if it's used appropriately. Yes, and you know, the most...

00:37:37.798 --> 00:37:57.655
The cherry on top of this process is that when you have the answer, you also have the links to the sources, and if the source was a page script, you can just click it, it will open Business Central in a new window and will replay it.

00:37:57.655 --> 00:38:00.722
So is that right?

00:38:00.762 --> 00:38:00.942
there.

00:38:00.942 --> 00:38:01.784
See, these are all the.

00:38:01.784 --> 00:38:03.807
These are what I call like what I would.

00:38:03.807 --> 00:38:06.157
I don't want to say hidden feature, but these, like.

00:38:06.157 --> 00:38:14.170
I know about CentralQ and I was chatting with you a few months back about the app as well, because I had questions about the page scripting and creating documentation.

00:38:14.170 --> 00:38:30.795
But these are things that I don't think a lot of individuals may know about CentralQ and the power that you have, because from a user point of view, that is a huge time savings for them and from any business point of view, I think there's some huge value in having that.

00:38:30.795 --> 00:38:33.742
So you have so many things on this application.

00:38:33.742 --> 00:38:35.447
I still can't believe you did it all by yourself.

00:38:36.476 --> 00:38:38.619
Yes, it was just me.

00:38:38.619 --> 00:38:42.085
So and uh um and uh I.

00:38:42.085 --> 00:38:52.559
So this is actually the sixth pillar that I wanted to also embed in the web version of CentralQ, so CentralQ 2.0.

00:38:52.559 --> 00:39:03.804
Also we'll have at least in my plans, I really want to do this this login feature where you can log in.

00:39:03.804 --> 00:39:08.157
It will be your space where you can upload your own documentation.

00:39:08.157 --> 00:39:12.086
So I want to combine these two worlds together.

00:39:12.086 --> 00:39:18.268
Nowadays, and you can use it externally as a web or internally inside of the Business Central.

00:39:18.268 --> 00:39:23.827
So that's my goal, that I want.

00:39:24.275 --> 00:39:29.878
I think that's a great goal and I hope you get to it With CentralQ, if you can share.

00:39:29.878 --> 00:39:34.242
If you're not comfortable sharing any of these questions, please feel free to let me know.

00:39:34.242 --> 00:39:34.864
I understand it.

00:39:34.864 --> 00:39:39.588
How many searches do you get per day, per month, per quarter?

00:39:39.588 --> 00:39:41.291
What sort of metrics do you have on it?

00:39:50.815 --> 00:39:52.099
Yeah, so for the last two years almost.

00:39:52.099 --> 00:39:58.793
There were more than 300,000 questions and answers generated.

00:39:58.873 --> 00:39:59.695
That's a lot of questions.

00:40:01.277 --> 00:40:07.568
And that's produced around 1 billion tokens.

00:40:07.568 --> 00:40:11.226
Around 1 billion tokens.

00:40:11.246 --> 00:40:14.559
1 billion tokens 300,000 questions what is a token?

00:40:16.617 --> 00:40:20.905
Yeah, so the token is one word or part of the word.

00:40:20.905 --> 00:40:41.012
One word can be split into one or more tokens, so that's actually how large language models see the world and generate the answers.
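
A quick illustration of tokens using OpenAI's tiktoken library, for readers counting along with the statistics:

```python
# A token is a word or a fragment of a word; models read and write text token by token.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")          # tokenizer used by recent OpenAI models
tokens = enc.encode("Business Central turns two!")
print(len(tokens))                                  # a handful of tokens for one short sentence
print([enc.decode([t]) for t in tokens])            # pieces like 'Business', ' Central', ...
```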

00:40:41.012 --> 00:40:59.686
Yeah, so it's around 500 queries per day nowadays and depending on the time of the year, so the lowest number of queries is on 25th of December.

00:41:02.358 --> 00:41:04.505
I wonder, why yes.

00:41:06.197 --> 00:41:07.161
That is interesting.

00:41:08.434 --> 00:41:14.617
Yeah, but still there are questions on this date to CentralQ.

00:41:14.617 --> 00:41:17.501
Well, some people don't want.

00:41:17.543 --> 00:41:19.204
Yes the break.

00:41:19.204 --> 00:41:25.800
Do you keep track of or classify the questions?

00:41:25.800 --> 00:41:28.025
I'm just I'm thinking of could be used.

00:41:28.025 --> 00:41:33.842
What I mean by classifying is is it a finance question, a purchase and payables question?

00:41:33.842 --> 00:41:36.480
Order to cash.

00:41:38.005 --> 00:41:44.442
Yes, so I'm also classifying all the questions using the large language models.

00:41:44.442 --> 00:41:54.681
So it's around, so I have the statistics.

00:41:54.681 --> 00:42:16.675
So around 20% of all the questions I call, like, general, so it's different questions about different features of Business Central, but about 20% goes around AL development.

00:42:16.675 --> 00:42:31.760
And then there are breakdowns by the modules, like about 11% is about financials, and then about 9% from inventory, and so on.

00:42:31.760 --> 00:42:35.824
But also this is like a categorization by the categories.

00:42:35.824 --> 00:42:47.382
But also I classify the questions by type, and about 50% of all questions are about how to do things.

00:42:47.382 --> 00:42:48.769
So this is like how can I do things?

00:42:48.769 --> 00:42:58.550
And that's very interesting because that's where comes the power of the central knowledge.

00:42:59.094 --> 00:43:13.023
Because if you rely only on the Microsoft Learn, the Microsoft learn documentation structure is about the feature, about like 90%.

00:43:13.023 --> 00:43:18.239
I would say that this is the feature, this is how this feature works, this is AL type.

00:43:18.239 --> 00:43:20.994
This is what it's about.

00:43:20.994 --> 00:43:24.702
It's not about how the process works.

00:43:24.702 --> 00:43:28.362
There is not enough knowledge in the documentation about how the process works.

00:43:28.362 --> 00:43:36.885
And usually people ask about this: how to make this process happen.

00:43:36.885 --> 00:43:45.826
And that's where blogs come into play and YouTube videos come into play, because in many blogs, it's not about the features, it's about the process.

00:43:45.826 --> 00:43:51.757
That's actually how you can do this using multiple features.

00:43:51.757 --> 00:44:20.757
So that's where the power of this comes, and I also have this telemetry that 86% of all the knowledge comes from the blogs nowadays and 65% comes from the YouTube videos and about 50% or 60% from the Microsoft Learn.

00:44:20.757 --> 00:44:23.119
So it's a combination, of course.

00:44:23.139 --> 00:44:51.108
So one question can have sources from multiple different categories, but the blogs play a crucial role in the answers in general. I'm fascinated by statistics and I'm happy that you shared that, because I was curious, using it, and I thought the number one question, Chris, would be about what's the best podcast about Business Central.

00:44:52.436 --> 00:44:57.541
I don't know, we should give that to Dmitry as a link to our website, because there are transcripts in there.

00:45:00.840 --> 00:45:12.144
I think that I don't know if I can find it very quickly how many answers were used in the Dynamics Corner podcast.

00:45:12.184 --> 00:45:17.121
No, no it's okay, you can look at that afterwards, but it was just a little fun.

00:45:17.121 --> 00:45:17.804
It was a little fun.

00:45:17.804 --> 00:45:20.684
I appreciate those statistics and all that you're doing with that.

00:45:20.684 --> 00:45:25.724
With that, we always have some side conversations and such too.

00:45:25.724 --> 00:45:35.347
So where do you see AI within Business Central and, most importantly, AL development?

00:45:38.117 --> 00:45:47.706
So I would start with AL development, because that's where I use AI for the Business Central every day.

00:45:47.706 --> 00:46:02.681
So I'm not the user of the Business Central, so I actually don't very often consume AI features inside of the Business Central for myself, but as an AL developer.

00:46:02.681 --> 00:46:17.210
So there are nowadays a choice of IDEs, yes, what we can use for AL development.

00:46:17.210 --> 00:46:23.286
So we all started from VS Code and we started with GitHub Copilot.

00:46:24.509 --> 00:46:38.378
Yeah, so that's where I started.

00:46:38.378 --> 00:46:39.539
Many people use that.

00:46:39.559 --> 00:46:40.900
I then switched to Cursor.

00:46:40.900 --> 00:46:42.824
About eight months or so ago I found that.

00:46:42.824 --> 00:46:48.070
So at this period of time, it supported the Claude Sonnet model.

00:46:48.070 --> 00:47:08.476
So VS Code supported the OpenAI, I think, 4o model, which is not so good in AL development, to be honest, for whatever reason.

00:47:08.476 --> 00:47:10.746
That's the fact.

00:47:10.746 --> 00:47:30.338
But there is another model from Anthropic, which is called Claude Sonnet 3.5, and I found that it knows AL pretty well, and the only IDE that supported that was Cursor.

00:47:30.338 --> 00:47:41.543
So I switched to Cursor, and Cursor is actually a clone, a fork, of VS Code, so it supports all the features from VS Code and more.

00:47:42.619 --> 00:47:45.443
The guys did a huge job.

00:47:45.443 --> 00:48:11.114
I mean, my CentralQ stuff is maybe, I don't know, like 5-10% of what they did with Cursor, but they have a big team, they have like millions in investments, and they're located in San Francisco.

00:48:11.114 --> 00:48:20.925
So, yeah, Cursor supported the so-called Composer mode.

00:48:20.925 --> 00:48:28.155
The Composer mode is not just a chat, so it's not like a "how can I develop these things in AL."

00:48:30.121 --> 00:48:40.717
It's actually: you ask it to produce the feature in natural language, and it develops the feature for you, and then you check.

00:48:42.079 --> 00:48:47.128
So, and it worked pretty well if you know how to use it.

00:48:54.643 --> 00:48:57.753
So it also requires some change of your mindset how you deal with AL code, how to set up things.

00:48:57.753 --> 00:49:11.275
But if you know what you are doing, it actually really increases the productivity of your development, and the quality as well.

00:49:11.275 --> 00:49:35.170
And now a new model appeared from Anthropic, Claude Sonnet 3.7, also with reasoning capabilities, and I found that, also in parallel with that, Cursor introduced the agentic mode.

00:49:35.170 --> 00:49:48.059
So combining these two things together in AL development, it's the next level, I mean, nowadays. So should I put my resume together?

00:49:48.099 --> 00:49:48.963
Is that what you're telling me?

00:49:48.963 --> 00:50:01.442
Yes, yeah, yeah, yeah. Chris, you hear that? He's subtly telling AL developers to just look for another job, because... Actually, yeah, I updated my resume as well.

00:50:01.521 --> 00:50:02.706
so it's he knows.

00:50:07.010 --> 00:50:11.365
I did see the update on VS Code today and it was funny.

00:50:11.365 --> 00:50:13.302
We'll go back to your story in a moment.

00:50:13.302 --> 00:50:26.599
I do have another question about the sources, not to disrupt you, but all of the features for this month's update of VS Code are primarily a lot of the features that you talked about from Cursor, and it's all AI-related.

00:50:28.411 --> 00:50:36.717
They added the agent mode, the Copilot edits, the next edit suggestion, which was a big one, so that it can have the edit.

00:50:36.717 --> 00:50:37.199
So it's.

00:50:37.199 --> 00:50:39.664
It seems like a lot of that's coming back to it.

00:50:39.664 --> 00:50:45.505
Uh, to go back to the stats, you had mentioned, I believe from memory, in the conversation once.

00:50:45.505 --> 00:50:55.054
Once I play it back and hear it I'll know for certain, but I think you said 20% is AL development and you mentioned your sources were blogs, learn, youtube and Twitter.

00:50:55.054 --> 00:50:57.344
Did you ever think of GitHub repositories?

00:50:58.936 --> 00:51:05.244
Yes, so GitHub repository also appeared as a source, I think a year and a half ago.

00:51:05.244 --> 00:51:37.643
At that point in time it was an experiment, so I didn't pull all the source code from Business Central, but just the system app repo. And now, in addition to that, there appears a special toggle, an option in CentralQ, where I'm asking specifically about the system app.

00:51:37.643 --> 00:51:51.777
So actually, what this option is doing, it's using just the GitHub as a knowledge base and you can ask how can I create the email, for example?

00:51:51.777 --> 00:52:00.639
It will produce the AL code looking into the system app and it works pretty good.

00:52:00.639 --> 00:52:06.085
I had an experiment which was fun.

00:52:06.085 --> 00:52:15.927
I was sitting at BC TechDays and there was a session about how to use the system app.

00:52:15.927 --> 00:52:31.563
In parallel, there was code on the screen, and in parallel I just asked CentralQ how can I do this, and it produced the same code.

00:52:31.563 --> 00:52:39.068
So it was great to see that it really works.

00:52:40.335 --> 00:53:02.967
And yeah, there is one thing that doesn't work right nowadays with this, even with the Claude 3.7 models, and for now we still have a job as developers, in that it can't really...

00:53:02.967 --> 00:53:05.039
So it's good in AL syntax.

00:53:05.039 --> 00:53:06.947
So it knows the AL syntax.

00:53:06.947 --> 00:53:11.027
Yeah, but AL development is not about just about the syntax.

00:53:11.027 --> 00:53:14.750
It's about using existing libraries.

00:53:14.750 --> 00:53:18.851
Yeah, so it's using existing code units.

00:53:18.851 --> 00:53:30.199
So we don't want to duplicate the code, we don't want to reinvent the wheel and so on, and that's something that it only somewhat knows.

00:53:31.244 --> 00:53:37.929
Well then, we're safe for a long time then, because if it's trying to analyze some of those code units then I don't even think people can do it.

00:53:37.929 --> 00:53:42.130
So AI will have quite a bit of challenge.

00:53:42.130 --> 00:53:54.036
But if the language changes to be more contemporary, you know, down the C# road, like it has been going, then maybe there is no hope for us.

00:53:54.036 --> 00:53:58.070
But until all those code units get cleaned up, we're safe, I think.

00:53:59.112 --> 00:54:13.233
Yes, and that's something that I'm also looking at as a CentralQ opportunity, because recently there also appeared the so-called MCP protocol in Cursor.

00:54:13.880 --> 00:54:38.123
Actually, you can think of it as an API to external tools, and inside of Cursor, in this code generation mode, edit mode or chat mode, you can mention the tool and ask it, hey, how can I do this?

00:54:38.123 --> 00:54:53.248
And what I'm also thinking of is to put some effort into making a knowledge graph from the whole Business Central AL

00:54:53.248 --> 00:55:03.213
code base, you know. So knowledge graphs are a new area also in this world.

00:55:03.213 --> 00:55:16.110
So actually it's not just take the... let's say we take the code, codeunit 12.

00:55:16.110 --> 00:55:25.411
So it's the biggest one, more or less, and it has a lot of different functions that are difficult to understand.

00:55:25.431 --> 00:55:26.373
Is 12 bigger than 80?

00:55:26.373 --> 00:55:28.891
I just had to add.

00:55:30.509 --> 00:55:34.869
No, I'm just kidding. You know, the old-timers know the numbers, right? So we refer to everything as number 12, 80.

00:55:35.659 --> 00:55:37.719
But it's more than 1,000 lines of code.

00:55:38.282 --> 00:55:40.949
Oh yeah, I understand, it's huge.

00:55:41.449 --> 00:55:43.927
It's huge and actually the problem with that.

00:55:43.927 --> 00:55:57.938
You can't just take this one codeunit and paste it to the LLM in one call and ask it, you know, can you just produce my code based on that, and so on.

00:55:57.938 --> 00:56:00.945
They just we still have limitations of context window.

00:56:00.945 --> 00:56:15.889
So there is the area which is called knowledge graphs. So actually you can create a knowledge graph, also using large language models, that tells you the higher level.

00:56:15.889 --> 00:56:27.626
So this is the flow, this how, the things connected to each other, and that's the internal things or the functions, for example.

00:56:28.760 --> 00:56:31.748
So it's create this how do you get that?

00:56:31.748 --> 00:56:32.471
Yeah, this is the graph.

00:56:32.471 --> 00:56:36.626
Is that part of the new language models, or do you have to use another tool for that?

00:56:37.402 --> 00:56:51.909
No, there are some open source libraries how you can do this, but still behind the scenes they are using the large language models to produce these knowledge graphs.

00:56:51.909 --> 00:57:14.849
At the end, this is the database of how things are connected to each other and when you have this database and you ask a question the user asks a question it can first go to this database instead of the knowledge base.

00:57:14.849 --> 00:57:32.771
Instead of the knowledge base with the raw knowledge, it can go first to this knowledge graph, understand which pieces of knowledge are valuable for that, and then go deeper into the raw knowledge, and that really increases the quality of the answer.
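
A sketch of that graph-then-raw retrieval flow over an AL code base. All helpers are hypothetical; open-source GraphRAG-style libraries handle the graph construction, with large language models behind the scenes.

```python
# Graph-assisted retrieval sketch: ask the knowledge graph which objects relate to the
# question, then pull only that raw AL source into the prompt. Helpers are hypothetical.
def query_graph(question: str) -> list[str]:
    """Return identifiers of related objects, e.g. ['Codeunit 12', 'Codeunit 80']."""
    ...

def load_source(identifier: str) -> str:
    """Fetch the raw AL source for one object from the repository."""
    ...

def ask_llm(question: str, context: str) -> str:
    ...

def graph_grounded_answer(question: str) -> str:
    related = query_graph(question)                          # structure first
    context = "\n\n".join(load_source(i) for i in related)   # only the code that matters
    return ask_llm(question, context)                        # stays within the context window
```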

00:57:32.771 --> 00:57:46.623
So my idea is that I also want to put effort into making this CentralQ API for Cursor or VS Code; I think they will also support that.

00:57:47.985 --> 00:57:48.547
I think so.

00:57:48.547 --> 00:57:52.855
I think VS Code will follow Cursor wherever it goes.

00:57:54.219 --> 00:58:03.313
And then you can just use the Claude 3.7, but also mention CentralQ.

00:58:03.313 --> 00:58:12.280
So the flow should be like hey, I want to do this feature and please use central queue to do for the best quality, or something like this.

00:58:12.280 --> 00:58:27.230
And then it will first go to central queue, finds the existing libraries that can handle this, then provide this information to the cloud, so net and the clouds, it will provide the final art.

00:58:27.230 --> 00:58:36.025
So in this, this will really, I think, increase the final quality of the features.

00:58:36.025 --> 00:58:54.744
Yeah, so maybe I will do something that will make me unusable as an AL developer. Make you obsolete. Make me obsolete, that's what you do.

00:58:54.844 --> 00:59:00.141
Dmitry goes down as the father of CentralQ and also the same one that killed.

00:59:00.463 --> 00:59:00.885
AL development.

00:59:03.351 --> 00:59:04.141
But it's not only him.

00:59:04.724 --> 00:59:08.063
It's everyone else, everyone else.

00:59:08.224 --> 00:59:44.748
Yeah, but I think that it's really changing the way we do development right now. Because, for the last task that I got about AL development, I actually decided to do an experiment: I decided to not write any code at all, so I was using just Cursor in this chat mode with edits to produce the final code. And it appears, at the end...

00:59:44.748 --> 00:59:48.909
First, it's doable, so the feature was there.

00:59:48.909 --> 00:59:56.827
The first draft was very quick.

01:00:00.596 --> 01:00:00.856
Yeah, so it's.

01:00:00.856 --> 01:00:01.858
It was much quicker than I would have done it.

01:00:01.878 --> 01:00:33.260
However, the next follow-ups, asking to refactor something, to change something and so on, resulted in additional time, so the total time from zero to hero appeared to be more or less the same as I estimated it would take me to do it myself. But in the final solution there were many things that I hadn't thought about from the beginning.

01:00:33.260 --> 01:00:39.389
So that was the problem.

01:00:39.389 --> 01:01:00.677
The task asked to connect to an external API, and it actually looked in the documentation of this API and found something that I didn't find by myself when I looked in the documentation, and it implemented these things inside of my API.

01:01:00.797 --> 01:01:09.273
So, something like error log management, with nice features to discover more.

01:01:09.273 --> 01:01:17.902
It results in a more user-friendly flow at the end.

01:01:17.902 --> 01:01:27.355
So I found that these tools really help you to produce a better solution and we actually will not go anywhere.

01:01:27.355 --> 01:01:38.211
We will just make this transition from, you know, just AL programmer to something more managerial.

01:01:38.840 --> 01:01:40.065
Yeah, I think you're correct.

01:01:40.065 --> 01:01:44.469
You'll be more managerial and more architect and function-based.

01:01:44.900 --> 01:01:48.690
So to go back to your experiment.

01:01:49.199 --> 01:01:51.628
You had a task to consume an API.

01:01:51.628 --> 01:02:01.119
The amount of time it took: you didn't want to write any code, so you wanted AI to create the entire code for you, including refactoring.

01:02:01.119 --> 01:02:09.873
So the amount of time that it took for it to do it, you say, was about the same amount of time you thought that it would have taken you to do it.

01:02:10.554 --> 01:02:10.695
Yes.

01:02:10.760 --> 01:02:24.733
But it produced better quality code in a sense, because it had additional user-friendliness and error handling and other features within it that you didn't consider as part of your first estimate.

01:02:24.733 --> 01:02:27.228
That is amazing.

01:02:27.228 --> 01:02:33.465
I think that would be a great test, but I'd like to see that test done differently.

01:02:33.465 --> 01:02:36.125
See, this is a good session, see, I like to see these types of sessions.

01:02:36.125 --> 01:02:45.670
So, if you ever do one, AL developer versus AI, right, you could do it not as an estimate of what you think, but as a live event.

01:02:45.670 --> 01:02:47.360
Find an AL developer.

01:02:47.360 --> 01:02:51.184
I don't know if you can do it live, depending on what it is and how much time it takes, but find an AL developer.

01:02:51.184 --> 01:02:55.672
We'll have to volunteer someone to write something, give them a task.

01:02:55.672 --> 01:03:10.130
You can do it with AI, see how long it takes them and the end result, and see how long it took your AI, again chaining it together with the refactoring and the code completion, to see the reality of who wins.

01:03:10.130 --> 01:03:12.213
See, it's AL versus AI.

01:03:12.213 --> 01:03:12.974
I would call it.

01:03:14.581 --> 01:03:27.273
Yes, and the good news is that two days ago I just got an email from Luke, who is the organizer of the BC Tech Days, that this kind of session was approved.

01:03:27.273 --> 01:03:30.228
So there will be.

01:03:30.228 --> 01:03:39.007
There will be a session at BC Tech Days, and as with all sessions at BC Tech Days, it will be recorded and then published on YouTube.

01:03:39.007 --> 01:04:14.192
So we'll actually do the session with AJ, so he will be the old-school one, doing the classical AL development, and I will be doing the same by just typing. That session alone is worth the price of admission for BC Tech Days, to see that experiment where AL developer AJ, who's?

01:04:15.344 --> 01:04:16.731
So if AJ is in there, you're doing it.

01:04:16.731 --> 01:04:18.059
I don't even want to ask what it's about.

01:04:18.059 --> 01:04:21.570
I'm just saying, if AJ is doing it, I have some ideas of what type of experiment it will be.

01:04:21.570 --> 01:04:23.547
Or do you want some ideas for experiments?

01:04:26.367 --> 01:04:33.528
I'm open to the ideas. I'll have to send you both some ideas for this session, because I think that would be a great session or a great idea.

01:04:33.528 --> 01:04:36.608
The world is changing so fast.

01:04:36.608 --> 01:04:39.387
A little side topic now.

01:04:39.387 --> 01:04:43.346
I'd like to go with you on AI, and I'm trying not to jump around too much.

01:04:43.346 --> 01:04:52.592
Business Central is adding a lot of AI functionality within the application, with the agents and a few other features.

01:04:52.592 --> 01:05:05.322
Where do you see that going within the application itself, outside of development, outside of everything but the future of Business Central and ERP software with AI?

01:05:08.804 --> 01:05:20.989
So Microsoft is working on this now and has released the first agent, but the flow seems to me not quite real.

01:05:20.989 --> 01:05:57.757
I mean, the flow of this agent is that the user gets an email and then, based on this email, the sales agent reads it and generates the sales quote.

01:05:57.757 --> 01:06:01.349
Yeah, but I talked with Microsoft.

01:06:01.349 --> 01:06:03.920
They told me that they did some research and found it.

01:06:03.920 --> 01:06:07.503
So this is a pretty common flow.

01:06:07.503 --> 01:06:10.067
But you know, that's just my opinion on that.

01:06:10.067 --> 01:06:28.894
But the main thing is that the agents are coming to Business Central; I find that, well, this should really be the next step.

01:06:28.894 --> 01:06:51.648
I wouldn't be very optimistic about that from where it is now, because I see that we still don't have a lot of platform support to make it really powerful.

01:06:51.648 --> 01:07:01.967
For example, from what I see, we don't have so-called live queries.

01:07:01.967 --> 01:07:10.817
So in many cases, for an agent to work efficiently with Business Central, it needs data.

01:07:10.817 --> 01:07:20.547
So it needs data to work with and it needs to search for the data autonomously.

01:07:20.547 --> 01:07:28.945
So, based on the user request, it needs to understand how to fulfill the task.

01:07:28.945 --> 01:07:39.427
It should go to the database, search for the data that is required to fulfill this task and then maybe do some action.

01:07:39.427 --> 01:07:41.686
So that's actually what agents do.

01:07:43.447 --> 01:07:57.795
There are many definitions of agents, but I prefer to call them large language models that act in a loop.

01:07:57.795 --> 01:08:12.512
So they understand the query, they understand what next action to produce and then they do something to prepare for this action.

01:08:12.512 --> 01:08:18.510
And then they produce the action, yeah, and this could be a small step.

01:08:18.510 --> 01:08:20.126
Then they start once again.

01:08:20.126 --> 01:08:23.069
So this is the outcome from my previous action.

01:08:23.069 --> 01:08:25.145
I need to start once again.

01:08:25.145 --> 01:08:26.279
What's my next action?

01:08:26.279 --> 01:08:37.605
And the thing with these agents is that we actually don't program them deterministically.

01:08:37.605 --> 01:08:45.643
Yeah, so we can set some guardrails: you can go here, and you know, that's your goal.

01:08:45.643 --> 01:08:47.806
But how to accomplish this goal?
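
A bare-bones illustration of the "large language model acting in a loop" definition used above: plan the next action, execute it, feed the outcome back, and repeat inside a guardrail. The plan_next_step() helper and the two tools are illustrative stand-ins, not any shipping agent framework.

```python
# Minimal agent loop: observe, decide the next action, act, feed the
# outcome back in, stop when the goal is reached or the step budget runs out.
from dataclasses import dataclass

@dataclass
class Step:
    tool: str
    args: dict
    done: bool = False

def plan_next_step(goal: str, history: list) -> Step:
    # Stand-in for the model call: "given the goal and what happened so far,
    # what is the next action, or are we done?"
    raise NotImplementedError("plug in your LLM call here")

TOOLS = {
    "search_data": lambda args: f"rows matching {args}",   # illustrative stubs
    "post_document": lambda args: f"posted {args}",
}

def run_agent(goal: str, max_steps: int = 10) -> list:
    history: list = []
    for _ in range(max_steps):            # guardrail: bounded number of iterations
        step = plan_next_step(goal, history)
        if step.done:                     # the model decided the goal is reached
            break
        outcome = TOOLS[step.tool](step.args)
        history.append((step, outcome))   # the outcome becomes the next observation
    return history
```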


01:08:47.806 --> 01:09:11.194
The agent should decide, and one of the big parts of this decision process is to go to the Business Central data and pull the knowledge that it requires, and I see that queries, for example, are a real solution for that.

01:09:11.194 --> 01:09:20.587
But we can't generate a query on the fly nowadays, like we could with a SQL query.

01:09:21.127 --> 01:09:21.990
I understand.

01:09:24.167 --> 01:09:35.145
Maybe Microsoft can use this internally, because they do have an internal connection to SQL, so they can generate the SQL queries directly against the database.

01:09:35.145 --> 01:09:55.287
But this is once again not secure, because if the user doesn't have permission to go to this table, it shouldn't get this information, and that's why even Microsoft should run all this through the platform layer.

01:09:55.287 --> 01:10:08.492
Yeah, of course, taking into consideration all the permissions, and that's actually what I really asked them to do here.

01:10:09.180 --> 01:10:12.064
So it's almost like a query API, and that query.

01:10:12.064 --> 01:10:19.128
API would honor user permissions, so anything that they have access to would be filtered through the platform.
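
To illustrate the permissions point, here is a sketch of an agent "tool" that reads Business Central data through the platform's API layer with the user's own token, so the platform enforces that user's permissions instead of a direct SQL query bypassing them. The base URL, company id, and entity name in the usage comment are placeholders following the standard OData-style API shape, not a prescription.

```python
# Sketch: query Business Central through the platform API in user context,
# so record- and table-level permissions are applied by the platform.
import requests

def query_bc(base_url: str, company_id: str, entity: str,
             odata_filter: str, user_token: str) -> list:
    response = requests.get(
        f"{base_url}/companies({company_id})/{entity}",
        params={"$filter": odata_filter, "$top": "50"},
        headers={"Authorization": f"Bearer {user_token}"},  # user's token, not a service account
        timeout=30,
    )
    response.raise_for_status()  # 401/403 surfaces a permission problem instead of leaking data
    return response.json().get("value", [])

# e.g. query_bc("https://api.businesscentral.dynamics.com/v2.0/<tenant>/<environment>/api/v2.0",
#               "<company-id>", "salesOrders", "customerNumber eq '10000'", token)
```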


01:10:20.944 --> 01:10:40.841
If this appears, it will open a lot more different scenarios for the agents inside of Business Central, and that's when this power of agents will really be visible, because for now, I think that it's more, well.

01:10:40.841 --> 01:10:42.929
To be honest, I think there's more automation.

01:10:42.929 --> 01:10:58.351
It's an agent, but this is a very deterministic flow and there is very little space in the agent's decisions about where it can go inside of the process.

01:10:58.351 --> 01:11:15.542
And so, yeah, I know that they are also working on the next agent, for purchase invoicing.

01:11:15.542 --> 01:11:35.006
So when the vendor sends you the purchase invoice, maybe also by email, it will grab this email and recognize the invoice, and it will convert it to maybe a general journal or a purchase invoice based on what is in the invoice.

01:11:35.006 --> 01:11:43.484
I think it's more of an agent flow than the first one, but let's see where it goes.

01:11:45.788 --> 01:11:55.554
That's a good point, that it does sound more like a workflow, more like Power Automate, than an actual AI where it requires a little bit of thinking.

01:11:55.554 --> 01:11:57.864
It's what it sounds like.

01:11:57.864 --> 01:12:00.488
I mean sales agent and purchase agent.

01:12:00.488 --> 01:12:02.225
It's very linear.

01:12:02.225 --> 01:12:03.704
Yes, what it's trying to accomplish.

01:12:03.744 --> 01:12:24.515
Yes, because I think the power of agents comes when you say, hey, okay, this is your goal and these are your tools, and you are really free to organize your workflow the way you want and use these tools the way you want to produce the final outcome.

01:12:24.515 --> 01:12:40.722
That's where these reasoning models actually really help, because they can produce a really nice plan and then reflect on the outcome of this plan and maybe do a second iteration, a third iteration.

01:12:40.722 --> 01:12:48.849
That's how these deep search agents also work.

01:12:48.849 --> 01:12:58.931
So they have the user query, they understand the intent and they plan how to answer the query on their own.

01:12:58.931 --> 01:13:05.726
So they can say, hey, this I can go and pull from the knowledge base.

01:13:05.726 --> 01:13:09.086
Or maybe this query is about the code, so I can go to the code knowledge base.

01:13:09.086 --> 01:13:17.743
And feed the answers from there, and many other things.

01:13:17.743 --> 01:13:25.408
Or maybe I want to generate something using the Business Central API, so I can just ask in the same window.
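
A toy version of the "plan, then route" step described for deep search: classify the intent first, then decide whether to hit the documentation knowledge base, the code knowledge base, or live Business Central data. classify_intent() stands in for a model call and the three handlers are stubs, named here only for illustration.

```python
# Toy router: decide which source can answer the question before retrieving.
def classify_intent(question: str) -> str:
    raise NotImplementedError("ask the LLM: docs, code, or live_data?")

def answer(question: str) -> str:
    route = classify_intent(question)
    if route == "code":
        return search_code_knowledge_base(question)   # AL samples, repos
    if route == "live_data":
        return query_business_central(question)       # user-scoped API call
    return search_docs_knowledge_base(question)       # blogs, docs, videos

# Illustrative stubs standing in for real retrieval back ends.
def search_code_knowledge_base(q): return f"[code results for: {q}]"
def search_docs_knowledge_base(q): return f"[docs results for: {q}]"
def query_business_central(q): return f"[live data for: {q}]"
```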


01:13:28.024 --> 01:13:29.659
Yeah, I think that goes back to.

01:13:29.659 --> 01:13:36.746
I know Brad and I had a conversation with somebody about where I think the power comes from when you're using AI.

01:13:36.746 --> 01:13:58.341
What maybe they should have done is a full-stack solution where, if you want to order something, it's going to take a look to see what you have available.

01:13:58.341 --> 01:14:00.725
Do you have enough available?

01:14:00.725 --> 01:14:05.180
Then maybe it would make a suggestion of, like, hey, I can create a purchase order.

01:14:05.180 --> 01:14:09.948
We can probably get this vendor, you know, to send it to us on time. To me.

01:14:09.948 --> 01:14:15.534
That's a better solution in terms of the experience, versus, like, I need you to order this.

01:14:15.534 --> 01:14:28.731
Well, I don't have any, I'll just create a sales order, and then it kind of stops there, and then maybe call in another agent to do the purchase order. If they had painted it as a whole solution, I think it'd get better adoption.

01:14:28.792 --> 01:14:37.804
In my opinion, it shows the power of what AI can really do for an organization.

01:14:37.823 --> 01:15:21.546
Yeah, and I fully understand where they struggle right now, because actually, if we think globally, the agent concept really works using the language models behind the scenes, right? So this is like a combination of different calls to the large language models, orchestrating these calls in the right way, reflecting on the outcomes and so on, but it's still large language models producing the final answer, or the sub-answers internally, and it's not very deterministic, right?

01:15:21.587 --> 01:15:26.627
So all of these things can hallucinate at any level, and.

01:15:26.627 --> 01:15:37.460
But if we implement this in the ERP system, we want this to be trustworthy and we want this to be deterministic.

01:15:37.460 --> 01:15:42.042
So, by design, these two roles actually don't really fit together.

01:15:42.042 --> 01:15:55.672
So we want to build something deterministic with non-deterministic tools, and that's, I think, where the real problem comes from.

01:15:55.672 --> 01:16:03.631
You need to really understand that, hey, this is an AI feature, and we need to accept that it can be not fully trustworthy for now.

01:16:03.631 --> 01:16:06.067
Okay, so that's where we are right now.

01:16:06.067 --> 01:16:26.564
We need to accept that, and then we can do many more experiments and implement many more AI features and see how they really work, instead of trying to build something very deterministic with a direct flow and calling it an agent.

01:16:27.167 --> 01:16:49.085
Yes, and Chris, to your point, I'm hopeful that it will get there one day, and I take it as maybe this is the first step to get there, and hopefully they can get it to work to where the agent has some reasoning and it's all-inclusive and it can do the whole flow.

01:16:49.085 --> 01:17:00.662
Chris, like you had mentioned, with the sales order to the purchase order, to schedule it, to do it, versus just creating a sales order and then having somebody have to go do planning or something else. Maybe it's on purpose.

01:17:00.722 --> 01:17:08.318
It's just prolonging the, uh, abolition of roles in the organization.

01:17:08.318 --> 01:17:13.930
It's like, ah, we'll give you a little bit so you have a little bit of time to enjoy your position until it gets replaced.

01:17:14.551 --> 01:17:15.314
So what are you saying?

01:17:15.314 --> 01:17:17.046
It gives you more time to work on your resume, Chris.

01:17:17.721 --> 01:17:19.426
Is that what you're really trying to say?

01:17:19.447 --> 01:17:19.688
Maybe.

01:17:19.688 --> 01:17:29.011
Well, after talking with Dmitry, I just figured out that my resume is going to get an update tonight or tomorrow, whatever it may be.

01:17:29.011 --> 01:17:31.354
See, he's in the future, he's telling us, right?

01:17:31.394 --> 01:17:32.055
He's in the future.

01:17:32.140 --> 01:17:34.148
He's telling us: put your resume together tonight.

01:17:35.001 --> 01:17:49.694
Yeah, but I think that's the one line that you can add to your resume, and you will be there in the field for the rest of the years, at least for maybe two, three, five years.

01:17:50.280 --> 01:17:52.186
It's the manager of AI agents.

01:17:52.186 --> 01:17:54.462
So there you go.

01:17:54.462 --> 01:17:56.707
Thank you very much. See, my new role.

01:17:56.707 --> 01:17:59.613
I'm the manager of AI agents.

01:17:59.613 --> 01:18:01.181
That's what I want my new title to be.

01:18:01.181 --> 01:18:04.208
I'm going to put that on my email: manager of AI agents.

01:18:04.670 --> 01:18:05.292
No, just put it, update it.

01:18:05.292 --> 01:18:10.126
It says future, you're from the future and that's your role.

01:18:10.126 --> 01:18:12.149
Put it down right now.

01:18:12.328 --> 01:18:14.471
I'm a future manager of AI agents.

01:18:14.471 --> 01:18:15.573
Is that what?

01:18:15.594 --> 01:18:16.595
you're saying? Yeah.

01:18:18.439 --> 01:18:20.122
Maybe I'll do that.

01:18:20.122 --> 01:18:21.042
Well, Mr Dmitry.

01:18:21.042 --> 01:18:29.408
Sir, we appreciate you taking the time to speak with us tomorrow early in the morning and to share information with us about CentralQ and where you're going with it.

01:18:29.448 --> 01:18:31.409
Congratulations on CentralQ turning two.

01:18:31.409 --> 01:18:36.712
I just love saying that, it sounds like CentralQ turns two.

01:18:36.712 --> 01:18:57.627
I don't know about CentralQ turning three, we'll have to come up with another jargon for that, but we do appreciate everything you're doing for the community with CentralQ, and all the other information that you share online as well as at these conferences, and I'm looking forward to seeing the results of this BC Tech Days session that you're doing with AI versus AL.

01:18:58.680 --> 01:19:06.472
I'm looking for scenarios and, by the way, if you're watching this podcast on YouTube, I'm open.

01:19:06.472 --> 01:19:12.390
Just send me your scenarios and we'll try to do this on the stage.

01:19:13.351 --> 01:19:14.453
Oh, that'd be great, that'd be great.

01:19:14.453 --> 01:19:16.916
When is the BC Tech Days conference, Chris?

01:19:16.916 --> 01:19:22.288
We'll have to make sure this gets out far enough in advance, and we'll have to share that suggestions are open.

01:19:23.060 --> 01:19:25.483
Yeah, so I think it's 15, 16 June. Around these days.

01:19:25.483 --> 01:19:32.310
Okay, 15, 16 June. Around these days.

01:19:33.190 --> 01:19:33.470
Okay.

01:19:34.412 --> 01:19:35.212
Plenty of time.

01:19:35.212 --> 01:19:37.615
We'll put it out for sure, yeah, we'll have plenty of time.

01:19:37.676 --> 01:19:38.237
It's in June.

01:19:38.237 --> 01:19:44.952
Mid-June is BC Tech Days, and I'm looking forward to seeing your session.

01:19:44.952 --> 01:19:51.543
If anybody would like to find more information about some of the great things that you're doing, or learn a little bit more about CentralQ.

01:19:51.543 --> 01:19:55.412
Now, Chris, did you know that you can support CentralQ?

01:19:55.412 --> 01:19:59.528
Dmitry does this all on his own, and many people benefit from the use of it.

01:19:59.528 --> 01:20:04.185
So you do also have the opportunity to support CentralQ, so you can do that.

01:20:04.185 --> 01:20:07.024
So where can someone get in contact with you?

01:20:09.067 --> 01:20:14.074
So the first is centralq.ai, that's the free website.

01:20:14.074 --> 01:20:20.034
Then from there you can go to the docs and see the documentation.

01:20:20.034 --> 01:20:26.707
From there you can go to the AppSource app, or you can go to AppSource and find CentralQ there.

01:20:26.707 --> 01:20:43.680
Me, I'm on LinkedIn, Dmitry Katson, almost always there online, especially at 6am in the morning, always for you.

01:20:44.783 --> 01:20:48.733
I know, great. Well, next time we'll do it at 5am.

01:20:51.570 --> 01:20:53.277
Well, next time we'll do it at 5 am.

01:20:53.277 --> 01:20:54.940
Now.

01:20:54.940 --> 01:20:55.862
Do you really want?

01:20:55.881 --> 01:20:57.484
to talk about that.

01:20:57.484 --> 01:20:59.225
We'll talk about that later, we'll see.

01:20:59.225 --> 01:20:59.747
We'll have you on.

01:20:59.747 --> 01:21:06.774
I still hope, and I'm holding out, that you do have the opportunity to make it to the United States for the upcoming Directions conference.

01:21:06.774 --> 01:21:15.427
I know it's really close and I know it's difficult logistically to travel from the future back to the present on short notice, but I definitely would.

01:21:15.427 --> 01:21:25.507
If you do attend, just shoot me a message, because I definitely will make sure that I look out for you, and Chris and I would like to hear more about the future with you while we're in Las Vegas.

01:21:25.507 --> 01:21:37.944
Thank you again for all that you do, and I look forward to speaking to you again soon.

01:21:37.944 --> 01:21:38.265
Ciao, ciao, ciao.

01:21:38.265 --> 01:21:39.109
Thanks for having me, bye-bye, thank you, bye.

01:21:39.109 --> 01:21:41.779
Thank you, Chris, for your time for another episode of In the Dynamics Corner Chair, and thank you to our guests for participating.

01:21:42.239 --> 01:21:43.743
Thank you, Brad, for your time.

01:21:43.743 --> 01:21:47.251
It is a wonderful episode of Dynamics Corner Chair.

01:21:47.251 --> 01:21:50.708
I would also like to thank our guests for joining us.

01:21:50.708 --> 01:21:53.729
Thank you to all of our listeners tuning in as well.

01:21:53.729 --> 01:22:08.247
You can find Brad at dvlprlife.com, that is D-V-L-P-R-L-I-F-E dot com, and you can interact with him via Twitter, D-V-L-P-R-L-I-F-E.

01:22:08.247 --> 01:22:21.667
You can also find me at matalino.io, that is M-A-T-A-L-I-N-O dot I-O, and my Twitter handle is matalino16.

01:22:21.667 --> 01:22:25.293
And you can see those links down below in the show notes.

01:22:25.293 --> 01:22:26.684
Again, thank you everyone.

01:22:26.684 --> 01:22:28.229
Thank you and take care.

Dmitry Katson

💻 Microsoft AI & Business Central MVP 👨‍💼 Architect, Developer, and Team Leader 🌐 Creator of CentralQ.ai