WEBVTT
00:00:00.881 --> 00:00:03.770
Welcome everyone to another episode of Dynamics Corner.
00:00:03.770 --> 00:00:06.588
It's someone's birthday and someone's turning two.
00:00:06.588 --> 00:00:09.628
I don't know who, because we're rhyming.
00:00:09.628 --> 00:00:10.984
I'm your co-host, Chris.
00:00:12.223 --> 00:00:12.885
And this is Brad.
00:00:12.885 --> 00:00:18.794
This episode was recorded on March 5th and March 6th, 2025.
00:00:18.794 --> 00:00:21.868
Chris, Chris, Chris, I liked your rhyme.
00:00:21.868 --> 00:00:24.003
Someone is turning two.
00:00:24.003 --> 00:00:27.672
Are they blue, I wonder who?
00:00:27.672 --> 00:00:39.073
With us today, we had the opportunity to learn who is turning two, as well as a wonderful conversation about the place for AI within Business Central.
00:00:39.073 --> 00:00:43.781
With us today, we had the opportunity to speak with Dmitry Katson about CentralQ turning two.
00:00:43.781 --> 00:01:03.274
Good morning sir, hey guys, how are you doing?
00:01:03.293 --> 00:01:06.596
Good morning.
00:01:08.177 --> 00:01:10.117
Doing great, good, good.
00:01:10.117 --> 00:01:11.364
You look like you just woke up.
00:01:13.641 --> 00:01:14.405
Yes, thank you.
00:01:17.561 --> 00:01:23.253
And I've been waiting a very long time to say happy birthday to you.
00:01:23.253 --> 00:01:31.685
Well, not to you, but to your child. Yeah, which one of them? CentralQ turns two.
00:01:32.045 --> 00:01:42.766
I've been waiting to say that for months now. Yes, yes, thank you very much. It's coming, the birthday is coming. When is the exact birthday?
00:01:43.266 --> 00:01:48.555
I know we spoke with you shortly after it was out some years ago.
00:01:48.840 --> 00:02:00.813
Well, it seems like just yesterday. Yeah, I need to double-check when I first tweeted that, but it was the beginning of March, maybe the seventh or something. Oh, wow, so we're right.
00:02:00.852 --> 00:02:01.614
We are right there.
00:02:01.614 --> 00:02:03.322
We scheduled this on purpose.
00:02:03.322 --> 00:02:08.532
Yes, yes, yes, to be there at the birthday of your child.
00:02:08.532 --> 00:02:20.810
I call it that, and it's great. And before we talk about your child and many other things that are around it, I like calling it your child because I think it's wonderful.
00:02:20.810 --> 00:02:23.246
Can you tell us a little bit about yourself?
00:02:26.217 --> 00:02:26.800
Yeah, so I'm Dmitry.
00:02:26.800 --> 00:02:31.290
I've been in the Business Central world for like 20 years.
00:02:31.290 --> 00:02:35.247
I'm passionate about Business Central and artificial intelligence.
00:02:35.247 --> 00:02:43.719
I started my journey in ML, or machine learning, or AI, whatever you call it nowadays.
00:02:43.719 --> 00:02:47.647
I started in 2016.
00:02:48.468 --> 00:03:05.331
So it was almost like eight years ago, right, when I headed the AI department at a big partner, and I didn't know anything about that, so that's where my journey started.
00:03:05.331 --> 00:03:16.312
And then I was passionate about combining AI with Business Central for years, and I think now my mission is accomplished.
00:03:17.259 --> 00:03:19.064
Your mission is accomplished.
00:03:19.085 --> 00:03:19.948
Mission accomplished.
00:03:20.950 --> 00:03:23.460
That's great and you've been doing a lot of great things.
00:03:24.411 --> 00:03:29.433
You've been doing a lot of speaking sessions and presentations, and I see you all over the place.
00:03:29.433 --> 00:03:39.665
You're very busy not only with Business Central and CentralQ, but sometimes you seem like a world traveler to me. Yeah, well.
00:03:39.847 --> 00:03:47.608
There are two seasons when I travel, so it's definitely Directions.
00:03:47.608 --> 00:03:55.225
So usually it's Directions Asia, as it's not far away from me, just one hour of flight.
00:03:55.225 --> 00:03:58.629
That's nice, sometimes using bike.
00:04:01.021 --> 00:04:01.805
That's even better.
00:04:01.805 --> 00:04:02.426
That's good.
00:04:03.842 --> 00:04:05.347
I think I saw a picture of you last year.
00:04:05.388 --> 00:04:15.102
You took your motorbike. That's right, yeah. But to be honest, yes, it's still 800 kilometers, so we prefer to use the bike to go to the airport.
00:04:15.983 --> 00:04:19.086
Yeah, it would be a long ride.
00:04:19.187 --> 00:04:28.437
A long ride, yeah. And then BC TechDays and Directions EMEA.
00:04:28.437 --> 00:04:38.468
So those are the three conferences that I usually attend as a speaker, yeah, and that's where we can meet.
00:04:38.468 --> 00:04:55.372
I really hope to go this year to Directions North America, but it seems that my visa is not ready yet, so I don't think that they will issue it on time.
00:04:56.040 --> 00:05:03.673
I'm hoping that they issue it on time, because I would enjoy meeting you in person in Las Vegas this year.
00:05:04.601 --> 00:05:22.163
I know it's a long trip for you too. Yes, but it's already been two months of visa processing and, you know, waiting. Okay, you've got like a little over three weeks left, four weeks left, so you still have time. You just have to.
00:05:22.605 --> 00:05:24.170
When's your cutoff day?
00:05:24.170 --> 00:05:25.115
Do you have a cut-off day?
00:05:25.115 --> 00:05:29.069
Whereas if you don't have a visa by a certain day, then you definitely won't be attending.
00:05:31.521 --> 00:05:32.987
I think that it's already passed.
00:05:34.627 --> 00:05:40.492
Oh man, we've got to make sure that you make it next year, then.
00:05:40.819 --> 00:05:42.206
I'm hopeful to run into you somewhere then.
00:05:42.206 --> 00:05:52.033
So you've been doing a lot of great things and for those that do not know about CentralQ, can you tell us a little bit about CentralQ briefly?
00:05:52.033 --> 00:05:54.524
And then I have a whole list of questions for you.
00:05:55.365 --> 00:06:27.283
Right, yes. So I had been doing different machine learning things before, and then I was speaking at conferences about how we can implement machine learning in Business Central. I remember that the first time I talked about this was in 2018, I think in Harvard, at Directions EMEA, and I was the only weird person that talked about this at the conference. Even Microsoft didn't talk about that.
00:06:27.283 --> 00:06:42.204
And then, at Directions EMEA last year, I found that, in like 60-70% of all the content, everyone speaks about Copilot and AI.
00:06:42.204 --> 00:06:45.093
So that's where we are.
00:06:45.093 --> 00:06:47.961
That's where I think that my mission was accomplished.
00:06:47.961 --> 00:07:05.314
But let me go back like two years, a little bit more, when ChatGPT first appeared, right, and we were all mind-blown by the power of large language models.
00:07:05.314 --> 00:07:17.211
We all saw them for the first time, and what I did, actually, I think many people did: I thought, hey, great, now I can use it to help myself with the business.
00:07:19.100 --> 00:07:27.528
And just after some quick queries, I figured out that, no, that doesn't work.
00:07:27.528 --> 00:07:43.903
It just suggested features that don't exist, suggested code that doesn't compile, suggested routes that, you know, it just hallucinated a lot.
00:07:43.903 --> 00:08:03.365
But I still thought that, yeah, that could be a good framework to build around, to help our community use it for Business Central problems.
00:08:03.365 --> 00:08:12.149
Yeah, the problem with Business Central is that it's still very, you know, narrow compared to the whole internet.
00:08:12.149 --> 00:08:15.254
Yes, so our AL development is,
00:08:16.040 --> 00:08:19.024
you know, several GitHub repos,
00:08:19.024 --> 00:08:29.622
compared to the millions of, you know, other repos. Our documentation for Business Central is still small compared to all other products.
00:08:29.622 --> 00:08:36.034
So probably at that point of time it was GPT-3.5.
00:08:36.034 --> 00:08:39.104
It maybe knew something.
00:08:39.104 --> 00:08:48.043
But you know, the main goal of the large language models is to answer all the questions, no matter if it's correct or not.
00:08:48.043 --> 00:08:52.532
So it would just imagine the answer.
00:08:53.894 --> 00:09:03.792
However, I found, and in that period of time it was very hard, that there was still a way we could make it better.
00:09:03.792 --> 00:09:47.447
So if we just make a big knowledge base about everything that we know about Business Central in one place, and then not just ask the large language model directly, but first query our knowledge base, find the potential answers, like some text that will potentially answer the user question, and then send this to the language model together with the user question, this increases the correct answers a lot.
00:09:47.447 --> 00:09:57.184
So that's what we call fact grounding, yeah, or knowledge grounding.
00:09:57.184 --> 00:10:05.296
So that's where the idea was born: hey, I think that will work.
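
NOTE
A minimal Python sketch of the fact-grounding (knowledge-grounding) flow described above: query a knowledge base first, then send the retrieved passages to the language model together with the user question. The function names and prompt are illustrative placeholders, not CentralQ's actual implementation.
# Fact-grounding (retrieval-augmented) sketch; the functions are placeholders, not CentralQ's code.
def search_knowledge_base(question: str, top_k: int = 5) -> list[str]:
    """Return the top-k passages from a Business Central knowledge base.
    In a real system this would be a keyword or vector search over the index."""
    return ["A passage that potentially answers the user's question..."] * top_k
def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return "An answer grounded in the provided passages."
def grounded_answer(question: str) -> str:
    passages = search_knowledge_base(question)
    # Send the retrieved facts together with the user question,
    # instead of asking the model directly.
    prompt = (
        "Answer the Business Central question using ONLY the sources below.\n"
        "If the sources do not contain the answer, say you don't know.\n\n"
        "Sources:\n" + "\n---\n".join(passages)
        + f"\n\nQuestion: {question}\nAnswer:"
    )
    return ask_llm(prompt)
print(grounded_answer("How do I post a sales invoice in Business Central?"))
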
00:10:06.096 --> 00:10:19.374
So the next problem was that I needed to find a way to build it, because there was no exact documentation, there was nothing.
00:10:19.374 --> 00:10:24.828
So actually my only source of knowledge at that point of time was Twitter.
00:10:24.828 --> 00:10:39.652
So I followed some guys that were also doing some experimenting, chatted with them, and so I built a knowledge base.
00:10:39.652 --> 00:11:04.164
I took first the blogs and Microsoft Learn, then at some point I added YouTube, then Twitter also as a source of knowledge. And yeah, it took like two months of building, I remember, and building CentralQ was fun.
00:11:05.125 --> 00:11:29.076
So CentralQ in essence is a large language model that's built, or grounded, or has its knowledge based upon popular blogs from community members of Business Central, from the development point of view as well as the functional point of view, the Microsoft Learn documents, which keep getting better and better, Twitter, and YouTube videos.
00:11:29.076 --> 00:11:40.583
So anybody who uses CentralQ, similar to ChatGPT you mentioned, which a lot of people use, will pull the knowledge from those sources to return the result.
00:11:41.424 --> 00:11:54.625
Yes, and also the problem with just a pure large language model was, and still is, that it's trained and has a knowledge cut-off date.
00:11:54.625 --> 00:12:02.332
So usually, for the OpenAI models, it's one year before.
00:12:02.332 --> 00:12:25.264
So the current models, I think they have a cut-off date like 2024, somewhere in maybe autumn, maybe summer. But as we ask about Business Central, this area is growing fast.
00:12:25.303 --> 00:12:43.105
New features appear every day. Oh yes. No, like, oh yeah, not every day, okay, but we have waves, and they appear much quicker than the large models are trained. It does seem like every day, by the way. Yes, every month we have new features.
00:12:44.086 --> 00:12:47.895
Yes, exactly. Every month we have new features, so it's just like every day is a holiday.
00:12:47.916 --> 00:13:07.846
I guess you could say. Yeah, so this was the second problem that I wanted to solve: CentralQ doesn't just have this knowledge base that is trained and used, it updates automatically every day.
00:13:07.866 --> 00:13:30.399
So we search the web for new information regarding Business Central and update this knowledge base. And you know, it's very exciting to see that, for example, when Microsoft releases the launch videos before the wave, yeah, they are published on YouTube.
00:13:30.399 --> 00:13:36.498
So the next morning, CentralQ knows everything from all the videos.
00:13:36.498 --> 00:13:42.370
So you can just go and ask what the new features are, how they work.
00:13:42.370 --> 00:13:50.288
So the tool answers based on just what was just published.
00:13:50.288 --> 00:13:52.792
I think that's very useful.
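
NOTE
A minimal sketch of the kind of daily refresh described above: each day, fetch newly published items from the configured sources and add them to the knowledge base index. The helper functions and source names are hypothetical placeholders, not CentralQ's real pipeline.
# Daily knowledge-base refresh sketch; fetch_new_items() and index_document() are hypothetical.
import datetime
def fetch_new_items(source: str, since: datetime.date) -> list[dict]:
    """Pretend to return posts or videos published after `since` for one source."""
    return [{"source": source, "url": f"https://example.com/{source}/new-post", "text": "New wave feature explained..."}]
def index_document(doc: dict) -> None:
    """Pretend to chunk, embed and store one document in the search index."""
    print(f"Indexed {doc['url']} from {doc['source']}")
def daily_refresh(sources: list[str]) -> None:
    since = datetime.date.today() - datetime.timedelta(days=1)
    for source in sources:
        for doc in fetch_new_items(source, since):
            index_document(doc)
daily_refresh(["blogs", "microsoft-learn", "youtube", "twitter"])  # e.g. run once per day on a schedule
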
00:13:53.352 --> 00:14:01.200
I think it's extremely useful because, as you had mentioned, there aren't a lot of sources or a collection, even with those other language models.
00:14:01.200 --> 00:14:03.230
Because with Business Central, there are a large number of users using the application.
00:14:05.660 --> 00:14:14.128
We have a lot of members in the community, but it's still small compared to other languages and other pieces of information on the internet.
00:14:14.128 --> 00:14:23.523
So it's a great tool for anybody that uses Business Central, and it's not just development and it's not just functional, it's a combination of both.
00:14:23.523 --> 00:14:30.849
So, whether you're a developer, a user or somebody working to consult others with Business Central, it's a good tool to have.
00:14:32.395 --> 00:14:33.259
Yes exactly.
00:14:33.600 --> 00:14:46.912
And the second thing that I thought should be really mandatory, and it has now become a standard in all these Copilot things, is to reference the source.
00:14:46.912 --> 00:15:04.678
So in the pure ChatGPT at that period of time, you got the answer, but, you know, you don't know if it's correct or not, so you need to double-check that, and there were no sources where you could double-check it.
00:15:04.678 --> 00:15:17.842
So that was my initial design from the beginning: hey, you not only need to get the answer but also the links to the sources where this answer was pulled from.
00:15:23.267 --> 00:15:27.669
And I found this to also be a very widely used flow.
00:15:27.669 --> 00:15:38.136
When you ask a question in CentralQ, it gives you the answer, and then if you want to go deeper, you just click on the link.
00:15:38.136 --> 00:15:42.221
It opens the blog, so there is more detailed information.
00:15:42.221 --> 00:15:44.163
You can just read it.
00:15:44.163 --> 00:15:57.677
And I found that around, I think, 30 or 40% of all redirects to my website are now coming from CentralQ, which is also interesting.
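
NOTE
A small sketch of the idea of returning source links alongside the answer, so users can click through and verify it or read deeper. The data structures and the example source are illustrative assumptions, not CentralQ's actual response format.
# Sketch of an answer that carries its source links; the shape is illustrative only.
from dataclasses import dataclass
@dataclass
class Source:
    title: str
    url: str
@dataclass
class GroundedAnswer:
    text: str
    sources: list[Source]  # links the user can click to verify or read deeper
def answer_with_citations(question: str) -> GroundedAnswer:
    # In a real system the retrieval step would supply the sources actually used.
    sources = [Source("Example blog post on the topic", "https://example.com/blog/post")]
    text = "A short answer grounded in the sources listed below."
    return GroundedAnswer(text, sources)
result = answer_with_citations("How do I post a sales invoice?")
print(result.text)
for source in result.sources:
    print(f"- {source.title}: {source.url}")
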
00:15:59.142 --> 00:16:00.145
Well, I like that.
00:16:00.145 --> 00:16:21.783
I do like that because, as we all hear, if you haven't heard of AI, then I don't know where you are, and if you haven't heard AI within the last hour, I don't know where you are either, because I don't think you can go an hour without hearing AI, Copilot, large language model, machine learning, no matter where you are on the planet. You could be using it too.
00:16:21.842 --> 00:16:40.711
You just don't know. Maybe, maybe. The ability for users of tools such as this to validate the information, because everyone talks about how this hallucinates, hallucinations, where, as you had mentioned, large language models will always give you an answer.
00:16:40.711 --> 00:16:41.961
They never return
00:16:41.961 --> 00:16:44.365
"I don't know," so it could be an incorrect answer.
00:16:44.365 --> 00:17:01.371
So, knowing that individuals are utilizing or following those links to learn more about the answers or validate the answers, it's nice to hear, instead of everybody just saying give me the answer and it creating something that may or may not even exist, and then people spread that information.
00:17:01.371 --> 00:17:19.997
So, with CentralQ, when we started talking about planning this, because we planned this a long time ago with CentralQ turning two, you said you may have a lot of new things in store for CentralQ.
00:17:20.240 --> 00:17:20.421
Yeah.
00:17:20.421 --> 00:17:50.221
So I hoped that I would release the second version before we talked, but it's still in development mode because, well, there are some other projects that I'm doing. Oh, I understand.
00:17:51.905 --> 00:17:54.413
Yeah, but also, I think that the most important reason for me was to postpone it a little bit.
00:17:54.413 --> 00:18:29.402
Many new things have appeared in the AI world since, you know, my first planning, and the most important of them: there is now a new type of models, which are called reasoning models, so they don't give you the answer directly, they think about the answer first and then produce the answer.
00:18:29.402 --> 00:18:35.953
That's a little bit different type of model that I also want to implement in CentralQ.
00:18:35.953 --> 00:18:47.507
And also, the other thing is the concept of agents, which you also, I think, hear a lot about.
00:18:47.507 --> 00:19:08.541
And I started experimenting with agents, I think, in September last year, August, September, and the first agents that I showed were at Directions EMEA, and I was really mind-blown by this concept and how it works.
00:19:08.541 --> 00:19:31.185
So the example that I showed at Directions EMEA was that I created a team of agents. Yeah, so there was a team of agents, and the goal was to ask any question in natural language and it would convert it to API
00:19:31.185 --> 00:19:32.669
calls to Business Central,
00:19:32.669 --> 00:20:03.046
do the calls to Business Central, grab the data and provide the answer to the user. And the problem with that, if I do it the classical way, is that in many cases, if I just ask in a simple call to the large language model, hey, take this query and convert it to the API, this API in most of the cases will not work.
00:20:03.046 --> 00:20:24.588
But if I make a team of agents, there will be one agent responsible for generating the API call, another agent responsible for calling this API, and another agent responsible for providing the final answer, and they actually communicate with each other.
00:20:26.184 --> 00:20:28.667
So the first one generated the API, the second one called it and it didn't work.
00:20:28.667 --> 00:20:34.487
It returned back to the first one and said hey, this didn't work, so you need to do this job better.
00:20:34.487 --> 00:20:41.380
It generated something and once again sent it to the other agent.
00:20:41.380 --> 00:20:44.002
The other agent once again said, hey, this didn't work.
00:20:44.002 --> 00:20:54.488
So the first agent actually went to the knowledge base that I had also connected, and searched for the information.
00:20:54.488 --> 00:20:58.909
Actually, I connected it to Jeremy's book, the whole book about the API.
00:20:58.909 --> 00:21:10.828
So it went, read the book, found the exact endpoint that would potentially work, and then generated a good API call.
00:21:10.828 --> 00:21:13.528
The second agent executed this API.
00:21:13.528 --> 00:21:14.109
That worked.
00:21:14.109 --> 00:21:17.786
The other agent produced the answer and it was like online.
00:21:17.786 --> 00:21:20.107
You can see their internal communication.
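
NOTE
A simplified Python sketch of the agent loop described above: one agent generates a Business Central API call, a second executes it and reports errors back, and the generator can consult a knowledge base (here a stand-in for the API book) before retrying. All names, endpoints and the retry logic are illustrative, not the demo's actual code.
# Sketch of the multi-agent loop; every function is a placeholder.
from typing import Optional, Tuple
def generate_api_call(question: str, feedback: Optional[str], knowledge: Optional[str]) -> str:
    """Agent 1: turn a natural-language question into a Business Central API call.
    On a retry it sees the error feedback and any knowledge-base text."""
    if feedback is None:
        return "GET /api/v2.0/salesInvoice"  # first attempt: a guess that will fail
    return "GET /api/v2.0/companies({id})/salesInvoices"  # corrected after reading the docs
def execute_api_call(call: str) -> Tuple[bool, str]:
    """Agent 2: execute the call and report success or the error text."""
    ok = "companies(" in call
    return ok, ("200 OK" if ok else "404 Not Found: unknown endpoint")
def lookup_api_docs(question: str) -> str:
    """Knowledge-base lookup: pretend to read the API documentation."""
    return "Sales invoices are exposed under companies({id})/salesInvoices."
def answer_question(question: str, max_rounds: int = 3) -> str:
    feedback = knowledge = None
    for _ in range(max_rounds):
        call = generate_api_call(question, feedback, knowledge)
        ok, result = execute_api_call(call)
        if ok:
            return f"Agent 3 summarises the data returned by: {call}"
        feedback = result                      # tell agent 1 what went wrong
        knowledge = lookup_api_docs(question)  # let it consult the docs
    return "Could not build a working API call."
print(answer_question("Show me last month's sales invoices"))
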
00:21:22.161 --> 00:21:24.750
That is all amazing to me.
00:21:24.750 --> 00:21:29.320
It's the whole agentification.
00:21:29.320 --> 00:21:38.587
We talk about this a lot now because everybody's in it, but it's almost like having a staff that's working for you, and each one of them does a different task.
00:21:38.848 --> 00:21:40.132
So you have two features coming in.
00:21:40.132 --> 00:21:43.311
One is the reasoning right, so it's going to reason itself.
00:21:43.311 --> 00:21:45.681
It sounds like, yes, it's a kind of new feature.
00:21:45.681 --> 00:21:50.171
And in the second one, you're almost adding an agent coordinator.
00:21:50.171 --> 00:22:01.632
It sounds like I just want to talk to this one thing, and then it's going to pull in whatever agent I need to accomplish this task. Yes, so that's actually what I'm thinking of.
00:22:02.212 --> 00:22:08.946
Because there are simple questions,
00:22:08.946 --> 00:22:11.098
like, how does this feature work?
00:22:11.098 --> 00:22:19.087
It will go to my knowledge base, find this feature and produce the answer.
00:22:19.087 --> 00:22:21.522
That's how this works nowadays.
00:22:21.522 --> 00:22:48.145
But let's say you want to ask something like: hey, please find me the apps on AppSource that do this, compare them by something, produce an output table, maybe with some feedback from the users, and suggest the best one I can use.
00:22:48.145 --> 00:22:58.565
It's like a multi-step process, and this currently will not work using the current version of CentralQ.
00:22:58.565 --> 00:23:03.785
It will work at some point, but the answer will be limited.
00:23:03.785 --> 00:23:19.289
So I want to now serve more advanced queries with CentralQ, which I call CentralQ 2.0, which I'm working on.
00:23:19.289 --> 00:23:27.641
So that's why CentralQ turns 2, not only in years, in age, but also in version.
00:23:27.641 --> 00:23:44.519
But, yeah, I want it to be agentic, I want it to use reasoning models, and also the new thing that appears in many AI areas nowadays.
00:23:44.539 --> 00:23:47.030
It's called deep search or also deep research.
00:23:49.721 --> 00:24:17.388
So, because now most of these, the ChatGPT, the Perplexity, other copilots today, in a simple mode they're using like a maximum of 10 different sources, depending, because that's actually usually the limitation of one call, you know, to the large language model. But with a deep search.
00:24:17.848 --> 00:24:19.057
It's also multi-step.
00:24:19.057 --> 00:24:21.442
So you can ask a complex query.
00:24:21.442 --> 00:24:24.617
It will break down this query into multiple queries.
00:24:24.617 --> 00:24:32.317
It will search them one by one, then find maybe 50-70 different sources.
00:24:32.317 --> 00:24:43.762
It will understand which sources it should go and read, depending on the different evaluations.
00:24:43.762 --> 00:24:50.851
It will go read, it will find the trusted sources and then produce the answer.
00:24:50.851 --> 00:24:54.565
So usually this process takes longer.
00:24:54.565 --> 00:25:07.343
Yeah, so, because a simple question-answer in CentralQ takes about 10 seconds to the first token.
00:25:07.343 --> 00:25:14.155
The deep search, according to my experiments, nowadays it's around one minute.
00:25:14.155 --> 00:25:27.265
So it's one minute, one minute and a half, but it will go really deep, find more information, and produce a more advanced answer.
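
NOTE
A rough sketch of the deep-research flow described above: break the question into sub-queries, search each, select the sources worth reading, read them, and synthesize one answer. Every helper here is a hypothetical placeholder for a real search, ranking and summarization step.
# Deep-research sketch; every helper is a hypothetical placeholder.
def plan_subqueries(question: str) -> list[str]:
    """Break one complex question into several smaller search queries."""
    return [f"{question} feature comparison", f"{question} user feedback"]
def web_search(query: str) -> list[str]:
    """Pretend to return candidate source URLs for one sub-query."""
    return [f"https://example.com/result/{abs(hash(query)) % 100}"]
def select_trusted(urls: list[str], limit: int = 10) -> list[str]:
    """Evaluation step: keep only the sources worth reading."""
    return urls[:limit]
def read_source(url: str) -> str:
    """Pretend to fetch and extract the text of one source."""
    return f"Text extracted from {url}"
def synthesize(question: str, notes: list[str]) -> str:
    """Pretend to have a model write one answer from all the notes."""
    return f"Answer to '{question}' built from {len(notes)} sources."
def deep_research(question: str) -> str:
    candidates: list[str] = []
    for sub in plan_subqueries(question):  # multi-step: one web search per sub-query
        candidates += web_search(sub)
    notes = [read_source(url) for url in select_trusted(candidates)]
    return synthesize(question, notes)     # slower than a single call, but goes much deeper
print(deep_research("Which AppSource apps handle advanced warehousing?"))
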
00:25:27.906 --> 00:25:52.111
And so, yeah, those are three things that I want to combine together, and it's not very, you know, obvious how to do this. It sounds logical, it sounds wonderful, but how does a large language model, or how does the deep research, know which source to read based upon the content?
00:25:52.111 --> 00:25:54.903
And that goes back to the reasoning.
00:25:54.903 --> 00:26:00.028
I mean, I know how the human mind works with reasoning, reasoning based upon history and understanding.
00:26:00.028 --> 00:26:11.651
I still have difficulty understanding how these language models really put this information together.
00:26:11.974 --> 00:26:20.018
It's, to me, I mean, mind-blowing. Everything you said sounds great.
00:26:20.218 --> 00:26:29.465
And if I had 10 people sitting in the room that were humans working with me, I could say, okay, let's go through these sources, find the ones that are relevant for the question.
00:26:29.465 --> 00:26:35.141
Okay, let's take the pieces back and put them together, because you know that humans have reasoning in how the mind thinks.
00:26:35.141 --> 00:26:47.146
But getting a computer to do this, or getting a piece of software to do this, which is in essence what it is, right? It is software, if I'm correct.
00:26:49.198 --> 00:26:49.419
Hold on.
00:26:49.419 --> 00:26:53.339
Can I recommend the fourth one as a wish?
00:26:53.339 --> 00:27:02.559
Maybe, maybe text-to-audio or audio-to-text, that'd be really cool to add, or someone to just have a conversation with, that would be awesome to do.
00:27:02.559 --> 00:27:11.111
I'm not trying to add more work for you, but yeah, so actually, audio-to-text is a great way.
00:27:13.277 --> 00:27:26.458
I'm personally using this with external software because I know that maybe in Windows it's already implemented by default.
00:27:26.458 --> 00:27:27.121
I'm using Mac.
00:27:27.121 --> 00:27:33.538
There is no such feature, but I'm using, you know, let me, what's it called?
00:27:33.538 --> 00:27:35.977
It is called Flow.
00:27:35.977 --> 00:27:38.703
Yeah, so this software is called Flow.
00:27:38.703 --> 00:27:46.288
You can just talk to it and it will automatically transcribe, and then use it in the query.
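
NOTE
A small sketch of the audio-to-text wish: transcribe the spoken question first, then pass the text into the normal question-answering flow. Both functions are placeholders; transcribe() stands in for any speech-to-text service (a local Whisper model, a cloud API, or an OS dictation tool like Flow).
# Audio-to-text front end sketch; all functions are hypothetical placeholders.
from pathlib import Path
def transcribe(audio_file: Path) -> str:
    """Placeholder: return the spoken question as text."""
    return "How do I set up dimensions in Business Central?"
def ask_centralq(question: str) -> str:
    """Placeholder for the existing text question-answering flow."""
    return f"A grounded answer for: {question}"
def ask_by_voice(audio_file: Path) -> str:
    question = transcribe(audio_file)  # speech -> text
    return ask_centralq(question)      # then the normal grounded Q&A
print(ask_by_voice(Path("question.m4a")))
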
00:27:46.587 --> 00:28:24.901
Yeah, but I would also want to add, okay, the fifth feature to that is multimodal, multimodal support, which means that right now I'm pulling just text from the sources. So from the blogs, it's just text; from the YouTube videos, it's a transcript. And in many cases it's not enough.
00:28:24.901 --> 00:28:28.729
Especially in the blogs, I found that very often people just paste the screenshots inside of the blog.
00:28:28.729 --> 00:28:32.705
Yeah, so they don't describe these screenshots.
00:28:32.705 --> 00:28:35.304
That's how this feature works.
00:28:35.304 --> 00:28:38.762
And then there is an image with different arrows.
00:28:39.263 --> 00:28:47.583
Yes, there is, and I actually don't get this information now, which is very important information.
00:28:50.769 --> 00:28:53.162
So I want to grab this information as well.
00:28:53.162 --> 00:28:58.641
But also, that's the back-end, so that's how to improve my knowledge base.
00:28:58.641 --> 00:29:17.464
On the other side, on the user side, I want you to really be able to just copy-paste the screenshot and send it directly to CentralQ and ask about, you know, the error, for example. This really will help to improve the answers.
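
NOTE
A small sketch of the multimodal idea on the user side: extract the text from a pasted screenshot (for example an error message) and send it to the question-answering flow together with the user's question. The extraction and Q&A functions are placeholders, not CentralQ's implementation.
# Multimodal sketch: ask about a pasted screenshot. extract_text_from_image()
# stands in for OCR or a vision-capable model; ask_centralq() for the existing Q&A flow.
from pathlib import Path
def extract_text_from_image(screenshot: Path) -> str:
    """Placeholder: pull the visible text (e.g. an error message) out of the image."""
    return "Error: The Customer No. field must have a value."
def ask_centralq(question: str) -> str:
    """Placeholder for the existing grounded question-answering flow."""
    return f"A grounded answer for: {question}"
def ask_about_screenshot(screenshot: Path, question: str) -> str:
    screenshot_text = extract_text_from_image(screenshot)
    # Combine what the user typed with what the screenshot shows.
    return ask_centralq(f"{question}\nScreenshot text: {screenshot_text}")
print(ask_about_screenshot(Path("error.png"), "Why do I get this error when posting?"))
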
00:29:17.464 --> 00:29:23.824
So, yeah, those are the five pillars that I'm working on right now.
00:29:23.824 --> 00:29:32.642
And also, yeah, so that's the area that I'm focusing on right now.
00:29:33.596 --> 00:29:36.401
That's a lot. And for you to do this,
00:29:36.401 --> 00:29:39.821
you're doing this all on your own now, correct, and in your free time?
00:29:40.684 --> 00:29:43.032
Yes, when I say free time, it's...
00:29:43.134 --> 00:29:45.663
You still work with Business Central.
00:29:45.663 --> 00:29:47.342
You do all the stuff that we talked about.
00:29:47.342 --> 00:29:48.921
So when do you sleep?
00:29:51.234 --> 00:29:58.229
You see that I'm already awake, so it's 6am here.
00:29:58.229 --> 00:30:00.080
Yes, yes yes.
00:30:00.080 --> 00:30:02.896
Yes, once again thank you.
00:30:06.584 --> 00:30:08.988
My day starts very early.
00:30:09.048 --> 00:30:12.713
I have more time to work on CentralQ after that.
00:30:12.733 --> 00:30:13.336
No, that's good.
00:30:13.336 --> 00:30:23.587
That's why we said we could do this. But, as we talked about last time, you're in the future for us, so it's six in the morning, or zero six hundred, where you are.
00:30:23.969 --> 00:30:31.737
Thursday, tomorrow for us. Tomorrow, yeah. So I like to talk with you because I get to know what will happen tomorrow.
00:30:31.737 --> 00:30:46.648
You're doing a lot of great things with CentralQ, and another thing that has come out, again with these deep research models, is local large language models.
00:30:46.648 --> 00:30:58.382
Do you see a place for that with CentralQ, to maybe help with some of the processing or offloading some of the resources or knowledge for CentralQ?
00:31:02.336 --> 00:31:25.205
Yeah, I thought about that, but I didn't find where this can fit with the CentralQ architecture and the users right now, because I don't have, like, an app for the phone, for example.
00:31:25.205 --> 00:31:30.451
Maybe we need to do it at some point of time, but let's see.
00:31:30.451 --> 00:31:49.195
And still, it's like a web service which works on the web, which communicates with Azure OpenAI nowadays, and the whole infrastructure is in Azure.
00:31:49.195 --> 00:32:09.571
There is one thing where maybe they can be useful in this case, I mean these local language models: using what I call private data with CentralQ.
00:32:09.571 --> 00:32:38.944
So maybe you know or not, after our previous call when we discussed the web version of CentralQ, I released the Business Central version of CentralQ. So this is the AppSource app, which is actually a paid version, which costs like $12 per user per month, which is not a lot, I think.
00:32:41.368 --> 00:32:50.576
But with this, you have CentralQ inside of Business Central and you can upload your own documentation there.
00:32:50.576 --> 00:33:16.500
So you can upload the documentation about how your Business Central works, like the instructions about your processes, the instructions about your per-tenant extensions and all of that. And one of the nice features there is that you can use the page script, the Business Central page scripting, to record the steps.