Oracle's Conversational AI is Revolutionizing Data Insights—See How!
August 8th, 2025
25 min read
This session from Day Three of ERP Suite AI Week features Oracle’s Data and AI team presenting Oracle Ask Data, a conversational AI platform enabling natural language queries across enterprise data sources, including JD Edwards, E-Business Suite, PeopleSoft, and non-Oracle databases. The session covers business challenges with traditional BI processes, the platform’s two personas — business user and expert reviewer — and the trust framework ensuring accuracy, security, and auditability. A live demo showcases the chat-based business user interface and the expert certification process.
Table of Contents
- Welcome and Introduction
- The Business User's Challenge
- The Vision for Oracle Ask Data
- Structured Data, Accuracy, and Trust
- Two Personas: Business User and Expert in the Loop
- Live Demo: Business User Experience
- Live Demo: Expert Review in the Trust Framework
- Deployment Options
- Closing and Q&A
Transcript
Welcome and Introduction
Welcome, welcome everybody to day three of ERP Suite AI Week. I’m happy to be here, joined by a few colleagues and friends from Oracle. But before I introduce them, I want to quickly recap what has been two packed days of content — interactions with customers, feedback, intriguing questions, and lots of depth and breadth around artificial intelligence. We’ve had multiple keynotes and breakout sessions, and today is no different.
We have another packed day starting off with two keynotes from Oracle that we’re very pleased to have. I’ll introduce these gentlemen in just a moment, along with several breakout sessions that will go throughout the day. I’m looking forward to another action-packed day with lots of questions, interactions, and discussions with customers.
With that, I’ll introduce our special guest today: Dimmitri BV, Group Vice President at Oracle, part of Cloud Engineering and Oracle Cloud Infrastructure, and head of the Center of Excellence for AI and ML. Welcome, Dimmitri. Joining him are Raj Aurora, Principal Cloud Architect at Oracle for Generative AI and a member of the Center of Excellence team led by Dimmitri, and Bishwa, Master Principal Cloud Architect in ML and AI and also a member of the Center of Excellence.
We have some very expert people here, and it’s a pleasure to have you all. They’re going to be talking about some very cool technology they’ve produced to really take natural language conversations to the next level, enabling you to get data insights like you’ve never seen before. Over to you, gentlemen — thank you very much for presenting today.
Manuel, thanks for the intro. Can you just confirm that you can still hear my voice and see my screen?
“Yes, you’re loud and clear, and I can see your slides. Thank you, sir.”
All right, awesome. Folks, thank you for having us on the call today. I’m Dimmitri V, Group Vice President for Data and AI Practice with Oracle Cloud, and I’m joined by two of my team members, Raj and Bishwa. The topic for today, and we’ll spend a few minutes on this up front before jumping into a live demo, is conversational AI.
Over the last 9 to 12 months, as many customers began experimenting with generative AI and exploring enterprise-class use cases that deliver real business value, conversational AI quickly rose to the top of the list. The concept is simple: enable natural language interaction with enterprise data. At Oracle, we have a long-standing and rich ecosystem of database products, numerous packaged applications such as JD Edwards, E-Business Suite (EBS), and PeopleSoft, as well as countless custom applications built on top of the Oracle Database by partners, ISVs, and customers.
A consistent request we’ve received is: “How can I use generative AI to talk to my data?” Business users want to be able to ask ad hoc, natural language questions about the information sitting in JD Edwards, EBS, PeopleSoft, or other applications built on Oracle Database — and not just Oracle databases, but also Snowflake, PostgreSQL, MySQL, Teradata, and really any relational data store.
The Business User’s Challenge
If I'm a business user within a line of business, I don't want to rely on my IT folks to extract the data, munge the data, build a full pipeline, bring it into a dashboard, and work with a BI engineer to build that dashboard. And then that cycle repeats itself, because every time you get to some sort of insight, you want to start looking at the data in a different way. So as a business person, you're always chasing those insights, and you always have to rely on somebody else, waiting for them to do the work before you can actually start looking at the data or the insights the way you would like to.
The Vision for Oracle Ask Data
And that was a clear signal for us to build Oracle Ask Data as a conversational AI platform that can be plugged into pretty much any data source, but certainly the Oracle ecosystem: JD Edwards, PeopleSoft, EBS, and so on. And to ensure that our customers can have a truly natural language conversation with their data. In doing so, we really looked at two aspects. First of all, from a business user perspective, we wanted to make sure that business users can really have a natural language conversation with their data, meaning that you don't need to be a SQL expert, you don't need to understand the data model, and you don't need to understand where the data is coming from. You probably need to be aware of the domain of your data: if you're asking about invoices, payables, receivables, supply chain, or resource planning, you probably need some generic domain knowledge about the data source it's coming from. But it's really as simple as that. On the other side, while business users are interacting with their data in real time using conversational AI, we wanted to make sure that we're also providing AI governance, visibility, security, and full auditability.
Structured Data, Accuracy, and Trust
Here's the thing with structured data. For those folks who have had a chance to play with AI and generative AI, you've probably heard of a concept called retrieval-augmented generation, or RAG. RAG is a great technique when it comes to unstructured data. A RAG-based application relies on what are called vectors to find data via vector search; it then pulls that data and constructs a response to your question based on the data it finds. That response could be a sentence, a paragraph, or a full page, with some references and annotations pointing to where the original pieces of data came from. When dealing with unstructured data using the retrieval-augmented generation technique, you are still relying on a human being to read that response: whether it's a paragraph or a page long, the human reads it, interprets it, and makes their decision based on their understanding of whatever was generated. But look at natural language to SQL, the NL-to-SQL technique. If I ask a question such as "what were my sales for part ABC in the last 30 days in such-and-such store or such-and-such region?", I don't want the model to give me any kind of approximation, hallucination, or estimation. I need the model to give me an exact number that comes from an authoritative data source. And that's why building this AI governance and trust framework was so important for us as part of this solution.
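To make that contrast concrete, here is a minimal sketch of the NL-to-SQL contract, using Python's built-in sqlite3 as a stand-in for the enterprise database. The table, columns, and hard-coded SQL are illustrative, not Oracle Ask Data's actual schema or engine.

```python
import sqlite3

# Stand-in for the authoritative enterprise data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (part TEXT, region TEXT, sale_date TEXT, amount REAL)")
conn.execute("INSERT INTO sales VALUES ('ABC', 'West', date('now', '-10 days'), 1250.00)")

# In the real platform an LLM generates this SQL from the prompt; it is
# hard-coded here to show the contract: the model produces a query, and the
# database, not the model, produces the number.
prompt = "What were my sales for part ABC in the last 30 days in the West region?"
generated_sql = """
    SELECT SUM(amount) FROM sales
    WHERE part = 'ABC'
      AND region = 'West'
      AND sale_date >= date('now', '-30 days')
"""

# The answer is an exact figure from the data source, with no approximation.
(total,) = conn.execute(generated_sql).fetchone()
print(f"Exact answer from the authoritative source: {total}")  # 1250.0
```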
Two Personas: Business User and Expert in the Loop
So if I look at what we've built, and you'll see this in a live demo here in a second, it has two parts, with two personas in mind. On the right-hand side you've got the business user, and as part of the demo, Bishwa is going to be our business user today. He will be interacting with a chat UI. In this case, we're going to be looking at a very simple web-based application we built using Oracle Visual Builder Cloud Service, with Oracle Digital Assistant as a chatbot component embedded inside that UI application. Now, this is just an example. We have customers who have deployed the solution in production and integrated it with Microsoft Teams and Slack; they've built their own custom UIs using JavaScript technologies, or integrated it directly into their applications, such as JD Edwards, because everything behind the scenes is API-based. Essentially, you can put any front end on it: if you wanted to have this directly inside your JD Edwards application, you could do that just as well.
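Since everything behind the scenes is API-based, any front end can drive the same engine. As a rough illustration, a custom client might look something like the sketch below; the endpoint URL, payload shape, and auth header are hypothetical, not a documented Oracle Ask Data API.

```python
import requests

ASK_DATA_ENDPOINT = "https://example.com/askdata/api/chat"  # hypothetical URL

def ask(prompt: str, user_token: str) -> dict:
    """Send a natural language prompt; return SQL, rows, and an explanation."""
    response = requests.post(
        ASK_DATA_ENDPOINT,
        json={"prompt": prompt},                            # assumed payload shape
        headers={"Authorization": f"Bearer {user_token}"},  # identifies the user
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# The same call could back a Teams bot, a Slack app, a custom JavaScript UI,
# or a panel embedded directly inside JD Edwards.
```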
The SQL engine, which is really the brains of the solution, is responsible for taking that natural language prompt, understanding the data domain, understanding the schema of your data, and constructing the SQL statement. That statement then passes through an authorization layer to ensure that I as a business user, or in this case Bishwa, who's going to be doing the demo for us, have the right access permissions to go look at that data. In other words, if, let's say, I ask the question "show me my sales for the last 30 days," Bishwa asks that same question, and maybe Raj asks that same question, we should all get data that is relevant to our own user as it relates to data access permissions. The data may be scoped by a salesperson user ID, it may be scoped by a region, it may be scoped by some other predicate. The authorization framework ensures that you can fully implement table-, row-, and column-level security when it comes to accessing the data coming from that authoritative data source. All of that happens in real time: as I explained, the business user comes in and truly chats with their data using nothing but natural language. While that conversation is ongoing, every single interaction is logged, audited, and sent into the trust framework, where we have another user persona called the expert, or expert in the loop. That person works in offline mode: they can come in once a day, once a week, or every couple of weeks. They are responsible for looking at all the prompts and the SQL that was generated and executed, and going through a lightweight verification and validation process, which again is fully automated. The idea is that the person will look at the prompt, review the SQL that was generated, and ensure that the SQL is correct, not just syntactically correct, because we take care of that as part of the SQL engine. I would say it's almost 100% guaranteed that the SQL is syntactically correct, but you want to make sure it's not only syntactically correct but also has the right business logic embedded when you ask a question such as, for instance, "show me all my critical invoices for the last 30 days."
The notion of a critical invoice in this example may have a business nuance, and the way you would implement it in SQL is not necessarily that you have a column in your data model that says "is critical, true or false." There may be other things that need to be considered. So the expert has the opportunity to read the prompt, examine the SQL, ensure it has been generated correctly, and validate it; or, if needed, provide a correction and ensure that the valid, corrected SQL is passed through to the system. Behind the scenes we employ a trust library. The trust library is the place that stores all the valid and verified pairs of user prompts and SQL. It is used behind the scenes to ensure that the accuracy of the system continuously improves: as the system runs and gets more real examples from business users, we keep validating and adding more and more of those examples into the trust library. And without getting too technical and too much into the weeds, there is also continuous fine-tuning of the large language model, which is one of the most critical aspects of the system and what really sets us apart. Every example, every conversation that goes between the business user and the Ask Data platform, both the valid examples and the corrected ones, will be used to continuously fine-tune the large language models behind the scenes. By doing so, your large language model essentially adapts itself: it learns the environment, it learns the business lingo, it learns your data, it understands your metadata. So the more the system is used in a real production environment, the better it becomes, and the faster its accuracy approaches 100%, or close to it.
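A minimal sketch of how such a trust library might behave, assuming a simple exact-match lookup of normalized prompts; the storage, matching logic, and the placeholder LLM call are all illustrative, as the platform's real matching is more sophisticated.

```python
trust_library = {}

def normalize(prompt: str) -> str:
    return " ".join(prompt.lower().split())

def certify(prompt: str, sql: str) -> None:
    """An expert-approved prompt/SQL pair enters the trust library."""
    trust_library[normalize(prompt)] = sql

def llm_generate_sql(prompt: str) -> str:
    """Placeholder for the NL-to-SQL engine call; illustrative only."""
    return "SELECT /* LLM-generated, uncertified */ ..."

def resolve(prompt: str) -> tuple:
    """Return (sql, certified): certified pairs skip LLM generation entirely."""
    key = normalize(prompt)
    if key in trust_library:
        return trust_library[key], True     # shown with the green certified mark
    return llm_generate_sql(prompt), False  # shown with the yellow warning

# The expert encodes the business meaning of a "critical invoice", which is
# not a literal column in the data model.
certify(
    "show me all my critical invoices for the last 30 days",
    "SELECT * FROM invoices WHERE amount_due > 1000 AND days_late > 7",
)
print(resolve("Show me all my critical invoices for the last 30 days"))
```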
Live Demo: Business User Experience
So with that in mind, let me jump forward here, and I will pass this to Bishwa and Raj to take us into the live demo. On the right-hand side you'll see an example of the chat UI. Again, in this case we're using Oracle Digital Assistant, which is a chatbot that is very intuitive when it comes to having a conversation. Behind the scenes, all the brains of the system are part of our NL-to-SQL engine, which takes the natural language prompt, converts it into SQL, and executes that SQL against your enterprise data store; the results are then presented back in both tabular and graphical formats, if you choose. On the left-hand side we have an example of the trust framework, which is built using Oracle APEX. APEX is an extremely easy-to-use low-code/no-code development environment that sits directly on top of Oracle Database 23ai and allows you to build very robust applications on top of the data stored inside the database. So with this in mind, let me pass this to Bishwa to kick us off with the demo, and then we'll come back and look at more details around the architecture and deployment of Oracle Ask Data. Bishwa, I'm going to stop sharing and pass this over to you.
Let me share my screen quickly.
Okay. I hope you're all able to view my screen. Before we start with the demonstration, I would like to thank the ERP Suites team for helping us build the demo environment; it's been a real pleasure working with the entire team. In today's demonstration I'll be acting as the business user, taking you through the whole Ask Data UI, and my colleague Raj will be acting as the expert user, taking you through the trust framework we have under the hood. To begin: within the scope of the demonstration we have captured JD Edwards data sets, and as you can see, this is the chatbot we've been talking about, integrated on top of Visual Builder. On my screen we have different domain functional areas, and these are customizable. We've simply kept all the JD Edwards data sets under this icon, but depending on the business use case, you can always customize these tabs.
Now let's start by executing some simple prompts.
Okay, let's go with the first one, which is "show all customers on credit hold." Let's see what the output looks like. Okay. In the output you can see the contact person, phone number, and email. For the conversation I've initiated, the LLM is returning these values out of JD Edwards. You also have a couple of icons here that I'd like to explain. The first one is Explain. This helps you know where you are in the conversation and figure out exactly what the LLM is answering. You can find text that explains what the query is all about, and since I'm the business user with limited SQL knowledge, this is good for me for evaluating the query at my level. We also have a SQL Query icon, so you can see the SQL being generated under the hood for this output. This is one of the places where you can view the generated SQL as well.
So let's proceed with a new conversation and let's ask some other questions.
So this is "show the list of back order SKUs," and as you can see, we are capturing the line number, description, and backorder quantity. This is the output that pops up. The most important part here is the exclamation mark with the small yellow ribbon. If I read it out, it says the query is yet to be certified: please use it with caution until it has been certified by the admin. As Dimmitri covered earlier, expert users have the capability to certify prompts in the trust library. If a prompt is certified and registered in the trust library, the sign turns green. There is a process for doing that, which I'll showcase with my next prompts. But this is just to make you aware that we also have a check to show whether a query is certified or not.
Also, I just want to quickly interject: right next to that warning sign Bishwa just covered, you see it says "responding to" followed by the prompt. This changes as well if you have a number of follow-up prompts, so if it becomes a very long conversation, it keeps updating and gives the user a sense of where they are in the conversation. As Bishwa covers further prompts, you'll see how this "responding to" changes as you put in follow-ups.
Exactly. So let's execute another query where we can show some follow-up prompts. Here I'm trying to showcase the projected cash receipts. Let the query run. Okay. As a business analyst, what I can see is the customer names, the due dates, and the due amounts. I see many of the customer names are repeated, and there are also some null values in my output. So as a follow-up, I would like the total amount due for each customer, listed separately. Let's see what the system does when I ask it.
Okay. Now you can see the fields have changed, and I can see the queries it is returning. Based on this, I can also ask the system to limit the output values. Let me do that quickly.
Okay. You can see it is restricting the output as well. There are a couple more features here. If you want to see the table output, this is the window where you can go and see the customer names by due amount. And suppose you want to search or filter based on certain customers: you can also filter using this icon. As you can see, this works on the fly. So this is one of the features available.
Essentially, sorry Bishwa, to quickly interject: in the chat interface we don't want to display too much data, as it would not be readily consumable by the user. So in the chat interface we show just the top 50 rows, and that's a configurable parameter: it could be the top 50 or 100. Then, if the data set is bigger, the View Full Result Set option is where you can go and look at the full data set. One of the enhancements coming very soon is that you'll also be able to download the full data set from there, and you can do filtering and such over there.
Great, thanks Raj. And it's not only the table data I'm eager to view; I'd also like to see some graphs. So let me ask it to make a graph.
Okay. So these are some of the graphs generated on the fly, and we can also view the interactive graph to see how it looks. This is built automatically by the intelligence of the system. Now let's discuss another aspect. Let me ask this system to show me all the important customers I have, and see what it replies. Let me initiate a new conversation for that. When I ask the question "show all the important customers," I'm not describing the criteria for a customer being important, and the LLM also doesn't know what the criteria are in my enterprise for a customer being important. So as a business user, I will validate the SQL statement, where I see the system is saying: if the amount due is greater than 1,000 and the average days late is greater than seven, then these customers are categorized as important. This is something I don't want, or that I see as incorrect. We have a couple of icons here: if you like the query, you can give it a thumbs up, and if you don't, you can give it a thumbs down. If you click a thumbs down, it asks the business analyst to leave feedback, because that's where the other part of the demo comes in: the expert user will look at the feedback given by the users and will whitelist the queries after review. So we also have integration here where the end user can come, give feedback on the query, and submit it to the system. Correct. It can be any feedback, and it is submitted to the back end.
Live Demo: Expert Review in the Trust Framework
So that's how the chat interface works. Now I'll be covering the other part, which is… So, Bishwa, before you leave this UI, let's just quickly talk about the last thing you covered before we jump over to the expert UI. If you can scroll up a little bit: as you can see, Bishwa asked for all the important customers, and it looks like the LLM made up some criteria, saying that someone whose amount due is greater than 1,000 and whose average days late is greater than seven is an important customer. Now, this may or may not be true for your organization. You might have totally different criteria for defining who an important customer is. That's where the trust library, which Dimmitri covered earlier, comes in. All these queries, or definitions that are custom to your organization, can be configured in your trust library as your trusted SQL. Once you do that, when a business user asks questions that are already known to you, the SQL can be picked up from the trust library instead of being generated on the fly by the LLM. So this is the trust UI that Bishwa is sharing.
Now, as an expert user, you can come here and do a variety of things, and we'll walk through them one at a time. This is just the landing page, and it gives the user an introduction to everything that is possible. Bishwa, if you can go to the Bootstrap tab. Essentially, when you are starting off with a blank system, we provide you with a way where we can look at the metadata of your tables and autogenerate prompts for you, which become your starting, trusted prompts. You may or may not want to use those, but they will be there initially, generated using the metadata of your underlying schema. Then, on the right-hand side, you see there's an upload button. If, let's say, the autogenerated prompts are too simplistic for your use cases and you don't want to use them, you can use this upload feature to upload prompts that are more relevant or more custom to your business use case, and have SQL generated for those prompts right up front, at the bootstrap stage. So think of it like this: when you're setting up the system, you either bootstrap it using autogenerate or you upload your custom prompts; then you run the system in, say, UAT for a limited amount of time, make sure your trusted prompts are all good, and then put your system in live mode.
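As a rough illustration of the bootstrap idea, the sketch below template-generates starter prompts from table metadata; the JD Edwards table names and column descriptions are illustrative, and the real Bootstrap tab drives this from your actual schema.

```python
# Hypothetical metadata: physical names paired with human descriptions.
TABLE_METADATA = {
    "F03B11": {"description": "customer invoices",
               "columns": {"RPAN8": "customer number", "RPAAP": "open amount"}},
    "F4211": {"description": "sales order lines",
              "columns": {"SDLITM": "item number", "SDUORG": "quantity ordered"}},
}

def bootstrap_prompts(metadata: dict) -> list:
    """Autogenerate simple starting prompts from table/column descriptions."""
    prompts = []
    for meta in metadata.values():
        prompts.append(f"Show all {meta['description']}")
        for column_description in meta["columns"].values():
            prompts.append(
                f"Show {meta['description']} grouped by {column_description}")
    return prompts

for prompt in bootstrap_prompts(TABLE_METADATA):
    print(prompt)
```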
So let's move on to the Live Certified tab, Bishwa. In this tab, all the activity that happens in the background, everything Bishwa was doing on the UI, is recorded, and as an expert user you can choose to come here and look at the activity going on in the system. Let's say there is a SQL statement that is frequently used, and the only difference between occurrences is the parameters of the SQL. Say I ask "what were the overdue invoices in 2024?" and you ask "what were the overdue invoices in 2025?" There's just a minor difference between the two queries, but the underlying SQL is essentially the same. In such scenarios you can use Fast Certify to certify those SQL statements, and they become your certified SQL. But if the SQL statements being generated are a bit more complicated and need closer review than a parameter change, then let's go to the Full Certify tab.
Over here, you can look at each SQL statement and either fail it or pass it. If it passes, it obviously becomes your certified SQL. If you fail it, you have the opportunity to provide corrected SQL, and that becomes your trusted SQL. Now, all in all, you don't really have to do this: if you just want to trust the LLM to generate your SQL, you are welcome to do that. But as the expert in the loop, if you have time and want to do this, you can come here and certify the important SQL that your organization has. This last tab is very important, and it ties back to what Bishwa was showing earlier. If a user leaves feedback, you actually want to look at it and act on it. This is the place where you look at the users' feedback and then, depending on what the feedback is, pass or fail the SQL or provide corrected SQL, which becomes your trusted SQL going forward. Unfortunately, the two systems we're covering here, the business UI and the trust UI, are not connected at the moment, so we cannot show the full loop of a user submitting feedback, it showing up over here, and it being certified; but essentially that's the flow. Lastly, if you go to the Ops tab, at all points in time you have full visibility into what is going on in your system and how users are interacting with the business UI. This provides you with a full log of the activity in the system, to the extent that each main prompt and its follow-up prompts are logged, and you can see the actual sequence in which the prompts went through the business UI. This gives you very good visibility into what is going on in your live system, and you can also see which prompts and SQL are popular and can be certified going forward.
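A minimal sketch of the pass/fail/correct flow just described, assuming a simple review record; the data model and field names are illustrative, not the trust framework's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewItem:
    prompt: str
    generated_sql: str
    status: str = "pending"             # pending | certified | failed
    corrected_sql: Optional[str] = None

    def certify(self) -> str:
        """Pass: the generated SQL becomes trusted as-is."""
        self.status = "certified"
        return self.generated_sql

    def fail(self, corrected_sql: str) -> str:
        """Fail: the expert's correction becomes the trusted SQL."""
        self.status = "failed"
        self.corrected_sql = corrected_sql
        return corrected_sql

item = ReviewItem(
    prompt="show all the important customers",
    generated_sql="SELECT * FROM customers WHERE amount_due > 1000",
)
# The expert rejects the LLM's made-up criteria and supplies the
# organization's real definition of an important customer.
trusted_sql = item.fail("SELECT * FROM customers WHERE tier = 'STRATEGIC'")
print(item.status, "->", trusted_sql)
```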
And then if you go quickly over to the Metrics tab, Bishwa. Again, a bunch of metrics are provided here, like the state and size of your trust library. And if you go to the next tab, the user metrics, you can see the number of prompts and the trust level of each query, and so on. Essentially, we provide you with a very detailed view of how your system looks and how it matures over time as your users interact with it and the expert user works in the background certifying SQL. Now, one piece that Dimmitri covered briefly was the fine-tuning aspect. With all this activity being logged, we have a very good data set that we can use to fine-tune the model. We don't have this in the demo here currently, but essentially we have an automated workflow where all the user activity can feed into fine-tuning of the model, and then, in an automated way, we can swap out the currently deployed model for the fine-tuned model and have it available as the system matures.
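To illustrate the fine-tuning feed, here is a minimal sketch that turns logged, certified prompt/SQL pairs into training examples; the log fields and JSONL record shape are assumptions, as the actual pipeline is automated inside the platform.

```python
import json

# Hypothetical activity log entries; only certified pairs become training data.
activity_log = [
    {"prompt": "show all customers on credit hold",
     "sql": "SELECT * FROM customers WHERE credit_hold = 'Y'",
     "status": "certified"},
    {"prompt": "show the list of back order SKUs",
     "sql": "SELECT sku FROM order_lines WHERE backorder_qty > 0",
     "status": "pending"},   # uncertified pairs are excluded
]

with open("finetune_examples.jsonl", "w") as f:
    for entry in activity_log:
        if entry["status"] == "certified":
            record = {"input": entry["prompt"], "output": entry["sql"]}
            f.write(json.dumps(record) + "\n")
```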
Yeah, that's all we had to cover in terms of our demo. I'll pass it back to Bishwa and Dimmitri for any last comments.
Deployment Options
Thanks, Raj, and thanks, Bishwa. I see there are a number of questions here in the chat. Scott, do you want to jump in? Yeah, we can address those. Sure, let's go up to the top. The first question was about real-time data: is this real-time data or training data from the ERP? If the user enters new data, will a query immediately reflect it? Yeah. Let me start by saying that there are two best practices we see when customers choose to deploy, and there's no right or wrong; it really depends on your use case and even more so on your enterprise data strategy. One option is to deploy Oracle Ask Data directly against, let's say, the Oracle 19c database your JD Edwards runs on. In that case, Ask Data sees all the data that comes into the database in real time or near real time: the moment the application commits to the database, that data is available and can be retrieved through conversational analytics on the Oracle Ask Data platform. That's one option, and there are pros and cons to doing it this way. The other option we see is customers who have a data warehouse where they take their transactional data, say from EBS, and move it into the warehouse. In doing so, they also transform the data through ETL; there could be some aggregation, and it's probably only a subset of the data, so you're moving from a transactional view of the data to a more analytical view. In that case, it depends on the frequency of your ETL pipeline. If you're going to consume the data from a warehouse that sits downstream from your JD Edwards instance, the data is available, but its freshness depends on how often you load new data into that warehouse. And again, these are two architectural patterns, and there are pros and cons for each.
Closing and Q&A
I'm going to come back to Susan's question. The next one was about access and security. With JD Edwards, if the logged-in user doesn't have permission to view, for example, the account ledger in a JD Edwards table, will Ask Data block access to it? Is it possible for Ask Data to respect the JD Edwards security settings based on the user's login? That's something we get a lot. Security obviously is paramount, and we ensure that we provide data access controls and policies within Oracle Ask Data. One thing for folks to realize is that we do not go through the application tier. Oracle Ask Data has been designed to be data source and application agnostic, meaning we don't call some API in the application tier; instead, we go directly against the database. So if your security policies are defined at the user level within the application itself, those are not visible to Oracle Ask Data. What we do is ensure that we apply security policies directly against the data stored inside the database. We can go into details on a separate call, but there is a way for you to enforce data access policies, and you should. One thing to keep in mind is that user policies defined inside the application do not natively apply to what we do within the Ask Data platform.
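As a rough sketch of enforcing data access at the database layer rather than the application tier, an authorization step might scope each generated query per user, as below. The user-to-predicate mapping, column names, and query-wrapping approach are illustrative, not the platform's actual authorization framework.

```python
# Hypothetical row-level policies keyed by user.
USER_SCOPES = {
    "bishwa": "region = 'EMEA'",          # scoped by region
    "raj": "salesperson_id = 'RAJ01'",    # scoped by salesperson user ID
}

def authorize(generated_sql: str, user: str) -> str:
    """Scope an LLM-generated query to the rows this user may access."""
    predicate = USER_SCOPES.get(user)
    if predicate is None:
        raise PermissionError(f"no data access policy defined for user {user!r}")
    # Wrapping keeps the predicate in force regardless of how the LLM
    # structured the statement; the inner query must expose the scoped columns.
    return f"SELECT * FROM ({generated_sql.rstrip(';')}) scoped WHERE {predicate}"

generated = "SELECT salesperson_id, customer, amount FROM sales"
print(authorize(generated, "raj"))
```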
Okay. Yeah, makes sense. Then another one about naming: in the SQL displayed back to the user, friendly names appear. Where does the translation from a JD Edwards alias to these friendly names come from? This is where Bishwa can jump in, because I know you guys worked on setting this up. But one thing I'll say is that when Ask Data gets deployed, there's a small exercise of taking the metadata (in this case, a table like F0911, whatever that means) and providing a natural language description of what that table actually represents, because a large language model cannot just look at F0911 or F0912, or whatever random number that is, and figure out the meaning of the data inside those tables. In other words, we go through a small, very lightweight exercise to say, well, table F0911 has, I don't know, invoice data, and within this table we have, let's say, 15 columns, C1 through C15, and for each of those columns you give a few words or a few sentences describing what that column represents. That's how the model starts understanding not only the data model itself but also the context: it actually understands the semantic meaning of the data.
So it's pulling, you said, metadata as well: is that something that's already in JD Edwards, or something they would set up with Ask Data? Yeah, so think of it as: we would pull the DDL, which is essentially the description of the tables and fields. That DDL has the physical database, schema, table, and column names, and we would append an English sentence to say, "this is the meaning of this table." Yeah. And it's a one-time activity, just defining the meta glossary, as Dimmitri said. We need to keep human-readable definitions for each and every column when the names are cryptic; if a name starts with something like E1023, it needs a definition, but if it's already something like "employee ID," then the LLM has the capability to understand it. Yeah.
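A minimal sketch of that one-time meta-glossary exercise: pull the raw DDL and append plain-English descriptions so the model can attach meaning to cryptic physical names. The table and column aliases here are illustrative.

```python
# Hypothetical glossary mapping physical names to business meanings.
GLOSSARY = {
    "F0911": {
        "table": "account ledger (general ledger transactions)",
        "columns": {
            "GLAN8": "address book number tied to the transaction",
            "GLAA": "transaction amount",
        },
    },
}

def annotate_ddl(table: str, raw_ddl: str) -> str:
    """Append English descriptions as SQL comments to the physical DDL."""
    meta = GLOSSARY[table]
    lines = [f"-- {table}: {meta['table']}", raw_ddl]
    for column, description in meta["columns"].items():
        lines.append(f"-- column {column}: {description}")
    return "\n".join(lines)

print(annotate_ddl("F0911", "CREATE TABLE F0911 (GLAN8 NUMBER, GLAA NUMBER)"))
```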
Okay. That's all the questions we had in the chat so far. Any others? We've got a couple more minutes; we can go through them if anybody has any, or if you guys have anything else to close with after the demo, we can go through that while we wait. So maybe let me just share, and I'll come back to the first question in terms of how data makes its way into the Ask Data platform. Let's see here, you should be able to see my screen, right? Okay. All right. Let's start with this. We don't need to go through every single box. Everything inside the red box is the footprint: the under-the-hood view of the different services that make up Oracle Ask Data. In other words, when Ask Data is deployed, all these services get stood up. It's super simple; everything is automated through Terraform. We run a single script, and all these instances are spun up inside your Oracle Cloud tenancy. But where I want to focus now is upstream, on the left-hand side. As I was saying, there are two ways customers deploy Oracle Ask Data next to their enterprise data source. In this case, you can think of this being your application database, say the 19c instance that underpins your JD Edwards, and this is a direct connection from the Oracle Ask Data SQL engine into the 19c instance. There are a couple of benefits to doing it this way. First of all, every write to the database: every time the application posts an update, that data is immediately available, so you can have real-time or near-real-time insights coming back from that source database through the conversational experience we just saw from Bishwa and Raj, which is definitely beneficial. On the flip side, it exposes you to some complexity, because the transactional schema inside JD Edwards is rather complex. It requires a little more thinking up front about how to ensure all that complexity can be properly handled by the LLM. It requires those annotations: taking table names like F0911, whatever that means, and column names, which may or may not be very descriptive, and providing annotations for them. So there is a bit more complexity in the upfront implementation, but it gives you the benefit of accessing your transactional data in near real time. That's one way customers choose to deploy Ask Data. The other way, and again this largely depends on your existing enterprise data strategy, is when you have a single application or multiple applications. Let me give you an example. You may have JD Edwards running in one part of your organization, and EBS or SAP or Salesforce or Workday elsewhere: many different applications, not necessarily all coming from Oracle, and you may already have ETL pipelines moving this data from the application silos into an enterprise data warehouse. From an enterprise data warehouse perspective, Oracle's flagship product is Autonomous Data Warehouse, or ADW, but as I said at the beginning, we actually support many flavors of SQL. It could be a warehouse based on Snowflake, Teradata, PostgreSQL, MySQL, Microsoft SQL Server, and so on. We want to make sure we can adapt to your existing data landscape.
If you've already chosen to build your data integration ETL pipelines from the application silos into a centralized enterprise data warehouse, you can have Oracle Ask Data, the red box on the right-hand side, tied into your data warehouse. That usually means your data has already been cleansed and aggregated, and you probably have fewer joins, because you've gone from a transactional view of the data to more analytical tables: fewer, wider tables, which makes it easier for a large language model to understand the data and its meaning and context, and tends to require fewer annotations up front. You probably also already have your data domains structured inside the warehouse, so it becomes a little better tailored toward enterprise analytics and this kind of conversational AI application. In any case, this really comes down to your specific requirements, your specific business users, the types of questions they want to ask, and your existing data landscape and data architecture: whether you pull directly from the source application database, the way you have it here, or consolidate into a warehouse and have Ask Data sit on top of that data warehouse. Both options are available; I think this is more of a case-by-case discussion.
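The two deployment patterns can be summarized as data source configurations, as in the sketch below; the keys and values are illustrative, not actual Ask Data deployment parameters.

```python
# Hypothetical configuration summary of the two architectural patterns.
DIRECT_TO_TRANSACTIONAL = {
    "source": "jde_on_19c",            # JD Edwards on Oracle Database 19c
    "connection": "direct",
    "freshness": "near real time",     # visible as soon as the app commits
    "schema_complexity": "high",       # transactional schema, more annotation work
}

WAREHOUSE_DOWNSTREAM = {
    "source": "enterprise_dw",         # e.g. ADW, Snowflake, Teradata, PostgreSQL
    "connection": "via ETL pipeline",
    "freshness": "depends on ETL frequency",
    "schema_complexity": "low",        # fewer, wider analytical tables
}
```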
Alright, with that, I'm happy to take any additional questions, or we can give folks a break before the next session.