Stop Losing Money! The Secret AI Tool for JD Edwards Users
August 5th, 2025
19 min read
The session focuses on leveraging AI-powered anomaly detection within JD Edwards to provide actionable insights and enhance business decision-making. The presentation, led by ERP Suites' experts, introduces machine learning concepts, including supervised, unsupervised, and reinforcement learning, and explores their applications in various business sectors, such as finance, manufacturing, and warehouse management. Key topics include anomaly detection, the importance of training models on both conforming and anomalous data, and using Oracle's autonomous database to analyze large datasets. The session highlights practical use cases like sales analysis, inventory optimization, and predictive maintenance. The demo showcases how machine learning models can detect anomalies in sales orders, warehouse inventory, and transactions, enabling businesses to make data-driven decisions and improve operational efficiency.
Table of Contents
- Introduction and Agenda
- Machine Learning Basics
- Anomaly vs. Conformity
- Financial Use Case: Supervised Learning
- Manufacturing Use Case: Unsupervised Learning
- Anomaly Detection and Demo Flow
- Demo Video: Sales and Warehouse Analytics
- Real World Examples of Anomaly Detection
- Closing Remarks
Transcript
Introduction and Agenda
Hello and welcome, everybody, and thank you for joining today's session. My name is Manuel Nera, Vice President of AI and Products at ERP Suites. Today's session covers a wide range of topics around anomaly detection, conformity, and how to get greater insights from your JD Edwards data. We have a great agenda planned for today. I'd like to hand it over to Drew for him to introduce himself and kick off the presentation.
Hi, everyone. Drew Rob, AI advisor at ERP Suites. I've been with ERP Suites for five years now and have worked in products, data analytics, and consulting, and now the wonderful world of AI. I've given a number of sessions this week on getting started with AI, but today I'm happy to talk with you about some anomaly detection capabilities we're building at ERP Suites, some example use cases, and, at a high level, what anomaly detection is, what machine learning is, and the outline around all of that.
So, a quick agenda. We'll first start with what machine learning is and what its basis is. Then we'll go into the types of machine learning: supervised, unsupervised, and reinforcement are the three. Then anomaly versus conformity, what's important about those, and why you really need to train on both. Then we'll go into some real-world use cases. We'll follow that with a demo we built at ERP Suites around sales analysis and warehouse analysis using Oracle Machine Learning and, really, Oracle OCI capabilities. And we'll end with a Q&A.
Machine Learning Basics
With that, we'll jump into what machine learning is. You've probably heard of it before: it's a branch of artificial intelligence that enables computers to learn from data and improve their performance over time. It really learns from historical data; we like to call it a feedback loop. Every time a user input goes in, every time new data gets added, it learns from that and becomes more intelligent. The key components are these. First, training data: having good, quality data, the gold of your business as Oracle likes to call it, is obviously important, and I spoke a lot about that this week. Good data management leads to very good machine learning models and solutions, and that's what lets you glean genuinely insightful results. You also need a model to run the data through, and the model really depends on the sort of predictions and decisions you want to come out of it. There are many different models you can use; it just depends on the type of machine learning you're looking to do and the algorithm you're using. Are you trying to do a classification model or a clustering model? In our demo today we're using a support vector machine, for example. It really depends on the problem you're trying to solve when you go into the machine learning side of things. And then inference: the ability for a trained model to handle new, unseen data. Together with the training data, which is how the model gains its knowledge, that's a very important concept when building out these machine learning capabilities and models.
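To make those three pieces concrete (training data, a model, and inference), here is a minimal sketch in Python using scikit-learn. It is not the Oracle Machine Learning implementation used later in the demo, just an illustration of the feedback loop described above; the column names and values are invented for the example.

```python
# Minimal sketch of the training-data -> model -> inference loop.
# Hypothetical example; not the ERP Suites / Oracle ML implementation.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Training data: historical, labeled sales lines (columns are invented).
history = pd.DataFrame({
    "quantity":   [10, 5, 8, -2, 12, 7],
    "unit_price": [100.0, 98.5, 101.2, 99.0, 250.0, 100.5],
    "is_anomaly": [0, 0, 0, 1, 1, 0],   # labels supplied by past reviews
})

# Model: any algorithm appropriate to the question being asked.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(history[["quantity", "unit_price"]], history["is_anomaly"])

# Inference: score new, unseen transactions as they arrive.
new_rows = pd.DataFrame({"quantity": [9, -1], "unit_price": [100.3, 97.0]})
print(model.predict(new_rows))          # e.g. [0 1] -> second row flagged

# Feedback loop: once a user confirms or rejects the flag, append the row
# (with its corrected label) to `history` and retrain periodically.
```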
With that, I'll jump into the three types; we have examples on two of them. Supervised learning is learning on labeled data. Think of your structured data inside JD Edwards, your SQL database, or your Oracle database: with supervised learning you're learning from that data and trying to predict a single target outcome. We'll have a financial example coming up that gives you a better perspective of what a supervised learning model really looks like.
Next is unsupervised learning. This model detects patterns in unlabeled data. There are a few examples here. One is Vision, one of the tools we like to use to gain insights from images, which is very unstructured data. You can also think of getting data from documents; document understanding was another session we had this week. Think of those as unsupervised learning. The one we're going to talk about later falls naturally in the manufacturing space: using machine learning to detect anomalies in manufacturing with sensor data. It's not detailed, table-oriented data like supervised learning; it's more unstructured, but you can still glean good insights from it. The third type is reinforcement learning, and this one is very cool: it learns from trial and error. It's what we've incorporated into our manufacturing digital assistant, and we're looking to incorporate it into a lot more of our solutions. When you ask a question that gets turned into a SQL query, or you want to run through a process or an automation, as the manufacturing assistant does, it will come back and ask whether you like the answer. You give it a thumbs up or a thumbs down, and you can train it on whether the answer was good or bad. If you joined the keynote today on the data solution, you saw that kind of reinforcement learning as well: once you got the data back, you could train it yourself. It's that human-in-the-loop aspect we've tried to reiterate a lot this week, where the human is involved in training the model and sending more historical feedback back to it. So reinforcement learning is a different animal; you've seen some real-time examples of it this week, and today we'll show high-level examples of the supervised and unsupervised types. A minimal sketch of that thumbs-up feedback idea follows below, and then we'll jump into anomaly and conformity.
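As promised, here is a deliberately simplified sketch of the thumbs-up/thumbs-down idea: user feedback is stored and used to prefer answers that scored well in the past. Real reinforcement learning involves rewards, policies, and exploration; this only shows the human-in-the-loop feedback shape, and every name in it is hypothetical.

```python
# Simplified human-in-the-loop feedback sketch (not real RL training).
from collections import defaultdict

feedback_scores = defaultdict(int)   # answer template -> cumulative score

def record_feedback(answer_id: str, thumbs_up: bool) -> None:
    """Store a +1 / -1 signal each time a user rates an assistant answer."""
    feedback_scores[answer_id] += 1 if thumbs_up else -1

def pick_best(candidate_ids: list[str]) -> str:
    """Prefer the candidate answer users have historically rated highest."""
    return max(candidate_ids, key=lambda a: feedback_scores[a])

record_feedback("sql_template_a", thumbs_up=False)
record_feedback("sql_template_b", thumbs_up=True)
print(pick_best(["sql_template_a", "sql_template_b"]))  # sql_template_b
```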
Anomaly vs. Conformity
Anomaly is obviously what we're focusing on today: something unexpected or unusual that deviates from the norm, often signaling a potential issue or error. A high-level example is a sales order with a negative quantity. You could spot that manually in your data, but we want to be more predictive: highlight those examples automatically, potentially remove them, and notify people. That's the big thing we're getting to with the machine learning data analysis later today. A spike in sales from a dormant customer account is another example, and outside of sales, a user logging in at an unusual time or from an unusual location is another kind of anomaly that might show up in your data. On the other side, conformity refers to behavior, data inputs, or outputs that follow expected patterns: a sales order that includes all required fields and follows your typical format, users following a company's standard operating procedures, or data entries that match historical averages. Everything matches up. The reason we bring up conformity is that when we train these models, we don't want to use only data that's anomalous, incorrect, abnormal, or an outlier. We want to use the conforming data to train the models as well, because that helps eliminate bias: the bias that shows up if you have skewed data or target a subset of data that favors one side or the other. It's really important to understand that all data matters when training these models, especially conforming, expected results, because that's what lets the model determine what the actual anomalous transactions, data sources, or outputs are. So keep conformity in mind; it's very important to train on the data that's good as well.
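Since the demo later uses a support vector machine, here is a hedged sketch of how conforming data can establish the baseline while a handful of known anomalies sanity-checks the boundary, using scikit-learn's OneClassSVM as a stand-in for the in-database Oracle model. The features and the `nu` setting are illustrative assumptions.

```python
# Sketch: learn "conformity" from normal sales lines, then flag outliers.
# scikit-learn stands in here for the in-database Oracle ML model.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Conforming history: [quantity, unit_price] rows that follow the usual pattern.
normal = np.array([[10, 100.0], [8, 99.5], [12, 101.0], [9, 100.2], [11, 98.9]])

scaler = StandardScaler().fit(normal)
model = OneClassSVM(kernel="rbf", nu=0.05).fit(scaler.transform(normal))

# Known anomalies (negative quantity, wild price) are kept aside to verify
# that the boundary actually separates them -- the reason you need both kinds.
suspects = np.array([[-2, 100.0], [10, 100.1], [9, 950.0]])
print(model.predict(scaler.transform(suspects)))   # -1 = anomaly, 1 = conforming
```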
Financial Use Case: Supervised Learning
With that, I'll hand it over to Manuel to take you through a financial use case on the supervised learning side, where you actually understand the data in its tabular format. Thank you, Drew, and great baseline on anomaly and conformity, because as you described, both are equally important to having a highly useful and accurate model. For this supervised example we're talking about accounts payable: wanting to automatically detect erroneous invoices such as overpayments, fraud, or accounting errors. As you can imagine, the two components Drew talked about, conformity and anomaly, are both necessary. On the conformity side, you establish a baseline of what is expected, whether it's an average or some other mechanism, so the model can say, yes, this payment is fine. On the other side of the house, you also need the anomalous records, those outliers, so the model can understand what doesn't fit, why a question needs to be raised, and when people need to be notified. This real-world example is a pretty simple one, but it's something that will resonate in terms of tracking payments on the AP side. It falls in the supervised category because, as Drew said, it uses structured data within JD Edwards, gets trained on that data, and will continue to be trained over time as other scenarios arise that need to be taken into consideration. The model will evolve, is what I'm trying to say.
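A hedged sketch of that supervised AP idea: historical invoices carry labels (paid correctly versus flagged as overpayment or error), and a classifier learns that single target. The field names mirror the spirit of AP data but are invented for the example, not actual JD Edwards columns.

```python
# Supervised sketch: flag suspicious AP invoices from labeled history.
# Illustrative only; column names are invented, not actual JDE fields.
import pandas as pd
from sklearn.linear_model import LogisticRegression

invoices = pd.DataFrame({
    "gross_amount":     [1200, 540, 98000, 1150, 560, 97500, 1190, 610],
    "days_to_pay":      [30, 28, 5, 31, 29, 4, 30, 27],
    "lines_on_voucher": [3, 2, 1, 3, 2, 1, 3, 2],
    "is_erroneous":     [0, 0, 1, 0, 0, 1, 0, 0],   # label from past reviews
})

X = invoices.drop(columns="is_erroneous")
y = invoices["is_erroneous"]

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score a newly entered invoice: high probability means route it for review.
new_invoice = pd.DataFrame({"gross_amount": [96500], "days_to_pay": [3],
                            "lines_on_voucher": [1]})
print(clf.predict_proba(new_invoice)[0, 1])   # probability the invoice is erroneous
```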
Manufacturing Use Case: Unsupervised Learning
Next slide, please. Now for the manufacturing use case, the other side of the house: the unsupervised set of use cases, where you might have unstructured or unlabeled data. This one is interesting because back when I was still at JD Edwards at Oracle, probably around the 2017 time frame, we came out with IoT support, Internet of Things support, with orchestrations and Orchestrator Studio; that's really what it was built for. The idea was to detect symptoms in equipment that fall out of range and indicate that a machine is near failure or near to needing maintenance. Back then we were talking about Raspberry Pi devices to enable that interaction, collect the data, and feed it back into the JD Edwards preventive maintenance system.
Fast forward to now, and we have machine learning capabilities that can mine that data to figure out what's really an anomalous situation. Maybe a reading falls out of range, but a weighted average over the history shows that 99% of the time the machine does not fail at that threshold; it takes another step or two in vibration units before the reading is really an indicator. That is the point in time when the maintenance person should be contacted to service the machine before it breaks down.
So this is an augmentation component: if you're already using orchestrations and doing IoT, this will enhance it.
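A hedged sketch of the point that a fixed vibration threshold is cruder than a learned one: score how unusual a reading is relative to the machine's own history, for example with a simple z-score, and only page maintenance when readings stay unusual. The sensor values and the escalation rule are assumptions, not the Monitron or orchestration APIs.

```python
# Sketch: score vibration readings against the machine's own history,
# rather than a single hard-coded threshold. Values are illustrative.
import numpy as np

history_mm_s = np.array([2.1, 2.3, 2.0, 2.4, 2.2, 2.5, 2.3, 2.1, 2.2, 2.4])
mean, std = history_mm_s.mean(), history_mm_s.std(ddof=1)

def vibration_zscore(reading: float) -> float:
    """How many standard deviations a new reading sits above normal behavior."""
    return (reading - mean) / std

recent = [2.6, 3.1, 3.4]           # latest readings from the sensor feed
scores = [vibration_zscore(r) for r in recent]

# Escalate only if consecutive readings sit well outside the norm -- the
# "extra step or two" before contacting the maintenance person.
if all(z > 3.0 for z in scores[-2:]):
    print("Create preventive-maintenance work order")   # e.g. via an orchestration
else:
    print("Within learned normal range", [round(z, 1) for z in scores])
```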
So that's one use case. The other potential use case is around devices.
I spoke a moment ago about Raspberry Pis being the devices that did the basic collection of the data and fed it into JD Edwards via orchestrations and Orchestrator Studio. Fast forward to today, and there are devices built and ruggedized for this kind of purpose. Monitron from AWS is one example we've worked with; we've used it with customers, and we actually use it in our own operations for our data centers to detect and monitor some of the symptoms I talked about: temperature, vibration, and other metrics. It also has AI and machine learning capabilities incorporated, so it will start tracking and start learning. It's a smart device, which is interesting.
In this particular manufacturing example with equipment data, we're often talking about unstructured data with no labels, and being able to train on both ends is still important, like Drew said: both the conformity side and the anomalous side. I'll just add real quick, and you can see it there, the actual insights you get from training that model. It's going to be based on that historical data, and that unstructured data becomes structured data in the end.
Then you start to see what's normal, and that's the human-in-the-loop aspect as well: you see what's normal and keep training it, and from that you start to recognize the outliers once more data comes in. So there's a lot of data loading and understanding of the data first before you start actually making decisions on it. Then you can do predictive maintenance, alerting teams to inspect a machine or a server. Knowing whether a machine is going to break down is the big actionable insight you get from that, so it's really preventive maintenance, as Manuel was saying. Just wanted to add a little color there.
Anomaly Detection and Demo Flow
Yup. So with that, we're going to move on to the anomaly detection flow and talk about the demo we'll show today. I just wanted to give a high-level view of what it looks like. On the left, you'll see the box that's just JD Edwards; that's your JD Edwards data. Our demo today covers sales analysis and warehouse analysis, and the warehouse analysis is specifically about moving certain items to certain warehouses for cost optimization, which is really sweet stuff as far as anomaly detection and machine learning go. We grab the data directly from JD Edwards and load it into ADW, which has Oracle Machine Learning. That's really where the magic happens: inside Oracle Machine Learning we actually train the models.
We do the data prep and the model training in there and build out the insights.
Then, using a large language model, we build out the data visualizations you'll see in this demo, and we embed them into dashboard analytics inside of E1 so you can see them right on your screen. A future capability is embedding these analytics with the digital assistant inside E1, so you can ask the digital assistant a question and the dashboards pop up automatically for you to glean insights there. It's one of the ways we're trying to combine all of these AI services together into a full-fledged solution. You're going to see some pretty great things today and how that all works.
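A hedged, end-to-end sketch of the flow on the slide: pull a slice of JD Edwards data, score it, and hand anomalies to whatever renders the dashboard. The connection details are placeholders, the F4211 column aliases should be verified against your own environment, and IsolationForest stands in for the model; in the actual demo the training and scoring happen inside Oracle Machine Learning in ADW.

```python
# Sketch of the slide's flow: JDE data -> ADW -> model scoring -> dashboard.
# Connection details are placeholders; the real demo trains and scores in OML.
import oracledb
import pandas as pd
from sklearn.ensemble import IsolationForest

conn = oracledb.connect(user="oml_user", password="***", dsn="adw_high")

# 1. Pull the slice of JD Edwards data to analyze (F4211 = sales order detail;
#    verify table and column names against your own environment).
sales = pd.read_sql(
    "SELECT sddoco AS order_no, sduorg AS quantity, sdaexp AS extended_amount "
    "FROM f4211", conn)

# 2. Train/score. IsolationForest stands in for the in-database OML model.
model = IsolationForest(contamination=0.01, random_state=0)
sales["anomaly"] = model.fit_predict(sales[["quantity", "extended_amount"]])

# 3. Hand flagged rows to whatever builds the dashboard / LLM narrative.
flagged = sales[sales["anomaly"] == -1]
print(flagged.head())        # in the demo this feeds the React dashboard in E1
```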
As for ADW, we push the data into ADW because the OML services are inside of it, but it really depends on your use case, on the volume of data you have, and on the transactional databases you have inside of JDE, for how we make the connection to OML to make all the predictive analytics and anomaly detection happen. With that, we're going to move over to the demo portion, and I'm going to share a video. Unfortunately, one of our colleagues, Mario Riccardi, could not be here today. He is on the AI and Products team here at ERP Suites, and in the video he provides some fantastic insights, some higher-level architecture of how he gets the data from JD Edwards into Oracle Machine Learning, and the insights he's drawn from those visualizations. What I want you to really think about as you watch is where you can see this working in your business. We're going to show sales analytics and warehouse analytics, but where could it work for you? With any type of data, with any type of transactions, this kind of anomaly detection and machine learning can provide proactive, predictive insights that enhance your JD Edwards experience; that's what I'm getting at.
So without further ado, I'm going to unshare real quick. Manuel, if you want to add any talking points while I get the video up and running. Sure thing. I would just add a little bit to the comments about ADW; Drew is completely spot on. It really depends on your use cases, and if you're specifically interested in connecting OML, Oracle's machine learning, directly to your JD Edwards transactional database, that is a possibility. We just need to look at the particular set of requirements you have; Oracle also supports that, we just need to get into the specifics. Reach out to us if you'd like to have those conversations; either Drew or I should be able to help you. Go ahead. Yeah, and as you're watching the video, please feel free to put any questions in the chat. We're happy to answer them as the video goes on, or we can answer them at the end as well. So just enjoy the video; I'm really excited for you to see what Mario has put together, along with the rest of our AI and Products team.
Demo Video: Sales and Warehouse Analytics
Transforming JDE data into actionable insights: anomaly detection with ERP Suites and Oracle. My goal today is to provide a comprehensive overview of ERP Suites' AI-powered anomaly detection solution, to demonstrate our process, and to showcase some real-world examples. My name is Mario Riccardi, and I am on the ERP Suites AI and Products team. In this video, I'll be highlighting our sales and distribution anomaly detection solutions, in which we use Oracle's autonomous database and machine learning to perform statistical analysis of our JDE data. In today's competitive landscape, businesses running powerful ERP systems like JDE generate an immense amount of data, and this data holds the key to understanding performance, identifying inefficiencies, and uncovering opportunities. But with so much data, it's almost impossible to get a sense of patterns just by observation. How do you find what truly matters in this sea of information? How do you identify subtle anomalies, the transactions or events that fall outside of normal business operations, transactions that could signal fraud or process errors or point to necessary business changes? Manually sifting through this data is time-consuming and generally impractical, or even impossible. This is where our AI-powered solution comes in. We're leveraging the power of Oracle Cloud, specifically the Oracle Autonomous Database and its integrated machine learning capabilities, and with these tools we're able to automatically find the critical anomalies so that you can make meaningful business decisions. So today I want to focus on practical, high-impact applications of turning your data into information you can use.
Our first step is to extract the relevant data from your E1 system. In the examples I'm going to cover today, we're taking sales order data and inventory location data and feeding them into our machine learning processes to discover what kinds of patterns exist and to surface recommendations for changes that will have meaningful value. So we extract the specific data we want to analyze and load it into our Oracle database. Oracle has provided a database that's well suited for analytical workloads and machine learning, including a range of powerful algorithms and statistical techniques designed to identify patterns and deviations within large data sets. Here's a picture of the DBMS_DATA_MINING algorithms and the different statistical models we can use to find patterns in our data as well as anomalies. This in-database processing is highly efficient, keeps your data secure, and eliminates the need to move large data sets elsewhere for analysis. Once the data is loaded, we train these machine learning models on your historical JDE data and establish a baseline of normal behavior across various business dimensions; in this case, we're analyzing sales trends, inventory movements, purchasing patterns, and more. I should also point out that these models can run numerous times a day: as new data arrives, the models score it, identify data points that deviate significantly from the learned behaviors, and flag them as potential anomalies.
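A hedged sketch of that "score new data as it arrives" step: keep a watermark of the last scored record, score only the newer rows, and return the ones that deviate strongly. The watermark, threshold, and data shape are assumptions, and in the demo this runs as in-database Oracle ML rather than client-side Python.

```python
# Sketch: periodic scoring of newly arrived rows against learned behavior.
# The watermark, threshold, and data shape are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import IsolationForest

def score_new_rows(all_rows: pd.DataFrame, last_scored_id: int,
                   model: IsolationForest) -> pd.DataFrame:
    """Score only rows added since the previous run and return the anomalies."""
    new = all_rows[all_rows["row_id"] > last_scored_id].copy()
    if new.empty:
        return new
    new["score"] = model.decision_function(new[["quantity", "amount"]])
    return new[new["score"] < -0.1]          # lower score = more anomalous

# Train once on the historical baseline, then call score_new_rows on a schedule
# (e.g. several times a day) and route any returned rows to reviewers.
baseline = pd.DataFrame({"row_id": range(6),
                         "quantity": [10, 9, 11, 10, 8, 12],
                         "amount": [1000, 950, 1080, 990, 870, 1150]})
model = IsolationForest(random_state=0).fit(baseline[["quantity", "amount"]])

arrivals = pd.DataFrame({"row_id": [6, 7], "quantity": [10, -3], "amount": [1010, 40]})
print(score_new_rows(pd.concat([baseline, arrivals]), last_scored_id=5, model=model))
```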
Real World Examples of Anomaly Detection
Now let's look at some real-world examples. Here I have loaded up in my JDE instance the results of our analysis, which in this case are presented as a React website we've created that holds a number of different charts as well as key insights. We can optionally run our statistical results through a large language model such as Cohere and have it generate the insight descriptions. Here you can see key profit margin insights: extreme negative margins for particular month-and-year pairs, suspicious perfect margins, and clustering. Under my sales anomaly tab, I have four different sub-tabs. I can see that the margins I generally enjoy in my business are somewhere between zero and 40%, with a few that are 95 to 100%. Sorted by month, I can see my profit margins over time. This data from our lab is probably not the best reflection of real-world data, or hopefully it's not, but you can see that in 2021 we had a particularly bad month, and over time we're generally getting more profitable. We have a number of different charts we can generate, and we're not locked into these; it depends on a customer's needs and what's most meaningful to you. Here's another tab charting our salespeople's results. We can see that we have one salesman with a very negative profit margin, and these two have negative quantities. Here I'm able to see who has the most sales. And again, we have key findings specific to the data we've analyzed, plus recommendations; the system has suggested we might want to look closer at regions for any anomalies. In this case our data only has two regions, which are based on category codes. I just wanted to point that out along with the recommendations.
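A hedged sketch of the margin checks shown on that tab: compute margin per order and flag extreme negatives and "suspiciously perfect" margins. The thresholds and column names are assumptions chosen to mirror the narration, not the demo's actual rules.

```python
# Sketch: flag extreme negative margins and suspicious near-100% margins.
# Column names and cutoffs are illustrative, not the demo's actual logic.
import pandas as pd

orders = pd.DataFrame({
    "order_no": [1001, 1002, 1003, 1004],
    "revenue":  [500.0, 480.0, 510.0, 490.0],
    "cost":     [350.0, 900.0, 5.0, 345.0],
})
orders["margin_pct"] = (orders["revenue"] - orders["cost"]) / orders["revenue"] * 100

extreme_negative = orders[orders["margin_pct"] < -25]        # e.g. order 1002
suspiciously_perfect = orders[orders["margin_pct"] > 95]     # e.g. order 1003

print(extreme_negative[["order_no", "margin_pct"]])
print(suspiciously_perfect[["order_no", "margin_pct"]])
```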
For the warehouse analysis, I wanted data that could tell me about the efficiency of my warehouses in terms of their proximity to my customers, and have the system tell me whether there are optimizations that could be made. Here I see that for shipments to Manhasset, NY for these specific items, I'm pulling from two or three different warehouses. For this item, there's a warehouse 145 miles away, but in some instances we're going 1,600 miles to fulfill orders to Manhasset, NY. Here's an opportunity to look closer at our warehouses and maybe optimize our inventories at locations like Billings, MT and Chicago, IL. So you can see we have a lot of opportunity. Again, there are different types of charts, including cluster charts. Here I can see that my cost per mile and my distance are clustered into these different groupings, and I can see what my average cost per mile tends to be based on where my warehouse is. Looking even closer at this specific set of data, the centroid view looks at the center of each cost cluster, and I can try to see where I have unoptimized inventory and what my costs are, like we did a moment ago.
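A hedged sketch of that clustering: group shipments by distance and cost per mile with k-means and read off the cluster centroids. The shipment figures and the choice of three clusters are assumptions for illustration; the demo does this inside Oracle Machine Learning.

```python
# Sketch: cluster shipments by distance and cost-per-mile, then inspect centroids.
# Shipment numbers and k=3 are illustrative; the demo does this in Oracle ML.
import numpy as np
from sklearn.cluster import KMeans

# [distance_miles, cost_per_mile] for a handful of shipments
shipments = np.array([
    [145, 1.9], [160, 2.0], [150, 1.8],        # short hauls from a nearby warehouse
    [820, 1.4], [860, 1.5], [790, 1.3],        # mid-range lanes
    [1600, 1.1], [1550, 1.2], [1650, 1.0],     # long hauls, e.g. to Manhasset, NY
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(shipments)
for center in kmeans.cluster_centers_:
    print(f"centroid: {center[0]:.0f} mi at ${center[1]:.2f}/mi")

# Shipments far from their centroid, or sitting in the long-haul cluster when a
# closer warehouse stocks the item, are the reallocation candidates.
```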
Now, if I'm running this fairly often during the day, I can even pick out individual orders where I may be able to save a fair amount of money by sourcing the order from another warehouse. In this case, for order 2878, if I have the inventory at a closer warehouse, I have an opportunity to save $20,000 before it's staged and sent out. And on this last tab, reallocation, we're looking at a summary telling us we have the opportunity to save $100,000 by optimizing where our inventory is coming from. We can see these items here: really, I have $70,000 worth of transportation costs that I could be saving a year just by optimizing where these two items sit. These visualizations immediately draw your attention to potential problems in your warehouse and your operations. They enable your team to investigate root causes, prevent losses, and improve inventory accuracy and efficiency.
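A hedged sketch of the arithmetic behind those per-order and reallocation savings figures: for each order, compare the transport cost from the warehouse that actually shipped it against the nearest warehouse that also stocks the item, and sum the difference. The distances, rates, and inventory here are invented for illustration; the real numbers come from JDE.

```python
# Sketch: estimate savings from sourcing each order at the closest stocked warehouse.
# Distances, rates, and inventory are invented; real data comes from JDE.
orders = [
    # (order_no, item, shipped_from, distance_miles)
    (2878, "ITEM-A", "Billings, MT", 1600),
    (2901, "ITEM-A", "Chicago, IL",  145),
    (2933, "ITEM-B", "Billings, MT", 1450),
]
# Nearest warehouse (to the customer) that also stocks the item, with its distance.
nearest_stocked = {"ITEM-A": ("Chicago, IL", 145), "ITEM-B": ("Chicago, IL", 300)}
COST_PER_MILE = 1.25   # assumed blended transport rate

total = 0.0
for order_no, item, shipped_from, miles in orders:
    best_wh, best_miles = nearest_stocked[item]
    saving = max(0.0, (miles - best_miles) * COST_PER_MILE)
    total += saving
    if saving:
        print(f"order {order_no}: ship from {best_wh} instead of {shipped_from}, "
              f"save ~${saving:,.0f}")
print(f"estimated reallocation opportunity across these orders: ~${total:,.0f}")
```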
And the power of our solution isn't just in finding anomalies, it's in translating these findings into real, actionable business intelligence. We process the results from the machine learning models and present them in a clear, business-friendly format. For each identified anomaly, we provide insights into what might be happening and offer potential recommendations for action or further investigation. And this empowers your business decision-makers with the information they need to respond quickly and effectively. And of course, this means less time spent hunting for problems and more time spent making informed decisions that protect your business and improve performance. So by implementing our AI-powered anomaly detection, you gain a truly powerful tool to automatically monitor your critical business processes and identify hidden issues and transform your data into a strategic asset.
Closing Remarks
Okie doke, you there? Drew, can you hear me? Yeah, I'm here. Okie doke, pull the presentation back up. Great, get the slide deck back up, and we'll have a few more words and then wrap up. Here we go. Any questions from anyone after seeing that? Pretty great dashboards integrated inside of JD Edwards, right? So, pretty cool. From a data warehouse perspective, that's probably a use case many of you have. If, however, you have other use cases, if you're not using a warehouse or don't have one but have other scenarios, we touched on manufacturing and we touched on financials. This kind of anomaly detection, conformity, machine learning solution is something we're looking at and creating for different areas of JD Edwards, because getting deeper insights into your business, as Mario discussed, is incredibly valuable. So please don't take this session as being only about warehouses; we have other solutions and are happy to have conversations about any other areas where this kind of functionality would help you get a deeper pulse on your business. OK, well, it looks like there are no questions; I don't see any in the chat or the Q&A. So thank you, everyone, for joining us today for Smarter Analytics: Using Machine Learning to Detect Anomalies and Drive Accuracy in JD Edwards. If you have any questions for Drew or myself, you can see our email addresses here on the slide. Feel free to reach out; we're happy to have follow-up conversations and address any questions that come up afterward. There are still a few more sessions this afternoon, including a customer panel session later this afternoon that was a recent addition, so keep an eye out for that one if you want to hear from your fellow customers and colleagues. Otherwise, have a good rest of the day, everyone.