The Truth About AI in JD Edwards: It Starts With Data

March 23rd, 2026

17 min read

By Nate Bushfield

 

This episode explores why clean and complete data is the critical foundation for successfully implementing AI in JD Edwards. While many organizations are eager to adopt AI for automation and optimization, they often overlook the importance of data readiness. The discussion breaks down the difference between incomplete and dirty data, highlights common problem areas, and explains how poor data can lead to unreliable AI outcomes, reduced ROI, and operational risk. Through real-world examples and practical insights, the episode emphasizes that AI cannot fix bad data—it only exposes it—and outlines what must be true about your data before AI can deliver meaningful value.

 
 

Table of Contents

  1. AI Excitement vs Data Readiness
  2. Defining Clean vs Complete Data
  3. Common Problem Areas in JD Edwards Data
  4. Why Data Becomes Incomplete or Dirty
  5. How JD Edwards Still Works with Bad Data
  6. Data Decay, Accounting Impact, and AAI Example
  7. Mergers, Data Integration, and AI Limitations
  8. Risks of Rushing into AI
  9. Real-World Example
  10. AI Behavior, Error Detection, and Final Takeaways

Transcript

AI Excitement vs Data Readiness

Is your organization excited about AI in JD Edwards? But are you sure your data is actually ready for it? What happens when you deploy AI on top of inconsistent master data, incomplete setups, or reconciliation issues? Today we're breaking down why clean, complete data is the non-negotiable foundation for AI inside JD Edwards, and what data readiness really means. By the end of this episode, you'll know exactly why AI can't fix bad data and what must be true before it can deliver real value.

Welcome back to Not Your Grandpa's JD Edwards. AI is dominating every executive conversation right now. But here's what we're seeing: companies want AI agents, they want automation, they want optimization, but they haven't addressed their data foundation. Today's episode is focused on one thing: if your data isn't clean and complete, AI will not work the way you expect it to.

So Manuel, welcome back to the pod. For the people out there who are seeing your face for the first time, can you give us a little background on you and JD Edwards, and maybe even AI a little bit? Yes, of course, Nate, and it's a pleasure being here with you, as always. It's always a fun time to have a conversation on the podcast. I'm Manuel Naira, vice president of artificial intelligence and products at ERP Suites. I've been here four and a half years now, and prior to that I was part of the JD Edwards product organization through the pre-acquisition phases as well as the PeopleSoft and Oracle acquisitions. So yeah, I have a little bit of background on JD Edwards. Just a little bit.

 
 

Defining Clean vs Complete Data

So yeah, obviously you're a recurring guest, so you know how this process works. But let's define the problem clearly from the start. What does clean and complete data actually mean when we're talking about preparing JD Edwards for AI? That's a great question. Sometimes folks lump those two together, but they're actually different. With ERP systems, and certainly with JD Edwards, there's a plethora of setups that you do. It's a flexible system, and that's something we always took a lot of pride in with JD Edwards: building an ERP that was flexible across all layers. Now, flexibility means configuration setup, which also means data that you're setting up, and that data has many options, some of which might look innocuous if you leave them blank. In the traditional world, blank was sometimes an allowed value and sometimes not. But for AI, a blank could represent something; AI could interpret something from a field being blank. That's where the incompleteness scenario comes in: fields were not completed, and you're not explicitly saying "no, we're not doing this" or excluding that. Leaving it blank in your normal business process would be innocuous; with AI, not necessarily, depending on how the agent is inferring things. So that's the incomplete side of it. The bad data, or dirty data, happens with time. Businesses evolve, situations change, new entities are onboarded and offboarded, and those events don't trigger the maintenance they should.
So you build up baggage, and sometimes that baggage becomes bad data, which will also affect AI in certain situations. Now, it depends on the function. If we have an agent working in area A, let's say manufacturing, and these issues exist in another, isolated area, obviously that's not going to impact it. But if those practices exist, it's likely they were left to age across the board.

 
 
 
 

Common Problem Areas in JD Edwards Data

Yeah. So what would be the most common areas where some of these customers or companies out there should look for unclean data, or even incomplete data? That's a great question. There are three categories of data, and I'll make this brief because I know we want to keep this to the point and be informative at the same time. There's master data, which is a common area that requires upkeep, requires governance, and we'll talk about that some more in a minute. Suppliers and customers are one area: if they're not kept up, if the functions they're performing for the company are not properly represented, it can cause issues. Missing tax IDs; payment terms that are inaccurate; category codes, which are a very powerful construct in JD Edwards, that don't evolve as business processes evolve will lead to issues for sure. Not to mention location data and contact information, which could lead to payment failures, reporting inaccuracies, and whatnot. It's the same kind of thing in other areas. Going outside of that, in item master and branch plant setups you could have duplicate, or seemingly duplicate, items that exist across different branch plants. They're really the same thing, but they have different settings, so which one wins out becomes the question, and you can see agents struggling with that. So yeah, those are probably the most common areas.
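The kinds of gaps described above (missing tax IDs, blank payment terms, empty category codes) are straightforward to scan for once the records are extracted. Here is a minimal sketch of such a completeness check; the field names (`tax_id`, `payment_terms`, `category_code`) are illustrative stand-ins, not actual JD Edwards column names:

```python
# Illustrative required fields; in a real assessment these would map to the
# specific supplier master columns your business actually relies on.
REQUIRED_FIELDS = ["tax_id", "payment_terms", "category_code"]

def find_incomplete(records, required=REQUIRED_FIELDS):
    """Return (record_id, missing_fields) pairs for records with blank required fields."""
    issues = []
    for rec in records:
        missing = [f for f in required if not str(rec.get(f, "")).strip()]
        if missing:
            issues.append((rec["id"], missing))
    return issues

# Sample supplier records standing in for an extract from the supplier master.
suppliers = [
    {"id": "SUP-001", "tax_id": "12-3456789", "payment_terms": "NET30", "category_code": "RAW"},
    {"id": "SUP-002", "tax_id": "", "payment_terms": "NET30", "category_code": "RAW"},
    {"id": "SUP-003", "tax_id": "98-7654321", "payment_terms": "  ", "category_code": ""},
]

print(find_incomplete(suppliers))
# [('SUP-002', ['tax_id']), ('SUP-003', ['payment_terms', 'category_code'])]
```

The point of a pass like this is not to fix anything automatically; it is to turn the vague sense of "our data might have gaps" into a concrete worklist before an agent ever touches the records.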

 
 

Why Data Becomes Incomplete or Dirty

But why are they unclean or incomplete? Why is it those specific ones? Yeah, that's a great question. The basic analogy I like to use is that ERP is a living and breathing system, as are our customers' businesses. They evolve, they adjust, and those components need to get updated when some particular event gets triggered that requires a change in settings: how we're going to pay a particular supplier, or how a customer typically likes to pay their invoices to the business. If those don't get kept up, guess what happens? You have legacy situations, and then other parts of the business are assuming different situations. So the higher-level point is governance, or the lack thereof. As things evolve in the business, those changes need to be triggered, or at least reviewed, to make sure everything's kosher. It could be one particular situation that drives that, not to mention general business processes that evolve, or entities that have been retired for ages and need to get rolled off. Let's say there are entities related to a line of business the company isn't even in anymore. If you leave them in there, an AI agent is going to look at them and say, OK, we're still running that line of business, and it's going to cause noise. Those are just a few examples, but it's pretty broad.

 
 
 
 

How JD Edwards Still Works with Bad Data

 

Yeah, so they can cause noise; there are certain issues that come with it. But how can JD Edwards still function operationally, at least, even when your data isn't AI-ready, when those unclean or incomplete datasets exist? Great point. It could be JD Edwards logic. Let's say some fields in the master data, the item master, weren't populated, but the customer wasn't really using them, so the application didn't pay attention to them, and neither did the reports or the business process. It worked great; everything assumed those fields were irrelevant because they were blank. However, as an agent starts looking at things, it will start to infer, and with all the other factors it starts combining, it might question that blank, and that may lead to issues. I'm not saying this will happen every time, but it has the possibility. It would have been better to make certain settings explicit in those master data scenarios. And I haven't even talked about the configuration of JD Edwards beyond master data: all the settings JD Edwards has, with processing options and system constants. Many of these settings have options that make things explicit, listing something as "no, not relevant," and then the AI solution would interpret it as not relevant. If you leave it blank instead, it might not make the same decision. So that's the complete-data side: if you can set that option, you can find your way around it. But if it's data the AI is specifically going to need to use and it's not clean, that's when the real issue comes in.


Data Decay, Accounting Impact, and AAI Example

Yeah. And it starts coming in, right? Then you have dirty data, old data that has been sitting in there. As we've talked about on other occasions, Nate, customers sometimes start up new lines of business and roll off other lines as their businesses evolve. If we look at the accounting space within JD Edwards, one of the things customers have always been fond of is AAIs, automatic accounting instructions. They require a lot of setup, but then you can automate which accounts get touched, updated, reconciled, etcetera, during the accounting process. But if your business changes and you've rolled off a lot of business, there are things to roll off the chart of accounts, and then the automatic accounting instructions need updating too. Not doing that has reconciliation implications. It starts causing delays down the line into your period-end and fiscal year-end close; it adds overhead, not to mention the potential audit scenarios that will be raised: why are these in here? So that's where you start getting bad data. The data isn't relevant anymore, but the agent will react to it, and not only the agent; like I said, it could lead to audit scenarios.
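One narrow version of the AAI problem above, instructions still pointing at accounts that were rolled off the chart of accounts, can be sketched as a simple cross-check. The record shapes, the account numbering, and the AAI item labels here are illustrative assumptions, not the actual JD Edwards table layout:

```python
def stale_aai_entries(aais, active_accounts):
    """Flag AAI rows whose target account is no longer in the active chart of accounts."""
    active = set(active_accounts)
    return [aai for aai in aais if aai["account"] not in active]

# Sample data: a trimmed chart of accounts and two AAI-style rows.
chart_of_accounts = ["1.1110", "1.4110", "1.5010"]   # accounts still in use
aais = [
    {"item": "PC", "account": "1.1110"},   # resolves to an active account
    {"item": "RC", "account": "1.1210"},   # points at a retired account
]

print(stale_aai_entries(aais, chart_of_accounts))
# [{'item': 'RC', 'account': '1.1210'}]
```

Every row this kind of check flags is exactly the "why are these in here?" question an auditor, or an AI agent, would otherwise stumble over at close time.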


Mergers, Data Integration, and AI Limitations

Yeah, yeah. And we just heard about a merger going on in the JD Edwards space as well. Not everyone's on JD Edwards, so it might be a little different there, but if you're trying to roll that data into your JD Edwards system, even if you're running AI and everything on your side is perfect, you've got to make sure the new data coming in from that merger is clean as well, or at least complete. But speaking of this, could AI actually fix the bad data, maybe in a merger or any other situation? Or does it just work with what it's given? It's a great question. If we look at the agents being built across the industry, they're largely functional agents: a payables agent, a receivables agent, a ledger agent, the list goes on. Sales and service is another common one. All of them relate to functional scenarios. Now go back to the massive ocean of configurations, both at the master level and the application level I talked about, and I haven't even mentioned the integration level, where integrations bring options of their own. As that gets amplified, the number of combinations, even with AI, is incredibly numerous. Trying to figure that out through a functional agent is probably not what we're seeing as best practice; functional AI agents should stay focused on functional work. An agent that learns all the permutations of JD Edwards configuration, say for financials or manufacturing or inventory, and can introspect and keep things in order so they don't start falling apart,
that is going to be a possibility in the future, for sure. So it's not like an AI, if you roll it out, will fix your data now. But if you do fix your data, it'll keep it fixed? Correct, correct.

 


Risks of Rushing into AI

That's the enforcement, right? Sometimes, as businesses, we become a little lax with the governance, with the vigilance, with making sure there's a lifecycle to your data, so that as things evolve, a change triggers a workflow, whatever that means in today's world, there are a variety of incarnations of it, to be reviewed, assessed, and then approved before the change is made. Some customers, maybe a good amount of customers, suffer from that: there isn't that governance, that lifecycle, or the naming conventions to keep things accurate. Because if you enter an entity similar to one I create and we don't follow the same standard, guess what? They're not going to look similar. They're going to look completely different.
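The naming-convention point above is testable: records that refer to the same entity but follow different conventions can often be surfaced by normalizing names before comparing them. This is a minimal sketch; the normalization rules and the vendor names are made up for illustration, and a real assessment would tune them to the site's own conventions:

```python
import re
from collections import defaultdict

# Common legal suffixes to drop; longer forms listed first so they match first.
SUFFIXES = (" corporation", " incorporated", " inc", " llc", " corp", " co")

def normalize(name):
    """Lower-case, strip punctuation, and drop one trailing legal suffix."""
    n = re.sub(r"[^a-z0-9 ]", "", name.lower())
    n = " ".join(n.split())
    for suffix in SUFFIXES:
        if n.endswith(suffix):
            n = n[: -len(suffix)]
            break
    return n.strip()

def likely_duplicates(names):
    """Group names that normalize to the same key; keep groups with more than one member."""
    groups = defaultdict(list)
    for name in names:
        groups[normalize(name)].append(name)
    return [g for g in groups.values() if len(g) > 1]

vendors = ["Acme Corp", "ACME Corporation", "Acme, Corp.", "Globex LLC"]
print(likely_duplicates(vendors))
# [['Acme Corp', 'ACME Corporation', 'Acme, Corp.']]
```

A human still decides whether a flagged group really is one entity; the sketch just makes the "completely different-looking" variants visible in one place.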

So anyway, when these companies are looking to go into AI, there are a lot of leaders out there, leadership teams, that really want AI and are pushing for it. But without clean data, without complete data, it could be difficult. So what would happen if a company just rushed into AI without fixing its foundation first? I would say unpredictability is the thing, and it kind of depends on the scenario, Nate. You're rolling the dice a little bit. Ultimately, why are our customers adopting AI? Not just because they want bleeding-edge technology; it's the value, the return on investment. And if unclean data, dirty data, incomplete data is going to cause noise and limit or severely affect the value you get, that's a strong consideration every customer should make before jumping in. Ultimately they want value, and then they want efficiencies and all the buzzwords we hear about. Whether data's impact is massive, medium, or low depends on the case, but you're rolling the dice, I guess.

 


Real-World Example

Is there maybe a real-world example you could give us of a company trying to rush into AI with incomplete data? Yeah, there are several examples, but one that resonates is regarding accounts payable. A customer had issues there and started applying AI, and they started getting some benefits. But then, as the AI got deeper into the system and into understanding it, it started making recommendations that were completely orthogonal to what the business was doing. As we looked at it, it was incorporating some of the elements we talked about earlier that were irrelevant, and it was coming up with something whose value was ultimately watered down, if not worse. Thankfully, we caught it in the prototyping phase. But it certainly echoes the following: if you have a house and you want to build a second floor, the first thing you look at is whether the foundation can support it. ERP systems, and JD Edwards is no exception, are based on data. If your data is not top-notch, you're already risking issues with JD Edwards in terms of the results that come out. But now you're looking at technology that can reason, can infer, can take goals, break them down, and start analyzing. And if that data isn't representative, very closely if not exactly, of what your business is doing, you're running the risk of unexpected results.

 


AI Behavior, Error Detection, and Final Takeaways

Yeah, because it's optimizing a broken process. I mean, you have this AI because it can do a repeatable task, but if that task's process is broken, it's kind of screwed. Yes, obviously there are notifications, there are ways you can check in on it, but you're going to be doing that every single time, so there's a bit of an issue there. Plus, would the AI even know that it's doing something wrong? Would it flag it, or would it assume that's just what the process looks like? What would that look like?

That's a great question, and there could be scenarios where it would. Remember, if we're talking about agentic AI, agents have goals. They break those goals down into tasks and keep working until they hit the goal. If an agent performs an action that is suboptimal, it will learn. Now, if the data is an impediment to doing that, whether it's dirty data or incomplete data, then at a certain point it's likely to be spinning and get stuck. At some point it'll proverbially tap out and say, there's something missing here, I can't resolve this. If we consider that a flag, that's probably the most it could be. It's hard for me to compartmentalize exactly what it would look like; it could flag it, or it could not.

You know, some of these data issues might not impact the customer for 50% of their business processes, and AI may just work wonderfully. But when they get to that other 50%, or that other 10%, whatever it may be, and now they want optimization from that part of the process, eventually it's going to crop up. So it's a bit of a needle in a haystack: where is it going to pop up? I would do it myself, too. If I had my own business and I was going to adopt AI, the first thing I would do is make sure my data was complete, clean, and set to go before I built that second floor of my house with AI, to make sure I get the value for the investment I would be making.

So, earlier we talked about rolling the dice. What would be the financial risk of deploying AI on unstable ERP data? That's a good question. It's limiting your ROI. And that's the thing, that's the rolling of the dice: the limit could be minimal, or it could be significant, in terms of the value the customer gets from the agent or the AI solution. We're in an era where we're looking at leveraging agents to help us execute our business processes, and those are critical ones. We've talked about accounts payable, inventory management, manufacturing. They need to run not only as efficiently as possible, but correctly, without noise. Introducing that possibility, well, I don't know about you, Nate, but I wouldn't roll the dice with that.

Yeah, I can't say that's something a lot of companies should try to do. Again, you said it perfectly earlier: if there's 50% of your business, or whatever percent, that you're specifically trying to roll AI out to, that's risky. And if you have questions about the data, across the board or in the specific spot where you're truly trying to implement AI, it's probably not a good idea to do it, because you're not going to get value from AI in a business process you can't trust. And that's just how the cookie crumbles, as some say.

But before implementing AI in JD Edwards, what must be true about a company's data? It needs to be clean, accurately reflecting the current state of the business across what I would call the three layers: master data; the configuration side of the house, the processing options and system constants I talked about; and the integration layer. Those three, I think, are a prerequisite. I know there's a lot of detail behind each of those top-level topics, but making sure the data is clean, making sure data that's not being leveraged and is no longer relevant gets properly retired, and having a governance model to keep that up going forward: then you're in a position as a customer to get the full value and return on investment from an AI solution. A hundred percent.

But how could you assess whether your data is truly complete or clean, if you don't know? No, great question. There are different types of metrics you can leverage to look at it. Because while I talked about the side effects of incomplete or dirty data maybe not being impactful to JD Edwards, that's not completely true, and I want to clarify that. It could be minimal, like the analogy I like to use of a little tiny pebble in your shoe. It's there, but it's not causing any major issues, so you're living with it, because you're busy as all get out with helping your customers, selling products, marketing, etcetera.

So there are indicators that show up in JD Edwards, and they exacerbate over time. You see them in JD Edwards, and I think it's important that we listen to those things, that we listen to that little pebble, figure it out, and manage it so it doesn't become a big problem. Unfortunately, I've seen situations where there are big problems across those layers of configuration data and setup data that I talked about. Did I answer your question? Yeah, yeah.
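One concrete way to put a number on the "pebble," as the metrics discussion above suggests, is a per-field fill rate: for each field a process depends on, what share of records actually has a value? This is a minimal sketch with made-up customer records and field names, not a JD Edwards extract:

```python
def field_completeness(records, fields):
    """Per-field fill rate: the share of records with a non-blank value."""
    total = len(records)
    return {
        f: sum(1 for r in records if str(r.get(f, "")).strip()) / total
        for f in fields
    }

# Sample records standing in for a customer master extract.
customers = [
    {"id": 1, "tax_id": "11-1111111", "terms": "NET30"},
    {"id": 2, "tax_id": "", "terms": "NET30"},
    {"id": 3, "tax_id": "22-2222222", "terms": ""},
    {"id": 4, "tax_id": "33-3333333", "terms": "NET45"},
]

scores = field_completeness(customers, ["tax_id", "terms"])
print(scores)  # {'tax_id': 0.75, 'terms': 0.75}
```

Tracked over time, a score like this turns the vague feeling that data is decaying into a trend you can act on before it becomes the big problem described above.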

So maybe, what are some of the signs that a company isn't ready for AI? Sign number one: if there are pebbles, or maybe big rocks, in your data that you're seeing become issues, you have to acknowledge it, and then address it. Number two, and it gets deeper in some of these areas: if you have processes that are taking more time than you should expect, that's also a sign. One area is the period-end close and the fiscal year-end close, which is a complex process, so it could be many of the factors we talked about. But if in that process in particular, around reconciliations, you find you're spending an extraordinary amount of time, that's another indicator something is going on.

Those are a couple of examples, Nate, but to generalize: if you're seeing tasks that are taking longer, not necessarily to execute, to run a report or whatnot, but the whole process is riddled with delays, with a lot of things that seem mainstream by now but are really anomalous, you need to listen to those and address them. If your business is running largely well, that is a good sign. It's still worth looking under the hood, doing that assessment, looking at the data, and doing a bit of a spot assessment to make sure.

Because again, for us, in terms of AI, what we're hoping for is to unlock that value for our customers, that return on investment. If they don't succeed, we don't succeed; you know the culture we have here at ERP Suites. So when we speak to customers, data comes up very early in the conversation. We talk about what they would like to do, but then, when we really get into the implementation side of the house, we take a look under the hood and provide feedback. And we have found things, and we've corrected things that could have limited the customer's value. That's part of what we do as part of implementations.

Yeah. If you want to learn more about an AI assessment and our take on it, we had a podcast come out two weeks ago with Drew Rob that goes a little deeper into what an AI assessment truly is in terms of your business and how we can help. But if you're exploring AI and JD Edwards, don't start with the agent; start with your data. If you're unsure whether your foundation is ready, ERP Suites can help assess your data environment before you deploy AI, because AI doesn't create clarity. It depends on it. Visit erpsuites.com/AI to start the conversation and schedule your AI session today.

But that's it for today's episode of Not Your Grandpa's JD Edwards. Manuel, huge shout-out to you for jumping on and talking about something we've mentioned on this podcast a lot but never dove deep into. So if you're thinking about AI, here's the takeaway: don't start with the technology, start with the truth about your data. AI doesn't correct bad foundations; it exposes them. If this episode helped clarify that, share it with your team, especially the ones pushing for your AI initiatives. Until next time: build the foundation first, then build the future. No clean data, no AI. Catch you next time.


Nate Bushfield

Video Strategist at ERP Suites