Architecting AI Agents for JD Edwards EnterpriseOne

September 18th, 2025

14 min read

By Nate Bushfield

 

This session, led by Charles Anderson, explores the role of AI agents in enhancing JD Edwards EnterpriseOne through Oracle Cloud Infrastructure (OCI). It explains how generative AI, retrieval-augmented generation (RAG), and digital assistants can bridge automation gaps, deliver contextual insights, and reduce manual tasks. Key topics include orchestrator integration, serverless architecture, managed vs. custom RAG services, and practical use cases like document processing, guided transactions, and anomaly detection. The discussion emphasizes trustworthy data, scalability, and phased AI adoption, highlighting ERP Suites’ structured AI journey framework.


Table of Contents

  1. Speaker Intro & Background
  2. Addressing the Automation Gap
  3. What Are AI Agents?
  4. Building Blocks of RAG & Oracle AI Stack
  5. Oracle GenAI Agent Service & Key Features
  6. AI Agent Service Possibilities & Architecture
  7. GenAI Service Components & Integration with JD Edwards
  8. Key Takeaways


Transcript

Speaker Intro & Background

I'll go ahead and turn it over to Charles. Charles Anderson, with ERP Suites, is our lead JDE cloud engineer, and he's going to talk to you about AI agents. Hopefully you can learn something here. I'll let you take it away, Charles.

Thanks, Scott. Yes, I'm Charles Anderson, lead JDE cloud engineer and formerly senior CNC consultant. I've been involved with JD Edwards EnterpriseOne for over 20 years. I'm not even going to say exactly how long; it's been a while. Not 30, but over 20. We started looking at the GenAI service, and at AI in general on OCI, over a year ago, and one of the things we looked at was the digital assistant. We've got the Franklin digital assistant platform, which is built on Oracle Cloud Infrastructure, or OCI, and we started looking at all the options we have in the toolbox. One of those is the Generative AI service, which gives you the ability to connect the digital assistant to a large language model. You have multimodal large language models available on OCI: there are the Cohere models and the Meta Llama models, and you can also host your own. But is it required? The answer is no. You don't need a generative AI service to be able to use the Franklin digital assistant or any Oracle OCI digital assistant. It does give you more capabilities, though: you can have a natural language conversation with a front end that coordinates with the digital assistant, which in turn can coordinate with, say, EnterpriseOne Orchestrator and kick off orchestrations for you.
You could also have the RAG agent, the retrieval-augmented generation agent, go out and review documents you've uploaded to a knowledge base, really extending the amount of information available to you from within EnterpriseOne. And if you integrate the Franklin digital assistant within EnterpriseOne, as some of our other sessions have shown, you can extend that with a RAG agent and, for instance, utilize OCI Search with OpenSearch functionality to access that knowledge base information you've uploaded. It could be customer documents, purchase orders, anything you can imagine; you're really only limited by your imagination in what you extend it to. But you do have a lot of control over that. You don't necessarily want to extend out to public information that might be incorrect, for instance. You want to tailor that information to your business needs.


Addressing the Automation Gap

I guess we should start from scratch here, if I can get the presentation arrows to work. There's an automation gap in JD Edwards, and there are some pain points: manual data lookups, fragmented knowledge, lengthy training processes. The opportunity is that an AI agent can help deliver contextual answers and automate some of those repetitive tasks. One of the things we were looking at with the digital assistant is automation, but we didn't want to focus on that specifically; it's really there to make your job easier, not necessarily to automate everything. But consider this analyst prediction: 70% of ERP interactions will involve GenAI agents by 2027. That's only two years away, right? Is that true for JD Edwards EnterpriseOne? Probably not; it remains to be seen. It all depends on what you do with it, because Oracle has published guidance that they're opening up the tool set to a degree, giving us functionality to authenticate to OCI services from the latest EnterpriseOne tools releases. Whether you integrate the GenAI agent is really up to you. So will it be 70% of ERP interactions for all JD Edwards EnterpriseOne customers? I'd say no, it's probably lower than that. But you have the ability to drive that forward.


What Are AI Agents?

I guess we should start with: what are AI agents? They're autonomous workflows driven by LLMs plus tools or actions. There's a core loop: observe, plan, act, reflect. There's conversational assistance, like we were talking about earlier with the digital assistant, and orchestration bots, so you can have a bot-driven architecture, and there are decision agents. For those decisions, you definitely want to have a human in the loop.

Value to the ERP system, or to the business in general, includes faster answers, guided transactions, and, as it says there, reducing the "swivel chair" of turning around to ask somebody a question whose answer you could get yourself. RAG 101, the key stages of retrieval-augmented generation: it goes from a query to retrieving chunks. I put a definition over here for what a chunk is: it refers to smaller, more manageable segments of text extracted from larger documents or data sets. From that retrieval, it augments your prompt (the prompt being what you provide to the large language model), and from there it generates an answer. And it's a loop, because you can continue to iterate on questions and answers, and the more documents you load into your knowledge base, the more information you have and the more accurate the answers are likely to be.
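That query, retrieve, augment, generate loop can be sketched in a few lines of Python. This is a toy illustration only (keyword overlap stands in for vector embeddings, and the final model call is omitted), not how the OCI service is implemented:

```python
# Toy RAG loop: retrieve the best-matching chunks by keyword overlap,
# then splice them into the prompt sent to the LLM. A real retriever
# would use vector embeddings, and an LLM call would follow.
def retrieve(query, chunks, top_k=2):
    q = set(query.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q & set(c.lower().split())),
                  reverse=True)[:top_k]

def augment(query, retrieved):
    context = "\n".join(retrieved)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

knowledge_base = [
    "PO 4510 was approved by J. Smith on 2025-03-02.",
    "Sales order 8842 shipped from the Denver branch plant.",
]
print(augment("Who approved PO 4510?",
              retrieve("Who approved PO 4510?", knowledge_base)))
```

Iterating the loop just means feeding follow-up questions back through `retrieve` and `augment`, which is why loading more documents into the knowledge base tends to improve answers.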

This is one of the major developments of the last couple of years. AI has been going on for 50 to 60 years, depending on how you define it, so what's the big push around AI right now? It's the commoditization of easy access to large language models that do predictive text. The LLMs are getting bigger, the GPUs (the graphics processing units that run all the math behind this) are getting bigger, data centers are rolling them out, and all the cloud vendors are getting into AI. That's been the big push. And RAG has been one of the major benefits of using something like ChatGPT. A few years ago, you'd ask ChatGPT a question and the answer would be, "Well, my data is based on 2019 or 2020 data." Now it has the ability to do a real-time lookup on the broader internet if you allow it to. Of course, there are some IP concerns about what information it's accessing and how the model is trained on that information, but that's really outside the scope of this discussion.


Building Blocks of RAG & Oracle AI Stack

So, the building blocks of RAG: data ingestion and chunking, then embeddings and vectors. For instance, there's OCI Search with AI vectors in the database, Oracle Database 23ai has a vector database built in, and MySQL HeatWave has vector search built in. Then there's the retriever and the LLM generation endpoint. Again, the LLM could be Cohere or Meta Llama, or it could be OpenAI if you use an API call out to the public internet for ChatGPT on the back end.
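As a concrete (and deliberately simplified) picture of the ingestion step, here is what chunking a document into overlapping word windows might look like. Production pipelines typically split on tokens and respect sentence boundaries; the sizes here are illustrative:

```python
# Split a document into overlapping word-window chunks, the first step
# of a RAG ingestion pipeline before embedding. Sizes are illustrative.
def chunk(text, size=50, overlap=10):
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

doc = " ".join(f"word{i}" for i in range(120))
pieces = chunk(doc)
print(len(pieces))  # 120 words at size 50 / overlap 10 yields 3 chunks
```

The overlap means each chunk repeats the tail of the previous one, so a fact that straddles a boundary still lands intact in at least one chunk.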

Then you have the orchestrator agent framework. I believe Frank Jordan had a pretty good session yesterday on using Franklin and orchestrations with JD Edwards. That doesn't actually require a RAG agent; it's just something you can use to extend the user experience and also expand how much information it has access to as part of that workflow.

Here I've pulled in a logical diagram of the Oracle AI stack that you have available to you on OCI.

GenAI Agents is a service that was added, I believe, a little more than a year ago, but the functionality has been expanding. There's the digital assistant, the Speech service, Language service, Vision service, and Document Understanding. That last one is something we've been working with in proof of concept: being able to bring in receipts and do receipt processing, or purchase order and sales order processing, that sort of thing.


With the Autonomous Database and Select AI, you have natural language query processing, where you don't necessarily have to know how to form a SQL statement and do a join. You can tell Select AI, "I want to join these tables; here are the table names, or here are the subject areas," and have it go find that information for you. There's machine learning built into the Oracle database, and a whole host of AI services available in OCI. Obviously, OCI is not the only game in town in terms of AI, but it's what I've been focusing on, and what we've been strategically focusing on at ERP Suites in terms of our AI development in the cloud. AWS has good services on offer as well, as do other vendors; that's why I mentioned using an API call to OpenAI, for instance. You don't have to host your own LLM. You don't need to build a generative AI cluster in OCI, but you can if you want to: you can build your own cluster, train it on your data, and increase your costs by doing so.
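To make the Select AI idea concrete, here is a rough sketch of what any natural-language-to-SQL layer does under the hood: prepend schema metadata to the user's question before handing it to an LLM. This is not Oracle's actual implementation, and the JD Edwards table and column names are just illustrative:

```python
# Sketch of the prompt a natural-language-to-SQL layer (like Select AI)
# might assemble: schema metadata plus the user's question, so the LLM
# can generate a valid join. Table/column names are JDE-style examples.
def build_nl2sql_prompt(question, tables):
    schema = "\n".join(f"- {name}: {', '.join(cols)}"
                       for name, cols in tables.items())
    return (f"Given these tables:\n{schema}\n"
            f"Write one SQL query answering: {question}")

tables = {
    "F0101": ["ABAN8 (address number)", "ABALPH (name)"],
    "F4211": ["SDAN8 (sold-to)", "SDAEXP (extended amount)"],
}
print(build_nl2sql_prompt("total sales per customer name", tables))
```

The value of the pattern is that the user only supplies the question; the subject-area metadata is attached automatically.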

What we focused on, though, is a serverless architecture. That's really what we've geared this presentation around: the generative AI or RAG service integrating with JD Edwards in an OCI environment with a serverless architecture. JD Edwards being on-premises-style software, not software as a service, you're going to have compute instances, either VMs in your data center, at ERP Suites, or in the public cloud. It could be AWS or OCI; it doesn't really matter where the EnterpriseOne instance lives, you can still call these AI services from OCI.


Oracle GenAI Agent Service & Key Features

And that's one of the recent announcements; I think we'll get there in a minute. The Oracle OCI GenAI Agent service is a fully managed RAG pipeline. When we say fully managed, we're talking about all the server setup being done for you by Oracle. You don't have to go out and build servers to do this. If you want a custom RAG service, you might write a Python application and run it on your own server. You can certainly do that, and there's a link later on to some instructional information on how to. But you don't have to; it's just an option.

There are built-in connectors in the Generative AI Agent service for Object Storage, OCI's S3-compatible object storage. Fusion apps obviously have AI integrated in, but JD Edwards doesn't, and that's where we come in: to help customers figure that part out, do some of the legwork, or do a proof of concept or some development.

Data connectors: for JD Edwards tables, there's no JDE-specific connector, but you can say, "I've got an Oracle database or a SQL database out here; I want to connect to those endpoints, those tables, and bring them into the generative AI."
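A data connector of this kind essentially reads rows and flattens them into text documents for ingestion. Here is a minimal stand-in using SQLite so it stays runnable (the real connectors target Oracle and SQL Server; the table and data below are made up):

```python
import sqlite3

# Stand-in for a database data connector: read rows from a table and
# flatten each into a text "document" ready for chunking and embedding.
def rows_to_documents(conn, table):
    cur = conn.execute(f"SELECT * FROM {table}")
    cols = [d[0] for d in cur.description]
    return [", ".join(f"{c}={v}" for c, v in zip(cols, row)) for row in cur]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE po (number INTEGER, supplier TEXT, amount REAL)")
conn.execute("INSERT INTO po VALUES (4510, 'Acme Corp', 1250.00)")
docs = rows_to_documents(conn, "po")
print(docs[0])
```

Once rows are flattened to text like this, they go through the same chunk/embed/index path as uploaded documents.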

And as I mentioned earlier, there are multiple model endpoints: Cohere, Llama 3, and so on. And again, it's pay as you go, so OPEX, not CAPEX. You don't have to buy a bunch of servers with GPUs, figure out the power costs, and host all that yourself. You can just sign up for OCI and deploy these services as quickly as you want.

Key benefits and features of the OCI RAG service: it democratizes data access. Some IT departments might be a little more controlling than others, but the idea is to make it easier for the power users and end users you trust to access the data they need to do their jobs, and more up-to-date data at that. That's the idea with RAG that I mentioned earlier: access to more current information than what's baked into a trained LLM that might be based on old or out-of-date data.

Understandable contextual results—so is the data that it's kicking back to me something that I understand? Obviously, sometimes JD Edwards might kick data back to you that you don't understand and it might be in a format that you don't understand. So having the ability to convert that to English or whatever your native language is could be a big boon to productivity.
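One concrete example of data JD Edwards kicks back in a format users don't understand is its "Julian" date format (CYYDDD). A small conversion helper like the one below is exactly the kind of translation step an agent can apply before presenting results:

```python
from datetime import date, timedelta

# JD Edwards stores dates in "Julian" CYYDDD format: C is the century
# offset from 1900, YY the two-digit year, DDD the day of the year.
# Converting to a readable date is a typical "make it understandable" step.
def jde_julian_to_date(jde):
    c, yy, ddd = jde // 100000, (jde // 1000) % 100, jde % 1000
    year = 1900 + 100 * c + yy
    return date(year, 1, 1) + timedelta(days=ddd - 1)

print(jde_julian_to_date(125001))  # → 2025-01-01
```

The same pattern applies to user-defined codes and other internal representations: resolve them to plain language before they reach the user.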

Key features: a simple agent setup. There's an example linked in the presentation as well that shows three steps to set up an Oracle RAG agent, from deployment all the way through testing. Simple agent setup, tools, orchestration. I don't need to read all of this, but as I mentioned earlier, there are guardrails around it, so you can control what information it has access to, keep a human in the loop, and so on.

As I also mentioned earlier, Release 25 brings an extended authentication mechanism: Orchestrator can use OCI services natively, using the Oracle Cloud Infrastructure API signature version 1 to authenticate to services such as Document Understanding. That way you don't have to build that integration yourself or put a gateway or bridge between JD Edwards and OCI; you can just connect natively. That's dependent on the tools release you're on, Release 25 being the marketing umbrella for the apps and tools releases. You may be able to get this functionality even if you're on an older version of the apps, but it's recommended to do both at the same time if you can afford it, in either the time or the expense of any retrofits you might want to do.

There's a whole lot of functionality always being released with the continuous innovation of JD Edwards EnterpriseOne, so you can get more value out of the ERP system you've invested in and have probably been running for many years. And I've been a champion of JDE throughout my career for the past 20-plus years, so I'm definitely biased; I'm not going to sit here and say I'm not biased towards Oracle services or JD Edwards EnterpriseOne. I've worked with IBM, I've worked with Microsoft, but this is what I've focused on for a long time now.


AI Agent Service Possibilities & Architecture

AI agent service: endless possibilities. Virtually endless; I shouldn't say endless, because it's subject to your corporate policies. For instance, do you want the ability to connect to third-party data from EnterpriseOne? Is there going to be an audit concern with that? Is it going to be vetted information? Are you going to make sure the model isn't hallucinating when you're making business decisions? That's definitely important, and it's one of the things about a RAG agent: the idea is that it reduces hallucinations, but in reality they're still possible. So you want to curate the data, limit the data, and test to make sure the data you're accessing is correct and that the results you're getting back make sense and map back to the source data.
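One cheap safeguard along these lines is a grounding check: before trusting an answer, verify that every citation the agent returns actually exists in your knowledge base and that the quoted text appears in the cited document. A minimal sketch (the citation format here is hypothetical):

```python
# Grounding check: confirm each (doc_id, quote) citation in an answer
# points at a real knowledge-base document containing that quote,
# a cheap guard against hallucinated citations.
def grounded(answer_citations, knowledge_base):
    return all(
        cid in knowledge_base and quote in knowledge_base[cid]
        for cid, quote in answer_citations
    )

kb = {"doc1": "PO 4510 approved by J. Smith.", "doc2": "Order 8842 shipped."}
print(grounded([("doc1", "approved by J. Smith")], kb))  # → True
print(grounded([("doc3", "anything")], kb))              # → False
```

A check like this doesn't prove the answer is right, but it does prove the answer maps back to your source data, which is the property the talk is asking for.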

Again, you control the digital chain of AI services from end to end. You’re in control of what you deploy and you pay for those and pay as you go, right? With OCI, you don’t get access to every service just because you sign up for OCI. You’ve got to go out and deploy those.

Again, as I mentioned earlier, we set up a serverless architecture. That obviously means there are servers running in the background, but you don't have to manage them. Those services could be running in Kubernetes, for instance, deployed to a cluster somewhere, but it doesn't matter what technology they're deployed on; the idea is that there's a service out there with an API you can use to reach it, and you don't have to worry about managing or patching servers. The exception is when there are specific features you want: with the digital assistant, you might have a skill out there that you want to update to the latest release version so you have the latest features available to you. But you can do that when you choose to; there's not necessarily going to be forced downtime for that sort of thing.

And again, you can extend it with OCI compute, as I mentioned earlier, or use it standalone. You can build your own custom OCI generative AI agent service with its own user interface, or you can integrate it with EnterpriseOne on the back end. That's not something exposed to the E1 tool set as a design aid, for instance, but that doesn't mean you can't build an integration, whether through custom programming or just calling an orchestration.

This is an example architecture you could build with various OCI services: AI services, generative AI embedding, generative AI chat models, identity and access management in OCI, and a REST server running a Python application on a compute instance. You could have multiple servers behind the OCI load balancer, with an API gateway providing access to them. APEX is another piece of functionality that's been extended with AI services: a serverless, database-driven application platform with powerful AI features built in. And you could use the OCI digital assistant or the Franklin digital assistant. So there are lots of options available to you, and these aren't the only ones. If you wanted to use your own SQL Server database, you certainly could; this is just one example where we use the Oracle Autonomous Database and access its data from a Python application running on clustered VMs.


GenAI Service Components & Integration with JD Edwards

The GenAI service has two key components: retrieval and augmented generation. Obviously you have to supply data in a knowledge base for it to retrieve, but the idea is that once it understands that data, it can provide results back and augment your prompt with that information. That could run against OpenSearch, an Oracle database vector index, and so on. The point is that you design your own AI agents; Oracle isn't going to provide them for you for EnterpriseOne.

Various ways you could integrate with EnterpriseOne: AIS orchestrations, as we mentioned; event rule triggers that you design within the tool set; Orchestrator and webhooks within your web application for outbound calls; and E1 UX One, the embedded interface we talked about earlier with the RAG agent, for a conversational UI. That last one would be more the Franklin interface, for instance.
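As a sketch of the outbound-call option, here is roughly what invoking an AIS orchestration over REST could look like in Python. The endpoint path, orchestration name, and input names below are assumptions for illustration; confirm them against your AIS Server version's documentation before use:

```python
import json
import urllib.request

# Build (but do not send) a POST request to an AIS Server orchestration
# endpoint. Path, orchestration name, and inputs are illustrative
# assumptions; check your AIS Server docs for the exact contract.
def build_orchestration_request(ais_base, name, inputs, token):
    return urllib.request.Request(
        f"{ais_base}/jderest/v3/orchestrator/{name}",
        data=json.dumps(inputs).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )

req = build_orchestration_request(
    "https://ais.example.com", "ORCH_GetPOStatus",
    {"PONumber": "4510"}, "<token>")
print(req.full_url)
# Inside your network you would send it with urllib.request.urlopen(req).
```

The same payload shape works whether the caller is a digital assistant skill, a webhook handler, or a custom RAG application.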

OCI generative AI agents: the benefits of the RAG service in OCI versus a custom agent you might build yourself. The OCI RAG service could take days to implement. A custom stack, where you're building your own servers and choosing all the software solutions, might take weeks or months.

Cost model—OPEX again being pay as you go. A custom stack could be all OPEX depending on how you deploy it, but it could be a mix of CAPEX and OPEX depending on how you want to manage your budget. That gives you an idea of what that looks like.

There are limited plugins available right now for the OCI RAG service; with a custom stack, you'd have full control over the code. And on the compliance side, Oracle manages that for you while you own the controls. I'm talking about the back end, the infrastructure side: how it's accessed, the APIs, those types of things.

An implementation roadmap, which we can expand on later, follows a crawl, walk, run model. Crawl: do a proof-of-concept chatbot on reference docs; we talked earlier about pulling PDFs into a knowledge base. Walk: expand retrieval to configuration and master data, and pilot it with power users; you don't want to roll it out to your entire organization from the get-go. Then run with it: agent-driven transaction orchestration and autonomous monitoring, so it monitors itself and notifies you if there's an exception, for instance.
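The "autonomous monitoring, notify on exception" idea in the run phase can start as a simple statistical check against a baseline. A minimal sketch (the threshold and data are illustrative, not a production anomaly detector):

```python
import statistics

# Flag a transaction amount as an exception when it deviates more than
# `threshold` standard deviations from a baseline of recent values.
def is_exception(baseline, value, threshold=3.0):
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    return stdev > 0 and abs(value - mean) / stdev > threshold

baseline = [100, 102, 98, 101, 99, 100, 103, 97]
print(is_exception(baseline, 5000))  # → True, an obvious outlier
print(is_exception(baseline, 101))   # → False, within normal range
```

In practice you would wire a check like this to a notification (an orchestration, an email, a chat message) so the agent surfaces only the exceptions.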


Key Takeaways

Key takeaways: RAG bridges enterprise data and GenAI for trustworthy results, trustworthy being the key word there. Do you trust the information you're providing? Is it going out to the public internet and unknown websites, or is it going against the knowledge base that is your single source of truth? OCI offers a fast on-ramp, and a custom stack unlocks more flexibility. If you wanted to get started today with OCI AI services and you have a tenancy, you certainly can, and you can be up and running in days, as I mentioned earlier. Again, start small and secure it early; you can look at things like the OCI landing zones we talked about in another presentation.

And that's really it. I actually thought this would be longer, but it looks like we powered through it here, Scott, in half an hour instead of an hour. That's okay; I think a lot of us could probably use a break. We'll keep it open for questions, and we have another session going live now as well, so people can catch that if they're available. With that said, any questions for Charles? Put them in the chat or Q&A and we'll try to address them now.

Charles also has the AI journey overview that ERP Suites offers. This is something we've broken down into three phases, starting after an alignment day. That's something we think is really important: getting the entire organization aligned on what you're going to do with AI. You have to get buy-in from pretty much everybody to make it a successful project, so we put a lot of emphasis on that step before we get started and break it down into the next three phases. You can choose how far your organization wants to go with ERP Suites in those phases, take it on yourselves, or get set up to the point you need and then determine your next steps from there.

There's more about that on their website. I'll send a message, and I think there's also going to be a notification coming from Excel Events with more information. Thanks, Scott.


Nate Bushfield

Video Strategist at ERP Suites