AI Stops Hackers Before They Attack
October 28th, 2025
18 min read
This episode of Not Your Grandpa’s JD Edwards explores how artificial intelligence is transforming cybersecurity within JD Edwards environments. Guest expert Shawn Meade, ERP Information Security Officer, discusses the evolving landscape of cyber threats, from phishing and ransomware to insider vulnerabilities, and explains why traditional defenses like firewalls are no longer enough. The episode highlights how AI and machine learning detect anomalies, build behavioral baselines, and strengthen both on-premises and cloud-hosted ERP systems.
Table of Contents
Transcript
Introduction
“What if your JD Edwards system could catch a hacker before they even touched your data? And what if it learned from every attempt, getting smarter each time? In this episode, we're talking about how AI-powered threat detection is giving JD Edwards environments real-time awareness and resilience. You'll hear how artificial intelligence is transforming ERP security, how it works under the hood, and what steps you can take to make your JD Edwards environment safer today.
Today's topic: security, and not the boring kind. As cyber threats evolve, traditional defenses just can't keep up—but AI can. It's helping JD Edwards environments detect and stop attacks before they even start. Joining us today is Shawn Meade, ERP Information Security Officer, to explain how AI-powered threat detection works and why it's becoming such a large player in information security. Shawn, how are you doing today? Thanks for coming back on the podcast—recurring guest, put it on the resume.
Doing very well today, Nate. Good to see you again. Yeah, it's great to have you back on, talking a little security—more the full cybersecurity side this time, not our typical JD Edwards lens. But let's start with the challenge. What kind of threats are JD Edwards environments and other environments facing today?”
Have you invested in a JD Edwards upgrade but aren't seeing the ROI you expected? Could a few overlooked factors be quietly sabotaging your investment? In this episode, we break down the top reasons companies fail to realize their full return from their JDE upgrade. Stick around—you'll learn how to avoid these mistakes and turn your upgrade into a strategic advantage.
Threats Today and What Attackers Want
The same ones that we usually do. I mean, I don't mean to downplay that, but it's just an evolved sense of the types of things we deal with all the time. We're still dealing with bad guys for lack of a better term. I know that we can say “nefarious actors.” We can use all kinds of things, but the dark side, if you're a Star Wars fan—however you want to word it. We have plenty of people who want to operate in a nefarious manner, in a negative manner. And they want to take the quick road and try to steal and sell or produce. As long as that's happening, we are going to have to protect ourselves because they're pretty dedicated to their craft—exactly like the Death Eaters out there for any Harry Potter fans. Very dedicated. They're following Voldemort—some say, “the one who should not be named,” I guess.
Alright, enough of the movie references. Why are they harder to detect than they used to be? Well, some of it is attitude. You go back far enough and people just wanted to make a name for themselves. Now, they want to make money. And as far as why it’s more difficult now? Same reason that we become more efficient at anything—everyone gets better with experience. And that experience gets passed on. So as people do this longer and longer, as we fight and go back and forth—actually look at a tennis player, a boxer, or any sport you wish—they study what’s worked and what hasn’t. That’s how they create new plays or strategies. It’s all based on experience. There’s a lot of experience out there. The internet’s been around for a while, and people are learning to use it in many positive ways. But the negative side is definitely increasing as well.
Yeah, and as a gamer myself, PlayStation has definitely not learned from their mistakes over the years, which is crazy to me. But you’re right—there’s a lot of people going after different types of stuff. But what are these attackers truly going for? You can look at the PlayStation view, or you can look at it from JD Edwards or one of the many ERPs out there today. What are they targeting? Is it data, credentials, integrations—what are they looking for?
Again, not meaning to sound flippant, but yes—to all. It depends on what stage they’re at in an attack. To tell you the truth, it still comes down to money. They want to make money. They want to ransom or take your business hostage until they’re paid. So any form of information that can possibly help them is valuable.
I know we look at things such as, “This is my most critical data; I need to shield it.” Good plan, and that’s definitely a step that should be taken. But we need to realize that even now, email phishing is still one of the most powerful forms of the beginning of an attack. The reason for that is simple—you can’t get started until you start making your way through the door. Once I get through the door, then I can look around the room and see, “Oh, look, there’s another door. There are no cameras right in that spot.” That’s the kind of stuff they’re trying to do.
That’s why all of that is a target—any type of information that allows them to step through that door. And of course, to use a candy reference—they want to go through the hard shell and get to that soft, chewy center. Once they get there, that’s the target. Yes, but they’ll take anything they can get to maneuver with.
Oh, it really does. It’s all about the question of how many licks it takes to get to the center of a Tootsie Pop. And yeah—you bring up phishing, which is honestly one of the funniest things out there. You’d think people would have learned by now. I’ve learned about phishing since I was a kid because it’s been around forever. People older than me have been learning about it since the early 2000s at least. But are there any other blind spots, other than phishing, that maybe some organizations don’t realize they have?
Common Blind Spots and the Power of Process
So yeah, strangely enough, one of the blind spots is process. I know that sounds weird just saying it in the form of information security, but process—how we build our security policies, how we manage our business processes—plays a big role. I know a particular incident that I was made aware of where the process was actually the biggest problem. You had one person who was doing the processing of invoices, and that one person was then asked, “Hey, do you mind if we change the email they were using? Send it to this one instead of that one?” Well, in that case, the nefarious actors only had to fool one person, and they did successfully do that.
If the process had multiple checkpoints or layers, that would have made a huge difference. So process makes a big difference in things—it allows security to embed itself between those layers. And the more of those layers there are, the safer you are. And I’m using that term loosely because, especially in my position, I consider safety to be an illusion. But the point being, it takes them longer to eat through the layers, if you will. The other piece is validation, and it dovetails with process. When we’re validating things, we should ask: Is this really something that this person would normally do or has authority to do?
Putting in those proper checkpoints—those are the kind of blind spots that can make a big difference. And of course, like we just said, email’s a big one. Email phishing is still driving those clicks. I will say I did see a video not too long ago where a company really did get kind of rough on their email phishing campaign and testing. They sent something out saying, “Anyone who’s nearby, come into the office. Please go ahead and log into a local restaurant’s website, order your food, and come on in for a large meeting.”
And it turns out there was no large meeting—just a three-hour course on email phishing. And yes, people did click on it and wound up coming in because of that. In at least one case—the one that I saw—their food was delivered; they actually did place the order and everything.
All that to say, however, things can be played in a lot of ways. Some of the internal chat programs companies use—Webex, for example—can be exploited too. Webex chats or however communications occur internally, I really believe that one of the things that would help eliminate a lot of blind spots is simply having people communicate with each other. “Hey, did you really send this?”
So again, it comes back to validating and knowing our surroundings. Those are the blind spots—bigger ones than we realize at times. Therefore, and going back to that earlier point, processes should be like ogres—they should have layers. Also like onions. Alright, enough movie quotes, I know, I know—but seriously, that’s good.
Thank you. Hey, you’re welcome. They’re also like cake—everyone likes cake. Cake has layers too. Alright, alright. But yeah, it’s those simple things. I know that we do a lot of phishing testing here at ERP Suites, and earlier on, I definitely clicked on one. You know—who hasn’t? But it’s all about the process and about getting yourself and your workforce prepared.
So I’m glad that we do that. It’s something that a lot of companies these days have really adopted, which is great. Like yes, that’s a very funny way of testing it—sending out an official-looking email, making people come into the office, and then having them sit through three hours of phishing awareness—but it’s also a great learning point. All those people wasted a bunch of time and effort just to learn about phishing emails, which is hilarious.
Hilarious, but to think about the actual security side of it—why are traditional methods like firewalls or even user access controls not enough anymore?
Why Traditional Defenses Aren’t Enough
Well, we’re going back to that first principle, which is everyone’s getting better with experience. And so we know that firewalls are difficult to get through just because they serve a very specific purpose, but the holes that are there to allow legitimate traffic through, unfortunately, those also become the major points where illegitimate traffic tries to pass. More people are on the internet all the time.
Another big one is the commercialization of ransomware. So they’re not just making money from their own attacks anymore—now they’re actually turning it into a product they sell to someone else. And it could be anyone. It could be a single disgruntled employee who might have access to some funds or who gets others together—other disgruntled people—and they buy that ransomware and use it. So it’s not just single individuals. It’s not even just single groups anymore. Now it’s a productization of the entire process.
Between the commercialization of ransomware and other nefarious products, there’s a lot going on there. And of course, now things are being engaged in the psychological realm, which comes back to our earlier point about email phishing—the psychology of things. What do people normally do? How do they normally react? Those are big factors. That’s why it’s not the same as it used to be. It really isn’t.
Yeah, exactly. And even with the shift to the hybrid and cloud-hosted JD Edwards models that you see these days, how has that changed the game even more?
Well, once we start doing a hybrid situation, that means I have some here and I have some there. And if I’m using these two together, then something’s traversing between them—and it’s usually traversing the internet. So once we get out on the internet, you’re getting into a DMZ, if you will—an area that has landmines and all kinds of stuff firing all over the place.
When we look at that hybrid environment now, I am not just in my own data center, in my own building anymore. We’ve outsourced a lot of these things. And as we do so, there are a lot of security protocols, don’t get me wrong. But going with what you are saying as far as why this adds another layer—it’s because now the data that would have stayed in a building in the past or in a private network now has to traverse across the internet.
So we have to put a lot more shields up along that path. And we’re going into the whole credentials thing, with those being stolen and other forms of compromise happening along the way.
Yeah, and that path is obviously not something a lot of people think about anymore—why can’t I just transmit my data from one point to the other? Where’s the security? They think it’s all built out for them. And it’s not that it isn’t—yeah, there’s certain security and certain guidelines everyone has to follow—but how safe is that truly? Who’s to say? You have to make sure that you know your own security.
If it’s specific information that you don’t want getting out there—credentials, secure data, or sensitive topics—it’s all about your infrastructure and the controls you want on that data. But I could get on the soapbox and preach about that all day.
So now that we know what we’re up against, how exactly would AI detect and even stop threats before they cause real damage?
How AI Detects and Stops Threats
So we’re going to have to feed information to AI in some form—that’s the first piece. Once we do that, the thing about AI is that it’s an advanced analyst. It winds up doing things at a higher speed in some cases. I mean, let’s just face our limitations: we don’t crunch numbers—or at least most of us don’t—as quickly as a computer does.
When the AI gets hold of that data, the patterns we’ve told it should be there versus the patterns that it may find, it can go through that much more quickly and efficiently. You know, I still haven’t met an AI that needs to sleep. I guess we could program one to emulate humans even more if we wanted to, but I’m not sure we do. The nice thing about AI right now is that it just keeps running. It’s going to keep chewing through all that information while we sleep, and when we get back up, we’ve got results.
By doing that 24/7, 365 days a year, it can compile data and analyze it far more efficiently than we can as humans most of the time. That’s actually its biggest contribution—it prevents attacks from becoming incidents, if that makes sense.
Yeah, of course it does. It’s like that intern that you never pay and you don’t allow to go to sleep. I miss those days of being the intern myself, but let me tell you, I’m glad I get to sleep now. But yeah—that’s the type of thing we want. A lot of these companies are trying to make their AIs more human, which I’ve seen. I, Robot? I don’t know if we should go that far, but whatever—not my monkey, not my circus, as I say.
So when you look at all these steps that they’re putting into AI, what types of data are they really monitoring? Like yes, it can be inside JD Edwards, but also outside of JD Edwards—what have you seen in the cybersecurity space that AI is truly monitoring?
Well, it’s going to monitor anything that we give it—and hopefully it’s not grabbing stuff that we don’t. But as far as scope, that’s really the question: what do we want AI to be looking at? That scope can be as small as just the communications in one server, all the way to “I want it to look at every single packet that goes across our entire network.” We get to determine that scope.
AI then starts analyzing based on that. And quite honestly, the most important thing we can do with AI and with that data is to create something called a baseline. A baseline is what gives AI its greatest efficiency and speed. Instead of having to analyze every piece of data as if it’s an alert, it can look and say, “Okay, all of this is expected—except for this one little bump right here.” Because it’s an AI instead of a human, it can take the time to investigate that bump.
There’s no real loss if there’s nothing there—but it could be an indicator of something trying to break through. Those models we form, those baselines—those are what allow AI to really help us and call out anomalies. We determine the scope; we determine how much we want it to see.
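The baseline idea Shawn describes can be reduced to a very small sketch: record measurements during a known-good learning period, then flag anything that deviates too far from that history. This is a minimal statistical illustration (a z-score test over hypothetical hourly login counts), not what any particular product implements—real systems use far richer models over many signals.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Compute a simple baseline (mean and standard deviation)
    from a history of known-good measurements."""
    return mean(samples), stdev(samples)

def is_anomaly(value, baseline, threshold=3.0):
    """Flag a new measurement as anomalous if it deviates from the
    baseline by more than `threshold` standard deviations."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hourly login counts observed during a quiet "learning" period
# (illustrative numbers only).
history = [42, 38, 45, 41, 39, 44, 40, 43]
baseline = build_baseline(history)

print(is_anomaly(41, baseline))   # a typical hour -> not flagged
print(is_anomaly(400, baseline))  # a sudden spike -> flagged
```

The "little bump" in the conversation is exactly the `400` here: most data matches the expected pattern, and only the deviation gets investigated.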
Yeah, of course, and it even goes a little deeper too. With AI and how these systems are built, they can go through data that would take a human many years to process. It’s great—and also a little scary—but that’s the reality. As long as we give them the right scope and tools, they can find things you might never detect in 20 years. That’s great but also unsettling, because maybe they’ll find ghost anomalies that aren’t real.
Which is why it’s important to have humans on the other side, verifying what the AI finds—making sure it’s not just inventing problems. But anyways, what role do machine learning models and behavior analytics really play in this?
Machine Learning, Behavior Analytics, and Anomaly Detection
Well, that comes back to those anomalies and baselines. Machine learning takes models that can be very complex or very simple. It could be as simple as “this server should only ever communicate with that server,” or as complex as “I need to know all the communications happening between this entire server farm and identify anything out of the ordinary.”
Anytime we get into something fuzzy like that, that’s when, like you said, humans would take a long time to figure it out. So we create the models as best as we can, but machine learning—by its nature—should also be asking, “You didn’t tell me about this specifically. Is this okay?” Then we get to say, “Yeah, that’s okay—don’t worry about that one.” So it doesn’t raise that as an alert again. But if it encounters something that’s similar yet different enough, we can say, “You know what, that’s not exactly what we’d expect—let’s investigate that.”
That’s how machine learning and those models work together. Now, behavioral analytics adds another layer. For example, imagine we have a person with high-level privileges in a system who normally does A, B, and C—things like transferring files between specific servers. The AI observes this over time and establishes a baseline for that user. But then, one night, this same user suddenly starts downloading files from Server D—a server they’ve never accessed before. They have the rights to do it, but they’ve never done it before.
That’s exactly the kind of activity where AI and machine learning shine. That small, seemingly insignificant deviation gets caught and reported. Then the team can investigate—not in a hostile way, but to confirm whether that person actually performed that action. If not, there’s a potential breach to address.
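The Server D scenario above can be sketched as a tiny behavioral baseline: during a learning phase the system records which servers each user touches, and afterward a first-time access raises an alert for a human to confirm. The class and field names are hypothetical, chosen just to mirror the example in the conversation.

```python
from collections import defaultdict

class BehaviorBaseline:
    """Tracks which servers each user normally accesses. During the
    learning phase every observation is recorded; afterwards, access
    to a never-before-seen server is reported as an anomaly."""
    def __init__(self):
        self.seen = defaultdict(set)
        self.learning = True

    def observe(self, user, server):
        if self.learning:
            self.seen[user].add(server)
            return None
        if server not in self.seen[user]:
            return f"ALERT: {user} accessed {server} for the first time"
        return None  # matches the user's established baseline

monitor = BehaviorBaseline()
# Learning phase: this user routinely works with servers A, B, and C.
for server in ["server-A", "server-B", "server-C"]:
    monitor.observe("alice", server)
monitor.learning = False

print(monitor.observe("alice", "server-B"))  # normal -> None
print(monitor.observe("alice", "server-D"))  # novel -> alert string
```

Note the alert is a prompt for investigation, not an automatic block—exactly the "confirm whether that person actually performed that action" step described above.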
Yeah, I mean, that kind of goes into recognizing anomalies and suspicious behavior, right? Like, it’s about what you give the AI model—it learns what’s normal and what’s not.
Correct. It’s all about teaching the system what normal looks like. You give it data—like a backlog of user activity—and it compares that to what’s happening now. That’s the starting point for the model. Then you let it run for a while. To get AI working at its best, you need it in an environment that’s stable and safe at the beginning, so it can learn what “normal” truly is.
That ramp-up period can be tedious, but every time we answer a question for the AI—like, “No, that traffic’s fine,” or “Not sure about this one”—it learns. Each answer becomes part of its baseline, reducing false positives and improving efficiency. Over time, that saves both time and resources.
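That feedback loop—"No, that traffic's fine"—can be illustrated with a minimal triage sketch: once an analyst marks an alert signature benign, it is suppressed going forward, so the same false positive never resurfaces. This is a deliberately simplified model of the ramp-up process, with hypothetical names; production systems match on much richer alert attributes than a single string.

```python
class AlertTriage:
    """Minimal analyst-feedback loop: signatures confirmed benign
    are suppressed, reducing repeat false positives."""
    def __init__(self):
        self.benign = set()

    def raise_alert(self, signature):
        if signature in self.benign:
            return None  # previously confirmed as normal activity
        return f"ALERT: {signature}"

    def mark_benign(self, signature):
        """Record an analyst's 'that traffic is fine' answer."""
        self.benign.add(signature)

triage = AlertTriage()
print(triage.raise_alert("nightly backup transfer"))  # first sighting -> alert
triage.mark_benign("nightly backup transfer")         # analyst answers the AI
print(triage.raise_alert("nightly backup transfer"))  # suppressed -> None
```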
Absolutely—that makes a lot of sense. And to give AI that amount of data, especially right now, with the larger AI models that exist—it’s not always trustworthy. Even the ChatGPT CEO said, “I don’t know why people trust this thing.” Red flag, right? But sure, why not.
Obviously, there are other models built specifically into certain servers—ones with better protections, compliance, and governance. But how does this all integrate into systems like OCI or even ERP Suites managed services?
Integrating AI with OCI and ERP Suites Security
So OCI has, within their Cloud Guard, AI built in for many of the reasons we’ve just described. We can make that scope narrow or broad, depending on what we want to monitor. Then we have to work with it—to train it, to tell it what’s good, what’s bad, or at least confirm when something’s normal so it can learn to identify what’s not.
The way that works in this environment—OCI being a hosting platform and ERP Suites providing hosting and managed services—is that we want to gather data in such a way that the AI can flag what’s not normal. That’s its real strength: it doesn’t need to know everything, it just needs to know what stands out. Eventually, sure, it could learn much more, but that’s not the goal. We’re looking for exceptions rather than the rule. Once those exceptions are identified, the rules become clearer and stronger.
For OCI and ERP Suites, the idea is simple: this is normal, this is not. And that, really, is the greatest power of AI in my opinion—putting the guardrails in place so it doesn’t go rogue or make decisions for us. We don’t want it making business or operational decisions. At least, I’m not ready for that kind of world. But I do believe AI can make certain decisions where it has been trained, especially in recognizing what’s normal and what isn’t.
Of course, we also have to ensure that no one can mimic “normal” behavior to fool the AI. But normal is normal, and an anomaly should always pop up.
Yeah, and I mean there are a few companies out there that have even made AI entities their CEOs. I don’t know if we’re ready for that. Personally, I wouldn’t trust it—I don’t really trust robots like that. But whatever—TV has probably made me paranoid.
Anyway, let’s talk about the impact side of it. We’ve talked about what AI is and what it can bring to the table, but why should businesses truly invest in it as part of their security model? For people managing JD Edwards, what changes when AI enters the security picture?
So, let’s look at internal versus external. Internally, it’s not going to be AI working inside the JDE system itself. But if we have the right security functions turned on in JDE and use either third-party products or other integrations to draw data out, we can export logs—like who logged in, what they did, what checks were written, what inventory was moved—and analyze that.
Again, coming back to that “normal versus anomaly” theme, AI can help identify irregularities within those processes. One of the greatest discoveries in accounting history—a massive fraud case—was found because someone questioned a one-penny discrepancy in the books. That’s what we want AI to do for us. It can catch the one-penny differences that humans might overlook.
That’s how AI impacts the internal side—analyzing logins, transactions, and workflows for anomalies. Externally, it comes down to network communication. Do we expect this server to be talking to that one? Is the traffic encrypted when it should be? Is something sending data when it shouldn’t? Those questions are exactly where AI excels in protecting the JD Edwards environment.
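The one-penny idea translates directly into a check over exported logs: compare what a transaction was expected to post against what actually posted and surface any difference, however small. The row layout and field names below are purely illustrative—they are not an actual JD Edwards export format.

```python
# Hypothetical exported log rows: (user, action, expected, posted).
log_rows = [
    ("jdoe",   "post_invoice", 1500.00, 1500.00),
    ("jdoe",   "post_invoice",  275.50,  275.50),
    ("asmith", "post_invoice",  980.00,  979.99),  # one-penny discrepancy
]

def find_discrepancies(rows, tolerance=0.0):
    """Return every row where the posted amount differs from the
    expected amount by more than `tolerance`—even a single penny."""
    return [r for r in rows if abs(r[2] - r[3]) > tolerance]

for user, action, expected, posted in find_discrepancies(log_rows):
    print(f"{user}: {action} off by {expected - posted:.2f}")
```

A human reviewing thousands of rows would likely skim past a 0.01 difference; a scripted or AI-driven pass over the same export will not.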
Yeah, I mean, I’ll say—the biggest scam ever was the Fyre Festival. I don’t know if you know that one, but it was basically a guy who set up a whole festival with no headliners, no bands, nothing. People flew to an island and got stuck there. Crazy story.
Now, if AI had been around and used properly, it could have told you that was a scam—no contracts, no artists, no logistics. It could have flagged everything. Of course, AI wasn’t really a thing back then.
Enough joking around. What’s the benefit of reducing false positives or alert fatigue for AI—specifically for IT teams?
Reducing Alert Fatigue, Increasing Efficiency, and Measuring ROI
We’ll start with efficiency—number one. When AI reduces false positives, human resources can focus on evaluating real events instead of wasting time chasing noise. Once you can eliminate the clutter and highlight true alerts, your team’s attention shifts to what actually matters. Otherwise, too many alerts become white noise—and white noise never catches anyone’s attention. It just sits in the background, overwhelming people until real issues get missed.
By focusing on genuine incidents, IT teams work more efficiently. And oddly enough, that also boosts job satisfaction. When people aren’t constantly dealing with false alarms, they stay more engaged and interested in their work. That psychological boost leads to more awareness and better performance. So yes, AI helps technically and psychologically—it keeps teams sharp.
Exactly—if someone’s looking at 100 alerts a day and 90% of them are false, that’s just wasted effort. Eventually, they’ll start ignoring alerts altogether, which can be dangerous.
Right. And we often think of human resources as a cost, but they’re really a tool—a resource like any other. The question becomes: how efficiently are we using that resource? The companies that get ahead are the ones who can do more with the same number of people by working smarter. AI enables that.
So, when we talk ROI, it’s not just about money—it’s about time and efficiency. When you remove the noise and automate some of the initial threat triage, you free up experts to do higher-level work—innovation, strategy, and proactive defense rather than reaction.
That ties directly into uptime, compliance, and business continuity. Any security breach or incident costs time—and downtime costs money. We’ve all seen large providers suffer outages that ripple across industries. Reliability and trust are everything. Some companies even assign a dollar value to reputation loss from downtime or product recalls. It’s a measurable pain point, not just a PR issue.
AI contributes by reducing that downtime. When it flags issues early and helps compliance teams maintain a clear chain of evidence, it prevents minor incidents from becoming major ones. Compliance frameworks—HIPAA, PCI, ISO, SOC—all require traceability, and AI can help automate that “chain of custody.” It doesn’t get bored of checkboxes; it just does the job consistently.
In short, AI keeps the system running, improves compliance tracking, and gives leadership the data they need to prove reliability.
Now, as for whether AI is being used more for prevention or response—the answer is both. Prevention in cybersecurity is really about time: slowing the attacker down long enough to detect, respond, and stop the threat. AI buys that time. It’s like creating a bubble of slower time for the attacker, giving you a chance to block or divert the attack before it escalates.
And on the response side, AI improves both reaction time and operational efficiency. When analyzing traffic, it can spot overloaded servers or inefficient routing, helping both security and performance. So even when AI is deployed for protection, it often ends up improving overall IT efficiency.
All of this leads back to ROI. Once an AI model is trained and baselined, your return comes in the form of reclaimed time—time your teams can spend elsewhere. Fewer false positives mean fewer wasted hours, and that’s real value.
When you look at it across months, that efficiency compounds. If three analysts can be 90% more effective because they aren’t sifting through junk data, that’s a massive gain. It might not show up immediately on the balance sheet, but it shows up in innovation, response readiness, and organizational trust.
And if someone’s listening right now, ready to take security more seriously, here’s where to start: begin defining your baseline. Look at your processes and network traffic. Start small—analyze a few logs safely with AI, with proper guardrails in place. You’ll quickly see how much insight even a limited scope can provide.
Start with small wins. Use AI where it can help the most—whether that’s monitoring sensitive data, analyzing user behavior, or validating transactions. Those small steps can save your business millions—both in time and in protection from costly breaches.
Because at the end of the day, if your JD Edwards system is running critical operations, it deserves modern protection. AI-powered threat detection isn’t the future—it’s the standard. Visit ERPSuites.com to explore how AI can strengthen your JD Edwards security and see our roadmap for building an intelligent defense strategy.
I know Shawn Meade would love to talk more about security—and if you reach out through ERPSuites.com, you can connect with his team directly for more insights.
That’s all for today’s episode of Not Your Grandpa’s JD Edwards. Subscribe, leave a like, comment, send it to a colleague—anything that helps your business take the next step in security. Until next time, stay smart, stay secure, and keep innovating.
Video Strategist at ERP Suites