Oracle’s Code Assist: The AI Co-Pilot That Will Change How You Code Forever!
August 13th, 2025
21 min read
This session features Jason Creighton, Oracle Product Manager, presenting Oracle Code Assist and Code Agent—AI tools that improve developer productivity, code quality, and workflow automation. He discusses industry adoption of coding assistants, their benefits for tasks like code generation, documentation, updates, and CI/CD integration, as well as current limitations with complex logic.
Jason explains the tools’ architecture, security measures, and customization options, including context-aware suggestions, custom agents, and integration into existing pipelines. Roadmap items include expanded language support, customer-specific knowledge bases, and enhanced orchestration capabilities.
Table of Contents
- Introduction
- AI Coding Assistance: Current Landscape, Challenges, and Oracle’s Approach
- Demos and Features
- Architecture and Technical Design
- Privacy and Security
- Future Vision and JD Edwards Orchestration
- Success Metrics and AI Stack
- Product Availability and Roadmap
- Closing and Q&A
Transcript
Introduction
Hello everyone. Apologies for the slight delay; I was battling some technical challenges. My name is Jason Creighton. I work for Oracle, and my role is product manager for all elements of coding: Code Assist, Code Agent, and the SQL agent from our new agent platform. Today I'm going to talk about Code Assist and Code Agent. Please stay to the end for questions, but I can't see the chat, so don't feel rude if you want to jump in and ask questions as we go along. All right, let's get going. And by the way, my accent is Irish, just in case people are wondering behind the scenes.
AI Coding Assistance: Current Landscape, Challenges, and Oracle’s Approach
This is all information that people already know: by 2028, 90% of enterprise software engineers will use AI coding assistants. That figure is from Gartner, and I'd say it's already out of date — most developers are using coding assistants now. And if your organization doesn't have a prescribed coding assistant, the probability is that your developers are going externally and using coding assistants from anywhere.
The majority of coding assistants are delivering real productivity growth — again, something we all know — and we've had a lot of success with that internally at Oracle.
These are tasks that can benefit from AI tools, and nothing here should surprise you: generating code, debugging, documenting, writing tests. Learning about code is especially important for junior developers — we're finding a lot of the more junior developers are ramping up and actually learning a codebase from the coding assistant.
One of the things we're finding a lot of use for is batch updates: version updates, language updates. "I want to upgrade a coding pattern from an older version to a later version," or migrating between languages or versions of Java, and so on. Those big, custom updates are among the most requested things we're seeing.
Offline tasks: things like CI/CD inclusion for code reviews and security audits. The flexibility for people to build offline tool chains for their workflows is one of the key items we're seeing huge demand for.
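As a concrete illustration of the kind of offline CI step being described, here is a minimal sketch of a build gate driven by automated review findings. The finding format, the severity names, and the `gate_build` function are all hypothetical assumptions for illustration — this is not Oracle Code Assist's actual CI output format:

```python
# Hypothetical CI gate: fail the build when an automated AI code review
# reports findings at or above a chosen severity. The finding structure
# here is an assumption, not the real Code Assist review format.

SEVERITY_ORDER = {"info": 0, "warning": 1, "error": 2, "critical": 3}

def gate_build(findings, fail_at="error"):
    """Return (passed, blocking): blocking lists findings that meet or
    exceed the fail_at severity threshold."""
    threshold = SEVERITY_ORDER[fail_at]
    blocking = [f for f in findings
                if SEVERITY_ORDER[f["severity"]] >= threshold]
    return (len(blocking) == 0, blocking)

# Example findings, shaped the way an automated review step might emit them.
findings = [
    {"file": "Billing.java", "severity": "warning", "msg": "unused import"},
    {"file": "Auth.java", "severity": "critical", "msg": "hardcoded secret"},
]

passed, blocking = gate_build(findings, fail_at="error")
print(passed)                   # False: the critical finding blocks the build
print(blocking[0]["file"])      # Auth.java
```

The point is only the shape of the workflow: the review runs unattended, and the pipeline decides pass/fail from structured findings rather than a human reading every diff.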
This is really saying the same thing: across a typical stack, maintenance and other technical tasks take more time than writing new code. One of the things we've found — and this is from an internal survey — is that developers don't spend a lot of time writing new code. Depending on seniority, they spend maybe 30% of their time on new code, with most of the rest going to maintenance, testing, security issues, and so on. We want to grow that red box: we want developers to spend more of their time writing code, with coding assistants taking on the rest.
For all the hype, coding assistants aren't yet at the point where they can create large amounts of complex business logic. We've all played with them, and we know they're great at boilerplate code or easy functions, but large, complex code items require a significant amount of information about the libraries in use, the repository and dependencies that should be adhered to, and development patterns. They're just not at that point yet — but they're moving in that direction quickly. However long it takes to get there, we want to give developers more time to write code and to spend detailed thought on architectural challenges and solution engineering.
To that end, we've spent a significant amount of effort building Oracle Code Assist. It's designed to be an AI companion that boosts developer productivity and code consistency with reviews — we've included it in our standardized review process internally — and it's optimized for the Oracle language set: Java, JavaScript, SuiteScript, PL/SQL. We've trained it on some OCI Java codebases to make it as good as we can. One of the things we're constantly working on is making sure the latest version of Java is included, and I can talk more about that later on.
It's available internally — we've got over 12,000 internal users, and that's growing very quickly — and externally as a beta, with a number of large organizations signed up and testing. We're getting excellent feedback, and I'll talk about some of the success stories later.
Demos and Features
Any questions so far before I move on?
Okay. I don't think any of this is new to anyone, so what I'm actually going to do is stop sharing and run a demo. It worked earlier.
Excellent, so everyone can see. This is IntelliJ, and this is the Oracle Code Assist plugin. What we've tried to do is ensure that the plugin — this one is for IntelliJ, and there's another for VS Code, but I'll stick to IntelliJ for now; they function very much the same — provides the information developers need for the most common use cases.
We did a lot of research, and you can see some of the options here: generate code, generate documentation, explain the code, generate unit tests. You can also add a class to the chat and then interact with it in the chat window.
I'll run some demos now and pray to the demo gods that they work. I'm going to take this and run "generate tests". There we go. What it's doing now is using the context from the file and adding unit tests for this piece of code.
Again, I don't think there are any surprises there in what it'll do.
I can also say, "Can you improve the code in any way?" and interact with it in chat. I'll go through the architecture in a second, but here it says the snippet is too small to actually improve.
So we've got a fine-tuned model trained on the OCI Java codebase — this is for internal use — and then we've got moderation filters, both on the prompt and on the response. On the response, there are additional filters for open-source code and for security. If there are any obvious vulnerabilities in the code, it will identify them and ensure the developers are aware of them. If there's open-source code, it will just block the response — because we don't want any of our customers accidentally having open-source libraries added into their codebases.
I've also got the option to move to a larger piece of code: add this class to chat, and then again ask, "Can you improve it?"
Okay. No — it's obviously too small a snippet to generate from. So I'm going to ask — this is a live demo — "Generate boilerplate code." Not able to generate code. Oh, hold on a second.
Yeah. That's an example of where I was still in a unit-test chat. I'm going to close that chat down and move into the base chat.
So if I take this and say, "Generate documentation for this piece of code."
Great. Let me restart the chat and do it again.
Okay, I'm going to restart IntelliJ.
This is more my setup than the developer setup, because I'm not using it day to day.
Okay, I'm going to try one more time. I'm going to abandon this demo now, but you can get a sense of the focus: the usage of the features, the base use cases. I've also got some base use cases down below, and you can see that it's bringing some of that across.
What I can then do is add context files. If I've got files associated with libraries I want to keep using, or development patterns, or dependencies, I can add them to the prompt. The plugin takes those as context and passes them to the model, and the model uses them as additional context to be more precise about the development repository or the development standards.
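Conceptually, attaching context files amounts to prepending them to the request the model sees. A minimal sketch — the payload shape and the `build_prompt` helper are illustrative assumptions, not the plugin's real protocol:

```python
def build_prompt(user_request, context_files):
    """Prepend attached context files to the user's request so the model
    can follow the repository's libraries and patterns.
    context_files: mapping of filename -> file content (assumed shape)."""
    parts = []
    for name, content in context_files.items():
        parts.append(f"--- context: {name} ---\n{content}")
    parts.append(f"--- request ---\n{user_request}")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Generate a DAO class following our repository pattern.",
    {"BaseDao.java": "public abstract class BaseDao { /* ... */ }"},
)
print(prompt.startswith("--- context: BaseDao.java ---"))  # True
```

The real plugin will be doing more (ranking, truncating to the context window, and so on), but the principle — repository files travel with the prompt — is the same.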
Okay — I've sacrificed enough to the demo gods. Let me bring it back to the presentation.
Architecture and Technical Design
So here's here's a rough architecture of the kod.
I'll roughly go through it. On the left-hand side we've got the various clients: Oracle Integration Cloud and some internal customers — AD is an internal customer — plus the VS Code and IntelliJ plugins, using an API integration via the agent platform. That's the GenAI agent platform within OCI, and within that there's tool config — which becomes more important as we talk about the custom tools we can create — plus control mechanisms and content moderation. Finally it goes over to the Code Assist data plane, where we run code checks, and then to the OCI GenAI platform for the code generation models.
It's a standard enough architecture. One of the differences is the agent platform, which allows us to have custom agents that each do one specialized thing: code reviews, security reviews, code version upgrades, or code migration — say, a customer doing a migration from COBOL to Java, which is a significant undertaking because of the different paradigms and development patterns. So we've built additional custom tooling that goes into the agent platform. Code Assist and the AI platforms stay the same, and we can do very quick iterations on the agent platform to iterate on these tool sets and deliver those custom workflows.
Again, what Code Assist is and what it does: Code Assist is a plugin with models at the back end. We use a Llama model at the back end, we're constantly going through model reviews, and we're looking at fine-tuning — so we reserve the right to adapt that model to improve accuracy and latency, the two key items. And we want a polyglot model that is also a specialist on the Oracle language set: Java, SuiteScript, and PL/SQL.
One thing we're working through at the minute is how to start chaining tools together. You can build a Python script or a PL/SQL function that calls out to SQL, which calls a database, returns the result, validates it, and does further updates — and you can build all of that within the GenAI agent platform without understanding the models at the back end.
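The chaining described here can be pictured as a sequence of tools where each step's output feeds the next. A toy sketch with stub tools — in the GenAI agent platform these would be configured tools rather than local Python functions, and every name below is made up for illustration:

```python
# Toy chain: natural-language question -> SQL -> execute -> validate -> report.
# Every "tool" is a stub standing in for a configured agent-platform tool.

def nl_to_sql(question):
    return "SELECT COUNT(*) FROM orders"          # stub SQL generation

def run_sql(sql, database):
    return database[sql]                          # stub execution

def validate(rows):
    if not rows:
        raise ValueError("empty result")          # guard between steps
    return rows

def report(rows):
    return f"Result: {rows[0]}"                   # stub report step

def run_chain(question, database):
    sql = nl_to_sql(question)
    rows = validate(run_sql(sql, database))
    return report(rows)

fake_db = {"SELECT COUNT(*) FROM orders": [42]}
print(run_chain("How many orders?", fake_db))     # Result: 42
```

The value of the platform is exactly that the wiring between steps — plus retries, moderation, and model selection — is handled for you; the sketch only shows the data flow.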
Then, talking about Code Agent: Code Agent is predominantly an API integration; it also allows you to tune hyperparameters and manage prompts. One of the things we're thinking about as well is model selection. I can select a specific agent tool — there'll be a preconfigured set, but people can configure their own — and that will enable different features and select different prompts, so that if you're doing a version or language upgrade you've got the best model and the best prompt for, say, a Java 12 to Java 18 migration, if code modernization is your goal.
As I said before, we've integrated this into the CI/CD pipeline internally at Oracle. That means code reviews happen in an automated manner, which frees up senior developers: they can provide input — improving the prompts and improving the code reviews — but they're freed from running code reviews all the time.
I think there's still a human element to code review, but this catches the easy issues that junior developers hit on a day-to-day basis. It means junior developers are creating more coherent, higher-quality code before it goes to a senior developer, so there's a training mechanism in this as well.
Then with Code Agent we've got custom third-party developer integration. What that means is that if you want to run it in your own development environment, or integrate it into your own tool chain, this custom integration allows any type of integration. We've got about 35 separate teams internally integrating with it now, and they're constantly finding use cases we couldn't have thought of.
Privacy and Security
Hopefully I can still hear people, because it is very quiet.
Okay, perfect. Thanks — I thought maybe I was talking to the void there.
So obviously one of the core items is privacy and security.
We do not store any of your code: your customers' code, your user data, any comments, your original code, or the prompt that was created from your code or your associated files. We don't store the responses either.
There is a feedback button — just a thumbs up or thumbs down — and we can also choose not to store that. This is really adherence to Oracle's security and data policies.
As the next line says, any data that is sent to the models never leaves OCI and is encrypted in transit. We're very cautious about any feedback or code we take: we're not storing it.
And what this means is you can be confident that your IP isn't leaving your building.
As I said earlier, if your developers are using an external system like GitHub Copilot — unless you've got the right level of purchase — GitHub Copilot is taking your code and using it to further train and enhance the model. We're not doing that.
It's one of the things we're thinking about for a future stage. If, for instance, a very large customer wanted to integrate and create their own custom pipelines and custom models, there might be a discussion around collecting feedback specific to that customer. We already handle data sources used for health data, adhering to HIPAA-standard data encryption internally, so it's one of the options we're considering. And as I said, we're not using any customer code for model fine-tuning.
Future Vision and JD Edwards Orchestration
Where we see the future is enhanced scripting: leveraging robust open-source repositories, scripting languages like JRuby and Groovy, and some of the external orchestrations around JD Edwards. Because Code Assist can take additional files and contextual, repository-aware information, we can add that to a knowledge base — at your discretion.
We can then start adding complex orchestrations. We can start thinking about the XML templating for JD Edwards custom agents, and the creation and integration of custom scripts. Really, we want the flexibility of Code Agent and Code Assist to be whatever your DevOps team and your development managers can think of.
We are moving to a world of multi-tools. You can create a tool that has a RAG pipeline: it calls out to a database, extracts information from a natural-language query using a SQL tool, uses the returned data to create a report, codes the report, and deploys the report — or the orchestration — to your environments. Because we own the full pipeline, and we own the database, we can really start to enable this type of custom agent and custom pipeline.
I think this is the future of orchestration. The word "agentic" is overused, but this type of multi-tool orchestration is somewhere the industry will be going very quickly, and you're seeing it in features some of the competition are adding as well: how do we build this type of repository information, and then make sure our DevOps procedures utilize as much of the coding-agent workflow as possible.
Success Metrics and AI Stack
I'm going very quickly — I don't have many more slides, you'll be pleased to hear. On our success metrics: as I said, about 12,000 active users.
We ran a control on the impact of using it versus not using it at an internal hackathon, with a standard set of questions. The team using it got about a 55.8% speed bump from a cold start on some problems.
Among alpha and beta testers, the acceptance rate is up to 35%, and it'll build a unit test as good as a human — which you would like to think it would.
But really, one of the key things is that it's saving around 10 to 15% of effort and boosting unit-test code coverage. These are the things we think are the real benefit. Even if it saves five to six hours a week per developer — and we're seeing slightly more than that — that's a significant cost improvement, and it means your team can potentially scale without adding resources.
I think all teams either are or will be working in this manner, and having a cohesive single tool that all teams use, and that is up to date on your repository, is actually one of the main benefits.
Briefly, where it sits in the AI stack: you can see Code Assist sitting — I don't know if you can see my mouse — third across at the top, within the AI services.
It's part of the GenAI agents, so the coding agent sits within the agent platform, but the Code Assist plugin is a separate product.
There are a lot of other applications there, and what we really want is to start connecting them in seamless ways — Oracle Database, SQL, over to Code Assist — building these pipelines and knowledge bases to enhance the overall offering and optimize full workflows.
Product Availability and Roadmap
I've put in a link to a demo, so you can have a look at a five- or six-minute demo in your own time. It's fully available internally at the minute, and in beta externally.
We are targeting GA later this year. It's not available yet; there are some features we're working to add, which I'll talk about in a second.
So where are we at the minute? We're at the V1 release, and we're starting to roll out across regions to make sure the coding assistant and coding agent are available in multiple regions.
These are the features I'd like to add before GA. On languages: SuiteScript is available, and PL/SQL is in the final stages of accuracy assessment and benchmarking.
And then customer context. This is where we'd make sure you've either got dedicated capacity, or that you're signed up so there's a very specific knowledge base built on your repository, your documentation, coding standards, libraries, security patterns, and design patterns.
Retrieval from that knowledge base is complex, because it's code and there are a lot of different document types. We're working through that at the minute to ensure accuracy is high.
It's a very challenging product space, and one where, really, a lot of competitors aren't doing a good job. Getting it right means we can ensure you get the most relevant responses based on your repository.
One of the challenges for everyone is making sure you're giving the model high-quality responses and high-quality code. Just pointing the model at your repository isn't enough.
We found from our own reviews of internal knowledge bases that, starting from an initial library set of over a million files, once you remove duplicates, remove deprecated branches of your repository, remove files that have never gone into a build, remove older versions, and remove any code files with a comment like "do not use", "deprecated", or "draft", the actual codebase is maybe a third of what you initially thought.
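That curation process can be approximated with simple filters. A hedged sketch: the `in_build` metadata field and the duplicate-by-content check are assumptions about what your repository tooling can supply, not a description of Oracle's actual pipeline:

```python
DO_NOT_USE_MARKERS = ("do not use", "deprecated", "draft")

def curate(files):
    """Keep only files worth indexing in a code knowledge base.
    Each file is a dict with 'path', 'content', and 'in_build'
    (assumed metadata); duplicates are detected by identical content."""
    seen = set()
    kept = []
    for f in files:
        text = f["content"].lower()
        if not f["in_build"]:
            continue                    # never shipped in any build
        if any(m in text for m in DO_NOT_USE_MARKERS):
            continue                    # explicitly flagged by its authors
        if f["content"] in seen:
            continue                    # exact duplicate of a kept file
        seen.add(f["content"])
        kept.append(f)
    return kept

files = [
    {"path": "A.java", "content": "class A {}", "in_build": True},
    {"path": "A_copy.java", "content": "class A {}", "in_build": True},
    {"path": "Old.java", "content": "// DO NOT USE\nclass Old {}", "in_build": True},
    {"path": "Scratch.java", "content": "class S {}", "in_build": False},
]
print([f["path"] for f in curate(files)])   # ['A.java']
```

A real curation pass would also need near-duplicate detection and branch-age heuristics, but even these crude filters illustrate how a million-file corpus shrinks to a fraction of its size.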
So that type of curation matters as we build these repositories. Ultimately you want the model to provide responses that are completely in context for your repository — and the more in context it is, the further you can push down the value chain into building larger, more business-relevant code. I think that's one of the keys to overall model improvement, and it's the number-one request we've had from internal customers.
Plugin Context, Custom Agents, and Model Tuning
There are also elements of plugin context. You saw that we can add files for specific responses, and then you can start thinking about customizing.
What that means is, for instance, you've got an old version of a codebase and an updated version, and you're doing a version migration — and it might not be an actual code version; it could be a pattern migration.
You want the plugin to understand the latest code, but also, at that point in time, understand the context of where you're operating within the repository tree. If you're operating in an older branch, it should adhere to the dependencies and libraries within the older branch.
That's a hard balance, because a lot of developers work in multiple languages across multiple code branches — they don't spend all day working on, for instance, Java within the most up-to-date branch.
You bounce around quite a bit. The challenge from our perspective is to make sure the model understands context and has the relevant languages as people move around and start coding in different languages as well.
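One way to picture branch-aware context is: pick the dependency set for whichever branch the current file lives in. A toy sketch, assuming a `branches/<name>/...` path layout that is purely illustrative:

```python
def deps_for(file_path, branch_deps, default_branch="main"):
    """Pick the dependency set matching the branch segment in a path
    like 'branches/<name>/...'. The layout and mapping are assumptions
    for illustration, not how any real plugin resolves branches."""
    parts = file_path.split("/")
    if len(parts) > 1 and parts[0] == "branches":
        branch = parts[1]
    else:
        branch = default_branch
    return branch_deps.get(branch, branch_deps[default_branch])

branch_deps = {
    "main": {"java": "21", "lib": "core-3.x"},
    "release-1.8": {"java": "8", "lib": "core-1.x"},
}
print(deps_for("branches/release-1.8/src/Auth.java", branch_deps)["java"])  # 8
print(deps_for("src/Main.java", branch_deps)["java"])                        # 21
```

The hard part the speaker alludes to is doing this continuously as a developer hops between branches and languages, so the assistant's suggestions always target the right library versions.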
Then we've got custom agents — I've talked a lot about those already and the benefits they'll bring.
And finally, model tuning — potentially, and we'd like to reserve that right. Everyone thinks about fine-tuning: "I want to fine-tune a model on my codebase." And maybe — but there's probably more value in knowledge bases, RAG and retrieval, and semantic modeling of a codebase along with its document library and dependency tree.
We want to solve as many problems as possible in the cheapest way, and building a custom model is not cheap for anyone. It could take six weeks to two or three months to build the model, run the iterations, improve your benchmarking, and improve your input data.
And it takes a specific skill set. So although it's on the roadmap and we want to do it, we want to make the tooling as simple as possible — taking away the pain of building a model and abstracting as much as possible away — so that external teams can do this.
Closing and Q&A
Okay. I'm actually pretty much on time — obviously I spoke too much. Any questions? I've left about 10 minutes for questions.
I'm going to stop sharing.
All right, Jason, I'm back on. Yes, folks are still connected. Very good content there, Jason — excellent context. Glad you touched upon the JD Edwards components; I know that must be top of mind for the folks on the call right now. So that's great to hear.
Any questions, folks, that you would have for Jason, that maybe he could address? We'll give you a few minutes and see.
Yeah, there's a question there from Susan: any plans to incorporate the JD Edwards tool set?
I think JD Edwards and its tool sets come in where you start building a repository that's really relevant for the work that you're doing. Directly into the model — probably not. But there's the ability to create JD Edwards tool sets with Code Agent and the custom repositories, and I think that's the real key.
"Orchestration enables custom Java..." — yeah, that's a great one, AJ. That certainly is a possibility: even if the JD Edwards traditional legacy tools for creating apps and whatnot are not in the immediate scope, the scripting capabilities — and certainly Orchestrator, as you touched upon briefly, Jason (I don't know if you were on at that point, AJ), regarding the XML potential — are areas where you can start extending into.
Yeah — and AJ, I think you've hit upon a really important point. Training is expensive. Documenting the rules and creating a knowledge base from the rules is relatively cheap — relatively; you still need to create the documents — but then you can very quickly iterate: update a set of documents, update the knowledge base, run some trial queries, and determine the accuracy of the output. One of the key things we tell customers is to build your benchmark test set, and do that right at the start — try it before you start doing any training or any custom scripting, because out of the box it's probably got 30 to 40% accuracy, and then that's something to build on.
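The "build your benchmark test set first" advice can be operationalized with a trivial harness: score any candidate against a fixed prompt/answer set before and after each iteration. The exact-match scorer and the stub model below are simplifications for illustration — real code benchmarks typically execute the generated code rather than string-compare it:

```python
def accuracy(generate, benchmark):
    """Fraction of benchmark cases where the generator's output matches
    the expected answer. 'generate' is any callable prompt -> answer."""
    hits = sum(1 for prompt, expected in benchmark
               if generate(prompt) == expected)
    return hits / len(benchmark)

# Stub "model" standing in for an out-of-the-box assistant.
canned = {"reverse a list in Python": "xs[::-1]"}
def stub_model(prompt):
    return canned.get(prompt, "")

benchmark = [
    ("reverse a list in Python", "xs[::-1]"),
    ("sum a SuiteScript array", "arr.reduce((a, b) => a + b, 0)"),
]
print(accuracy(stub_model, benchmark))   # 0.5
```

The shape matters more than the scorer: a fixed benchmark run before any tuning gives you the out-of-the-box baseline the speaker mentions, and re-running it after each iteration tells you whether the knowledge base or prompts actually improved things.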
I'll give you an example from our SuiteScript training. Out of the box, our model delivered 0% accuracy. After, I think, four iterations we were up above 95%, and that was a combination of multiple strategies.
Exactly, AJ — exactly where people have got customizations. Maybe there's a base set of customizations you want to train a model on, and then you allow customers to augment that with things they've done that are specific to them. Then you've got a really powerful custom model that provides information to customers based not only on the base but on any augmented ones.
So, could it read JD Edwards code and help you document it? Yes. If you want, you can give me a sample of the code — I don't know what the post-session communication is — but I would put money on the fact that it'll read the code and document it.
Yes, we can certainly coordinate that effort. And Jason, by the way, AJ is a colleague of yours — he's part of the Oracle JD Edwards team. So maybe we could — I won't sign AJ up, I know he's got his own things — but certainly you've got my commitment, and if AJ can join the fray, that would be awesome. I will definitely connect with you on that, and we can give you snippets of what we have in our lab environment.
Okay. Yeah. AJ, good luck.
Yeah, the customizations and whatnot — that's a big thing. Documenting: that's one thing we sometimes hear from our customers. A customization was built by a gentleman or a lady who used to be here 10 years ago, we don't know what it's doing, and documenting it would be a good first step.
Yeah. And as I said very early on, ramp-up matters for poorly documented, poorly unit-tested codebases. We have one team internally building a pipeline that takes their code, updates unit testing to above 90% coverage, updates all the code documentation, runs code reviews and security reviews on it, and then provides it ready to check in. It still requires a developer to check it in, in case there are any issues, but it means you can quickly bring your code up to a more supportable standard without changing code versions.
Absolutely. So, we're still in charge, right? As humans, we're still in charge even with AI, right? We're still in charge. We still need to review. We still need to assess. But but the idea, right, is that it it does provide uh efficiencies.
Yeah. And no one wants to open up a piece of code that no one's touched in 10 years, add unit tests, add documentation, do a code review, and then check it in. That's low-value work — there's no value add to the organization in doing it by hand.
And so you want to remove the lower value tasks and allow developers to focus on the higher value tasks like writing new business logic or writing new customizations.
Okay. Apologies for the demo not working as well as it should have — it honestly worked about 20 minutes before the session.
No, this was great, Jason. Thank you for sticking with it, and for the presentation — it was valuable for sure. Certainly the direction you're heading in also opens up some other use cases, so we're looking forward to that. And separately, we'll follow up on a couple of the things we discussed here — I'll reach out to you.
And it looks like maybe we don't have any more questions. So thanks to everyone who joined today's session — hope you found it valuable. Keep on the lookout: as Jason said, Oracle continues to invest in this area, and as they go live externally there will probably be some more goodies and more capabilities. This is just the beginning, folks. Thanks again for joining.