analyst@nohacky:~/briefings/pulsar-rat-npm-supply-chain-attack.html
category Supply Chain
published February 26, 2026
read_time 25 min
author NoHacky

Hiding in Plain Pixels: How Attackers Smuggled a RAT Inside PNG Images

A deep technical breakdown of the Pulsar RAT NPM supply chain attack, with plain-English explanations of steganography, DLLs, JavaScript type coercion, hardware debug registers, and AMSI bypass techniques.

In February 2026, application security firm Veracode published findings on one of the most technically elaborate supply chain attacks ever documented against the NPM package ecosystem. The attack involved a malicious package called buildrunner-dev — a typosquat of legitimate, abandoned packages — that concealed a complete, functional Remote Access Trojan (RAT) inside the pixel values of ordinary-looking PNG image files. The malware, called Pulsar RAT, gives an attacker complete remote control over a victim’s Windows computer.

What made this attack exceptional was not the RAT itself, which is based on freely available open-source code, but the extraordinary lengths the attacker went to in order to conceal it. Veracode’s researchers peeled back a twelve-stage attack chain before they could see what they were dealing with. Along the way they encountered Japanese Unicode characters used as variable names, hundreds of fake decoy payloads, antivirus-specific evasion paths, memory-only code execution, and a novel technique for bypassing Windows security by weaponizing the CPU’s own debugging infrastructure.

“Things are not always what they seem; the first appearance deceives many.” — Phaedrus, quoted by Veracode researchers in their technical writeup

This article explains the full attack chain in detail, with plain-English explanations of the technical concepts involved, so that cybersecurity educators, students, and practitioners can understand not just what happened, but why each technique works — and what defenders can do about it.

Understanding the Attack Surface: NPM and Supply Chain Risk

Before examining the malware itself, it helps to understand why the NPM ecosystem is such an attractive target. NPM, which stands for Node Package Manager, is the world’s largest repository of reusable JavaScript code. It hosts more than two million packages and serves more than seventeen million developers globally. When a developer types npm install to bring a third-party library into their project, that single command can silently execute code on their machine — and that is precisely the entry point this attack exploited.

A supply chain attack targets the tools, dependencies, or infrastructure that legitimate software relies on, rather than attacking the final software or its users directly. By poisoning a library that developers trust and routinely install, an attacker can compromise developer workstations, build servers, or production systems without ever needing to phish an individual user or exploit a specific vulnerability in a product.

note

The package in this attack, buildrunner-dev, impersonated the legitimate but abandoned buildrunner and build-runner packages. This technique is called typosquatting: publishing a package with a name close enough to a real one that a developer might accidentally install the malicious version. Because the legitimate packages had been abandoned, a developer finding the recent, active-looking impostor might reasonably assume it was a maintained fork.

The Postinstall Hook: Execution on Installation

Every NPM package can include a package.json file with a “postinstall” script — a command that runs automatically the moment npm install completes. Legitimate packages use this for setup tasks like compiling native code or creating configuration files. The malicious package used it to download and execute a batch file.
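The exact contents of the malicious package.json were not published, so the following is a reconstruction of the mechanism only — a minimal manifest showing how a "postinstall" script wires arbitrary code to `npm install` (the package name and init.js come from the report; the version number is a placeholder):

```json
{
  "name": "buildrunner-dev",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node init.js"
  }
}
```

Nothing about this manifest looks unusual to a scanner; thousands of legitimate packages declare a postinstall step in exactly this way.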

The init.js file that the postinstall hook executed did several important things. It downloaded a batch file called packageloader.bat from a Codeberg repository and wrote it to the Windows Startup folder with a randomized filename — meaning it would run every time the victim logged into their computer. It also checked for certain conditions before executing: it skipped non-Windows platforms, skipped continuous integration environments (where automated security scans might be running), and skipped sessions where a debugger was attached. This anti-analysis awareness is a hallmark of professional-grade malware.

Every variable name in the script used innocent-sounding telemetry terminology: telemetryEndpoint, traceToken, metrics_startup_. A casual glance at the file would suggest it was an analytics or monitoring tool.
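The gating logic described above can be sketched in a few lines of Node.js. This is a hypothetical reconstruction, not the actual init.js code; the function name deliberately mimics the telemetry-style camouflage the report describes.

```javascript
// Hypothetical sketch of the pre-execution checks described above.
// The innocuous "telemetry" naming mirrors the camouflage in the real script.
function telemetryEnabled() {
  if (process.platform !== "win32") return false;   // skip non-Windows platforms
  if (process.env.CI) return false;                 // skip CI environments
  if (process.execArgv.some((a) => a.startsWith("--inspect"))) {
    return false;                                   // skip if a debugger is attached
  }
  return true;
}
```

Each check costs the attacker nothing on a real victim's machine but reliably filters out the automated sandboxes and analyst workstations most likely to catch the payload.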

Layers 2 Through 7: The Art of Looking Like Noise

The downloaded batch file was 1,653 lines long. Of those, only about 21 lines did anything meaningful. The rest was an elaborate performance of complexity designed to confuse both human analysts and automated security scanners.

What Is Obfuscation and Why Does It Work?

Obfuscation is the practice of making code difficult to understand without changing what it does. Security tools often look for known-dangerous strings — words like “powershell,” “malware,” or “AmsiScanBuffer” — in files they scan. By hiding those strings behind layers of encoding and fragmentation, an attacker can prevent signature-based detection. Human analysts face a similar problem: a file that looks like random gibberish is harder and slower to reverse-engineer than one with readable code, buying the attacker time.

Veracode identified seven distinct obfuscation techniques stacked in this single batch file:

Ghost Variables

Every command was disguised by inserting references to environment variables that did not exist. In Windows batch scripting, an undefined variable resolves to an empty string at runtime, so the text is simply ignored. A line like %wkQBXHZ%set varname=value becomes simply set varname=value when the script runs. No two ghost variable names were alike across the entire 1,653-line file — a deliberate choice to prevent pattern-matching analysis tools from finding commonalities.

Payload Fragmentation

The actual malicious command was split across 909 separate variables, each holding two to eight characters. Only at execution time were they concatenated into a single command. Because no single variable contained enough characters to be recognized as a known-dangerous string, signature scanners saw nothing alarming.
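The evasion principle can be shown in a few lines — a toy sketch, not the attacker's actual 909 fragments:

```javascript
// A "dangerous" command split into short fragments, as in the batch file:
// no individual fragment matches a signature, but the runtime join does.
const fragments = ["power", "shell", " -w h", "idden"];
const looksDangerous = (s) => s.toLowerCase().includes("powershell");

console.assert(fragments.every((f) => !looksDangerous(f))); // each piece scans clean
const command = fragments.join("");                         // assembled only at run time
console.assert(looksDangerous(command));                    // the whole does not
```

A scanner that only inspects the file's static contents sees 909 harmless snippets; the dangerous string exists only in memory, for an instant, at execution time.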

REM Comment Noise

219 lines of innocuous-looking comments scattered random words throughout the file: “raven monsoon galaxy glacier bright secure swift nova ice.” These inflated the file size and raised the entropy profile of the file in ways designed to trigger false positives in tools that flag high-entropy content, sending analysts on irrelevant hunts.

Decoy Base64

51 variables contained base64-encoded strings that decoded to harmless commands like “echo nexus && pause.” These were never executed. They existed solely to trigger security alerts, wasting analysts’ time investigating content that did nothing.

Base64 Word Lists

53 additional variables held base64-encoded English word lists — nature terms, mythology references, and science vocabulary. Like the decoy payloads, these served no functional purpose and existed purely to increase the volume of suspicious-looking content an analyst would need to sort through.

Junk Strings

Dozens of variables contained random alphanumeric strings between 1,000 and 3,000 characters long, with fake base64 padding (“==”) spliced into the middle. They were not valid base64 and decoded to nothing, but their presence further inflated the file and added visual noise to any automated analysis.

Randomized Variable Names

Every variable name in the file was meaningless gibberish: mUuCpEbawrsmMHS, BYjdGABxDWzfVdntjsL, zPXIYTTlY. No semantic meaning could be derived from any name in the entire file, preventing analysts from using naming conventions to understand the code’s structure.

What the Batch File Actually Did

Stripped of all the noise, the batch file performed four actions. First, it copied itself to the Windows AppData folder for persistence. Second, it checked whether it was already running with administrator privileges. Third, if it lacked those privileges, it executed a UAC (User Account Control) bypass using the fodhelper.exe method — a well-documented technique categorized by MITRE under T1548.002 (Abuse Elevation Control Mechanism: Bypass User Account Control) in which the malware hijacks a Windows registry protocol handler to trick a legitimate auto-elevating Windows binary into launching the attacker’s code with elevated rights, all without a UAC prompt appearing. Fourth, now running as an administrator, it concatenated all 909 payload fragments and executed them as a PowerShell command in a hidden window.

JavaScript Type Coercion: A Weaponized Language Feature

Before examining the next obfuscation layer, it is worth understanding JavaScript type coercion, because the attack’s earlier layers exploited this language feature in a remarkable way.

JavaScript is a weakly typed language, which means variables do not have fixed types. A variable can hold a number one moment and a string the next. When you perform operations on values of different types, JavaScript automatically converts them to a compatible type rather than throwing an error. This automatic conversion is called type coercion, or implicit type casting.

A simple example: if you write the code 5 + "hello" in JavaScript, rather than crashing with a type error, JavaScript converts the number 5 into the string “5” and produces the result “5hello.” Type coercion produces some genuinely strange results — the expression "" + {} converts the empty object {} to the string "[object Object]", and this exact trick appears in the Pulsar RAT campaign.
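Both coercion results from the text can be verified directly in Node.js, along with the reason attackers care: coerced strings are a source of letters that never appear as literals in the code.

```javascript
// Coercion examples from the text.
console.assert(5 + "hello" === "5hello");       // number coerced to string
console.assert("" + {} === "[object Object]");  // object coerced via toString()

// Why attackers care: individual characters can be harvested by index,
// so words can be assembled without ever typing them as literals.
const s = "" + {};                    // "[object Object]"
console.assert(s.slice(1, 7) === "object"); // the word "object", never typed
```

Stack enough of these harvested characters together and an entire malicious string can be built from expressions that contain no recognizable text at all.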

warning

In June 2025, Veracode documented an earlier campaign by the same threat actor involving a package called solders. In that attack, the initial JavaScript payload was written entirely using Japanese Hiragana and Katakana characters as variable names, making the file appear to be random gibberish to any English-language analyst or scanner. The malicious strings were entirely assembled at runtime from fragments derived through type coercion — the same technique described above. That campaign also used a twelve-stage attack chain ending in Pulsar RAT delivery, confirming a pattern of escalating sophistication from this threat actor.

Steganography: Hiding a Weapon Inside a Photograph

Steganography is the practice of concealing a message or file inside another file in a way that is not obvious to an observer. The word comes from the Greek steganos (covered) and graphia (writing). Unlike encryption, which scrambles data so that it is unreadable, steganography hides the fact that hidden data exists at all. The carrier file — in this case, a PNG image — looks completely normal.

How PNG Images Store Color

Every pixel in a PNG image is defined by three numerical values: one for Red, one for Green, and one for Blue — the RGB color model. Each value is an 8-bit number, ranging from 0 to 255. Classic steganography exploits a perceptual fact: the human eye cannot reliably distinguish a pixel with a channel value of 200 from one with 201, so hidden data can be tucked into the least significant bits of each channel without visibly altering the image. This attack used a blunter variant of the same idea — each color channel was simply overwritten with a full byte of the hidden payload — sacrificing visual subtlety but keeping the file a perfectly valid PNG that scanners and hosting services treat as an ordinary image.

The Attack’s Steganographic Scheme

Veracode’s analysis revealed that the attackers used a straightforward but effective encoding scheme. The first two pixels of the image were used to store the total size of the hidden payload as a 32-bit unsigned integer. From the third pixel onward, each pixel’s R, G, and B channels each carried one byte of the actual payload, read left-to-right, top-to-bottom across the image.
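The extraction described above can be sketched as follows, treating the image as a flat array of channel bytes. The report says the 32-bit length lives in the first two pixels (six channel bytes); which four of those bytes hold the integer, and their byte order, are assumptions here.

```javascript
// Decode a payload hidden with the scheme described above.
// `rgb` is a flat [R,G,B, R,G,B, ...] array, left-to-right, top-to-bottom.
function extractPayload(rgb) {
  // Payload size: a 32-bit unsigned integer in the first two pixels
  // (read little-endian from the first four channel bytes — an assumption).
  const len = (rgb[0] | (rgb[1] << 8) | (rgb[2] << 16) | (rgb[3] << 24)) >>> 0;
  // From the third pixel onward, every channel carries one payload byte.
  return rgb.slice(6, 6 + len);
}

// Capacity check against the report's numbers: a 41x41 image has
// (41*41 - 2) * 3 = 5,037 usable bytes, enough for the 4,903-byte script.
console.assert((41 * 41 - 2) * 3 >= 4903);
```

The same arithmetic confirms the larger image: (141 × 141 − 2) × 3 ≈ 59,637 bytes of capacity for the 59,176-byte compressed payload.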

Veracode extracted payloads from two separate images. A small 41 by 41 pixel image (just 2.3 kilobytes on disk) contained a 4,903-byte PowerShell script. A larger 141 by 141 pixel image (67 kilobytes) contained 59,176 bytes of compressed data that expanded into a 136-kilobyte Windows executable.

critical

Both images were hosted on ImgBB, a legitimate free image hosting service. From the perspective of a network firewall or security monitoring tool, the traffic looked like an application downloading ordinary images from a legitimate hosting platform — a completely normal and expected type of network activity.

DLLs, .NET Assemblies, and Process Hollowing

A DLL (Dynamic Link Library) is a file format used on Windows that contains compiled code and data that multiple programs can use simultaneously. Think of a DLL as a book in a library — any number of programs can check it out and use the information without each needing its own copy. The malware in this attack was entirely written in .NET and delivered as .NET assemblies — files that look like DLLs but execute as managed code inside the .NET runtime.

Key DLLs that appear in this attack include amsi.dll, which implements the Windows Anti-Malware Scan Interface, and kernel32.dll, which provides core Windows API functions including GetProcAddress and VirtualProtect. Notably, the .NET loader never calls these functions through standard imports. Instead, it manually parses the PE export table of loaded modules, resolving function addresses by matching a custom rotating hash (a DJB2 variant) against each export name. This means the Import Address Table contains no suspicious API references, and static analysis tools scanning for calls to VirtualProtect, GetThreadContext, or AddVectoredExceptionHandler in the imports will find nothing.
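The hash-matching idea can be sketched like this. The real loader used a custom rotating DJB2 variant whose constants were not published, so the classic DJB2 below and the toy export table are illustrative stand-ins:

```javascript
// Classic DJB2: h = h*33 + c, truncated to 32 bits.
// The malware used a modified rotating variant of this algorithm.
function djb2(name) {
  let h = 5381;
  for (let i = 0; i < name.length; i++) {
    h = (h * 33 + name.charCodeAt(i)) >>> 0; // keep as 32-bit unsigned
  }
  return h;
}

// In the real loader the target hash is a precomputed constant, so the
// string "VirtualProtect" never appears anywhere in the binary — only
// its hash does. The resolver walks the export names and compares hashes.
function resolveByHash(exportTable, targetHash) {
  for (const name of Object.keys(exportTable)) {
    if (djb2(name) === targetHash) return exportTable[name];
  }
  return null;
}
```

A static scanner looking for imports of VirtualProtect finds only an opaque 32-bit number; the function address materializes solely at run time.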

What Is Process Hollowing?

Process hollowing is a code injection technique in which an attacker creates a legitimate Windows process in a suspended state and then replaces the code inside that process’s memory with malicious code before allowing it to resume. From the outside, the process appears completely legitimate — its name in Task Manager is correct, its file path is correct, its parent process is correct. But the code actually executing inside the process is entirely the attacker’s. In this attack, conhost.exe — the Windows console host — was used as the hollowing target.

Bypassing Windows Security: AMSI and the Hardware Debug Register Trick

AMSI (Anti-Malware Scan Interface) is a Windows security feature that provides a standardized interface through which applications — including PowerShell and the Windows Script Host — can submit content to the installed antivirus solution for scanning before executing it. Because AMSI is such an effective defense, it has become a primary target for attackers. The Pulsar RAT attack stacked multiple distinct AMSI bypasses across its stages: a PowerShell memory patch on one antivirus-specific path, plus three further techniques built into the .NET loader itself.

The PowerShell AMSI Patch (Small Image Payload)

When Malwarebytes or F-Secure was detected, the attack took an alternate path: it downloaded the smaller PNG image, which contained a PowerShell script with an AMSI bypass. This script located AmsiScanBuffer in memory using dynamic .NET delegates built via Reflection.Emit (avoiding the static P/Invoke declarations that antivirus products would flag), used VirtualProtect to make the function’s memory writable, and overwrote its first six bytes with instructions that immediately return the E_INVALIDARG error code. After this patch, every call to AmsiScanBuffer reports “invalid argument” without scanning anything, and whatever runs next escapes antivirus inspection.
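A six-byte patch matching that description is widely documented in public AMSI-bypass writeups: `mov eax, 0x80070057` (E_INVALIDARG) followed by `ret`. Whether this campaign used these exact bytes is an assumption; the sketch below just shows that the documented byte sequence decodes to the error code the text describes.

```javascript
// The widely documented six-byte AmsiScanBuffer patch:
//   B8 57 00 07 80   mov eax, 0x80070057  ; E_INVALIDARG
//   C3               ret
const patch = [0xb8, 0x57, 0x00, 0x07, 0x80, 0xc3];

// The immediate operand is the little-endian dword following the 0xB8 opcode:
const hresult =
  (patch[1] | (patch[2] << 8) | (patch[3] << 16) | (patch[4] << 24)) >>> 0;
console.assert(hresult === 0x80070057); // E_INVALIDARG
console.assert(patch.length === 6);     // exactly six bytes overwritten
```

Six bytes is all it takes: the function returns an error before any scanning logic runs, and callers treat the failed scan as non-blocking.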

note

Notably, when ESET was detected on the target system, the malware simply stopped execution entirely. This suggests the attacker had no working evasion for ESET and preferred to abort rather than risk detection — a telling sign that this was tested against specific products rather than deployed blindly.

The .NET Loader’s Triple AMSI Bypass

The .NET assembly extracted from the larger image carried its own set of three AMSI bypass techniques, executed in sequence at startup for redundancy. Veracode’s decompilation revealed all three:

Bypass 1: AMSI Memory Scan

The first technique scanned the loaded amsi.dll module in memory for the byte pattern corresponding to the string “AmsiScanBuffer” and zeroed it out, preventing AMSI from initializing properly. Rather than patching the function’s code or intercepting its execution, this approach prevents the function from being located at all.

Bypass 2: Hardware Breakpoints and the Debug Registers

The second bypass technique is substantially more sophisticated. Rather than modifying the AMSI DLL’s code at all, it achieves the same effect by exploiting the x86 processor’s built-in hardware debugging infrastructure.

The x86 processor architecture includes special-purpose registers dedicated to hardware debugging: DR0, DR1, DR2, and DR3 are Address Breakpoint Registers, each holding a single memory address where the processor should pause execution. DR7 is the Debug Control Register, which controls which breakpoints are active and what kind of access triggers them. DR6 is the Debug Status Register, recording which breakpoint fired.

The attacker’s code registered a Vectored Exception Handler, then set DR0 to the memory address of AmsiScanBuffer and configured DR7 for instruction execution breakpoints. From that point forward, whenever any code attempted to call AmsiScanBuffer, the processor would halt execution before the function could run and raise a hardware debug exception. The attacker’s exception handler then modified the instruction pointer in the thread context to skip past AmsiScanBuffer and set the return value to indicate a “clean” result.

“No memory in amsi.dll is ever modified. Integrity-checking defenses see nothing. The assembly even names the method HardwareBreakpointAmsiPatchHandlerMethod, which is almost refreshingly honest.” — Veracode Threat Research, February 2026

Bypass 3: Direct Memory Patching with NOP Sled

The third technique used VirtualProtect to make the AmsiScanBuffer memory writable, then overwrote the function with an 11-byte NOP sled followed by a RET instruction (90 90 90 90 90 90 90 90 90 90 90 C3), and restored the original memory protection. This causes AmsiScanBuffer to immediately return without executing any scanning logic.

After downloading and decrypting the final Pulsar RAT payload, the loader re-patched AMSI a fourth time before invoking the payload’s entry point — ensuring that even if one of the earlier patches had been partially restored during the download process, the final payload would still execute unscanned.

The Final Payload: Pulsar RAT

Pulsar RAT is a Remote Access Trojan derived from Quasar RAT, a well-known open-source RAT. Once installed, Pulsar provides the attacker with:

  • Keylogging, webcam access, and microphone access
  • Credential theft via an integrated module called Kematian Grabber, which harvests credentials from browsers, VPNs, FTP clients, email clients, and password managers
  • File management and remote command execution
  • Cryptocurrency wallet clipping — intercepting copied wallet addresses and replacing them with attacker-controlled addresses
  • Reverse proxy support and a plugin system for modular customization
  • Hidden virtual network computing (HVNC) for invisible remote desktop access

The malware communicates over TLS-encrypted connections, using BCrypt encryption and the MessagePack binary serialization protocol for efficient command transmission. It retrieves its command-and-control server address from public pastebin sites rather than hardcoding it, allowing the attacker to change infrastructure without pushing a new malware update.

Pulsar includes checks to detect security researcher analysis environments, inspecting disk labels for strings associated with virtualization platforms like QEMU. It also implements a custom SSL certificate validation callback that accepts all certificates, allowing it to communicate through TLS inspection proxies without triggering certificate errors. The loader contains per-antivirus persistence strategies, with dedicated methods for Avast alongside the branching logic for Malwarebytes, F-Secure, and ESET — revealing that the attacker tested their payload against at least four major antivirus products.

The Complete Twelve-Layer Attack Chain

  1. Layer 1: The package.json postinstall hook executes init.js automatically on installation.
  2. Layer 2: init.js downloads a batch file from a Codeberg repository and writes it to the Windows Startup folder.
  3. Layer 3: The batch file uses 909 fragmented variables and seven obfuscation layers to reconstruct and execute a PowerShell command.
  4. Layer 4: The PowerShell command uses WMI to enumerate installed antivirus products and selects the appropriate attack path.
  5. Layer 5: PowerShell downloads a PNG image from ImgBB.
  6. Layer 6: A steganographic extraction algorithm reads payload bytes from the RGB pixel values of the image.
  7. Layer 7: The extracted data is a .NET executable loaded into memory using .NET reflection — never written to disk.
  8. Layer 8: The .NET assembly applies three AMSI bypass techniques in sequence to disable antivirus scanning.
  9. Layer 9: The loader downloads a third PNG image from yet another ImgBB URL (discovered via decrypted strings within the .NET assembly).
  10. Layer 10: A third steganographic extraction reads an encrypted payload from this image’s pixels.
  11. Layer 11: The payload is decrypted using AES or TripleDES (with keys derived via SHA-256 hashing) and decompressed with GZip.
  12. Layer 12: The decrypted assembly — Pulsar RAT, a 976 KB .NET assembly — is injected into a hollowed-out conhost.exe process and begins executing.

Impact and Defensive Recommendations

The primary targets of this attack were software developers who install third-party libraries, particularly those working on Windows systems. Veracode noted that the buildrunner-dev campaign bore strong similarities to a June 2025 attack they had previously documented, involving packages called solders and @mediawave/lib (first published around May 2025). Veracode linked the campaigns based on shared techniques: both used ImgBB for hosting steganographic images, both employed pixel-based payload extraction, and both delivered Pulsar RAT as the final payload. Those earlier packages were published by an npm user named codewizguru, who registered the account in April 2025, and had accumulated approximately 320 weekly downloads before being identified. Any.Run estimates that remediating a Pulsar RAT infection requires 200 to 500 person-hours of effort.

  1. Package vetting matters: Use dependency review tools that flag new, unvetted, or suspicious packages before they are installed. The buildrunner-dev package was published by a brand-new account with no history.
  2. Network monitoring: Watch for unexpected outbound connections to image hosting services from developer workstations, particularly connections that transfer large amounts of data.
  3. Memory-based detection is essential: Because the malware never wrote its final payload to disk, traditional file-based antivirus was ineffective. EDR platforms that monitor process behavior and suspicious API call sequences are significantly more effective.
  4. PowerShell logging and AMSI protection: Ensure that PowerShell script block logging is enabled and that AMSI integrity is monitored.
  5. Developer security awareness: The single most effective prevention was not installing the package in the first place. Training developers to scrutinize new dependencies represents meaningful risk reduction.

Conclusion: The Sophistication Gap

What the Pulsar RAT NPM attack ultimately demonstrates is a significant sophistication gap between attacker capability and the intuitions many organizations apply to supply chain risk. The final payload is based on freely available open-source code. But the delivery mechanism — with its twelve layers of obfuscation, its steganographic payload delivery, its CPU-level AMSI bypass that modifies no files and leaves no artifacts, and its per-antivirus evasion logic — represents a level of engineering investment that substantially exceeds what many developer security training programs prepare people to recognize.

Understanding these techniques in depth is no longer exclusively the domain of penetration testers and malware analysts. It is foundational knowledge for anyone responsible for defending modern software development environments.

“The journey from a single cryptic file to a full-blown RAT serves as a potent reminder that a simple npm install can expose an organization to extreme risk. The sheer depth of this attack underscores the critical need for automated, deep code analysis and continuous vigilance in protecting our development pipelines.” — Veracode Threat Research

note

Veracode’s original report includes indicators of compromise (IOCs) such as the malicious NPM package name, payload URLs for the steganographic images hosted on ImgBB, and the Codeberg repository URL. Defenders seeking actionable IOCs for detection rules should consult the primary Veracode report linked below.

Sources and Further Reading

February 2026 Campaign (buildrunner-dev)

  • Veracode Threat Research. “Hiding in Plain Pixels: A Malicious NPM Package Hides .NET Malware Inside Images.” February 19, 2026. veracode.com
  • Any.Run. “Pulsar RAT: Malware Overview.” January 19, 2026. any.run
  • GBHackers. “Pulsar RAT Abuses Memory-Only Execution and HVNC for Stealthy Remote Takeover.” January 19, 2026. gbhackers.com

June 2025 Campaign (solders / @mediawave/lib — same threat actor)

  • Veracode Threat Research. “Down the Rabbit Hole of Unicode Obfuscation.” June 9, 2025. veracode.com
  • SC Media. “Complex npm attack uses 7-plus layers of obfuscation to spread Pulsar RAT.” June 9, 2025. scworld.com
  • The Stack. “‘Absurd’ 12-step malware dropper spotted in npm package.” June 10, 2025. thestack.technology

Technical Reference

  • Intel Corporation. Intel 64 and IA-32 Architectures Software Developer’s Manual, Volume 3: System Programming Guide. Chapter 17: Debug, Branch Profile, TSC, and Intel Resource Director Technology Features.
  • Wikipedia. “x86 debug register.” wikipedia.org
  • MDN Web Docs. “Type coercion.” Mozilla Developer Network. developer.mozilla.org
  • MITRE ATT&CK. “Abuse Elevation Control Mechanism: Bypass User Account Control.” T1548.002. attack.mitre.org
  • MITRE ATT&CK. “Process Injection: Process Hollowing.” T1055.012. attack.mitre.org
  • MITRE ATT&CK. “Obfuscated Files or Information: Steganography.” T1027.003. attack.mitre.org
  • MITRE ATT&CK. “Impair Defenses: Disable or Modify Tools.” T1562.001. attack.mitre.org
  • MITRE ATT&CK. “Supply Chain Compromise: Compromise Software Supply Chain.” T1195.002. attack.mitre.org
— end of briefing