
How Juul Labs Rebuilt Its Security Operations Using Abstract

As COO of Abstract and someone with years of experience in security operations, I know how challenging it is to modernize security in fast-moving environments. When the team at Juul Labs came to us, they weren’t just looking for visibility. They needed a practical, high-leverage way to simplify their stack and empower their team to act faster. Here’s how we helped make that happen.

Improving Juul Labs’ Security Posture

Juul Labs, a leader in consumer electronics manufacturing, was looking to modernize its security posture and IT infrastructure. Under the leadership of Pablo Quiros, VP & Global Head of Security and IT, they needed a solution with powerful security capabilities to support their growth, all while operating with a lean team.

“There had been multiple attempts to build visibility into our systems. What we inherited was outdated, overlapping, and broken logging infrastructure.”
Pablo Quiros, CISO at Juul Labs

The Need for a Force Multiplier

Given the lean nature of his team and Juul’s ambitious growth plans, Quiros recognized the need for a solution that could provide powerful security capabilities without requiring a massive headcount or heavy operational lift.

THE CHALLENGE

  • Limited visibility into telemetry and attack surfaces
  • Difficulty scaling threat detection across a small but specialized team
  • Complex, legacy logging systems that were expensive and inefficient
  • Slow, resource-intensive processes for pipeline creation and data onboarding
  • Rigid tools from legacy vendors that didn’t adapt to Juul’s modern architecture

Juul Labs needed to consolidate its logging and detection systems, gain actionable insight across its infrastructure, and do so in a way that maximized agility and minimized cost.

Enter Abstract

After evaluating the market, Quiros and his team selected Abstract Security to support their modernization journey.

"The moment we saw a pipeline being built in Abstract with drag-and-drop simplicity, our logging engineer lit up. That kind of usability is rare in this space — and for a small, high-impact team like ours, it’s invaluable."
Pablo Quiros 

Juul Labs chose Abstract’s Pipelines and Analytics products to simplify their security data infrastructure and improve operations without replacing their existing SIEM and saw results immediately. Just an hour after being deployed, all of Juul’s systems and data were up and running, delivering value from day one.

Leveraging the Abstract Security Platform, Juul Labs was able to:

  • Quickly deploy data pipelines, decoupling data destinations from sources, enabling improved data quality, threat intel enrichment, and ease of migration
  • Cut infrastructure complexity and operational overhead
  • Integrate seamlessly with existing tools, including Splunk
  • Reduce onboarding time and training requirements

Proven Results and Return On Investment

Though still early in the deployment, Juul Labs is already seeing strong results:

Increased Operational Efficiency

“What previously took four days can now be done in a single day, freeing up engineering time for high-value tasks.”

Reduced Time to Value

“The Abstract platform was delivering insights within an hour of deployment.”

Improved Visibility & Detection

Juul now has “full understanding of our attack surface — who, what, where, when, and why.”

Positive Early ROI

Juul expects infrastructure costs to drop by up to 75% over the next year as legacy systems are phased out.

Why Abstract Security

TIME
“Time is our most valuable resource. Abstract gives us time back — in deployment, in operations, in impact.”
USABILITY
“You don’t need a six-month onboarding. It just works.” 
COST-TO-VALUE RATIO
“The infrastructure is cheaper to run, easier to maintain, and delivers real outcomes faster.”

Looking Towards the Future

Quiros sees Abstract Security as more than a platform — it's a strategic partner.

“We like that Abstract is young and responsive. We feel like we have a voice in shaping where the product goes.”

Juul Labs is also exploring Abstract’s analytics module as part of its broader push to adopt modern, adaptable technologies.

“We’re always looking for ways to extend our team’s capabilities without increasing headcount,” said Quiros. “The analytics capabilities of the platform will enhance our detection and analytics workflows and increase our team’s capacity.”

Using Abstract, Juul expects infrastructure costs to drop by up to 75% over the next year as legacy systems are phased out. Juul Labs is already seeing strong results in their operational efficiency, improved visibility and detection, reduced time to value, and return on investment (ROI). 

What previously took four days can now be done in less than a day, freeing up resources for other high-value tasks. Additionally, Juul now has a full understanding of their attack surface—with clear answers to the “who, what, where, when, and why” behind security events.

“Implementing Abstract Security brought immediate clarity and efficiency to our security operations. Their platform provided immediate visibility into our security environment, making it far easier to detect anomalies and understand our telemetry in real time. What stood out most was the seamless deployment—it was up and running in under an hour, delivering value from day one. As a lean team, we value tools that amplify our effectiveness without adding complexity. Abstract Security helps us cut through the noise and focus on what matters. The intuitive UI, real-time search capabilities, and the ability to pivot quickly between data sources have significantly improved our detection and response capabilities.

This isn’t just another tool—it’s a force multiplier. 

Abstract has helped us rethink how we approach security operations, allowing us to be proactive rather than reactive.”

Juul Labs is one of several modern enterprises rethinking their approach to security operations with Abstract. If you’re looking to simplify your data architecture, increase visibility, and move faster without adding complexity, we’d be happy to show you how.

Chris Camacho
Jul 9, 2025
Product

10 Cybersecurity Focus Areas That Actually Move the Needle

When someone steps into a security leadership role, there’s immediate pressure to prove value. You’re expected to assess risk, build relationships, and start showing momentum, all before the 90-day mark. That’s why I wanted to highlight this post from my friend Frank McGovern.  

Frank has worked across multiple industries, launched Blue Team Con, and advised teams on how to modernize security programs in the real world. His take on what actually moves the needle in your first 180 days is one of the most practical breakdowns I’ve seen.  

We asked if we could include it in this C2 Corner series. The ideas are Frank’s, written in his voice, with a few added thoughts from what we see at Abstract Security, especially around log strategy and detection coverage.  

If you’re leading a security program, this is a strong foundation to start from.

1. Budget and Staffing  

Before making any promises, align with finance and HR. Know your current headcount, available budget, and where the gaps are, especially around SIEM and data pipeline costs.

Quick note: Teams that come in with a clean data strategy and visibility into SIEM spend tend to get faster support when budget season hits.

2. Documentation and Governance


3. SOC Transformation  

Whether internal or MSSP-driven, the SOC needs tuning. That includes improving SLAs, tightening escalations, and refining detections.

4. Identity and Access Management

Get MFA enforced, consolidate identity platforms, and review AD hygiene. These are fast credibility wins with big risk reduction.

5. Posture Assessments

Run internal scans, check cloud configs, and get a full picture of exposure before the next pen test or audit brings it to light.

6. Email Security

Still the number one way attackers get in. Lock down SPF, DKIM, and DMARC, and reduce noise in your SOC at the same time.
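As a quick, hedged illustration of how you might baseline where a domain stands: the sketch below queries SPF and DMARC records using the third-party dnspython package. The domain, and the choice of tool, are placeholders rather than a prescribed approach.

# Check a domain's SPF and DMARC TXT records with dnspython
# (pip install dnspython). "example.com" is a placeholder.
import dns.exception
import dns.resolver

def check_email_auth(domain: str) -> dict:
    """Return the SPF and DMARC records for a domain, if published."""
    results = {"spf": None, "dmarc": None}
    try:
        for rr in dns.resolver.resolve(domain, "TXT"):
            txt = rr.to_text().strip('"')
            if txt.startswith("v=spf1"):
                results["spf"] = txt
    except dns.exception.DNSException:
        pass  # no TXT record or lookup failure; leave as None
    try:
        for rr in dns.resolver.resolve(f"_dmarc.{domain}", "TXT"):
            txt = rr.to_text().strip('"')
            if txt.startswith("v=DMARC1"):
                results["dmarc"] = txt
    except dns.exception.DNSException:
        pass
    return results

print(check_email_auth("example.com"))

A missing or permissive DMARC policy is a fast, visible win to report in your first weeks.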

7. Firewall and Segmentation Reviews

Stale firewall rules can stick around for years. Cleaning them up early sets the stage for proper segmentation later.

8. Application Security

Check your SDLC. Make sure AppSec tools are in place, being used, and feeding results back to dev teams for action.

9. Log Strategy and Detection Engineering

More logs do not mean more security. Focus on visibility. Pick high-value data sources and align detections to real threats.

From Abstract:
Modern data pipeline platforms play a crucial role here. At Abstract, we focus on helping teams route, filter, and enrich the logs that matter most, ensuring the right data reaches the right tools, fast. While some organizations opt for platforms like Cribl for broader routing needs, Abstract is purpose-built for security teams. Either way, a modern log strategy delivers quick wins and long-term value.

10. Third-Party Risk Management

Keep the process lightweight and focused on your most critical vendors. Contracts and SOC 2 reviews cover most of what you need at this stage.


After these ten are under control, you can shift attention to data classification, insider threat, and cyber insurance. Those are important, but don’t come first.  

Final Thoughts:

The first six months are about earning trust and driving results. Focus on the fundamentals. Deliver visible outcomes. The rest gets easier from there.  

Big thanks to Frank for letting us share this playbook. If you’re stepping into a new security role or helping someone who is, this list is a great place to start.

We’ll be back in two weeks with the next C2 Corner post. Until then, let us know what you’d add to the list or how you’d prioritize differently.  

Want to dive deeper?

Grab our Applied Security Data Strategy eBook or sign up for future C2 Corner posts here.

Read Frank's full blog here

Chris Camacho
Jul 2, 2025
C2 Corner

Automating macOS Incident Response: DFIR-as-Code in Action Against AppleProcessHub

AppleProcessHub and the Application of DFIR-as-Code in macOS Incident Response

Digital forensics has historically relied on manual collection, inspection, and interpretation—an approach ill-suited for modern operational demands. With increasingly distributed environments and rapid adversary movement, a more scalable, consistent methodology is needed. DFIR-as-Code proposes such a methodology. By encoding forensic and response workflows into reusable logic, organizations can transition from reactive, one-off analyses to systematic, verifiable, and expedited incident response.

We introduced the concept of DFIR-as-Code in the previous post of this series (https://www.abstract.security/blog/dfir-as-code-scaled-and-repeatable-incident-response). This approach programmatically delivers repeatable forensic and response actions, collecting consistent data across incidents, analysts, and teams.

A recent case involving the AppleProcessHub malware provides a practical context in which to examine the application of DFIR-as-Code within a macOS environment. This malware specimen, catalogued in public repositories and identified as both an infostealer and a trojan, offers a compelling use case for standardized forensic automation.

AppleProcessHub Overview

AppleProcessHub is a malicious Mach-O binary that, upon execution, deploys a Bash script. A deep dive into this malware is available from Kandji (https://www.kandji.io/blog/macos-appleprocesshub-stealer). The script exfiltrates a set of commonly targeted configuration and credential files, including:

  • Shell history (.bash_history, .zsh_history)
  • Git configuration in .gitconfig
  • SSH configurations and key files
  • macOS keychain databases (Login.keychain-db)

The nature of these targets underscores a central concern in most modern intrusions: credential theft. Once exfiltrated, these credentials enable lateral movement, privilege escalation, or external access from attacker-controlled infrastructure. For organizations reliant on manual triage, identifying such theft is a race against time—with inconsistent results. By contrast, DFIR-as-Code enables immediate detection, categorization, and escalation of these indicators.

Instrumenting the Response: Codified Triage of AppleProcessHub Artifacts

Using DFIR-as-Code principles, a security team can define specific countermeasures that automatically process secondary data sources such as filesystem images and memory captures. In the case of AppleProcessHub, the following countermeasures are relevant:

1. Credential Artifact Detection with Yara

YARA is a powerful rule language used to describe file contents and attributes. Frequently used to describe malware and their corresponding families, YARA can also be leveraged to detect credential files. Below is an example of a YARA rule to detect SSH private keys:

rule SSH_Private_Keys { 
    meta: 
        description = "Detects SSH private keys in files" 
        author = "Justin Borland – Abstract Security" 
        date = "2025-01-01" 
        reference = "https://datatracker.ietf.org/doc/html/rfc4253#section-6.6" 
    strings: 
        $ssh_key_begin = "-----BEGIN OPENSSH PRIVATE KEY-----" 
        $rsa_key_begin = "-----BEGIN RSA PRIVATE KEY-----" 
        $dsa_key_begin = "-----BEGIN DSA PRIVATE KEY-----" 
        $ecdsa_key_begin = "-----BEGIN EC PRIVATE KEY-----" 
    condition: 
        any of ($ssh_key_begin, $rsa_key_begin, $dsa_key_begin, $ecdsa_key_begin) 
} 

You can use this methodology to describe dozens of different types of credential files. If you find these important files on a compromised system, you should immediately audit and revoke them to prevent further credential abuse. A few examples of credentials worth alerting on include:

SSH_Private_Keys 
AWS_Secrets 
GCP_Service_Account_Keys 
Kubernetes_Secrets 
Env_Files_Credentials 
Okta_API_Tokens 
Duo_Integration_Keys 
Azure_API_Keys 
Google_API_Keys 
Slack_API_Tokens 
GitHub_Tokens 
General_Credential_Theft 

These YARA rules are available in our public GitHub repository here.

Once codified, this logic can be systematically applied to forensic captures, ensuring that credential exposure is identified early in the investigative process. Countermeasures can then be tied to response actions such as initiating credential rotation or notifying appropriate identity governance systems.
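As a rough illustration of that codification, the sketch below sweeps a mounted forensic image with a compiled rule set using the yara-python bindings. The rules file, mount point, and alerting step are placeholders for illustration, not the platform's actual implementation.

# Sketch: sweep a mounted forensic image with credential YARA rules
# via the yara-python bindings (pip install yara-python).
# RULES_PATH and MOUNT_POINT are placeholders.
import os
import yara

RULES_PATH = "credential_rules.yar"   # compiled set of rules like the one above
MOUNT_POINT = "/mnt/evidence"         # read-only mount of the imaged filesystem

rules = yara.compile(filepath=RULES_PATH)

findings = []
for root, _dirs, files in os.walk(MOUNT_POINT):
    for name in files:
        path = os.path.join(root, name)
        try:
            for match in rules.match(path, timeout=30):
                findings.append((match.rule, path))
        except (yara.Error, OSError):
            continue  # unreadable or special files are skipped, not fatal

for rule, path in findings:
    print(f"{rule}: {path}")  # hand these to alerting / credential rotation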

In this example, the Abstract Security Platform enables alerting on SSH private keys found on the imaged system. This allows the system to immediately identify and escalate the discovery of credentials on a compromised asset.

[Screenshot: alerting criteria for SSH private keys detected on the imaged system]

Instrumenting basic alerting criteria around your secondary data sources and curated countermeasures provides foundational building blocks upon which to expand.

Once deployed, immediate notification and escalation of the affected keys can be generated and actioned, as seen in our Abstract Security findings. This can be expanded to target any credentials which may be present in your environment.

[Screenshot: Abstract Security findings escalating the affected keys]

Immediate detection and triage of potentially compromised credentials will help DFIR teams “shift left” and reduce mean-time-to-remediation, potentially saving them from additional abuse or exploitation.

2. Processing with Plaso

Plaso or similar tools enable automated extraction and categorization of macOS keychain contents. Within a DFIR-as-Code pipeline, these parsers operate on every imaged macOS endpoint, extracting credentials tagged as either macos:keychain:application or macos:keychain:internet.

Below is an example of a Nextcloud macos:keychain:application entry:

{ 
       "__container_type__": "event", 
       "__type__": "AttributeContainer", 
       "account_name": "justin_app-password:https://nc0.mynextcloud-domain.com/:0", 
       "data_type": "macos:keychain:application", 
       "date_time": { 
              "__class_name__": "TimeElements", 
              "__type__": "DateTimeValues", 
              "time_elements_tuple": [ 
                     2023, 
                     1, 
                     9, 
                     2, 
                     11, 
                     18 
              ] 
       }, 
       "display_name": "OS:/Users/justin/Library/Keychains/login.keychain-db", 
       "entry_name": "Nextcloud", 
       "filename": "/Users/justin/Library/Keychains/login.keychain-db", 
       "inode": "-", 
       "message": "Name: Nextcloud Account: justin_app-password:https://nc0.mynextcloud-domain.com/:0", 
       "parser": "mac_keychain", 
       "pathspec": { 
              "__type__": "PathSpec", 
              "location": "/Users/justin/Library/Keychains/login.keychain-db", 
              "type_indicator": "OS" 
       }, 
       "sha256_hash": "adc1ea058a2654995a5404d03b272c5ad491a1b8f85ef4524b83de91b25ed8ce", 
       "ssgp_hash": "2b89b6e8af5a3716bea50770c256eddc243a48732a54c6876209974f2f34d6479f93a8602dc372e38edc7a49a08b9f093b5c7c8c7fcc1694badd609852ee8924927df961daab8070af8d290116f5952cfc2ded9cd5e31494a3154e69464b405dd712307aff4b972d", 
       "timestamp": 1673230278000000, 
       "timestamp_desc": "Content Modification Time", 
       "yara_match": [ 
              "General_Credential_Theft" 
       ], 
       "event_id": "event_1187" 
} 

The goal is twofold: to identify what services may have been accessed with these credentials, and to quantify the risk surface that must be mitigated. Keychain records—containing application or web credentials—are often neglected in manual reviews but are particularly valuable to attackers. Automating their collection and review ensures that no sensitive data type is overlooked.
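A minimal sketch of that automation, assuming the timeline has been exported with psort's json_line output (the file name is illustrative):

# Sketch: filter Plaso output for keychain credential entries.
# Assumes the timeline was exported as JSON lines, e.g.:
#   psort.py -o json_line -w events.jsonl timeline.plaso
import json

KEYCHAIN_TYPES = {"macos:keychain:application", "macos:keychain:internet"}

with open("events.jsonl") as fh:
    for line in fh:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue
        if event.get("data_type") in KEYCHAIN_TYPES:
            # Each hit names a service/account pair that may now be exposed.
            print(event.get("entry_name"), "|", event.get("account_name"))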

During an incident, your team must quickly determine what the compromised system had access to. This enables timely triage, containment, and remediation. DFIR-as-Code aims to minimize wasted time and maximize the speed and depth of analysis.

Below is an example rule that detects macOS keychain entries not built into the system.

[Screenshot: detection rule criteria for non-default macOS keychain entries]

This rule allows us to quickly triage credentials that may have been exposed during an incident and immediately begin containment and remediation. Below is an example of insights and findings generated by rapidly identifying the keychain scope of impact.

[Screenshots: insights and findings from rapid keychain scope-of-impact identification]

3. Persistence Mechanism Identification

In the macOS ecosystem, the launchd subsystem is frequently co-opted by malware for persistence. While default services and agents are well-documented, malware often creates additional .plist files within user or system launch directories such as:

/System/Library/LaunchDaemons/
/Library/LaunchAgents/
~/Library/LaunchAgents/

By deploying comparison rules against known baselines, security teams can isolate non-standard entries that may indicate malicious behavior. These entries can be automatically triaged and flagged for further analysis, reducing the time analysts spend identifying persistence mechanisms in compromised environments.
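Outside of any platform tooling, the baseline comparison itself is simple to codify. A minimal sketch, assuming a version-controlled baseline file of known-good plist paths and a read-only image mount (both names illustrative):

# Sketch: flag launchd .plist entries that deviate from a known-good
# baseline. User-level LaunchAgents would need per-user expansion on
# a real image.
import os

LAUNCH_DIRS = [
    "/System/Library/LaunchDaemons",
    "/Library/LaunchAgents",
    "/Library/LaunchDaemons",
]

def launchd_entries(mount_point: str) -> set:
    """Collect launchd plist paths (image-relative) from a mounted filesystem."""
    entries = set()
    for d in LAUNCH_DIRS:
        full = mount_point + d
        if os.path.isdir(full):
            entries.update(os.path.join(d, f) for f in os.listdir(full))
    return entries

with open("launchd_baseline.txt") as fh:   # one known-good path per line
    baseline = {line.strip() for line in fh if line.strip()}

for plist in sorted(launchd_entries("/mnt/evidence") - baseline):
    print("non-baseline launchd entry:", plist)   # triage these first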

Below is an example rule that detects non-default launchd entries found on the imaged system:  

[Screenshot: detection rule criteria for non-default launchd entries]

This DFIR-as-Code countermeasure is ideal for quickly triaging macOS persistence mechanisms utilizing launchd. Some examples of benign entries, like Zoom, VirtualBox, and GoogleUpdater, can be seen in the figure below.

[Screenshot: benign launchd entries such as Zoom, VirtualBox, and GoogleUpdater]

As with other valuable forensic findings, your team can generate insights that help engineers and analysts track and understand the full scope of the incident. You can quickly route tickets to the appropriate teams for immediate remediation. This approach reinforces the “shift left” methodology by delivering relevant information to analysts promptly, allowing them to make faster, more informed decisions with reduced risk.

[Screenshot: insights routed to engineers and analysts for remediation]

Structured Application: When, Who, What, and How

To standardize the response to AppleProcessHub and other macOS malware intrusions, DFIR-as-Code breaks the incident response process down around the following questions:

  • When: Timestamp correlation and alert ingestion establish the timeline of the incident. 
  • Who/What: The user identities and compute resources that have been impacted. This can include hostnames, usernames, API keys, non-human identities, IP addresses and more. Identifying the Who and What allows for secondary data acquisition via automation and human analysis to understand the scope of intrusion. 
  • How: The operations that occurred in the intrusion. 

Using DFIR-as-Code allows teams to detail each of these areas in a version-controlled and auditable manner, ensuring that forensic techniques are uniformly applied across incidents and time.

Reducing Variability and Human Error

A central concern in incident response is inconsistency between analysts. Two individuals reviewing the same forensic image are unlikely to produce identical findings unless provided with a standardized playbook. DFIR-as-Code mitigates this discrepancy by removing procedural ambiguity. Every analyst inherits the same operational tooling, the same decision logic, and the same forensic lens.

This methodology transforms DFIR into a science—not in abstraction, but in execution. For AppleProcessHub, and for future incidents involving similar tactics, this approach ensures that investigative depth is achieved without sacrificing speed or reproducibility.

Continuous Expansion Through Libraries of Countermeasures

The strength of DFIR-as-Code lies not only in its reproducibility but in its extensibility. Organizations may build libraries of countermeasures tailored to specific platforms or malware families. Over time, these collections evolve into institutional memory—codified, scalable, and easily audited.

For AppleProcessHub and similar macOS threats, relevant countermeasure categories include:

  • Credential detection via YARA and parsing logic
  • Launchd anomaly detection
  • Keychain analysis and service attribution
  • Exfiltration pattern recognition in memory and disk artifacts

Generally this can take many forms:

  • SIEM and Sigma rules for secondary data sources (Plaso output)
  • Advanced analytics using deterministic logic or probabilistic machine learning and AI tooling
  • YARA rules for file and memory images
  • Python scripts for advanced use cases, and response procedures to quickly implement required actions
  • Data models and correlation rules
  • …and more

Below are some examples of DFIR-as-Code countermeasures and response procedures which can be automated to speed up investigations, freeing engineering resources:

Detection Criteria | Countermeasure Type | Data Source | Response Procedure
Any credentials or key files | YARA | Forensic filesystem image | Escalate for credential revocation. Start a secondary case for affected credentials / lateral movement
Injected or hollowed processes | Volatility | Memory dump | Dump process, extract payload, collect files, start sandboxing / triage
Persistence via RunKeys and other suspicious entries | RegRipper | Windows Registry | Extract keys, collect related files, start sandboxing / triage
Exfiltrated data confirmed from Plaso output (compression and transfer tools used) | Sigma | Forensic filesystem image | Collect exfiltrated data for evidence. Start scanning exfiltrated data for credentials / impact radius. Escalate and engage Legal
Malicious or suspicious persistence mechanism found (baseline deviation) | YARA | Forensic filesystem image | Collect related files, start sandboxing / triage. Evaluate impact radius / pervasiveness
Commonly abused binaries found masquerading as other binaries (putty.exe, etc.) | YARA | Forensic filesystem image | Collect related files, start file analysis. Start impact radius queries looking for tool usage and affected resources

Each countermeasure is independently testable, allowing organizations to progress through a well-defined lifecycle: from initial detection, to controlled blocking, and finally to estate-wide deployment.
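For instance, a countermeasure's YARA logic can be exercised in CI like any other code. A minimal sketch of such a test, using yara-python with an inline, trimmed copy of the SSH key rule:

# Sketch: a unit test proving an SSH-key rule fires on a synthetic key
# header and stays quiet on benign content. Running countermeasure tests
# in CI is what makes the lifecycle above safe to automate.
import yara

RULE = r'''
rule SSH_Private_Keys_Test {
    strings:
        $ssh = "-----BEGIN OPENSSH PRIVATE KEY-----"
    condition:
        $ssh
}
'''

def test_ssh_private_key_rule():
    rules = yara.compile(source=RULE)
    hit = rules.match(data=b"-----BEGIN OPENSSH PRIVATE KEY-----\nAAAA...")
    miss = rules.match(data=b"nothing to see here")
    assert any(m.rule == "SSH_Private_Keys_Test" for m in hit)
    assert not miss

test_ssh_private_key_rule()
print("countermeasure test passed")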

Conclusion

The emergence of malware like AppleProcessHub highlights the growing set of threats targeting macOS systems. DFIR-as-Code provides the necessary framework to bring parity between platform complexity and response capacity.

DFIR-as-Code gives your organization’s Security Operations team a powerful force multiplier. When handling complex incident response scenarios, your team can quickly evaluate the exposure of an incident and respond effectively. The result is reduced containment time and accelerated eradication efforts. This framework enables rapid development of use cases, which in turn enables further automation.

When your team understands what data an attacker exfiltrated during an incident, they can significantly reduce the time needed for incident disclosure. In some environments, teams must send out notifications within hours, not days or weeks. Automating these elements of the incident response process enables SecOps teams to respond faster and more effectively. Continuously evolving your detection and prevention controls helps close exploitable gaps, while DFIR teams keep expanding their libraries of detection and response capabilities.

By embracing DFIR-as-Code, your team actively detects and signals threats from slower, more complex data sources. With forensic images and memory dumps, you don’t just improve response times; you transform digital forensics from a reactive process into a proactive, adaptive capability. As your organization matures these practices, you build intelligent incident response pipelines that scale with complexity, reduce human error, and stay ahead of modern threats.

Abstract Astro
Jul 1, 2025
Security

Introducing Abstract LakeVilla: Pipeline-Powered, Analyst Ready, and Efficient Storage

Security teams are drowning in data. While collecting the growing volumes of data from all sources is a struggle in itself, storing it is an even bigger problem to solve. In today’s enterprise environment, data is everywhere, to the point that many organizations don’t know what to do with it. The burden of storing, managing, and making sense of it has drastically increased, and security teams are often forced to make a choice: overspend on real-time systems that house only slices of data, or route the majority of it to low-cost storage that is practically inaccessible.

With LakeVilla, you don’t have to make that compromise.

When it comes to data, you shouldn’t have to compromise between cost and performance. LakeVilla is designed to give you a reliable, cost-effective way to store and access historical data without the high price tag or slow performance of traditional SIEM solutions. It makes cold storage actually work for security teams—no more painful rehydration and no more expensive re-ingestion.  

Extract Security, Store Data

Roughly 10-15% of security telemetry drives detection and real-time analytics, which means you don’t need to overflow your real-time systems with everything. The remaining 85-90% is still important for investigations, compliance, and traceability, and needs to be stored for longer durations. Despite this, most architectures send all of your data (including the things you don’t need) through the same high-cost pipelines, leaving you paying unnecessary expenses and operational overhead.

One of the most important aspects of a proper data strategy is recognizing that not all data is equal in urgency or value. LakeVilla embodies this principle by giving organizations the capability to easily and clearly separate real-time and forensic data paths.

How? Relevant security data goes to the analytics engine. Everything else goes into LakeVilla’s cost-effective, searchable, and immediately available storage - without rehydration, reprocessing, or secondary tooling!


What is LakeVilla?

LakeVilla is Abstract’s cloud-native cold storage solution, purpose-built for security data and deeply integrated into the Abstract Security platform. Unlike traditional archive solutions that bury logs in inaccessible storage, LakeVilla keeps your data instantly searchable and seamlessly connected to your detection workflows.

It eliminates rehydration delays, avoids expensive re-ingestion, and removes extra hidden costs at retrieval. That means you can scale retention, meet compliance, and conduct deep investigations—all while maintaining performance, reducing storage costs, and simplifying your security stack.

One Platform: Unified and Modular Architecture

LakeVilla isn’t an external archive or add-on utility. It is a native component of the Abstract Security Platform. This integration eliminates the need for redundant ingestion paths, parallel systems, or additional operational burden. Data sources only need to be onboarded once and are immediately usable across the entire platform.  

Here’s what you get, out of the box:
  • Fully Searchable Cold Storage
    Retain long-term telemetry in a low-cost storage tier without sacrificing accessibility—no rehydration, re-ingestion, or external query layers required.
  • Seamless Integration with Live Pipelines
    LakeVilla is embedded directly into Abstract’s data pipeline architecture, enabling instant access to historical data without operational handoffs or tooling sprawl.
  • Replay for Forensics and Detection Enhancement
    Historical data can be replayed through detection workflows on demand, supporting investigations, rule tuning, and retrospective threat identification.
  • Cost-Efficient Retention at Scale
    Store years of security data at a fraction of traditional SIEM or hot storage costs, enabling compliance and threat hunting without budget constraints.

By combining these capabilities into one platform, LakeVilla dramatically simplifies onboarding, eliminates maintenance overhead, and gives security teams a unified environment that works together by design, not by patchwork.

Why LakeVilla?

To recap, LakeVilla changes the game by giving you one flexible solution: keep your long-term data live and useful without the usual trade-offs of cost, speed, or complexity. Here’s how it’s different from traditional storage solutions:

1. Store More, Pay Less

Filter out noise at the source, aggregate repetitive events, and enrich telemetry with context before it ever hits your SIEM or cold storage. By shaping the data in motion, you dramatically reduce volume and eliminate unnecessary storage spend. Then, route that refined data directly into LakeVilla for long-term, low-cost retention that’s still instantly accessible when you need it. It’s smarter, cost-effective storage that doesn’t sacrifice full visibility.

  • Cut noisy events at the source
  • Route only high-touch data to SIEM or other analytic platforms
  • Route long-term storage data directly to LakeVilla for cost-efficient cold storage

2. Searchable Cold Storage (No Rehydration Required)

Traditional cold storage solutions create complexity and a long checklist of actions to complete before you can actually leverage the data you’re storing.

You archive logs to save on cost, but accessing them later requires expensive rehydration and, many times, manual retrieval workflows. Many storage solutions offer cheap storage costs but charge extra for querying and retrieving stored data. With LakeVilla, data is shaped and normalized before storage, making it ready for fast retrieval and response without any rehydration or hidden costs.  

  • Cold logs are query-ready instantly  
  • No rehydration process  
  • No hidden API fees or retrieval penalties  

This means your data remains useful and accessible for:  

  • Incident response that needs to look back months, not just minutes
  • Audit trails for compliance and reporting
  • Threat investigations that unfold over longer timelines
  • No more waiting. No more “fetch-and-hope” workflows

3. Adapts in Real-Time

LakeVilla supports seamless routing of enriched, filtered data into cloud object storage solutions like AWS S3, Azure Blob, and Google Cloud Storage. With a simple drag-and-drop interface, you can route data based on content, severity, or compliance needs, with no manual rewrites or complex scripts required. It’s flexible routing that keeps up with your environment as it evolves and helps you avoid vendor lock-in.

4. Built for Security, Not Just Storage

Leveraging the Abstract Security Platform’s pipeline module, LakeVilla prepares data before it even reaches storage. This means data is aggregated, normalized, schema-aligned, threat-enriched, and usable in-stream; ready for faster access the moment you need it.  

  • Pre-storage processing: Aggregates, normalizes, and enriches data before it hits cold storage.  
  • Schema alignment: Ensures data is structured and query-ready before it is stored.  
  • No vendor lock-in: Keeps your data portable across clouds and tools.

Whether you're responding to an incident, fulfilling a compliance request, or conducting a deep investigation, your data is always ready.

Why have a Lake House when you can have a Lake Villa!

LakeVilla isn’t just a cold storage solution, it’s a smarter way to retain and use your security data. From filtering out noise at the source to storing at scale, LakeVilla ensures your security data remains usable, searchable, and under your control. No rehydration, no hidden fees, no lock-in.  

Whether you’re optimizing SIEM costs or future-proofing your detection and compliance strategy, LakeVilla gives you more than storage. It gives you leverage.    

Store smarter. Search faster. Stay in control.  

Want to see this in action? Contact us, or sign up for our monthly newsletter for the latest information regarding Abstract’s solutions.

Aqsa Taylor
Jun 23, 2025
Product

C2 Corner: Achieving Fortune 50 Security With a SMB Budget

From Enterprise to SMB: A Trial by Fire

Jake Lee

In my past lives at larger enterprises I often had access to best-in-class technologies, dedicated vendor representatives, and an extensive budget to pursue defense-in-depth with layered (and in many cases, redundant) controls and solutions. However, when I made a paradigm shift from a Fortune 50 to a fast, lean SMB (Small-to-Medium Business), I soon found myself in a proverbial trial by fire that would require innovation, agility, efficiency, and resourcefulness.

Not only was it a new environment, but it was also a new industry—going from the financial sector to healthcare—and I soon found myself on Day One wearing multiple hats and stretching every dollar, while still being expected to deliver enterprise-level protection.

Rebuilding the Security Program with Less

Without deep resources, robust vendor ecosystems, and specialized roles and departments, I needed to focus on what I could achieve. This was a whole new playing field, so before I committed to anything I first needed to identify my primary objectives: consolidate, prioritize, and shift focus to the core, native platforms that needed to deliver despite multiple security issues.

The first thing I focused on was identifying gaps in our security while also reducing our vendor footprint. This approach often gets negatively associated with cutting corners, but it was far from that. It was about starting from the blueprint and architecting the foundation.

For example, if your task is to secure a home, what’s the point of putting five deadbolts on the front door when neither it nor any of the windows will close? When your resources and budget are less than 1% of what you had in a prior life, managing a dozen tools is not scalable; it becomes an exercise in futility. I focused on multi-functional platforms with solid integration and automation capabilities, strong customer support, and roadmaps that aligned with our expected growth, while filtering out frivolous bells and whistles.

Fortune 50 Strategy, SMB Budget

In an SMB you need to adapt and improvise, but that doesn’t mean you can’t have a strategy. Enterprise-level strategies don’t have to disappear. I focused on solid governance frameworks such as the NIST Cybersecurity Framework, assessed the core and highest risks based on our business model (looking at the hundreds of SaaS/PaaS technologies and architecture that the business depended on), and created a multi-phased implementation roadmap that mimicked enterprise structure for operations and governance.

That resulted in a key lesson: security and risk isn’t about how much money you throw at it; it’s about investing wisely and intelligently. Every dollar spent has to be justified, and every decision, no matter how big or small, needs to show a return in risk reduction, compliance, and/or operational efficiency.

Built for Enterprises and Lean Teams Alike

Chris Camacho

At Abstract, we work with customers on both ends of the spectrum. Many of them have backgrounds like Jake’s, having helped build or lead large-scale security programs across global enterprises, and now find themselves in fast-growing companies or critical infrastructure environments where resources are limited but expectations are still high.

This is exactly why we built Abstract. Our platform scales to support both the largest data volumes and the scrappiest of teams. We handle the heavy lifting for you, whether that means deploying real-time pipelines, delivering threat analytics, or supporting compliance and detection strategies out of the box. We act as a force multiplier for smaller security teams and as a central foundation for larger organizations looking to consolidate and modernize.

Our customers use Abstract to control cost, improve detection, and simplify security operations. Whether they are managing hundreds of terabytes of data or standing up their first security operations center, Abstract provides the tooling, support, and flexibility to meet them where they are.

Running a security program without the benefit of a big budget or specialized teams isn’t a step down. It is one of the clearest tests of leadership in this field. If your organization is looking for a skilled security leader who understands how to scale programs intelligently, efficiently, and with impact, I highly recommend connecting with Jake Lee on LinkedIn. He brings a rare mix of enterprise level strategy and hands-on execution that makes him a powerful asset for any security team.

Jake Lee
Jun 18, 2025
C2 Corner

Uncovering Compromised Git Admins: How to Detect Actors Like theCom

While our technology estates have evolved beyond workstations, laptops, servers, and networks to include numerous software services, many organizations’ detection and analysis capabilities are still focused on the malware caught by endpoint detection and response tools and the exploits detected by network Intrusion Detection Systems and next-gen Web Application Firewalls.

Meanwhile, the loosely organized sets of actors associated with theCom (aka Muddled Libra, Scattered Spider, and more) have cemented themselves as the opportunistic threat actors of the day. Known for leveraging social engineering and identity-focused intrusions, their operations have targeted critical infrastructure, telecom, and software supply chains. The actors associated with theCom have truly democratized the infamous “I hunt sys admins” approach. They often gain initial access by compromising IT help desk staff and administrators.

The actors frequently target Version Control Systems (VCS) such as GitLab and GitHub. These platforms enable development teams to store, manage, and deploy source code and infrastructure configurations. Development teams often mistakenly commit credentials for other systems to their VCS repositories, which enable the actors to move laterally within the technology estate.  

This post dives into the tactics and techniques these actors use when compromising version control systems such as GitLab and GitHub, how to detect and mitigate these intrusions, and how Abstract Security helps.

Overview of Intrusion

These intrusions can be roughly categorized into four main phases: Initial Access, where the actor uses carefully crafted social engineering techniques to gain access to a privileged system; Establishing Persistence, which enables the actors to maintain access to the systems; Further Reconnaissance and Lateral Movement, where the actors gain access to other parts of the technology estate; and Data Exfiltration, when the actors exfiltrate or encrypt the organization’s data for ransom.

[Diagram: the four phases of a theCom intrusion]

We’ll break the intrusions down below while describing detection and protection opportunities to help mitigate.

Initial Access

These intrusions often begin through the compromise of a laptop or workstation. The actors associated with theCom have professionalized this path and often target initial access via IT help desks and known administrators identified using publicly available data from platforms such as LinkedIn.

Establishing Persistence  

After gaining access to a GitLab instance or GitHub user account, the actors often create a new personal access token (PAT). This token gives the actors persistent access to the GitLab instance or GitHub organization. They may also add users or outside collaborators to the GitLab instance or GitHub organization.

Detection  

To identify this activity in GitHub audit logs, look for the following actions:

  • `personal_access_token.access_granted`
  • `org.add_member`
  • `org.add_outside_collaborator`

Detecting the GitHub action `personal_access_token.request_created` can provide earlier identification of these requests, though this action is more prone to true-positive benign activity.

Actors associated with theCom have also been observed adding a member or outside collaborator to the GitHub organization; the `org.add_member` and `org.add_outside_collaborator` events above cover this activity.

Entries in the GitHub audit log with the action `personal_access_token.auto_approve_grant_requests_disabled` are a very high-fidelity alert signal that occurs when this configuration option is changed. This activity almost never happens within an organization and can easily be followed up on with your GitHub administrators.

An example of this activity can be seen in the image below, with identification information obscured.  

[Screenshot: GitHub audit log entry, identifying information obscured]
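For teams working from exported audit logs rather than the platform, a minimal sketch of this action filtering (field names follow GitHub's audit log export; the file name is a placeholder):

# Sketch: filter an exported GitHub audit log for high-signal actions.
# The export is assumed to be a JSON array of event objects.
import json

HIGH_SIGNAL_ACTIONS = {
    "personal_access_token.access_granted",
    "personal_access_token.auto_approve_grant_requests_disabled",
    "org.add_member",
    "org.add_outside_collaborator",
}

with open("github_audit_log.json") as fh:
    events = json.load(fh)

for event in events:
    if event.get("action") in HIGH_SIGNAL_ACTIONS:
        print(event.get("@timestamp"), event.get("actor"), event.get("action"))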

There are additional GitHub org activities that are suspicious, rarely occur, and provide high-fidelity signals to alert on. Abstract Security includes out-of-the-box detection rules for this activity and more, including the following GitHub configuration changes, which can be useful for detecting suspicious activity:

  • Disabling MFA for users: `org.disable_two_factor_requirement`
  • Disabling SAML authentication: `org.disable_saml`
  • Adding or inviting admins: `business.add_admin` or `business.invite_admin`
  • Adding or approving access to OAuth applications: `oauth_application.create` or `org.oauth_app_access_approved`
  • Modifying single sign-on requirements for access to your organization’s repositories: `sso_redirect.disable` or `sso_redirect.enable`


The ASTRO detection rule trigger criteria for org.disable_two_factor_requirement is seen below.  

[Screenshot: ASTRO detection rule trigger criteria for org.disable_two_factor_requirement]

Defense

1. Enable admin approval for fine-grained personal access token creation
To protect your organization from unauthorized PATs, enable admin approval of fine-grained personal access tokens. Here’s how it looks in the GitHub administrative console.

[Screenshot: GitHub administrative console setting for fine-grained PAT approval]

2. Restrict access via classic personal access tokens
While the option above requires administrator approval for fine-grained personal access tokens, classic PATs are either completely allowed or completely restricted. Once you allow access via classic personal access tokens, you will no longer have visibility into users creating and associating those tokens with your organization.
[Screenshot: GitHub setting for restricting classic personal access tokens]

These settings provide useful control over static tokens from less-privileged developers. The problem is that theCom actors are targeting the GitHub administrators who have permission to change these controls! Luckily, GitHub requires multi-factor authentication to make these configuration changes.
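If you want to keep an eye on pending fine-grained PAT requests programmatically, a sketch along these lines may help. It assumes GitHub's organization personal-access-token-requests REST endpoint; the org name and token are placeholders, and response fields may vary by API version.

# Sketch: list pending fine-grained PAT requests for an org so reviews
# don't sit unnoticed. Requires an admin-scoped token.
import requests

ORG = "your-org"
HEADERS = {
    "Authorization": "Bearer <ADMIN_TOKEN>",
    "Accept": "application/vnd.github+json",
}

resp = requests.get(
    f"https://api.github.com/orgs/{ORG}/personal-access-token-requests",
    headers=HEADERS,
    timeout=30,
)
resp.raise_for_status()
for req in resp.json():
    print(req["owner"]["login"], "->", req.get("token_name"), "|", req.get("reason"))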

Further Reconnaissance and Lateral Movement

In the past, actors associated with theCom have connected trial accounts of GitGuardian to git repositories. While GitGuardian is a great tool for security teams to detect leaked credentials in code commits, the same tool allows the actors to find additional credentials and move laterally into additional technology infrastructure such as AWS accounts, GCP projects, Azure subscriptions, vSphere and OpenShift infrastructure, other SaaS platforms, and sometimes even SSH keys. This behavior isn’t limited to GitGuardian; the actors could use other tools such as trufflehog, gitleaks, shhgit, and more.

Detection

You can detect this and other mass git-repository collection activity by alerting when a threshold of unique repositories is exceeded within a reasonable time frame. The detection rule below looks for 10 distinct repositories on which `git.clone` or `git.fetch` has been performed within 10 minutes.

[Screenshot: detection rule for 10 distinct repositories cloned or fetched within 10 minutes]
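Outside the platform, the same threshold logic can be sketched in a few lines. This illustrative version assumes audit events pre-sorted by timestamp as (timestamp, actor, repo, action) tuples:

# Sketch: alert when one actor touches 10+ distinct repositories
# within a 10-minute sliding window.
from collections import defaultdict, deque
from datetime import timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 10

def detect_mass_cloning(events):
    """Yield (actor, repos) whenever an actor crosses the threshold."""
    recent = defaultdict(deque)   # actor -> deque of (timestamp, repo)
    for ts, actor, repo, action in events:
        if action not in ("git.clone", "git.fetch"):
            continue
        window = recent[actor]
        window.append((ts, repo))
        while window and ts - window[0][0] > WINDOW:
            window.popleft()      # drop events older than the window
        repos = {r for _, r in window}
        if len(repos) >= THRESHOLD:
            yield actor, repos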


Intrusions into SaaS, PaaS, and IaaS technologies are often hard to detect accurately because nearly all of the actions within these platforms are resident to the platform. This is a challenge similar to the Living off the Land techniques (ref: https://www.youtube.com/watch?v=j-r6UonEkUw) unveiled by Matthew Graeber and Christopher Campbell at DerbyCon 3 in 2013. When adversaries are living off the clouds, the activity blends in with legitimate administrator and developer activity.

To increase your true positive rates, you can create a table of known login IP addresses from your identity provider, along with whether or not they were multi-factor authenticated. If the activity originates from known MFA’d source addresses, it can be treated with a lower severity.

Perhaps your organization legitimately uses GitGuardian or similar tools. You can similarly opt to include known infrastructure source_addresses to further improve the true positive rates of critical detections.
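A minimal sketch of that severity triage, with illustrative allowlists that would in practice be built from identity provider logs and infrastructure inventories (RFC 5737 addresses used as placeholders):

# Sketch: downgrade severity for activity from known-good context.
KNOWN_MFA_IPS = {"203.0.113.10", "203.0.113.11"}   # MFA'd logins from your IdP
KNOWN_SCANNER_IPS = {"198.51.100.5"}               # e.g. sanctioned GitGuardian hosts

def classify(source_address: str) -> str:
    if source_address in KNOWN_SCANNER_IPS:
        return "informational"   # expected tooling
    if source_address in KNOWN_MFA_IPS:
        return "low"             # MFA'd corporate egress, likely benign
    return "critical"            # unknown source performing bulk git activity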

Exfiltration  

There are a few actions on objectives the actors can take to exfiltrate repositories from compromised organizations. The actors can manually exfiltrate repositories to their local systems via git.clone actions.

Some actors associated with theCom have also been observed manually downloading .zip archives of repositories.

[Screenshot: manual .zip archive downloads of repositories]

This activity is odd by itself, as most developers prefer to use git clone rather than the web UI. Manually downloading multiple .zip files is almost always a sign of suspicious activity, whether it is performed by external actors or a malicious insider.

To increase the true positive rate for external actors performing this activity, you can correlate the activity against known MFA’d IP addresses from your identity provider logs.

Pulling the timeline together

While the above activities and actions can be strong indicators of suspicious, malicious, or compromised GitHub user accounts within your organization, analysts need to pull the timeline together to tell the story of what happened.

What and Who? – Collect the suspicious source_addresses paired with the GitHub user_names and user_ids associated with the activity. Expand and pivot from those users to other users associated with the source_addresses, as well as other source_addresses associated with the users.
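A sketch of that pivoting over exported audit events (the 'actor' and 'actor_ip' field names are illustrative audit log keys):

# Sketch: pivot tables for expanding from one suspicious user or IP.
from collections import defaultdict

def build_pivots(events):
    user_to_ips = defaultdict(set)
    ip_to_users = defaultdict(set)
    for e in events:
        user_to_ips[e["actor"]].add(e["actor_ip"])
        ip_to_users[e["actor_ip"]].add(e["actor"])
    return user_to_ips, ip_to_users

# From one flagged user: every IP they used, then every other user seen
# on those IPs, gives the candidate scope of compromise.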

When? – to determine your window of compromise, analyze the additional activity from these IPs and users. Typically there will be activity that is much more benign to help determine when the compromise began.  

You’ll often find that a compromised laptop or workstation led to this activity, and the actors began interacting with a long lived login session to your version control system software.  

Conclusion

Our technology estates are now composed of a wide range of systems and services, including cloud, SaaS, self-hosted services, and traditional computers and networks. This expanded footprint gives threat actors a larger realm to compromise and operate in. Our detection and analysis methods need to expand to collect telemetry from these systems, and our skills must improve to analyze it.

Secure with Abstract

Abstract Security provides data integrations to collect GitLab and GitHub audit logs, paired with filtering and aggregation functions that reduce unnecessary data without compromising your ability to detect, investigate, and respond to intrusions. These are paired with ASTRO’s Git detection rules to detect suspicious and malicious activity from insider threats and external actors.

Applicable ASTRO detection rules

ASTRO provides over 50 GitHub detection rules as out-of-the-box content for customers to detect suspicious GitHub activity. This activity can be tuned using contextual information and additional data models to increase true positive rates. The rules most relevant to this post include:

  • GitHub PAT Approval Disabled
  • GitHub Collaborator Added to Repo
  • GitHub Enterprise Add Admin
  • GitHub Enterprise Invite Admin
  • GitHub Oauth Created
  • GitHub Oauth App Approved
  • GitHub SSO Disabled
  • GitHub SAML disabled in Enterprise Account  
  • GitHub MFA disabled in Enterprise Account
  • GitHub Mass Repository Retrieval
  • GitHub Suspicious Repository Archival Activity  


References:

GitHub, Audit log events for your organization, https://docs.github.com/en/organizations/keeping-your-organization-secure/managing-security-settings-for-your-organization/audit-log-events-for-your-organization

Abstract Astro
Jun 11, 2025
Product

Thinking Beyond the Budget: Why Your Security Team Isn’t a Cost Center

Beyond the Cost Center Fallacy

By Vesko Pehlivanov

What is cybersecurity in a business sense? Is it a cost center or is it a revenue generator? As tired as this question is, many executives attempt to put security into one of those buckets. But regardless of which side you choose, both options fundamentally misframe the role of cyber and create a false dichotomy between security investments and business growth. Why?

In business, there are three levers of growth: revenue, cost, and risk. With this in mind, the goal is straightforward: increase revenue, reduce costs, and maintain risk within acceptable parameters. In practice, however, security doesn’t fit neatly into any of these.

The Primary Function of Security

Cybersecurity has notoriously struggled to prove its value. Every day without incident is a win, but all of us know that this is not as easy as it sounds. To an outsider who doesn’t know all of the moving parts, it is very tempting to think, “No incidents? Great, then let’s reduce the budget. It looks like sunny skies and smooth sailing from here.” And once you go down that slippery slope, too many things go wrong.

If it isn’t a cost center, then can it be a revenue generator? It rarely is, but it can be, especially if you’re a security vendor who can bill for products or features. However, going too far down that route will burn goodwill within the industry and might land you on a “most wanted” list like the SSO Wall of Shame.

Ultimately, security’s primary function is risk management for the business. Any business understands risk explicitly or implicitly, as executives have to make risk-based decisions on a daily basis. As a risk management function, security should be treated separately from revenue generation and cost centers, defined by its own value. It needs to be treated as a set of value-driven business services.

Transforming into a Service Mindset

Why services? Services are defined by the value perceived by their customers, not by the practitioner. For example, while I may be fine with a buzz cut, my wife most definitely would not, and would never let a pair of clippers get anywhere near her hair. Bringing it back to security: as an industry, we need to adopt the same mindset and deliver the outcomes expected by our internal and external customers. But to do that, we need clearly defined metrics that can demonstrate those outcomes are achieved.

Measuring Service Effectiveness

There are four key properties of services that need to be measured on an ongoing basis:

  1. Effectiveness 
  2. Quality
  3. Efficiency
  4. Risk

Let’s borrow from my previous example for some context. In terms of a service, I have certain expectations for what I’m paying for and the results I expect to see. If I were to get a haircut, I would expect my hair to be shorter than when I came in. However, even though my barber was “effective” in achieving that, there’s more that matters. Was it of good quality? Sure, completely shaving my head would fit the criteria for “shorter hair,” but it’s definitely not the service I was expecting!

Additionally, even if my barber gave me the exact cut I wanted, what about the cost to value? Would the best buzz cut in the world warrant a price tag of $100? Maybe to some, but not to me. Also, let’s not forget about risk. If at any point my barber gives me a nasty cut with a razor, it doesn’t matter if they went above and beyond in providing the service. I’d still be pretty unhappy as a customer! 

Putting the Model into Practice

By Chris Camacho

If we want to operationalize a service and value driven approach to cyber, there are three ways for security leaders to do so:

  1. Align with business outcomes
  2. Define service-level metrics
  3. Build a feedback loop

Aligning with Business Outcomes

Start at the top. If you want to shift to a service-driven security model, collaborate with your executive board to agree on expected outcomes. Link security strategy directly to specific business objectives. This could mean building customer trust through data protection, ensuring business resilience, maintaining regulatory compliance, or accelerating the secure adoption of new technologies.

Defining Service-Level Metrics

For each security service you intend to provide, define clear metrics for effectiveness, quality, and efficiency. Being effective will mean a reduction in incident rates. Having high quality might mean seeing reduced friction or complaint volumes. For efficiency, consider optimizing unit costs, or cost per risk mitigated. This could involve improving process automation, introducing self-service capabilities, or simplifying your tech stack.

But don’t forget about risk! Try to ensure that there are no surprises. No outages from fail-close controls and no botched incident responses.

Building a Feedback Loop

Implement robust operational practices. Set Service Level Objectives (SLOs) with your stakeholders. Your KPIs, KRIs, and OKRs should directly tie back to these SLOs, which will result in continuous improvement. Then, make sure to report in terms of the business impact, not just security activity.

Why This Matters More Than Ever

Embracing this shift transforms your security teams from a perceived cost center into a strategic partner, speaking the language of business value and demonstrable outcomes. How does your organization currently measure the value of its security function? Security is entering a new era—driven by automation, AI, and an expectation to “do more with less.” If we don’t evolve how we communicate value, we risk being sidelined or starved of investment.

About Vesko Pehlivanov

Vesko Pehlivanov is a seasoned cybersecurity executive and former Director of Security Engineering at ID.me. He led transformative initiatives across cloud, infrastructure, and security operations—helping modernize security pipelines and drive strategic investment. Vesko was an early enterprise buyer of Abstract Security and remains a passionate advocate for security teams focused on operational excellence.

Vesko Pehlivanov
May 23, 2025
C2 Corner

DFIR-as-Code: Scaled and Repeatable Incident Response

Digital Forensics and Incident Response (DFIR) is at the forefront of cybersecurity. DFIR is where the edge of human and technological defensive capability meets the edge of human and technological offensive operations.

This continuous cat and mouse game has been ongoing for decades. It’s shaped by curiosity, fueled by competition, or sometimes just for the lulz. Constantly dealing with an evolving adversary requires timely, accurate, and consistent reaction and response. The output from DFIR can often inform financial, litigation, or legal actions, where the results are analyzed, reviewed, and critiqued in depth.

What does this look like without DFIR-as-Code?

In many organizations DFIR feels more like art than science. Analysts scramble to collect logs, dig into artifacts, and document procedures under pressure, with no guarantees of repeatability or accuracy. 

Manually processing and combing through data can be a good first step. Sophisticated adversaries, however, will take advantage of slow response time and any lack of capabilities. Paving well-travelled paths within your DFIR program is essential to shortening the OODA loop and evolving faster than the threat actors targeting your organization.

Ad-hoc approaches cause problems:

  • Inconsistent response across incidents
  • Slower investigations due to manual data collection
  • Risk of error in high-stakes environments
  • Missed evidence from unprocessed artifacts
  • Hard-to-audit processes for legal or regulatory review

When an incident hits, this can be catastrophic. Time matters and slow, manual, ad-hoc procedures result in lost evidence, prolonged breaches, and missed opportunities.

Enter DFIR-as-Code: From Manual to Automated

Just as DevOps revolutionized infrastructure through “Infrastructure-as-Code,” DFIR-as-Code extends the gains brought by detection engineering, bringing structure, automation, and repeatability to the forensics and response lifecycle.

Maturing a DFIR program requires building up libraries of use cases and response procedures, much like continuous monitoring programs and mature detection-engineering programs do. This allows advanced organizations to adopt a very similar approach to their investigation and response processes: DFIR-as-Code.

DFIR-as-Code delivers:

  • Automation of repeatable forensic and response actions
  • Consistency across incidents, analysts, and teams
  • Auditable workflows for regulatory compliance
  • Scalable processes that don’t crumble under pressure
  • Knowledge transfer across teams via codified playbooks

With DFIR-as-Code, humans stop reinventing the wheel for each incident. They focus on solving new problems, not rerunning the same play manually.

What does DFIR-as-Code look like?

After an Initial Stimulus kicks off the process, the DFIR-as-Code workflow repeats through three steps:

  • Collection
  • Triage
  • Contextualization

These three steps repeatedly feed the Incident Response process.

DFIR-as-Code workflow

Collection

Collection is the process of acquiring and processing the artifacts and their associated metadata to produce timeline entries. Applied Security Data Strategy: A Leader’s Guide discusses how collection programs and data ingestion form the foundation of a mature data strategy. 


Collection expands upon continuous Data Ingestion to include point-in-time acquisition and processing of artifacts, such as the Master File Table from a host, metadata from cloud resources, or the contents of etcd from a container.

The data gathered in Collection is often processed by tools such as FTK Imager, Autopsy, Plaso, and Log2Timeline, among others.
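As a minimal sketch of what a codified Collection step can look like, the snippet below drives Plaso’s log2timeline and psort command-line tools from Python to turn an acquired disk image into a sortable timeline. The image name and output paths are illustrative, and exact flags vary between Plaso versions.

```python
#!/usr/bin/env python3
"""Sketch of a codified Collection step: build a timeline from a disk image
with Plaso, then export CSV entries for downstream Triage."""
import subprocess
from pathlib import Path

def collect_timeline(image_path: str, out_dir: str = "evidence") -> Path:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    storage = out / "timeline.plaso"
    csv_out = out / "timeline.csv"

    # Parse the image's artifacts (MFT, registry, event logs, ...) into a Plaso store.
    subprocess.run(
        ["log2timeline.py", "--storage-file", str(storage), image_path],
        check=True,
    )
    # Sort the events and export them as CSV timeline entries.
    subprocess.run(
        ["psort.py", "-o", "dynamic", "-w", str(csv_out), str(storage)],
        check=True,
    )
    return csv_out

if __name__ == "__main__":
    # "acquired_host.dd" is a placeholder for a forensic image you have collected.
    print(collect_timeline("acquired_host.dd"))
```

Because the commands are codified rather than typed by hand under pressure, every acquisition is processed the same way, and the steps are trivially auditable.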

By maturing the collection portion of your DFIR lifecycle, you can ingest additional data into the same analytics platform as the rest of your data sources, providing a much more holistic view of the incidents and events taking place within your environment. The Abstract platform, for example, includes a Plaso Log2timeline dashboard for analyzing forensic image output.


Triage

Triage takes the Collected artifacts and timelines and applies signaling mechanisms to them. These signaling mechanisms can take the form of deterministic detection rules (such as Sigma, Yara, or Suricata rules), programmatic processors such as RegRipper, or inference mechanisms such as XGBoost-based classifiers or scoring systems.

Although implementing this logic can be challenging, it significantly improves the signal-to-noise ratio by correlating multiple data sources to confirm or refute observed activity.

Storing common Triage logic for different artifacts speeds response by confirming malicious activity, highlighting the most effective artifacts, and providing results in a familiar format that is easy to consume by the analyst.
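A stored Triage library doesn’t have to be elaborate to pay off. Below is a minimal sketch of deterministic triage logic run over the timeline CSV produced in Collection; the rules and field names are illustrative stand-ins for a real Sigma or Yara rule set.

```python
"""Sketch of deterministic Triage: run a codified rule library over
Collection's timeline entries and emit hits for analyst review."""
import csv
import re
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    field: str
    pattern: re.Pattern

# A reusable, versioned rule library (illustrative stand-ins for Sigma/Yara logic).
RULES = [
    Rule("psexec_service_install", "message", re.compile(r"PSEXESVC", re.I)),
    Rule("office_spawning_shell", "message", re.compile(r"winword\.exe.*cmd\.exe", re.I)),
]

def triage(timeline_csv: str):
    """Yield a hit for every timeline entry that matches a rule."""
    with open(timeline_csv, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            for rule in RULES:
                if rule.pattern.search(row.get(rule.field, "") or ""):
                    yield {"rule": rule.name, "timestamp": row.get("datetime"), "entry": row}

if __name__ == "__main__":
    for hit in triage("evidence/timeline.csv"):
        print(hit["rule"], hit["timestamp"])
```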

Contextualization

Contextualization enriches the artifact summaries from Collection and Triage and correlates separate data sources to provide and scope a 360-degree view of the evidence.

Enriching data with Context can: 

  • Highlight known infrastructure and/or user IPs from MFA logins, allowing those to be filtered out of an investigation if appropriate
  • Match known user handles with their user identity
  • Determine whether attack traffic is being sent to vulnerable hosts
  • Understand how prevalent an executable is within your environment

Context allows your team to make higher-confidence assertions about what has or hasn’t occurred.
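As a rough sketch of that idea, the snippet below enriches a triage hit with known-infrastructure checks and executable prevalence. The lookup data here is illustrative and would normally come from your CMDB, MFA logs, and endpoint inventory.

```python
"""Sketch of Contextualization: enrich triage hits so analysts can make
higher-confidence assertions. Lookup sources are illustrative."""
from collections import Counter

KNOWN_INFRA_IPS = {"10.0.0.5", "10.0.0.6"}   # e.g., from a CMDB export
MFA_USER_IPS = {"alice": {"203.0.113.7"}}    # e.g., from MFA login logs

def enrich(event: dict, exe_prevalence: Counter) -> dict:
    ip = event.get("src_ip", "")
    exe = event.get("process", "")
    event["known_infrastructure"] = ip in KNOWN_INFRA_IPS
    event["matches_user_mfa_ip"] = ip in MFA_USER_IPS.get(event.get("user", ""), set())
    # An executable seen on one host is far more interesting than one seen everywhere.
    event["exe_prevalence"] = exe_prevalence.get(exe, 0)
    return event

if __name__ == "__main__":
    prevalence = Counter({"chrome.exe": 1200, "weird_dropper.exe": 1})
    hit = {"src_ip": "198.51.100.9", "user": "alice", "process": "weird_dropper.exe"}
    print(enrich(hit, prevalence))
```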

Next Steps

This entry outlines the DFIR-as-Code concept. The framework can be implemented and adopted to jump-start or rapidly mature DFIR programs within existing organizations, providing a foundation upon which to build.

Highly regulated organizations can create well-defined approval processes that govern how they collect data, confirming that the requested data is both necessary and permitted. This enables faster, well-governed Collection while maintaining compliance and oversight.

DFIR-as-Code empowers teams to automate manual processes, scaling human effort so teams can execute more. It ensures that artifacts and evidence are processed to at least a baseline level of confidence, leaving humans to decide on additional artifact gathering and processing as necessary.

Join us in the next entry to dive into applying this concept.

Abstract Astro
May 22, 2025
Security

Trust and Community: The Peer-to-Peer Security Advantage

The Human Signal

By Sandip Wadje

After working in cybersecurity for so long, I’ve come to find that the most valuable signals rarely come from a dashboard. Instead, they come from a DM, a quick text, or an off-the-record gut check from someone you trust. And that’s how it has always been. The security community has always operated in the shadows of formal channels—quietly sharing war stories, threat data, validation checks, and career advice.

Both Chris and I have lived that dual reality: starting in the vendor world, learning from customers, and then later learning even more from our networks, more than any feed or platform could provide.

Community Provides Clarity in Chaos

Modern security is incredibly complex and is outpacing the ability of individual organizations to adapt. Budgets are tight, threats are coming in faster, and unfortunately, the tools we are forced to use are often noisier. Despite all the investments organizations are making, I still hear many voice the same frustrations:

  1. “Am I doing this right?”
  2. “Is this tool actually working for anyone else?”
  3. “What’s everyone else seeing?”

If you’ve ever asked the same questions, where do you turn? In security, real collaboration doesn’t happen out in the open. It happens in a quiet, yet intentional space built on trust, context, and experience. This can take the form of a text to a friend or mentor, a ping on a Slack thread, or an email to your trusted crew. But if you’re new to the space, you may not have that luxury. So, what do you do?

Find Your Tribe

If you aren’t a part of a community yet, consider reaching out and joining an already established circle. Chris and I have built our own communities because we believe security tools and programs must be shaped by those who live it every day.

CW (Crimeware)

CW is a private email list and Slack group that is managed by Chris. It is focused on real-time intel and operational strategy across threat detection, response, and pipeline tooling. It’s where security leaders and operators compare notes across industries. The rules are simple:

  1. No sales pitches
  2. No posturing
  3. Just help your fellow practitioners be practitioners

The Cyber Collective

The Collective is an invite-only WhatsApp group for C-Suite security leaders—a space where senior practitioners trade real-time insights, gut-check vendors, and stress-test each other’s thinking. It’s private, fast, and high signal by design. Sandip launched this in collaboration with senior executives in the industry to proactively discuss emerging issues. 

Scaling Trust Without Losing Signal

By Chris Camacho

Being a part of a community is vital to personal growth. However, as groups grow, there are challenges in preserving signal and authenticity. I’ve found that smaller, curated channels such as WhatsApp, Slack, and Signal outperform broader forums. It’s also important to surface insights from all members, as the quiet ones often do the most listening. Anonymized polls and “quiet queries” help draw out their voices and their most sensitive insights.

Most importantly, the strongest groups blend the strategic and the tactical. Cross-pollination between CISOs and technical SMEs creates powerful alignment. A CISO may ask, “How do I justify this spend?” while a detection engineer counters with, “That product breaks under volume.” Both voices matter, yet these discussions rarely take place in a traditional workplace setting.

Building Around Community Input

At Abstract, this isn’t a marketing angle—it’s our DNA. It’s the reason why I helped form Abstract. Our company was built by practitioners frustrated with broken tools and ignored feedback. So, we decided to flip the model.

Going against the norm, we sponsor invite-only dinners where CISOs and detection leads meet without salespeople. We co-author content with leading experts, as well as operators between roles, to give the industry down-to-earth perspectives. We collect roadmap inputs through private Slack channels and user groups and shape our platform to serve both CISO priorities and SOC realities.

Our philosophy is simple: if your tools don’t reflect the community’s needs, you’re not solving the right problems. Cybersecurity will always be complex. But solving it doesn’t need to be a lonely and difficult process. The more we can normalize peer collaboration the more resilient we become.

So whether you’re in a job, between jobs, or building something new, find your circle. Reach out, nurture it and pay it forward. There’s always a seat at the table.

About Sandip Wadje

Sandip Wadje has been Managing Director and Global Head of Emerging Technology Risks at BNP Paribas since 2017, where he is focused on governance and oversight of emerging technology domains such as cloud, artificial intelligence, digital assets, and quantum. Sandip also leads group-wide oversight of all aspects of “outside-in” intelligence collection, with a particular focus on external attack surface, geopolitical risks, social media liquidity risk, the cyber posture of corporate clients, and emerging threats such as deepfakes and synthetic identities.

Sandip also represents the bank at various external forums. He has been a member of the European Cybercrime Centre’s (EC3) Financial Services Advisory Group since September 2017 and co-chair of the European Financial Roundtable (EFR) Cyber Experts Group since February 2020. The EFR comprises 25 leading financial institutions in the UK and Europe, represented by the chairmen and CEOs of the participating institutions. He has over 24 years of experience in cybersecurity, operational risk, IT risk, internal controls, and compliance. Sandip has a hands-on technology background; his previous stints include BT Global Services, RiskIQ, SecureWorks, KPMG, JPMorgan Chase, and Deloitte.

Sandip is passionate about leveraging AI for cyber, what AI adoption means for the convergence of cyber jobs, and the re-skilling needed to build a Gen AI-ready workforce.

Sandip Wadje
May 12, 2025
C2 Corner

Redefining the SOC: Why AI Elevates (Not Replaces) the Next Generation of Analysts

In this edition of C2 Corner—a blog series where we spotlight sharp minds in security—I’m joined by Nick Goulding, a seasoned leader in Cyber Threat Intelligence (CTI) and Security Operations Center (SOC) operations who recently led the Cyber Fusion Center at Carrier.

Let’s Talk Artificial Intelligence

There’s no denying that artificial intelligence (AI) is a hot topic within the industry, and rightly so. We’ve only just scratched the surface, but even in its infancy, AI has already allowed organizations and casual users to accomplish some amazing things. However, like any new technology, it also introduces new types of risk, such as data privacy concerns, intellectual property disputes, and, for many industries especially, job disruption.

Do cybersecurity practitioners have to worry about AI taking their jobs away? Nick suggests this isn’t the case. Here’s his compelling argument for why AI won’t replace analysts—but instead raise the bar for them.


AI + SOC = Transformation

By Nick Goulding

There’s no getting around the AI boogeyman these days. The idea that artificial intelligence will replace SOC analysts—especially entry-level L1s—is gaining serious traction in some circles. However, I want to make it known that this is a misguided assessment.

We’ve already seen this form of fear before.

Did mathematicians disappear with the invention of calculators? No. Architects didn’t vanish when Computer-Aided Design (CAD) software entered the scene. Teachers didn’t become obsolete with the rise of online curriculums.

In every profession, technology transforms the work; it doesn’t eliminate it. It instead enhances the capabilities of the workforce, broadens their scope, and elevates their impact. And that’s exactly what AI is going to do for cybersecurity operations.

Frankly, it’s already happening. My daughter is crushing math problems that I couldn’t solve until college—and it’s all thanks to an educational system that is embracing technology, not fearing it.

The SOC of the Future: The New L1 Analyst

We cannot deny that AI will be deeply integrated into cybersecurity operations. But the discussion involving AI and cybersecurity shouldn’t be about it replacing people. It should be focused on how people are using automation to be better, faster, and smarter at their jobs.

Using AI-assisted platforms like Abstract, L1s can run detections powered by AI, interpret threat data in natural language using ASSE, and move beyond triage into active threat validation. These platforms also enable new analysts to operationalize threat intelligence and correlate across telemetry with no scripting or tuning needed.

Leveraging this kind of technology, tomorrow’s L1 SOC analyst won’t just reset passwords or escalate every ticket upstream. Instead, they’ll be able to:

  • Conduct initial static malware analysis
  • Collaborate with AI to draft and implement remediation plans
  • Build and iterate automated playbooks to handle repetitive tasks
  • Use natural language interfaces to rapidly query security data

In short: L1s leveraging AI will do the work that today’s L2 analysts are often responsible for. And this shift will cascade upwards. L2s will stretch into L3 territory. L3s will evolve into advanced threat analysts, incident responders, and forensics experts. This is a necessary evolution because attackers are evolving too.

Attackers are Already Using AI

According to the Harvard Business Review, attackers have begun leveraging AI to supercharge phishing attacks, with success rates rising to as high as 62%. Not only are they getting harder to spot using AI, but they are also becoming much cheaper, which allows attackers to increase the scale of their attacks.

Research from IBM X-Force and the Oxford Internet Institute reinforces this, showing a 95% reduction in cost for spear phishing attacks when AI is involved. This means more attacks at better quality, at a fraction of the cost. So how do we keep up?

How Abstract Security Helps Security Teams Keep Up

By Chris Camacho, Co-Founder & COO at Abstract Security

Fighting AI with AI is only part of the equation. The real answer is AI guided by professionals and practitioners. That’s where security platforms like Abstract come in.

Here’s how Abstract Security is helping to tip the balance:

  • AI-powered data streaming platform: Eliminate noise by focusing on the alerts that matter. Detect threats faster with a streaming-first approach.
  • Optimized storage management: Smart tiering of log data (hot, warm, cold) cuts costs and ensures relevant data is always available.
  • Natural language queries: Say goodbye to memorizing cryptic syntax. Instead just say things normally like, “Show me inbound IP traffic from Russia” and get an answer instantly.
  • Integrated detection rules: Use pre-built and custom rules that combine machine learning and expert logic to surface meaningful correlations automatically.

We Need Cybersecurity Pros More Than Ever

Nick is right: the idea that AI will replace SOC analysts is not just misguided—it’s dangerous. Eliminating L1s would dismantle our talent pipeline. Instead, we should be reimagining what an L1 analyst can be and then giving them the tools and training to grow.

AI isn’t going anywhere. Neither are security experts. The future belongs to those who adapt, integrate, and lead. Even attackers have recognized the power of harnessing AI. Now, it’s our turn. Let’s build a SOC that isn’t afraid of the future.

Many thanks to Nick for sharing his vision on the next-gen SOC. If any of our readers are hiring for a security leadership role, please connect with Nick. If you’re looking to contribute your voice to C2 Corner, drop me a note.

About Nick Goulding

Nick Goulding is a Cyber Security Leader with 18 years of experience specializing in Cyber Threat Intelligence, Threat Hunting, Detection Engineering, SOC/IR, Insider Threat, and Attack Surface Management.

Connect with Nick on LinkedIn

Nick Goulding
May 1, 2025
C2 Corner

Introducing C2 Corner: By Practitioners, For the Industry

Throughout my two decades in cybersecurity, I’ve met and spoken with some of the smartest people I’ve known, all of whom have shared incredible ideas on how to improve the industry. However, they often don’t have a public voice, or a wide-enough platform to share their insights.

Whether it’s due to company policies, NDAs, or extreme workloads, their wealth of insights, innovation, and raw creativity are locked inside the confines of their last SOC, Incident Response team, or threat intel shop. But that changes today.

Giving Practitioners a Voice

That’s why I’m excited to introduce C2 Corner. This is a new blog series where I will be collaborating with cyber pros in the midst of finding their next opportunity. We won’t be talking about job hunting. We will be highlighting the value they’ve already created, with each post showcasing their perspective on real-world security challenges, lessons learned, and their ideas for where the industry should go next.

Why We’re Doing This

I’m starting C2 Corner for three reasons:

  • To provide the industry with high-signal, no-fluff content straight from the front lines
  • To give practitioners a platform to speak their mind
  • To reinforce Abstract Security’s belief that real innovation starts with real operators

What to Expect

Every edition of C2 Corner will be co-authored by me and a seasoned practitioner. Together, we’ll tell their story—sharing their unique perspective and technical insights, while tying it back to real-life challenges security teams are facing now.

Want to Contribute?

If you have a story to share—a detection lesson, a CTI use case, a reflection on burnout, or anything in between, collaborate with us. Reach out directly, or message me on LinkedIn. We can work together to create something that others can learn from.

Let’s make space for real voices in cybersecurity. Let’s help great talent get seen and heard, and let’s build a corner within the industry where practitioners lead the conversation.

Chris Camacho
May 1, 2025
C2 Corner

Abstract Security Named in SACR’s 2025 SDPP Market Guide: The Rise of Security Data Pipelines

Download Report

The Abstract Security team and I are incredibly proud to announce our inclusion in the latest Security Data Pipeline Platform Market Guide, published by Software Analyst Cyber Research (SACR).

This new report delves into one of the most critical evolutions in cybersecurity today: the rise of Security Data Pipeline Platforms (SDPPs) as the foundation for a modern, AI-ready Security Operations Center (SOC). In its analysis, SACR identifies Abstract Security as a standout player in the space, acknowledging our role in modernizing SOC infrastructure, our commitment to proactive and efficient security operations, and recognizing our integrated approach to data management and threat analysis.

Here’s what the SACR report had to say about Abstract:

Overall, we were impressed with Abstract Security’s capabilities as a SOC pipeline and threat detection platform that goes beyond SDPP functionality. Abstract is positioning itself beyond traditional data pipelines by integrating advanced threat detection directly into the data streaming process.

Its streaming analytics include real-time correlation, behavioral anomaly detection, and threat intelligence matching, enabling contextual intelligence at the point of ingestion. By analyzing and acting on data as it flows, Abstract supports a more proactive detection posture, aligned with the needs of modern SOCs facing alert fatigue, data sprawl, and cloud-native complexity.

-Francis Odum, Founder at Software Analyst Cybersecurity Research

Why This Matters: Overcoming the Limitations of Legacy SIEMs

The traditional SIEM model is facing significant challenges. It’s breaking under the sheer weight and variety of security telemetry generated by modern environments, leading to unsustainable costs and operational complexities. Outdated pricing structures based on ingestion volume penalize organizations for increased visibility, forcing difficult trade-offs between coverage and budget.

That’s why the Abstract Security Platform is redefining how security data is ingested, enriched, and analyzed in real-time. By providing security teams with the flexibility to stream, transform, and act on data before it reaches downstream analysis tools, we empower them to regain control. Our intelligent pipelines enable efficient data reduction, enrichment with critical context, and real-time analysis, ensuring that only high-fidelity, relevant data is forwarded for further processing and long-term storage.

How Abstract Security is Leading This Shift

The SDPP market is rapidly evolving, and Abstract is leading this shift, providing a platform that embodies these key characteristics:

  1. Integrated Analytics and Detection: We go beyond basic pipeline functionality by embedding real-time streaming analytics directly within the data flow. This enables immediate threat detection and correlation at the point of ingestion, reducing alert fatigue and accelerating response times.
  2. AI-Powered Intelligence: Our platform leverages AI for data enrichment, anomaly detection, and provides AI-generated explanations of analytical findings, making complex data more understandable and actionable.
  3. Intuitive, No-Code Interface: We offer a user-friendly interface for building and managing pipelines and conducting analysis. This empowers analysts to take control of their data without requiring deep technical expertise.
  4. Flexible Data Handling: We support robust data reduction and filtering capabilities, ensuring cost optimization and reduced noise.
  5. Enabling SIEM Evolution: Abstract isn’t just an augmentation layer—our platform is already being adopted by organizations as a full SIEM replacement, or as a crucial bridge during SIEM migrations, demonstrating our ability to handle both data ingestion and advanced analytics.

Learn More

We’re proud to partner with forward-thinking SOC leaders who are ready to make data actionable. I encourage you to check out SACR’s full report, as it provides a very deep analysis of the emerging SDPP market.

Also, check out our Applied Security Data Strategy eBook to see how we help organizations achieve tangible results like:

  • Cut SIEM costs by 30-40%
  • Route and enrich data in real-time
  • Accelerate threat detection without reengineering your entire stack

Chris Camacho
Apr 28, 2025
News

The Invisible Enemy: Unmasking Microsoft 365’s Logging Blind Spots

Introduction

Security teams depend on logs to tell them what happened, when, and how. But what happens when the logs are incomplete—or worse, misleading?  

Microsoft environments are a common target for attackers, and many organizations rely on Microsoft’s services. Microsoft’s activity logs are used to gain insight into malicious activity, assess its scale, and determine next steps. Yet crucial information is often invisible, making these threats more difficult to detect and triage.

In this blog, we’ll dissect the limitations of Microsoft’s activity logs, walk through real-world simulations using the msInvader tool, and highlight what gets seen—and what silently slips through the cracks.

Overview of MS Activity Logs

Microsoft’s Office 365 (M365) and Graph APIs are the backbone of log-based detection for the Microsoft suite.  

M365 management API audit logs are divided into the following classifications:

  • Audit.AzureActiveDirectory
  • Audit.Exchange
  • Audit.SharePoint
  • Audit.General
  • DLP.All

Digging a little deeper, some unsettling realities emerge. Microsoft’s documentation (ref: https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#working-with-the-office-365-management-activity-api) describes the following issues:

  • Content blobs can take up to 12 hours to appear.
  • Events in later blobs may predate those in earlier ones.
  • Logs often miss critical events—especially reconnaissance and failed attack attempts.


These aren’t edge cases. They’re systemic blind spots.
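To see these properties first-hand, you can poll the Management Activity API yourself. The sketch below lists the available content blobs for a content type and fetches their events; it assumes you have already started a subscription for that content type and hold an OAuth token with the ActivityFeed.Read permission, and the tenant ID shown is a placeholder.

```python
"""Sketch of polling the O365 Management Activity API for audit content blobs."""
import requests

TENANT = "00000000-0000-0000-0000-000000000000"  # placeholder tenant ID
BASE = f"https://manage.office.com/api/v1.0/{TENANT}/activity/feed"

def fetch_audit_events(token: str, content_type: str = "Audit.AzureActiveDirectory"):
    headers = {"Authorization": f"Bearer {token}"}
    # List the available content blobs. Blobs can lag up to ~12 hours behind
    # the events they contain, and later blobs may hold earlier events, so a
    # real collector must track both contentCreated and event timestamps.
    listing = requests.get(
        f"{BASE}/subscriptions/content",
        params={"contentType": content_type},
        headers=headers,
        timeout=30,
    )
    listing.raise_for_status()
    for blob in listing.json():
        events = requests.get(blob["contentUri"], headers=headers, timeout=30)
        events.raise_for_status()
        yield from events.json()
```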

Microsoft’s Graph API provides ‘auditLog’ endpoints for directoryAudits and signIns, as well as customSecurityAttributeAudits and provisioning, which are still in beta. (ref: https://learn.microsoft.com/en-us/graph/api/resources/azure-ad-auditlog-overview?view=graph-rest-1.0)

Some events, such as authentication or sign-in logs, may be available from both sources; however, each source uses a different format.

Comparing a Microsoft 365 event with the corresponding Graph API entry reveals several key differences:

  1. There are two fields for the source IP address: ActorIpAddress vs. ipAddress.
  2. The timestamps are formatted differently, with the timezone missing from Microsoft 365 events.
  3. Microsoft 365 audit logs include additional request information, such as the User Agent.
  4. Microsoft 365 audit logs include the Operation/action of the event; Graph API logs don’t.
  5. GeoIP information about the source client is included in Graph API logs.

Meet msInvader: A Reality Check for Defenders

To test the boundaries of Microsoft’s visibility, we ran a series of attack simulations using msInvader (ref: https://github.com/mvelazc0/msInvader), a red team tool packed with over 20 post-auth techniques.

Using common endpoints like Graph API, Exchange Web Services (EWS), and the REST API, msInvader can:

  • Enumerate users, groups, roles, and apps
  • Read emails, reset passwords, or forward inboxes
  • Attempt privilege escalation via mailbox or folder delegation

Let’s break down what we saw—and what we didn’t.

Enumeration: The Silent Operator

Technique: enumerate_users
API Call: GET /v1.0/users
Result: 24 users successfully enumerated
Logged? ❌ No trace in either Graph or O365 logs

Reconnaissance is a common first step to gather information about the environment, and msInvader provides multiple techniques for enumeration. We ran the ‘enumerate_users’ technique, which uses the Graph API endpoint.

We can see that msInvader was able to successfully pull a list of users from the environment. Both Graph and Office 365 logs only provide authentication logs.
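For reference, the core of the technique is nothing more than an authenticated GET against the Graph users endpoint, sketched below; in default configurations this read itself generates no audit event, only the preceding sign-in does.

```python
"""Sketch of the Graph API call behind user enumeration."""
import requests

def enumerate_users(token: str) -> list[dict]:
    users, url = [], "https://graph.microsoft.com/v1.0/users"
    headers = {"Authorization": f"Bearer {token}"}
    while url:  # follow @odata.nextLink paging until the listing is exhausted
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        users.extend(data.get("value", []))
        url = data.get("@odata.nextLink")
    return users
```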


This raises a few questions: Why are there ‘UserLoginFailed’ events and a ‘UserLoggedIn’ event? Why are there two ‘Entra ID Sign-In’ events? Where is the log for the user enumeration query? Let’s take a step back and do a quick overview of the authentication process.  

msInvader has the option to authenticate using MFA by using the ‘device_code’ authentication type in its configuration. This allows the attacker to log in via a browser with a provided code.

Login proceeds normally from here and execution of msInvader automatically continues.
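Under the hood, this is the standard OAuth 2.0 device authorization grant, sketched below. The client ID here is a commonly cited public Office client and is shown purely for illustration; msInvader’s actual configuration may differ.

```python
"""Sketch of the OAuth 2.0 device code grant used by the 'device_code' option."""
import time
import requests

TENANT = "common"
CLIENT_ID = "d3590ed6-52b3-4102-aeff-aad2292ab01c"  # illustrative public client ID
AUTH = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0"

def device_code_login(scope: str = "https://graph.microsoft.com/.default") -> str:
    dc = requests.post(f"{AUTH}/devicecode",
                       data={"client_id": CLIENT_ID, "scope": scope},
                       timeout=30).json()
    print(dc["message"])  # e.g., "go to https://microsoft.com/devicelogin and enter <code>"
    while True:
        time.sleep(dc.get("interval", 5))
        tok = requests.post(f"{AUTH}/token", data={
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "client_id": CLIENT_ID,
            "device_code": dc["device_code"],
        }, timeout=30).json()
        if "access_token" in tok:
            return tok["access_token"]
        if tok.get("error") not in ("authorization_pending", "slow_down"):
            raise RuntimeError(tok.get("error_description", "login failed"))
```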

Getting back to the previous questions:

Why is there a ‘UserLoginFailed’ event?


The error event indicates the authentication process was interrupted to add an additional confirmation step for the app. This is a little misleading, since the ‘Operation’ for the event is ‘UserLoginFailed’, which suggests the login was unsuccessful. We see in the following event that the login was eventually successful.

Why are there two ‘Entra ID Sign-In’ events?

  • Similar to what was mentioned above, the initial event has the same error code of ‘50199’ but provides a ‘failureReason’ of ‘For security reasons, user confirmation is required for this request. Please repeat the request allowing user interaction.’

Where is the log for the user enumeration query?

  • None of the enumeration techniques (users, groups, applications, service principals, and directory roles) produce any events from either source.  

After testing a few techniques, I started receiving 429 status code responses for too many requests. Unfortunately, there were no logs of this either, even though this information could be useful for identifying suspicious activity.

Failed Attempts = Invisible Threats

You’d expect failed attacks to light up your detection and analysis tools, right? Not always.

Take these examples:

Technique: add_mailbox_delegation
Status: 403 Forbidden
Logged? ❌ Nope

msInvader has the technique ‘add_mailbox_delegation’ which can grant full access to another user’s mailbox if the user has the correct permissions. Attempting this action with a user that does not have the required permissions produces a 403 status code (Forbidden) from the API.


No log of this event appears from either source. This appears to be common with other techniques where only the successful attempts make it through to a log.

Technique: read_email, change_user_password
Status: 403 Forbidden
Logged? ❌ Still nothing

Technique: password_spray
Status: 403 Forbidden
Logged? ✅ Yes

Failed login attempts will show up in the logs and msInvader has a ‘password_spray’ technique we can use to show this.

The logs show the failed login attempts, along with the reason provided for why each attempt failed.

What can you detect?

Not all hope is lost. Certain actions are reliably logged and can form the basis of effective detection rules. For instance:

Technique: create_rule
Action: Add inbox forwarding rule
Result: 200 OK
Logged? ✅ Yes

A relatively simple action to detect is a new inbox forwarding rule being added to a user’s mailbox, and msInvader has a technique for this.

The Abstract Security platform has content packs containing analytics that can detect this, and other activity, in Microsoft logs.

More complicated analytics can use conditions based on parameters in the requests to Microsoft, making detections more specific and reducing false positives. For instance, when a mailbox permission is set, the access rights and mailbox folder need to be provided. It’s possible to build analytics that generate findings on specific events, like setting a user as ‘Owner’ of another user’s ‘Inbox’ folder.

This produces a corresponding finding in the platform.

Taking things a step further, we can also add conditions for the grantee receiving the permission. For example, a finding can detect a user’s ‘Inbox’ folder granting ‘Owner’ access to the ‘Anonymous’ user, which effectively makes anyone an owner without needing credentials.
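As a rough sketch of that detection logic in code, the snippet below scans exported M365 audit records for folder-permission grants of ‘Owner’ on an ‘Inbox’ to ‘Anonymous’ or ‘Default’. The operation and parameter names are illustrative stand-ins; check the exact field names in your tenant’s audit records.

```python
"""Sketch of a parameter-conditioned detection over exported M365 audit events."""
import json

def params(event: dict) -> dict:
    """Flatten the audit record's Parameters list into a name -> value map."""
    return {p["Name"]: p["Value"] for p in event.get("Parameters", [])}

def is_suspicious_folder_grant(event: dict) -> bool:
    if event.get("Operation") != "AddFolderPermissions":  # illustrative operation name
        return False
    p = params(event)
    folder = p.get("FolderPath", "") or p.get("Identity", "")
    return ("Inbox" in folder
            and "Owner" in p.get("AccessRights", "")
            and p.get("User", "") in {"Anonymous", "Default"})

if __name__ == "__main__":
    with open("m365_audit.json", encoding="utf-8") as fh:
        for ev in json.load(fh):
            if is_suspicious_folder_grant(ev):
                print("FINDING:", ev.get("UserId"), params(ev))
```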

Detection Is Only As Good As Your Data

Here's the hard truth: many early-stage attack techniques—especially recon and failed modifications—simply don’t generate logs in Microsoft’s default configurations.

Want more visibility? Microsoft offers enhanced logging for Graph API, but it comes with extra costs and some limitations. (ref: https://learn.microsoft.com/en-us/graph/microsoft-graph-activity-logs-overview)

Until then, defenders must focus on:

  • Detecting what is logged (e.g., inbox rules, sign-ins, modified permissions)
  • Building detections based on attack chains, not just atomic events (see the sketch below)
  • Understanding the environment to uncover misconfigurations
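To make the attack-chain point concrete, here is a minimal sketch that chains two individually unremarkable events: a sign-in from an IP the user has never used, followed shortly by a new inbox rule. The event shapes are simplified and illustrative.

```python
"""Sketch of chaining atomic events into a higher-confidence detection."""
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)

def chain_findings(signins: list[dict], rule_events: list[dict], known_ips: dict):
    """Yield findings where an inbox rule follows a new-IP sign-in by the same user."""
    risky = [s for s in signins if s["ip"] not in known_ips.get(s["user"], set())]
    for rule in rule_events:
        for s in risky:
            delta = datetime.fromisoformat(rule["time"]) - datetime.fromisoformat(s["time"])
            if s["user"] == rule["user"] and timedelta(0) <= delta <= WINDOW:
                yield {"user": rule["user"], "signin_ip": s["ip"],
                       "rule": rule.get("rule_name"),
                       "chain": "new-ip sign-in followed by inbox rule creation"}

if __name__ == "__main__":
    findings = chain_findings(
        [{"user": "bob", "ip": "198.51.100.9", "time": "2025-04-01T10:00:00"}],
        [{"user": "bob", "rule_name": "ForwardAll", "time": "2025-04-01T10:20:00"}],
        {"bob": {"203.0.113.7"}},
    )
    print(list(findings))
```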

Closing Thoughts

Attackers are getting stealthier, and Microsoft’s default logging, while robust in many ways, still has significant blind spots—especially when it comes to enumeration, failed attacks, and misclassified activity.

By understanding what’s logged, what’s not, and how attackers operate within those blind spots, defenders can build more resilient detection strategies.

Stay Curious. Stay Defensive.
Want more research like this? Follow Abstract Security’s CANVAS blog and subscribe to our monthly newsletter.

Abstract Astro
Apr 22, 2025
SIEM

Strengthening Integrations in the Abstract Security Platform

Going into 2025, Abstract Security continues to provide best-in-class technology to our customers and build upon our award-winning platform to eliminate the unneeded complexity involving data operations, all while enhancing threat insights. Here’s what you need to know about the latest integrations our teams have made available!

Integration Station, Your Destination for Seamless Connections

We recognize that no single cloud service provider or data-logging platform is enough for every organization. With that in mind, our teams at Abstract are always at work enhancing our growing list of integrations!

Abstract Security's platform enables enrichment, filtering, aggregation, analysis, and detection, and now supports the following data sources:

  • CloudFlare via Webhook and Cloud Storage buckets
  • Jumpcloud Events
  • AWS WAF Logs
  • GitLab on-prem
  • OnFIDO via Syslog
  • Squid Proxy via Syslog
  • Cisco ASA via Syslog
  • Auth0 via WebHook

Travel to More Data Destinations!

With growing compliance and data regulatory needs, it’s essential to unify security data operations within a central platform, and Abstract has added support for a number of new data destinations.

Additionally, we’ve enhanced the following data sources already available within the Abstract Security Platform!

  • Microsoft Graph API
  • SentinelOne API Integration
  • Microsoft 365
  • Intel471 Intelligence API
  • Windows Event Logs
  • Proofpoint

Additional Pipelines Content  

We’ve enhanced the following data integrations with additional pipelines content to filter, aggregate, transform, and enrich. CloudFlare users can reduce data by at least 40% (with just the out-of-the-box rules), and we’ve added new networking pipeline rules to aggregate and filter data. Microsoft Azure and Entra ID enrichment users can now easily map application names to AppID GUIDs.

Better Threat Insights with ASTRO (Abstract Security Threat and Research Org)

Here are the latest enhancements and product updates we’re making to our platform via ASTRO:

  • Phish Detection: Improved security via expanded lure keywords.
  • SentinelOne: New detection rules have been added to help aggregate and automate detection alerts.
  • GitHub: Added detection rules for suspicious branch and repo creation, transfer, and modification activities.

Don’t miss out on future updates. Subscribe to Abstract Security’s newsletter to stay up-to-date on our most exciting announcements. Also, follow us on LinkedIn and subscribe to our YouTube channel!

Aqsa Taylor
Jan 14, 2025
Product

Abstract Security Joins Forces with Analytica42 to Supercharge Integration Delivery to Google SecOps and More

Colby DeRodeff
Dec 16, 2024
News

Why your SIEM needs a Data Pipeline Management platform

Feel locked-in with your SIEM?

How do you integrate your data sources with data destinations such as SIEM platforms, XDRs, and Datalakes?

Whether you are new to security or a lifer, chances are you’ve heard the industry discussions about log management strategies (collection, routing, and storage), integrating data sources with SIEMs, and the costs associated with that effort. Companies must ensure high data quality as they collect and manage security data. Challenges arise because data comes from potentially hundreds of unique sources with differing formats and structures, and of course tremendous volumes of useless logs tangled with relevant data. These attributes require cleansing, complex mapping, normalization, and other processes to ensure accuracy and consistency, all while keeping stakeholders happy with how the data is handled and how the organization is protected.

And even if you figure all of that out, you’re left with having to work individually with each of the logging platforms because of their differing architectures, data mapping approaches and analytics.  

What’s in the way?

Direct integration of your data sources with SIEMs results in noisy data and a heavy onboarding or migration effort. Problems that arise can include:

  • Cloud visibility – Most logging platforms don’t support all the complex cloud sources you need to ingest from, whether because of cost or supportability. This leads to blind spots in data consolidation.
  • Paying ingestion costs on data that is not useful.
    Not all data from your data sources is useful for threat detection. Some events are internal service updates, some logs have extraneous fields, and some have additional info that is irrelevant. When you don’t have the right method to filter out the unnecessary data, you end up storing it in a high-cost data destination causing both your data volumes and costs to increase.
  • Noisy data blinding your SecOps.
    The State of SecOps and Automation report states that “99% of organizations report high volumes of alerts cause problems for security teams in identifying real threats buried in the noise.” Having noisy data filling your SIEM / datalake platforms isn’t just a cost problem, it’s a fundamental security problem. Noise from the 90% of data that is irrelevant takes focus away from the 10% that matters.
  • Compromising on visibility
    All data must be accounted for to have full visibility, right? But when data is not filtered or normalized before being routed to destinations, what compromises are you making? How do you decide which data sources need to be onboarded now, and which ones are of lesser value and can wait until later? Having a lot of data without filtering it correctly can actually hurt visibility: the organization ends up unable to see real threats through all of the noise.
  • Vendor Lock-In and SIEM migration complexity
    Every SIEM / XDR / datastore platform is different. Each vendor may have its own data structure and querying language. Once you’ve onboarded a data source with a particular SIEM vendor, several factors come into play before you can make a switch to a new one, like integration complexity, data volume that needs to be re-routed, analytic policies migration and the operational burden on the team to make the transition.  This creates vendor lock-in for an already saturated security team trying to defend the organization against real threats.

So how do you solve the potential problems that might arise from integrating data directly with your SIEMs and other platforms? What if there was a “helper” or a translation layer in between your data sources and data destinations that can take the heavy lifting of data operations off your internal team’s plate by decoupling the sources from destinations?

Introducing Abstract Security’s Data Pipeline Management

A data pipeline management tool helps decouple the data sources from data destinations and adds the ability to operate on the data before it reaches a destination. This removes individual onboarding dependency, and the prebuilt source and destination integrations make data easily routable to any destination.  

Abstract’s pipelines feature goes beyond a generic data pipeline management tool with its data and threat expertise. The main difference lies in Abstract’s strong security focus. There are a lot of DPM tools that can be the “helper” to route your data from one platform to another, however, not all tools are built with security in mind. Abstract Security has the data and threat expertise that enables it to distinguish between legit threat data and noise, to mask sensitive data before routing, to apply threat enrichments with live streaming intelligence, and most importantly to recognize what data should not be dropped (under noise reduction).  

With Abstract’s “no-code required” model, you can easily perform all these operations without having to hire a dedicated, certified professional to work with the platform. With Abstract Security’s pipeline features, you get:

  • Streamlined Quality Data: Abstract collects, reduces, enriches and routes data from various cloud sources such as AWS CloudTrail, Azure Activity Logs, and GCP logs. Abstract’s out-of-the-box rules filter out low-value data (e.g., debugging logs or redundant telemetry) before sending it to high-cost SIEM platforms, improving the quality of data ingested at destinations. In addition, Abstract’s data aggregation features further reduce data sizes by 40-50%.    
  • Normalization and Enrichment: Cloud logs can be enriched with contextual information (e.g., geolocation, IAM role mappings) before reaching the SIEM, improving the relevance of security alerts for cloud environments. Abstract’s Intel Gallery consists of a constantly updated in-house threat feed (the ASTRO feed) plus the ability to bring your own threat intelligence feeds into a single platform to apply enrichments.
  • Dynamic and Context-Aware Routing: Abstract allows for dynamic routing of logs to multiple destinations, enabling the organization to split the stream based on predefined analytic use cases or specific security scenarios. This approach supports cloud use cases, ensuring holistic visibility. Abstract’s ability to send data to multiple SIEMs and cloud monitoring tools ensures that the right data reaches the right platform, whether for compliance, security operations, or cloud monitoring.  
  • Simplified SIEM Transitions: Abstract’s architecture decouples data sources from specific SIEM platforms, enabling the organization to seamlessly replace SIEMs without significant re-architecting. By allowing simultaneous data flow to multiple destinations, Abstract can facilitate easy transitions to new SIEMs or cloud monitoring tools during migration periods, reducing integration costs and minimizing operational disruptions.  

With Abstract’s pipelines feature, you can remove the complexity from data operations and make the most out of your SIEM investments without getting locked-in.  
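To make the decoupling idea concrete, here is a minimal, purely conceptual sketch of what a pipeline stage does: drop low-value telemetry, enrich what remains, and fan events out to multiple destinations. The rules and destination handlers are illustrative; this is not Abstract’s internal implementation.

```python
"""Conceptual sketch of a decoupled filter -> enrich -> route pipeline stage."""
NOISY_EVENT_TYPES = {"heartbeat", "debug", "config_read"}
GEO = {"198.51.100.9": "RU"}  # stand-in for a real geolocation lookup

def process(event: dict) -> dict | None:
    """Reduce and enrich a raw event; return None to drop it."""
    if event.get("type") in NOISY_EVENT_TYPES:
        return None                                   # reduce: drop low-value telemetry
    event["src_geo"] = GEO.get(event.get("src_ip", ""), "unknown")  # enrich
    return event

def route(event: dict, destinations: dict) -> None:
    """Fan out: security-relevant events to the SIEM, everything to cheap storage."""
    targets = {"datalake"}
    if event["src_geo"] != "unknown":
        targets.add("siem")
    for name in targets:
        destinations[name](event)

if __name__ == "__main__":
    destinations = {"siem": lambda e: print("SIEM <-", e),
                    "datalake": lambda e: print("lake <-", e)}
    for raw in [{"type": "heartbeat"}, {"type": "login", "src_ip": "198.51.100.9"}]:
        event = process(raw)
        if event:
            route(event, destinations)
```

Because sources and destinations only ever talk to the pipeline, swapping a destination (say, during a SIEM migration) means changing a handler, not re-onboarding every source.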

Leave the data operations to Abstract so your teams can focus on stopping the adversaries who threaten our collective livelihood.      

Aqsa Taylor
Dec 15, 2024
News

Abstract Security + Amazon Security Lake + OCSF = Upgraded Security Data Management!

Thinking of integrating your data sources with AWS’s Amazon Security Lake offering? We’ve got you covered! We are excited to announce Abstract Security’s newest addition to our ever-growing integration list: Amazon Security Lake!

Quick recap on Amazon Security Lake

Amazon Security Lake from AWS is purpose-built to let you centrally query your data with Amazon Athena and OpenSearch and build machine learning models on your data via Amazon SageMaker. With growing compliance and data regulatory needs, it’s essential to unify security data operations in a central platform for visibility and reporting, and that’s essentially what Amazon Security Lake aims to deliver.

Image source: AWS docs (https://docs.aws.amazon.com/security-lake/latest/userguide/what-is-security-lake.html)  

Another Vendor Lock-In on Data?

Not at all! What’s cool about this offering is that it supports bringing custom data sources, outside of AWS into Amazon Security Lake via the Open Cybersecurity Schema Framework (OCSF) standard. OCSF is an open and collaborative project that creates a standardized, vendor-agnostic database structure to normalize security events. This creates a uniform format for events regardless of the source origin’s format or syntax. The benefit of such normalization is that data can be routed from any source to any destination, and analytics can be shared between platforms.
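As a simplified sketch of what such normalization involves, the snippet below maps a hypothetical vendor login event onto an OCSF-style Authentication record (class_uid 3002). The field mapping is deliberately minimal; consult the OCSF schema for the full set of required attributes.

```python
"""Sketch of normalizing a vendor login event into an OCSF-style record."""
from datetime import datetime, timezone

def to_ocsf_authentication(raw: dict) -> dict:
    ts = datetime.fromisoformat(raw["timestamp"]).replace(tzinfo=timezone.utc)
    return {
        "class_uid": 3002,                  # OCSF Authentication class
        "category_uid": 3,                  # Identity & Access Management
        "activity_id": 1,                   # Logon
        "time": int(ts.timestamp() * 1000), # OCSF uses epoch milliseconds
        "user": {"name": raw.get("username")},
        "src_endpoint": {"ip": raw.get("source_ip")},
        "status": "Success" if raw.get("result") == "ok" else "Failure",
    }

if __name__ == "__main__":
    raw_event = {"timestamp": "2024-12-01T12:34:56", "username": "alice",
                 "source_ip": "203.0.113.7", "result": "ok"}
    print(to_ocsf_authentication(raw_event))
```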

So why do I need Abstract Security?

Amazon Security Lake provides the basic building blocks to bring in custom data sources outside of AWS services. To fully realize its power, though, you need to build software that gathers data from your external data sources and normalizes it into the OCSF format.

Abstract Security's pipeline platform normalizes data in real time from your data sources to the same OCSF schema. So, you can route data from any source to Amazon Security Lake without needing to normalize or optimize it yourself. By integrating with Abstract, you can remove noise, ingest data from hybrid cloud environments, normalize in real time, and enrich with live-streaming threat intelligence before storing it. This integration plays a key role for our joint customers in multiple ways:

  1. Easily onboard any data source
    Abstract has over 100 (and counting) prebuilt, easy-to-configure integrations with support for Syslog, HTTP, and API-based data sources, in addition to robust support for cloud data sources such as S3, Kinesis Firehose, GCP GCS and Pub/Sub, and Azure Event Hubs. Easily onboard any data source (hybrid cloud environments, SaaS solutions, and even on-premises sources) without worrying about normalizing or optimizing it after storage.

  2. Don’t pay storage costs for data that is unnecessary
    With large volumes of data coming in, eliminating noise becomes cumbersome without full visibility. Abstract Security is built on data and threat expertise with out-of-the-box data processing rules. These eliminate unnecessary entries, redundant telemetry, and other noise from incoming data in real time before routing to your storage.

  3. Easily migrate from an existing data lake to Amazon Security Lake
    Abstract normalizes data in real time so you can migrate your data from any other storage destination to Amazon Security Lake without worrying about the operational burden and switching costs.

  4. No code experience required for data operations!
    Abstract not only simplifies data operations but does so in a user-friendly way with a drag-and-drop, no-code model; empower your security team with time-saving tools without the overhead of custom engineering or yet another query language.

Wait, there’s more! Abstract Security also provides streaming analytics on incoming data from your data sources and matches it against millions of Indicators of Compromise (IOCs), amplifying threat insights before routing it to Amazon Security Lake.

Get Abstract Security to see all this and more in action today!

Aqsa Taylor
Dec 5, 2024
Product

Abstract Security Welcomes OmegaBlack To Growing Roster of New Customers

Abstract Security, building the future of AI-enabled security operations, today announced it has added threat intelligence expert OmegaBlack along with several new MSSP customers over the past two months, as the company solidifies its position as channel-friendly, working the majority of its new business deals through the channel.

OmegaBlack is using Abstract Security for threat detection based on threat intelligence data to correlate against security logs and is utilizing the newly released Abstract Intel Gallery. Before Abstract Security, OmegaBlack would spend many hours every week dealing with complex integrations and legacy security tools. Now, the company combines all of its threat intel data with customer events, adds context, and does it all with less headcount and none of the manual work to set up playbooks.

"Our team has been searching for a better solution to weave threat intelligence into our data pipelines and detection workflows and came up short. We considered developing our own solution, but the complexity and resources required made this impossible for us," said Jonathan Kovacs, CEO of OmegaBlack. "Abstract Security has completely redefined security platforms, delivering solutions we never thought possible to serve our clients more effectively. Nobody else is doing this and we didn't even know it was possible to serve our clients this way."

Channel Program Expansion

Abstract Security's channel program is expanding with the addition of several new customers which represent the finserv, healthcare and technology industries.

"The channel continues to be number one for Abstract Security and we've been working very closely since our official launch in the spring with our partners to reach more potential customers," said Chris Camacho, COO and co-founder of Abstract Security. "We are thrilled to be working with Jonathan and the team at OmegaBlack and love seeing customers getting value from the solutions that our engineering team works extremely hard to build."

Abstract has seen growing demand since emerging from stealth and announcing its Seed funding in March 2024. In April, Abstract announced the opening of its first Middle East office. In May, the company announced the addition of Christopher Key to its Board of Directors and was selected as a "Pioneering Cybersecurity Startup" winner, as part of the 2024 Global Infosec Awards.

About Abstract Security

Abstract Security, founded in 2023, has built a revolutionary platform equipped with an AI-powered assistant to better centralize the management of security analytics. Crafted by category creators and industry veterans known for redefining the cybersecurity landscape, Abstract transcends next-gen SIEM solutions by correlating data in real time between data streams. As a result, compliance and security data can be leveraged separately to increase detection effectiveness and lower costs – an approach that does not currently exist in the market.

The leadership team of Colby DeRodeff, Ryan Clough, Aaron Shelmire, Chris Camacho, and Stefan Zier brings a unique set of experiences and backgrounds in product development and company-building expertise, at companies such as ArcSight (acq. by HP), Mandiant (acq. by Google), Palo Alto Networks, and Sumo Logic. For more information about the company, please follow the journey @Get_Abstracted.

Abstract Team
Oct 9, 2024
PR

Abstract and Chris Camacho: Threat intelligence - the next evolution

In this episode, Ryan had the pleasure of welcoming Chris to Abstract Security and discussing the importance of threat intelligence in the cybersecurity field. Chris and Ryan, both with operational roots in the financial services space, shared insights on how customers are utilizing threat intelligence in their Security Operations Centers (SOCs). We highlighted the shift towards more efficient and automated processes, with many organizations integrating threat intelligence into their Security Information and Event Management (SIEM) systems for proactive threat hunting.

We covered the evolution of threat intelligence use cases, from basic indicator investigation to advanced analytics and event enrichment. Chris emphasized the need for scalable solutions like Abstract Security to help organizations of all sizes effectively manage and leverage threat intelligence data. We also touched on the challenges of prioritizing and integrating threat intelligence feeds into detection and prevention tools, underscoring the importance of a platform that can handle large volumes of data.

A key highlight of the episode is the announcement of the Abstract Intelligence Gallery, a platform designed to streamline the management and correlation of intelligence data from various vendors. This new offering aims to empower organizations to make their threat intelligence actionable, enabling them to proactively identify and mitigate potential security threats.

Abstract Team
Aug 1, 2024
SIEM

Abstract and Jon Oltsik: Future of Security Operations

In this episode of Abstract Canvas, I had the pleasure of discussing the evolution of SIEM with Jon Oltsik, a former ESG analyst and security operations expert. We took a look at the history of SIEM, starting from the early days of focusing on perimeter use cases to the current challenges of managing vast amounts of data, especially with the transition to the cloud.

Jon highlighted the importance of adopting an architectural approach in security operations and analytics, emphasizing the need for a common data service layer to handle the increasing volume and variety of data sources. We also discussed the shift towards platformization in the industry and the challenges of balancing specialization with the need for a comprehensive security solution.

We also touched upon the significance of standardization in cybersecurity, with a particular focus on the OCSF initiative and the importance of industry-wide support for such standards. And we highlighted the ongoing debates and challenges in the industry, such as the need for log health monitoring even after two decades.

It was a super insightful and thought-provoking conversation with Jon, shedding light on the current state of security operations and the need for real innovation and collaboration in the industry. Listen until the end for our shared vision for the future of security operations and a commitment to driving positive change in the field.

Abstract Team
Jun 18, 2024
SIEM

Cal Poly Incubator Interview

Abstract Team
Apr 1, 2024
SIEM

Top Bottlenecks that impact your Data Strategy

Aqsa Taylor

Why your data storage and retrieval strategy may need a revisit

Abstract Team

Why is there a need for Data Pipeline Management?

Abstract Team

How is Abstract a Complete Data Platform for Security?

Abstract Team

Why your SIEM needs a Pipeline platform

Abstract Team

How does Abstract Simplify Data and Amplify Insights?

Abstract Team