Salesforce Forensics: Telemetry for Detection & Response
Abstract Security Threat Research Organization (ASTRO)
Oct 7, 2025
Blog
Reclaiming the Pipeline: Real-Time Detection Without Blowing the Budget
Security teams are burning cash on ingesting logs that won’t ever be used, while also getting overwhelmed by alerts they can’t trust. Meanwhile, the data pipeline that feeds everything from detections to dashboards is often controlled by IT, not security. That’s a problem. Because if security doesn’t own the pipeline, they can’t shape it to deliver the outcomes that matter: trusted alerts, accurate detections, and controlled costs.
The SIEM Model is Broken, But Abstract Changes the Game
Today’s SIEM approach is outdated, backwards, and needlessly expensive. The SIEMs that most of the industry relies on come with major tradeoffs—most prominently skipped detections and slow queries—which often force organizations to choose between visibility and budget. In most cases, CISOs end up losing on both.
Organizations should be responding to threats as soon as possible, not storing everything first and then detecting after. But that is exactly what happens when relying on legacy SIEMs, which force you to:
Store before you detect
Index before you ask questions
Pay for every byte, whether it’s useful or not
To address this, Abstract Security flips the model by enabling organizations to detect threats while data is still in the pipeline. This gives security teams the chance to detect threats in real time, while also integrating threat intelligence during every stage of normalization and enrichment. The result: more of the threats that matter, caught faster.
Own Your Pipeline. Own the Mission.
The Abstract Security Platform puts the data pipeline back into your hands. Using the platform, security teams can decide:
What to enrich, transform, or drop
What goes into Splunk, Snowflake, or other business analytic tools
What stays in the pipeline for real-time detection
The Future is Faster—and Smarter.
Regain autonomy without sacrificing visibility or settling for overly high costs. By shifting detections upstream, organizations can gather insights immediately without indexing, delays, or noise.
Most legacy SIEMs are built on a batch-and-query mindset: collect logs, store them, index them, then query them later. That approach may have worked when data volumes were manageable and attackers weren’t chaining multiple systems in real time. However, today’s threat landscape is faster and more interconnected than ever before.
To solve this, Abstract treats data as live context, normalizing, enriching, and correlating data while the event is happening. This enables detections like “first seen activity from a new country” or “lateral movement between identity systems” to fire in seconds—not minutes, hours, or never.
As organizations build out their own AI tools, this capability becomes even stronger: AI tooling receives enriched, consistent, in-flight data and can detect more patterns with higher accuracy. Rehydrating stale logs from cold storage and guessing at missing context becomes a pain point of the past.
Doing it the Right Way
Abstract is doing it the right way, rebuilding the modern SIEM the way it should’ve been built from scratch. Security teams shouldn’t have to choose between what they need to detect threats and what they can afford to store. Using Abstract, organizations can detect earlier, enrich smarter, and route with intention. Most importantly, it lets security teams lead the pipeline, not just follow it.
Abstract Team
Oct 16, 2025
Product
Salesforce Forensics: Telemetry for Detection & Response
Background
The recent Salesforce–Drift campaigns are yet another example of SaaS “supply-chain” lateral movement. While detection and awareness of these operations have improved in recent years, many organizations still lack full visibility into their SaaS technology stack.
Beyond the various actors and intrusion sets associated with theCom (including groups such as ShinyHunters, Scattered Spider, and Sp1d3rHunters), many others continue to exploit SaaS applications for data collection, lateral movement, and persistence.
Leveraging SaaS platforms has become common across numerous intrusions: compromising GitLab and GitHub accounts, stealing and abusing Slack tokens, misusing Workday accounts, co-opting infrastructure-as-a-service tools, and pillaging platforms that store sensitive operational data.
While the SalesDrift intrusions have drawn significant attention and placed Salesforce in the spotlight, it’s far from a new target. As early as spring 2025, rumors were circulating about a tangled web of cybercriminal actors (Scattered, some may say; Shiny, others might) quietly compromising Salesforce environments. In fact, members of theCom have been experimenting with Salesforce persistence techniques for years.
One of the most striking methods involves abusing Salesforce’s support for multiple identity providers. By standing up a phantom Identity Provider, attackers can maintain persistent access, bypassing centralized identity management entirely while their rogue provider continues authenticating in the background.
In a previous blog, we examined the GitHub Audit Log and the many ways to identify compromised GitHub organizations, accounts, and the activity performed within them.
This entry focuses on the forensic evidence left behind in Salesforce, a platform so extensible and feature-rich that it often falls outside traditional forensic playbooks. With near-limitless customization, Salesforce enables organizations to create custom objects for storing a vast range of sensitive data, data that can easily escape standard incident response workflows.
In this post, we’ll break down the available telemetry sources within Salesforce, how they can be used to investigate compromise and persistence, and what security teams should understand about the limits and timing of that data.
Analyzing Salesforce Activity
Available Telemetry in Salesforce
Salesforce provides several sources of telemetry that can be leveraged for security monitoring and investigation, including:
Login History – captures user login activity and authentication details.
Setup Audit Trail – records administrative and configuration changes made within the Salesforce environment.
Event Monitoring – logs nearly every access, modification, update, and deletion event across the platform.
Field History Tracking – extends field-level change history retention from the standard 18 months to indefinite.
Object Records – include metadata showing when each record was created, modified, and by whom.
Salesforce bundles features and services into seven different editions, and each edition has access to different levels of telemetry. The Developer edition is documented as having access to Event Monitoring and Field History Tracking; however, in our testing these features were not accessible.
Edition        Login History   Setup Audit Trail   Event Monitoring   Field History Tracking
Essentials     TRUE            FALSE               FALSE              FALSE
Starter        TRUE            FALSE               FALSE              FALSE
Pro Suite      TRUE            TRUE                FALSE              FALSE
Professional   TRUE            TRUE                FALSE              LIMITED
Enterprise     TRUE            TRUE                TRUE               TRUE
Unlimited      TRUE            TRUE                TRUE               TRUE
Developer      TRUE            TRUE                TRUE*              TRUE*

*Available according to Salesforce documentation, but could not be found in our testing.
Login History
Login History is included in all Salesforce editions and contains several useful fields, including SourceIp, Browser, Application, LoginTime, and LoginUrl, which capture details about when, where, and how a user logged in.
In Salesforce’s LoginHistory records, each user is identified only by an internal UserId (e.g., 0055g00000A8vYZAAZ). Because this ID isn’t human-readable or consistent with external identity systems, it can make analysis more difficult. To correlate login activity with real users, analysts need to query the User table and join it to the LoginHistory records; this reveals key details like the user’s email address, username, and role.
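As an illustration, here is a minimal sketch of that join using the open-source simple_salesforce Python library (the credentials and the 30-day window are placeholders; very large ID sets would need to be chunked to stay within SOQL query-length limits):

from simple_salesforce import Salesforce

sf = Salesforce(username="analyst@example.com", password="...",
                security_token="...")  # placeholder credentials

# Pull recent logins, then resolve the opaque UserId values against the User table.
logins = sf.query_all(
    "SELECT UserId, LoginTime, SourceIp, Status, Application, Browser, LoginUrl "
    "FROM LoginHistory WHERE LoginTime = LAST_N_DAYS:30 ORDER BY LoginTime DESC"
)["records"]

users = []
user_ids = sorted({rec["UserId"] for rec in logins if rec["UserId"]})
if user_ids:
    id_list = ",".join("'{}'".format(uid) for uid in user_ids)
    users = sf.query_all(
        "SELECT Id, Username, Email FROM User WHERE Id IN ({})".format(id_list)
    )["records"]
user_map = {u["Id"]: u for u in users}

for rec in logins:
    who = user_map.get(rec["UserId"], {})
    print(rec["LoginTime"], who.get("Username", rec["UserId"]),
          rec["SourceIp"], rec["Status"], rec["Application"])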
Within Abstract Security we pull the User data into our Identity Model, enabling enrichment of Salesforce LoginHistory events and any object modifications with the appropriate UserName from the User table.
Login History Timing and Retention
Salesforce retains Login History events for six months within the platform (Salesforce documentation). Most login events are generated in near real time, typically within a few seconds of the user’s login.
In rare cases, however, Login History records may be delayed, sometimes by several hours after the original login event occurred. Analysts should account for this delay when correlating authentication activity across data sources.
Login Forensics
Salesforce Enterprise, Unlimited, and Developer editions can optionally include Login Forensics. This feature extends Login History retention for up to 10 years and adds Suspicious Activity Detection for authentication and authorization events (Salesforce documentation).
Setup Audit Trail
The Setup Audit Trail records configuration changes made within Salesforce. These can include administrative actions such as adding new users, modifying the identity provider, or enabling session security policies. It also tracks interface-level changes, like updates to page layouts. In short, nearly any change made through the Setup menu is captured by the Setup Audit Trail.
Setup Audit Trail events contain a limited set of fields, primarily the event Id, the Action performed, the CreatedDate of the modification, and the user who made the change (identified by their UserId or DelegatedUser).
The Setup Audit Trail captures events such as password changes, updates to authentication providers, and other administrative modifications. A full list of tracked actions is available in the Salesforce documentation.
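As a rough sketch of what that looks like in practice (placeholder credentials again, and the action substrings below are illustrative rather than an exhaustive watch list), the trail can be pulled over the API and filtered for identity- and SSO-related changes:

from simple_salesforce import Salesforce

sf = Salesforce(username="analyst@example.com", password="...",
                security_token="...")  # placeholder credentials

audit = sf.query_all(
    "SELECT Id, Action, Section, Display, CreatedDate, CreatedById "
    "FROM SetupAuditTrail ORDER BY CreatedDate DESC LIMIT 2000"
)["records"]

# Substrings of Action values worth a closer look during an investigation
# (illustrative only; tune the list to your environment).
watchlist = ("samlconfig", "authprovider", "identityprovider",
             "connectedapp", "certificate", "profile", "permset")

for entry in audit:
    action = (entry["Action"] or "").lower()
    if any(token in action for token in watchlist):
        print(entry["CreatedDate"], entry["Action"], entry["Section"],
              entry["Display"], entry["CreatedById"])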
Setup Audit Trail entries are typically recorded in near real time. However, as with other Salesforce object histories, event creation can occasionally be delayed, sometimes by up to 24 hours. These records are retained for up to 180 days within the platform.
Event Monitoring
Event Monitoring is available to subscribers of Salesforce’s Enterprise, Unlimited, and Developer editions, and can be purchased as an add-on for other subscription tiers. It records nearly all object-level activity performed by users—including creations, modifications, executions, and data retrievals—providing some of the richest telemetry available within Salesforce.
This data is particularly valuable for:
Detecting anomalous data retrievals that may indicate reconnaissance or exfiltration of sensitive information.
Reconstructing user activity timelines to understand what actions were taken, by whom, and what data was exposed.
There are almost 50 distinct event types available through Event Monitoring, covering actions such as API calls, report executions, and login activity. A complete list is available in Salesforce’s documentation. Each event type can have a slightly different schema, but most include core fields such as Timestamp, EventType, and UserId_Derived.
While individual events are generated in near real time, the log file delivery process varies by subscription:
Developer Edition – log files retained for 1 day
Enterprise Edition – log files retained for 30 days
Essentials, Starter, Pro Suite, and Professional Editions – not available
“An event generates log data in real time. However, daily log files are generated during non-peak hours the day after an event takes place. Therefore, daily log file data is unavailable for at least one day after an event. For hourly log files, depending on event delivery and final processing time, expect an event to take 3–6 hours from the time of the event to become available in the log file.”
In practical terms, Event Monitoring offers near-real-time insight for active investigations but introduces delays in finalized log file delivery, meaning analysts must account for this lag when performing time-sensitive correlation or detection.
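A minimal collection sketch is shown below (placeholder credentials and API version; it assumes an edition with Event Monitoring and that simple_salesforce exposes session_id and sf_instance, which current releases do). Each EventLogFile record points at a CSV of events for one event type and day:

import requests
from simple_salesforce import Salesforce

sf = Salesforce(username="analyst@example.com", password="...",
                security_token="...")  # placeholder credentials
API_VERSION = "v59.0"  # placeholder; use whatever API version your org supports

# Daily log files for yesterday (hourly files also expose an Interval field).
log_files = sf.query_all(
    "SELECT Id, EventType, LogDate, LogFileLength FROM EventLogFile "
    "WHERE LogDate = YESTERDAY"
)["records"]

for lf in log_files:
    url = "https://{}/services/data/{}/sobjects/EventLogFile/{}/LogFile".format(
        sf.sf_instance, API_VERSION, lf["Id"])
    resp = requests.get(url, headers={"Authorization": "Bearer " + sf.session_id})
    resp.raise_for_status()
    out_name = "{}_{}.csv".format(lf["EventType"], lf["Id"])
    with open(out_name, "wb") as fh:
        fh.write(resp.content)  # CSV rows: one event per line for that event type
    print("wrote", out_name, lf["LogFileLength"], "bytes")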
Threat Detection
Salesforce Enterprise, Performance, and Unlimited editions can optionally include Threat Detection (Salesforce documentation).
Most Threat Detection alerts focus on identifying anomalous user behavior—for example, when a user opens significantly more reports, views, or objects than normal. Additional capabilities, such as detecting session hijacking, rely on internal Salesforce telemetry that isn’t available through standard event logs.
Threat Detection events are published through the Event Monitoring API and are typically delivered in near real time.
Object forensics and Field History Tracking
Beyond Event Monitoring data, Salesforce Objects themselves contain valuable metadata that can be used to build a forensic timeline of record creations and modifications.
By default, Salesforce retains 18 months of object history. When Field History Tracking is enabled, that history is preserved indefinitely, allowing analysts to trace changes long after standard retention windows would have expired.
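As an illustration, assuming Field History Tracking is enabled on the Account object (substitute any tracked object and its corresponding <Object>History table), record-level metadata and field-level history can be pulled together into a simple change timeline:

from simple_salesforce import Salesforce

sf = Salesforce(username="analyst@example.com", password="...",
                security_token="...")  # placeholder credentials

# Record-level metadata doubles as a lookup for human-readable names.
records = sf.query_all(
    "SELECT Id, Name, CreatedDate, CreatedById, LastModifiedDate, LastModifiedById "
    "FROM Account ORDER BY LastModifiedDate DESC LIMIT 500"
)["records"]
names = {r["Id"]: r["Name"] for r in records}

# Field-level history: every tracked field change with old and new values.
history = sf.query_all(
    "SELECT AccountId, Field, OldValue, NewValue, CreatedById, CreatedDate "
    "FROM AccountHistory ORDER BY CreatedDate DESC LIMIT 2000"
)["records"]

for change in history:
    account = names.get(change["AccountId"], change["AccountId"])
    print(change["CreatedDate"], account, change["Field"],
          repr(change["OldValue"]), "->", repr(change["NewValue"]),
          "by", change["CreatedById"])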
Summary
Salesforce exposes a wealth of telemetry that can help analysts investigate intrusions, trace suspicious user activity, and understand how data moves across the environment. While Enterprise and Unlimited editions offer the most granular visibility, even baseline Salesforce editions contain valuable signals for identifying unauthorized access or configuration changes.
At Abstract Security, we’ve built native integrations to automatically collect and normalize telemetry from Salesforce, along with GitHub, GitLab, Cloudflare, and hundreds of other SaaS platforms. Our pipeline ensures every event is captured and preserved in real time, giving defenders complete visibility across their SaaS ecosystem without the risk of data loss or fragmentation.
Abstract Security Threat Research Organization (ASTRO)
Oct 7, 2025
Security
Shifting Detection Left: How to Decouple SIEM Detection Criteria for Modern Security Operations
Security operations migrations are common as more organizations modernize, but choosing the right SIEM platform isn’t straightforward. Some organizations are choosing to move from legacy SIEM solutions to newer cloud-native platforms, while others are making the exact opposite move.
Some are migrating from older SIEM solutions such as Splunk and Elastic to newer cloud-native platforms like Google SecOps or Azure Sentinel, while a growing number are migrating from Google SecOps to other solutions, and even back to Splunk. Others are moving to modular architectures consisting of Online Analytical Processing (OLAP) platforms, data lakes such as Snowflake, Databricks, and Azure Data Explorer, or plain object storage with a query platform like Athena or BigQuery on top of the data.
When it comes to migrations, moving data is just the beginning; the real complications lie in migrating and optimizing your detection and analytics capabilities. At Abstract Security, we’ve guided numerous organizations through this once-complex transition, and we’ve found that the most successful migrations aren’t just about moving data and rules from A to B. They’re about rethinking how detection engineering should work, and modernizing security operations processes along the way.
The Hidden Complexity of Security Analytics Migrations
While connecting data sources to a new platform might seem straightforward, the reality is far more nuanced. Organizations frequently encounter obstacles with source integrations or struggle to map legacy data structures to new schemas, such as OCSF.
Even when the mapping and logic translation hurdles are cleared, a deeper challenge emerges: how do you grapple with years of accumulated detection content?
This is where most teams face a critical decision point. Many choose the path of least resistance: simply recreating what they had before. While this approach feels safe and comfortable, it will likely perpetuate inefficiencies and miss the opportunity to build a more strategic detection program.
Three Paths Forward
When it comes to migrating detection content, security teams typically choose one of three approaches:
Approach #1: Lift and Shift
This approach involves moving all existing detection rules to the new platform with minimal changes. While this preserves institutional knowledge, it's often the most resource-intensive approach and carries forward all the technical debt accumulated over years.
This approach may seem like the easiest decision; however, planning and execution can stretch across many cycles. It also typically carries over legacy content that wastes compute resources and consumes detection rule limits.
Approach #2: Starting Fresh
Starting from the beginning means entirely abandoning legacy content and building from vendor-provided rules and templates. This approach offers a clean slate but risks losing valuable organizational context and hard-won insights about your specific threat landscape. While technically the fastest, it can rightfully leave stakeholders concerned about having complete detection coverage.
Approach #3: Review, Revise, and Refresh
This is a strategic middle ground that involves evaluating existing detections, keeping what delivers value, optimizing for the new architecture, and making room for improved approaches. In our experience, this path consistently delivers the best outcomes. However, it requires discipline and a willingness to make difficult decisions about legacy content.
Technical Debt Hidden in Your Detection Rules
Before diving into migration strategies, it's worth examining the common problems that plague detection rule sets across organizations:
Legacy Threat Focus: We still regularly encounter Conficker detection rules in 2025 - sixteen years after the malware's peak relevance. These outdated rules consume computational resources while providing little security value.
Redundant Coverage: Many organizations run duplicate detection logic across EDR, NDR, and SIEM platforms, creating noise without additional security benefit.
Resource-Intensive Rules: Some detection rules consume disproportionate computational resources relative to their security impact, creating scalability challenges and increased costs.
Vendor Overlap: Numerous custom rules attempt to detect threats already covered more effectively by vendor-provided detections, representing wasted engineering effort.
A Better Approach: Detection Engineering Focused on What Matters
The most effective corporate detection engineering teams focus their limited resources on infrastructure and risks unique to their organization. Rather than trying to catch every possible threat with custom rules, they complement vendor-provided detections with targeted coverage of organizational-specific risks.
Instead of creating multiple rules to detect CVE-2024-49706 SharePoint exploitation attempts, implement a single rule focused on patch management regressions that identifies vulnerable or legacy systems. This approach addresses the root cause rather than the symptoms, since a vulnerable system likely indicates broader posture management issues.
Shifting Detection Left: Prevention Over Reaction
Too many detection programs focus exclusively on later stages of the attack lifecycle when adversaries are already operating within your environment. A more strategic approach emphasizes "shifting left" to detect and address risks earlier in the process.
Focus Areas for Organizational-Specific Detections:
Privileged Identity Detection: Monitor access to the critical infrastructure management accounts, systems, and processes that control your environment’s security posture, as well as unusual activity involving accounts with elevated permissions across your unique technology stack.
Change Management Anomalies: Detect production changes occurring outside established release windows or approval processes.
Infrastructure Exposure Detection: Identify resources exposed to attacks and compromise.
Custom Application Behaviors: Monitor proprietary applications and internal tools that external vendors cannot effectively cover.
The Strategic Advantage of Decoupled Detection
Rather than tightly coupling your detection logic to specific platforms, consider a more flexible architecture. By decoupling detection criteria from the underlying analytics platform, organizations can:
Improve Consistency: Detection logic remains consistent across different security tools
Enable Real-Time Response: Stream-based detection reduces mean time to detection compared to batch processing approaches
Focus Resources: Security teams can concentrate on detection logic rather than platform-specific implementation details
Taking Action
Deciding which of the three paths (Lift and Shift, Start Fresh, or Review, Revise, and Refresh) is right for you is a decision your organization must make. Lift and Shift may seem the easiest, but it typically proves the hardest and most grueling, with the least benefit.
Taking the path to Review, Revise and Refresh your detection content provides an opportunity to build a more strategic, efficient, and effective detection program.
Start by auditing your current detection content. Identify rules that provide genuine security value versus those that exist simply because they always have. Consider which threats are better addressed through preventive controls rather than detection rules. Most importantly, focus your detection engineering resources on the unique aspects of your environment that only your team can effectively monitor.
Audit and Categorize Existing Detection Content. Categorize content into multiple areas:
Legacy content for removal (Conficker, etc.)
Custom content unique to your organization and infrastructure
Recent high value content
Migrate detection content to managed detection configurations
Plan – Determine where this content is best applied
Real time detections
Summary detections over longer time periods
Implement
Translate to destination systems
Apply workflows post detection
Manage, Measure and Maintain
Building for the Future with Abstract
Key Principles for Modern Detection Engineering:
Quality Over Quantity: A smaller set of well-tuned, relevant detections outperforms hundreds of noisy, outdated rules.
Context-Aware Coverage: Focus detection efforts on risks and assets that matter most to your specific organization.
Complementary Strategy: Design custom detections to fill gaps not covered by vendor solutions rather than duplicating existing capabilities.
There's a goal greater than successful migration: building a detection capability that evolves with your organization's needs and threat landscape. This requires moving beyond the traditional approach of accumulating more and more rules and toward a more strategic focus on high-impact, organizationally-relevant detections.
How to Triage npm Malware Incidents | Free npm SBOM Tool for Fast Response
In September 2025, the npm ecosystem was hit by a fast-spreading worm—dubbed Shai-Hulud (Dune fans know why)—that hijacked maintainer accounts and infected nearly 200 packages. For many security teams, this triggered a familiar scramble: are we exposed, and how do we check quickly? Larger enterprises may lean on commercial scanners, but smaller teams (or even big teams dealing with developer laptops and shadow installs) often find themselves googling “how to triage npm malware” at 11PM on a Friday.
This blog is designed to be an evergreen playbook. We’ll use the latest npm worm as an example, but the guidance applies to any npm supply chain compromise. Whether you’re a one-person security team or part of a global enterprise, you’ll find practical steps, a free tool, and common pitfalls to avoid when npm malware strikes.
Why npm Malware Keeps Coming Back
Malware on npm is a recurring problem. The npm ecosystem is massive, open, and highly dependent on volunteer maintainers. When even one maintainer account gets phished, every package they control can become a delivery mechanism for an npm supply chain attack. Features like post-install scripts and continuous integration access make it easy for an npm worm to spread quickly.
That’s why incidents like the Shai-Hulud npm malware outbreak are basically inevitable. The key isn’t panic, but preparation: knowing that another attack will come, and having a clear way to triage exposure fast.
First 24 Hours: What to Do Immediately
When an npm worm or any other npm malware outbreak is unfolding, the first 24 hours are critical. It’s damage control time: you want to quickly identify exposure, limit blast radius, and secure your credentials.
Here’s what to prioritize:
Check your exposure
Review whether compromised packages were ever installed in your environment.
Look at your CI/CD pipelines, developer laptops, and any production systems.
Use tools (we'll link to ours below) that recursively scan package.json and package-lock.json files without executing any code. This gives you a safe inventory to cross-check against advisories.
Rotate secrets immediately
Assume that any system which pulled malicious packages may have leaked credentials.
Revoke and reissue npm tokens, GitHub personal access tokens, and cloud credentials (AWS, GCP, Azure).
Do this from a clean workstation to prevent re-exposure.
Block further installs
Temporarily pause or gate npm installs in CI/CD until you confirm which packages are safe.
Use lockfiles (package-lock.json) to pin exact versions, and prevent automatic upgrades that might pull malicious versions.
Audit your CI/CD workflows
Many npm worms persist by adding or modifying GitHub Actions or other CI jobs.
Check for new workflow files, suspicious outbound webhooks, or scripts that were not part of your baseline.
Communicate quickly
Let developers and incident response teams know what’s happening.
Share a simple triage checklist so smaller teams (or those wearing many hats) don’t get stuck wondering what to do next.
In other words, your triage npm compromise plan should emphasize speed and containment: know if you’re affected, cut off exposure, and secure your secrets. Incident response for npm worms will be focused on stopping the bleeding and buying time to investigate further, not perfection.
Tools for Triage: From Crude to Sophisticated
Not every team has the same resources, but everyone needs a way to quickly map out their npm dependencies when malware hits. Here are two practical approaches you can put into action right away:
Free npm SBOM Tool
Justin Borland, Director of Threat Engineering here at Abstract, built a simple script that generates a software bill of materials (SBOM) from your package.json and package-lock.json files—without actually installing code. By running npm with --package-lock-only and --ignore-scripts, you get a safe dependency inventory to cross-reference against advisories.
It’s crude but effective, especially for small shops, incident responders under pressure, or larger orgs looking for fast validation outside of their usual scanners.
It’s worth noting how this approach differs from simply running npm audit. The built-in audit checks dependencies against a vulnerability database but executes install logic, which can be risky during an active compromise. This SBOM script, by contrast, safely builds a dependency inventory without running code, which makes it especially relevant when malware is spreading.
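To make the idea concrete, here is a minimal, illustrative sketch of the same approach (not the ASTRO tool itself): it walks a directory tree, parses any package-lock.json files it finds, and prints a flat name@version inventory you can diff against advisories, without ever invoking npm or running install scripts.

import json
import sys
from pathlib import Path

def lockfile_deps(lockfile: Path) -> set[str]:
    """Extract name@version pairs from an npm lockfile (v1, v2, or v3 format)."""
    deps = set()
    try:
        data = json.loads(lockfile.read_text(encoding="utf-8"))
    except (OSError, json.JSONDecodeError):
        return deps
    # Lockfile v2/v3: a flat "packages" map keyed by "node_modules/<name>".
    for path, meta in data.get("packages", {}).items():
        name = meta.get("name") or path.split("node_modules/")[-1]
        if name and meta.get("version"):
            deps.add("{}@{}".format(name, meta["version"]))
    # Lockfile v1: a nested "dependencies" tree.
    def walk(tree):
        for name, meta in tree.items():
            if meta.get("version"):
                deps.add("{}@{}".format(name, meta["version"]))
            walk(meta.get("dependencies", {}))
    walk(data.get("dependencies", {}))
    return deps

def main(root: str) -> None:
    inventory = set()
    for lockfile in Path(root).rglob("package-lock.json"):
        inventory |= lockfile_deps(lockfile)
    for entry in sorted(inventory):
        print(entry)

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else ".")

Diffing that output against a published list of compromised package versions gives a quick yes or no on exposure.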
Enterprise Security Scanners
For larger teams, tools like Snyk and Dependabot provide automated visibility into vulnerable packages. They integrate with CI/CD and developer workflows, surfacing risks as part of the pipeline. These platforms do have blind spots, particularly on developer laptops, shadow repos, or systems outside the official build process. So even enterprises benefit from lightweight scripts alongside their formal tools.
Open Source Security Scanners
Smaller teams can look at projects like OSV-Scanner, Checkov, or Grype. These are free and community-driven, offering solid coverage for dependency and container scans. While they may lack some of the polish and integrations of commercial platforms, they can be combined with lightweight scripts to fill the gaps.
Common Pitfalls to Avoid
Even well-prepared teams can make mistakes in the heat of an npm malware incident. Here are some of the most common traps to watch out for:
Relying on SBOMs alone. A static SBOM shows you what you think you’re using, but registries can change fast. Worms mutate packages in hours, so you need continuous checks rather than one-time inventories.
Forgetting developer laptops or contractor machines. Enterprise scanners often focus on production pipelines, but malware frequently enters through dev boxes or side projects. Shadow installs are easy to miss.
Assuming 2FA or compliance will save you. Two-factor authentication and policy paperwork don’t stop worms that exploit maintainer accounts or abuse trusted workflows. Don’t let a checkbox mentality create blind spots.
Treating npm audit as a silver bullet. It only checks known vulnerabilities and executes install logic, which can be risky in an ongoing compromise. Safer alternatives, like SBOM generation without code execution, are essential.
The lesson here is don’t assume your usual security practices will cover every angle. Build your triage npm compromise playbook with these pitfalls in mind.
How to Future-Proof Against the Next Worm
Responding in the moment is (obviously) important, but lasting resilience comes from preparing for the next npm worm before it arrives. Here are steps teams can take to future-proof their security posture:
Maintain an evergreen triage playbook. Document your “first 24 hours” actions and update them after each incident. Having a checklist ready keeps smaller teams from scrambling when every minute matters.
Use quarantine mirrors and strict version pinning. Host an internal registry or mirror where new packages are held until they’re verified. Pin dependencies to known-good versions so sudden upstream changes don’t slip into production unnoticed.
Invest in provenance and signed builds. Require signed tags or verified publishers before allowing packages into your builds. Trust your own pipeline, not the public registry.
Automate secret rotation. Make credential resets part of your standard playbook so it’s routine instead of chaos-driven.
Building these safeguards now ensures that the next npm supply chain attack is just a manageable fire drill.
Quick Checklist for Teams of Any Size
If time is short and stress is high, here’s a concise checklist that any team can run through during an npm malware incident:
Scan dependencies safely using the SBOM script or OSV‑Scanner without executing code.
Check for known npm malware advisories and cross‑reference against your dependency inventory.
Rotate secrets immediately—npm tokens, GitHub PATs, and cloud credentials.
Pause npm installs in CI/CD until affected packages are verified or replaced.
Audit CI/CD workflows for new or modified scripts, suspicious webhooks, or privilege escalations.
Pin versions in lockfiles to prevent pulling new, potentially compromised updates.
Quarantine and verify any new third‑party packages before allowing them into production.
Communicate clearly with developers, incident response, and stakeholders so everyone knows the plan.
This npm incident response checklist gives both small and large teams a fast path to triage npm compromises without losing time to debate.
Resources
We linked to this above, but in case you missed it, here’s the link to npm_sbom_generator.py — our ASTRO team’s lightweight Python tool that recursively processes your package.json and package-lock.json files to build an SBOM-style dependency inventory. It runs without installing any code and ignores install scripts, making it safer during active compromise or exploratory triage. Ideal for small teams or incident responders who need fast, actionable visibility.
Aurora Starita
Sep 20, 2025
Security
How Abstract Turns Security Into a Competitive Edge
Every year, security teams add new tools to keep up with evolving threats. And every year, those tools create more logs, more storage, and more alerts. Instead of clarity, teams end up buried in data that is expensive to store and difficult to interpret.
The real issue isn’t visibility gaps—it’s that most security architectures are still built on outdated data strategies that attempt to collect everything. That model drives costs up while slowing down detection.
Abstract Security flips this approach. By moving detections closer to the source—before data is indexed, normalized, and warehoused—Abstract reduces SIEM ingest costs, eliminates noise, and speeds response. The result is a modern security architecture that transforms security from a sunk cost into a business advantage.
Turn Security into a Revenue Enabler
Traditional SIEMs assume you’ll pay to detect threats after data is indexed. That model is both slow and expensive—it bloats your storage bill while attackers move faster than your indexes can.
Abstract Security shifts detection upstream. Threats are identified before they hit your data lake, which means:
For CISOs: earlier visibility into attack chains and faster response, without the ingest tax.
For CFOs: a measurable reduction in SIEM storage spend, often 30–50% less data volume pushed downstream.
For SOC managers: fewer noisy alerts and more high-fidelity detections that analysts can actually act on.
For boards: proof that security isn’t just a cost center—it actively protects revenue by preventing fines, downtime, and reputational damage.
By changing the architecture, Abstract changes the equation. Instead of scaling linearly with data growth, security scales with business value: catching threats earlier, protecting uptime, and aligning costs to outcomes.
With the Abstract Security Platform, teams gain:
Real-time detections from streaming data — stop threats before they reach downstream storage.
Chain-of-event logic pre-index — connect attacker steps earlier in the kill chain.
Threat intel enrichments in pipelines — add context where it matters most.
Edge-side normalization — cut through vendor noise with a consistent, agnostic data model.
This doesn’t just benefit the SOC. Internal risk teams, auditors, and executives gain higher-fidelity insights without the clutter of redundant logs.
Accelerate Business with Better Security
Speed isn’t just about catching threats faster—it’s about removing friction across the business. By embedding threat intelligence directly into every stage of the pipeline, Abstract Security ensures detections happen in real time, before attackers can move laterally or exfiltrate data.
Instant suppression of noise — Known malicious IPs, domains, and signatures are filtered at the edge, so analysts never waste cycles chasing false positives.
Context-rich detections — Threats are enriched with intelligence as they’re processed, giving IR teams high-fidelity alerts instead of raw logs.
Audit-ready visibility — Clean, normalized, and annotated data flows downstream, reducing the burden on compliance and risk teams.
Controlled data growth — By preventing “junk logs” from ever entering the SIEM or data lake, organizations shrink storage footprints while improving data quality.
The impact goes beyond the SOC. Internal auditors gain clearer evidence trails. Business continuity teams see reduced downtime. CFOs see security spend tied directly to efficiency gains. And boards gain confidence that the organization is resilient against modern threats without overspending on legacy architectures.
Abstract doesn’t just make detection faster—it makes the entire business more agile, cost-efficient, and trustworthy.
A Modern Security Architecture for Modern Businesses
Instead of focusing on additional tooling, organizations should look for a solution that can turn security into a strategic asset. Leveraging the Abstract Security Platform, businesses can:
Better align budgets to business outcomes, not ingestion volume
Enable faster threat detection and reduce overall business risk
Gain better insights into current data and achieve faster IR with less downtime
Your security data strategy shouldn’t be focused on storing more data. It should focus on moving faster while also reducing risk and enabling resilience.
Ready to Rethink Security Economics?
Which logs are driving up your SIEM or data lake bill without adding value? Let’s talk.
Abstract Team
Sep 17, 2025
Product
Shift-Left Detections with Abstract
A note from the Author, CEO and Co-Founder Colby DeRodeff
I’ve been building security analytics platforms for over two decades, starting as one of the early employees at ArcSight in 2001. Back then, we were solving how to centralize detection and security data at scale. But today’s problem is different: the data is bigger, faster, and more distributed. The SIEM market has drifted from real time to a post-index world, and waiting to detect threats after data is stored simply doesn’t cut it.
At Abstract, we’ve built our platform around a different principle: move detection into the stream. I’ve always believed the best security outcomes come from reacting in real time, not after the fact. That’s why Shift-Left is the design philosophy we started with, the reason customers come to us today, and the reason I wrote this paper to share our vision. Let’s continue our mission to “Detect and stop adversaries who threaten our collective livelihood!”
Shift-Left Detections with Abstract: Streaming Analytics for Faster Detection and Response
Security teams today don’t lack data—they are overwhelmed by it. As telemetry volumes increase from SaaS platforms, cloud infrastructure, endpoints, and security controls, the challenge isn’t collecting information—it’s acting on it in time. Traditional detection pipelines, which rely on post-ingest analytics within SIEMs or data lakes, are proving too slow, too expensive, and too disconnected to meet the demands of modern security operations.
Most SIEM platforms detect threats hours after the relevant events have occurred, a direct result of their legacy architectures. Detection logic is often applied only after data has been normalized, indexed, and stored—an inherently delayed process built on monolithic infrastructure that cannot keep pace with real-time security demands.
- Francis Odum, Founder at Software Analyst Cybersecurity Research
This delay is not just operationally inefficient—it is strategically dangerous. According to CrowdStrike, the average adversary breakout time is 43 minutes, meaning attackers often move laterally within the environment faster than traditional systems can even process the telemetry. In this context, every minute of delay increases the defender’s disadvantage and narrows the response window.
Abstract is the only data pipeline platform with a built-in, streaming correlation engine designed specifically for security operations. While others offer basic filtering or static enrichment, Abstract performs real-time correlation, contextual enrichment, and detection in-stream—before the data reaches any destination. This enables organizations to identify threats while they are still unfolding, not after the fact.
With Shift-Left, detection becomes proactive rather than reactive. The outcome is not simply faster alerts—it is fundamentally earlier intervention.
This is why we were founded with the philosophy of Shift-Left.
At Abstract, Shift-Left means moving detection closer to the source—executing analytics in the data stream itself before the data ever reaches its final destination.
Picture your SOC seeing an account hijack attempt tied to a known malicious IP and blocking it before the logs even land in a traditional data lake. This architectural approach improves mean time to detection (MTTD) and mean time to response (MTTR) by eliminating delays introduced by legacy processing stages. Instead of waiting for data to land in storage and be normalized, enriched, and indexed, Abstract evaluates events as they move, enabling live decision-making.
True Streaming Correlation, Built-In from the Start
While some vendors offer “stream processing” as a bolt-on or rely on static enrichment lookups, Abstract’s streaming analytics engine was designed from the ground up to perform real-time correlation across diverse data sources. This includes:
Threat intelligence that is treated as a live, contextual input—not just static map files or offline reference lists.
Asset and identity data that is integrated directly into the pipeline, enabling identity correlation across fragmented log sources (e.g., endpoint, cloud, SaaS).
Detection rules that operate continuously in the stream, allowing for instant alerting and response logic before any data lands in a SIEM or data lake.
This is not enrichment as an afterthought. It is a unified model where context travels with the data, enabling richer detection outcomes with significantly lower latency.
Why It Matters
Traditional models wait until telemetry is stored and indexed before analysis begins. This introduces delays, raises costs, and often requires complex reprocessing workflows to update detection coverage. With Shift-Left, Abstract eliminates that overhead. Rules are evaluated at the moment of data transit, enriched with full context, and executed in real time.
The result is not just faster alerts. It’s better alerts, grounded in live asset and identity context, and delivered before threats have time to spread.
- Francis Odum, Founder at Software Analyst Cybersecurity Research
Shift-Left, Stay Ahead
By embedding intelligence directly into the data stream, Abstract provides security teams with earlier insight, broader coverage, and faster resolution—without the operational weight of legacy platforms. ‘Shift-Left’ isn’t a feature—it’s a foundational shift in how detection should work.
What Correlation Really Means in Security Analytics
Correlation is one of the most overused and least understood terms in cybersecurity. In traditional SIEM architectures, it often refers to matching static fields—such as an IP address or username—across datasets after ingestion. This approach is brittle, delayed, and limited in its ability to handle context, time, or relationships between events.
At Abstract, correlation is not a search—it is a real-time, resident analytical process that operates within the stream. Our platform performs joins across multiple, disparate log sources as data arrives. This allows us to construct meaningful relationships between events that may originate in entirely different systems: endpoint logs, cloud audit trails, identity providers, threat intelligence feeds, and more.
For example:
A login attempt from a suspicious IP address (CloudTrail)
Followed by a file download from a managed device (EDR)
Attributed to an identity that recently failed MFA (SSO)
In traditional pipelines, these events would be stored separately, normalized independently, and eventually queried through scheduled correlation jobs or manual investigation. By contrast, Abstract can correlate these events in-stream, applying detection logic across all of them within a bounded time window—often milliseconds from the moment the last event is received.
This includes:
Temporal joins, where rules evaluate conditions across time (e.g., “X must follow Y within 5 minutes”)
Cross-source identity resolution, tying together usernames, hostnames, device IDs, and session tokens across systems
Multi-condition pattern recognition, where composite rules are evaluated as logical chains, not isolated triggers
The result is a system capable of identifying behavior, not just events. This allows security teams to detect threats earlier, with greater accuracy, and without the cost or delay of retroactive queries.
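The mechanics are easiest to see in miniature. The sketch below is a deliberately simplified, generic illustration of windowed, in-stream correlation in Python rather than Abstract’s engine: it keeps a short per-identity window of events and fires when a login from a flagged IP, a file download, and a recent MFA failure all land inside five minutes.

from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
FLAGGED_IPS = {"203.0.113.7"}   # stand-in for a live threat-intelligence feed

state = defaultdict(deque)      # identity -> recent events, oldest first

def correlate(event):
    """event: {"ts": datetime, "identity": str, "type": str, ...}"""
    window = state[event["identity"]]
    window.append(event)
    # Expire anything that has aged out of the correlation window.
    while window and event["ts"] - window[0]["ts"] > WINDOW:
        window.popleft()

    types = {e["type"] for e in window}
    flagged_login = any(e["type"] == "login" and e.get("src_ip") in FLAGGED_IPS
                        for e in window)
    if flagged_login and "file_download" in types and "mfa_failure" in types:
        return {"alert": "possible account takeover",
                "identity": event["identity"], "events": list(window)}
    return None

# A tiny example stream: an MFA failure, then a flagged login, then a download.
now = datetime.utcnow()
stream = [
    {"ts": now, "identity": "alice", "type": "mfa_failure"},
    {"ts": now + timedelta(minutes=1), "identity": "alice",
     "type": "login", "src_ip": "203.0.113.7"},
    {"ts": now + timedelta(minutes=2), "identity": "alice",
     "type": "file_download"},
]
for evt in stream:
    finding = correlate(evt)
    if finding:
        print(finding["alert"], "for", finding["identity"])

A production engine layers on cross-source identity resolution, ordering guarantees, and durable state, but the shape of the logic, evaluating joins as events arrive rather than after they are stored, is the same.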
Correlation at Abstract is not a feature layered on top—it is a native capability of the streaming engine itself. It allows us to ask smarter questions, at the right time, using the full context of your environment.
Complementing Abstract’s real-time analytics architecture is the work of our ASTRO team—a dedicated group of detection engineers, threat researchers, and adversary simulation experts. The ASTRO team is responsible for delivering a continuously updated library of pre-built detection rules, numbering in the thousands, that are tuned for streaming execution across diverse log sources. In addition to rule content, the team produces Abstract’s proprietary threat intelligence feed, which is directly integrated into the correlation engine—not as static reference files, but as a dynamic, first-class data source. This allows organizations to deploy high-confidence, context-aware detections immediately, without the overhead of building custom content or sourcing external enrichment pipelines.
Offloading Detection Logic from the SIEM
One of the most immediate and measurable benefits of shifting analytics into the stream is the ability to offload detection logic from the SIEM entirely.
In traditional environments, SIEM platforms serve as both the data store and the detection engine. This forces organizations to ingest vast volumes of telemetry simply to apply detection rules—driving up licensing costs, straining performance, and locking teams into rigid vendor ecosystems.
Abstract breaks this dependency. By supporting stream-based detection, organizations can move their existing SIEM rules into the data pipeline itself. These rules are evaluated as data flows through the stream, enriched with context, and correlated across sources—all before reaching any destination.
As a result:
Only alerts and minimal supporting data are forwarded to the SIEM, dramatically reducing ingestion volume and associated cost.
The full-fidelity data remains available in LakeVilla for search, replay, and long-term retention—without needing to store everything in hot SIEM storage.
Security teams retain full control over detection logic and data handling, independent of the SIEM vendor’s roadmap or pricing model.
This architecture introduces strategic flexibility. Instead of being locked into a single SIEM platform—or paying to rehydrate and reprocess data during a migration—organizations can evolve their tooling over time. With Abstract at the center, the detection layer becomes portable, and decisions about storage, search, or response tooling can be made based on operational need, not architectural constraint.
The SIEM becomes just one possible destination, no longer the foundation.
Shift Left Detections and Get Abstracted!
Colby DeRodeff
Jul 22, 2025
Product
How Juul Labs Rebuilt Its Security Operations Using Abstract
As COO of Abstract and someone with years of experience in security operations, I know how challenging it is to modernize security in fast-moving environments. When the team at Juul Labs came to us, they weren’t just looking for visibility. They needed a practical, high-leverage way to simplify their stack and empower their team to act faster. Here’s how we helped make that happen.
Improving Juul Labs’ Security Posture
Juul Labs, a leader in consumer electronics manufacturing, was looking to modernize its security posture and IT infrastructure. Under the leadership of Pablo Quiros, VP & Global Head of Security and IT, they needed a solution with powerful security capabilities to support their growth, all while operating with a lean team.
“There had been multiple attempts to build visibility into our systems. What we inherited was outdated, overlapping, and broken logging infrastructure.” Pablo Quiros, CISO at Juul Labs
The Need for a Force Multiplier
Given the lean nature of his team and Juul’s ambitious growth plans, Quiros recognized the need for a solution that could provide powerful security capabilities without requiring a massive headcount or heavy operational lift.
THE CHALLENGE
Limited visibility into telemetry and attack surfaces
Difficulty scaling threat detection across a small but specialized team
Complex, legacy logging systems that were expensive and inefficient
Slow, resource-intensive processes for pipeline creation and data onboarding
Rigid tools from legacy vendors that didn’t adapt to Juul’s modern architecture
Juul Labs needed to consolidate its logging and detection systems, gain actionable insight across its infrastructure, and do so in a way that maximized agility and minimized cost.
Enter Abstract
After evaluating the market, Quiros and his team selected Abstract Security to support their modernization journey.
"The moment we saw a pipeline being built in Abstract with drag-and-drop simplicity, our logging engineer lit up. That kind of usability is rare in this space — and for a small, high-impact team like ours, it’s invaluable." Pablo Quiros
Juul Labs chose Abstract’s Pipelines and Analytics products to simplify their security data infrastructure and improve operations without replacing their existing SIEM, and saw results immediately. Just an hour after deployment, all of Juul’s systems and data were up and running, delivering value from day one. With the platform, the team can:
Quickly deploy data pipelines, decoupling data destinations from sources, enabling improved data quality, threat intel enrichment, and ease of migration
Cut infrastructure complexity and operational overhead
Integrate seamlessly with existing tools, including Splunk
Reduce onboarding time and training requirements
Proven Results and Return On Investment
Though still early in the deployment, Juul Labs is already seeing strong results:
Increased Operational Efficiency
“What previously took four days can now be done in a single day, freeing up engineering time for high-value tasks.”
Reduced Time to Value
“The Abstract platform was delivering insights within an hour of deployment.”
Improved Visibility & Detection
Juul now has a “full understanding of our attack surface — who, what, where, when, and why.”
Positive Early ROI
Juul expects infrastructure costs to drop by up to 75% over the next year as legacy systems are phased out.
Why Abstract Security
TIME
“Time is our most valuable resource. Abstract gives us time back — in deployment, in operations, in impact.”
USABILITY
“You don’t need a six-month onboarding. It just works.”
COST-TO-VALUE RATIO
“The infrastructure is cheaper to run, easier to maintain, and delivers real outcomes faster.”
Looking Towards the Future
Quiros sees Abstract Security as more than a platform — it's a strategic partner.
“We like that Abstract is young and responsive. We feel like we have a voice in shaping where the product goes.”
Juul Labs is also exploring Abstract’s analytics module as part of its broader push to adopt modern, adaptable technologies.
“We’re always looking for ways to extend our team’s capabilities without increasing headcount,” said Quiros. “The analytics capabilities of the platform will enhance our detection and analytics workflows and increase our team’s capacity.”
Using Abstract, Juul expects infrastructure costs to drop by up to 75% over the next year as legacy systems are phased out. Juul Labs is already seeing strong results in their operational efficiency, improved visibility and detection, reduced time to value, and return on investment (ROI).
What previously took four days can now be done in less than a day, freeing up resources for other high-value tasks. Additionally, Juul now has a full understanding of their attack surface—with clear answers to the “who, what, where, when, and why” behind security events.
“Implementing Abstract Security brought immediate clarity and efficiency to our security operations. Their platform provided immediate visibility into our security environment, making it far easier to detect anomalies and understand our telemetry in real time. What stood out most was the seamless deployment—it was up and running in under an hour, delivering value from day one. As a lean team, we value tools that amplify our effectiveness without adding complexity. Abstract Security helps us cut through the noise and focus on what matters. The intuitive UI, real-time search capabilities, and the ability to pivot quickly between data sources have significantly improved our detection and response capabilities.
This isn’t just another tool—it’s a force multiplier.
Abstract has helped us rethink how we approach security operations, allowing us to be proactive rather than reactive.”
Juul Labs is one of several modern enterprises rethinking their approach to security operations with Abstract. If you’re looking to simplify your data architecture, increase visibility, and move faster without adding complexity, we’d be happy to show you how.
Chris Camacho
Jul 9, 2025
Product
Automating macOS Incident Response: DFIR-as-Code in Action Against AppleProcessHub
AppleProcessHub and the Application of DFIR-as-Code in macOS Incident Response
Digital forensics has historically relied on manual collection, inspection, and interpretation—an approach ill-suited for modern operational demands. With increasingly distributed environments and rapid adversary movement, a more scalable, consistent methodology is needed. DFIR-as-Code proposes such a methodology. By encoding forensic and response workflows into reusable logic, organizations can transition from reactive, one-off analyses to systematic, verifiable, and expedited incident response.
A recent case involving the AppleProcessHub malware provides a practical context in which to examine the application of DFIR-as-Code within a macOS environment. This malware specimen, catalogued in public repositories and identified as both an infostealer and a trojan, offers a compelling use case for standardized forensic automation.
AppleProcessHub Overview
AppleProcessHub is a malicious Mach-O binary that, upon execution, deploys a Bash script. A deep dive into this malware is available from Kandji (https://www.kandji.io/blog/macos-appleprocesshub-stealer). The script exfiltrates a set of commonly targeted configuration and credential files, including:
Shell history (.bash_history, .zsh_history)
Git configuration in .gitconfig
SSH configurations and key files
macOS keychain databases in Login.keychain-db
The nature of these targets underscores a central concern in most modern intrusions: credential theft. Once exfiltrated, these credentials enable lateral movement, privilege escalation, or external access from attacker-controlled infrastructure. For organizations reliant on manual triage, identifying such theft is a race against time—with inconsistent results. By contrast, DFIR-as-Code enables immediate detection, categorization, and escalation of these indicators.
Instrumenting the Response: Codified Triage of AppleProcessHub Artifacts
Using DFIR-as-Code principles, a security team can define specific countermeasures that automatically process secondary data sources such as filesystem images and memory captures. In the case of AppleProcessHub, the following countermeasures are relevant:
1. Credential Artifact Detection with Yara
Yara is a powerful rule language used to describe file contents and attributes. While it is most often used to describe malware and their corresponding families, Yara can also be leveraged to detect credential files. Below is an example of a Yara rule to detect SSH private keys:
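(The rule shown here is illustrative; it simply matches the common PEM headers used by SSH and other private keys, and the strings section can be expanded to cover additional key formats.)

rule Credential_SSH_Private_Key
{
    meta:
        description = "Illustrative: flags files containing private key PEM headers"
    strings:
        $openssh = "-----BEGIN OPENSSH PRIVATE KEY-----"
        $rsa     = "-----BEGIN RSA PRIVATE KEY-----"
        $ec      = "-----BEGIN EC PRIVATE KEY-----"
        $generic = "-----BEGIN PRIVATE KEY-----"
    condition:
        any of them
}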
You can use this methodology to describe dozens of different types of credential files. If you find these important files on a compromised system, you should immediately audit and revoke the associated credentials to prevent further abuse. A few examples of credentials worth alerting on include SSH private keys, cloud provider credential files (such as AWS access keys), kubeconfigs, and API tokens.
Once codified, this logic can be systematically applied to forensic captures, ensuring that credential exposure is identified early in the investigative process. Countermeasures can then be tied to response actions such as initiating credential rotation or notifying appropriate identity governance systems.
In this example, the Abstract Security Platform enables alerting on SSH private keys found on the imaged system. This allows the system to immediately identify and escalate the discovery of credentials on a compromised asset.
Instrumenting basic alerting criteria around your secondary data sources and curated countermeasures enables foundational blocks upon which to expand and build.
Once deployed, immediate notification and escalation of the affected keys can be generated and actioned, as seen in our Abstract Security findings. This can be expanded to target any credentials which may be present in your environment.
Immediate detection and triage of potentially compromised credentials will help DFIR teams “shift left” and reduce mean-time-to-remediation, potentially saving them from additional abuse or exploitation.
2. Processing with Plaso
Plaso or similar tools enable automated extraction and categorization of MacOS keychain contents. Within a DFIR-as-Code pipeline, these parsers operate on every imaged MacOS endpoint, extracting credentials tagged as either macos:keychain:application or macos:keychain:internet.
Below is an example of a Nextcloud macos:keychain:application entry:
The goal is twofold: to identify what services may have been accessed with these credentials, and to quantify the risk surface that must be mitigated. Keychain records—containing application or web credentials—are often neglected in manual reviews but are particularly valuable to attackers. Automating their collection and review ensures that no sensitive data type is overlooked.
During an incident, your team must quickly determine what the compromised system had access to. This enables timely triage, containment, and remediation. DFIR-as-Code aims to minimize wasted time and maximize the speed and depth of analysis.
Below is an example rule that detects MacOS keychain entries not built into the system.
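The platform rule itself is shown as a screenshot in the original post. As a rough stand-in, here is a sketch of the same idea applied to Plaso output exported as JSON lines; the field values and baseline keywords are illustrative assumptions you would tune against a known-good image.

```python
# Minimal sketch: flag keychain entries that do not match a known-good baseline.
# Assumes Plaso output exported as JSON lines (e.g., psort.py -o json_line -w keychain.jsonl timeline.plaso).
# Field names, data_type values, and the baseline list are illustrative assumptions.
import json

# Keywords for entries commonly created by the OS or sanctioned software (assumed baseline).
BASELINE_KEYWORDS = {"com.apple.", "icloud", "airport"}

def non_builtin_keychain_entries(jsonl_path: str):
    """Yield keychain timeline entries whose message matches nothing in the baseline."""
    with open(jsonl_path, "r", encoding="utf-8") as fh:
        for line in fh:
            event = json.loads(line)
            if event.get("data_type") not in ("macos:keychain:application",
                                              "macos:keychain:internet"):
                continue
            message = event.get("message", "").lower()
            if not any(keyword in message for keyword in BASELINE_KEYWORDS):
                yield event  # candidate credential exposure to triage

if __name__ == "__main__":
    for entry in non_builtin_keychain_entries("keychain.jsonl"):
        print(f"[TRIAGE] keychain entry: {entry.get('message', '')[:120]}")
```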
This rule allows us to quickly triage credentials that may have been exposed during an incident and immediately begin containment and remediation. Below is an example of insights and findings generated by rapidly identifying the keychain scope of impact.
3. Persistence Mechanism Identification
In the MacOS ecosystem, the launchd subsystem is frequently co-opted by malware for persistence. While default services and agents are well-documented, malware often creates additional .plist files within user or system launch directories such as:
~/Library/LaunchAgents
/Library/LaunchAgents
/Library/LaunchDaemons
By deploying comparison rules against known baselines, security teams can isolate non-standard entries that may indicate malicious behavior. These entries can be automatically triaged and flagged for further analysis, reducing the time analysts spend identifying persistence mechanisms in compromised environments.
Below is an example rule that detects non-default launchd entries found on the imaged system:
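The rule itself appears as a screenshot in the original post. As a rough stand-in, the sketch below compares the launchd plists found on an imaged system against a baseline built from a clean reference image; the directory list and baseline filenames are illustrative assumptions.

```python
# Minimal sketch: compare launchd .plist files on an imaged system against a known-good baseline.
# The directory list and baseline contents are illustrative assumptions; build the baseline from your own gold image.
import os

LAUNCH_DIRS = [
    "Library/LaunchAgents",
    "Library/LaunchDaemons",
]

# Example benign entries (filenames assumed for illustration).
BASELINE_PLISTS = {
    "us.zoom.ZoomDaemon.plist",
    "org.virtualbox.startup.plist",
    "com.google.GoogleUpdater.wake.plist",
}

def non_default_launchd_entries(mount_point: str):
    """Yield launchd plists on the imaged system that are absent from the baseline."""
    for rel_dir in LAUNCH_DIRS:
        full_dir = os.path.join(mount_point, rel_dir)
        if not os.path.isdir(full_dir):
            continue
        for name in os.listdir(full_dir):
            if name.endswith(".plist") and name not in BASELINE_PLISTS:
                yield os.path.join(full_dir, name)

if __name__ == "__main__":
    for plist in non_default_launchd_entries("/mnt/forensic_image"):
        print(f"[TRIAGE] non-default launchd entry: {plist}")
```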
This DFIR-as-Code countermeasure is ideal for quickly triaging MacOS persistence mechanisms that utilize launchd. Some examples of benign entries, like Zoom, VirtualBox, and GoogleUpdater, can be seen in the figure below.
As with other valuable forensic findings, your team can generate insights that help engineers and analysts track and understand the full scope of the incident. You can quickly route tickets to the appropriate teams for immediate remediation. This approach reinforces the “shift left” methodology by delivering relevant information to analysts promptly, allowing them to make faster, more informed decisions with reduced risk.
Structured Application: When, Who, What, and How
To standardize the response to AppleProcessHub and other Mac malware intrusions, DFIR-as-Code breaks the incident response process down into the following questions:
When: Timestamp correlation and alert ingestion establish the timeline of the incident.
Who/What: The user identities and compute resources that have been impacted. This can include hostnames, usernames, API keys, non-human identities, IP addresses and more. Identifying the Who and What allows for secondary data acquisition via automation and human analysis to understand the scope of intrusion.
How: The operations that occurred in the intrusion.
Using DFIR-as-Code allows teams to detail each of these areas in a version-controlled and auditable manner, ensuring that forensic techniques are uniformly applied across incidents and time.
Reducing Variability and Human Error
A central concern in incident response is inconsistency between analysts. Two individuals reviewing the same forensic image are unlikely to produce identical findings unless provided with a standardized playbook. DFIR-as-Code mitigates this discrepancy by removing procedural ambiguity. Every analyst inherits the same operational tooling, the same decision logic, and the same forensic lens.
This methodology transforms DFIR into a science—not in abstraction, but in execution. For AppleProcessHub, and for future incidents involving similar tactics, this approach ensures that investigative depth is achieved without sacrificing speed or reproducibility.
Continuous Expansion Through Libraries of Countermeasures
The strength of DFIR-as-Code lies not only in its reproducibility but in its extensibility. Organizations may build libraries of countermeasures tailored to specific platforms or malware families. Over time, these collections evolve into institutional memory—codified, scalable, and easily audited.
For AppleProcessHub and similar MacOS threats, relevant countermeasure categories include:
Credential detection via YARA and parsing logic
Launchd anomaly detection
Keychain analysis and service attribution
Exfiltration pattern recognition in memory and disk artifacts
Generally this can take many forms:
SIEM and Sigma rules for secondary data sources (Plaso output)
Advanced analytics using deterministic logic or probabilistic machine learning and AI tooling
Yara rules for file and memory images
Python scripts for advanced use cases, and response procedures to quickly implement required actions
Data models and correlation rules
And more
Below are some examples of DFIR-as-Code countermeasures and response procedures which can be automated to speed up investigations, freeing engineering resources:
Detection Criteria | Countermeasure Type | Ruleset | Response Procedure
Any credentials or key files | Yara | Forensic filesystem image | Escalate for credential revocation. Start secondary case for affected credentials / lateral movement
Persistence via RunKeys and other suspicious entries | RegRipper | Windows Registry | Extract keys, collect related files, start sandboxing / triage
Exfiltrated data confirmed from Plaso output (compression and transfer tools used) | Sigma | Forensic filesystem image | Collect exfiltrated data for evidence. Start scanning exfiltrated data for credentials / impact radius. Escalate and engage Legal.
Malicious or suspicious persistence mechanism found (baseline deviation) | Yara | Forensic filesystem image | Collect related files, start sandboxing / triage. Impact radius evaluation on pervasiveness
Commonly abused binaries found masquerading as other binaries (putty.exe, etc.) | Yara | Forensic filesystem image | Collect related files, start file analysis. Start impact radius queries looking for tool usage and affected resources
Each countermeasure is independently testable, allowing organizations to progress through a well-defined lifecycle: from initial detection, to controlled blocking, and finally to estate-wide deployment.
Conclusion
Malware like AppleProcessHub is just one example of the threats targeting MacOS systems. DFIR-as-Code provides the necessary framework to bring parity between platform complexity and response capacity.
DFIR-as-Code gives your organization’s Security Operations team a powerful force multiplier. When handling complex incident response scenarios, your team can quickly evaluate the exposure of an incident and respond effectively. The result is reduced containment time and accelerated eradication efforts. This framework also enables rapid development of new use cases, which in turn drives further automation.
When your team understands what data an attacker exfiltrated during an incident, they can significantly reduce the time needed for incident disclosure. In some environments, teams must send out notifications within hours, not days or weeks. Automating these elements of the incident response process enables SecOps teams to respond faster and more effectively. Continuously evolving your detection and prevention controls helps close exploitable gaps, while DFIR teams keep expanding their libraries of detection and response capabilities.
By embracing DFIR-as-Code, your team actively detects and signals threats from slower, more complex data sources. With forensic images and memory dumps, you don’t just improve response times; you transform digital forensics from a reactive process into a proactive, adaptive capability. As your organization matures these practices, you build intelligent incident response pipelines that scale with complexity, reduce human error, and stay ahead of modern threats.
Abstract Security Threat Research Organization (ASTRO)
Jul 1, 2025
Security
Introducing Abstract LakeVilla: Pipeline-Powered, Analyst Ready, and Efficient Storage
Security teams are drowning in data management. While collecting the growing volumes of data from all sources is a struggle in itself, storing it is an even bigger problem to solve. And in today’s enterprise environment, data is everywhere; to the point that many organizations don’t know what to do with it. The burden of storing, managing, and making sense of it has drastically increased. And security teams are often forced to make a choice: overspend for real-time systems that house only slices of data or route the majority of it to low-cost storage that is practically inaccessible.
With LakeVilla, you don’t have to make that compromise.
When it comes to data, you shouldn’t have to compromise between cost and performance. LakeVilla is designed to give you a reliable, cost-effective way to store and access historical data without the high price tag or slow performance of traditional SIEM solutions. It makes cold storage actually work for security teams—no more painful rehydration and no more expensive re-ingestion.
Extract Security, Store Data
Roughly 10-15% of security telemetry drives detection and real-time analytics, which means you don’t need to flood your real-time systems with everything. The remaining 85-90% is still important for investigations, compliance, and traceability, and needs to be stored for longer durations. Despite this, most architectures send all of your data (including the data you don’t need) through the same high-cost pipelines, leaving you to pay unnecessary expenses and carry avoidable operational overhead.
One of the most important aspects of a proper data strategy is recognizing that not all data is equal in urgency or value. LakeVilla embodies this principle by giving organizations the capability to easily and clearly separate real-time and forensic data paths.
How? Relevant security data goes to the analytics engine. Everything else goes into LakeVilla’s cost-effective, searchable, and immediately available storage - without rehydration, reprocessing, or secondary tooling!
What is LakeVilla?
LakeVilla is Abstract’s cloud-native cold storage solution, purpose-built for security data and deeply integrated into the Abstract Security platform. Unlike traditional archive solutions that bury logs in inaccessible storage, LakeVilla keeps your data instantly searchable and seamlessly connected to your detection workflows.
It eliminates rehydration delays, avoids expensive re-ingestion, and removes extra hidden costs at retrieval. That means you can scale retention, meet compliance, and conduct deep investigations—all while maintaining performance, reducing storage costs, and simplifying your security stack.
One Platform: Unified and Modular Architecture
LakeVilla isn’t an external archive or add-on utility. It is a native component of the Abstract Security Platform. This integration eliminates the need for redundant ingestion paths, parallel systems, or additional operational burden. Data sources only need to be onboarded once and are immediately usable across the entire platform.
Here’s what you get, out of the box:
Fully Searchable Cold Storage Retain long-term telemetry in a low-cost storage tier without sacrificing accessibility—no rehydration, re-ingestion, or external query layers required.
Seamless Integration with Live Pipelines LakeVilla is embedded directly into Abstract’s data pipeline architecture, enabling instant access to historical data without operational handoffs or tooling sprawl.
Replay for Forensics and Detection Enhancement Historical data can be replayed through detection workflows on demand, supporting investigations, rule tuning, and retrospective threat identification.
Cost-Efficient Retention at Scale Store years of security data at a fraction of traditional SIEM or hot storage costs, enabling compliance and threat hunting without budget constraints.
By combining these capabilities into one platform, LakeVilla dramatically simplifies onboarding, eliminates maintenance overhead, and gives security teams a unified environment that works together by design, not by patchwork.
Why LakeVilla?
To recap, LakeVilla changes the game by giving you one flexible solution: keep your long-term data live and useful without the usual trade-offs of cost, speed, or complexity. Here’s how it’s different from traditional storage solutions:
1. Store More, Pay Less
Filter out noise at the source, aggregate repetitive events, and enrich telemetry with context before it ever hits your SIEM or cold storage. By shaping the data in motion, you dramatically reduce volume and eliminate unnecessary storage spend. Then, route that refined data directly into LakeVilla for long-term, low-cost retention that’s still instantly accessible when you need it. The result is smarter, more cost-effective storage without losing full visibility.
Cut noisy events at the source
Route only high-touch data to SIEM or other analytic platforms
Route long-term storage data directly to LakeVilla for cost-efficient cold storage
2. Searchable Cold Storage (No Rehydration Required)
Traditional cold storage solutions create complexity and a long checklist of actions to complete before you can actually leverage the data you’re storing.
You archive logs to save on cost, but accessing them later requires expensive rehydration and, many times, manual retrieval workflows. Many storage solutions offer cheap storage costs but charge extra for querying and retrieving stored data. With LakeVilla, data is shaped and normalized before storage, making it ready for fast retrieval and response without any rehydration or hidden costs.
Cold logs are query-ready instantly
No rehydration process
No hidden API fees or retrieval penalties
This means your data remains useful and accessible for:
Incident response that needs to look back months, not just minutes
Audit trails for compliance and reporting
Threat investigations that unfold over longer timelines
No more waiting. No more “fetch-and-hope” workflows.
3. Adapts in Real-Time
LakeVilla supports seamless routing of enriched, filtered data into cloud object storage solutions like AWS S3, Azure Blob, and Google Cloud Storage. With a simple drag-and-drop interface, you can route data based on content, severity, or compliance needs - no manual rewrites or complex scripts required. It’s flexible routing that keeps up with your environment as it evolves and helps you avoid vendor lock-ins.
4. Built for Security, Not Just Storage
Leveraging the Abstract Security Platform’s pipeline module, LakeVilla prepares data before it even reaches storage. This means data is aggregated, normalized, schema-aligned, threat-enriched, and usable in-stream; ready for faster access the moment you need it.
Pre-storage processing: Aggregates, normalizes, and enriches data before it hits cold storage.
Schema alignment: Ensures data is structured and query-ready before it is stored.
No vendor lock-in: Keeps your data portable across storage and analytics destinations.
Whether you're responding to an incident, fulfilling a compliance request, or conducting a deep investigation, your data is always ready.
Why have a Lake House when you can have a Lake Villa!
LakeVilla isn’t just a cold storage solution, it’s a smarter way to retain and use your security data. From filtering out noise at the source to storing at scale, LakeVilla ensures your security data remains usable, searchable, and under your control. No rehydration, no hidden fees, no lock-in.
Whether you’re optimizing SIEM costs or future-proofing your detection and compliance strategy, LakeVilla gives you more than storage. It gives you leverage.
Store smarter. Search faster. Stay in control.
Want to see this in action? Contact us, or sign up for our monthly newsletter for the latest information regarding Abstract’s solutions.
Aqsa Taylor
Jun 23, 2025
Product
Uncovering Compromised Git Admins: How to Detect actors like theCom
While our technology estates have evolved beyond workstations, laptops, servers, and networks to include numerous software services, many organizations’ detection and analysis capabilities are still focused on the malware caught by endpoint detection and response tools and the exploits detected by network Intrusion Detection Systems and next-gen Web Application Firewalls.
Meanwhile, the loosely organized sets of actors associated with theCom (aka Muddled Libra, Scattered Spider, and more) have cemented themselves as the opportunistic threat actors of the day. Known for leveraging social engineering and identity-focused intrusions, their operations have targeted critical infrastructure, telecom, and software supply chains. The actors associated with theCom have truly democratized the infamous “I hunt sys admins” approach. They often gain initial access by compromising IT help desk staff and administrators.
The actors frequently target Version Control Systems (VCS) such as GitLab and GitHub. These platforms enable development teams to store, manage, and deploy source code and infrastructure configurations. Development teams often mistakenly commit credentials for other systems to their VCS repositories, which enables the actors to move laterally within the technology estate.
This post dives into the tactics and techniques the actors use when compromising version control systems such as Git, how to detect and mitigate these intrusions, and how Abstract Security helps.
Overview of Intrusion
These intrusions can be roughly categorized into four main phases: Initial Access, where the actor uses carefully crafted social engineering techniques to gain access to a privileged system; Establishing Persistence, to enable the actors to maintain access to those systems; Further Reconnaissance and Lateral Movement, where the actors gain access to other parts of the technology estate; and Data Exfiltration, when the actors exfiltrate or encrypt the organization’s data for ransom.
We’ll break the intrusions down below while describing detection and protection opportunities to help mitigate.
Initial Access
These intrusions often begin through the compromise of a laptop or workstation. The actors associated with theCom have professionalized this path and often target initial access via IT help desks and known administrators identified using publicly available data from platforms such as LinkedIn.
Establishing Persistence
After gaining access to a GitLab instance or GitHub user account, the actors often create a new personal access token (PAT). This personal access token will allow the actors to have persistent access to the Git instance, or GitHub organization. They may also add users or outside collaborators to the GitLab instance or GitHub organization.
Detection
To identify this activity in GitHub Audit logs, look for the following actions:
`personal_access_token.access_granted`
`org.add_member`
`org.add_outside_collaborator`
Detecting GitHub actions of `personal_access_token.request_created` can provide early identification of these requests, though this action is more prone to detecting True Positive Benign activity.
Actors associated with theCom have also been observed adding a member or outside collaborator to your GitHub organization. To detect this activity, look for events with `org.add_member` or `org.add_outside_collaborator`.
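As a rough illustration of how this lookup might be codified outside the platform, the sketch below filters exported GitHub audit log events for the actions discussed above. The export format and field names vary by export method, so treat them as assumptions.

```python
# Minimal sketch: flag persistence-related actions in exported GitHub audit log events.
# Assumes an export of audit log entries as a JSON array of objects; field names may vary by export method.
import json

SUSPICIOUS_ACTIONS = {
    "personal_access_token.access_granted",
    "personal_access_token.request_created",   # earlier but lower-fidelity signal
    "org.add_member",
    "org.add_outside_collaborator",
}

def suspicious_github_events(path: str):
    with open(path, "r", encoding="utf-8") as fh:
        for event in json.load(fh):
            if event.get("action") in SUSPICIOUS_ACTIONS:
                yield event

if __name__ == "__main__":
    for evt in suspicious_github_events("github_audit_export.json"):
        print(f"[ALERT] {evt.get('action')} by {evt.get('actor')}")
```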
Entries in the GitHub Audit Log with the action `personal_access_token.auto_approve_grant_requests_disabled` are a very high-fidelity alert signal that occurs when this configuration option is changed. This activity almost never occurs within an organization and can easily be followed up on with your GitHub administrators.
An example of this activity can be seen in the image below, with identification information obscured.
There are additional GitHub org activities that are suspicious, rarely occur, and provide high fidelity signals to alert upon. Abstract Security includes out of the box detection rules for this activity and more, including the following GitHub configuration changes which can be useful to detect suspicious activity:
Disabling MFA for users – action equals org.disable_two_factor_requirement
Adding or inviting admins – action equals any of business.add_admin or business.invite_admin
Adding or approving access to OAuth Applications – action equals any of oauth_application.create or org.oauth_app_access_approved
Modifying Single Sign-On requirements to access your organization’s repositories – action equals any of sso_redirect.disable or sso_redirect.enable
The ASTRO detection rule trigger criteria for org.disable_two_factor_requirement is seen below.
Defense
1. Enable admin approval for fine-grained personal access token creation. To protect your organization from unauthorized PATs, enable admin approval of fine-grained personal access tokens. Here’s how it looks in the GitHub administrative console.
2. Restrict access via classic personal access tokens. While the option above requires administrator approval for fine-grained personal access tokens, classic PATs are either completely allowed or completely restricted. Once you allow access via classic personal access tokens, you will no longer have visibility into users creating those tokens and associating them with your organization.
These controls provide a useful guardrail against static tokens created by less-privileged developers. The problem is that theCom actors are targeting the GitHub administrators who have permission to change these controls. Luckily, GitHub requires multi-factor authentication to make these configuration changes.
Further Reconnaissance and Lateral Movement
In the past, actors associated with theCom have connected trial accounts of GitGuardian to git repositories. While GitGuardian is a great tool for security teams to detect leaked credentials in code commits, in the actors’ hands it helps them find additional credentials and move laterally into additional technology infrastructure such as AWS accounts, GCP projects, Azure subscriptions, VSphere and OpenShift infrastructure, other SaaS platforms, and sometimes even SSH keys. This behavior isn’t limited to GitGuardian; the actors could use other tools such as trufflehog, gitleaks, shhgit, and more.
Detection
You can detect this and other mass git repository collection activity by alerting when a threshold of unique repositories is accessed within a reasonable time frame. The detection rule below looks for 10 distinct repositories upon which git.clone or git.fetch has been performed within 10 minutes.
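The platform rule is shown as a screenshot in the original post; as a rough stand-in, here is a sketch of the same threshold logic over parsed audit events. The event field names and timestamp format are illustrative assumptions.

```python
# Minimal sketch: alert when one actor touches >= 10 distinct repositories via git.clone/git.fetch
# within a 10-minute sliding window. Event field names are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 10
GIT_ACTIONS = {"git.clone", "git.fetch"}

def mass_collection_alerts(events):
    """events: iterable of dicts with 'actor', 'action', 'repo', and ISO-8601 'timestamp' keys."""
    per_actor = defaultdict(list)  # actor -> list of (time, repo)
    for evt in events:
        if evt["action"] not in GIT_ACTIONS:
            continue
        ts = datetime.fromisoformat(evt["timestamp"])
        history = per_actor[evt["actor"]]
        history.append((ts, evt["repo"]))
        # keep only events inside the sliding window ending at this event
        history[:] = [(t, r) for t, r in history if ts - t <= WINDOW]
        distinct_repos = {r for _, r in history}
        if len(distinct_repos) >= THRESHOLD:
            yield evt["actor"], distinct_repos

# Example: for actor, repos in mass_collection_alerts(parsed_audit_events): ...
```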
Intrusions into SaaS, PaaS, and IaaS technologies are often hard to accurately detect because nearly all of the actions within these platforms are resident to the platform. This is a similar challenge to the Living off the Land (ref: https://www.youtube.com/watch?v=j-r6UonEkUw ) techniques unveiled by Matthew Graeber and Christopher Campbell at DerbyCon 3 in 2013. When adversaries are Living off the Clouds the activity blends in with legitimate administrator and developer activity.
To increase your true positive rates you can create a table of known login IP addresses from your Identity provider, and whether they have been Multi-Factor Authenticated or not. If the activity is originating from the known MFA’d source addresses, then the activity can be treated with a lower severity.
Perhaps your organization does legitimately use GitGuardian or other scanning tools. You can similarly opt to include known infrastructure source addresses to further improve the true positive rates of Critical detections.
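One way to codify that context is sketched below: keep lookups of MFA’d login addresses from your identity provider and of sanctioned scanner infrastructure, and lower the severity of findings that originate from them. The lookup contents and severity labels are illustrative assumptions.

```python
# Minimal sketch: downgrade severity when activity originates from already-trusted source addresses.
# The lookup sets would be built from identity provider logs and asset inventory (assumed inputs).
KNOWN_MFA_LOGIN_IPS = {"203.0.113.10", "203.0.113.11"}   # MFA'd logins observed in IdP logs
KNOWN_SCANNER_IPS   = {"198.51.100.7"}                   # sanctioned GitGuardian/trufflehog hosts

def adjusted_severity(source_address: str, base_severity: str = "critical") -> str:
    if source_address in KNOWN_SCANNER_IPS:
        return "informational"   # expected tooling
    if source_address in KNOWN_MFA_LOGIN_IPS:
        return "medium"          # likely the legitimate user; still worth review
    return base_severity         # unknown source: keep the critical finding

# Example: adjusted_severity("198.51.100.7") -> "informational"
```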
Exfiltration
There are a few actions on objectives the actors can take to exfiltrate repositories from compromised organizations. The actors can exfiltrate repositories via git.clone actions manually to their local systems.
Some actors associated with theCom have also been observed manually downloading .zip archives of repositories.
This activity is odd by itself, as most developers prefer to use git clone versus a web UI. Manually downloading multiple .zip files is almost always a sign of suspicious activity, whether it is performed by external actors or a malicious insider.
To increase the true positive rate of external actors performing this activity, you can correlate the activity against known MFA’d IP-addresses from your Identity Provider logs.
Pulling the timeline together
While the above activities and actions can be strong indicators of suspicious, malicious, or compromised GitHub user accounts within your organization, analysts still need to pull the timeline together to tell the story of what happened.
What and Who? – collect the suspicious source_addresses paired with the GitHub user_names and user_ids associated with the activity. Expand and pivot from those users to other users associated with the source_addresses, as well as other source_addresses associated with the users.
When? – to determine your window of compromise, analyze the additional activity from these IPs and users. Typically there will be activity that is much more benign to help determine when the compromise began.
You’ll often find that a compromised laptop or workstation led to this activity, and the actors began interacting with a long lived login session to your version control system software.
Conclusion
Our technology estates are now composed of a wide range of systems and services, including cloud, SaaS, self-hosted services, and traditional computers and networks. This expanded footprint gives threat actors a larger realm to compromise and operate in. Our detection and analysis methods need to expand to collect telemetry from these systems, and our skills need to grow to analyze that telemetry.
Secure with Abstract
Abstract Security provides data integrations to collect GitLab and GitHub audit logs, paired with filtering and aggregation functions to reduce unnecessary data without compromising your ability to detect, investigate, and respond to intrusions. ASTRO’s Git detection rules round this out, detecting suspicious and malicious activity from insider threats and external actors.
Applicable ASTRO detection rules
ASTRO provides over 50 GitHub detection rules as out-of-the-box content for customers to detect suspicious GitHub activity. This content can be tuned using contextual information and additional data models to increase true positive rates. The rules most relevant to this post cover the personal access token, organization membership, OAuth application, and SSO configuration activity described above.
Abstract Security Threat Research Organization (ASTRO)
Jun 11, 2025
Security
DFIR-as-Code: Scaled and Repeatable Incident Response
Digital Forensics and Incident Response (DFIR) is at the forefront of cybersecurity. DFIR is where the edge of human and technological defensive capability meets the edge of human and technological offensive operations.
This continuous cat and mouse game has been ongoing for decades. It’s shaped by curiosity, fueled by competition, or sometimes just for the lulz. Constantly dealing with an evolving adversary requires timely, accurate, and consistent reaction and response. The output from DFIR can often inform financial, litigation, or legal actions, where the results are analyzed, reviewed, and critiqued in depth.
What does this look like without DFIR-as-Code?
In many organizations DFIR feels more like art than science. Analysts scramble to collect logs, dig into artifacts, and document procedures under pressure, with no guarantees of repeatability or accuracy.
Manually processing and combing through data can be a good first step. Sophisticated adversaries, however, will take advantage of slow response time and any lack of capabilities. Paving well-travelled paths within your DFIR program is essential to shortening the OODA loop and evolving faster than the threat actors targeting your organization.
Ad-hoc approaches cause problems:
Inconsistent response across incidents
Slower investigations due to manual data collection
Risk of error in high-stakes environments
Missed evidence from unprocessed artifacts
Hard-to-audit processes for legal or regulatory review
When an incident hits, this can be catastrophic. Time matters and slow, manual, ad-hoc procedures result in lost evidence, prolonged breaches, and missed opportunities.
Enter DFIR-as-Code: From Manual to Automated
Just as DevOps revolutionized infrastructure through “Infrastructure-as-Code,” DFIR-as-Code extends the gains brought by Detection-Engineering, and brings structure, automation, and repeatability to the forensics and response lifecycle.
Maturing a DFIR program requires building up libraries of use cases and response procedures, much like continuous monitoring programs and mature detection-engineering programs. This allows advanced organizations to adopt a very similar approach to their investigation and response processes: DFIR-as-Code.
DFIR-as-Code delivers:
Automation of repeatable forensic and response actions
Consistency across incidents, analysts, and teams
Auditable workflows for regulatory compliance
Scalable processes that don’t crumble under pressure
Knowledge transfer across teams via codified playbooks
With DFIR-as-Code, humans stop reinventing the wheel for each incident. They focus on solving new problems, not rerunning the same play manually.
What does DFIR-as-Code look like?
After an Initial Stimulus kicks off the process, the workflow for DFIR-as-Code repeats through three steps:
Collection
Triage
Contextualization
These 3 steps repeatedly feed the Incident Response process.
DFIR-as-Code workflow
Collection
Collection is the process of acquiring and processing the artifacts and their associated metadata to produce timeline entries. Applied Security Data Strategy: A Leader’s Guide discusses how collection programs and data ingestion form the foundation of a mature data strategy.
Collection expands upon continuous data ingestion to include point-in-time acquisition and processing of artifacts, such as the Master File Table from a host, the metadata of cloud resources, or the contents of etcd from a container.
The data gathered in Collection is often processed by tools such as FTK Imager, Autopsy, Plaso, and Log2Timeline, among others.
By maturing the collection portion of your DFIR lifecycle, you can ingest additional data in the same analytics platform as the rest of your data sources, providing a much more holistic view of the incidents and events taking place within your environment. Below is an example of the Abstract platform’s Plaso Log2timeline dashboard for analyzing forensic image output.
Triage
Triage takes the Collected artifacts and timelines and applies signaling mechanisms to them. The signaling mechanisms can take the form of deterministic detection rules such as Sigma, Yara, Suricata rules and programmatic processors such as RegRipper, or inference mechanisms such as XGBoost based classifiers or scoring systems.
Although implementing this logic can be challenging, it significantly improves the signal-to-noise ratio by correlating multiple data sources to confirm or refute observed activity.
Storing common Triage logic for different artifacts speeds response by confirming malicious activity, highlighting the most effective artifacts, and providing results in a familiar format that is easy to consume by the analyst.
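As a toy illustration of that correlation step (not platform code), combining independent countermeasure hits on the same host into a single triage verdict might look like the sketch below; the signal names, weights, and threshold are assumptions for illustration only.

```python
# Toy illustration: combine independent countermeasure hits per host into one triage verdict.
# Signal names, weights, and the confirmation threshold are illustrative assumptions.
from collections import defaultdict

WEIGHTS = {
    "yara:credential_file": 40,
    "sigma:exfil_tooling": 35,
    "launchd:non_default_entry": 25,
}

def triage_verdicts(findings):
    """findings: iterable of (host, signal_name) tuples produced by Collection/Triage countermeasures."""
    scores = defaultdict(int)
    for host, signal in findings:
        scores[host] += WEIGHTS.get(signal, 10)
    for host, score in scores.items():
        verdict = "confirmed-malicious" if score >= 60 else "needs-analyst-review"
        yield host, score, verdict

# Example: two corroborating signals on one host push it past the confirmation threshold.
hits = [("macbook-042", "yara:credential_file"), ("macbook-042", "launchd:non_default_entry")]
print(list(triage_verdicts(hits)))
```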
Contextualization
Contextualization enriches the artifact summaries from Collection and Triage and correlates separate data sources to provide a scoped, 360-degree view of the evidence.
Enriching data with Context can:
Highlight known infrastructure and/or user IPs from MFA logins, allowing those to be filtered out of an investigation if appropriate
Match known user handles with their user identity
Understand if attack traffic is being sent to vulnerable hosts
Understand how prevalent an executable is within your environment
Context allows your team to make higher-confidence assertions about what has or hasn’t occurred.
Next Steps
This entry outlines the DFIR-as-Code concept. This framework can be implemented and adopted to jump-start or rapidly mature DFIR programs within existing organizations, providing a foundation upon which to build.
Highly regulated environments and organizations can create well-defined approval processes that govern how they collect data while maintaining compliance and oversight. By confirming that the requested data is both necessary and permitted, teams can run faster, better-governed Collection and operate successfully in heavily regulated environments.
DFIR-as-Code empowers teams to automate manual processes, enabling them to scale human effort and execute more. It ensures that artifacts and evidence are processed to at least a base level of confidence, allowing humans to decide on additional artifact gathering and processing as necessary.
Join us in the next entry to dive into applying this concept.
Abstract Security Threat Research Organization (ASTRO)
May 22, 2025
Security
Abstract Security Named in SACR’s 2025 SDPP Market Guide: The Rise of Security Data Pipelines
This new report delves into one of the most critical evolutions in cybersecurity today: the rise of Security Data Pipeline Platforms (SDPPs) as the foundation for a modern, AI-ready Security Operations Center (SOC). In its analysis, SACR identifies Abstract Security as a standout player in the space, acknowledging our role in modernizing SOC infrastructure, our commitment to proactive and efficient security operations, and recognizing our integrated approach to data management and threat analysis.
Here’s what the SACR report had to say about Abstract:
“Overall, we were impressed with Abstract Security’s capabilities as a SOC pipeline and threat detection platform that goes beyond SDPP functionality. Abstract is positioning itself beyond traditional data pipelines by integrating advanced threat detection directly into the data streaming process.
Its streaming analytics include real-time correlation, behavioral anomaly detection, and threat intelligence matching, enabling contextual intelligence at the point of ingestion. By analyzing and acting on data as it flows, Abstract supports a more proactive detection posture, aligned with the needs of modern SOCs facing alert fatigue, data sprawl, and cloud-native complexity.”
-Francis Odum, Founder at Software Analyst Cybersecurity Research
Why This Matters: Overcoming the Limitations of Legacy SIEMs
The traditional SIEM model is facing significant challenges. It’s breaking under the sheer weight and variety of security telemetry generated by modern environments, leading to unsustainable costs and operational complexities. Outdated pricing structures based on ingestion volume penalize organizations for increased visibility, forcing difficult trade-offs between coverage and budget.
That’s why the Abstract Security Platform is redefining how security data is ingested, enriched, and analyzed in real-time. By providing security teams with the flexibility to stream, transform, and act on data before it reaches downstream analysis tools, we empower them to regain control. Our intelligent pipelines enable efficient data reduction, enrichment with critical context, and real-time analysis, ensuring that only high-fidelity, relevant data is forwarded for further processing and long-term storage.
How Abstract Security is Leading This Shift
The SDPP market is rapidly evolving, and Abstract is leading this shift, providing a platform that embodies these key characteristics:
Integrated Analytics and Detection: We go beyond basic pipeline functionality by embedding real-time streaming analytics directly within the data flow. This enables immediate threat detection and correlation at the point of ingestion, reducing alert fatigue and accelerating response times.
AI-Powered Intelligence: Our platform leverages AI for data enrichment, anomaly detection, and provides AI-generated explanations of analytical findings, making complex data more understandable and actionable.
Intuitive, No-Code Interface: We offer a user-friendly interface for building and managing pipelines and conducting analysis. This empowers analysts to take control of their data without requiring deep technical expertise.
Flexible Data Handling: We support robust data reduction and filtering capabilities, ensuring cost optimization and reduced noise.
Enabling SIEM Evolution: Abstract isn’t just an augmentation layer—our platform is already being adopted by organizations as a full SIEM replacement, or as a crucial bridge during SIEM migrations, demonstrating our ability to handle both data ingestion and advanced analytics.
Learn More
We’re proud to partner with forward-thinking SOC leaders who are ready to make data actionable. I encourage you to check out SACR’s full report, as it provides a very deep analysis of the emerging SDPP market.
Accelerate threat detection without reengineering your entire stack
Chris Camacho
Apr 28, 2025
News
The Invisible Enemy: Unmasking Microsoft 365’s Logging Blind Spots
Introduction
Security teams depend on logs to tell them what happened, when, and how. But what happens when the logs are incomplete—or worse, misleading?
Microsoft environments are a common target for attackers and many organizations rely on their services. Microsoft’s activity logs are utilized to gain insight into malicious activity to assess scale and determine next steps. Yet crucial information is often invisible, making these threats more difficult to detect and triage.
In this blog, we’ll dissect the limitations of Microsoft’s activity logs, walk through real-world simulations using the msInvader tool, and highlight what gets seen—and what silently slips through the cracks.
Overview of MS Activity Logs
Microsoft’s Office 365 (M365) and Graph APIs are the backbone of log-based detection for the Microsoft suite.
M365 management API audit logs are divided into the following classifications:
Audit.AzureActiveDirectory
Audit.Exchange
Audit.SharePoint
Audit.General
DLP.All
Digging a little deeper, some unsettling realities emerge. Microsoft’s documentation (ref: https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-reference#working-with-the-office-365-management-activity-api) describes the following issues:
Content blobs can take up to 12 hours to appear.
Events in later blobs may predate those in earlier ones.
Logs often miss critical events—especially reconnaissance and failed attack attempts.
Some events may be available from both sources (for example, authentication or sign-in logs); however, each source has a different format.
Microsoft 365 events on the left have key differences from the Graph API entry on the right.
Two fields for source IP address: ActorIpAddress vs. ipAddress
The timestamps are formatted differently, and the timezone is missing in Microsoft 365 events.
The Microsoft Audit logs include additional request information, such as the User Agent.
Microsoft Audit logs include the Operation/action of the event; Graph API logs don’t include this.
GeoIP information about the source client is included in Graph API logs.
Meet msInvader: A Reality Check for Defenders
To test the boundaries of Microsoft’s visibility, we ran a series of attack simulations using msInvader (ref: https://github.com/mvelazc0/msInvader), a red team tool packed with over 20 post-auth techniques.
Using common endpoints like Graph API, Exchange Web Services (EWS), and the REST API, msInvader can:
Enumerate users, groups, roles, and apps
Read emails, reset passwords, or forward inboxes
Attempt privilege escalation via mailbox or folder delegation
Let’s break down what we saw—and what we didn’t.
Enumeration: The Silent Operator
Technique: enumerate_users
API Call: GET /v1.0/users
Result: 24 users successfully enumerated
Logged? ❌ No trace in either Graph or O365 logs
Reconnaissance is a common first step to gather information about the environment, and msInvader has multiple techniques for enumeration. We ran the ‘enumerate_users’ technique, which uses the Graph API endpoint shown above.
We can see that msInvader was able to successfully pull a list of users from the environment. Both Graph and Office 365 logs only provide authentication logs.
This raises a few questions: Why are there ‘UserLoginFailed’ events and a ‘UserLoggedIn’ event? Why are there two ‘Entra ID Sign-In’ events? Where is the log for the user enumeration query? Let’s take a step back and do a quick overview of the authentication process.
msInvader has the option to authenticate using MFA by using the ‘device_code’ authentication type in its configuration. This allows the attacker to log in via a browser with a provided code.
Login proceeds normally from here and execution of msInvader automatically continues.
This means the authentication process was interrupted to add an additional confirmation step for the app. This is a little misleading since the ‘Operation’ for the event is ‘UserLoginFailed’ which suggests the login was unsuccessful. We see in the following event the login was eventually successful.
Why are there two ‘Entra ID Sign-In’ events?
Similar to what was mentioned above, the initial event has the same error code of ‘50199’ but provides a ‘failureReason’ of ‘For security reasons, user confirmation is required for this request. Please repeat the request allowing user interaction.’
Where is the log for the user enumeration query?
None of the enumeration techniques (users, groups, applications, service principals, and directory roles) produce any events from either source.
After testing a few techniques, I started receiving 429 status code responses for too many requests. There were no logs of this either, unfortunately, as this information could be useful to help identify suspicious activity.
Failed Attempts = Invisible Threats
You’d expect failed attacks to light up your detection and analysis tools, right? Not always.
msInvader has the technique ‘add_mailbox_delegation’ which can grant full access to another user’s mailbox if the user has the correct permissions. Attempting this action with a user that does not have the required permissions produces a 403 status code (Forbidden) from the API.
No log of this event appears from either source. This appears to be common with other techniques where only the successful attempts make it through to a log.
Technique: read_email, change_user_password
Status: 403 Forbidden
Logged? ❌ Still nothing
A relatively simple action to detect would be a new inbox forwarding rule added to a user’s mailbox. msInvader has a technique for this.
The Abstract Security platform has content packs containing analytics that can detect this, and other activity, in Microsoft logs.
More complicated analytics can be created that use conditions based on parameters in the requests to Microsoft to make detections more specific and help reduce false positives. For instance, when a mailbox permission is set, the access rights and mailbox folder need to be provided. It’s possible to create analytics to create findings on specific events like setting a user to be ‘Owner’ of another user’s ‘Inbox’ folder.
This produces the finding:
Taking things a step further, we can also add conditions for the grantee receiving the permission. Below is an example finding that detects a user’s ‘Inbox’ folder granting ‘Owner’ access to the ‘Anonymous’ user, effectively allowing anyone to act as an owner of the folder without needing credentials:
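The finding itself appears as a screenshot in the original post. As a rough stand-in for how the same condition might be codified outside the platform, here is a sketch over unified audit log records; the operation and parameter field names approximate the M365 schema and may differ in your tenant.

```python
# Minimal sketch: flag mailbox-folder permission grants of 'Owner' on an Inbox to the 'Anonymous' user.
# Operation and field names approximate the M365 unified audit log schema; adjust to your tenant's data.
import json

def risky_folder_grants(audit_records):
    """audit_records: iterable of parsed unified-audit-log entries (dicts)."""
    for rec in audit_records:
        if "FolderPermission" not in rec.get("Operation", ""):
            continue
        params = {p.get("Name"): p.get("Value") for p in rec.get("Parameters", [])}
        if ("Owner" in str(params.get("AccessRights", ""))
                and "Inbox" in str(params.get("Identity", ""))
                and "Anonymous" in str(params.get("User", ""))):
            yield rec

if __name__ == "__main__":
    with open("m365_audit.json", encoding="utf-8") as fh:
        for finding in risky_folder_grants(json.load(fh)):
            print(f"[FINDING] {finding.get('Operation')} by {finding.get('UserId')}")
```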
Detection Is Only As Good As Your Data
Here's the hard truth: many early-stage attack techniques—especially recon and failed modifications—simply don’t generate logs in Microsoft’s default configurations.
Want more visibility? Microsoft offers enhanced logging for Graph API, but it comes with extra costs and some limitations. (ref: https://learn.microsoft.com/en-us/graph/microsoft-graph-activity-logs-overview)
Until then, defenders must focus on:
Detecting what is logged (e.g., inbox rules, sign-ins, modified permissions)
Building detections based on attack chains, not just atomic events
Understanding the environment to uncover misconfigurations
Closing Thoughts
Attackers are getting stealthier and Microsoft’s default logging, while robust in many ways, still has significant blind spots—especially when it comes to enumeration, failed attacks, and misclassified activity.
By understanding what’s logged, what’s not, and how attackers operate within those blind spots, defenders can build more resilient detection strategies.
Stay Curious. Stay Defensive. Want more research like this? Follow Abstract Security’s CANVAS blog and subscribe to our monthly newsletter.
Abstract Security Threat Research Organization (ASTRO)
Apr 22, 2025
SIEM
Strengthening Integrations in the Abstract Security Platform
Going into 2025, Abstract Security continues to provide best-in-class technology to our customers and build upon our award-winning platform to eliminate the unneeded complexity of data operations, all while enhancing threat insights. Here’s what you need to know about the latest integrations our teams have made available!
Integration Station, Your Destination for Seamless Connections
We recognize that no single cloud service provider or data-logging platform is enough for every organization. With that in mind, our teams at Abstract are always at work enhancing our growing list of integrations!
Abstract Security's platform enables enrichment, filtering, and aggregation, as well as analysis and detection, and now supports the following data sources:
CloudFlare via Webhook and Cloud Storage buckets
Jumpcloud Events
AWS WAF Logs
GitLab on-prem
OnFIDO via Syslog
Squid Proxy via Syslog
Cisco ASA via Syslog
Auth0 via WebHook
Travel to More Data Destinations!
With growing compliance and data regulatory needs, it’s essential to unify security data operations within a central platform. Abstract has added support for the following data destinations:
Additionally, we’ve enhanced the following data sources already available within the Abstract Security Platform!
Microsoft Graph API
SentinelOne API Integration
Microsoft 365
Intel471 Intelligence API
Windows Event Logs
Proofpoint
Additional Pipelines Content
We’ve enhanced the following data integrations with additional pipelines content to filter, aggregate, transform, and enrich. CloudFlare users can reduce data by at least 40% (with just OOTB rules), and we’ve added new networking pipeline rules to aggregate and filter data. Microsoft Azure and Entra ID enrichment users can now easily map application names to AppID GUIDs.
Better Threat Insights with ASTRO (Abstract Security Threat and Research Org)
Here’s the latest enhancements and product updates we’re making to our platform via ASTRO:
Phish Detection: Improved security via updated lure keywords.
SentinelOne: New detection rules have been added to help aggregate and automate detection alerts.
GitHub: Added detection rules for suspicious branch and repo creation, transfer, and modification activities.
Don’t miss out on future updates. Subscribe to Abstract Security’s newsletter to stay up-to-date on our most exciting announcements. Also, follow us on LinkedIn and subscribe to our YouTube channel!
Aqsa Taylor
Jan 14, 2025
Product
Why your SIEM needs a Data Pipeline Management platform
Feel locked-in with your SIEM?
How do you integrate your data sources with data destinations such as SIEM platforms, XDRs, and Datalakes?
Whether you are new to security or a lifer, chances are you’ve heard the industry discussions about log management strategies: collection, routing, storage, integrating data sources with SIEMs, and the costs associated with that effort. It’s fundamental that companies ensure high data quality as they collect and manage security data. Challenges arise because data comes from potentially hundreds of unique sources with differing formats and structures, and of course tremendous volumes of useless logs tangled with relevant data. These attributes require cleansing, complex mapping, normalizing, or other processes to ensure accuracy and consistency, while making sure all stakeholders are happy with how the data is being handled and how the organization is being protected.
And even if you figure all of that out, you’re left with having to work individually with each of the logging platforms because of their differing architectures, data mapping approaches and analytics.
What’s in the way?
Direct integration of your data sources with SIEMs results in noisy data and a heavy onboarding or migration effort. Problems that arise can include:
Cloud visibility – Most logging platforms don’t support all the complex cloud sources you need ingestion from, either because of cost or supportability. This leads to blind spots in data consolidation.
Paying ingestion costs on data that is not useful. Not all data from your data sources is useful for threat detection. Some events are internal service updates, some logs have extraneous fields, and some have additional info that is irrelevant. When you don’t have the right method to filter out the unnecessary data, you end up storing it in a high-cost data destination causing both your data volumes and costs to increase.
Noisy data blinding your SecOps. The State of SecOps and Automation report states that “99% of organizations report high volumes of alerts cause problems for security teams in identifying real threats buried in the noise”. Having noisy data filling your SIEM / datalake platforms isn’t just a cost problem, it's a fundamental security problem. Noise from the 90% of irrelevant data takes away focus from the 10% that matters.
Compromising on visibility. All data must be accounted for to have full visibility, right? But when the data is not filtered or normalized before being routed to destinations, what compromises are you making? How do you decide which data sources need to be onboarded, and which ones are of lesser value or can wait until later? Having a lot of data but not filtering it correctly can actually impact visibility negatively: the organization ends up unable to see real threats through all of the noise.
Vendor lock-in and SIEM migration complexity. Every SIEM / XDR / datastore platform is different. Each vendor may have its own data structure and querying language. Once you’ve onboarded a data source with a particular SIEM vendor, several factors come into play before you can make a switch to a new one, like integration complexity, the data volume that needs to be re-routed, analytic policy migration, and the operational burden on the team to make the transition. This creates vendor lock-in for an already saturated security team trying to defend the organization against real threats.
So how do you solve the potential problems that might arise from integrating data directly with your SIEMs and other platforms? What if there was a “helper” or a translation layer in between your data sources and data destinations that can take the heavy lifting of data operations off your internal team’s plate by decoupling the sources from destinations?
Introducing Abstract Security’s Data Pipeline Management
A data pipeline management tool helps decouple the data sources from data destinations and adds the ability to operate on the data before it reaches a destination. This removes individual onboarding dependency, and the prebuilt source and destination integrations make data easily routable to any destination.
Abstract’s pipelines feature goes beyond a generic data pipeline management tool with its data and threat expertise. The main difference lies in Abstract’s strong security focus. There are a lot of DPM tools that can be the “helper” to route your data from one platform to another; however, not all tools are built with security in mind. Abstract Security has the data and threat expertise to distinguish between legitimate threat data and noise, to mask sensitive data before routing, to apply threat enrichments with live streaming intelligence, and most importantly to recognize what data should not be dropped during noise reduction.
With Abstract’s “no-code required” model, you can easily perform all these operations without having to hire a dedicated, certified professional to work with the platform. With Abstract Security’s pipeline features, you get:
Streamlined Quality Data: Abstract collects, reduces, enriches and routes data from various cloud sources such as AWS CloudTrail, Azure Activity Logs, and GCP logs. Abstract’s out-of-the-box rules filter out low-value data (e.g., debugging logs or redundant telemetry) before sending it to high-cost SIEM platforms, improving the quality of data ingested at destinations. In addition, Abstract’s data aggregation features further reduce data sizes by 40-50%.
Normalization and Enrichment: Cloud logs can be enriched with contextual information (e.g., geolocation, IAM role mappings) before reaching the SIEM, improving the relevance of security alerts for cloud environments. Abstract’s Intel Gallery consists of an in-house threat feed (the ASTRO feed) that is constantly updated, plus the ability to bring your own threat intelligence feeds into a single platform to apply enrichments.
Dynamic and Context-Aware Routing: Abstract allows for dynamic routing of logs to multiple destinations, enabling the organization to split the stream based on predefined analytic use cases or specific security scenarios. This approach supports cloud use cases, ensuring holistic visibility. Abstract’s ability to send data to multiple SIEMs and cloud monitoring tools ensures that the right data reaches the right platform, whether for compliance, security operations, or cloud monitoring.
Simplified SIEM Transitions: Abstract’s architecture decouples data sources from specific SIEM platforms, enabling the organization to seamlessly replace SIEMs without significant re-architecting. By allowing simultaneous data flow to multiple destinations, Abstract can facilitate easy transitions to new SIEMs or cloud monitoring tools during migration periods, reducing integration costs and minimizing operational disruptions.
With Abstract’s pipelines feature, you can remove the complexity from data operations and make the most out of your SIEM investments without getting locked-in.
Leave the data operations to Abstract so your teams can focus on stopping the adversaries who threaten our collective livelihood.
Aqsa Taylor
Dec 15, 2024
News
Abstract Security + Amazon Security Lake + OCSF = Upgraded Security Data Management!
Thinking of integrating your data sources with AWS’s Amazon Security Lake offering? We got you covered! We are excited to announce Abstract Security’s new addition to our ever-growing integration list – Amazon Security Lake!
Quick recap on Amazon Security Lake
Amazon Security Lake from AWS is purpose-built for you to centrally query your data with Amazon Athena and OpenSearch and to build machine learning algorithms on your data via Amazon SageMaker. With growing compliance and data regulatory needs, it’s essential to unify security data operations in a central platform for visibility and reporting, and that’s essentially what Amazon Security Lake aims to deliver.
Does that limit you to AWS-native data sources? Not at all! What’s cool about this offering is that it supports bringing custom data sources from outside of AWS into Amazon Security Lake via the Open Cybersecurity Schema Framework (OCSF) standard. OCSF is an open and collaborative project that creates a standardized, vendor-agnostic schema to normalize security events. This creates a uniform format for events regardless of the source origin’s format or syntax. The benefit of such normalization is that data can be routed from any source to any destination, and analytics can be shared between platforms.
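To make the idea concrete, a hand-rolled normalization step might look roughly like the toy sketch below. The field names only approximate the OCSF authentication class; in practice you would validate mappings against the published OCSF schema rather than this illustration.

```python
# Toy sketch: normalize a raw sign-in record toward an OCSF-style authentication event.
# Field names approximate OCSF and are for illustration only; consult the published schema for real mappings.
from datetime import datetime, timezone

def to_ocsf_authentication(raw: dict) -> dict:
    return {
        "class_uid": 3002,                      # Authentication class (per OCSF)
        "category_uid": 3,                      # Identity & Access Management category
        "time": int(datetime.fromisoformat(raw["timestamp"])
                    .astimezone(timezone.utc).timestamp() * 1000),
        "activity_name": "Logon",
        "status": "Success" if raw.get("result") == "ok" else "Failure",
        "user": {"name": raw.get("username")},
        "src_endpoint": {"ip": raw.get("source_ip")},
        "metadata": {"product": {"name": raw.get("product", "custom-source")}},
    }

raw_event = {"timestamp": "2024-12-01T12:34:56+00:00", "username": "jdoe",
             "source_ip": "203.0.113.5", "result": "ok", "product": "vpn-gateway"}
print(to_ocsf_authentication(raw_event))
```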
So why do I need Abstract Security?
Amazon Security Lake provides the basic building blocks to bring in custom data sources outside of AWS services. To fully realize the power of Amazon Security Lake, you need to build software to gather the data from your external data sources and normalize it into the OCSF format.
Abstract Security’s pipeline platform normalizes data in real time from your data sources to the same OCSF schema, so you can route data from any source to Amazon Security Lake without needing to normalize or optimize it yourself. By integrating with Abstract, you can remove noise, ingest data from hybrid cloud environments, normalize in real time, and enrich with live-streaming threat intelligence before storing it. This integration benefits our joint customers in several ways:
Easily onboard any data source: Abstract has over 100 (and counting) prebuilt, easy-to-configure integrations with support for Syslog, HTTP, and API-based data sources, in addition to robust support for cloud data sources such as S3, Kinesis Firehose, GCP GCS and Pub/Sub, and Azure Event Hubs. Onboard hybrid cloud environments, SaaS solutions, and even on-premises data sources without worrying about normalizing or optimizing the data after storage.
Don’t pay storage costs for unnecessary data: With large volumes of data coming in, eliminating noise becomes cumbersome without full visibility. Abstract Security is built on data and threat expertise, with out-of-the-box processing rules that eliminate unnecessary entries, redundant telemetry, and other noise from incoming data in real time before routing it to your storage.
Easily migrate from an existing data lake to Amazon Security Lake: Because data is normalized in real time, you can migrate from any other storage destination to Amazon Security Lake without worrying about the operational burden and switching costs.
No coding experience required for data operations! Abstract not only simplifies data operations but does so in a user-friendly way, with a drag-and-drop, no-code experience. Empower your security team with time-saving tools without the overhead of custom engineering or yet another query language.
Wait, there’s more! Abstract Security also provides streaming analytics on incoming data and matches events against millions of Indicators of Compromise (IOCs), amplifying threat insights before routing data to Amazon Security Lake.
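As a simplified illustration of that kind of streaming IOC matching (with hypothetical indicators and field names, not Abstract’s engine), the sketch below checks each event against an in-memory indicator set before it is forwarded:

```python
# Toy streaming IOC match; real deployments use far larger, continuously updated feeds.
ioc_set = {
    "198.51.100.23",                                 # example known-bad IP (documentation range)
    "d41d8cd98f00b204e9800998ecf8427e",              # example file hash
}

def enrich_with_intel(event: dict) -> dict:
    """Flag any event field that matches a known indicator of compromise."""
    candidates = (event.get("src_ip"), event.get("file_hash"))
    hits = [value for value in candidates if value in ioc_set]
    event["threat_matches"] = hits
    event["is_suspicious"] = bool(hits)
    return event

stream = [
    {"src_ip": "198.51.100.23", "action": "login"},
    {"src_ip": "192.0.2.10", "file_hash": "abc123", "action": "download"},
]
for incoming in stream:
    print(enrich_with_intel(incoming))
```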
Get Abstract Security to see all this and more in action today!
Aqsa Taylor
Dec 5, 2024
Product
Abstract Security Welcomes OmegaBlack To Growing Roster of New Customers
Abstract Security, building the future of AI-enabled security operations, today announced it has added threat intelligence expert OmegaBlack, along with several new MSSP customers, over the past two months as the company solidifies its channel-friendly position, working the majority of its new business deals through the channel.
OmegaBlack is using Abstract Security for threat detection based on threat intelligence data to correlate against security logs and is utilizing the newly released Abstract Intel Gallery. Before Abstract Security, OmegaBlack would spend many hours every week dealing with complex integrations and legacy security tools. Now, the company combines all of its threat intel data with customer events, adds context, and does it all with less headcount and none of the manual work to set up playbooks.
"Our team has been searching for a better solution to weave threat intelligence into our data pipelines and detection workflows and came up short. We considered developing our own solution, but the complexity and resources required made this impossible for us," said Jonathan Kovacs, CEO of OmegaBlack. "Abstract Security has completely redefined security platforms, delivering solutions we never thought possible to serve our clients more effectively. Nobody else is doing this and we didn't even know it was possible to serve our clients this way."
Channel Program Expansion
Abstract Security's channel program is expanding with the addition of several new customers that represent the financial services, healthcare, and technology industries.
"The channel continues to be number one for Abstract Security and we've been working very closely since our official launch in the spring with our partners to reach more potential customers," said Chris Camacho, COO and co-founder of Abstract Security. "We are thrilled to be working with Jonathan and the team at OmegaBlack and love seeing customers getting value from the solutions that our engineering team works extremely hard to build."
Abstract has seen growing demand since emerging from stealth and announcing its Seed funding in March 2024. In April, Abstract announced the opening of its first Middle East office. In May, the company announced the addition of Christopher Key to its Board of Directors and was selected as a "Pioneering Cybersecurity Startup" winner as part of the 2024 Global Infosec Awards.
About Abstract Security
Abstract Security, founded in 2023, has built a revolutionary platform equipped with an AI-powered assistant to better centralize the management of security analytics. Crafted by category creators and industry veterans known for redefining the cybersecurity landscape, Abstract transcends next-gen SIEM solutions by correlating data in real time between data streams. As a result, compliance and security data can be leveraged separately to increase detection effectiveness and lower costs – an approach that does not currently exist in the market.
The leadership team of Colby DeRodeff, Ryan Clough, Aaron Shelmire, Chris Camacho, and Stefan Zier brings a unique set of experiences and backgrounds in product development and company building, gained at companies such as ArcSight (acq. by HP), Mandiant (acq. by Google), Palo Alto Networks, and Sumo Logic. For more information about the company, please follow the journey @Get_Abstracted.
Abstract Team
Oct 9, 2024
PR
Abstract and Chris Camacho: Threat intelligence - the next evolution
In this episode, Ryan had the pleasure of welcoming Chris to Abstract Security and discussing the importance of threat intelligence in the cybersecurity field. Chris and Ryan, both with operational roots in the financial services space, shared insights on how customers are utilizing threat intelligence in their Security Operations Centers (SOCs). We highlighted the shift towards more efficient and automated processes, with many organizations integrating threat intelligence into their Security Information and Event Management (SIEM) systems for proactive threat hunting.
We covered the evolution of threat intelligence use cases, from basic indicator investigation to advanced analytics and event enrichment. Chris emphasized the need for scalable solutions like Abstract Security to help organizations of all sizes effectively manage and leverage threat intelligence data. We also touched on the challenges of prioritizing and integrating threat intelligence feeds into detection and prevention tools, underscoring the importance of a platform that can handle large volumes of data.
A key highlight of the episode is the announcement of the Abstract Intelligence Gallery, a platform designed to streamline the management and correlation of intelligence data from various vendors. This new offering aims to empower organizations to make their threat intelligence actionable, enabling them to proactively identify and mitigate potential security threats.
Abstract Team
Aug 1, 2024
SIEM
Abstract and Jon Oltsik: Future of Security Operations
In this episode of Abstract Canvas, I had the pleasure of discussing the evolution of SIEM with Jon Oltsik, a former ESG analyst and security operations expert. We took a look at the history of SIEM, starting from the early days of focusing on perimeter use cases to the current challenges of managing vast amounts of data, especially with the transition to the cloud.
Jon highlighted the importance of adopting an architectural approach in security operations and analytics, emphasizing the need for a common data service layer to handle the increasing volume and variety of data sources. We also discussed the shift towards platformization in the industry and the challenges of balancing specialization with the need for a comprehensive security solution.
We also touched upon the significance of standardization in cybersecurity, with a particular focus on the OCSF initiative and the importance of industry-wide support for such standards. And we highlighted the ongoing debates and challenges in the industry, such as the need for log health monitoring even after two decades.
It was a super insightful and thought-provoking conversation with Jon, shedding light on the current state of security operations and the need for real innovation and collaboration in the industry. Listen until the end for our shared vision for the future of security operations and a commitment to driving positive change in the field.
Abstract Team
Jun 18, 2024
SIEM
Cal Poly Incubator Interview
Abstract Team
Apr 1, 2024
SIEM
C2 Corner
From Frameworks to Flywheels: How Dropbox’s Jess Jimenez Measures Security Momentum
This week's C2 Corner features Jess Jimenez, the Interim Head of Security at Dropbox where she leads global teams across enterprise and product trust and security. With more than 25 years of experience spanning Fortune 100 companies, financial services, and defense, she brings a pragmatic, risk-focused approach to building resilient programs. A frequent industry speaker and mentor, Jess is passionate about advancing security leadership and developing inclusive, high-performing teams.
Boards and executives press us with the same questions: Are we secure enough? What value is Security delivering? Which business outcomes are we enabling? Answering those questions credibly requires more than static frameworks or backward-looking metrics. It requires a system that shows momentum and impact.
Over the years, we’ve turned to frameworks and metrics for answers: from NIST CSF maturity tiers and ISO certifications to FAIR-based risk quantification and operational measures like Mean Time to Detect (MTTD) and Mean Time to Recover (MTTR). Each serves a purpose in benchmarking, demonstrating diligence, and tuning operations. But they all share a limitation: they capture state, not motion. They don’t tell us where momentum is building or where friction is slowing us down.
Frameworks show where you are. Metrics show how you’ve performed. Flywheels show where momentum is building and where the next investment will compound.
Where Frameworks and Metrics Fall Short
Security programs are no strangers to measurement. Maturity models such as NIST CSF 2.0 or the ISO 27000 series give us tiers of capability. They are useful for benchmarking, but they’re snapshots in time, and they often require heavy year-over-year assessment cycles to generate trends. Operational metrics like MTTD and MTTR quantify efficiency, but they are lagging indicators by nature. Governance metrics such as certifications, audits, and compliance posture demonstrate diligence, but rarely prove that risk has been materially reduced.
These are all important tools. But they don’t capture the reality that security is a dynamic, interconnected system. Improving one capability often amplifies progress in another. That’s the essence of a flywheel.
Security Flywheel Mapping
A flywheel describes momentum, not just state. The flywheel demonstrates how progress in one part of the system reinforces another, creating compounding impact. In Security, flywheels are critical. For example, when we build “secure by default” guardrails into our engineering environments, we don’t just reduce vulnerabilities. We improve detection quality, speed response, and feed intelligence back into stronger controls, improving the guardrails. The loop reinforces itself. Similarly, investing in early-stage product security reviews, augmented by secure code libraries and automated security testing, isn’t just about fewer defects; it accelerates product and feature delivery, improves customer trust, and strengthens the business case for further security investment.
Every flywheel has four dimensions: leading indicators that show momentum building, lagging indicators that demonstrate impact, accelerants that spin the wheel faster, and drag points that introduce friction and slow us down. Viewed this way, a flywheel becomes more than a metaphor. It’s a management model: a way to see where progress is reinforcing itself, where it’s stalling, and how to frame Security’s impact in terms executives understand: momentum, leverage, and value creation.
Example Flywheels in Practice
Here are a few illustrative flywheels, with indicators, accelerants, and drag points. Use these as starting points for mapping your own.
Flywheel: Detection → Response → Confidence
Leading indicators (drivers): % log coverage, # of threat hunts, alerts triaged
Lagging indicators (outcomes): MTTD, MTTR, dwell time, reduced escalations
Accelerants (where to invest for momentum): Expand telemetry coverage; invest in threat hunting; automation for triage and enrichment
Drag points to watch: Alert fatigue, false positives
Flywheel: Vulnerability → Remediation → Risk Reduction
Leading indicators (drivers): Critical vulns identified, patch cycle time, backlog trend, % code scanned pre-commit, # of training completions
Lagging indicators (outcomes): Reduction in late-stage defects, reduction in bug bounty findings, lower rework costs
Accelerants (where to invest for momentum): Integrate SAST/DAST earlier in the development lifecycle; developer-friendly security tooling; targeted training
Drag points to watch: Tooling friction, impact to developer productivity, developer resistance
Flywheel: Compliance → Trust → Investment
Leading indicators (drivers): % audit findings closed on time, remediation velocity
Lagging indicators (outcomes): Audit results, exec confidence, customer trust
Accelerants (where to invest for momentum): Streamlined GRC tooling; automation of evidence collection; embedding compliance champions in teams
Drag points to watch: Overhead, checklist mentality
What stands out is that accelerants are often targeted, practical investments—a better prioritization tool, more automation, a streamlined process. Small changes, if applied in the right place, can compound quickly across the loop.
By structuring flywheels in this way, we trace how early inputs cascade into outcomes, where momentum can be accelerated, and where drag must be managed.
How CISOs Can Apply Flywheels
Putting flywheels into practice is an exercise in leadership discipline. Focus on the loops that are critical to your program rather than spreading attention thin. Within each loop, select a handful of leading and lagging indicators that matter and can be measured without overwhelming the team.
Interrogate the system: where is momentum naturally building, and where is it stalling? Accelerants, whether AI, automation, new tooling, or additional staff, should be directed to loops where investments can be leveraged into outsized impact. Conversely, recognizing drag points helps avoid wasteful spend or exposes areas where process redesign may yield more than resource allocation.
Most importantly, flywheels give us a language for storytelling. Boards don’t just want metrics; they want to understand how Security’s work compounds into resilience and trust. When we present flywheels, we shift the conversation from cost and compliance to momentum and strategic advantage.
Why This Matters
For boards and executives, flywheels provide a different lens for understanding Security. Traditional metrics and maturity scores capture compliance and efficiency, but they don’t explain whether the program is getting stronger with each cycle. Flywheels highlight momentum, where small investments generate compounding returns, where friction erodes value, and where resilience is built quarter after quarter.
When framed this way, Security shifts from being seen as a cost center to being recognized as a strategic system of reinforcing loops that create trust, enable growth, and sustain business advantage.
Closing Thought
For CISOs and practitioners, the takeaway is practical: flywheels are a management system, not just a metaphor. By mapping those loops, defining indicators, and surfacing drag, we create a living model of our programs. Use this model to guide trade-offs and to tell a sharper story about where to invest or where to cut.
So here’s my challenge: this quarter, pick one flywheel in your program and map it. Overlay which teams drive each stage. Then bring that story to your leadership team, not as another compliance update, but as evidence that Security doesn’t just defend the business. It builds compounding resilience and trust.
The question isn’t only ‘Are we compliant?’ or ‘Are we efficient?’ The real question is: are we building compounding momentum that makes the business stronger and more resilient with every cycle?
Jess Jimenez
Oct 8, 2025
C2 Corner
C2 Corner: Stop Automating the SOC. Start Automating Remediation.
The Problem We Both See
Security teams already generate more risk and noise than IT partners can absorb. AI will amplify that signal by creating more findings and more decisions. If we do not change the operating model, the backlog grows and real risk stays on the field.
Industry research continues to show the same pattern: alert overload, tool sprawl, manual handoffs, and long time-to-remediate. Automation exists in many SOCs, but it often stops at the point of detection. The work that actually reduces risk still depends on manual coordination with IT, change control, and validation.
This edition of C2 Corner features John Perkins, a cybersecurity professional with deep expertise in managed security services, operations, and product development. Since November 2017, he has led Threat Angler as its founder and operator. Earlier in his career, he held leadership roles including Director of Managed Security Services at Insight, where he built a practice centered on cybersecurity outcomes, and VP of MSS Service Delivery at GuidePoint Security, where he managed SOC functions.
John's Take
We are spending too much time automating the SOC and not enough time automating the work that reduces risk. We chase AI SOC buzzwords while remediation stays manual and fragmented. The separation between IT and security keeps the backlog in place.
Security needs to leave the reporting tower. We cannot just publish risks and hand over recommendations. We need to join customers and IT in the remediation game and own the outcome together.
Chris's Take
The win is not more alerts or nicer dashboards. The win is risk down. That means shrinking time to remediate and doing it in a way that IT can trust and operate with every day. Across enterprises the pattern is consistent: findings are cheap, fixes are expensive. Tooling is fragmented, and workflows are the bottleneck.
AI can help with triage, drafting remediation steps, and summarizing status, but it must be grounded in real configuration and asset data, with humans approving production changes. Guardrails matter.
A Practical Blueprint: From Findings to Fixes
Here is a pragmatic model any security and IT team can run starting this quarter.
One backlog, shared ownership
Move to a single remediation backlog that both IT and security can see and update. Tickets for vulnerabilities, misconfigurations, and high-risk detections flow into the same queue with owners and due dates. Measure time to first action and mean time to remediate by asset class. Make MTTR a board-level metric.
Tighten the intake
Reduce duplicate and low value alerts at the source. Consolidate tools where practical and suppress duplicates before they hit people. This step alone can free a meaningful share of analyst time.
Automate the boring parts of fixes
Automate evidence collection, change requests, CAB templates, and rollback plans. Let AI draft remediation plans, but require grounding on live data and a human to approve the change.
Close the loop, not just the case
Push status back into detection and risk views automatically. When a patch deploys or a control is hardened, analytics should reflect the change the same day.
Prove risk moved
Track MTTR, percent of critical issues closed within the defined SLO, and repeat offender assets. Use these three numbers in quarterly reviews. They are simple and difficult to game.
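To ground those three numbers, here is a minimal sketch of how they might be computed from a shared remediation backlog export. The ticket fields and the 14-day critical SLO are assumptions for illustration only:

```python
# Hypothetical ticket records from a shared remediation backlog export.
from collections import Counter
from datetime import datetime

CRITICAL_SLO_DAYS = 14   # assumed service-level objective for critical issues

tickets = [
    {"asset": "web-01", "severity": "critical", "opened": "2025-06-01", "closed": "2025-06-08"},
    {"asset": "db-02",  "severity": "critical", "opened": "2025-06-03", "closed": "2025-06-25"},
    {"asset": "web-01", "severity": "high",     "opened": "2025-06-10", "closed": "2025-06-15"},
]

def days_to_remediate(ticket: dict) -> int:
    fmt = "%Y-%m-%d"
    return (datetime.strptime(ticket["closed"], fmt) - datetime.strptime(ticket["opened"], fmt)).days

closed = [t for t in tickets if t.get("closed")]
mttr = sum(days_to_remediate(t) for t in closed) / len(closed)

criticals = [t for t in closed if t["severity"] == "critical"]
within_slo = sum(1 for t in criticals if days_to_remediate(t) <= CRITICAL_SLO_DAYS) / len(criticals)

repeat_offenders = [asset for asset, count in Counter(t["asset"] for t in tickets).items() if count > 1]

print(f"MTTR: {mttr:.1f} days")
print(f"Critical issues closed within SLO: {within_slo:.0%}")
print(f"Repeat offender assets: {repeat_offenders}")
```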
How Abstract Security Fits
Security first data pipeline: We cut duplicate and low value noise at ingestion and enrichment, so fewer junk items reach the queue.
Built-in detections and intel with AIG: Higher-quality findings with lower volume, aligned to real attack behaviors.
Closed loop workflow: We stream outcomes back into analytics when a change lands, so the picture reflects reality.
Guardrailed automation: We support automation where it is safe and measurable. AI is grounded on your telemetry and gated by human checkpoints for production changes.
What Good Looks Like in 90 Days
Duplicate alerts down and a smaller tool surface for analysts.
A single remediation backlog with clear owners and SLAs.
MTTR and percent of critical issues remediated become shared KPIs for security and IT.
Automation handles intake, enrichment, evidence, and status updates. People focus on changes and validation.
Executive reviews focus on risk removed, not just alerts investigated.
Closing Thought
Automating the SOC without automating remediation is like calling plays without moving the chains. Security teams need to get on the field with IT, own the fix as much as the finding, and use AI carefully to speed safe changes. That is how we turn more signal into less risk.
John Perkins
Sep 24, 2025
C2 Corner
C2 Corner: From Mandate to Maturity
More Than a Federal Compliance Exercise
Zero Trust has evolved well past the buzzword stage. For U.S. federal civilian agencies, it’s a mandate: Executive Order 14028 (2021) and the follow-on federal Zero Trust strategy directed agencies to adopt Zero Trust architectures by the end of fiscal year 2024 to harden defenses against nation-state and criminal actors. The Cybersecurity and Infrastructure Security Agency (CISA) translated this mandate into its Zero Trust Maturity Model (ZTMM)—a practical framework for agencies to follow.
For commercial enterprises, there’s no such mandate, but there is pressure. Regulators, partners, and boards are increasingly expecting Zero Trust maturity as a baseline, not just a “nice to have.” For those organizations, adopting CISA’s ZTMM as a strategic benchmark and pairing it with the Cloud Security Alliance (CSA) Certificate of Competence in Zero Trust (CCZT) methodologies offers both the measurement system and the playbook needed to turn Zero Trust into a driver of resilience and trust.
This edition of C2 Corner features Joe Kim, CTO at Squadra Solutions. Joe supports federal agencies and large enterprises and holds multi-vendor certifications, including the Cloud Security Alliance Certificate of Competence in Zero Trust (CSA CCZT) and certifications for Zero Trust solutions such as Palo Alto Networks, Zscaler, and Netskope. His focus is on translating frameworks into results.
CISA’s ZTMM 2.0 — The Federal Blueprint
CISA’s updated model (released in April 2023) breaks Zero Trust into five foundational pillars:
Identity – verifying that every user is who they claim to be (e.g., phishing-resistant MFA).
Devices – ensuring endpoints meet security posture requirements before connecting.
Networks – segmenting and monitoring traffic to minimize lateral movement.
Applications & Workloads – securing code, APIs, and cloud-native workloads.
Data – protecting sensitive information wherever it resides or moves.
These are reinforced by three cross-cutting capabilities:
Visibility & Analytics – making threats and anomalies observable.
Automation & Orchestration – responding quickly and consistently at scale.
Governance – aligning policies, compliance, and accountability.
Each pillar moves through four maturity stages:
Traditional (legacy, perimeter-based trust)
Initial (basic Zero Trust controls in place)
Advanced (integrated, risk-informed controls)
Optimal (dynamic, adaptive, continuous Zero Trust)
This tiered approach acknowledges that no agency (or enterprise) can achieve “Optimal” overnight. Instead, progress comes through incremental adoption, policy refinement, and cultural alignment.
For federal agencies, ZTMM 2.0 is the north star. For enterprises, it’s a market signal: regulatory expectations are converging, and customers, partners, and boards increasingly expect ZT maturity as table stakes.
Why Enterprises Should Pay Attention
Even without federal mandates, enterprises face parallel challenges:
Hybrid complexity: Identities, devices, and workloads span on-prem, multi-cloud, and SaaS.
Escalating threats: Lateral movement and supply chain exploits thrive in environments with implicit trust.
Board-level pressure: Cybersecurity is now a fiduciary priority; resilience and continuity drive investment.
By adopting ZTMM 2.0, enterprises can measure where they stand, prioritize investments, and demonstrate tangible progress to stakeholders. But ZTMM explains what maturity looks like—not how to get there. That’s where CSA’s methodology adds depth.
CSA’s CCZT Methodologies — Turning Maturity Into Execution
While CISA’s ZTMM provides a blueprint for what maturity looks like, it stops short of prescribing how to get there. That’s where the Cloud Security Alliance (CSA) steps in. Through its Certificate of Competence in Zero Trust (CCZT) program and companion methodologies, CSA offers organizations a repeatable, actionable framework for operationalizing Zero Trust.
Zero Trust as a Business Strategy
One of CSA’s most important messages is that Zero Trust isn’t an IT project but rather an organizational strategy. To succeed, it must align with business priorities and risk appetite. That means:
Start with risk management: Protect the most critical assets first (your “High Value Assets” or HVAs).
Secure executive buy-in: Link Zero Trust initiatives directly to outcomes the C-suite cares about—reduced breach impact, compliance wins, and stronger customer trust.
Avoid silos: Make sure ZTMM’s pillars (identity, devices, networks, etc.) are tied back to overarching business goals, not treated as isolated technical efforts.
Monitor and maintain continuously: Visibility and analytics ensure policies adapt to changes in users, devices, and threats.
This cycle neatly aligns with ZTMM’s maturity stages: start at “Traditional,” iterate with each protect surface, and climb toward “Optimal” with measurable milestones.
Implementation: Incremental and Iterative
CSA also provides a practical playbook for execution:
Run a gap analysis against ZTMM pillars.
Update policies to reflect Zero Trust principles (e.g., phishing-resistant MFA, device health checks, continuous monitoring).
Pilot one protect surface at a time—such as privileged identity management or your most sensitive cloud workload.
Maintain a living checklist across governance, compliance, analytics, and disaster recovery to track progress.
Monitor and maintain: Use automation to enforce policies, track metrics for board reporting, and adjust governance as maturity grows.
This approach makes Zero Trust fundable, measurable, and resilient—avoiding the trap of a “big bang” project that stalls out under complexity or cost.
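As a concrete, intentionally simplified illustration of the gap-analysis step in the playbook above, the sketch below scores each ZTMM pillar against a target maturity stage. The scores are hypothetical, and this is not official CISA or CSA tooling:

```python
# Simplified ZTMM gap analysis with made-up scores; not official CISA or CSA tooling.
STAGES = ["Traditional", "Initial", "Advanced", "Optimal"]

current = {   # assessed maturity stage per ZTMM pillar (hypothetical)
    "Identity": "Advanced",
    "Devices": "Initial",
    "Networks": "Initial",
    "Applications & Workloads": "Traditional",
    "Data": "Initial",
}
TARGET = "Advanced"   # the stage the program is funding toward this cycle

def gap(pillar: str) -> int:
    """Number of maturity stages between the current state and the target."""
    return max(0, STAGES.index(TARGET) - STAGES.index(current[pillar]))

for pillar in sorted(current, key=gap, reverse=True):
    stages_behind = gap(pillar)
    status = "on target" if stages_behind == 0 else f"{stages_behind} stage(s) behind"
    print(f"{pillar:26} {current[pillar]:12} -> {TARGET} ({status})")
```

Ranking pillars by gap is one simple way to decide which protect surface to pilot first.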
Key Takeaway
ZTMM = defines the destination and maturity levels.
CSA = provides the roadmap and actionable steps to get there.
Together, they make Zero Trust measurable, fundable, and achievable.
Reducing Attack Surface Through Integration
By integrating CISA’s maturity model with CSA’s methodologies, enterprises gain both a benchmark and a playbook:
Attack surface reduction: Micro-segmentation, least privilege, and continuous validation minimize lateral movement.
Measurable maturity: ZTMM tiers provide a yardstick for executives and auditors.
Operational sustainability: Strategy, planning, and implementation practices ensure zero trust isn’t a one-time project but a culture shift.
From Vision to Practice
Zero Trust adoption is an enterprise-wide cultural and technical shift. Federal agencies may be mandated to comply, but enterprises that unify these frameworks gain strategic advantage:
Faster board approval for cybersecurity investments.
Stronger credibility with regulators, auditors, and partners.
A sustainable architecture that scales across cloud, on-prem, and hybrid environments.
Conclusion: Start Small, Scale Smart
Zero Trust isn’t achieved in a single sprint; it matures through disciplined, incremental steps.
Use CISA’s ZTMM 2.0 to baseline and measure your current state.
Apply CSA’s five-step methodology to plan and execute with precision.
Reduce your attack surface—starting with the assets that matter most.
By integrating ZTMM’s benchmark with CSA’s playbook, enterprises not only reduce risk but also build a culture of resilience, operational discipline, and measurable cybersecurity maturity. This combination transforms Zero Trust from a compliance exercise into a strategic advantage—winning faster board approval, strengthening credibility with partners and regulators, and ensuring security scales alongside the business.
Beyond CISA and CSA
For organizations operating in high-assurance environments—such as defense, finance, or critical infrastructure—the Department of Defense Zero Trust Reference Architecture provides even deeper technical guidance. Its focus on mission threads, segmentation, and real-time risk assessment can help enterprises design controls that stand up to the most demanding adversaries.
Abstract’s Perspective: Zero Trust as a Measurable Journey
By Chris Camacho, Co-Founder and COO, Abstract Security
Joe is right: Zero Trust maturity takes more than good intentions. You need a benchmark to measure progress and a playbook to drive execution. That’s where CISA’s ZTMM 2.0 and CSA’s methodology complement each other so well—one defines what maturity looks like, the other provides how to get there.
Where most organizations stumble is in execution at scale. Policies may be well-designed, but in practice the signals are noisy, context is fragmented, and security teams drown in data. That’s why Zero Trust is as much about operational discipline as it is about architecture.
From my perspective, the lesson is clear:
Start small by protecting your crown-jewel assets.
Measure consistently against recognized models like ZTMM.
Iterate quickly by embedding Zero Trust into day-to-day processes, not just strategy documents.
At Abstract, we think a lot about how execution really happens—inside the data pipeline where policies meet signals. When context and control are applied there, organizations can make Zero Trust not just aspirational, but operational.
When enterprises treat Zero Trust as a living, measurable journey, not a one-off compliance project, they build resilience that scales with the business. That’s the maturity curve we should all be aiming for.
Joe Kim
Sep 10, 2025
C2 Corner
C2 Corner: The Tactician and the Strategist in Cyber Threat Intelligence
In Cyber Threat Intel, AI is the Tactician. Humans are the Strategists.
I’ll never forget one of the first companies I worked with trying to integrate cyber threat intel into their security operations. They were post breach and, frankly, willing to throw a lot of money at a lot of their problems. At vulnerability management. At security operations. At threat hunting. At incident response. At threat intelligence (from the logs and IOCs to the deep web monitoring and adversary group profiles). It was the tool and consultant equivalent of Hunger Games. It wasn’t until we took a step back to strategically think about how we stitch all the patchwork of automated inputs and outputs across functions and tooling that we were able to build a collaborative and effective approach to cyber defense.
Platformization and agentic AI can feel a bit like the new Hunger Games these days. We can now ingest millions (if not more) of indicators, cluster related IOCs, detect patterns across logs, and correlate data sources. We have so much data and tooling that we’ve created tool sheds we call platforms to house it all in one consolidated place. There’s no question all of this is changing how threat intel shapes the way we as human professionals approach cybersecurity: we’re the strategists moving forward, not the tacticians.
AI as the CTI Tacticians: Precise, Fast, Repeatable
AI is the quintessential specialist. Its strength lies in speed, scale, and precision. It excels at triaging threats, clustering indicators, structuring and enriching information into meaningful patterns, and scanning data continuously without fatigue. AI performs best in environments with clear rules and repeatable patterns. Unlike a human tactician on the battlefield, AI can run countless maneuvers faster and with more accuracy than a person ever could. It is brilliant at executing precise, repeatable plays, even predictive ones built on pattern recognition of past behavior (again, as long as it is operating within clearly defined and repeatable rules).
Humans are Your Strategists
But CTI isn’t just about execution – it’s about direction. Most of the environments we live and operate in don’t have stable, rule-bound systems. The world of cyber threat adversaries is one with shifting rules, incomplete information, and few obvious playbooks. Unlike specialists, who excel in those stable, rule-bound systems, strategists draw on diverse experiences and cross-disciplinary skills to navigate uncertainty and apply real strategic thinking. AI may flag threats, but humans decide which battles to fight and how intelligence informs our broader security posture. They do this by doing things AI doesn’t do on its own:
Interpret AI outputs in context – prioritizing based on business risks, regulatory demands, or geopolitical shifts.
Ensure AI isn’t operating on autopilot: validating, tuning, and avoiding bias or blind spots.
Bridge the gap between cyber operations and executive leadership.
Ultimately, the strategist is the vital role that guides AI, ensures good governance, oversees identity security, and translates insights into actionable, business-aligned intelligence.
So you want to be a Cyber Threat Intel Strategist? Be a Generalist.
Thinking back to that post breach feeding frenzy experience so many years ago, something I took away was how the specialists in each part of the organization were missing a critical piece of the puzzle because they didn’t understand each other’s experiences or how they could fit together. Everyone was extremely well intentioned, talented and focused on their specific roles and responsibilities, but they were only looking for intelligence that fit their slice of the cybersecurity defense operation. It was after we started taking a real generalist approach pulling from a broad set of domain knowledge and experiences, reframing the problems, and applying solutions from one domain to another that real cyber fusion started to happen.
Consider the unfamiliar setting of the cyber threat intelligence landscape:
There is no “fixed game board”. Threat actors change tactics constantly. For example, ransomware groups pivot from encryption to extortion-only models; nation-state adversaries adapt malware or pivot from denial or theft of information to distribution of misinformation depending on their objectives.
The environment itself is unpredictable. Emerging technologies – from AI-driven phishing to deepfake-enabled fraud – change the attack surface overnight. Some of these campaigns weren’t in the playbook just a few years ago.
The stakes in the game are dynamic. We live in a world that’s changing as rapidly as our technologies. A CTI report may need to explain not just technical indicators, but how a power grid intrusion ties to geopolitical maneuvering, or how stolen employee data matters more after a change of power in a tenuous foreign corporate relationship.
Now consider the breadth of generalists that helps them thrive in uncertainty:
They connect dots across disciplines. A strategist with experience in geopolitics, business continuity, and cyber operations can see that a phishing campaign is not just a nuisance, but a potential precursor to a supply chain attack with reputational risk.
They adapt faster to new, undefined problems. When AI flags an anomaly the system can’t fully explain, a generalist’s diverse perspective helps critically evaluate when it’s worthy of escalation. They don’t freeze because the problem doesn’t look like a past case study.
They translate complexity into action. Executives don’t want (and can’t make sense of) a list of IOCs. They want to know things like, “Should we halt this merger? How should we handle this vendor? Should we report this to the regulators?” Generalists are skilled communicators, able to frame novel technical problems that don’t have established playbooks in business and policy terms.
The future of CTI, and fully optimized cyber defense operations, rests on the effective integration of AI as the powerful specialist/tacticians and humans as the adaptable generalist/strategists. AI brings precision and repetition. Humans bring vision, creativity, and adaptability. CTI teams that combine both will be the ones best equipped to outthink adversaries and secure complex systems in this era of rapid change.
Abstract’s Perspective: Turning Intel Into Action with AIG
By Chris Camacho, Co-Founder & COO, Abstract Security
This strategist-first approach is exactly why we built the Abstract Intel Gallery (AIG).
Threat intelligence can’t sit on the sidelines as disconnected feeds. It needs to live in the data pipeline, enriching events, prioritizing detections, and cutting noise before anything reaches the SIEM.
With AIG, we are:
Shifting detections left by applying intelligence early in the pipeline, before logs overwhelm downstream analytics
Making threat intel actionable instead of just “interesting,” bridging the gap between data and real-time defense
Reducing noise and cost by aligning enrichment with the signals that matter most
Our goal is to eliminate the “Hunger Games” approach to intelligence integration. Instead of piling on more tools and feeds, AIG ensures that AI-driven tactics and human-led strategy align seamlessly inside the pipeline.
This is how organizations move from reactive intelligence programs to proactive defense strategies—and it’s where CTI finally delivers on its promise.
Want to learn more about how Abstract is helping teams shift detections left with AIG? Reach out and let’s talk.
Simone Petrella
Aug 27, 2025
C2 Corner
Startup Partnerships: Lessons from the Front Lines
How to vet, scale, and collaborate without slowing down
Setting the Stage
Security leaders love to debate whether a single platform vendor or a mosaic of best-of-breed tools offers the safest and most cost-effective path. The truthful answer is that it depends on your threat model and risk appetite.
Over the last four years I have leaned into early-stage startups for several critical capabilities. That decision unlocked speed, innovation, and strategic leverage, yet it also introduced risks that required clear guardrails. This post is about turning those risks into a repeatable advantage.
Why Startups Belong in Your Stack
Cutting-edge innovation: Startups often ship features around new attack techniques long before legacy vendors finish grooming a backlog.
Agility and responsiveness: Founders crave practitioner feedback. I have watched teams add an API endpoint in days or integrate with our SIEM overnight because iteration is their default mode.
Strategic influence: When you engage early, you do not just consume the product; you shape it. Design-partner status gives you a voice in the roadmap and often locks in favorable pricing.
The STEEP Model: Discipline Without Bureaucracy
Former Aetna CISO Jim Routh created a weekly ritual called STEEP (Security Team Early-stage Evaluation Process). His team spends ninety minutes every week reviewing early-stage solutions and pilots roughly one in twenty. The lesson is simple: scheduled experimentation surfaces game-changing technology while keeping risk in check.
Source: Jeff Stone, CyberScoop, “EULA out, equity in: Why startups are now a part of larger companies’ security budgets,” 2018
Risks and a Playbook to Mitigate Them
Risk: Operational immaturity (no SOC 2, ISO 27001). Practical safeguard: Run a lightweight control-gap assessment and tie commercial milestones to certification progress.
Risk: Business continuity (failure or acquisition). Practical safeguard: Include survivability clauses, code-escrow provisions, and thirty-day data-export rights.
Risk: Security of the security provider. Practical safeguard: Begin in a sandbox, restrict write scopes, and review SDLC artifacts just as you would for any critical supplier.
Tactics That Actually Work
• Pilot in a segmented environment and measure alert fidelity, dwell-time impact, and MTTR improvement.
• Define joint success criteria upfront and revisit them every two weeks.
• Rotate internal champions each quarter so startup knowledge spreads and survives personnel moves.
These operational muscles keep startup adoption intentional rather than accidental.
A Word from Abstract Security
Abstract was born the same way many of our readers evaluate startups today. Early design-partner feedback shaped our streaming pipelines, analytics engine, detections, and threat-intel integrations. We are proof that disciplined collaboration can produce production-grade results without the vendor bloat that CISOs dread.
Final Thoughts
Startups will not replace your incumbent platforms, and platforms rarely ship tomorrow’s breakthroughs. The strongest security programs blend both. Use large vendors for consistency and scale; use startups for speed and strategic edge.
Join the Conversation
Have a design-partner win or a lesson learned? Drop a comment or reach out. C2 Corner exists to share blueprints, not buzzwords. Big thanks to Israel Bryski for opening his playbook. Stay tuned for more voices from the field.
Israel Bryski
Jul 30, 2025
C2 Corner
Inside the Web of Scattered Spider: DFIR Lessons and the Future of Modern Detection
About the Co Author
Alex Waintraub is the Founder of Waintraub Cyber Solutions, where he guides organizations through both proactive preparation and rapid response to cybersecurity incidents. With nearly 15 years of hands-on experience in digital forensics, threat intelligence, and ransomware negotiation, Alex has led crisis response efforts across sectors including healthcare, finance, manufacturing, and aviation. He previously served as Director of Incident Response at Arete and is a frequent speaker at major industry conferences such as RSA Conference and IT Nation Connect.
Setting the Stage
C2 Corner exists to turn real‑world incidents into practical playbooks. When Alex called to compare notes on the recent airline breaches linked to Scattered Spider, it was clear this story was bigger than just another headline. It is a case study in how fast social engineers can leapfrog traditional defenses and how quickly defenders must evolve. What follows blends Alex’s DFIR field notes with my own perspective on what organizations need to change now.
How Scattered Spider Slips Past the Front Door
Scattered Spider is an aggressive, social engineering-focused threat group linked to high-profile breaches at MGM Resorts, Caesars and several major airlines. Known for using live voice phishing and identity pivoting rather than malware, they target identity infrastructure directly and move fast.
They pick up the phone, impersonate your admin, and convince your help desk to reset multifactor authentication. From there they pivot across identity platforms, cloud consoles, and collaboration tools while monitoring your own response chats. In one investigation Alex led, Scattered Spider moved from a single service account to Active Directory, CyberArk, and cloud backups in hours, showing an uncanny grasp of internal topology.
Why airlines? They became a natural target for Scattered Spider not only because of the value of their data, but because of their sprawling vendor ecosystems. Loyalty programs, support contractors, and federated identity setups made it easy to impersonate a trusted link in the chain, and hard for defenders to detect the pivot in time. Trust chains are long and loyalty data is lucrative. The MGM playbook (exploit human trust, not zero days) translated perfectly to aviation service vendors.
What Broke: Legacy SIEMs and Stale Playbooks
Traditional SIEMs struggled here for three reasons:
Static rules: They flag brute force but miss a rapid MFA reset followed by lateral movement.
Latency: Delayed ingestion turns real‑time attacks into slow motion reruns. By the time data shows up, attackers have already moved on.
Context gaps: Anomalous login alerts live in silos, never linked to simultaneous Slack permission changes or sudden outbound file transfers.
Four DFIR Takeaways
Help desk is your new perimeter. It is now part of your attack surface, so treat voice and chat verification with the same rigor as firewall rules.
Context beats collection. EDR, Okta, and VPN logs are table stakes, but they’re not enough. Correlating identity, device, cloud, and SaaS logs (like Google Workspace or Microsoft 365) is what actually helps teams catch lateral movement.
Ransom isn’t always the main objective. Many campaigns focus on exfiltration and pressure, not encryption.
Preparedness over heroics. Tabletop scenarios must cover MFA fatigue, voice phishing, and cloud pivoting. If your plan starts at the moment of containment, you are already behind.
My Take: Turning Lessons into Modern Detection
At Abstract Security we designed Lake Villa and our real‑time pipelines for exactly this style of adversary. Three principles matter:
Identity-centric correlation in seconds. A help-desk-initiated MFA reset tied to a new Slack token and unusual S3 access triggers a single high-fidelity alert.
Streaming enrichment at ingest. User role, asset criticality, and threat intel attach to every event before it lands in storage, shrinking dwell time.
Workflow ready context. The same enriched alert feeds SOAR, not just dashboards, so responders can lock accounts or revoke sessions with one click.
When teams replace or augment their legacy SIEM with a pipeline plus Lake Villa analytics, they close the time gap that Scattered Spider exploits. Detection shifts from “after the fact” to “while the actor is still dialing the next help desk agent.”
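To illustrate the first principle, here is a deliberately simplified sketch that correlates a help-desk-initiated MFA reset with risky follow-on activity for the same identity inside a one-hour window. The event shapes, sources, and thresholds are hypothetical, not Abstract’s detection logic:

```python
# Toy identity-centric correlation: an MFA reset followed by risky actions for the same user.
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)   # assumed correlation window
RISKY_FOLLOWUPS = {"new_oauth_token", "s3_access"}

events = [   # hypothetical, already-enriched events keyed by identity
    {"user": "svc-admin", "type": "mfa_reset", "source": "helpdesk", "time": "2025-07-01T10:02:00"},
    {"user": "svc-admin", "type": "new_oauth_token", "source": "slack", "time": "2025-07-01T10:20:00"},
    {"user": "svc-admin", "type": "s3_access", "source": "aws", "time": "2025-07-01T10:41:00"},
]

def when(event: dict) -> datetime:
    return datetime.fromisoformat(event["time"])

def correlate(evts: list[dict]) -> list[dict]:
    """Emit one high-severity alert per help-desk MFA reset with risky follow-on activity."""
    alerts = []
    for reset in (e for e in evts if e["type"] == "mfa_reset" and e["source"] == "helpdesk"):
        followups = [e for e in evts
                     if e["user"] == reset["user"]
                     and e["type"] in RISKY_FOLLOWUPS
                     and timedelta(0) < when(e) - when(reset) <= WINDOW]
        if followups:
            alerts.append({
                "severity": "high",
                "user": reset["user"],
                "summary": f"MFA reset followed by {len(followups)} risky action(s) within 1 hour",
            })
    return alerts

print(correlate(events))
```

Collapsing the chain into one alert, rather than three unrelated ones, is what keeps responders ahead of the pivot.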
Priorities, tasks, and expected impact for the first 30 days:
Week 1: Run a live help desk social engineering drill. Expected impact: Expose verification blind spots.
Week 2: Map MFA reset workflows into your detection rules. Expected impact: Surface abnormal chains immediately.
Week 3: Enable streaming context enrichment for identity and collaboration logs. Expected impact: Reduce alert triage time.
Week 4: Conduct a tabletop on cloud pivot and data leak extortion. Expected impact: Align exec, SOC, and legal on response.
Closing Thought
Scattered Spider is already iterating on its next playbook. Our edge comes from iterating faster. Modern detection is no longer about amassing more logs; it is about weaving every human and machine action into a living storyline and surfacing the twist before the breach becomes tomorrow’s headline.
Alex Waintraub
Jul 16, 2025
C2 Corner
10 Cybersecurity Focus Areas That Actually Move the Needle
When someone steps into a security leadership role, there’s immediate pressure to prove value. You’re expected to assess risk, build relationships, and start showing momentum, all before the 90-day mark. That’s why I wanted to highlight this post from my friend Frank McGovern.
Frank has worked across multiple industries, launched Blue Team Con, and advised teams on how to modernize security programs in the real world. His take on what actually moves the needle in your first 180 days is one of the most practical breakdowns I’ve seen.
We asked if we could include it in this C2 Corner series. The ideas are Frank’s, written in his voice, with a few added thoughts from what we see at Abstract Security, especially around log strategy and detection coverage.
If you’re leading a security program, this is a strong foundation to start from.
1. Budget and Staffing
Before making any promises, align with finance and HR. Know your current headcount, available budget, and where the gaps are, especially around SIEM and data pipeline costs. Quick note: Teams that come in with a clean data strategy and visibility into SIEM spend tend to get faster support when budget season hits.
2. Documentation and Governance
3. SOC Transformation
Whether internal or MSSP-driven, the SOC needs tuning. That includes improving SLAs, tightening escalations, and refining detections.
4. Identity and Access Management
Get MFA enforced, consolidate identity platforms, and review AD hygiene. These are fast credibility wins with big risk reduction.
5. Posture Assessments
Run internal scans, check cloud configs, and get a full picture of exposure before the next pen test or audit brings it to light.
6. Email Security
Still the number one way attackers get in. Lock down SPF, DKIM, and DMARC, and reduce noise in your SOC at the same time.
7. Firewall and Segmentation Reviews
Stale firewall rules can stick around for years. Cleaning them up early sets the stage for proper segmentation later.
8. Application Security
Check your SDLC. Make sure AppSec tools are in place, being used, and feeding results back to dev teams for action.
9. Log Strategy and Detection Engineering
More logs do not mean more security. Focus on visibility. Pick high-value data sources and align detections to real threats. From Abstract: Modern data pipeline platforms play a crucial role here. At Abstract, we focus on helping teams route, filter, and enrich the logs that matter most, ensuring the right data reaches the right tools, fast. While some organizations opt for platforms like Cribl for broader routing needs, Abstract is purpose-built for security teams. Either way, a modern log strategy delivers quick wins and long-term value.
10. Third-Party Risk Management
Keep the process lightweight and focused on your most critical vendors. Contracts and SOC 2 reviews cover most of what you need at this stage.
After these ten are under control, you can shift attention to data classification, insider threat, and cyber insurance. Those are important, but don’t come first.
Final Thoughts:
The first six months are about earning trust and driving results. Focus on the fundamentals. Deliver visible outcomes. The rest gets easier from there.
Big thanks to Frank for letting us share this playbook. If you’re stepping into a new security role or helping someone who is, this list is a great place to start.
We’ll be back in two weeks with the next C2 Corner post. Until then, let us know what you’d add to the list or how you’d prioritize differently.
C2 Corner: Achieving Fortune 50 Security With a SMB Budget
From Enterprise to SMB: A Trial by Fire
Jake Lee
In my past lives at larger enterprises, I often had access to best-in-class technologies, dedicated vendor representatives, and an extensive budget to pursue defense-in-depth with layered controls and solutions (and, in many cases, redundant ones). However, when I made the shift from a Fortune 50 to a fast, lean SMB (small-to-medium business), I soon found myself in a proverbial trial by fire that would require innovation, agility, efficiency, and resourcefulness.
Not only was it a new environment, but it was also a new industry—going from the financial sector to healthcare—and I soon found myself on Day One wearing multiple hats and stretching every dollar; while still being expected to deliver enterprise-level protection.
Rebuilding the Security Program with Less
Without deep resources, robust vendor ecosystems, and specialized roles and departments, I needed to focus on what I could realistically achieve. This was a whole new playing field, so before I committed to anything I first identified my primary objectives: consolidate, prioritize, and shift focus to core, native platforms that had to deliver despite multiple competing security issues.
The first thing I focused on was to identify gaps in our security, while also reducing our vendor footprint. This approach often gets negatively associated with cutting corners, but this was far from that. It was about starting from the blueprint and architecting the foundation.
For example, if your task is to secure a home, what’s the point of putting five deadbolts on the front door when neither it nor any of your windows will close? When your resources and budget are less than 1% of what they were in your prior life, managing a dozen tools is not scalable; it becomes an exercise in futility. I focused on multi-functional platforms with solid integration and automation capabilities, strong customer support, and roadmaps that aligned with our expected growth, while filtering out the frivolous bells and whistles.
Fortune 50 Strategy, SMB Budget
In an SMB you need to adapt and improvise, but that doesn’t mean you can’t have a strategy. Enterprise-level strategies don’t have to disappear. I anchored on solid governance frameworks such as the NIST Cybersecurity Framework, assessed our core and highest risks based on our business model (looking at the hundreds of SaaS/PaaS technologies and the architecture the business depended on), and created a multi-phased implementation roadmap that mimicked enterprise structure for operations and governance.
That experience reinforced a key lesson: security and risk management isn’t about how much money you throw at it; it’s about investing wisely and intelligently. Every dollar has to be justified, and every decision, no matter how big or small, needs to show a return in risk reduction, compliance, and/or operational efficiency.
Built for Enterprises and Lean Teams Alike
Chris Camacho
At Abstract, we work with customers on both ends of the spectrum. Many of them have backgrounds like Jake’s, having helped build or lead large-scale security programs across global enterprises. Now they find themselves in fast-growing companies or critical infrastructure environments where resources are limited but expectations are still high.
This is exactly why we built Abstract. Our platform scales to support both the largest data volumes and the scrappiest of teams. We handle the heavy lifting for you, whether that means deploying real-time pipelines, delivering threat analytics, or supporting compliance and detection strategies out of the box. We act as a force multiplier for smaller security teams and as a central foundation for larger organizations looking to consolidate and modernize.
Our customers use Abstract to control cost, improve detection, and simplify security operations. Whether they are managing hundreds of terabytes of data or standing up their first security operations center, Abstract provides the tooling, support, and flexibility to meet them where they are.
Running a security program without the benefit of a big budget or specialized teams isn’t a step down. It is one of the clearest tests of leadership in this field. If your organization is looking for a skilled security leader who understands how to scale programs intelligently, efficiently, and with impact, I highly recommend connecting with Jake Lee on LinkedIn. He brings a rare mix of enterprise level strategy and hands-on execution that makes him a powerful asset for any security team.
Jake Lee
Jun 18, 2025
C2 Corner
Thinking Beyond the Budget: Why Your Security Team Isn’t a Cost Center
Beyond the Cost Center Fallacy
By Vesko Pehlivanov
What is cybersecurity in a business sense? Is it a cost center or is it a revenue generator? As tired as this question is, many executives attempt to put security into either one of those buckets. But regardless of which side you choose, both options fundamentally misframe the role of cyber and create a false dichotomy between security investment and business growth. Why?
In business, there are three levers to pull: revenue, cost, and risk. With this in mind, the goal is straightforward—increase revenue, reduce costs, and keep risk within acceptable parameters. However, in practice, security doesn’t fit neatly into any of these.
The Primary Function of Security
Cybersecurity has notoriously struggled to “prove” its value. Every day without an incident is a win, but all of us know that this is not as easy as it sounds. To an outsider who doesn’t know all of the moving parts, it is very tempting to think, “No incidents? Great, then let’s reduce the budget. It looks like sunny skies and smooth sailing from here.” And once you go down that slippery slope, too many things go wrong.
If it isn’t a cost center, then can it be a revenue generator? It rarely is, but it can be, especially if you’re a security vendor who can bill for products or features. However, going too far down that route will burn goodwill within the industry and might land you on a “most wanted” list like the SSO Wall of Shame.
Ultimately, security’s primary function is risk management for the business. Every business understands risk, explicitly or implicitly, because executives make risk-based decisions every day. As a risk management function, security should be treated separately from revenue generation and cost centers, and defined by its own value. It needs to be treated as a set of value-driven business services.
Transforming into a Service Mindset
Why services? Because services are defined by the value their customers perceive, not by the practitioner. For example, while I may be fine with a buzz cut, my wife most definitely would not, and she would never let a pair of clippers anywhere near her hair. Bringing it back to security, we as an industry need to adopt the same mindset and deliver the outcomes our internal and external customers expect. To do that, we need clearly defined metrics that demonstrate those outcomes are being achieved.
Measuring Service Effectiveness
There are four key properties of services that need to be measured on an ongoing basis:
Effectiveness
Quality
Efficiency
Risk
Let’s borrow from my previous example for some context. As the customer of a service, I have certain expectations for what I’m paying for and the results I expect to see. If I were to get a haircut, I would expect my hair to be shorter than when I came in. But even if my barber was “effective” in achieving that, there’s more that matters. Was it of good quality? Sure, completely shaving my head would fit the criteria for “shorter hair,” but it’s definitely not the service I was expecting!
Additionally, even if my barber gave me the exact cut I wanted, what about the cost to value? Would the best buzz cut in the world warrant a price tag of $100? Maybe to some, but not to me. Also, let’s not forget about risk. If at any point my barber gives me a nasty cut with a razor, it doesn’t matter if they went above and beyond in providing the service. I’d still be pretty unhappy as a customer!
Putting the Model into Practice
By Chris Camacho
If we want to operationalize a service- and value-driven approach to cyber, there are three ways for security leaders to do so:
Align with business outcomes
Define service-level metrics
Build a feedback loop
Aligning with Business Outcomes
Start at the top. If you want to shift to a service-driven security model, collaborate with your executive board to agree on expected outcomes. Link security strategy directly to specific business objectives. This could mean building customer trust through data protection, ensuring business resilience, maintaining regulatory compliance, or accelerating the secure adoption of new technologies.
Defining Service-Level Metrics
For each security service you intend to provide, define clear metrics for effectiveness, quality, and efficiency. Effectiveness might mean a reduction in incident rates. Quality might show up as reduced friction or lower complaint volumes. Efficiency could mean optimizing unit costs or cost per risk mitigated, whether through process automation, self-service capabilities, or a simpler tech stack.
But don’t forget about risk! Aim for no surprises: no outages from fail-closed controls and no botched incident responses.
Building a Feedback Loop
Implement robust operational practices. Set Service Level Objectives (SLOs) with your stakeholders, and tie your KPIs, KRIs, and OKRs directly back to those SLOs; that is what drives continuous improvement. Then make sure you report in terms of business impact, not just security activity.
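To make this concrete, here is a minimal Python sketch of what tracking service-level metrics against SLOs might look like. The service names, metrics, targets, and figures are hypothetical, and a real program would pull these numbers from its own ticketing and finance systems; the point is simply that each dimension gets an explicit target and a recurring comparison you can report to stakeholders.

# Rough illustration only: the service, metrics, targets, and figures below are hypothetical.
# The idea is to express each security service as an SLO with effectiveness, quality,
# and efficiency measures that roll up into business-facing reporting.

slos = {
    "phishing_response": {
        "effectiveness": {"metric": "incidents_contained_within_1h_pct", "target": 95},
        "quality": {"metric": "user_complaints_per_quarter", "target": 5},
        "efficiency": {"metric": "cost_per_incident_usd", "target": 250},
    },
}

observed = {
    "phishing_response": {
        "incidents_contained_within_1h_pct": 97,
        "user_complaints_per_quarter": 3,
        "cost_per_incident_usd": 310,
    },
}

def slo_report(slos, observed):
    """Compare observed metrics to SLO targets and flag misses for stakeholder review."""
    for service, dimensions in slos.items():
        for dimension, spec in dimensions.items():
            value = observed[service][spec["metric"]]
            # Cost- and complaint-style metrics are "lower is better"; rates are "higher is better".
            lower_is_better = "cost" in spec["metric"] or "complaints" in spec["metric"]
            met = value <= spec["target"] if lower_is_better else value >= spec["target"]
            print(f"{service} / {dimension}: {value} vs target {spec['target']} -> {'met' if met else 'MISSED'}")

slo_report(slos, observed)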
Why This Matters More Than Ever
Embracing this shift transforms your security team from a perceived cost center into a strategic partner that speaks the language of business value and demonstrable outcomes. Security is entering a new era, driven by automation, AI, and the expectation to “do more with less.” If we don’t evolve how we communicate value, we risk being sidelined or starved of investment. So ask yourself: how does your organization currently measure the value of its security function?
About Vesko Pehlivanov
Vesko Pehlivanov is a seasoned cybersecurity executive and former Director of Security Engineering at ID.me. He led transformative initiatives across cloud, infrastructure, and security operations—helping modernize security pipelines and drive strategic investment. Vesko was an early enterprise buyer of Abstract Security and remains a passionate advocate for security teams focused on operational excellence.
Vesko Pehlivanov
May 23, 2025
C2 Corner
Trust and Community: The Peer-to-Peer Security Advantage
The Human Signal
By Sandip Wadje
After working in cybersecurity for so long, I’ve come to find that the most valuable signals rarely come from a dashboard. Instead, they come from a DM, a quick text, or an off-the-record gut check from someone you trust. And that’s how it has always been: the security community has always operated in the shadows of formal channels, quietly sharing war stories, threat data, validation checks, and career advice.
Both Chris and I have lived that dual reality: starting in the vendor world, learning from customers, and later learning even more from our networks, more than any feed or platform could provide.
Community Provides Clarity in Chaos
Modern security is incredibly complex and is outpacing the ability of individual organizations to adapt. Budgets are tight, threats are coming in faster, and the tools we are forced to use are often noisier. Despite all the investments organizations are making, I still hear many voice the same frustrations:
“Am I doing this right?”
“Is this tool actually working for anyone else?”
“What’s everyone else seeing?”
If you’ve ever asked the same questions, where do you turn? In security, real collaboration doesn’t happen out in the open. It happens in quiet, intentional spaces built on trust, context, and experience: a text to a friend or mentor, a ping on a Slack thread, an email to your trusted crew. But if you’re new to the space, you may not have that luxury. So, what do you do?
Find Your Tribe
If you aren’t a part of a community yet, consider reaching out and joining an already established circle. Chris and I have built our own communities because we believe security tools and programs must be shaped by those who live it every day.
CW (Crimeware)
CW is a private email list and Slack group that is managed by Chris. It is focused on real-time intel and operational strategy across threat detection, response, and pipeline tooling. It’s where security leaders and operators compare notes across industries. The rules are simple:
No sales pitches
No posturing
Just help your fellow practitioners be practitioners
The Cyber Collective
The Collective is an invite-only WhatsApp group for C-Suite security leaders—a space where senior practitioners trade real-time insights, gut-check vendors, and stress-test each other’s thinking. It’s private, fast, and high signal by design. Sandip launched this in collaboration with senior executives in the industry to proactively discuss emerging issues.
Scaling Trust Without Losing Signal
By Chris Camacho
Being part of a community is vital to personal growth. But as groups grow, preserving signal and authenticity becomes harder. I’ve found that smaller, curated channels such as WhatsApp, Slack, and Signal outperform broader forums. It’s also important to surface insights from all members; the quiet ones often do the most listening, and anonymized polls and “quiet queries” help draw out their voices and the sensitive insights they hold.
Most importantly, the strongest groups blend the strategic and the tactical. Cross-pollination between CISOs and technical SMEs creates powerful alignment. A CISO may ask, “How do I justify this spend?” while a detection engineer counters with, “That product breaks under volume.” Both voices matter, yet these conversations rarely happen in a traditional workplace setting.
Building Around Community Input
At Abstract, this isn’t a marketing angle—it’s our DNA. It’s the reason why I helped form Abstract. Our company was built by practitioners frustrated with broken tools and ignored feedback. So, we decided to flip the model.
Going against the norm, we sponsor invite-only dinners where CISOs and detection leads meet without salespeople. We co-author content with leading experts, as well as operators between roles to provide the industry with down-to-earth content. We collect roadmap inputs through private Slack channels and user groups and shape our platform to serve both CISO priorities and SOC realities.
Our philosophy is simple: if your tools don’t reflect the community’s needs, you’re not solving the right problems. Cybersecurity will always be complex, but solving it doesn’t need to be a lonely, difficult process. The more we normalize peer collaboration, the more resilient we become.
So whether you’re in a job, between jobs, or building something new, find your circle. Reach out, nurture it, and pay it forward. There’s always a seat at the table.
About Sandip Wadje
Sandip Wadje has been Managing Director and Global Head of Emerging Technology Risks at BNP Paribas since 2017, where he focuses on governance and oversight of emerging technology domains such as cloud, artificial intelligence, digital assets, and quantum. Sandip also leads group-wide oversight of all aspects of “outside-in” intelligence collection, with a particular focus on the external attack surface, geopolitical risks, social media liquidity risk, the cyber posture of corporate clients, and emerging threats such as deepfakes and synthetic identities.
Sandip also represents the bank at various external forums. He has been a member of the European Cybercrime Centre’s (EC3) Financial Services Advisory Group since September 2017 and co-chair of the European Financial Roundtable (EFR) Cyber Experts Group since February 2020. The EFR comprises 25 leading financial institutions in the UK and Europe, represented by the chairmen and CEOs of the participating institutions. Sandip has over 24 years of experience in cybersecurity, operational risk, IT risk, internal controls, and compliance. He has a hands-on technology background, with previous stints at BT Global Services, RiskIQ, SecureWorks, KPMG, JPMorgan Chase, and Deloitte.
Sandip is passionate about leveraging AI for cyber, what AI adoption means for the convergence of cyber jobs, and the re-skilling needed to build a GenAI-ready workforce.
Sandip Wadje
May 12, 2025
C2 Corner
Redefining the SOC: Why AI Elevates (Not Replaces) the Next Generation of Analysts
In this edition of C2 Corner—a blog series where we spotlight sharp minds in security—I’m joined by Nick Goulding, a seasoned leader in Cyber Threat Intelligence (CTI) and Security Operations Center (SOC) operations who recently led the Cyber Fusion Center at Carrier.
Let’s Talk Artificial Intelligence
There’s no denying that artificial intelligence (AI) is a hot topic within the industry, and rightly so. We’ve only just scratched the surface, but even in its infancy AI has already allowed organizations and casual users to accomplish some amazing things. Like any new technology, though, it also introduces new types of risk: data privacy concerns, intellectual property disputes, and, for many industries, job disruption.
Do cybersecurity practitioners have to worry about AI taking their jobs away? Nick suggests not. Here’s his compelling argument for why AI won’t replace analysts but will instead raise the bar for them.
AI + SOC = Transformation
By Nick Goulding
There’s no getting around the AI boogeyman these days. The idea that artificial intelligence will replace SOC analysts, especially entry-level L1s, is gaining serious traction in some circles. I want to be clear: that assessment is misguided.
We’ve already seen this form of fear before.
Did mathematicians disappear with the invention of calculators? No. Architects didn’t vanish when Computer-Aided Design (CAD) software entered the scene. Teachers didn’t become obsolete with the rise of online curriculums.
In every profession, technology transforms the work; it doesn’t eliminate it. It instead enhances the capabilities of the workforce, broadens their scope, and elevates their impact. And that’s exactly what AI is going to do for cybersecurity operations.
Frankly, it’s already happening. My daughter is crushing math problems that I couldn’t solve until college—and it’s all thanks to an educational system that is embracing technology, not fearing it.
The SOC of the Future: The New L1 Analyst
We cannot deny that AI will be deeply integrated into cybersecurity operations. But the discussion around AI and cybersecurity shouldn’t be about replacing people; it should focus on how people can use automation to be better, faster, and smarter at their jobs.
Using AI-assisted platforms like Abstract, L1s can run detections powered by AI, interpret threat data in natural language using ASSE, and move beyond triage into active threat validation. These platforms also let new analysts operationalize threat intelligence and correlate across telemetry with no scripting or tuning required.
Leveraging this kind of technology, tomorrow’s L1 SOC analyst won’t just reset passwords or escalate every ticket upstream. Instead, they’ll be able to:
Conduct initial static malware analysis
Collaborate with AI to draft and implement remediation plans
Build and iterate automated playbooks to handle repetitive tasks
Use natural language interfaces to rapidly query security data
In short: L1s leveraging AI will do the work that today’s L2 analysts are often responsible for. And this shift will cascade upwards. L2s will stretch into L3 territory. L3s will evolve into advanced threat analysts, incident responders, and forensics experts. This is a necessary evolution because attackers are evolving too.
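As a concrete illustration of the “automated playbooks” item above, here is a minimal Python sketch of the kind of repetitive triage decision an L1 might automate. The alert fields, enrichment sources, and dispositions are assumptions for illustration and are not taken from any particular platform.

# Hypothetical sketch of a repetitive triage step an L1 analyst might automate.
# The alert fields, enrichment inputs, and thresholds are illustrative only.

def triage_login_alert(alert, known_corporate_ips, threat_intel_hits):
    """Decide whether a suspicious-login alert can be auto-closed or needs escalation."""
    src_ip = alert["source_ip"]

    # Auto-close logins from known corporate egress IPs with no intel hits.
    if src_ip in known_corporate_ips and src_ip not in threat_intel_hits:
        return {"action": "close", "reason": "known corporate IP, no intel match"}

    # Escalate anything with a threat-intel match or a first-seen geography.
    if src_ip in threat_intel_hits or alert.get("geo_first_seen", False):
        return {"action": "escalate", "reason": "intel match or first-seen geography"}

    # Everything else goes to a human for a quick look.
    return {"action": "review", "reason": "no automatic disposition"}

alert = {"source_ip": "203.0.113.7", "geo_first_seen": True}
print(triage_login_alert(alert, known_corporate_ips={"198.51.100.10"}, threat_intel_hits=set()))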
Attackers are Already Using AI
According to the Harvard Business Review, attackers have begun leveraging AI to supercharge phishing attacks, with success rates rising as high as 62%. Not only are AI-crafted phishing emails harder to spot, they are also much cheaper to produce, which lets attackers scale up their campaigns.
Research from IBM X-Force and the Oxford Internet Institute reinforces this, showing a 95% reduction in the cost of spear phishing attacks when AI is involved. That means more attacks, of better quality, at a fraction of the cost. So how do we keep up?
How Abstract Security Helps Security Teams Keep Up
By Chris Camacho, Co-Founder & COO at Abstract Security
Fighting AI with AI is only part of the equation. The real answer is AI guided by professionals and practitioners. That’s where security platforms like Abstract come in.
Here’s how Abstract Security is helping to tip the balance:
AI-powered data streaming platform: Eliminate noise by focusing on the alerts that matter. Detect threats faster with a streaming-first approach.
Optimized storage management: Smart tiering of log data (hot, warm, cold) cuts costs and ensures relevant data is always available.
Natural language queries: Say goodbye to memorizing cryptic syntax. Instead, just ask in plain language, like “Show me inbound IP traffic from Russia,” and get an answer instantly.
Integrated detection rules: Use pre-built and custom rules that combine machine learning and expert logic to surface meaningful correlations automatically.
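To ground the last two bullets, here is a generic Python sketch (not Abstract’s code or API) of the shape an in-stream rule can take when a simple learned baseline is paired with an expert-supplied condition. The field names, thresholds, and country list are illustrative assumptions.

# Generic sketch, not Abstract's API: an in-stream rule that pairs a simple
# volume baseline per source with an expert-supplied geographic condition.

from collections import defaultdict

baseline_counts = defaultdict(int)   # toy rolling per-source event counts
SUSPICIOUS_COUNTRIES = {"RU", "KP"}  # expert condition (illustrative)
THRESHOLD_MULTIPLIER = 5

def evaluate(event, typical_rate=10):
    """Flag an event when the volume baseline and the expert condition both trip."""
    src = event["source_ip"]
    baseline_counts[src] += 1
    volume_anomaly = baseline_counts[src] > typical_rate * THRESHOLD_MULTIPLIER
    expert_match = event.get("geo_country") in SUSPICIOUS_COUNTRIES
    if volume_anomaly and expert_match:
        return f"ALERT: {src} exceeds baseline and matches expert condition"
    return None

# The kind of events a plain-language question such as
# "Show me inbound IP traffic from Russia" would ultimately filter on.
result = None
for _ in range(60):
    result = evaluate({"source_ip": "203.0.113.7", "geo_country": "RU"})
print(result)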
We Need Cybersecurity Pros More Than Ever
Nick is right: the idea that AI will replace SOC analysts is not just misguided, it’s dangerous. Eliminating L1s would dismantle our talent pipeline. Instead, we should be reimagining what an L1 analyst can be, and then giving them the tools and training to grow.
AI isn’t going anywhere. Neither are security experts. The future belongs to those who adapt, integrate, and lead. Even attackers have recognized the power of harnessing AI. Now, it’s our turn. Let’s build a SOC that isn’t afraid of the future.
Many thanks to Nick for sharing his vision on the next-gen SOC. If any of our readers are hiring for a security leadership role, please connect with Nick. If you’re looking to contribute your voice to C2 Corner, drop me a note.
About Nick Goulding
Nick Goulding is a Cyber Security Leader with 18 years of experience specializing in Cyber Threat Intelligence, Threat Hunting, Detection Engineering, SOC/IR, Insider Threat, and Attack Surface Management.
Introducing C2 Corner: By Practitioners, For the Industry
Throughout my two decades in cybersecurity, I’ve met and spoken with some of the smartest people I’ve ever known, all of whom have shared incredible ideas on how to improve the industry. Yet they often don’t have a public voice, or a wide enough platform, to share their insights.
Whether it’s due to company policies, NDAs, or extreme workloads, their wealth of insights, innovation, and raw creativity are locked inside the confines of their last SOC, Incident Response team, or threat intel shop. But that changes today.
Giving Practitioners a Voice
That’s why I’m excited to introduce C2 Corner. This is a new blog series where I will be collaborating with cyber pros in the midst of finding their next opportunity. We won’t be talking about job hunting. We will be highlighting the value they’ve already created, with each post showcasing their perspective on real-world security challenges, lessons learned, and their ideas for where the industry should go next.
Why We’re Doing This
I’m starting C2 Corner for three reasons:
To provide the industry with high-signal, no-fluff content straight from the front lines
To give practitioners a platform to speak their mind
To reinforce Abstract Security’s belief that real innovation starts with real operators
What to Expect
Every edition of C2 Corner will be co-authored by me and a seasoned practitioner. Together, we’ll tell their story—sharing their unique perspective and technical insights, while tying it back to real-life challenges security teams are facing now.
Want to Contribute?
If you have a story to share (a detection lesson, a CTI use case, a reflection on burnout, or anything in between), collaborate with us. Reach out directly or message me on LinkedIn. We can work together to create something others can learn from.
Let’s make space for real voices in cybersecurity. Let’s help great talent get seen and heard, and let’s build a corner within the industry where practitioners lead the conversation.
Chris Camacho
May 1, 2025
C2 Corner
Videos
From Market Guide to Real World: Abstract's Influence in the Industry and Enterprise
Abstract Team
Aug 19, 2025
Videos
Breaking the SIEM Mold: Security Data Operations & What’s Next | Live at Black Hat
Abstract Team
Aug 19, 2025
Videos
Abstract Security Joins Forces with Analytica42 to Supercharge Integration Delivery to Google SecOps and More
Colby DeRodeff
Dec 16, 2024
Videos
Top Bottlenecks that impact your Data Strategy
Aqsa Taylor
Videos
Why your data storage and retrieval strategy may need a revisit
Abstract Team
Videos
Why is there a need for Data Pipeline Management?
Abstract Team
Videos
How is Abstract a Complete Data Platform for Security?
Abstract Team
Videos
Why your SIEM needs a Pipeline platform
Abstract Team
Videos
How does Abstract Simplify Data and Amplify Insights?