TL;DR:

  • GRC engineering is the practice of treating governance, risk, and compliance as a living system, with code-defined controls, queryable evidence, version-controlled logic, and continuous monitoring.
  • Most practitioners are adopting engineering principles gradually as landscape complexity increases, without ever formally changing a job title; the shift is happening inside the compliance, risk, and audit roles they have always held.
  • Three forces are driving the shift: cloud-native infrastructure, multi-framework sprawl, and non-deterministic, agentic AI. Together they are pushing practitioners out of manual execution and into configuration, validation, and system design.
  • Legacy GRC platforms weren’t built for this reality. They assume static environments and break under modern conditions, with failure modes like immutable tests, black-box scoping, hidden schemas, and rigid dashboards.
  • Supporting modern GRC requires a different foundation: queryable data, composable logic, version control, transparent scoping, and auditable, explainable outputs.

The most important shift in enterprise compliance right now isn’t quiet. It’s just misunderstood.

At the practitioner level, GRC hasn’t become engineering yet, and that’s the problem.

Most programs are still being run like administrative functions: audit cycles, evidence chasing, control attestations. Thoughtful, yes. Necessary, sometimes. But fundamentally disconnected from how modern systems actually behave.

Meanwhile, every other security function has already made that transition.

Enter: GRC Engineering

If the security team showed up to a pentest readout without real data sources, without evidence, without proving how a control actually performs in a live environment, they’d get laughed out of the room. But in GRC, we still accept screenshots and point-in-time assertions as proof.

This is an operational blind spot. 

GRC doesn’t need better workflows. It needs to be treated like an engineering problem.

That means systems that are configured, owned, versioned, and continuously validated… not bought, deployed, and left to decay into shelfware. It means accepting upfront lift: dedicated ownership, integration into source systems, and ongoing maintenance as the environment evolves.

This is what GRC engineering actually represents.

Not a job title. Not a trend. And certainly not something most teams have already figured out.

The phrase started as a community movement before it became a category. The canonical GRC Engineering manifesto, drafted by practitioners including Ayoub Fandi, Charles Nwatu, and Terra Cooke, defines the discipline as "more than just 'GRC + writing code'." It frames the shift as a systems-thinking and design-thinking move: GRC done with the same rigor and customer focus a software team brings to a production service. Because in a world of cloud, identity sprawl, and non-deterministic systems, you can’t maintain security and minimize risk with policies alone. You back policies up with data and continuous monitoring.

What GRC engineering looks like when it's real

GRC engineering is the practice of treating governance, risk, and compliance as a systems problem. The clearest sign that it has actually taken hold in a program is what the day-to-day work looks like:

  • The control library lives in version control. A change to a control reads like a pull request: an author, a reviewer, a commit message that explains the intent, a diff small enough to read in two minutes. Six months later, anyone on the team can trace why the control says what it says. (A sketch of a code-defined control appears after this list.)
  • Evidence is being collected continuously, in the background. Integrations against the systems that own the truth (CloudTrail, Okta, GitHub, Jira, the HRIS) feed a central evidence store with metadata and provenance attached. Nobody runs a script on Monday morning to gather it. When an auditor asks "show me production access changes for Q1," the answer is a query, returned in seconds, with the underlying log lines attached.
  • A new framework arrives, and the team adds a tag layer. When DORA, NIS2, or an internal framework lands, the team adds it on top of the existing control set. The relationships between the control and each framework's requirements are stored as data the team owns. The hand-maintained mapping spreadsheet has been deleted.
  • Scoping decisions live alongside the evidence. When an auditor asks "why was this account excluded from the PCI scope?", the answer is in the same place as the evidence, with the rationale, the date, and the person who approved it. There is no archaeological dig and no three-team screen-share to reconstruct what was already decided six months ago.
  • Agents run on top of the data layer. They collect evidence, evaluate controls against defined criteria, raise exceptions to the right owner, and produce structured output that humans review. Every agent action has a log entry showing what data it pulled, what decision it made, and why. The team configures and validates them the way a software team configures a service.
  • Every failure surfaces with context. The right owner gets the alert with the source data attached and a clear ask. Remediation lands in the team's normal work tracker, alongside the rest of the engineering and product backlog.
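To make that concrete, here is a minimal sketch of what a code-defined control might look like. Everything in it is an assumption for illustration: the `Evidence` and `Finding` shapes and the `check_mfa_enforced` control are hypothetical, not any real platform's API. The point is the shape: logic that lives in a repo, evaluates source-system records, and returns structured output with the underlying evidence attached.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Evidence:
    source: str            # system that owns the truth, e.g. "okta"
    collected_at: datetime
    record: dict           # raw payload, kept verbatim for provenance

@dataclass
class Finding:
    control_id: str
    passed: bool
    reason: str
    evidence: list         # every finding traces back to source data

def check_mfa_enforced(control_id: str, users: list) -> Finding:
    """Example control: every active user record must show MFA enabled."""
    failures = [e for e in users
                if e.record.get("status") == "ACTIVE"
                and not e.record.get("mfa_enabled", False)]
    return Finding(
        control_id=control_id,
        passed=not failures,
        reason=f"{len(failures)} active user(s) without MFA",
        evidence=failures,
    )
```

Because the check is plain code in a repo, changing it is a reviewable diff rather than a cloned copy, and the structured `Finding` is what an agent or a dashboard consumes downstream.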

Step back from those scenes and the rhythm of the program has changed. Audit prep stops being a season. The team's day is exception review, risk decisions, and stakeholder conversations, with the rest of the program producing signal in the background.

Some practitioners do this from a GRC engineer title. Others are quietly retooling the compliance role they have held for a decade to do the same work, one control at a time. The skill set is part Terraform fluency, part API literacy, part data modeling, part deep framework knowledge. The domain expertise stays where it always was: in the practitioner who has been running the program. The manifesto's values track the same shape end to end: code-first control libraries, in-depth continuous assurance, measurable risk outcomes, automation built early and often, evidence-based reasoning, stakeholder-centric UX, and open-source tools developed by practitioners.

Why most teams aren't there yet

Almost no enterprise GRC program looks like the picture above end to end. Most have one or two pieces working and the rest still done by hand. There are four common reasons for the gap:

  1. The platform. First-generation compliance tools were not built to be queried, version-controlled, or extended without a support ticket. A team running on one of them can want GRC engineering practice and still be unable to apply it, because the platform does not expose the surfaces the practice requires. Wanting a Git-style workflow and getting a clone-and-fork option is the daily experience.
  2. The data foundation. Even teams that bypass their compliance platform and pull evidence directly from source systems often find that the data arrives without metadata, without normalization, and without a clean way to join it across sources. Auditor-grade evidence is structured data with provenance, and most programs are not collecting it that way yet. (A sketch of such a record appears after this list.)
  3. The team composition. A compliance organization built around policy expertise, audit relationships, and questionnaire response does not turn overnight into one that writes Terraform. The technical skills are learnable in months. The muscle memory takes longer, and budget for a dedicated GRC engineer is still rare outside the largest programs.
  4. The legacy tax. Five years of cloned control tests, screenshot evidence folders, and one-off remediation tickets do not migrate themselves. Most teams adopting the practice are doing it inside a program that already exists, which means the work happens alongside the running audit cycle, with the existing obligations still due.
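As a sketch of that data-foundation gap, here is one hedged shape "structured data with provenance" could take. The field names are assumptions, not a standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def normalize(source: str, raw: dict) -> dict:
    """Wrap a raw source-system payload in provenance metadata so it can
    be joined across sources, queried, and handed to an auditor as-is."""
    body = json.dumps(raw, sort_keys=True).encode()
    return {
        "source": source,                            # which system owns the truth
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(body).hexdigest(),  # tamper-evidence
        "payload": raw,                              # original record, unmodified
    }

# A screenshot carries none of this context; a normalized record carries all of it.
record = normalize("okta", {"user": "jdoe", "event": "user.mfa.factor.deactivate"})
```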

None of these are reasons to wait. They are reasons the path looks gradual. The teams furthest along did not start with a rebuild. They started with one control they were tired of cloning, one piece of evidence they were tired of chasing, one scoping decision they were tired of explaining. The skills compound, the workflow shifts, and a year later the program runs differently than it did before.

{{ banner-image }}

Why the old platforms struggle with modern compliance programs

First-generation compliance automation was a real category win. It dragged compliance out of email threads and shared drives into something resembling software. For a non-technical operator at a startup running one framework in a uniform environment, those tools were the right answer.

End of story, right? Not quite.

The compliance program those tools were built for is not the compliance program that is needed today. Enterprise GRC programs run eight frameworks on average, often across multiple subsidiaries and geographies. The practitioner running the program is increasingly expected to write code, query data, and treat the platform like any other piece of production infrastructure. The UX assumptions baked into first-generation tools (forms, dashboards, point-and-click scoping, immutable tests, hidden schemas) reflect an earlier moment.

When GRC engineering principles are applied to one of these tools, the cracks show fast:

  • Immutable tests. The platform ships with a control test that almost works, except for one branch of cloud accounts. There is no way to edit it. The workaround is to clone the test, remap it, and maintain the fork forever.
  • Black-box scoping. Exclusions live behind cloud console tags or hidden config. When an auditor asks "why was this account excluded?" the answer requires three teams and a screen-share.
  • Hidden schemas. The JSON schema underneath the dashboard is invisible to the user. Custom analysis means filing a support ticket and waiting.
  • Fixed dashboards. The data is in there somewhere. Asking a new question means waiting for a product release or building a parallel pipeline outside the platform.

The team ends up maintaining the platform instead of managing risk. Not because GRC shouldn’t own systems, but because they’re forced to maintain the wrong ones. Instead of owning control logic and signal, they’re babysitting a tool that was never designed to behave like infrastructure.

Analyst Michael Rasmussen, who has covered the GRC market since the term was coined, made a related argument in his April 2026 piece Why the Future of GRC Is a Command Center, Not a Collection of Modules. The market, he writes, has outgrown the assumption that bolting more modules onto a legacy platform produces a coherent program. The practice GRC engineers are shaping needs the platform to behave like a single connected system, with shared data, shared logic, and shared accountability.

The five capabilities a modern GRC program needs from a platform

Five capabilities separate a platform that supports GRC engineering practice from one that gets in the way.

  1. Queryable data. Arbitrary questions of evidence, answered without navigating a fixed dashboard or filing a support ticket. If the data is in the platform, the engineer can ask it directly.
  2. Composable logic. Custom analysis, custom controls, and custom workflows the engineer can build without memorizing a private schema or routing through professional services.
  3. Version control. Control libraries that behave like code: branches, history, diffs, rollbacks. A change to a control is a commit with an author, a date, and a message.
  4. Transparent scoping. Exclusions and inclusions visible alongside the evidence, with the rationale stored in the same place. No archaeological dig through cloud console tags to explain a scope decision. (An illustrative sketch appears after this list.)
  5. Auditable outputs. Every finding traceable to source data with full provenance. The kind of evidence Big Four auditors and Schellman-grade firms accept on first review, with no back-and-forth about screenshots.
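As one illustration of capability 4, a scoping decision could be stored as a plain data object next to the evidence it governs. The fields below are assumptions, not a schema any particular platform defines:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ScopingDecision:
    scope: str        # e.g. "pci"
    target: str       # e.g. a hypothetical "aws-account-4521"
    included: bool
    rationale: str
    approved_by: str
    approved_on: date

exclusion = ScopingDecision(
    scope="pci",
    target="aws-account-4521",
    included=False,
    rationale="Sandbox account; no cardholder data flows through it.",
    approved_by="j.rivera",
    approved_on=date(2025, 3, 14),
)
```

When the auditor asks why that account is out of scope, the answer is this record: the rationale, the date, and the approver, retrieved from the same place as the evidence.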

Ayoub Fandi, who co-authored the manifesto and writes the GRC Engineer newsletter, pushed the queryable-data point further in his post What If Compliance Was Just a Query on Data You Already Collect? He borrows from the observability playbook used in production engineering: most of the data needed to prove a control is working is already being collected somewhere in the environment. The unsolved problem is making that data joinable, queryable, and continuously available, so a control test becomes a query rather than a fire drill.
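A toy, self-contained version of that idea, using an in-memory SQLite table. The table and its columns are invented for illustration; a live program would run the same question against its central evidence store:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE access_events (
    occurred_at TEXT, actor TEXT, action TEXT, environment TEXT)""")
db.executemany(
    "INSERT INTO access_events VALUES (?, ?, ?, ?)",
    [("2025-01-14", "jdoe",   "role_granted", "production"),
     ("2025-02-02", "asmith", "role_revoked", "production"),
     ("2025-02-20", "jdoe",   "role_granted", "staging")],
)

# "Show me production access changes for Q1" becomes one query.
rows = db.execute("""
    SELECT occurred_at, actor, action
    FROM access_events
    WHERE environment = 'production'
      AND occurred_at BETWEEN '2025-01-01' AND '2025-03-31'
""").fetchall()
print(rows)  # [('2025-01-14', 'jdoe', 'role_granted'), ('2025-02-02', 'asmith', 'role_revoked')]
```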

These are not capabilities reserved for teams with a dedicated GRC engineer. A practitioner who has never written a line of Terraform can adopt them one at a time: edit a control instead of cloning it, pull evidence from a source system instead of taking a screenshot, store the scoping rationale alongside the evidence the first time. The skills compound. The workflow shifts. The manager who started by editing one control ends up running a program that produces continuous, queryable signal.

What the platform built for GRC engineering looks like

The shift the practice needs from a platform is intelligence, not more automation. Automation runs the same step faster. Intelligence distinguishes signal from noise: it tells the practitioner that this control failure is a real exception and that one is an intentional design choice already documented somewhere.

A platform built for GRC engineering is composable by architecture, transparent by default, and trustworthy because every output traces back to source data with metadata intact. Custom logic works without forking the product. Scoping decisions are first-class objects. Agents run on a data layer that auditors already trust. The team can build, query, version, and ship without leaving the platform.
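As a toy sketch of that signal-versus-noise distinction (the names and structures are assumptions, not how any particular platform represents it):

```python
# Decisions the program has already recorded, keyed by (scope, target).
documented_exceptions = {
    ("pci", "aws-account-4521"): "Sandbox account; excluded 2025-03-14 by j.rivera",
}

def triage(scope: str, target: str) -> str:
    """Check a control failure against recorded decisions before paging a human."""
    rationale = documented_exceptions.get((scope, target))
    if rationale:
        return f"No action: intentional design choice ({rationale})"
    return "Raise exception to the control owner with source data attached"
```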

A note for the practitioners who have reached the opposite conclusion: that real GRC engineering means building everything in-house. The instinct toward engineering rigor is right, and the field is better for it. The question is where to spend the build effort. Auditor-grade data infrastructure (source-system integrations, evidence normalization, framework mappings, full provenance) is shared work that does not need to be rebuilt by every team. A platform that absorbs that layer frees the team to spend its engineering time on the parts that actually differentiate the program: its own control logic, its own agent definitions, its own scoping rules. The build-versus-buy line moves up the stack, and the work below it stops being interesting to redo. 

That is the shape of the work GRC engineers are doing. The tools that match it are the ones they will keep.

GRC engineer FAQ

What is GRC engineering? GRC engineering is the practice of treating governance, risk, and compliance as a systems problem. It applies software engineering methods (version control, CI/CD, infrastructure-as-code, API integrations, data modeling) to the work of running a compliance program, so the program produces fresh, trustworthy signal continuously. The canonical reference is the community-authored GRC Engineering manifesto.

What is a GRC engineer? A GRC engineer is a compliance practitioner who has formalized GRC engineering as their job. They build and maintain the logic, integrations, and data pipelines that make the program function. Many practitioners adopt the same practices without changing their title.

Can a traditional GRC manager adopt GRC engineering practices? Yes, and many already are. Adoption does not require a job change. It starts with the next time a control needs editing: modify it at the source instead of cloning it. The next time evidence is requested: pull it from the source system, with metadata, instead of taking a screenshot. The next time scope is questioned: store the rationale alongside the evidence. The skills compound, the workflow shifts, and the manager who started by editing one control ends up running a program that produces continuous, queryable signal.

How is a GRC engineer different from a compliance analyst? The compliance analyst executes the program: collects evidence, completes questionnaires, prepares for assessments. The GRC engineer designs the system that runs the program: defines control logic, builds data pipelines, configures agents, and owns the codebase the analyst's work runs on. Most enterprise programs benefit from both, often in the same person.

What skills does a GRC engineer need? Core technical skills include API literacy, data modeling, query languages (SQL or equivalent), Terraform or comparable infrastructure-as-code fluency, and version control. Core domain skills include deep knowledge of at least one major framework (ISO 27001, FedRAMP, PCI-DSS, or HIPAA), comfort reading regulatory text, and an instinct for what an auditor will accept as evidence. The technical skills can be picked up in months. The domain expertise is the hard part, and it is what experienced GRC managers already bring to the role.

Why is the GRC engineer role emerging now? Three forces created the role: cloud-native infrastructure made manual evidence collection unworkable, framework sprawl made hand-mapped controls untenable past the third framework, and agentic AI moved the practitioner's job up the stack from execution to configuration and validation.

The compliance practitioner who thinks like an engineer is already in the building. The practitioners working their way toward that mindset are right behind them, and the experienced GRC managers leading those teams have most of the hard part already done. The platforms that match how this work gets done are the ones that will run enterprise GRC for the next decade.

Anecdotes is built for both sides of that arc. The composable logic, queryable data, and audit-grade evidence layer the practice depends on are already there, ready for the engineer who wants to build immediately and the manager taking the first steps.

Request a demo to see what a platform built for GRC engineers looks like in practice.
