The Commoditization of Policy: AI and the Consultancy Leverage Model
The integration of Generative AI into cybersecurity operations has fundamentally altered the economics of governance documentation. Historically, the creation of cybersecurity policy was a high-friction task requiring significant billable hours for drafting, formatting, and cross-referencing standards. Large Language Models (LLMs) have effectively commoditized this “zero-to-one” drafting phase, allowing practitioners to generate near-complete structural drafts in minutes rather than days.
The Disruption of the Consultancy Pyramid
This efficiency gain directly challenges the traditional “Leverage Model” used by major professional services firms (often referred to as the Big 4). The model relies on a pyramid structure:
- Partners sell the engagement.
- Seniors provide oversight.
- Juniors and Interns execute the bulk of the foundational work.
In the past, the “grind” of manually writing policy frameworks served both as a training ground for graduates and as a primary source of billable hours. With the adoption of AI, the intellectual labour required to reach an “80% done” state has collapsed. Yet many firms continue to use Value-Based Pricing, charging clients for the final deliverable and the brand authority, regardless of whether the labour was performed by a human team over weeks or an AI-augmented intern in hours.
The Quality Trap
While the speed of delivery has increased, the removal of the “human in the loop” for the drafting phase introduces significant risk. When junior staff rely heavily on AI without possessing the requisite domain expertise to audit the output, the firm risks delivering “hallucinated compliance”—policies that sound authoritative but fail to align with specific organizational controls or regulatory nuance.
The value proposition of external consulting is shifting from creation to validation. The premium is no longer in the writing, but in the expert customization—the final 20% that tailors a generic template to the specific risk appetite of the client.
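Part of that validation work can be partially automated. As a minimal sketch (the control IDs and draft text below are purely illustrative, not drawn from any real framework or client engagement), a reviewer might first run a mechanical check that a generated draft actually cites every control the organization requires, and flags references the draft invented:

```python
import re

# Hypothetical required-control set for an illustrative organization.
REQUIRED_CONTROLS = {"AC-2", "AC-6", "IR-4", "AU-12"}

def audit_draft(draft: str, required: set[str]) -> dict[str, set[str]]:
    """Compare control IDs cited in an AI-generated draft against a
    required list.

    A keyword check like this only catches omissions and unknown
    references; a human expert still has to verify that each cited
    control is described accurately.
    """
    # Match IDs shaped like "XX-123" anywhere in the draft text.
    cited = set(re.findall(r"\b[A-Z]{2}-\d+\b", draft))
    return {
        "cited": cited & required,
        "missing": required - cited,          # required but never mentioned
        "uncited_extras": cited - required,   # possible hallucinated refs
    }

draft = (
    "Account management follows AC-2. Privileged access is restricted "
    "per AC-6, and incidents are handled under IR-4 and IR-99."
)

report = audit_draft(draft, REQUIRED_CONTROLS)
# The "missing" and "uncited_extras" sets are what a reviewer triages first.
```

Here the check would surface AU-12 as missing and IR-99 as an unrecognized reference worth verifying; the judgment about whether the surviving text matches the client’s actual risk appetite remains the human expert’s “final 20%”.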
I wonder…
- How does the automation of “grunt work” impact the Skills Acquisition of junior analysts who historically learned by writing policies from scratch?
- Will clients eventually demand a shift from fixed-fee project work to outcome-based retainers once they realize the marginal cost of documentation has dropped to near zero? (See: Race to the Bottom).
- What are the liability implications for a firm if a breach occurs due to a policy gap that originated from an unchecked AI Hallucination?
- Does this necessitate a new role in cybersecurity teams, perhaps a Governance Engineer, specifically tasked with prompting and auditing AI policy generation?
References
- Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality - Harvard Business School working paper on how AI boosts performance but can reduce quality without oversight.
- The Future of the Professional Services Business Model - Harvard Business Review on how opaque billing practices leave the consulting business model vulnerable to disruption.