The Shift

I write technical documentation primarily for machines now. Humans reading it is a bonus.

This sounds counterintuitive, but here's the reality: when a recruiter or technical lead needs to find a security engineer with specific skills, they increasingly use AI tools. These tools don't just search for keywords. They perform semantic searches across the entire web, understanding context and meaning.

Your public documentation is not just a portfolio anymore. It's a training set.

For technical details, see the companion post: "Writing for Automated Systems: A Technical Analysis of Machine-Optimized Documentation Strategy."

Why Structured Text Wins

Machine Parseability

AI systems can instantly parse structured HTML or markdown. They can extract your expertise, understand your reasoning process, and map your skills to requirements. Video content and social media posts? They require expensive processing and often lose technical nuance.

Text is the universal protocol for machine reasoning. A well-structured postmortem or architecture document can be transformed into data structures, analyzed for technical depth, and cross-referenced against other claims.
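As a minimal sketch of what "transformed into data structures" means in practice, the snippet below splits a markdown document into (heading, body) pairs that a machine consumer could map to skills or cross-reference against other claims. The sample postmortem text is hypothetical.

```python
import re

def sections_from_markdown(text):
    """Split a markdown document into (heading, body) pairs."""
    sections = []
    heading = None
    body = []
    for line in text.splitlines():
        match = re.match(r"^(#{1,6})\s+(.*)", line)
        if match:
            # Close out the previous section before starting a new one.
            if heading is not None or body:
                sections.append((heading, "\n".join(body).strip()))
            heading = match.group(2).strip()
            body = []
        else:
            body.append(line)
    sections.append((heading, "\n".join(body).strip()))
    return sections

doc = """# Postmortem: Cache Stampede
## Impact
30 minutes of elevated p99 latency.
## Root Cause
TTLs expired simultaneously across all nodes.
"""
for heading, body in sections_from_markdown(doc):
    print(heading, "->", body[:40])
```

A video conveying the same incident would need transcription and heavy post-processing to reach this level of machine readability; the markdown version is one regex away from structured data.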

Semantic Search Optimization

Modern AI recruitment tools use Retrieval-Augmented Generation (RAG). When someone searches for "engineer who understands distributed systems failure modes," the AI performs dense vector similarity searches across technical content.

High-density technical writing produces better embeddings. Your detailed postmortem about a production incident creates a semantic fingerprint that matches relevant queries. Social media posts and resume bullet points lack the context needed for accurate matching.
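The matching step reduces to vector geometry. The sketch below computes cosine similarity between toy embedding vectors; the four-dimensional vectors and their labels are invented for illustration (real models emit hundreds of dimensions), but the comparison is the one RAG retrieval performs.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 4-dimensional "embeddings" (hypothetical values).
query      = [0.9, 0.1, 0.4, 0.0]   # "distributed systems failure modes"
postmortem = [0.8, 0.2, 0.5, 0.1]   # dense technical postmortem
tweet      = [0.1, 0.9, 0.0, 0.3]   # engagement-optimized post

print(cosine_similarity(query, postmortem))  # high: strong semantic match
print(cosine_similarity(query, tweet))       # low: little shared meaning
```

A context-rich postmortem lands close to the query in embedding space; a resume bullet point, stripped of context, does not.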

Verifiable Evidence

Anyone can claim expertise on a resume. Public technical documentation provides verifiable proof:

  • Temporal verification: Git commits and blog timestamps prove when you demonstrated knowledge
  • Technical depth: Vocabulary, architectural patterns, and security considerations reveal real understanding
  • Consistency: Multiple documents on related topics are hard to fake convincingly

A detailed postmortem proves operational maturity. An architecture document proves systems thinking. Security write-ups prove threat modeling ability. These artifacts are orders of magnitude more credible than self-reported skills.

Using LLMs as Documentation Processors

I use LLMs to transform raw technical notes into structured documentation. This serves multiple purposes:

PII Sanitization: Production incidents and architecture documents often contain company-specific details: internal hostnames, employee names, and proprietary information. When not under NDA restrictions, and with employer approval, I sanitize this content manually and with local LLMs before passing it to cloud-based models. This two-stage approach ensures no sensitive data leaves my infrastructure while still letting me use powerful cloud models for restructuring. The result preserves technical substance while stripping identifying information, so I can document real production experience without exposing sensitive data.
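A first deterministic pass can run before any LLM sees the text. The sketch below is illustrative, not a complete PII policy: the patterns, tokens, and the `.internal.example.com` domain are assumptions, and the local-LLM pass plus manual review would follow this stage.

```python
import re

# Illustrative scrub patterns; a real deployment needs a vetted,
# organization-specific pattern set, not this short list.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<IP>"),
    (re.compile(r"\b[\w-]+\.internal\.example\.com\b"), "<HOSTNAME>"),
]

def scrub(text):
    """Replace obvious identifiers with placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

note = "Paged by alice@example.com; db01.internal.example.com (10.2.3.4) was down."
print(scrub(note))
# → "Paged by <EMAIL>; <HOSTNAME> (<IP>) was down."
```

Regexes catch the mechanical identifiers; the local LLM stage handles the contextual ones (project codenames, descriptions that narrow down the employer) that no pattern list can enumerate.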

Stylometric Normalization: Writing style is a fingerprint. Consistent stylometric patterns across documentation can enable correlation attacks, linking pseudonymous or anonymous technical contributions. LLMs can normalize prose style while preserving technical accuracy, reducing stylometric linkability between documents or identities.

Structural Consistency: LLMs enforce consistent formatting, heading hierarchy, and organization patterns. This consistency improves machine parseability and creates uniform semantic chunking for RAG systems.
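One concrete check behind "consistent heading hierarchy" is that levels never skip a step, since a RAG pipeline that chunks on headings produces uneven chunks otherwise. The linter below is a hypothetical sketch of that single rule, not a full style checker.

```python
import re

def heading_level_errors(markdown):
    """Flag heading levels that skip a step (e.g. jumping from ## to ####)."""
    errors = []
    previous = 0
    for number, line in enumerate(markdown.splitlines(), start=1):
        match = re.match(r"^(#{1,6})\s", line)
        if not match:
            continue
        level = len(match.group(1))
        if level > previous + 1:
            errors.append(f"line {number}: jumped from h{previous} to h{level}")
        previous = level
    return errors

print(heading_level_errors("# Title\n#### Detail\n"))
# → ["line 2: jumped from h1 to h4"]
```

Running a check like this across a corpus keeps every document chunkable by the same splitting logic.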

The workflow is straightforward: I provide technical substance and constraints (remove PII, use specific terminology, maintain heading structure), and the LLM produces machine-optimized output. The technical content remains mine. The presentation layer is optimized for automated consumption.
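The "substance plus constraints" split can be made literal in the prompt itself. The scaffold below is hypothetical; the constraint wording is illustrative rather than the exact prompts I use, but it shows the shape: rules are fixed, notes vary per document.

```python
# Hypothetical constraint set for the restructuring pass.
CONSTRAINTS = [
    "Remove all PII: names, hostnames, internal project identifiers.",
    "Preserve every technical claim and number exactly as given.",
    "Keep the existing heading hierarchy; do not add new sections.",
    "Use precise domain terminology; no marketing language.",
]

def build_prompt(raw_notes):
    """Combine fixed constraints with per-document raw notes."""
    rules = "\n".join(f"- {rule}" for rule in CONSTRAINTS)
    return (
        "Restructure the following engineering notes into documentation.\n"
        f"Constraints:\n{rules}\n\n"
        f"Notes:\n{raw_notes}"
    )

print(build_prompt("cache stampede after deploy; p99 spiked for 30m"))
```

Keeping the constraints in code, not ad hoc chat messages, is what makes the output uniform across dozens of documents.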

This is not about hiding incompetence or fabricating expertise. As someone with a naturally adversarial mindset, I recognize how trivially gameable traditional recruitment systems are. AI can generate convincing technical prose, stuff keywords, and inflate credentials at scale. The CV-based vetting model has a fundamental vulnerability.

We have already seen real-world exploitation: candidates with fabricated credentials getting hired into senior technical roles, sometimes remaining undetected for months. The attack surface is obvious. A well-prompted LLM can generate a resume that passes ATS filters and impresses human reviewers. It can fabricate project descriptions, claim expertise in trending technologies, and mimic the language of experienced engineers. The barrier to entry for resume fraud has collapsed to near zero.

Public documentation serves as a countermeasure. While anyone can claim expertise on a resume or generate plausible-sounding content, sustaining technical depth across multiple timestamped documents with consistent architectural thinking and verifiable production scenarios is orders of magnitude harder to fake. This is authentication through proof of work rather than self-attestation.

I use LLMs for presentation optimization while maintaining operational security, not to fabricate competency but to demonstrate how genuine technical knowledge should be documented in an era where both the evaluation systems and the attack vectors are increasingly automated. This is threat modeling applied to career infrastructure.

The shift to machine-mediated evaluation creates both risk and opportunity. Those who document genuine expertise in machine-verifiable formats gain authentication advantages. Those who rely solely on resume claims face increasing scrutiny as evaluators adopt cross-referencing and consistency analysis. The security principle applies here too: defense in depth through multiple layers of verifiable evidence beats single-point authentication every time.

The Compounding Effect

Social media content optimizes for immediate engagement. It decays rapidly in value and discoverability.

Technical documentation compounds over time:

  • Search systems weight comprehensive historical contributions
  • Multiple related documents create semantic clustering, improving retrieval for all of them
  • Years of consistent documentation demonstrate sustained competency better than any single artifact

As AI tools become more sophisticated at parsing historical technical content, your documentation corpus becomes increasingly valuable. You are building an external knowledge base that automated systems can index and verify.

Platform Independence

HTML-based documentation exhibits remarkable durability. Unlike proprietary platforms subject to API changes, access restrictions, and format deprecation, structured HTML remains parseable across decades.

This is not about chasing algorithmic trends. This is about creating durable assets in a format that machines can reliably process regardless of platform changes.

The Strategy

I focus on:

  1. Semantic density: Technical content with sustained focus and minimal filler
  2. Structural clarity: Proper heading hierarchy, logical organization, clear reasoning chains
  3. Verifiable depth: Production postmortems, architecture decisions, security analyses
  4. Consistent output: Regular documentation building a comprehensive corpus
  5. PII removal: Using LLMs to sanitize sensitive information while preserving technical substance

While others optimize for social presence and human engagement, I optimize for machine discoverability and automated verification.

Why This Matters

Automated technical vetting is not coming. It is already here. The tools are primitive now, but they are improving rapidly.

When the machine is the gatekeeper, the engineer with the most parseable, verifiable, and semantically rich documentation wins. The shift from human evaluation to machine evaluation favors those who recognize it early.

I am not just building systems. I am building the dataset that proves I can build them.