Large language models (LLMs) are tools that can accelerate work, support creativity, and provide structure across various tasks. I use them extensively, but my approach to working with LLMs is shaped by a particular philosophy about skill-building and learning that I want to explain first.
To see how I use LLMs in my professional work, refer to A Case Study in LLM Use: Lessons from a Year of Experimentation.
The Foundation: Learning Through Deliberate Practice
My approach to skill development started with fighting games. Training in games like Street Fighter taught me that mastery comes from repetition, pattern recognition, and precise execution. The satisfaction of practicing combos until they became automatic showed me the value of deliberate, focused practice.
I found the same principles in classical guitar. Learning pieces by composers like Villa-Lobos or Tárrega requires breaking down complex passages into smaller segments, drilling them slowly until muscle memory develops, then gradually building speed and fluency. Scales, arpeggios, and technical exercises must be practiced with precision and consistency. The instrument provides immediate, honest feedback: if your technique is sloppy or your timing is off, you hear it instantly.
Both fighting games and classical guitar taught me that real skill comes from sustained, focused repetition with minimal assistance. No shortcuts, no automation, just deliberate engagement with the fundamentals until they become intuitive. I realized these same principles could apply to other domains: programming, mathematics, analytical reasoning, and more.
This led me to adopt minimal-assistance tools for learning. In programming, I chose the vi text editor specifically because it lacks the conveniences of modern IDEs. This might seem counterproductive, but it creates several important benefits for procedural learning:
- Syntax Internalization: Without autocomplete or inline documentation, I must retain language syntax, standard library functions, and API structures in memory. This builds stronger recall and reduces my dependency on external aids.
- Conceptual Understanding: The absence of automated refactoring and code generation tools requires me to understand program structure, data flow, and algorithmic logic at a fundamental level.
- Reduced Cognitive Offloading: Modern IDEs handle many routine tasks automatically, which can prevent the development of procedural fluency. Vi forces me to engage deliberately with each aspect of code construction.
- Cross-Platform Transferability: Skills I develop in vi transfer readily across programming languages, operating systems, and development environments. The mental models I build remain useful regardless of available tooling.
- Error Pattern Recognition: Without real-time error highlighting and correction suggestions, I learn to identify common error patterns through direct experience, strengthening my debugging capabilities.
The initial learning curve is steeper with minimal-assistance tools, but the resulting competence is more robust and adaptable. This philosophy guides how I think about tools in general: sometimes the best tool for learning is the one that provides the least help.
Where LLMs Fit In: Generating Practice Materials
Given my preference for minimal assistance during practice, you might think I would avoid LLMs entirely. Instead, I have found they serve a specific and valuable role: generating the practice environment itself.
This represents a productive division of labor. The LLM handles the mechanical work of creating exercises, while I work through them without assistance. This solves a real problem in skill development: the need for high-volume, varied practice.
Traditionally, practicing coding or mathematics meant working through textbooks or problem sets with limited variety. Once you complete a problem, it is done. You might repeat it once or twice, but the repetition loses effectiveness because you remember the specific solution.
LLMs change this by generating unlimited variations on demand. Here is how I use them:
- Programming Practice: I ask an LLM to generate algorithmic challenges, syntax exercises, or debugging scenarios. I then work through these problems in vi without any assistance, forcing myself to rely on internalized knowledge. Each problem is slightly different, preventing rote memorization while building genuine fluency.
- Mathematics: I request problem sets targeting specific concepts or calculation types. The LLM produces variations with different numbers, configurations, or complexity levels. I solve them by hand, building computational and logical skills through repetition.
- Classical Guitar: I ask LLMs to generate technical exercises, scale patterns, or chord progression studies in different keys or positions. I then practice these on the guitar, applying the same deliberate, slow-to-fast approach I use for repertoire pieces. The LLM can generate fingering variations for arpeggios or create etudes targeting specific techniques like tremolo or rasgueado.
The key insight is that LLM-generated practice materials are disposable and abundant. I am not working through a precious textbook where each problem matters. Instead, I can practice the same type of problem fifty times with different parameters, building fluency through volume.
The LLM removes the bottleneck of finding or creating practice materials, but it does not interfere with the actual learning process. I still practice with minimal assistance, building the same robust, transferable skills. The LLM simply makes it possible to do this at much higher volume and with much greater variety.
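The "unlimited variations" idea above can be sketched in code. This is a toy illustration rather than my actual workflow: build_practice_prompt and its topic tables are hypothetical, and the resulting string would be pasted into whatever LLM interface is at hand (a chat window, an API client, and so on).

```python
import random

def build_practice_prompt(topic: str, difficulty: str, seed: int) -> str:
    """Assemble a prompt requesting one fresh variation of a drill.

    Hypothetical helper: the topic tables are illustrative, and the
    returned string is meant for whatever LLM interface is available.
    """
    task_templates = {
        "sorting": "implement {algo} from scratch, no library calls",
        "recursion": "write a recursive solution for {algo}",
    }
    algos = {
        "sorting": ["merge sort", "quicksort", "heap sort"],
        "recursion": ["tree depth", "permutations", "subset sum"],
    }
    # Seeding makes each drill reproducible while keeping variety cheap.
    random.seed(seed)
    task = task_templates[topic].format(algo=random.choice(algos[topic]))
    return (
        f"Generate one {difficulty} practice problem: {task}. "
        "Give only the problem statement and sample input/output, "
        "no solution."
    )

# Each seed yields a different drill on the same underlying skill.
for seed in range(3):
    print(build_practice_prompt("sorting", "intermediate", seed))
```

The point of the sketch is the shape of the request, not the mechanics: the prompt always asks for the problem without the solution, so the actual solving still happens unassisted.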
Using LLMs for Productive Work
Beyond skill-building, I use LLMs extensively in production workflows. This is where their capabilities shine most clearly:
- Drafting and Structuring Content: I use LLMs to create initial versions of blog posts, technical documentation, articles, and reports. They provide structure and handle the mechanical aspects of getting words on the page.
- Summarizing and Synthesizing: When I have research notes or materials from multiple sources, LLMs help me organize and extract key insights quickly.
- Creative Exploration: For music composition and arrangement, I use LLMs to generate variations, experiment with ideas, and explore possibilities I might not have considered.
- Documentation Quality: LLMs establish a baseline quality for technical writing, ensuring consistency in structure, terminology, and clarity.
In all these applications, LLMs accelerate my iteration cycles. They produce initial drafts or variations quickly, but I retain responsibility for evaluation, refinement, and final decisions. The objective is not to delegate thinking to AI but to expand my cognitive bandwidth and focus on aspects of work that require judgment and creativity.
The pattern is consistent with my approach to practice: LLMs handle mechanical, repetitive, or exploratory work, which shifts my effort toward evaluation, critical analysis, and creative refinement. These are the aspects where my judgment is most valuable.
Understanding LLM Limitations
LLMs have significant constraints that I keep firmly in mind:
- Confidence Without Accuracy: Models present incorrect information with the same confidence as correct information, making errors difficult to detect without verification.
- Context Misunderstanding: LLMs may misinterpret requirements, incorporate inappropriate assumptions, or fail to recognize relevant constraints.
- Reasoning Gaps: Generated content may contain logical inconsistencies, unsupported conclusions, or incomplete arguments that appear superficially coherent.
- Validation Requirements: All LLM outputs require my review. The degree of scrutiny should correspond to the stakes of the application.
These limitations mean LLMs function best as assistive tools rather than autonomous agents. They augment my capability but do not replace the need for expertise, judgment, and verification.
Guidelines for Effective Use
To maximize the utility of LLMs while mitigating their limitations, I follow these principles:
- Treat outputs as drafts: I consider all generated content as starting points requiring refinement rather than finished products.
- Sanitize inputs: I never paste PII (Personally Identifiable Information), proprietary source code, or non-public company data into a public LLM without confirming the data will not be used for training.
- Verify systematically: I check facts, logical consistency, and adherence to requirements. Higher-stakes applications warrant more thorough verification.
- Appropriate task allocation: I use LLMs for repetitive, mechanical, or exploratory work where errors have low cost or are easily detectable. I reserve judgment-intensive and high-stakes decisions for my own evaluation.
- Process lengthy documents: I use LLMs to summarize terms of service, legal documents, and lengthy technical materials. For ToS specifically, I employ adversarial prompts (asking the LLM to identify concerning clauses, unusual restrictions, or rights I'm waiving) to surface potential issues that neutral summaries might downplay. While summaries may miss nuances or misinterpret clauses, engaging with even an imperfect analysis is substantially better than not reading at all. For critical agreements (employment contracts, major purchases, binding commitments), I review key sections directly or consult professionals.
- Maintain skill foundations: I practice regularly in minimal-assistance environments to ensure my capability remains independent of AI tooling availability.
- Focus effort strategically: I concentrate my cognitive resources on evaluation, creative problem-solving, and decision-making rather than mechanical execution.
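The adversarial ToS review mentioned in the guidelines can be sketched as a small prompt builder. The question wording here is illustrative, not a fixed recipe, and build_review_prompt is a hypothetical helper rather than part of any library.

```python
# Illustrative adversarial questions for reviewing a terms-of-service
# document; the exact wording is an example, not a fixed recipe.
ADVERSARIAL_TOS_PROMPTS = [
    "List every clause where I waive a right (arbitration, class "
    "action, content licensing, etc.).",
    "Identify terms that are unusual compared with typical consumer "
    "agreements, and explain why each is unusual.",
    "What can the provider change unilaterally, and with how much notice?",
    "Summarize the worst-case outcome for me if I accept as written.",
]

def build_review_prompt(tos_text: str) -> str:
    """Combine the document text with the adversarial questions."""
    questions = "\n".join(f"- {q}" for q in ADVERSARIAL_TOS_PROMPTS)
    return f"{tos_text}\n\nAnswer each of the following:\n{questions}"
```

Framing the request as "what am I giving up?" rather than "summarize this" is what pushes the model past a neutral overview toward the clauses worth a closer human read.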
When I follow these guidelines, LLMs serve as productivity multipliers that expand the volume and pace of my work without compromising quality or understanding.
Bringing It Together
My use of LLMs is guided by a clear philosophy: build deep, transferable skills through minimal-assistance practice, then use AI to amplify productivity in production work and to generate unlimited practice materials for continued learning.
Whether I am drilling combos in fighting games, working through guitar exercises, writing code in vi, or solving math problems by hand, the core approach remains the same: focused, deliberate practice with immediate feedback and minimal external assistance. This builds the kind of robust, intuitive competence that transfers across contexts and persists regardless of available tooling.
This approach has several advantages. First, my foundational skills remain strong and independent of available tooling. I can work effectively in any environment, whether I have access to modern IDEs, LLMs, or only basic tools like vi. Second, I can leverage LLMs for significant productivity gains without becoming dependent on them for core capabilities. Third, I can practice and learn at much higher volume than would otherwise be possible.
The combination is greater than either approach alone. LLMs expand my bandwidth for routine work and provide unlimited practice materials, while minimal-assistance practice ensures my foundational capabilities remain robust and adaptable. Together, these approaches enable both immediate productivity gains and long-term skill development, positioning me to work effectively with or without AI assistance.
Large language models are powerful tools when used with appropriate awareness of their capabilities and limitations. They accelerate iteration, provide structure, and handle routine tasks effectively. But their utility depends on a foundation of human expertise, which I build deliberately through the kind of focused, minimal-assistance practice that makes skills stick. This combination gives me the best of both worlds: the speed and scale of AI assistance with the depth and adaptability of genuine mastery.