Building with Claude Code: Codifying Dev Philosophy

By Titus Soporan · 7 min read

I’ve been using Claude Code extensively for the past few months and have slowly been building a “coding philosophy” file that I port around with me to each new project. I think many now understand that these tools can give you incredible leverage, but keeping them on the rails can be a challenge.

So here it is: my coding philosophy, distilled into a CLAUDE.md file that I place at the root of any new repo I start. Note that these are my beliefs and opinions about building software; you will likely want to create your own.

Belief 1: Complexity is the enemy#

I find it interesting that this point isn’t highlighted enough (or maybe I’m just not in the right circles on X) but software spends most of its life in maintenance mode. You ship v1, users start using it, and now you’re maintaining it. The code you write today? You’ll be debugging it at 3am six months from now when you’ve forgotten everything about it.

As your product lives and grows, you keep adding code. Features, bug fixes, improvements - more code, more code, more code. This accumulation is what creates complexity. Every new line interacts with existing code, creating dependencies, assumptions, and hidden coupling.

The more code you have, the more places things can break, the harder it becomes to reason about the system.

With that in mind, the goal shouldn’t be to “produce more code”. Instead, think of it as “spending lines of code”. I find this framing helps reduce the urge to write more than necessary.

One great talk that touches on these topics is by Joe Armstrong - The Mess We’re In.

Belief 2: Three (Ordered) Phases#

Software development follows three phases, in order (Kent Beck’s wisdom):

  1. Make it work - Ship something that functions
  2. Make it right - Refactor for clarity (this is where we spend most of our time)
  3. Make it fast - Only when profiling proves it’s needed

Most developers skip straight to phase 3, trying to optimize before they even know if there’s a problem. This is exactly what “premature optimization is the root of all evil” warns against.

Think about it: you can’t optimize what you aren’t measuring. And even when you do measure, performance has many layers - it could be an algorithm, a missing database index, a caching opportunity, or infrastructure. Without data, you’re just guessing.
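The cheapest way to replace guessing with data is to actually time things before touching them. A minimal sketch, assuming nothing beyond the standard library (the two `total_squares_*` functions are invented stand-ins, not from any real codebase):

```python
# Measure before you optimize: time two candidate implementations
# instead of assuming which one is the bottleneck.
import timeit

def total_squares_loop(n: int) -> int:
    """Explicit loop version."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def total_squares_sum(n: int) -> int:
    """Generator-expression version."""
    return sum(i * i for i in range(n))

# Same result, possibly different cost -- only timing tells you which
loop_time = timeit.timeit(lambda: total_squares_loop(10_000), number=100)
sum_time = timeit.timeit(lambda: total_squares_sum(10_000), number=100)
print(f"loop: {loop_time:.4f}s  sum: {sum_time:.4f}s")
```

Often the numbers show the “slow” version is fine, and the real cost is somewhere else entirely (a missing index, a chatty network call), which is the point.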

More importantly, your users don’t care about your clever optimizations if the software doesn’t work correctly. A slow feature that works beats a fast one that’s broken.

Belief 3: Extensibility is the marker of good design#

Here’s the paradox: you know the least about a problem when you start building, and the most when you’re done. Yet you have to make architectural decisions at the beginning that will affect everything that comes after.

The traditional answer is “get your data model right” - as Linus Torvalds said:

“Bad programmers worry about the code. Good programmers worry about data structures and their relationships.”

This is true - bad data models create complexity cascades that ripple through your entire system. But here’s the thing: you won’t get it right on your first try. You can’t. You don’t know enough yet.

So what matters more than getting it “right”? Getting it extensible. Design systems that can evolve. Make decisions that are easy to change later. Build interfaces that can accommodate new requirements without breaking existing code.

Joshua Bloch’s talk on API design nails this: “You can always add, you can’t always remove.” Start minimal, make it extensible, evolve as you learn.

This is how you avoid technical debt - not by predicting the future perfectly, but by building systems that can adapt when the future turns out differently than expected.
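Bloch’s “you can always add” has a concrete shape in code: grow an interface with keyword-only defaults so existing call sites never break. A hypothetical sketch (`export_report` and its v2 are invented for illustration):

```python
# v1: the minimal interface you ship first
def export_report(rows: list[dict]) -> str:
    """Render rows as comma-separated lines."""
    return "\n".join(",".join(str(v) for v in row.values()) for row in rows)

# v2: a new requirement (a header row) arrives later. It is added as a
# keyword-only parameter with a default, so every v1 call site still
# works unchanged -- we added, we didn't remove.
def export_report_v2(rows: list[dict], *, include_header: bool = False) -> str:
    lines = []
    if include_header and rows:
        lines.append(",".join(rows[0].keys()))
    lines.extend(",".join(str(v) for v in row.values()) for row in rows)
    return "\n".join(lines)
```

The design choice here is the `*` forcing `include_header` to be keyword-only: future additions stay explicit at call sites and can’t silently shift positional arguments.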

In Practical Terms#

So how does this translate into my day-to-day? I’ve been keeping a top-level CLAUDE.md file (note that you can also have a CLAUDE.md in each subdirectory) to guide all my development.

I use it to store high-level beliefs as well as patterns I’ve grown to like, e.g. a service layer for everything versus thick APIs or models.

Here are some examples:

1. Avoid clever code#

# ❌ AVOID: Clever code that looks smart
class AbstractFactoryBuilder(metaclass=SingletonMeta):
    def __init__(self):
        self.strategies = {}
        self.observers = []
    # ... 200 lines of "enterprise" code
 
# ✅ GOOD: Simple, explicit
def process_payment(amount: float, user_id: str) -> tuple[bool, str]:
    """Process a payment. Returns (success, message)."""
    if amount <= 0:
        return False, "Invalid amount"
 
    # Direct, obvious logic
    try:
        charge_card(user_id, amount)
        return True, "Payment processed"
    except CardError as e:
        return False, str(e)

When this breaks, which one would you rather debug?

2. Functions Over Classes#

(Again, this is my personal belief.)

# ❌ Nope
class EmailSender:
    def __init__(self, smtp_config):
        self.config = smtp_config
 
    def send(self, to, subject, body):
        # ... implementation
 
# ✅ Just use a function
def send_email(to: str, subject: str, body: str) -> tuple[bool, str]:
    """Send an email. Returns (success, message)."""
    # ... implementation

Functions are easier to test, easier to understand, and easier to debug. Save classes for when you genuinely need to maintain state between calls (spoiler: you rarely do).
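To show how low the testing overhead gets, here’s a hedged sketch in the same `(success, message)` tuple style (`validate_recipient` is a hypothetical helper, not part of the post’s codebase): the “test” is nothing but direct calls and asserts.

```python
# A plain function in the (success, message) style: no SMTP config to
# mock, no object to construct, no state to reset between tests.
def validate_recipient(to: str) -> tuple[bool, str]:
    """Cheap sanity check on an email address. Returns (success, message)."""
    if "@" not in to or to.startswith("@") or to.endswith("@"):
        return False, "Invalid address"
    return True, "OK"

# Testing is just calling it
assert validate_recipient("user@example.com") == (True, "OK")
assert validate_recipient("not-an-email") == (False, "Invalid address")
```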

3. Service Layer Everything#

Your business logic should live in one place.

# api_views.py - Thin HTTP wrapper
@api_view(["POST"])
def activate_client_view(request, client_id):
    # Just call the service
    success, message = activate_client(client_id, request.user)
    return api_response(success=success, message=message)
 
# services/client.py - Business logic lives here
def activate_client(client_id: str, user: User) -> tuple[bool, str]:
    """All the actual logic is here, testable and reusable."""
    with transaction.atomic():
        client = Client.objects.get(id=client_id)
        client.status = "active"
        client.save()
        create_activity_log(client, "activated", user)
        return True, "Client activated"

This pattern makes testing trivial - just call the service function directly. No HTTP mocking, no request factories, just functions that return tuples.
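To make that concrete without pulling in Django, here’s a simplified in-memory sketch of the same idea: `FakeClient` and the dict-based store are stand-ins for the real models, and the test is literally one function call plus asserts.

```python
# In-memory stand-ins for the ORM, so the service function can be
# exercised directly with no HTTP layer and no database.
from dataclasses import dataclass

@dataclass
class FakeClient:
    id: str
    status: str = "pending"

clients = {"c1": FakeClient(id="c1")}
activity_log: list[tuple[str, str]] = []

def activate_client(client_id: str, user: str) -> tuple[bool, str]:
    """Business logic, same (success, message) shape as above."""
    client = clients.get(client_id)
    if client is None:
        return False, "Client not found"
    client.status = "active"
    activity_log.append((client_id, f"activated by {user}"))
    return True, "Client activated"

# The whole "test": call the service, assert on the result
success, message = activate_client("c1", "titus")
assert success and clients["c1"].status == "active"
assert activity_log == [("c1", "activated by titus")]
```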

Does this work?#

I find that when using Claude Code I’m mostly optimizing to reduce rework, i.e. having to tell it to do these things again. This might be a solved problem in the future (memory), but right now it isn’t.

This approach seems to work well for keeping Claude Code on the rails I want, and it also gives me a reference point whenever I need to remind it.

Make Your Own#

Creating your own philosophy file is simple. Answer these questions:

  1. What makes code hard to maintain? (Your complexity triggers)
  2. What patterns do you always use? (Your go-to solutions)
  3. What mistakes do you see repeatedly? (Your “please never do this” list)
  4. How do you handle errors? (Your failure philosophy)
  5. What would you tell yourself 6 months ago? (Your learning)

Then document it with examples. Real code, not abstract principles.

The Template#

I’ve created a CLAUDE.md template you can download and adapt.

Here’s the core structure:

# CLAUDE.md - Development Philosophy
 
## Core Principles
 
1. Complexity is the enemy
2. Make it work → Make it right → Make it fast
3. Functions over classes
4. Service layer for business logic
5. Test with scripts, not frameworks
 
## Patterns to Follow
 
[Your examples of good code]
 
## Patterns to Avoid
 
[Your examples of bad code]
 
## Testing Philosophy
 
[How you verify things work]

The Bottom Line#

Software delivers value when it’s deployed and has users, not when it’s “perfectly architected” but sitting undeployed. Focus on reducing complexity and increasing maintainability.

The best code isn’t the smartest code. It’s the code that continues to deliver value without causing you headaches at 3am.