Tech Stack Decisions: What We Actually Use and Why
Forget framework comparison charts. Here's how we actually choose technologies after building software for a decade, and why our defaults might surprise you.
Every year brings a new batch of "definitive tech stack" articles. They compare React vs Vue vs Svelte. They evaluate Node vs Go vs Rust. They present matrices of features and benchmarks, as if choosing a tech stack is an optimization problem with a single correct answer.
It isn't. Technology choices are fundamentally about tradeoffs, context, and the particular constraints of your project, team, and business. The stack that's perfect for a startup building quickly is wrong for an enterprise replacing a legacy system. The framework that shines for consumer apps might struggle with back-office tools.
After a decade of building software for companies ranging from startups to automotive manufacturers, here's how we actually make these decisions—and what we've settled on as defaults.
The meta-point is worth stating explicitly: spending weeks debating technology choices is usually worse than making a reasonable choice quickly and moving on. Most modern technologies are good enough. Whatever marginal productivity one framework has over another matters far less than the difference between shipping in March and shipping in July.
The Uncomfortable Truth About Stack Decisions
Most stack debates miss the point. The difference between React and Vue doesn't determine whether your project succeeds. Competent teams ship great products with either. Incompetent teams struggle with both.
What actually matters:
Can you hire for it? The most elegant technology is useless if you can't staff a team. We've seen projects founder not because the technology was wrong, but because the pool of available developers was too small. React's ecosystem might be overwhelming, but the hiring pool is enormous.
Can you maintain it? The new framework is exciting today. In two years, you need to update dependencies, patch security issues, and onboard developers who weren't there at the start. Technologies with active communities and corporate backing are safer bets than brilliant experiments.
Does your team know it? Productivity with a familiar technology usually beats theoretical superiority of an unfamiliar one. The learning curve is real, and it costs time during the period when your project is most fragile.
Is it boring enough? Boring technology is battle-tested technology. The bugs are known. The workarounds are documented. The edge cases are understood. New technology means you're the one discovering problems.
Does it integrate with what you have? Brownfield projects—most projects—have existing systems. The new technology needs to talk to the old technology. APIs, authentication, data formats. A technology that's excellent in isolation but doesn't integrate with your reality isn't excellent.
What's the escape route? Technologies become unmaintained. Companies pivot or go bankrupt. How painful is it to migrate away if you need to? Open standards and common patterns make escapes easier than proprietary lock-in.
Our Actual Stack (And Why)
Here's what we actually use when clients ask us to choose. Not a comparison chart—our actual defaults and the reasoning behind them.
TypeScript Everywhere
This isn't controversial anymore, but it's the foundation everything else builds on. TypeScript catches bugs before runtime, documents code through types, and enables tooling that makes development faster.
We use TypeScript on both frontend and backend. The full-stack type safety is the compelling advantage—a type change in the backend immediately surfaces affected frontend code. Refactoring becomes tractable because the compiler finds the problems.
Some argue TypeScript adds overhead. It does, initially. But we've never regretted using TypeScript on any project longer than a few months. We have regretted using JavaScript.
The migration path matters. Starting with TypeScript is easy; migrating a large JavaScript codebase to TypeScript is painful. If there's any chance a project will grow, start with TypeScript. The cost is low; the optionality is valuable.
Strict mode is worth the friction. "strict": true in your tsconfig catches more bugs but requires more type annotations. Teams sometimes start with lax settings intending to tighten later. They rarely do. Start strict; loosen if you must.
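To make the stakes concrete, here's the kind of bug strictNullChecks catches at compile time (a minimal sketch; the User type is hypothetical):

```typescript
interface User {
  name: string;
  nickname?: string; // optional: may be undefined at runtime
}

function greet(user: User): string {
  // Under "strict": true the compiler rejects the naive version:
  // return `Hi, ${user.nickname.toUpperCase()}`;
  //        ^ error: 'user.nickname' is possibly 'undefined'

  // The fix strict mode pushes you toward:
  return `Hi, ${(user.nickname ?? user.name).toUpperCase()}`;
}
```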
Type generation from APIs is increasingly valuable. OpenAPI specifications, GraphQL schemas, tRPC—these can generate types automatically. The API contract becomes enforced at compile time. Changes in the backend surface as TypeScript errors in the frontend.
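As an illustration, here's the tRPC flavor in miniature (a sketch, not production wiring; the userById procedure is hypothetical):

```typescript
import { initTRPC } from '@trpc/server';
import { z } from 'zod';

const t = initTRPC.create();

// The backend defines the contract once, with runtime validation.
export const appRouter = t.router({
  userById: t.procedure
    .input(z.object({ id: z.string() }))
    .query(({ input }) => ({ id: input.id, name: 'Ada Lovelace' })),
});

// The frontend imports only this type. Rename a field here and the
// compiler flags every affected call site in the frontend.
export type AppRouter = typeof appRouter;
```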
React for UI
We've built significant applications with Vue, Angular, and Svelte. We still choose React for most projects.
The reason isn't that React is technically superior. Vue's composition API is elegant. Svelte's compilation approach is clever. Angular provides more structure. Each has genuine advantages.
We choose React because of the ecosystem. When we need a date picker, there are a dozen options. When we encounter a weird bug, someone has blogged about it. When we hire, most frontend developers know React. When we need a specific integration—payment UI, mapping, charts—there's a React library.
This calculus might change. Svelte's ecosystem is growing. Vue has strong adoption in specific markets. But today, React's ecosystem advantage compounds into real productivity gains across projects.
The React learning curve deserves acknowledgment. React's flexibility means you must make many decisions—state management, routing, styling, form handling. Each decision has multiple valid approaches, which can paralyze new teams. Starting with an opinionated meta-framework like Next.js reduces decisions and provides sensible defaults.
React's evolution continues. Server Components change how applications are structured. The React team's focus on performance through compilation (React Compiler) suggests the future involves less manual optimization. Keeping current with React's direction matters for long-term projects.
Next.js or TanStack Start for Full-Stack
Single-page applications have their place, but server-side rendering wins for most scenarios. Better SEO. Faster initial loads. Simpler data fetching patterns.
Next.js is the safe choice—massive community, Vercel's backing, proven at scale. We use it for projects where clients will maintain the code themselves or where the community support matters.
TanStack Start is our current choice for projects we control. It's newer and less proven, but it integrates beautifully with TanStack Router and Query. The type safety is excellent. For teams comfortable with a smaller ecosystem, the developer experience is superior.
The deployment story matters too. Next.js is built with Vercel in mind, and deploying elsewhere takes more configuration. TanStack Start deploys cleanly to Cloudflare Workers, which matches our infrastructure preferences.
Remix deserves mention as a solid alternative in this space. Its philosophy—embracing web standards, progressive enhancement—produces applications that work well even when JavaScript fails to load. For content-heavy sites or applications with accessibility requirements, Remix's approach has merit.
Astro fills a specific niche: content sites that need minimal JavaScript. Blogs, documentation, marketing pages. Astro's islands architecture ships zero JavaScript by default, adding interactivity only where needed. For the right use case, it's excellent.
The full-stack framework landscape is consolidating around a few strong options. Whatever you choose from this tier is defensible. The differences matter less than choosing one and building expertise.
PostgreSQL, Full Stop
We've used MongoDB, DynamoDB, MySQL, and various specialized databases. PostgreSQL remains our default for everything.
It handles relational data with ACID guarantees. It handles JSON documents when you need flexibility. It handles full-text search well enough to avoid Elasticsearch for most cases. With pgvector, it handles embeddings for AI applications. It runs on every cloud, and managed versions are commoditized and cheap.
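A sketch of that versatility from a Node.js client (the tables and columns are hypothetical, and the last query assumes the pgvector extension is installed):

```typescript
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// JSONB: schemaless attributes on an otherwise relational row.
await pool.query(
  `SELECT id FROM products WHERE attributes @> $1::jsonb`,
  [JSON.stringify({ color: 'red' })]
);

// Full-text search, no Elasticsearch required.
await pool.query(
  `SELECT id, ts_rank(search_vector, query) AS rank
     FROM articles, plainto_tsquery('english', $1) AS query
    WHERE search_vector @@ query
    ORDER BY rank DESC
    LIMIT 10`,
  ['boring technology']
);

// Vector similarity for AI features, via pgvector.
await pool.query(
  `SELECT id FROM documents ORDER BY embedding <-> $1::vector LIMIT 5`,
  ['[0.1, 0.2, 0.3]']
);
```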
The argument for other databases is usually performance at extreme scale. Fair enough. But most applications don't need extreme scale. They need reliable data storage with a good query language and excellent tooling. PostgreSQL delivers.
For truly massive scale or specialized access patterns, we add purpose-built databases as needed. But "as needed" means after proving PostgreSQL isn't enough, not as a premature optimization.
SQLite deserves consideration for specific cases. Local-first applications, embedded databases, single-server deployments. The simplicity of "the database is a file" eliminates operational complexity. Tools like LiteFS enable replication across regions. For applications that can work within SQLite's constraints, the operational simplicity is compelling.
The ORM question splits developers. Some find Prisma or Drizzle productive; others prefer raw SQL or query builders like Kysely. Our take: ORMs add value for basic CRUD but become constraints for complex queries. Most applications need both—ORM for simple operations, raw SQL for complex ones. Choose tools that support both modes.
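Kysely, mentioned above, shows the shape of that dual-mode workflow (a sketch against a hypothetical users table):

```typescript
import { Kysely, PostgresDialect, sql } from 'kysely';
import { Pool } from 'pg';

interface Database {
  users: { id: string; email: string; created_at: Date };
}

const db = new Kysely<Database>({
  dialect: new PostgresDialect({ pool: new Pool() }),
});

// Simple CRUD: the query builder is type-checked end to end.
const user = await db
  .selectFrom('users')
  .selectAll()
  .where('email', '=', 'ada@example.com')
  .executeTakeFirst();

// Complex query: drop to raw SQL without switching clients.
const { rows } = await sql<{ day: Date; signups: number }>`
  SELECT date_trunc('day', created_at) AS day, count(*)::int AS signups
    FROM users
   GROUP BY 1
   ORDER BY 1 DESC
   LIMIT 30
`.execute(db);
```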
Database migrations should be version-controlled and automated. Whether Prisma Migrate, Drizzle Kit, or plain SQL migrations, the database schema should be reproducible from code. Manual schema changes lead to environments that drift apart.
Node.js for APIs, Python for AI
Backend language debates generate heat without light. Node.js, Go, Python, Rust—all can power production applications. The right choice depends on what you're building and who's building it.
Our default is Node.js with TypeScript for API services. The full-stack type safety with TypeScript frontend is genuinely valuable. The async I/O model fits API workloads well. The npm ecosystem, despite its flaws, provides libraries for everything.
For CPU-intensive work—AI/ML, heavy data processing—we use Python. The ecosystem is irreplaceable. Fighting the current to do machine learning in Node or Go wastes time.
We're pragmatic about boundaries. A Python ML model served behind a FastAPI endpoint, called by a Node.js API, called by a React frontend—this is fine. Microservices exist for exactly this kind of flexibility.
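The boundary can be as thin as one HTTP call. A sketch of the Node.js side, assuming a hypothetical /classify endpoint on the Python service:

```typescript
// The Python FastAPI service is reachable inside the cluster;
// MODEL_URL and the response shape are assumptions for illustration.
const MODEL_URL = process.env.MODEL_URL ?? 'http://ml-service:8000';

export async function classify(
  text: string
): Promise<{ label: string; score: number }> {
  const res = await fetch(`${MODEL_URL}/classify`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ text }),
  });
  if (!res.ok) throw new Error(`model service failed: ${res.status}`);
  return (await res.json()) as { label: string; score: number };
}
```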
Go enters the picture for performance-critical services. When you need low latency, efficient concurrency, or small memory footprint, Go delivers. It's also simpler to deploy than Node—single binaries with no runtime dependencies. For API gateways, proxies, or any service where raw performance matters, Go is worth considering.
Rust is the nuclear option. When you need maximum performance and can't tolerate garbage collection pauses, Rust is there. We rarely reach for it because the development overhead is substantial. But for the right problems—low-level systems, WebAssembly, performance-critical libraries—nothing else compares.
The edge computing trend is changing deployment calculus. Cloudflare Workers, Deno Deploy, Vercel Edge Functions—these run JavaScript at the edge with millisecond cold starts. For latency-sensitive workloads, running code close to users matters. The Node.js ecosystem increasingly works in these environments.
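For a sense of the programming model, here's a complete Cloudflare Worker (types assume @cloudflare/workers-types):

```typescript
// Plain TypeScript against web-standard Request/Response APIs.
// Deployed with `wrangler deploy`, it runs in the data center
// closest to each user, with near-zero cold start.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === '/ping') {
      return Response.json({ ok: true, at: new Date().toISOString() });
    }
    return new Response('Not found', { status: 404 });
  },
};
```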
AWS by Default, Cloudflare for Edge
AWS is complicated, expensive, and has a terrible user interface. We still use it for complex applications because the breadth of services is unmatched. When a project needs RDS, S3, Lambda, SQS, and CloudFront, stitching together equivalents from smaller providers costs more time than living with AWS's complexity.
For simpler deployments, especially static sites and edge-oriented applications, Cloudflare Workers is our preference. The deployment model is elegant. The global distribution is free. The pricing is aggressive. The platform grows more capable each year.
The choice between them is usually obvious. Complex applications with many AWS services stay on AWS. Simpler applications, especially those benefiting from edge execution, go to Cloudflare.
GCP and Azure have their niches. GCP for organizations deep in Google's ecosystem or needing specific GCP services (BigQuery, for instance). Azure for enterprise environments already committed to Microsoft. Neither is wrong; AWS's breadth just covers more situations.
The serverless versus containers debate continues. Lambda and equivalents work well for event-driven, bursty workloads. Containers on ECS, EKS, or Cloud Run work better for consistent workloads or when you need more control. Both are valid. The trend is toward containers with serverless characteristics—fast scaling, pay-per-use pricing—rather than pure serverless functions.
Infrastructure as code is non-negotiable for anything beyond experiments. Terraform, Pulumi, AWS CDK—the specific tool matters less than having code that defines your infrastructure. "The production environment exists only in someone's memory" is a disaster waiting to happen.
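With Pulumi, named above, that definition is ordinary TypeScript. A minimal sketch (resource names are illustrative):

```typescript
import * as aws from '@pulumi/aws';

// The bucket and queue now live in code review and version
// control, not in someone's memory of the AWS console.
const assets = new aws.s3.Bucket('app-assets', {
  versioning: { enabled: true },
});

const jobs = new aws.sqs.Queue('jobs', {
  visibilityTimeoutSeconds: 60,
});

export const bucketName = assets.bucket;
export const queueUrl = jobs.url;
```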
What We Avoid
Strong defaults require strong opinions about what not to use.
Bleeding-edge frameworks solve problems we don't have while introducing problems we can't yet anticipate. We let other teams find the sharp edges. When a technology has been in production for two years and the community still loves it, we'll evaluate.
Resume-driven development optimizes for developer learning over project success. If someone wants to use Rust for a CRUD API because they're learning Rust, the answer is no. Pick the technology that matches the problem, not the technology you want experience with.
Microservices for everything is usually premature optimization. A monolith that's well-structured is easier to reason about, deploy, and debug than a distributed system. Extract services when you have a proven reason, not as an architectural default.
Database-per-service in particular is often premature. The coordination problems of distributed data are harder than the benefits until you're operating at significant scale. Start with one database. Split when you must.
Kubernetes for everything adds complexity that most teams don't need. K8s is powerful but operationally demanding. Managed services like Cloud Run, Railway, or Fly.io often provide sufficient orchestration without the overhead. Reach for K8s when you genuinely need its capabilities, not because it's the "serious" choice.
GraphQL by default is another pattern we see overused. GraphQL solves real problems—over-fetching, under-fetching, client-driven queries. But it adds complexity—schema management, tooling, learning curve. REST APIs are fine for most applications. Use GraphQL when the problems it solves are problems you actually have.
The Real Framework for Decisions
When evaluating a technology for a specific project:
Start with constraints, not features. What's the team's existing expertise? What's the timeline? What's the hiring outlook? What existing systems need integration? Constraints eliminate options faster than feature comparison.
Favor boring technology. Unless you have a specific, demonstrated need for something newer, prefer the proven option. The bugs are known. The documentation exists. The community can help.
Plan for two years, not ten. Technology changes. Your requirements will change. The team will change. Making the perfect choice for a decade is impossible and not worth attempting. Make a good choice for the next phase.
Document your reasoning. When future developers (including future you) wonder why this stack was chosen, they should be able to read the rationale. "It seemed cool at the time" isn't a rationale. "We chose X over Y because of specific constraints A, B, C" is.
Accept that you'll be wrong sometimes. No framework prevents all bad decisions. Sometimes the choice that seemed right doesn't work out. The goal isn't perfect decisions—it's reasonable decisions that you can course-correct from.
Involve the team in decisions. The people who will live with the technology should have input on choosing it. Top-down mandates breed resentment. Collaborative decisions build ownership. The engineering satisfaction from using technologies the team chose has real productivity implications.
When to Ignore Our Defaults
Our defaults are defaults, not mandates. Specific situations call for different choices:
If you're building for extreme scale—millions of concurrent connections, sub-millisecond latency requirements—Go or Rust might be necessary. Node.js has limits.
If your team has deep expertise in a different stack, switching to ours costs more than it gains. Vue experts should use Vue. Ruby experts should use Rails.
If you're in a regulated industry with specific requirements, those requirements override preferences. Banking software has different constraints than consumer apps.
If you're building something highly specialized—game engines, embedded systems, high-frequency trading—general-purpose web stacks aren't the answer.
The point of having defaults isn't that defaults are always right. It's that defaults free you to focus on decisions that actually matter for your specific situation.
The Meta-Decision
The most important technology decision is how you make technology decisions.
Some organizations let each team choose independently. This maximizes team autonomy but creates fragmentation. Every team runs different stacks; shared tooling is impossible; engineers can't move between teams easily.
Some organizations mandate everything centrally. This maximizes consistency but kills innovation. Teams can't adopt better tools; the standard stack becomes outdated; talented engineers leave for organizations that let them learn.
The balance that works: central standards for cross-cutting concerns (authentication, observability, deployment) with team autonomy within guardrails. Teams can choose their frameworks and libraries within approved categories. Innovation happens at the edges; integration points stay consistent.
Whatever approach you choose, make it explicit. Implicit standards are worse than no standards—teams don't know what's expected, decisions are arbitrary, and consistency happens by accident if at all.
What We're Watching
Technology evolves. Our defaults update. Here's what we're paying attention to:
AI-assisted development is changing how we write code. Copilot, Cursor, Claude—these tools are productivity multipliers for experienced developers. The stack implications are emerging: AI assistants perform best in languages with abundant public code and documentation, and TypeScript benefits from its extensive public codebases.
Edge computing continues expanding. More computation moving to the edge, closer to users. The serverless model at the edge—fast cold starts, global distribution—suits many workloads. Our shift toward Cloudflare reflects this trend.
WebAssembly is finding its niche. WASM runs in browsers and increasingly on servers. It enables code reuse across platforms and lets languages other than JavaScript run in browser contexts. We're not building with WASM by default, but we're watching its maturation.
Local-first architecture is an interesting countertrend to cloud-everything. Applications that work offline, sync when connected, keep data on user devices. The technology is maturing—CRDTs, local databases, sync protocols. For specific use cases—note-taking, creative tools, privacy-sensitive applications—local-first makes sense.
None of these are defaults yet. They're possibilities we're exploring. By next year, some might be standard recommendations.
Testing Strategies by Stack
Testing philosophy should match your technology choices. What works for a Rails monolith differs from what works for a microservices architecture.
Frontend testing has matured. Component testing with Vitest or Jest catches unit-level issues. Integration tests with Testing Library verify component interactions. End-to-end tests with Playwright or Cypress validate user flows. The pyramid still applies—more unit tests, fewer E2E tests—but E2E tests have become faster and more reliable.
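A representative component test with Vitest and Testing Library (the Counter component is hypothetical, and the toBeInTheDocument matcher assumes the jest-dom setup):

```typescript
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { describe, expect, it } from 'vitest';
import { Counter } from './Counter'; // hypothetical component under test

describe('Counter', () => {
  it('increments when the button is clicked', async () => {
    render(<Counter />);
    await userEvent.click(
      screen.getByRole('button', { name: /increment/i })
    );
    expect(screen.getByText('Count: 1')).toBeInTheDocument();
  });
});
```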
Backend testing depends on architecture. Monoliths benefit from comprehensive integration tests against real databases. Microservices need contract testing to verify services communicate correctly. Event-driven systems need tests that verify event handling and saga completion.
API testing deserves dedicated attention. OpenAPI specifications enable contract testing between frontend and backend. Tools like Postman or Hurl automate API test suites. Schema validation catches contract violations before they reach production.
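A small contract check, sketched with zod against a hypothetical endpoint (in practice the schema would be generated from the OpenAPI specification rather than written by hand):

```typescript
import { z } from 'zod';
import { expect, it } from 'vitest';

// Hypothetical contract for GET /users/:id.
const UserResponse = z.object({
  id: z.string(),
  email: z.string().email(),
  createdAt: z.string().datetime(),
});

it('GET /users/:id honors the contract', async () => {
  const res = await fetch('http://localhost:3000/users/123');
  expect(res.status).toBe(200);
  // parse() throws if the response drifts from the contract.
  UserResponse.parse(await res.json());
});
```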
Database testing is often neglected. Migrations should be tested—can they run on production data sizes? Can they be rolled back? Query performance should be validated against realistic data. Production databases behave differently than development databases with ten rows.
Our approach: treat infrastructure code as rigorously as application code. Terraform plans get reviewed. Infrastructure changes are tested in staging. Database migrations are rehearsed. The system you ship is more than the application.
DevOps and CI/CD Choices
The deployment pipeline is part of the stack. Choices here affect developer productivity and production reliability.
GitHub Actions is the safe default. Integrated with where your code lives. Massive ecosystem of community actions. Good enough for most projects without the operational overhead of Jenkins or self-hosted CI.
Larger organizations might need more. GitLab CI/CD offers tighter enterprise controls and self-hosting. CircleCI and Buildkite handle extreme scale. Jenkins remains relevant where customization requirements exceed what hosted CI offers.
Container registries and artifact management. Docker Hub for public images. Private registries—GitHub Container Registry, AWS ECR, Google Artifact Registry—for proprietary code. Image scanning for vulnerabilities before deployment.
Infrastructure as code, as argued above, is non-negotiable. Terraform for cloud-agnostic infrastructure. Pulumi for teams that prefer general-purpose languages. AWS CDK for AWS-committed teams.
GitOps for Kubernetes deployments. ArgoCD or Flux managing cluster state from git repositories. Pull-based deployment models that improve auditability and reliability. If you're running Kubernetes, GitOps is the deployment pattern.
Observability Stack
You can't improve what you can't measure. Observability tools reveal what's happening in production.
Metrics with Prometheus and Grafana. The open-source standard for metrics collection and visualization. Grafana dashboards for operational visibility. Alertmanager for notifications when things go wrong.
Distributed tracing with Jaeger or Tempo. Traces show request flow across services. Essential for debugging microservices where problems might originate anywhere. OpenTelemetry provides vendor-neutral instrumentation.
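Setup is brief. A sketch with the OpenTelemetry Node SDK, exporting over OTLP to whichever backend you run (the endpoint is an assumption):

```typescript
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// Vendor-neutral: the same instrumentation feeds Jaeger, Tempo,
// or a SaaS APM, depending on what listens at the OTLP endpoint.
const sdk = new NodeSDK({
  serviceName: 'api',
  traceExporter: new OTLPTraceExporter({
    url: process.env.OTLP_ENDPOINT ?? 'http://localhost:4318/v1/traces',
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
```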
Log aggregation with Loki or Elasticsearch. Centralized logs from all services. Correlation IDs connecting logs to traces. Retention policies balancing storage costs against debugging needs.
Application Performance Monitoring. Tools like Datadog, New Relic, or Sentry provide integrated views. Higher cost but lower operational overhead. Good for teams without dedicated platform engineering.
Synthetic monitoring and RUM. Pingdom or Checkly for synthetic checks from external locations. Real User Monitoring for actual user experience metrics. Both perspectives are valuable—synthetic catches outages; RUM reveals real-world performance.
The observability stack should give you confidence that production is healthy and the ability to debug quickly when it's not. If you can't answer "is the system working right now?" in thirty seconds, your observability needs work.
Security in the Stack
Security tools should be part of the development workflow, not an afterthought.
Dependency scanning. Dependabot, Snyk, or Renovate identifying vulnerable dependencies. Automated updates where possible. Security reviews for updates that can't be automated.
Static analysis. ESLint security plugins, Semgrep rules, CodeQL for vulnerability detection. Running in CI to catch issues before merge.
Secret detection. GitGuardian or TruffleHog scanning for committed secrets. Pre-commit hooks preventing secrets from reaching the repository.
Container security. Trivy or Clair scanning images for vulnerabilities. Minimal base images reducing attack surface. Non-root containers by default.
Runtime protection. Web application firewalls at the edge. Rate limiting to prevent abuse. CSRF and XSS protection built into frameworks.
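Rate limiting doesn't have to be exotic. A naive in-memory sketch as Express middleware; a deployment with multiple instances would back this with a shared store such as Redis:

```typescript
import express from 'express';

const WINDOW_MS = 60_000; // one-minute window
const LIMIT = 100;        // max requests per IP per window
const hits = new Map<string, { count: number; resetAt: number }>();

const app = express();

app.use((req, res, next) => {
  const key = req.ip ?? 'unknown';
  const now = Date.now();
  const entry = hits.get(key);
  if (!entry || now > entry.resetAt) {
    hits.set(key, { count: 1, resetAt: now + WINDOW_MS });
    return next();
  }
  if (entry.count >= LIMIT) {
    res.setHeader('Retry-After', Math.ceil((entry.resetAt - now) / 1000));
    return res.status(429).send('Too many requests');
  }
  entry.count += 1;
  next();
});
```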
Security tooling is table stakes. The question isn't whether to use these tools—it's which specific tools and how to integrate them into developer workflows without excessive friction.
When to Upgrade
Technology ages. Dependencies become unmaintained. Security vulnerabilities accumulate. When should you upgrade?
Stay current on frameworks. React, Next.js, your server framework—these should be reasonably current. Security patches, performance improvements, and ecosystem compatibility all depend on staying updated. The longer you wait, the harder upgrades become.
Evaluate new majors carefully. Major version upgrades often involve breaking changes. Evaluate the migration cost, the benefits, and the urgency. Not every major version needs immediate adoption—but you should at least understand what changed.
Monitor for deprecations. When libraries announce deprecations, start planning. Plenty of projects are stuck on unmaintained dependencies because they delayed too long. Deprecation notices are friendly warnings; use them.
Replace abandoned dependencies. When a dependency stops receiving updates, find alternatives. The security risk of unmaintained code compounds over time. Better to migrate proactively than scramble after an exploit.
Allocate time for maintenance. Some percentage of development effort—10-20% is common—should go to dependency updates, refactoring, and platform improvements. Teams that defer all maintenance eventually face painful crises. Teams that maintain continuously avoid them.
The upgrade strategy should be deliberate, not reactive. Know what you're running. Know what's current. Have a plan for getting there.
Build vs. Buy Decisions
For many features, you'll face the choice: build it yourself or use an existing solution.
Authentication and authorization. Use Auth0, Clerk, or similar unless you have unusual requirements. Identity is security-critical and maintenance-intensive. Rolling your own saves money today and creates risk forever.
Payments. Stripe or equivalent. Don't build payment processing. The complexity of tax, fraud, compliance, and edge cases is enormous. The cost of mistakes is worse.
Analytics. Consider PostHog (open source, self-hostable) or Amplitude/Mixpanel for product analytics. Plausible or Fathom for privacy-focused web analytics. Google Analytics if you're comfortable with the privacy implications.
Email. Resend, Postmark, or SendGrid for transactional email. The deliverability expertise these services provide is nearly impossible to replicate in-house.
Feature flags. LaunchDarkly, Statsig, or open-source alternatives like Unleash. The ability to deploy without releasing, to target specific users, to run experiments—this should be infrastructure, not custom code.
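In application code, a flag check stays small. A sketch assuming the LaunchDarkly Node server SDK and a hypothetical flag key:

```typescript
import { init } from '@launchdarkly/node-server-sdk';

const client = init(process.env.LD_SDK_KEY ?? '');

export async function showNewCheckout(userId: string): Promise<boolean> {
  await client.waitForInitialization();
  // 'new-checkout' is a hypothetical flag; the last argument is the
  // fallback used if the flag service is unreachable.
  const value = await client.variation(
    'new-checkout',
    { kind: 'user', key: userId },
    false
  );
  return value === true;
}
```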
The pattern: buy commodity infrastructure, build domain-specific differentiation. Your competitive advantage rarely comes from having a better payment processor—it comes from what you build on top of commodity infrastructure.
Working on a project and wondering about technology choices? Let's discuss what makes sense for your specific situation.