<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Nikita Savchenko's blog]]></title><description><![CDATA[Nikita Savchenko's blog]]></description><link>https://blog.nikitaeverywhere.com</link><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 09:17:44 GMT</lastBuildDate><atom:link href="https://blog.nikitaeverywhere.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[How to Software Engineer in the AI Era]]></title><description><![CDATA[I genuinely believe programming languages are becoming assembler — and I don't mean that metaphorically.
C abstracted assembly. C++ abstracted C. JavaScript abstracted memory management. Each layer le]]></description><link>https://blog.nikitaeverywhere.com/how-to-software-engineer-in-the-ai-era</link><guid isPermaLink="true">https://blog.nikitaeverywhere.com/how-to-software-engineer-in-the-ai-era</guid><category><![CDATA[cloudflare]]></category><category><![CDATA[serverless]]></category><category><![CDATA[edge computing]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[AI]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[Developer Tools]]></category><category><![CDATA[Productivity]]></category><category><![CDATA[Rust]]></category><dc:creator><![CDATA[Nikita Savchenko]]></dc:creator><pubDate>Mon, 30 Mar 2026 15:13:06 GMT</pubDate><content:encoded><![CDATA[<p>I genuinely believe <strong>programming languages are becoming assembler</strong> — and I don't mean that metaphorically.</p>
<p>C abstracted assembly. C++ abstracted C. JavaScript abstracted memory management. Each layer let humans think at a higher level. The same leap just happened again — the new abstraction layer is English.</p>
<p>The language I work in now is English. Whether the target is TypeScript, Rust, or Python — source code is a lower-level artifact generated from higher-level intent. I write close to 0% of code by hand. I write documentation and sophisticated prompts.</p>
<p>I <a href="https://nikitaeverywhere.com/posts/how-to-software-engineer-in-ai-era/?utm_source=hashnode&amp;utm_medium=cross-post&amp;utm_campaign=how-to-software-engineer-in-ai-era">wrote a deep dive on this</a> with the full AfterPack case study. Here I want to focus on the practical workflow — what changed and what it means for anyone building software, whether you're writing code, leading a team, or making product decisions.</p>
<h2>The Mental Model</h2>
<p>Every AI tool boils down to:</p>
<p><strong>Input + Context → LLM → Output</strong></p>
<p><img src="https://r2.nikitaeverywhere.com/2026/03-30-swe-ai-era-magic-box.png" alt="The Magic Box — Input + Context → AI tool → Output" /></p>
<p>Quality of output is directly proportional to quality of input. I've watched the same LLM give a junior mediocre code and a senior production-ready architecture. The difference was entirely in the context provided — and that applies whether you're prompting for code, system design, or go-to-market strategy.</p>
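<p>To make "context quality" concrete, here is a minimal sketch of how a context-heavy prompt might be assembled. The section names and the <code>PromptContext</code> shape are illustrative, not a fixed template:</p>

```typescript
// Sketch: a prompt where context dominates the raw ask.
// Field names are illustrative; the point is the ratio of context to task.
interface PromptContext {
  architecture: string;    // how the system is shaped today
  constraints: string[];   // hard limits the output must respect
  definitionOfDone: string;
}

function buildPrompt(ask: string, ctx: PromptContext): string {
  return [
    "## Current architecture",
    ctx.architecture,
    "## Constraints",
    ...ctx.constraints.map((c) => "- " + c),
    "## Definition of done",
    ctx.definitionOfDone,
    "## Task",
    ask,
  ].join("\n");
}
```

<p>The junior's prompt is just the last section; the senior's includes all four. Same model, very different output.</p>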
<h2>The Full Lifecycle Changed</h2>
<p>If you're only using AI for code autocomplete, you're capturing maybe 10% of what these tools offer.</p>
<p><strong>Ideation and Research:</strong> Pressure-test ideas across multiple models (Claude, Gemini, GPT — trained on different data, they spot different blind spots). Competitive landscape analysis. Demand validation with real data. Hours, not weeks.</p>
<p><strong>Planning and Architecture:</strong> Where I spend the most time — significantly more than on implementation. Load context: existing architecture, constraints, business requirements. Ask for multiple approaches. Have it argue against its own recommendations.</p>
<p><strong>Planning mode is king.</strong> Days in planning, then execution in ~30 minutes. It self-tests. By the time I get to QA, it mostly just works when planned right.</p>
<p><strong>Design:</strong> Skip Figma for many use cases. Give the LLM your existing UI and design system — 15 minutes later, new pages match your patterns. Pro tip: Gemini Pro creates better initial visuals, Claude handles long-term code evolution far better.</p>
<p><strong>Implementation:</strong> The fastest part of the cycle. Refactor 100+ files to a new design pattern — one shot, 10 minutes. Previously a week of tedious work.</p>
<p><strong>Testing:</strong> AI writes tests, runs them, interprets failures, fixes code. But on real products, you spend most time here — verifying outputs, applying judgment.</p>
<h2>Your Brain Goes TikTok</h2>
<p>Nobody warned me about this. When AI handles implementation this fast, you become the bottleneck. So you try to keep up — I run 3-5 Claude terminals in parallel. One auditing a codebase, another implementing an API, a third writing product specs. Constantly jumping between contexts, reviewing, catching mistakes.</p>
<p>It's incredibly brain-intensive. Coding used to be more relaxing — hold one idea for 10-20 minutes, type it out, move on. Now you're a rapid context-switcher. TikTok was training us for this all along.</p>
<h2>The Real Bottleneck: Human Synchronization</h2>
<p>Your role becomes judge, architect, quality controller. But the deeper bottleneck isn't you individually — it's <strong>human synchronization</strong>. Meetings, handoffs, miscommunication, waiting for approvals. When execution takes 30 minutes, spending 3 days aligning on requirements is the dominant cost.</p>
<p>Who thrives:</p>
<ul>
<li><strong>Solo builders</strong> shipping entire products alone</li>
<li><strong>Small teams</strong> with clear ownership and fewer meetings</li>
<li><strong>Senior generalists</strong> who own entire feature areas — PM, QA, architect, engineer in one</li>
</ul>
<p>One person carries what previously required a team. For larger organizations, each person's blast radius is dramatically larger than two years ago.</p>
<h2>The AfterPack Story</h2>
<p>I'm building <a href="https://www.afterpack.dev">AfterPack</a> — a Rust-based JavaScript obfuscator. Three weeks on core architecture, not a single line of Rust.</p>
<ol>
<li><strong>Documentation first.</strong> 20+ spec files. Different AI agents worked on different parts of the spec — nothing exotic, just careful system design.</li>
<li><strong>JavaScript prototyping.</strong> Prototyped obfuscation transforms in JS. Multiple iterations. Agents tried to break the prototypes and suggested improvements.</li>
<li><strong>Adversarial testing.</strong> Claude Code running overnight: "keep iterating until a fresh Claude instance can't deobfuscate the output."</li>
</ol>
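<p>The overnight loop in step 3 can be sketched as a simple driver. All three callbacks (<code>obfuscate</code>, <code>attemptDeobfuscation</code>, <code>improveTransforms</code>) are hypothetical stand-ins for agent invocations, not a real API:</p>

```typescript
// Hedged sketch of the adversarial loop: a fresh attacker instance probes
// each build, and its findings feed back into the next iteration.
// obfuscate / attemptDeobfuscation / improveTransforms are hypothetical.
type Attack = { succeeded: boolean; notes: string };

async function adversarialLoop(
  source: string,
  obfuscate: (src: string) => Promise<string>,
  attemptDeobfuscation: (out: string) => Promise<Attack>,
  improveTransforms: (notes: string) => Promise<void>,
  maxRounds = 20,
): Promise<string> {
  let output = await obfuscate(source);
  for (let round = 0; round < maxRounds; round++) {
    // A fresh instance each round: no memory of previous attempts.
    const attack = await attemptDeobfuscation(output);
    if (!attack.succeeded) return output; // attacker failed: ship it
    await improveTransforms(attack.notes); // feed findings back
    output = await obfuscate(source);
  }
  return output;
}
```

<p>The design choice that matters is the fresh attacker instance each round: an attacker that has seen earlier rounds carries hints a real adversary would not have.</p>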
<p>Production Rust — a language I'd barely touched. My language is English. I focus on architecture, data structures, algorithms. The previous generation of JS obfuscators took 2-3 years. I'm shipping something more advanced in a fraction of that time. I <a href="https://dataunlocker.com">experience the same compression with DataUnlocker</a> — years of work, now condensable into months solo.</p>
<h2>What Actually Matters</h2>
<ol>
<li><strong>Context is everything.</strong> Write documentation before anything. What you want, why, constraints, what "good" looks like.</li>
<li><strong>Start in planning mode.</strong> Multiple approaches. Model argues against itself. Prototype in simpler language first.</li>
<li><strong>Use multiple models.</strong> Claude, Gemini, GPT — different training, different perspectives. Critical for planning.</li>
<li><strong>Own the architecture.</strong> AI implements whatever you describe. Your job is describing the right thing. That won't be automated anytime soon.</li>
</ol>
<h2>Where to Start</h2>
<ul>
<li><strong>New to software:</strong> AI assistant + plain English. You'll learn from watching it reason.</li>
<li><strong>Not using AI yet:</strong> build your next feature entirely AI-assisted.</li>
<li><strong>Already using AI daily:</strong> better documentation, more planning mode, focus on context quality.</li>
<li><strong>Leading a team:</strong> biggest gains are in reducing coordination overhead, not individual speed. Rethink whether every meeting is still necessary when execution is this fast.</li>
</ul>
<hr />
<p><em>Originally published on <a href="https://nikitaeverywhere.com/posts/how-to-software-engineer-in-ai-era/?utm_source=hashnode&amp;utm_medium=cross-post&amp;utm_campaign=how-to-software-engineer-in-ai-era">nikitaeverywhere.com</a></em></p>
]]></content:encoded></item><item><title><![CDATA[Why You Should Build on Cloudflare by Default]]></title><description><![CDATA[I've run production workloads on Cloudflare for five years - a web analytics SaaS handling 50M requests/day and a code protection tool built on Workers. I've also set up infrastructure on AWS, GCP, an]]></description><link>https://blog.nikitaeverywhere.com/why-you-should-build-on-cloudflare-by-default</link><guid isPermaLink="true">https://blog.nikitaeverywhere.com/why-you-should-build-on-cloudflare-by-default</guid><category><![CDATA[cloudflare]]></category><category><![CDATA[serverless]]></category><category><![CDATA[edge computing]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Nikita Savchenko]]></dc:creator><pubDate>Sun, 29 Mar 2026 17:56:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/66b536ca99fbae0b965632a0/9b789945-cc18-4702-b03f-2ba830c658a1.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I've run production workloads on Cloudflare for five years - a <a href="https://dataunlocker.com">web analytics SaaS</a> handling 50M requests/day and a <a href="https://www.afterpack.dev">code protection tool</a> built on Workers. I've also set up infrastructure on AWS, GCP, and DigitalOcean. This is a practical comparison, not a pitch.</p>
<p>I <a href="https://nikitaeverywhere.com/posts/2026-01-29-cloudflare/?utm_source=hashnode&amp;utm_medium=cross-post&amp;utm_campaign=cloudflare-default">wrote a longer version</a> with architecture diagrams and pricing breakdowns. This article focuses on the decision: should Cloudflare be your default?</p>
<h2>The Free Tier That Actually Works</h2>
<p>Most free tiers are 12-month trials with gotchas. Cloudflare's doesn't expire and has no credit card requirement:</p>
<ul>
<li><strong>100K</strong> serverless requests/day</li>
<li><strong>Unlimited</strong> static site bandwidth</li>
<li><strong>5GB</strong> edge SQL database</li>
<li><strong>~10GB</strong> object storage with <strong>zero egress</strong></li>
<li>Key-value store, queues, Durable Objects - all included</li>
</ul>
<p>The egress pricing difference matters at scale. <a href="https://aws.amazon.com/s3/pricing/">AWS S3 charges ~$90/TB</a> for data transfer out. R2 charges $0. My storage bill for serving millions of requests is effectively zero.</p>
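<p>A quick back-of-envelope makes the gap concrete (egress-only, prices rounded; assumes transfer out dominates the bill):</p>

```typescript
// Egress-only cost comparison, rounded: S3 transfer out ~$90/TB, R2 $0/TB.
const S3_EGRESS_PER_TB = 90; // USD, approximate public pricing
const R2_EGRESS_PER_TB = 0;

function monthlyEgressCost(terabytes: number, perTb: number): number {
  return terabytes * perTb;
}

// At 10 TB/month served: S3 egress is about $900, R2 egress is $0.
```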
<h2>Cold Starts: The Numbers</h2>
<p>Workers use <a href="https://developers.cloudflare.com/workers/reference/how-workers-works/">V8 isolates</a> instead of containers. The cold start difference is not incremental - it's a different architecture:</p>
<table>
<thead>
<tr>
<th>Platform</th>
<th>Cold Start</th>
<th>What's Happening</th>
</tr>
</thead>
<tbody><tr>
<td>Cloudflare Workers</td>
<td>&lt; 5ms</td>
<td>V8 isolate spins up</td>
</tr>
<tr>
<td>AWS Lambda</td>
<td>100-500ms</td>
<td>Container boots</td>
</tr>
<tr>
<td>Google Cloud Run</td>
<td><a href="https://cloud.google.com/blog/topics/developers-practitioners/3-ways-optimize-cloud-run-response-times">1-3 seconds</a></td>
<td>Container + app init</td>
</tr>
</tbody></table>
<p>In production, my p99 latency from Singapore is under 50ms. The same endpoint on Lambda from Singapore would add 200-300ms of network latency before the cold start even begins.</p>
<h2>Three Commands to Production</h2>
<pre><code class="language-bash">npm create cloudflare@latest my-api
cd my-api
npx wrangler deploy
</code></pre>
<p>No IAM policies. No VPC. No Kubernetes. No Dockerfiles. This deploys to 300+ cities globally.</p>
<p>Compare with GCP: enable APIs, configure Cloud Run, set up Artifact Registry, write a Dockerfile, configure IAM service accounts, set up a load balancer. That's a day of work before you write business logic.</p>
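<p>For context on what those three commands actually deploy: a complete Worker is a single exported object with a <code>fetch</code> handler, along the lines of this minimal sketch (the routes are illustrative):</p>

```typescript
// src/index.ts: a complete, deployable Worker.
// Cloudflare invokes fetch() once per incoming request; there is no
// server to start and no framework required.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const { pathname } = new URL(request.url);
    if (pathname === "/health") {
      return new Response("ok");
    }
    return new Response(JSON.stringify({ hello: "world", path: pathname }), {
      headers: { "content-type": "application/json" },
    });
  },
};

export default worker;
```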
<h2>What I Actually Run on It</h2>
<p><strong>Analytics SaaS (50M req/day):</strong> Workers handle proxy requests at the edge, R2 stores processed data, KV handles configuration. The entire infrastructure costs less than a single AWS EC2 instance would.</p>
<p><strong>Code protection API:</strong> A Rust service compiled to WebAssembly, running on Workers with sub-5ms response times globally. The deploy is <code>npx wrangler deploy</code> - same as any JS project.</p>
<p><strong>Common architecture patterns that work well:</strong></p>
<ul>
<li>Static frontend (Pages) + API (Workers) + database (D1) - full Next.js support</li>
<li>Async processing: Workers → Queues → Workers → R2, with built-in retry</li>
<li>AI pipelines: Workers orchestrating multiple LLM providers via AI Gateway</li>
</ul>
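<p>The async processing pattern above (Workers → Queues → Workers → R2) fits in one file. This is a hedged sketch: the <code>INGEST_QUEUE</code> and <code>RESULTS</code> binding names are made up here and would be configured in <code>wrangler.toml</code>:</p>

```typescript
// Sketch of the Workers → Queues → R2 pattern.
// Binding names (INGEST_QUEUE, RESULTS) are illustrative, not real config.
interface Env {
  INGEST_QUEUE: { send(msg: unknown): Promise<void> };
  RESULTS: { put(key: string, value: string): Promise<unknown> };
}

const worker = {
  // Producer: accept the request at the edge, enqueue, return immediately.
  async fetch(request: Request, env: Env): Promise<Response> {
    const job = { url: request.url, receivedAt: Date.now() };
    await env.INGEST_QUEUE.send(job);
    return new Response("queued", { status: 202 });
  },

  // Consumer: messages arrive in batches; anything not acked is
  // redelivered, which is where the built-in retry comes from.
  async queue(
    batch: { messages: { body: { receivedAt: number }; ack(): void }[] },
    env: Env,
  ): Promise<void> {
    for (const msg of batch.messages) {
      const key = "jobs/" + msg.body.receivedAt + ".json";
      await env.RESULTS.put(key, JSON.stringify(msg.body));
      msg.ack();
    }
  },
};

export default worker;
```

<p>Producer and consumer can live in the same Worker or separate ones; either way the caller gets a 202 in milliseconds while the slow work happens off the request path.</p>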
<h2>Where It Falls Short</h2>
<p>Being honest about the limitations:</p>
<ul>
<li><strong>5-minute execution cap</strong> on Workers. Containers (beta) help but aren't GA yet</li>
<li><strong>No GPU compute.</strong> Workers AI handles inference, not training</li>
<li><strong>D1 is SQLite.</strong> Great for most cases, but if you need PostgreSQL-specific features, look elsewhere</li>
<li><strong>HIPAA/FedRAMP gaps.</strong> Enterprise compliance lags AWS and GCP significantly</li>
<li><strong>Vendor lock-in on stateful services.</strong> Durable Objects and KV are proprietary. But Workers run standard JS/TS, D1 exports as SQLite, and R2 is S3-compatible - the compute layer is portable</li>
</ul>
<p>If you need long-running jobs, GPU training, or strict regulatory compliance, Cloudflare isn't the answer today. For everything else - startups, SaaS, APIs, developer tools - the cost/simplicity/performance balance is unmatched.</p>
<hr />
<p><em>I <a href="https://nikitaeverywhere.com/posts/2026-01-29-cloudflare/?utm_source=hashnode&amp;utm_medium=cross-post&amp;utm_campaign=cloudflare-default">covered this in more depth</a> with additional architecture examples and code.</em></p>
]]></content:encoded></item></channel></rss>