[{"content":"","date":"February 10, 2026","externalUrl":null,"permalink":"/","section":"","summary":"","title":"","type":"page"},{"content":"When a new pod comes online, it often experiences a gate rush: it’s declared “Ready,” immediately receives its full share of production traffic, and then falls over—spiking latency, throwing transient 5xx/504s, or flapping readiness. This is especially common for warm-up–sensitive services (JVM class loading/JIT, cache population, connection pool establishment, TLS handshakes, model loading, etc.).\nA good mental model is a turnstile in front of the new pod: let some traffic in at first, then gradually increase until the pod can safely handle its steady-state share.\nThat’s what slow start does—whether it’s implemented at the load balancer (e.g., ALB target groups) or inside the service mesh (typically via Envoy/Istio).\nThe core problem: readiness is binary, warm-up isn’t # Kubernetes readiness is a yes/no signal. But many workloads are in a gray zone: they can handle some traffic shortly after startup, but not their full share without causing timeouts or resource contention.\nThis shows up as:\nThread/CPU saturation on cold instances Latency spikes (p95/p99) and transient 503/504s Readiness flapping when the process is overwhelmed A key nuance: simply delaying readiness doesn’t always solve it—because some apps only “warm up” under real load. 
Without requests, caches won’t populate and code paths won’t get exercised.\nWhat “slow start” does # 1) Load balancer slow start (e.g., AWS ALB target groups) # ALB slow start allows a newly healthy target to receive a linearly increasing share of requests during a configured warm-up window.\nThis is powerful because it works without modifying your app:\nThe pod can register and pass health checks The LB still protects it from immediate full load You get fewer cold-start brownouts during rollouts and scale-ups This pattern is explicitly recommended when the app needs real traffic to reach steady-state (cache warm-up, lazy initialization, etc.).\n2) Service mesh slow start (Envoy / Istio) # In a mesh, the data plane (often Envoy) can apply the same idea: new endpoints start with a reduced effective load-balancing weight, which ramps up during a slow-start window. Envoy documents this as “slow start mode,” affecting upstream load balancing weights and helping avoid timeouts and degraded user experience for endpoints that need warm-up.\nMany service meshes expose a simplified control for this. For example, Istio’s warmupDurationSecs maps to Envoy’s slow start window (but may not expose every Envoy tuning knob).\nOperationally, mesh slow start helps prevent new instances from receiving a full share of traffic immediately after becoming ready, reducing 5xxs during initialization.\nWhy this is an “operational excellence” feature, not just a performance tweak # Slow start is one of those mechanisms that turns unknown unknowns into known trade-offs:\nReduced incident rate during “normal” operations # Rollouts, node drains, and autoscaling events happen constantly. Without a traffic ramp, you’re effectively betting that every new pod is instantly production-grade.\nSafer progressive delivery # Canary analysis is only meaningful if the canary is measured after it’s warmed up. 
Otherwise, you end up chasing false positives (or worse, ignoring real problems because the baseline is noisy). (This is why many teams pair warm-up windows with rollout pacing and analysis delays.)\nMore predictable capacity during bursts # Without slow start, you risk overloading the newest pods right when you need them most (traffic spikes). With slow start, you trade a small ramp-up delay for drastically fewer error spikes—usually a favorable trade in real systems.\nHow to use slow start well # 1) Don’t use slow start to “paper over” broken readiness # Slow start is a safety net—not a replacement for correct readiness and warm-up behavior. If you can make readiness reflect “actually ready for full load” (pre-warmed caches, initialized pools, and readiness gated on warm-up), do that first.\nThe best-practice stack is:\nCorrect readiness (gate on dependencies you truly need) Pre-warm what you can (classes, caches, pools) Slow start for what must warm under real traffic 2) Pick a window based on measured warm-up, not guesses # A practical method:\nRun a realistic load test Roll pods while under load Choose the smallest window that keeps latency/errors within SLO 3) Put guardrails on “too high” values # An overly long slow start can reduce effective capacity during bursts: you may scale out, but the new pods contribute too slowly to avert overload. (This is especially relevant when scaling is reactive.)\nEven AWS ALB slow start has explicit bounds for slow_start.duration_seconds: the allowed range is 30–900 seconds (up to 15 minutes).\n4) Make it observable # If you roll out slow start, you should be able to answer:\nAre new pods seeing fewer requests initially? Do error/latency spikes during rollouts go down? Does time-to-steady-state increase, and is it acceptable? 
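As a concrete illustration of the mesh-side knob discussed earlier, here is a minimal sketch of enabling slow start via Istio’s warmupDurationSecs in a DestinationRule (the service name and host are hypothetical; requires Istio 1.14+):

```yaml
# Sketch only: "checkout" and its host are made-up names for illustration.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: checkout
spec:
  host: checkout.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      # Ramp a new endpoint's effective load-balancing weight over 120s
      # after it becomes healthy. Honored for ROUND_ROBIN and LEAST_REQUEST.
      simple: ROUND_ROBIN
      warmupDurationSecs: 120s
```

The ALB-side equivalent is setting the slow_start.duration_seconds target-group attribute within its 30–900 second bounds.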
At minimum, watch:\nper-pod RPS distribution during rollout p95/p99 latency for new pods vs old 5xx/504 rates and readiness flaps A portable checklist engineers can apply anywhere # If you’ve seen any of these:\n“New pods are Ready but still throw 5xx for ~30–120s” “Canaries fail analysis early, but pass if retried” “Scale-outs during bursts don’t stop the bleeding” Then consider this playbook:\nTighten readiness (dependency checks, warm-up gates where feasible) Add pre-warming for deterministic work (classes, caches, pools) Enable slow start at the LB and/or mesh to meter real traffic during warm-up Ensure rollout tooling (Argo/Rollouts/Deployments) doesn’t advance analysis until after warm-up Add guardrails so slow start can’t be set so high it becomes a capacity risk The takeaway # Slow start is a deceptively simple idea with outsized impact: it acknowledges that cold pods are not instantly equivalent to warm pods, and it encodes that reality into your traffic routing layer.\nWhether you use ALB slow start (increasing request share linearly) or mesh slow start (ramping load-balancing weight for new endpoints), the goal is the same:\nPrevent the gate rush, keep rollouts boring, and make autoscaling events resilient—by giving new instances a controlled on-ramp to production traffic.\n","date":"February 10, 2026","externalUrl":null,"permalink":"/posts/202602-slow-start/","section":"Posts","summary":"When a new pod comes online, it often experiences a gate rush: it’s declared “Ready,” immediately receives its full share of production traffic, and then falls over—spiking latency, throwing transient 5xx/504s, or flapping readiness. 
This is especially common for warm-up–sensitive services (JVM class loading/JIT, cache population, connection pool establishment, TLS handshakes, model loading, etc.).\n","title":"Beating the “Gate Rush”\u003cbr\u003e\u003csmall\u003eWhy Slow Start Matters for Resiliency and Operational Excellence\u003c/small\u003e","type":"posts"},{"content":"","date":"February 10, 2026","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","date":"February 10, 2026","externalUrl":null,"permalink":"/categories/kubernetes/","section":"Categories","summary":"","title":"Kubernetes","type":"categories"},{"content":"Kubernetes (often abbreviated K8s) is an open-source container orchestration platform originally designed by Google and now maintained by the Cloud Native Computing Foundation (CNCF). It automates deploying, scaling, and operating application containers across clusters of machines. You describe the desired state (e.g. “run 3 replicas of this app”), and Kubernetes reconciles the actual state to match it—handling scheduling, self-healing, load balancing, and rolling updates. It has become the de facto standard for running cloud-native workloads; the official documentation and Kubernetes API reference are the authoritative sources for concepts and usage.\n","date":"February 10, 2026","externalUrl":null,"permalink":"/tags/kubernetes/","section":"Tags","summary":"Kubernetes (often abbreviated K8s) is an open-source container orchestration platform originally designed by Google and now maintained by the Cloud Native Computing Foundation (CNCF). It automates deploying, scaling, and operating application containers across clusters of machines. You describe the desired state (e.g. “run 3 replicas of this app”), and Kubernetes reconciles the actual state to match it—handling scheduling, self-healing, load balancing, and rolling updates. 
It has become the de facto standard for running cloud-native workloads; the official documentation and Kubernetes API reference are the authoritative sources for concepts and usage.\n","title":"Kubernetes","type":"tags"},{"content":"","date":"February 10, 2026","externalUrl":null,"permalink":"/tags/load-balancer/","section":"Tags","summary":"","title":"Load-Balancer","type":"tags"},{"content":"","date":"February 10, 2026","externalUrl":null,"permalink":"/categories/operational-excellence/","section":"Categories","summary":"","title":"Operational Excellence","type":"categories"},{"content":"","date":"February 10, 2026","externalUrl":null,"permalink":"/posts/","section":"Posts","summary":"","title":"Posts","type":"posts"},{"content":"","date":"February 10, 2026","externalUrl":null,"permalink":"/tags/progressive-delivery/","section":"Tags","summary":"","title":"Progressive-Delivery","type":"tags"},{"content":"","date":"February 10, 2026","externalUrl":null,"permalink":"/tags/service-mesh/","section":"Tags","summary":"","title":"Service-Mesh","type":"tags"},{"content":"","date":"February 10, 2026","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":" Personal This is about me\nProfessional Todd is a Principal Engineer at Intuit and is building an AI-native platform with \u0026ldquo;Done For You\u0026rdquo; experiences utilizing secure, multi-tenant Kubernetes infrastructure. Todd has worked on various large-scale distributed systems projects during his career, ranging from hierarchical storage management, peer-to-peer database replication, and enterprise storage virtualization to two-factor authentication SaaS and Kubernetes clusters. 
Todd is co-author of the book \u0026ldquo;GitOps and Kubernetes\u0026rdquo; and is co-chair of the CNCF Developer Experience/End-user SIG.\n","date":"February 4, 2026","externalUrl":null,"permalink":"/about/","section":"","summary":"Personal This is about me\nProfessional Todd is a Principal Engineer at Intuit and is building an AI-native platform with “Done For You” experiences utilizing secure, multi-tenant Kubernetes infrastructure. Todd has worked on various large-scale distributed systems projects during his career, ranging from hierarchical storage management, peer-to-peer database replication, and enterprise storage virtualization to two-factor authentication SaaS and Kubernetes clusters. Todd is co-author of the book “GitOps and Kubernetes” and is co-chair of the CNCF Developer Experience/End-user SIG.\n","title":"About","type":"page"},{"content":"","date":"February 2, 2026","externalUrl":null,"permalink":"/categories/engineering-culture/","section":"Categories","summary":"","title":"Engineering Culture","type":"categories"},{"content":"","date":"February 2, 2026","externalUrl":null,"permalink":"/tags/productivity/","section":"Tags","summary":"","title":"Productivity","type":"tags"},{"content":"","date":"February 2, 2026","externalUrl":null,"permalink":"/tags/velocity/","section":"Tags","summary":"","title":"Velocity","type":"tags"},{"content":"Ever walk into a meeting and feel an eerie sense of déjà vu?\nThe same slide deck. The same “quick recap.” The same debate you’re positive you already settled last week.\nHow can you stop meetings from repeating themselves?\nIn large-scale engineering organizations, meetings often suffer from State Drift. Instead of moving time forward, they loop. Like Groundhog Day, we wake up, log into Zoom, and relive the exact same architectural debate… again.\nThis isn’t a failure of intelligence or effort. It’s what happens when busy teams rely on human memory instead of shared systems. 
When context lives in people’s heads instead of durable artifacts, meetings become reruns.\nHere’s how to break the loop and make meetings progress linearly again using meeting effectiveness principles and best practices from high-performing tech organizations.\nBreak the Loop Before the Meeting Starts (Async First) # In Groundhog Day, Phil keeps repeating the day because nothing actually changes. Many meetings do the same thing because no work happened between them.\nRecurring meetings should also be cancelable by default. If there’s no agenda or outcome for this instance, don’t repeat the day.\nIf your meeting exists mainly to:\nShare updates Re-explain context Gather broad input …it probably doesn’t need to be a meeting.\nBefore scheduling a meeting, document the problem, the open questions, and any proposed options asynchronously. Shared docs, threads, or brief write-ups give people time to think clearly without the pressure of real-time performance.\nAsync clarity is the reset button.\nDefault to async for updates, questions, and feedback. Document issues and options in advance, and send pre-reads at least 24 hours before a meeting. If async discussion turns into long back-and-forth, then schedule a short sync to resolve it. Some tech companies go further and use silent reading starts—spending the first 10–15 minutes of a meeting reading the same document. It’s awkward once, then magical forever. Everyone starts from the same timeline.\nIf a topic can be clarified or debated without real-time back-and-forth, it probably doesn’t need a meeting at all. 
And if async prep turns into long back-and-forth anyway, that’s your signal that a meeting may actually help.\nDecide If Time Is Actually Moving Forward # A simple test:\nWhen this meeting ends, what will be different than when it started?\nIf the answer is unclear, you’re about to relive the day again.\nEvery meeting should clearly fall into one of three categories:\nDecide – a decision will be made Ideate – ideas will be generated or refined Solve – a specific problem will be worked through Put the purpose and desired outcome directly in the invite.\nA simple agenda works:\nOutcome: “This meeting is successful if we leave with…” Purpose: Decide, ideate, or solve Next Steps: What happens after today When the outcome is clear and visible, right in the invite and reiterated at the start, people stay focused. When it’s fuzzy, attention drifts, side topics creep in, and suddenly the purpose of the meeting is lost.\nIf you can’t name the outcome, that’s often a sign the meeting shouldn’t exist yet.\nThis is how you prove the clock is actually ticking forward.\nStop Re-Living Decisions (Write Them Down) # Most “Groundhog Day meetings” happen for one reason:\nThe decision wasn’t captured clearly enough to survive the week.\nJust as Phil was forced to relive the same day until he mastered its complexities, your team will continue to repeat the same meeting until you \u0026ldquo;get it right\u0026rdquo; by establishing clear outcomes and documenting decisions to move the project forward. If decisions only exist in memory, or in someone’s personal notes, they’ll be revisited. Guaranteed.\nEffective teams treat documentation as part of the meeting itself, not an optional follow-up. 
Decisions, action items, owners, and due dates are captured in real time and shared shortly after the meeting ends.\nTo break the loop:\nCapture decisions in real time, not after the fact Use tools like Zoom AI Companion so note-taking doesn’t steal attention Send a summary within 24 hours that includes: Decisions Action items Owners Due dates If a decision is made, explicitly note the DACI:\nDriver Approver Contributors Informed Then, in the next meeting, start with a one-minute recap of the last decision instead of reopening the debate.\nThat’s how time starts moving again.\nReduce the Cast (Fewer Characters, Better Plot) # In the movie, Phil is stuck reliving the same day, but imagine if everyone in Punxsutawney showed up to each scene.\nThat’s what over-invited meetings feel like.\nStrong meeting cultures are intentional about who actually needs to be there. Invite only the people required to reach the outcome. Others can, and should, get the summary afterward.\nInvite only people with a clear role in the outcome Avoid multiple layers of the same team: one representative is usually enough Share outcomes broadly after the meeting instead This avoids a common anti-pattern: lots of passive listeners who later reopen decisions because they weren’t sure what happened.\nSmaller meetings create clearer decisions, and fewer sequels.\nEnd the Day on Purpose # One reason meetings feel repetitive is that they just… end.\nNo recap.\nNo commitments.\nNo clear “this day is over.”\nEffective meetings reserve the final minutes to:\nRestate decisions Confirm action items and owners Set expectations for what happens next The Groundhog Rule: Once a decision is made, participants must commit to it. Refrain from holding a \u0026ldquo;meeting after the meeting\u0026rdquo; to relitigate outcomes. This is how you ensure the loop doesn\u0026rsquo;t restart tomorrow. 
The Real Lesson of Groundhog Day # Phil doesn’t escape the loop by trying harder.\nHe escapes by changing the system.\nTeams don’t need better memories.\nThey need better defaults:\nAsync before sync Avoid unnecessary meetings Written decisions Small, intentional meetings Clear outcomes Do that, and your meetings stop repeating themselves, and start telling a better story.\n⏰🐿️\nReferences # The GitLab Handbook - How to run an effective meeting at GitLab Atlassian - DACI Decision-Making Framework AWS Whitepaper - Communication and Collaboration Amazon Leadership Principles Asana - Asynchronous communication isn’t what you think it is The Surprising Science of Meetings by Steven Rogelberg Image source: Sony Pictures Entertainment ","date":"February 2, 2026","externalUrl":null,"permalink":"/posts/202602-welcome-to-groundhog-day/","section":"Posts","summary":"Ever walk into a meeting and feel an eerie sense of déjà vu?\nThe same slide deck. The same “quick recap.” The same debate you’re positive you already settled last week.\nHow can you stop meetings from repeating themselves?\n","title":"Welcome to Groundhog Day\u003cbr\u003e\u003csmall\u003eWhy Your Meetings Keep Repeating Themselves\u003c/small\u003e","type":"posts"},{"content":"Regarding the Multi-Modal http2 support ask from Traffic, can we vend two ports (http-service-mesh and http2-service-mesh) and then document that dev teams should just use the correct one depending on the protocol being used? For different paved roads (agentic streaming, etc.) the default vended template code can use the appropriate port depending on the use case. 
That way we don\u0026rsquo;t need to make this another configuration in AIR that the user needs to do.\n(BTW, seems like ALPN would be more for North-South traffic and isn\u0026rsquo;t really supported by Envoy for mTLS East-West traffic)\n@ntantry @kdowney @ssh\n9 replies\nKevin Downey [10:48 AM]\nAre we asking for a new port like 8091 ?\n[10:52 AM]\nSo Mesh supports 3 protocols, HTTP, HTTP2 and GRPC.\nI don\u0026rsquo;t think we want 3 ports, but maybe we can default to HTTP2 for 8090 and 8091 for grpc?\n[10:52 AM]\nHTTP1.1 still seems odd that it has to be hardcoded\nTodd Ekenstam [11:00 AM]\nAssuming we want to just have two vended ports, here is a proposed task list of what we\u0026rsquo;d need to do to implement it:\nIAIR — IKS AIR (Infrastructure Layer)\nGoal: Expose both ports (http-service-mesh and http2-service-mesh) to services by default.\nTasks:\nIAIR-XXXX: Update AIR service scaffolding to vend two mesh ports: http-service-mesh → HTTP/1.1 (default) - 8090 http2-service-mesh → HTTP/2 (h2c) - 8091 IAIR-XXXX: Add configuration flag to allow disabling one port if desired (protocolMode: http1 | http2 | dual). Default is dual. IAIR-XXXX: Ensure generated Kubernetes Services and DestinationRules set port.name correctly (http-, http2-). IAIR-XXXX: Add documentation and migration notes for teams moving from single-port to dual-port mode. MESH — Traffic / Service Mesh Team\nGoal: Enable routing, mesh policy, and envoy filter support for dual ports.\nTasks:\nMESH-XXXX: Update default mesh configuration (EnvoyFilters, DestinationRules, VirtualServices) to support both HTTP/1.1 and HTTP/2 ports. MESH-XXXX: Add routing examples for mixed-mode (HTTP/1.1 + HTTP/2) services. MESH-XXXX: Verify compatibility with WebSocket over HTTP/1.1 and HTTP/2 streaming (h2c) scenarios. MESH-XXXX: Extend gateway configuration templates to route to either port explicitly. 
MESH-XXXX: Add integration tests to validate connection upgrade success, abnormal disconnects, and protocol stickiness. Observability / Platform Metrics Team\nGoal: Deliver visibility and autoscaling metrics for both protocols.\nTasks:\nO11Y-XXXX: Add new metrics for active inbound HTTP/1.1 vs HTTP/2 connections. O11Y-XXXX: Emit golden signals per protocol: upgrade success, abnormal disconnects, latency, throughput. O11Y-XXXX: Update Splunk/Wavefront dashboards to visualize both connection types. MSAASINT — Code Templates / Paved Road Team\nGoal: Make the dual-port pattern invisible to developers (Day-0 ready).\nTasks:\nMSAASINT-XXXX: Update Python Agentic Starter Kit (ASK/PSK) templates to connect to http2-service-mesh port by default for streaming use cases. MSAASINT-XXXX: Provide fallback config for HTTP/1.1 (http-service-mesh) to preserve backward compatibility. MSAASINT-XXXX: Add example routes in templates that demonstrate both ports (e.g., WebSocket over HTTP/1.1 + voice streaming over HTTP/2). MSAASINT-XXXX: Update Java Starter Kit (JSK) to emit Inbound HTTP(WebSocket) connection metrics for autoscaling. IKS — AIR Autoscaling Team\nGoal: Enable autoscaling that accounts for active connection counts (for http-service-mesh and http2-service-mesh ports), ensuring reliable scaling for long-lived streaming and WebSocket workloads.\nTasks:\nIKS-XXXX: Add support for new connection-based scaling metrics (traffic_active_websocket_connections, traffic_active_http2_streams) in the AIR autoscaling engine. IKS-XXXX: Integrate with the O11Y metrics pipeline to ingest and normalize connection count metrics from Prometheus/Wavefront, labeled by protocol (http1, http2). IKS-XXXX: Extend the autoscaling recommendation engine to include connection counts as an input when generating HPA recommendations. IKS-XXXX: Update the default HPA manifest template to include connection-based metrics. Caveat: Parts of this task list are GenAI-assisted; need to review/refine. 
Just offering this as a starting point for discussion and to correctly understand the scope of the ask.\n[11:01 AM]\n@vinit ^^ For your review.\n[11:03 AM]\n@kdowney I think gRPC runs over HTTP/2 so we wouldn\u0026rsquo;t need a third port, I don\u0026rsquo;t think.\nVinit Samel [11:08 AM]\nThanks what\u0026rsquo;s the total cost for each domain in SPs?\nKevin Downey [11:14 AM]\nThey have GRPC also as service object\nShankarram Shivram [11:36 AM]\ncollating discussions - https://intuit-teams.slack.com/archives/C09D6FYAL1K/p1761326252829599 this is the thread with traffic\nShankarram Shivram [10:18 AM]\n@SumitMathur ^^\n[10:22 AM]\nFew questions and opinions we have\nWe have concerns on leaky abstraction - asking users to switch between http1 and 2 How can this be addressed as an invisible concern for the customers - Could ALPN be used to solve this instead ? What are the implications of having http2 as default for the entire fleet vs having a selection choice (between http1 and 2 ? Mahalingam M [11:28 AM]\nThanks, @ssh.\nWe can have a meeting to discuss and close this. @Llavar Mindley, can you help setup a meeting to discuss this dependency on IKS AIR?\n[11:32 AM]\n**We have concerns on leaky abstraction - asking users to switch between http1 and 2** - This will be only for Agent Starter Kit services to start with (JSK in the future). For new Agents, we want them to start with HTTP2 by default. For existing agents moving to this model, we can support the migration transparently. Clarified this earlier, but we can discuss and close this. **What are the implications of having http2 as default for the entire fleet vs having a selection choice (between http1 and 2 ?** Based on the recommendation we have got from @jwebb3, HTTP2 will be more performance efficient in case of streaming use cases as compared to HTTP1 and the LLM vendors are also moving towards supporting HTTP2. 
So, for Agent Starter Kit services where the streaming use cases will be enabled, we can go with HTTP2 by default unless there are exceptions which we can handle on a case-by-case basis. Thanks, @Avni Sharma. As discussed earlier, should be abstracted for users and handled by platform. @Sreejan Sur, @HariprasadK, Can you check and confirm this?\nFrom a thread in tmp-agentcommunication-traffic-iks-pavedroad | Oct 8th, 2025\nShankarram Shivram [11:33 AM]\nthanks maha - Why only scope it to Agent starter kit services ? what are the implications of enabling it as default for all AIR assets ? (edited) Mahalingam M [11:34 AM]\nWe are primarily targeting WebSocket over HTTP2 for Streaming scenarios. We have got a couple of other non-streaming use cases for WebSocket, but they are not our immediate priorities. For the latter, we need support on JSK.\nShankarram Shivram [11:36 AM]\nwhy not the existing ones (non websocket ) ones also for http2 ?\nNagaraja S Tantry [11:39 AM]\n@Maha kind of trying to decouple transport level which communicates over http2 always.. only the hop between proxy to app layer can be http2 or http1 depending on what app supports and I believe this should happen automatically (meaning it should downgrade to http1 if app supports only that)\nMahalingam M [11:45 AM]\nThanks, @ntantry. I think we can discuss this in detail and close this.\n@ssh, @ntantry, Apart from the questions above, if you have any other questions, please list them here. I will discuss this with the team before our meeting.\nTodd Ekenstam [11:54 AM]\n@Maha I\u0026rsquo;m thinking instead of a config of http1 or http2, could AIR vend out two ports, HTTP/1.1 8090, HTTP/2 8091? Then the vended application code can use whichever port is appropriate for its use case? (edited) Mahalingam M [12:57 AM]\nThanks, Todd.\n@Venkata Krishna Murthy Vadrevu and @Sreejan Sur to evaluate this option. 
","externalUrl":null,"permalink":"/posts/future-http2/","section":"Posts","summary":"Regarding the Multi-Modal http2 support ask from Traffic, can we vend two ports (http-service-mesh and http2-service-mesh) and then document that dev teams should just use the correct one depending on the protocol being used? For different paved roads (agentic streaming, etc.) the default vended template code can use the appropriate port depending on the use case. That way we don’t need to make this another configuration in AIR that the user needs to do.\n","title":"","type":"posts"},{"content":"","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","externalUrl":null,"permalink":"/tags/excalidraw/","section":"Tags","summary":"","title":"Excalidraw","type":"tags"},{"content":"I value your privacy as much as my own. This policy outlines how this website handles data. Because I use privacy-first tools, the short version is: I don\u0026rsquo;t know who you are, and I don\u0026rsquo;t track you across the internet.\nDo Not Track: This website does not track users over time or across third-party websites, and therefore does not respond to \u0026ldquo;Do Not Track\u0026rdquo; signals.\nAnalytics (Umami) # This website uses Umami Analytics, an open-source, privacy-focused alternative to Google Analytics.\nNo Cookies: Umami does not use cookies or any other persistent identifiers. Anonymization: Your IP address is never stored. All data is collected in aggregate, meaning I can see that \u0026ldquo;someone\u0026rdquo; visited a page, but I cannot trace it back to you. Data Ownership: The data is used solely to improve the content of this site and is never shared with third parties for advertising purposes. 
Information for California Residents (CCPA/CPRA) # Even though this site collects minimal data, the California Consumer Privacy Act requires me to disclose the following:\nCollection: I collect \u0026ldquo;Internet or other electronic network activity information\u0026rdquo; (e.g., browser type, referring site, and pages visited) via Umami. No Sale or Sharing: I do not sell your personal information, nor do I \u0026ldquo;share\u0026rdquo; it for cross-context behavioral advertising. Right to Know/Delete: Since I do not collect identifiable information (like names, emails, or IP addresses), I generally have no data to \u0026ldquo;delete\u0026rdquo; or \u0026ldquo;provide\u0026rdquo; upon request, as your visit is anonymous. Third-Party Links # My site may link to external websites. I am not responsible for the privacy practices or content of those sites. I encourage you to check their privacy policies when you leave this domain.\nContact # If you have any questions about this policy, please feel free to reach out via email.\nLast Updated: February 2026\n","externalUrl":null,"permalink":"/privacy/","section":"","summary":"I value your privacy as much as my own. This policy outlines how this website handles data. Because I use privacy-first tools, the short version is: I don’t know who you are, and I don’t track you across the internet.\n","title":"Privacy Policy","type":"page"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"}]