■ technical SEO for SaaS
We don't write your blog posts. We fix the technical reasons your pages aren't ranking — prerender, schema, Core Web Vitals, sitemap, internal linking, indexability. The engineering work most SEO agencies hand back to your dev team and hope they get to it.

■ Who this is for
■ Probably not a fit

Case Study · This site
Sitemap URLs (high-quality only)
Schemas per landing page
Stack
Vite · React Router · Helmet · Prerender SSG
■ Real work — recently
appycodes.dev had the same problems most SaaS sites have — too many thin pages competing for the wrong terms, almost none of them indexed by Google, generic templated content with mismatched FAQs, no structured data in the prerendered HTML, and a sitemap full of URLs Google had quietly ignored.
The rebuild was straight engineering. We pruned the sitemap from 37 URLs down to 9 high-quality pages, then deliberately added new pages targeting winnable, intent-driven keywords — each with full JSON-LD schema (Service, BreadcrumbList, FAQPage, Article) baked into the prerendered HTML so crawlers see it on the first request, not after JavaScript executes.
We fixed the prerender pipeline itself — the previous version was silently dropping every script tag React Helmet emitted, which meant zero structured data was reaching Google despite being present in the React tree. We added 301 redirects for legacy URLs that Google had indexed but that no longer existed, rebuilt the internal linking from the nav, footer, and /services/ index, and shipped each new page at 2,000+ words with embedded case studies and opinionated technical content.
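The cheapest way to catch that class of failure is to assert on the prerendered output itself. A minimal sketch — function names are ours, not a specific tool's API:

```javascript
// Extract every JSON-LD block from a prerendered HTML string.
// If Helmet's script tags were dropped by the prerender step,
// this returns an empty array — which is exactly the bug to catch.
function extractJsonLd(html) {
  const blocks = [];
  const re = /<script type="application\/ld\+json">([\s\S]*?)<\/script>/g;
  let m;
  while ((m = re.exec(html)) !== null) {
    blocks.push(JSON.parse(m[1])); // also fails loudly on malformed JSON
  }
  return blocks;
}

// Does the static HTML carry a schema of the expected @type?
function hasSchemaType(html, type) {
  return extractJsonLd(html).some((block) => block["@type"] === type);
}
```

Run a check like this against the built output in CI, per page type, and a prerender regression can never ship silently again.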
This page is one of those new pages. The work it describes is the same work we ran on this site. You can inspect the source to verify.
■ How we approach technical SEO
These are the technical choices that actually change what Google sees and how it ranks your pages. We have strong opinions on each, formed from shipping SaaS sites that needed to compete in real SERPs.
Marketing pages should be static (SSG or prerender). Pages that depend on logged-in state should render server-side or remain SPA. The mistake most SaaS sites make is choosing one rendering strategy for the whole site. We split per route. Crawlers should see complete HTML on first byte for everything they need to rank.
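The per-route split can be as simple as an explicit map the build consults. A hypothetical sketch — the route names and the `strategyFor` helper are illustrative, not a framework API:

```javascript
// Rendering strategy chosen per route, not per site.
const renderStrategy = {
  "/": "prerender",        // marketing: full HTML at build time
  "/pricing": "prerender",
  "/blog/*": "prerender",
  "/app/*": "spa",         // logged-in state: client-side only
};

// Resolve the strategy for a path; unknown routes default to SPA.
function strategyFor(path) {
  if (renderStrategy[path]) return renderStrategy[path];
  const wildcard = Object.keys(renderStrategy).find(
    (route) => route.endsWith("/*") && path.startsWith(route.slice(0, -1))
  );
  return wildcard ? renderStrategy[wildcard] : "spa";
}
```

The point is that the decision lives in one place and is reviewable — not implied by whichever rendering mode the framework happened to default to.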
Service, Product, Article, FAQPage, BreadcrumbList, Organization, Person — chosen per page type, not blanket-applied. We verify with Google's Rich Results Test before considering it shipped. Critically, the schema must land in the static HTML, not be inserted later by JavaScript — we have seen many sites where the schema exists in the React tree but Google never sees it.
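Generating the JSON-LD from page data keeps it in sync with the visible content. An illustrative helper (our own naming, not a library API) for BreadcrumbList:

```javascript
// Build a schema.org BreadcrumbList from an ordered list of {name, url}.
function breadcrumbJsonLd(crumbs) {
  return {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    itemListElement: crumbs.map((crumb, i) => ({
      "@type": "ListItem",
      position: i + 1, // positions are 1-based per schema.org
      name: crumb.name,
      item: crumb.url,
    })),
  };
}
```

Serialise the result into a `<script type="application/ld+json">` tag at build time so it is present in the first response, then confirm with the Rich Results Test.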
We profile real user metrics from CrUX or your own RUM data, not synthetic Lighthouse scores. The fixes are usually the same handful of patterns: defer or remove heavy third-party scripts, preconnect to font hosts, set explicit dimensions on images, lazy-load below the fold, swap heavy fonts, replace images with modern formats. Lighthouse 100 is not the goal — passing field data is.
A page is not indexed unless Google can crawl it, render it, and conclude it deserves to be indexed. We check robots.txt, sitemap freshness, canonical correctness, hreflang if international, and the internal link graph that signals page importance. Orphaned pages with no internal links almost never rank.
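The orphan check is mechanical once you have a crawl of your internal links. A minimal sketch, assuming a simple `from → [to]` adjacency map built by your crawler:

```javascript
// Sitemap URLs that no other page links to — orphans that almost never rank.
function findOrphans(sitemapUrls, linkGraph) {
  const linked = new Set();
  for (const targets of Object.values(linkGraph)) {
    for (const url of targets) linked.add(url);
  }
  // The homepage is reached directly, so it is exempt from the check.
  return sitemapUrls.filter((url) => !linked.has(url) && url !== "/");
}
```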
Two clicks from the homepage to any important page. Topic clusters where pillar pages link to detail pages and vice versa. Avoid deep folder nesting unless the URL hierarchy adds genuine value. We restructure URL trees when needed but always with 301s and the patience to wait for Google to recrawl.
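Click depth falls out of a breadth-first search over the same internal link graph. A sketch — the two-click rule above becomes `depth <= 2`:

```javascript
// BFS from the homepage; returns each reachable page's click depth.
function clickDepths(linkGraph, start = "/") {
  const depth = { [start]: 0 };
  const queue = [start];
  while (queue.length > 0) {
    const page = queue.shift();
    for (const next of linkGraph[page] || []) {
      if (!(next in depth)) {
        depth[next] = depth[page] + 1;
        queue.push(next);
      }
    }
  }
  return depth; // pages absent from the result are unreachable from start
}
```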
After the work ships, we set up Search Console properly, run weekly checks on indexation and Core Web Vitals, and monitor server log files to see what Googlebot actually crawls vs ignores. Without monitoring, you don't know whether the work paid off — you're guessing from rank trackers, which lag and lie.
■ Real work
Easyship is a global shipping platform now valued at over $40M. More than 100 calculator engines power their site — quote tools that compare carriers, rules, surcharges, and customs across 200+ countries. Each one targets a high-intent commercial query (think "USPS to UK shipping calculator") that should rank.
The technical SEO challenge with calculator pages is that they're fundamentally interactive — users pick options and the result computes client-side. That makes them dangerous for crawlers if rendered the wrong way. Google sees an empty form; the page looks thin and gets demoted or skipped entirely.
The right pattern is to ship a meaningful default state in the static HTML — populated with realistic values, real explanatory copy, an example calculation, and proper schema for the service being offered. The interactive calculator layers on top once JavaScript runs. Crawlers get a full page; users get a working tool. That principle is the difference between calculator pages that rank and calculator pages that don't.
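The default-state pattern in miniature — the rate values and helper names below are invented for illustration, not Easyship's actual engine:

```javascript
// A realistic default the static build computes once, so crawlers see a
// completed example calculation instead of an empty form.
const DEFAULT_QUOTE = { from: "US", to: "GB", weightKg: 2, carrier: "USPS" };

// Toy rate table standing in for a real calculator engine.
const RATES = { "US->GB": { base: 12.5, perKg: 4.2 } };

function quote({ from, to, weightKg }) {
  const rate = RATES[`${from}->${to}`];
  return +(rate.base + rate.perKg * weightKg).toFixed(2);
}

// Prerendered output: real explanatory copy with a worked example.
// The interactive calculator hydrates on top of this once JS runs.
function renderDefaultState() {
  const price = quote(DEFAULT_QUOTE);
  return (
    `<p>Example: a ${DEFAULT_QUOTE.weightKg} kg parcel from the US to the UK ` +
    `via ${DEFAULT_QUOTE.carrier} costs about $${price}.</p>`
  );
}
```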
Case Study · Shipping SaaS
Company valuation
Calculator pages
Stack
React · TypeScript · Node.js · Vue.js
■ How we work
Crawlability, indexability, rendering, structured data, Core Web Vitals, site architecture, on-page signals. You get a written audit with a prioritised fix list — not a generic PDF, not a sales document.
Robots, sitemap, canonical, broken redirects, indexation issues, missing meta — the changes that ship in the first sprint and start moving Search Console signals immediately. We do these before touching anything that takes longer.
Move marketing pages to SSG or prerender. Verify the rendered HTML matches what users see. Check that JavaScript-injected content (titles, schema, internal links) actually lands in the static output. This is where most SaaS SEO work breaks silently.
Add JSON-LD per page type. Verify with Google's Rich Results Test. Check that the schema is in the prerendered HTML, not inserted later. Add Organization, BreadcrumbList, and Article or Service schemas across the site as appropriate.
Profile real user metrics. Fix LCP (defer scripts, preconnect fonts, optimise hero images), CLS (set image dimensions, reserve space), INP (split long tasks, defer non-critical JS). Re-measure with field data, not Lighthouse.
Rebuild navigation, footer, and contextual internal links to signal page importance properly. Set up Search Console correctly, configure indexation monitoring, and check server logs for Googlebot crawl patterns. Hand over a written runbook of what to watch and when to act.
■ Honest comparison
| | SEO agency | Solo SEO consultant | In-house dev | Appycodes |
|---|---|---|---|---|
| Technical engineering depth | Limited | Limited | High | High |
| Ships fixes in production | Hands off to your dev team | Hands off to your dev team | Yes | Yes |
| Ranking strategy and intent | Yes | Yes | Rarely | Yes |
| Schema, prerender, CWV done right | Sometimes | Sometimes | If they prioritise it | Yes |
| Time to ship first fixes | 4-8 weeks | 4-6 weeks | Whenever it gets prioritised | 1-2 weeks |
| Owns code that lands | No | No | Yes | Yes |
| Writes content | Often | Sometimes | No | No |
None of these is universally right. A traditional SEO agency is the best call when your bottleneck is strategy, not engineering. We are the best call when an audit has already told you what's wrong and the problem is that nobody on your team has the time, context, or engineering depth to actually ship the fixes.
About the author
Ritesh leads engineering at Appycodes and has spent the last decade shipping SaaS products for funded startups across the UK, US, India and Australia. His focus is on production engineering — including the technical SEO work that quietly determines whether a SaaS product's content compounds or stalls.