■ TL;DR
- Started narrow, scaled wide. Hired in 2022 as one Android engineer to write Java for the existing app. Now we own the whole engineering stack — mobile, backend, ads platform, marketplace, and WordPress front.
- Rebuilt the app on React Native with a modern Gen-Z UI/UX, replacing the legacy native iOS (Swift) + native Android (Java) split that was costing the founder twice the maintenance effort.
- Shipped a full TikTok-style ads manager from scratch on Next.js + Node + Postgres + Python — with campaign creation, granular targeting, performance reporting, and ML-driven user-interest profiling.
- Launched BLOCCoin on Algorand as a real on-chain marketplace where users earn coins through the app and spend them with platform partners.
How the engagement started
BLOC came to us in 2022 with a narrow, well-defined problem: they needed an Android engineer who could write Java (specifically against their Java Struts servlet backend) and keep their existing native Android app moving forward. The iOS counterpart was a separate codebase written in Swift, maintained by a different developer. Two repositories, two languages, two release cycles — for a small team, the maintenance load was already heavier than the feature work.
We were brought on in that narrow Android role first. The brief was simple: keep the Android app shipping, fix the backlog, ship the next feature. No agency-style pitch, no attempt to expand scope. Do the work, prove the engineering, earn trust on the basics.
That trust is what turned a four-week Android engagement into a four-year engineering partnership across the entire BLOC ecosystem. Every expansion below started as a quiet question from the client — "could you also handle this?" — and ended with us taking another codebase off their plate.
Stage 1 — Native Android, Java & the existing servlet backend
The first engagement was pure execution. We picked up the existing native Android codebase, ran the Java Struts servlet backend in our own environment, and started shipping the backlog. The work was deliberately unglamorous: feature tickets, bug fixes, Play Store compliance updates, library upgrades. Servlet API quirks. Old build tooling.
Working inside someone else's codebase like this is its own discipline. You don't restructure on day one. You don't propose a rewrite. You match the existing code style, you find the patterns that already work, and you ship the immediate work cleanly first. We did that for several months before suggesting any architectural change.
Two outcomes from that period set everything that followed in motion. First, the founder noticed delivery cadence was consistently faster than before — features that used to take two weeks were landing in five days, with fewer regressions. Second, we started flagging legitimate technical issues in the Java Struts backend (DynamoDB queries falling back to full-table scans, sandbox/production overlap, a brittle deploy pipeline) — issues we were willing to fix if they wanted us to.
Stage 2 — Taking over the entire backend
About a year in, the founder asked whether we'd take over the full server-side. The previous backend developer was moving on. Maintaining a separate native iOS team and a separate backend team for an early-stage product was costing more than it was worth.
We picked up the Java Struts servlet codebase end-to-end. Our first job was the unglamorous work that nobody had had time for:
- Improved indexing on the DynamoDB tables. Several core queries were doing full-table scans because the access patterns had shifted since the original schema design. We added the right partition + sort keys and shifted hot queries onto secondary indexes. Per-call latency dropped from hundreds of milliseconds into tens.
- Cleanly separated sandbox and production. Previously the two environments were sharing more infrastructure than was safe. We split them, automated deploys per environment, and made it impossible to push untested code straight to prod.
- Removed the worst speed bottlenecks. Some of these were obvious (synchronous calls that should have been queued). Some were hidden inside servlet filters that ran on every request. We profiled the actual hot paths before changing anything, then moved the slow ones to background workers.
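To make the indexing change above concrete, here is the shape of the fix, with hypothetical table, index, and attribute names (the real schema isn't shown in this write-up). The legacy code scanned the whole table to answer "events for this venue"; a global secondary index keyed on the venue turns that into a targeted query:

```python
# Sketch with assumed names: "bloc-events" table, a GSI keyed on
# venue_id with starts_at as the sort key. This builds the kwargs you
# would pass to a DynamoDB client's query() call instead of scan().

def events_by_venue_query(venue_id: str, since_iso: str) -> dict:
    """Build Query parameters against a hypothetical venue_id GSI."""
    return {
        "TableName": "bloc-events",                # hypothetical name
        "IndexName": "venue_id-starts_at-index",   # hypothetical GSI
        "KeyConditionExpression": "venue_id = :v AND starts_at >= :t",
        "ExpressionAttributeValues": {
            ":v": {"S": venue_id},
            ":t": {"S": since_iso},
        },
    }
```

A Query against a GSI reads only the matching partition; a Scan touches every item in the table, which is why moving the hot reads onto indexes took per-call latency from hundreds of milliseconds into tens.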
By the end of Stage 2, BLOC had a single team owning the backend instead of two; the platform was meaningfully faster; and the founder had time back to think about the product instead of the infrastructure.
Stage 3 — The major rebuild on React Native
With the backend stable, the conversation moved to the apps. The native iOS (Swift) and native Android (Java) split was the biggest source of compounding pain in the product: every feature had to be designed twice, built twice, tested twice, and shipped twice. The two apps were slowly drifting apart in UX, and parity bugs were eating into roadmap time.
We proposed a full rebuild on React Native. Same backend, one mobile codebase, two store listings. The proposal came with a clear, honest tradeoff — you lose some native UI feel; you gain feature velocity, hiring leverage, and a path to feature parity that doesn't cost double. For BLOC's product surface (event feeds, social interactions, RSVP flows, profile management) the tradeoff was clearly worth it.
UX redesign first, build second
Before any code, we put our UX designer on the problem. The brief was a modern Gen-Z UI/UX — confident, fast, weighted toward visual hierarchy and discovery, with the kind of gesture-driven interactions users expect from TikTok and Instagram. The design system that came out of that work became the spine of every BLOC surface that followed (including the ads manager and the BLOCCoin marketplace).
The build
We built the React Native app with two non-negotiables:
- Performance over perfection. Screens had to be ready in under 200ms. Image-heavy event feeds needed proper caching. Animations had to run on the native thread, not on the JS thread.
- Real-world network conditions. BLOC users use the app inside venues — basements, sports halls, rooftop bars. We designed the network layer to assume every request might fail and retry intelligently rather than spin forever.
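The retry policy behind that second point can be sketched like this — in Python for brevity (the real implementation lives in the app's JS network layer): capped exponential backoff with jitter, surfacing the failure after a fixed number of attempts instead of spinning forever.

```python
import random
import time

def with_retries(request_fn, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Retry a flaky request with capped exponential backoff and jitter.

    Sketch of the policy only, not BLOC's actual code. `request_fn` is
    any callable that raises on failure and returns a response on success.
    """
    for attempt in range(attempts):
        try:
            return request_fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # give up and surface the error to the UI
            # Backoff doubles each round (0.5s, 1s, 2s, ...); the jitter
            # keeps a venue full of phones from retrying in lockstep.
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))
```

The `sleep` parameter is injected so the policy is testable without real delays — the same trick the JS layer uses with its scheduler.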
The app shipped on schedule. It launched on both the App Store and Google Play, with positive user feedback and improved engagement metrics reported by the founding team. Same backend. One codebase. Half the maintenance overhead.
Stage 4 — BLOC Ads Manager (TikTok-style)
With the app rebuilt, BLOC moved to the next strategic surface: monetisation through advertising. They wanted a self-serve ads platform their advertisers could use to run campaigns inside the BLOC app itself — modeled on TikTok Ads, with best practices borrowed from Google Ads and Meta Ads Manager.
This was a much bigger build. A self-serve ads platform isn't one feature — it's effectively a SaaS product inside a product. We scoped and shipped it end to end.
Stack
- Frontend: Next.js — multi-page advertiser dashboard with campaign creation flows, budget management, and reporting
- Backend: Node.js with Express — separate from the BLOC app backend, multi-tenant per advertiser
- Database: PostgreSQL — relational data was the right model for advertisers, campaigns, ad sets, and creatives, with their many-to-many relationships
- Profiling layer: Python services that classify users by interest signals — frequent visits to nightlife venues mark a "partygoer", repeated bookings at Asian-cuisine venues mark a "foodie", and so on
What advertisers can do
Advertisers go to BLOC's site, sign in to the ads manager, and run campaigns end-to-end:
- Create a campaign and pick an objective (awareness, traffic, conversions)
- Build ad sets with budgets, schedules, and bidding strategies
- Target the audiences our profiling layer surfaces — partygoers, foodies, students, sports fans
- Upload creatives, preview them in BLOC's in-app placements
- Watch the campaign run with real-time performance reporting — impressions, clicks, conversions, demographic breakdown
- Adjust spend, pause, A/B test new creative — all without contacting BLOC support
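The real-time reporting in that list reduces to a handful of standard ad-industry ratios. A minimal sketch — field names are assumptions for illustration, not the platform's actual schema:

```python
def campaign_report(impressions: int, clicks: int, conversions: int,
                    spend: float) -> dict:
    """Derive the standard metrics an advertiser sees on the dashboard.

    Illustrative only; the real reporting service aggregates these over
    time windows and demographic slices in PostgreSQL.
    """
    return {
        "ctr": clicks / impressions if impressions else 0.0,        # click-through rate
        "cvr": conversions / clicks if clicks else 0.0,             # conversion rate
        "cpc": spend / clicks if clicks else 0.0,                   # cost per click
        "cpm": spend / impressions * 1000 if impressions else 0.0,  # cost per 1,000 impressions
    }
```

Keeping the raw counts in Postgres and deriving the ratios at read time is what makes "adjust spend, pause, A/B test" safe — the numbers always reconcile, whatever the advertiser changes mid-flight.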
Why we built profiling on Python, not the main backend
The user-interest classification is genuinely separate work from request handling. It runs on user behaviour over time — event check-ins, bookings, app interactions — and produces audience segments that the ads platform queries on demand. It made sense to keep that as a Python service that runs on its own schedule, writes to a dedicated table, and exposes a simple read API to the ads manager. Easier to evolve, easier to retrain, easier to move to a heavier ML pipeline later without touching the rest of the system.
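The core step of that batch job — mapping behaviour over time to audience segments — can be sketched as threshold rules over venue-category check-ins. Segment names come from the write-up above; the thresholds and category labels here are invented for illustration, and the production service is more involved:

```python
from collections import Counter

# Hypothetical rules: pick up a segment after N check-ins at venues of a
# given category within the profiling window.
SEGMENT_RULES = {
    "partygoer": ("nightlife", 3),
    "foodie": ("restaurant", 3),
    "sports_fan": ("sports", 2),
}

def segments_for_user(checkins: list[str]) -> set[str]:
    """Map a user's venue-category check-ins to audience segments."""
    counts = Counter(checkins)
    return {
        segment
        for segment, (category, threshold) in SEGMENT_RULES.items()
        if counts[category] >= threshold
    }
```

Because the job writes finished segments to its own table, swapping these rules for a trained model later changes nothing on the ads-manager side — it keeps reading the same segment labels through the same API.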
Ad placements live inside the BLOC app
Advertisers self-serve from business.getonbloc.com — campaigns, targeting, and live performance dashboards.
Stage 5 — WordPress front and ongoing security
BLOC's public-facing front (marketing pages, content, press, contact) lives on WordPress. We picked up maintenance for those sites alongside everything else and have run them since.
The WordPress work isn't glamorous but it's important. We treat it the same way we treat the rest of the stack — the codebase is in version control, plugin updates go through a staging environment first, security headers are hardened, and admin access is audited. None of that interrupts the BLOC team's content workflow; they keep publishing exactly the way they always did. The difference is that nothing breaks at 3am.
Stage 6 — BLOCCoin marketplace on Algorand
The most recent strategic build is BLOCCoin: a real on-chain token built on the Algorand blockchain, with a marketplace inside the BLOC app where users can spend their coins on real-world goodies offered by the platform's partners.
We chose Algorand over the more visible alternatives for specific engineering reasons. Algorand's Pure Proof of Stake gives finality in seconds with predictable fees — which matters when a user redeems a coin in a nightclub queue and needs the transaction confirmed before the bouncer asks again. The carbon footprint also lines up with BLOC's brand position. The Algorand Standard Asset (ASA) primitive let us issue BLOCCoin without writing a custom token contract.
The marketplace surface itself is integrated into the React Native app. Users see their BLOCCoin balance in their profile, browse a marketplace of partner offers, and redeem with a single tap — the on-chain transaction is signed and confirmed in the background while the UI shows a clean success state. From the user's perspective it works like any other in-app spend; from the platform's perspective every coin earned and spent is verifiable on chain.
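The redemption step described above follows a simple shape: check the balance, submit the signed transfer, show success while confirmation completes in the background. A hedged sketch — `submit_transfer` stands in for the signed ASA transfer to the partner's address (the real call goes through the Algorand SDK and confirms within seconds), and all names here are illustrative:

```python
def redeem_offer(balance: int, price: int, submit_transfer) -> dict:
    """Sketch of the marketplace redemption step, not production code.

    `submit_transfer` abstracts the on-chain ASA transfer; it returns a
    transaction id we keep so every spend can be reconciled on chain.
    """
    if balance < price:
        return {"ok": False, "reason": "insufficient_balance"}
    txid = submit_transfer(amount=price)
    # UI can show the success state immediately; Algorand's seconds-level
    # finality means the background confirmation rarely lags the user.
    return {"ok": True, "txid": txid, "new_balance": balance - price}
```

Keeping the chain client behind a small interface like this is also what lets the React Native surface treat a redemption like any other in-app spend.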
Outcomes
Four years in, BLOC's engineering picture looks fundamentally different from where it started:
- One mobile codebase instead of two — same features ship to both stores from one team
- One backend team instead of two — the Java Struts backend is properly indexed, cleanly deployed, and no longer a liability
- A working ads platform generating advertiser revenue inside the app
- An on-chain rewards economy via BLOCCoin that ties the social product to a real spendable asset
- WordPress frontend running cleanly, securely, and without surprise outages
And practically: we've become the team the founder calls first for any engineering question, not the team they call when something is on fire.
Stack summary
| Layer | Stack |
|---|---|
| Mobile (current) | React Native (iOS + Android) |
| Mobile (legacy) | Native Swift (iOS), Native Java (Android) |
| App backend | Java Struts servlet, DynamoDB |
| Ads Manager frontend | Next.js |
| Ads Manager backend | Node.js (Express), PostgreSQL |
| User profiling | Python services |
| Marketing front | WordPress |
| Token economy | Algorand (BLOCCoin as ASA) |
What the client says
■ Verified Clutch review
“Four years in, Appycodes runs the engineering for every major surface of BLOC — the React Native app, the full backend, the ads platform, the WordPress front and the BLOCCoin marketplace. Cadence is consistent, quality is consistent, and they've become the team I call first for any engineering question. I've recommended them to several other founders.”
Josh, Founder
BLOC — Social Events & Rewards
Quote paraphrased from a verified Clutch review and the ongoing engagement.
■ Related services
Building something in this shape?

About the author
Ritesh — Founder, Appycodes
Ritesh leads engineering at Appycodes and has spent the last decade shipping social, marketplace, and SaaS products for funded startups across the UK, US, India and Australia — including BLOC, Khatabook, CREOATE, Easyship, and others. Reach out if you have a multi-year build in mind that needs a real engineering partner rather than a vendor.
