Lessons from the Dev: Developing ScarletFace For Trials

Behind the Code: The Developer’s Story of ScarletFace For Trials

ScarletFace For Trials began as a small experiment: a prototype built to test a novel matchmaking algorithm and a fresh user-experience concept in the crowded field of competitive trials and esports. Over three years, that seed grew into a full-featured product, shaped by design trade-offs, community feedback, technical debt, and the realities of shipping under pressure. This article walks through the developer’s side of that journey — the decisions, mistakes, recoveries, and the philosophies that emerged while building ScarletFace For Trials.


Origins and motivation

The core idea that sparked ScarletFace For Trials was simple: make trial matchmaking and performance tracking more transparent and fair, while giving players tools to improve. Existing platforms often prioritized retention metrics or monetization hooks over meaningful feedback and balanced pairing. Our founding team — a mix of gameplay engineers, data scientists, and players — wanted a product that rewarded skill development and reduced frustration from mismatched play.

From day one we framed the project around a few non-negotiables:

  • Fairness in matchmaking.
  • Actionable analytics for players.
  • Low-friction onboarding for new users.
  • Respect for player privacy.

These priorities influenced every technical and product decision that followed.


Tech stack and architecture choices

We chose a modular, event-driven architecture to allow independent scaling of critical parts: matchmaking, telemetry ingestion, analytics, and the client-facing UI. Key components:

  • Real-time matchmaking service written in Go for low-latency matching and concurrency handling.
  • Telemetry pipeline using Kafka for reliable event streaming.
  • Analytics and ranking computations in Python, leveraging NumPy/Pandas and lightweight ML models for performance estimation.
  • A cloud-hosted PostgreSQL database for durable user and match records; Redis for ephemeral state and leaderboards.
  • Front-end built with React and WebSockets for live updates.

Why these choices? Go’s concurrency model fit matchmaking well; Kafka ensured we wouldn’t lose event data under load; Python let our data scientists iterate quickly on ranking logic. The separation of concerns made it easier to replace parts later without rewriting everything.
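
As a small illustration of the ephemeral-state layer, here is a minimal sketch of a Redis-backed leaderboard built on sorted sets. The key name, client setup, and rating values are assumptions for illustration, not the production schema.

```python
# Minimal leaderboard sketch using Redis sorted sets (redis-py).
# The key name "trials:leaderboard" is a placeholder, not the production schema.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def record_result(player_id: str, new_rating: float) -> None:
    """Store the player's latest rating; ZADD overwrites the previous score."""
    r.zadd("trials:leaderboard", {player_id: new_rating})

def top_players(n: int = 10) -> list[tuple[str, float]]:
    """Return the top-n players ordered by rating, highest first."""
    return r.zrevrange("trials:leaderboard", 0, n - 1, withscores=True)

if __name__ == "__main__":
    record_result("player:123", 1520.0)
    record_result("player:456", 1487.5)
    print(top_players(5))
```

Because sorted sets keep members ordered by score, reads for live leaderboards stay cheap even under heavy write traffic, while PostgreSQL remains the durable record of truth.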


The matchmaking problem — balancing skill and experience

Matchmaking is often described as both art and science. We started with a basic ELO-derived system but quickly realized trials are more complex: team compositions, map-specific skill, recent form, and even time-of-day effects matter.

We iterated through several models:

  1. Simple ELO with decay — easy to implement but brittle for teams.
  2. TrueSkill-inspired team model — better but slow to compute at scale.
  3. Hybrid model — ELO baseline with modifiers for role, map, and recent performance using lightweight statistical features.

The hybrid model struck the best balance. To avoid opaque scores, we surfaced simple explanations to players (e.g., “matched slightly higher due to win streak and map advantage”), which improved perceived fairness.
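
A simplified sketch of the hybrid idea is shown below: an Elo-style baseline adjusted by small, bounded modifiers for map familiarity and recent form, with the applied modifiers collected into a human-readable explanation. The weights, caps, and field names are illustrative assumptions, not the shipped model.

```python
# Sketch of a hybrid rating: Elo baseline plus bounded modifiers.
# Weights, caps, and field names are illustrative, not the production values.
from dataclasses import dataclass

K_FACTOR = 24          # assumed Elo K-factor
MODIFIER_CAP = 50.0    # assumed cap so modifiers never dominate the baseline

@dataclass
class PlayerForm:
    rating: float                 # Elo-style baseline
    map_winrate: float = 0.5      # recent winrate on the queued map
    win_streak: int = 0           # consecutive wins

def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo expectation for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def effective_rating(p: PlayerForm) -> tuple[float, list[str]]:
    """Baseline rating plus capped modifiers, with reasons for transparency."""
    reasons = []
    modifier = 0.0
    if p.win_streak >= 3:
        modifier += 15.0
        reasons.append("win streak")
    if p.map_winrate > 0.55:
        modifier += 100.0 * (p.map_winrate - 0.5)
        reasons.append("map advantage")
    modifier = max(-MODIFIER_CAP, min(MODIFIER_CAP, modifier))
    return p.rating + modifier, reasons

def update_after_match(winner: PlayerForm, loser: PlayerForm) -> None:
    """Apply a plain Elo update to the baseline ratings after a trial."""
    exp_win = expected_score(winner.rating, loser.rating)
    winner.rating += K_FACTOR * (1.0 - exp_win)
    loser.rating -= K_FACTOR * (1.0 - exp_win)

if __name__ == "__main__":
    a = PlayerForm(rating=1500, map_winrate=0.62, win_streak=4)
    b = PlayerForm(rating=1510)
    rating_a, why = effective_rating(a)
    print(f"matched at {rating_a:.0f} ({', '.join(why) or 'baseline only'})")
    update_after_match(a, b)
```

Capping the modifiers keeps the baseline rating in control, and the collected reasons are exactly the kind of short explanations that can be surfaced to players after a match.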


Telemetry, observability, and the data backlog

Collecting rich telemetry was a priority: every trial produced hundreds of events. We ingested these via Kafka into S3 for raw logs and into a stream-processing layer for near-real-time metrics. Early on we underestimated the volume; storage and processing costs ballooned.

Lessons learned:

  • Sample aggressively where full fidelity isn’t required (a sampling sketch follows this list).
  • Define retention policies by event importance.
  • Invest in tooling to explore raw event data quickly; it saved hours during incident investigations.
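
Below is a minimal sketch of the kind of per-event sampling and retention policy these lessons point toward. The event names, sample rates, and retention windows are assumptions for illustration, not the actual policy.

```python
# Sketch of per-event sampling and retention rules.
# Event names, sample rates, and retention windows are illustrative only.
import random

# Keep 100% of match outcomes, but heavily sample high-volume movement events.
SAMPLING = {
    "match_result": 1.0,
    "ability_used": 0.25,
    "position_tick": 0.01,
}

# Retention (in days) keyed by how important the event is for later analysis.
RETENTION_DAYS = {
    "match_result": 730,
    "ability_used": 90,
    "position_tick": 7,
}

def should_ingest(event_type: str) -> bool:
    """Probabilistically drop low-importance events before they hit storage."""
    rate = SAMPLING.get(event_type, 0.1)  # conservative default for unknown events
    return random.random() < rate

def retention_days(event_type: str) -> int:
    return RETENTION_DAYS.get(event_type, 30)

if __name__ == "__main__":
    events = ["match_result", "position_tick", "ability_used", "position_tick"]
    kept = [e for e in events if should_ingest(e)]
    print(kept, [retention_days(e) for e in kept])
```

Making the rates and windows explicit, per event type, is what keeps storage costs predictable while preserving full fidelity for the events that actually drive ranking and incident analysis.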

We also built dashboards for core health metrics (latency, match success rate, queue times) and product metrics (match balance, churn after matches). These dashboards were crucial for spotting regressions after deployments.


UX and communication: why transparency matters

Players dislike opaque systems. We focused on transparency: explaining why a match was made, showing recent form and role-weight, and offering post-match breakdowns with clear visualizations (heatmaps, timeline events, and concise stats).

Design trade-offs:

  • Too much data overwhelms; we layered information with progressive disclosure.
  • Real-time updates needed to be unobtrusive; we used subtle signals (colored badges, short blurbs) rather than modal popups.

This approach reduced complaints and increased usage of post-match analytics — a core engagement metric.


Scaling pains and unexpected outages

No project ships without incidents. Our first major outage came from a cascade: a bug in a schema migration caused worker crashes, Kafka lag, and eventually timeouts in matchmaking. The team’s response plan matured quickly after that incident.

Key improvements post-incident:

  • Blue/green deployments and feature flags to isolate risky changes (a flag-guard sketch follows this list).
  • Automated schema migration checks and small, backward-compatible changes.
  • Chaos-testing for critical paths (we introduced occasional simulated worker failures).
  • Better runbooks and a rotation for on-call engineers.
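
To make the feature-flag point concrete, here is a minimal sketch of a flag-guarded code path with an environment-driven kill switch. The flag name, environment-variable mechanism, and ranker functions are assumptions for illustration, not the actual rollout tooling.

```python
# Sketch of a feature-flag guard around a risky matchmaking change.
# The flag name and env-var mechanism are illustrative; a real rollout would
# typically use a flag service with per-user targeting and gradual ramp-up.
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a boolean flag from the environment (e.g. FLAG_NEW_RANKER=1)."""
    raw = os.environ.get(f"FLAG_{name.upper()}")
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

def stable_ranker(players):
    """Known-good fallback ordering by rating."""
    return sorted(players, key=lambda p: p["rating"], reverse=True)

def experimental_ranker(players):
    """Placeholder for the change being rolled out behind the flag."""
    return sorted(players, key=lambda p: (p["rating"], p.get("win_streak", 0)), reverse=True)

def rank_players(players):
    if flag_enabled("new_ranker"):
        return experimental_ranker(players)   # risky path, easy to switch off
    return stable_ranker(players)             # known-good fallback

if __name__ == "__main__":
    roster = [{"name": "a", "rating": 1500}, {"name": "b", "rating": 1520, "win_streak": 2}]
    print([p["name"] for p in rank_players(roster)])
```

The value is less in the mechanism than in the habit: every risky change ships behind a switch that can be flipped off without a redeploy.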

These operational investments reduced mean time to recovery for later incidents by an order of magnitude.


Privacy, data ethics, and community trust

Respecting player privacy was a stated priority. We minimized personally identifiable data collection, used aggregated analytics for public leaderboards, and gave players controls over what was shared. Practically, this meant:

  • Storing only hashed identifiers where possible (a hashing sketch follows this list).
  • Offering opt-out toggles for data collection and public stats.
  • Clear documentation on what data is used for matchmaking and analytics.
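
As one example of the hashed-identifier approach, the sketch below derives a stable pseudonymous key from a player ID using a keyed hash with a server-side salt. The salt handling and key format are assumptions for illustration, not the production setup.

```python
# Sketch of pseudonymous player keys: keyed SHA-256 of the raw identifier.
# Salt handling and key format are illustrative; a real deployment should keep
# the salt in a secrets manager and consider separate salts per purpose.
import hashlib
import hmac
import os

# Assumed: a long-lived server-side secret, never stored next to the data.
SALT = os.environ.get("PLAYER_ID_SALT", "dev-only-salt").encode()

def pseudonymize(player_id: str) -> str:
    """Return a stable, non-reversible key usable for analytics joins."""
    digest = hmac.new(SALT, player_id.encode(), hashlib.sha256)
    return digest.hexdigest()

if __name__ == "__main__":
    print(pseudonymize("player:123"))  # same input always maps to the same key
```

Because the same input always maps to the same key, analytics joins still work, but the raw identifier never needs to leave the ingestion boundary.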

Trust earned through transparency translated into higher participation in voluntary telemetry programs, which improved model quality without compromising privacy.


Monetization without undermining fairness

We avoided pay-to-win mechanics. Monetization focused on cosmetic items, premium analytics dashboards, and faster replays. This preserved the competitive integrity of matchmaking. We carefully separated monetization code paths from match-influencing systems and audited interfaces to ensure no indirect advantage could be bought.


Team culture and remote engineering practices

The team was distributed across time zones. We emphasized asynchronous communication, lightweight specs for features, and regular design reviews. Code reviews were mandatory and often focused on clarity and operational safety as much as correctness.

A few cultural practices that paid off:

  • “Ship small, iterate fast” — shorter PRs, quicker rollbacks.
  • Postmortems without blame — focus on fixes and systemic improvements.
  • Cross-disciplinary pairing — engineers and data scientists working together on ranking logic.

These norms helped maintain velocity while controlling risk.


What we’d do differently

Hindsight brings clarity. If starting over:

  • Start with stricter sampling and retention rules for telemetry to control costs earlier.
  • Invest sooner in automated testing for migration scripts.
  • Prototype transparency features with real users earlier to refine wording and avoid misinterpretation.
  • Make the matchmaker more modular from day one to simplify experimentation with ranking algorithms.

The continuing evolution

ScarletFace For Trials remains an active project. New priorities include richer role modeling, better anti-abuse detection, and expanded educational tools (coach-recommended drills based on match weak points). The roadmap emphasizes improving fairness and supporting a healthy competitive ecosystem.


Closing thoughts

Building ScarletFace For Trials was equal parts engineering challenge, product design, and community work. The developer’s story is one of iterative learning: balancing technical constraints, ethical choices, and player needs. The result isn’t a perfect system but a continually improving platform shaped directly by feedback from the people who use it.
