Job Twin catalog — UpSkillZone AI

Five Job Twins. Each produces verifiable skill claims.

A Job Twin is a scoped, real-world deliverable graded by a calibrated mentor against a fixed rubric. It is not a lecture, not a quiz, and not autograded homework. Every score that lands on a credential was signed by a human who cleared the kappa floor in the mentor agreement.

Each twin is time-boxed against a server-authoritative clock. The clock is the assessment — a portfolio piece without a deadline is not a comparable signal. A pass produces one or more cryptographically signed skill assertions drawn from the public taxonomy, each weighted by the mentor against a per-twin ceiling.

The five twins below are the curriculum endpoint of the AI Engineer track. A learner who clears all five holds a credential a hiring manager can read, run, and verify without the platform mediating the trust handshake.
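
What "verify without the platform" can look like in practice: a minimal sketch, assuming each assertion is an Ed25519 signature over a canonical JSON payload. PyNaCl, the field names, and the key-distribution details are illustrative assumptions, not the platform's actual schema.

```python
# Minimal sketch: a mentor signs a skill assertion, and anyone holding the
# mentor's public key verifies it offline. Assumes Ed25519 via PyNaCl;
# payload fields are illustrative, not the platform's real schema.
import json
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

def canonical(payload: dict) -> bytes:
    # Sorted keys, no whitespace: signer and verifier serialize identically.
    return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()

# Mentor side: sign one assertion at grading time.
mentor_key = SigningKey.generate()
payload = {
    "skill": "llm.inference.streaming-design",  # from the public taxonomy
    "twin": "jt-inference-1",
    "weight": 0.8,  # mentor-assigned, capped by the per-twin ceiling
}
signature = mentor_key.sign(canonical(payload)).signature

# Hiring-manager side: verify with only the mentor's public key, no
# platform round-trip required.
try:
    mentor_key.verify_key.verify(canonical(payload), signature)
    print("assertion verified")
except BadSignatureError:
    print("signature invalid")
```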

The 5 twins

  • JT 1 · day twin

    Job Twin 1 — Inference fundamentals

    You're shipping a streaming chat endpoint.

    Time-box: 6 hours
    Pass threshold: 70%
    Re-attempts: 1 allowed

    Skill assertions

    • llm.inference.streaming-design
    • llm.api.error-handling

    Open the jt-inference-1 brief →

  • JT 2 · day twin

    Job Twin 2 — RAG Evaluation Harness

    Your team ships a customer-support RAG.

    Time-box: 6 hours
    Pass threshold: 70%
    Re-attempts: 1 allowed

    Skill assertions

    • llm.evals.dataset-design
    • llm.rag.retrieval-pipeline

    Open the jt-rag-eval-1 brief →

  • JT 3 · day twin

    Job Twin 3 — Production deployment + observability

    Take an existing model-serving service and harden it for production: containerize it, add health/readiness probes, structured logs, OpenTelemetry traces, and a basic SLO dashboard. (A probe sketch follows the catalog below.)

    Time-box: 8 hours
    Pass threshold: 70%
    Re-attempts: 1 allowed

    Skill assertions

    • llm.ops.containerization
    • llm.ops.observability

    Open the jt-prod-deploy-3 brief →

  • JT 4 · live incident

    Job Twin 4 — Live incident response

    A customer-facing model is hallucinating regulated content.

    Time-box: 90 min live + 4 h post-mortem
    Pass threshold: 65%
    Re-attempts: 1 allowed

    Skill assertions

    • llm.safety.hallucination-mitigation
    • llm.ops.incident-response

    Open the jt-incident-4 brief →

  • JT 5 · capstone

    Capstone — Production AI service

    Ship an end-to-end production AI service of your choosing.

    Time-box: 14 days
    Pass threshold: 75%
    Re-attempts: none

    Skill assertions

    • llm.ops.system-design
    • llm.evals.dataset-design
    • llm.safety.security-review
    • +1 more

    Open the jt-capstone-6 brief →
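
One concrete slice of JT 3, since the health/readiness split is the piece candidates most often conflate: a minimal probe sketch, assuming FastAPI (the brief does not mandate a framework, and `model_loaded` is a stand-in for real startup state).

```python
# Liveness vs. readiness, the distinction JT 3's probes hinge on.
# Illustrative sketch using FastAPI; the brief does not prescribe one.
from fastapi import FastAPI, Response

app = FastAPI()
model_loaded = False  # stand-in: flipped once weights finish loading

@app.get("/healthz")
def liveness() -> dict:
    # Liveness: the process is up. If this fails, restart the container.
    return {"status": "ok"}

@app.get("/readyz")
def readiness(response: Response) -> dict:
    # Readiness: only admit traffic once the model can actually serve.
    if not model_loaded:
        response.status_code = 503
        return {"status": "loading"}
    return {"status": "ready"}
```

The two probes answer different questions: a restart fixes a dead process but not a model that is still loading, and collapsing both into one check loses that signal.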

How they fit

The twins map onto the curriculum stages in order. JT 1 (inference fundamentals) is the foundations gate — backpressure and graceful timeouts are the entry to production LLM work. JT 2 (RAG evaluation) is the program's most differentiating deliverable: evals are the most under-taught skill in production AI engineering, and a learner who can build a calibrated harness can build anything downstream of one.
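
To make "calibrated harness" concrete, the core move is scoring retrieval against a labeled set instead of eyeballing transcripts. A toy sketch; the corpus, dataset, and retriever below are stand-ins, not the JT 2 brief:

```python
# Toy eval harness: measure recall@k on a labeled set rather than
# judging outputs by eye. Corpus, dataset, and retriever are stand-ins.
def recall_at_k(dataset, retrieve, k):
    """Fraction of queries whose gold passage appears in the top-k."""
    hits = sum(ex["gold_id"] in retrieve(ex["query"], k) for ex in dataset)
    return hits / len(dataset)

corpus = {
    "p1": "reset your password from the account page",
    "p2": "invoices and billing history live under settings",
    "p3": "api rate limits are enforced per key",
}

dataset = [
    {"query": "how do i reset my password", "gold_id": "p1"},
    {"query": "why was i billed twice", "gold_id": "p2"},
]

def retrieve(query, k):
    # Naive keyword-overlap retriever; the real pipeline under test goes here.
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda pid: -len(q & set(corpus[pid].split())))
    return ranked[:k]

# Prints 0.50: "billed" never lexically matches "billing", which is
# exactly the failure class a harness surfaces and a vibe check misses.
print(f"recall@1 = {recall_at_k(dataset, retrieve, k=1):.2f}")
```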

JT 3 (production deployment + observability) closes the handoff from notebook to deployable service — the line where most AI engineering interviews fall apart. JT 4 (live incident response) is the only twin with a 90-minute live window: on-call competence under genuine time pressure is the single most under-tested skill in AI engineering hiring today. JT 5 (capstone) is the headline artifact on the credential — a two-week build with two independent mentor reviewers and a public deliverable.

The cohorts that ship these twins. Browse tracks →

The credentials these twins issue, queryable in the public ledger. Browse verified skills →

The thesis behind mentor-graded, time-boxed assessment. Read the manifesto →