OSIT

Standardizing operational readiness reporting through speed, clarity, and control

Overview

Space Operations Command (SpOC) relied on fragmented tools and manual processes to report operational capability (OPSCAP) and system capability (SYSCAP) status, creating delays and increasing risk for time-sensitive missile warning operations.

As a Senior Product Designer at Rise8, I led design on the OSIT (OPSCAP / SYSCAP Input Tool), a centralized reporting system built with U.S. Space Force Guardians to standardize how operational status is captured and shared.

By making intentional tradeoffs between flexibility, speed, and visualization, we replaced scattered reporting methods with a single operational picture.

Primary outcomes:

  • Centralized system of record for OPSCAP and SYSCAP reporting

  • Faster status submission for Crew Commanders

  • Improved real-time situational awareness across Space Operations Command

  • Reduced manual duplication across phone, email, and slide-based reporting

Mission & Business Context

Crew Commanders are required to submit OPSCAP and SYSCAP reports within strict time windows to support missile warning and space operations.

Before OSIT, reporting was distributed across:

  • Phone calls

  • Emails

  • PowerPoint slides

  • Multiple legacy applications

This created three systemic risks:

  1. Delayed awareness of degraded systems

  2. Conflicting versions of system status

  3. High cognitive load under time pressure

The goal was not simply to digitize reports, but to create a single operational truth that leadership could trust.

Strategic Tradeoffs

Tradeoff 1: Flexibility vs. speed of reporting

Some users wanted highly customized reports, but flexibility increased completion time and error risk.

Decision:
We constrained report structure and prioritized fast, repeatable status updates.

Outcome:
Crew Commanders could complete critical reporting faster with fewer mistakes.


Tradeoff 2: Text detail vs. visual comprehension

Traditional reports relied on narrative descriptions, but these were slow to scan under pressure.

Decision:
We prioritized a visual operating picture over long-form textual summaries.

Outcome:
System health could be understood at a glance instead of parsed line-by-line.


Tradeoff 3: Real-time updates vs. operational control

Continuous live editing risked partial or inconsistent data states.

Decision:
We supported controlled multi-report submissions and quick actions instead of free-form real-time editing.

Outcome:
Statuses remained consistent while still enabling rapid bulk updates.
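The submission model above, validate an entire batch first and only then commit it, can be sketched in a few lines. This is an illustrative sketch of the general pattern, not OSIT's actual implementation; all names here (ReportStore, StatusReport, the system identifiers) are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    GREEN = "fully operational"
    YELLOW = "degraded"
    RED = "non-operational"

@dataclass(frozen=True)
class StatusReport:
    system: str
    status: Status
    remarks: str = ""

class ReportStore:
    """Holds the last *committed* status for each system.

    Submissions are all-or-nothing: a batch is validated in full
    before any of it is applied, so readers never observe a
    half-updated operating picture.
    """
    def __init__(self):
        self._committed: dict[str, StatusReport] = {}

    def submit_batch(self, reports: list[StatusReport]) -> None:
        # Validate the whole batch before touching committed state.
        for r in reports:
            if not r.system:
                raise ValueError("report missing system identifier")
        # Commit in a single assignment of the merged mapping.
        self._committed = {**self._committed,
                           **{r.system: r for r in reports}}

    def snapshot(self) -> dict[str, Status]:
        # The view readers see: only fully committed statuses.
        return {name: r.status for name, r in self._committed.items()}
```

The design choice this illustrates: a failed or partial batch leaves the committed picture exactly as it was, which is what keeps statuses consistent while still allowing rapid bulk updates.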

Research and Risk Reduction

Research focused on understanding failure points in the reporting chain. We:

  • Interviewed Space Force Guardians and Crew Commanders

  • Mapped the fragmented reporting ecosystem

  • Observed time-bound reporting workflows

Key insight:

Accuracy mattered less than shared understanding within tight reporting windows.

This reframed the problem from:
“make reporting easier” → “make readiness visible.”

“In the older software, I had to read and understand and have some level of analysis to look at what the status is. By having OSIT as a graphical interface, you can put more things on the common operating picture. So, it allows you to maintain situational awareness of more things in one place.”

– SpOC Captain

Design Strategy

We concentrated design effort on three moments:

  • Status entry

  • Status review

  • Status sharing

Key interaction decisions:

  • Batch reporting instead of single-system updates

  • Prominent status indicators

  • Minimal-step workflows for common actions

Each flow was evaluated against a single outcome:
Could leadership immediately understand system health?

Leadership & Influence

I worked closely with:

  • Space Force Guardians

  • Product leadership

  • Engineers using Palantir Foundry

My role included:

  • Translating mission needs into product strategy

  • Leading design reviews with operators

  • Aligning UX decisions with data model constraints

  • Driving prioritization between features and speed

Outcome:
Design became a decision-making input to mission workflows, not just an interface layer.

Reflection

This project reinforced that high-stakes systems require intentional tradeoffs.

Key lessons:

  • Speed and consistency often outweigh customization

  • Visual clarity outperforms narrative detail under pressure

  • UX can directly influence mission readiness

If I were to revisit this project:

  • I would instrument reporting time earlier

  • I would stress-test visualization under degraded data conditions

  • I would push for tighter automation with source systems

OSIT demonstrated that well-designed tools can improve how mission-critical decisions are made — not just how data is entered.