
B2B SaaS Case Study

Multi-User Automotive Platform

Problem: Multiple teams worked on shared data within the same platform, but the system had no concept of shared state or readiness. Ownership, while enforced through permissions, was not visible where cross-team coordination was required. This led to version conflicts, manual coordination, and unreliable releases.

Intervention: Defined and implemented a shared data lifecycle model that governed how data moved across editing, staging, validation, and release - establishing system-level ownership, visibility, and readiness signals.

    Impact:
  • 30% reduction in release errors
  • 25% faster staging and validation
  • Increased cross-team visibility and alignment
  • Reduced manual coordination and operational overhead


Context & Complexity

I led UX for a core workflow within an enterprise automotive platform used by marketing, pricing, and product teams to manage vehicle data prior to release. While permissions restricted who could edit specific data, once changes were staged and merged, ownership was no longer visible - making coordination and issue resolution difficult.

  • Multiple teams edited shared data in parallel within the same platform
  • Changes were staged independently, with no system-level coordination
  • There was no way to compare draft changes to production before staging
  • A single release required combining data from all teams into a unified dataset


Core Problem

Multiple teams edited shared data in parallel without a unified coordination model.

    No Shared State Model
  • Version conflicts between staged and unstaged data
  • Mixed and unclear data states across teams
    No Readiness Model
  • No signal that the full dataset was ready for validation
  • Teams relied on manual communication to coordinate
    Ownership Not Visible Where It Mattered
  • Ownership enforced through permissions during editing
  • Not visible once data was merged across teams
  • Issues during validation were not clearly attributable
    No Pre-Validation Capability
  • No way to compare draft data to production before staging
  • Issues discovered too late in validation
    Reactive Validation Model
  • Manual validation required
  • Errors surfaced after multiple teams had staged

Result: Unpredictable, low-confidence releases

Before → After
Uncoordinated parallel edits → Shared lifecycle with defined states
Ownership implicit (permission-based) → Ownership visible and actionable across the lifecycle
No shared 'readiness' signal → Data 'ready' states and notifications
Manual, reactive validation → System + AI-assisted validation
Mixed, unclear data states → Visible and trackable states and versions
Unpredictable, low-confidence releases → Defined release readiness

Root cause: Lack of a shared lifecycle model.


Research & Problem Framing

Through research and workflow analysis, several systemic issues emerged:

Key Insights
  • Errors occurred at team handoffs
  • No visibility into parallel work
  • No way to compare edited data to production
  • Inconsistent definitions of "ready"
  • Validation was a major bottleneck

Conclusion: Ownership was implicit during editing, but broke down once data was merged across teams - particularly in scenarios involving dependencies and partial staging - leading to delays and manual coordination. This revealed that the problem was not the individual workflows, but the absence of a shared system model to support coordination across teams.


Designing the System Model

I worked closely with product and engineering to align on a shared lifecycle model, establishing a common understanding of data state, readiness, and ownership across teams.

Defined a lifecycle model governing how data moved through the system:
Active Editing → Compare → Staging → Validation → Readiness → Release

Automated validation was followed by a QA review step, allowing teams to approve exceptions and ensure data quality before release. This lifecycle became the governing model across the platform.
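The lifecycle above behaves like a small state machine: data can only move along defined transitions, and a failed comparison or validation sends it back to editing. A minimal sketch of that idea in Python (all names hypothetical; this is an illustration of the model, not the platform's actual implementation):

```python
from enum import Enum, auto

class Stage(Enum):
    """Lifecycle stages, matching the model above."""
    EDITING = auto()
    COMPARE = auto()
    STAGING = auto()
    VALIDATION = auto()
    READINESS = auto()
    RELEASE = auto()

# Allowed transitions. Compare and Validation can send data
# back to Editing when issues are found.
TRANSITIONS = {
    Stage.EDITING: {Stage.COMPARE},
    Stage.COMPARE: {Stage.STAGING, Stage.EDITING},
    Stage.STAGING: {Stage.VALIDATION},
    Stage.VALIDATION: {Stage.READINESS, Stage.EDITING},
    Stage.READINESS: {Stage.RELEASE},
    Stage.RELEASE: set(),
}

def advance(current: Stage, target: Stage) -> Stage:
    """Move a dataset to the target stage, rejecting undefined transitions."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current.name} -> {target.name}")
    return target
```

Encoding the transitions explicitly is what makes data progression visible and enforceable: the system, not team convention, decides what "next" means.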

Supporting System Principles
  • Defined state transitions to make data progression visible from editing through release
  • Introduced system-level readiness signals to determine when a dataset was safe to validate
  • Made ownership visible at points of coordination, particularly after data was merged
  • Identified key breakdowns in coordination, especially at staging and validation

Iterated through whiteboarding, diagramming, and validation with product, engineering, and users.


Prototyping the Lifecycle

I used prototyping to validate how the lifecycle model would behave under real-world conditions - particularly where multiple teams contributed to a shared dataset. Rather than focusing on individual UI interactions, I tested system behavior: how data moved through staging, how readiness was determined, and how issues surfaced during validation. Prototyping also served as an alignment tool, giving product and engineering a concrete model of intended system behavior to react to.

Key areas of exploration included:
  • When and how data should transition into staging
  • How to determine when a full dataset was ready for validation
  • How to compare draft changes against production before staging
  • How to identify incomplete or conflicting data prior to validation
  • How ownership should be surfaced once data was merged across teams

These explorations aligned teams on a shared definition of readiness and ensured the lifecycle supported coordinated workflows rather than independent team actions.
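One of the explorations above - comparing draft changes against production before staging - amounts to a field-level diff. A rough sketch of that check (field names and the dict-based data shape are hypothetical, chosen only for illustration):

```python
def diff_against_production(draft: dict, production: dict) -> dict:
    """Return per-field differences between a team's draft and the live dataset.

    Result keys: 'added' (fields only in the draft), 'removed' (fields only
    in production), and 'changed' (fields present in both with new values).
    """
    added = {k: draft[k] for k in draft.keys() - production.keys()}
    removed = {k: production[k] for k in production.keys() - draft.keys()}
    changed = {
        k: {"production": production[k], "draft": draft[k]}
        for k in draft.keys() & production.keys()
        if draft[k] != production[k]
    }
    return {"added": added, "removed": removed, "changed": changed}
```

Surfacing this diff before staging is what moves issue discovery earlier in the lifecycle, instead of leaving conflicts to appear during validation.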


Designing The Experience

I translated the lifecycle model into a set of consistent interaction patterns across the platform.

Release Dashboard
  • Surfaced system-level readiness states (Not Ready, Blocked, Ready for Release) to reflect overall dataset status

  • Enabled expansion into team-level views to show ownership and contribution across marketing, pricing, and product

  • Highlighted blocking issues and responsible teams directly in the workflow

  • Supported filtering by team to allow users to quickly identify items requiring their action
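The dashboard's dataset-level states (Not Ready, Blocked, Ready for Release) are a roll-up of team-level status. A minimal sketch of that aggregation rule, assuming a simplified three-state team status (the status names are illustrative, not the platform's actual vocabulary):

```python
from enum import Enum

class TeamStatus(Enum):
    """Simplified team-level status for illustration."""
    IN_PROGRESS = "in_progress"
    BLOCKED = "blocked"
    STAGED = "staged"

def dataset_readiness(team_statuses: dict) -> str:
    """Roll team statuses up into the dashboard's dataset-level state.

    Any blocked team blocks the release; the dataset is ready only
    when every contributing team has staged.
    """
    statuses = set(team_statuses.values())
    if TeamStatus.BLOCKED in statuses:
        return "Blocked"
    if statuses == {TeamStatus.STAGED}:
        return "Ready for Release"
    return "Not Ready"
```

The design point is that readiness is computed from system state rather than asserted by any one team, which is what replaced manual coordination.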

Editor
  • Supported team-level versioning, allowing users to work on draft versions (e.g., PROD-v6) while maintaining visibility into the live production dataset

  • Surfaced validation issues directly within the editing workflow, including a running error count and field-level highlighting to support fast resolution

  • Enabled a fix-and-re-stage workflow, allowing users to resolve validation errors and stage updates without re-running comparison against production

  • Maintained clear system context, showing draft state alongside the current production version to reduce errors during editing

  • Provided controlled progression through actions (Compare, Stage), ensuring changes were validated before moving forward

Validation and Testing

Given the complexity of parallel workflows, I focused testing on reducing risk at key breakdown points - particularly around staging completeness, readiness determination, and validation timing. Rather than validating individual screens, testing focused on system behavior under real-world conditions, using scenario-based workflows that simulated multiple teams contributing to a shared dataset. This included:

  • Testing how users determined when a dataset was ready for validation
  • Evaluating how incomplete, conflicting, or outdated data was identified prior to validation
  • Assessing whether ownership was clear once data was merged across teams
  • Validating whether feedback supported fast and accurate issue resolution
  • Measuring user confidence in system-driven readiness versus manual coordination

Impact on Design

  • Introduced a comparison step to validate draft data against production before staging
  • Defined system-level readiness signals to indicate when a dataset was safe to validate
  • Redesigned validation feedback to clearly surface issues and support resolution
  • Made ownership visible at key coordination points, particularly during validation
  • Simplified and standardized state indicators across the lifecycle
Tradeoffs
  • Enabled parallel editing rather than restricting access, prioritizing flexibility while managing risk through system-level coordination
  • Introduced comparison and validation checkpoints instead of enforcing strict workflow constraints
  • Used AI to augment validation and readiness assessment rather than fully automate decision-making
  • Balanced system standardization with flexibility for team-specific workflows


Impact After Release

  • 30% reduction in release errors
  • 25% faster staging and validation
  • Increased cross-team visibility and alignment
  • Reduced manual coordination and operational overhead
  • Improved downstream data quality