Six product teams were building AI summarization independently. No standard. No shared component. No adoption path. What started as a response to that pressure became a platform standard — one component definition with reach across 12+ page templates.
Product teams at Oracle were building AI summarization experiences independently. Six different visual treatments. Six sets of decisions made in isolation. The Redwood system had atomic patterns, but nothing cross-cutting: no component that teams could adopt without rebuilding from scratch.
A cross-product audit made the situation visible across three dimensions. Users weren't getting reliable output — they were getting inconsistent, unverifiable summaries with no signal about where the information came from. Product teams were solving the same problem from zero, every time. And the underlying technology — Ask Oracle — was a platform-level capability that would eventually reach all page templates. The component had to be designed for that scale from day one.
The fragmentation was the real problem — not any individual team's implementation. Each one was making reasonable decisions in isolation. The system just didn't give them anywhere to converge.
Before any design exploration, I pushed for clarity on a fundamental question: what does this component actually do? Not what it looks like. Not what it contains. What does it do.
That question sounds obvious. But in practice, most components get designed before anyone has written a clear sentence about their purpose. We wrote that sentence first, and it became the filter: every element we considered adding and every interaction we evaluated had to pass through the definition before anything else. That filter kept the anatomy honest.
The definition wasn't frozen, either. By the end of the project, when stakeholders pushed for new requirements, we revisited it. The final shipped definition added that the Summary also provides contextual actions that allow users to extend capabilities and act on the information presented.
From the definition, three principles emerged. Each one was tied to a concrete implementation decision — not just a design aspiration.
The summary should surface the most relevant information for the user's current decision. In practice: I worked with the development team to ensure the AI prompt architecture prioritized key insights — not generic summaries that could apply to anything.
Users need to know where the information comes from. In practice: source attribution was non-negotiable — it became a required element of the anatomy, not an optional enhancement. A black box is not a trustworthy component.
The summary must be readable at a glance. In practice: I established voice guidelines to avoid machine-generated language patterns. The content had to read like a human insight, not a data dump.
Nothing was added by default. Every element in the component had to justify its presence against the principles. If it didn't pass, it stayed out.
A visual marker that signals AI-generated content. Already established as a pattern in Redwood, the data bloom gives users an immediate, consistent recognition signal across all Oracle products — before they read a single word.
An AI-generated title that surfaces the most critical insight on the page. Not a label. Not a category. A specific, actionable statement derived from the data.
The core narrative — concise, scannable, written to support decision-making. This is where the component generates real value. I established guidelines to keep it compact and readable, while acknowledging a real constraint: we don't fully control what the AI produces.
Linked references to the data sources used to generate the summary. This element directly serves the Trust & Transparency principle. It's what separates a trustworthy AI component from a black box — users can verify, dig deeper, or question what they're reading.
AI-generated content length is often outside our control. We can recommend guardrails, but we can't force them. The design had to account for edge cases where the summary runs long. The solution: a progressive truncation pattern with a "More" trigger, so the component degrades gracefully without breaking the page layout.
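As a concrete illustration, here is a minimal sketch of that truncation pattern in React/TypeScript. The clamp threshold, component name, and markup are assumptions for illustration, not the shipped Redwood implementation.

```tsx
import React, { useState } from "react";

// Assumed character budget before the summary is clamped; the real
// guardrail would come from the design spec, not a hard-coded number.
const CLAMP_LENGTH = 280;

export function TruncatedSummary({ text }: { text: string }) {
  const [expanded, setExpanded] = useState(false);
  const needsClamp = text.length > CLAMP_LENGTH;
  const visible =
    expanded || !needsClamp
      ? text
      : text.slice(0, CLAMP_LENGTH).trimEnd() + "…";

  return (
    <p>
      {visible}{" "}
      {needsClamp && !expanded && (
        // "More" reveals the full summary in place, so an overlong
        // AI response never breaks the surrounding page layout.
        <button onClick={() => setExpanded(true)}>More</button>
      )}
    </p>
  );
}
```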
Summaries could take considerable time to render depending on data volume. I coordinated with the Motion team to define a skeleton loading state with a "Generating content" label — it communicates process without creating layout shift or leaving users confused about what's happening.
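A hedged sketch of that loading state, in the same React/TypeScript terms as above; the markup and class names are assumptions, and the Motion team's actual spec is not reproduced here.

```tsx
import React from "react";

export function SummarySkeleton() {
  return (
    <div>
      {/* role="status" is a polite live region, so screen readers
          announce the label without interrupting the user. */}
      <p role="status">Generating content</p>
      {/* Fixed-height placeholder bars reserve the summary's footprint,
          so the finished content can swap in without layout shift. */}
      <div className="skeleton-line" aria-hidden="true" />
      <div className="skeleton-line" aria-hidden="true" />
    </div>
  );
}
```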
Tab moves focus to the first interactive element. The component uses role="region" with aria-label="summary". The headline is marked as heading level 2. The CTA group uses role="group" with aria-label="summary actions". These weren't last-minute additions — they were in the component spec from the anatomy phase.
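Mapped to markup, that spec looks roughly like the following React/TypeScript sketch. The component and prop names are hypothetical; the roles, labels, and heading level come straight from the spec above.

```tsx
import React from "react";

interface AiSummaryProps {
  headline: string;
  body: React.ReactNode;
  actions: React.ReactNode;
}

export function AiSummary({ headline, body, actions }: AiSummaryProps) {
  return (
    <section role="region" aria-label="summary">
      {/* The AI-generated insight is exposed as a level-2 heading. */}
      <h2>{headline}</h2>
      {body}
      {/* Contextual actions are grouped so assistive technology
          announces them as a single, labeled cluster. */}
      <div role="group" aria-label="summary actions">
        {actions}
      </div>
    </section>
  );
}
```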
Midway through the project, reviews put us in contact with additional product teams who wanted to adopt the Summary for contexts we hadn't designed for. Two requirements surfaced and pushed the component forward.
My instinct was to manage scope carefully. What worked was a different framing entirely: showing stakeholders what was coming in future versions. Once they could see their needs mapped to a roadmap, even approximately, they were willing to accept a v1 without everything they wanted. Versioning itself became the negotiation tool. Iterating wasn't a compromise; it was the plan.
"Designing a design system component is fundamentally different
from designing a product feature. The surface you're designing
is not the UI — it's the adoption behavior of other teams."
The Summary launched in Dashboard and Data Authoring — the highest data-density contexts in Oracle, and the most natural early adopters. From there, adoption is progressive. The 12+ Redwood page templates will integrate the component as Ask Oracle expands its presence across page headers.
Product domains like SCM, ERP, and Finance inherit the component through page templates — no additional design or development work required on their part. The design system becomes the distribution mechanism. One definition. Platform-wide impact.
The review cadence with stakeholders stretched to two weeks between contacts. In retrospect, I would have pushed prototypes earlier and more frequently — without waiting for scheduled reviews. A calendared meeting shouldn't be the bottleneck of a design decision. The constraint was organizational, and I let it run longer than it needed to.
When additional teams arrived mid-project with new requirements, defending scope wasn't what moved them; the roadmap was. Once they could see their needs mapped to future versions, they accepted a v1 without everything they wanted. Versioning isn't a compromise. It's a negotiation tool.
The data bloom works as an identifier when there's one summary per page. But as other components — like Comparison Cards — begin embedding their own AI-generated summaries, the data bloom loses its distinctiveness. Multiple blooms on a single page create visual noise and dilute the signal they were meant to provide.
The next design challenge is clear: how do you communicate "this content was AI-generated" in a way that scales gracefully — from a prominent page-level summary to multiple embedded instances — without relying on a visual device that assumes it will always have the page to itself? That question didn't have an answer within the scope of this project. But it's the right question for the next one.