Prompt Workflow

Helping data analysts navigate AI tools confidently.

MY ROLE

Lead Product Designer

CONTEXT

On the PolyAnalyst platform, users can insert the AI Assistant Node at any point in their data workflow to run LLM-powered tasks. However, the initial prototype fell short in critical areas: users found it difficult to trust or control, leading to poor usability and minimal adoption during internal testing.

TEAM

1 Data Scientist

3 Software Engineers

OUTCOME

Improved prompt workflows and cost visibility, leading to a successful client contract extension and adoption in 76% of weekly active workflows within 3 months of launch.

THE PROBLEM

The AI Assistant Node had strong backend capability but poor usability.


After interviewing data analysts and data scientists, I identified three key pain points.


Invisible Prompts

Users couldn’t see or edit the prompt being sent to the LLM, creating trust issues.

No Cost Transparency

Users had no understanding of how much a request might cost in compute or token usage.

Confusing Hierarchy

The node’s configuration UI lacked a clear hierarchy: users struggled to set it up and received no clear feedback when errors occurred.

These issues led to:

  • Low inclusion in real pipelines

  • Hesitance to use the node beyond experimentation

DESIGN CONSTRAINTS

The entire configuration workflow had to fit within a modal, per the existing system-wide design. All key features (prompt selection, input, output, and metadata) therefore had to be presented in a limited space without overwhelming the user.

SOLUTION

I redesigned the node around prompt and cost visibility and information retrieval.

Rather than simply bolting on a prompt library, I rebuilt the workflow to prioritize efficiency and to make execution outcomes and costs clearly visible to users.

Prompt Library

Prompts can be saved, modified, and reused, eliminating redundant work.
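
As a rough illustration of the concept (hypothetical Python, not PolyAnalyst’s actual implementation), a library entry is essentially a named, parameterized template that can be rendered against a node’s input:

```python
from dataclasses import dataclass, field

@dataclass
class PromptTemplate:
    """A reusable prompt entry, as a prompt library might store it."""
    name: str
    body: str                        # template text with {placeholders}
    tags: list[str] = field(default_factory=list)

    def render(self, **inputs: str) -> str:
        # Fill placeholders with values from the node's input columns.
        return self.body.format(**inputs)

# A library is just a named collection of templates.
library: dict[str, PromptTemplate] = {}

summarize = PromptTemplate(
    name="summarize_feedback",
    body="Summarize the following customer feedback in one sentence:\n{text}",
    tags=["summarization"],
)
library[summarize.name] = summarize

# Reuse the saved template across runs instead of retyping the prompt.
prompt = library["summarize_feedback"].render(text="The app crashes on login.")
```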

Token Cost Estimation

After each run, a clear breakdown of token usage and projected cost is displayed, giving users immediate feedback and helping them make smarter, more cost-aware decisions.
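
The arithmetic behind the estimate is simple: token counts multiplied by per-token rates, split into input and output. A minimal sketch (prices and field names are illustrative placeholders, not PolyAnalyst’s actual rates or API):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_price_per_1k: float = 0.0005,
                  output_price_per_1k: float = 0.0015) -> dict:
    """Break down the projected cost of one LLM call.

    Prices are placeholders; real rates depend on the model used.
    """
    input_cost = prompt_tokens / 1000 * input_price_per_1k
    output_cost = completion_tokens / 1000 * output_price_per_1k
    return {
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "total_tokens": prompt_tokens + completion_tokens,
        "projected_cost_usd": round(input_cost + output_cost, 6),
    }

# Example: a run that used 1,200 prompt tokens and 300 completion tokens.
print(estimate_cost(1200, 300))
# {'prompt_tokens': 1200, 'completion_tokens': 300,
#  'total_tokens': 1500, 'projected_cost_usd': 0.00105}
```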

Integrated Testing Flow

Instead of relying on manual test setups, we embedded structured testing directly into the assistant node setup, making iteration easy—no manual sampling needed.
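
Conceptually, the embedded test run draws a small random sample of the node’s input rows and executes the prompt on just those before a full pipeline run. A minimal sketch of that loop, reusing the PromptTemplate sketch above, with call_llm as a stand-in for whatever LLM client the platform actually uses:

```python
import random

def test_run(rows, template, call_llm, sample_size=5):
    """Run the node's prompt on a small random sample of input rows,
    so users can iterate on the prompt without a manual test setup."""
    sample = random.sample(rows, min(sample_size, len(rows)))
    results = []
    for row in sample:
        prompt = template.render(**row)  # row keys fill the placeholders
        results.append({"input": row, "output": call_llm(prompt)})
    return results
```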

History Log

To give users better visibility into their interactions, we introduced a Log tab that tracks the key details of every execution, turning experimentation into a traceable, reliable process rather than trial and error.
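
To illustrate the kind of detail the Log tab captures, here is a hypothetical sketch of a single execution record (the field names are assumptions chosen for the example):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ExecutionRecord:
    """One entry in the execution history: enough detail
    to reproduce or audit a run after the fact."""
    timestamp: datetime
    prompt_name: str           # which library template was used
    rendered_prompt: str       # the exact text sent to the LLM
    model: str
    prompt_tokens: int
    completion_tokens: int
    projected_cost_usd: float
    status: str                # e.g. "succeeded" or "failed"

log: list[ExecutionRecord] = []
log.append(ExecutionRecord(
    timestamp=datetime.now(timezone.utc),
    prompt_name="summarize_feedback",
    rendered_prompt="Summarize the following customer feedback...",
    model="example-model",
    prompt_tokens=1200,
    completion_tokens=300,
    projected_cost_usd=0.00105,
    status="succeeded",
))
```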

IMPACT

Following the rollout, we monitored usage and gathered feedback over a three-month period.


We saw a substantial increase in both adoption and user satisfaction. Internal teams reported a noticeable reduction in manual testing steps and a shift toward cleaner, more streamlined pipelines.


100%

Full Rollout

76%

Adoption in 3 months

MY TAKEAWAY


AI is incredibly powerful—but some of its biggest friction points are the ones you don’t immediately see: hidden costs, lack of visibility into past actions, and unclear system behavior. These aren’t technical limitations—they’re design problems.


By making things like cost, history, and workflow state more transparent, we can help users feel not just empowered, but confident when working with AI. That’s the kind of experience I aim to build—not just capable, but clear and easy to trust.
