Anastasiia Medvid UX/UI designer


Comprehensive AI shift planning for a hotel management SaaS platform

Designing a modular, multi-agent AI scheduling system that coordinates hotel operations and learns from historical patterns.

Project

AI hotel management SaaS

Tools

Figma, Claude, Jira, Confluence

Contribution

Gathered requirements, mapped cross-module flows and states, defined interaction patterns for context-aware AI chat, designed the UX/UI, and maintained the design system

Case 3 preview

Solution

The feature implements an ecosystem of specialized AI agents with clear settings for scope and data sources, allowing managers to safely explore, analyze, and take action across different modules from a single interface.

Context & problem

The system lacked a central interface that could unify multiple AI agents into a manageable ecosystem and coordinate all platform modules without constant screen switching.

At the same time, this interface had to embody three key qualities:

Core qualities

  • Transparency and explainability

    Agents provide detailed reasoning for their decisions, enabling human oversight and building trust in automated decision-making processes.

  • Collective intelligence

    Specialized agents for pricing, forecasting, guest management, and operations work together, making aligned decisions that optimise overall performance.

  • Collaborative decision-making

    The multi-agent framework coordinates decisions across different operational areas, optimising overall business outcomes rather than isolated metrics.

Context screenshot

Goals & success criteria

Objectives and how success will be measured

Product goals

  • Enable collaborative work on the same task and allow important actions to be approved by authorised staff.
  • Build AI into the core architecture to support advanced decision-making and optimisation across the platform.
  • Provide APIs and integrations so that the platform can seamlessly connect to existing hotel systems and third-party services, including via the chat interface.
  • Ensure the chat has access to all required knowledge and its own history, so it can continuously learn and adapt over time through machine learning.

UX goals

  • Design three interface variants for the assistant (full screen, drawer, widget) to give users access to the feature and to data from their current location in the system.
  • Provide a single, intuitive chat experience that hides system complexity and simplifies operations.
  • Enable switching between modes (ask / agent).
  • Work with context: limit search to internal platform data or extend it to open web data.
  • Support quick actions and automation such as agent selection, Quick Stats Cards, and agentic behaviour.
  • Enable live handoff (connecting a manager or @mentions in chat) so the chat can become a group space, e.g. for approvals.

Success criteria

Events to track:

  • assistant_opened
  • mode_switched
  • quick_action_clicked
  • approval_confirmed
  • conversation_abandoned
  • approval_requested
  • conversation_resumed_from_history
  • handoff_started
  • mention_added
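
The event names above can be modelled as a typed analytics helper. This is a minimal sketch: the event names come from the list in this case study, while the `track` helper and payload fields are illustrative assumptions, not the platform's real SDK.

```typescript
// Success-criteria events from the case study; the payload shape and
// track() helper are assumptions made for illustration only.
type AssistantEvent =
  | "assistant_opened"
  | "mode_switched"
  | "quick_action_clicked"
  | "approval_confirmed"
  | "conversation_abandoned"
  | "approval_requested"
  | "conversation_resumed_from_history"
  | "handoff_started"
  | "mention_added";

interface EventPayload {
  userId: string;
  timestamp: number; // epoch milliseconds
  properties?: Record<string, string>;
}

const log: { event: AssistantEvent; payload: EventPayload }[] = [];

function track(event: AssistantEvent, payload: EventPayload): void {
  // In production this would forward to an analytics pipeline;
  // here we simply buffer events for inspection.
  log.push({ event, payload });
}

track("assistant_opened", { userId: "u_1", timestamp: Date.now() });
track("mode_switched", {
  userId: "u_1",
  timestamp: Date.now(),
  properties: { from: "ask", to: "agent" },
});
```

A string-literal union like this lets the compiler reject misspelled event names, which keeps funnel metrics (e.g. `assistant_opened` → `approval_confirmed`) reliable.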

Scope of this iteration

Design and validate the core UX for the AI assistant

Actions and collaboration inside chat

Introduce and validate quick actions (agents, Quick Stats, automation triggers).

Design collaboration patterns: mentions, approvals, and shared threads within the chat.

Modes and context control

Define and test switching between ask and agent modes.

Design controls for context and data sources (which properties are in scope, internal vs web data).

Assistant entry points & layout

Design and validate three interface variants for the assistant: full screen, drawer, and widget.

Constraints & inputs

What shaped the solution and what trade-offs followed.

What influenced the solution

I saw that managers kept jumping between modules and reports, so I needed one entry point where they could explore, analyse, and act across the platform. I decided to design a unified chat surface that combines search, analytics, and actions instead of separate tools for each use case.

Because AI agents were making high-impact decisions, I also had to keep their logic visible and humans in control. This pushed me toward clear modes (ask vs agent), explicit context boundaries, and confirmation steps before important actions are executed.

Key trade-offs

To keep things safe and explainable, I exposed context controls, property selection, and data-source switches directly in the UI. The trade-off is a denser start screen than a minimal "just type your question", but users immediately see where the answer comes from and can narrow or widen the scope.

The feature needed to support search, analytics, orchestration, agents, quick stats, and collaboration in one place, so I relied on progressive disclosure. The start state is relatively rich with prompts and controls, while detailed reasoning and advanced options stay hidden until the user chooses to expand them.

Finally, I wanted agentic behaviour, but enterprise hotels still require approvals and audit trails. In practice, agents prepare plans and recommendations, while the interface keeps a human in the loop for sensitive operations through approvals, mentions, and confirmation states before anything risky is actually executed.

Competitive & best-practice analysis

For the core interaction model I deliberately leaned on conversational AI assistant patterns instead of traditional search UX.

Conversational AI assistant patterns

The interaction is anchored in:

  • a single intent-first input that supports both exploratory questions and precise commands
  • natural-language queries as the primary control surface, with no explicit mode toggles
  • structured answer cards that present synthesized content and key entities in a readable layout
  • persistent conversation threads that retain multi-turn context and can be revisited or branched

To make the assistant feel legible and learnable over time I introduced:

  • prompt scaffolding through suggested follow-ups and quick-reply chips that reveal capabilities
  • entry points for common jobs-to-be-done so users do not start from a blank prompt state every time
  • inline refinement controls on each response card for clarify, narrow, and shift-angle actions
  • interaction patterns that encourage iterative refinement of a single artefact instead of prompt hopping

The response layer is intentionally structured as small documents inside the conversation rather than raw text streams. Each answer:

  • uses typographic hierarchy, headings, and bullets to create a scannable information architecture
  • separates high-level takeaways from implementation details and edge cases
  • embeds source links close to the claims they support to increase trust and traceability
  • is designed to be copy-pasted into downstream artefacts such as specs, tickets, or briefs
  • follows an AI-first "micro-deliverables" approach where the assistant outputs reusable chunks, not just prose

Agentic patterns and execution safety

On top of conversational patterns I designed the assistant as an agentic layer that can plan and execute within the product environment. The mental model is oriented around:

  • goals and outcomes as the primary input, rather than low-level implementation instructions
  • the assistant translating user intent into an explicit, step-by-step execution plan
  • upfront visibility into planned actions before any irreversible operation takes place
  • editable checklists that allow users to review, reorder, or partially apply a plan
  • a clear distinction between reasoning about the workspace and acting on it

Execution safety and predictability are treated as first-class UX concerns.
To support that, I introduced:

  • distinct visual states for analysis mode and execution mode
  • confirmation views for high-impact changes, written in human-readable language
  • lightweight approval flows for actions that require stakeholder review
  • an activity log that records each agent run, including timestamps, inputs, and outputs
  • rollback affordances that let users safely undo or step back from an execution path
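
The activity-log and rollback ideas above can be sketched as a small data model. This is a hypothetical shape, assuming one record per agent run; the field names (`runId`, `status`, etc.) are illustrative assumptions, not the product's actual schema.

```typescript
// Hypothetical record for one agent run in the activity log;
// field names are assumptions made for illustration.
interface AgentRun {
  runId: string;
  agent: string; // e.g. "pricing"
  startedAt: string; // ISO timestamp
  inputs: Record<string, unknown>;
  outputs: Record<string, unknown>;
  status: "planned" | "executed" | "rolled_back";
}

const activityLog: AgentRun[] = [];

function recordRun(run: AgentRun): void {
  activityLog.push(run);
}

function rollback(runId: string): boolean {
  // Rollback affordance: only executed runs can be stepped back.
  const run = activityLog.find((r) => r.runId === runId);
  if (!run || run.status !== "executed") return false;
  run.status = "rolled_back";
  return true;
}

recordRun({
  runId: "run_42",
  agent: "pricing",
  startedAt: new Date().toISOString(),
  inputs: { property: "Hotel A", dateRange: "2024-07" },
  outputs: { suggestedRate: 189 },
  status: "executed",
});
```

Keeping inputs and outputs on every run is what makes the log auditable: a reviewer can reconstruct why a change happened and undo it safely.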

To avoid a black-box experience, I kept the system in a human-in-the-loop posture.
The interface:

  • explains why a recommendation was generated and what internal capabilities informed it
  • communicates confidence levels instead of overstating certainty on ambiguous inputs
  • routes edge cases and permission-constrained actions to a human owner when needed
  • frames the assistant as a collaborative teammate embedded in the workflow
  • reinforces user control while the system handles repetitive planning and orchestration work

Key user flow

After several iterations, the solution aligned with both user and business needs. The core flows were refined into a functional, predictable, and easy-to-use system.

Specify the context

The assistant can scope every conversation to a precise slice of the product, from a single property to a combination of modules and data sources. Users can mix Databases & Live Data, User Profiles, System State, External Feeds, and connected Agents so each answer is grounded in exactly the context they care about, not the entire estate.
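
The context scoping described above can be represented as a small configuration object. A minimal sketch, assuming one scope per conversation: the data-source names come from this case study, while the field names and `describeScope` helper are illustrative assumptions.

```typescript
// Hypothetical context-scope object for one conversation; source
// names follow the case study, field names are assumptions.
type DataSource =
  | "databases_live_data"
  | "user_profiles"
  | "system_state"
  | "external_feeds"
  | "agents";

interface ContextScope {
  properties: string[]; // which hotel properties are in scope
  modules: string[]; // e.g. ["pricing", "forecasting"]
  sources: DataSource[];
  includeWebData: boolean; // internal-only vs open web search
}

function describeScope(scope: ContextScope): string {
  // Summarise the scope so the UI can show where answers come from.
  const where = scope.includeWebData ? "internal + web" : "internal only";
  return (
    `${scope.properties.length} propert(ies), ` +
    `${scope.sources.length} source(s), ${where}`
  );
}

const scope: ContextScope = {
  properties: ["Hotel A"],
  modules: ["pricing"],
  sources: ["databases_live_data", "agents"],
  includeWebData: false,
};
```

Making the scope an explicit object is what lets the UI render it back to the user, so the answer's grounding stays visible rather than implicit.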


Turn a chat into a group to request approval or collaborate

Any chat can be turned into a lightweight workspace for approvals by adding teammates to the thread. Invited users see the full history from the first message and can approve or reject proposed changes, adjust rates, or talk directly with the assistant, while the original chat owner stays in control of when work actually moves forward.


Ask mode: insight retrieval that needs manager approval

In this flow the user runs an insight retrieval task against a clearly scoped context window, such as a metric for a given entity and time range. The assistant returns a structured analysis with suggested next steps and highlights high-impact options that change system behaviour, nudging the user to involve a manager for approval before any recommendation turns into execution.


Attach source

User-added sources, such as links and documents, are kept separate from AI-generated sources so the context stays transparent and easy to review.


Agent mode

This flow turns a scoped request into a coordinated multi-agent process, where specialized agents split the work, exchange context, and handle different parts of the task in parallel. The assistant then consolidates the outcome, surfaces the key decisions, and prepares the result for review so the user stays in control before any change is applied.
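
The fan-out and consolidation steps of agent mode can be sketched as follows. This is a simplified, synchronous stand-in: the agent names, result shape, and approval rule are all assumptions for illustration, not the real orchestration logic.

```typescript
// Minimal sketch of agent-mode fan-out and consolidation;
// agent names and the approval rule are illustrative assumptions.
interface AgentResult {
  agent: string;
  summary: string;
  needsApproval: boolean;
}

function runAgent(agent: string, task: string): AgentResult {
  // Stand-in for a real agent call; here only the pricing agent
  // produces a change that requires human approval.
  return {
    agent,
    summary: `${agent} analysed: ${task}`,
    needsApproval: agent === "pricing",
  };
}

function orchestrate(task: string, agents: string[]) {
  // Fan out: specialized agents handle parts of the task.
  const results = agents.map((a) => runAgent(a, task));
  // Consolidate: surface whether any decision needs human review
  // before anything is applied, keeping the user in control.
  return { results, requiresReview: results.some((r) => r.needsApproval) };
}
```

The consolidation step is the human-in-the-loop gate: no individual agent applies its change directly; the combined plan goes back to the user for review.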

