Search cover.jpg
Re-envisioning Marketplace Search

Widening top of the funnel through an improved discovery experience, increasing searches by 34%

Type:  Individual
Time:
  4 weeks
Keywords:  conversational search, component design, accessibility, north star, stakeholder management

The product: Atlassian Marketplace

Atlassian Marketplace lets partners sell, and customers buy, apps that extend the functionality of Atlassian's first-party (1P) products. The site has two interfaces: the Partner side (for integration makers) and the Customer side (for integration buyers).
This project optimized the customer-side discovery flow, focusing primarily on the following touchpoints:

Homepage
Search results page
App listing page (PDP)
My role📝

Owned the end-to-end redesign of the Search experience for all devices. This involved creating a search workflow that can scale to incorporate NLP and conversational capabilities, conducting research, and collaborating with cross-functional leads.


Jump to impact > 

Duration

Project direction: 2 weeks
Design and iterate: 4 weeks
Validating: 1 week
Implementation:  6 weeks

Design process overview
Context: why a revamp now? 

Despite high landing numbers, Marketplace sees around 35% drop-offs on desktop and 80% on mobile.
90% of our search queries were limited to two words, and the top queries were all direct app names. We also saw low engagement: 50% of users check out only one app and don't 'browse' the site's offerings.

Essentially, high-intent customers preferred to search for apps outside of Marketplace and landed here only to evaluate and install apps. We had a narrow funnel and were losing key customers to other channels. This led to the primary hypothesis that the Marketplace offerings were failing to help customers discover the right app.

We were also on a legacy tech stack. Setting aside these technical constraints, we began with a broad and holistic design brief:

initial brief.jpg
That's a big ask...so where do we focus?

We began with a 3-day design workshop to envision the North Star customer experience. After aligning with the Product team on the future direction, we applied a layer of business prioritisation to the vision, asking:

  1. Does this add customer value?

  2. What is the time to market?

  3. Does this build a performant and reliable Marketplace?

The design team also heavily advocated for a foundation overhaul, which would pay off in the long run. 

Customer journey for acquiring apps

After prioritisation, we limited the focus of FY24 to solely enhancing the App DISCOVERY phase.

Moving on: who is actually involved in the app discovery phase? From prior research, we know that customers evaluate apps in one of two mindsets. We then identified each persona's top tasks and pain points.

Evaluation mindsets
Problems with app discovery today

Analyzing the Discovery journey through a heuristic lens, existing research, and instrumentation data, we identified scattered problems across the UI:

Overview of pain points
An overview of the experience pain points in the Discovery journey
Final in-scope product brief

After synthesizing the diverse pain points, we categorised them into 3 areas of intervention, each tied to a specific mindset. Each stream was led independently by a designer, with regular syncs among the three to ensure a holistic end experience.

Program plan.jpg
In-scope program plan for FY24
header.jpg
...this brings us to
Marketplace Search Re-design
Re-designing the Search feature to make it easier for admins to discover and shortlist the right app
Experience goals

PRIMARY:

  • Create a prominent and reliable search experience to encourage users to perform solution-oriented discovery


SECONDARY:

  • Bring in style consistency, responsiveness, and AA accessibility

Scope
  1. Working with PMs to scope out the best possible discovery experience


  2. Working with PMs and Engineering to identify feasibility and create the final brief

  3. Working on end-to-end designs that are accessible and scalable

  4. Working with engineers to ensure 1:1 implementation

Findings: Current search experience

Today, search in Marketplace is powered by a keyword-based model that relies on text matching with little typo tolerance. Despite being a highly used touchpoint, this has led to search being used primarily for navigating to pre-decided apps rather than as a conscious discovery touchpoint. Admins prefer external search engines for discovering solutions.
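As a rough illustration of the gap, here is a minimal sketch contrasting exact text matching with a typo-tolerant variant. This is not Marketplace's actual implementation; the app names and the edit-distance threshold are assumptions for illustration only.

```typescript
// Classic Levenshtein distance between two strings.
function editDistance(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost);
    }
  }
  return dp[a.length][b.length];
}

// Keyword matching with no typo tolerance: a one-letter typo returns nothing.
function exactMatch(query: string, apps: string[]): string[] {
  return apps.filter((name) => name.toLowerCase().includes(query.toLowerCase()));
}

// Typo-tolerant matching: accept names within a small edit distance.
function fuzzyMatch(query: string, apps: string[], maxEdits = 2): string[] {
  return apps.filter(
    (name) => editDistance(query.toLowerCase(), name.toLowerCase()) <= maxEdits
  );
}

const apps = ["Scriptrunner", "Tempo", "Zephyr"];
exactMatch("scriptruner", apps); // [] (one missing letter, so no results)
fuzzyMatch("scriptruner", apps); // ["Scriptrunner"]
```

A production search suite would of course use an indexed engine rather than scanning the catalogue, but the contrast shows why a single-letter typo could return nothing under the old model.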

Admin's discovery journey
Admin's discovery journey and pain points

Collating the pain points with existing search usage and keyword data, our top findings were as follows:
 

  1. Loss of trust
    Because the search feature is so basic, customers have lost trust in it, pushing them to discover apps outside of Marketplace. 80% of customers land directly on app listing pages.

     

  2. Quality of results > Quantity of results
    Even on abundant search result pages, the maximum CTR was seen within the top 6-9 tiles. Customers using search have clear intent and are less inclined to browse further.

     

  3. Ambiguity on value
    Customers are looking for holistic solutions and not just integrations. As of now, Marketplace only offers apps, making the value of searching within this ecosystem unclear. 

     

  4. Lack of continuous discovery model
    We see customers drop off from no-results and app listing pages instead of engaging further. There is no redirection from the PDP or from no-results search pages.

     

  5. Low usage of filters
    Low filter engagement indicates customers prefer to manually go through apps. However, customers engaging with filters tend to have a higher conversion rate. 

Challenges

While a rich-content Marketplace Search that offers holistic solutions (not necessarily apps) remains the North Star, we could not get there in a single step. Hence, the product team broke the Search improvement journey down into 3 major releases:

Release plan
Aligned Release Plan (Design vs. Engineering)

For design, the challenge was to create an incremental design that could ladder up to accommodate each release, while offering a familiar experience to the user.

At the same time, we faced the question of how we could train our new model before release. As we were moving to a new search suite, we needed a period of iteration and feedback before we could make the algorithm and ranking logic generally available. For this, we wanted a single design intervention that could be integrated with both the older and the newer interface.

So, the model would be trained by internal Atlassian usage on the old interface and the final model would be released as a part of the revamped UI. 

Competitive benchmarking

Looking at competitors and other Atlassian in-product and site-wide search features, and synthesizing data from Baymard, I broke the task down into 6 granular steps:

  1. Trigger (intent to search)
     

  2. Discovery of search field (location, capabilities, persistence)
     

  3. Initiation (query formation)
     

  4. Expression (query refinement)
     

  5. Refinement (Shortlisting, Deeper evaluation)
     

  6. Continuing discovery (looping back to search activity)

Competitor research
Initial Ideations

In terms of discovery ideology, we wanted customers using search to start from a wide funnel and narrow it down progressively. I looked at various HMWs for the 6 interaction points across timelines (now, next, and future). Our core principles remained the same:

1. Trigger and initiation stage - Reducing barrier to search

These ideations explored how we could improve the visibility of the search feature and encourage more users to engage with it. A major shortcoming of the older experience was a lack of direction on what to search for, leading users to lose trust after trying complex queries.

2. Expression stage - Guidance for query formation

These ideations looked at helping admins form the best query for what they were looking for. Our hypothesis was that the right query would lead to the right result and avoid disappointment down the line.

3. Refinement and continuous discovery stage - Aiding admins in forming an opinion

Our current search results page had a lot of data without appropriate hierarchy, which made it hard for admins to analyze and narrow down apps. These explorations looked at more opinionated visualization of results across 3 types of result page typologies: abundant results, no results, and filter-based no results.

explore 3.jpg
4. In the meanwhile, how can we train the model on our older UI? 

Marketplace was built on a legacy tech stack, which made changes difficult and time-consuming. As part of this revamp, we also migrated our code and components. Explorations for integrating the search component into the UI aimed to keep it as independent from the rest of the page as possible.

Component integration explorations
Now, how should this scale for conversational search?

As I looked into crafting an experience for conversational search, it became clear that the vision for the best possible experience and the proposal advocated by the business were very different.

The experience vision was to leverage the LLM for capabilities such as comparison and summarisation. AI could also be used to power personalised recommendations. 

The business vision, however, was to ship AI Search as a Beta feature as soon as possible, to be trained by users and improved upon. The model was also heavily limited in capability.

Difference in opinions between design and product.jpg
Re-scoping of the brief - to AI or not AI?

There was a difference of opinions regarding the positioning and intervention points of AI and conversational search.

The design team strongly felt that the nascent model was not fit for general availability: it did not match customers' expectations of an LLM and failed to leverage the LLM's capabilities well. The experience would also not yield actionable feedback for the next iteration.


We had little idea when we would have the bandwidth to upgrade to Stage II of the design. After multiple discussions using prototypes, re-looking at persona goals and going over the proposed impact, the team decided to remove conversational search from the current scope.

Instead, we earmarked time for generative research to be done on AI integration in the Marketplace Ecosystem in Q4. 

Final design decisions

After the re-scoping, I started working on the UI. This is where a lot of brainstorming, sparring and decision-making took place: 

  1. Removing filtering before querying: In the old design, users could filter before entering a query on the homepage. But this usage (6%) likely existed only because it was the sole navigation method. With the new nav bar, I introduced filters only after the query.
     

  2. Moving filters out of view in the search results: Data showed low filter engagement, and most clicks fell on the filters visible in the first fold. To keep them all discoverable while reducing visual noise, I bucketed them into drop-downs that expand based on usage.
     

  3. Modal to invoke search from other pages: We had a persistent search icon in the top navigation. While we explored similar options, such as a drop-down there, we also had an embedded Marketplace touchpoint within our products where the navigation bar was not available to us. Hence, we decided to introduce a search shortcut key that invokes search as a modal on other pages.
     

  4. Reducing number of upfront results: Since the majority of conversions and clicks were seen within the top 6 apps, we aimed to cut down choice paralysis by limiting upfront results to 8 rows.
     

  5. New Search component - Since our design system did not have a component to support auto-suggestions, I created a new component, scalable across devices. This would evolve over time, reducing our dependency on other teams. 
     

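Decision 3 above, a shortcut key that invokes search as a modal, can be sketched as a small, testable decision function. The specific shortcuts ("/" and Cmd/Ctrl+K) are assumptions for illustration, not the shipped bindings:

```typescript
// Hypothetical sketch: the actual shortcut key used by Marketplace is not
// specified here, so "/" and Cmd/Ctrl+K are illustrative assumptions.
interface KeyPress {
  key: string;
  metaKey: boolean;     // Cmd on macOS
  ctrlKey: boolean;     // Ctrl on Windows/Linux
  inTextField: boolean; // true when focus is already in an input/textarea
}

// Decide whether a key press should open the search modal. Ignoring
// presses inside text fields prevents hijacking normal typing.
function shouldOpenSearchModal(press: KeyPress): boolean {
  if (press.inTextField) return false;
  if (press.key === "/") return true; // quick single-key shortcut
  if (press.key.toLowerCase() === "k" && (press.metaKey || press.ctrlKey)) {
    return true; // Cmd/Ctrl+K, a common search convention
  }
  return false;
}

// Wiring it up in a browser would look roughly like:
// document.addEventListener("keydown", (e) => {
//   const inTextField = /^(INPUT|TEXTAREA)$/.test((e.target as HTMLElement).tagName);
//   if (shouldOpenSearchModal({ key: e.key, metaKey: e.metaKey, ctrlKey: e.ctrlKey, inTextField })) {
//     e.preventDefault();
//     openSearchModal(); // hypothetical modal opener
//   }
// });
```

Keeping the decision logic pure makes it easy to unit-test and reuse on pages (such as the embedded touchpoint) where the navigation bar is unavailable.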
Some iterations -

Iterations.jpg
Design detailing

Keeping only keyword search in scope, I moved on to the detailing, which included:

  • A11Y annotations

  • Content guidance

  • Design system team sparring

  • Design across breakpoints (desktop, tablet, mobile)

A11y hand-off spec example
Old vs. new experience
Before - Active search on homepage
Before - Typing query state
Before - Search results
Before - Loading screen
Before - Invoking search from other pages
Before - Tablet and mobile states
After - Active search on homepage
After - Typing query state
After - Search results
After - Loading screen
After - Invoking search from other pages
After - Tablet and mobile states
Validation: SEQ testing

Before the experience went live, an SEQ (Single Ease Question) test was conducted with first-time and experienced Marketplace users. Customers were asked to perform 5 tasks, of which 2 were search-focused.

What worked well: 

  • Users felt the interface was cleaner and evaluating app tiles was easier on the wider layout

  • The new design created a perception of improved performance. Users relied on search even in exploration-based tasks

  • Users were able to intuitively progress through the search component's states

Areas of improvement:

  • Users expected Search to cater to a wider range of use cases than we offer.

    • This validated our NLP and conversational direction, and called for it to be expedited.

    • In the short term, this could mean including categories or collections in the results (to-do).

  • A few new users found some of the filter labels confusing now that they were no longer upfront. Sort labels such as 'relevance' were also unclear.

    • These were iterated on by the content designer. 

Impact

Despite the limited scope, we saw users engage more with the search feature. During SEQ testing, users perceived the new design as having a better search engine.

✅ Total number of searches initiated: increased by 34%

 
✅ Number of searches initiated from the Homepage: increased by 21%
 
✅ Conversions from Search results page: increased by ~3%

✅ Number of times no results page is shown: dropped by ~4%

Drop-offs from mobile homepages have also decreased from 80% to 41%! Mobile users account for ~42% of our traffic. 

No significant improvement was seen in % drop-offs from desktop, but we are seeing increased engagement with our offerings. Our next step is to analyse the data points surrounding this and understand how our offerings can be made more relevant and richer. 

Future path and learnings

Being the first project I fully drove, this was a great exercise in stakeholder management and driving team alignment. The regular cross-functional reviews helped re-align the team towards problem-solving rather than feature shipping. 

A major learning came from the feature that remained unshipped: the project strengthened my belief that doing things right is more important than simply doing them. Building trust in every interaction matters.
 
Creating a design that spanned the older and newer UI also led to many interesting design-engineering brainstorming sessions, offering new perspectives on the challenges of revamping legacy tech stacks.

Thank you for your time!
