Re-envisioning Marketplace Search
Widening top of the funnel through an improved discovery experience, increasing searches by 34%
Type: Individual
Time: 4 weeks
Keywords: conversational search, component design, accessibility, north star, stakeholder management
The product: Atlassian Marketplace
Atlassian Marketplace lets partners sell, and customers buy, apps that extend the functionality of Atlassian's 1P products. The site has 2 interfaces: the Partner side (app makers) and the Customer side (app buyers).
This project optimized the customer-side discovery flow, focusing primarily on the following touchpoints:
Homepage
Search results page
App listing page (PDP)
My role📝
Owned the end-to-end redesign of the Search experience for all devices. This involved creating a search workflow that could scale to incorporate NLP and conversational capabilities, conducting research, and collaborating with cross-functional leads.
Jump to impact >
Duration⏳
Project direction: 2 weeks
Design and iterate: 4 weeks
Validating: 1 week
Implementation: 6 weeks
Design process overview
Context: why a revamp now?
Despite high landing numbers, Marketplace has around 35% drop-offs on desktop and 80% on mobile.
90% of our search keywords were limited to 2 words, and the top queries were all direct app names. We also saw low engagement: 50% of users checked out only one app and didn't 'browse' the site's offerings.
Essentially, high-intent customers preferred to search for apps outside of Marketplace and landed here only to evaluate and install apps. We had a narrow funnel and were losing key customers to other channels. This led to the primary hypothesis that the Marketplace offerings were failing to help customers discover the right app.
We also used a legacy tech stack. Setting those technical constraints aside for the moment, we began with a broad and holistic design brief:
That's a big ask...so where do we focus?
We began with a 3-day design workshop to envision the North Star customer experience. After alignment with the Product team on future direction, a layer of business priority was applied to the envisioning, keeping in mind:
- Does this add customer value?
- What is the time to market?
- Does this build a performant and reliable Marketplace?
The design team also heavily advocated for a foundation overhaul, which would pay off in the long run.
After prioritisation, we limited the FY24 focus solely to enhancing the App DISCOVERY phase.
Moving on - who is actually involved in the app discovery phase? From prior research, we know that customers evaluate apps in either of the following two mindsets. We then identified the top tasks of each persona and their pain points.
Problems with app discovery today
Analyzing the Discovery journey with a heuristic lens, and relying on existing research and instrumentation data, we identified scattered problems across the UI:
An overview of the experience pain points in the Discovery journey
Final in-scope product brief
Upon synthesizing the diverse pain points, we categorised them into 3 areas of intervention, each tied to a specific mindset. Each of the streams was led independently by a designer, with regular syncs among the 3 to ensure a holistic experience at the end.
In-scope program plan for FY24
...this brings us to
Marketplace Search Re-design
Re-designing the Search feature to make it easier for admins to discover and shortlist the right app
Experience goals
PRIMARY:
- Create a prominent and reliable search experience that encourages users to perform solution-oriented discovery
SECONDARY:
- Bring in style consistency, responsiveness, and AA accessibility
Scope
- Working with PMs to scope out the best possible discovery experience
- Working with PMs and Engineering to assess feasibility and create the final brief
- Working on end-to-end designs that are accessible and scalable
- Working with engineers to ensure 1:1 implementation
Findings: Current search experience
Today, the search flow in Marketplace is powered by a keyword-based model that relies on exact text matching with little typo tolerance. Although search is a highly used touchpoint, this model means it is used primarily for navigating to pre-decided apps, not as a conscious discovery tool. Admins prefer external search engines for discovering solutions.
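To make the limitation concrete, here is a minimal sketch (not the actual Marketplace implementation) of why exact text matching fails a query with a single typo, and how a small edit-distance tolerance would catch it. The app names and query are illustrative only.

```typescript
// Levenshtein edit distance: the minimum number of single-character
// insertions, deletions, or substitutions to turn string a into string b.
function editDistance(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

const apps = ["scriptrunner", "tempo timesheets", "draw.io"]; // illustrative names
const query = "sciptrunner"; // one-letter typo

// Exact matching (the old model's behavior): the typo returns nothing.
const exact = apps.filter(name => name.includes(query)); // []

// Tolerant matching: accept any token within 2 edits of the query.
const tolerant = apps.filter(name =>
  name.split(" ").some(token => editDistance(token, query) <= 2)
); // ["scriptrunner"]
```

A production search suite would combine this kind of fuzziness with ranking, stemming, and synonym handling, but the sketch shows why a pure `includes`-style match erodes trust on near-miss queries.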
Admin's discovery journey and pain points
Collating the pain points with existing search usage and keyword data, our top findings were as follows:
- Loss of trust: The very basic search feature has eroded trust, pushing people to discover apps outside of Marketplace. 80% of customers land directly on app listing pages.
- Quality of results > quantity of results: Even on abundant search results pages, the maximum CTR fell within the top 6-9 tiles. Customers using search have clear intent and are less inclined to browse.
- Ambiguity on value: Customers are looking for holistic solutions, not just integrations. As of now, Marketplace only offers apps, making the value of searching within this ecosystem unclear.
- Lack of a continuous discovery model: Customers drop off from no-results and app listing pages instead of engaging further. There is no redirection from the PDP or no-results search pages.
- Low usage of filters: Low filter engagement indicates customers prefer to browse apps manually. However, customers who do engage with filters convert at a higher rate.
Challenges
While a rich-content Marketplace Search that offers holistic solutions (not necessarily apps) remains the North Star, we could not get there in a single step. The product team therefore broke the Search improvement journey into 3 major releases:
Aligned Release Plan (Design vs. Engineering)
For design, the challenge was to create an incremental design that could ladder up to accommodate each release, while offering a familiar experience to the user.
At the same time, we faced the question of how to train our new model before release. As we were moving to a new search suite, we needed a period of iteration and feedback before we could make the algorithm and ranking logic available to all. For this, we wanted a single design intervention that could be integrated with both the older and newer interfaces.
So, the model would be trained by internal Atlassian usage on the old interface and the final model would be released as a part of the revamped UI.
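The rollout gate described above can be sketched as a simple audience check. This is a hypothetical illustration, not Atlassian's actual code; the function and the internal-user check are assumptions for clarity.

```typescript
type SearchBackend = "legacy-keyword" | "new-model";

// Hypothetical gate: during the training phase, only internal users hit the
// new model (on the old UI); once the revamped UI ships, everyone does.
function pickBackend(userEmail: string, revampShipped: boolean): SearchBackend {
  const isInternalUser = userEmail.endsWith("@atlassian.com"); // assumed check
  return revampShipped || isInternalUser ? "new-model" : "legacy-keyword";
}
```

In practice this kind of gating usually lives behind a feature-flag service rather than an email check, but the two-phase logic is the same.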
Competitive benchmarking
Looking at competitors and other Atlassian in-product and site-wide search features, and synthesizing data from Baymard, I broke the search task into 6 granular steps:
- Trigger (intent to search)
- Discovery of the search field (location, capabilities, persistence)
- Initiation (query formation)
- Expression (query refinement)
- Refinement (shortlisting, deeper evaluation)
- Continuing discovery (looping back to search activity)
Initial Ideations
In terms of discovery ideology, we wanted customers using search to start from a wide funnel and be able to narrow it down progressively. I looked at various HMWs (how-might-we's) for the 6 interaction points across timelines (now, next, and future). Our core principles remained the same:
1. Trigger and initiation stage - Reducing barrier to search
These ideations looked at how we could improve the visibility of search and encourage more users to engage with it. A major drawback of the older experience was a lack of direction on what to search for, which led users to lose trust after trying complex queries.
2. Expression stage - Guidance for query formation
The ideations looked at helping admins form the best query for what they were looking for. Our hypothesis was that the right query would lead to the right result and avoid disappointments down the line.
3. Refinement and continuous discovery stage - Aiding admins in forming an opinion
Our current search results page presented a lot of data without appropriate hierarchy, which made it hard for admins to analyze and narrow down on apps. These explorations looked at more opinionated visualizations of the results across 3 result-page typologies: abundant results, no results, and filter-based no results.
4. In the meanwhile, how can we train the model on our older UI?
Marketplace was built on a legacy tech stack, which made changes difficult and time-consuming. As part of this revamp, we also migrated our code and components. Explorations for integrating the search component into the UI aimed to keep it as independent from the rest of the page as possible.
Now, how should this scale for conversational search?
As I looked into crafting an experience for conversational search, it became clear that the vision for the best possible experience and the proposal advocated by the business were very different.
The experience vision was to leverage the LLM for capabilities such as comparison and summarisation; AI could also power personalised recommendations.
The business vision, however, was to ship AI Search as a Beta feature as soon as possible, to be trained by users and improved upon, even though the model was heavily limited in capability.
Re-scoping of the brief - to AI or not AI?
There was a difference of opinions regarding the positioning and intervention points of AI and conversational search.
The design team strongly felt that the nascent model was not fit for general availability: it did not meet customers' expectations of an LLM and failed to leverage its capabilities well. The experience would also not yield actionable feedback for the next iteration.
We had little idea when we would have the bandwidth to upgrade to Stage II of the design. After multiple discussions using prototypes, re-looking at persona goals and going over the proposed impact, the team decided to remove conversational search from the current scope.
Instead, we earmarked time for generative research to be done on AI integration in the Marketplace Ecosystem in Q4.
Final design decisions
After the re-scoping, I started working on the UI. This is where a lot of brainstorming, sparring and decision-making took place:
- Removing filtering before querying: In the old design, users could filter before entering a query on the homepage. But this low usage (6%) was likely because filtering was the only navigation method. With the new nav bar, I introduced filters only after a query is entered.
- Moving filters out of view in the search results: Data showed low filter engagement, and most clicks landed on the filters visible in the first fold. To keep them all discoverable while reducing visual noise, I bucketed them into drop-downs that expand based on usage.
- Modal to invoke search from other pages: We had a persistent search icon in the top navigation. While we explored drop-down options there, we also had an embedded Marketplace touchpoint within our products where the navigation bar was not available. Hence, we introduced a search shortcut key that invokes search as a modal on other pages.
- Reducing the number of upfront results: Since the majority of conversions and clicks fell within the top 6 apps, we aimed to cut down choice paralysis by limiting upfront results to 8 rows.
- New Search component: Since our design system did not have a component to support auto-suggestions, I created a new component, scalable across devices. It would evolve over time, reducing our dependency on other teams.
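To show the kind of behavior an auto-suggest component needs, here is a minimal, framework-agnostic sketch under my own assumptions (this is not the shipped Atlassian component, and the names are illustrative): it debounces keystrokes, suppresses suggestions for very short queries, and discards out-of-order responses so fast typists never see stale results.

```typescript
type Suggestion = { id: string; label: string };

class AutoSuggest {
  private timer: ReturnType<typeof setTimeout> | null = null;
  private latestRequest = 0;

  constructor(
    private fetchSuggestions: (query: string) => Promise<Suggestion[]>,
    private render: (items: Suggestion[]) => void,
    private debounceMs = 200,
  ) {}

  onInput(query: string): void {
    if (this.timer) clearTimeout(this.timer);
    if (query.trim().length < 2) {
      this.render([]); // too short to suggest anything useful
      return;
    }
    this.timer = setTimeout(async () => {
      const requestId = ++this.latestRequest;
      const items = await this.fetchSuggestions(query);
      // Ignore responses that were superseded by a newer keystroke.
      if (requestId === this.latestRequest) this.render(items);
    }, this.debounceMs);
  }
}
```

An accessible version of this would layer the WAI-ARIA combobox pattern on top (`role="combobox"`, `aria-expanded`, keyboard navigation through the listbox), which is where the AA accessibility work mentioned earlier comes in.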
Some iterations -
Design detailing
Keeping only keyword search in scope, I moved on to the detailing, which included:
-
A11Y annotations
-
Content guidance
-
Design system team sparring
-
Design across breakpoints (desktop, tablet, mobile)
Old vs. new experience
Before - Active search on homepage
Before - Typing query state
Before - Search results
Before - Loading screen
Before - Invoking search from other pages
Before - Tablet and mobile states
After - Active search on homepage
After - Typing query state
After - Search results
After - Loading screen
After - Invoking search from other pages
After - Tablet and mobile states
Validation: SEQ testing
Before the experience went live, an SEQ (Single Ease Question) test was conducted with first-time and experienced Marketplace users. Customers were asked to perform 5 tasks, of which 2 were search-focused.
What worked well:
- Users felt the interface was cleaner and that evaluating app tiles was easier on the wider layout
- The new design created a perception of improved performance; users relied on search even in exploration-based tasks
- Users were able to intuitively progress through the search component's states
Areas of improvement:
- Users expected Search to cater to a wider range of use cases than we offer.
  - This validated our NLP and conversational direction, calling for it to be expedited.
  - In the short term, this could mean including categories or collections in the results (to-do).
- A few new users found some of the filter copy confusing, since the filters used to be upfront. Sort copy such as 'relevance' was also unclear.
  - These were iterated on by the content designer.