
How a fundamental switch in language increased new sites by 25%

Product Designer · 2023 · 3 months


Overview

What to know about this project

Tasked with updating a prominent activation flow, I documented existing problems, ran user interviews, synthesized the findings, and redesigned the flow, improving the product's main KPI by 25%.

An example of how I handled the ambiguity of having little to no supporting data.
What seemed like a UI refresh turned into a fundamental rethink of how we spoke to users.
Reflection: This project turned out to be very large for a single launch, a lesson I carried forward.

Context

Who is this for?

Users (Pros): Website designers and developers who build sites for clients as a living.

Product: The Hub by GoDaddy Pro, a variation of a GoDaddy account with features and tools tailored for client and site management.


The Problem

Few users were adding their sites to the Hub.

What we knew:

  • Low click-through rate on entry points for users to add a site.
  • Low number of sites added for active users.

Business constraints: Limited data. Not all pages were tracked, so there was no historical session data or drop-off visibility. The original design file was also missing, leaving no documentation of the full flow or rationale behind past decisions.


Discovery

An audit

By mapping the existing flow end to end, we got a full view of the experience and could prioritize where users might hit friction. The most impactful find was a set of bugs in error states that created unnecessary dead ends for users.


Discovery

Interviews

Site management was advertised as the Hub's main value at signup, which made it especially hard to understand why adding sites (the first step toward those features) was being skipped after onboarding. Interviews were rare for this team, but I pushed for qualitative data so I could ask clarifying questions and really understand how Pros interpret what the Hub is and what it means to add their sites.


Research Findings

What users told us

01. "Why should I add a site?"

Users weren't connecting the dots between site maintenance features and the step of adding their site. Adding a site felt like a task, not a gateway to value.


Instead of telling users to 'add their sites,' lead with why they should. Adding a site is really just a step toward the features they were promised at signup.

02. "What counts as a GoDaddy site?"

It was unclear what qualified as a GoDaddy site, especially when hosting and domain were split between different companies.


The only thing that matters is whether it's a WordPress site. If it's in the account, setup is fast. If not, users enter credentials. We could determine this internally. Users shouldn't have to categorize it.

03. "I'm not going to risk my client's site."

Pros are extremely protective of client sites. If it wasn't clear exactly what would happen after adding a site, they wouldn't risk it.


Explicitly tell users that adding a site to the Hub makes no updates or changes to the live site.


Final Designs

What we changed

Redesigned for simplicity and clarity. We simplified the first step by removing a confusing choice and addressed hesitations directly.

[Before/After: original vs. redesigned activation flow]

Shifted language from task-framing to value-framing. The messaging now centers on the goal (managing client sites), with adding a site framed as just a step toward it.

[Before/After: original vs. redesigned entrypoints]

Better error management. We found and fixed bugs that prevented sites from connecting, and replaced the single generic error with contextual states for each failure mode.


Results

Outcomes

  • 25% more users completed the activation flow (sites added)
  • 58+% increase in click-through rate from language updates alone
  • Cross-org impact: research reused for front-of-site and other Pro product insights

Reflection: This project felt too big for a single shippable experience. Looking back, I think the UI suffered for it. I'd be curious to explore how breaking it into more contained experiments (error management, content updates, and UI updates separately) might have produced sharper, more measurable results at each step.