Skills Discovery is part of a new product offering launched by Cappfinity, focused on helping individuals and organisations unlock and apply their strengths across the full talent lifecycle. Building on Cappfinity’s reputation as a world leader in Talent Acquisition & Talent Management, Skills Discovery expands into the Talent Development space — creating the foundation for a suite of new products.
Why the research mattered
In a company where research hadn’t traditionally been embedded in the design process, this project became an opportunity to demonstrate its value as a foundation for meaningful product decisions.
Learning and development experiences come with their own design challenges: engagement can be low, terminology is often abstract, and reflection tasks demand careful consideration of tone, pacing, and interaction.
To design something that users would actually want to engage with, we needed to understand how they interpreted, felt about, and responded to the tasks we were asking them to do. Research in this phase helped shape both the structure and tone of the toolkit.
Why this method?
I chose stakeholder interviews over surveys because we needed depth, not just sentiment. At this early stage, understanding why clients were engaging or disengaging, and the context behind their behaviours, was going to be most valuable. Quantifying sentiment would become more important later, when creating benchmarks to test and iterate against as the toolkit evolved.
Who I spoke to
Since I wasn't able to speak directly with clients in this phase, I focused on gathering insight from 5 internal team members across customer success, sales, and configuration. While the insights were second-hand, these roles offered complementary perspectives: client-facing teams shared patterns in client feedback and behaviour, while the configuration stakeholder surfaced challenges with content delivery.
What I did
I ran semi-structured interviews, allowing for consistency across sessions while leaving space to explore unexpected themes. The questions were designed to align closely with our research goals:
Approach and analysis
I captured and reviewed notes across all sessions, using thematic analysis to group insights into patterns. I also tagged quotes and insights based on their relevance to core assumptions, which helped inform early design principles and content strategy.
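To make that tagging step concrete, here is a minimal sketch in Python of how quotes can be grouped by theme and linked back to core assumptions. The theme and assumption labels are illustrative placeholders, not the actual analysis data, and in practice this lived in a simple spreadsheet rather than code.

```python
# Minimal sketch of the tagging structure, assuming a simple
# quote -> theme -> assumption mapping. Theme and assumption
# labels here are placeholders, not the real analysis data.
from collections import defaultdict

tagged_notes = [
    {"quote": "No way to export entries or share with others",
     "theme": "sharing & reuse",
     "assumption": "users want to act on their insights"},
    {"quote": "Without a prompt to apply the insight, it just sits there",
     "theme": "applying learning",
     "assumption": "reflection alone drives behaviour change"},
    {"quote": "It's a lot to take in when working through it alone",
     "theme": "cognitive load",
     "assumption": "self-guided use is viable"},
]

# Group quotes by theme so recurring patterns are easy to spot,
# and track which core assumptions each theme speaks to.
quotes_by_theme = defaultdict(list)
assumptions_by_theme = defaultdict(set)
for note in tagged_notes:
    quotes_by_theme[note["theme"]].append(note["quote"])
    assumptions_by_theme[note["theme"]].add(note["assumption"])

for theme, quotes in quotes_by_theme.items():
    print(f"{theme}: {len(quotes)} quote(s), relates to: "
          f"{', '.join(sorted(assumptions_by_theme[theme]))}")
```

Keeping each insight tied to a quote, a theme, and an assumption is what made it easy later to trace design principles back to evidence.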
Excerpt from the thematic analysis conducted after the stakeholder interviews
What I learned
The insights gathered helped us pinpoint where existing solutions were falling short and clarified the design principles needed to deliver lasting value and real-world application.
"Some users take screenshots of their entries because there’s no way to export them or share with others"
"Without a prompt to apply the insight, it kind of just sits there"
"It’s a lot to take in, especially if they’re working through it on their own"
"There’s no system for flagging outdated content...we rely on someone spotting it"
Why this method?
To inform early thinking around the Skills Discovery Toolkit, I audited a legacy onboarding product that sat within the same talent development space. While it served a different purpose, it offered useful insight into how users engage with self-directed learning and reflection-based tasks. This helped surface patterns, pain points, and design decisions that could influence how we approach the toolkit.
What I did
I documented pain points and strengths using annotated screenshots and thematic tags. I also mapped the audit findings against early goals for the Skills Discovery Toolkit, such as clarity, reusability, and self-guided usability.
Auditing the existing product
What I learned
The product assumed too much prior knowledge, which could lead to drop-off or confusion
Several reflection tasks and tips felt similar, leading to a sense of redundancy and potential disengagement
The product used one-off content blocks and page designs instead of reusable components, increasing content maintenance overhead
Variations in UI elements (e.g. heading levels, buttons, spacing) made the experience feel less polished and less trustworthy
The audit also helped me map which existing components or patterns could be reused, which would need adjustment, and where we’d likely need to start from scratch. This early component audit informed our modular design approach and helped reduce future design and build effort.
Why this method?
While interviews and the product audit had already given us useful insights, I also reviewed existing feedback data from the product. Though limited in scale, this dataset offered a first-hand perspective on how users were experiencing a learning product. This served as a useful parallel to what we were beginning to explore for the Skills Discovery Toolkit.
This helped me spot signals around engagement, perceived value, and user expectations that supported or challenged the themes that emerged from the previous two methods.
What I did
I gathered:
I reviewed the entries and lightly coded them based on:
While not comprehensive, this validated previous findings about where users were getting value and where, from their perspective, they were losing interest or momentum.
Quantitative insights
"I felt this training repeated a lot of the content already covered multiple times in other materials"
Only 45% of users rated the task instructions as ‘clear’ or ‘very clear’
"...I didn’t know if this was for me"
Before committing to development, we ran usability testing on a high-fidelity prototype to validate our foundational design logic, core content structure, and interaction flows. The goal was to identify any major usability or comprehension issues early, ensuring that the toolkit would be intuitive and valuable to users before build effort was invested. This helped reduce the risk of rework and supported a leaner handoff to development.
I conducted remote, moderated usability testing with 5 participants drawn again from internal proxy users. Participants were asked to complete core flows in the prototype, including:
The sessions were designed to capture first-time comprehension, ease of use, and user reactions to content tone and structure. I used a task-based script and encouraged them to think out loud.
I captured notes across usability testing dimensions, including:
I then coded and grouped observations by issue severity (e.g. must-fix before build, nice-to-have later) and theme (e.g. content clarity, visual hierarchy, interaction feedback).
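As a rough sketch of that triage step, the prioritisation boiled down to sorting tagged observations by severity within each theme. The two severity levels match the examples above; the specific observations below are illustrative rather than the full study data.

```python
# Minimal sketch of the severity/theme triage, assuming two severity
# levels as described above; example observations are illustrative.
observations = [
    {"issue": "Users struggled to form a clear mental model of the toolkit",
     "theme": "content clarity", "severity": "must-fix"},
    {"issue": "Navigation lacked reinforcement cues",
     "theme": "interaction feedback", "severity": "nice-to-have"},
]

# Sort so must-fix items surface first when planning pre-build changes.
severity_order = {"must-fix": 0, "nice-to-have": 1}
for obs in sorted(observations, key=lambda o: severity_order[o["severity"]]):
    print(f"[{obs['severity']}] {obs['theme']}: {obs['issue']}")
```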
Tracking feedback from participants
Users struggled to form a clear mental model of the toolkit
Navigation was generally intuitive, but lacked reinforcement cues
Missions were perceived as one of the most valuable areas of the toolkit
Uncertainty around positioning limited perceived value for non-managers
These insights helped us move into development with confidence in the usability of the design.
After implementing design and content changes based on usability testing, we launched a live beta to validate those improvements.
This phase allowed us to identify both remaining UX gaps and technical friction that wasn’t visible in prototype testing.
We released a controlled beta version of the toolkit to a limited group of internal users. I monitored usage patterns and gathered feedback via:
Tracking feedback from live beta participants
I also conducted follow-up interviews with some testers to better understand their task flow and reactions to changes.
Broken buttons, browser compatibility issues, unreliable interactions
Task instructions and video-based activities still felt unclear
Reflections
Building on insights from earlier phases, we used this feedback to prioritise final changes and refine the overall experience. These iterative improvements informed the first version of the toolkit released to a select group of pilot users as part of the MVP launch.
Based on our design principles and their related success metrics, I outlined a post-launch research plan to establish baselines against which we could measure and improve the product experience over time.
Design for Repeatable Reflection
Keep Interactions Lightweight
Content-Led Relevance
Structure for Sustainability
This includes tracking usage analytics:
This research journey was shaped by real-world constraints, shifting priorities, and a need to stay resourceful. Despite those constraints, I delivered data-driven insights that helped shape a toolkit that was not only functional, but also valuable and intuitive.