Open Assessment Technologies (OAT) faced an identity problem. It had long been recognised for its robust, highly customisable education assessment platform, built to accommodate a wide range of bespoke needs. The platform excelled at adapting to the unique requirements of prospective clients. In recent years, however, OAT set its sights on a new direction: becoming a SaaS provider with its own suite of digital products and a goal of capturing market share.
Digital transformation is a complex journey. While OAT had deep technical expertise, it lacked clarity in key areas: Who were the core users it should serve? Which features should take priority? And what would truly drive value?
OAT’s leadership had a clear vision for business growth: expanding market share in the EU for B2G and K-12 while leveraging its core engineering teams to integrate AI and other innovations emerging from the US market. Achieving these goals, however, required more than technical capability; it demanded a clear UX and product strategy to drive user adoption. Without a strong understanding of its users and their needs, internal conflicts arose over competing ideas. This misalignment put the company at risk, making it critical to establish a strategic direction that prioritised meaningful user engagement and impact.
Leveraging assumptions from cross-functional colleagues, drawn from past work with customers, existing personas, and documented pain points and gains, helped create a clearer picture of user profiles as starting points. Mapping the current state of a typical journey for key user groups, then envisioning a future-state journey, surfaced critical gaps in the data.
This process provided clues about who the primary and secondary users were, what OAT assumed their needs to be, and what would genuinely add value—all as working hypotheses. These insights laid the foundation for the UXR team to define key topics and subject areas to explore further through field research.
Armed with ideas about our primary and secondary users and their goals, we held additional workshops to articulate and prioritise our key research questions. The slides below showcase artefacts from these workshops throughout the process.
Our key outputs were a concise set of screening questions and an interview script. From there, we moved into recruitment: finding and booking participants. To maintain velocity, I led the team in running surveys in parallel, collecting data in the background while we conducted moderated interviews.
After several rounds of interviews with real users matching the target profiles and personas, the team gathered to process the vast amount of data collected. Using a digital Mural board for real-time collaboration, we treated each sentiment or finding as an atomic note, then sorted, clustered, classified, and discussed the notes to ensure a shared understanding across the team. This synthesis and visualisation of research artefacts proved invaluable in uncovering emerging sentiment patterns and revealing the real-world challenges that target users face daily.
In the US, state laws dictate curriculum standards that districts must follow, often making interpretation and implementation complex. Providing instructional materials for teachers, including guides, examples, and a list of the standards covered by each item and unit when creating new questions, would ease this process. It would also reduce the burden on the Donna persona to find and supply resources for teacher training, helping ensure exams are well structured and free from bias.
How can AI solve real problems for educators using Tao Studio?
While there are high expectations for AI to reduce time and cognitive load, concerns remain about the quality of its outputs, especially regarding the sources that feed it. Success largely depends on how those sources are curated and managed.
Reduced time on task: AI is often perceived as an analysis accelerator, a helpful tool for the data-analysis tasks that consume the most time.
Reduced cognitive load: Curriculum standards, depth of knowledge, competencies, course blueprints, and accessibility accommodations (for students with special needs) currently place a heavy cognitive load, taxing short-term memory, on the users who create exam content.
Unsurprisingly, many assumptions from the workshops were validated. However, the field study also uncovered nuanced and unexpected findings—some with the potential to be game-changers in the market.
One critical insight emerged in the initial stages of the customer journey’s assessment cycle: two distinct key buyer personas, the District and State Assessment Coordinators, were largely underserved. These personas work together to set the strategic direction for schools, determining district-wide and school-wide policies, selecting tools and technology partners, and shaping the curriculum’s trajectory. The research uncovered six key insights into their unmet needs and identified strategic opportunities for OAT to capitalise on moving forward.
Historically, OAT had focused primarily on the middle segment of the journey, catering to a different set of personas. Continuing to invest only there would leave its business goals out of reach.
Additionally, the research highlighted multiple opportunities for Generative AI to address industry-wide gaps, giving Tao Studio an advantage over competing products.
By shifting focus to these critical user needs—many of which are feasible to address with conventional technology—OAT could unlock significant value.
Note: Generic images credited to Adobe Firefly; they do not reflect the final solution from the study.