
Sound Has Context

Sociability is a tech-for-good startup that provides trusted accessibility information about public venues so disabled people can plan their day with confidence.

In this project I led a product research initiative to redesign how sound accessibility data is collected and represented in the app. The goal was to improve data reliability, user trust, and the usability of the contribution workflow.

The outcome was a proposed feature redesign introducing a context-aware sound data model and a new contributor experience.

Digital · Qualitative Research · Feature Proposal · Accessibility Tech
Company: Sociability
My Role: Team Lead, Product Research
Tools: Figma, Miro
Date: 2025

Sociability collects accessibility information across a wide range of factors, from mobility measurements to visual, hearing and sensory considerations such as colour, smell and sound. During fieldwork with users and venue owners, a recurring issue emerged: the app's sound data was too simplistic to be useful.

Sound was represented with a binary label: Quiet / Loud. But sound environments are highly contextual. A café might be calm in the morning and overwhelming during peak hours. For users with sensory sensitivities, hearing conditions or anxiety, this difference can determine whether a place feels accessible or not.

The current model created two problems:

Challenges identified

— Low user trust in the accuracy of sound information.

— Frustration among contributors who felt the categories misrepresented their venues.

This revealed a deeper product challenge: Sociability's mission is not only to provide accessibility information, but also to make the data collection process itself inclusive and participatory, allowing users to shape the database by contributing their own experiences. There was an opportunity to improve the experience as a whole.

How might we ensure the sound data collected is objective enough to build user confidence, yet flexible and nuanced enough to reflect real contexts?

To understand the problem more deeply, I built on existing research conducted with disability organisations and app users, which mapped the types of sound environments most relevant to different user groups.

I then conducted additional in-field sessions with users and business owners, observing how people navigated venues and how sound conditions varied across time and context.

Alongside this, I ran:

— Benchmark analysis of accessibility platforms and sensory mapping tools.

— Product assessment of the existing Sociability experience.

This allowed me to understand both user needs and product constraints, including how data was collected, structured and displayed in the app.

Research synthesis

Mapping the existing system revealed that the problem was not only about sound labels, but about the entire data collection workflow.

Problems identified

Content model

The binary sound description lacked nuance and did not reflect how users actually experience sound environments.

Data contribution process

Sound data was uploaded through manual tags requiring subjective judgement, creating inconsistency and contributor frustration.

Interface design

The UI did little to guide interpretation or convey confidence in the data.

Opportunities

The challenge became an opportunity to rethink how sensory accessibility data is structured and collected. Key directions included:

— Redefining the sound data model.

— Improving objectivity and shared reference points.

— Redesigning the contribution workflow.

— Strengthening trust through clearer visual communication.

Based on these insights, I proposed a redesigned sound data system built around two complementary layers.

1. Objective sound measurement

Users could record sound levels using a standardised decibel scale, visualised with a colour-coded spectrum. This introduced a shared reference point across users and venues, reducing subjective interpretation and increasing confidence in the data.
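
As one way to picture the shared reference point, the sketch below maps a measured decibel value to a band on the colour-coded spectrum; the thresholds and band names are illustrative assumptions, not final design values.

```typescript
// Hypothetical mapping from a measured decibel value to a band on
// the colour-coded spectrum. Thresholds are illustrative only and
// would need to be validated with audiology guidance and user testing.

type ColourBand = "green" | "yellow" | "orange" | "red";

function colourForDecibels(db: number): ColourBand {
  if (db < 50) return "green";  // quiet, e.g. a library
  if (db < 65) return "yellow"; // moderate, e.g. normal conversation
  if (db < 80) return "orange"; // loud, e.g. a busy street
  return "red";                 // very loud, potentially overwhelming
}
```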

2. Contextual sound descriptors

Alongside measurement, contributors could classify the type of sound present, using an expanded taxonomy based on earlier user research. This allowed the system to capture both intensity and character, giving users richer information about the environment.

Sound data input redesign

Sound accessibility data combines:

— Objective measurement: decibel range, captured via standardised recording.
— Contextual descriptors: sound types (music, voices, traffic, machinery), drawn from the user research taxonomy.
— Temporal context: time of day and activity type, to account for how sound environments change.

This hybrid model allows the system to represent environments more faithfully than binary labels, giving users the nuance they need to make confident decisions.

Sound data structure

Sound Data
— Intensity: decibel range
— Type: sound tags
— Context: time / activity

These three layers combine into the accessibility profile displayed on the venue page.
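
To make the structure concrete, here is a minimal sketch of how such a record could be typed, assuming hypothetical names (SoundRecord, SoundType, VenueSoundProfile) and simplified fields; it is an illustration of the model, not Sociability's actual schema.

```typescript
// Illustrative sketch of the proposed sound data model. All names
// and fields are hypothetical and do not reflect Sociability's
// actual schema.

type SoundType = "music" | "voices" | "traffic" | "machinery";

interface SoundRecord {
  decibelRange: { min: number; max: number };     // Intensity
  soundTags: SoundType[];                         // Type
  timeOfDay: "morning" | "afternoon" | "evening"; // Context: time
  activity?: string;                              // Context: e.g. "lunch service"
}

// A venue aggregates multiple records so its profile can show how
// the sound environment varies across the day.
interface VenueSoundProfile {
  venueId: string;
  records: SoundRecord[];
}
```

In this sketch, each contribution is a single SoundRecord, and the venue profile aggregates records over time, so the app can show how conditions differ between, say, a quiet morning and peak lunch hours.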

Beyond the data model itself, the proposal redesigned the entire sound contribution experience. The goal was to make the act of contributing more inclusive, structured and trustworthy.

Key improvements included:

Clearer inputs

Contextual explanations alongside each category to reduce guesswork.

Visual hierarchy

Design language that guides contributors through the process in sequence.

Progress indicators

Making contributions feel manageable and building a sense of completion.

Accessible language

Terminology suited to users with different cognitive and sensory needs.

Although the feature did not ship before I left Sociability, the proposal was delivered to the product and design team and received positively as a direction for improving sensory data across the platform.

The project demonstrated how small data model decisions can significantly influence trust and usability in accessibility technologies. It also deepened my understanding of the challenges of translating complex, subjective human experiences — such as sound perception — into digital systems that remain both reliable and inclusive.
