
Sound Has Context

Sociability is a tech-for-good startup providing trusted accessibility information for the places people want to go. In this project, I proposed a new user experience and a redesign of the app's data collection and content, aiming to increase user trust by improving the quality of its sound information.

Digital Qualitative Research · Feature Proposal · Accessibility Tech
Company Sociability
My Role Team Lead, Product Research
Date 2025

The app provides a range of accessibility factors for venues so that disabled people can plan their day with confidence: from mobility data like measurements, to visual, hearing and sensory (VHS) conditions such as colours, patterns, smells and sounds. While in the field, speaking directly with users and business owners, a pattern kept surfacing: the app's sound data wasn't quite right.

The app offered a binary solution: Loud or Quiet. But a pub that's peaceful on a Tuesday lunchtime is a completely different environment on a Friday evening. For users with accessibility needs tied to sound (for example, those with hearing conditions, sensory sensitivities, or anxiety), that gap isn't a minor inconvenience; it's the difference between going somewhere and staying home.

Detected challenges

— Sound is inherently difficult to measure: it is variable and contextual.

— Users lacked trust in the sound information, and business owners were dissatisfied with how sound was described in their venues.

Critically, Sociability's mission is not only to provide that information, but also to make the data collection process itself inclusive and participatory, allowing users to shape the database by contributing their own experiences. There was an opportunity to improve the experience as a whole.

How might we ensure the sound data collected is objective enough to build user confidence, yet flexible and nuanced enough to reflect real contexts?

To kick things off, I reviewed the extensive desktop research and interviews conducted by a colleague with disability organisations and users. Her work segmented user types and profiled the sounds most critical to each group. From there, I continued in-field sessions to understand the user journeys, pain points and the contextual nature of sound firsthand.

Additionally, I carried out a benchmark analysis of existing solutions, identifying both systems Sociability wasn't implementing and unmet needs in the broader market.

Finally, I conducted a product assessment of the current app experience, allowing us to document the usability issues that needed to be addressed in parallel.

Research synthesis board — user journeys, pain points and benchmark analysis

The assessment mapped the existing sound data system across two dimensions: what was not working and where the opportunities were for an improved approach.

Problems identified

Content

The current binary sound description (loud, quiet) lacked nuance and objectivity.

Uploading information

The process for manually uploading sound information via tags was limiting and required a high degree of personal judgement, leading to frustration.

Outdated UI

The app's interface contributed to the lack of detail and trust, with little use of colour to guide understanding.

Opportunities

Process redesign

Use this project as an example for redesigning the entire data collection process, specifically for visual, hearing, and sensory features, and the app's experience as a whole.

Terminology

Reassess the language and redefine content to reflect context.

Objectivity

Include standardised parameters to increase trust and understanding across different user conditions.

Inclusion

Redesign data collection to be accessible and participatory, enabling all users to contribute their experiences with confidence.

Triangulating insights, I proposed a redesigned data collection process built around two layers.

First: sound recording paired with a standardised, colour-coded decibel scale. This reduces subjectivity and gives every user segment a shared, objective reference point to assess against their own conditions.
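A colour-coded decibel scale like this is essentially a banded lookup. The sketch below illustrates the idea; the band boundaries, labels and colours are hypothetical examples, not Sociability's actual specification.

```python
# Illustrative colour-coded decibel scale.
# Thresholds and colours are assumptions for demonstration only.
DB_BANDS = [
    (40, "quiet", "green"),              # e.g. library, quiet cafe
    (60, "moderate", "yellow"),          # e.g. normal conversation
    (80, "loud", "orange"),              # e.g. busy restaurant
    (float("inf"), "very loud", "red"),  # e.g. live music venue
]

def classify_db(level_db: float) -> tuple[str, str]:
    """Map a measured decibel level to a (label, colour) band."""
    for upper, label, colour in DB_BANDS:
        if level_db < upper:
            return label, colour
    return DB_BANDS[-1][1], DB_BANDS[-1][2]

print(classify_db(52))  # ('moderate', 'yellow')
```

Because every contributor's recording passes through the same bands, two users measuring the same venue at the same time get the same label, which is the trust-building property the proposal relied on.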

Second: building on my colleague's user research, a new taxonomy of sound-type tags. This lets contributors describe the nature of the sound (music, voices, traffic, machinery) with a richer and more relevant set of options.
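Structurally, such a taxonomy is a small controlled vocabulary: top-level sound types, each with more specific tags beneath them. The categories and example tags below are illustrative placeholders, not the taxonomy produced by the actual research.

```python
# Hypothetical sound-type tag taxonomy; entries are examples only.
SOUND_TAXONOMY: dict[str, list[str]] = {
    "music": ["live music", "background playlist"],
    "voices": ["conversation", "announcements"],
    "traffic": ["road noise", "trains"],
    "machinery": ["coffee machines", "air conditioning"],
}

def validate_tags(tags: list[str]) -> list[str]:
    """Keep only tags that exist in the controlled vocabulary,
    so free-text input can't drift away from the shared taxonomy."""
    known = {t for options in SOUND_TAXONOMY.values() for t in options}
    return [t for t in tags if t in known]

print(validate_tags(["live music", "karaoke"]))  # ['live music']
```

Constraining contributions to a shared vocabulary keeps personal judgement out of the tag names themselves, while still letting users express what kind of sound they experienced.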

The proposal also addressed process: clearer input categories with helpful descriptions, visual hierarchy to guide understanding, and progress tracking to make the contribution experience feel less effortful and more trustworthy.

Before / After — sound data input redesign

Although the feature didn't ship before I left Sociability, the proposal was delivered and received very positively. It gave the team a clear, research-grounded direction for making sound data genuinely useful.

This experience gave me firsthand exposure to digital inclusion and the complexity of translating the vast world of accessibility into digital environments. As a non-disabled individual, it pushed me to challenge my assumptions, to sharpen my research and communication skills, and to understand the importance of honest and rigorous data while valuing the subjective, unique richness of human experience.

I would love to chat with you!

Send me an email at clara.astiochoa@gmail.com
or find me on LinkedIn