
Evaluation Methodology: What and How We Assess

This methodology is a practical way to analyze what works well in a digital public service — and what creates confusion, fear, or a loss of trust.
We don’t assign grades, scores, or penalties. We don’t create rankings.
Our goal is to identify specific barriers and offer pro-human solutions.

Evaluation is based on two sources:

  • A user questionnaire – a simple form with clear questions and room for comments.

  • A subjective assessment by the foundation’s representative – we go through the service just as a real person would and document every difficulty and emotional reaction along the way.

This methodology can also be used by institutions as a form of internal testing. Since every barrier is described along with practical suggestions, the process is accessible and non-technical.
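
For institutions using the methodology as internal testing, it can help to record each finding in a simple, structured way. The sketch below shows one possible record shape, assuming TypeScript; every name in it is illustrative rather than part of the methodology itself.

```typescript
// One evaluation finding. All names are illustrative assumptions.
type Criterion =
  | "clarity"
  | "accessibility"
  | "efficiency"
  | "safety-and-trust"
  | "emotional-experience"
  | "life-context"
  | "institutional-tone";

type Source = "user-questionnaire" | "representative-walkthrough";

interface Finding {
  criterion: Criterion;  // which of the seven areas below
  source: Source;        // which of the two sources above
  barrier: string;       // what was difficult, in plain words
  whyItMatters: string;  // the human consequence that was observed
  suggestion: string;    // a practical, often low-cost fix
}

// Example entry, as it might appear in a final report:
const example: Finding = {
  criterion: "clarity",
  source: "representative-walkthrough",
  barrier: "The application form uses unexplained acronyms.",
  whyItMatters: "People fear making a mistake and give up.",
  suggestion: "Spell out every acronym on first use.",
};
```

Deliberately, there is no score field: a finding is a described barrier, its human consequence, and a practical suggestion – nothing more.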


1. Clarity

Key question: Does the person immediately understand what the service is, how to use it, and why it matters?

Typical barriers:

  • Bureaucratic or technical language

  • Long and complex sentences

  • Vague instructions or unexplained acronyms

  • Text written from the system’s point of view, not the user’s

Consequences:

  • People don’t understand what to do → they give up

  • Feeling that the service is “not for me”

  • Fear of making a mistake

Suggestions:

  • Use plain language, short sentences, and clear headings

  • Avoid acronyms or explain them on first use

  • Address users directly: “You can apply...”, “Choose an option...”

  • Always explain what happens next: “After clicking, you will...”
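
Two of these suggestions – short sentences and explained acronyms – can even be checked automatically when reviewing service text. Below is a minimal TypeScript sketch with deliberately rough heuristics; the word limit and the expansion pattern are illustrative assumptions, not a standard.

```typescript
// Flag sentences that exceed a word limit (the limit is an assumption).
function longSentences(text: string, maxWords = 25): string[] {
  return text
    .split(/(?<=[.!?])\s+/) // naive split on sentence-ending punctuation
    .filter((sentence) => sentence.split(/\s+/).length > maxWords);
}

// Flag acronyms that never appear next to an expansion such as
// "Value Added Tax (VAT)" – a deliberately rough heuristic.
function unexplainedAcronyms(text: string): string[] {
  const acronyms = text.match(/\b[A-Z]{2,}\b/g) ?? [];
  return [...new Set(acronyms)].filter((a) => !text.includes(`(${a})`));
}

console.log(unexplainedAcronyms("Submit your VAT statement online."));
// → ["VAT"]
```

Checks like these never replace reading the text as a person would; they only point a reviewer to likely trouble spots.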


2. Accessibility

Key question: Can a person use the service independently, without special skills or support?

Typical barriers:

  • The service doesn’t work on mobile devices

  • No subtitles, translations, or assistive features for people with disabilities

  • Requires modern equipment or fast internet

Consequences:

  • Exclusion: “I can’t use it at all”

  • Dependence on others

  • A whole group of people is left out

Suggestions:

  • Test on phones, older devices, and slow connections

  • Provide subtitles, contrast, font scaling, screen reader support

  • Offer simplified versions or alternatives where needed
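
Some of these suggestions translate directly into small pieces of front-end code. A minimal sketch in TypeScript, assuming a browser page; the element id and the visually-hidden CSS class are illustrative.

```typescript
// Scale the root font size so all rem-based text grows with it,
// letting people enlarge text without breaking the layout.
function setFontScale(scale: number): void {
  document.documentElement.style.fontSize = `${100 * scale}%`;
}

// Announce status changes through an ARIA live region, so confirmations
// also reach people using screen readers.
function announce(message: string): void {
  let region = document.getElementById("sr-status");
  if (!region) {
    region = document.createElement("div");
    region.id = "sr-status";
    region.setAttribute("role", "status"); // polite live region
    region.className = "visually-hidden";  // hidden from sight, read aloud
    document.body.appendChild(region);
  }
  region.textContent = message;
}

// Usage:
// setFontScale(1.25);                             // 125% text size
// announce("Your application has been received"); // spoken confirmation
```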


3. Efficiency

Key question: How many steps are required? Are they all necessary?

Typical barriers:

  • Too many stages or forms

  • Repetition of the same information

  • Mandatory fields or documents without explanation

Consequences:

  • People get frustrated or give up

  • Time is wasted, trust is lost

Suggestions:

  • Simplify the path – remove unnecessary steps

  • Explain why each item is needed

  • Show progress: “Step 2 of 4”
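
The “Step 2 of 4” suggestion costs very little to implement. A minimal sketch, assuming TypeScript and an illustrative four-step application path:

```typescript
const steps = ["Your details", "Documents", "Review", "Confirmation"];

// Tell the person where they are and what comes next, in plain words.
function progressLabel(current: number): string {
  const position = `Step ${current} of ${steps.length}: ${steps[current - 1]}`;
  const next = steps[current]; // undefined after the last step
  return next ? `${position}. Next: ${next}.` : `${position}. This is the last step.`;
}

console.log(progressLabel(2));
// → "Step 2 of 4: Documents. Next: Review."
```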


4. Safety and Trust

Key question: Does the person feel safe and understand what is happening to their data?

Typical barriers:

  • No explanation of data processing

  • Technical errors or instability

  • No confirmation after completing actions

Consequences:

  • Fear: “Did I just do something wrong?”

  • Loss of trust in the system or institution

Suggestions:

  • Explain what data is collected and why

  • Confirm actions clearly: “Your application has been received”

  • Ensure technical stability and clean design
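
A clear confirmation removes the “did it work?” fear at almost no cost. A minimal sketch, assuming TypeScript; the field names and exact wording are illustrative.

```typescript
interface SubmissionReceipt {
  referenceNumber: string;
  receivedAt: Date;
}

// Say plainly that the action succeeded, what data is held and why,
// and what the person can expect next.
function confirmationMessage(receipt: SubmissionReceipt): string {
  return [
    "Your application has been received.",
    `Reference number: ${receipt.referenceNumber},`,
    `received on ${receipt.receivedAt.toLocaleDateString()}.`,
    "We store only the details you entered in this form,",
    "and we use them only to process this application.",
    "We will contact you at the email address you provided.",
  ].join(" ");
}

console.log(
  confirmationMessage({ referenceNumber: "A-0001", receivedAt: new Date() })
);
```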


5. Emotional Experience

Key question: How does the person feel before, during, and after using the service?

Typical barriers:

  • Cold or formal tone

  • No acknowledgment or thank you

  • Errors that are hard to understand

Consequences:

  • Shame, irritation, feeling inadequate

  • Avoidance of digital services in the future

Suggestions:

  • Thank the user: “Thank you for using our service”

  • Use a respectful and kind tone

  • Explain errors in plain language: “Missing name – check the field above”
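
Plain-language errors are easiest to keep consistent when they live in one place, so that no raw error code ever reaches the person. A minimal sketch, assuming TypeScript; the codes and wording are illustrative.

```typescript
// Map internal validation codes to respectful sentences that say what to do.
const plainErrors: Record<string, string> = {
  NAME_REQUIRED: "Missing name – check the field above.",
  DATE_FORMAT: "Please write the date as day-month-year, for example 01-02-2026.",
  FILE_TOO_LARGE: "This file is too large – please attach one smaller than 5 MB.",
};

// For unknown failures, never blame the person; fall back to a calm message.
function explainError(code: string): string {
  return (
    plainErrors[code] ?? "Something went wrong on our side. Please try again in a moment."
  );
}

console.log(explainError("NAME_REQUIRED"));
// → "Missing name – check the field above."
```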


6. Life Context

Key question: Does the service consider people’s real-life circumstances?

Typical barriers:

  • Forms that require a printer or scanner

  • No alternatives to physical visits

  • Overly demanding technical requirements

Consequences:

  • People can’t complete the task due to lack of tools, time, or skills

  • Feeling excluded or helpless

Suggestions:

  • Offer alternatives (an in-person option, help by phone, or acting through a trusted representative)

  • Simplify formats: let people fill in forms online instead of offering only a PDF to download


7. Institutional Tone and Style

Key question: What does the service communicate about the institution itself?

Typical barriers:

  • A distant, impersonal tone

  • The institution’s voice dominates while the user’s voice is absent

  • No visible identity or explanation of who runs the service

Consequences:

  • People don’t know who to trust

  • Feeling like the system is faceless or indifferent

Suggestions:

  • Add a human voice: “We created this service to make things easier for you”

  • Use warm, respectful communication

  • Show clearly who is behind the service and what to expect next


Why We Don’t Use Scoring

In Pro-Human Digitalization, we don’t use scores, rankings, or stars.
This isn’t a test or a competition.

We don’t rate institutions — we help them understand why a person might feel lost or discouraged and how to respond.

Each barrier is a signal, not a failure. It simply means that in this specific area, the system failed to resonate with a human need.

Our final report doesn’t give a “grade”. It offers a map:

  • what exactly was difficult,

  • why it matters,

  • and what can be done about it — often in simple, low-cost ways.

We believe pro-human digitalization is not a checklist — it’s a path.
And we walk that path together with public institutions.