Methodology: Survey

Our Digital Well-being Survey asks people how they interact with digital technology. By collecting detailed, real-time evidence about people's digital behaviours and experiences, we can start to address blind spots in how we explain the inequalities and opportunities of digital life for different groups of people.

Global standards

  • Survey design follows the OECD's global statistical standards, particularly its Guidelines on Measuring Subjective Well-being.
  • Questions and response options were informed by a detailed review of similar surveys and best practices from the national statistical offices of OECD countries.

Survey mode

  • The survey is administered through a standardised online tool.
  • The risk of bias is limited through indirect questioning techniques, adjusted scale formatting, neutral wording, and guarantees of anonymity and confidentiality, which obscure any connection between individual responses and potentially sensitive or socially charged issues.

Question order

  • The survey places questions on potentially loaded topics, such as mental health and overall life satisfaction, near the beginning to balance out potential order effects.

Response scales

  • The survey combines a range of different response scale types (e.g. 11-point numerical scales, labelled Likert scales, binary response options).
  • This approach can make the survey harder to complete, but it makes the results more comparable with international surveys and global statistical standards on similar topics (see the sketch below).
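
To make mixed scale types comparable, one common approach is to map every response onto a shared 0-1 range. The sketch below is a minimal, hypothetical illustration of that idea in Python; it is not the Hub's actual processing code, and the scale labels are assumptions.

    # Hedged sketch: rescale mixed response scales onto a common 0-1 range.
    LIKERT_5 = {"strongly disagree": 0, "disagree": 1,
                "neither agree nor disagree": 2,
                "agree": 3, "strongly agree": 4}

    def rescale(value, low, high):
        """Linearly map a response from [low, high] onto [0, 1]."""
        return (value - low) / (high - low)

    print(rescale(7, 0, 10))                 # 11-point scale (0-10) -> 0.7
    print(rescale(LIKERT_5["agree"], 0, 4))  # 5-point Likert        -> 0.75
    print(rescale(1, 0, 1))                  # binary (no=0, yes=1)  -> 1.0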

Survey length

  • Respondents can complete the survey in less than 10 minutes.
  • The survey balances response burden, response rate, and comprehension to capture well-being outcomes connected to technology.

Respondent privacy

  • Personal data and processes are protected by data confidentiality rules.
  • The survey doesn't collect or retain any information that could identify respondents (illustrated below).
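
As a purely hypothetical illustration of this principle (not the Hub's actual pipeline, and with invented field names), a response record can be reduced to an explicit allow-list of non-identifying fields before it is stored:

    # Hedged sketch: keep only an allow-list of non-identifying fields.
    ALLOWED_FIELDS = {"age_band", "country", "screen_time_band",
                      "life_satisfaction"}

    def sanitise(response: dict) -> dict:
        """Drop anything not on the allow-list (e.g. IP address, email)."""
        return {k: v for k, v in response.items() if k in ALLOWED_FIELDS}

    raw = {"ip": "203.0.113.7", "email": "x@example.org",
           "age_band": "25-34", "country": "FR", "life_satisfaction": 8}
    print(sanitise(raw))
    # {'age_band': '25-34', 'country': 'FR', 'life_satisfaction': 8}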

Methodology: Indicators

We select our indicators from a global set that captures the statistics of digital life, drawing on scientific research to predict their impact on everyday lives. Where it makes sense to do so, we extend the indicators beyond national averages to different groups of people, sectors of the economy, and sub-national regions. This helps us capture some of the links between well-being dimensions.

Relevance

  • Relevance is about how well the statistics meet the current and potential needs of visitors to the Hub. We assess it against the main issues and trends for new technologies identified in similar OECD work and academic research. The assessment is qualitative: we carefully evaluate and select source data so that the range of indicators balances the well-being dimensions relevant to digital life.

Accuracy

  • We assess whether the indicators measure what they're supposed to measure. The objectivity and credibility of an indicator's source are important, especially in helping Hub visitors draw confidence from its reputation and analytical soundness.

Comparability

  • We measure the impact of applying common statistical concepts, measurement tools, and procedures when comparing indicators across geographical areas, across non-geographical domains, or over time (see the sketch below). We can also complement official statistics with relevant, pre-validated non-official data.
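
As a hypothetical example of one such procedure (the indicator, units, and values below are invented), comparing a statistic across countries typically means harmonising units first and then placing the harmonised values on a common scale:

    # Hedged sketch: harmonise units, then standardise for comparison.
    from statistics import mean, stdev

    # Hypothetical indicator: daily internet use, reported in mixed units
    raw = {"A": ("minutes", 150), "B": ("hours", 3.0), "C": ("minutes", 210)}

    def to_minutes(unit, value):
        return value * 60 if unit == "hours" else value

    harmonised = {c: to_minutes(u, v) for c, (u, v) in raw.items()}

    # Standardise to z-scores so countries sit on one comparable scale
    mu, sigma = mean(harmonised.values()), stdev(harmonised.values())
    print({c: round((v - mu) / sigma, 2) for c, v in harmonised.items()})
    # {'A': -1.0, 'B': 0.0, 'C': 1.0}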

Timeliness

  • Timeliness is the amount of time between an event and the availability of the information describing it. Punctuality, a related idea, is the lag between the data's scheduled and actual release dates (both lags are illustrated below). Both describe an indicator's ability to capture an event while it is still relevant, which matters when we need to report the most pressing effects of digitalisation on people's lives.
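
A minimal worked example of the two lags, with invented dates:

    # Hedged sketch: computing the two lags described above.
    from datetime import date

    event_date     = date(2023, 12, 31)  # end of the reference period
    scheduled_date = date(2024, 3, 1)    # announced release date
    actual_date    = date(2024, 3, 15)   # when the data became available

    timeliness  = (actual_date - event_date).days      # event -> availability
    punctuality = (actual_date - scheduled_date).days  # scheduled -> actual

    print(timeliness, "days")   # 75 days
    print(punctuality, "days")  # 14 days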

Coherence

  • We combine and use data in different ways, so we need to explain any clashes and overlaps between concepts or data sources. For example, if two indicators cover the same event, we'd need to reconcile differences in time of recording, valuation, and coverage (a simple check is sketched below). We identify and validate the Hub's indicators against analytical and measurement frameworks and indicators from the OECD, Eurostat, and other organisations.
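
As a hypothetical illustration (sources and values invented), a basic coherence check flags the dimensions on which two candidate sources for the same statistic disagree before any reconciliation:

    # Hedged sketch: flag coherence issues between two sources.
    source_a = {"value": 0.82, "reference": "2023",
                "coverage": "all households"}
    source_b = {"value": 0.86, "reference": "2023-Q2",
                "coverage": "urban households"}

    def coherence_issues(a, b):
        """List mismatches that must be reconciled before comparison."""
        issues = []
        if a["reference"] != b["reference"]:
            issues.append("different time of recording")
        if a["coverage"] != b["coverage"]:
            issues.append("different coverage")
        return issues

    print(coherence_issues(source_a, source_b))
    # ['different time of recording', 'different coverage']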

Interpretability

  • The indicators should be useful, informative, and relevant to multiple audiences, so we make it as easy as possible for different users to visualise and communicate the data. We provide clear descriptions of the indicators and the underlying statistics, along the lines of the metadata sketched below, to help Hub visitors understand, use, and analyse the data properly.
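
A hypothetical example of the kind of metadata record that supports interpretability (all field names and values are invented, not the Hub's actual schema):

    # Hedged sketch: an illustrative indicator description record.
    indicator = {
        "name": "Daily internet use",
        "definition": "Average minutes per day spent online, ages 16-74",
        "unit": "minutes per day",
        "source": "national statistical office survey (hypothetical)",
        "reference_period": "2023",
        "breakdowns": ["age", "gender", "region"],
    }

    for field, value in indicator.items():
        print(f"{field}: {value}")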