On October 27th, 2021, thirteen Network-Centric Resource designers engaged in an online discussion to share challenges and lessons learned about processes and methodologies that ensure knowledge assets are ultimately used by intended communities and networks. We discussed the ways we know if resources are used and useful to their intended audience, and key ingredients for a used and useful resource.
We are particularly grateful to Kristin Antin, Knowledge Collaboration Lead at HURIDOCS for suggesting and driving the online discussion.
How might we know if a resource is needed and valued?
Web metrics might measure some of the value of our resource
If the resource is online, it is helpful to know who is accessing it, how, from where, and when. One excellent web analytics platform that is open source (allowing you to host the platform yourself) is Matomo. But you could also use Google Analytics. At the very least, this will give you an indication that people see **value** in these resources, which is something important to measure. For example, if 25 people download your resource in a week, this alone does not tell you if the resource is being **used**, but it does tell you that it is valuable enough for someone to download — or in other words, that there is a need for this information. Whether the information in the resource was appropriate, useful or applied is another question.
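As a rough illustration, download counts exported from a tool like Matomo or Google Analytics can be grouped per week to spot this kind of demand signal. The record shape below is a made-up export format, not a real Matomo API response:

```python
from collections import Counter
from datetime import date

# Hypothetical analytics export: one record per download event.
# A real Matomo or Google Analytics export would need mapping into this shape.
download_events = [
    {"resource": "toolkit.pdf", "date": date(2021, 10, 4)},
    {"resource": "toolkit.pdf", "date": date(2021, 10, 6)},
    {"resource": "guide.pdf", "date": date(2021, 10, 12)},
]

def downloads_per_week(events):
    """Count download events per ISO (year, week) — a first signal of demand."""
    weekly = Counter()
    for event in events:
        iso_year, iso_week, _ = event["date"].isocalendar()
        weekly[(iso_year, iso_week)] += 1
    return dict(weekly)

print(downloads_per_week(download_events))  # → {(2021, 40): 2, (2021, 41): 1}
```

A weekly count like this only shows demand; pairing it with the feedback methods below is what tells you whether the resource was actually useful.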
How might we know our resource is useful and used?
You either need to ask people if and how the resource is useful and is being used, or you need to observe it. Keep in mind that self-reported data from users is very subjective and can be unreliable, but it is often the most informative/useful (hah!) information you can collect.
People might tell us
- Surveys/forms/questionnaires.
- You can ask questions to elicit information about the “usefulness” of a resource. For example, if you consider “understandability” an important requirement for “usefulness”, you could ask users to rate how understandable the information in the resource is, on a scale from 1 to 10 (see more examples in the table below).
- Tip 1: Quantifiable ratings that can be qualified (e.g. ask the user “why did you give that rating?”)
- Tip 2: When asking for feedback, just ask one question, and keep the responses simple:
- X? -> React with this emoji
- Y? -> React with emojis.
- You can ask “did you apply any of the information in the resource to your work?” and “if so, what impact did that have?” or “what changed as a result of this?”.
- Gathering feedback via personal (or group) engagement – take the time to schedule a call with a user. This can provide opportunities to ask “how did you use the resource?” and many other questions such as “how can this resource be improved?”. One participant made the point that ideally, this discussion with users should focus on learning — a reflection on the usefulness of the resource and thinking about how to move it forward (as opposed to a more backward-looking, extractivist model of collecting stories of outcomes and impact). Users may also reach out to the resource creators/stewards asking for improvements or help in addressing their problems.
We might observe a resource’s usefulness
- We might see it used in the wild (e.g. the resource is referred to on social media or comes up in discussions). If someone has picked up the resource, adapted it, and taken ownership, that’s a very good indicator that your resource is not only useful but perhaps even vital.
- Accompaniment – you might follow/accompany people throughout the process of using the resource and applying the content to their own contexts.
How might we ensure our resource is useful and used?
- Regular updates and maintenance
- Co-creation – know your audience by getting them to help you build it
- Licensing that allows reuse/modification
- Adaptability, open formats
- Inclusive and adaptable to local contexts
- Navigable – people can easily find what they are looking for
- Accessibility: understandable language. Accessible to people with disabilities
- Easy to update
- User testing
- Appropriate – sometimes the right kind of resource for a particular audience is not an in-depth solution but a framework for thinking about the issue. One participant described a five-page PILPG document that summarized the outcomes of a discussion; this simple resource allowed the user to understand the kinds of challenges to expect in this field.
All of these principles and practices (and much more!) are included in the Responsible Resource Creator Manifesto.
Requirements for a Useful & Used Network-Centric Resource
Kristin Antin, Knowledge Collaboration Lead at HURIDOCS, shared her ideas on requirements for a useful HURIDOCS resource:
| RESOURCE REQUIREMENTS | WHAT DOES THIS LOOK LIKE? |
|---|---|
| Relevant to their experience and the challenges they face | Resource topics and content based on real-world experiences and challenges |
| Understandable to someone who isn’t a data scientist or social science researcher and for whom English is not their first language | Glossary, examples, translations, multiple formats (e.g. text, graphics, video) |
| Accessible – able to find it, download it, or somehow get it even without great internet access; not too long/dense | Physical access to the resource; multiple formats; local language; findability (reference-ready); licensing that allows re-use, repurposing and modification; having a distribution plan. Source: https://www.fabriders.net/rrcmdraft-2/ |
| Actionable and appropriate – equips people to take an action of some kind to help them in their work, and provides accurate advice for our audience with a range of budgets, resources, and expertise | Description of steps, key takeaways, what to do in different scenarios, challenges you might face and how to address them, lots of examples, links to additional resources |
Measuring Usefulness
Kristin also shared thoughts on measuring usefulness:
| Hypothesis to test | Indicators of success | Results |
|---|---|---|
| The resources are understandable to the target audience. | When asked to rate how easy it is to understand the resource, 80% of the target audience select 8/10 or higher. | Measure in Jan 2022 |
| The resources are actionable (clear steps and considerations for different contexts) for the work and challenges of our target audience. | When asked to rate how well the information applies to their own work/context, 80% of the target audience select 8/10 or higher. | Measure in Jan 2022 |
| Our target audience will recommend the resource to colleagues, peers, and networks. | When asked how likely they are to recommend this resource to others, 80% of the target audience select 8/10 or higher. | Measure in Jan 2022 |
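The indicator used in each row — at least 80% of respondents rating 8/10 or higher — can be checked mechanically once survey results come in. The ratings below are made-up sample data, not actual survey results:

```python
def indicator_met(ratings, threshold=8, target_share=0.80):
    """Return True if at least `target_share` of ratings are >= `threshold`."""
    if not ratings:
        return False  # no responses yet, so the indicator cannot be met
    share = sum(r >= threshold for r in ratings) / len(ratings)
    return share >= target_share

# Made-up survey responses on a 1-10 scale (8 of the 10 are >= 8).
sample_ratings = [9, 8, 10, 7, 8, 9, 8, 6, 9, 10]
print(indicator_met(sample_ratings))  # → True
```

Keeping the threshold and target share as parameters lets each hypothesis in the table reuse the same check with its own success criteria.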
Potential steps for creating used, useful and usable Network-Centric Resources
We finished the discussion with participants identifying steps they might add to their processes and projects in the future.
- Rethink surveys as the go-to for feedback and think more about how to get honest and valuable inputs from partners.
- Consult with users before designing
- Analyse the context beforehand to ensure it meets priority needs
- Speak more with the community we are building the tool for
- Translate into more languages for increased access.
- Involve the network and community into the making of the resource.
- Engage throughout the process
- More convening around content
- Learn more about outcome harvesting!
- Gather feedback before we launch.
Resources shared during the discussion:
- What can usability research do for you?
- Democratising Access to Resources
- Responsible Resource Creator Manifesto
- Lean UX (book)
- Localization Lab
- The Lifecycle of a Network-Centric Resource
- Sumana Harihareswara has great insights on her blog from her work in the open source space.
Credit
Along with Kristin, the people who participated in this online discussion and contributed to this blog post were:
- Gus Andrews, Front Line Defenders
- Romain Ledauphin, Archivist
- Diego Toledo, HURIDOCS
- Kyla Van Maanen, Intertidal Strategies
- Yurim Choi, Transitional Justice Working Group (TJWG)
- Scott Stevens, Transitional Justice Working Group (Access Accountability)
- Liz Monk, Western Pennsylvania Regional Data Center (WPRDC) at University of Pittsburgh
- Grace Linczer, HURIDOCS
- Yvonne Madondo, CIVICUS
- Oriana Castillo, CIVICUS
- Arnalie Vicario (@arnalielsewhere), HOT
- Ashley Fowler (Internews/USABLE/SAFETAG)