The more information you have about the engagement patterns of network members or users of an online platform, the more tempting it is to believe that these data alone can tell you everything you need to know. But until you explore what type of engagement is valuable and why, and what kind of impact that engagement has on people, organizations, and communities, your hypotheses about what actually drives outcomes remain untested.
Organizations often assess their network-building efforts or technology interventions (or a combination of the two) to reach more definitive conclusions about what works, so that measures and indicators can be adjusted and the organization can learn from its experience. With new technology that tracks people’s behavior (or even old technology, like years of paper attendance records from different types of events), you can integrate actual behavioral data on engagement over time with survey and other research to get a more comprehensive picture of how value and impact are created through engagement. You can also compare how engagement and other measures, such as the number or type of connections in a social network, relate to impact.
With two recent projects, we were able to integrate engagement data with survey and other data to probe the value of different levels and types of engagement. The results offered insights into how impact was achieved and helped both organizations refine their network engagement strategies.
More Engagement on the Community Commons Means More Impact on Users
The Community Commons provides public access to thousands of meaningful data layers, with mapping and reporting capabilities that let people and organizations explore community health data, policy interventions, and best practices.
- Data Collection and Analysis of Engagement – We worked with the Institute for People, Place and Possibility (IP3), the organization that stewards the Commons, to implement an online system that tracks user-centric data in a searchable, cloud-based relational database. These data allowed us to establish categories for a ladder of engagement based on core platform activities, such as building maps and reports, connecting to others, or reading tutorials to build capacity for using data.
- Survey Data Collection on Outcomes – After a year of collecting platform data, we launched a user survey to explore what impact platform use and tool engagement had on users.
- Results – Across key measures, the combined data showed greater impact for users who were more engaged. One of the core hypotheses in the Commons’ Theory of Action was that increased engagement with the platform’s tools would increase users’ knowledge, skills, and capacity – a hypothesis that our research supported. A sample of the findings from this integrated analysis appears below.
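For readers who want to see the mechanics, the general approach can be sketched in a few lines of code. This is a hypothetical illustration only – the activity names, ladder thresholds, and survey scores below are invented for the example, not IP3’s actual data or system:

```python
# Hypothetical sketch: bucket users into a "ladder of engagement" from
# platform activity logs, then compare survey-reported impact by rung.
# Activity names, thresholds, and survey data are invented.

from collections import Counter, defaultdict

activity_log = [  # (user_id, activity)
    ("u1", "built_map"), ("u1", "built_report"), ("u1", "read_tutorial"),
    ("u2", "read_tutorial"),
    ("u3", "built_map"), ("u3", "built_map"),
]

def ladder_rung(n_activities):
    """Assign an engagement rung from total activity count."""
    if n_activities >= 3:
        return "high"
    if n_activities == 2:
        return "medium"
    return "low"

counts = Counter(user for user, _ in activity_log)
rungs = {user: ladder_rung(n) for user, n in counts.items()}

# Survey responses on a 1-5 impact scale (invented data).
survey = {"u1": 5, "u2": 2, "u3": 4}

# Join behavioral rungs to survey outcomes and average impact per rung.
by_rung = defaultdict(list)
for user, score in survey.items():
    by_rung[rungs[user]].append(score)

avg_impact = {rung: sum(s) / len(s) for rung, s in by_rung.items()}
print(avg_impact)  # {'high': 5.0, 'low': 2.0, 'medium': 4.0}
```

The point of the join is that behavioral rungs and survey outcomes end up in one structure, so impact can be compared directly across levels of engagement.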
Different Patterns of Engagement in Mozilla Science Lab Correspond with Different Views on Network Health and Outcomes
The Mozilla Science Lab is a network of researchers, developers, and librarians making research open and accessible and empowering open science leaders through fellowships, mentorships, and project-based learning.
- Data Collection and Analysis of Engagement – To build a full database of people who had engaged with the Science Lab over the years, we used event records, call attendance records, and GitHub data on code contributions and study group participation to create categories for both the level of engagement and the type of engagement of network members. This allowed us to compare diversity of participation – people who participated in more than one way – with level of participation – people who participated a specific number of times – as part of our analysis.
- Survey Data Collection on Outcomes – As part of an existing cross-program survey conducted by the Mozilla Foundation, Mozilla Science participants were asked about their engagement in the networks that the Mozilla Foundation supports. Respondents were asked questions about the network’s health, and how they benefited from their participation in the network.
- Results – We found that an individual’s levels of engagement and diversity of engagement correlated in slightly different ways with their reporting on network health and benefits (see results below for an example). Connecting the dots between patterns of engagement in a network and a range of network outcomes continues to be an important part of how we approach our network evaluation work.
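The distinction between level and diversity of engagement is easy to operationalize: level counts how many times a member participated, diversity counts how many distinct ways. A hypothetical sketch – member names and participation records below are invented:

```python
# Hypothetical sketch of level vs. diversity of engagement.
# All member names and engagement types are invented.

participation = [  # (member, engagement_type)
    ("ana", "event"), ("ana", "event"), ("ana", "event"),
    ("ben", "event"), ("ben", "call"), ("ben", "github"),
]

def level(member):
    """Total number of recorded participations for a member."""
    return sum(1 for m, _ in participation if m == member)

def diversity(member):
    """Number of distinct participation types for a member."""
    return len({t for m, t in participation if m == member})

# Same level, different diversity: the two measures can diverge,
# which is why they can correlate differently with outcomes.
print(level("ana"), diversity("ana"))  # 3 1
print(level("ben"), diversity("ben"))  # 3 3
```

Because the two measures can diverge for the same member, they can also correlate differently with survey-reported network health and benefits, which is what we observed.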
Developed with the Center for Evaluation Innovation, this two-part guide to network evaluation includes a brief that outlines frameworks, approaches, and tools to address practical questions about designing and funding network evaluations, and a Casebook that provides profiles of nine evaluations.
Download at: www.networkimpact.org/networkevaluation
The civic tech field has expanded so widely in recent years, it’s hard to think of a major city or an area of civic life that these technologies don’t touch. In this dynamic environment, the John S. and James L. Knight Foundation has been a field leader, investing over $25 million since 2010 in projects ranging from neighborhood forums, to civic crowdfunding platforms, to efforts that promote government innovation. For eighteen months, Network Impact worked with Knight Foundation grantees and other civic tech leaders to find out how they measure success, focusing on tools they’re using to track platform performance and assessment challenges they face along the way.
We started by identifying key outcomes related to these common civic tech objectives and gathered case examples of assessments from the field:
- Build place-based social capital
- Increase civic engagement
- Promote deliberative democracy
- Support open governance
- Foster inclusion and diversity
Our work also led us to think about tracking the performance of a platform through its lifecycle – recognizing that assessment priorities vary with stage of development, from early testing of a minimum viable product to later-stage scaling of a tested concept.
The result of this research: two guides to evaluating civic tech that summarize assessment best practices, including leading methodologies and metrics that can help innovators monitor progress toward their goals and evaluate the impact of their efforts. Some of these best practices focus on connections between users, both online and offline – an important network dimension.
Assessing Civic Tech: Case Studies and Resources for Tracking Outcomes is a publication of the Knight Foundation with Network Impact that focuses on measuring the impact of civic tech platforms on people, places, and processes.
How To Measure Success: A Practical Guide to Answering Common Civic Tech Assessment Questions is a Network Impact publication that offers examples and advice for monitoring a platform’s ongoing performance using tools and approaches that are effective and practical.
Additionally, the Knight Foundation wrote up its key lessons from investing in civic tech, which are also worth a read.
How Code for America is using the Assessing Civic Tech guide
The release of this guide comes at the right time. Demonstrations of what is possible are up and running in communities of every size across the United States. Now we need to find out not only what works, but what works best over time.
At Code for America, the guide will be particularly helpful for Fellowship teams and volunteer Brigades who are thinking about the questions they need to ask and the changes in attitudes they need to measure to assess progress towards increasing civic engagement and open governance. The process and case studies documented in this guide will be useful for structuring these assessments.
At Code for America, we believe that it is critically important to identify the residents, community groups, or government staff who will be using the particular public service program or benefit, then work with them early in the assessment design process. This guide provides important examples of how to frame an evaluation to include and work with intended beneficiaries. It offers sample questions and resources that will be very helpful to organizations and individuals who are beginning to explore how they can include measures of civic engagement and changing attitudes in their assessment of their efforts.
Connecting to Change the World builds on an earlier resource that Pete and I developed called Net Gains. This latest collaboration with John Cleveland includes examples and lessons that have emerged from our work with social impact networks over the last decade or so. During that time, we’ve been introduced to many new networks and deepened our work with others. As a consequence, we have a better understanding of what makes some networks highly “generative.” By generative, we mean networks with a renewable collaborative capacity to generate numerous activities simultaneously. These are networks that activate members’ connections on an emergent basis as need and opportunities arise.
Examples in the book include RE-AMP – more than 165 nonprofit organizations and foundations in eight Midwestern states working together on climate change and energy policies; Reboot – a network of young Jewish American “cultural creatives” who are exploring and redefining Jewish identity and community in the U.S. and the U.K.; ten regional networks of state agencies and nonprofit providers that have organized to end homelessness in Massachusetts; and five regional and two national networks of rural-based organizations that are promoting public policies that benefit rural communities in the U.S. In all of these networks, members have been very deliberate about creating, strengthening, and maintaining network ties in order to establish a base of connections from which many activities can arise at the same time or over time. This foundation is the starting point for the progression from connecting to aligning to production, or joint action, that we also discuss in the book.
Net Gains provides practical advice for the growing community of network builders developing networks for social change. The handbook draws on the experiences of network builders, case studies covering a diversity of networks, and emerging scientific knowledge about “connectivity.” The guide is divided into four parts, each focusing on a specific element of network building and offering strategies for successfully developing networks at different stages in their evolution, from the moment of their inception to the management of their ongoing production.
The handbook can be downloaded here.
Connecting to Change the World is an informative guide to creating collaborative solutions to tackle the most difficult challenges society faces. Drawing from the authors’ depth of experience with more than thirty successful network projects, the book provides the frameworks, practical advice, case studies, and expert knowledge needed to build better performing networks. The book aims to give readers greater confidence and ability to anticipate challenges and opportunities.
When you’re evaluating a network, what are you looking for?
We recently submitted an evaluation proposal for a seven-year-old network with more than 120 organizations spread across more than a half-dozen states. Without knowing much about the network, we had to describe what we’d be evaluating: our analytic framework. It had 12 components, many of them specific to networks rather than organizations. It’s a framework we’d apply to assess the condition and performance of any network.
Purpose: What is the network’s purpose? Is it being fulfilled? Has it changed over time? What other purposes are emergent among network members?
Value Propositions: What are the reasons that members participate in the network? Which reasons are most important to the members? How well do members feel their value propositions are being fulfilled by participating in the network?
Membership & Engagement: Who has been attracted to the network and who hasn’t that it would be desirable to have? What are the types of engagement in the network and to what degree do members engage in the network? Are the network’s rules/incentives for member engagement effective? Are there barriers that prevent/reduce member engagement?
Network Connectivity: What are the relationships among members? What level of reciprocity and trust has been built? What is being transacted between members? How has member connectivity evolved over time? What is the connectivity “shape” of the network (different patterns of connectivity—e.g., super hubs; multiple hubs; clusters) and how does the shape enable or block network efficiency and effectiveness?
Network Alignment: How well are network members aligned around ideas, goals, strategies, standards, and other guideposts? To what extent does alignment in the network influence members’ actions?
Network Production: To what extent have the network’s connectivity and alignment created conditions for collaboration/co-production by network members of, for instance, usable knowledge, policy change, services, or innovations? How well do network production processes function?
Other Network Capabilities: Which other network capabilities (e.g., network reach and resilience) matter to the network’s health—and what is their condition?
Governance: Does the network’s structure for decision-making enable members? Is it efficient and effective? Does it promote member confidence in and loyalty toward the network? What are the network’s monitoring and feedback loops and how well are they being used? What is the network’s resonance to members’ interests/actions? What is its adaptive capacity?
Business Model: What is the value chain within the markets and other contexts in which the network operates? What products and services – its value creation – does the network offer? What is the network’s business model – revenues and costs – and how will it be sustained?
Operations: How well does the network enable members to benefit from the network through coordination of and communications among members, access to shared resources, working group leadership, and peer-to-peer exchange and learning? What staffing, mechanisms, and resources are in place? Which members do/don’t use them?
Strategic Communications: How is the network positioned with external audiences/stakeholders to achieve its goals? In what ways can the network’s external connections, capacities, and brand be leveraged for greater impact or to attract more resources?
Impacts: What measurable impact is the network having in achieving its purpose and goals? What impact is participating in the network having on the way members think and act? How can the network effectively measure its impact on a continuing basis—and use the information for improving its performance?
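Several of the connectivity questions in the framework above have simple quantitative counterparts. The sketch below illustrates three of them – density, reciprocity, and hub detection – on an invented directed tie list. It is a toy example under those assumptions, not a substitute for full network analysis software:

```python
# Hypothetical sketch of basic connectivity measures behind the
# "Network Connectivity" questions: density, reciprocity, and hub
# detection from a directed edge list. Member names are invented.

edges = {("a", "b"), ("b", "a"), ("a", "c"), ("c", "a"),
         ("a", "d"), ("e", "a")}
members = {m for edge in edges for m in edge}
n = len(members)

# Density: observed ties as a share of all possible directed ties.
density = len(edges) / (n * (n - 1))

# Reciprocity: share of ties that are returned in the other direction.
reciprocity = sum((b, a) in edges for a, b in edges) / len(edges)

# Degree (in + out) per member; high-degree members are candidate hubs.
degree = {m: sum(m in edge for edge in edges) for m in members}
hubs = [m for m, d in degree.items() if d >= 3]

print(round(density, 2), round(reciprocity, 2), hubs)
```

On this toy data, member “a” is the lone hub – a “super hub” pattern in miniature – which is the kind of shape a network map makes visible at scale.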
Taking your network’s temperature regularly is easy—and helps to inform continuous improvement of the network’s effectiveness.
When the 14 organizations in the Southwest Rural Policy Network met in November 2010 to discuss how well their network was doing, they didn’t just share their latest impressions. They had data stretching back nearly a year and a half. Since June 2009, as part of their formal work plan, they had self-assessed the network five times using a Network Health Scorecard. The assessment covered four essential categories: the network’s purpose, performance, operations, and capacity. The process only takes a few minutes after the network’s quarterly meeting—but reveals a great deal about how network members judge the network.
In November, Joyce Hospodar, the network member who chairs the network’s evaluation committee, summarized the scores – on a scale of 1 to 5, with 1 low and 5 high – for the past five quarters: Purpose scores were holding steady. Performance scores peaked the previous spring. Operations and Capacity hovered around 4.0, but dipped recently.
Interestingly, when network members also scored where they thought the network’s health was compared to a year earlier, the ratings were all substantially higher than at the outset.
The scorecard is a tool, one source of evaluative feedback a network can use to gauge how well it’s doing and what sort of improvements might be useful. “Note,” says network coordinator Mikki Anaya, “this assessment only measures one aspect of the SWRPN’s effectiveness—the capacity/organizational efforts of the network.” A different evaluation will look at the network’s policy advocacy activities.
The scorecard we developed has 22 questions divided among the four categories. Several networks have adapted the questions to better reflect the specifics of their own network. But in any case, the evaluative process is the same:
- Identify key indicators of the network’s well-being
- Regularly collect data from the members
- Analyze the data and share it with members
- Determine what changes are needed
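The “analyze the data” step can be as simple as averaging ratings by category each quarter. Here is a minimal sketch, using the scorecard’s four categories but invented scores (a real round would aggregate responses to all 22 questions):

```python
# Hypothetical sketch of scoring one round of a Network Health
# Scorecard: average members' 1-5 ratings by category. The four
# categories come from the scorecard described above; the scores
# are invented, and one rating per category stands in for the
# full 22-question instrument.

from statistics import mean

CATEGORIES = ("purpose", "performance", "operations", "capacity")

responses = [  # one dict of ratings per responding member
    {"purpose": 5, "performance": 4, "operations": 4, "capacity": 3},
    {"purpose": 4, "performance": 4, "operations": 5, "capacity": 4},
    {"purpose": 5, "performance": 3, "operations": 4, "capacity": 4},
]

quarter_scores = {
    cat: round(mean(r[cat] for r in responses), 2) for cat in CATEGORIES
}
print(quarter_scores)
# {'purpose': 4.67, 'performance': 3.67, 'operations': 4.33, 'capacity': 3.67}
```

Storing one such dictionary per quarter gives the time series that makes trends – a spring peak in Performance, a recent dip in Operations – easy to spot.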
Kudos to the members of the SW Rural Policy Network for picking up on this tool and incorporating its use into their network practice.
Intentionally managing members’ connections can strengthen your network.
A network’s connectivity–the number and quality of links between nodes, and the structure of those links–changes over time. To support a network’s development, network stewards intentionally manage this evolution, instead of just letting it happen.
A year ago, we started working with a start-up national network with about 60 members. The connectivity among members, which we measured and then, using special software, mapped graphically, was fairly low–not a surprise since it was a young network. But there was a core of about 11 members who were more densely and intensely connected to each other. The network maps, which place the most connected members at the center of the map, revealed this core of members, as well as those members at the periphery with few connections to others. As a result of the connectivity analysis the network stewards initiated activities aimed at increasing connectivity.
A year later – we just reported in a “state of the network” presentation at the network’s annual meeting – the connectivity-building efforts have been a great success. The average number of links among members more than doubled. The intensity of links – what members transact with each other – also increased substantially. And new network maps revealed that the core of highly connected members also more than doubled, even though there had been a 33% turnover in network membership. Now 25 members form the core, or central hub, of the network. All of these changes indicate a strengthening of the network, revealed and made visible to the members through network mapping.
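The year-over-year comparison described above boils down to two numbers per year: average links per member and the size of the densely connected core. A hypothetical sketch, with invented members, ties, and a toy core threshold:

```python
# Hypothetical sketch of a year-over-year connectivity comparison:
# average links per member and core size, computed from undirected
# tie lists. All names, ties, and the core threshold are invented.

def avg_links(ties, members):
    """Average number of ties per member (each undirected tie counts
    once for each of its two endpoints)."""
    return 2 * len(ties) / len(members)

def core(ties, members, min_ties=3):
    """Members with at least min_ties connections: the dense core."""
    degree = {m: sum(m in t for t in ties) for m in members}
    return {m for m, d in degree.items() if d >= min_ties}

members = {"a", "b", "c", "d", "e", "f"}
year1 = {frozenset(t) for t in [("a", "b"), ("a", "c"), ("b", "c"), ("d", "e")]}
year2 = year1 | {frozenset(t) for t in
                 [("a", "d"), ("b", "d"), ("c", "e"), ("e", "f"), ("a", "f")]}

print(round(avg_links(year1, members), 2), sorted(core(year1, members)))
print(round(avg_links(year2, members), 2), sorted(core(year2, members)))
```

Running the same two functions on each year’s tie list is enough to show whether average connectivity and the core are growing, which is the comparison a “state of the network” report needs.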
Clay Shirky, a champion of network approaches, sees a new revolution coming.
Here is Shirky’s fascinating insight, offered in an interview in the June 2010 issue of WIRED:
“People have had lots of free time for as long as there’s been an industrialized world. But that free time has mainly been something to be used up rather than used, especially in postwar America, with the rise of suburbanization and long commutes. Suddenly we no longer lived in tight-knit communities and therefore we spent less time interacting face-to-face. As a result, we ended up spending the bulk of our free time watching television…
“Someone born in 1960 has watched something like 50,000 hours of television already–more than five and a half solid years…
“Somehow, watching television became a part-time job for every citizen in the developed world. But once we stop thinking of all that time as individual minutes to be whiled away and start thinking of it as a social asset that can be harnessed, it all looks very different. The buildup of this free time among the world’s educated population–maybe a trillion hours per year–is a new resource. It’s what I refer to as the cognitive surplus.”
Shirky further argues that as watching television, a solitary activity, is replaced by the use of technologies that promote social connection, there is a growing demand and ability for shared and productive activity.
“When someone buys a computer or mobile phone, the number of consumers and producers both increase by one. This lets ordinary citizens, who’ve been previously locked out, pool their free time for activities they like and care about. So instead of free time seeping away in front of the television set, the cognitive surplus is going to be poured into everything from goofy enterprises like lolcats, where people stick captions on cat photos, to serious political activities like Ushahidi.com, where people report human rights abuses.”
In short, the cognitive surplus will feed the process, already begun, of social networks of various sorts using technologies that support/enhance/ease connectivity to align around particular ideas and identities and then produce value. It’s an idea that Shirky explores in his new book, Cognitive Surplus: Creativity and Generosity in a Connected Age.