Note: This research project was featured in a May 2018 article in The New Republic, “Escape from Facebookistan.”
Front Porch Forum’s (FPF) mission is to help more than 130,000 neighbors across Vermont’s 260,000 households connect and build community in their neighborhoods. Using software co-created with their technology partner, Toronto-based TWG, FPF hosts free online neighborhood forums that give members opportunities to share information, goods and services; promote local businesses and contractors; and engage in discussion of community issues. Through the e-newsletters, neighbors talk about neighborly things: missing pets, household items to borrow or lend, crime, and wildlife sightings. They also talk about opportunities to get involved in their community by volunteering or by engaging in town hall, school board or other community discussions. And they often go offline to meet each other face to face or to attend events.
FPF’s technology has several distinct features:
- FPF forums are not threaded discussions. This means there is no direct back and forth between neighbors in real-time. Instead, neighbors who see an issue raised in one e-newsletter can either email the author off-list, or submit a posting for the next e-newsletter, which builds lag time into discussions.
- FPF has a team of Vermont-based, online community managers who review all postings.
- Postings are ordered first by FPF’s back-end technology, and then reviewed or re-ordered by online community managers.
FPF’s overarching mission to provide a public service to communities drove them to ask: What impact are FPF postings and discussions having over time? Could technology like FPF help to build social capital in communities? With support from the Robert Wood Johnson Foundation (RWJF), FPF partnered with Network Impact to explore these questions.
Social capital is commonly defined as social connections and the norms of trustworthiness and reciprocity that arise from them. Social capital is considered a predictor of health and well-being, economic development and responsive government (Putnam, 1993; La Porta et al., 1997; Knack and Keefer, 1997). In their research, RWJF highlights the importance of socially connected communities, noting that people who feel attached to their place are “likely to be healthier than those who feel isolated or marginalized…and more inclined to take action to improve [their] own health and the health of others.”
Efforts to understand social capital and social networks in place-based settings like neighborhoods often focus on the progression from weak to strong ties in a network over time, as strong ties typically indicate greater trust and connection. Recent research suggests that weak ties formed by short, transactional interactions with other people impact well-being (Sandstrom and Dunn, 2014). Part of FPF’s overarching hypothesis about the impact of their technology is that if neighbors have an easy, friendly, no-cost way to communicate daily, then their perception of their neighborhood and their role in it will become richer. They will pay closer attention to local goings on and begin to get more involved. Then, when trouble or opportunity arises, this collection of neighborly, conversing, helpful neighbors will respond, whether it’s digging out elderly neighbors after a snowstorm or going after one-time funding to build a community youth center. In this light, small acts of neighborliness take on new meaning. In our research, we hypothesized that participating in or witnessing these small acts creates weak ties between neighbors that are powerful enough to encourage place attachment, a key correlate of social capital.
We worked with FPF to create a Theory of Action to describe how exposure to e-newsletters might affect FPF members over time. Below is a simplified version. (To view the full, detailed Theory of Action, click here.)
To test elements of the Theory of Action, we designed an online member survey that was sent to all FPF members and completed by over 13,000 members. We then integrated member usage data from FPF’s back-end database, which allowed us to match survey responses with members’ online behavior and engagement data (for example, how many times a member has ever posted, which forum he or she belongs to, and whether he or she is a public official). This allowed us to integrate and analyze both self-reported impact data from the survey and actual usage data on behavior patterns.
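At its core, the record linkage described above is a join of two datasets on a shared member ID. The sketch below illustrates the idea with plain Python; all field names and values are invented for illustration, since FPF’s actual schema is not public.

```python
# Self-reported survey responses, keyed by a hypothetical member ID
survey = {
    101: {"neighborhood_rating": 4, "reports_offline_action": True},
    102: {"neighborhood_rating": 2, "reports_offline_action": False},
    103: {"neighborhood_rating": 5, "reports_offline_action": True},
}

# Back-end usage data for the same members
usage = {
    101: {"lifetime_posts": 27, "is_public_official": False},
    102: {"lifetime_posts": 0,  "is_public_official": False},
    103: {"lifetime_posts": 3,  "is_public_official": True},
}

# Match each survey response with that member's behavior data
merged = {
    mid: {**survey[mid], **usage[mid]}
    for mid in survey.keys() & usage.keys()
}

# Example analysis: how often low-engagement members (fewer than
# 5 lifetime posts) report taking offline action
low = [m for m in merged.values() if m["lifetime_posts"] < 5]
offline_rate = sum(m["reports_offline_action"] for m in low) / len(low)
```

The payoff of the join is exactly what the text describes: each row now carries both what a member *says* (survey) and what a member *does* (usage), so self-reported impact can be broken out by actual behavior.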
Results of our analysis confirmed that FPF is helping to build social capital and that witnessing everyday acts of neighborliness is a powerful driver of both online and offline community engagement.
Self-report data strongly suggest that members are driven to be more engaged with FPF by witnessing other members of their community participate in small acts of neighborliness. Notably, this finding also holds for members who gave lower scores when asked to rate their neighborhood and who were less optimistic about their neighborhood’s future, both common correlates of low levels of social capital.
- FPF is having an impact on members regardless of how often, or even whether, they post. A positive impact is experienced by all members, even those who participate less and have lower online engagement.
- Across all types of forum communities, discussions of local issues were a top-value generator.
- FPF is likely having more of an impact on offline actions in communities than is currently captured. When asked how often they take action offline in their community as a result of an FPF post, 18% of respondents with low online engagement reported taking offline action as a result of FPF once a month or more (compared to 28% for those who are highly engaged online).
- Top factors for remaining a member in FPF were staying connected to members of the community and staying informed about what is going on locally.
This research provides a jumping-off point for digging deeper into how technology can enhance opportunities to build social capital in place. Lessons from this research that could be applied and tested in connection with other efforts to use technology to build community in place include:
- Monitor impact: The field of civic tech has advanced considerably in recent years, and many innovators are moving past standard usage metrics to include outcomes-based research and tracking in their platform and tech monitoring. By conducting research that tracks how and why impact is generated for users and their communities, tech creators are able to maximize that impact by increasing platform engagement and social capital building over time. You can find our publications and resources for evaluating civic tech initiatives here.
- Support frequent small acts of neighborliness – To support place attachment and increased social capital in communities, offer both online and offline opportunities to participate in and witness small acts of connection and kindness.
- Create offline ambassadors – Recruit users who are active online who also report taking actions offline to be ambassadors for both the technology and for community building and engagement. By connecting those engaged members with local initiatives you can further explore ways to support active online and offline engagement. And, local offline ambassadors can reinforce the platform’s impact in the community.
- Use information hierarchy to show users that you are responsive to what they value – People in different communities may value different kinds of information. For example, FPF members who rated their community higher found postings on local crime to be the most valuable, while those who rated their community lower found information on local events to be the most valuable. Differences may reflect other variables as well, such as needs and preferences in rural vs. urban areas. Collect research data that describe the highest value generators for different places, as well as what drives users to engage more, and use that data to highlight information that drives engagement and creates the most value (e.g. putting information about a particular topic at the beginning of the newsletter).
- Hyperlocal is not dead: Many smaller communities lack good local news and information sources. The fact that FPF is by, for and about Vermonters was the second-highest-ranked driver of platform engagement. FPF built a service and a company intent on supporting authentic community interactions and opportunities to share information. In a limited analysis of open-ended responses about why they remain members of FPF, members testified that FPF was the best source of community information they could get. Especially in rural and small communities, there is an information gap that technology is well-positioned to bridge to keep people informed, connected and aware of opportunities.
- Putnam, Robert with Robert Leonardi and Raffaella Y. Nanetti (1993). Making Democracy Work: Civic Traditions in Modern Italy. Princeton: Princeton University Press.
- La Porta, Rafael; Florencio Lopez-de-Silanes, Andrei Shleifer and Robert W. Vishny (1997). “Trust in Large Organizations.” American Economic Review Papers and Proceedings, 87, 333-38.
- Knack, Stephen and Keefer, Philip (1997). “Does Social Capital Have an Economic Payoff? A Cross-Country Investigation.” Quarterly Journal of Economics, 112(4), 1251-88.
- Sandstrom, G. and Dunn, E. (2014). “Social Interactions and Well-Being: The Surprising Power of Weak Ties.” Personality and Social Psychology Bulletin.
The more information you have about the engagement patterns of network members or users of an online platform, the more tempting it is to believe that these data alone can tell you everything you need to know. But until you explore what type of engagement is valuable and why, and what kind of impact that engagement has on people, organizations and communities, your hypotheses about what actually drives outcomes remain untested.
Organizations often assess their network building efforts or technology interventions (or a combination of the two) to be able to come to more definitive conclusions about what works so that measures and indicators can be adjusted and an organization can learn from its experiences. With new technology that tracks people’s behavior (or even old technology like years of paper attendance records from different types of events) you can integrate actual behavior data on engagement over time with survey and other research to get a more comprehensive picture of how value and impact are created through engagement. You can also compare how engagement and other measures, such as number or type of connections in a social network, relate to impact.
With two recent projects, we were able to integrate engagement data with survey and other data to probe the value of different levels and types of engagement. The results offered insights into how impact was achieved and helped both organizations refine their network engagement strategies.
More Engagement on the Community Commons Means More Impact on Users
The Community Commons provides public access to thousands of meaningful data layers, with mapping and reporting capabilities that people and organizations can use to explore community health data, policy interventions and best practices.
- Data Collection and Analysis of Engagement – We worked with the Institute for People, Place and Possibility (IP3), the organization that stewards the Commons, to implement an online system to track user-centric data in a searchable, cloud-based relational database. This gave us the data to establish categories for a ladder of engagement based on core platform activities, such as building maps and reports, connecting to others, or reading tutorials to build capacity for using data.
- Survey Data Collection on Outcomes – After a year of collecting platform data, we launched a user survey to explore what impact platform use and tool engagement had on users.
- Results – Across key measures, the combined data showed greater impact for users who were more engaged. One of the core hypotheses in the Commons’ Theory of Action was that increased engagement with the platform’s tools would increase users’ knowledge, skills and capacity, a hypothesis that our research supported. A sample of the findings from this integrated analysis appears below.
Different Patterns of Engagement in Mozilla Science Lab Correspond with Different Views on Network Health and Outcomes
The Mozilla Science Lab is a network of researchers, developers, and librarians making research open and accessible and empowering open science leaders through fellowships, mentorships, and project-based learning.
- Data Collection and Analysis of Engagement – To build a full database of people who had engaged with Science Lab over the years, we used event records, call attendance records, and GitHub data on code contributions and study group participation to create categories for both the level and the type of engagement of network members. This allowed us to compare diversity of participation (people who participated in more than one way) with level of participation (how many times people participated) as part of our analysis.
- Survey Data Collection on Outcomes – As part of an existing cross-program survey conducted by the Mozilla Foundation, Mozilla Science participants were asked about their engagement in the networks that the Mozilla Foundation supports. Respondents were asked questions about the network’s health, and how they benefited from their participation in the network.
- Results – We found that an individual’s levels of engagement and diversity of engagement correlated in slightly different ways with their reporting on network health and benefits (see results below for an example). Connecting the dots between patterns of engagement in a network and a range of network outcomes continues to be an important part of how we approach our network evaluation work.
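The level-versus-diversity distinction from the data collection step above can be illustrated with a small sketch. The record format, the channel names, and the threshold for a “high” engagement band are all assumptions for illustration, not Mozilla Science Lab’s actual data or definitions.

```python
from collections import defaultdict

# Each record: (member, channel) -- e.g. an event attended, a call
# joined, a GitHub contribution, or a study-group session
records = [
    ("ana", "event"), ("ana", "call"), ("ana", "github"), ("ana", "github"),
    ("ben", "event"),
    ("cam", "call"), ("cam", "call"), ("cam", "call"),
]

# Collect every channel touched by each member
channels_by_member = defaultdict(list)
for member, channel in records:
    channels_by_member[member].append(channel)

def categorize(channels):
    level = len(channels)            # how many times they participated
    diversity = len(set(channels))   # how many distinct ways
    band = "high" if level >= 3 else "low"   # assumed cutoff
    return {"level": level, "diversity": diversity, "band": band}

profiles = {m: categorize(ch) for m, ch in channels_by_member.items()}
```

Note how the two measures separate: “cam” participates often but in only one way, while “ana” participates both often and in several ways, which is exactly the distinction the analysis compares against reported network health and benefits.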
Developed with the Center for Evaluation Innovation, this two-part guide to network evaluation includes a brief that outlines frameworks, approaches and tools to address practical questions about designing and funding network evaluations, and a Casebook that provides profiles of nine evaluations.
Download at: www.networkimpact.org/networkevaluation
The civic tech field has expanded so widely in recent years, it’s hard to think of a major city or an area of civic life that these technologies don’t touch. In this dynamic environment, the John S. and James L. Knight Foundation has been a field leader, investing over $25 million since 2010 in projects ranging from neighborhood forums, to civic crowdfunding platforms, to efforts that promote government innovation. For eighteen months, Network Impact worked with Knight Foundation grantees and other civic tech leaders to find out how they measure success, focusing on tools they’re using to track platform performance and assessment challenges they face along the way.
We started by identifying key outcomes related to these common civic tech objectives and gathered case examples of assessments from the field:
- Build place-based social capital
- Increase civic engagement
- Promote deliberative democracy
- Support open governance
- Foster inclusion and diversity
Our work also led us to think about tracking the performance of a platform through its lifecycle – recognizing that assessment priorities vary with stage of development, from early testing of a minimum viable product to later-stage scaling of a tested concept.
The result of this research: two guides to evaluating civic tech that summarize assessment best practices, including leading methodologies and metrics that can help innovators monitor progress toward their goals and evaluate the impact of their efforts. Some of these assessment best practices focus on connections between users, both online and offline, an important network dimension.
Assessing Civic Tech: Case Studies and Resources for Tracking Outcomes is a publication of the Knight Foundation with Network Impact that focuses on measuring the impact of civic tech platforms on people, places, and processes.
How To Measure Success: A Practical Guide to Answering Common Civic Tech Assessment Questions is a Network Impact publication that offers examples and advice for monitoring a platform’s ongoing performance using tools and approaches that are effective and practical.
Additionally, the Knight Foundation wrote up their key lessons from investing in civic tech that are also worth a read.
How Code for America is using the Assessing Civic Tech guide
The release of this guide comes at the right time. Demonstrations of what is possible are up and running in communities of every size across the United States. Now we need to find out not only what works, but what works best over time.
At Code for America, the guide will be particularly helpful for Fellowship teams and volunteer Brigades who are thinking about the questions they need to ask and the changes in attitudes they need to measure to assess progress towards increasing civic engagement and open governance. The process and case studies documented in this guide will be useful for structuring these assessments.
At Code for America, we believe that it is critically important to identify the residents, community groups, or government staff who will be using the particular public service program or benefit, then work with them early in the assessment design process. This guide provides important examples of how to frame an evaluation to include and work with intended beneficiaries. It offers sample questions and resources that will be very helpful to organizations and individuals who are beginning to explore how they can include measures of civic engagement and changing attitudes in their assessment of their efforts.
Connecting to Change the World builds on an earlier resource that Pete and I developed called Net Gains. This latest collaboration with John Cleveland includes examples and lessons that have emerged from our work with social impact networks over the last decade or so. During that time, we’ve been introduced to many new networks and deepened our work with others. As a consequence, we have a better understanding of what makes some networks highly “generative.” By generative, we mean networks with a renewable collaborative capacity to generate numerous activities simultaneously. These are networks that activate members’ connections on an emergent basis as need and opportunities arise.
Examples in the book include RE-AMP, more than 165 nonprofit organizations and foundations in eight Midwestern states working together on climate change and energy policies; Reboot, a network of young Jewish American “cultural creatives” who are exploring and redefining Jewish identity and community in the U.S. and the U.K.; ten regional networks of state agencies and nonprofit providers that have organized to end homelessness in Massachusetts; and five regional and two national networks of rural-based organizations that are promoting public policies that benefit rural communities in the U.S. In all of these networks, members have been very deliberate about creating, strengthening and maintaining network ties in order to establish a base of connections from which many activities can arise at the same time or over time. This foundation is the starting point for the progression from connecting to aligning to production or joint action that we also discuss in the book.
Net Gains provides practical advice for the growing community of network builders developing networks for social change. The handbook draws from the experiences of network builders, case studies covering a diversity of different networks, and emerging scientific knowledge about “connectivity.” The guide is divided into four parts, each focusing on a specific element of network building and offering strategies for successful development of networks at different stages in their evolution, from the moment of their inception, to the management of their ongoing production.
The handbook can be downloaded here.
When you’re evaluating a network, what are you looking for?
We recently submitted an evaluation proposal for a 7-year-old network with more than 120 organizations spread across more than a half-dozen states. Without knowing much about the network, we had to describe what we’d be evaluating: our analytic framework. It had 12 components, many of them specific to networks rather than organizations. It’s a framework we’d apply for assessing the condition and performance of any network.
Purpose: What is the network’s purpose? Is it being fulfilled? Has it changed over time? What other purposes are emergent among network members?
Value Propositions: What are the reasons that members participate in the network? Which reasons are most important to the members? How well do members feel their value propositions are being fulfilled by participating in the network?
Membership & Engagement: Who has been attracted to the network and who hasn’t that it would be desirable to have? What are the types of engagement in the network and to what degree do members engage in the network? Are the network’s rules/incentives for member engagement effective? Are there barriers that prevent/reduce member engagement?
Network Connectivity: What are the relationships among members? What level of reciprocity and trust has been built? What is being transacted between members? How has member connectivity evolved over time? What is the connectivity “shape” of the network (different patterns of connectivity—e.g., super hubs; multiple hubs; clusters) and how does the shape enable or block network efficiency and effectiveness?
Network Alignment: How well are network members aligned around ideas, goals, strategies, standards, and other guideposts? To what extent does alignment in the network influence members’ actions?
Network Production: To what extent have the network’s connectivity and alignment created conditions for collaboration/co-production by network members of, for instance, usable knowledge, policy change, services, or innovations? How well do network production processes function?
Other Network Capabilities: Which other network capabilities (e.g., network reach and resilience) matter to the network’s health—and what is their condition?
Governance: Does the network’s structure for decision-making enable members? Is it efficient and effective? Does it promote member confidence in and loyalty toward the network? What are the network’s monitoring and feedback loops and how well are they being used? What is the network’s resonance to members’ interests/actions? What is its adaptive capacity?
Business Model: What is the value chain in the markets and other contexts in which the network operates? What products and services (value creation) does the network offer? What is the network’s business model, its revenues and costs, and how will it be sustained?
Operations: How well does the network enable members to benefit from the network through coordination of and communications among members, access to shared resources, working group leadership, and peer-to-peer exchange and learning? What staffing, mechanisms, and resources are in place? Which members do/don’t use them?
Strategic Communications: How is the network positioned with external audiences/stakeholders to achieve its goals? In what ways can the network’s external connections, capacities, and brand be leveraged for greater impact or to attract more resources?
Impacts: What measurable impact is the network having in achieving its purpose and goals? What impact is participating in the network having on the way members think and act? How can the network effectively measure its impact on a continuing basis—and use the information for improving its performance?
Taking your network’s temperature regularly is easy—and helps to inform continuous improvement of the network’s effectiveness.
When the 14 organizations in the Southwest Rural Policy Network met in November 2010 to discuss how well their network was doing, they didn’t just share their latest impressions. They had data stretching back nearly a year and a half. Since June 2009, as part of their formal work plan, they had self-assessed the network five times using a Network Health Scorecard. The assessment covered four essential categories: the network’s purpose, performance, operations, and capacity. The process only takes a few minutes after the network’s quarterly meeting—but reveals a great deal about how network members judge the network.
In November, Joyce Hospodar, the network member who chairs the network’s evaluation committee, summarized the scores—on a scale of 1-5, with 1 being low/5 being high—for the past five quarters: Purpose scores were holding steady. Performance scores peaked the previous spring. Operations and Capacity hovered around 4.0, but dipped recently.
Interestingly, when network members also scored where they thought the network’s health was compared to a year earlier, the ratings were all substantially higher than at the outset.
The scorecard is a tool, one source of evaluative feedback a network can use to gauge how well it’s doing and what sort of improvements might be useful. “Note,” says network coordinator Mikki Anaya, “this assessment only measures one aspect of the SWRPN’s effectiveness—the capacity/organizational efforts of the network.” A different evaluation will look at the network’s policy advocacy activities.
The scorecard we developed has a total of 22 questions divided into the four categories. Several networks have adapted the questions to better reflect the specifics of their network. But in any case, the evaluative process is the same:
- Identify key indicators of the network’s well-being
- Regularly collect data from the members
- Analyze the data and share it with members
- Determine what changes are needed
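The roll-up behind a scorecard like this can be sketched in a few lines: average each category’s 1–5 ratings for a quarter, then compare quarters. The category names follow the text, but the question counts and scores below are invented for illustration (the real scorecard has 22 questions across the four categories).

```python
from statistics import mean

# responses[quarter][category] -> all member ratings (1-5) for that
# category's questions in that quarter (invented sample data)
responses = {
    "2010-Q3": {
        "purpose":     [5, 4, 4, 5],
        "performance": [4, 4, 3, 4],
        "operations":  [4, 4, 4, 3],
        "capacity":    [4, 3, 4, 4],
    },
    "2010-Q4": {
        "purpose":     [5, 4, 5, 5],
        "performance": [3, 4, 3, 3],
        "operations":  [4, 4, 3, 4],
        "capacity":    [4, 4, 3, 3],
    },
}

# Average each category's ratings per quarter
summary = {
    q: {cat: round(mean(scores), 2) for cat, scores in cats.items()}
    for q, cats in responses.items()
}

# Quarter-over-quarter change per category, to spot dips like the
# recent ones in Operations and Capacity described above
trend = {
    cat: summary["2010-Q4"][cat] - summary["2010-Q3"][cat]
    for cat in summary["2010-Q3"]
}
```

A few minutes of data entry per quarter yields exactly the kind of trend lines the evaluation committee summarized: which categories are holding steady, which peaked, and which have dipped.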
Kudos to the members of the SW Rural Policy Network for picking up on this tool and incorporating its use into their network practice.
Intentionally managing members’ connections can strengthen your network.
A network’s connectivity–the number and quality of links between nodes, and the structure of those links–changes over time. To support a network’s development, network stewards intentionally manage this evolution, instead of just letting it happen.
A year ago, we started working with a start-up national network with about 60 members. The connectivity among members, which we measured and then, using special software, mapped graphically, was fairly low–not a surprise since it was a young network. But there was a core of about 11 members who were more densely and intensely connected to each other. The network maps, which place the most connected members at the center of the map, revealed this core of members, as well as those members at the periphery with few connections to others. As a result of the connectivity analysis the network stewards initiated activities aimed at increasing connectivity.
A year later – we just reported in a “state of the network” presentation at the network’s annual meeting – the connectivity-building efforts have been a great success. The average number of links among members more than doubled. The intensity of links – what members transact with each other – also increased substantially. And new network maps revealed that the core of highly connected members also more than doubled, even though there had been a 33% turnover in network membership. Now 25 members form the core or central hub of the network. All of these changes indicate a strengthening of the network, revealed and made visible to the members through network mapping.
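The basic connectivity measures discussed above (average links per member, and a densely connected core versus a sparse periphery) can be computed from a simple list of member-to-member links. The links below are invented, and the crude core rule is an assumption for illustration; the real analysis used dedicated network-mapping software.

```python
from collections import Counter

# Undirected member-to-member links (invented sample network)
links = {
    ("a", "b"), ("a", "c"), ("a", "d"),
    ("b", "c"), ("b", "d"),
    ("c", "d"),
    ("d", "e"),   # "e" sits at the periphery with a single link
}

# Count each member's links (degree)
degree = Counter()
for u, v in links:
    degree[u] += 1
    degree[v] += 1

members = set(degree)
avg_links = 2 * len(links) / len(members)   # each link touches two members

# A crude "core": members with at least an average number of links
core = {m for m in members if degree[m] >= avg_links}
```

Tracking `avg_links` and the size of `core` at two points in time is a simple way to make the kind of before/after comparison reported above, while the low-degree members flag the periphery where connectivity-building efforts might be targeted.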
Clay Shirky, a champion of network approaches, sees a new revolution coming.
Here is Shirky’s fascinating insight, offered in an interview in the June 2010 issue of WIRED:
“People have had lots of free time for as long as there’s been an industrialized world. But that free time has mainly been something to be used up rather than used, especially in postwar America, with the rise of suburbanization and long commutes. Suddenly we no longer lived in tight-knit communities and therefore we spent less time interacting face-to-face. As a result, we ended up spending the bulk of our free time watching television…
Someone born in 1960 has watched something like 50,000 hours of television already–more than five and a half solid years…”
“Somehow, watching television became a part-time job for every citizen in the developed world. But once we stop thinking of all that time as individual minutes to be whiled away and start thinking of it as a social asset that can be harnessed, it all looks very different. The buildup of this free time among the world’s educated population–maybe a trillion hours per year–is a new resource. It’s what I refer to as the cognitive surplus.”
Shirky further argues that as watching television, a solitary activity, is replaced by the use of technologies that promote social connection, there is a growing demand and ability for shared and productive activity.
“When someone buys a computer or mobile phone, the number of consumers and producers both increase by one. This lets ordinary citizens, who’ve been previously locked out, pool their free time for activities they like and care about. So instead of free time seeping away in front of the television set, the cognitive surplus is going to be poured into everything from goofy enterprises like lolcats, where people stick captions on cat photos, to serious political activities like Ushahidi.com, where people report human rights abuses.”
In short, the cognitive surplus will feed the process, already begun, of social networks of various sorts using technologies that support, enhance and ease connectivity to align around particular ideas and identities and then produce value. It’s an idea that Shirky explores in his new book, Cognitive Surplus: Creativity and Generosity in a Connected Age.