This final evaluation report describes the progress of the Regional Networks to End Homelessness toward goals set forth by the ICHH, such as reducing the need for shelter, achieving housing placement outcomes, and increasing opportunities for broad-based discussion with diverse stakeholders. Following brief introduction and background sections, the report summarizes the findings of the evaluation in detail and offers recommendations, based on these findings, for short- and long-term action. The evaluation informed the United Ways of Massachusetts and the ICHH’s immediate commitment of $1 million to support network coordination in all regions through the following fiscal year, and, as a consequence of the pilot results, the state legislature approved Home BASE, a major program that builds on the innovations successfully used in the pilot.
With support from the Knight Foundation through the Knight Community Information Challenge, community foundation leaders and their partners around the country are working to create more robust local information “ecosystems.” It was our privilege to get into the field to see what each of the four community foundations featured in the case studies is doing to promote information-healthy communities.
Our eye for ethnographic detail helped to surface some of the real-life stories at the center of these efforts. Here is incontrovertible evidence that accessible, reliable, and relevant news and information can enhance civic life and spark community change. Of course, we were particularly alert to the network building dimension in all of this. In addition to the local specifics about news and information, the cases also detail some basic network strategies that are relevant to any social change effort: how to create connections that open information pathways so that people can align and act. Which case most closely maps to the challenges you face in your social change work? Are there any insights here that you might take forward?
When you’re evaluating a network, what are you looking for?
We recently submitted an evaluation proposal for a seven-year-old network of more than 120 organizations spread across more than a half-dozen states. Without knowing much about the network, we had to describe what we’d be evaluating: our analytic framework. It had 12 components, many of them specific to networks rather than organizations. It’s a framework we’d apply to assess the condition and performance of any network.
Purpose: What is the network’s purpose? Is it being fulfilled? Has it changed over time? What other purposes are emergent among network members?
Value Propositions: What are the reasons that members participate in the network? Which reasons are most important to the members? How well do members feel their value propositions are being fulfilled by participating in the network?
Membership & Engagement: Who has been attracted to the network, and who would be desirable to have but hasn’t joined? What are the types of engagement in the network and to what degree do members engage in the network? Are the network’s rules/incentives for member engagement effective? Are there barriers that prevent/reduce member engagement?
Network Connectivity: What are the relationships among members? What level of reciprocity and trust has been built? What is being transacted between members? How has member connectivity evolved over time? What is the connectivity “shape” of the network (different patterns of connectivity—e.g., super hubs; multiple hubs; clusters) and how does the shape enable or block network efficiency and effectiveness?
Network Alignment: How well are network members aligned around ideas, goals, strategies, standards, and other guideposts? To what extent does alignment in the network influence members’ actions?
Network Production: To what extent has the network’s connectivity and alignment created conditions for collaboration/co-production by network members of, for instance, usable knowledge, policy change, services, or innovations? How well do network production processes function?
Other Network Capabilities: Which other network capabilities (e.g., network reach and resilience) matter to the network’s health—and what is their condition?
Governance: Does the network’s structure for decision-making enable members? Is it efficient and effective? Does it promote member confidence in and loyalty toward the network? What are the network’s monitoring and feedback loops and how well are they being used? What is the network’s resonance to members’ interests/actions? What is its adaptive capacity?
Business Model: What is the value chain within the markets and other contexts within which the network operates? What products and services–value creation–does the network offer? What is the network’s business model–revenues and costs–and how will it be sustained?
Operations: How well does the network enable members to benefit from the network through coordination of and communications among members, access to shared resources, working group leadership, and peer-to-peer exchange and learning? What staffing, mechanisms, and resources are in place? Which members do/don’t use them?
Strategic Communications: How is the network positioned with external audiences/stakeholders to achieve its goals? In what ways can the network’s external connections, capacities, and brand be leveraged for greater impact or to attract more resources?
Impacts: What measurable impact is the network having in achieving its purpose and goals? What impact is participating in the network having on the way members think and act? How can the network effectively measure its impact on a continuing basis—and use the information for improving its performance?
Taking your network’s temperature regularly is easy—and helps to inform continuous improvement of the network’s effectiveness.
When the 14 organizations in the Southwest Rural Policy Network met in November 2010 to discuss how well their network was doing, they didn’t just share their latest impressions. They had data stretching back nearly a year and a half. Since June 2009, as part of their formal work plan, they had self-assessed the network five times using a Network Health Scorecard. The assessment covered four essential categories: the network’s purpose, performance, operations, and capacity. The process only takes a few minutes after the network’s quarterly meeting—but reveals a great deal about how network members judge the network.
In November, Joyce Hospodar, the network member who chairs the network’s evaluation committee, summarized the scores–on a scale of 1 to 5, with 1 low and 5 high–for the past five quarters: Purpose scores were holding steady. Performance scores had peaked the previous spring. Operations and Capacity scores hovered around 4.0, but had dipped recently.
Interestingly, when network members also scored where they thought the network’s health was compared to a year earlier, the ratings were all substantially higher than at the outset.
The scorecard is a tool, one source of evaluative feedback a network can use to gauge how well it’s doing and what sort of improvements might be useful. “Note,” says network coordinator Mikki Anaya, “this assessment only measures one aspect of the SWRPN’s effectiveness—the capacity/organizational efforts of the network.” A different evaluation will look at the network’s policy advocacy activities.
The one we developed has a total of 22 questions divided into the four categories. Several networks have adapted the questions to better reflect the specifics of their network. But in any case, the evaluative process is the same:
- Identify key indicators of the network’s well-being
- Regularly collect data from the members
- Analyze the data and share it with members
- Determine what changes are needed
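The steps above can be sketched in code. This is a minimal illustration, not the network’s actual tool: the category names, member scores, and quarter labels below are hypothetical, and the real scorecard has 22 questions across the four categories.

```python
from statistics import mean

# Hypothetical scorecard responses: each member rates each category 1-5.
QUARTERS = {
    "2010-Q2": [
        {"purpose": 4, "performance": 4, "operations": 4, "capacity": 3},
        {"purpose": 5, "performance": 3, "operations": 4, "capacity": 4},
    ],
    "2010-Q3": [
        {"purpose": 4, "performance": 5, "operations": 3, "capacity": 4},
        {"purpose": 5, "performance": 4, "operations": 4, "capacity": 4},
    ],
}

def quarterly_averages(quarters):
    """Average each category's member scores per quarter (one decimal)."""
    summary = {}
    for quarter, responses in sorted(quarters.items()):
        categories = responses[0].keys()
        summary[quarter] = {
            cat: round(mean(r[cat] for r in responses), 1) for cat in categories
        }
    return summary

# Share the trend with members at the quarterly meeting.
for quarter, scores in quarterly_averages(QUARTERS).items():
    print(quarter, scores)
```

Tracking these averages quarter over quarter is what lets a group like the SWRPN see a category holding steady, peaking, or dipping.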
Kudos to the members of the SW Rural Policy Network for picking up on this tool and incorporating its use into their network practice.
Intentionally managing members’ connections can strengthen your network.
A network’s connectivity–the number and quality of links between nodes, and the structure of those links–changes over time. To support a network’s development, network stewards intentionally manage this evolution, instead of just letting it happen.
A year ago, we started working with a start-up national network with about 60 members. The connectivity among members, which we measured and then, using special software, mapped graphically, was fairly low–not a surprise, since it was a young network. But there was a core of about 11 members who were more densely and intensely connected to each other. The network maps, which place the most connected members at the center, revealed this core as well as the members at the periphery with few connections to others. As a result of the connectivity analysis, the network stewards initiated activities aimed at increasing connectivity.
A year later–as we just reported in a “state of the network” presentation at the network’s annual meeting–the connectivity-building efforts have been a great success. The average number of links among members more than doubled. The intensity of links–what members transact with each other–also increased substantially. And new network maps revealed that the core of highly connected members had also more than doubled–even though there had been a 33% turnover in network membership. Now 25 members form the core, or central hub, of the network. All of these changes indicate a strengthening of the network, revealed and made visible to the members through network mapping.
An evaluation plan recently prepared by Network Impact shows how assessing a network does–and doesn’t–differ from assessing an organization.
The assignment: evaluate the impact of a loose network of 100s of people around the US–on its members and on other people and organizations.
First step–as with an organization evaluation–is to establish the purpose of the network. But then it’s important to understand the form/structure or “shape” of the network, a matter that veers away from organization evaluation. The shape of a network–the ways in which connections/transactions among members distribute and concentrate–affects the functionality of the network. A network built around “key hubs” may be most effective in spreading ideas rapidly and widely whereas a network built around a dense cluster of connections can facilitate the transfer of complex information and promote peer exchange.
What matters next is to determine what the members hold as value propositions for participating in the network. This, too, diverges from an organization evaluation. Even though an organization’s employees will hold value propositions for their work in the organization (they love the mission of the organization; the organization fits their professional path; they need a job), the types of value propositions will be different from those of people voluntarily associated with a network.
Then, it’s on to what is being transacted by members with each other and the degree to which transactions are leveraged through the network to other members. This sort of analysis could be applied to an organization, to learn more about its culture, and the implicit ways in which work gets done. But with a network, it’s an absolutely necessary part of the evaluation, while in an organization it’s more of a discretionary practice.
When looking at the connections among network members, in other words, it’s essential to ask:
• How are connections configured?
• What flows through the connections?
• What is the strength of the connections (intensity, regularity)?
• How do the patterns of connection structure, content, intensity, and outcome evolve over time?
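One way to operationalize the questions above is to code each reported tie by its strongest interaction, so configuration, content, and strength can all be read off a single weighted edge list. The tie types, weights, and responses below are hypothetical illustrations:

```python
# Hypothetical intensity weights for tie types a connection survey might ask about.
WEIGHTS = {"talked": 1, "met": 2, "collaborated": 3}

# Hypothetical survey responses: (respondent, other member, tie type).
RESPONSES = [
    ("A", "B", "talked"),
    ("A", "B", "collaborated"),
    ("B", "C", "met"),
    ("A", "C", "talked"),
]

def weighted_edges(responses, weights):
    """Keep the strongest reported tie for each (unordered) pair of members."""
    edges = {}
    for a, b, kind in responses:
        pair = tuple(sorted((a, b)))
        edges[pair] = max(edges.get(pair, 0), weights[kind])
    return edges

print(weighted_edges(RESPONSES, WEIGHTS))
# → {('A', 'B'): 3, ('B', 'C'): 2, ('A', 'C'): 1}
```

Repeating this coding at two points in time is what makes the fourth question–how patterns evolve–answerable with data rather than impressions.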
Answering these questions, along with those about members’ value propositions, provides the basic data for evaluating the network. Quite a bit of this data and analysis is not what you’d need to evaluate an organization’s impact.
Third in Network Impact’s series about network evaluation.
Monitoring changes in a network’s member-to-member connections is integral to network evaluation, especially when a network’s performance depends on its evolution (e.g., from low levels of connectivity to higher levels of connectivity, conversion of weak links to strong links, etc.). One way to display information about a network’s evolution is to create network maps. We use special mapping software to analyze and visually display the information that we gather about network connections and changes over time. We’ve found that network maps generated in this way reveal patterns that are hard to “see” in the raw data and that are difficult to summarize narratively. (Read more about network structure/shape.)
Network mapping for evaluation purposes can be challenging, however. I was reminded of this recently when I set about mapping ties among homeless service providers in Massachusetts. In this case, pilot efforts to reduce rates of homelessness in the state are being implemented through ten new regional partnerships of many organizations. From the start, our evaluation envisaged the production of ten sets of “before” and “after” regional network maps to demonstrate and compare patterns of network change in relations among the partnering organizations.
We started on the right foot. We added a set of “network connections” questions to an online Network Health Survey that was already in the pipeline (network mapping practice #1: don’t over-survey). We discussed the potential utility of the results with network coordinators – not just the value to the evaluation but also to network members who, we thought, might use the visually compelling network maps to publicize and promote their new ways of working (practice #2: establish salience). We encouraged coordinators to publicize and promote the mapping project (practice #3: pre-notify and follow up with reminders). But, in the end, we were hampered by a low survey response rate from some networks.
In certain kinds of quantitative research, one can make do with a statistical sample. However, network mapping of the kind we do requires close to a 100% response rate. We mapped “before” and “after” connections in 6 of the 10 networks and found some interesting patterns. In the other 4 networks, critical information was missing. Any story told in a graphic based on incomplete data would have been misleading.
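A simple guard that follows from this lesson: compute each network’s response rate before mapping and only draw maps for networks that clear a near-complete threshold. The counts and the 90% cutoff below are hypothetical (the text says only “close to 100%”):

```python
# Hypothetical per-network counts: (survey responses received, member organizations).
NETWORKS = {
    "Region 1": (14, 15),
    "Region 2": (9, 16),
    "Region 3": (12, 12),
}

def mappable(networks, threshold=0.9):
    """Return the networks whose response rate is high enough for a reliable map."""
    return sorted(
        name
        for name, (responded, total) in networks.items()
        if responded / total >= threshold
    )

print(mappable(NETWORKS))  # → ['Region 1', 'Region 3']
```

Checking this before drawing anything avoids the trap described above: a visually persuasive map built on incomplete data.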
What went wrong. We delivered our survey by email, which has some advantages: people tend to provide longer open-ended responses to email surveys than to other types, and research shows that responses to email surveys tend to be more candid than responses to mail or phone surveys. In this case, however, many of our intended respondents were already “fed up” with email and, as service providers, were “over-surveyed” from other sources. (It turns out the problem is wider: the U.S. population as a whole is over-surveyed, and response rates in the U.S. for surveys of all types are declining as a result.) This is something we will pay closer attention to in the future.
At Network Impact, planning starts with network-centric questions.
We’re often asked to help an existing network plan its future: “What should we do next?”–to strengthen or expand or sustain the network. Helping networks answer that question–devise their strategies–depends on developing an understanding of the network’s condition. Here are some of the basic questions we ask (by interviewing the network’s coordinators and stewards and surveying its members).
• What’s the purpose of the network? Yes, the same question you’d ask if you were working with an organization: Vision, Mission. Some networks have multiple purposes. A network’s purpose may evolve rapidly as its members come to know each other and realize what the potential value may be. If a network says its purpose is peer exchange/learning among members, it’s worth considering that as the network matures this value proposition may be superseded by others. When you know the purpose, you can also consider whether the network’s structure (shape of connectivity) is the best one for the purpose.
• What type of network is it? It’s useful to classify a network as either being a connectivity or alignment or production network. (Learn more about these distinctions in Net Gains.) These different types provide different value for members and require different “enabling infrastructure” to support members’ activities.
• What stage of network evolution has the network reached? We have two ways of thinking about the “life cycle” of a network. One is a cycle of birth-to-growth, growth-to-stabilization, stabilization-to-turbulence, and turbulence-to-either-decline-or-transformation. Start-up networks have different needs and potential from mature, stable networks. Our second framework goes back to the connectivity-alignment-production model. All networks have a foundation of connectivity, but some of them evolve into alignment networks, and some alignment networks evolve into production networks. (Learn more about this model of network evolution.) An alignment network that is expanding requires a different set of strategies than a production network that is in turbulence.
• What are its members’ most important value propositions? How good do they feel about the value they are getting from participating in the network? It’s essential to be clear about members’ value propositions–the motivating forces behind the network’s energy–and to know how members feel their VPs are being addressed. (Read more about identifying and measuring value propositions.)
• What degree of connectivity do its members share? And what is the “shape” of the connectivity? Connectivity is the lifeblood of a network. But connections among members will vary. Some members will connect frequently with each other; others will connect infrequently. Some will connect with many other members, some with just a few. The patterns of connectivity can be mapped and analyzed, and this becomes the basis for strategies to strengthen connectivity. With one network, we asked members if they had talked with, met with, or collaborated on a project with other members–different intensities of connection. This allowed us to map not just who linked to whom, but also some of the quality of the connectivity.
• What is being transacted (what flows) between members? When you know what the network’s members are doing with each other–whether it’s a network-sponsored activity or something some members just decided to do–the network can decide whether it wants to dedicate resources to enabling others to participate. If, for instance, a national network finds that some of its members are working on creating local networks of the same sort, it can decide to help them and others do this, or it can decide not to. Members’ transactions reveal opportunities for the network to provide more value.
To help networks do some of this planning work on their own we developed a self-assessment tool, the Network Health Assessment Scorecard, which can be used by network members to provide feedback and generate a where-do-we-stand conversation within the network.
First in a series about monitoring and assessing network practice.
In our experience, people who build networks for social change are deeply curious about their network’s performance, but they are wary of the conventional evaluation “straitjacket.” They can’t imagine how a rigid assessment framework could be usefully applied to the dynamic, self-organizing network they are nourishing. And they wonder how an evaluation approach designed to assess organizational practice could possibly capture the far more complex practice of network organizing.
When we design an evaluation for a network, we do draw on conventional evaluation principles, but we also use a unique network evaluation framework to track and document a network’s evolution and outcomes. We look at things having to do with networks as a distinct organizing form, such as network structure and composition: Who is connected to whom? What is transacted through these links? We also track value creation (What value does the network produce, both for individual members and for the broader constituency it serves?) and internal network conditions that contribute to network health (such as complementary capacities and diversity). Although it is sometimes difficult to tease out contributing factors, we try to design evaluations that allow network builders to assess the relationship between network organizing and network impact. What difference did network organizing make, and why?
One way to document the difference that network organizing makes is to compare performance across networks with similar goals and different network organizing practices. Have you had any experience with this approach? What do you think of it? What approaches are you using?
When we first started thinking about network evaluation, we found the work of two Canadians very helpful: Heather Creech and Terri Willard at the International Institute for Sustainable Development.
Second in a series about monitoring and assessing network practice.
Lately I’ve been giving a lot of thought to network adaptation – and, from an evaluative perspective, how best to capture the trajectory of networks with multiple, emergent activities and connections. In open and rapidly evolving nets, of course, members often need real-time information to make effective decisions. But, even in relatively stable nets, organizers want to know about the results of their catalyzing efforts. So many networks begin with a deliberate effort to weave new connections, but few build in the means to systematically gauge the effect of such efforts over time.
Pete Plastrik and I continue to be interested in learning more about how to monitor patterns of network engagement and action in networks whose members use 2.0 digital media to connect and communicate. BTW we have learned a lot about this by following some of the conversations that Beth Kanter hosts on her blog. In other networks with known membership, we’ve had some success combining qualitative methods (e.g. interviews and member journaling) with member surveys.
Truth to tell, all-member surveys that we’ve developed took a lot of time to design. But most have been “baseline” surveys that cover a lot of ground in order to catch up on the network’s evolution. Such surveys can be followed up with shorter surveys to a subset of members (say, 20% per year). I recently learned that this has been the approach of the Robert and Patricia Switzer Foundation in tracking their network of more than 400 fellows.
Have you designed a network survey? What kinds of questions did you include? What did you learn that was useful? Post a comment here.