An evaluation plan recently prepared by Network Impact shows how assessing a network does–and doesn’t–differ from assessing an organization.
The assignment: evaluate the impact of a loose network of hundreds of people around the US–on its members and on other people and organizations.
First step–as with an organization evaluation–is to establish the purpose of the network. But then it’s important to understand the form/structure or “shape” of the network, a matter that veers away from organization evaluation. The shape of a network–the ways in which connections/transactions among members distribute and concentrate–affects the functionality of the network. A network built around “key hubs” may be most effective in spreading ideas rapidly and widely whereas a network built around a dense cluster of connections can facilitate the transfer of complex information and promote peer exchange.
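The two shapes described above can be told apart with simple measures. The sketch below (illustrative only; the member names and toy edge lists are hypothetical, and this is not Network Impact's actual analysis) computes degree, which flags "key hubs," and a local clustering coefficient, which flags the dense clusters that support peer exchange:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Count connections per member (high-degree members are 'key hubs')."""
    deg = defaultdict(int)
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return dict(deg)

def clustering(edges):
    """Fraction of a member's contacts who are also connected to each other
    (high values indicate dense clusters that support peer exchange)."""
    nbrs = defaultdict(set)
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    coeffs = {}
    for member, contacts in nbrs.items():
        k = len(contacts)
        if k < 2:
            coeffs[member] = 0.0
            continue
        # Count pairs of contacts who are themselves connected.
        links = sum(1 for u in contacts for v in contacts
                    if u < v and v in nbrs[u])
        coeffs[member] = 2 * links / (k * (k - 1))
    return coeffs

# Two toy shapes: a hub-and-spoke network vs. a fully connected cluster.
hub = [("hub", x) for x in "abcd"]
cluster = [("a", "b"), ("a", "c"), ("b", "c"),
           ("c", "d"), ("b", "d"), ("a", "d")]
```

In the hub-and-spoke network, the hub's degree is high but its clustering is zero (its contacts don't know each other); in the cluster, every member's clustering is 1.0.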
What matters next is to determine what the members hold as value propositions for participating in the network. This, too, diverges from an organization evaluation. Even though an organization’s employees will hold value propositions for their work in the organization (they love the mission of the organization; the organization fits their professional path; they need a job), the types of value propositions will be different from those of people voluntarily associated with a network.
Then, it’s on to what is being transacted by members with each other and the degree to which transactions are leveraged through the network to other members. This sort of analysis could be applied to an organization, to learn more about its culture, and the implicit ways in which work gets done. But with a network, it’s an absolutely necessary part of the evaluation, while in an organization it’s more of a discretionary practice.
When looking at the connections among network members, in other words, it’s essential to ask:
• How are connections configured?
• What flows through the connections?
• What is the strength of the connections (intensity, regularity)?
• How do the patterns of connection structure, content, intensity, and outcome evolve over time?
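The strength and evolution questions above can be answered from a log of member-to-member transactions. A minimal sketch (the member names, dates, and data format are assumptions for illustration, not the evaluation's actual instruments): intensity is how often a pair transacts, and "before/after" snapshots are the set of ties present up to a cutoff date.

```python
from collections import defaultdict
from datetime import date

# Hypothetical transaction log: (member_a, member_b, date of transaction).
log = [
    ("ana", "ben", date(2011, 1, 10)),
    ("ana", "ben", date(2011, 2, 3)),
    ("ana", "ben", date(2011, 3, 15)),
    ("ben", "cho", date(2011, 2, 20)),
]

def tie_strength(log):
    """Intensity: number of transactions exchanged by each pair."""
    counts = defaultdict(int)
    for a, b, _ in log:
        counts[tuple(sorted((a, b)))] += 1
    return dict(counts)

def snapshot(log, cutoff):
    """Ties present up to a cutoff date, for before/after comparison."""
    return {tuple(sorted((a, b))) for a, b, d in log if d <= cutoff}
```

Comparing `snapshot(log, before_date)` with `snapshot(log, after_date)` shows which ties formed over time; rising `tie_strength` counts show weak links converting to strong ones.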
Answering these questions, along with those about members’ value propositions, provides the basic data for evaluating the network. Quite a bit of this data and analysis is not what you’d need to evaluate an organization’s impact.
Third in Network Impact’s series about network evaluation.
Monitoring changes in a network’s member-to-member connections is integral to network evaluation, especially when a network’s performance depends on its evolution (e.g., from low levels of connectivity to higher levels of connectivity, conversion of weak links to strong links, etc.). One way to display information about a network’s evolution is to create network maps. We use special mapping software to analyze and visually display the information that we gather about network connections and changes over time. We’ve found that network maps generated in this way reveal patterns that are hard to “see” in the raw data and that are difficult to summarize narratively. (Read more about network structure/shape.)
Network mapping for evaluation purposes can be challenging, however. I was reminded of this recently when I set about mapping ties among homeless service providers in Massachusetts. In this case, pilot efforts to reduce rates of homelessness in the state are being implemented through ten new regional partnerships of many organizations. From the start, our evaluation envisaged the production of ten sets of “before” and “after” regional network maps to demonstrate and compare patterns of network change in relations among the partnering organizations.
We started on the right foot. We added a set of “network connections” questions to an online Network Health Survey that was already in the pipeline (network mapping practice #1: don’t over-survey). We discussed the potential utility of the results with network coordinators – not just the value to the evaluation but also to network members who, we thought, might use the visually compelling network maps to publicize and promote their new ways of working (practice #2: establish salience). We encouraged coordinators to publicize and promote the mapping project (practice #3: pre-notify and follow up with reminders). But, in the end, we were hampered by a low survey response rate from some networks.
In certain kinds of quantitative research, one can make do with a statistical sample. However, network mapping of the kind we do requires close to a 100% response rate. We mapped “before” and “after” connections in 6 of the 10 networks and found some interesting patterns. In the other 4 networks, critical information was missing. Any story told in a graphic based on incomplete data would have been misleading.
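Why sampling fails here can be seen with a little arithmetic. Assuming each undirected tie is observed if at least one of its two endpoints completes the survey, and that responses are independent of network position (both simplifying assumptions for illustration), the expected fraction of ties observed at response rate r is 1 − (1 − r)²:

```python
def expected_tie_coverage(response_rate):
    """Expected fraction of undirected ties observed when a tie is captured
    if at least one of its two endpoints responds (assumes responses are
    independent and uncorrelated with network position)."""
    return 1 - (1 - response_rate) ** 2

# Even a seemingly decent 60% response rate leaves ~16% of ties unseen,
# and in practice the missing ties cluster around the non-respondents,
# distorting the map's shape rather than thinning it evenly.
print(round(expected_tie_coverage(0.6), 2))  # → 0.84
```

This is why a response rate that would be ample for a statistical survey can still produce a misleading network map.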
What went wrong. We delivered our survey by email, which has some advantages: people tend to provide longer open-ended responses to e-mail than to other types of surveys, and research shows that responses to e-mail surveys tend to be more candid than responses to mail or phone surveys. In this case, however, many of our intended respondents were already “fed up” with email and, as service providers, were “over-surveyed” from other sources. (It turns out the problem is wider: the U.S. population as a whole is over-surveyed, and response rates in the U.S. for surveys of all types are declining as a result.) This is something we will pay closer attention to in the future.