More than ever, profound changes are needed so that those left behind by America’s progress and prosperity can not only achieve but also sustain decent lives. Among these, a broader, more inclusive and ultimately more accountable approach to how we judge “what works” looms large.
From: A Lot to Lose: A Call to Rethink What Constitutes “Evidence” in Finding Social Interventions that Work (2009)
From the beginning, with the launch of Hope Meadows in 1994, I have grappled with how to determine “what works” in complex, multi-faceted neighborhood programs of this sort. I began by seeking the advice of Dr. Edward Zigler, who was Sterling Professor of Psychology and Director of the Center in Child Development and Social Policy at Yale University. He surprised me by pointing out that it is not necessary to conduct sophisticated evaluations to prove that family and community are good for children.
In contrast, about a year later I met with a professor of community health whose area of expertise was evaluation. I explained to him the purpose of Hope Meadows: to foster the well-being of children adopted from the foster care system by enfolding them, on a daily basis, into networks of stable, caring intergenerational relationships. I further explained that, with the help of senior volunteers living as their neighbors, individuals and couples would receive support as they undertook the lifelong task of parenting these children with their troubled pasts.
“How do we determine what works?” I asked. His reply was that we should begin by randomly assigning children to pre-adoptive families or to long-term foster care. He was serious. I was astounded at the audacity of such a suggestion. Would he like to have children randomly assigned to him, with the expectation that he would make a lifelong commitment to legally raise them through adoption? And in the end, what would have been proven — that permanent family and community are indeed good for kids?
Gathering data
With Hope Meadows continually evolving and with no real established guidelines for its evaluation, we collected all kinds of data during the ten years that I served as its executive director. For example, we looked at adoption rates. In its first five years, Hope Meadows achieved a case-closure rate (e.g., adoption, return home) of 74.6%, compared with Illinois’s rate of 23.7% for the same time period.
We also looked at health and education outcomes for the children and at patterns of engagement of older adults in the community. We held periodic focus groups, conducted surveys, gathered ethnographic narratives that were published in academic journals, and even studied relationship patterns using network analysis. Two master’s theses and one PhD dissertation used data from Hope Meadows. In addition, we commissioned an overall evaluation of the Hope Meadows program by the Center for Prevention Research and Development at the University of Illinois.
From a research perspective, all of this work was gratifying and provided lots of richly descriptive information and countless practical insights. But from the perspective of evaluation things were a bit fuzzier — in fact, with a few exceptions, it was never entirely clear to me what exactly we were evaluating or, more importantly, why.
Evolving understandings

Hope Meadows has always been, and the initiatives that have followed are, at their core, not only place-based but also relationship-based programs. They were designed to enfold the most vulnerable among us into networks of care and support by bringing together neighbors of all ages on a daily basis to provide assistance and to share the ups and downs of everyday life—to develop bonds of friendship and, over time, a culture of neighborliness—of friendliness, kindness, helpfulness, and consideration. We have come to refer to this deliberate design and implementation of relationship-based programming as Intentional Neighboring.
Evaluation, then, for the work we are doing, is not about child outcomes per se, nor about whether families and communities are good for children (something we already know, as Professor Zigler reminded us), nor, as many have suggested, about whether neighbors can do a better job than social services.
For us, and for the numerous initiatives around the country working to adapt this model, evaluation questions and accompanying methods must be about Intentional Neighboring, with its goal of creating an ideal community based on the Generations of Hope Community (GHC) paradigm: a place where caring neighbors come together to address some of our nation’s most complex social challenges, where those who are vulnerable are also valued community members who participate and contribute, and where older adults find meaning and purpose in their daily lives, even at the end of life, through caring relationships and continuing engagement. We have come to understand that these three outcomes of the GHC paradigm serve as the basis for guiding our evaluation planning, for determining “what works.”
What is needed
More specifically, to determine the impact and outcomes of a program based on Intentional Neighboring, there must be a focus on relationships. What is meant by the term “caring neighbors,” and how do we know if neighbors are caring? How are the vulnerable people in the community perceived by others in the neighborhood? Are they viewed as valuable members of the community? Do they participate and contribute, and if so, how? And finally, evaluation of this paradigm must include the older adults who make up the majority of residents and who themselves often become vulnerable as they age and approach the end of their lives. Are they finding meaning and purpose in their daily lives? How do they define this for themselves? How can their caring relationships and continuing engagement be assessed?
The answers to these questions can provide strong and credible evidence of the success of the program model, helping us gain insight about program effectiveness, how to strengthen program quality, and how to improve outcomes for residents. Like the Generations of Hope paradigm itself, evaluating the initiatives that implement it will require innovation, and will represent a critical difference from how “what works” is typically judged and determined. There may even be randomized controlled trials, but never in the way I was first advised twenty years ago!