Building our impact intuition

I was speaking to an impact investor recently, and he was saying that investment decisions are ultimately based on an intuitive sense the investor has about the company: the deal, the team, the market opportunity. And shouldn’t we just use our intuition to assess impact?

This is the most common unspoken premise used by impact investors to justify not collecting impact data.

So, where does this intuition come from? And is there such a thing as good and bad impact intuition?

Intuition is subconscious pattern recognition. And patterns are the sum total of the information we’ve taken in. If that information, and our ability to understand and process it, is of high quality, then we develop good intuition. If not, not.

A good investor is awash in quantitative and qualitative data that inform her investment intuition. For example, on the quantitative side, she’ll know what she expected gross margins to be, the predicted length of a company’s working capital cycle, and how many years she forecasted it would take for the company to get to profitability.

But that original financial model will have a very short shelf life: after the investment, she’ll get reams of data to show whether her predictions were right or wrong.

But in the world of impact, she’ll handle things differently.

She’ll look at research and benchmarks to develop a thesis. And then she’ll stop there, simply multiplying products sold by those benchmarks [e.g. 10 lights sold * X predicted impact/light = 10X impact].

This is like creating a pre-investment financial model of a company and then, two years later, when asked how the company is performing, using the model’s original variables to answer that question.

Not only would this answer not be any good, but her impact intuition would never improve.

Why do we accept the idea that we can understand the impact we are creating in people’s lives by looking at comparables? Why do we nod when told that it’s hard to get better data (it’s not)? How can we say that “we know impact when we see it” if we don’t gather data to understand actual performance?

The only explanation is this: we are not the people whose lives are, or are not, improved by a given intervention; we are not personally affected by a positive or negative ROI on a “better” solution; and the difference between potential and actual impact doesn’t land on our doorstep, or in our pocketbooks, or in our child’s cough or the quality of the education he receives.

The only way we’ll create better impact intuition is if we apply ourselves seriously to the question of learning what does and doesn’t improve people’s lives.

We don’t settle for “it’s too hard” anywhere else but here.

Cogito Ergo Sum Ego Creo Impulsum

Of all the charts in the GIIN’s 2017 Annual Impact Investor Survey report, the one that struck me the most was this one.

Of 200 impact investors surveyed, 98% say their impact is in line with or outperforming their expectations.

This in a sector in which almost no one actually measures impact, a sector that still debates whether having the intention to create impact is an important description of an “impact” investor.

It’s like we have our own version of cogito ergo sum: “I call myself an impact investor, therefore, ipso facto, I create impact.”



The Power of Lean Data

In the last few months, I’ve been writing more about the evolution in how we’re thinking about impact measurement at Acumen. We call it Lean Data.

Until now, there’s really not been a good way for social enterprises to measure their impact in a way that makes sense for them and adds value for their companies and for their customers.

I think we can change that.

For the full soup-to-nuts story of Lean Data, check out the article that we published yesterday in Stanford Social Innovation Review: The Power of Lean Data. I had the great pleasure of writing this piece with Tom Adams of Acumen and Alnoor Ebrahim of Harvard Business School.


If you want to go out and use Lean Data, you still have time to sign up for our +Acumen Lean Data course, which starts on Monday. And don’t forget to print out and laminate your own version of our handy-dandy Lean Data Field Guide.

Geeking out Next Thursday

I’m looking forward to speaking at the Catalyst for Social Change event this coming Thursday, November 12. I’ll be speaking together with Jake Porway, the founder of DataKind, and Samuel Sia, one of MIT’s Innovators Under 35.

The event is at Fordham Law School at 7pm, and there are still a few seats left – you can get tickets here.

We’ll be talking about innovative approaches to data and measurement, and using them to make the world a better place. It should be a lot of fun.

While I don’t know exactly where the conversation will go, I suspect that if you’re the kind of person who finds this image funny then you’ll have a blast. Hope to see you there.


No, it’s not too much trouble to measure impact

As impact investing goes more mainstream, there is a growing chorus suggesting that impact measurement might be the province of academics and idealists.

(as in, “…we have spent too much time and too many resources discussing impact measurement and trying to measure outcomes. Is an individual who needs eyeglasses better off if she has access to them? If you are wearing a pair while reading this article, you know the answer. There are myriad basic products and services such as eyeglasses to which the majority of the world’s population does not have access and which, if they did, would allow them to live significantly improved lives. So let’s move on and not overburden those initiatives focused on underserved communities with academic questions. They already face plenty of challenges trying to deliver what they promise.”)

Now, the argument goes, the real investors have arrived, so we can do away with all of that impact measurement mumbo-jumbo. If companies succeed and grow, if capital is getting deployed and returned, and if more capital is coming in, then we know that we are succeeding. The rest is just noise.

That argument would make sense if impact measurement were undertaken as an academic, ex post process in which those on the periphery of the system peer into its beating heart, extract data, and attempt to determine whether those at the center are creating sufficient impact. Who are they to judge?

Indeed, let’s avoid a scenario like that at all costs. In fact, let’s avoid any measurement system whose main goal is to produce data that isn’t, at its core, useful to operating companies in their interactions with end customers.

However, let us also avoid quick, easy caricatures about what measurement is and could be.

To walk through an example, let’s begin with the assertion that any company that qualifies as an impact investment is creating some sort of direct benefit for end customers or other key stakeholders (e.g. creating jobs).

So, we might ask, who wants to know if this hypothetical company is creating impact?

Sure, a wonky social scientist would love to know. She’d hope to understand whether someone who buys a solar light or hooks up to a mini-grid stops spending money on dirty, dangerous, expensive kerosene. If the customer doesn’t, then there’s less impact than one would hope.

The good news is that while the academic would love to have answers to these questions, we wouldn’t and shouldn’t answer them primarily for her. Because the questions she has are the core questions driving the success of the business. Any company with an iota of sales and marketing DNA will need to understand the answers to a basic set of questions:

  • Are customers buying solar lights as a replacement to kerosene or as a supplement?
  • How much less do customers spend on kerosene as a result of having a new source of light?
  • Are lights used primarily late at night in homes, for kids to study, or out in the fields?
  • And on and on….

Similarly, a company selling drip irrigation kits has no choice but to find out whether end customers achieve the 2 to 3x yields that the company gets on demonstration plots.  A company selling drinking water needs to understand if customers are contaminating the water before they consume it (which means that a marketing message around better health ultimately won’t deliver).  And of course a company offering vocational training and job placement will definitely need to know how many graduates they place, how graduates’ incomes compare with the money they made before the program, and which training programs have the highest yield on job placement rates and salaries.

All of which is to say that understanding impact is a key driver of business success for any company selling a new product or service to an underserved market.  And the companies that are first to realize this will be best positioned to meet the needs of their customers and deliver products that create the most value.

Put another way, understanding impact starts with questions like:

  • Who are we serving?
  • Why are these customers buying this product? (What problem does it solve for them?)
  • How are they using the product?
  • How does this product compare with what they did before?
  • What benefits do they hope to realize when using this product?
  • Are they realizing those benefits?
  • Why or why not?
  • Etc.

If we recognize that conversations about impact start and end with the end customer, we will sort out the way forward. Whereas we will continue to stumble out of the gate if we miscast these efforts as pitting investors’ priorities against those of companies. Companies will increasingly need this data, and, recognizing that this data must and will be collected, we as a sector will miss an opportunity if we don’t agree at the outset to use a common set of standards – so that as the data is collected, it can be aggregated in ways that allow for easy comparison.

The idea that we can opt out of understanding impact is akin to arguing that we can build large-scale, successful new enterprises without understanding our end customers in any real way. It’s absurd. Our opportunity is to understand, in a much deeper way, the intersection of a company, its products, and a customer’s well-being. The better customers are served, the better the company will do, and the flywheel will start turning. If we lack data on impact, we’ll never start walking that path.

An “intangible” dividend?

So here’s a curious narrative: in the early 1990s, 4,600 poor families in LA, New York, Chicago and Boston were moved from very poor neighborhoods (more than half of residents living in poverty) to wealthier ones (fewer than a third of residents living in poverty). The hope was that this would result in better jobs, higher incomes, and better educational outcomes.

After rigorous, scientific testing, the initiative failed to deliver the desired results.

And yet, in what was described as an “intangible dividend” by the NY Times, the recipients ended up significantly, quantifiably happier. “The improvement [in happiness] was equal to the level of life satisfaction of someone whose annual income was $13,000 more a year.”

This is the dividend that’s called intangible.  Happiness.

Of course it’s hard to measure, of course it is squishy and self-reported, but if we’re ever going to get anywhere we have to have the comfort and confidence to say out loud that things like human dignity, pride, and yes happiness are the whole point, the only point really, and that everything we’re doing is aimed at loose proxies to those results – what could be more real or concrete than that?

Just think how much we’ve punted on this issue, if we’re really honest with ourselves.  We’ve come to a point where we’re saying with a straight face that if we put a lot of money into the impact investing sector and that money realizes a healthy level of financial return then we’ve had success.  That puts us about seven degrees removed from actually understanding if anyone is better off, happier, freer, more proud or connected or more able to realize their potential, if someone is more likely to realize justice if they’re wronged or less likely to fall back into poverty if they get sick.

As a sector we have to have the courage to say out loud that happiness is not an “intangible” dividend, it’s not a silver lining in a program that otherwise failed to raise people’s incomes.

Would that we lived in a world in which the NY Times headline could have been: “large-scale government program a huge success, making 4,600 families happier, healthier, even without increasing incomes.”

It feels like looking at the sun, saying out loud that the whole point is happiness or pride or dignity.  It’s so much easier and safer to look away.

What you can’t measure

“So what was the measurable impact of…?”

Of course this question matters a lot, a ton, the most maybe.

The catch is that we fail to fully appreciate three truths:

  1. You can only measure a subset of the things that matter
  2. We end up convincing ourselves that the things we are able to measure are a good approximation of the whole
  3. But they might not be

A friend was nice enough to send this Skype chat along to me the other day (names changed):

[9:53:08 PM] Felipe: for lent, i’m going to do the generosity experiment

[9:53:24 PM] Felipe: 40 days of saying yes to everything

[9:53:28 PM] Felipe: you are warned 🙂

[9:54:22 PM] Samuel: wow

[9:54:27 PM] Samuel: 40 days

[9:54:28 PM] Samuel: are you sure?

[9:54:45 PM] Felipe: lent is 40 days…i have nothing to give up

[9:54:54 PM] Samuel: ok

[9:55:09 PM] Samuel: Sasha Dichter will be happy to note this

[9:56:17 PM] Felipe: it’ll be on a smaller scale than his, for sure…but let’s see how it goes

(I’m not sure it will be on a smaller scale, really.  The most profound and lasting changes are personal.)

Folks have been asking me: “Do we have to wait until February 14th, 2012 for the next Generosity Day?” Of course not!!! Start, go, share, inspire others…and if you have a free moment let me know how it went.