The Moral Imperative of Effective Altruism

This past August, a thousand or so entrepreneurs gathered in Berkeley, California, to talk about making the world a better place. They represented the usual suspects you’d find at a Bay Area conference – data wonks, business strategists, venture capitalists, and blue-sky creative types. They shared Silicon Valley’s obsession with big ideas powered by data and evidence. But instead of hacking taxi rides or Internet searches, these entrepreneurs were hacking giving.

The Effective Altruism (EA) Global Conference, and the growing movement behind it, is based on a simple but revolutionary idea: that the wealthy have a moral imperative not only to give, but to give in the most effective way possible.

EA’s basic moral sensibility is captured in a thought experiment known as the “drowning child,” originally conceived by the philosopher Peter Singer. Imagine you’re driving to work and see a child drowning in a pond. No one else is around, and if you don’t intervene the child will die. You can save the child, but you will ruin your work shoes and your suit will get dirty. Should you help?

Is The Universe Conscious?

I’ve been doing some reading lately about the role of consciousness in the Universe. Now up is Beyond Biocentrism, by Robert Lanza, one of the world’s most famous scientists. Lanza’s premise is that life is not merely an accidental outcome of a meaningless Universe. Instead, life — and consciousness — are integral components of the Universe. They are “indispensable cosmic attributes.”

Lanza explains how quantum physics upended our assumptions about the nature of reality, including the assumption that observers are merely incidental. Instead, the very act of observing changes the physical behavior of particles in measurable ways. Somehow these particles “know” someone is watching and adjust their behavior accordingly. Other features of the physical world once thought to be fixed and concrete — such as space and time — are actually only meaningful in reference to an observer. They are concepts invented by our minds to help us understand the world.

Lanza’s argument bolsters the idea that the Universe is not simply an arbitrary and meaningless collection of things moving this way and that. Instead, it is fundamentally meaningful. It also reminds us that the true nature of reality is not the mechanical world as we’ve come to understand it since Newton. There is something else in the picture, a true nature we are only beginning to understand.

A Critical Thinking Perspective On The Election

The election is only eleven days away, and thank God for that. It’s creating unprecedented levels of anxiety and making lots of us wonder how, exactly, we all ended up in this place.

It astounds me that anyone is still actually undecided, given the wealth of information we already have about both candidates. However, as we make our final decisions, I think it’s worth examining what the pros and cons of each candidate are when viewed through a critical thinking lens.

Viewing the candidates through the perspective of critical thinking is actually – and unfortunately – a fairly radical approach. Most of us vote based on some combination of our “like” or “dislike” for the candidates, our party affiliations, and the recommendations or opinions of those around us and those we trust. All these have some value, of course. But what if we tried to throw off our political affiliations and determine which candidate had the best plans to promote our goals and values, as determined through facts and critical analysis?

The first thing we want to know is: who has an understanding of and respect for facts, regardless of the conclusions or proposals they put forth? Fact-checking sites like Politifact are good sources for basic information about each politician’s Pinocchio factor — and they demonstrate quantitatively that this year’s Republican nominee is legendarily untethered from the truth.

But beyond simple statements of truth or fiction, it’s worth examining more deeply whether each candidate’s proposals would actually accomplish their stated goals when placed under scrutiny. It’s not worth wasting pixels on the nonsense the conservative party’s nominee spouts, but this kind of scrutiny does give us an interesting perspective on the candidates this year who have been more closely tied to reality. For example, those on the left have been adamantly against the Trans-Pacific Partnership and rallied around Bernie Sanders’ claim that he could provide college for all without breaking the economy. But there was little actual critical analysis of either proposition. When I asked people who supported Sanders over Clinton what the evidence base for either argument was, they’d typically send me a collection of memes and talking points from the Sanders for President website.

When viewed through the pragmatic lens of critical thinking, Hillary Clinton seems to hew closest to policies with a strong evidence base and an empirically tested theory of action. That doesn’t mean her policies are always correct. But she follows the rationalist tradition of Barack Obama, who ended up acquitting himself pretty well in the face of historic opposition. Of course, this is not to deny that there are other potentially important personal characteristics, such as trustworthiness or the dreaded “temperament,” that ought to be considered as well. But in terms of the consequentialist question of who will achieve what we really want, critical thinking might be the best frame.

Using Evidence To Do The Most Good: The Effective Altruism Global Conference

Effective Altruism (EA) is an emerging movement based on a simple idea: that we ought to use reason and evidence to do the most good in the world. This summer I had the opportunity to attend the EA Global conference in Berkeley, which brought together people from the realms of philosophy, international health and development, research methods, nonprofit management, philanthropy, and many other fields to think about ways to make our work as effective as possible.

Are Randomized Controlled Trials Really The “Gold Standard” For Evidence?

Back from vacation and still metabolizing the caloric tsunami that swept through our house over the holidays. Now that the fruit cake and egg nog are safely out of sight, it’s time to get our heads back into deep thoughts about research methods.

Today we turn our attention to randomized controlled trials, or RCTs. An RCT is an experimental design in which subjects are randomly assigned to one of two or more conditions (such as a treatment or a control group); a treatment is applied; and then results between the groups are compared. RCTs are regarded as the “gold standard” of evidence. But is this justified, particularly in the social sciences? Are RCTs the be-all and end-all of research-based evidence?

The Power of RCTs

RCTs are very powerful because they come the closest of any research design to actually determining causality. And this, after all, is our ultimate goal in research on program effectiveness. We don’t just want to know that something happened; we want to know why, so we have a basis to act in the future. By measuring the outcomes of a treatment and its counterfactual (such as a control group or alternative treatment), an RCT has the best potential to identify causal relationships and rule out spurious associations.
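
To make the counterfactual logic concrete, here is a minimal sketch of a simulated RCT in Python. Everything here is hypothetical (the sample size, the outcome scale, and the three-point effect are all invented for illustration), but it shows why random assignment lets the difference in group means stand in for the causal effect.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical trial: 200 subjects, half randomly assigned to treatment.
n = 200
baseline = rng.normal(loc=50, scale=10, size=n)  # outcome each subject would have anyway
treated = rng.permutation(n) < n // 2            # random assignment to groups

# Assume, for illustration only, the treatment truly raises the outcome by 3 points.
true_effect = 3.0
outcome = baseline + true_effect * treated

# Because assignment is random, the control group approximates the
# counterfactual, so the difference in group means estimates the causal effect.
estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"Estimated treatment effect: {estimate:.2f} (true effect: {true_effect})")
```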

For this reason, the U.S. Department of Education, among other groups, has placed increased importance on RCTs in recent years. For example, the What Works Clearinghouse, which evaluates the evidence behind educational interventions, has a strict hierarchy of evidence levels, in which only “well-designed and well-implemented” RCTs can be designated as “meeting evidence standards”. The result is that relatively few programs are designated as “effective” by What Works. For example, only eight early childhood programs are designated as having “positive effects”, and another nine as having “potentially positive effects”.

Limits of RCTs

Is the emphasis on RCTs well-founded? In an ideal situation, all programs of interest would have a research base of well-conducted RCTs, and the answer would be ‘yes’. When available, well-conducted RCTs are generally the best means of determining program effectiveness.

But in the “real world”, things are different. For one, the proportion of currently operating or proposed programs that have RCT data is likely to be quite low, particularly in the social sciences.

In addition, a host of “real-world” factors can impair the ability of RCTs to determine causality, such as publication bias and what happens when interventions are studied many times over. For example, imagine a new preschool curriculum that, its creators claim, increases kindergarten readiness. Further imagine that we know the proposed curriculum, in reality, has no actual effect on kindergarten readiness. Now imagine that the curriculum is implemented in 100 different school districts and 100 researchers conduct 100 RCTs, one in each district.

Using standard confidence levels (which allow a 5% chance of mistaking no effect for an actual treatment effect), about five of the studies are likely to turn up an effect, even though none exists. Publication bias amplifies the problem if, as seems plausible, those five studies are more likely to get published than the 95 studies that found no effect.
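
A quick simulation makes the arithmetic vivid. The sketch below uses made-up numbers throughout: it runs 100 simulated trials of a “curriculum” with zero true effect and counts how many clear the conventional p < 0.05 bar anyway.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 100 districts each run an RCT of a curriculum with NO true effect:
# treatment and control scores are drawn from the same distribution.
n_trials, n_per_group = 100, 50
false_positives = 0

for _ in range(n_trials):
    control = rng.normal(loc=100, scale=15, size=n_per_group)
    treatment = rng.normal(loc=100, scale=15, size=n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < 0.05:  # conventional significance threshold
        false_positives += 1

# With alpha = 0.05, we expect roughly 5 spurious "effects" per 100 trials.
print(f"Significant results despite zero true effect: {false_positives}/100")
```

If journals publish mainly the significant results, the literature ends up dominated by exactly these spurious findings.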

This is a stylized example, but it illuminates some of the real-world risks of overinterpreting the results of RCTs. RCTs also face a host of other threats to validity, such as the possibility that members of the “control” group inadvertently received the treatment too, which would have the effect of underestimating the true extent of the causal relationship.

These “real world” issues may overwhelm the virtues of the “ideal” landscape in which a host of well-conducted RCTs are available for us to examine. In practice, RCTs are not always possible to conduct, and they can be expensive. (To this end, several organizations, including the Coalition for Evidence-Based Policy, are promoting “low-cost” RCTs that can be completed for under $100,000 – a bargain compared to the $3 to $5 million tab that a full-scale educational RCT can run.)

In early childhood, relatively few RCTs have been conducted, for a range of logistical, political, and practical reasons. Even the most well-respected – such as the recent Head Start Impact Study, which randomly assigned 5,000 3- and 4-year-olds to Head Start or a non-Head Start control group – generated as many questions as it answered. So what do we do when RCT evidence is not available? We still need a way to make determinations about the best programs to pursue, and the best way to spend our money.

The best way to proceed, in my view, is to take into account all of the research available and weigh it according to a variety of characteristics, including the research methodology used, attrition, and other factors. If the best research we have on an existing program is a tracking study with no control group, we ought to examine that research and draw what we can from it. If a good RCT comes along that studies the same program, then that research can take precedence – but until that point we would be mistaken to ignore the body of evidence already assembled, even if it is of lower quality than we’d prefer.

Why Americans Don’t Trust Charities

An article in this week’s Chronicle of Philanthropy reports that 1 in 3 Americans lacks faith in charities, according to a new poll:

More than 80 percent said charities do a very good or somewhat good job helping people. But a significant number expressed concern about finances: A third said charities do a “not too good” or “not at all good” job spending money wisely; 41 percent said their leaders are paid too much.

Half said that in deciding where they will donate, it is very important for them to know that charities spend a low amount on salaries, administration, and fundraising; 34 percent said that was somewhat important.

And 35 percent said they had little or no confidence in charities.

So why are we so skeptical of charities – and is our skepticism warranted?

To some degree, charities have failed to earn donors’ trust because they haven’t been particularly good about rigorously evaluating and documenting their impacts. Only in recent years has a more impact-oriented, evidence-based ethos begun to permeate the nonprofit realm. Transparent, rigorous data that allow us to draw conclusions about costs and outcomes are hard to come by.

Impact Is The Best Way To Measure Charity Effectiveness

While many charities need to step up their game in providing rigorous data on impacts, we also need to do a better job of educating people about the best ways to evaluate charities. According to the poll, the most important factor people use in deciding whether to give to a charity is that it spends a low amount on salaries, administration, and fundraising. Yet the ratio of administrative to program costs is a poor indicator of a charity’s quality. That’s because outputs, not inputs, are what ultimately matter.

We wouldn’t decide whether to buy, say, a new smartphone based on how much money the company had spent on marketing or salaries.  Instead, we’d make our decision based on what the product could accomplish, in relation to its cost.  We should evaluate charities in the same way.
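
A toy calculation shows why. The figures below are entirely hypothetical, but they illustrate how a charity with a higher overhead ratio can still deliver far more impact per dollar:

```python
# Hypothetical figures for two charities producing the same kind of outcome
# (say, children reached by a literacy program).
charities = {
    "Charity A": {"budget": 1_000_000, "overhead_rate": 0.08, "children_reached": 2_000},
    "Charity B": {"budget": 1_000_000, "overhead_rate": 0.25, "children_reached": 5_000},
}

for name, c in charities.items():
    cost_per_child = c["budget"] / c["children_reached"]
    print(f"{name}: {c['overhead_rate']:.0%} overhead, "
          f"${cost_per_child:,.0f} per child reached")

# Charity B spends three times as much on overhead, yet reaches a child for
# $200 versus Charity A's $500. Judging by overhead ratio alone picks the
# wrong charity.
```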

The Crisis In Charitable Giving

The Chronicle reports that only 13 percent of those polled thought that charities do a good job of spending money wisely. This amounts to a crisis in philanthropic giving. Americans give more than $358 billion to charity each year, yet have little faith in where their hard-earned money is going.

What we need is a revolution in the way we think about charity quality. Charities need to be rigorously evaluated with an eye toward measurable impact per dollar invested. Consumers also need to be educated about what to expect from a charity. It’s impact, not inputs, that matters.