Thirty minutes before my flight to Fukuoka was scheduled to depart, it was canceled due to a snowstorm. The announcement was delivered swiftly and casually, as if the speaker was announcing a 20% off sale at the duty-free store, not dooming me to ten hours of miserable ground travel.
Aggrieved, self-pitying, and on the cusp of what eventually blossomed into a full-blown cold, I took a lap around the seating area and contemplated the transportation nightmare that had befallen me: a two-hour drive for naught, $100 in airport parking fees I could have avoided, an hour-long bus ride, an overpriced bullet train that cost more than my original flight, a newly mandatory subway ride to my now far-away accommodation. I braced myself for the unique psychological torture of wasting hours on various expensive, comparatively slow methods of transport when I had been expecting a zippy plane ride, a before-sunset arrival, and a quiet evening sipping artisanal green tea somewhere swank and dimly lit.
Amidst my frustrated pacing, I spotted a girl around my age still sitting down. She appeared mildly confused by the general bustle, but nowhere near the level of alarm the situation called for. I could tell that she was a foreigner, like myself, and figured that she must not have registered the cancellation announcement. For a second, caught up in the seeming totality of my own drama, I considered letting her figure it out on her own. Then I decided against it, remembering all the times that others had taken pity on my cluelessness.
I approached her and said, “The flight is canceled. No flight.” I crossed my arms into an “X” and gestured that she should hurry on to wherever she needed to go next.
“Kyanseru?” she asked, enunciating the katakanized version of “cancel”.
“Kyanseru,” I confirmed.
As the realization sank in, her voice began to quaver with the faux lightheartedness that often follows bad news in the moments before despair overtakes amusement. In broken Japanese, she asked me about the next flight. In equally broken Japanese, I responded that there were none until the next day. She told me she needed to catch a flight to Vietnam at 10 PM that night. It was 4. I told her, by way of placing both hands over my mouth, of my concern. I took her to the airline help desk. They relayed that she was shit out of luck, except in nicer words. The girl started to get antsy. I started to realize that there was no way I could leave her behind.
I was being presented with a clear opportunity to exercise morality, like catching a glimpse of a baby behind a window with smoke billowing out of it. I blew my nose and swallowed my selfishness, then asked if she wanted to join me on the updated multi-leg journey I had formulated. She agreed, nodding and wiping her eyes.
Bonded by our bad fortune and mutual illiteracy, we traveled the next 200 miles together, shuffling through various ticketing lines, getting lost in stations we didn’t anticipate being in, and drifting in and out of sleep sitting next to one another. I offered to haul one of her suitcases with my free hand; she surprised me with a warm yuzu drink from a vending machine. On the bus, she told me that she had been working as a strawberry farmer for the past three years, but had overstayed her visa and was now headed home permanently. On the train, she showed me pictures of her life: the scales she used to weigh and pack produce, the sprawling potlucks that she and other international workers hosted last New Year’s Day, the nail salon she planned to work at upon her return to Vietnam. In exchange, I showed her photos of mine: my friends and I throwing up peace signs on mountaintops, my mother’s Vietnamese home-cooking, the strip mall near my house in Napa that I discuss in my lesson plans about everyday American life. At one point, we exchanged names, but they were quickly forgotten, or perhaps never learned.
At 8 PM or so, the girl and I made it to Fukuoka Station, the furthest point we would travel together. At the airport train ticket gate, we smiled at one another and hugged goodbye. She thanked me profusely. I wished her good luck. I watched her lug her three massive suitcases towards the platform, one pulled by her left hand, the other two knocking against one another as she dragged them forward by the crook of her right elbow. When she disappeared from my field of vision, I turned around and headed to my platform.
I have no idea if she made her flight, and I have no way of contacting her to find out.
That night, exhausted by the whole ordeal, I cried silently in my hostel bunk and nibbled at a rum raisin bagel.
One of the many quandaries of being alive is not knowing if our efforts are contributing to a better world. But despite the uncertainty, I find value in attempting anyway. Looking back, I think that sticking with the girl was the right thing to do, and I’m glad that I did.
Each day, I try to make ethical decisions that I can be proud of. I try to be a “good person” even when it requires a little more of me. I try to get a little bit closer to living in accordance with my belief that every person is equally entitled to well-being. Effective altruism assists me in all these endeavors.
Doing good has been extremely top-of-mind for me over the past year, primarily as a result of my increasing interest in effective altruism (EA), a philosophical and social movement dedicated to answering the question, “How can we best use our resources to help others?”
Just over one year ago, I wrote a comprehensive blog post detailing my early explorations into the movement, as well as several of my hesitations. In re-reading it now, I find that it holds up pretty well.
Many things have stayed the same. I still feel that you are my other me, that our essential sameness impels me to act impartially and care deeply for all people. I’m still motivated to use my resources to do as much good as I can -- or at the very least, increasingly more than I’m doing at the present moment.
But I also got a lot wrong. For starters, I probably should have read at least one philosophy book before labeling myself a utilitarian, which I would no longer do without a healthy list of caveats (at least, not in front of anyone who has taken Philosophy 101). Begrudgingly, I did end up listening to “predictions about the most likely ways in which artificial intelligence and bioengineered viruses will kill us” (to quote 2022 me) and they scared me in all the ways I expected they would. Also, I endorse Peter Singer a lot less now.
In any case, I’ve fallen a lot deeper down the rabbit hole. I’m chronicling what I’ve done to show how my thinking has evolved (and the materials that prompted the changes) and to provide an account of how a tentative EA becomes a more deeply engaged one.
What I've done over the past year
I read a number of foundational EA books.
I also tried to read Superintelligence by Nick Bostrom but found it devastatingly boring. Of the 30 books I started in 2022, it was the only one I couldn’t bring myself to finish, and I read Hanya Yanagihara’s 814-page novel A Little Life last year. A month later, Bostrom was outed as a racist (to no one’s great surprise, damningly) so, like, I just knew the vibes were bad. Lots of people think he has good ideas about AGI though, so proceed as you must. I just don’t understand any of the ideas because, as mentioned, the stupefying dullness with which they were communicated impeded any understanding I might otherwise have managed.
All the other books are great though! You can get a free physical copy of any of them here!
I wrote (1) two-sentence comment on the EA Forum and won a $100 prize from it. If that’s not a signal to post more, I’m not sure what is. One of my 2023 goals is to write an article for the EA Forum!
I completed the introductory and in-depth EA fellowships, which are eight-week-long, once-weekly online courses that explain core EA concepts. Each week, you’re given a few readings to parse through, then you sync with a small group to discuss over a one-hour Zoom call.
If you’re even the slightest bit curious about EA, I highly, highly recommend the introductory course. It’s thought-provoking and intellectually challenging, while simultaneously being accessible and low-stakes. EA is a missionizing philosophy -- i.e. it benefits from an increase in members -- but the course is structured to avoid obligatory agreement. Dissent is encouraged. Critique is welcome. If I were a bad-faith actor, I might sign up simply to practice my debate skills against thoughtful partners.
My stand-out learnings are as follows:
1. The numerical value of non-human animal life
Several of my closest friends are vegan, and vegan first and foremost for the animals, which means I grew up watching Cowspiracy and reading about battery-caged chickens and digesting all the abhorrent exposés on factory farming. Having said that, nothing has influenced me to take animal welfare more seriously than simply being asked to put a number value on the life of specific animals.
For example, how many chicken lives equal one human life? How about pigs? Cows? Shrimp? If you could choose between saving one human and 10,000 chickens, which would you choose? How about 1,000,000 chickens? How about 10,000,000 chickens?
The point isn’t to come to a universally agreed-upon number. (That’s for charity evaluators to figure out!) The purpose is to determine how you, as an individual, currently value animal life. This provides a starting point that you can adjust upwards and downwards as you learn more and update your beliefs.
Deciding upon a hard number is useful because it enables you to do expected value calculations. For instance, even if I value the life of a chicken very, very, very little, like 1/10,000,000th (one ten-millionth) of a human life, 136 million chickens are killed each day for food, so my level of horror/outrage/disgust (and my corresponding actions) should theoretically match what they would be if 13.6 humans were being slaughtered each day for food, and if there were an entire industry built around supporting that murder. At least from a math perspective, the sheer scope of animal suffering compels action.
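To make the arithmetic above concrete, here’s a back-of-the-envelope sketch of that expected value calculation. The inputs (the 1/10,000,000 moral weight and the 136 million chickens per day) are the figures from this post, not independently sourced data:

```python
# Back-of-the-envelope expected-value check for the chicken example.
# Assumed inputs: a moral weight of 1/10,000,000 human lives per chicken,
# and 136 million chickens slaughtered per day (both figures from the text).

chickens_killed_per_day = 136_000_000
chicken_moral_weight = 1 / 10_000_000  # one ten-millionth of a human life

# Expected value: convert chicken deaths into "human-equivalent" deaths.
human_equivalent_deaths_per_day = chickens_killed_per_day * chicken_moral_weight
print(human_equivalent_deaths_per_day)  # roughly 13.6 human-equivalents per day
```

Raising or lowering the moral weight as you update your beliefs just rescales that last number, which is what makes having any explicit number so useful.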
Unfortunately, understanding principles logically is way easier than actually living them, so I regret to admit that I still eat meat. But I did manage a small change. Since June of last year, I stopped purchasing meat at the grocery store, essentially becoming pescatarian at home. I still eat meat at restaurants and at social events, but I plan to revisit my diet choices if and when I move to a country with more protein alternatives. Thinking this one over!
2. Forecasting and calibration
Forecasting is a fancy word for making predictions. It comes into play when dealing with a high degree of uncertainty, like way higher than deciding whether or not to help a girl get to the airport.
One of the key aspects of forecasting is assigning a numeric probability to outcomes. The weather forecast is a great example of this; e.g. “there’s a 70% chance of rain tomorrow.” Political forecasting is also a huge market; e.g. “Pollsters predict a 68% chance of Biden being re-elected.”
Especially when dealing with long timelines, like over the course of decades or centuries, forecasting is an essential part of deciding which action to take. Good forecasts help you do good math.
But even for those of us who are not data scientists, forecasting is an interesting practice to incorporate into daily life. In everyday language, people tend to use non-specific confidence indicators like “probably,” “seems unlikely,” or “pretty sure,” all of which are moving targets that mean different things to different people. This makes it hard to form an accurate view of what people (ourselves included) actually mean, and how much to trust what they say.
In my attempt to become better at predicting my own behavior, I’ve begun using exact numbers whenever I can. It sounds something like this: “I’m 85% sure that I’ll get that done by the end of the week!” or “Based on what I know, I think the chances that I’ll do that are only 20%.” Providing a specific likelihood and specific timeline for action allows me to check my predictions against reality, which leads to better forecasts going forward! Essentially, by slowly revealing gaps in my knowledge of self, I begin to know myself better.
As a practical application: if you come to realize that the chances of you embarrassing yourself in any given social situation are only 2%, you only need to worry about it at a 2% level, compared to the ~50% level most people flounder around at. Forecasting is a tool to model the world, and how we move through it, more accurately. It helps us make optimal decisions in rapidly fluctuating situations.
It takes time to develop a perception of your knowledge that matches reality, but it’s surprisingly easy to make major progress in just an hour or so. I enjoyed playing this game, which allows you to assign confidence levels to your answers to trivia questions. The game helps you become “well-calibrated.” This means that when you say you’re 50% confident, you’re right about 50% of the time; when you say you're 90% confident, you're right about 90% of the time; and so on. Being well-calibrated means you have a strong sense of how to evaluate your own certainty -- that is, when to be confident in your knowledge, and when to admit you’re not so sure.
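The scoring behind a calibration game like that is simple enough to sketch. Here’s a minimal, made-up example (the prediction data below is invented for illustration, not from any real quiz) of grouping answers by stated confidence and checking each bucket’s hit rate:

```python
# Minimal calibration check: bucket answers by stated confidence, then
# compare each bucket's actual hit rate to the confidence level itself.
# The (confidence, correct?) pairs below are invented illustrative data.

from collections import defaultdict

predictions = [
    (0.5, True), (0.5, False), (0.5, True), (0.5, False),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False), (0.9, True),
]

def calibration_by_bucket(preds):
    buckets = defaultdict(list)
    for confidence, correct in preds:
        buckets[confidence].append(correct)
    # Fraction of answers that were actually right at each confidence level.
    return {conf: sum(results) / len(results) for conf, results in buckets.items()}

print(calibration_by_bucket(predictions))  # {0.5: 0.5, 0.9: 0.8}
```

In this toy data, the 50% answers were right exactly half the time (well-calibrated), while the 90% answers were right only 80% of the time (slightly overconfident); shrinking those gaps over many predictions is what “becoming well-calibrated” means.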
Kind of like meditating, the best way to understand forecasting is just to try. Reading theory might help you develop the language to talk about it, but the only way to improve is simply to practice.
3. Artificial Intelligence
I had a brief, two-month-ish phase where I got very interested in AI. I read a couple of books and completed Crash Course’s YouTube series on AI. I also began religiously listening to NYT’s podcast “Hard Fork”, which both makes me laugh and keeps me up-to-date on the latest tech news. It only took a few dozen hours of effort, but I think I can finally explain what a neural network is.
I had somewhere between 10-15 one-on-ones with extremely kind people, including 80,000 Hours career counseling, which I’d recommend to any fellow young people who are interested in dedicating their careers to social impact and/or have ever sent the text, “idk wtf i’m doing”.
Some highlights of my talks included: discussing how natalism and longtermism intersect (fascinating), early conceptualizations of meta-level coworking strategy and theories of change (neat), and lots and lots and lots of conversations about operations work within the EA ecosystem (exciting).
I also blabbed at length about my extremely amateur writing to someone who I later learned wrote for The New York Times, which kind of makes me want to shrivel forever. Still, on the whole, meeting new people was a net positive! I hope to do a lot more of it in the future!
Donating 5% of my income through Giving What We Can’s Try Giving pledge was the big, concrete action I took in service of my belief in EA principles. The funny thing about donating is that it seems like it would be a grand embarkation, but in practice, it’s as simple as inputting your credit card number a couple of times to set up recurring donations. It’s kind of like stepping outside after your final day of school; you’d expect some sort of fanfare, but really, life just keeps on keeping on.
The ease of donating, though, is only more reason to. I can say honestly that I have not missed any of the 5% I gave over the past year.
In 2022, I gave 50/50 to GiveWell’s Top Charities Fund and The Life You Can Save’s All Charities Fund. Currently, my donations are split 80/20 between GiveWell’s All Grants Fund and Longview Philanthropy’s Longtermist Fund. As always, I give thanks for the privilege of helping to ameliorate suffering and honor all life in tandem with my own.
I think my donations speak to how my ideas on how to do the most good have changed over time, particularly how they’ve shifted to include longtermism. Longtermism, simply, is the idea that we should prioritize positively influencing the long-term future of humanity — hundreds, thousands, or even millions of years from now. At face value, it sounds kind of sci-fi and dystopian, but under this definition, caring about climate change makes you a longtermist, so it’s probably less fringe of an idea than you think.
I liken longtermism to preparing for freak accidents. For instance, I’m not planning on my arm being severed any time soon, but in the unlikely scenario that it is, I really hope that the doctor who receives me at the hospital knows what to do. In this metaphor, longtermism efforts might include training medical staff on how to treat arm severance, researching best practices on how to recover, and trying to reduce the likelihood of arms being lopped off in the first place.
Longtermists try to identify events that are both plausibly likely and catastrophically bad, such as misaligned artificial general intelligence (AGI), nuclear war, and bioengineered pandemics. These are things that would affect a whole lot of people and last a very long time.
One of the most common critiques of longtermism is that it detracts from our ability to do good today, right here and right now. I won’t go so far as to say there’s zero trade-off, but I’m also not going to yell at anyone who is actively trying to safeguard the future. In an interview, someone asked me: “Do you have any hesitations about working for a longtermist organization rather than one focused on more immediate problems?” My answer was no, because I’m currently doing nothing, so working on any solution would be an improvement. That’s pretty much where I stand on the matter. There’s a whole lot of suffering going on in the world, and I respect a variety of worldviews on how to reduce it. For me, the important thing is actively considering what might be effective and taking steps to move in that direction, as opposed to, like, assuming good intentions will automatically lead to good outcomes.
Longtermism is a philosophical maze, incorporating population ethics, moral discounting, the limits of cluelessness, and a bunch of other galaxy-brain lines of thinking that are too complicated to explain within the scope of this blog. For our purposes, I think the bulk of the argument can be summed up with the phrase, “low probability, high expected value.” If our best guesses on how to protect the future are in even the general vicinity of correct, a little bit of effort has the potential to go a long, long way.
I made good on my promise to parse through 80,000 Hours’ career resource materials and landed upon operations work within the EA ecosystem as my most promising path to impact. I’ve made it to the final hiring rounds on three of the six applications I’ve submitted thus far, which is a good sign that I have the right aptitudes for this kind of work! I’m going to keep applying and keep collecting that sweet, sweet work test compensation until something pans out. In the spirit of forecasting, I predict that there’s an 85% chance of me doing paid work for an EA-aligned organization by the end of 2023.
I’m also working on the launch of EA Architects & Planners, a working group under the “high-impact professionals” umbrella. Not too much to say right now -- don’t wanna give away all our future plans -- but things are definitely cooking!
It’s been interesting to transition from an isolated individual reading forum posts in her bedroom to a Professional™ in the Community™, so I’m still figuring out how to talk about EA in my usual conversational, mildly flippant, decidedly non-rigorous manner without scaring away employers. At least in the context of my personal blog, I prefer to spitball ideas, not present painstakingly evidenced arguments, so I’m going to keep being my lighthearted, unserious self…at least for now. Perhaps I’ll brush up on my scientific writing ability in the future.
In truth, it was a wild year to deep dive into EA. EA’s public image had an objectively rocky past season. Crypto fortunes were lost, racists were outed, sexual misconduct was exposed. I mention these occurrences because I think it’s important to be upfront about the environment that potential EAs are stepping into. In last year’s blog, I wrote that I was wary of joining a “white boys’ club…defending hoarded wealth with the promise of charity” and uhhhh, don’t want to say I called it, but I did suggest the possibility.
But EA also did a whole lot of good. To quote William MacAskill, “When we think of the impact EA has had so far, it’s pretty inspiring. Let’s just take one organization: Against Malaria Foundation. Since its founding, it has raised $460 million, in large part because of GiveWell’s recommendation. Because of that funding, 400 million people have been protected against malaria for two years each; that’s a third of the population of sub-Saharan Africa. It’s saved on the order of 100,000 lives — the population of a small city.” I feel very fortunate to have contributed a tiny, tiny portion of that pie.
Overall, the most meaningful thing that EA has given me is a feeling that my life is extremely valuable. Like, “could potentially save dozens of lives” valuable. That’s something that can’t be easily forgotten, at least not without killing a considerable portion of my conscience in the process. Clearly, every person has some sense of their ability to do good, but EA forced me to really reckon with mine -- put it into numbers, reflect on all the counterfactuals. I’ve had so many privilege checks over the course of my life, but learning about EA was a privilege call to action.
EA isn’t for everyone -- and doesn’t have to be -- but it's been worthwhile for me. I like the idea that life is worth protecting, and that we can all play a part in protecting it.
Is EA perfect? Obviously not. Do some of the thought leaders come across as tone-deaf or convoluted to the point of being unintelligible? Sure. Does it kind of scare me that many EAs have little to no attachment to their racial and gender identities and thus, have a difficult time understanding why they are of such import to others? Yes, a little. Am I aware that my sudden shifting of thousands of dollars and frequent vague references to an “online forum” kind of makes it seem like I joined an underground pyramid scheme? Yeah.
But do I believe in the core principles of EA: reasoning transparency, Bayesian thinking, and expanding my moral circle? Yep. Are many EAs lovely, intelligent, inspiring? Absolutely. Does EA make me feel grateful and excited to be alive, like I have the chance to do something worthwhile with my existence? Indeed. Does that feel kind of special? Yes.
So I’m here, and I’m trying.
At the end of the day, if you presented me with two complete strangers -- one who identifies as an EA and one who doesn’t -- then asked me to bet on which person will do the most good, I’d pick the EA every single time.
Why? Simply because they are optimizing for it.