Notes on Utilitarianism by John Stuart Mill

Some Useful Terms

  • Ethics is the subfield of philosophy concerning the nature of right and wrong. 
  • Normative ethics is the subfield of Ethics concerning what standards to use when judging what we morally ought to do.
  • Consequentialism is a normative ethical theory that judges the rightness or wrongness of actions entirely on their consequences or effects.
  • Utilitarianism is a type of consequentialism holding that happiness ought to be maximized and unhappiness minimized.

Book Notes

Utilitarianism (1861) is the most famous book on the eponymous ethical theory. Thanks to its great influence on the study of ethics and its short length of just under 100 pages (which allows for its continual use in undergraduate classrooms), it has remained relevant to the present day. It has played a key role in the history of consequentialist ethical theories and can be credited, in part, for their popularity.

Across five chapters, Mill describes what the theory of Utilitarianism is (and is not), how people might be motivated by it, and his proof for it, and ends with an analysis of justice and its relationship to the theory. Throughout the first three chapters, it is notable how much time Mill spends deflecting canards – objections he considers meritless because they misconstrue what Utilitarianism is – some of which he addresses before he even outlines the theory itself.

This outline begins in chapter 2. The key principle Mill directs us to is the much-remarked-upon Greatest Happiness Principle: acts are right insofar as they tend to increase overall happiness or decrease unhappiness. By happiness, Mill is eager to point out, he does not mean the trite notion of momentary bliss, but all the aspects of life that are satisfying or pleasurable. Additionally, in discerning which types of happiness are best, he uses a controversial criterion: of any pair of pleasures, the one preferred by everyone or nearly everyone who has experienced both is the one bringing greater happiness.

This, Mill believes, demonstrates that so-called higher pleasures of mental, moral, or aesthetic quality are better than lower, sensation-driven pleasures. It also leaves philosophizing and intellectual thinking as some of the greatest pleasures around — quite convenient for Mill, given that this is what he spent much of his time doing outside of political advocacy.

Furthermore, Mill notes, other principles often embedded in moral language, such as veracity or virtue, still have purchase in Utilitarianism. However, these are secondary principles, which, while good guideposts to moral behavior, are not the ultimate deciding factors of right and wrong. The ultimate judge of rightness and wrongness is the degree to which happiness has been increased or decreased.

In chapter 3, Mill dedicates significant time to describing how Utilitarianism is, in certain respects, no different from most ethical theories. The same psychological and social sanctions can be used to prompt people to perform moral actions. While it may take time for the tenets of Utilitarianism to seep into society through education and persuasion, the mental and social tools to prompt moral behavior are already there, even if what counts as moral is changing.

In chapter 4, we are asked to consider how Utilitarianism might be proved. As Mill notes, this is no direct proof, but it is the best that can be asked of a moral theory. Roughly, it goes:

  1. Everyone desires happiness
  2. The only way to prove what is desirable is to observe what people desire
  3. A person’s happiness is thus good for that person
  4. Therefore, the general happiness is a good to the aggregate of all persons

Despite Mill’s warning that this is not a direct proof of mathematical strength, it still feels underwhelming. Specifically, it is peculiar that Mill thinks one person’s happiness being good for that person logically entails that increasing the aggregate amount of happiness is good for the aggregate of people. Such a connection requires further assumptions about what the aggregate of persons means and whether something can be good for it.

In chapter 5, Mill considers the topic of justice. He searches for attributes common to conceptions of justice and finds them grounded in a set of emotions that deal with self-preservation, some of which are observed in other animals. These emotions, when constrained by social custom, motivate the creation of law. Mill points out that the etymology of justice demonstrates its deep connection to our legal foundations (jus means law in Latin). But, he states, justice runs deeper than law, as the law itself can be unjust.

So, justice can be seen as the ways in which society protects our moral rights, sometimes through law. This means we all have a stake in the creation of just systems. Mill connects this to utility, and to the happiness principle, by noting that just systems protect people’s basic security and alleviate many of our most basic concerns about the harms others might inflict upon us. Justice, however, is not fundamental; it rests on deeper intuitions concerning morality, and at the base of those intuitions lies the notion of utility. Justice emerges from this. Furthermore, Mill argues, there are cases in which it would be moral to act expediently outside of what is just, yet within what is moral. This shows that justice delineates a class of moral rules that emerge in societies to satisfy certain common emotions around self-protection and fairness. This class is less fundamental than morality itself, which Mill states is determined by the principle of utility.

Together, the chapters contain many influential and compelling arguments in favor of, at the very least, a prioritization of happiness in any ethical system, if not adherence to Mill’s version of Utilitarianism itself. Mill’s work has inspired a series of derivative ethical theories and has done much to advance the expanding moral circle, in which greater moral concern is given to women, the impoverished, those in other countries, and non-human animals.

How I (Try to) Stay Productive: A List of Tips

As I wrote about last week, the internet age has given us countless devices and apps designed to distract. It is still sometimes hard to pinpoint where exactly an activity crosses over from useful, or harmlessly entertaining, to full-on distracting. What seems clear to me, however, is that for most people using modern technology, this line is crossed often. Given this, I thought it might be useful to write about what I do to prevent distraction and increase productivity in my life.

I remember when I first started realizing that technology was going to be a serious problem for my academic prospects. In middle school, I would often procrastinate on writing papers until the middle of the night before they were due. At the time I did not have a computer of my own, so in some sense I thought it a treat to be able to use the family computer on a weeknight. I would usually end up watching Netflix until my monkey brain finally ceded control sometime in the wee hours of the morning, and I began writing.

Reflecting on those experiences at the time, I knew it was a problem. I knew it was going to make it more difficult to succeed come high school, but I didn’t have a ready blueprint for dealing with it. I was also too confident in my own capacity for self-restraint to seriously ask for help.

Since then, I have gone through numerous strategies to help curtail the negative influence of technology in my life. Today, I rely on a combination of certain habits and certain restrictions on sites. The following are, I believe, the most important features of my current system.

My setup keeps my computer in the place where I do most of my work. With almost no exceptions, I leave this laptop there and do not bring it to where I sleep and do much of my reading. I do bring my phone with me, but I try to keep it away from where I sleep (or at least on the opposite side of the room if I need it for an alarm). I am still working on improving my phone habits, however.

In terms of technical steps to prevent distraction, I have found a few apps and features in iOS 14.3 that are particularly useful. For my computer running Windows 10, I use an app called Cold Turkey to block every website on a list across every browser. It works by requiring you to install the Cold Turkey extension on each of your browsers in order for them to launch. You can then make schedules both for when websites on this list are blocked and for when you are able to edit the list. I have found this very effective at preventing me from accessing certain sites (e.g., YouTube, Reddit, and news sites) that I would otherwise gravitate to when bored and get sucked into.
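Cold Turkey’s internals aren’t public, and I’m not claiming this is how it works, but for the technically inclined, the core idea of blocking sites across all browsers can be roughly approximated with the system hosts file. Here is a minimal Python sketch, assuming administrator rights; the marker string and block list are my own illustrative choices:

```python
# Minimal hosts-file blocker sketch. This is NOT how Cold Turkey works
# internally; it simply maps the listed domains to localhost so browsers
# can't reach them. Must be run with administrator privileges.
HOSTS_PATH = r"C:\Windows\System32\drivers\etc\hosts"  # Windows 10 location
MARKER = "# --- distraction blocklist ---"
BLOCKED = ["youtube.com", "www.youtube.com", "reddit.com", "www.reddit.com"]

def block_sites() -> None:
    with open(HOSTS_PATH, "r+", encoding="utf-8") as hosts:
        if MARKER in hosts.read():
            return  # blocklist already installed
        # read() left the cursor at end-of-file, so writes append
        hosts.write("\n" + MARKER + "\n")
        for domain in BLOCKED:
            hosts.write(f"127.0.0.1 {domain}\n")

if __name__ == "__main__":
    block_sites()
```

A companion script run on a schedule could strip the marker block back out, loosely mirroring Cold Turkey’s scheduling feature.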

On my phone I have a rather draconian system. The first thing I use is the built-in Content Restrictions feature in the iPhone settings. Here I only allow access to sites that I have whitelisted: Wikipedia, Google, and a handful of others. I have purposefully forgotten the content restrictions passcode, so that I would need to reset it with my Apple ID to change these settings.

This doesn’t work by itself, because it doesn’t prevent you from installing apps that easily evade the content restrictions. In response, I have deleted all apps that are at all distracting and purposefully forgotten my Apple ID password so it is more difficult to reinstall them. Hopefully, in the future, Apple makes it easier to self-regulate your usage and harder to bypass your own restrictions. I know my complicated setup isn’t for everyone, but you can still use the productivity features offered in iOS in less extreme forms.

Here is a summary of my productivity tips:

Habits and Home Setup Tips

  • Separate your workspace from your sleeping and resting space.
  • Keep your computer in a different room from where you sleep.
  • Charge your phone in a different room (or at least the opposite side of the room) from where you sleep and where you work.
  • When reading, keep devices in a different room or put them where you can’t hear them.
  • Set times in your day when you can’t use the internet, particularly at night.

Tech Tips

  • Use Cold Turkey (or SelfControl for Mac) to block sites or apps that you find distracting on your PC. Or, only allow yourself to use certain apps at specific times.
  • Use Content Restrictions on iOS to either block all non-whitelisted sites, or block specific sites you find distracting.
  • Turn off notifications from apps that you use too much.
  • Delete apps that you can’t stop using or that won’t stop distracting you.
  • If you need a draconian measure, forget your passcodes and passwords that allow you to change these settings or download distracting apps.

Most importantly, I’ve found that this is a continuous process. You will not find the perfect setup for yourself immediately. The most important thing is that you don’t give up. Instead, accept incremental progress as you learn more about yourself and your habits.

Good Luck,

Alexander Pasch

Bottlenecks to Progress in the Internet Age

I have been reading A New History of Western Philosophy by Anthony Kenny, and it has resurfaced thoughts I have often had when learning about historical figures and everyday life in prior eras. In particular, how these figures were able to overcome the dual problems of censorship by political and religious elites and the limited availability of information will always fascinate me.

The lack of access to crucial historical texts was perhaps the major bottleneck preventing philosophical progress in medieval Europe. In fact, the capture of Constantinople by Ottoman forces in 1453 ended up being critical for the Renaissance. It forced the Greek scholars who had kept the philosophy of Plato and other ancients alive to flee to Italy, where Scholasticism (the rigid fusion of Christianity and Aristotelianism) dominated. The spark of newly available classics was enough to light the flames of new philosophies that burned the Scholastic tradition to the ground.

Think about that. Works of Plato, lingering in Byzantine libraries for hundreds of years, simply needed to be transported across the Mediterranean and communicated by the scholars who kept them to unleash a wave of progress the world is still reverberating from. Obviously there were many factors behind the Renaissance, but it is a remarkable feature of this time that a relatively small set of books could cause such massive intellectual changes. In part, this is because there simply wasn’t that much new material to read. Something coming out was a big deal, even if it was a re-release. In fact, it wasn’t really until the 19th Century that it became impossible to read everything worth reading in most subjects.

Beyond the scarcity of written material, religious and political persecution has been another persistent obstacle to progress in the Western philosophical tradition. The political turmoil in the lives of almost every major medieval and pre-modern philosopher is striking. Each writer had to self-censor, and many were forced to flee or were outright killed. To name a handful:

  • Boethius (tortured and killed by the Ostrogothic King Theodoric)
  • Giordano Bruno (denounced and burned at the stake in Rome)
  • Baruch Spinoza (excommunicated and exiled from the Jewish community in the Netherlands)
  • John Locke (fled England to the Netherlands to avoid political persecution before returning)

In stark contrast to this is the extraordinary availability of information today and the ease with which new ideas can be articulated. This is perhaps the most remarkable fact about our era (and what makes your reading this possible at all). It also raises the question: why, since the invention and wide-scale adoption of the internet, haven’t productivity and economic growth sped up more? One theory (articulated by Tyler Cowen) is that we have already picked much of the low-hanging fruit that yielded the massive economic progress of the 1900s. Science, likewise, is using more people to make less progress than it did in the past.

If this is true, then it seems we hit a sweet spot for GDP growth and scientific progress somewhere in the 20th Century: our intellectual and political climates were just good enough to unleash discoveries and inventions that were out of reach of previous generations, yet much easier to find than those that followed.

On the personal side, it might be hard to relate to GDP figures. But the relationship between personal productivity and economic productivity is a topic that still sometimes crosses my mind (despite how differently the two may be defined). For myself, having been born in an age and place where the internet was nearly ubiquitous, and my capacity for distraction by it nearly endless, I wonder what its overall effect on our productivity has been.

On the one hand, learning has become unquestionably easier. Writing papers often consists of cycles of typing, opening a new tab, searching Google, finding crucial information, and switching back to type up my findings and analysis mere seconds later. This would have taken orders of magnitude longer in the pre-internet age but is now a seamless feature of students’ and writers’ lives. Educational content producers and random helpful figures on the internet are easily found and often filtered by how useful their information is. Finally, Wikipedia (which yesterday turned 20!) is always there to provide an overview of just about anything.

But that is helpful only when I am working. An expression I have found most apt in describing my personal productive capacity is Parkinson’s law: “work expands so as to fill the time available for its completion”. The shorter the deadline, the more productive I become. A longer deadline gives me time to slack off and fuels procrastination. And while procrastination has existed since the day humans began working, the magnitude of its influence has grown larger than ever before.

The attractiveness of distractions has grown particularly fast as our attention has been commodified, with a profit motive attached to our eyeballs. Devices and applications are extremely efficient, not at improving your overall well-being, but at guiding your attention in whatever way software engineering teams see fit. This is a uniquely modern curse.

To bring this back full circle, I must clarify that I would unquestionably choose the current challenges of slowing growth and hyper-distraction over those of intellectual scarcity and persecution. We have traded away the incredibly cruel world of the past for good reason.

However, we must think harder about the questions posed by the information age. How should one deal with information overload and the increasing complexity of decisions (particularly major life decisions)? How should we design our relationship with our technology to leave us well informed, more in control, and less distracted? How should we think about the economy and our role in it — particularly if much of the low-hanging fruit has been plucked, and humans (with the same brains and bodies) are asked to jump higher than before in order to achieve the same GDP growth achieved in the past?

The curses of the past have been traded away for lesser, and in some ways opposite, curses of the present. Acknowledging them, and answering the questions they raise, is something I will continue to attempt. Luckily, the internet has shown me that I am not alone.

Consciousness: What it is and why it matters

This is part one in a series on consciousness

I’ve wanted for some time to begin writing about my views on philosophical topics in an approachable but serious manner. With the advent of a new year, I figured I would begin publishing weekly posts in this vein, starting with a series on consciousness.

By consciousness, I mean something quite basic: the fact of experience, or what it is like to be something (in Thomas Nagel’s sense). I do not mean self-awareness, or the capacity to reflect, report, or remember. Non-consciousness, on the other hand, is simply the absence of consciousness. I avoid the word unconsciousness because it often refers to parts of the human mind that lie out of reach of consciousness, and I am talking about more than just us.

My rationale for writing about consciousness is twofold. First, consciousness is in many ways foundational to everything we care about, especially in ethics, another topic of great interest to me. Understanding how widespread consciousness is, is crucial for developing our moral frameworks (lest we vivisect dogs again because we believe they are soulless) and our general theories of the world. Second, I believe the current dominance of a physicalism (the belief that physical matter is the only fundamental substance that exists) that treats non-consciousness as the default is misguided. This version of physicalism rests on shaky premises, which I wish to investigate and challenge.

That challenge is what I will begin in this post. Positing that something is non-conscious is an odd endeavor in the first place. It involves using your own conscious states to try to demarcate what, outside your own mind, is not conscious.

When you imagine a rock, or another entity you believe to be non-conscious, you are using your consciousness to imagine or sense it. You cannot go a step further and imagine its non-consciousness, for imagining entails consciousness of some sort. Instead, what appears to happen is that you find yourself unable to apply your theory of mind to such an object, and, perhaps with the assistance of other beliefs (consciousness requires a brain, or information processing, or movement), you then have the thought: “this rock is not conscious.”

However, when you drill down on these thoughts, they become difficult to justify. The capacity to use your theory of mind does not determine whether any given person, animal, or object is conscious. We can imagine what a dead person would be thinking while failing to imagine what a bat is feeling. Furthermore, investigating other beliefs about what is and isn’t conscious often reveals that they rely on the premise: things that are not sufficiently like me are not conscious.

This is what I shall write about in next week’s post.

-Alexander Pasch

Consciousness: Where it might not be

This is part two in a series on consciousness

Continuing from last week’s post, I shall explore how exactly one can doubt the consciousness of the objects one encounters. Again, by consciousness I mean any type of experience something or someone might have, or what it is like to be something.

From the birthplace of modern philosophy, Descartes gives us irrefutable reason to say that conscious stuff exists, at least for anyone thinking the sentence “I think, therefore I am.” Beyond the odd solipsist, most everyone agrees it is also reasonable to assume that other people are conscious. Today, we further assume that dogs and other mammals are conscious. What about trees? Grass? Rocks? The Sun?

For most of my life, these sorts of questions had an obvious answer. While the exact nature of consciousness remains mysterious, it was obvious to me that it was a product of the brain. The mind is what the brain does, to use a neuroscientific quip. Consciousness is something like information being processed, or a byproduct of a working functional system.

Yet I began to doubt these answers as I considered the unity of nature — the fact that all things, including our bodies, are made of the same particles that stars are made of, emergent from the same quantum fields. The trajectory of history also seemed to point in the direction of decreasing human distinctiveness (from Copernicus to Darwin to Goodall to AlphaGo), an expanding circle of moral worthiness, and a wider range of animals considered conscious.

So I investigated the actual premises — the underlying reasons — for a belief in non-consciousness. A starting point is noticing that human consciousness is profoundly altered by changes in the brain. This was noticed as far back as the Roman physician Galen, who wrote about how gladiators who suffered head injuries were permanently psychologically harmed. This presaged the connections between brain activity and conscious states that modern neuroscience has done much to uncover.

From here, it could be assumed that the requirements for consciousness are found in certain properties the brain has (whether as an information processor or through the functional roles it plays in living organisms). After all, if the brain is harmed or sedated, you lose consciousness (or at least the memory of it). Every theory of consciousness therefore gets selected first by whether it explains the consciousness of those who can say that they are conscious. Right now, that’s just us humans.

But the problem is that you can’t get a restrictive theory of consciousness off the ground without an additional assumption: that anything that isn’t sufficiently similar to us humans isn’t conscious at all. Otherwise, there is no way to disprove countervailing theories of consciousness that describe non-human objects as conscious.

If you want to say consciousness emerges when brains, or similarly complex objects, are formed, I can come along and say, “yes, that is one example of consciousness, but consciousness also occurs when only relatively simple objects are present.” You have to fall back on the intuition that things not similar enough to us are not conscious. No matter what restrictive theory you have to explain consciousness, there is no way to refute a wider theory of consciousness without that intuition.

The following argument articulates how this line of reasoning works:

  1. I am conscious.
  2. I can sense many things that are not similar to me (or the body I consider mine).
  3. Things that aren’t (sufficiently) similar to me are non-conscious.
  4. Therefore, there are many things that are non-conscious.

The argument relies on the intuition expressed in premise 3 (as well as on a vague notion of similarity). Yet every theory that restricts consciousness to some subset of things similar to us relies on it. Where this intuition arises from is of interest to me.

In next week’s post, I will investigate how this intuition might itself be emergent from the physicalist worldview, creating a circular argument.

Consciousness: Why people think it might not be everywhere

This is part three in a series on consciousness

Last week, I introduced the intuition that things “not sufficiently similar to us are not conscious.” This intuition matters because, without it, there is no way to ground a restrictive theory of consciousness. Put another way, without this intuition, you would find it impossible to defend the position that anything at all is non-conscious. It is present, whether explicitly or implicitly, in every restrictive explanation someone gives for why consciousness is or isn’t present somewhere.

One could argue that the intuition can be ignored by falling back on some other defining feature of consciousness. For example, if you believe processing information is necessary for consciousness, you might think the phenomenology grounding this belief (consciousness simply is information processing) justifies it, and thus justifies restricting consciousness to information processors. Ostensibly, falling back on this belief could remove the need to rely on the intuition described above. However, this falls apart when you look at the details.

For one, there is no single definition of information processing. It could be that everything in the universe is describable as an information processor (perhaps in the way a particle or an object enacts the laws of physics in order to interact with surrounding objects). But this ends up being an entirely non-restrictive theory.

To counter this, one might make the definition of information processing more restrictive. However, for any restrictive definition of information processing, the phenomenological grounding breaks down. I can see how my consciousness might be, in some loose sense, information being processed. But it is very unclear, phenomenologically, why any one restrictive definition of information processing should be the correct one. Without phenomenology to justify the choice of a specific restrictive definition, one has to fall back, again, on the intuition that things insufficiently similar to us humans are non-conscious (as that type of information processing happens to occur in our brains but not everywhere).

This brings us to the question: where does this intuition come from? Why believe that anything we experience is non-conscious? I believe it is a consequence of our current physicalist worldview. If the things in our environment move like clockwork, as physics tells us, they can be predicted without any mention of consciousness. In that case, the fact that we are conscious at all is something special that needs to be explained. The explanation usually ends up being a restrictive theory of consciousness (call it X). Because most things in the universe aren’t like you, you can then use this theory to explain why those things are, in fact, non-conscious. This, in turn, can be used to justify the version of physicalism you began with (the one that explains the world without reference to consciousness). This, however, creates a circular chain of justification.

In next week’s post, I will conclude my thoughts on consciousness by addressing some critiques and discussing why I think this topic is relevant in the first place. 

Consciousness: The relationship with the current physicalist worldview

This is part four in a series on consciousness

Last week, I discussed how one might justify a restrictive theory of consciousness (that is, any theory which says consciousness is not universal). I concluded that even if you try to ground your restrictive theory in your own phenomenology (your firsthand experience), you cannot do so without holding the intuition that things that aren’t similar to you aren’t conscious. I shall call this the “similarity intuition,” or simply “the intuition,” in this post.

Put in argument form, here is a way you might try to avoid relying on the intuition.

  1. Consciousness requires X
  2. X doesn’t occur in things not similar to me
  3. Therefore, things that aren’t similar to me aren’t conscious

Now you rely on (1) instead of the intuition. But you still need a reason to believe X is required. This could be done phenomenologically.

  1. My consciousness has certain essential properties that I can discover phenomenologically
  2. These properties are essential to any other consciousness
  3. These conscious properties can be mapped onto certain properties X, which are present in certain physical systems
  4. Therefore, if X isn’t present in something, it is non-conscious

This argument appears to sidestep the intuition but relies on it nonetheless. First, premise (2) assumes that properties essential to your consciousness are present in any consciousness. In other words, all consciousness must be similar to your consciousness, at least insofar as it has certain properties.

The similarity intuition is more clearly present in premise (3). Any restrictive mapping of phenomenological properties to a physical or mathematical system will be intrinsically self-centered, because it consists of humans mapping their experience onto their own brain states. To justify this mapping, one has to rely on the intuition that other, less restrictive mappings don’t describe consciousness. In other words, things not sufficiently similar to me (whose phenomenological states would map onto a physical system dissimilar to mine) are not conscious.

One could argue more easily against the second main claim I introduced in last week’s post. Here, I linked the current physicalist worldview to this similarity intuition in a circular, self-justifying relationship. One could argue that physicalism is compatible with panpsychism, an expansive view of consciousness that sometimes describes consciousness as a physical property common to all particles or physical systems.

Moreover, some might claim that physicalism needn’t weigh in on the debate over exactly where consciousness exists at all. Simply put, the more dissimilar a physical system is from a human being, the less we know about whether or not it is conscious.

If this were all that people claimed, I would have less of a problem. But most physicalists do not merely argue that there is no epistemic justification for believing that things dissimilar to us are or are not conscious. They do not sit in a state of agnosticism. They believe that such things (e.g., rocks, plants, waterfalls) are, in fact, non-conscious.

My claim is that there is an obvious connection between the common scientific-physicalist worldview, conceptualizing the world as clockwork, and the belief that most of the world is non-conscious. Furthermore, the similarity intuition is both justified by this worldview, and helps maintain it.

Some actual clockwork

I want to say here that science continues to be the best way we have for explaining much of the world. In countless ways it has made our lives easier to live. But it is also true that the questions scientists ask do not try to answer what I am talking about. They usually ignore consciousness, and for good reason: treating things in the world as clockwork puts us in a frame of mind to start making hypotheses, mapping out relations between cause and effect, and making predictions. This is an eminently useful endeavor.

But success in treating objects in the world as clockwork should not permanently cloud our judgements about whether, at the ground level, everything in the universe actually is determined. And it certainly should not prompt us to permanently believe that consciousness is present only in systems similar to us, at least not without proper justification.

My aim in this series on (non-)consciousness was to push back against a common dogma and to identify a common intuition used to justify physicalism. I don’t know how many readers I have convinced, but I hope to have at least pushed the conversation forward a little.

Best,

Alexander Pasch

My Place in the Education Data

In December 2019 (which feels like a decade ago), I had just wrapped up my degree in Neuroscience and wanted to see where I stood alongside all the other college graduates of the year. I came up with a set of waffle charts showing the number of graduates in several different categories but decided not to publish them.

Well, I’ve changed my mind. No longer will they sit in a file on my computer. Here is my place in the US Education Data.

First, I wanted to see myself compared with the 123 UT neuroscience bachelors graduating that year.

One box = One student (123 students total)

Here I am next to all US neuroscience bachelors.

One box = One student (~6500 students total)

I no longer fit as my own box at this scale, but here are UT and US neuroscience bachelors next to the other US bioscience bachelors.

One box = 100 students (~120,000 students total)

And finally, for the broadest perspective of the bunch, here are all the US bachelors next to the other degree recipients (PhD, MA, certificate, etc.).

One box = 6581 students (~4 million students total)

And that’s before getting to one billion Americans!
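For anyone curious how charts like these are made, the full code is linked below. As a rough sketch, a chart like the first one can be drawn with the pywaffle library; the labels and values here are my own illustrative placeholders, not the exact dataset:

```python
# Sketch of a waffle chart in the style used above, via pywaffle.
# One box = one student; the values are placeholders for illustration.
import matplotlib.pyplot as plt
from pywaffle import Waffle

fig = plt.figure(
    FigureClass=Waffle,
    rows=10,  # rows in the waffle grid; columns are computed automatically
    values={"Me": 1, "Other UT neuroscience bachelors": 122},
    legend={"loc": "upper left", "bbox_to_anchor": (1, 1)},
    title={"label": "UT neuroscience bachelors"},
)
plt.show()
```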

Hope you enjoyed. You can find the code here.

Alexander Pasch

Book Review: Homo Deus – A Brief History of Tomorrow by Yuval Noah Harari

Rating: 4 out of 5.

Some authors are capable of bringing so many disparate ideas to the table that you begin to wonder where the limits of their creativity lie. A few are able to turn the tangle of ideas they introduce into a coherent, compelling synthesis. Yuval Noah Harari has shown in Homo Deus that he is capable of doing this not only for human history (in his widely appreciated Sapiens) but for the human future as well. Where Homo Deus falters most is perhaps in its repetition. The first two sections contain several informative strands of thought, but it takes too long to reach the meat of the work: the section concerning the future of humanity.

Harari’s thesis will certainly be controversial to many readers. He claims that, over time, the religions of old have been replaced by the story of humanism, concerned fundamentally with the experiences of human beings themselves. The liberal variety of humanism, which dominated the 20th Century and lives on in democratic societies today, is now under threat from improving technology.

Giving value to individuals makes sense when you need them to fight wars, run factories, and participate in a growing economy. Through artificial intelligence and genetic engineering, advanced machines and enhanced humans will likely be capable of these tasks in the future. This will result in the breakdown of the liberal humanist story. Harari suggests two alternatives: a form of techno-humanism, valuing the experiences of technologically modified humans, or Dataism, valuing the free exchange of information above all else.

Concerned more with convincing the reader than assuaging them, Harari relies on analogies from the past and present. He starts by chronicling the shift away from the pre-agricultural human worship of animals. Not coincidentally, this shift occurred precisely when farmers began to domesticate animals. Suddenly, animals were seen either as a means for human gain or were ignored completely.

Furthermore, rulers were often given divine status precisely to justify the unequal value given to them and provide a structure that society could operate under. When capitalism and mass-mobilization required that men perform additional economic and military duties, they were given more inherent value. When mass-mobilization required these men to leave the factories in World War I, the women who replaced them also gained inherent value in the eyes of society.

These and other examples lead Harari to the conclusion that history describes a web of stories that humans tell one another to justify their actions. These stories are not feeble bits of imagination. The stories we tell ourselves about Jesus, capitalism, science, France, and others, direct the lives of billions of people, altering the world in their wake. For readers of Sapiens, this will be a familiar concept. 

Harari makes it clear that the dominant story of our age, liberal humanism, is under threat. When the peaks of intelligence become uncoupled from regular human beings, the value we give humans will certainly change. There is ample evidence that technology will profoundly alter the value structures we now cling to. Liberal humanism, which values every person’s experiences enough to allow them to vote and speak their mind, is likely to change.

The question Harari leads the reader to ponder is what story or value structure will come next. He suggests that the likeliest successor is Dataism, the belief in the value of connecting bits of information. I find this questionable. Harari does little to convince me that we will walk away from fundamentally valuing certain conscious experiences themselves. Perhaps this is because the work takes the contemporary materialist line on consciousness (which I will critique in a post next week). Regardless, I think the experiences of the most powerful beings around will likely dictate the value structure society operates under.

I do accept the likelihood that increasing information flow between people, cyborgs, and machines would usually provide net benefits to society at large. But I think every improvement to society will be justified by the amazing states of consciousness and harmony that increased information flow provides (as Harari himself emphasizes in defending Dataism). This is different from valuing information flow a priori. The quadrillionaire cyborgs of tomorrow (perhaps a future Elon Musk) will likely not be pleased if increased information flow leads to their suffering and ultimate destruction.

Whether Dataism pans out or gets panned by its cyborg critics, Homo Deus will certainly expand your conception of what the future will look like and where we’re heading as a civilization. It stands as a creative, albeit lengthy, successor to Sapiens.

-Alexander Pasch

Book Review: One Billion Americans – The Case for Thinking Bigger by Matthew Yglesias

Rating: 4.5 out of 5.

Certain issues with obvious premises are crucial to lay out in the mainstream simply because of their wildly neglected and consequential nature. The connection between America’s population and its relative economic might vis-à-vis China is certainly one. America falling behind in GDP means less global influence, and a relatively stronger China means greater global pressure for illiberal values. One Billion Americans attempts to break the relative silence on this topic and lay out paths to reach the title’s ambitious target.

A few basic factors seem fundamental to the continued strength of global superpowers: a strong military; a network of reliable allies; a functioning government; economic prowess; a resilient culture. Yet perhaps the most obvious factor is a population sufficient to produce enough excess wealth to dedicate toward the goal of global influence. No matter how innovative the cultures and economies of Singapore, Sweden, and New Zealand might be, their smaller populations place firm limits on their global clout. In the 21st Century, they will never hold the global influence that the US and China do.

For many patriotic Americans, the benefit of having America play a leading role in the world is a no-brainer. To them, acknowledging our international mistakes doesn’t refute our greatness by any means. Yet understandably, many others resist the notion that America need play a leading role at all. For them, wars in Vietnam, Iraq, and Afghanistan are cited as examples suggesting that the right amount of international involvement is little to none at all.

Yet, as so often occurs in such discussions, there is an absence of counterfactual reasoning. America does not act in a vacuum, and if you consider what other actors would do in America’s absence, the conversation becomes muddied and the cry for America to back off rings hollow. Here, a common Churchill quote is effectively employed, albeit in modified form: as “democracy is the worst form of government, except for all the others,” perhaps America has been the worst global superpower, except for all the others. Assuming the EU is unable to foster cooperation of the sort that would provide credible protection to South Korea while promoting liberal influence across the world, the only other realistic world leader in the next 50-100 years is China. For those with a liberal and democratic mindset, this should be a terrifying future.

Yglesias notes that the way America has led up to now isn’t by dominating population metrics. We can’t rely on less wealthy countries staying poor, and we shouldn’t hope that they do. He hammers this point home repeatedly: despite the threats associated with climate change, stalling poorer countries’ growth would be profoundly immoral, even if it would reduce emissions. The result is an acknowledgment that the only moral future available is one where poorer countries with larger populations follow our economic growth and challenge our dominance in gross GDP statistics.

Meanwhile, pushing back against the Chinese Communist Party (CCP) is one of the few areas where Democrats and Republicans often agree in the Trump (and likely post-Trump) era. While the exact nature of the pushback is up for debate, both sides are certainly rooting for team USA and against team CCP. Noting this, Yglesias assembles a set of proposals compatible with further bipartisan action.

He introduces numerous ideas to foster population growth through both of the possible avenues: immigration and babies. Here, Yglesias shows himself to be a competent and widely read interlocutor on pro-family and pro-immigration policy. Comparing the US to Canada and Australia, he shows how accepting more immigrants through a points-based system could bring in hundreds of millions of people over the next century. Such a system could value skills like knowledge of English and the ability to immediately get a job, making the integration of immigrants swifter and less prone to conservative pushback.

On the family side, he suggests the Family Fun Pack, a collection of proposals including a baby box of items for newborns, universal child medical care, and other family-friendly policies. Following in the footsteps of most other wealthy nations, he suggests we adopt parental leave, more holidays, and universal daycare, and create a friendlier culture for families with children. This is sensible. The lack of such policies makes childrearing in the United States much more arduous than in many parts of Europe and East Asia.

Yet the connection between such policies and population growth is limited at best, and Yglesias doesn’t take this fact seriously enough. He would likely respond that these policies would still marginally increase childbirth, while acknowledging that it would really take increased immigration to produce massive population growth. I think this is a serious point worthy of greater consideration. Relatedly, he fails to seriously examine the connection between religion and childbirth, which might not directly affect policy in America but obviously still matters when considering population growth.

Yglesias addresses numerous objections to his proposal, including which cities new residents would go to, housing shortages, and transportation woes. As he points out, we have more than enough space to accommodate massive population growth: one billion Americans would make the US about as dense as France and half as dense as Germany. In addition, many American cities are shrinking, especially in the Midwest. Greater immigration and federal relocation of government jobs to these areas could foster their revitalization.

Housing shortages can be alleviated by making zoning and housing policy decisions at a higher level of government with more sensible incentive structures, allowing more duplexes and other residences that accommodate greater numbers of people to be built. Congestion will go up, but policies like fixing roads, taxing cars off the road, and investing in smart urban transit (like S-trains) can alleviate much of the issue. Except for some odd takes, including the claim that it would be a waste to go to Mars (which certainly doesn’t fit the theme of the book), this section is replete with sensible analyses of urban policy.

All in all, I admire Yglesias’s patriotic and direct perspective. This is an important and timely book, especially in an era of divided government seemingly perpetually bereft of unity. We need to rediscover a healthy variety of patriotism and love of fellow countrymen and women. We need to remember that the alternative to American leadership is not leadership by the UN or the EU; currently, it is leadership by a government that fully believes in its fusion of Maoist political totalitarianism and state-capitalist economic policy. Pushing America to be larger can curb this dark vision. To this end, I would be happy to welcome a future with one billion Americans.

Review by Alexander Pasch