Andrew J. Ko

Does automation free us or enslave us?

January 19, 2011

In his new book Shop Class as Soulcraft, Matthew Crawford shares a number of fascinating insights about the nature of work, its economic history, and its role in the maintenance of our individual moral character. I found it a captivating read, one that encouraged me to think about the distant forces of tenure and reputation that shape my judgments as a teacher and researcher, and to reconsider to what extent I let them intrude upon what I know my work demands.

Buried throughout his enlightening discourse, however, is a strike at the heart of computing—and in particular, automation—as a tool for human good.

His argument is as follows:

"Representing states of the world in a merely formal way, as "information" of the sort that can be coded, allows them to be entered into a logical syllogism of the sort that computerized diagnostics can solve. But this is to treat states of the world in isolation from the context in which their meaning arises, so such representations are especially liable to nonsense."

This nonsense often gives the machine, rather than the person, the authority:

"Consider the angry feeling that bubbles up in this person when, in a public bathroom, he finds himself waving his hands under the faucet, trying to elicit a few seconds of water from it in a futile rain dance of guessed-at mudras. This man would like to know: Why should there not be a handle? Instead he is asked to supplicate invisible powers. It's true, some people fail to turn off a manual faucet. With its blanket presumption of irresponsibility, the infrared faucet doesn't merely respond to this fact, it installs it, giving it the status of normalcy. There is a kind of infantilization at work, and it offends the spirited personality."

It's not just accurate contextual information, however, that the infrared faucet lacks as it thieves our control to save water. Crawford argues that there is something unique that we do as human beings that is critical to sound judgment, but inimitable in machines:

"... in the real world, problems don't present themselves in this predigested way; usually there is too much information, and it is difficult to know what is pertinent and what isn't. Knowing what kind of problem you have on hand means knowing what features of the situation can be ignored. Even the boundaries of what counts as "the situation" can be ambiguous; making discriminations of pertinence cannot be achieved by the application of rules, and requires the kind of judgment that comes with experience."

Crawford goes on to assert that this human experience, and more specifically, human expertise, is something that must be acquired through situated engagement in work. He describes his work as a motorcycle mechanic, articulating the role of mentorship and failure in acquiring this situated experience, and argues that "the degradation of work is often based on efforts to replace the intuitive judgments of practitioners with rule following, and codify knowledge into abstract systems of symbols that then stand in for situated knowledge."

The point I found most damning was the designer's role in all of this:

"Those who belong to a certain order of society—people who make big decisions that affect all of us—don't seem to have much sense of their own fallibility. Being unacquainted with failure, the kind that can't be interpreted away, may have something to do with the lack of caution that business and political leaders often display in the actions they undertake on behalf of other people."

Or software designers, perhaps. Because designers and policy makers are so far removed from the contexts in which their decisions will manifest, it is often impossible to know when software might fail, or even what failure might mean to the idiosyncratic concerns of the individuals who use it.

Crawford's claim that software degrades human agency is difficult to contest, and yet it is at odds with many core endeavors in HCI. As with the faucet, deficient models of the world are often at the root of usability problems, and yet we persist in believing we can rid ourselves of them with the right tools and methods. Context-aware computing, as much as we try, is still in its infancy, nowhere near creating systems that come remotely close to making facsimiles of human judgment. Our efforts to bring machine learning into the fold may help us reason about problems that were previously beyond reason, but in doing so, will we inadvertently compel people, as Crawford puts it, "to be that of a cog ... rather than a thinking person"? Even information systems, with their focus on representation rather than reasoning, frame and fix data in ways that we never intended (as in Facebook's recent release of phone numbers to marketers).

As HCI researchers, we also have some role to play in Crawford's paradox about technology and consumerism:

"There seems to be an ideology of freedom at the heart of consumerist material culture; a promise to disburden us of mental and bodily involvement with our own stuff so we can pursue ends we have freely chosen. Yet this disburdening gives us fewer occasions for the experience of direct responsibility... It points to a paradox in our experience of agency: to be master of your own stuff entails also being mastered by it."

Are there types of software technology that enhance human agency, rather than degrade it? And to what extent are we, as HCI researchers, furthering or fighting this trend by trying to make computing more accessible, ubiquitous, and context-aware? These are moral questions that we should all consider, as they are at the core of our community's values and our impact on society.

Comments
  • 1 Travis Kriplean · 11:02 AM, 1/24/11

    Andy, it seems to me that speaking about degrading or enhancing human agency is a bit too coarse. To me, agency is about one's ability to effect change in one's life, and human agency is the same thing at a collective level. It is in fact difficult for me to come up with examples of software designs that solely degrade human agency.

    Take something like Facebook and its relationship to the "work" that it takes to maintain one's social network. It can degrade agency in the sense that it affords shallower forms of interaction, making it possible to invest less time, energy, and thought than one might have previously, and to forgo some of the potential benefits that come from trying to forge a connection deeper than "friending" and hitting the like button on a couple of status updates. But doesn't that lower barrier also enhance one's agency, for example, in maintaining a wider range of weak ties that might improve one's ability to manage one's career path through and across different organizations (as Bonnie Nardi describes in "NetWORKers and their Activity in Intensional Networks")?

    I'm wondering if a better way of discussing this is to ask to what extent a given design augments one's existing learned capabilities versus the extent to which it substitutes for learned skills, and how this plays out over long periods of time as the design becomes more deeply woven into the fabric of society. This is about creating dependencies where there was no previous dependency. Obviously a single design can do both, and have different effects for different people and use cases. The calculator is a great example.

  • 2 Jacob Wobbrock · 9:11 PM, 1/24/11

    The agency debate framing "man vs. machines" has been raging for a long, long time. The dreams and, in some cases, promises of Artificial Intelligence propelled this contest for decades. It becomes "new again" for HCI as we regard the role of designers and semi-autonomous, indirect, or ambient interfaces. But an error lies in the "vs." setup: human agency has been extended by machines more often than reduced by them. There's deep cooperation here. Consider the ramp, the pulley, and the wheel. (Simple machines, but machines nonetheless.) The word "automation" challenges our concept of "tool," and we no longer feel that we wield technology; maybe it even wields us. (And politics shows us how much humans like to feel "wielded.") A wheel is a tool as long as we're the ones driving it; once we have a runaway train on our hands, we question the benefits of that tool and regard it as an unholy machine with a "mind of its own."

    Just like that damned faucet.

    So maybe the goal here is not to avoid semi-autonomous machines, but to avoid making humans feel like the "used" rather than the "user." The design question is: HOW?

    (The faucet paragraph misses an important benefit: not having a physical handle is not just about saving water. After relieving oneself, one must touch a manual faucet to turn it on. After washing, one must touch the manual faucet to turn it off. Yuk. So one must then wash again, or leave with one's neighbors' germs on one's hands. Enter the infinite loop.)

  • 3 James Landay · 9:57 PM, 1/24/11

    I had the same reading as Jake. This guy is full of BS (I think). It feels like someone who really doesn't understand computing, machine learning, and how systems are designed.

    Yes, there are bad designs (not just for automation-based products, but for all kinds of products).

    Yes, some designers take the user out of the loop in places where they should not, and at other times they remove user control and people LOVE it (e.g., anti-lock brakes). We can continue to find these issues and solve them.

    Also, I have the same reaction to the faucets and the automatic towel dispensers -- I don't want to TOUCH them, or the door handle for that matter. That he doesn't even realize this is one of the major issues calls the entire argument into question.

  • 4 Jonathan Grudin · 9:23 AM, 1/31/11

    Well, I was prepared to argue against Crawford, but Jake and James swung the pendulum so far in the other direction that I'll give him a little support before concluding with guarded optimism. Discussions about the decontextualization of digital representations, and about the hubris and limited outlook of designers, have been around for a long time. The latter is a staple of CHI, and the former is particularly salient in CSCW, with its discussions of tacit knowledge, the unseen context around email or video, and so on.

    There has been progress on both fronts. Designers are more likely to recognize some role for data and testing, and non-designers are starting to acknowledge limits to a purely engineering approach. And we are now on the rising edge of a tremendous wave of digital representation of context as sensor/effector networks slowly come together -- a golden age for machine learning and information visualization is coming to HCI. Andy is right that it's primitive now, but it is coming. And I agree with Jake and James (I think) that the issue isn't new to digital technology. The invention of the calligraphy pen enabled many of us to write with an elegance that otherwise would have required much training and skill -- was that dehumanizing? Why is any tool different in principle? They all embody assumptions about their users which are probably not universally sound and may have ill effects if used outside the appropriate context.
