Selfishness, change, and the stability of groups


Reading for exams, I can’t help but see this passage from Lahti and Weinstein (2005) as an abstract description of what is happening under the current presidential administration. The entire paper promotes understanding morality as group stability insurance, as a tension between enjoying some free-riding behavior during times of group stability and success—and doubling down to serve the group when stability is threatened from without. But what happens when the instability comes from within, and from the top? Well, as Lahti and Weinstein (2005) describe it, there’s a “race to the bottom.”

“Variation in commitment can endanger group stability
The principle of stability-dependent cooperation predicts an inverse correlation between the stability of groups and the tendency of their members to adhere to moral rules. However, if the dynamics of human cooperation were this simple, there are at least two potential problems that, when significant, could cause group stability to deteriorate too quickly for individual behavior in service to the group to increase to counteract it.

First, factors influencing group stability can be extrinsic and therefore not under the control of the group. When factors like resource limitation and intergroup competition act quickly, group destabilization may be difficult to anticipate and prevent. If this is true, high group stability and its concomitant low levels of service to the group will lead to an increased vulnerability of the group to fast-acting extrinsic sources of group instability.

Second, positive and negative effects on group stability are asymmetrical. As with many organized structures, an individual has more power to affect group stability negatively than positively. Under circumstances favoring group stability, each cooperator restrains self-service for the sake of the group, but generally contributes to group stability only in a small way. However, if individuals jockey for position within the group, they can initiate a rapid decline in group stability, as the prospect of exploitation shifts everyone’s adaptive strategy away from group service towards self-service. If moral rules are less important to people in times of group stability, and the usual restraints on within-group competition are relaxed, the opportunity would be created for individuals to compete to slightly exceed their neighbors’ moral decay. An individual would attempt to gain the greatest possible benefits from the group’s moral relaxation. The result would be a race to the bottom, where the bottom is the breakdown of group-serving cooperation and the outright neglect of group stability.

Of course, the dependence of individuals on their group means that when the disastrous race began to threaten group stability, the interests of everyone would be served by reversing the trend and maintaining the group. However, in cases where everyone’s interests are served by community action that is costly to each individual if unilaterally pursued, a tragedy of the commons results (Hardin, 1968). Everyone may continue to pursue actions that are beneficial to no one in the end, resulting in group destabilization. Indirect reciprocity is unlikely to be able to rescue a community from this situation. (Milinski et al., 2002, concluded otherwise, but the situation being described here is different from their experimental milieu. In actual societies, reputational costs and benefits may return too slowly to counteract the immediate benefits accruing to competitors in a race to the bottom). Thus, in both classes of hazard—extrinsic threats as well as races to the bottom—the fast acting nature of the changes is what is expected to jeopardize stability in human groups.” (pp. 56-7)

At a time when Russian hackers are trolling Star Wars fans and the president’s appointees to federal agencies essentially made their money by cheating those exact agencies (such as Betsy DeVos and Ajit Pai), we have both external threats and a situation where the people at the top—typically the unifying force of a group—are leveraging our ever-shrinking stability for personal gain. These are the conditions under which a group collapses.
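The “race to the bottom” dynamic described in the quoted passage can be sketched as a toy simulation (my own illustration, not a model from Lahti and Weinstein’s paper): if each member of a group slightly undercuts the observed norm of group service, the norm itself ratchets downward round after round until cooperation collapses.

```python
# Toy model of stability-dependent cooperation decaying into a race to the
# bottom. All parameter values here are illustrative assumptions.

def simulate(members=10, rounds=20, undercut=0.1):
    norm = 1.0  # initial level of group service (full cooperation)
    history = []
    for _ in range(rounds):
        # Each member slightly undercuts the current norm to free-ride.
        contributions = [max(norm - undercut, 0.0) for _ in range(members)]
        # The new norm is simply the average behavior everyone observes.
        norm = sum(contributions) / members
        history.append(norm)
    return history

levels = simulate()
print(levels[0], levels[-1])  # group service decays steadily toward zero
```

The point of the sketch is the one the authors make: no single defection is catastrophic, but because everyone’s best move is to be marginally less cooperative than the norm, the norm itself has no floor above zero.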

  • Bay, M. (2018). Weaponizing the haters: The Last Jedi and the strategic politicization of pop culture through social media manipulation.
  • Lahti, D. C., & Weinstein, B. S. (2005). The better angels of our nature: Group stability and the evolution of moral tension. Evolution and Human Behavior, 26(1), 47–63.


Quicksave: Ethics of Computer Games


In The Ethics of Computer Games (2009), Miguel Sicart expands on his dissertation and endeavors to “understand the ethics of computer games” (Sicart, 2009, p. 7), a broad goal which yields an equally broad explication. The “big picture” point of his book is that everything people experience is ethical, because people are ethical beings and bring their lens of ethics to bear on every object and relationship–including videogames.

Chapter 1 — Introduction
Chapter 2 — Ontology of games as designed objects
Chapter 3 — Player as a moral being
Chapter 4 — Framework for analysis of computer game ethics


Identity—Sicart disagrees with Turkle’s notion of the “second self,” asserting that it implies a subordination to a “first” self which influences the second self but remains immune from being influenced by role-playing as that second self. While I believe that Turkle would probably agree that the relationship is mutually transactive, Sicart makes sure the reader understands just how permeable the barrier is between the first and second self:

“In Turkle’s work the presence of that first is somewhat unclear, yet it does undermine the second self’s ethical autonomy. I will argue that being a player means creating a subject with ethical capacities who establishes phenomenological and hermeneutical relations with the subject outside the game, with the game experience, and with the culture of players and games. It is not a self parallel to the out-of-the-game self, but a mode of being that takes place in the game” (Sicart, 2009, p. 11).

Again, most videogame scholars would agree with this perspective. Gee’s “identity work” principle states that role-play inspires personal reflection and fosters social growth, just as Barab and his cohorts detail in their studies of “transformational play.” The play-self is informed by “real” self and, in turn, the experiences of the individual under the guise of the play-self inform the “real” self by giving the individual novel, lived-through situations with which to contemplate his or her identity. This focus on the player’s agency as a moral agent is what sets Sicart’s work apart from prior literature on videogame violence and computer ethics:

“The way games are designed, and how that design encourages players to make certain choices, is relevant for the understanding of the ethics of computer games.

But the main argument of this book, the one that I believe marks a turn from the conventional discourse relating to computer games and ethics, is my dedication to putting the player in the center. As designed objects, computer games create practices that could be considered unethical. Yet these practices are voluntarily undertaken by a moral agent who not only has the capacity, but also the duty to develop herself as an ethical being by means of practicing her own player-centric ethical thinking while preserving the pleasures and balances of the game experience. The player is a moral user capable of reflecting ethically about her presence in the game, and aware of how that experience configures her values both inside the game world and in relation to the world outside the game” (Sicart, 2009, p. 17).

Design—“It is not about how we inhabit a world, but how that world allows us to inhabit it” (Sicart, 2009, p. 36).

Sicart specifies that he is discussing computer game ethics, not simply game ethics in general, because 1) computer games have rules that cannot be altered or ignored by players; and 2) most computer games situate the player in a simulation, a carefully designed virtual environment meant to depict certain aspects of reality, typically infused with fictional elements (pp. 15-16). These simulations usually present more efficient, conveniently structured recreations of natural physical and social environments, filtering reality through a lens which focuses on the gameplay-relevant elements, and restructuring those elements to communicate clear goals for the player. Lastly, even though the rules are so powerful as to be immutable, computer games often obscure the presence and complexity of the rules, resulting in the “black box syndrome” (Salen & Zimmerman, 2005, p. 88), which “strengthens the supremacy of the rules system in the experience of the game” (Sicart, 2009, p. 27).

He breaks down games in terms of two “fundamental elements”: SYSTEMS and WORLDS (p. 21).

Of course, these two elements should align in purpose and function but, often, the rules of a game contradict the fiction of the world. This happens in XIII, when an amnesiac assassin is not allowed to kill a police officer (his example) and in Uncharted 2: Among Thieves, when plucky artifact-plundering Nathan Drake is meant to be a likable hero, yet players are forced to use him as an avatar through which to kill hundreds of people (my example). “The design of rules, then, can create values we have to play by” (Sicart, 2009, p. 22).

“The representational aspect of a computer game–its visual and narrative elements—is of secondary importance when analyzing the ethics of computer games. Games force behaviors by rules: the meaning of those behaviors, as communicated through the game world to the player, constitutes the ethics of computer games as designed objects” (Sicart, 2009, p. 23).

“Only when we have described the rules of the game can we analyze the game world, the narrative, and other audiovisual elements in relation to the core values and behaviors proposed by the game system. In other words, a computer game’s morals rest in its design” (Sicart, 2009, p. 24).

Sicart uses Jesper Juul’s definition of games to support his systems-world dualism of design, then elaborates by saying that analysis of design can be parsed into four categories: systems, worlds, both, and their interrelationship–“implying at least four dominant modalities of understanding games” (p. 25). In Chapter 2, Sicart uses Half-Life 2 to illustrate this point. The systems of the game involve the mechanics of shooting and physics manipulation, and the world builds a narrative in which the player character is a freedom fighter–and, more importantly, a friend to his allies. They work in tandem whenever the player must use a gravity gun to solve a puzzle, but they are at odds whenever the player attempts to shoot a friendly NPC. In those instances, the game rule supersedes the simulation rule “and these types of overrulings… are key elements for the understanding of computer games as ethical objects” (Sicart, 2009, p. 30). “Rules create the game; the fictional world contains it” (Sicart, 2009, p. 33).

Having rules, games have an ergodic nature–meaning the game as a system 1) states clear rules and goals and 2) evaluates players’ progress and determines their success and/or failure (pp. 30-31). The term “ergodic” was coined by Espen Aarseth and it is a “fundamental concept in the history of computer games research” (p. 30)–at least as far as Sicart, a fellow graduate of the IT University of Copenhagen, is concerned.

The rules of a game determine which aspects of the world must be represented in the game environment. Though a game’s narrative might encompass more physical territory or social interaction than is present in the game or accessible to players with any degree of agency, all that must be simulated are the elements and systems with which the player needs to interact in order to negotiate the rules within the system and achieve the goal. In this respect, “the formal structure of the game, understood as its rules and mechanics, is to some extent accountable for the end result of the fictional world” (Sicart, 2009, p. 32). By logical extension of the fact that the rules are the core ethical component of computer games, those elements of level and world design which are determined by the rules are also “ethically relevant” (p. 32). “In fact, virtual environments are constrained by the game rules, since all the elements that are not fundamental to the game are a mere setting for the actions of the game” (Sicart, 2009, p. 34).

Players are embodied agents, bringing their perception of reality to bear on their conceptualization of virtual game environments. Sicart uses the example of falling in video games, which we tend to consider a bad idea, unless the game (or genre) indicates otherwise. “This comparison [to the real world] implies that there are actually connections made between the real world and the game world in the mind of the player” (Sicart, 2009, p. 34), which he argues are on a deeper level than simply connecting the physics of reality to those in a virtual environment. Players also consider themselves embodied beings in the game world, having social agency–and responsibility–in the context of the game narrative.

The player, explicates Sicart, is the missing piece to defining the ethical gameplay of a computer game. It is not enough to analyze the rules of a game to understand its ethical design; the researcher must also account for the ways in which players will interpret the rules, react to them, create new rules, and psychologically process the experience. Sicart explains that the concept of “empowered players” is what enables emergent rules, like when an MMO community ostracizes those who harass new players (p. 36).

The “player-subject” is an ethical skin which an individual wears as a playful identity, co-constructed by the ontology of a game as determined by its rules and world, in order to remain faithful to that game’s ontology and optimize his or her game experience. However, this negotiated identity is simply a lens through which the real-world individual enacts and interprets the in-game events as ethically meaningful. The construct of a player-subject is dependent on a moral being voluntarily assuming a role in a virtual environment. The moral being still has primary agency over the game experience; it is what determines the morality of the player-subject and, at times when the game asks the player to consider the ethics of a game with a level of self-awareness beyond that of just the player-subject, it is the moral being generating this player-subject which then takes precedence in interpreting and making decisions (Sicart, p. 77).

“To play computer games is a cultural process in which we grow up and mature as players” (Sicart, 2009, p. 89). So, more experienced players are more capable of understanding a game’s rules in the context of other games with similar experiences, and are more literate in the rhetoric of games.

Quicksave: Values at Play


In the book Values at Play in Digital Games (2014), Mary Flanagan and Helen Nissenbaum write about how various values—primarily ethical and political values—are evidenced in videogames, and how designers can consciously integrate values into the play experience. Though they choose to focus on ethical and political values, the Values at Play model itself is culturally agnostic: a value-free tool for analyzing how people can communicate values through play.

Summary: Their “core premises: (1) there are common (not necessarily universal) values; (2) artifacts may embody ethical and political values; and (3) steps taken in design and development have the power to affect the nature of these values” (Flanagan & Nissenbaum, 2014, p. 11).

“We propose three key reasons why it’s important to study values in games. First, the study of games enriches our understanding of how deep seated sociocultural patterns are reflected in norms of participation, play, and communication. Second, the growth in digital media and expanding cultural significance of games constitutes both an opportunity and responsibility for the design community to reflect on the values that are expressed in games. Third, games have emerged as the media paradigm of the twenty-first century, surpassing film and television in popularity; they have the power to shape work, learning, health care, and more” (Flanagan & Nissenbaum, 2014, p. 3).

They believe, as I do, that Schön’s work on reflective practice is a key conceptual link in fostering conscientious social agents (p. 11).

In Chapter 3, they–along with contributing writer Jonathan Belman–create a list of aspects of games which can be used to promote values through play:

  1. Narrative premise and goals
  2. Characters
  3. Actions in game
  4. Player choice
  5. Rules for interaction with other players and nonplayable characters
  6. Rules for interaction with the environment
  7. Point of view
  8. Hardware
  9. Interface
  10. Game engine and software
  11. Context of play
  12. Rewards
  13. Strategies
  14. Game maps
  15. Aesthetics

They also coin the Values at Play (VAP) heuristic, which is supposed to be “a hands-on, dynamic approach to considering values in design” (p. 75). Its three components—discovery, implementation, and verification—represent the design, development, and assessment of a values-focused game.

The chapter on discovery can be summarized like this:

  1. “key actors” = get the right people involved
  2. “functional description” = understand your intent
  3. “societal input” = be mindful of context
  4. “technical constraints” = don’t design something unattainable
  5. “defining values” = craft a clear and consistent meaning

The last chapter (that is written by the authors, not a contributing writer) focuses on verification, on the assessment of games designed to convey values through play.

“Verification is crucial to any technological system. It is relatively simple to verify that a toaster achieves its aim of browning bread evenly without blowing a fuse. It is somewhat more difficult to verify that a Web search engine finds what users are seeking. Verifying values in games poses even greater challenges, primarily because assessment must take into account the complex interdependencies among the game (as artifact), its players, and the context of play” (Flanagan & Nissenbaum, 2014, p. 119).

They suggest three interpretations of the idea that games have verifiable values:

  1. people change their attitudes, beliefs, and actions
  2. appreciation and understanding are broadened and/or deepened
  3. a game could have a “systematic impact on players’ attitudes, empathy, or affect” (p. 120)

They fail to outline a rigorous methodology of verification, but they do encourage the reader to think about verifying values in play just as a designer would assess any other aspect of a game’s design—as an iterative process which should be assessed and reassessed continuously, by many groups of people, throughout the entirety of game production. Essentially, they suggest the same three methods which game designers already use: meetings, playtests, and scientific assessments.

Review: It’s a good book to read if you’re just starting to think about how games can be designed to convey values. However, if you’ve read other scholarly books on game analysis, this one reads very similarly:

Games are important —> case studies —> here’s my concept —> more case studies —> generic tips for implementation

Their case studies are fine, but they’re not cite-worthy, especially since they’re basic descriptions of well-known games, with little to no reference to academic writing embedded in their accounts. All of the philosophy and science is stuck squarely at either end of the book. Gee, Bogost, and many others have written game analyses worth citing—but the ones here are just to illustrate the listed terms.

Still, their concept of the Values at Play heuristic is useful, in that it canonizes the pragmatic approach to designing games with value-laden play. Also, their list of game aspects is handy if you need a cite for an exhaustive list of avenues through which one can design or analyze design of games with values at play.

Verdict: Fledgling designers will find plenty of anecdotes and practical advice, but there isn’t much here for academics. If you’re interested in their case studies, it might be worth a careful read. Otherwise, you can skim this one.

Press Start


Hello, everyone.

My name is Ken Rosenberg. I am entering the fourth year of my doctoral program in the Telecommunications department at Indiana University. This blog was created with the intent to organize my thoughts, as well as to encourage consistent content production. Like you, I have many places to post my writing: social media networks (both casual and professional), department websites and school newspapers, other established blogs and websites—and, of course, the many journals and conferences, where academic writing is most respected. However, journals are not the place for idle thoughts, and others’ blogs do not neatly aggregate the writing of just one contributor. I hope that many of my future posts serve as drafts for publications, but much of what you’ll read here is more of a storyboard-like sketch of my research trajectory, rather than completely polished nuggets of knowledge.

I also intend the other pages of this website to serve as a public collection of my research. It is difficult to successfully break into a niche area of academic research, especially in such a pan-departmental endeavor as videogame studies. Furthermore, when leveraging multiple disciplines, it can be difficult to make a cohesive argument. I will use this website to create and detail my viewpoint, which will become a repository for me and a blueprint for others with similar interests.

Briefly, my interests lie at the intersection of communication and media studies, psychology, education, cognitive science, and game design. My major is in communications and I am minoring in cognitive science, but I also have a background in journalism and I’ve taken several classes in the School of Education here at IU. Each of these disciplines brings something different to the field of videogame studies, and all of them are important for my research.

So, what exactly is “my research” and why does it matter?

In defense of having a mission.

Social scientists are interested in how individuals and groups create, moderate, and maintain their psychosocial reality, and how the multiple realities from myriad individuals and groups interact to create, moderate, and maintain larger groups and cultures. Put simply, in the jargon of everyday social science research, we are interested in people’s attitudes, thoughts, and behaviors. We study things like attitude formation and persuasion techniques. Often, the only explicit goal is to increase our understanding of human cognition and behavior, but there is always an underlying purpose to social science.

Just as other sciences seek to report, analyze, and eventually predict natural phenomena, social science is also charged with the mission of better predicting social phenomena—but even the goal of prediction suggests another implicit goal, a more proactive mission. We are not merely trying to understand the world, we are trying to make it better. We don’t research climate change merely to craft more accurate models. The goal, of course, is to research the problem so that we can find solutions. Granted, not all research is so strongly normative. Ethnographic research, for example, is directed almost solely at understanding. Ethnographers spend a great deal of time unpacking their identities and those of their subjects, in order to eschew biased judgment (while simultaneously embracing diversity of perspective). However, even this research is constructive and not merely descriptive. Logically, the goal of increasing our cultural understanding is to foster a society which values cultural diversity and seeks to remedy social inequality.

If knowledge is power, then academics spend their careers building power—and it would be foolish to think that we do not, at the very least, have some ideals concerning the application of our research. Often, it is those ideals which direct us toward a particular niche in the first place. Media effects studies tend to use public service announcements to test theories of attention and attitude change, a strongly normative decision. We researched propaganda so that we could nullify its impact. We research agenda setting because we value our democracy and its marketplace of ideas. Sometimes, we do research without benevolent aspirations, like when persuasion research is leveraged to make better advertising campaigns. Still, even then, research is applied to the constructive and proactive shaping of society.

My mission, justified and explained.

I spend most of my time considering the motivations and justifications behind people’s social behavior. I am both fascinated and disturbed by the decision-making models of game theory. The Prisoner’s Dilemma and other coordination problems handily illustrate that even rational beings who would optimally cooperate are often led toward conflict instead of compromise—and that’s when people are assumed to be acting rationally. Albert Bandura, the man behind social learning theory, has also written extensively on the disengagement of moral agency and how people use contextual cues to justify antisocial behavior. Similarly, Philip Zimbardo, the primary researcher for the infamous Stanford prison ‘experiment,’ says that when people are placed in bad situations, they adjust their standards accordingly. There is hope, though, because Zimbardo also says “there are no bad apples. There are good apples placed into bad barrels.” The implicit solution, then, is to provide people with contextual cues which promote prosocial behavior.
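The logic of the Prisoner’s Dilemma mentioned above can be made concrete in a few lines. Using the textbook payoff convention (5/3/1/0 for temptation, reward, punishment, and sucker’s payoff—standard values, not tied to any particular study), defection is the better reply to either move the other player makes, even though mutual cooperation beats mutual defection:

```python
# Canonical Prisoner's Dilemma payoffs, ordered T > R > P > S.
PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def best_response(their_move):
    """Pick the move that maximizes my payoff against a given opponent move."""
    return max("CD", key=lambda my_move: PAYOFFS[(my_move, their_move)])

print(best_response("C"), best_response("D"))  # D D: defection dominates
print(PAYOFFS[("C", "C")], PAYOFFS[("D", "D")])  # 3 1: yet both defecting is worse
```

That gap—individually rational choices producing a collectively worse outcome—is exactly the tension driving both the coordination problems and the moral-disengagement research discussed here.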

One of the most prominent goals of social science research is the promotion of prosocial behavior. There is plenty of literature on aggression and empathy, because we want to respectively curb and foster their resultant thoughts, attitudes, and behaviors. Until somewhat recently, most research on the psychological effects of videogames was focused on how violent gameplay might lead to aggression. In the past decade or so, people have started to research the possible prosocial impact of videogames, including the potential for games to teach or foster empathy, perspective-taking, and reflection on social identity.

Games and learning is a fledgling field, but it is rapidly growing. It includes researchers, teachers, and game designers who are interested in the use of videogames as a mechanism for the embodied learning of systems thinking and other procedurally-generated expertise.

Games have goals, which necessitates the definition and recognition of success and failure states. By evaluating outcomes, games have the capacity to either promote or punish different actions. Despite being abstract models at their core, most games feature some sort of narrative which situates the player in a world that resembles our own reality. And, by virtue of having a situated narrative with normative outcomes, most of a player’s experience can be described as ethical—that is, as having the capacity to build the player’s ethical subjectivity (Sicart, 2009). Games, as interactive narratives, can serve as guided playgrounds in which players can learn reflective practice of their moral agency.

There are still many questions to answer concerning the application of videogames as tools to teach ethical subjectivity, questions like the following:

Is a violent in-game action a morally reprehensible act? If so, when and why?

If ethical decision-making is contextually situated, then are in-game choices a reliable measure of real-world ethical behavior?

How can game designers successfully balance overt instruction with emergent lessons? 

Most games feature gameplay mechanics which simulate aggressive acts of conflict solution, and ethical play is about making players reflect on the ethical nature of their in-game behavior. So, are shooters, for example, a genre in which it is impossible to feature ethical play that is also enjoyable? In other words, how can game designers craft a narrative in which the shooting mechanic is enjoyable while still positioned as unethical?

Despite the many questions that remain, we do have some answers.

For example, we know that people—at least those told that they’re being observed in an experimental setting—typically try to act in games according to their personal code of ethics. We also know that their self-reported attitudes concerning certain moral foundations do, in fact, correlate to in-game behavior which reflects those same values (Weaver & Lewis, 2012).

We also know that contextual cues impact players’ moral assessment of in-game acts and that social cues can even have a priming effect on players’ decision-making (Hartmann & Vorderer, 2010; Weaver & Matthews, in press). This seems to work the same way in games that it does in reality. For example, making organ donation the default option on driver’s license applications greatly increases the number of organ donors—which isn’t really reflection, as much as it is optimizing the environment to elicit prosocial behavior from individuals (Gigerenzer, 2010). Still, this means that it is possible to create “good barrels” for people, both in games and reality. Perhaps by experiencing this in videogames, people could begin to reflect on their own situations. Maybe we can consciously strive to build good barrels for ourselves and others.

And there it is. I have faith that people can become more ethical agents by learning reflective practice of their own morality. This post should at least indicate how and why this is possible. It certainly seems like a plausible method for promoting prosocial behavior—and, on a small scale, ethical games played in a classroom setting have been shown to foster reflection on identity and ethics. Games like Modern Prometheus task grade-school players with making ethical decisions, and the accompanying discussion sessions and writing exercises reinforce the transformational play which results from the negotiation of a real and virtual identity (Barab, Gresalfi, & Ingram-Goble, 2010).

I am interested in the potential for these lessons to be instilled into commercial game design, which is culturally pervasive and doesn’t carry the stigma of institutional learning. Recently, I presented a Well Played paper which analyzes how TellTale Games’ The Walking Dead uses principles of good learning (Gee, 2003) to teach reflective awareness of moral agency.