Chapter 19. Managing Deviant Behavior in Online Worlds


more pejorative meaning: someone whose behavior is somehow perverted, twisted, corrupt, or destructive.

Such "deviant" behavior may not be rare or idiosyncratic at all! If the online environment is such that destructive and abusive acts are encouraged and rewarded, then such behavior all too quickly becomes the norm.

Thus, we can only speak of behavior being "deviant" in the context of an online world where there is a code of conduct, and in which the vast majority of players adhere to this code. In this case, "deviant" behavior is simply behavior that violates this code of conduct.

I think it's important to avoid pejorative language, and in particular "moralistic" language, when discussing undesirable behavior. The issues here can be both sensitive and emotionally charged. It's all too easy for us, as the creators and maintainers of the system, to feel "besieged" by a deluge of abuse and to think of our less tractable customers as "bad" people. But the line between desirable and undesirable behavior is fuzzy and often crossed inadvertently and innocently. What may be "bad" in our view may be perfectly legitimate in the value system of the customer. Many of the violators of our codes of conduct are not "scum" but merely overzealous.

That is not to say that we do not have the right to take punitive action to protect the integrity of our world. But in my view, such action should be taken dispassionately and without moral condemnation.

I believe that the best policy is to always treat our customers with a high degree of respect, even when they have gone "astray."


What Are Some Kinds of Undesirable Behavior?

For an online world, undesirable behavior is behavior that damages or inhibits the proper functioning of the technical infrastructure, the game balance, or the social fabric. The following sections contain some examples of undesirable behavior, but they are by no means complete (human inventiveness will always outstrip attempts to classify it).

Attacks Against Technical Infrastructure

There are at least three types of attacks against technical infrastructure:

Denial of service— This could include crashing or locking up the servers, tying up the network bandwidth, corrupting the account database, or any other method of making the game unplayable for other players.

Unauthorized access— This could include access to internal parts of the system, or it could mean that the abuser is able to enjoy unpaid-for services. The most common type of unauthorized access is the use of another player's account by obtaining that login and password (typically via email scam).

Disclosure of private or sensitive information— This could include disclosure of credit card numbers, passwords, and other personal user data, or secure system data such as encryption keys and digital certificates.

Disruption of Game Balance

Game balance can be disrupted in a couple of ways:

Exploitation of design loopholes or bugs— In an ideal world, the gameplay would be perfectly balanced, and no player could gain an unfair advantage over any other or be able to advance their goals more rapidly than the designer of the system had intended. However, in complex systems like these, it is often the case that the design overlooks some subtle interaction between features of the environment. Players will inevitably find these loopholes and exploit them.

The situation is exacerbated by the fact that players have a strong incentive not to report the loopholes when they are discovered. This is part of the paradox of challenge—we like challenge, but we also want it to go away (by overcoming it). As game designers, it is our goal to create challenges, to "disempower people in interesting ways." It is the goal of players to re-empower themselves, perhaps in ways that we designers did not anticipate. (Part of the joy of building these systems is seeing the clever and unanticipated strategies that players come up with.) Thus, although players and designers are set in opposition, at a higher level we are all on the same team, and the behavior of both sides should reflect this. But it is sometimes hard for players to keep this in mind.

Denial of access to game resources— Many coveted game items (for example, special ingredients) can only be obtained at certain places or at certain times. Part of the "puzzle" of the game is figuring out where an item is available or completing a difficult journey to where the item is located. However, players who are powerful or adept at gameplay may monopolize the resource—not necessarily out of a desire to deny access to other players but out of a need to advance their own character's goals.

An example is the phenomenon of "farming" seen in EverQuest (EQ). High-level players will enter a low-level dungeon, seeking some rare item that is obtained by killing and looting a particular type of monster. Often the item will only "drop" a small fraction of the time, so many monsters of a particular type need to be killed in order to obtain the item. When the high-level players enter the scene, however, they quickly dispatch all of the monsters in the region, leaving none for the lower-level players who are also trying to obtain the same item. In addition, the high-level players may decide to "camp" the area, rapidly harvesting each new monster as it spawns. The result is an effective denial of the resource to any but the most powerful players.

Disruption of the Social Fabric

Disruption of the social fabric can be seen in several forms:

Harassment— Harassment is a broad class of behavior, typically involving repeatedly directing unwanted attention, via speech or action, toward another player, despite clear signals from the other player that such attention is unacceptable. The line between what is harassment and what is legitimate gameplay is fuzzy, especially in an environment in which players are competing directly against one another.

Spamming— Online games benefit from having a wealth of communication options, but many kinds of communication are unwanted. In particular, flooding a communication channel with unwanted messages negates the benefit of that channel and makes it harder for users to receive "legitimate" messages.

Begging— One very popular form of annoyance is begging—asking random passersby for gifts of money, items, and so on. Just as in the real world, beggars often take steps to ensure that they appear much worse off than they actually are, even going so far as to create a brand-new character (which then gives the received items to their primary character).

It should be noted, however, that while begging is annoying and undesirable, it is not, technically speaking, an abuse. Begging can be appropriate in some situations, although it should always be discouraged.

Frauds and scams— Various kinds of scams, particularly involving an exchange of material goods, are quite popular in game environments. Players may attempt to misrepresent the capabilities of an item, its rarity, or its monetary value. Players may attempt to substitute one item for another, in a classic "bait and switch" tactic. While most games provide a two-phase commit transaction system for the exchange of goods, in which both players must agree before the exchange can take place, many kinds of exchanges (such as "heal me and I'll pay you 100 gold") are not direct swaps of tradable goods and cannot be handled within the scope of a single transaction.
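To make the two-phase commit idea concrete, here is a minimal sketch of a trade session in which any change to an offer resets both confirmations, defeating the classic bait-and-switch. All names and the exact reset rule are illustrative assumptions, not the mechanics of any specific game:

```python
class TradeSession:
    """Two-phase trade: both players must confirm the final offer before
    goods change hands; any change to an offer resets both confirmations."""

    def __init__(self, a, b):
        self.players = {a: [], b: []}   # player -> list of offered items
        self.confirmed = {a: False, b: False}

    def offer(self, player, item):
        self.players[player].append(item)
        # Changing an offer invalidates prior confirmations (anti bait-and-switch).
        for p in self.confirmed:
            self.confirmed[p] = False

    def confirm(self, player):
        self.confirmed[player] = True
        return all(self.confirmed.values())  # True -> safe to commit

    def commit(self):
        if not all(self.confirmed.values()):
            raise RuntimeError("both players must confirm before the exchange")
        a, b = list(self.players)
        # Atomically swap the two offer lists.
        self.players[a], self.players[b] = self.players[b], self.players[a]
        return dict(self.players)
```

Note that this protects only item-for-item exchanges; the "heal me and I'll pay you" case remains outside the transaction, which is exactly why such deals are scam-prone.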

Identity scams— A particular type of scam is the attempt to pose as another player, especially if that player is well known and has a positive reputation. The perpetrator of the scam can then use the illicitly acquired reputation to gain advantages or favors from other players. Often the true character's reputation will suffer as a result of these actions.

Often this is done by exploiting ambiguities in the font used to display character names—for example, EQ uses a Helvetica-like font for the chat transcript, in which the digit one, the lowercase "l," and the uppercase "I" all look identical.
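One defense at name-creation time is to reject names that collapse to the same "skeleton" as an existing name once visually confusable characters are normalized. The sketch below handles only the few look-alikes mentioned above; a real deployment would use a much fuller confusables table, and all function names here are illustrative:

```python
# Characters that render near-identically in many sans-serif fonts.
CONFUSABLE = str.maketrans({"1": "l", "I": "l", "|": "l", "0": "o", "O": "o"})

def skeleton(name):
    """Collapse visually confusable characters so look-alike names compare equal.
    Translate before lowercasing so that uppercase 'I' maps to 'l', not 'i'."""
    return name.translate(CONFUSABLE).lower()

def is_impersonation(new_name, existing_names):
    """Reject a proposed name that is visually indistinguishable from one taken."""
    s = skeleton(new_name)
    return any(s == skeleton(taken) for taken in existing_names)
```

Run at character creation, this closes the "GandaIf impersonates Gandalf" trick before it ever reaches the chat window.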

Slander and libel— There are a number of reasons why a player might want to damage the reputation of another. The player may be angry at some real or imagined slight; the player may have been done real harm; the player might be enjoying the thrill of a practical joke; or the player might simply be engaged in a primate dominance game of some sort, stroking his own ego by undermining others. A popular pastime among adolescent males (who are often insecure about their own developing sexuality) is to advertise, "So-and-so is gay!"

False charges of abuse— One of the more difficult types of abuse to deal with is misuse of the system for reporting abuse. This is a special kind of libel, in which one player attempts to file a false charge of abuse or unacceptable behavior against another. This is a type of abuse in which automated collection of corroborating evidence can be helpful.

Inappropriate language— In order to gain the widest possible customer base, we need to ensure that the "norm" of social behavior is one that would be approved of by the larger culture outside. Parents are unlikely to let their children play on a system that is reputed to teach them habits of which they disapprove. Unfortunately, online worlds allow a limited form of escape from those very parents and may be perceived by children as a way of avoiding parental restrictions.

Inflammatory language, especially racial or cultural slurs— In a similar vein, there are certain topics or utterances that can trigger the feeling of a hostile environment. Many people have "hot buttons," that is, highly charged issues that habitually invoke strong feelings, especially feelings of anger. Others will not be able to enjoy the online experience in an environment in which they do not feel "safe." (It is unfortunate that a "least common denominator" approach to discourse of this type must be enforced—that is, limiting the discussion to language and topics that would be acceptable to nearly everyone. In particular, the intersection of the community standards of, say, San Francisco, California, and Charlotte, North Carolina is smaller than you might think.)

Note that attacks against the technical infrastructure are, for the most part, "out of game" activities—that is, the attack is conducted not within the game itself, but against the technical infrastructure (servers and network) that supports the game. This document is primarily concerned with "in-game" countermeasures—that is, remedies that occur at the design level, rather than at the infrastructure level. For this reason, technical attacks and countermeasures will not be discussed further in this document.

This document will also not attempt to address the issue of "out of game" email scams or other actions that are disruptive to the larger community outside of the context of the game itself.

All of the behavior types listed can be divided into three categories of severity:

Violations— This category of behavior is always inappropriate and should always carry a negative consequence when detected. An example of this category is false charges of abuse.

Annoyances— This category of behavior is usually inappropriate, but done in moderation is acceptable. An example of this category is begging. These kinds of behaviors should be handled via in-game punishments or incentives, unless the behavior is extreme.

Too much of a good thing— There are some types of behavior that are actually positive, but if overused or used in the wrong context can be undesirable. This kind of behavior should also be handled at the design level by in-game punishments and rewards.


Why Undesirable Behavior Is a Complex Problem

If a particular behavior is undesirable, why not simply modify the program code to make that behavior impossible? The problem is that in many cases, undesirable behavior is very similar in outward form to permissible or even desirable behavior. Distinguishing between the two often requires a human value judgment. For example, a computer subroutine would have a hard time distinguishing between someone who was "stalking" another player, and someone who was merely following a player because he was lost and too shy to ask for help directly.

In addition, different people have different opinions about what is and is not desirable, and that may change with a given context.

Many kinds of undesirable behavior are easily identified by human perception and judgment. However, the supply of "trusted" humans is limited. Reports from untrusted humans (the vast majority) need to be accompanied by corroborating evidence; otherwise, the reporting facility itself can become a kind of systemic abuse.


Why Do People Engage in Abusive or Undesirable Behavior?

One of the most desirable things about being in an online world is the freedom it offers. Being in an imaginary world, one is freed from real-world consequence and real-world accountability. With any reduction in accountability, there is always a strong temptation to explore the limits of one's newfound freedom, in particular by pushing the boundaries of acceptable behavior.

This is exactly what toddlers do during the "terrible twos"; having discovered a new world and a new perception that other people's feelings and judgments are not the same as their own, they attempt to discover the boundaries of this world—much to the consternation of parents whose job it is to set those boundaries.


Establishing a Code of Conduct

A code of conduct has multiple functions:

The primary purpose of the code of conduct is to inform the player population as to what kinds of behavior are considered permissible and to establish a social norm. Most players, if given a structure or set of boundaries, will willingly conform to them, especially if mild incentives are provided.

A few individuals will, of course, rebel against the restrictions in the code of conduct. Thus, the second function of the code of conduct is to establish a basis for taking corrective action against these recalcitrant users. The goal is to have users who conform to the code, with a minimum use of force or penalties.

Finally, a third function of the code is to assure parents or legal guardians that the environment we are creating is a wholesome one for their children. (This also applies to individuals who are considering joining for themselves but are not sure that they will feel safe within the environment.)

In a real online game there will, in fact, be two codes of conduct: an explicit, written code that is imposed by the designers of the system, and an implicit code that will emerge from the social interactions within the game itself. This second code is an important component of a rich and diverse play experience, possessing a homegrown feeling of authenticity that may be lacking in the first, explicit code. Thus, the growth of this implicit code of conduct should be watched and nurtured.

However, the code of conduct is more than just a set of rules. It is an integral part of the game's social discourse. In an MMORPG, most conversations tend to be "about" the goals of the game—leveling, getting better weapons, and so on. In an ideal situation, the codes of conduct, both formal and informal, are an important part of these conversations. By establishing goals, the designers of the game define an omnipresent context in which all actions, and in particular all conversations, take place.

A Mild Tangent

I like to visualize this as a "gradient" in the mathematical sense—that is, at any given moment in a multi-dimensional realm, you can always tell which direction is "up" and which is "down" by the gradient—sort of like standing on the slope of a hill in the fog; you always know which direction will take you higher. Thus, applying this metaphor to a game experience, it means that no matter what situation you find yourself in, you can always determine which actions will lead you nearer or farther from a specified goal, at least locally. And you can converse with those near you about the nearness of the goal and the direction to it, since you are in a shared context. We can talk about "unipolar" worlds, in which there exists a single overarching goal to which all other goals are subsidiary (such as leveling in EQ), versus "poly-polar" worlds, in which there are multiple orthogonal goals, another time.

Of course, in order for a code of conduct to be effective, it has to be enforced. The two steps for doing that are detection and corrective action.


Detection

Because the difference between desirable and undesirable behavior is often subtle, detecting when such behavior occurs is often a challenge. Typically, it's easy enough to see the final consequences of undesirable behavior (angry parents, lost customers); it's often much more difficult to establish the root cause.

More significantly, there will always come a point where someone needs to make a value judgment as to whether a particular act was acceptable or not within a particular context. This judgment may occur far in advance of the actual act (as, for example, prohibitions that are embedded in program code), or it may take place immediately after the event (as when someone reports a violation). It may in some cases even be made long after the fact, when someone notices a statistical pattern or analyzes a set of user complaints.

Fortunately, many of the more serious abuses tend to be consistent behavior patterns on the part of individuals. These behaviors tend to be a reflection of the underlying value system of the player, and that is not something that changes quickly. Bruce Schneier, security consultant and author of Applied Cryptography, once said this about online games: "It's not important to detect every cheat. It's important to detect every cheater." Thus, it is in many cases possible to get a history of significant actions and judgments associated with a particular individual, and use that information in making future judgments.

The two primary means of data collection will be:

Reports by automated agents within the system

Reports by witnesses within the game

Unfortunately, neither of these sources of information is reliable, but there is hope, because they are unreliable in different ways and can be corroborated in order to gain a more accurate picture. In particular, witnesses can be biased or untrustworthy but are good at interpreting what they see in terms of values. Automated agents are all too easily misled (mistaking legitimate behavior for impermissible behavior, for example) but are incapable of dissembling or shading the truth.



Simply detecting the presence of an undesirable behavior is often not sufficient to take corrective action, especially if the corrective action has a large potential negative impact or cost. A trust value needs to be assigned to the detection report, and supporting or refuting evidence gathered.

For simple cases, where the cost of corrective action is low, the reports can be collected by an automated system and the corrective action taken automatically. An example is profanity filtering; there is no need to report to a human customer representative each time someone uses an impermissible word or phrase.

For more complex cases, especially where the players in question have a considerable stake, a trusted human may be required to intervene. It is likely that there will be several "tiers" of trust, where there will be a large outer layer of "slightly trusted" individuals and an inner core of "highly trusted" individuals.

For extremely sensitive cases, there may be a "commit/no-commit" protocol that requires multiple trusted representatives to be involved, so that no single individual (either inside or outside the company) can gain great advantage by manipulating the system.

When combining witness testimony with data gathered by automated agents, care must be taken to ensure that the proper context of the event is maintained. A sentence taken in isolation can easily be misinterpreted. The customer service (CS) user interface should be designed so that arbiters can easily access and comprehend the relevant details of an incident without wasting too much time searching through log files.


Corrective Action and Remedies

This section contains a number of ideas for corrective action.

Make It Impossible

The most obvious way to deal with undesirable behavior is to modify the game rules so that the behavior is simply impossible.

This strategy works well if the distinction between permissible and impermissible behavior is easily discriminated by an algorithm. Care should be taken, however, to understand the long-term side effects of a prohibition.


Make It Unprofitable

If there is no way to prohibit the behavior outright, then it may be possible to make it either unprofitable or unenjoyable.

This can be done by associating a cost with the action (which would be low enough not to deter legitimate instances of the behavior), by removing the rewards gained, or by providing an alternate, easier way to achieve the same rewards.

This particular strategy only works, however, if the incentives are amenable to manipulation. In particular, games tend to have lots of overlapping incentive structures, and it is not always possible to remove an incentive without destroying the game.


Punishment

This is the threat of punishment against players who engage in proscribed behavior. This "punishment" action can be taken against the account holder (such as "banning" or canceling the account), or it can be taken against the avatar (removal of game capabilities).

One special type of punishment that has been used in some MUDs is public humiliation—turning the character into a "toad," public floggings of the avatar, and so on. These types of "virtual body punishments" are only effective if the player has an emotional investment in his or her character. There is even a danger that players will seek out such extreme punishments for the thrill of causing a bizarre spectacle.

Social Ostracism

This technique attempts to deter violators by making their actions known to the community and encouraging it to shun the violators. This approach is quite limited, for several reasons. First, if the rule being violated is an unpopular one, the community might sympathize with the transgressor. Also, there is a possibility that the transgressor may actively seek notoriety. ("There's no such thing as bad publicity!") This type of strategy is most effective in enforcing the emergent, unwritten code of behavior, rather than the explicit and formal one.

Subjective State

One concept that has been used a number of times is the idea of "subjective state." In this strategy, it is impossible for players to monopolize resources or spoil another player's experience because each player has his or her own independent experience. A classic example is the idea of the "subjective dungeon"—when a party of adventurers enters a dungeon to explore, a personal copy of the dungeon is created just for them, so that no other players can affect or spoil the experience for them. A subtler example is the "subjective quest" system used in Dark Age of Camelot. In this system, a quest is given to a specific character, and only that character can gain the benefits of that particular quest (although other characters may have their own quests).

Subjective state is a great strategy; however, it has one drawback—it undermines the sense that the player is participating in a shared world. Competing with other players for resources, causing events to happen that affect the experience of other players, creating complex interactions between factions; all of these are part of the fun. Thus, the "subjective state" strategy needs to be used in moderation, balanced with a global "objective" state that all players can share.


Vigilantism

The idea behind this method is to put into the hands of the players themselves the ability to take corrective action against abusers—to create a kind of "checks and balances" so that the outlaws are deterred in their efforts by the actions of other players.

There are two forms of this: defensive and offensive. Defensive vigilantism allows players to assist other players in defending against undesirable behavior but does not allow players to take retribution against miscreants, which offensive vigilantism does.

The advantage of this technique is that it adds a whole new dimension to the game, which can be quite enjoyable. The disadvantage, however, is that the same powers that can be used against abusers can also be used against innocent players. (Remember that the reason for giving the players this power in the first place is that we can't always tell the difference.) In particular, the difference between a vigilante and an outlaw is often just a matter of how they select targets.

Vigilantism is most effective when there is an easy way to keep the vigilantes accountable.


Assimilation

If you can't beat 'em, join 'em! With assimilation, we attempt to turn a weakness into a strength by incorporating the undesirable behavior into the game itself. The primary challenge is channeling the behavior so that it occurs primarily in the times and places that are appropriate to the design. For example, we might notice that a number of players like to ambush and rob other players; we could then encourage those players to take up residence in New York Central Park and become muggers, perhaps by discreetly letting it be known that the police presence in that area is especially thin, and that a mugger could make a good living there. In this way, we create a "place of danger" that is part of the game.
