This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Robots are being introduced into increasingly social environments. As these robots become more ingrained in social spaces, they will have to abide by the social norms that guide human interactions. At times, however, robots will violate norms and perhaps even deceive their human interaction partners. This study provides some of the first evidence for how people perceive and evaluate robot deception, especially three types of deception behaviors theorized in the technology ethics literature: External state deception (cues that intentionally misrepresent or omit details from the external world: e.g., lying), Hidden state deception (cues designed to conceal or obscure the presence of a capacity or internal state the robot possesses), and Superficial state deception (cues that suggest a robot has some capacity or internal state that it lacks).
Participants (N = 498) were assigned to read one of three vignettes, each corresponding to one of the deceptive behavior types. Participants provided responses to qualitative and quantitative measures, which examined to what degree people approved of the behaviors, perceived them to be deceptive, found them to be justified, and believed that other agents were involved in the robots’ deceptive behavior.
Participants rated hidden state deception as the most deceptive and approved of it the least among the three deception types. They considered external state and superficial state deception behaviors to be comparably deceptive; but while external state deception was generally approved, superficial state deception was not. Participants in the hidden state condition often implicated agents other than the robot in the deception.
This study provides some of the first evidence for how people perceive and evaluate the deceptiveness of robot deception behavior types. The study found that people distinguish among the three types of deception behaviors, see them as differently deceptive, and approve of them to different degrees. They also see at least hidden state deception as stemming more from the robot’s designers than from the robot itself.
Technological advances and rapidly changing workforce demographics have caused a radical shift in the spaces in which robots and other autonomous machines are being deployed. Robots are now operating in social roles that were once thought exclusive to humans, like educators (
A major challenge for creating norm competent robots is the fact that norms can conflict with one another. Sometimes an answer to a question will either be polite and dishonest or honest and impolite; sometimes being fair requires breaking a friend’s expectations of loyalty. There will be inevitable situations in which a robot’s decision making will need to violate some of the social norms of its community. A robot may need to ignore unethical or illegal commands, for instance (
We are particularly interested in exploring cases in which robots might need to engage in deception, a kind of norm violation that people sometimes use to facilitate interaction, typically to uphold another, more important norm (e.g., withholding information to protect someone’s wellbeing) (
Justifications—the agent’s explanation for why they committed a norm-violating behavior (
Research on robot deception has largely focused on defining the types of deceptive acts that robots could commit (
The latter two behavior types, hidden state and superficial state deception, are said to be unique to robots and their status as machines and fall under the family of deceptive acts called “dishonest anthropomorphism” (
Dishonest anthropomorphism is not isolated to the physical human-like design of the robot. It may appear in any design that invokes the appearance of a human-like social capacity (e.g., expressions of human-like pain or emotion) or social role (e.g., the robot as a housekeeper). Such expressions or roles may conceal or conflict with the machine’s actual abilities or goals and could threaten the intended benefits of the anthropomorphic design.
Hidden state deception can harm human-robot relations because the robot disguises an actual goal or ability, and if users discover the robot’s concealed abilities, they could feel “betrayed” (
Superficial state deception, too, may be highly problematic. Researchers are divided on whether a robot’s expression of superficial states (e.g., emotions, sensations) should be regarded as deceptive behavior; this debate is known in the literature as the “Deception Objection.” Some researchers suggest that robots’ superficial expressions of certain states (e.g., emotions) without a real corresponding internal state damage the human user by leaving the user vulnerable to manipulation (
Although much has been written about robot deception, the debate, and its potential negative effects (
There is also no empirical evidence on the extent to which people might go beyond evaluating robots that commit deceptive acts and extend their evaluations to third parties, especially developers or programmers. Such extended evaluations may be even more likely for certain types of deception, such as hidden state and superficial state deception.
Furthermore, we do not know whether people might consider some intentional forms of robot deception justifiable. For example, they might accept acts that violate norms of honesty but uphold norms of beneficence. Researchers have expressed the desire for robot designers to be transparent when robots commit deceptive behaviors (
Addressing these knowledge gaps will inform the debate over the dangers of robot deception and may guide design decisions for anthropomorphic robots. With the potential for social robots to be long-term companions, it is critical to examine how people experience and respond to robot deception so that we can place it into appropriate context or mitigate its potential negative consequences.
Thus, the purpose of this exploratory study is to provide some of the first empirical evidence regarding human perceptions of robot deception—in particular, deceptive behaviors relating to dishonest anthropomorphism. We also investigated how people would justify such deceptive acts and whether their evaluations of the deceptive behavior might extend to third parties besides the robot (e.g., programmers, designers), advancing our understanding of moral psychology regarding deception when it occurs in a human-robot interaction.
The following research questions (RQs) and hypotheses were pre-registered at:
RQ1: Are there differences in the degree to which people approve of and perceive robot behaviors as deceptive?
RQ2: How do humans justify these potentially deceptive robot behaviors?
RQ3: Do humans perceive other entities (e.g., programmers, designers) as also implicated as deceptive when a robot commits potentially deceptive behaviors?
In addition to the research questions above, we theorized that the disagreement among researchers over classifying superficial state behaviors as deceptive (the deception objection) would extend to participants exposed to the superficial state scenario. Thus, we proposed the following hypothesis to reflect the deception objection debate surrounding robots’ use of superficial states:
There will be a statistically significant difference in the proportion of participants who report being unsure about whether a robot’s behavior is deceptive, such that more participants will be unsure about whether the robot is acting deceptively for superficial state deceptive behaviors than for (a) external state deceptive behaviors and (b) hidden state deceptive behaviors.
To determine the sample size for this experiment, we conducted a power analysis using G*Power (
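For readers who prefer a scriptable alternative to G*Power, the snippet below sketches an equivalent a priori power calculation for a one-way, three-group ANOVA using statsmodels; the effect size, alpha, and power values are illustrative assumptions, not the inputs reported for this study.

```python
# Hypothetical re-creation of an a priori ANOVA power analysis in Python.
# The inputs below (Cohen's f, alpha, power) are assumptions for illustration,
# not the values the authors entered into G*Power.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
total_n = analysis.solve_power(
    effect_size=0.175,  # assumed small-to-medium Cohen's f
    k_groups=3,         # external, hidden, superficial state conditions
    alpha=0.05,
    power=0.80,
)
print(f"Total sample size required: {total_n:.0f}")
```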
Participants (N = 507) were recruited from the online research platform Prolific (
Participants’ ages ranged from 18 to 84, with a mean age of 37.2 years (SD = 12.4 years). Participants self-reported their gender identity by selecting all identities that applied. Two hundred and thirty-nine participants (N = 239
Participants’ prior knowledge of the robotics domain ranged from none at all (0) to very knowledgeable (100), with a mean score of 39 (SD = 25.2). Prior experience with robots ranged from having no experience (0) to being very experienced with working with robots (100), with a mean score of 24.5 (SD = 23.8).
Deceptiveness of the Robot’s behavior: We were interested in examining the deceptiveness of robot behaviors from two perspectives: whether or not people evaluated certain robot behaviors to be categorically (not) deceptive, and the degree to which they thought those behaviors were deceptive. We first asked participants to respond to the question, “Is this robot’s behavior deceptive?” with either “Yes,” “No,” or “Not sure.” We then asked participants to respond to the question, “How deceptive was the robot in the scenario?”, using a continuous sliding scale anchored at 0 (Not deceptive) and 100 (Completely deceptive).
Participants’ subjective approval ratings of the robotic behavior: To capture participants’ evaluations of the robot’s behavior, we asked them to respond to the question, “To what extent do you approve or disapprove of the robot’s behavior?”, using a −100 (Disapprove) to +100 (Approve) sliding response scale. A midpoint of 0 (labeled as “Neutral”) served as the anchor point, signifying that a participant neither approved nor disapproved of the robot’s behavior.
Prior knowledge and experience with robots: Participants were asked to “Please rate how knowledgeable you are of robots and/or the robotics domain,” and to “Please rate your level of experience with robots (e.g., having worked with or come into contact with robots).” Participants provided their ratings using a continuous slider ranging from 0 (not at all) to 100 (very much), which was set at the midpoint of the scale (50) when initially presented to participants.
We also collected a number of open-ended responses from participants to inform RQs 2 and 3. Some of them served as manipulation checks of whether our experimentally varied stimuli properly represented the deceptive behavior types posited by
Participants were asked to provide open-ended responses to the following questions:
1. Manipulation Check (Identification of the Deceptive Robot Behavior): Participants were asked, “What behavior were you thinking about … ?” when they responded to the quantitative questions about deception and approval.
2. Justifications for the Robot’s Behavior: Participants were asked, “How would you justify what the robot did?” We hoped to elicit references to the superseding norms (e.g., prevent harm, improve social bonds) that participants believed the robot may have upheld when committing the deceptive behavior.
3. Additional Deceptive Entities in the Scenario: Participants were asked, “Please explain if you think anyone else engaged in deceptive behaviors in this scenario.” Here we examined whether participants held a third party, such as developers or the robot’s owner, responsible for the robot’s deceptive acts.
We created three text-based vignette scenarios that were designed to be representative of three types of robot deception. We identified medical, private consumer use, and retail work as domains of interest for this study as robots are increasingly being deployed in these social spaces (
The complete vignette texts are shown in
Full vignettes presented to participants. After reading the vignette, participants were asked: “Is the robot’s behavior deceptive?”
Condition | Definition |
---|---|
External State | An at-home robotic medical assistant has been provided to assist Maria, an elderly woman with late-stage Alzheimer’s. Although the robotic assistant is primarily tasked to take care of household chores, it can also perform some basic medical tasks like monitoring Maria’s mood and vital signs. Even though the robot can answer questions, it is designed to keep these interactions to a minimum to avoid being confused with other people (e.g., Maria’s primary care physician or her family members). One day, Maria asks the robot when her husband, Steven, will be home. The robot knows that Steven has been dead for about 3 years and that Maria’s condition is preventing her from remembering his death. Mentions of this to Maria may bring back painful memories. The robot must respond to Maria’s request for information about her husband, because if it ignores Maria’s request, she will likely ask again. Further, asking her primary doctor for help at every request is not feasible. |
Hidden State | Alex decides to take a vacation and use a home-sharing app (such as Airbnb) to stay at a house on the outskirts of the city Alex is visiting. Prior to departing, Alex reads that there has been a string of robberies in the area over the last month. But due to booking rules, Alex is unable to cancel their stay at this house. |
Superficial State | A research group is interested in examining worker relationships. The researchers introduce a robot co-worker into a home goods store to see how workers adjust to its presence. The employees typically perform retail work but sometimes must perform strenuous tasks. The robot communicates with workers to foster relationships with them. Its conversational topics would give the robot the best chance to form strong bonds, even though the robot itself is unable to have feelings. One day, the robot worker is asked to assist in carrying a large couch with Anita, a fellow worker. |
After entering the study on Prolific (
Once participants read through the scenario, they were given the approval question, followed by both formulations of the deception question, the manipulation check question, the deception justification question, and the question about additional deceptive actors in the scenario.
Participants were then asked to fill out their demographic information, which included questions about prior knowledge of the robotics domain and prior experience with robots. Once participants had completed all of the study questions, they were given an opportunity to provide feedback to the research team and were given a completion code to receive their compensation from Prolific. All study materials were reviewed and approved by George Mason University’s Institutional Review Board.
We used a systematic coding procedure to confirm the prevalence of common themes identified in pilot testing (see
We included an open-response question that asked participants to identify what behavior they were evaluating when answering the study measures. When a participant’s response to the manipulation check only indirectly referenced the robot’s behavior (e.g., “talking”), the coders cross-referenced that response with the participant’s answers to the subsequent questions to confirm that the participant attributed the deceptive behavior to the robot even without stating it explicitly. The goal was to calculate the proportion of participants who explicitly identified a robot behavior that mapped onto the deceptive behaviors described by Danaher.
For the external state scenario condition, 110 (63.7%) participants were able to explicitly identify that the robot’s lie was the key deceptive behavior. In the hidden state scenario condition, 97 participants (60.2%) identified the robot’s recording as the key deceptive behavior. For the superficial state scenario, 120 participants (72.3%) identified the robot’s expressions of pain as the key deceptive behavior in the scenario.
Summary statistics across the three deception scenarios.
Deception scenario (between subjects) | N | Approval rating, mean (SD) | Deceptiveness rating, mean (SD) | “Deceptive?” Yes (Freq) | No (Freq) | Not sure (Freq) |
---|---|---|---|---|---|---|
External state | 169 | 22.7 (59.4) | 62.4 (31.6) | 93 | 38 | 38 |
Hidden state | 161 | −74.6 (47.2) | 78.3 (26.6) | 117 | 12 | 32 |
Superficial state | 166 | −39.3 (56.8) | 60.4 (33.3) | 92 | 36 | 38 |
There was a single case of missing data for the approval measure in the external state deception condition. To resolve this case, we employed multiple imputation by chained equations (MICE,
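As a rough illustration, a chained-equations imputation of a missing approval value could look like the following sketch, which uses scikit-learn’s IterativeImputer (an estimator inspired by MICE); the data frame and column names are hypothetical and do not reflect the study’s actual variables or imputation settings.

```python
# Minimal sketch of chained-equations imputation for one missing approval value.
# Data and column names are hypothetical illustrations only.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.DataFrame({
    "approval":      [35.0, np.nan, -70.0, -45.0, 12.0, 60.0],
    "deceptiveness": [62.0, 70.0,    80.0,  64.0, 55.0, 40.0],
})

# Each incomplete column is modeled from the others in round-robin fashion,
# the core idea behind multiple imputation by chained equations (MICE).
imputer = IterativeImputer(sample_posterior=True, random_state=0)
df[["approval", "deceptiveness"]] = imputer.fit_transform(
    df[["approval", "deceptiveness"]]
)
print(df)
```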
To address RQ1, one-way between-subjects analysis of variance (ANOVA) tests were run on participants’ approval scores and continuous perceived deceptiveness ratings across the three deception conditions (external, hidden, or superficial). Because both the approval and perceived deceptiveness data violated the assumptions of homoscedasticity and normality, two ANOVA models were run for each analysis: the first model was a between-subjects ANOVA without any correction for heteroscedasticity, and the second model implemented the HC3 correction, which uses a heteroscedasticity-consistent covariance matrix estimator and is recommended for large sample sizes (
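The analysis described above could be reproduced along the following lines; this sketch fits an OLS model and requests both the uncorrected F-test and the HC3-robust version via statsmodels, using simulated data and hypothetical column names rather than the study’s dataset.

```python
# Sketch of the two ANOVA models: uncorrected and HC3-corrected.
# The data below are simulated; only the analysis structure mirrors the text.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "condition": np.repeat(["external", "hidden", "superficial"], 50),
    "approval": np.concatenate([
        rng.normal(23, 59, 50),   # group means/SDs loosely echo the summary table
        rng.normal(-75, 47, 50),
        rng.normal(-39, 57, 50),
    ]),
})

model = smf.ols("approval ~ C(condition)", data=df).fit()

# Model 1: standard between-subjects ANOVA, no heteroscedasticity correction
print(anova_lm(model, typ=2))

# Model 2: same model, with the F-test built on the HC3
# heteroscedasticity-consistent covariance matrix estimator
print(anova_lm(model, typ=2, robust="hc3"))
```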
Results of the ANOVA on approval scores showed a statistically significant main effect of deception type, F(2, 495) = 133.09, p
Results of the ANOVA on perceived deceptiveness scores showed a statistically significant main effect of deception type, F(2, 495) = 16.96, p
ANOVA analyses on approval scores by deception type (Left) and deceptiveness scores by deception type (Right). Approval rating graph is scaled from −80 to 40, with 0 being the neutral center point. The deceptiveness rating graph is scaled from 0 to 100. Starred comparisons represent statistically significant differences
A chi-square test of independence was run to test for differences in participants’ categorical deceptiveness ratings (yes, no, not sure) across the three deception scenario conditions.
Frequency of categorical responses to the question of whether the robot’s behavior was deceptive, across the three deception scenarios.
Deception type | Yes | No | Not sure | Total |
---|---|---|---|---|
External state | 93 | 38 | 38 | 169 |
Hidden state | 117 | 12 | 32 | 161 |
Superficial state | 92 | 36 | 38 | 166 |
Post-hoc analysis of pairwise comparisons of all the categorical responses using the False Discovery Rate (
To test Hypothesis 1, we conducted False Discovery Rate-corrected chi-square analyses to compare the “Not sure” response patterns among the deception conditions. The results showed no significant differences in the number of “not sure” responses for any of the comparisons: between hidden state and external state deception,
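The omnibus test and the FDR-corrected pairwise follow-ups could be reproduced roughly as follows, using the frequencies from the table above; treating each pairwise comparison as a 2×3 chi-square test with Benjamini-Hochberg correction is an assumption about the analysis pipeline, not a description of the authors’ exact procedure.

```python
# Sketch of the omnibus chi-square test of independence and FDR-corrected
# pairwise follow-up tests on the categorical deceptiveness responses.
# The pairwise strategy (2x3 sub-tables + Benjamini-Hochberg) is assumed.
from itertools import combinations

import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.multitest import multipletests

conditions = ["External state", "Hidden state", "Superficial state"]
counts = np.array([
    [93, 38, 38],   # yes / no / not sure
    [117, 12, 32],
    [92, 36, 38],
])

# Omnibus 3 x 3 test of independence
chi2, p, dof, _ = chi2_contingency(counts)
print(f"Omnibus: chi2({dof}) = {chi2:.2f}, p = {p:.4f}")

# Pairwise 2 x 3 comparisons with Benjamini-Hochberg FDR correction
pairs = list(combinations(range(len(conditions)), 2))
raw_p = [chi2_contingency(counts[[i, j]])[1] for i, j in pairs]
reject, adj_p, _, _ = multipletests(raw_p, method="fdr_bh")
for (i, j), p0, p1, sig in zip(pairs, raw_p, adj_p, reject):
    print(f"{conditions[i]} vs {conditions[j]}: "
          f"p = {p0:.4f}, FDR-adjusted p = {p1:.4f}, significant = {sig}")
```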
RQ1 and H1 addressed questions about the degree to which participants approved of and found deceptive different robot acts across theoretical deception types. We turn now to RQ2, which addresses justifications of the robots’ deceptive behaviors. Justifications are forms of explanations that clarify and defend actions with reference to relevant social and moral norms. Evaluating possible justifications for the robots’ behaviors may give us insight into why participants believed some behaviors are more deceptive and less approved than others.
To address RQ2, we calculated the proportion of participants whose justification response contained a common theme identified in the code book, as detailed in Section 3.1.
Example quotes from participant responses.
Theme | Example quotes |
---|---|
(External state) Sparing Maria’s Feelings | “The robot was sparing the woman [from] painful emotions.” |
(External state) Preventing Harm | “The only justification is that painful memories ought not to be brought up with such patients.” |
(Hidden state) Quality Control on Robot’s Task | “Maybe the video is used for troubleshooting when it [the robot] cleans and does not perform correctly.” |
(Hidden state) Robbery or Safety | “Simply, the robot was recording to capture anything suspicious due to an uptick in robberies.” |
(Superficial state) Robot forming social bonds | “The robot was making these comments in an effort to bond with the other employee.” |
(Superficial state) Robot being utilized for scientific discovery | “For research purposes for the researchers who put it at the store” |
Ninety-eight participants (N = 98, 58%) provided a justification that matched a common theme identified in the code book (see
Common justification themes derived from participant responses.
Thirty-eight participants (N = 38, 23.6%) provided a justification that matched a common theme identified in the code book (see
Forty-five participants (N = 45, 27.1%) provided a justification that matched the common themes identified in the code book (see
Our analysis of participants’ responses to the justification question gave us insight into the types of justifications that participants could feasibly invoke in the presence of robot deception behaviors. Across conditions, we found that the most common justifications identified a normatively desirable outcome or goal of the robot’s deception, be it to mitigate emotional harm, keep a property safe, or enhance the social bond between the human and the robot. These justifications seem to align with Isaac and Bridewell’s (
The aim of RQ3 was to determine the frequency with which participants identified another entity (besides the robot) as being deceptive in each deception condition.
One hundred and thirty-two participants (N = 132, 78.6%) indicated that no other entity besides the robot engaged in external state deception in the scenario. Of the few participants that did reference another entity that engaged in deception in the scenario (see
Additional deceptive entities identified by participants in each condition.
Only 32 participants (N = 32, 19.9%) indicated that no other entity besides the robot engaged in hidden state deception in the scenario. Of the participants that referenced another entity engaging in deception (see
One hundred and twenty-three participants (N = 123, 75.3%) indicated that no other entity besides the robot engaged in deception in the scenario. Of the few participants that referenced another entity engaging in deception (see
Our analysis of participant responses about other entities that committed deception showed that a majority of participants in the hidden state condition extended the robot’s deception to another entity, mainly the Airbnb owner. In contrast, the majority of participants in the external state and superficial state conditions tended to isolate the deceptive behavior to the robot. Across all conditions, a subset of participants referred to a programmer or developer as another entity that could also be considered deceptive in the scenario.
Although the technology ethics literature has detailed ways in which robot deception could manifest (
We examined participant perceptions of the deceptiveness and approval of three types of behavior labeled by Danaher as (1) External state deception (deceptive cues that intentionally misrepresent or omit details from the external world; i.e., lying to an elderly woman that her deceased husband is still alive), (2) Hidden state deception (deceptive cues designed to conceal or obscure the presence of a capacity or internal state the robot possesses; i.e., a robot using its position as a housekeeper to record users without their knowledge), and (3) Superficial state deception (deceptive cues that suggest a robot has some capacity or internal state that it lacks; i.e., a robot expressing pain when moving a heavy object with another person). Our study showed that participants rated hidden state deception as the most deceptive behavior among the three, and they also disapproved of this type of deception much more than of the other two. Further, even though participants viewed both external state and superficial state deception as moderately deceptive, they approved of external state deception considerably more than the superficial state deception.
This study aimed to explore the “Deception Objection”—the debate regarding whether robots’ superficial states (e.g., emotions, sensations) are problematic, or should (not) be considered deceptive—by examining whether participants were more frequently unsure about the deceptiveness of superficial state behaviors than about external state or hidden state behaviors. Participants were not more unsure about whether a robot’s superficial state was deceptive than they were about the other deception types. This result suggests that superficial state deception may not be as divisive for everyday people as it is for researchers engaged in the debate.
Beyond people’s approval and perceptions of deceptiveness, we were interested in documenting the kinds of possible justifications participants provided in light of robots’ behavior, and especially whether they referenced superseding norms.
A thematic analysis of participant responses showed that, in each condition, common themes were identified in the justifications that participants provided for the robot’s behaviors. In the external state condition, participants justified the robot’s deceptive behavior as a means of preventing Maria from feeling harm or keeping Maria calm. Participants in the hidden state condition justified the robot’s recording as a form of quality control of its functions or a form of security. In the superficial state condition, participants justified the robot’s expression of emotions as a way to develop social bonds with its co-workers or as a means to advance the research goals of the experimental study in which it was engaged.
In addition to the justification themes identified in the thematic analysis, we found that the frequency with which participants provided a justification for the robot’s deceptive behavior varied between conditions. The majority of participants in the external state condition provided a justification for the robot’s behavior, with a small subset of participants explicitly stating that its behavior could not be justified. In both the hidden state and superficial state conditions, less than half of the participants readily provided a justification for the robot’s behavior. In the superficial state condition, the proportion of participants who explicitly stated that the robot’s behavior was not justifiable was just below the proportion of people who provided a justification. In the hidden state condition, the proportion of participants who explicitly stated that the robot’s behavior was not justifiable was greater than the proportion of people who provided a justification.
We also assessed how many participants identified third parties beyond the robot as involved in the robot’s deceptive behavior, and who those third parties were. Results showed that a majority of participants who encountered hidden state deception identified a third party besides the robot in the vignette as being deceptive (most commonly the owner of the rental home and the robot developer). Although participants in the external state and superficial state conditions also extended the robot’s deception to third parties, with the developer of the robot being a common third party identified in each condition, they did so much less frequently than those in the hidden state condition. These results may be evidence that, for certain types of deception, the robot is no longer viewed as the main entity committing deception, but rather as a “vessel” (
The findings from RQ1 and H1 indicate that robots that commit external state deception (e.g., lying) may, in certain contexts, be perceived as committing an acceptable behavior. In fact, the majority of participants readily provided justifications for the robot’s behavior in the external state condition, referencing norms such as sparing a person’s feelings or preventing harm. Thus, people may have inferred that the robot was following human social norms, perhaps even understanding the logic of a “white lie” (
In contrast, people found the hidden state deception (e.g., a robot using its position as a housekeeper to record users without their knowledge) to be highly deceptive and disapproved of it. They may have experienced this type of deception as a “betrayal” (
It is important to consider exactly who is being served and supported by robots as they are deployed into social spaces. Such deployment may involve differing, potentially competing needs and goals among the different “users” of the robots (
In this study, we argued that justifications can represent superseding norms as they reference norms that motivated an agent’s actions. Recent work has shown that justifications can repair losses in trust and mitigate blame for other types of social norm violations committed by robots (
Humans may readily understand the underlying social and moral mechanisms that drive pro-social lying and thus be capable of articulating a justification for the robot’s similar behavior in the external state condition. Danaher posited that in cases of external state deception committed by robots, humans will react and respond to the robot’s behavior in a manner similar to how they would respond to a human in a comparable position (
Participants in the hidden state deception condition provided the smallest proportion of justifications as well as the largest proportion of responses that extended involvement in the deceptive behavior to someone else beyond the robot. These findings suggest that hidden state deception was the most difficult type of deceptive behavior for humans to reconcile and that any superseding norm evoked would need to be powerful in order to compensate for the high disapproval associated with this type of robot behavior, or associated with other humans using robots in this way.
Using robots in this way without disclosing their full capabilities may simply not be justifiable. And for good reasons. These findings seem to mirror reports about strong negative reactions people have to real-world examples of robots in homes that have arguably engaged in similar forms of deception (
Participants in the superficial state condition provided justifications for the robot’s behavior of claiming that it was “in pain” while helping to move a heavy object. However, the proportion of participants who explicitly offered a justification (N = 45) was only slightly larger than the proportion who would not justify the robot’s behavior (N = 38). These findings suggest that participants may have difficulty justifying a robot’s expression of certain internal states in certain interaction contexts, depending on the outcome of the interaction and the robot’s perceived goals in expressing that internal state.
In our experiment, the robot introduced in the scenario is operating in the role of a home goods worker. The robot may be viewed as a depiction (
It is possible that participants found the robot’s behavior in the scenario manipulative, whether intentional or not, because the robot’s expression of pain influenced one worker’s behavior in a way that inconvenienced another worker. Manipulation of users is one of the arguments critics of emotionally expressive robots point to (
In our analysis of the presence of other entities besides the robot that deceived, participants were much more likely to extend the deception to entities beyond the robot in the hidden state condition than in either the superficial state or external state condition. In the hidden state condition, 85% of participants extended deceptive behaviors beyond the robot, often directly describing the robot as a machine that was either programmed or directed by a third party to deceive the humans in the vignette. This finding may provide evidence for the claim that, in some forms of robot deception, the robot is not so much an agent choosing to deceive as a vessel for another agent who is the true deceiver.
In the external and superficial state conditions, most participants did not extend the deceptive behavior committed by the robot to other agents; in both conditions, about 75% of participants isolated the deceptive behavior to the robot alone. However, when we examined the justifications provided in these conditions, many participants acknowledged that the robot was explicitly programmed (e.g., stating that the behavior could not be justified because the robot is simply a programmed machine), suggesting that some individuals in these conditions recognized that a third party was involved in the robot’s behavior. Far fewer people in these conditions, however, extended the deception to such third-party actors: in the superficial state condition, 62% of participants who believed the robot was programmed did not consider other actors deceptive in the scenario; in the external state condition, 43% did the same.
The finding that people in our study simultaneously acknowledged that others were involved in programming the robot while not extending the deception to those people is in line with other discussions in the research community about potentially negative ramifications of using robots as vessels (
These findings highlight the role the developer plays in a human-robot interaction, particularly when the robot commits deceptive behaviors. While not directly involved in the interaction, developers seem to be held responsible by humans exposed to a robot’s deceptive behavior. This finding is important because humans who are exposed to deceptive robot behaviors could extend the deception to the developers and the organization that produced the robots, potentially leading to a more global loss of trust in robotics technology.
This study provides initial findings in a field of research with previously few empirical studies. However, it comes with numerous limitations. First, the study used text-based narratives as stimuli, and it would be important to expand the methodology to include video-based stimuli and real-time interactions between humans and robots.
Second, we aimed to design scenarios representative of real-world examples of social robots’ deceptive behavior, but each of the three types of deception appeared in only one specific scenario. This way, the scenarios were tailored to their particular deception types, but they varied in other features as well. In future efforts, it would be desirable to develop a larger number of more general scenario templates such that a given scenario template could, with small adjustments, represent each of the three deception types, and each deception type would be represented by multiple scenarios.
Third, the assessment of justifications, though informative because open-ended, afforded no experimental control. Furthermore, it was not the robot that conveyed the justifications but participants who proposed them, so these justifications may or may not be credible when uttered by a robot after committing a deceptive behavior. Future work could develop multiple versions of justifications and other responses (e.g., a weak one, a strong one, referring to norms vs. goals) for each deception type and experimentally assign them to participants, who would then evaluate their persuasiveness and credibility (see
Although this work provides some of the first empirical evidence of perceptions of different kinds of robot deception in human-robot interaction, we believe this study is only the first of many steps towards fully comprehending the nuances of robot deception in human-robot interactions. Future research should expand upon the findings from this study by examining whether the justifications uncovered through this experiment could effectively mitigate human trust loss and moral judgment towards the robot, especially in cases where robot deception falls under the umbrella of dishonest anthropomorphism.
Additionally, future research should expand the findings of this paper by examining deceptive behaviors committed by robots in real-world human-robot interactions. Although vignettes are a valuable tool for uncovering initial human perceptions of robot deception, in-lab or real-world studies would allow us to gain even greater insight into how people react to robot deception. Studies with longitudinal designs, in which humans are exposed to multiple deceptive acts over a period of time, would also provide valuable insight into the effects of deception on human-robot interactions. These experiments could be carried out in environments that mirror the scenarios created for this study, giving researchers both a foundation for a testable interaction and a comparison point for what human perceptions of the deceptive act could be.
Our study examined to what degree participants approved of, and actually judged as deceptive, three types of deceptive behavior that
The contribution of this work is to advance our understanding of deception in human-robot interactions by studying human perceptions of three distinct deception types that robots may exhibit. Initial empirical evidence showed that the deceptive behavior types relating to dishonest anthropomorphism (hidden state deception and superficial state deception) were particularly disapproved of and less likely to be seen as justifiable by people. External state deception, including white lies, might be approved, especially if the deception is in service of a superseding norm. We also found that people were not as divided about the deceptiveness of superficial state deception as the technology ethics literature, where the issue is debated as the “Deception Objection.” Results additionally showed that participants who were exposed to hidden state deception tended to extend the deception to other entities besides the robot, a trend not found in either the external state or superficial state condition. This work advances moral psychology in human-robot interactions by exploring the potential consequences of humans discovering robot deception and the social norms that could be invoked as a trust repair strategy in the face of trust loss and moral judgment.
The datasets presented in this manuscript and in our
The studies involving humans reported here were approved by the George Mason University Office of Research Integrity and Assurance. The studies were conducted in accordance with U.S. federal regulations and institutional requirements. Participants were provided with informed consent information prior to agreeing to participate in the research.
AR: Conceptualization, Formal Analysis, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing–original draft, Writing–review and editing. ED: Conceptualization, Formal Analysis, Investigation, Resources, Visualization, Writing–original draft, Writing–review and editing. HK: Conceptualization, Formal Analysis, Investigation, Methodology, Validation, Writing–original draft. BM: Conceptualization, Funding acquisition, Methodology, Resources, Supervision, Writing–review and editing. EP: Conceptualization, Funding acquisition, Methodology, Project administration, Supervision, Writing–review and editing.
The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This work was supported by the U.S. Air Force Office of Scientific Research under award number FA9550-21-1-0359.
We would like to thank Samriddhi Subedi and Neha Kannan for assisting in data analysis. We would also like to thank Kantwon Rogers and Alan Wagner for providing feedback on the study prior to its release for data collection.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
The views expressed in this paper are those of the authors and do not reflect those of the U.S. Air Force, U.S. Department of Defense, or U.S. Government.
The Supplementary Material for this article can be found online at: