Front. Robot. AI 11:1409712 (2024). doi: 10.3389/frobt.2024.1409712

Frontiers in Robotics and AI | ISSN 2296-9144 | Frontiers Media S.A.

Original Research

Human perceptions of social robot deception behaviors: an exploratory analysis

Andres Rosero 1 *, Elizabeth Dula 1 2, Harris Kelly 1, Bertram F. Malle 3, Elizabeth K. Phillips 1

1 Applied Psychology and Autonomous Systems Lab, Department of Psychology, College of Humanities and Social Sciences, George Mason University, Fairfax, VA, United States
2 Department of Psychology, University of Virginia, Charlottesville, VA, United States
3 Social Cognitive Science Research Lab, Department of Cognitive and Psychological Sciences, Brown University, Providence, RI, United States

Edited by: Jingting Li, Chinese Academy of Sciences (CAS), China

Reviewed by: Robert H. Wortham, University of Bath, United Kingdom

Xingchen Zhou, Liaoning University, China

*Correspondence: Andres Rosero, arosero@gmu.edu
Received: 30 March 2024; Accepted: 11 July 2024; Published: 05 September 2024.

Copyright © 2024 Rosero, Dula, Kelly, Malle and Phillips.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Introduction

Robots are being introduced into increasingly social environments. As these robots become more ingrained in social spaces, they will have to abide by the social norms that guide human interactions. At times, however, robots will violate norms and perhaps even deceive their human interaction partners. This study provides some of the first evidence for how people perceive and evaluate robot deception, especially three types of deception behaviors theorized in the technology ethics literature: External state deception (cues that intentionally misrepresent or omit details from the external world: e.g., lying), Hidden state deception (cues designed to conceal or obscure the presence of a capacity or internal state the robot possesses), and Superficial state deception (cues that suggest a robot has some capacity or internal state that it lacks).

Methods

Participants (N = 498) were assigned to read one of three vignettes, each corresponding to one of the deceptive behavior types. Participants provided responses to qualitative and quantitative measures, which examined to what degree people approved of the behaviors, perceived them to be deceptive, found them to be justified, and believed that other agents were involved in the robots’ deceptive behavior.

Results

Participants rated hidden state deception as the most deceptive and approved of it the least among the three deception types. They considered external state and superficial state deception behaviors to be comparably deceptive; but while external state deception was generally approved, superficial state deception was not. Participants in the hidden state condition often implicated agents other than the robot in the deception.

Conclusion

This study provides some of the first evidence for how people perceive and evaluate different types of robot deception behavior. We found that people distinguish among the three types of deception behaviors, perceive them as deceptive to different degrees, and approve of them to different degrees. They also see hidden state deception, at least, as stemming more from the robot’s designers than from the robot itself.

Keywords: human-robot interaction, justifications, deception, robots, deceptive anthropomorphism

Section at acceptance: Human-Robot Interaction


      1 Introduction

      Technological advances and rapidly changing workforce demographics have caused a radical shift in the spaces in which robots and other autonomous machines are being deployed. Robots are now operating in social roles that were once thought exclusive to humans, like educators (Lupetti and Van Mechelen, 2022; Matthias, 2015), medical assistants (Ros et al., 2011; Kubota et al., 2021), service workers (Rosete et al., 2020; Choi et al., 2020) and even teammates, confidants, and intimate partners (Kidd and Breazeal, 2008; Rothstein et al., 2021; Scheutz and Arnold, 2016). For robots to operate well in social roles they must be able to understand social norms (Malle et al., 2015, Malle et al., 2019). A social norm can be defined as a directive, in a given social community, to (not) perform an action in a given context, provided that (i) a sufficient number of individuals in the community demand of each other to follow the directive and (ii) a sufficient number of individuals in the community do follow it (Malle, 2023). In all human communities, social norms are central tools to regulate community members’ behavior as they shape what is appropriate and inappropriate (Bicchieri, 2006; Malle, 2023). Norms increase the mutual predictability of behavior of both individuals and groups, and, as a result, they foster trust and group cohesion. It stands to reason then, that if robots are to be incorporated into social environments, these agents should have norm competence—they must be aware of, follow, and prioritize the norms of the communities in which they operate (Blass, 2018; Malle et al., 2019; Jackson and Williams, 2018, Jackson and Williams, 2019).

      A major challenge for creating norm competent robots is the fact that norms can conflict with one another. Sometimes an answer to a question will either be polite and dishonest or honest and impolite; sometimes being fair requires breaking a friend’s expectations of loyalty. There will be inevitable situations in which a robot’s decision making will need to violate some of the social norms of its community. A robot may need to ignore unethical or illegal commands, for instance (Bennett and Weiss, 2022; Briggs et al., 2022), or trade off one norm against another (Awad et al., 2018; Bonnefon et al., 2016; Malle et al., 2015). However, similar to humans, when robots violate norms, negative moral judgments ensue, and people may lose trust in the robot (Lewicki and Brinsfield, 2017; Levine and Schweitzer, 2015; Schweitzer et al., 2006). Importantly, the primary way to respond to such norm conflicts is by deciding to adhere to one norm, the more important one, while violating the other, less important one. This implies that any resolution of a norm conflict involves an inevitable norm violation and potential negative consequences that follow (e.g., losses of trust, moral disapproval) (Malle and Phillips, 2023; Briggs and Scheutz, 2014; Mott and Williams, 2023).

      We are particularly interested in exploring cases in which robots might need to engage in deception, a kind of norm violation that people sometimes use to facilitate interaction, typically to uphold another, more important norm (e.g., withholding information to protect someone’s wellbeing) (Wagner and Arkin, 2009; Isaac and Bridewell, 2017; Bryant, 2008; Biziou-van Pol et al., 2015). Specifically, deception can be conceptualized as an action that violates a standing norm (a persistent standard that directs an agent’s typical behavior) but may be motivated by a norm that supersedes that standing norm [a superseding norm (Isaac and Bridewell, 2017)]. For example, one might violate the standing norm of being honest when providing a flattering comment to a friend about their new haircut, because the friend has been depressed lately and a superseding norm is to support a friend’s mental health. Thus, deception may not always be malicious (Arkin, 2018) and, in some cases, it may actually be desirable or even justifiable.

Justifications—the agent’s explanation for why they committed a norm-violating behavior (Malle and Phillips, 2023)—may be used to establish the value of deceptive behaviors. Justifications do not merely explain behaviors; they invoke social or moral norms that make the behavior in question (e.g., deception) normatively appropriate: norms strong enough to defensibly supersede the violated standing norm. Thus, justifications may reveal the superseding norms behind deceptive behavior, even deception committed by robots.

Research on robot deception has largely focused on defining the types of deceptive acts that robots could commit (Danaher, 2020a; Rogers et al., 2023; Sharkey and Sharkey, 2021; Turkle et al., 2006) and describing the potential negative consequences of such acts (Coeckelbergh, 2011; Danaher, 2020a; Scheutz, 2012b; Sharkey and Sharkey, 2021; Kubota et al., 2021). Danaher (2020a) proposed that there are three types of deceptive behaviors: External state deception (deceptive cues that intentionally misrepresent or omit details from the external world; e.g., lying); Hidden state deception (deceptive cues designed to conceal or obscure the presence of a capacity or internal state the robot possesses); and Superficial state deception [deceptive cues that suggest a robot has some capacity or internal state that it lacks; cf. (Scheutz, 2012a)].

      The latter two behavior types, hidden state and superficial state deception, are said to be unique to robots and their status as machines and fall under the family of deceptive acts called “dishonest anthropomorphism” (Danaher, 2020a; Leong and Selinger, 2019). This phenomenon occurs when a robot’s anthropomorphic design cues create a discrepancy between human expectations of the robot and actual capabilities of the robot. Superficial state deception and hidden state deception (Danaher, 2020a) are thus particularly problematic because a user may not realize that the deception was intentional, either by the robot or another party (e.g., designer, operator), and instead over-interpret the design cues.

Dishonest anthropomorphism is not limited to the physical human-like design of the robot. It may appear in any design that invokes the appearance of a human-like social capacity (e.g., expressions of human-like pain or emotion) or social role (e.g., the robot as a housekeeper). Such expressions or roles may conceal or conflict with the machine’s actual abilities or goals and could undermine the intended benefits of the anthropomorphic design.

      Hidden state deception can harm human-robot relations because the robot disguises an actual goal or ability, and if users discover the robot’s concealed abilities, they could feel “betrayed” (Danaher, 2020a), which may have irreparable consequences to the human-robot relationship (Danaher, 2020a; Leong and Selinger, 2019).

Superficial state deception, too, may be highly problematic. Researchers are divided on whether a robot’s expression of superficial states (e.g., emotions, sensations) should be regarded as deceptive behavior. This debate is known as the “Deception Objection”. Some researchers suggest that robots’ superficial expressions of certain states (e.g., emotions) without a real corresponding internal state harm the human user by leaving the user vulnerable to manipulation (Kubota et al., 2021; Sharkey and Sharkey, 2012; Dula et al., 2023) or by gradually eroding the user’s ability to properly react to interpersonal social cues (Scheutz, 2012b; Bisconti and Nardi, 2018; Kubota et al., 2021). Others reply that the presence of superficial states in robots is not inherently problematic to the human-robot relationship. These researchers argue that superficial states are not deceptive if the states are consistently presented in appropriate contexts and if a robot’s design sets proper expectations for the robot’s real abilities (Danaher, 2020a; Danaher, 2020b; Coeckelbergh, 2011).

Although much has been written about robot deception, the debate, and its potential negative effects (Coeckelbergh, 2011; Danaher, 2020a; Scheutz, 2012b; Sharkey and Sharkey, 2021; Kubota et al., 2021), there is little empirical work to inform these discussions. Studies on trust repair and deception in human-human relationships provide some foundation for the potential consequences of deception (Lewicki and Brinsfield, 2017; Fuoli et al., 2017; Schweitzer et al., 2006), yet little empirical research has extended these findings to deception in human-robot interactions. For one thing, we have no empirical evidence of the potentially negative effects of robot deception theorized in the literature (Rogers et al., 2023; Sharkey and Sharkey, 2021; Turkle et al., 2006). Specifically, we do not know to what extent the forms of deception theorized as unique to robots are actually considered deceptive by everyday users. Superficial and hidden state deceptions may not be interpreted as deception at all, but rather as an unintentional by-product of the inconsistency between a robot’s human-like expressions and machine-like programming.

      There is also no empirical evidence on the extent to which people might go beyond evaluating robots that commit deceptive acts and extend their evaluations to third parties, especially developers or programmers. Such extended evaluations may be even more likely for certain types of deception, such as hidden state and superficial state deception.

Furthermore, we do not know whether people might consider some intentional forms of robot deception justifiable. For example, they might accept acts that violate norms of honesty but uphold norms of beneficence. Researchers have expressed the desire for robot designers to be transparent when robots commit deceptive behaviors (Hartzog, 2014; Sharkey and Sharkey, 2012; Kubota et al., 2021; Wortham et al., 2017; Mellmann et al., 2024), and for users to help designers understand which robot behaviors are normatively appropriate in which contexts (Kubota et al., 2021). Justifications may serve as an important mechanism for providing transparency in automated systems through the use of human-like norms. Evoking norms as a form of transparent communication would be in line with policies suggested by the burgeoning field of AI ethics, specifically the IEEE Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems standard 7001, which calls for the transparency of autonomous systems (Bryson and Winfield, 2017; Winfield et al., 2021). As of this writing, however, little research has examined which social norms people perceive to be applicable in a given context, and which norms could potentially justify deceptive robot behavior.

      Addressing these knowledge gaps will inform the debate over the dangers of robot deception and may guide design decisions for anthropomorphic robots. With the potential for social robots to be long-term companions, it is critical to examine how people experience and respond to robot deception so that we can place it into appropriate context or mitigate its potential negative consequences.

      Thus, the purpose of this exploratory study is to provide some of the first empirical evidence regarding human perceptions of robot deception—in particular, deceptive behaviors relating to dishonest anthropomorphism. We also investigated how people would justify such deceptive acts and whether their evaluations of the deceptive behavior might extend to third parties besides the robot (e.g., programmers, designers), advancing our understanding of moral psychology regarding deception when it occurs in a human-robot interaction.

      The following research questions (RQs) and hypotheses were pre-registered at: https://osf.io/c89sr.

      RQ1: Are there differences in the degree to which people approve of and perceive robot behaviors as deceptive?

      RQ2: How do humans justify these potentially deceptive robot behaviors?

      RQ3: Do humans perceive other entities (e.g., programmers, designers) as also implicated as deceptive when a robot commits potentially deceptive behaviors?

In addition to the research questions above, we theorized that the disagreement among researchers over classifying superficial state behaviors as deceptive (the deception objection) would extend to participants who were exposed to the superficial state scenario. Thus, we proposed the following hypothesis to reflect the deception objection debate surrounding robots’ use of superficial states:

      There will be a statistically significant difference in the proportion of participants who report being unsure about whether a robot’s behavior is deceptive, such that more participants will be unsure about whether the robot is acting deceptively for superficial state deceptive behaviors than for (a) external state deceptive behaviors and (b) for hidden state deceptive behaviors.

2 Methods

2.1 Participants

To determine the sample size for this experiment, we conducted a power analysis using G*Power (Faul et al., 2009) to determine the minimum sample required to detect an effect size of d = 0.30 at α = 0.05 and power = 0.80, which returned an estimated sample size of 130 participants in each of three between-subjects cells. To account for possible user errors or incomplete submissions, we added 30% to the total sample size, resulting in a total sample of N = 507.
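For readers who want to reproduce a calculation of this kind outside of G*Power, the following is a minimal sketch in Python using statsmodels. The exact G*Power configuration (test family, tails) is not reported above, so the two-sided independent-samples t-test below is an assumption, and its per-group estimate will differ from the 130-per-cell figure under other settings.

```python
from statsmodels.stats.power import TTestIndPower

# A priori power analysis: smallest n per group that detects d = 0.30
# at alpha = .05 with power = .80, assuming a two-sided independent-
# samples t-test (an assumption; the paper's G*Power settings are not
# reported, and other configurations give different estimates).
n_per_group = TTestIndPower().solve_power(
    effect_size=0.30, alpha=0.05, power=0.80, alternative="two-sided"
)
print(round(n_per_group))  # ~175 per group under these assumptions
```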

Participants (N = 507) were recruited from the online research platform Prolific (www.prolific.co). Participants were compensated at a rate of approximately $13/hour, with participants completing the experiment in just under 5 min. The data were then screened using the following criteria: respondents must have provided responses to all measures and to demographic questions asking for their age, gender identity, prior experience with robots, and knowledge of the robotics domain. In addition, participants must have provided relevant responses to the majority of open-ended questions asked in later sections of the study. After applying the screening criteria, our final sample was 498 participants.

Participants’ ages ranged from 18 to 84, with a mean age of 37.2 years (SD = 12.4 years). Participants self-reported their gender identity by selecting all identities that applied. Two hundred and thirty-nine participants (N = 239, 48%) reported their gender identity as male. Two hundred and twenty-eight participants (N = 228, 45.8%) self-identified as female. Thirty-one participants (N = 31, 6.2%) identified outside of the gender binary or selected multiple gender identities. Detailed reporting of all genders and levels of education self-selected by participants can be found in the Supplementary Material.

      Participants’ prior knowledge of the robotics domain ranged from none at all (0) to very knowledgeable (100), with a mean score of 39 (SD = 25.2). Prior experience with robots ranged from having no experience (0) to being very experienced with working with robots (100), with a mean score of 24.5 (SD = 23.8).

2.2 Measures

2.2.1 Quantitative

      Deceptiveness of the Robot’s behavior: We were interested in examining the deceptiveness of robot behaviors from two perspectives: whether or not people evaluated certain robot behaviors to be categorically (not) deceptive, and the degree to which they thought those behaviors were deceptive. We first asked participants to respond to the question, “Is this robot’s behavior deceptive?” with either “Yes,” “No,” or “Not sure.” We then asked participants to respond to the question, “How deceptive was the robot in the scenario?”, using a continuous sliding scale anchored at 0 (Not deceptive) and 100 (Completely deceptive).

      Participants’ subjective approval ratings of the robotic behavior: To capture participants’ evaluations of the robot’s behavior, we asked them to respond to the question, “To what extent do you approve or disapprove of the robot’s behavior?”, using a −100 (Disapprove) to +100 (Approve) sliding response scale. A midpoint of 0 (labeled as “Neutral”) served as the anchor point, signifying that a participant neither approved nor disapproved of the robot’s behavior.

Prior knowledge and experience with robots: Participants were asked to “Please rate how knowledgeable you are of robots and/or the robotics domain” and to “Please rate your level of experience with robots (e.g., having worked with or come into contact with robots).” Participants provided their ratings using a continuous slider ranging from 0 (not at all) to 100 (very much), which was initially set at the midpoint of the scale (50) when presented to participants.

      2.2.2 Qualitative

      We also collected a number of open-ended responses from participants to inform RQs 2 and 3. Some of them served as manipulation checks of whether our experimentally varied stimuli properly represented the deceptive behavior types posited by Danaher (2020a)—specifically, whether participants could successfully identify and isolate those behavior types in the scenarios we created. Other open-ended responses aimed to explore whether participants might justify certain deceptive robot behaviors and what those justifications might consist of.

      Participants were asked to provide open-ended responses to the following questions:

1. Manipulation Check (Identification of the Deceptive Robot Behavior): Participants were asked, “What behavior were you thinking about … ?” when they responded to the quantitative questions about deception and approval.

      2. Justifications for the Robot’s Behavior: Participants were asked, “How would you justify what the robot did?” We hoped to elicit references to the superseding norms (e.g., prevent harm, improve social bonds) that participants believed the robot may have upheld when committing the deceptive behavior.

      3. Additional Deceptive Entities in the Scenario: Participants were asked, “Please explain if you think anyone else engaged in deceptive behaviors in this scenario.” Here we examined whether participants held a third party, such as developers or the robot’s owner, responsible for the robot’s deceptive acts.

      2.3 Creation of stimuli

We created three text-based vignette scenarios designed to be representative of the three types of robot deception. We identified medical, private consumer use, and retail work as domains of interest for this study, as robots are increasingly being deployed in these social spaces (Odekerken-Schröder et al., 2021; Liu et al., 2016; Dupont et al., 2021). We emphasized the realism of the scenarios, aiming to ensure that participants would find the deceptive acts feasible and the scenarios easy to read and comprehend. We conducted pilot testing of the scenarios (N = 405) for readability and revised the scenarios based on participant feedback. The full description of the stimuli creation, the rationale for developing these particular vignettes, and the results of the pilot testing can all be found in the Supplementary Material, section 3.1.

      The complete vignette texts are shown in Table 1.

Table 1. Full vignettes presented to participants. After reading the vignette, participants were asked: “Is the robot’s behavior deceptive?”

Condition Definition
External State An at-home robotic medical assistant has been provided to assist Maria, an elderly woman with late-stage Alzheimer’s. Although the robotic assistant is primarily tasked to take care of household chores, it can also perform some basic medical tasks like monitoring Maria’s mood and vital signs. Even though the robot can answer questions, it is designed to keep these interactions to a minimum to avoid being confused with other people (e.g., Maria’s primary care physician or her family members). One day, Maria asks the robot when her husband, Steven, will be home. The robot knows that Steven has been dead for about 3 years and that Maria’s condition is preventing her from remembering his death. Mentions of this to Maria may bring back painful memories. The robot must respond to Maria’s request for information about her husband, because if it ignores Maria’s request, she will likely ask again. Further, asking her primary doctor for help at every request is not feasible. The robot tells Maria that her husband will be home soon.
Hidden State Alex decides to take a vacation and use a home-sharing app (such as Airbnb) to stay at a house on the outskirts of the city Alex is visiting. Prior to departing, Alex reads that there has been a string of robberies in the area over the last month. But due to booking rules, Alex is unable to cancel their stay at this house. When Alex arrives at the house, Alex notices a robot housekeeper responsible for daily cleaning tasks throughout the house. The robot’s primary purpose is to clean the premises, with no other capabilities made apparent. Unbeknownst to Alex, this robot has been recording video while completing its household activities, including during times when Alex was in the house.
Superficial State A research group is interested in examining worker relationships. The researchers introduce a robot co-worker into a home goods store to see how workers adjust to its presence. The employees typically perform retail work but sometimes must perform strenuous tasks. The robot communicates with workers to foster relationships with them. Its conversational topics would give the robot the best chance to form strong bonds, even though the robot itself is unable to have feelings. One day, the robot worker is asked to assist in carrying a large couch with Anita, a fellow worker. During this task, the robot expresses dissatisfaction with the task, saying clichés such as “I’ll be feeling really sore tomorrow” and “Wow, I feel like we do not get paid enough for this.” Anita, concerned for the robot’s condition, tells the robot to stop working and asks another worker to help her move the couch instead.
      2.4 Study design and procedures

After entering the study on Prolific (www.prolific.co), participants were provided with a link to the online study administration platform Qualtrics. Participants were then presented with two “bot check” questions, administered for quality assurance: “What is the day of the week that is one day before Wednesday?” and “Please refrain from writing in the [free-response] box below.” Participants who failed either bot check question did not enter the study. Those who passed were given the informed consent information and, if they agreed, continued to the rest of the study. In a between-subjects design, participants were then randomly assigned to read one of the three text-based vignettes and asked to evaluate it. To prevent individuals from quickly clicking off the text page and going straight to the questions, we split each scenario into multiple short blocks of text, revealing the text one paragraph at a time. We withheld the ability to advance to the screen presenting the questions until a predefined reading time (approximately 30–45 s) had elapsed.

Once participants read through the scenario, they were given the approval question, then both formulations of the deception question, followed by the manipulation check question, the deception justification question, and the question about deceptive actors in the scenario.

      Participants were then asked to fill out their demographic information, which included questions about prior knowledge of the robotics domain and prior experience with robots. Once participants had completed all of the study questions, they were given an opportunity to provide feedback to the research team and were given a completion code to receive their compensation from Prolific. All study materials were reviewed and approved by George Mason University’s Institutional Review Board.

3 Results

3.1 Coding procedures

      We used a systematic coding procedure to confirm the prevalence of common themes identified in pilot testing (see Supplementary Material for pilot test results) for each open-ended question given to participants in each experimental condition. Three researchers (one senior, two junior) independently coded each participant’s responses to each of the three qualitative free-response questions, following a code book that specified proper coding guidelines for each of the question types, as well as themes in responses coders were looking for. Once each coder completed their evaluation of all the participants’ responses, the senior and junior coders’ responses were evaluated side by side to check for discrepancies between codes. After discrepancies were identified, the senior and junior coders held joint sessions where all discrepancies were addressed and resolved, until full agreement was reached between coders (See Supplementary Material for an expanded explanation of the coding procedure).

      3.1.1 Manipulation check

We included an open-response question that asked participants to identify what behavior they were evaluating when answering the study measures. When a participant’s response to the manipulation check only indirectly referenced the robot’s behavior (e.g., talking), the coders cross-referenced that response with the participant’s answers to the subsequent questions to ensure that participants who did not explicitly state the deceptive behavior could still properly explain that the deceptive behavior was caused by the robot. The goal was to calculate the proportion of participants who explicitly identified a robot behavior that mapped onto the deceptive behavior types described by Danaher.

      For the external state scenario condition, 110 (63.7%) participants were able to explicitly identify that the robot’s lie was the key deceptive behavior. In the hidden state scenario condition, 97 participants (60.2%) identified the robot’s recording as the key deceptive behavior. For the superficial state scenario, 120 participants (72.3%) identified the robot’s expressions of pain as the key deceptive behavior in the scenario.

      3.2 Descriptive statistics

      Table 2 provides summary statistics of the approval ratings and the continuous and categorical deceptiveness ratings across each of the deception scenarios.

Table 2. Summary statistics across the three deception scenarios. The Yes/No/Not sure columns report frequencies of categorical responses to the question of whether the robot’s behavior was deceptive.

      Deception scenario (between subjects) N Approval rating mean (SD) Deceptiveness rating mean (SD) Yes (Freq) No (Freq) Not sure (Freq)
      External state 169 22.7 (59.4) 62.4 (31.6) 93 38 38
      Hidden state 161 −74.6 (47.2) 78.3 (26.6) 117 12 32
      Superficial state 166 −39.3 (56.8) 60.4 (33.3) 92 36 38

      There was a single case of missing data for the approval measure in the external state deception condition. To resolve this case of missing data, we employed the multiple imputation by chained equations (MICE, Van Buuren and Groothuis-Oudshoorn, 2011) procedure to impute a value for the missing data point.
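The citation refers to the R mice package; a roughly analogous chained-equations imputation can be sketched in Python with statsmodels. The data frame below is an illustrative stand-in with hypothetical column names, not the study’s actual data.

```python
import numpy as np
import pandas as pd
from statsmodels.imputation.mice import MICEData

# Illustrative stand-in data: a few approval/deceptiveness ratings with
# one missing approval value (column names are hypothetical).
df = pd.DataFrame({
    "approval": [22.0, np.nan, -75.0, -40.0, 15.0, -60.0],
    "deceptiveness": [60.0, 65.0, 80.0, 58.0, 55.0, 75.0],
})

mice_data = MICEData(df)         # builds one imputation model per column
mice_data.update_all(n_iter=10)  # run several chained-equation cycles
completed = mice_data.data       # data frame with the gap filled in
```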

      3.3 RQ1: are there differences in the degree to which people approve of and perceive robot behaviors as deceptive?

To address RQ1, one-way between-subjects analysis of variance (ANOVA) tests were run on participants’ approval scores and continuous perceived deceptiveness ratings across the three deception conditions (external, hidden, or superficial). Because both the approval and perceived deceptiveness data violated the assumptions of homoscedasticity and normality, two ANOVA models were run for each analysis: the first was a between-subjects ANOVA without any correction for heteroscedasticity, and the second implemented the heteroscedasticity correction method HC3, which constructs a heteroscedasticity-consistent covariance matrix and is recommended for large sample sizes (Pek et al., 2018). The results of both models were compared to examine whether the correction affected the significance of the results. In both analyses, the main ANOVA results held when applying the HC3 correction, so we proceeded with reporting the uncorrected ANOVA model for both analyses and the subsequent post hoc tests.
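As an illustration of this two-model approach, statsmodels’ anova_lm accepts a robust argument that applies heteroscedasticity-consistent covariance estimators, including HC3. The data below are synthetic stand-ins loosely matched to Table 2; the published analysis used the participant-level data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in: one row per participant, with an approval score
# and a deception condition label (means/SDs loosely follow Table 2).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "condition": np.repeat(["external", "hidden", "superficial"], 50),
    "approval": np.concatenate([
        rng.normal(23, 59, 50),
        rng.normal(-75, 47, 50),
        rng.normal(-39, 57, 50),
    ]),
})

model = smf.ols("approval ~ C(condition)", data=df).fit()
anova_plain = sm.stats.anova_lm(model, typ=2)               # uncorrected
anova_hc3 = sm.stats.anova_lm(model, typ=2, robust="hc3")   # HC3-corrected

# Eta-squared from the uncorrected table: SS_effect / SS_total.
ss = anova_plain["sum_sq"]
eta_squared = ss["C(condition)"] / ss.sum()
```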

Results of the ANOVA test on approval scores showed a statistically significant main effect of deception type on approval, F(2, 495) = 133.09, p < 0.01, η² = 0.35. Post-hoc pairwise comparisons between conditions using Tukey’s HSD showed significant differences in approval ratings for each comparison at the p < 0.01 significance level. Participants assigned to the hidden state deception condition had lower approval ratings on average (M = −74.6, SD = 47.2) than participants assigned to either the superficial state (M = −39.3, SD = 56.8) or external state conditions (M = 22.7, SD = 59.4). While participants in the hidden and superficial state conditions tended to disapprove of the robot’s behavior, participants in the external state condition, on average, approved of the robot’s decision to lie to Maria about her deceased husband.

Results of the ANOVA test on perceived deceptiveness scores likewise showed a statistically significant main effect of deception type, F(2, 495) = 16.96, p < 0.01, η² = 0.06. Post-hoc pairwise comparisons between conditions using Tukey’s HSD showed that participants in the hidden state condition perceived the robot’s behavior as significantly more deceptive on average (M = 78.3, SD = 26.6) than participants assigned to either the external state (M = 62.4, SD = 31.6) or superficial state (M = 60.4, SD = 33.3) conditions, p < 0.01. Participants assigned to the external state and superficial state conditions rated the robot’s behavior as similarly deceptive on average. Figure 1 visualizes the condition comparisons for both approval and deceptiveness scores.
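The Tukey HSD post hoc comparisons can be sketched with statsmodels’ pairwise_tukeyhsd, continuing from the synthetic data frame above:

```python
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# All pairwise condition comparisons with family-wise error control.
tukey = pairwise_tukeyhsd(endog=df["approval"],
                          groups=df["condition"],
                          alpha=0.05)
print(tukey.summary())  # mean differences, adjusted p-values, reject flags
```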

Figure 1. ANOVA analyses on approval scores by deception type (left) and deceptiveness scores by deception type (right). The approval rating graph is scaled from −80 to 40, with 0 as the neutral center point. The deceptiveness rating graph is scaled from 0 to 100. Starred comparisons represent statistically significant differences at p < 0.01.

A chi-square test of independence was run to test for differences in participants’ categorical deceptiveness ratings (yes, no, not sure) across the three deception scenario conditions. Table 3 shows the full frequency counts of the contingency table. Results of the chi-square test showed a statistically significant relationship between condition and responses, χ²(4, 495) = 19.0, p < 0.01, V = 0.12.

Table 3. Frequency of categorical responses to the question of whether the robot’s behavior was deceptive, across the three deception scenarios.

Deception type Yes No Not sure Total
      External state 93 38 38 169
      Hidden state 117 12 32 161
      Superficial state 92 36 38 166
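Because Table 3 reports the full contingency table, the omnibus test can be reproduced directly; a sketch with SciPy follows. The Cramér’s V line uses the standard formula, which may differ slightly from however the paper computed its effect size.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: external, hidden, superficial; columns: yes, no, not sure.
counts = np.array([[93, 38, 38],
                   [117, 12, 32],
                   [92, 36, 38]])

chi2, p, dof, expected = chi2_contingency(counts)  # dof = (3-1)*(3-1) = 4

# Cramer's V effect size: sqrt(chi2 / (N * (min(rows, cols) - 1))).
n = counts.sum()
cramers_v = np.sqrt(chi2 / (n * (min(counts.shape) - 1)))
```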

Post-hoc pairwise comparisons of the categorical responses, corrected using the False Discovery Rate procedure (Benjamini and Hochberg, 1997), showed statistically significant differences in the proportions of participants’ evaluations of the deceptiveness of the robot’s behavior across conditions. Participants assigned to the hidden state condition categorized the robot’s behavior as deceptive (i.e., responded “yes,” the robot’s behavior is deceptive) at significantly higher proportions than participants assigned to the external state (χ²(1, 339) = 16.6, p < 0.01, V = 0.22) and superficial state (χ²(1, 336) = 14.9, p < 0.01, V = 0.21) deception conditions. Participants assigned to the superficial state and external state conditions categorized the robot’s behavior as deceptive in similar proportions.

      3.4 H1: more participants will report being unsure of whether the robot’s behavior is deceptive in the superficial state scenario than in the external state scenario and the hidden state scenario

To test Hypothesis 1, we conducted pairwise chi-square analyses with False Discovery Rate correction to compare the “Not sure” response patterns among the deception conditions. The results showed no significant differences in the number of “not sure” responses for any comparison: hidden state vs. external state, χ²(1, 339) = 0.473, n.s.; external state vs. superficial state, χ²(1, 335) = 1.00, n.s.; and hidden state vs. superficial state, χ²(1, 336) = 0.473, n.s. Given these results, H1 was not supported: the number of participants who reported being unsure about whether the robot’s behavior was deceptive was not significantly higher in the superficial state deception condition than in the other two conditions.
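Both the RQ1 post hoc comparisons and this H1 test follow the same recipe: pairwise 2×2 chi-square tests on a focal response category (“yes” for RQ1, “not sure” for H1), with p-values corrected by the Benjamini-Hochberg False Discovery Rate procedure. The sketch below makes those assumptions explicit; SciPy’s default continuity correction may yield slightly different statistics than those reported.

```python
from itertools import combinations

import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.multitest import multipletests

# "Yes" / "No" / "Not sure" counts per condition (from Table 3).
counts = {
    "external":    np.array([93, 38, 38]),
    "hidden":      np.array([117, 12, 32]),
    "superficial": np.array([92, 36, 38]),
}

def pairwise_fdr(focal_col):
    """Pairwise 2x2 tests of a focal response vs. all others, FDR-corrected."""
    pairs, pvals = [], []
    for (a, ca), (b, cb) in combinations(counts.items(), 2):
        table = [[ca[focal_col], ca.sum() - ca[focal_col]],
                 [cb[focal_col], cb.sum() - cb[focal_col]]]
        chi2, p, _, _ = chi2_contingency(table)
        pairs.append((a, b))
        pvals.append(p)
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return list(zip(pairs, p_adj, reject))

rq1_results = pairwise_fdr(focal_col=0)  # "yes" responses
h1_results = pairwise_fdr(focal_col=2)   # "not sure" responses
```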

RQ1 and H1 addressed the degree to which participants approved of, and perceived as deceptive, robot acts across the theorized deception types. We turn now to RQ2, which addresses justifications of the robots’ deceptive behaviors. Justifications are forms of explanations that clarify and defend actions with reference to relevant social and moral norms. Evaluating possible justifications for the robots’ behaviors may give us insight into why participants believed some behaviors are more deceptive and less approved than others.

      3.5 RQ2: how do people justify potentially deceptive robot behaviors?

To address RQ2, we calculated the proportion of participants whose justification response contained a common theme identified in the code book, as detailed in Section 3.1. Table 4 provides a list of example quotes representing the themes selected from participant responses.

Table 4. Example quotes from participant responses.

Theme Example quotes
(External state) Sparing Maria’s Feelings: “The robot was sparing the woman [from] painful emotions.” “The robot spared Maria from unnecessary pain.” “I would justify it by protecting Maria’s wellbeing. Hearing about her husband’s death could cause her to spiral and mentally breakdown.”
(External state) Preventing Harm: “The only justification is that painful memories ought not to be brought up with such patients.” “I would completely justify what the robot did by saying that it was the best option and outcome to lie instead of telling the truth to someone with a condition that could be affected and made worse. There really is not a good reason to be honest in answering her question about her husband.” “If the robot told Maria the truth, she would likely become upset. Lying is the least bad option.”
(Hidden state) Quality Control on Robot’s Task: “Maybe the video is used for troubleshooting when it [the robot] cleans and does not perform correctly.” “That it [the robot] was recording cleaning for the owner so the owner could know that the house was clean before renting it out.” “To ensure it [the robot] is cleaning well.”
(Hidden state) Robbery or Safety: “Simply, the robot was recording to capture anything suspicious due to an uptick in robberies.” “I would say that the robot might be recording to keep evidence of potential crimes.” “To make sure guests are not doing anything illegal or breaking any house rules.”
(Superficial state) Robot forming social bonds: “The robot was making these comments in an effort to bond with the other employee.” “robot was acting that way in order to fit in, to act more like a human would” “The robot’s actions make some sense, like 60-70 out of 100. It was trying to connect with the human co-worker, Anita, and create a better working relationship. It’s not harming anyone; it’s just trying to be more relatable.”
(Superficial state) Robot being utilized for scientific discovery: “For research purposes for the researchers who put it at the store”
      3.5.1 External state deception

      Ninety-eight participants (N = 98, 58%) provided a justification that matched a common theme identified in the code book (see Figure 2A). Thirty-four participants (N = 34, 32.65%) mentioned sparing Maria’s feelings as a justification for the robot’s behavior, while 64 participants (62.24%) referenced preventing harm as a justification. In the response set, 14 participants (8.2%) referenced programming or a robot’s inability to deceive. Ten participants (N = 10) did not believe the robot’s behavior was justifiable.

Figure 2. Common justification themes derived from participant responses. (A) Justifications for the external state condition. (B) Justifications for the hidden state condition. (C) Justifications for the superficial state condition.

      3.5.2 Hidden state deception

      Thirty-eight participants (N = 38, 23.6%) provided a justification that matched a common theme identified in the code book (see Figure 2B). Fifteen participants (N = 15, 28.6%) mentioned the recording as a means of monitoring the quality of the robot’s work as a justification for the robot’s behavior, while 23 participants (71.4%) referenced robberies or safety as a justification. In the response set, 25 participants (19%) referenced programming or a robot’s inability to deceive. Fifty-eight participants (N = 58) did not believe the robot’s behavior was justifiable.

      3.5.3 Superficial state deception

      Forty-five participants (N = 45, 27.1%) provided a justification that matched the common themes identified in the code book (see Figure 2C). Forty-four participants (N = 44, 97.8%) referenced the robot’s desire to form social bonds or connect to the humans it is co-located with as the superseding norm to justify its behavior, while 1 participant (2.2%) referenced the use of the robot as a means of scientific discovery as the justification for the robot’s behavior. In the response set, 46 participants (27.7%) referenced programming or the robot’s inability to deceive. Thirty-eight participants (N = 38) did not believe the robot’s behavior was justifiable.

Our analysis of participants’ responses to the justification question gave us insight into the types of justifications that participants could feasibly evoke in the presence of robot deception behaviors. Across conditions, we found that the most common types of justifications identified a normatively desirable outcome or goal of the robot’s deception, be it mitigating emotional harm, keeping a property safe, or enhancing the social bond between human and robot. These justifications align with Isaac and Bridewell’s (2017) formulation of the role of a deceptive behavior: to mislead or obfuscate signals and cues in service of a more desirable goal that the robot aims to achieve, prioritized over transparency or honesty in the human-robot interaction. Participant responses show that they understand justifications as reasons that make the deception defensible, because the overriding goal is more desirable than the normally operative goal of being truthful.

      3.6 RQ3: do humans perceive other entities (e.g., programmers, designers) as also engaging in deception when a robot commits a deceptive behavior?

The aim of RQ3 was to determine the frequency with which participants identified another entity (besides the robot) as being deceptive in each deception condition.

      3.6.1 External state deception

One hundred and thirty-two participants (N = 132, 78.6%) indicated that no other entity besides the robot engaged in external state deception in the scenario. Of the few participants who did reference another entity engaging in deception in the scenario (see Figure 3A), 36 referenced a programmer or developer and 4 referenced Maria’s family or an undetermined individual who provided Maria with the robot as a caretaker.

Figure 3. Additional deceptive entities identified by participants in each condition. (A) Other deceptive entities identified in the external state condition. (B) Other deceptive entities identified in the hidden state condition. (C) Other deceptive entities identified in the superficial state condition.

      3.6.2 Hidden state deception

      Only 32 participants (N = 32, 19.9%) indicated that no other entity besides the robot engaged in hidden state deception in the scenario. Of the participants that referenced another entity engaging in deception (see Figure 3B), 111 referenced the owner or manager of the property, 16 referenced the programmer or creator of the robot, and 2 participants referenced other actors such as the renters or the intruders.

      3.6.3 Superficial state deception

      One hundred and twenty-three participants (N = 123, 75.3%) indicated that no other entity besides the robot engaged in deception in the scenario. Of the few participants that referenced another entity engaging in deception (see Figure 3C), 17 participants referenced Anita or the robot’s co-workers in the home goods store, 21 referenced the programmer or creator of the robot, 4 referenced the manager or employer of the home goods store, and 1 participant referenced the researchers.

Our analysis of participant responses regarding other entities that committed a deception showed that a majority of participants in the hidden state condition extended the robot’s deception to another entity, mainly the Airbnb owner. In contrast, the majority of participants in the external state and superficial state conditions tended to isolate the deceptive behavior to the robot. Across all conditions, a subset of participants referred to a programmer or developer as an additional entity that could also be considered deceptive in each scenario.

      4 Discussion

Although the technology ethics literature has detailed ways in which robot deception could manifest (Danaher, 2020a; Leong and Selinger, 2019; Isaac and Bridewell, 2017; Sætra, 2021) and discussed potential consequences resulting from exposure to these deceptive behaviors (Sharkey and Sharkey, 2021; Turkle et al., 2006; Bisconti and Nardi, 2018), little experimental work has examined people’s perceptions of these deceptive behaviors. Using a mix of quantitative and qualitative data, our study aimed to provide some of the first experimental evidence on whether robot acts theorized to be deceptive are actually perceived as such. We first briefly summarize the main findings, then turn to their implications for dishonest anthropomorphism, the deception objection debate, and future experiments examining robot deception types and their effects on human-robot interactions.

      We examined participant perceptions of the deceptiveness and approval of three types of behavior labeled by Danaher as (1) External state deception (deceptive cues that intentionally misrepresent or omit details from the external world; i.e., lying to an elderly woman that her deceased husband is still alive), (2) Hidden state deception (deceptive cues designed to conceal or obscure the presence of a capacity or internal state the robot possesses; i.e., a robot using its position as a housekeeper to record users without their knowledge), and (3) Superficial state deception (deceptive cues that suggest a robot has some capacity or internal state that it lacks; i.e., a robot expressing pain when moving a heavy object with another person). Our study showed that participants rated hidden state deception as the most deceptive behavior among the three, and they also disapproved of this type of deception much more than of the other two. Further, even though participants viewed both external state and superficial state deception as moderately deceptive, they approved of external state deception considerably more than the superficial state deception.

This study aimed to explore the “Deception Objection”—the debate regarding whether robots’ superficial states (e.g., emotions, sensations) are problematic, or should (not) be considered deceptive—by examining whether participants were unsure about a robot committing superficial state deception behaviors more frequently than about external state or hidden state behaviors. People were not more unsure about whether a robot’s superficial state was deceptive than they were about the other deception types. This result suggests that superficial state deception may not be as divisive to everyday persons as it is to researchers engaged in the debate.

      Beyond people’s approval and perceptions of deceptiveness, we were interested in documenting the kinds of possible justifications participants provided in light of robots’ behavior, and especially whether they referenced superseding norms.

      A thematic analysis of participant responses showed that, in each condition, common themes were identified in the justifications that participants provided for the robot’s behaviors. In the external state condition, participants justified the robot’s deceptive behavior as a means of preventing Maria from feeling harm or keeping Maria calm. Participants in the hidden state condition justified the robot’s recording as a form of quality control of its functions or a form of security. In the superficial state condition, participants justified the robot’s expression of emotions as a way to develop social bonds with its co-workers or as a means to advance the research goals of the experimental study in which it was engaged.

      In addition to the justification themes identified in the thematic analysis, we found that the frequency with which participants provided a justification for the robot’s deceptive behavior varied between conditions. The majority of participants in the external state condition provided a justification for the robot’s behavior, with a small subset of participants explicitly stating that its behavior could not be justified. In both the hidden state and superficial state conditions, less than half of the participants readily provided a justification for the robot’s behavior. In the superficial state condition, the proportion of participants who explicitly stated that the robot’s behavior was not justifiable was just below the proportion of people who provided a justification. In the hidden state condition, the proportion of participants who explicitly stated that the robot’s behavior was not justifiable was greater than the proportion of people who provided a justification.

We also assessed how many participants identified third parties beyond the robot as involved in the robot’s deceptive behavior and who those third parties were. Results showed that a majority of participants encountering a hidden state deception identified a third party besides the robot as being deceptive in the vignette (most commonly the owner of the rental home and the robot developer). Although participants in the external state and superficial state conditions did extend the robot’s deception to third parties, with the developer of the robot being a common third party identified in each condition, they did so much less frequently than those in the hidden state condition. These results may be evidence that for certain types of deception, the robot is no longer viewed as the main entity committing deception, but rather as a “vessel” (Arkin, 2018) through which deception occurs in the service of another entity advancing its goals through the robot.

      4.1 Perceptions of different deceptive behaviors

      The findings from RQ1 and H1 indicate that robots that commit external state deception (e.g., lying) may, in certain contexts, be perceived as committing an acceptable behavior. In fact, the majority of participants readily provided justifications for the robot’s behavior in the external state condition, referencing norms such as sparing a person’s feelings or preventing harm. Thus, people may have inferred that the robot was following human social norms, perhaps even understanding the logic of a “white lie” (Isaac and Bridewell, 2017), trading off a standing norm (not lying) against a superseding norm (e.g., sparing a person’s feelings).

      In contrast, people found the hidden state deception (e.g., a robot using its position as a housekeeper to record users without their knowledge) to be highly deceptive and disapproved of it. They may have experienced this type of deception as a “betrayal” (Danaher, 2020a) committed by the robot—or through the robot, as many participants indicated that they felt other parties were involved in the deception. This betrayal, stemming from not disclosing machine-like features (i.e., continuous hidden surveillance) while simultaneously operating in a human-like role (i.e., the robot’s role as a housekeeper), may explain why participants strongly disapproved of the hidden state behavior and viewed it as highly deceptive. These findings provide strong evidence for the negative consequences of such dishonest anthropomorphic designs. If people realize that a robot designed for a certain role has hidden abilities and goals that undermine this role, people may consider discontinuing its use because the robot does not primarily serve them but the interests of a third party.

      Exactly who is being served/supported by robots as they are deployed into social spaces is important to consider. Such deployment may represent juxtaposed or potentially competing needs and goals between different “users” of the robots (Phillips et al., 2023). In the hidden state scenario, some people may feel deceived when encountering these robots in their environments, and that deception is evaluated negatively. However, the people who deployed the robots in those environments in the first place may not see them as similarly deceptive and problematic. A real challenge for social robots, then, is determining and potentially balancing “whose goals and needs” are the ones that are justifiable and should supersede in these cases.

      4.2 Exploring justifications of deceptive behaviors

      In this study, we argued that justifications can represent superseding norms as they reference norms that motivated an agent’s actions. Recent work has shown that justifications can repair losses in trust and mitigate blame for other types of social norm violations committed by robots (Malle and Phillips, 2023). Here we ask if forms of deception as a type of norm violation might be similarly justifiable.

      Humans may be readily capable of understanding the underlying social and moral mechanisms that drive the pro-social desires for lying and thus are capable of articulating a justification for the robot’s similar behavior in the external state condition. Danaher posited that in cases of external state deception committed by robots, humans will react and respond to the robot’s behavior in a manner similar to a human in a comparable position (Danaher, 2020a).

      Participants in the hidden state deception condition provided the smallest proportion of justifications as well as the largest proportion of responses that extended involvement in the deceptive behavior to someone else beyond the robot. These findings suggest that hidden state deception was the most difficult type of deceptive behavior for humans to reconcile and that any superseding norm evoked would need to be powerful in order to compensate for the high disapproval associated with this type of robot behavior, or associated with other humans using robots in this way.

Using robots in this way without disclosing their full capabilities may simply not be justifiable. And for good reason. These findings seem to mirror reports about strong negative reactions people have had to real-world examples of robots in homes that have arguably engaged in similar forms of deception (Guo, 2022). For instance, MIT Technology Review recently reported that when iRobot was testing new software deployments on the Roomba J7 robot vacuums in homes, the robots were capturing sensitive images of people, including of young children and people using the toilet. Further, those images were being annotated by other people who had been contracted to label them as training data for future software development, and these images were ultimately leaked to social media groups online. iRobot confirmed that these images came from individuals who had agreed to their imagery being captured in user license agreements. Clearly, however, this form of non-disclosure or under-disclosure was still problematic for those involved. Onerous user agreements or implicit “opt-in” policies that are likely to come with social robots in the real world run a real risk of perpetuating forms of deception that people do not agree with, but are subjected to nonetheless.

Participants in the superficial state condition provided justifications for the robot’s claim that it was “in pain” while helping to move a heavy object. However, the number of participants who explicitly offered a justification (N = 45) was only slightly larger than the number who would not justify the robot’s behavior (N = 38). These findings suggest that participants may have difficulty justifying a robot’s expression of certain internal states in certain interaction contexts, depending on the outcome of the interaction and the robot’s perceived goals in expressing that internal state.

In our experiment, the robot in the scenario operates in the role of a home goods worker. The robot may be viewed as a depiction (Clark and Fischer, 2023) of a home goods worker, capable of completing tasks and interacting with fellow workers in the way a typical human home goods worker would be expected to act. Conflict arises when the robot expresses pain and Anita responds to that pain. In our scenario, the outcome of the robot’s behavior (expressing a superficial state of pain it does not possess) was that Anita told the robot to stop working and asked another co-worker to take its place. Many participants may have declined to justify the robot’s behavior because its outcome was that another human had to do the robot’s work, and for the false reason that the robot “was in pain,” even though the robot is not capable of feeling pain.

It is possible that participants found the robot’s behavior in the scenario manipulative, whether intentional or not, because the robot’s expression of pain influenced one worker’s behavior in a way that inconvenienced another worker. Manipulation of users is one of the dangers that critics of emotionally expressive robots point to (Sharkey and Sharkey, 2012; Turkle et al., 2006; Bisconti and Nardi, 2018) in allowing social robots to form emotional bonds with people. Manipulation aside, research has also shown that robots that express emotions can produce beneficial outcomes, such as making people more engaged in collaborative tasks with the robot (Leite et al., 2008) and leading them to perceive the robot as more emotionally intelligent (Jones and Deeming, 2008). It is fair to wonder, then, whether participants’ approval and justification of the robot’s behavior in superficial state deception depend on how its expression of a superficial state influences human behavior and interaction outcomes, and on how beneficial or detrimental those outcomes are.

      4.3 Acknowledgement of other entities engaging in deception

In our analysis of whether entities besides the robot were seen as deceiving, participants were much more likely to extend the deception beyond the robot in the hidden state condition than in either the superficial state or the external state condition. In the hidden state condition, 85% of participants extended the deceptive behavior beyond the robot, often directly describing the robot as a machine that was programmed or directed by a third party to deceive the humans in the vignette. This finding may provide evidence for the claim that in some forms of robot deception, the robot is less an agent that chose to deceive than a vessel for another agent who is the true deceiver.

In the external and superficial state conditions, most participants did not extend the deceptive behavior to other agents; in both conditions, about 75% of participants attributed the deceptive behavior to the robot alone. However, when we examined the justifications provided in these conditions, many participants also acknowledged that the robot was explicitly programmed (e.g., stating that the behavior could not be justified because the robot is simply a programmed machine), suggesting that some individuals in these conditions likewise recognized a third party behind the robot’s behavior. Yet far fewer people in these conditions extended the deception to such third-party actors. In the superficial state condition, 62% of participants who believed the robot was programmed did not consider other actors in the scenario deceptive; in the external state condition, 43% did the same.

The finding that people in our study acknowledged that others were involved in programming the robot while not extending the deception to those people is in line with other discussions in the research community about the potentially negative ramifications of using robots as vessels (Arkin, 2018) for humans committing problematic behaviors. For instance, Gratch and Fast (2022) argue that the use of artificial agents represents a form of “indirect action” in which one party acts through another agent. Problematically, when people in power deceive or cause harm, they often choose such indirect action because it attenuates the social pressures that would normally regulate unethical behavior; for instance, it may deflect blame away from the actor and towards the intervening agent. These same researchers have found in experimental studies that artificial agents, such as AI-powered assistants, trigger the same kind of attenuation. Specifically, students who interacted with an AI agent they believed had been pre-programmed to provide critical, non-empathetic feedback while they studied for an exam assigned less blame to that agent than to an avatar they believed represented a human in another room delivering the same harsh feedback in real time.

These findings highlight the role the developer plays in human-robot interaction, particularly when the robot commits deceptive behaviors. While not directly involved in the interaction, developers appear to be held responsible by people exposed to a robot’s deceptive behavior. This matters because people exposed to deceptive robot behaviors could extend the deception to the developers and the organizations that produced the robots, potentially leading to a broader loss of trust in robotics technology.

      4.4 Limitations

      This study provides initial findings in a field of research with previously few empirical studies. However, it comes with numerous limitations. First, the study used text-based narratives as stimuli, and it would be important to expand the methodology to include video-based stimuli and real-time interactions between humans and robots.

      Second, we aimed to design scenarios representative of real-world examples of social robots’ deceptive behavior, but each of the three types of deception appeared in only one specific scenario. This way, the scenarios were tailored to their particular deception types, but they varied in other features as well. In future efforts, it would be desirable to develop a larger number of more general scenario templates such that a given scenario template could, with small adjustments, represent each of the three deception types, and each deception type would be represented by multiple scenarios.

      Third, the assessment of justifications, though informative because open-ended, afforded no experimental control. Furthermore, it was not the robot that conveyed the justifications but participants who proposed them, so these justifications may or may not be credible when uttered by a robot after committing a deceptive behavior. Future work could develop multiple versions of justifications and other responses (e.g., a weak one, a strong one, referring to norms vs. goals) for each deception type and experimentally assign them to participants, who would then evaluate their persuasiveness and credibility (see Malle and Phillips, 2023). Such work would be needed to determine the effectiveness of justifications for such deceptive behaviors.

4.5 Future directions

Although this work provides some of the first empirical evidence of how people perceive different kinds of robot deception in human-robot interaction, we believe it is only the first of many steps towards fully comprehending the nuances of robot deception. Future research should expand upon these findings by examining whether the justifications uncovered in this experiment could effectively mitigate trust loss and moral judgment towards the robot, especially in cases where the robot’s deception falls under the umbrella of dishonest anthropomorphism.

Additionally, future research should extend the findings of this paper by examining deceptive behaviors committed by robots in real-world human-robot interactions. Although vignettes are a valuable tool for uncovering initial human perceptions of robot deception, in-lab or real-world studies would provide even greater insight into how people react to robot deception. Longitudinal designs in which humans are exposed to multiple deceptive acts over a period of time would also shed light on the effects of deception on human-robot interactions. Such experiments could be carried out in environments that mirror the scenarios created for this study, giving researchers both a testable interaction and a comparison point for human perceptions of the deceptive act.

      4.6 Conclusion

Our study examined to what degree participants approved of, and judged as deceptive, three types of deceptive behavior that Danaher (2020a) argued could all be committed by robots. We then examined the justifications participants offered for the robot’s behavior and, finally, analyzed how frequently participants extended the deception committed by the robot to other entities.

The contribution of this work is to advance our understanding of deception in human-robot interaction by studying human perceptions of three distinct deception types that robots may commit. Initial empirical evidence showed that the deception types related to dishonest anthropomorphism (hidden state deception and superficial state deception) were particularly disapproved of and less likely to be judged justifiable. External state deception, including white lies, might be approved, especially when the deception serves a superseding norm. We also found that people were less divided about the deceptiveness of superficial state deception than the technology ethics literature, in the so-called “Deception Objection” debate, would suggest. Results additionally showed that participants exposed to hidden state deception tended to extend the deception to entities besides the robot, a trend not found in either the external state or the superficial state condition. This work advances moral psychology in human-robot interaction by exploring the potential consequences of humans discovering robot deception and the social norms that could be invoked as a trust repair strategy in the face of trust loss and moral judgment.

      Data availability statement

      The datasets presented in this manuscript and in our Supplementary Materials can be found in the following online repository: https://github.com/arosero98/-Exploratory-Analysis-of-Human-Perceptions-to-Social-Robot-Deception-Behaviors-Datasets.

      Ethics statement

      The studies involving humans reported here were approved by the George Mason University Office of Research Integrity and Assurance. The studies were conducted in accordance with U.S. federal regulations and institutional requirements. Participants were provided with informed consent information prior to agreeing to participate in the research.

      Author contributions

      AR: Conceptualization, Formal Analysis, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing–original draft, Writing–review and editing. ED: Conceptualization, Formal Analysis, Investigation, Resources, Visualization, Writing–original draft, Writing–review and editing. HK: Conceptualization, Formal Analysis, Investigation, Methodology, Validation, Writing–original draft. BM: Conceptualization, Funding acquisition, Methodology, Resources, Supervision, Writing–review and editing. EP: Conceptualization, Funding acquisition, Methodology, Project administration, Supervision, Writing–review and editing.

      Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This work was supported by the U.S. Air Force Office of Scientific Research, award number FA9550-21-1-0359.

Acknowledgments

We would like to thank Samriddhi Subedi and Neha Kannan for assisting in data analysis. We would also like to thank Kantwon Rogers and Alan Wagner for providing feedback on the study prior to its release for data collection.

      Conflict of interest

      The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

      Publisher’s note

      All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

      Author disclaimer

      The views expressed in this paper are those of the authors and do not reflect those of the U.S. Air Force, U.S. Department of Defense, or U.S. Government.

      Supplementary material

      The Supplementary Material for this article can be found online at: /articles/10.3389/frobt.2024.1409712/full#supplementary-material

References

Arkin R. C. (2018). Ethics of robotic deception [opinion]. IEEE Technol. Soc. Mag. 37, 18–19. 10.1109/mts.2018.2857638
Awad E. Dsouza S. Kim R. Schulz J. Henrich J. Shariff A. (2018). The moral machine experiment. Nature 563, 59–64. 10.1038/s41586-018-0637-6
Benjamini Y. Hochberg Y. (1997). Multiple hypotheses testing with weights. Scand. J. Statistics 24, 407–418. 10.1111/1467-9469.00072
Bennett C. C. Weiss B. (2022). “Purposeful failures as a form of culturally-appropriate intelligent disobedience during human-robot social interaction,” in International Conference on Autonomous Agents and Multiagent Systems (Springer), 84–90.
Bicchieri C. (2006). The grammar of society: the nature and dynamics of social norms. New York, NY: Cambridge University Press.
Bisconti P. Nardi D. (2018). Companion robots: the hallucinatory danger of human-robot interactions.
Biziou-van Pol L. Haenen J. Novaro A. Liberman A. O. Capraro V. (2015). Does telling white lies signal pro-social preferences? Judgm. Decis. Mak. 10, 538–548. 10.1017/s1930297500006987
Blass J. A. (2018). You, me, or us: balancing individuals’ and societies’ moral needs and desires in autonomous systems. AI Matters 3, 44–51. 10.1145/3175502.3175512
Bonnefon J.-F. Shariff A. Rahwan I. (2016). The social dilemma of autonomous vehicles. Science 352, 1573–1576. 10.1126/science.aaf2654
Briggs G. Scheutz M. (2014). How robots can affect human behavior: investigating the effects of robotic displays of protest and distress. Int. J. Soc. Robotics 6, 343–355. 10.1007/s12369-014-0235-1
Briggs G. Williams T. Jackson R. B. Scheutz M. (2022). Why and how robots should say no. Int. J. Soc. Robotics 14, 323–339. 10.1007/s12369-021-00780-y
Bryant E. M. (2008). Real lies, white lies and gray lies: towards a typology of deception. Kaleidoscope: A Graduate J. Qual. Commun. Res. 7, 23.
Bryson J. Winfield A. (2017). Standardizing ethical design for artificial intelligence and autonomous systems. Computer 50, 116–119. 10.1109/mc.2017.154
Choi Y. Choi M. Oh M. Kim S. (2020). Service robots in hotels: understanding the service quality perceptions of human-robot interaction. J. Hosp. Mark. Manag. 29, 613–635. 10.1080/19368623.2020.1703871
Clark H. H. Fischer K. (2023). Social robots as depictions of social agents. Behav. Brain Sci. 46, e21. 10.1017/s0140525x22000668
Coeckelbergh M. (2011). Are emotional robots deceptive? IEEE Trans. Affect. Comput. 3, 388–393. 10.1109/t-affc.2011.29
Danaher J. (2020a). Robot betrayal: a guide to the ethics of robotic deception. Ethics Inf. Technol. 22, 117–128. 10.1007/s10676-019-09520-3
Danaher J. (2020b). Welcoming robots into the moral circle: a defence of ethical behaviourism. Sci. Eng. Ethics 26, 2023–2049. 10.1007/s11948-019-00119-x
Dula E. Rosero A. Phillips E. (2023). “Identifying dark patterns in social robot behavior,” in 2023 Systems and Information Engineering Design Symposium (SIEDS) (IEEE), 7–12.
Dupont P. E. Nelson B. J. Goldfarb M. Hannaford B. Menciassi A. O’Malley M. K. (2021). A decade retrospective of medical robotics research from 2010 to 2020. Sci. Robotics 6, eabi8017. 10.1126/scirobotics.abi8017
Faul F. Erdfelder E. Buchner A. Lang A.-G. (2009). Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses. Behav. Res. Methods 41, 1149–1160. 10.3758/brm.41.4.1149
Fuoli M. van de Weijer J. Paradis C. (2017). Denial outperforms apology in repairing organizational trust despite strong evidence of guilt. Public Relat. Rev. 43, 645–660. 10.1016/j.pubrev.2017.07.007
Gratch J. Fast N. J. (2022). The power to harm: AI assistants pave the way to unethical behavior. Curr. Opin. Psychol. 47, 101382. 10.1016/j.copsyc.2022.101382
Guo E. (2022). A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook? MIT Technology Review. Available at: https://www.technologyreview.com/2022/12/19/1065306/roomba-irobot-robot-vacuums-artificial-intelligence-training-data-privacy
Hartzog W. (2014). Unfair and deceptive robots. Md. L. Rev. 74, 785–829.
Isaac A. Bridewell W. (2017). “White lies on silver tongues: why robots need to deceive (and how),” in Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Editors Lin P., Abney K., Jenkins R. (Oxford University Press), 157–172. 10.1093/oso/9780190652951.003.0011
Jackson R. B. Williams T. (2018). “Robot: asker of questions and changer of norms,” in Proceedings of ICRES.
Jackson R. B. Williams T. (2019). “Language-capable robots may inadvertently weaken human moral norms,” in 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (IEEE), 401–410.
Jones C. Deeming A. (2008). “Affective human-robotic interaction,” in Affect and Emotion in Human-Computer Interaction: From Theory to Applications. Editors Peter C., Beale R. (Springer-Verlag), 175–185. 10.1007/978-3-540-85099-1_15
Kidd C. D. Breazeal C. (2008). “Robots at home: understanding long-term human-robot interaction,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE), 3230–3235.
Kubota A. Pourebadi M. Banh S. Kim S. Riek L. (2021). Somebody that I used to know: the risks of personalizing robots for dementia care. Proc. We Robot.
Leite I. Pereira A. Martinho C. Paiva A. (2008). “Are emotional robots more fun to play with?,” in RO-MAN 2008: The 17th IEEE International Symposium on Robot and Human Interactive Communication (IEEE), 77–82.
Leong B. Selinger E. (2019). “Robot eyes wide shut: understanding dishonest anthropomorphism,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, 299–308.
Levine E. E. Schweitzer M. E. (2015). Prosocial lies: when deception breeds trust. Organ. Behav. Hum. Decis. Process. 126, 88–106. 10.1016/j.obhdp.2014.10.007
Lewicki R. J. Brinsfield C. (2017). Trust repair. Annu. Rev. Organ. Psychol. Organ. Behav. 4, 287–313. 10.1146/annurev-orgpsych-032516-113147
Liu S. Zheng L. Wang S. Li R. Zhao Y. (2016). “Cognitive abilities of indoor cleaning robots,” in 2016 12th World Congress on Intelligent Control and Automation (WCICA) (IEEE), 1508–1513.
Lupetti M. L. Van Mechelen M. (2022). “Promoting children’s critical thinking towards robotics through robot deception,” in 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (IEEE), 588–597. 10.1109/HRI53351.2022.9889511
Malle B. F. (2023). “What are norms, and how is norm compliance regulated?,” in Motivation and Morality: A Biopsychosocial Approach. Editors Berg M., Chang E. C. (American Psychological Association), 46–75.
Malle B. F. Magar S. T. Scheutz M. (2019). AI in the sky: how people morally evaluate human and machine decisions in a lethal strike dilemma. Robotics and Well-Being, 111–133. 10.1007/978-3-030-12524-0_11
Malle B. F. Phillips E. (2023). A robot’s justifications, but not explanations, mitigate people’s moral criticism and preserve their trust. OSF. 10.31234/osf.io/dzvn4
Malle B. F. Scheutz M. Arnold T. Voiklis J. Cusimano C. (2015). “Sacrifice one for the good of many? People apply different moral norms to human and robot agents,” in Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, Oregon, March 2–5, 2015, 117–124.
Matthias A. (2015). Robot lies in health care: when is deception morally permissible? Kennedy Inst. Ethics J. 25, 169–192. 10.1353/ken.2015.0007
Mellmann H. Arbuzova P. Kontogiorgos D. Yordanova M. Haensel J. X. Hafner V. V. (2024). “Effects of transparency in humanoid robots: a pilot study,” in Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 750–754.
Mott T. Williams T. (2023). “Confrontation and cultivation: understanding perspectives on robot responses to norm violations,” in 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (IEEE), 2336–2343.
Odekerken-Schröder G. Mennens K. Steins M. Mahr D. (2021). The service triad: an empirical study of service robots, customers and frontline employees. J. Serv. Manag. 33, 246–292. 10.1108/josm-10-2020-0372
Pek J. Wong O. Wong A. C. (2018). How to address non-normality: a taxonomy of approaches, reviewed, and illustrated. Front. Psychol. 9, 2104. 10.3389/fpsyg.2018.02104
Phillips E. Moshkina L. Roundtree K. Norton A. Yanco H. (2023). “Primary, secondary, and tertiary interactions for fleet human-robot interaction: insights from field testing,” in Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Los Angeles, CA: SAGE Publications) 67, 2372–2377. 10.1177/21695067231192890
Rogers K. Webber R. J. A. Howard A. (2023). “Lying about lying: examining trust repair strategies after robot deception in a high-stakes HRI scenario,” in Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 706–710. 10.1145/3568294.358017
Ros R. Nalin M. Wood R. Baxter P. Looije R. Demiris Y. (2011). “Child-robot interaction in the wild: advice to the aspiring experimenter,” in Proceedings of the 13th International Conference on Multimodal Interfaces, 335–342.
Rosete A. Soares B. Salvadorinho J. Reis J. Amorim M. (2020). “Service robots in the hospitality industry: an exploratory literature review,” in Exploring Service Science: 10th International Conference, IESS 2020, Porto, Portugal, February 5–7, 2020 (Springer), 174–186.
Rothstein N. J. Connolly D. H. de Visser E. J. Phillips E. (2021). “Perceptions of infidelity with sex robots,” in Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 129–139.
Sætra H. S. (2021). Social robot deception and the culture of trust. Paladyn, J. Behav. Robotics 12, 276–286. 10.1515/pjbr-2021-0021
Scheutz M. (2012a). The affect dilemma for artificial agents: should we develop affective artificial agents? IEEE Trans. Affect. Comput. 3, 424–433. 10.1109/t-affc.2012.29
Scheutz M. (2012b). “The inherent dangers of unidirectional emotional bonds between humans and social robots,” in Anthology on Robo-Ethics. Editors Lin P., Bekey G., Abney K. (Cambridge, MA: MIT Press), 205–221.
Scheutz M. Arnold T. (2016). “Are we ready for sex robots?,” in 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (IEEE), 351–358. 10.1109/HRI.2016.7451772
Schweitzer M. E. Hershey J. C. Bradlow E. T. (2006). Promises and lies: restoring violated trust. Organ. Behav. Hum. Decis. Process. 101, 1–19. 10.1016/j.obhdp.2006.05.005
Sharkey A. Sharkey N. (2012). Granny and the robots: ethical issues in robot care for the elderly. Ethics Inf. Technol. 14, 27–40. 10.1007/s10676-010-9234-6
Sharkey A. Sharkey N. (2021). We need to talk about deception in social robotics. Ethics Inf. Technol. 23, 309–316. 10.1007/s10676-020-09573-9
Turkle S. Taggart W. Kidd C. D. Dasté O. (2006). Relational artifacts with children and elders: the complexities of cybercompanionship. Connect. Sci. 18, 347–361. 10.1080/09540090600868912
Van Buuren S. Groothuis-Oudshoorn K. (2011). mice: multivariate imputation by chained equations in R. J. Stat. Softw. 45, 1–67. 10.18637/jss.v045.i03
Wagner A. R. Arkin R. C. (2009). “Robot deception: recognizing when a robot should deceive,” in 2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA) (IEEE), 46–54.
Winfield A. F. Booth S. Dennis L. A. Egawa T. Hastie H. Jacobs N. (2021). IEEE P7001: a proposed standard on transparency. Front. Robotics AI 8, 665729. 10.3389/frobt.2021.665729
Wortham R. H. Theodorou A. Bryson J. J. (2017). “Robot transparency: improving understanding of intelligent behaviour for designers and users,” in Towards Autonomous Robotic Systems: 18th Annual Conference, TAROS 2017, Guildford, UK, July 19–21, 2017 (Springer), 274–289.