Value Theory/Ethics

Collins (2023) on Collective Guilt Feeling

Soyo_Kim 2025. 5. 6. 10:50

Stephanie Collins. Organizations as Wrongdoers (2023)

And what about feelings of guilt, remorse, and shame—can organizations experience these feelings in a phenomenological sense (not just their ‘functional equivalents’)—and why does this matter for organizations’ blameworthiness? [3]
In Part II (Chapters 4 and 5), I demonstrate that this metaphysical thesis yields important moral conclusions: organizations can be blameworthy for a wide range of actions, attitudes, and character traits (including those of members qua members) and organizations can literally feel guilt (when members feel guilt qua members). [4]

5.3 Why Moral Self-Awareness Matters

We seek a condition on blameworthiness that appropriately renders machines non-blameworthy, while appropriately rendering most adult humans blameworthy. The subsequent task will be to argue that organizations can satisfy that condition. In this section, I argue that the relevant condition is the capacity for moral self-awareness.

An entity has moral self-awareness when it has a phenomenal belief-like attitude (‘awareness’) to the proposition ‘I will do wrong’, ‘I have done wrong’, or ‘I am doing wrong’. There are three components to moral self-awareness: a wrongness component (the ‘moral’ part), a first-personal self-identifying component (the ‘self’ part), and the phenomenal belief-like component that is best captured by the notions of awareness, grasping, feeling, or acquaintance (the ‘awareness’ part). Present-day machines have the capacity for the first two of these, but not the last. This explains why they cannot be blameworthy. Most adult humans have the capacity for all three. This explains why they can be blameworthy. In Section 5.5, I will argue that organizations, too, can have all three. In this section, I’ll explain the three components and motivate their importance for blameworthiness.

So: what is moral self-awareness? And why is the capacity for it necessary for blameworthiness? The ‘self’ component is familiar from discussions of ‘de se’ beliefs, originating in John Perry (1979). To represent content that’s de se—‘about the self’—List and Pettit’s robot must be capable of representing not just ‘whichever robot is on the table is about to do wrong’ (a ‘de dicto’ belief), nor even ‘this particular robot, which happens to be on the table, is about to do wrong’ (a ‘de re’ belief).

Rather, the robot must be capable of representing ‘whichever robot is on the table is about to do wrong and I am the robot on the table’ or ‘this particular robot, which happens to be on the table, is about to do wrong and I am that robot’. The robot can have such representations in a functionalist sense: it can move to correct itself when it’s about to do wrong, it can say ‘I’m sorry’ when it has done wrong, and it can have the functionally characterized mental states that are entailed by ‘I am a wrongdoer’, such as ‘I am less good than I could have been’ (List and Pettit 2011, 187–8).

The ‘moral’ component of moral self-awareness requires that the entity has the concept of ‘wrongness’. Again, a simple robot can satisfy this. As I mentioned above, machines can abide by hard constraints. There is no roadblock to them labelling those constraints ‘what morality requires’ or ‘constraints the violation of which would be wrong’. In this sense, they can possess the concept of wrongness. Satisfying the moral component doesn’t require that the entity has the correct theory of morality, or even that its moral beliefs are ever true: a blameworthy entity might be systematically incorrect about whether it has done wrong, as callous humans are. Indeed, some callous humans never even bother to operationalize a flawed concept of ‘wrongness’. Thankfully for their blameworthiness, my claim is that blameworthiness presupposes the capacity to operationalize the concept of wrongness. So the moral component is relatively easy to satisfy.

A robot can draw all sorts of inferences, and perform all sorts of behaviours, based on the belief ‘I have done wrong.’ So it can believe ‘I have done wrong’, in a non-phenomenal sense of ‘believe’. This gives an entity a lot, in terms of its relation to that proposition.