Autonomous System Harm Allocation Design State
Class
06f02d68
http://proethica.org/ontology/intermediate#AutonomousSystemHarmAllocationDesignState
Definition
State in which a professional engineer is engaged in designing or evaluating the decision logic of an autonomous system — such as a driverless vehicle operating system — that must be pre-programmed to allocate harm among identifiable parties (passengers, pedestrians, cyclists, bystanders) in unavoidable crash scenarios, where the system's algorithmic choices constitute irreversible, pre-committed moral decisions about whose safety is prioritized, activating obligations to evaluate the ethical frameworks underlying those choices, disclose tradeoffs to the client, and consider public welfare implications beyond the immediate client relationship.
Properties
Subclass of
State (http://proethica.org/ontology/core#State)
Source Evidence
Source Text
does the vehicle's system choose the outcome that will likely result in the greatest potential for safety for the vehicle's passengers or does the vehicle's software system instead choose an option in which the least amount of potential harm is done to any of those involved in an accident
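The two allocation policies the source text contrasts can be sketched as follows. This is an illustrative sketch only: the option names, harm scores, and function names are all invented for illustration, not part of the case or any real vehicle system.

```python
# Hypothetical harm scores for an unavoidable crash scenario, contrasting
# the two pre-programmed allocation policies the case describes.

def passenger_priority(options):
    """Choose the option with the least expected harm to the vehicle's passengers."""
    return min(options, key=lambda o: o["passenger_harm"])

def minimize_total_harm(options):
    """Choose the option with the least total expected harm to everyone involved."""
    return min(options, key=lambda o: o["passenger_harm"] + o["other_harm"])

# Invented scenario: swerving shifts risk onto passengers; braking straight
# shifts risk onto others (pedestrians, cyclists, bystanders).
options = [
    {"name": "swerve",         "passenger_harm": 3, "other_harm": 1},
    {"name": "brake_straight", "passenger_harm": 1, "other_harm": 5},
]

passenger_priority(options)["name"]   # -> "brake_straight"
minimize_total_harm(options)["name"]  # -> "swerve"
```

The point of the sketch is that the two policies can select different outcomes for the same scenario, which is exactly the irreversible, pre-committed moral choice the class definition describes.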
TTL
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix proethica_intermediate_extended: <http://proethica.org/ontology/intermediate-extended> .
<http://proethica.org/ontology/intermediate#AutonomousSystemHarmAllocationDesignState> a owl:Class ;
rdfs:label "Autonomous System Harm Allocation Design State" ;
rdfs:comment "State in which a professional engineer is engaged in designing or evaluating the decision logic of an autonomous system — such as a driverless vehicle operating system — that must be pre-programmed to allocate harm among identifiable parties (passengers, pedestrians, cyclists, bystanders) in unavoidable crash scenarios, where the system's algorithmic choices constitute irreversible, pre-committed moral decisions about whose safety is prioritized, activating obligations to evaluate the ethical frameworks underlying those choices, disclose tradeoffs to the client, and consider public welfare implications beyond the immediate client relationship." ;
rdfs:subClassOf <http://proethica.org/ontology/core#State> .
Metadata
Ontology
Type
Class
Content Hash
06f02d68e95e03b4...
Last Updated
2026-03-12 16:49
Extraction Provenance
Discovered in Case
165
Discovered In Pass
1
Discovered In Section
facts
First Discovered At
2026-02-27T23:23:14.080776+00:00
First Discovered In Case
165
Generated
2026-02-27T23:23:14.080776+00:00
Was Attributed To
Case 165 Extraction