1. Introduction
The widespread emergence of autonomous driving systems (ADS), autonomous maritime vessels (AMVs) and uncrewed aircraft systems (UAS) (collectively, AVs) raises significant questions of liability arising from design flaws, malfunctions, accidents and other errors.
This paper considers the legal and ethical implications of ADS and the laws of war, assessing liability in various scenarios, including wartime, with a focus on the "accountability gap".
We compare Australia’s current framework to that of our AUKUS alliance partners, the United Kingdom and the United States of America, using case studies and hypothetical scenarios to demonstrate the challenges in attributing liability where an AMV, UAS or ADS is involved.
2. Executive Summary
There is a dearth of tested law on the issues that may arise around the deployment of AVs, including AMVs and UASs. Liability in Australia under common law principles, legislation and regulation, including the possible application of the Australian Consumer Law (ACL) to accidents and incidents in both civilian and military contexts, is untested. As often happens in fast-developing fields of technology, the formulation, and especially the testing and application, of legal principles lags behind the development and deployment of the technology.
Ultimately, deploying autonomous vehicles requires careful consideration of public perceptions, safety expectations and regulatory frameworks. Addressing concerns about safety, control and liability will be crucial for the successful integration of automated technology into society.
Current regulatory framework
Australian road rules do not currently permit the in-service operation of fully autonomous vehicles on public roads. International legal frameworks regulating the use of ADS and other AVs inform the framework for Australia’s proposed AV Safety Law, expected to be implemented in 2026.
Current national law applied to AMVs is not fit for purpose as it lacks the flexibility to accommodate the different operational parameters of AMVs. Maritime defence and civilian operators have raised regulatory concerns due to the exemption regime permitting the operation of AMVs on a case-by-case basis.
There is a gap in the attribution of responsibility for civil liability involving defence-employed AVs. The land, maritime and aviation defence industries will continue to face complex legal issues concerning tortious, contractual, statutory and criminal liability.
Emerging regulatory framework
The Australian national framework for automated vehicles is expected to commence in 2026. The Automated Vehicle Safety Law (Cth) (AVSL) will regulate the safe operation of automated vehicles when in service, as well as the first-stage importation process.
Australia’s emerging ADS framework and the United Kingdom’s upcoming framework address the issue of “accountability gaps” by imposing liability on the corporate entity responsible for creating the ADS. However, the exact standard to which Australian entities will be held is not yet known.
Regulation for AMVs and UAS in the defence space is likely to be less restrictive, allowing the ADF to fulfil its intention to create a partially uncrewed force in the coming decades. However, the Commonwealth has not flagged a specific legislative model.
Other areas of concern
Autonomous systems are, and will increasingly be, targets for cyberattacks. As cyber criminals become more sophisticated, nations must implement measures to adapt and respond rapidly to the technological and privacy-related challenges of in-service autonomous systems. Both the civilian and defence industries have incentives to pre-empt cybersecurity requirements in ADS and in AVs more generally.
In the conflict realm, cyber disruption to autonomous systems, both warfighting and infrastructure, will be a major tactical and strategic consideration. The defence industry will need to implement pre-emptive measures in AVs to minimise vulnerability.
To integrate seamlessly into the civilian space, ADSEs and the defence industry must also consider the issue of public perception and acceptance.
3. Background
A number of systems exist for classification of AVs in civilian and defence contexts. The three major systems considered in this paper are:
the Society of Automotive Engineers (SAE) J3016 six Levels of Driving Automation (Levels 0–5) (‘J3016’);
NATO’s SG/75 Four-Level Automation System (‘NATO’); and
the International Maritime Organisation (‘IMO’) four-point taxonomy of maritime autonomous surface ships (‘MASS’).
Classification Level / Degree | Description
SAE J3016 Levels of Driving Automation
Level 0 (No Automation) | Human driver responsible for all aspects of driving.
Level 1 (Driver Assistance) | Vehicle assists with either steering or acceleration/deceleration. Human driver responsible for the rest.
Level 2 (Partial Automation) | Vehicle controls both steering and acceleration/deceleration under certain conditions.
Level 3 (Conditional Automation) | Vehicle performs all driving tasks under certain conditions. Human must take over when requested.
Level 4 (High Automation) | Vehicle performs all driving tasks and monitors the environment in specific conditions. No human intervention needed within these conditions.
Level 5 (Full Automation) | Vehicle performs all driving tasks under all conditions without human intervention.
NATO SG/75 Four-Level Classification
Level 1 (Remote Control) | System reactions and behaviour depend on operator input.
Level 2 (Supervised Autonomy) | System performs tasks autonomously under human supervision.
Level 3 (Full Autonomy) | System performs tasks independently for extended periods; human can override if needed.
Level 4 (Cognitive Autonomy) | Behaviour depends upon a set of rules that can be modified for continuously improving goal-directed reactions and behaviours within an overarching set of inviolate rules/behaviours.
IMO Four-Point Taxonomy of MASS
Degree 1 (Automated Ships) | Automated processes and decision support with crew onboard. Crew can take control if needed.
Degree 2 (Remotely Controlled with Crew) | Ship remotely controlled but with crew onboard to take control if necessary.
Degree 3 (Remotely Controlled without Crew) | Ship controlled from a remote location with no crew onboard.
Degree 4 (Fully Autonomous Ships) | Ship operates autonomously without any human intervention.
PART I: Current Framework for Autonomous Systems
4. Autonomous Driving Systems
Australia
Vehicles with conditional or higher automation (J3016 Levels 3–5) are prohibited for road use as they do not meet Australian Design Rule 90. Australian drivers are required to retain ‘proper control’ of any vehicle even with driver assistance features engaged. Therefore, the driver will be responsible for any accidents resulting from a failure to maintain proper control of a vehicle.
Organisations seeking to run automated vehicle trials may do so in several jurisdictions with Ministerial approval and in compliance with the National Transport Commission AV trial guidelines. State and Territory permits or exemptions are also required to legally operate a conditionally (Level 3) or fully autonomous vehicle (Levels 4 and 5) on public roads.
United Kingdom
The Automated and Electric Vehicles Act 2018 (UK) (UK Act) was passed by the UK Parliament in 2018. Under the UK Act, insurers are liable for damages arising from an accident caused by an automated vehicle driving itself. Liability is imposed on a no-fault basis, with exclusions for contributory negligence and unauthorised software alterations.
On 20 May 2024, the UK passed the separate Automated Vehicles Act 2024 (UK), which sets out the regulatory requirements for the widespread rollout of ADS. The primary purpose of the Act is to require ADS to achieve a level of safety and road competence equal to or higher than that of human drivers. However, the required standards are yet to be published.
The UK law attributes liability for any crash or damage caused by a car in self-driving mode to the manufacturer, software developer or insurance company. Further development of ‘black-box’ like technologies similar to commercial aircraft that record the control inputs at the time of an incident will be required to assess whether the car is in self-driving mode at the time of any crash or damage.
Australia’s AVSL effectively mirrors the UK’s 2024 Act (refer to ‘Emerging Framework for Autonomous Systems’ at paragraph 9).
Germany
The German Road Traffic Act, as amended in 2017 and 2021, permits the operation of both J3016 Level 3 and Level 4 systems in various areas throughout Germany. The 2017 amendments permitted the operation of J3016 Level 3 ADS provided they meet specific requirements to operate on public roads, including the ability to warn the human driver when the ADS needs to hand autonomous operation back to the driver. However, the first vehicle compliant with these requirements was only approved in Germany in December 2021. Since 2021, certain J3016 Level 4 ADS may operate within ‘specified operating areas’ on public roads under ‘technical supervision’ (which may be remote control).
The driver or ‘keeper’ (being the beneficial owner) of Level 3 or Level 4 ADS under German law is still strictly liable for any incident caused by the ADS’ operation, with the caveat that incidents caused by an ADS are subject to higher statutory liability caps. Manufacturers are only liable in incidents under German product liability law if certain operational requirements, namely that the ADS is protected from unauthorised interferences and that it may self-diagnose and safely transition to human control where necessary, are not met.
United States
In the United States, twenty-four states have implemented ADS laws.
In Utah and Nevada, an ADS must be equipped with recording devices and data must be available for law enforcement and other purposes.
ADS at J3016 Levels 4 and 5 are permitted on all roads in South Dakota provided they are capable of complying with road rules and meet the minimum insurance requirements for regularly manned vehicles. In South Dakota, for the purposes of road traffic and motor vehicle laws, the ‘automated driving system’ is deemed the driver and operator.
In California and Nevada, manufacturers must ensure vehicles can operate safely and comply with relevant laws and they must certify safety and cybersecurity measures.
5. Maritime Commercial and Private Vessels
There is a range of AMVs currently operating in Australia and internationally. These vessels are used in marine surveying, scientific research, oil and gas operations, and cargo transport: the Yara Birkeland, for example, is a fully autonomous cargo ship operating at MASS Degree 4.
The same Australian regulatory framework applies to AMVs as other marine vessels, including for survey standards and crewing requirements. This is because the definitions of ‘vessel’ in the Navigation Act 2012 (Cth) (‘Navigation Act’) and the Marine Safety (Domestic Commercial Vessel) National Law Act 2012 (Cth) (‘Marine Safety Act’) are very broad and include any kind of vessel used in navigation by water. Therefore, the Australian Maritime Safety Authority (‘AMSA’) oversees the operation of AMVs.
Before AMSA will certify a maritime vessel, permitting it to operate, it must have both a:
Certificate of Survey issued by AMSA which states the vessel has complied with the Marine Safety Act; and
Certificate of Operation issued by AMSA which states the vessel’s operational class and its operational areas (and any other requirements).
However, AMVs operating without any responsible crew (i.e., operating at MASS Degree 3 or 4) are incapable of meeting the Certificate of Survey or operational requirements to have responsible crew under Marine Order 503 and may not obtain the necessary approvals.
Therefore, under the present regime, all AMVs require one of several exemptions from those requirements under the Marine Safety Act to actually operate.
The exemptions available are:
For operation less than 90 days – exemption from certificates of survey, certificates of operation or load line certificate (Form 777);
For longer operations (where the below apply):
AMVs less than 12m long and operating exclusively within 15nm of the mainland coast do not require a certificate of survey but do require an exemption for class C restricted operations. Vessels longer than 12m will require both. To obtain this exemption the AMV must meet several construction specifications.
For longer operations (where longer than 12m or otherwise ineligible for the above):
Exemption from certificate of survey (Form 579); and
Specific exemption from certificate of operation (Form 547).
If a ship is operated without safety management systems or other precautions that ensure the safety of that vessel, the owner and master of that ship will be deemed to have committed an offence, and will be criminally liable for a malfunction or other unsafe operation.
In circumstances where an AMV is ‘not safe if used for a purpose for which it was designed’, the designers, manufacturers and suppliers of that autonomous system will be liable. It is therefore foreseeable that a manufacturer or supplier of a vessel designed to operate autonomously will be liable if the vessel is not safe to operate for that purpose.
Different regimes apply to AMVs when operated outside of Australia’s Exclusive Economic Zone. Under the United Nations Convention on the Law of the Sea (‘UNCLOS’) international law, ‘ships’ are afforded the following two sets of rights that are vital for states and their registered commercial vessels:
Rights of innocent passage, transit passage, freedom of navigation; and
The right of sovereign immunity (for non-commercial state-run ships).
These rights (primarily the navigational rights) are crucial for shipping and commercial operations as they are necessary for navigating vital commercial sea routes.
Traditionally, UNCLOS assumes that ships require crew to meet its prescriptive requirements. UNCLOS requires states to register only vessels with adequately trained crew and adequate equipment as ‘necessary to ensure safety at sea’. However, UNCLOS also allows compliance in accordance with ‘generally accepted international procedures and practices’.
The flag state has broad duties to ensure it has effective jurisdiction over maritime vessels flying its flag under Article 94 UNCLOS. For AMVs, this will effectively mandate some form of legislative regime dictating liability of AMVs but, as discussed below, for the time being, existing legislation will likely permit their operation in international waters.
Submarines have the same rights of innocent passage as long as they navigate on the surface displaying their state flag. While underwater, subsurface AMVs, like crewed underwater vessels, operate in a legal grey-zone on the high-seas where their coverage by UNCLOS outside of innocent passage rights is unclear.
Whether states accept maritime AVs as ‘ships’ for the purposes of UNCLOS will dictate the nature of ‘international procedures and practices’. At present, many maritime nations have definitions of ‘ship’ broad enough to catch AMVs. Thus, if the current trend continues, remotely managed or autonomous maritime vessels will soon be considered capable of ensuring safety at sea, granting them the aforementioned rights as a matter of customary international law and greatly increasing the commercial value of AMVs to those states. The more this understanding is accepted as ‘standard international procedure and practice’, the more concrete the rights afforded to AMVs will become. On this jurisdictional understanding, the responsible party in a maritime incident would likely be the ‘master’, or remote operator, who has ultimate authority over the ship as a captain would. Through vicarious liability, the owners of ships are then generally responsible.
However, compliance with the obligation of flag states to ensure that ships are safe includes ensuring that masters, officers and crew are aware of and observe regulations relating to preventing collision. The International Maritime Organisation’s Convention on the International Regulations for Preventing Collisions at Sea (‘COLREGs’) provide regulations for AMV collision avoidance with which all vessels currently must comply as ratified by Australian law under Marine Order 30.
The COLREGs are a form of ‘road rules’ for international waters and provide a baseline for establishing when an AMV has breached a rule and is liable for accidents caused by that breach. As with UNCLOS, these definitions depend on what ‘ships’ are, but they are quickly becoming accepted in international law as including AMVs. AMV systems are already in place that claim to comply with the COLREGs during passage at levels comparable to or higher than those of human crews.
Presently in Australian waters, COLREG compliance applies only to the extent of the exemption permitting the AMV to operate. Regardless, the requirement that exemptions not be granted where they would seriously jeopardise safety will likely continue to make COLREG compliance certification a prerequisite for domestic AMV approvals.
6. Maritime Autonomous Defence Vessels
Classifying Defence Vessels
Australia’s Marine Safety Act carves out ‘defence vessels’, and this has caused complications in regulating autonomous maritime surface and, particularly, subsurface vessels. A similar carve-out exists for ‘naval vessels’ under the Navigation Act.
AMVs are generally captured and regulated by the Acts. However, for the exemption for ‘defence vessels’ under the Marine Safety Act to apply, the vessel requires crewing by ADF personnel or that it is operated as a ‘naval auxiliary’. As a consequence, there is ambiguity as to whether uncrewed contractor-operated ADF vessels (where technically owned by the ADF) are subject to that Act without a clear chain of liability.
In its submission to the current maritime framework for autonomous systems, the Trusted Autonomous Systems Defence Cooperative Research Centre (TASDCRC) highlighted that the current exemption process is inefficient, opaque and uncertain, leading to increased financial and opportunity costs for both vessel owners and AMSA.
Additionally, the existing framework fails to recognise or facilitate appropriate mechanisms for conducting testing. It does not reflect or support the strategic, regulatory and operational agility that modern defence forces, working closely with diverse industry stakeholders, require.
The appropriate classification of AMVs is critical in determining responsibility and liability. In international waters, warships are permitted to exercise belligerent rights during maritime conflict. Article 29 of UNCLOS defines a ‘warship’ by four cumulative requirements, one of which is that it must be ‘manned by a crew under regular armed forces discipline.’ This definition, derived from the 1907 Hague Convention VII on the Conversion of Merchant Ships into Warships, is arguably outdated but still prescriptive. On a plain reading, AMVs do not meet this definition due to the absence of a crew, but again state practice appears to point towards the acceptance of AMVs as warships.
Ultimately, from a maritime security perspective, resolving the issues of classification may not be necessary. Provided that an autonomous vessel qualifies as a ‘ship’ and is operated by a government for exclusively non-commercial purposes, it would enjoy similar rights as a warship under UNCLOS, including the rights of visit.
However, as such vessels may be ‘auxiliary’ vessels, they would be subject to attack under naval warfare laws without possessing the same belligerent rights and privileges as warships.
The present regulatory framework for defence maritime vessels consequently leaves defence AMVs treated separately from ADF-crewed vessels and creates barriers to the entry and operation of AMVs.
These barriers in international law do not appear as though they will hamper commercial prospects to develop AMVs for Australia or its AUKUS partners, all of which have plans to implement uncrewed warships in their naval fleets.
7. Autonomous Aviation Systems - Defence
Uncrewed aircraft systems (‘UAS’) (i.e., anywhere between NATO Levels 1–4), generally drones, are permitted only for restricted ADF use under the UAS rules set by the Defence Aviation Safety Authority. Although restrictions vary with the size of the UAS, the overlapping operational parameters for the three standard scenarios (i.e., without DASR approval) in the ADF are:
The aircraft must weigh less than 150kg (or less depending on the scenario);
An operator must be able to intervene “during all stages of the flight”;
The UAS must not leave the visual line of sight of the operator;
The UAS must only be operated during daytime and no higher than 400ft.
Exceptions to the standard scenarios apply for maritime usage (12 nm or further offshore) or for defence exercises, where the UAS:
has ‘suitable risk controls’ (i.e., a sufficiently developed autonomous system) when departing from the above general requirements; and
operates only over Defence Controlled Land, or water designated for a planned defence exercise, during the designated period.
It is unlikely that, in these circumstances, the ADF will permit contractors to operate their own UAS; instead, the ADF will have to purchase the UAS directly and assign ADF operators. If contractors wish to privately design and test their UAS (for drones of 25–150 kg), they must hold a remotely piloted aircraft operator’s certificate.
AUKUS regulation
The United States has no federal legislation dealing with the development and deployment of (lethal) defence AVs. US Department of Defense policy presently requires only that AVs “allow commanders and operators to exercise appropriate levels of human judgment over the use of force”.
In the UK, UAS are governed by the same regimes that govern conventional manned aircraft, being the Military Aviation Authority Regulatory Article 1000. This permits contractors to operate UAS under Ministry of Defence contracts.
Export controls in the UK limit the export of UAS, including ‘complete unmanned aerial vehicle systems (including cruise missile systems, UAVs and remotely piloted aircraft systems)’ capable of delivering at least a 500-kilogram payload to a range of at least 300 kilometres.
8. Liability for errors
As discussed above, ADS are presently banned in Australian civilian contexts and operators of vehicles will be liable for accidents caused by the ADS. The following existing principles of law assume that an ADS can legally be operated on a road. The more general principles are also applicable to AMVs and UAS, and a defence-specific scenario is addressed separately.
There have been no court cases in which liability for a collision caused by a fully autonomous vehicle has been judicially determined. The legal frameworks for autonomous vehicles in the aforementioned jurisdictions are still developing, and most incidents involving these vehicles have been managed through settlements or regulatory processes rather than litigation. In those jurisdictions, particularly the United States, ADS manufacturers have succeeded in placing the blame for accidents on inattentive drivers who failed to retake control of the vehicle. These incidents are discussed further below.
Contractual Liability
A claim for breach of contract enables a party who has suffered loss as a result of another party’s breach of a contractual obligation to recover damages for that breach.
For an injured party to make out a claim for damages based on a breach of contract, that party would have to prove that a contract existed, that it was a party to the contract, that the contract contained either an implied or express warranty as to the operation of a system, that the warranty was breached and that it suffered loss or damage as a result of that breach.
A consumer is generally unable to pursue an AV manufacturer for the failure of an autonomous system, as usually the relevant warranties are not in the purchaser’s contract with the AV reseller but are in the contract between the manufacturer and the developer of the relevant software.
Astute purchasers of AV systems, such as the Commonwealth government and prime defence contractors are likely to negotiate contracts with suppliers of AV systems that contain warranties and performance guarantees as to operation of those systems, thus enabling breach of contract claims where systems fail to perform in accordance with contractual specifications. Performance specifications may be expressed in terms of a percentage of ‘correctly’ made automated decisions or otherwise.
In an ADF context, the ASDEFCON suite of contracts typically contains these types of terms but will also contain (or contractors will seek to negotiate) liability caps and indemnities for manufacturers, and exclusions of liability for third-party damage.
Tortious Liability
A claim based on the tort of negligence requires a party adversely affected by an AV incident to demonstrate that the party responsible for the AV (which may be the manufacturer, the owner or the operator) owed the injured party a duty of care not to cause injury or loss, and that injury or loss arising from an AV malfunction was reasonably foreseeable. It is arguable that a manufacturer of the equipment owes a duty of care to its ‘end consumer’ rather than to third parties who may be affected by the use of the equipment. However, for vehicles, there is authority for the view that an extended duty of care is owed to those ‘endangered by its probable use’. While this extended duty has been posited in relation to ADS, arguably the same contention could be advanced in relation to AMVs.
Determining whether a duty of care is owed, who owes it, to whom it is owed and whether the duty has been breached all pose challenges. Especially where an AV utilises “black box” machine learning protocols, it may be almost impossible to foresee whether the AV’s decisions may give rise to a risk of harm and to whom that harm, if it occurs, may be caused. The AV’s decisions may simply be the result of the machine learning operating as intended. Much will depend on the type of fault and whether harm arising from a coding fault can ever reasonably be foreseen.
At present, the ultimate statutory (e.g. the ‘proper control’ requirement) and common law barrier to any plaintiff (e.g. driver/operator of a vehicle) is demonstrating that they made no intervening act or omission that caused the crash. Most ADS operate at J3016 Level 2 or 3 and are therefore not technically ‘fully self-driving’. AV manufacturers can seemingly rely on their use of disclaimers and warnings to remain attentive to AV technology to demonstrate ‘reasonable steps’ to prevent the harm and that the user’s inattentiveness is a break in the chain of causation.
The responsibility of drivers for negligence has been upheld within the United States, where Tesla has been found not to be liable even where AV software failures have caused vehicles to swerve off roads, ending in fatalities.
A US case study provides insight into the application of these principles. The first fatality involving a self-driving car occurred when a Tesla’s systems failed to distinguish a white truck trailer from the brightly lit sky behind it, causing the Tesla to plough into the trailer and killing its sole occupant. Tesla was ultimately cleared of liability because the Autopilot system was intended to be operated with driver attention. If Australia reaches the point of permitting self-driving, a failure to remain attentive and retake control would likewise likely constitute a break in the chain of causation.
Government Departments
The Crown and government departments more generally are not afforded blanket immunity from tortious liability under Australian law and may have a duty of care even when acting within their authority. The tort of misfeasance allows those harmed by misuse of governmental powers to pursue damages caused by that misuse.
In the context of granting exemptions or permitting AVs to operate, the judicial position is presently that the government’s exercise of discretion, for example through policy or application processes, can only result in negligence where there is no reason not to use the discretion to protect a plaintiff from harm. The high threshold required for misfeasance makes it unlikely to appear in AV contexts.
Defence contexts for tortious liability
In ADF contexts, international operations give rise to the jurisdictional issue of determining where an action occurred and therefore, the law of which jurisdiction is applied to determine whether the action was tortious. This may give rise to a difficulty in applying Australian rules of tortious liability in the context of international operations.
The Commonwealth owes a limited common law duty of care to both civilians and ADF personnel. In civilian contexts, ADF members responsible for supervision of AVs may be protected by their statutory exclusion from liability while performing their duties in preparation for ‘an emergency’.
Crucially, any claim of negligence occurring in the course of ‘actual operations against the enemy’ will not be considered by the courts, as the ADF is immune from owing a duty of care in such operations. ‘Actual operations against the enemy’ does not include patrols or non-wartime manoeuvring; thus, in a training scenario, the immunity does not apply.
People suffering harm from the malfunctioning of an ADF AV will still struggle to demonstrate that their harm was ‘reasonably foreseeable’, because the attribution-of-fault issue for machine-learnt behaviour will again apply.
However, as previously discussed, maritime AVs are expected to conform with COLREGs and other maritime law and thus, any collisions resulting from insufficient compliance with those rules (especially as mandated by ratified Australian law) and where not protected by Commonwealth immunity during actual operations, arguably will be ‘reasonably foreseeable’ for the purpose of proving negligence.
Additionally, a breach of the COLREGs will constitute a breach of Marine Order 30 and thus of the Navigation Act, attracting an additional penalty for the vessel’s owner of up to 6,000 penalty units (approximately $1.878 million).
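The dollar figure quoted above follows from straightforward penalty-unit arithmetic. A minimal sketch of the conversion, assuming the Commonwealth penalty unit value of $313 that applied from 1 July 2023 (the value is indexed, so the current rate should be confirmed before relying on any calculation):

```python
# Convert a penalty expressed in Commonwealth penalty units to dollars.
# Assumption: penalty unit = $313 (the rate from 1 July 2023; the rate
# is indexed and changes over time, so verify the current value).
PENALTY_UNIT_AUD = 313


def penalty_in_dollars(units: int) -> int:
    """Return the dollar value of a penalty expressed in penalty units."""
    return units * PENALTY_UNIT_AUD


# Maximum penalty discussed above: 6,000 penalty units.
print(f"${penalty_in_dollars(6_000):,}")  # $1,878,000, i.e. ~$1.878 million
```

The same conversion applies to any Commonwealth penalty expressed in penalty units, which is why statutes cite units rather than dollar amounts.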
Statutory Liability
The Australian Consumer Law (ACL) under Schedule 2 of the Competition and Consumer Act 2010 (Cth) may provide a statutory avenue for a claim based on strict liability rather than a claimant having to satisfy the higher fault requirements of negligence.
The ACL applies to transactions involving ‘ships, aircraft, vehicles, components, software and subassemblies’. Purchasers of AVs may then rely on demonstrating that ‘vehicles’ and ‘software’ are, for example:
not of ‘acceptable quality’: not acceptable safety and/or compliance with road rules for the price and condition of the AV; or
not fit for the ‘disclosed purpose’: e.g., when advertised for safe autonomous operation.
Additionally, a purchaser may argue that the manufacturer has engaged in misleading and deceptive conduct or made false and misleading representations as to the level and reliability of automation of the AV.
Either the manufacturer of the AV itself, or the component/software manufacturer may be defendants in a claim under the ACL.
The present barriers to the use of the consumer-based portions of the ACL are the current upper limit of $100,000 on the value of goods and the effectiveness of disclaimers in establishing consumer expectations. Again, Tesla has successfully used disclaimers (despite its use of the ‘Fully Self Driving’ label in the US) to defend such allegations in US jurisdictions. Although untested in Australia (as these technologies are not yet legal; see section 2), the effective disclaimers state, for reference:
“In a TESLA vehicle, before enabling Autopilot, you must agree to ‘keep your hands on the steering wheel at all times’ and to always ‘maintain control and responsibility for your vehicle.’”
Under Part 3-5 of the ACL, the manufacturers of AVs (of all types) also have duties to third parties to ensure the resilience and effectiveness of systems to the extent that they are free from defects for operation, such as GPS and sensory inputs. This has specific implications which are discussed below (see Defence contexts for ACL obligations) but are also applicable for civilian contexts.
In the UK, the Consumer Protection Act provides a regime similar to the ACL except that it puts a higher burden on the plaintiff to demonstrate that relevant standards, inspection and testing requirements were insufficient to prevent the defect.
Questions of liability in ACL actions are complex. As, under present statute, drivers of ADS and other AVs will be expected to retain control in almost any context (for example, in SAE J3016 Level 1-3 cars), manufacturers have multiple avenues to demonstrate that the vehicles meet statutory consumer guarantees and that harm was caused by an individual’s inattentiveness or error.
The US Tesla case discussed above could arguably be an example of a vehicle not being fit for its ‘disclosed purpose’. The legal implication of Tesla’s claim of its vehicle being ‘Full Self-Driving’ as against its disclaimer has not been tested in Australian courts. Under the ACL, the application of relevant consumer guarantees cannot be excluded by contract disclaimers, but it is difficult to predict the force an Australian court will give to Tesla-type disclaimers stating that drivers must still retain proper control during autonomous operation. Australian courts have previously been sceptical of the effectiveness of such disclaimers in defeating misleading and deceptive conduct claims.
Assuming that a purchase of an AV falls within the ACL threshold, if a J3016 Level 5 AV could legally be operated in Australia and was involved in a crash caused by the malfunction or failure of its systems, the vehicle’s owner would likely have a claim against the manufacturer under the ACL that those systems were in breach of an ACL guarantee.
Such claims would necessarily depend on the advertising and the presented purpose of such vehicles. For example, advertising that an AV operates in compliance with road rules could trigger a claim for breach of a consumer guarantee if the AV did not do so. A similar basis for claim may arise from the malfunction of e.g. a driverless taxi where the passenger could claim against the taxi owner or the supplier of the self-driving technology.
Defence contexts for ACL obligations:
The ACL has been flagged as particularly relevant for defence contractors. A third party plaintiff, not necessarily a consumer, affected by a crash demonstrably caused by a “safety defect” in the AV will have a direct cause of action against the manufacturer (likely the defence contractor) under Part 3-5 of the ACL.
The relevant test for establishing that a safety defect existed is whether the AV’s safety parameters were ‘not such as persons generally are entitled to expect’.
Safety defects may be programming errors, manufacturing defects or design defects. Design defects may be apparent in the design documentation, and manufacturing defects may be apparent on inspection. Programming errors, particularly in machine-learning environments, may be difficult to demonstrate, as noted above.
Although ACL-based claims are flagged by academics as the best option for third party plaintiffs, several defences are available to manufacturers:
that the defect did not exist at the time of supply of the AV (to the ADF, for example);
the ‘state of the art’ defence, i.e. that the state of scientific or technical knowledge at the time of supply was not such as to enable the defect to be discovered;
when targeting component/software manufacturers, that the component was only defective because of its inclusion in a larger design, e.g. where the software interacted poorly with a particular type of sensor placed by another party on an AV.
As in a negligence action, any non-compliance with COLREGs or other maritime law (including by having insufficient sensory input) resulting in harm is likely to be a safety defect under the ACL equally for ADF and civilian AMVs.
The following scenario is adapted from a rescue scenario example given by Dr. Walker-Munro. It is a detailed example of the application of a safety defect claim under the ACL in a defence context:
An ADF-operated AMV is utilised in a training operation off Australia’s shores, on its way to a proving ground. The AMV navigates autonomously using sonar and other sensors to detect other vessels. However, its sensors fail to properly detect a small fishing vessel; the AMV collides with and destroys the vessel, injuring the crew.
The crew bring an action for a safety defect claim under Part 3-5 of the ACL. It is safe to assume that the Court will take jurisdiction over such a case as being ‘sufficiently connected’ to Australia.
The plaintiffs will then face the challenge of identifying the safety defect: demonstrating either a design defect (i.e. insufficient design of the detection or navigation systems) or a manufacturing defect (e.g. poor wiring or obstruction of, or damage to, the detection systems).
Even if a safety defect demonstrably exists, the manufacturer may be able to make out the ‘state of the art’ defence i.e., if it can demonstrate that the AMV employed what was state of the art technology and no technology at the time of design/manufacture could detect the defect. For example, an error arising from an unknowable issue in machine-learning algorithms is likely to be undetectable until it has first presented itself.
If the plaintiffs can demonstrate that a safety defect did exist and that it caused their injuries or loss, they may only recover damages for injuries/death or for damage to land, buildings or fixtures (not applicable in maritime contexts, except for private piers, etc.).
Criminal Liability
In his academic analysis of criminal responsibility in autonomous weapons systems, Jonathan Kwik posed the following hypothetical scenario.
Imagine two commanders, A and B, each tasked with disabling enemy tanks within a city they aim to capture. Commander A controls a rocket system, while Commander B operates an Autonomous Weapon System (‘AWS’). Both commanders are given performance indicators, such as Circular Error Probable (‘CEP’) and other accuracy metrics, to decide whether to deploy their weapons based on the actual operational conditions.
Commander B's AWS temporarily malfunctions due to sunlight reflecting off a church’s stained-glass windows, causing it to fire a missile at a market that results in significant civilian casualties. Subsequent analysis reveals that this incident falls within the 20% of edge cases mentioned in the AWS’s manual.
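Circular Error Probable, one of the performance indicators mentioned above, is conventionally the radius of a circle, centred on the aim point, expected to contain half of all impacts. A minimal sketch of estimating CEP empirically from impact data (the Gaussian aiming errors, sigma value and sample size below are purely illustrative assumptions, not parameters from the scenario):

```python
import math
import random
import statistics

def empirical_cep(impacts):
    """Median radial miss distance from the aim point (0, 0):
    half of the impacts fall inside this radius."""
    return statistics.median(math.hypot(x, y) for x, y in impacts)

random.seed(0)
# Illustrative assumption: independent Gaussian aiming errors, sigma = 10 m.
impacts = [(random.gauss(0, 10), random.gauss(0, 10)) for _ in range(10_000)]
print(round(empirical_cep(impacts), 1))  # ≈ 11.8 (theory: CEP = σ·√(2 ln 2) ≈ 1.177σ)
```

A commander comparing such a metric against operational conditions (target size, proximity of civilians) is performing exactly the deployment judgment the hypothetical attributes to Commanders A and B.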
Two additional factors frequently discussed in the literature complicate the attribution of criminal liability for harm caused by AWS. These are the control problem and the problem of many hands.
The control problem suggests that criminal liability cannot be established because AWS are not under the operator’s control when the incident occurs. This has led to a focus on meaningful human control in the AWS debate, as maintaining control is seen as essential for establishing criminal liability.
The problem of many hands posits that attributing criminal liability is challenging because many actors are involved throughout the weapon's lifecycle. However, this issue is less relevant for the deploying commander, as their decision to deploy the weapon recenters the causal responsibility on a single decision-making figure.
The control problem and the many hands problem effectively highlight a key area of concern when one is considering giving weapons or vessels ‘real’ full autonomy. Opacity, often referred to as the ‘black box phenomenon’, refers to the situation where we cannot understand an AI’s internal workings nor trace how decisions are made.
This lack of transparency means that even the system’s designers often cannot fully comprehend how their systems operate or predict how they might change. This opacity is a technical limitation, not a result of negligence or insufficient technical knowledge on the part of the operator.
Opacity connects to the concept of intent in criminal law. There are suggestions that it can effectively shield AI users from liability because the decision-making process is inaccessible and incomprehensible. As a result, opacity presents a significant challenge to attributing criminal liability for war crimes committed by AWS, since it is difficult to prove intent or understand the AI's decision-making process. Commentators have identified this lack of transparency as a major barrier to holding individuals accountable for the actions of autonomous systems.
PART II. Emerging Regulatory Framework for Autonomous Vehicles in Australia
9. Emerging ADS Regulation
In 2018, it was agreed that a national approach would be taken to regulating ADS in Australia. In line with the National Transport Commission’s national in-service framework for automated vehicles, the Automated Vehicle Safety Law (Cth) (AVSL) is expected to commence in 2026.
9.1 Implementation of AVSL
A new regulatory body, the ‘first supply’ regulator, will be established to manage the automated driving system entity (‘ADSE’) approval for the Australian market.
The introduction of the AVSL will regulate ADS while they are in service on the road. It will also regulate ADSEs, ADSE executive officers and so-called remote drivers.
The ADSE, i.e. the legal entity, will be required to certify that its ADS can perform tasks safely in place of a human driver. The ADSE will be required to have a corporate presence in Australia.
The rollout of automated systems will first be met with mandatory self-certification against safety criteria at first supply. The AVSL is expected to incorporate 11 outcomes-based safety criteria as well as relevant standards from the United Nations regulations, for example, UN Regulation No 157 on automated lane keeping systems. The AVSL will be incorporated into existing regulatory frameworks, such as the Road Vehicle Standards Act 2018 (Cth), which will be amended to implement the safety criteria.
The ADSE will have to show that its insurance would be appropriate to cover personal injury, death and property damage caused by the ADS when it is engaged.
The ADSE will be required to disseminate crash data to road agencies, insurers and law enforcement. Data will also be shared with individuals, on reasonable request, to dispute liability. The Privacy Act 1988 (Cth) will place limits on the data collected and shared by ADSEs.
The ADSE will be required to store crash data for a period depending on the purpose(s) for which the information could be used, for example, data that would be relevant to law enforcement, insurance and liability disputes. This data must be stored in Australia.
Once an import approval (such as a type approval) has been received from the Department of Infrastructure, Transport, Regional Development, Communications and the Arts, and transport of the vehicle(s) has started, an import declaration or self-assessed clearance declaration will have to be lodged with the Australian Border Force. On arrival in Australia, to be cleared from customs control, importers may need to pay customs duty, goods and services tax, luxury car tax and other charges.
9.2 Liability under AVSL
A ‘General Safety Duty’
A general safety duty will be imposed on relevant parties based on a ‘reasonably practicable’ standard. What is ‘reasonably practicable’ has yet to be defined and will likely generate further legal discussion, as there is no legal precedent for ADS in Australia. The law currently assesses liability on the basis that a driver is behind the wheel; whether ADS will be held to the same standard as human drivers remains to be determined.
There is an expectation that the separation of the AVSL from existing legislation will lay the groundwork for a fit-for-purpose regulatory infrastructure, providing an important signal to stakeholders that they can plan for investment in automated vehicles in Australia.
9.3 Corporations Act
Under the Corporations Act 2001 (Cth), directors and other officers owe duties to their corporation and not to the road users who may be injured by a faulty ADS.
There is a suggestion that since ADSE executive officers will have significant influence over the ADSE’s safe performance, a statutory obligation must exist that would oblige ADSE executive officers to ensure that their ADSE complies with its general safety duty. The ADSE would be prosecuted either under the general safety duty or a prescriptive duty for a particular breach.
9.4 Work Health Safety Laws
Injury caused by a dangerous or faulty ADS soon after the vehicle enters service may be covered under WHS law.
The issue is that it is unclear how, and the extent to which, WHS laws will apply to ADSEs. For example, it is unclear whether an ADS failure that occurs after the vehicle has been in service for several years would also be covered under the ADSE’s WHS duties.
Another issue is that WHS duties are regulated by each individual state. WHS regulators may not be experts in ADS technology and may have different enforcement priorities and differing interpretations of WHS law’s application to automated vehicle safety.
9.5 Regulatory Gaps in AVSL
Despite drawing from current models such as the United Kingdom’s Automated and Electric Vehicles Act 2018 (UK) and Germany’s Road Traffic Act, the AVSL does not contemplate:
• government access to automated vehicle data;
• access to data by motor accident injury insurers;
• new powers for government agencies to access data;
• how Australia’s information access framework will apply to the private sector; and
• access to data by consumers for disputing liability.
10. Emerging AMV Regulation
The direction of AMV regulation is emerging through various recommendations and codes of conduct, but no maritime equivalent to the AV Safety Law has yet emerged in Australia.
It is clear that UNCLOS will continue to be of prime importance as states begin to accept AMVs, at varying IMO MASS degrees of autonomy, as ‘ships’ for its purposes.
The Trusted Autonomous Systems Defence Cooperative Research Centre recommended that the Marine Safety Act and Navigation Act be amended to reduce certification requirements for shorter-life-cycle AMVs and reduce reliance on AMSA issuing exemptions where the manufacturer must satisfy it that an exemption would not jeopardise marine safety. TASDCRC’s recommendations also specifically addressed the gap for defence AMVs. It recommended that the definition of defence vessel under the Navigation Act be amended to add an exclusion for AMVs intended for use or testing for defence purposes.
In response to these recommendations, the Independent Review of Domestic Commercial Vessel Safety Legislation accepted that the certification requirements should be simplified and that, in theory, the defence vessel definition is currently an issue. The exact method of any reform in this space is presently unclear.
The Maritime UK ‘Maritime Autonomous Ship Systems (MASS) UK Industry Conduct Principles and Code of Practice’ is a voluntary code published in 2023 which sets out standards for operating maritime AVs under 24m in length. Several of its requirements will likely eventually inform Australian regulation:
That AV operators be trained and certified to the same level required for a manned vessel equivalent to the relevant AV.
That AV systems and their operators comply with local and international navigational rules, including the COLREGs.
That safety management systems on AVs comply with any applicable guidelines or standards mandated or recommended by the International Maritime Organisation.
11. Emerging UAS Regulation
The future of UAS regulation is uncertain. Civilian UAS are likely to continue to be limited to below 150kg, while ADF UAS continue to require specific approvals from the Defence Aviation Safety Authority to operate above 150kg or generally outside standard operating scenarios.
However, the ADF’s acquisition of the MQ-4C Triton UAS, remotely operated from interstate (from Edinburgh, while the aircraft are based in Darwin), demonstrates a willingness to grant such approvals and to purchase UAS systems.
PART III: Areas of Concern
12. Cybersecurity
Autonomous systems are vulnerable to many kinds of cyberattacks. AVs, including drones and unmanned ground and marine surface and subsurface vessels play a pivotal role in modern defence operations, offering enhanced capabilities and flexibility. Heavy reliance will be placed on their decision-making software which will be vulnerable to cyber-attacks such as denial of service attacks, hacking and spoofing attacks and malware infiltration. Networks used to communicate with remotely operated vehicles are liable to disruption.
A software vulnerability can potentially affect all weapons or vehicles of the same class. While traditional weapon systems also face this risk, systemic vulnerabilities in increasingly autonomous weapon systems might go unnoticed for longer periods. This is because human operators are further removed from the system both spatially and temporally, reducing their ability to interact with it effectively. System compromises or disruptions may be triggered for specific tactical purposes or at tactically critical times.
Countermeasures to information communication technology (ICT) dependent weapon systems might include techniques such as jamming communications. An electromagnetic pulse (EMP) weapon could also cause widespread technology failures, possibly even in hardened systems.
Maritime vessels will likely depend heavily on secure communications links and constant high-volume data transfer. This reliance could be a disadvantage, as losing a secure communications link might impair or potentially disable the autonomous vessel.
Supply chain vulnerabilities are another significant concern. Much of the innovation in autonomous technologies comes from the private sector, with defence departments adopting these technologies through "spin-on" applications. Integrating civilian technologies into defence systems poses risks, as military-level security may not be inherent and may need to be retrofitted. Stringent test and evaluation processes will be required for privately sourced or commercially developed hardware and software components which could represent potential vulnerabilities, either through deliberate inclusion (e.g., backdoors) or unintentional flaws.
Consequently, if a particular data issue is known to cause failures in an autonomous system, those employing the system have a responsibility to anticipate such issues and factor these risks into their operational decisions.
This responsibility is particularly crucial when users cannot respond to issues in real-time. The ability to anticipate data issues varies; some, like weather conditions, can be forecasted, while others, such as the confluence of multiple environmental factors or unknown adversarial countermeasures, are harder to predict.
The Criminal Code and Australia's obligations under the Convention on Cybercrime of the Council of Europe (Budapest Convention) may provide avenues for criminal liability in cases of malicious hacking of an ADS. This includes intentional unauthorised interference to cause harm, or scenarios where an owner unknowingly uploads software containing malware, or a third party installs unauthorised spyware to gather geolocation data.
Concerns arise because fully autonomous systems will increasingly operate uncrewed in most defence scenarios. When a remote attack or disruption occurs mid-operation, the system will be left to fend for itself. To minimise these vulnerabilities, cybersecurity systems must incorporate the latest threat detection measures, strong encryption techniques and real-time alerting to detect threats early.
Commentary on requiring a fallback-ready user in such cases has highlighted the practical limits this places on machine autonomy during in-service operations, as well as the risk of disruption to communication links. Cybersecurity training for personnel across ground, aerial and naval domains is crucial to defend against cyberattacks. A multi-layered defence strategy, incorporating physical security, network defences and application-level safeguards, will be needed to provide a comprehensive approach to protecting the system’s integrity.
Despite the reduced costs stemming from uncrewed vehicles and operator-controlled equipment, the legal entity responsible for the deployment of the autonomous system should have regard to the financial costs that will be associated with remote fallback users and other safety measures.
Nations worldwide are exploring innovative methods to bolster their defences against cyber-attacks. The American National Security Innovation Network for example, has taken practical measures by utilising student white hat hackers through hackathons to generate new ideas. During one such event, a group of students from Texas A&M University's College of Engineering developed the PHC Defense system, which is a multi-layered blend of software and mechanical measures designed to allow ADS to defend against cyber-attacks autonomously without human intervention.
Addressing security issues proactively requires close collaboration with manufacturers to ensure cybersecurity measures are integrated at all stages of design, development and testing. Prime and subcontractors should prioritize the development of innovative defence systems to counter evolving security threats.
Significant investments should be directed towards predictive technologies to safeguard assets, data and operations from cyber-attacks and other security breaches. Engaging white hat and grey hat hackers can be highly effective during the development process.
13. Australian Privacy Principles
The APPs currently apply to companies in Australia with an annual turnover of over $3 million. These entities must:
• collect personal information only as relevant to their functions and activities;
• inform individuals about the collection and use of their personal information through an up-to-date privacy policy and collection notices provided at the time of collection;
• use and disclose personal information only for the purpose for which it was collected, with the individual's consent, or for limited other purposes; and
• keep personal information secure.
It is likely that the data captured by ADSs will often constitute personal information and thus fall under privacy and data protection law in Commonwealth, state and territory jurisdictions. Analysis of the privacy and data protection regulatory framework for Cooperative Intelligent Transport Systems (C-ITS) and AV systems considers how the APPs apply to AV systems.
ADSEs will have an obligation to ensure that:
users of the ADS are aware of the information that is being collected and how it is used;
users are given a choice about what information is collected – there should be options for users to remain anonymous where possible (e.g. an option not to use facial recognition technology);
users are informed about changes in information practices; such changes should be brought to their attention prominently and promptly.
Questions arise regarding the ownership of this data. ADSEs may only have a restricted license to certain personal information, as individuals retain ownership rights. Therefore, it is crucial for companies to determine which information they prefer individuals not to disclose.
Application of the APPs to government entities has not been significantly investigated.
14. General Public Perception
There is a serious need for public awareness and education in the autonomous system space. The overwhelming perception of driverless systems tends toward the idea that they are dangerous and susceptible to failure or attack.
Because automated vehicle technology is new and highly complex, and its safety performance uncertain, providing a high standard of safety may require government intervention. Consumers will likely expect automated vehicles to be safer than human drivers.
Some research indicates that a large majority of the Australian population feels AVs must provide a bypass allowing the vehicle to be driven by a person. Many indicated that giving complete driving responsibility to a computer would make them feel stressed. A person should be able to determine when to use the automated functions and which functions to use.
The research indicates that the majority of Australians (72%) believe automated vehicles must be made identifiable by, for example, a specific label, licence plate or sign. More than half (55%) thought it was unsafe for children to travel without an adult in a self-driving car.
A 2018 survey found that 37% of female and 28% of male respondents expect automated vehicles to be 100% safe and never involved in a collision. From an international perspective, three in four Americans remain afraid of automated vehicles.
37% of the public expect that the greatest benefit of self-driving cars will be to eliminate or reduce deaths due to accidents. A third of respondents to a survey identified safety concerns as the biggest obstacle to the growth of automated vehicles in the next five years.
Consumer anticipation of the rollout of autonomous vehicles highlights the need for regulatory oversight. Surveys reveal varying perceptions regarding the safety of automated vehicles, with a notable proportion expecting them to be mostly infallible. Addressing the diverse expectations and concerns surrounding autonomous vehicles poses challenges, particularly regarding liability issues.
Moreover, overreliance on automated technology may lead to negligence on the part of users, potentially resulting in liability for manufacturers or operators.
If you have any questions, please contact the authors Brett Cowell or Anna Young of our Defence team.
Brett Cowell, Anna Young and Alex Dorrington wish to thank Chriselle Alfred for her contribution to this insight.
This publication has been prepared for general guidance on matters of interest only and does not constitute professional legal advice. You should not act upon the information contained in this publication without obtaining specific professional legal advice. No representation or warranty (express or implied) is given as to the accuracy or completeness of the information contained in this publication and, to the extent permitted by law, Cowell Clarke does not accept or assume any liability, responsibility or duty of care for any consequences of you or anyone else acting, or refraining from acting, in reliance on the information contained in this publication or for any decision based on it.