Generative AI in 2025: Input, Output & Liability Risks for Australian Businesses
By Dorrian & Co Lawyers
Introduction: AI Has Entered the Boardroom — But So Have the Risks
Generative AI is no longer an experiment sitting quietly in the IT department — it now powers decision-making, contract creation, risk analysis, marketing, financial modelling, recruitment, and even internal legal workflows. As major Australian law firms such as Ashurst, MinterEllison and King & Wood Mallesons highlight in their most recent insights, businesses are under increasing pressure to adopt AI quickly — while also navigating a rapidly evolving risk and regulatory landscape.
In 2025, Australian companies are asking the same urgent question: How do we use AI safely without exposing the business to legal liability?
The answer lies in understanding two fundamental concepts: input risks and output risks.
1. Input Risks: What You Feed Into AI Can Create Significant Legal Exposure
Top-tier firms have emphasised the growing legal focus on inputs, meaning the data, instructions, prompts or documents you submit to an AI system.
A. Confidentiality & Data Leakage
If staff enter sensitive commercial information — such as client data, financials, contract terms or personal identifiers — the AI platform may store, replicate or reuse that data. Potential legal consequences include:
breach of confidentiality
breach of the Privacy Act
cyber security exposure
contractual breaches with partners or customers
B. IP Ownership & Copyright Risks
Businesses may unknowingly use copyrighted or proprietary material as inputs. This may:
infringe IP rights
void licensing arrangements
trigger disputes around ownership of newly generated content
KWM and Ashurst repeatedly warn that “input contamination” can lead to downstream legal issues that businesses don't detect until it's too late.
C. Accuracy & Reliability of Business Inputs
Incorrect or biased input data leads to inaccurate outputs — but ultimately, the business is responsible for the initial data.
In high-stakes sectors such as finance, lending, real estate, manufacturing, food compliance and healthcare, poor-quality inputs can translate into serious regulatory breaches.
2. Output Risks: AI’s Answers May Be Wrong, Biased or Legally Non-Compliant
AI tools can produce polished but incorrect or misleading answers.
Common output risks include:
A. Hallucinations (False or Fabricated Information)
AI may provide legal, financial or operational information that appears authoritative but is entirely inaccurate. This can lead to:
flawed commercial decisions
breached compliance frameworks
incorrect contract clauses
reputational damage
B. Liability for Automated Decisions
If AI is used in:
employment decisions
credit assessments
pricing
consumer interactions
…companies may attract liability under:
discrimination laws
Fair Work obligations
Australian Consumer Law
privacy legislation
new regulatory frameworks on AI fairness and transparency
C. IP Ownership of AI Output
The current Australian legal position is that AI-generated content may not be protectable under copyright unless meaningful human authorship is involved. This has significant implications for:
contracts
marketing materials
software development
product content
manufacturing processes
3. What Laws Currently Apply in Australia?
Australia does not yet have an AI-specific regulatory regime — but multiple existing laws already apply, including:
✔ Privacy Act 1988 (Cth) (personal information handling)
✔ Australian Consumer Law (misleading outputs, unfair practices)
✔ Competition law (algorithmic collusion risks)
✔ Corporations Act (governance, director duties and disclosure)
✔ Fair Work laws (AI-driven employment decisions)
✔ IP law (copyright, patents, trade secrets)
Meanwhile, global frameworks — such as the EU AI Act — are influencing expectations around governance, risk and transparency, particularly for companies operating across borders.
4. Governance Failures Are Becoming a Major Board Risk
Our most recent AI guidance emphasises that regulators expect to see AI governance frameworks in place — even before formal legislation arrives.
Companies must demonstrate:
policies around AI use
human oversight
documentation of how AI is integrated into processes
data security and privacy risk management
ongoing monitoring of output accuracy
version control and audit trails
vendor risk assessments
From SMEs to major banks, directors are now expected to understand and manage AI-related risks as part of their directors’ duties.
5. How Australian Businesses Should Protect Themselves in 2025
A. Implement an AI Acceptable Use Policy
This is now essential for:
SMEs
growing businesses
teams using AI tools for contracts, emails, marketing or data analysis
Policies should cover:
permitted tools
prohibited inputs
approval processes
confidentiality safeguards
vendor restrictions
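To make the policy more than a document on a shelf, some businesses express parts of it as data that internal tooling can enforce. The sketch below is purely illustrative: the tool names and data categories are hypothetical placeholders, not recommendations, and a real policy would be far more granular.

```python
# Illustrative sketch only: a toy example of expressing an AI
# acceptable-use policy as data so internal tooling can enforce it.
# Tool names and category labels below are hypothetical placeholders.

PERMITTED_TOOLS = {"approved-internal-assistant", "vendor-x-enterprise"}

PROHIBITED_INPUT_CATEGORIES = {
    "client_personal_info",
    "unreleased_financials",
    "contract_terms",
}

def is_use_permitted(tool: str, input_categories: set[str]) -> bool:
    """Permit a request only if the tool is on the approved list and
    the input contains no prohibited data categories."""
    return tool in PERMITTED_TOOLS and not (
        input_categories & PROHIBITED_INPUT_CATEGORIES
    )
```

Encoding the policy this way also creates an audit trail: every blocked request documents where human approval processes should step in.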
B. Conduct Contract Reviews for AI Use
Check your contracts with:
software vendors
cloud service providers
clients
contractors
employees
Look for clauses dealing with:
confidentiality
data ownership
IP rights
liability caps
indemnities
data storage
C. Maintain “Human-in-the-Loop” Oversight
AI can assist — but cannot replace — human legal or commercial judgement. Businesses must ensure:
human review
legal validation of output
director awareness
audit checkpoints
D. Strengthen Privacy and Cyber Compliance
Before using AI for any personal data, consider:
Privacy Act obligations
APP compliance
de-identification processes
data minimisation
cyber breach notification schemes
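As a rough illustration of the de-identification step above, personal identifiers can be stripped from text before it leaves the business. The patterns and placeholder labels here are simplified assumptions for demonstration only; pattern-matching alone is not a complete, APP-compliant de-identification process.

```python
import re

# Minimal de-identification sketch: replace common Australian personal
# identifiers with placeholders before text is sent to an external AI
# service. The regexes below are illustrative assumptions, not a
# complete or compliant solution.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    # Australian mobile numbers, e.g. 0412 345 678
    (re.compile(r"\b(?:\+?61|0)4\d{2}\s?\d{3}\s?\d{3}\b"), "[PHONE]"),
    # Nine-digit identifiers formatted like tax file numbers
    (re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"), "[TFN]"),
]

def deidentify(text: str) -> str:
    """Apply each redaction pattern in order and return the cleaned text."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text
```

This also supports data minimisation: only the redacted text, rather than the raw record, ever reaches the AI vendor.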
E. Seek Legal Advice During Adoption
Lawyers play a crucial role in helping businesses:
draft safe policies
assess contractual risks
manage liability
design governance frameworks
meet regulatory expectations
evaluate AI-based vendor agreements
This is where Dorrian & Co Lawyers uniquely supports SMEs and high-growth companies — fast, practical and aligned to commercial realities.
Conclusion: AI Will Transform Businesses — But Only With Proper Governance
AI presents extraordinary opportunities for Australian businesses, but the legal, regulatory and commercial risks are equally significant. By understanding input risks, output risks, and liability issues, and developing clear governance frameworks, businesses can innovate confidently while staying compliant.
Dorrian & Co Lawyers advises SMEs, lenders, investors and high-growth companies on AI governance, commercial contracts, risk mitigation, corporate compliance and regulatory considerations.
If your business is adopting AI in 2025 — now is the time to review your legal and risk frameworks.