How UTS’s AISP Lab is redefining security and privacy in the age of AI
At UTS’s AI Security and Privacy (AISP) Lab, A/Prof Bo Liu and his team build Trusted AI systems that are safe, private, and resilient to misuse. They design proactive defences against AI-generated misinformation, privacy-preserving and traceable generative data, and secure LLM pipelines for industry. Their goal is to shift from after-the-fact detection to verifiable authenticity and robust-by-design AI, so that critical services such as finance, health, education and government can adopt AI with confidence.
What inspired or triggered this line of research: was it a real-world incident, a technology gap, or a collaboration with partners?
All of them.
- Real-world risks: deepfakes, scams, and data misuse eroding public trust.
- Industry needs: partners (e.g., RBA, AEMG) needed deployable, auditable solutions—not just lab demos.
- A technology gap: AI systems are powerful yet vulnerable. Most existing methods are reactive; we saw a path to proactive authenticity and privacy by design.
What exactly are you and your team developing, and how does it differ from or improve on current approaches in the field?
- Verified at birth: authenticity signals/provenance embedded at content creation → proactive misinformation defence (multi-modal, tamper-evident); a minimal sketch appears after this list.
- Controllable, traceable generative data: watermarking and privacy-preserving synthesis that stays useful for training yet auditable in the wild.
- Secure LLM pipelines: privacy-preserving training, attack/abuse detection, and bias mitigation integrated end-to-end for real platforms.
What’s different: industry-grade, cross-modal, and “secure-by-default”—moving beyond detection to prevention, verification, and accountable deployment.
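The “verified at birth” idea is the easiest to make concrete. Below is a minimal sketch of tamper-evident provenance in Python: a signed record is attached the moment content is generated, and any later edit breaks verification. The `sign_at_creation` and `verify` helpers and the shared-secret key are hypothetical illustrations of the general pattern, not the AISP Lab’s implementation; a production system would use public-key signatures (so anyone can verify without holding the signing secret) and provenance standards such as C2PA.

```python
import hashlib
import hmac
import json

# Hypothetical creator-side secret. A real deployment would use asymmetric
# keys so verifiers never hold the signing key.
CREATOR_KEY = b"demo-signing-secret"

def sign_at_creation(content: bytes, metadata: dict) -> dict:
    """Attach a tamper-evident provenance record when content is generated."""
    digest = hashlib.sha256(content).hexdigest()
    payload = digest + json.dumps(metadata, sort_keys=True)
    tag = hmac.new(CREATOR_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"metadata": metadata, "sha256": digest, "tag": tag}

def verify(content: bytes, record: dict) -> bool:
    """Recompute the tag; any change to content or metadata invalidates it."""
    digest = hashlib.sha256(content).hexdigest()
    payload = digest + json.dumps(record["metadata"], sort_keys=True)
    expected = hmac.new(CREATOR_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

content = b"bytes of an AI-generated image ..."
record = sign_at_creation(content, {"model": "gen-v1", "created": "2025-01-01"})
assert verify(content, record)             # authentic and untampered
assert not verify(content + b"!", record)  # any edit is detected
```

The point of the pattern is that authenticity travels with the content from creation, so downstream platforms can verify provenance rather than guess at it.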
What obstacles have you and your team come across in your project?
- Data access vs privacy: strict governance, synthetic/de-identified datasets, and federated methods (a federated-averaging sketch follows this list).
- Evaluation gaps: building realistic, multi-modal benchmarks and red-teaming protocols.
- Evolving adversaries: continuous threat modelling and adaptive defences.
- Deployment gap: co-design with partners to meet performance, compliance, and operability constraints.
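On the “federated methods” point above: in federated learning, each partner trains on its own data and shares only model updates, so raw records never leave the site. The toy below is a generic federated-averaging sketch in Python/NumPy, with made-up data, a linear model, and an arbitrary learning rate; it illustrates the privacy pattern, not the lab’s pipeline.

```python
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a partner's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Simulate three partners, each holding data that never leaves their site.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
partners = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    partners.append((X, y))

weights = np.zeros(2)
for _ in range(100):
    # Each partner computes an update locally ...
    updates = [local_step(weights, X, y) for X, y in partners]
    # ... and the coordinator averages weights without ever seeing the data.
    weights = np.mean(updates, axis=0)

print(weights)  # approaches true_w, yet no raw records were pooled
```

The same averaging pattern scales to neural models, and in practice it is combined with secure aggregation or differential privacy so that even the shared updates leak little.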
Looking ahead, what are the next steps or opportunities for this research, and how might ACS Members get involved?
- Pilot our proof-of-concept systems with publishers, platforms, and government communications.
- Adopt and extend our open reference implementations and evaluation suites for LLM security and privacy.
- Partner on sector trials (finance/health/edtech), contribute domain datasets under MOUs, or co-supervise industry PhDs.
Through our Academic Spotlight series, we highlight pioneering research emerging from Australia’s universities. These projects tackle some of the most pressing challenges facing industry and society, whether safeguarding critical infrastructure, securing the next generation of networks, or building trust in emerging technologies. By sharing the stories behind this work, we connect the ACS community with the ideas, people, and innovations shaping the future of technology and its impact on industry.