How to Trust Trusted Autonomous Systems

Where We Were This Week: the Accelerating Trusted Autonomous Systems 2021 Symposium, hosted by the Defence Cooperative Research Centre, Townsville


The recent Townsville conference on autonomous systems highlighted how much the sectors that deal heavily in autonomous technologies have in common: overlapping issues, concerns, and areas of study in international and national law, in ethical frameworks, and in approaching complex systems with complex thinking.


For example:

Ships and the Law of the Sea:

With the upcoming Trans-Tasman Sea race, what is the law of the sea? What constitutes a “ship”? As ships become increasingly complex autonomous systems, how do we collectively define them? Is an autonomous vessel a “ship”?


Military Applications:

Autonomous technologies raise ethical dilemmas wherever we are accustomed to having humans make the immediate ethical decisions: with state tools of warfare, but also with our vehicles. The military drone and the autonomous car pose the same challenge. How is “harm” determined and quantified? In a nanosecond decision, whose interests are protected? These are weighty ethical questions for all sectors to consider, and ideally to work through together.


The Complexity of Machine Learning in Decision Support

John McDermott of York University shared an interesting framework for the safety of non-deterministic systems: a complex framework for a complex problem. When validating the machine learning used to support decision-making by an autonomous system, it is essential to consider every dimension of the data that system learns from (e.g. self-learning algorithms, and the sources and biases of training datasets).
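The symposium did not present specific tooling, but one of the simplest "dimensions of the data" to audit is label balance in a training set. The sketch below is a hypothetical illustration (the function name, labels, and tolerance are all assumptions, not anything shown at the conference): it flags classes whose share of the data deviates sharply from a uniform split, one crude signal of dataset bias.

```python
from collections import Counter

def audit_labels(labels, tolerance=0.2):
    """Flag classes whose share of the training set deviates from a
    uniform split by more than `tolerance` (a crude bias check)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    expected = n / k  # count each class would have if perfectly balanced
    return {cls: c for cls, c in counts.items()
            if abs(c - expected) / expected > tolerance}

# A skewed toy dataset: "pedestrian" dominates the labels.
labels = ["pedestrian"] * 80 + ["cyclist"] * 15 + ["vehicle"] * 5
print(audit_labels(labels))  # all three classes flagged as imbalanced
```

A real validation framework would go far beyond this (provenance, coverage of edge cases, drift over time), but even a check this small makes the point that training data must be interrogated, not merely consumed.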


Cyber-Security

The primary attack pathways of cyber crime are always shifting to new targets, reflecting the new technologies our society relies upon. Attackers, aware of our increasing reliance on machine learning for decision support, have begun tampering with the models and training datasets behind those systems. Such attacks are difficult to detect, yet detecting them is essential for maintaining the integrity of the decisions that rely on information from those systems.
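One basic defence against dataset tampering is an integrity fingerprint computed over the trusted training data, which can be re-checked before each training run. This is a minimal sketch of the idea, not a technique discussed at the symposium; the record format and function name are assumptions for illustration.

```python
import hashlib

def fingerprint(records):
    """Hash every record, then hash the sorted digests, so the
    fingerprint is independent of record order."""
    digests = sorted(hashlib.sha256(r.encode()).hexdigest() for r in records)
    return hashlib.sha256("".join(digests).encode()).hexdigest()

baseline = ["image_001,pedestrian", "image_002,cyclist"]
trusted = fingerprint(baseline)

# A poisoned copy flips one label; the fingerprint no longer matches.
poisoned = ["image_001,cyclist", "image_002,cyclist"]
print(fingerprint(poisoned) == trusted)  # False: tampering detected
```

A fingerprint only detects modification of data you have already vetted; poisoning introduced upstream, before the baseline is taken, requires the kind of provenance and bias analysis discussed above.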


We will endeavour to bring a guest speaker onto the QLD Robo podcast shortly to address these questions with us.
