A poll conducted by Essential Research showed that many Australians support further regulation of artificial intelligence (AI) technology.
The survey was conducted after the Human Rights and Technology report was released on Thursday, to gain insight into Australians’ views on human rights and the use of AI technology. More than 1,000 Australians responded, with 55 percent wanting to pause the use of facial recognition technology until further rules are in place to ensure the privacy of individuals.
A majority of participants also advocated human oversight of AI decision making to ensure accountability, and 60 percent of respondents wanted AI to comply with anti-discrimination laws in Australia.
Director of the Australia Institute’s Centre for Responsible Technology, Peter Lewis, said the poll results show that Australians are becoming more aware of how developing AI technology is used in their daily lives.
“The public has witnessed countries like China where state video surveillance controls citizens with a social credit rating system and the USA where the pursuit of profit positions people as crash test dummies,” he said.
“This is an opportunity to create a uniquely Australian tech industry, where our liberal-democratic values create fairer and more effective technology.”
The Human Rights and Technology report highlights that Australians want fairer and safer use of newer technologies such as AI, through specific laws to ensure the protection of human rights. Several of the recommendations published in the report include:
- National Strategy – use the digital economy strategy for the Australian Government to embrace technological developments such as AI, and to promote responsible innovation and human rights through innovation, investment and education.
- AI-informed decision making – before introducing any new AI system, the Australian Government should undertake a human rights impact assessment (HRIA) of the technology to improve transparency in how the AI operates and makes decisions.
- AI safety commissioner – an independent statutory office that provides technical guidance and capacity building for AI technology while upholding the public interest and making sure human rights laws are not disregarded.
- Facial recognition and algorithmic bias – provide better human rights and privacy protections where AI technology makes decision-making errors, such as in policing and identifying individuals. The report encourages governments to improve guidance on complying with anti-discrimination laws in AI-informed decision making.
The Australian government has developed an AI Technology Roadmap to help solve problems in areas such as health and welfare, environment, energy, infrastructure and transport and education.
Photo: Black and grey laptop computer by Luis Gomes HERE, used under a Creative Commons Attribution licence. Image has not been modified.