
Is it time to implement AI ethical frameworks?

A new survey conducted by leading technology authority Tech.co has revealed that 68% of business leaders think it’s unethical for employees to use AI tools without the permission of a manager.

The meteoric rise of generative AI tools has underscored the need for comprehensive ethical frameworks to govern their application in the workplace. Without these frameworks, the technology risks threatening human roles and intellectual property in morally dubious and potentially harmful ways.

With new ethical questions about AI’s usage emerging every day, Tech.co surveyed a group of business leaders on the use of AI tools in the workplace. When asked whether they felt it was ethical for employees to use AI tools such as ChatGPT without their employer’s permission, 68.5% said that employees shouldn’t be using AI tools without permission from an employer, manager, or supervisor.

The survey also revealed that AI ethics has divided business leaders over who should take responsibility for AI mistakes made in the workplace. Almost a third of respondents (31.9%) lay the blame solely on the employee operating the tool. Just over a quarter (26.1%), on the other hand, believe that responsibility for a mistake is shared between all three parties: the AI tool, the employee, and the manager.

Businesses across the globe are continuing to leverage artificial intelligence – from AI-optimized website builders to project management software providers. While it remains uncertain whether new AI-powered tools such as ClickUp’s new AI Writing Assistant will aid productivity or drive job replacement, major tech companies and governmental bodies are pioneering AI ethics by establishing guidelines on how to implement the technology ethically.

A number of major US-based authorities have already started implementing AI ethical frameworks. In October 2022, the White House released a nonbinding Blueprint for an AI Bill of Rights, designed to guide the responsible use of AI in the US using 5 key principles. The United Nations has also outlined 10 principles for governing the ethical use of AI within its inter-governmental system.

Meanwhile, major tech company Microsoft has released 6 key principles to underpin responsible AI usage: fairness, transparency, privacy and security, inclusiveness, accountability, and reliability and safety.

A draft version of the EU’s new AI Act, which aims to promote safe and trustworthy AI development, has also recently been agreed upon and will now be negotiated by the Council of the European Union and EU member states.

Tech.co’s Lead Writer Aaron Drapkin shares his thoughts: “There exists a myriad of ethical questions relating to AI systems that need immediate attention, spanning from the correct ways to conduct explorative AI research to the appropriate maintenance and usage of AI tools.

“While governments scramble to implement regulatory frameworks designed to govern the responsible research and development of AI systems, businesses are being presented with novel use cases every day that bring with them pertinent questions relating to employee transparency, privacy and individual responsibility.

“It’s hard to shake the feeling that, in many ways, we’re behind the curve when it comes to deciding how we should morally navigate the questions posed by our use of artificial intelligence. Of course, rapid technological innovation is often associated with abundant benefits – but in the context of AI, the most high-reward areas are often also the most high-risk. This means that careful consideration of the ethical implications of AI usage and implementation must be at the forefront of decision making, from day-to-day business uses to state-funded research.”

The more inventive uses that businesses find for ChatGPT and other AI tools, the more questions will arise about how to use them ethically. Companies whose team members are already leveraging AI should provide clear guidelines on precisely how and when these tools can be used – this is key to avoiding the negative consequences that can occur when they are misapplied.
