Recently, the Canadian government launched a voluntary code of conduct related to generative AI. Many organizations have agreed to this code of conduct, although support has not been universal.
This was the topic of our last biweekly "AI Issues and Ethics" discussion: what role should there be for regulation of artificial intelligence? It was a great conversation with plenty of dissent, as always, and I found my opinion changing as a result of the wisdom of the participants.
I used to be skeptical about the idea of government involvement in regulating artificial intelligence, assuming that policymakers lacked the expertise and agility to make informed decisions. This is a rapidly evolving and technically complicated field and, as others have argued, regulation could stifle innovation. I often argued for fewer regulations and restrictions on technologies, or at least suggested that existing rules and laws related to privacy or safety were sufficient to address these new technologies.
However, my views have evolved, and I now firmly believe that some level of governance is essential to ensure the responsible and ethical development and deployment of AI technologies.
Lawmakers don't need to understand the complexities of internal combustion engines, or electric vehicles for that matter, in order to set speed limits on our roads. They rely on experts, data, and an understanding of societal needs to establish safe parameters. Speed limits are not designed to stifle our driving.
And to use a similar analogy, there was a time in Alberta when seatbelt use in cars was not compulsory. I'll admit that I didn't always wear a seatbelt, even though I knew it was safer to do so. States with fewer requirements for motorcycle helmets have higher rates of injuries and deaths. We humans don't always make the best decisions about our own safety.
That's not to say that the government will always make the best decisions either. There will be instances where regulations fall short or overreach, but we shouldn't abandon the idea of AI governance entirely. Through an iterative process that is open to feedback and fosters collaboration among stakeholders, experts, and policymakers, legislation can be amended and refined. Regulations should be adaptive and responsive to the evolving needs and norms of society.
I would argue that we don't require an absence of regulations in order to innovate, nor should we rely solely on the judgement of technology organizations to work for the good of humankind. A voluntary code of conduct is a good start, and by fostering collaboration, making informed decisions, and committing to ethical principles, I hope that we can create a safe suite of artificial intelligence tools for everyone.
And let me know if you'd be interested in participating in some of these "AI Issues and Ethics" discussions.