Generative AI: The End of Creativity & Trust? Expert Predictions

Written by Templeton | Friday, 04 August 2023

The true excitement in generative AI lies in its potential to positively influence humanity and the planet.

Responsible AI development can indeed lead to significant advancements for society. With examples ranging from transforming education through virtual classrooms to revolutionising healthcare with AI and quantum computing in drug discovery, the potential impact of generative AI is far-reaching. However, to unlock its true benefits for everyone, it’s important to establish a strong foundation for responsible AI integration. As we navigate the AI revolution, embracing the transformative potential of AI while maintaining ethical standards is crucial in shaping a future where technology uplifts society as a whole.

In our latest Fireside Chat, Kay Firth-Butterfield, former Head of AI & Machine Learning at the World Economic Forum and now Executive Director of the Centre for Trustworthy Technology, delves into the world of Generative AI, discussing its potential and the responsibilities we must embrace to harness its benefits responsibly. The webinar covers a wide range of topics, including:

  • The most common misconceptions surrounding Generative AI
  • What ‘trustworthy’ AI really means
  • The potential implications of ChatGPT for creativity and integrity
  • Whether there’s a need for a more unified global regulatory framework
  • The greatest unseen risks and opportunities in Generative AI

 

A Few Words About Our Guest

Kay Firth-Butterfield is a prominent global technologist and entrepreneur with a passion for leveraging new technologies, particularly AI, to benefit humanity equitably. Formerly the Head of AI at the World Economic Forum, Kay currently serves as the Executive Director of the Centre for Trustworthy Technology. She began her illustrious career as a barrister, helping people in distress and advocating for human rights. As her career evolved, she became a judge, a professor, and an AI ethics advocate.

Kay's dedication to ethical AI practices led her to become the world's first Chief AI Ethics Officer and a co-founder of AI Global. Her exceptional contributions to the field have earned her recognition as a leading woman in the industry, including a place in the New York Times' 10 Women Changing the Landscape of Leadership in 2021.

 

Fireside Chat Highlights

AI has already brought significant transformation to various fields, and we all agree that its impact on society, work, and life in general is only just beginning. What are your personal thoughts on AI's potential impact? Where do you see it heading, and what aspects of it excite you?

Well, what truly excites me, and I believe many of us working in this field, is the prospect of using AI to positively influence humanity and the planet. While my work in policy often involves discussing the risks of AI, it's crucial to recognise that our dedication to this field stems from the belief that we can achieve great advancements for society if we approach AI responsibly.

Let's consider some examples of its potential impact. Education, for instance, poses a global challenge, particularly in regions like Africa, where classrooms are often crowded with many students of different ages. However, with the help of AI, we could create an educational miracle by bringing students together in virtual classrooms, connecting them with their teachers, and enhancing their learning experience. This could lead to the realisation of the goal of Education for All. Nevertheless, we must also address the challenge of providing internet access to the 3 billion people who are currently without it to truly unlock the benefits of AI for everyone.

Healthcare is another area with enormous potential. Just recently, at the Centre for Trustworthy Technology, we discussed the exciting prospects of using AI and quantum computing in drug discovery. Such advancements could bring about significant benefits and progress in the field of healthcare.

Indeed, these opportunities are widespread, but to fully harness the benefits of AI, we need to lay the groundwork for responsible AI development and integration. Additionally, the emergence of quantum computing is just around the corner, presenting a unique set of challenges that we must address responsibly.

 

One aspect that frequently comes up in discussions is regulation. We recently had the privilege of interviewing Beena Ammanath, an expert in AI and its impacts. She emphasised the importance of implementing controls and guidelines to create AI systems that can be trusted to deliver sensible and accurate outcomes. With this in mind, what advice would you offer to organisations venturing into the realm of AI? What key aspects should they consider from a regulatory perspective as they begin exploring AI?

I know Beena well as a friend, and she sits on my board, so I wholeheartedly support her views, even though I haven't seen the interview.

Regarding your question about businesses and regulation, let's set policymakers aside for now and focus on what businesses should be doing. The European AI Act is a crucial regulation that's quickly approaching. If companies haven't started considering how this act will impact their use of AI, they are falling behind and need to catch up immediately. The AI Act has the potential to be as significant as GDPR was for data protection. Some companies even adopted GDPR measures globally because of their operations in Europe, and it's possible we'll see a similar trend with the AI Act, as other governments might follow suit to some extent.

One of the problems for the European Union is that it needs to pass this act before the next European elections, or its progress might stall, causing more delays and uncertainty. It's vital to keep a close eye on this development. In addition, the White House recently established a forum to address the safety of large language models, and major tech companies have been invited to participate. If your business heavily relies on generative AI and operates in or with the US, it's worth exploring whether you qualify to be part of this forum. There's an ongoing call for participants.

On a global scale, we're also beginning to see pockets of regulation and government guidance emerging around the world. Japan initiated ethical principles related to AI back in 2015, India is on the verge of implementing regulations with Modi's support, and Brazil is also considering its own rules. It's a diverse and evolving landscape.

For businesses in the US, it's important to remember that the governor's website is your friend. Regulation is happening at the state level, as there's no comprehensive federal law yet. So, keeping track of relevant state-level regulations is crucial.

Do you believe having regulations in place will hinder the rapid pace of innovation we've witnessed so far?

Speaking of innovation, it's interesting to note that constraints can actually spur innovation in many areas. For example, in the United States, we have the FDA and various legislation governing drug discovery. However, this hasn't impeded drug companies from making significant discoveries. The main challenge for drug companies lies in the three trial stages, especially the third stage involving human testing, which significantly contributes to the high costs of producing a new drug—approximately 11 to 15 billion dollars per drug. With the application of AI, there's hope that we can reduce the cost of these trials. Instead of eliminating the FDA or skipping human testing, the aim is to leverage AI to enhance early-stage drug development and molecule analysis. Human testing will remain an essential step before any drug is approved for the market.

 

One of the key challenges faced by the industry and regulators is the rapid pace of change. Just a few months ago, we were discussing with Beena whether there's a need for more structured self-regulation, and the importance of building teams capable of effectively monitoring guardrails and datasets within the bounds of agreed controls and legislation. What are your thoughts on the topic of self-regulation?

Well, in my opinion, self-regulation is now a vital requirement for any company that aims to be trustworthy and responsible. No company wants to carry the burden of being deemed untrustworthy, as it can severely impact its brand value and bottom line. I had the opportunity to speak at Davos with Brad Smith and the CEO of HSBC, emphasising that this should be on the CEOs' radar. While vision is crucial, CEOs should also stay informed weekly about AI developments and how AI is being used within their companies, enabling them to envision the company's future accurately.

The implementation of AI across different verticals within the company requires a collaborative effort from the C-suite. Using AI to hire people, for instance, can be a risky endeavour due to potential biases. Legislation on this already exists, with more regulation on the way, and companies should be careful not to become the first ones facing lawsuits for discriminatory practices associated with AI usage.

What we know is that regulators are now looking into these areas. In the United States, the Equal Employment Opportunity Commission, along with several other regulators, has taken a clear stance. They have stated that they will apply existing laws to address AI-related issues. Even if there is no specific law solely focused on AI, if your company uses AI and it leads to discrimination against an individual in a job, the responsibility falls on your company, not on the AI itself. While it may be possible to involve the person who sold you the AI in a lawsuit, ultimately, the lawsuit will be against your company.

Therefore, it's essential to ensure that you don't solely delegate AI-related matters to the CTO or assume that they are solely responsible for knowing everything about AI within the company. Different verticals in your organisation might be using AI in different ways. To address these challenges, when I was at the (World Economic) Forum, we developed a toolkit the C-suite can use to monitor and regulate AI usage effectively.

The other piece of self-regulation is the board. Many boards can't keep up with the pace of change, or lack the technical expertise to understand the different aspects of AI. Again, at the Forum, we produced a toolkit for boards, with a comprehensive set of questions to ask the C-suite, to help them fulfil their governance responsibilities in the age of AI.

I must mention that both toolkits primarily focus on pre-generative AI. Generative AI has amplified the issues we were trying to solve with traditional AI – it’s like putting the problems of ordinary AI on steroids.

Could you explain in simple terms how generative AI works? What happens behind the scenes to make it function?

That's an excellent question, especially now that scientists working with AI claim to be seeing early signs of intelligence emerging. These aspects inevitably fall into the realm of existential risk.

Generative AI works by predicting the next word in a sequence. The underlying mechanism involves training large language models on vast amounts of internet data. However, there are significant challenges with this approach, as it relies on whatever data happens to be available, which often overlooks diverse perspectives. This becomes a crucial issue when it comes to geopolitics, gender balance, and the divide between the Global South and Global North.

Much of the data being used and researched with these tools has been created by white men, while the voices of women, people of colour, and those with disabilities are often underrepresented or absent. For generations, men have produced far more of the world's recorded data than the rest of us. This perpetuates biases already known to exist in AI, and unless we carefully consider the issue, we risk further reinforcing a dominant Global North, white voice in the data and in the responses these AI systems generate.

Going back to how the algorithm works with the data: once it's trained on this vast corpus, its primary objective is to predict the next word in a sequence. Quite remarkably, it often generates responses that are fluent, eloquent, and spot-on. However, occasional inaccuracies also occur, and it might even come up with completely fabricated answers. This is a risk users should be mindful of when employing these tools.
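To make "predicting the next word" a little more concrete, here is a minimal sketch in Python. It is only a toy: real large language models learn probabilities with neural networks trained on billions of tokens, whereas this example simply counts which word follows which in a tiny invented corpus (the corpus, the `follows` table, and the `generate` function are all made up for illustration).

```python
from collections import Counter, defaultdict

# Tiny invented "training corpus". Real models train on a large slice of
# the internet; the principle -- learn what tends to follow what -- is the same.
corpus = (
    "ai can transform education . ai can transform healthcare . "
    "ai can also mislead users ."
).split()

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=6):
    """Greedily emit the most frequent continuation at each step."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # never seen a continuation for this word -- stop
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("ai"))  # -> "ai can transform education . ai can"
```

Even at this miniature scale, the limitation Kay describes is visible: the model can only echo patterns present in its training data, which is why the composition of that data matters so much.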

Peter Lee, the VP of Research at Microsoft, who was also involved in training GPT-4, shares an intriguing insight in his book: while we commonly say it just predicts, there seems to be more to it. For example, if you ask it a complex question, like how to put a sofa on top of a roof, it can answer sensibly without having any real understanding of what physical space looks like. Those sorts of things leave scientists either concerned or amused by the possibility that we are starting to see signs of intelligence in large language models.

Another emerging development we should be aware of is what we call AI cannibalism. Computers are now generating new data at a higher rate than humans produce it, and this machine-generated data is fed back in to inform future AI predictions. The process can perpetuate and amplify errors. Users of these tools have noticed a decline in performance, and this might be attributed to AI cannibalism. Whether it's a fundamental limitation of generative AI or a challenge that can be overcome remains uncertain for now.
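Researchers have begun studying this feedback loop under the name "model collapse". The mechanism can be illustrated with a deliberately simplified simulation (a sketch with invented numbers, not how any production system is trained): fit a simple statistical model to a corpus, generate a new corpus from the model, retrain on that output, and repeat. Any word the model fails to reproduce gets probability zero and can never come back, so the long tail of rare words steadily disappears.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Human" text: 2,000 word draws over a 1,000-word vocabulary with a
# Zipf-like long tail, so most words are rare.
vocab_size, sample_size = 1_000, 2_000
true_probs = 1.0 / np.arange(1, vocab_size + 1)
true_probs /= true_probs.sum()
data = rng.choice(vocab_size, size=sample_size, p=true_probs)

print(f"generation 0: {len(np.unique(data))} distinct words")
for generation in range(1, 9):
    # "Train" on the current corpus (estimate word frequencies), then
    # replace the corpus entirely with the model's own output.
    est_probs = np.bincount(data, minlength=vocab_size) / sample_size
    data = rng.choice(vocab_size, size=sample_size, p=est_probs)
    print(f"generation {generation}: {len(np.unique(data))} distinct words")
```

The distinct-word count falls with every generation and never recovers, mirroring the performance decline Kay mentions: each round of training on synthetic output narrows what the model can say.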

 

You've been using the word "tool" repeatedly, and indeed, AI has become a powerful tool that simplifies our lives. Lately, however, there's growing concern about AI's impact on creative fields. We've seen AI produce incredible and clever pieces of art, and many even see its potential to replace human artists in the film and music industries. Do you think AI's involvement will limit or enhance human creativity? What are the potential implications for the future of creativity in these fields?

As human beings, our creativity is boundless and will continue to flourish alongside AI. The key is to consider carefully how we work in conjunction with AI. The opportunities to use AI wisely are immense; the challenge arises when AI replaces human involvement entirely. Certain jobs, like call-centre work, are likely to be taken over by AI in the near future. The question then becomes: what will those displaced workers do?

We can also see AI encroaching on knowledge-based professions, such as law and medicine, potentially leading to job losses in those areas. It's crucial to ensure that we optimise AI to complement human abilities rather than replace them. We must strike a balance where AI enhances human productivity without displacing skilled workers entirely. Losing knowledge workers, who contribute significantly to the economy, could have far-reaching consequences.

When considering the economy, we should note that robotics is not as advanced as knowledge-based AI. While AI can replicate certain tasks performed by knowledge workers, AI-enabled robots have yet to match skilled workers in manual and physical tasks such as designing, crafting, and construction.

As we navigate this landscape, it's essential to focus not only on the risks to our businesses but also on the overall economy. History has shown that past revolutions eventually found equilibrium. However, we need to be particularly attentive to the transitional phase, ensuring that the middle segment of the workforce doesn't face undue challenges over the next 10 to 15 years.

 

When we consider the global impact of AI and technology, there's a crucial question that arises: Is there a need for a more unified regulatory framework to ensure that underdeveloped countries are not left further behind and can also benefit from these advancements? How can we approach this complex issue?

Actually, there is a significant concern, not just for underdeveloped countries but also for so-called liberal democracies. A pressing existential risk is that in 2024, countries representing over half of the world's GDP will hold elections. If we don't have proper control over the way information is disseminated and manipulated by algorithms, we could face challenging situations in our liberal democracies in 2024. Take nudging and manipulation as an example: a Fitbit telling us to exercise is helpful, but the very same algorithms can be tuned to manipulate us and compromise our agency. This issue is approaching rapidly, and it's essential to address it in the coming year. I wanted to emphasise this point about liberal democracies.

Going back to the actual question regarding global regulation and coalitions, there has been considerable progress. The World Economic Forum recently formed a coalition, and the UN held discussions on this subject last week. UNESCO also issued recommendations on AI use in education, which are crucial and worth exploring. The OECD has developed principles, with many member countries signing up. Additionally, the Global Partnership on AI, including members from the Global South, was established about four years ago by Macron and Trudeau. So, international efforts are underway.

Some propose the idea of a global body like the International Atomic Energy Agency, but applying that model to generative AI presents challenges. Unlike nuclear technology, which is generally controlled by states, generative AI is widely accessible, making usage difficult to monitor; inspectors can't easily check how everyone is using it.

We have witnessed efforts from the G20 and the G7, as well as collaboration with the OECD, expressing their desire to take action together. However, I must admit that I'm not optimistic. For the past seven or eight years, we have been striving to achieve a simple goal – to get the world to agree not to develop lethal autonomous weapons. These weapons are algorithms capable of making their own decisions to kill. Unfortunately, we have not succeeded in reaching a consensus on this crucial issue alone. So, considering the challenges we faced with just one aspect, I am doubtful about achieving success across the board.

The issue is that while there are numerous sets of principles around the world – approximately 190 of them – there are about nine principles that everyone agrees on, whether in China, Africa, or the UK. The real challenge lies in turning these principles into actionable practice. This applies not only to governments but also to organisations. At CTT, we assist organisations in putting their principles into practice. I recently spoke with someone who runs a company with 10,000 employees, and they were wondering how to ensure that all of their employees prioritise responsible AI in their decision-making. It's a multi-faceted challenge, but a significant step is having it endorsed and legitimised by the C-suite. The CEO's support and commitment to putting responsible AI first are essential for fostering a culture of responsible AI throughout the organisation.

You've had a remarkable career journey, transitioning from the legal world to the tech world. As both fields are often dominated by men, what has been your personal experience as a woman entering the tech industry compared to your time in law? Could you share some insights from your unique perspective?

When I began my journey as a barrister, the legal profession was heavily male-dominated, though it has changed somewhat now. Reflecting on your question, I realised that the traits required for survival as a woman in tech were quite similar to what I needed as a female barrister in the 20th century. It's disheartening to think that the advice I might give to women in tech today resembles what I needed back then.

In my case, as one of the pioneers of the responsible AI movement in 2014, the challenges I faced were more focused on objections to the idea itself rather than my gender. Over time, I've grown wiser and more resilient, making it easier for me to stand up for my beliefs. During my journey in law, I learned the importance of resilience, self-belief, and determination to persevere even when facing critical judges.

Building a supportive community around yourself is crucial; it enables you to bounce back from setbacks and keep pushing forward. Mentors played a significant role in my success as a barrister, and I believe that having a good mentor is equally essential for success in both law and the tech industry.

On a positive note, I am encouraged to see that the responsible AI movement is being led by women and people of colour, adding diverse voices to the field. This counters the fact that many technologists are young, white men. To promote responsible AI and self-regulation, it's vital to bring together diverse teams, ensuring academic and gender diversity, when developing algorithms or creating products. This inclusive approach enhances the quality and impact of our work while avoiding potential biases.

 

Conclusion

Upon reflection, the world of AI undoubtedly presents incredible opportunities but also concerning signs that demand controlled change and accountability. As a society, we must vigilantly monitor and advocate for regulation while supporting businesses and global organisations. At the same time, we cannot stifle progress. AI is a captivating subject that will continue to evolve rapidly.

On a positive note, more women stepping into AI leadership roles and more diverse teams can help ensure that AI benefits a broader population. As we move forward, let's hope common sense prevails and we shape a future where AI serves us all in a responsible and inclusive manner.


About Us

Templeton has a 27-year track record of recruiting thousands of IT professionals around the globe and a vast database of candidates to suit your needs. Find out more about our multi-award-winning recruitment services.

 

Discover more from our Fireside Chat Series: