Generative Artificial Intelligence – Smart Friend Or Sneaky Rival?

The first 20-odd years of the 21st century have not been for the change-averse. Technology-based disruption is not new but, in this particular period of history, the waves of change seem to be coming harder and faster.

With the digital revolution now maturing as most aspects of contemporary interaction and economic activity move online, focus has turned to the next wave of automation – the real rise of the machines.

In recent weeks this discussion has been dominated by a deafening buzz around a generative artificial intelligence tool called ChatGPT. It is one of many, but its arrival is being seen as a game changer, creating the clear and present danger of major disruption.

For anyone who has missed the hype, ChatGPT is a product of OpenAI, trained on vast amounts of online text and data, that can respond to requests for all manner of content, reports, presentations and narrative. It is basically a chatbot on steroids, with the ability to mimic nuances and do massive amounts of research and compilation in seconds.

Its arrival, along with that of other types of generative AI, has elicited the expected responses:

  1. Wow, that will make life and work so much easier.
  2. Hang on, will that take away all the jobs in the content, research, communications and design industries?

The reality usually lies somewhere in the middle. Waves of new technology, right back to the industrial revolution, have disrupted the labour force and changed the occupational mix. However, the lesson of history is that humans do not just sit around lamenting lost jobs. We move to a different space and do different things.

The computer revolution, followed closely by the digital revolution, wiped out hundreds of thousands of jobs in a relatively short period. Yet, according to a 2017 report by McKinsey and Company, the United States still saw a net increase of 15.8 million jobs off the back of those changes.

The McKinsey research suggests up to 375 million people may need to change occupations by 2030 as AI adoption ramps up. Based on history, humans will make the change and move on. However, this is massive disruption, and we would be naive to believe it will happen without collateral damage.

On the specific subject of ChatGPT and its ultra-smart generative AI cousins, quite a few issues arise.

On the upside, generative AI can be a fast and powerful research buddy, quickly producing documents and images drawn from a vast pool of research and content.

This new generation of software does this with an ability to mimic the nuances of style and produce what can appear to be a job-ready report, design, script, presentation or speech.

It surpasses what humans can do in terms of speed, consistency of collation and access to content sources. It certainly takes desk research to a new level and provides a useful safety net against missing issues and information that may be important.

On the flipside, there is increasing evidence that relying on generative AI without significant checks and balances may be perilous.

It is easy to dismiss a lot of the content as convincing-sounding nonsense. That is an overly harsh assessment but there is little doubt that this AI tool can make mistakes and pull in offensive, inaccurate and poorly curated content.

The counter to that could be to limit the input to a smaller number of trusted sources. That would be using it more as a writer than a researcher. The jury is still out on the quality of the writing.

These AI tools can also only use, as raw materials at least, content and images that already exist. They are not the place for rampant creativity, fresh ideas and new thinking. Humans and their creative minds still bring a lot to the table.

Having said that, there is also paradoxical evidence that generative AI is opening up new types of creativity based on pseudo-random collations of existing content to produce something new.

For example, part of a recent episode of the animated television show South Park was written by AI. The content had to come from somewhere but, for all intents and purposes, it looks and sounds original. There is much debate to be had on these new norms of originality and the very nature of creativity.

One of the more disturbing observations in this discussion came from the Penn State College of Information Sciences and Technology, which found that generative AI algorithms and models contained “significant implicit bias” against people with disabilities. This appeared to be a result of the programs drawing on common viewpoints and prejudices to inform their responses.

Then there is the prickly area of copyright, plagiarism and intellectual property protection. Copyright law allows for fair dealing with content for particular purposes, but how do we know if generative AI is dealing fairly?

There was considerable controversy last year when an AI-generated image won a Colorado art prize. According to the Harvard Business Review, the artist argued that the work was 10% AI and 90% artist, because he spent more than 80 hours producing some 900 different versions of the art.

There will also be a lot of eyes on the legal action brought by Getty Images against AI image generator Stability AI, which claims that the tool unlawfully copied millions of images. And that will be just the tip of the iceberg once you throw in the complications of cultural appropriation.

With so many things that could bring you unstuck or that need careful checking, there may be a fine line between saving lots of time with generative AI and creating a curation nightmare.

That said, the technology is still evolving. Over time the risks may be better mitigated and the rules of engagement will change.

In the meantime, the advice seems to be: “Embrace with caution”. And, on the jobs risk: “Be alert but not alarmed.”

Shane Rodgers, Head of Media and Platforms & Executive Strategy Adviser

Sarah Heath, Design Director