
Don’t view AI as a thing of despair, says cyber security expert

Amid the widening wave of artificial intelligence (AI) adoption in recent years, the debate around its impact on jobs has reached fever pitch, with many worried about mass layoffs as a consequence of AI’s progress.

However, the AI story is not one of despair at large-scale job losses; rather, it is one of new roles that no one has ever done before, especially in cyber security.

So says Daniel Cuthbert, a member of the Cyber Technology External Advisory Group at the UK’s Department for Science, Innovation and Technology.

Cuthbert this morning delivered the keynote address at ITWeb Security Summit 2024, at the Sandton Convention Centre in Johannesburg.

He said the hurdles identified with regard to AI have resulted in two opposing camps: people who genuinely fear for their jobs, and a cynical bunch largely drawn from the security world.

“I think both sides are quite dangerous. On the one hand, I don’t foresee robots coming to kill us all. I think we need to have a central balance, where we start to see AI not as something that’s going to take control of us, but as something that’s going to enhance us and make us super-capable.”

The next rush of people will be coming into new jobs that require understanding the data that AI models are pulling. This, as estimates show that 320 million petabytes of data are generated every day, he stated.

“If you look at the vast majority of vulnerabilities today, they are all data-driven. We still can’t get to grips with the fact that the data we’re putting into and training these AI models on needs to be clean. I think that’s where we’re going to see far more jobs pop up; roles such as AI data engineers and AI data experts.”

He noted that ethics specialists, security engineers, behaviour analysts and legislative experts will also be required.

“As we start to build and understand these models, the notion of ethics really does come into play, with questions as to whether the models are doing things that are good for humankind. There needs to be someone to validate that. We need security engineers that understand artificial intelligence.

“There are new roles popping up and I don’t see AI as being a totally bad thing. As we take the three laws of robotics and really use artificial intelligence for the better, I see it as us creating superbeings over anything else.”

Last year’s Future of Jobs Report by the World Economic Forum revealed that employers across the globe anticipate 69 million new jobs will be created and 83 million positions will be eliminated by 2027, as a result of the adoption of new technology and increased digital access.

Cuthbert explained that his journey with AI over the past five years has taught him to view it as a concept for synergic learning. “We are in a phase where there is so much data. We consume and create a lot of data, which in itself is a problem, but more so for artificial intelligence.

“When we consider AI today, we are in a unique position, because we have the concept of humans who are cognitive, creative, emotional, have feelings and can sense danger, and we have machines that are very good at consuming data and doing tasks far faster than we could ever do.”

He pointed out that given the vast amount of data that is consumed, the solution is to consider how to leverage AI to make things easier.

“Some of the questions are: Am I going to lose my job in the next couple of years with ChatGPT-5 being trained? Is my job security ineffective? Are the models able to exploit errors, find vulnerabilities, can they consume stuff better than I can? In my opinion, the answer is no.

“The models are only as good as the input you give them. What we’re starting to see is the world of prompt engineering, speaking to a model like it’s a seven-year-old.”

He stated there are many areas where AI can be used to help in the threat intelligence space, including creating reports more easily and using it to read all the reports.

Cuthbert advised the summit audience – consisting of security personnel and experts – to start thinking about how to use AI to improve their jobs, augment their capabilities to better understand vulnerabilities, build systems in a secure way, and better understand the attacks that are happening.
