The side effects of (re)generative AI on cyber security
The professionally paranoid can't help but look at ChatGPT and its siblings from a risk perspective. Well, at least initially. We tend to think in risk vectors, threat actors and the like. Leaving aside all the innovative benefits of the technology, there are sensitive elements that require attention. This is how we currently deal with this kind of innovation: impulse-driven …
In a first "impulse", we try to fit what we see into the simple equation where risk equals probability times damage. As a result, we get nervous as, looking at ChatGPT and its siblings, we can do the math neither on probability nor on the potential damage that the use of this technology could bear for us. The deeper we drill, the more questions we get. Scientifically sound answers drip through the tickers only at a slow pace since many of those require research time prior to assessing the effects. And that does not happen over night.
In a second "impulse", we focus on the obvious: the elements we are familiar with and which resonate the most in public perception. That is, the clickbait that constantly makes headlines. This can be the result of a first, publicly prominent traumatic experience in adopting a new technology. It can also be clever marketing that repeatedly frames our expectations in order to overshadow those risks that fall outside the "cozy", technology-coloured framework of the cyber community.
In a third "impulse", we call for regulation and place the responsibility on the shoulders of lawmakers, as we lack any means of dealing with what we cannot combat at that very moment. Interestingly enough, this typically does not lead to short-term solutions but to a reputation disaster for the cyber community, with the classic perception of us hindering innovation taking over the public discourse. Regulation, on top, is perceived as evil. Even more so than the targets of the regulatory efforts. Whoever they are.
In a fourth "impulse", and in the absence of any progress in dealing with the tangible and intangible risks, we blame whatever inevitably goes wrong on those who put this new technology into practice and who, lacking protective measures, awareness and training, unavoidably hit the wrong buttons along the way. I call that the "Lieschen Mueller effect": the blame goes to the weakest link in the chain.
After all those impulses, we realize that we have wasted time, effort and money fighting a genie that is irreversibly out of the bottle, rather than understanding the various risk factors: what they entail, how they interdepend, and which experts to consult in dealing with them and, ultimately, mitigating their impact.
In our workshop, I'll share some of the findings from my book and invite you to jointly revisit a selection of those factors that detrimentally impact our cyber security infrastructure, such as: Trust, Credibility, Identity, Authenticity, Context, Value, Bias, Control, Intellectual Property, Confidentiality, Convenience, Liability, Ethics, Integrity.
We'll go beyond the obvious and get a better understanding of how to consciously and responsibly put new technology into action, all while increasing security awareness around the use of (re)gen AI.