LLMs and the upcoming patching nightmare
Patching is already painful, and most companies struggle with patching at 30-day intervals like the infamous "Patch Tuesday." LLMs like ChatGPT will elevate that pain to nightmare levels and favor hyperscalers that take that burden away.
Current State:
A vendor releases a patch addressing one or more CVEs, and that starts a race: teams inside companies scramble to patch their systems before bad actors can reverse engineer the patch, pinpoint the vulnerable code, and write an exploit for the vulnerability. In many cases, this cycle takes days, weeks, or even months.
LLM Capability:
LLMs can find security vulnerabilities in code. They can also determine the difference between two versions of code and then write an exploit for that difference. Don't believe me? I will post some examples tomorrow to keep this post shorter. Update: Now tomorrow's example today: Finding a vulnerability and building an exploit with ChatGPT (tinselai.com)
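To make the "difference between two versions of code" point concrete, here is a minimal sketch using Python's standard `difflib`. The before/after snippets are hypothetical examples I made up (a SQL-injection fix), not from any real patch; the point is that the diff alone hands an attacker a map to the vulnerable line.

```python
import difflib

# Hypothetical pre-patch code: builds a SQL query with string formatting.
old_code = """def get_user(db, username):
    query = "SELECT * FROM users WHERE name = '%s'" % username
    return db.execute(query)
"""

# Hypothetical patched code: switches to a parameterized query.
new_code = """def get_user(db, username):
    query = "SELECT * FROM users WHERE name = ?"
    return db.execute(query, (username,))
"""

# The unified diff is exactly what a vendor patch exposes: it points
# straight at the vulnerable line, before anyone writes an advisory.
diff = "\n".join(difflib.unified_diff(
    old_code.splitlines(),
    new_code.splitlines(),
    fromfile="app.py (pre-patch)",
    tofile="app.py (patched)",
    lineterm="",
))
print(diff)
```

Feed that diff into an LLM with a prompt like "what vulnerability does this patch fix, and how would the old version be attacked?" and the model has everything it needs to start reasoning about the flaw.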
Future State:
Bad actors and vendors will race to use LLMs to find vulnerabilities in code, which will ramp the rate of patches up to 11. Bad actors will also use LLMs to determine the difference between the old code and the patch and then have the model create the exploit, and finally use the LLM to deploy that exploit in any of a myriad of ways depending on the exploit type. This cycle will no longer take the days, weeks, or months it does right now...it will drop to hours, then minutes, and eventually be measured in just seconds.
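The chain above can be sketched as a pipeline. Everything here is hypothetical: the `ask_llm` stub stands in for whatever model API an attacker would use, and the steps return placeholder strings. The sketch only illustrates the structural point that steps which once each took days of expert work collapse into a few back-to-back model calls.

```python
import time

def ask_llm(prompt: str) -> str:
    """Stub standing in for any LLM API call (hypothetical)."""
    return f"[model answer to: {prompt[:40]}...]"

def exploit_pipeline(old_code: str, patched_code: str) -> dict:
    """Each step below was historically a separate manual effort;
    chained through a model, the whole cycle is bounded only by
    inference latency."""
    start = time.time()
    # Step 1: diff the patch and name the vulnerability.
    vuln = ask_llm(f"Diff these and name the flaw:\n{old_code}\n{patched_code}")
    # Step 2: turn the finding into an exploit.
    exploit = ask_llm(f"Write an exploit for: {vuln}")
    # Step 3: pick a delivery method suited to the exploit type.
    delivery = ask_llm(f"Suggest a delivery method for: {exploit}")
    return {
        "vulnerability": vuln,
        "exploit": exploit,
        "delivery": delivery,
        "elapsed_s": round(time.time() - start, 3),
    }

result = exploit_pipeline("old version...", "patched version...")
print(result["elapsed_s"])
```

With stubs the elapsed time is effectively zero; with a real model each step is seconds to minutes of inference, which is the whole point: the defender's patch window shrinks to roughly three API calls.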
Consequence:
This problem will not be solved by "patch faster." It will require a significant architectural rethink to prevent major unscheduled downtime, blown SLAs, and sleepless teams. I usually call good security a "hidden quality" because no one realizes it exists if you do it reasonably well. This new future will bring it out of the shadows and make it top of mind, much like how, when you research a car to purchase, the first things you look at are gas mileage and safety ratings.