Artificial Intelligence (AI)
(Content Via Intact Insurance Newsletter)
We are regularly bombarded with AI in all manner of communications. Lately there has been a lot of media coverage of OpenAI,
the vendor behind ChatGPT, which can be used to automate customer service, streamline internal communication, write code,
answer questions, summarize documents, and improve overall efficiency. However, it’s important to keep in mind that this
technology can also pose a risk to our company’s data privacy and security.
- AI is highly reliant on the data sources it draws responses from, so a single “bad” source can badly skew results or even produce answers that are entirely incorrect.
- Sensitive data and intellectual property are finding their way into AI apps, so additional precautions are necessary to ensure non-public information is appropriately protected.
- Threat actors are using ChatGPT and other AI capabilities to create sophisticated malware and attacks that can evade traditional detection methods. That includes spam, business email compromise messages and SMS phishing written with help from AI.
- Validate data returned by AI engines, like ChatGPT, for accuracy. AI systems can “hallucinate” — making up facts, and even citing people or sources that don’t exist.
- Never enter non-public information into any public website for any reason.
- As with any threat, avoid clicking links provided, and always question and verify anything odd or suspicious.
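As a rough illustration of the “never enter non-public information” rule, the sketch below checks text for a few common sensitive-data patterns before it would be submitted to a public AI tool. The pattern names and regular expressions are illustrative assumptions only — a real deployment would rely on an organization-approved data loss prevention (DLP) solution, not a hand-rolled filter.

```python
import re

# Illustrative patterns for common non-public data (assumptions, not a
# complete list); a real deployment would use an approved DLP tool.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_sensitive_data(text):
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

# Example: screen a prompt before it reaches a public AI service.
prompt = "Summarize this: contact jane.doe@example.com about the claim."
hits = find_sensitive_data(prompt)
if hits:
    print("Do not submit - found:", ", ".join(hits))
```

The point of the sketch is simply that screening must happen *before* anything leaves your machine; once text is pasted into a public site, it cannot be recalled.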
REMEMBER – DO YOUR PART, BE SECURITY SMART!