
Big thank you to Cisco for sponsoring this video!
Hackers are attacking AI models. Prompt injection attacks are happening all the time. AIs are hallucinating and giving incorrect information. The AI models you download could have been built by hackers. Your users are pasting confidential information like passwords and API keys into online AI models. Developers are integrating AI systems into their applications without checking whether those models are vulnerable to prompt injection.
We need a way to protect AI systems. And Cisco has a solution.
// DJ Sampath's SOCIALS //
LinkedIn: / djsampath
Twitter/X: / djsampath
// David's SOCIAL //
Discord: discord.com/invite/usKSyzb
Twitter: www.twitter.com/davidbombal
Instagram: www.instagram.com/davidbombal
LinkedIn: www.linkedin.com/in/davidbombal
Facebook: www.facebook.com/davidbombal.co
TikTok: tiktok.com/@davidbombal
// MY STUFF //
www.amazon.com/shop/davidbombal
// SPONSORS //
Interested in sponsoring my videos? Reach out to my team here: [email protected]
Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel!
Disclaimer: This video is for educational purposes only.