SCMagazine.com reported that “Microsoft has discovered a new method to jailbreak large language model (LLM) artificial intelligence (AI) tools and shared its ongoing efforts to improve LLM safety and security in a blog post Thursday.” The April 15, 2024 article entitled “Microsoft’s ‘AI Watchdog’ defends against new LLM jailbreak method” (https://tinyurl.com/3px8e7r8) included…
Latest from Internet, IT & e-Discovery
Bad news! Poor Cloud Controls at HHS!
HealthCareInfoSecurity.com reported that “A Department of Health and Human Services division that administers funding, training and other services to children and families is putting sensitive data at high risk because of gaps in cloud security controls and practices, according to a watchdog agency report.” The April 2, 2024 article entitled “Poor Cloud Controls at…
$22M Ransomware Payment apparently stolen from UnitedHealth Group!
SCMagazine.com reported that “A $22 million ransom payment allegedly made by Optum, which is supported by blockchain transaction records associated with ALPHV/BlackCat, was apparently stolen by the ransomware-as-a-service (RaaS) in an exit scam.” The April 8, 2024 report entitled “Change Healthcare breach data may be in hands of new ransomware group” (https://tinyurl.com/yc8nzak2) included…
Do you know about the three cloud security misconceptions?
SCMagazine.com reported that “There’s a lot going on inside the minds of small and medium-sized business (SMB) owners… Increasingly, those opportunities exist in the cloud, whether it’s gaining new insights from data, effortlessly scaling to meet demand, or enabling collaboration from anywhere. But when it comes to cloud security,…” The March 29, 2024 article entitled…
Open Source AI framework may be a security risk!
SCMagazine.com reported “An active attack targeting a vulnerability in Ray, a widely used open-source AI framework, has impacted thousands of companies and servers running AI infrastructure — computing resources that were exposed to the attack through a critical vulnerability that’s under dispute and has no patch.” The March 26, 2024 article entitled “Flaw in Ray…
CIOs need to work with CFOs for better IT funding!
CIO.com reported that “Digital success requires a product-based approach to IT — and a shift to persistent rather than per-project funding. Here’s how to address your CFO’s concerns about costs and risks. CFOs want certainty when it comes to spend. And they want to know exactly how much return on investment (ROI) can be expected when…
Payments Fraud is faster and easier with AI!
BankInfoSecurity.com reported that “Artificial intelligence technologies such as generative AI are not helping fraudsters create new and innovative types of scams. They are doing just fine relying on the traditional scams, but the advent of AI is helping them scale up attacks and snare more victims, according to fraud researchers at Visa.” The March 21,…
CIOs need to take the time to think about legal issues in SaaS!
CIO.com reported that “Years into strategies centered on adopting cloud point solutions, CIOs increasingly find themselves facing a bill past due: rationalizing, managing, and integrating an ever-expanding lineup of SaaS offerings — many of which they themselves didn’t bring into the organization’s cloud estate.” The March 15, 2024 article entitled “CIOs take aim at SaaS…
Healthcare breach at UT Southwestern!
SCMagazine.com reported that “Dallas-based UT Southwestern Medical Center had data from almost 2,100 individuals compromised following a data breach, The Dallas Morning News reports.”
The March 12, 2024 report entitled “UT Southwestern breach hits over 2K patients” (https://www.scmagazine.com/brief/ut-southwestern-breach-hits-over-2k-patients) included these comments from a UT Southwestern spokesperson:
We are assessing the data to prepare notifications…
Will the major Generative AI vendors allow an academic investigation of their security?
Computerworld.com reported that “More than 150 leading artificial intelligence (AI) researchers, ethicists and others have signed an open letter calling on generative AI (genAI) companies to submit to independent evaluations of their systems, the lack of which has led to concerns about basic protections. The letter, drafted by researchers from MIT, Princeton, and Stanford University, called…