Big Data & Analytics - Thinks and Links | June 10, 2023
Happy Weekend!
Conversations about Data and AI this week included a lot of discussion about getting Enterprise AI programs off the ground. Investors and boards are asking about it, and people in all business functions are grappling with the question of how AI can impact the business. The benefits that investors expect from AI will not come simply from email autocomplete and better chatbots. We’ve been looking at studies showing that the significant value AI can deliver is only possible when substantial investments in capabilities have been made.
It is also clear that regulatory and operational risks are only going to increase from here. Government bodies at the state, national, and continental levels are all putting forth potential regulations and laws that could affect how firms work with AI. Legal challenges to firms using Generative AI have already started and are likely to increase.
All this opportunity and risk translates to a clear need for Strategy, Governance, and Execution around Data and AI. Enterprises need to have a plan. Leading organizations are drafting AI Policies, governance processes, staffing plans, and technology infrastructure to be ready for the surge in demand for AI capabilities and associated risk. Here are some steps every organization should be considering:
Understand AI: Begin by gaining a comprehensive understanding of AI, specifically generative models, and their implications for your business. This includes grasping potential benefits, risks, and the ways these models are beginning to be incorporated into the technology stack.
Assess Current Capabilities: Review your existing technological infrastructure and skills base. Identify gaps that could hinder AI implementation or heighten risk, then develop a strategy to address them.
Develop AI Policies: Establish clear enterprise AI policies that define guidelines for its usage and protection within your organization. These guidelines should cover topics like approved use cases, ethics, data handling, privacy, legality, and regulatory impacts of AI-generated content.
Establish Governance Processes: Create governance processes to oversee AI deployment and ensure compliance with internal policies and external regulations.
Plan Resource Allocation: Consider staffing and resourcing plans to support AI integration. This may include hiring AI specialists, engaging with consulting firms, developing staff training, or investing in new technology.
Prepare for Risks: Generative AI can present many unique risks, such as IP leakage, reputational damage, and operational issues. Risk management strategies should be included in all phases of your AI plan.
Manage Data Effectively: Ensure that your data management systems can support AI demands, including data quality, privacy, and security.
Watch this space for updates on service briefs that address these market needs. For now, feel free to request an AI Executive Briefing where we cover these topics and more in depth.
Want AI? How’s your data?
AI is all about data. Here’s a Wall Street Journal article describing just that. The article describes several examples of companies working on data to out-innovate competition with AI. Syneos Health’s data journey was shared in the article. This included:
Developing a corporate data management strategy for “managing, cleaning, and organizing all the data across the entire business.”
Integrating data from operational systems, such as enterprise resource planning and clinical trial information, into a data lake.
Spending 18 months preparing to train AI models by building “feature stores” based on data scientists and business domain expert insights.
Now they’re releasing the “Protocol Genius” tool, which is built on OpenAI’s LLMs to search across 400,000 clinical protocols in a secure and innovative way.
Shadow IT is Increasing; Risks Too
Gartner found that 41% of employees acquired, modified, or created technology outside of IT’s visibility in 2022, and it expects that number to climb to 75% by 2027. Shadow IT introduces major risks, including data breaches, compliance violations, and loss of visibility and control. Digital transformation, expansion of cloud services, and remote work are all driving this trend. One topic this article doesn’t discuss: Artificial Intelligence. I believe that estimates not accounting for AI should be revised upward. AI makes it easier to build your own shadow IT, and it also accelerates the potential for data breaches, compliance violations, and loss of visibility and control. Organizations should adopt a proactive approach to managing shadow IT (and AI) by identifying, assessing, and securing it. They should also give teams clear guidance on the rules for shadow IT and on the secure, sanctioned alternatives for solving their business problems.
Nasty Zero Day Found in Popular MLOps System
Machine Learning Operations (MLOps) is the process by which AI/ML models are developed, deployed, managed, and monitored. It is a core function of Machine Learning engineering that streamlines the model development lifecycle from data ingestion to model production. MLflow is a very popular open source tool for facilitating MLOps. The vulnerability was discovered and patched weeks ago, but it would have allowed system and cloud takeover because the tool could be exposed to the internet without authentication. No technology is immune from zero days, but with many organizations rushing to adopt AI capabilities, vulnerabilities like this could become a larger issue. And if the AI development is happening as a form of shadow IT, vulnerabilities like this could be in your environment, running without the latest update.
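One practical mitigation for issues like this is simply checking whether a deployed tool is behind a patched release. A minimal sketch in Python of that kind of check (the version threshold here is illustrative, not the actual fixed MLflow release, and the comparison logic is generic rather than MLflow-specific):

```python
def is_patched(installed: str, patched: str = "2.3.1") -> bool:
    """Return True if a dotted version string is at or past an illustrative
    patched release. Compares numerically, component by component."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(patched)

# An install predating the (illustrative) patched release should be flagged.
print(is_patched("2.2.0"))   # False
print(is_patched("2.10.0"))  # True (numeric, not lexicographic, comparison)
```

The numeric tuple comparison matters: a naive string comparison would wrongly rank "2.10.0" below "2.3.1". An inventory script like this, run against what is actually installed across your environment, is one way to surface shadow-IT deployments running without the latest update.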
2 Ways to Block AI and 1 Way to Use AI from Palo Alto
Firewall Settings: https://www.paloaltonetworks.com/blog/2023/05/securing-and-managing-chatgpt-traffic/
Enterprise DLP configuration: https://docs.paloaltonetworks.com/enterprise-dlp/enterprise-dlp-admin/configure-enterprise-dlp/enterprise-dlp-and-ai-apps/how-enterprise-dlp-safeguards-against-chatgpt-data-leakage
How to use ChatGPT within XSOAR Playbooks: https://www.paloaltonetworks.com/blog/security-operations/using-chatgpt-in-cortex-xsoar/
Lariar on AI – My Better Half Writing About AI in Search
Shameless spousal promotion. My wife has been working with Google and Bing for years, seeing first-hand the AI developments going into their flagship products. In this article she shares the likely impacts on her industry, where AI is accelerating some workflows and introducing new risks.
AI Will Save The World
This long essay by Marc Andreessen is worth a read. Andreessen notably coined the phrase “Software is eating the world” twelve years ago – and it has. For Andreessen, the potential of artificial intelligence (AI) to augment human intelligence and improve life far outweighs the existential risks. He argues that AI is a tool that can help us solve problems, create new knowledge, and enhance our creativity. He gives examples of how AI can assist us in education, health, law, art, and other domains. He also addresses some of the common fears and misconceptions about AI, such as its impact on jobs, ethics, and privacy. He concludes by urging us to embrace AI to make everything we care about better.
In another hopeful (and shorter) story: amazing software engineering R&D to speed up code using Google’s DeepMind AI.
Errata: The Air Force Did Not Simulate a Drone Killing Its Operator
Oops. I got caught up sharing an article last week that fit a narrative about AI running off the rails. It turns out there was significant human error in the reporting. The Air Force now denies that the simulation ever took place. The original source “misspoke” and the story went viral. Perhaps more evidence in favor of Marc Andreessen’s thesis above.
Have a Great Weekend!
This is not a recommended Security Use Case