
AI Governance: Shadow AI and Python on Endpoints

The discussion around AI governance is no longer just theory.

Today, generative AI, copilots, and local automations are already part of the daily routine of business areas, data teams, and operations.

At the same time, the use of AI tools outside IT’s control is growing. This phenomenon has become known as Shadow AI.

According to projections presented by Gartner, a significant share of companies is expected to suffer security or compliance incidents linked to Shadow AI by 2030, mainly due to unmonitored use of AI by employees.

The blind spot is that this Shadow AI doesn’t live only in SaaS apps or browser chat windows. Increasingly, it materializes as Python+AI code running directly on endpoints: notebooks, workstations, VMs, and servers close to business areas.

That’s where the conversation about AI governance in companies meets a classic corporate Shadow IT problem, now powered by Python and AI.

What AI governance actually means in practice

AI governance in companies has become an urgent topic.

It’s no longer a question of whether AI will be used, but how: securely, auditably, and in line with internal policies and regulation.

In practice, AI governance has to deal simultaneously with:

  • data and privacy risks

  • operational and continuity risks

  • regulatory and reputational risks

Without this integrated view, a company may advance in AI initiatives, but remain exposed when incidents, audits, or board questions arise.

AI has become operational infrastructure

Generative AI tools now support tasks such as:

  • generating Python code to integrate internal systems

  • automating data processing and reporting routines

  • creating small agents that run queries, transformations, and daily outputs

In other words, AI already touches critical data, regulated systems, and sensitive processes.

When this happens without visibility and clear trails, AI governance becomes just another PDF document instead of a real control mechanism.

From Shadow IT to Shadow AI

Shadow IT is the use of technology outside the official IT umbrella.

Shadow AI is the new layer of this problem, now with artificial intelligence tools:

  • employees use AI without formal approval

  • decisions are influenced or automated with no clear trail

  • code is generated and adapted by AI with no review standard

This Shadow AI may start in the browser, but it quickly materializes as Python scripts running on users’ machines.

When this happens without inventory or monitoring, the organization loses control of:

  • which data is being processed

  • on which endpoints this is happening

  • who is behind those automations

The problem becomes even more critical when this Shadow AI is implemented as Python scripts running directly on endpoints, often with access to sensitive systems and data.

Where Shadow AI hides: Python on endpoints

Shadow AI doesn’t show up only in cloud provider contracts or large AI projects.

A relevant part of it lives in the day-to-day of teams, in automations created to solve specific problems.

And in this day-to-day reality, Python on endpoints has become the channel of choice.

Scripts and agents generated by AI outside official pipelines

With generative AI, anyone with intermediate knowledge can ask:

“Create a Python script to pull data from our internal system, consolidate it in a spreadsheet, and send a summary by email every day.”

That script typically:

  • is saved in a local folder or on a VM

  • runs via a simple scheduler or command line

  • accesses internal APIs, databases, and network file shares

All of this often happens without going through official development, DevSecOps, or change management pipelines.
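A minimal sketch of the kind of script this prompt produces (the internal API URL, SMTP host, and email addresses below are hypothetical placeholders, not real endpoints):

```python
# Hypothetical ad-hoc report script of the kind AI assistants generate on request.
# The API URL, SMTP host, and addresses are illustrative placeholders.
import csv
import json
import smtplib
import urllib.request
from email.message import EmailMessage

API_URL = "https://erp.internal.example/api/orders"  # hypothetical internal API


def fetch_orders(url: str) -> list[dict]:
    """Pull raw records from the internal system."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())


def consolidate(orders: list[dict]) -> dict[str, float]:
    """Sum order totals per region for the daily summary."""
    totals: dict[str, float] = {}
    for order in orders:
        totals[order["region"]] = totals.get(order["region"], 0.0) + order["total"]
    return totals


def write_spreadsheet(totals: dict[str, float], path: str) -> None:
    """Dump the consolidated totals to a local CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["region", "total"])
        writer.writerows(totals.items())


def send_summary(totals: dict[str, float]) -> None:
    """Email the summary via an internal SMTP relay."""
    msg = EmailMessage()
    msg["Subject"] = "Daily sales summary"
    msg["From"] = "bot@example.com"
    msg["To"] = "team@example.com"
    msg.set_content("\n".join(f"{r}: {t:.2f}" for r, t in totals.items()))
    with smtplib.SMTP("smtp.internal.example") as s:  # hypothetical SMTP host
        s.send_message(msg)
```

Each run of a script like this touches an internal API, the local filesystem, and a mail relay, with no pipeline, review, or trail involved.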

In practice, this creates a scenario where:

  • part of the company’s critical automation is in non-inventoried scripts

  • these scripts use AI to generate, adjust, or assist decisions

  • none of these flows shows up in traditional AI governance dashboards

Endpoints as the new blind spot

When people talk about AI governance, attention usually goes to:

  • AI models in the cloud

  • official integrations between systems

  • large applications with embedded AI

But endpoints are increasingly the place where:

  • Python scripts with AI are executed

  • personal and sensitive data is handled

  • credentials and tokens are stored with no standard

Without visibility into what runs on those endpoints, the company creates an operational blind spot.

This is exactly where Shadow AI, Python, and Shadow IT converge: Python+AI running at the edge, outside official governance.

Why the risk is greater in regulated industries

Sectors such as finance, healthcare, insurance, energy, and utilities usually have:

  • strong regulatory pressure

  • high volumes of sensitive data

  • intense dependence on automation

In these environments, the impact of Shadow AI running as Python on endpoints is even greater, because any misstep can translate into:

  • reportable incidents to regulators

  • administrative sanctions

  • material reputational damage

LGPD and responsibility over AI use

Brazil’s LGPD (and similar privacy regulations) place responsibility on organizations for:

  • how personal data is processed

  • which controls exist to protect that data

  • how the company responds to incidents

If a Python+AI script on an endpoint accesses personal data, that is data processing.

Even if the script was created “just to help the team,” the responsibility remains with the organization.

Without inventory, the company cannot:

  • know how many of these scripts exist

  • assess the real risk they represent

  • respond credibly to an audit or investigation

Global regulations and the need for inventory

Regulations such as GDPR, DORA (in the European financial sector), and operational risk standards point in the same direction:

  • inventories of relevant systems and flows

  • tested, demonstrable operational resilience

  • audit trails for important decisions and automations

In practice, this means AI governance cannot be just a set of principles.

It needs concrete data on:

  • where AI is running

  • which flows use Python+AI

  • how this connects to endpoints, systems, and people

Without Python+AI inventory on endpoints, there is no AI governance

The expression “AI governance” only makes sense if there is real visibility.

Without an inventory of Python+AI on endpoints, the company is governing only part of the problem.

The questions that expose the gap

A few simple questions help test the real level of AI governance:

  • Do you know on which endpoints Python is being executed today?

  • Do you know which scripts use AI models or AI APIs?

  • Do you know which data and systems those scripts access?

  • In an audit, can you show who ran what, when, and where?

If the answer is “no” or “not entirely,” there is a governance gap.

This gap is precisely the combination of:

  • Shadow AI

  • Python on endpoints

  • lack of inventory and trail
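As a sketch of what the missing trail could look like, here is a minimal local execution log answering "who ran what, when, and where", one JSON line per run. The file path and field names are illustrative assumptions, not a real product format:

```python
# Minimal "who/what/when/where" execution trail: one JSON line per run.
# Path and field names are illustrative, not a real audit format.
import getpass
import json
import socket
import sys
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("python_exec_trail.jsonl")  # hypothetical local trail file


def record_execution(script: str, log_path: Path = LOG_PATH) -> dict:
    """Append one audit record for a script execution and return it."""
    entry = {
        "when": datetime.now(timezone.utc).isoformat(),   # when
        "who": getpass.getuser(),                         # who
        "where": socket.gethostname(),                    # where (endpoint)
        "what": script,                                   # what
        "argv": sys.argv[1:],                             # how it was invoked
    }
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Even a trail this simple, collected centrally, would already answer most of the audit questions above.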

Limitations of traditional tools

Traditional security and inventory tools were not designed for this scenario.

In general, they:

  • see installed applications, but not scripts that run sporadically

  • monitor traffic and events in central systems, but not local execution details

  • do not distinguish “generic Python” from Python+AI that touches sensitive data
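As an illustration of that last gap, a rough static check can at least flag scripts that import well-known AI client libraries. The library list here is a heuristic assumption, not an exhaustive catalogue:

```python
# Heuristic sketch: flag Python scripts that import AI client libraries
# by statically inspecting their imports. The list below is an assumption,
# not an exhaustive catalogue of AI SDKs.
import ast

AI_LIBRARIES = {"openai", "anthropic", "langchain", "transformers", "ollama"}


def uses_ai_libraries(source: str) -> set[str]:
    """Return the AI-related top-level modules a script imports."""
    found: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                found.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found & AI_LIBRARIES
```

Static checks like this miss dynamic imports and raw HTTP calls to AI APIs, which is part of why endpoint-level observation is needed at all.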

As a result, a company may have:

  • formally approved AI policies

  • an active AI governance committee

  • well-produced reports for the board

But at the same time, it still fails to see a large portion of the real use of Python+AI on endpoints.

In practice, AI governance remains limited to what goes through official pipelines, ignoring what is already in production at the edge.

How BotCity Sentinel closes this governance gap

From here on, we get into the practical layer.

If the problem is Python+AI already in production on endpoints, it makes sense to have a specific agent for this context.

This is the role of BotCity Sentinel.

Python+AI monitoring agent on endpoints

BotCity Sentinel acts as a Python+AI monitoring agent on endpoints, looking at Python code that is already running in real, post-deploy environments.

It was designed to:

  • identify where Python is being executed

  • log real scripts and executions, not just installations

  • detect when there is AI usage inside those scripts

Instead of relying on anecdotal impressions, you get an objective inventory of Python+AI usage on endpoints.

This layer takes AI governance out of the realm of guesswork and grounds the discussion in concrete production data.

View by machine, user, and system

Sentinel organizes Python+AI usage along three key dimensions:

  • machine (endpoint) – which scripts ran there, how often, and for how long

  • user – who executed what

  • system/data – which systems, APIs, or files were accessed

With this view, it becomes much easier to:

  • pinpoint Shadow AI hotspots that turned into critical Python automations on endpoints

  • prioritize risks based on sensitive data and regulated systems

  • build a solid narrative for the board, auditors, and regulators
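The three dimensions above amount to simple pivots over execution records. The record shape below is illustrative only, not Sentinel's actual data model:

```python
# Illustrative sketch: pivoting execution records by machine, user, or system.
# The record fields are assumptions for illustration, not Sentinel's schema.
from collections import defaultdict


def pivot(records: list[dict], dimension: str) -> dict[str, list[str]]:
    """Group script names under one dimension: 'machine', 'user', or 'system'."""
    view: dict[str, list[str]] = defaultdict(list)
    for record in records:
        view[record[dimension]].append(record["script"])
    return dict(view)


# Hypothetical execution records for two scripts on one workstation.
records = [
    {"machine": "wks-042", "user": "alice", "system": "erp-api", "script": "daily_report.py"},
    {"machine": "wks-042", "user": "bob", "system": "crm-db", "script": "lead_sync.py"},
]
```

Pivoting by system is what lets you rank hotspots by the sensitivity of the data they touch, rather than by raw execution counts.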

Compliance Trial: fast and usable diagnosis

The BotCity Sentinel Compliance Trial was designed to answer the question:

“What is the real picture of Python+AI usage on our endpoints today?”

By activating Sentinel on a set of endpoints for about 30 days, you get:

  • a consolidated inventory of observed Python+AI usage on the evaluated endpoints

  • a view of the percentage of compliance with internal policies

  • a risk map focused on sensitive data and critical flows

  • an executive report that can be used in risk committees, audits, and board meetings

This turns AI governance from something abstract into something:

  • measurable

  • actionable

  • defensible in front of auditors and regulators

BotCity reaches this point with more than 7 years governing Python in operations at highly regulated companies, supporting the orchestration of automations and critical flows with control, observability, and auditable trails.

👉 Get to know BotCity Sentinel and request Early Access
https://www.botcity.dev/en/sentinel-4
