
The Next Phase of AIOps: From Insights to Autonomous Action
April 14, 2026
Thoughtworks published a significant assessment of AIOps in early 2026, reflecting on lessons from over 16 enterprise clients and 20 proof-of-concept deployments across 2025. Their conclusion: AIOps has crossed the credibility threshold. More than half of those proofs of concept reached production. The era of AIOps as an experimental technology is over. The era of AIOps as operational infrastructure is beginning.
The next phase of this evolution — agentic AI in IT operations — represents a qualitative shift. Agentic AI systems are not just tools that surface insights for human decision-making. They are systems that can plan multi-step workflows, take actions, invoke tools, and operate over extended periods with minimal human intervention at each step. Applied to IT operations, this means AI systems that can not only detect that a campus network segment is trending toward saturation but automatically trigger capacity adjustments, notify relevant stakeholders, update incident management systems, and document the resolution — without a human being involved at each step.
67% of IT teams now use automation for monitoring. Not a single respondent reported having no modern automation in their environment (Futurum 2025 survey).
What Agentic AI Means for University IT Teams
For university IT administrators managing complex, distributed environments with lean teams, agentic AI represents one of the most significant capability multipliers available. The structural challenge of university IT — too many systems, too few people — is exactly the problem that autonomous agents are designed to address.
Consider the practical scenarios: An agent monitors campus network performance continuously, detects anomalies in real time, escalates confirmed issues, runs initial diagnostic workflows, and creates detailed incident records — before any human touches the keyboard. A second agent monitors the performance of online learning platforms, detects degradation in student experience metrics, correlates the issue to its root cause in the infrastructure, and initiates a remediation workflow — all within minutes of the first performance deviation.
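The first scenario, an agent confined to detection, triage, and documentation, can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the segment names, thresholds, and the `Incident` record are all hypothetical, and a real agent would call a ticketing API instead of returning a Python object.

```python
import statistics
from dataclasses import dataclass, field

@dataclass
class Incident:
    segment: str
    metric: str
    value: float
    diagnostics: list = field(default_factory=list)

def detect_anomaly(history, value, z_threshold=3.0):
    """Flag a reading more than z_threshold standard
    deviations away from the recent baseline."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return False
    return abs(value - mean) / stdev > z_threshold

def triage(segment, metric, history, value):
    """Notification-and-documentation only: the agent records
    and escalates but never modifies infrastructure."""
    if not detect_anomaly(history, value):
        return None
    incident = Incident(segment, metric, value)
    incident.diagnostics.append(f"baseline={statistics.fmean(history):.1f}")
    incident.diagnostics.append(f"observed={value:.1f}")
    # A production agent would create a ticket here; the sketch
    # just returns the structured incident record.
    return incident

history = [42.0, 40.5, 41.2, 43.1, 39.8]   # recent utilization samples (%)
incident = triage("science-bldg-uplink", "utilization_pct", history, 97.0)
```

The key design point is the hard boundary: everything the agent does before a human arrives is read-and-record, which keeps the blast radius of a wrong decision at zero.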
These are not hypothetical futures. In late 2025, ServiceNow publicly demonstrated its AI Agents autonomously triaging alerts, assessing impact, investigating root causes, and driving remediation. LogicMonitor’s Edwin AI agent provides natural language summaries of infrastructure anomalies and automated incident detection across large hybrid environments. The technology is in production.
The Governance Gap That Could Derail Agentic AI Adoption
Thoughtworks’ 2025 assessment identified a consistent pattern in agentic AIOps failures: AI governance is missing. Enterprises that attempted to deploy agentic AI without establishing operating models to govern AI systems in production consistently ran into problems — agents taking unintended actions, making decisions without appropriate context, or operating in ways that violated implicit institutional policies that were never formally codified.
For universities, this governance challenge is particularly acute. Academic institutions have complex, federated governance structures. IT decisions that affect research operations, student data, faculty systems, or compliance with grant requirements may need different approval pathways than routine infrastructure management. An autonomous agent that cannot navigate these governance complexities will either take actions it should not, or be so heavily constrained that it provides no meaningful autonomy.
94% of higher education workers use AI tools daily. Only 54% are aware of their institution’s specific AI policies. Only 31% of institutions have clear, enforceable guidelines that staff actually understand. The governance gap for AI in higher education is significant — and it applies to IT operations as much as to end-user AI tools.
Three Foundational Capabilities That Enable Safe Agentic AI
Comprehensive observability as the foundation
Agentic AI systems can only operate safely and effectively when they have accurate, real-time information about the state of the infrastructure they are managing. Fragmented, siloed monitoring data produces agents that make decisions based on incomplete pictures — with predictably poor outcomes. The prerequisite for agentic AI in university IT is unified observability: a single, coherent view of network state, application performance, endpoint behaviour, and security events that agents can reliably query and act on.
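To make the "single, coherent view" concrete, here is a toy sketch of folding per-silo events into one per-asset record an agent can query. The feed contents and field names are invented for illustration; in practice the sources would be the network monitoring, APM, and endpoint systems.

```python
from collections import defaultdict

# Hypothetical siloed feeds for one web server behind the campus LMS.
network   = [{"asset": "lms-web-01", "latency_ms": 240}]
apps      = [{"asset": "lms-web-01", "error_rate": 0.12}]
endpoints = [{"asset": "lms-web-01", "cpu_pct": 97}]

def unify(*feeds):
    """Merge per-silo events into one per-asset view."""
    view = defaultdict(dict)
    for feed in feeds:
        for event in feed:
            view[event["asset"]].update(
                {k: v for k, v in event.items() if k != "asset"}
            )
    return dict(view)

state = unify(network, apps, endpoints)
# One record now shows high CPU, high latency, and elevated error rate
# together, so an agent can correlate a single root cause instead of
# reacting to three unrelated alerts.
```

With siloed data, each feed would have produced an independent alert; the unified record is what lets an agent reason that all three symptoms point at the same host.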
Context engineering for university-specific knowledge
Thoughtworks identified context engineering — providing AI systems with enterprise-specific memory and institutional knowledge — as a critical missing component in most agentic AIOps deployments. For universities, this means capturing and encoding institutional knowledge: the normal traffic patterns of exam periods, the expected network behaviour of specific research workloads, the governance requirements for different categories of infrastructure actions, the escalation paths for incidents affecting different stakeholder groups.
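One lightweight way to start is to make that institutional knowledge machine-readable. The sketch below is purely illustrative, with invented category names and thresholds, but it shows the shape: calendar overrides, known workload profiles, governance constraints, and escalation paths, all queryable by an agent before it flags or acts.

```python
# Illustrative institutional context; every name and number is an assumption.
INSTITUTIONAL_CONTEXT = {
    "calendar_overrides": {
        # During finals, triple the baseline LMS traffic is expected.
        "exam_period": {"lms_traffic_multiplier": 3.0},
    },
    "workload_profiles": {
        # A known research transfer that might otherwise look like exfiltration.
        "genomics-cluster": {"expected_egress_gbps": 8.0},
    },
    "governance": {
        # Actions touching these systems always require human approval.
        "requires_approval": ["student-records", "research-storage"],
    },
    "escalation": {
        "student-facing": ["service-desk", "comms-office"],
        "research": ["research-computing", "pi-of-record"],
    },
}

def is_expected(metric, value, baseline, period=None):
    """Apply calendar-aware context before flagging an anomaly."""
    multiplier = 1.0
    if period:
        override = INSTITUTIONAL_CONTEXT["calendar_overrides"].get(period, {})
        multiplier = override.get(f"{metric}_multiplier", 1.0)
    return value <= baseline * multiplier
```

An agent that consults this context knows that 2.8x baseline LMS traffic during exam week is normal, while the same reading in mid-semester deserves an alert.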
Observable, auditable agent behaviour
Agents operating in production IT environments must be fully observable. Every action an agent takes should be logged, every decision auditable, every workflow replayable for post-incident review. This observability requirement is not just a governance imperative — it is the mechanism through which university IT teams build the trust in AI agent behaviour that allows them to progressively extend agent autonomy over time.
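A minimal version of that audit trail can be sketched as an append-only, hash-chained log: each entry records what the agent saw and what it decided, and chaining makes tampering or deletion detectable on replay. This is a generic pattern, not any product's mechanism.

```python
import hashlib
import json
import time

class AgentAuditLog:
    """Append-only record of agent decisions; each entry is
    hash-chained to the previous one so the workflow can be
    replayed and verified during post-incident review."""

    def __init__(self):
        self.entries = []

    def record(self, agent, action, context, decision):
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "ts": time.time(),
            "agent": agent,
            "action": action,
            "context": context,    # what the agent observed
            "decision": decision,  # what it chose, and why
            "prev": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body["hash"]

    def verify(self):
        """Confirm no entry has been altered or removed."""
        prev = "genesis"
        for entry in self.entries:
            clean = {k: v for k, v in entry.items() if k != "hash"}
            if clean["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(clean, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

Because every extension of agent autonomy is justified by a history of verifiably correct decisions, the audit log is also the trust-building mechanism the paragraph above describes.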
The Practical Path for University IT Teams
The most effective approach to agentic AI adoption in university IT is incremental. Begin with the lowest-risk, highest-value automation: anomaly detection and alert correlation, where agent actions are confined to notification and documentation rather than infrastructure modification. Build the observability foundation and the institutional knowledge base that agents will need to operate effectively. Establish the governance framework and audit infrastructure. Only then extend agent autonomy to active remediation workflows, with human approval gates for actions above defined risk thresholds.
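The approval-gate idea in that progression can be sketched as a simple policy check: actions at or below an autonomy ceiling run unattended, while anything riskier is held for a human. The risk tiers and the ceiling value here are assumptions an institution would set for itself.

```python
from enum import IntEnum

class Risk(IntEnum):
    NOTIFY = 1     # send alerts, open tickets, write documentation
    DIAGNOSE = 2   # read-only queries against infrastructure
    REMEDIATE = 3  # modify configuration or restart services

# Autonomy ceiling: agents act alone only at or below this level.
# Raised over time as audit history builds trust.
AUTONOMY_CEILING = Risk.DIAGNOSE

def execute(action, risk, approved=False):
    """Run low-risk actions autonomously; hold higher-risk
    actions for explicit human approval."""
    if risk <= AUTONOMY_CEILING or approved:
        return f"executed: {action}"
    return f"pending approval: {action}"
```

The ceiling is a single, auditable constant, which is the point: extending agent autonomy becomes a deliberate governance decision rather than a side effect of deployment.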
The Ennetix AIOps platform provides the observability foundation that this progression requires. xVisor’s unified view of network, application, and endpoint data is the information substrate on which safe, effective AI agent operation depends. Building agentic AI on top of a fragmented monitoring infrastructure is building on sand — the agent will only be as reliable as the data it operates on.