-
eDiscovery on your terms: The OpenText advantage
In the world of eDiscovery, one size has never fit all. Too often, vendors force customers into deployment models that may not align with their security requirements, operational preferences, or data governance obligations. At OpenText, we believe in a fundamentally different approach: customer-led flexibility that adapts to the ways you need to work.
Why flexible eDiscovery deployment matters for legal and compliance teams
Our philosophy is simple but powerful—we don't believe in forced migration.
Instead, we offer true deployment flexibility across on-premises, public cloud, private cloud, and hybrid models. This isn't just about giving you options; it's about recognizing that your organization's security requirements, regulatory obligations, and operational expertise are unique.
Your eDiscovery solution should work within your framework, not force you to adapt to ours.
Four pillars of OpenText eDiscovery advantage
1. Security first
Whether you're managing sensitive intellectual property, handling regulated data, or navigating complex jurisdictional requirements, security cannot be compromised. OpenText gives you the power to keep your most sensitive data exactly where it needs to be—on your infrastructure, in region-specific data centers, or in carefully controlled cloud environments. You decide what stays internal and what can leverage cloud scalability.
2. Control on your terms
From administrative access to data management workflows, control means different things to different organizations. Some teams want complete hands-on control over every aspect of the eDiscovery process. Others prefer to focus on case strategy while leaving infrastructure management to experts. OpenText delivers both options—and everything in between.
3. Scalability without compromise
Cloud scalability shouldn't require sacrificing security or control. Our deployment models allow you to scale computing resources on demand while maintaining your chosen level of data governance. Process terabytes of data for major litigation, then scale back down for routine matters—all within your security framework.
4. Flexibility across the EDRM
True flexibility means having choices at every stage of the Electronic Discovery Reference Model. OpenText provides end-to-end eDiscovery functionality with the freedom to deploy different components where they make the most sense for your organization.
eDiscovery deployment models that match your reality
Full-service public cloud (Multi-tenant)
For organizations that want comprehensive eDiscovery functionality without the burden of infrastructure management, OpenText Core eDiscovery in the public cloud delivers the complete package. Leverage our managed services team for end-to-end support—from data loading and processing to production—without worrying about database maintenance, infrastructure scaling, or technical expertise gaps. Focus on your cases while we handle the technology. If your team prefers to manage users or load its own data, OpenText Core eDiscovery offers that flexibility as well, with optional self-service user management and processing.
Hosted private cloud with administrative control (Single tenant)
Have an internal team of eDiscovery technology experts? Our AWS-hosted private cloud solution in region-specific data centers gives your team complete administrative control and access management capabilities while still delivering the scalability benefits of cloud infrastructure. You maintain the control you need while OpenText provides a robust, secure hosting environment.
Full on-premises / Bring-your-own-cloud deployment
For organizations with strict data residency requirements or those who prefer complete infrastructure control, OpenText offers comprehensive on-premises or bring-your-own-cloud deployment options. OpenText Investigation, deployed on-premises or in your own cloud, offers full access to all advanced analytics, culling, and tagging functionality.
The optional Review and Analysis module allows you to manage the entire eDiscovery process—from collection through to TAR, redaction, and production—all within your own environment. Maximum control. Maximum security. Zero compromise.
Hybrid eDiscovery: The best of both worlds
Perhaps the most powerful option is our hybrid deployment model, designed for organizations that need to balance security requirements with the scalability and accessibility of externally managed cloud environments.
The hybrid model allows you to perform early case assessment (ECA) and early data assessment (EDA) on-premises or in your own cloud environment, maintaining complete control over sensitive data during the initial review.
If and when you need additional cloud scalability and accessibility for external counsel or document review teams, you can seamlessly move the review set to OpenText Core eDiscovery, where internal and external teams can access a comprehensive suite of tools, including:
* Technology-Assisted Review (TAR) for efficient document review
* Redaction workflows to protect privileged information
* Production capabilities for seamless delivery
* OpenText eDiscovery Aviator GenAI summarization and review
* Machine text translation for multilingual matters
* Audio/video transcription for multimedia evidence
This approach lets you keep your most sensitive data close while providing additional scalability and advanced eDiscovery Aviator GenAI technologies when the case demands it.
Why a customer-centric eDiscovery strategy matters
The eDiscovery landscape is evolving rapidly, but your organization's journey is unique. You might start with an on-premises deployment and gradually adopt cloud capabilities as your security posture evolves. You might need different deployment models for different types of matters—public cloud for routine cases, on-premises for highly sensitive investigations.
OpenText eDiscovery adapts to these realities. We provide the full range of options for end-to-end eDiscovery because we understand that flexibility isn't just a feature, it's a fundamental requirement for modern legal and compliance teams.
Your path forward: Choose the eDiscovery deployment model that works for you
Whether you're managing internal investigations, responding to regulatory requests, or handling complex litigation, OpenText eDiscovery gives you security, control, scalability, and flexibility to succeed. We don't dictate your deployment strategy—we enable it.
Ready to explore how OpenText eDiscovery can adapt to your organization's unique requirements? Let's discuss which deployment model—or combination of models—makes sense for your team.
Because in eDiscovery, the best solution isn't the one that works for everyone. It's the one that works for you.
Learn more about your eDiscovery deployment options.
The post eDiscovery on your terms: The OpenText advantage appeared first on OpenText Blogs.
-
The plugin problem: Why your Atlassian Cloud migration is more complicated than it looks
Atlassian's recent Ascend program (Atlassian Cloud migration) has set a firm deadline: Data Center and Server support is ending. For thousands of organizations, cloud migration is no longer optional. The promise is compelling: no infrastructure to maintain, automatic updates, predictable costs. But for enterprises running heavily customized Jira environments, the reality is more complicated. The issue isn’t Atlassian’s cloud platform itself: it’s a plugin problem.
It’s the dozens (sometimes hundreds) of plugins your organization depends on. Each one represents a separate migration challenge, a different vendor relationship, and a potential gap in your cloud environment. What looked like a straightforward platform migration becomes a complex orchestration of third-party dependencies, many of which don’t have clear paths forward.
When extensibility becomes complexity
Over the years, Jira’s success has been driven by its powerful marketplace ecosystem. Thousands of plugins extend Jira’s capabilities, from test management and automation to DevOps integration and reporting. However, this flexibility creates deep dependencies. A typical enterprise Jira setup might rely on 20, 50, or even 100 different plugins — each owned by a different vendor.
When moving to the cloud, these dependencies become a serious plugin problem:
* Not all plugins are available on Atlassian Cloud—many have no equivalent cloud version.
* Feature gaps and incompatibilities exist between Data Center and Cloud editions.
* Data migration paths are fragmented, as Atlassian only migrates core Jira data, making plugin data each vendor’s responsibility.
* Compliance and data residency risks arise since each plugin vendor hosts data separately, sometimes outside approved regions.
* Support and lifecycle management become decentralized, with multiple vendors to coordinate during and after migration.
What was once a unified Jira environment on-premises now becomes a distributed network of separate services, each with its own terms, data policies, and risk profile.
The business impact
For regulated industries or organizations with strict data control policies, this fragmented model introduces unacceptable uncertainty:
* Audit and compliance complexity: Multiple vendors with differing certifications and SLAs.
* Operational risk: Data loss or workflow disruption during plugin migrations.
* Vendor lock-in: Once in the cloud, organizations lose control over plugin versions and update timing.
* Hidden cost and governance overhead: Managing dozens of separate contracts and renewals.
In short, the migration effort often outweighs the anticipated benefits.
Rethinking the approach
If your migration assessment reveals an 18-month timeline, coordination with dozens of vendors, and compliance questions no one can answer, you're seeing the plugin problem clearly. The question becomes: is cloud migration solving your infrastructure challenge, or simply relocating your complexity?
For some organizations, the answer is to reconsider the foundation itself.
A different foundation: Unified software delivery
OpenText offers an alternative built on a different premise: that software delivery tools should work as a coherent system, not an assembly of independent parts.
The OpenText Software Delivery Platform integrates planning, requirements, testing, DevOps, and quality management under a single architecture. One vendor. One contract. One data model.
The practical advantages:
* Unified governance: A single vendor relationship for security reviews, compliance audits, and SLA management.
* Native integration: Capabilities designed to work together, not through APIs and marketplace plugins.
* Flexible deployment: Available both on-premises and in the cloud, preserving data residency and control.
* Coordinated lifecycle: Updates and support managed across the entire platform, not negotiated with separate vendors.
For organizations where data control, regulatory compliance, and operational predictability matter, this model removes the variables that make plugin-heavy cloud migrations so complex.
Simplify the chaos through unity
Atlassian Cloud represents one path forward. But it's not the only path, and for organizations built on extensive plugin ecosystems, it may not be the most practical one.
True modernization isn't about infrastructure location. It's about reducing dependencies, simplifying governance, and maintaining control over your tools and data. That's what a unified platform delivers: fewer integration points, fewer vendors, fewer risks. More control over what matters.
Explore OpenText DevOps Cloud now and simplify the plugin chaos.
-
Seeing the unseen: How OpenText is leading the way in detecting AI risk
TL;DR: As AI becomes integral to software development, it’s also creating new and complex security risks. OpenText is leading the charge in AI risk detection, embedding AI-aware analysis into its AppSec platform to identify vulnerabilities in how AI models, APIs, and generative systems are used in code. Unlike traditional tools, OpenText doesn’t just scan for known flaws. We understand how AI behaves, helping enterprises build trust, ensure compliance, and innovate responsibly. The takeaway: in the AI era, secure innovation depends on detecting AI risk before it becomes a business risk.
Navigating the new frontier of application security
Artificial intelligence (AI) is no longer a futuristic concept; it’s embedded in nearly every modern business process and software product. From automating code generation to enabling adaptive digital experiences, AI is redefining how organizations innovate and compete.
But with every leap forward comes a new class of risk. The same models that help organizations accelerate development can inadvertently introduce vulnerabilities, expose sensitive data, or enable insecure behaviors when integrated without proper governance. AI systems have become more deeply woven into the software supply chain. The challenge is no longer just “how fast can we adopt AI?” but rather “how securely can we deploy it?”
This is where OpenText™ Application Security is leading the industry, by detecting AI risk at the source and turning responsible AI innovation into a business advantage.
The next security challenge: AI-driven software
AI-enabled applications introduce a new layer of trust assumptions. Large Language Models (LLMs) and agentic frameworks are powerful, but they can produce unpredictable or unsafe outputs if not properly validated. The risk compounds when AI systems make autonomous decisions, generate code, or interface directly with sensitive APIs.
Recent research by OpenText Software Security Research (SSR) highlights that many of today’s vulnerabilities stem not from malicious intent, but from implicit trust: developers assuming AI responses or generated code are safe by default. OpenText’s security research team addresses this head-on, embedding new detection capabilities that identify weaknesses in how AI models are integrated, used, and validated within applications.
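The implicit-trust anti-pattern is easy to illustrate. The sketch below (hypothetical function names, not OpenText code) contrasts passing a model's suggestion straight into a shell with treating it as untrusted input that must pass an explicit allowlist check first:

```python
import shlex
import subprocess

ALLOWED_COMMANDS = {"ls", "cat", "grep"}  # explicit allowlist, not implicit trust

def run_ai_suggestion_unsafely(ai_output: str) -> None:
    # Anti-pattern: the model's response is executed as-is.
    # Command injection if the output is hostile or hallucinated.
    subprocess.run(ai_output, shell=True)

def run_ai_suggestion_safely(ai_output: str) -> str:
    # Treat the response like any other untrusted input: parse, validate, reject.
    parts = shlex.split(ai_output)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"AI-suggested command rejected: {ai_output!r}")
    return parts[0]

# A hostile or hallucinated suggestion is rejected instead of executed:
try:
    run_ai_suggestion_safely("rm -rf /")
except ValueError as e:
    print(e)
```

The point is not the specific allowlist but the posture: AI output crosses a trust boundary, and static analysis can flag the places where it does not.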
How OpenText AppSec detects and mitigates AI risk
OpenText AppSec differentiates itself through deeply integrated, AI-aware security testing. Traditional tools focus solely on known code vulnerabilities; OpenText’s SAST and DAST engines now analyze the context of AI usage: how models, prompts, and APIs interact within the broader application ecosystem.
Some key innovations include:
* AI model trust analysis: Detects vulnerabilities arising from unvalidated or overly trusted responses from AI/ML APIs, ensuring safe integration with LLM frameworks such as Python AutoGen and Google Vertex AI.
* Generative framework awareness: OpenText’s AppSec continuously updates rulepacks to identify emerging risks from agent-based systems, cooperative AI workflows, and AI-generated code injection.
* AI-augmented auditing with SAST Aviator: Using Anthropic’s Claude LLM, OpenText’s Aviator technology enhances code audit accuracy, drastically reducing false positives and providing human-readable explanations for detected issues.
* Continuous research-driven content: The SSR team monitors and models new AI development ecosystems, translating their findings into real-time updates across OpenText AppSec products, empowering customers to stay ahead of evolving AI threats.
By embedding this intelligence directly into the application security lifecycle, OpenText helps organizations detect and mitigate risks before they impact production systems.
Bridging innovation and responsibility
AI risk is more than a security issue; it’s a business risk. The potential impact spans compliance violations, reputational damage, intellectual property exposure, and loss of customer trust.
OpenText’s approach combines technical rigor with governance insight, helping organizations align their AI development with emerging standards for responsible AI. Business leaders gain clarity on key questions:
* Where is AI being used across our software ecosystem?
* Is the data used by these systems protected and compliant?
* Can we explain, validate, and control what our AI systems produce?
OpenText transforms these unknowns into actionable intelligence, allowing security and business teams to make confident, risk-informed decisions about AI adoption.
The power of research-led innovation
What sets OpenText apart isn’t just its product portfolio; it’s the depth of its security research. With over 1,700 vulnerability categories tracked across 33+ languages and more than one million APIs, the AppSec platform is backed by a global intelligence network that continuously evolves with the threat landscape.
This same expertise now powers OpenText’s AI risk detection capabilities. As generative AI frameworks evolve, the SSR team rapidly translates new findings into updated detection logic, ensuring customers are protected from risks that didn’t even exist six months ago.
Building digital trust in the age of AI
AI is transforming every industry, but innovation without security is unsustainable. Detecting and managing AI risk isn’t about slowing down; it’s about creating the confidence to innovate responsibly.
With OpenText AppSec, organizations can harness the power of AI while maintaining control, transparency, and trust. By proactively detecting AI-related vulnerabilities, OpenText helps leaders transform security into a strategic advantage—empowering them to build faster, smarter, and safer.
In short:
AI may be rewriting the rules of software development, but OpenText AppSec is redefining how we secure it.
-
Welcome to the Cognitive Computing Era
We’re at a major turning point in technology. The Cognitive Computing Era, powered by the rise of enterprise artificial intelligence and agentic AI, is transforming how organizations operate, make decisions, and compete.
What is enterprise artificial intelligence?
Enterprise artificial intelligence is the strategic application and integration of various AI technologies and capabilities within an organization to solve specific problems, automate processes, and drive decision-making. Meanwhile, agentic AI focuses on autonomous decision-making and action. While traditional AI primarily responds to commands or analyzes data, agentic AI can set goals, plan, and execute tasks with minimal human intervention.
But innovation comes with responsibility. Leaders must strike a balance between trust and innovation. AI unlocks new solutions and efficiency while trusted, sovereign data ensures confidence, reliability, and compliance. Without strong data governance, rapid AI advancement can compromise privacy and security, but without innovation, progress stalls.
Only trusted data can power truly effective AI. For organizations that treat data as their operating system, not just a byproduct of business, this will be the new differentiator.
Enterprise Artificial Intelligence: Building Trusted AI with Secure Data
Written by OpenText leaders, this book provides a roadmap for the new reality. It explores why trusted data and responsible AI are two sides of the same coin, and how organizations can turn this relationship into a competitive advantage. You’ll learn:
* The evolution of enterprise data and how governance underpins AI maturity.
* How industry frameworks guide responsible AI deployment, addressing fairness, accountability, and regulatory compliance.
* Why emerging cybersecurity challenges call for a zero-trust architecture and proactive defenses.
* The “Sovereign AI” blueprint for organizations to safely unlock private data for a true competitive advantage.
The book addresses both the challenge and opportunity presented by AI. It recognizes that the next decade will not only be defined by technical capability but by who governs and uses data most effectively.
Information management is the key
Information management is the gatekeeper for trusted data; data quality defines the credibility of every AI decision. The two disciplines are deeply interconnected, as effective AI relies on governed, high-integrity data, while information management gains new speed and intelligence through AI-driven automation.
Packed with thoughtful analysis, engaging case studies, and actionable steps, Enterprise Artificial Intelligence: Building Trusted AI with Secure Data offers the architecture and governance frameworks you need to move from isolated AI experiments to enterprise-grade deployments.
This book is your guide to building systems that are fair, explainable, sovereign, and secure. AI will transform every industry—but only if it’s built on a foundation of trusted, well-governed data. Organizations that master this balance will define the next era of digital performance.
Download your copy today.
-
IDC names OpenText a FOUR-time Leader in the Multi-Enterprise Supply Chain Commerce Network MarketScape
IDC recently published its 2025 MarketScape covering the Multi-Enterprise Supply Chain Commerce Network (MESCCN) segment. For the fourth time in a row, OpenText Business Network has been named a Leader in this MarketScape, securing a leadership position in each release since it was first introduced in 2018.
What is a Multi-Enterprise Supply Chain Commerce Network?
IDC defines a Multi-Enterprise Supply Chain Commerce Network (MESCCN) as “any platform that facilitates both the exchange of information and enables transactions among disparate parties pertaining to the supply chain or to supply chain processes”. The use of networks to facilitate commerce and collaboration can mean the difference between meeting supply chain performance goals and falling behind the competition. The MarketScape contains 18 vendors from a variety of backgrounds including the procurement/sourcing space, EDI/B2B and other cloud-native providers.
IDC’s view remains that multi-enterprise supply chain commerce networks are the future of visibility and collaboration for the modern supply chain. In fact, IDC states that an MESCCN becomes a “must-have” rather than just a “nice-to-have”.
To qualify for the MarketScape, MESCCN vendors must meet the following criteria:
* Have a global presence, with engagements in at least two major geographic regions
* Have industry breadth with engagements in at least two industries
* Have offered MESCCN capabilities for at least three years
* Have at least 20 referenceable client engagements across their supported industries
How can OpenText Business Network help enterprises with MESCCN?
The core mission of OpenText Business Network is to help companies connect their data, systems and partners, collaborate across internal and external stakeholders, and help optimize key business processes such as procure-to-pay, order-to-cash and treasury management. A key enabler for this is the OpenText Trading Grid™ platform, which allows companies to achieve any-to-any integration across their extended supply chain ecosystem.
The top three strengths of OpenText Business Network, according to IDC, are:
1. Combines global scale, deep domain expertise and a flexible, customer-centric delivery model to address the most demanding supply chain collaboration challenges
2. Offers robust AI-enabled integration capabilities across a wide spectrum of technical requirements, supporting diverse connectivity protocols and data formats including regional and industry-based standards
3. Provides flexible, modular solution and platform architecture that combines a high degree of configurability with reusable, modular components, allowing for tailored deployments that address unique customer needs
A thorough evaluation of MESCCN capabilities led IDC to state that companies looking for cloud-based tools with deep, scalable expertise and any-to-any integration capabilities, especially in the manufacturing and retail sectors, should consider OpenText.
Learn more about IDC’s assessment of OpenText Business Network and get your own copy of the IDC Multi-Enterprise Supply Chain Commerce Network MarketScape.
-
OpenText named a Leader in the 2025 IDC MarketScape for Worldwide Analytical Databases
We’re proud to share that OpenText has been named a Leader in the 2025 IDC MarketScape: Worldwide Analytical Databases Vendor Assessment.
We believe this recognition underscores OpenText’s commitment to helping data-driven enterprises build high-performance analytics environments that deliver insights—faster, smarter, and more securely.
Why this matters now
In 2025, enterprise analytics teams face increasing pressure to scale performance, support AI workloads, and control costs—all while maintaining trust and compliance. We believe being named a Leader in the IDC MarketScape reflects how OpenText™ Analytics Database (Vertica) enables organizations to meet those challenges with confidence.
Our platform is purpose-built to support critical use cases, including:
* Cloud repatriation for better performance and cost control
* AI and ML at scale with efficient compute and advanced compression
* Hybrid flexibility to manage data across cloud and on-prem environments
* Enterprise-grade governance to ensure trust, privacy, and compliance
What we believe differentiates OpenText
With OpenText Analytics Database, organizations gain a platform engineered for complex, large-scale analytics. Built for resilience, scalability, and security, it delivers:
* Predictable performance at petabyte scale with columnar, MPP architecture.
* High concurrency and blazing query speed for real-time, mission-critical insights.
* Best-in-class security and compliance for sensitive data.
* Flexible deployment options across on-premises, private, and public cloud, including containerized and Kubernetes options.
Whether your goal is AI enablement, performance optimization, or digital resilience, OpenText helps you get there—at an enterprise scale.
Scalable analytics for enterprise AI and data governance
Meet rising data demands with a platform built to support large-scale analytics, real-time performance, and enterprise-grade security. OpenText™ Analytics Database gives you control across on-prem, cloud, and hybrid environments.
If scaling performance, AI readiness, and governance are part of your roadmap, let’s talk.
-
3 manufacturing trends for 2026 that nobody’s talking about
Here's a familiar scene.
Another year, another wave of manufacturing trend predictions. AI agents that will revolutionize operations. Digital twins that mirror reality. Supply chain platforms promising real-time visibility. Sustainability dashboards tracking every carbon molecule.
All of it matters. All of it is real. But there's something else happening beneath the surface that deserves attention.
Manufacturing's biggest opportunity in 2026 isn't just the technology everyone's buying. It's the information foundations most are overlooking.
Let me explain why this matters.
Trend #1: Information-native AI (your AI needs better content, not better algorithms)
The AI momentum is undeniable. ServiceNow is targeting $1 billion in AI revenue. SAP just announced 130+ AI capabilities. IDC predicts 80% of enterprises will have generative AI in production by 2026.
The technology is ready. The question is: is your content?
Your AI is only as intelligent as the information it can find and understand. And in most manufacturing environments, that information is spread across quality reports, CAD files, supplier certificates, production logs, maintenance records, work instructions, quality procedures, historical customer data, and systems that weren't designed to work together.
Consider what you're asking AI to do. You want it to understand context, to know that lot number "12345" on a Certificate of Analysis is different from purchase order "12345" or work order "12345." In regulated industries, you need it to provide complete audit trails showing which documents informed which recommendations.
Here's the reality that often gets overlooked: 80% of AI implementation effort goes to data preparation, not model deployment. Not training algorithms. Not fine-tuning models. Finding the information and making it usable.
While many organizations focus on deploying smarter algorithms, the ones pulling ahead are building smarter information foundations. They're creating what I call "information-native AI"—systems that don't just process content but understand manufacturing relationships. They connect quality certificates to production runs to supplier batches to customer orders. This is what creating a true digital thread across the product lifecycle actually means.
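One way to picture what "information-native" means: an identifier only becomes meaningful when it is paired with its document type and linked into a thread of related records. A minimal sketch (hypothetical record types, not an OpenText API):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DocRef:
    doc_type: str   # e.g. "certificate_of_analysis", "purchase_order", "work_order"
    number: str     # the bare number alone is ambiguous

@dataclass
class ProductionRun:
    run_id: str
    links: list = field(default_factory=list)  # the "digital thread"

# Three documents that all carry the number "12345" -- only the typed
# reference distinguishes them.
coa = DocRef("certificate_of_analysis", "12345")
po = DocRef("purchase_order", "12345")
wo = DocRef("work_order", "12345")

run = ProductionRun("RUN-0042")
run.links.extend([coa, po, wo])

# The thread can now answer "which CoA backed this run?" unambiguously.
coas = [d for d in run.links if d.doc_type == "certificate_of_analysis"]
assert coas == [coa]
```

The modeling choice, not the algorithm, is the point: once identifiers are typed and linked, questions that span systems become simple lookups.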
The opportunity? AI platforms are becoming increasingly similar. Information architecture is where differentiation lives.
Trend #2: Autonomous documentation (supply chain speed requires documentation speed)
Reshoring is reshaping manufacturing in real time.
74% of manufacturers are reshoring or nearshoring operations. Microsoft is moving 80% of its server components outside China by 2026. Companies are diversifying suppliers and building more resilient networks.
But here's what often gets missed: supply chains can only move as fast as supply chain documentation.
Consider what reshoring means operationally. New suppliers need onboarding: typically, 50-100 documents per supplier. Manual processes? That's often 6-12 months per supplier. A company reshoring to bring in 15 new suppliers is looking at 7-15 years of sequential onboarding delays before achieving full network diversity. Cross-border operations require 15-20 documents per border crossing. One documentation error can cost $10,000+ in delayed shipments.
And reshoring to Mexico or Vietnam sometimes increases documentation complexity compared to established China operations, not decreases it.
Then there's a certificate challenge. Certificates of Analysis. Certificates of Conformance. Sustainability certifications. Conflict minerals declarations. Different customers require different formats for the same information. You cannot accept shipments without them. You can't release products without validating them.
As suppliers often put it, "Moving assembly lines is straightforward. Moving component supply chains within a short timeframe is the real challenge."
Meanwhile, SAP is launching Supply Chain Orchestration in H1 2026. Oracle's pushing MultiCloud visibility. These sophisticated platforms promise real-time analytics and AI-powered optimization.
But here's the gap: over half, if not more, of supply chain information lives in unstructured documents that many of these systems struggle to process.
The manufacturers achieving true supply chain agility in 2026 are treating documentation as infrastructure, not overhead. They're not viewing this as a compliance function but as a supply chain infrastructure layer. They're automating supplier onboarding. They're building intelligent certificate management. They're making documentation move as fast as their products.
Trend #3: Proof-of-sustainability (from claims to evidence)
Sustainability has moved from nice-to-have to essential. Companies track carbon footprints, report emissions, and embrace circular economy principles.
But 2026 brings an important evolution.
Sustainability is shifting from calculation to documentation. From "we're sustainable" to "here's the proof."
The EU Digital Product Passport regulation takes effect in 2026 for batteries and expands to other product categories. It requires documented proof of sustainability claims with a complete chain of custody. Not estimates. Not projections. Verifiable evidence.
In B2B contexts, buyers increasingly require carbon documentation before awarding contracts. If you can't provide verifiable sustainability data as quickly as your competitors, you're at a disadvantage before discussions even begin about quality or price.
Then there's Scope 3 emissions tracking across thousands of suppliers, each with different reporting capabilities. Some have sophisticated systems. Some use spreadsheets. Some are just starting their sustainability journey.
The challenge? You can't aggregate what you can't verify. And you can't verify what isn't documented.
The circular economy adds another layer. Remanufacturing requires component-level traceability: material composition, repair history, remaining useful life. Many manufacturers struggle to provide this documentation for recent products, let alone products returning after years in service.
The companies succeeding in 2026 aren't just the ones with the best sustainability analytics. They're building what I call "proof infrastructure": systems that capture, verify, and distribute sustainability documentation across their supply chains.
They're making transparency measurable: "This component has a documented 40% lower carbon footprint than alternatives."
The connection point
Notice the pattern?
These three trends share something fundamental. They're information management opportunities that look like technology challenges.
While organizations invest in platforms and tools, leaders are also investing in information foundations. They're asking practical questions: Where does this content live? How do we capture it at the source? How do we connect it across systems? How do we make it findable, trustworthy, and traceable?
Not the most exciting questions at an industry conference, perhaps. But they're the difference between AI that delivers value and AI that looks good in demos. Between supply chains that respond in days and supply chains that take weeks to produce a certificate. Between sustainability claims you can prove and sustainability claims that crumble under scrutiny.
Manufacturing's 2026 leaders won't necessarily have the flashiest AI or the biggest technology budget.
They'll have the most intelligent information architecture.
So before deploying your next AI agent or investing in another supply chain platform, consider one question: Can these tools actually find and understand the information they need to deliver value?
If that question gives you pause, you've identified your real opportunity. The cost of waiting? Companies that don't establish information foundations in 2026 will spend 2027-2028 retrofitting systems, managing data chaos, and watching their AI investments underperform while competitors with better information architecture pull steadily ahead.
The post 3 manufacturing trends for 2026 that nobody’s talking about appeared first on OpenText Blogs.
-
Navigating the Legal AI tidal wave: Expert insights from OpenText World 2025
95% of legal professionals expect generative AI to become central to their workflows within five years. Yet only 40% are currently using or planning to use it. This gap between expectation and adoption reveals a profession standing at the water's edge, watching an inevitable wave approach.
At OpenText World 2025's Legal Tech track, experts shared how to ride this wave rather than get swept under it. Here are the five most critical insights.
1. Take stock of your dragons and unicorns: Prioritizing high-value legal AI use cases
Not every problem needs an AI solution. Andrew Kent, Director of Litigation Support at Pillsbury Winthrop Shaw Pittman, suggests starting with two questions:
* What's killing your productivity? (Your dragons)
* What critical goals do you need to accomplish? (Your unicorns)
One firm piloting OpenText eDiscovery Aviator praised its ability to slash document review time and costs while maintaining defensible workflows.
Jennifer Laws Harrell, eDiscovery Litigation Project Manager at Siemens Energy, aims to use eDiscovery Aviator GenAI to reduce the number of documents sent to outside counsel, thereby reducing review costs while preserving space for human judgment in crucial areas like custodian interviews and data collection.
Another speaker noted the potential for GenAI to assist in meeting international regulatory obligations, such as the EU General Data Protection Regulation (GDPR).
2. GenAI as rocket fuel: Accelerating case narratives and early case assessment
Traditional technology-assisted review (TAR) helped lawyers prioritize relevant documents, but counsel still had to wait for review teams to finish before understanding the case story.
Not anymore. GenAI tools like OpenText eDiscovery Aviator Rapid Exploration let counsel find key documents immediately and use AI-generated summaries to piece together the narrative from day one. This capability transforms case strategy, enabling lawyers to understand their cases, formulate winning strategies, and execute faster than ever before.
3. Technical competence is now mandatory for legal teams
Since 2012, when the American Bar Association introduced technology competence standards, the bar has only risen. Today's legal professionals face pressure from all sides: corporate clients demanding efficiency, internal AI mandates, and the competitive imperative to stay current.
Conference attendees emphasized the importance of knowing which technology fits each situation, validating AI findings, and guarding against hallucinations. Equally critical: partnering with the right technology providers who can guide change management and share best practices.
Alexandra Roy-Lévesque, National eDiscovery Team Lead at Norton Rose Fulbright LLP, remarked on the importance of reliable project managers who can handle technical tasks and keep reviews progressing smoothly against tight deadlines.
One speaker predicted the rise of "hybrid attorney-analysts"—professionals skilled at interpreting AI-driven insights, noting that in this new landscape, discernment matters more than output generation.
4. The automation paradox: Balancing AI efficiency and job security
Fear of job loss is natural, but clinging to inefficient processes doesn't serve anyone's interests. When technology offers more accurate, cost-effective solutions—such as AI-assisted first-level document review—the profession must evolve.
The key is reframing: instead of resisting change, legal professionals should focus on delivering greater value to clients in new ways.
5. Overreliance on AI threatens critical thinking
While AI is transformative, overuse poses risks, especially for junior lawyers. Studies show that excessive reliance on GenAI erodes critical thinking skills and homogenizes writing styles. As AI-generated content feeds back into large language models, outputs become increasingly uniform.
One speaker warned against using AI to generate automated first drafts for tasks that require creativity and critical thought. Without proper guardrails and mentorship, young lawyers risk never developing essential analytical skills.
The path forward for legal AI: Trust, verify, and keep humans in the loop
Legal professionals at OpenText World are optimistic yet cautious. They're implementing GenAI gradually, using validation workflows developed through years of TAR experience.
Fern Boese, eDiscovery Specialist at Lawson Lundell LLP, noted that implementing GenAI review would be a true collaborative effort, with input from senior lawyers throughout the process to ensure quality and efficiency. As Naomi Carrera-McKail, litigation law clerk at Norton Rose Fulbright, put it: "My experience with TAR taught me that human oversight is non-negotiable. Technology amplifies judgment; it doesn't replace it."
By identifying their specific challenges, investing in technical competence, and applying AI strategically, legal professionals can harness this technology's unprecedented potential—staying firmly on top of the wave reshaping the profession.
Ready to explore how AI can transform your legal workflows?
Or talk to an expert.
The post Navigating the Legal AI tidal wave: Expert insights from OpenText World 2025 appeared first on OpenText Blogs.
-
Think EDR has your back? Think again.
Security teams today are under relentless pressure. Every hour, new threats emerge, threat actors innovate, and attack surfaces grow. Endpoint Detection and Response (EDR) has become the go-to tool for many Security Operations Centers (SOCs), and for good reason. EDR provides visibility into endpoint activity, surfaces suspicious behaviors, and enables containment actions. But the truth is, EDR alone isn’t enough to defend against today’s advanced threats.
To move beyond reactive firefighting, SOCs need Digital Forensics and Incident Response (DFIR) solutions that dig deeper, preserve evidence, and provide forensic-grade investigation and response capabilities. Together, EDR and DFIR give SOCs both the speed to contain threats and the clarity to understand them.
The limits of EDR for modern threats
EDR has earned its place as a pillar in cybersecurity, but it comes with significant limitations:
* Detection isn’t investigation: EDR is designed to detect suspicious activity, not to perform in-depth forensic analysis. An alert may indicate that a process is behaving strangely, but it may not reveal how the attacker gained access, what data they accessed, or whether persistence was established. Without these insights, SOCs risk treating symptoms rather than addressing the root causes.
* Coverage gaps leave blind spots: Endpoints are no longer neatly confined within a corporate firewall. Remote devices, off-VPN systems, and unmanaged assets often fall outside the visibility of EDRs. Threat actors are aware of these blind spots and exploit them, leaving SOCs in the dark.
* Attackers can evade EDR: Modern attackers use fileless malware, living-off-the-land techniques, and compromised third-party applications to avoid detection. In the case of a recent TransUnion breach, attackers exploited a third-party system, something traditional EDR tools would not flag.
* Limited value for compliance and legal needs: EDR alerts may help with containment, but they rarely provide the court-defensible evidence needed for regulators, auditors, or insurers. In an age of SEC disclosure rules, GDPR fines, and DORA requirements, evidence handling is mandatory, not optional.
* Context switching wastes time: Even when EDR flags a compromise, SOC teams often need to pivot into other tools for deeper investigation, remediation, and reporting. This context-switching wastes time in moments when every second counts.
Why “EDR-plus” isn’t the same as DFIR
Some EDR vendors claim to deliver DFIR as an add-on capability. However, in practice, their solutions remain EDR-first, designed for detection rather than in-depth investigation. That EDR bias manifests itself in four important ways:
* Shallow forensics capabilities: Most EDR platforms focus on telemetry and alerting, not forensic-grade evidence collection. They rarely provide tamper-proof logs, chain-of-custody integrity, or court-defensible reporting. That leaves organizations exposed when regulators, insurers, or legal teams require proof.
* Limited artifact collection: EDR-biased solutions might capture basic endpoint data, but they often miss critical artifacts, like volatile memory, registry changes, or third-party activity that only dedicated DFIR tools can preserve. Without these, investigations remain incomplete.
* Containment at the expense of evidence: Many EDR-first tools prioritize speed of isolation or remediation, but in doing so, they overwrite or lose evidence that investigators need later. True DFIR solutions are built to act quickly without compromising the investigation.
* Compliance gaps: When vendors stretch EDR into “DFIR,” it often fails to meet the defensibility standards of auditors, regulators, or courts. Without repeatable forensic playbooks and audit-ready reporting, organizations can’t meet compliance obligations.
Why SOCs need both EDR and DFIR
The takeaway is clear: EDR is critical for fast detection and containment, but it is not built for deep forensic analysis. A mature SOC requires both EDR for speed and DFIR for comprehensive investigation. Here’s why:
* DFIR Provides Root Cause Analysis: While EDR raises the flag, DFIR uncovers the story. DFIR tools enable analysts to collect forensic artifacts, reconstruct attacker timelines, and track activity across multiple devices. This level of detail allows SOC teams to understand not only what happened, but also how and why it happened.
* DFIR Extends Visibility Beyond the Endpoint: Modern DFIR solutions provide cross-environment monitoring, capturing activity even on off-VPN or remote systems. They also integrate with SIEM and SOAR platforms, ensuring incidents, whether from internal endpoints or third-party vendors, are triaged in the SOC immediately.
* DFIR Preserves Evidence with Integrity: Every forensic collection is logged and hashed, maintaining chain-of-custody. This is critical for regulatory filings, legal proceedings, and insurance claims. DFIR provides security leaders with confidence that the evidence is defensible and audit ready.
* DFIR Enables Smarter Containment: Instead of choosing between speed and accuracy, DFIR allows SOCs to do both. Compromised endpoints can be remotely isolated while maintaining forensic access. Malicious processes and files can be remediated automatically without losing the evidence needed for investigation.
* DFIR Strengthens SOC Maturity: In a mature SOC aligned with zero-trust principles, DFIR plays a pivotal role in advancing operational resilience. By enabling red team/blue team exercises, breach simulations, and repeatable forensic playbooks, DFIR makes sure that every incident becomes a learning opportunity. It strengthens continuous verification and visibility (key principles of zero trust) by providing forensic-level insights into how threats evade existing controls. Over time, this transformation enables the SOC to shift from a reactive unit to a proactive force, continuously refining detection, response, and containment strategies to reduce dwell time and enhance readiness.
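The "logged and hashed" evidence handling described above can be made concrete with a minimal sketch. This is not OpenText's implementation; the field names and workflow are illustrative. The core idea: hash each collected artifact, and have every chain-of-custody entry include the hash of the entry before it, so altering any earlier record invalidates every later one.

```python
import hashlib
import json
import os
import tempfile
from datetime import datetime, timezone

def sha256_file(path, chunk_size=65536):
    """Hash an artifact in chunks so large evidence files never load whole."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def append_custody_entry(log, artifact_path, collected_by):
    """Append a chain-of-custody record linked to the previous entry's hash."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "artifact": artifact_path,
        "artifact_sha256": sha256_file(artifact_path),
        "collected_by": collected_by,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "prev_entry_hash": prev_hash,
    }
    # Hash the entry itself (sorted keys make the JSON deterministic).
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every link; returns True only if no record was altered."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_entry_hash"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True

if __name__ == "__main__":
    # Throwaway file standing in for a collected artifact (e.g. a memory dump).
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(b"simulated memory dump")
    log = []
    append_custody_entry(log, path, "analyst_a")
    append_custody_entry(log, path, "analyst_b")
    print(verify_chain(log))   # chain intact
    log[0]["collected_by"] = "intruder"
    print(verify_chain(log))   # tampering detected
    os.remove(path)
```

Production DFIR tooling adds far more (write-blocked acquisition, signed timestamps, secure storage), but this hash-chain pattern is the essence of tamper-evident, audit-ready evidence logs.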
A real-world example
Consider a possible scenario: An EDR tool flags suspicious PowerShell activity on a laptop, triggering containment, but the SOC still needs answers:
* Was this an isolated incident or part of a broader campaign?
* Did the attacker escalate privileges or move laterally?
* What files were accessed or exfiltrated?
* Is there persistence left behind?
EDR can’t answer all of these questions. DFIR can. With forensic timeline reconstruction, analysts can trace the attacker’s path, identify persistence mechanisms, and determine whether sensitive data was compromised. Without DFIR, the SOC is left guessing.
The business value of DFIR for cyber resilience
Beyond the technical benefits, digital forensics and incident response deliver tangible business outcomes that get the attention of executives, boards, and regulators:
* Reduced Risk: Faster detection and containment shrink dwell time, limiting exposure and preventing costly breaches.
* Regulatory Compliance: Audit-ready reports and chain-of-custody evidence meet the demands of GDPR, HIPAA, SEC, and other regulators.
* Operational Efficiency: Automated collections and artifact-driven workflows cut investigation times from days to hours, reducing analyst fatigue.
* Consumer Trust: Faster, more transparent incident response protects brand reputation and minimizes fallout.
* Resilience: By integrating forensic insights into daily SOC operations, organizations continuously improve their defenses.
Why OpenText™ Endpoint Forensics & Response complements EDR
At OpenText, we see DFIR as a core capability for every modern SOC. OpenText Endpoint Forensics and Response combines forensic investigation and incident response into a single solution.
* Gain real-time endpoint visibility, even across off-network devices, so SOC teams can maintain full situational awareness and respond to threats faster, reducing operational blind spots.
* Automate threat detection with IoC and YARA-based scanning, helping SOC analysts quickly identify beaconing malware, unauthorized processes, or registry tampering, minimizing manual workload and speeding up investigations.
* Enable remote incident response by isolating compromised endpoints, terminating malicious processes, and remediating threats while preserving forensic evidence and ensuring legal defensibility. This minimizes dwell time and business disruption.
* Produce audit-ready reports that meet the needs of regulators, executives, and cyber insurers, supporting compliance, reducing regulatory risk, and strengthening business trust.
* Support SOC maturity and resilience by enabling repeatable, forensically sound playbooks that improve response times, facilitate red/blue team exercises, and drive continuous improvement across the security program.
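The IoC-based scanning mentioned above can be illustrated with a deliberately simplified sketch. Real scanners also match YARA rules, memory, and registry state; this example shows only the simplest form of the technique, matching file contents against a hypothetical indicator list (the hash values and file names below are made up for illustration).

```python
import hashlib
import os

# Hypothetical IoC feed: SHA-256 hashes of known-malicious payloads.
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"simulated malicious payload").hexdigest(),
}

def sha256_file(path, chunk_size=65536):
    """Hash a file in chunks to keep memory use flat on large files."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_tree(root, bad_hashes=KNOWN_BAD_SHA256):
    """Walk a directory tree; return paths whose contents match an IoC hash."""
    hits = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if sha256_file(path) in bad_hashes:
                    hits.append(path)
            except OSError:
                continue  # unreadable file; a real scanner would log this
    return hits
```

A production tool would compile YARA rules as well (for example via the yara-python bindings), stream hits to the SIEM, and scan far more than the filesystem; the point here is only the match-against-indicators loop at the heart of automated triage.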
In short, OpenText Endpoint Forensics & Response complements EDR by giving SOCs both the speed to contain threats and the forensic depth to respond confidently.
Building a SOC ready for today’s threats
The reality is clear: EDR alone is not enough. Attackers are too sophisticated, threats evolve too quickly, and regulators demand too much of organizations to rely solely on detection.
A mature SOC combines EDR with digital forensics and incident response (DFIR). While EDR identifies suspicious activity and initiates containment, DFIR provides the deep investigation, evidence, and response needed to fully resolve incidents. Together, they enable organizations to respond faster, make smarter decisions, and build greater resilience.
With OpenText Endpoint Forensics & Response, SOCs don't just react to threats; they investigate, contain, and emerge stronger.
Ready to strengthen your SOC with forensic-grade response? Contact OpenText today to learn how our OpenText Endpoint Forensics & Response DFIR solution complements EDR to reduce risk, improve compliance, and build lasting resilience.
The post Think EDR has your back? Think again. appeared first on OpenText Blogs.
-
Stop treating ESM like a tool choice: It’s a business strategy
As we move toward the end of the year, IT leaders are already shaping priorities for the next one. For many organizations, enterprise service management (ESM) is near the top—whether the goal is to strengthen what’s already in place or expand ESM across the business. The real challenge is turning those ambitions into measurable outcomes that deliver business value.
According to the Gartner® How to Build a Successful Enterprise Service Management Program report, 45% of leaders cite maximizing their ITSM investment as the biggest benefit of ESM. Yet many organizations struggle to realize that value without a clear success framework.
What ESM really means
ESM takes the principles and practices of IT service management (ITSM) and extends them across functions like HR, facilities, legal, finance, payroll, and procurement. As we often say at OpenText: A service is a service—whether it sits inside IT or not.
With capabilities such as service catalogs, automated workflows, self-service, and knowledge management—now increasingly powered by AI—ESM delivers consistent, efficient, user-focused services across the enterprise.
Start with strategy, not the tool
Organizations that choose an ESM platform before defining objectives, governance, and success metrics often end up with a tool that doesn’t solve the right problems. Gartner notes that 90% of ESM client inquiries prioritize tools over strategy—leading to mismatched capabilities and limited outcomes.
Gartner’s ESM success framework highlights essential steps for driving transformation:
* Secure stakeholder buy-in early.
* Partner with functions that have strong ESM affinity—HR and workplace management are great starting points.
* Define clear transformation goals.
* Align on scope, objectives, and funding.
* Establish a center of excellence to operationalize ESM.
* Measure what matters—combine qualitative metrics like CSAT and portal usability with quantitative metrics such as onboarding cycle time and case resolution.
* Plan across a three-year horizon to sustain momentum and deliver ongoing value.
How OpenText put ESM best practices into action
At OpenText, we aligned closely with these best practices for our own ESM program:
* CIO-level sponsorship ensured strong executive support.
* We partnered with HR—an early adopter with clear business needs.
* HR set clear goals: elevate the employee experience, reduce ticket volume by 20%, and expand self-service.
* We measured progress through CSAT, year-over-year ticket reduction, and self-service engagement. Notably, global CSAT improved from the low 80% range to 92%, driven by streamlined SLA tracking and simplified ticket management.
Today, 22,000 employees have one place to go for services. What started with HR a year ago has grown to more than 500 teams using OpenText Service Management, deploying processes and workflows that handle over 30,000 tickets each month. The results? Lower operational costs, simplified processes, and greater visibility into service performance.
Hear from Shannon Bell, EVP, Chief Digital Officer and CIO at OpenText, on how we set—and surpassed—our cost reduction goals. For example, in just one HR automation use case, we automated 20,000 high-volume, repetitive tickets, saving approximately 5,000 human hours—that’s 625 workdays reclaimed from repetitive tasks and redirected toward high-value work.
Your next step
Before scheduling a demo for ESM tool evaluation, ask yourself:
* Do we have stakeholder buy-in?
* Have we defined our business objectives?
* Do we know which functions we’ll partner with first?
Remember: ESM success starts with strategy. Technology follows.
For a deeper look at best practices, the Gartner report, How to Build a Successful Enterprise Service Management Program, provides valuable guidance. Read the report here.
Explore more
* Scale service management for HR – Hear from our HR team on how they streamlined employee services through ESM.
* Modernize the employee experience – Hear from our Global HR VP Operations on driving efficiencies and empowering teams.
* What is ESM? – The role of enterprise service management (ESM) in modern operations.
* OpenText Service Management – Learn more about our solution and how it supports enterprise-wide service delivery.
The post Stop treating ESM like a tool choice: It’s a business strategy appeared first on OpenText Blogs.