SCADA Historian vs. Time-Series Database: The Architecture Decision That Shapes Your Analytics

OSIsoft PI (now part of AVEVA) and AVEVA Historian are the industrial historian standard. InfluxDB and TimescaleDB are the modern alternatives. The decision is not as simple as replacing one with the other.

August 19, 2025 · 12 min read · Technical Deep Dive

Industrial data historians have been the standard storage layer for SCADA sensor data since OSIsoft released PI System in 1980. For the majority of manufacturing facilities built in the last 40 years, the historian - whether OSIsoft PI, AVEVA Historian, GE Proficy Historian, or Rockwell FactoryTalk Historian - is the authoritative record of what every sensor read, when, and for how long. These systems are battle-tested, OT-security approved, and deeply integrated into the operational workflows they support.

The argument for modern time-series databases - InfluxDB, TimescaleDB, QuestDB, Apache IoTDB - is compelling on the surface: open source or lower-cost licensing, cloud-native deployment, SQL-compatible query languages, and modern API interfaces that integrate cleanly with cloud analytics platforms. The question for operations teams is whether these alternatives are ready to replace industrial historians, and under what conditions.

The answer is more nuanced than most vendor presentations acknowledge: industrial historians and modern time-series databases are optimized for different problems, and the right architecture for most manufacturing facilities is not a replacement but a complementary layer.

What Industrial Historians Are Actually Good At

Industrial historians are purpose-built for one specific problem: storing and retrieving large volumes of tag-based time-series data with high reliability, on hardware that may be decades old, in an OT network environment with strict security and change management requirements.

Key capabilities that industrial historians provide and that modern TSDB platforms often struggle to replicate:

  • Compression for slowly-changing values: Historians use swinging door compression and exception reporting - only storing values when they change beyond a deadband threshold. A temperature sensor stable at 82.3°C for 8 hours generates one historian record instead of 28,800 one-per-second samples. This compression ratio is typically 10:1 to 100:1 for process data and is fundamental to long-term storage economics.
  • Native OPC-DA and OPC-HDA support: Historians were designed to receive data directly from OPC servers without intermediate translation layers. OPC-HDA (Historical Data Access) provides a standardized API for retrospective queries that is broadly supported across the industrial software ecosystem.
  • Certified OT security posture: AVEVA PI System, Rockwell FactoryTalk Historian, and comparable products have OT-specific security certifications, established vulnerability disclosure programs, and change management processes that industrial facilities trust. This matters enormously for OT security review timelines.
  • Long track record at scale: Large industrial facilities run historians with 500,000+ tags and 30+ years of data. The reliability and data fidelity of these systems at that scale is proven.
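
The deadband mechanism behind exception reporting can be sketched in a few lines. This is a simplified illustration, not any vendor's implementation - real historians combine exception reporting with swinging door compression and time-based limits - but it shows why a stable signal collapses to almost nothing:

```python
def exception_filter(samples, deadband):
    """Keep only samples that deviate from the last archived value
    by more than the deadband (simplified exception reporting)."""
    archived = []
    for ts, value in samples:
        if not archived or abs(value - archived[-1][1]) > deadband:
            archived.append((ts, value))
    return archived

# A sensor stable at 82.3 degC (with +/-0.05 noise, sampled at 1 Hz for
# 8 hours) collapses to a single archived record with a 0.5 deadband.
readings = [(t, 82.3 + (0.05 if t % 2 else -0.05)) for t in range(28_800)]
archived = exception_filter(readings, deadband=0.5)
```

For this stretch of stable data, `len(archived)` is 1 - a ~28,800:1 reduction, which is where the 10:1 to 100:1 ratios quoted for real process data come from once genuine transients are included.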

Where Industrial Historians Fall Short

The limitations of industrial historians become apparent when the use case shifts from data storage to analytics:

  • Query languages: Industrial historians use proprietary query languages (PI AF Expressions, Historian SQL) that require specialized knowledge and don't integrate with standard analytics tools without middleware.
  • API modernity: Most historian REST APIs were designed for low-frequency polling, not streaming subscriptions. Real-time anomaly detection that needs sub-second data latency is difficult to implement against a historian API.
  • Cloud integration: On-premise historians don't natively stream to cloud analytics platforms. Getting historian data into AWS, Azure, or GCP typically requires a connector product that adds cost and complexity.
  • Licensing cost: OSIsoft PI licensing is typically $100,000-$500,000+ for large installations, with annual maintenance fees. For cloud-native analytics, the total cost of ownership is significantly higher than open-source TSDB alternatives.
[Figure: historian and TSDB architecture layers]

What Modern Time-Series Databases Bring to Industrial Data

InfluxDB, TimescaleDB, and their peers solve the analytics layer problem that industrial historians were never designed to address. Their advantages for industrial data pipelines:

  • SQL-compatible queries: TimescaleDB is a PostgreSQL extension, so queries are standard SQL. InfluxDB supports the SQL-like InfluxQL, and InfluxDB 3.x adds native SQL. Analytics teams can query with standard tools - Grafana, Apache Superset, Python pandas - without historian-specific training.
  • Streaming subscriptions: Modern TSDBs support real-time data subscription via Kafka, MQTT, or direct HTTP push. For anomaly detection at sub-second latency, this architecture is significantly more capable than polling a historian.
  • Cloud-native deployment: Docker-deployable, Kubernetes-ready, and natively integrated with cloud storage. Data retention, replication, and backup are managed through standard DevOps tooling.
  • Open ecosystem: Integration with ML frameworks, visualization tools, and event streaming platforms is straightforward. No proprietary connector layer required.
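
To make the SQL point concrete, the sketch below composes a TimescaleDB downsampling query using the `time_bucket` function; the `sensor_data` table and its column names are illustrative, and any PostgreSQL client (psycopg2, pandas.read_sql, Grafana) could run the resulting string:

```python
def downsample_query(table: str, interval: str = "5 minutes") -> str:
    """Build a TimescaleDB query that averages raw samples into
    fixed-width time buckets over the last 24 hours."""
    return (
        f"SELECT time_bucket('{interval}', ts) AS bucket, "
        f"tag, avg(value) AS avg_value "
        f"FROM {table} "
        f"WHERE ts > now() - interval '24 hours' "
        f"GROUP BY bucket, tag ORDER BY bucket"
    )

sql = downsample_query("sensor_data")
```

In production you would pass the interval as a bound query parameter rather than interpolating it into the string; the f-string form is used here only to keep the SQL visible.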

The Architecture That Actually Works in Production

The practical architecture for operational intelligence at facilities with existing industrial historians is a two-layer model: retain the historian as the on-premise, long-term storage layer (where it is trusted and already integrated), and add a cloud or edge TSDB for the analytics layer (where modern query capabilities and API integration are needed).

The historian-to-TSDB pipeline reads from the historian API at configurable intervals (typically 1-60 second polling for operational intelligence use cases) and writes to the analytics TSDB. The analytics platform queries the TSDB with standard tooling. The historian remains the authoritative on-premise record.
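
A minimal sketch of the polling side of that pipeline, assuming a generic fetch function that returns (timestamp, tag, value) tuples from the historian API - the fetcher and writer here are hypothetical stand-ins for a vendor client and a TSDB client, not any product's actual interface:

```python
from datetime import datetime, timedelta, timezone

class HistorianPoller:
    """Incrementally pull new samples from a historian and forward
    them to a TSDB writer, tracking a high-water mark per poll."""

    def __init__(self, fetch, write, start: datetime):
        self.fetch = fetch      # fetch(start, end) -> [(ts, tag, value), ...]
        self.write = write      # write(batch), e.g. a TSDB bulk insert
        self.watermark = start  # newest timestamp already forwarded

    def poll_once(self, now: datetime) -> int:
        """Fetch samples newer than the watermark, forward them,
        advance the watermark, and return the batch size."""
        batch = [s for s in self.fetch(self.watermark, now)
                 if s[0] > self.watermark]
        if batch:
            self.write(batch)
            self.watermark = max(s[0] for s in batch)
        return len(batch)

# Example with an in-memory fake historian holding 5 one-second samples:
t0 = datetime(2025, 1, 1, tzinfo=timezone.utc)
data = [(t0 + timedelta(seconds=i), "TI-101", 82.3) for i in range(5)]
out = []
poller = HistorianPoller(lambda a, b: [s for s in data if a < s[0] <= b],
                         out.extend, start=t0)
poller.poll_once(t0 + timedelta(seconds=10))
```

The watermark is what makes the loop safe to run at any interval from 1 to 60 seconds: a repeated poll over the same window forwards nothing twice, and the historian stays the authoritative record.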

This is the architecture Relynk implements: we read from existing industrial historians via their native APIs (AVEVA Historian REST, OSIsoft PI Web API, Ignition historian), maintain a cloud-side time-series store for the anomaly detection layer, and provide read-only API access for downstream analytics. The historian is not replaced. It is extended with a modern analytics layer that the historian was never designed to provide.

When Replacing the Historian Is Justified

For greenfield deployments - new facilities with no legacy historian investment - a modern TSDB with native industrial protocol support (via an OPC-UA collector writing directly to InfluxDB or TimescaleDB) is a viable and lower-cost architecture than a traditional historian installation. The OT security review for a self-hosted TSDB is manageable, provided the deployment architecture is documented for the security team up front rather than presented after the fact.
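
On the greenfield path, the collector's core job reduces to mapping each incoming OPC-UA data change into the TSDB's write format. The sketch below formats one sample as an InfluxDB line protocol record (a documented text format: measurement, tag set, field set, timestamp); the measurement and tag names are illustrative, and a real collector would receive the data changes through an OPC-UA client library such as asyncua:

```python
def to_line_protocol(measurement: str, tag: str,
                     value: float, ts_ns: int) -> str:
    """Format one sensor sample as an InfluxDB line protocol record:
    measurement,tag_key=tag_value field=value timestamp_ns"""
    return f"{measurement},tag={tag} value={value} {ts_ns}"

line = to_line_protocol("process", "TI-101", 82.3, 1735689600000000000)
# -> "process,tag=TI-101 value=82.3 1735689600000000000"
```

Batches of such lines are what the InfluxDB write endpoint accepts, which is why the collector layer for a greenfield TSDB deployment stays thin.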

For facilities with historian licenses expiring and no strong reason to renew, a migration project is reasonable if planned carefully - data migration from historian to TSDB format preserves the historical record, and the improved analytics capabilities often justify the migration investment within 18-24 months.

For facilities with active historians in good standing, the replacement case is weak. The historian is working. The OT security team trusts it. The migration cost and disruption risk are real. Adding a modern analytics layer on top of the historian provides the analytics capabilities without the disruption.

Relynk reads from your existing historian

No historian replacement required. Relynk connects to AVEVA, OSIsoft PI, Ignition, and FactoryTalk Historian via their native APIs and adds the anomaly detection and CMMS integration layer on top.

Request a Demo
Back to Blog