Feature spotlights

Leading inline deduplication for object storage

StorReduce is a specialized software-defined deduplication solution designed to meet the unique requirements of companies using object storage (on-cloud or on-premises) for large volumes of data, delivering industry-leading deduplication ratios. StorReduce sits between your applications and the object store, transparently performing inline data deduplication. StorReduce reduces storage and bandwidth requirements by as much as 30 times, and data transfer times can be reduced by the same factor.
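
The core idea behind inline deduplication can be illustrated with a toy sketch: split incoming data into blocks, store each unique block once, and keep a per-object list of block references. This is a minimal illustration only, not StorReduce's actual chunking or indexing algorithm (which, like most production deduplicators, is far more sophisticated).

```python
import hashlib

def dedup_store(data: bytes, block_size: int = 4096):
    """Toy inline deduplication: split data into fixed-size blocks,
    keep one copy of each unique block, and record the block sequence."""
    store = {}    # block hash -> block bytes (each unique block stored once)
    recipe = []   # ordered list of hashes needed to reconstruct the object
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # write only if not already stored
        recipe.append(digest)
    return store, recipe

def restore(store, recipe):
    """Reassemble the original object from its block recipe."""
    return b"".join(store[h] for h in recipe)

# A highly redundant payload, standing in for repeated full backups:
payload = b"A" * 4096 * 30  # 30 identical blocks
store, recipe = dedup_store(payload)
ratio = len(payload) / sum(len(b) for b in store.values())
print(f"unique blocks: {len(store)}, dedup ratio: {ratio:.0f}x")
```

With fully redundant input like this, only one block is physically stored, which is where large reduction factors on repetitive backup data come from.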

Fast and Massively Scalable

StorReduce can make your backup solutions fast and massively scalable. Sustained write speeds of 10 gigabytes/s or higher are possible using multiple servers deployed as a scale-out cluster, and hundreds of petabytes of data can be managed. Data deduplicated with StorReduce can always be accessed on-cloud or on-premises using the S3 API.

S3 REST Interface

StorReduce exposes its own S3-compatible REST interface and accepts AWS version 2 and version 4 signatures, allowing you to use StorReduce as you would a conventional object store while retaining compatibility with your favorite S3 clients and applications.
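
For context on what accepting version 4 signatures involves, the sketch below derives an AWS Signature Version 4 signing key using only the Python standard library, following AWS's published key-derivation scheme (HMAC-SHA256 chained over date, region, service, and the literal "aws4_request"). The secret key shown is a placeholder, and a real request signature involves further steps (canonical request, string to sign) beyond this key derivation.

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the AWS Signature Version 4 signing key:
    HMAC-SHA256 chained over date, region, service, and 'aws4_request'."""
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()
    k_date = _hmac(("AWS4" + secret_key).encode(), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Placeholder credentials for illustration only:
key = sigv4_signing_key("EXAMPLE-SECRET-KEY", "20150830", "us-east-1", "s3")
print(key.hex())
```

Because the derivation is scoped to date, region, and service, the same secret yields different signing keys for different scopes, which is part of what makes v4 signatures safer than the older v2 scheme.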

Compatible with leading backup software

StorReduce works with existing data management or backup software that is compatible with Amazon S3, including Veritas NetBackup, Backup Exec, Commvault Simpana, Veeam and EMC NetWorker.

High Availability

When configured as a scale-out cluster, StorReduce provides automatic failover in the event of a server failure and can be set up with any desired degree of redundancy between servers.

Unwrap backup files

StorReduce unwraps backup files (such as TAR files generated by backup applications) on the cloud, so the migrated backup data can be used with any cloud service, such as search, data mining, or artificial intelligence.

Read-only Replicas

StorReduce allows you to access your data from virtually anywhere that can reach the S3-compatible backend used to store your deduplicated data. Additional StorReduce servers or clusters can be deployed on-cloud or on-premises as read-only endpoints. Uploaded data is stored only once but is immediately available at each endpoint location, enabling migrated backup workloads to be re-purposed on the cloud for development, test, QA, and disaster recovery, and to be used by cloud-based services.

Quickly Clone Entire Buckets at Petabyte Scale, Nearly Free!

StorReduce Object Clone enables you to repeatedly produce fully isolated, copy-on-write clones of buckets containing millions of objects and petabytes of data, at virtually zero cost. Your cloned bucket contains a comprehensive snapshot of your data at a single point in time, without the worry of changes and deletions from developers, researchers and experiments causing data corruption. Object Clone works with all kinds of data beyond backups, such as media and IoT data.
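
The reason copy-on-write clones can be nearly free in a deduplicated store is that objects are just named pointers to shared, content-addressed chunks, so cloning a bucket only copies the pointer map. The sketch below illustrates that idea in a few lines of Python; it is a conceptual model, not StorReduce's implementation, and all class and bucket names are invented for the example.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: objects are named pointers to chunks
    stored once, so 'cloning' a bucket copies only the pointers."""
    def __init__(self):
        self.chunks = {}   # hash -> bytes, shared across all buckets
        self.buckets = {}  # bucket name -> {object key -> chunk hash}

    def put(self, bucket: str, key: str, data: bytes) -> None:
        h = hashlib.sha256(data).hexdigest()
        self.chunks.setdefault(h, data)          # store the chunk once
        self.buckets.setdefault(bucket, {})[key] = h

    def get(self, bucket: str, key: str) -> bytes:
        return self.chunks[self.buckets[bucket][key]]

    def clone(self, src: str, dst: str) -> None:
        # Copy-on-write clone: duplicate the key->hash map, not the data.
        self.buckets[dst] = dict(self.buckets[src])

store = DedupStore()
store.put("prod", "a.txt", b"original")
store.clone("prod", "dev")               # near-zero cost: pointers only
store.put("dev", "a.txt", b"modified")   # this write lands only in the clone
```

After the clone, writes to "dev" never touch "prod", which is why experiments on a clone cannot corrupt the source data.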

Data Replication

StorReduce can replicate data between regions, between cloud vendors, or between public and private cloud, providing increased data resilience. Only the unique data is transferred, providing up to 30 times speedup on transmission and up to 30 times reduction in bandwidth and storage costs.

Works on all major public and private object stores

Built to work with the major public clouds, including Amazon S3, Microsoft Azure and Google Cloud Storage, StorReduce also works with any backend that implements an S3 REST interface, and can therefore work with a variety of private object stores, including Cloudian HyperStore, IBM COS, HDS HCP, EMC ECS, Scality, HGST Active Archive and ActiveScale, and many more.

Secure Encryption

StorReduce supports encryption of data before it lands in the object store. StorReduce integrates with key management systems such as AWS KMS or KMIP-compatible hardware security modules for storage of the cryptographic keys. Data can be encrypted on-premises before being sent to the cloud, or cloud encryption-at-rest services can be used. Data is always encrypted in transit.

Secure User Account and Key Management

Users or servers can be given individual user accounts within StorReduce, allowing data access to be restricted. Multiple access keys can be created and managed as needed for each user account. Enterprise security policies can be expressed using StorReduce's policy engine, which uses Amazon's IAM policy language.
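
As an illustration of what expressing a policy in Amazon's IAM policy language looks like, the snippet below builds a read-only policy document for a single bucket. The bucket name and ARNs are hypothetical, and the exact set of actions StorReduce's policy engine supports is not specified here; this only shows the general shape of an IAM-style policy document.

```python
import json

# Hypothetical IAM-style policy: read-only access to one bucket.
# Bucket name and ARNs are invented for this example.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::backups-bucket",
                "arn:aws:s3:::backups-bucket/*",
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```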

Customer case study

StorReduce and Equinix reduce a US healthcare company's on-cloud backup costs by over 85%

Read the case study

How customers use it

  • Primary Backups Straight to Object Storage with StorReduce

    Primary backups stored to on-premises backup appliances (e.g. Data Domain) have a single point of potential data loss, do not scale, and prevent data insights. Save up to 80% by deduplicating and storing to cloud, and use services like Watson and search on them.


    StorReduce configured onto an object store backend provides faster and more durable primary backup storage. The scale-out architecture has massive throughput at PB-scale enabling faster ingest and recovery than leading backup appliances.

  • Remove Tape, Send Backups to Cloud

    Tape archives generally contain multiple copies of the same data sets, which can be reduced to a single copy with deduplication. This has the potential to reduce the amount of data stored and bandwidth for transfer down to as little as 1/30th.


    Send tape or disk-based backups to the cloud by installing StorReduce software on-premises, enabling very fast migration of an enterprise's backup archives while minimizing bandwidth. Granular recovery on-cloud with search.

  • Clone Large Data Sets

    Developers, researchers and experimenters may want to access an organization's petabyte-scale datasets and make changes without fear of corruption. Cloning data is traditionally expensive and time-consuming, leading to underutilization of data.


    StorReduce Object Clone enables you to repeatedly produce fully isolated, copy-on-write, comprehensive clones of buckets containing millions of objects and petabytes of data, at virtually zero cost.

See how it works