
Overcoming the bottlenecks of conventional synchronization
Replicating or synchronizing massive data stores over wide-area networks (WANs) with traditional TCP-based replication tools presents serious challenges. Throughput degrades sharply as latency and packet loss grow with transfer distance and with the congestion typical of large transfers. IBM Aspera® Sync offers extreme performance and efficiency to overcome the limitations of conventional synchronization tools like rsync, letting you scale up and out for maximum-speed replication and synchronization over WANs.
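To see why a single TCP stream hits a wall on a lossy, high-latency WAN, the widely cited Mathis et al. approximation for steady-state TCP Reno throughput is instructive. This is a general networking rule of thumb, not an Aspera formula, and the parameter values below are illustrative assumptions matching the network conditions cited in the footnotes:

```python
import math

def tcp_throughput_bps(mss_bytes=1460, rtt_s=0.100, loss=0.01, c=1.22):
    """Mathis et al. approximation for one TCP Reno stream:
    throughput ≈ (MSS / RTT) * (C / sqrt(p)),
    where p is the packet-loss rate and C ≈ 1.22."""
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss))

# At 100 ms RTT and 1 percent loss (the conditions in footnotes 2 and 3),
# one TCP stream tops out near 1.4 Mbit/s, regardless of link capacity.
print(f"{tcp_throughput_bps() / 1e6:.2f} Mbit/s")  # → 1.42 Mbit/s
```

This ceiling is a property of TCP's loss-based congestion control, which is why tools built on protocols that do not back off this way can keep a high-bandwidth WAN link full.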
Attain high-speed replication and synchronization
Achieve massive scale
Harness the power of an architecture that enables synchronization of millions of small files or multi-terabyte files with support for concurrent sessions, clustering and multi-gigabit transfer speeds.
Sync more efficiently
Avoid hours of unnecessary copy time with Sync’s ability to intelligently recognize changes and file system operations and propagate them to remote peers as they occur, reducing your recovery time objective (RTO) and recovery point objective (RPO).
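The idea of recognizing changes rather than re-copying everything can be sketched in a few lines. This is a conceptual illustration only, assuming a simple size-and-mtime snapshot comparison; it is not Aspera Sync’s actual change-detection mechanism, which monitors file system operations directly:

```python
import os

def snapshot(root):
    """Walk a directory tree and record (size, mtime) for every file,
    keyed by path relative to the root."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            state[os.path.relpath(path, root)] = (st.st_size, st.st_mtime_ns)
    return state

def diff(old, new):
    """Return only the changes between two snapshots, so a sync tool
    can transfer just the created/modified files instead of the full set."""
    created = [p for p in new if p not in old]
    deleted = [p for p in old if p not in new]
    modified = [p for p in new if p in old and new[p] != old[p]]
    return created, deleted, modified
```

A full rescan like this still costs time on millions of files, which is why event-driven detection (propagating operations as they happen) is what actually shrinks RTO and RPO at scale.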
Customize your solution
Choose from several options that include uni- and bidirectional sync and one-to-one, one-to-many and full-mesh synchronization, in both push mode and pull mode among peers over WANs and LANs.
National Instruments accelerates development by 95 percent¹
Using the patented high-speed transfer technology in IBM Aspera, National Instruments was able to significantly accelerate its global software development pipeline by synchronizing large software builds between teams at 17 sites around the world from its centralized data center in Austin, Texas. The deployment made development timelines and costs far more reliable and predictable while significantly reducing delivery risk. Now the company is more responsive to customers and ultimately develops innovative products and features faster.
Footnotes
¹ Achieved by National Instruments, using IBM Aspera
² In synchronizing one million small files with an average file size of 100 KB and network latency and packet loss of 100 ms and 1 percent, respectively
³ In synchronizing more than 5,000 larger files with an average file size of 100 MB and network latency and packet loss of 100 ms and 1 percent, respectively