Why Object Storage Is Becoming The New Default

by Erik Ottem and Vivek Tyagi    Jul 27, 2017


Object storage has recently received a lot of attention as the latest form of data storage. But why should a consumer care about this technology, and what problems does it actually solve? Let's consider five common issues that object storage has been designed to address.


 Scalability

The biggest driver for object storage was the need to keep up with the mountain of data that has to be stored. The 21st century has changed the nature of data: transaction data could once be stored easily in structured block storage, but today we face a wild, fast-growing mix of data from social media platforms, audio, video, email, log files and more.

Placing all this data in traditional block or file storage would be not only expensive but also extremely difficult to manage. Some forms of data are still a good fit for traditional storage, but most no longer are. A key reason traditional storage struggles to keep up with this growth is the directory structure. A directory structure organizes data in a framework that makes it easy to access, but as the data grows the directory becomes larger and navigation becomes more and more taxing. That hurts performance, which is supposed to be one of traditional storage's highlights. Traditional storage may work when only a small percentage of active unstructured data needs to be stored, but it becomes a real problem for large amounts of inactive data.

Object storage takes a different approach. It uses a flat namespace in which metadata describes each object and a unique identifier locates it. This allows scaling without limits, and without negotiating an ever-larger directory structure. Object storage is designed to scale and to address the 95% of data in the enterprise and the cloud that is inactive, keeping up with the massive growth rates often seen in unstructured data.
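The flat-namespace idea can be sketched in a few lines of Python. This is a toy illustration, not a real object store: the `ObjectStore` class, its method names, and the UUID-based identifiers are all assumptions made for this example.

```python
import uuid

class ObjectStore:
    """Toy flat-namespace object store: no directories, just unique IDs."""

    def __init__(self):
        self._objects = {}  # unique ID -> (data, metadata)

    def put(self, data, metadata):
        # Every object gets a unique identifier; there is no path
        # hierarchy to navigate, so lookups stay flat as the store grows.
        object_id = str(uuid.uuid4())
        self._objects[object_id] = (data, dict(metadata))
        return object_id

    def get(self, object_id):
        return self._objects[object_id]

store = ObjectStore()
oid = store.put(b"video frame bytes",
                {"content-type": "video/mp4", "camera": "lobby"})
data, meta = store.get(oid)
```

Note that the caller never walks a directory tree: the identifier returned by `put` is the only handle needed, and the descriptive metadata travels with the object rather than being encoded in a path.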

 Cost Effective

Object storage is designed to manage large amounts of data that can tolerate lower performance than transaction data, which reduces the need for an expensive architecture. Where extremely low latency is required, users generally rely on caching and tiering to get the best performance at the least expense. When the overarching need is scale rather than performance, there is no need to invest in an expensive flash tier or in continually tuning a cache. By setting up a private cloud and making it self-service, users can further reduce overhead costs. Finally, retrieving data from tape can be expensive because of the human intervention involved; bringing that data online via object storage cuts the cost compared with restoring from tape.

 Data Integrity

The expectation is that data remains retrievable at all times, whenever the consumer needs it, and that what is retrieved is the complete data, not parts of it. As disk drives grow larger, however, RAID rebuild times grow with them, and with them the probability of data loss. Object storage deals with this threat differently, through erasure coding. Where RAID protects data by rebuilding an entire disk drive's contents, erasure coding rebuilds only the affected chunks of data. This provides data integrity of a new order. In addition, most object storage systems offer strong consistency, so consumers do not have to worry about stale data, and some also provide background data integrity checks and self-healing to ensure the data remains uncorrupted.
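The core idea behind erasure coding, that a lost chunk can be recomputed from the surviving ones, can be shown with a single XOR parity chunk. This is purely illustrative and tolerates exactly one lost chunk; real object storage systems use stronger codes such as Reed-Solomon that survive multiple simultaneous failures.

```python
def make_parity(chunks):
    """Compute an XOR parity chunk over equal-length data chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving_chunks, parity):
    """Rebuild the single missing chunk from the survivors plus parity.

    XOR is self-inverse, so XOR-ing the survivors with the parity
    chunk yields exactly the chunk that was lost.
    """
    return make_parity(list(surviving_chunks) + [parity])

chunks = [b"obje", b"ct-s", b"tore"]   # an object split into 3 chunks
parity = make_parity(chunks)

# Simulate losing the middle chunk and rebuilding it:
rebuilt = recover([chunks[0], chunks[2]], parity)
```

Only the missing chunk is recomputed; there is no need to rebuild an entire drive's worth of data, which is why erasure-coded rebuilds scale better than RAID as drives grow.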


 Simplified Management

Managing data at terabyte scale is already challenging; managing it at petabyte scale requires a completely different approach. Traditional storage offered rack-based management: adding users, identifying failed HDDs, adding encryption, provisioning new storage and handling the other day-to-day tasks of the data world. This was feasible when data was measured in gigabytes, and it was even enhanced to support terabyte scale, but it does not work well at petabyte scale. As data continued to grow, the need for a new approach to managing it became ever more evident.

With object storage, one manages the namespace rather than the rack of storage. A namespace can represent a single rack or many, and those racks can be local or geographically dispersed. However they are configured, all of the data can be managed through a single pane of glass. This improves productivity and gives a broader view of the available storage resources.

 Infrastructure for Cloud

Twentieth-century applications were accessed through a GUI or a command line; the 21st century brought portals such as the browser. The browser acts as a gateway into the cloud, and via object storage it can also provide access to a private cloud. Most consumer applications use Amazon's S3 API to interact with the cloud, and the industry has recognized its potential: most object storage vendors support S3 as the dominant access method. Its advantages include easy access, strong security, and low overhead. In short, the cloud is a 21st-century technology, and object storage is the infrastructure supporting it.
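At the wire level, S3 access is plain HTTP: every object is addressed by a bucket and a key, and is read or written with ordinary GET and PUT requests. The sketch below only builds the request shape and does not send anything; the endpoint hostname is a made-up placeholder, and a real client (boto3, for example) would also attach authentication signatures to each request.

```python
def s3_request(method, bucket, key,
               endpoint="s3.example-private-cloud.local"):
    """Build the URL for a virtual-hosted-style S3 request.

    Illustrative only: the endpoint is a hypothetical private-cloud
    host, and real requests also carry signed authentication headers,
    omitted here.
    """
    if method not in ("GET", "PUT", "DELETE", "HEAD"):
        raise ValueError("unsupported S3 method: " + method)
    # Virtual-hosted style puts the bucket in the hostname and the
    # object key in the path.
    return method, f"https://{bucket}.{endpoint}/{key}"

method, url = s3_request("GET", "backups", "2017/07/db-dump.gz")
```

Because the interface is just HTTP verbs against bucket/key URLs, any browser-based or server-side application can talk to an S3-compatible private cloud the same way it talks to the public one.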

Although the migration of data to object storage is not yet obvious, the reasons for it are becoming visible, making it important for enterprises to keep up with this trend.

[Erik Ottem, Director of Product Marketing, Data Center Systems Western Digital Corporation and Vivek Tyagi, Director for India business development, SanDisk brand Commercial sales and Support at Western Digital Corporation.]

[Disclaimer: The views expressed in this article are solely those of the authors and do not necessarily represent or reflect the views of Trivone Media Network's or that of CXOToday's.]