HPC to enable India to witness more homegrown patents

by Darinia Khongwir    Apr 04, 2013

Hemant Agarwal

The Centre for Development of Advanced Computing (C-DAC) is the premier R&D organization of the Department of Electronics and Information Technology (DeitY), Ministry of Communications & Information Technology (MCIT), for carrying out R&D in IT, electronics and associated areas. Netweb Technologies recently deployed one of the fastest supercomputers at C-DAC, Pune. Hemant Agarwal, CTO, Netweb Technologies, talks to CXOToday about how HPC will lead to more IPs in India.

What is the market for High Performance Computing (HPC) for Indian enterprises? How has it evolved in the past few years?

People with very high computing needs initially relied on specialized hardware available from select vendors. HPC in India started in the late eighties, and access was limited to a privileged few. The use of COTS (commercial off-the-shelf) x86 systems for building HPC led to a continuous reduction in the price of such solutions, and we started seeing growth in demand in 2003. Earlier it was mostly limited to scientific computing and required a lot of effort to set up. The interconnects used were proprietary and there were very few vendors making them. Over the years, HPC became more affordable and the choice of interconnects narrowed down to either Ethernet or InfiniBand, with multiple players offering each, giving customers a choice.

At some point people realized that computing performance was not growing as per users' requirements, and that was when people started talking about using special-purpose accelerator boards to improve performance. These accelerator boards were proprietary and each had its own set of development tools, so migrating code required a lot of effort, not only initially but also when moving from one platform to another.

Once GPU computing became available, most of the other accelerators fell out of favour, and GPUs made supercomputing affordable. A single computer could now do what would earlier have been possible only with a large number of systems connected over a high-speed interconnect. We were associated with an HPC deployment in late 2003 and early 2004 that used 144 dual-processor servers to deliver a performance of 1 TFLOP, which at that time was considered high performance and was the fastest HPC in India in the academic segment. Now you can get single systems with a performance of a few TFLOPS. This has not only brought supercomputing within reach of many small users, it has also led to a dramatic increase in performance per watt, resulting in greener computing.
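The cluster-versus-single-system comparison above rests on standard theoretical-peak arithmetic: peak FLOPS is the product of node count, sockets, cores, clock speed, and floating-point operations per cycle. The sketch below illustrates the scale of that shift; the per-core figures are illustrative assumptions, not the specifications of the actual 2003-04 cluster or of any particular modern server.

```python
# Illustrative theoretical-peak arithmetic; all hardware figures are assumptions.

def peak_gflops(nodes, sockets_per_node, cores_per_socket, ghz, flops_per_cycle):
    """Theoretical peak = nodes x sockets x cores x clock (GHz) x FLOPs/cycle."""
    return nodes * sockets_per_node * cores_per_socket * ghz * flops_per_cycle

# 144 dual-processor, single-core nodes at ~1.7 GHz, 2 FLOPs/cycle (SSE2-era guess)
cluster = peak_gflops(144, 2, 1, 1.7, 2)   # ~979 GFLOPS, i.e. about 1 TFLOP

# A single dual-socket server with many-core CPUs and wide vector units (assumed)
single = peak_gflops(1, 2, 16, 2.5, 16)    # ~1280 GFLOPS from one box

print(f"cluster peak: {cluster:.0f} GFLOPS, single node: {single:.0f} GFLOPS")
```

The point of the sketch is that per-node throughput has grown enough for one machine to match what once took a 144-node cluster plus its interconnect, which is also where the performance-per-watt gain comes from.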

Where is HPC seeing more traction?

HPC in India has been mostly confined to scientific computing (R&D and educational institutions), and the government has been a major buyer. In the enterprise segment, oil & gas and biotechnology have been major consumers, along with some usage in EDA (Electronic Design Automation) and CFD (Computational Fluid Dynamics). Earlier, HPC was limited to buyers with deep pockets; as it has become more affordable and it has become possible to put up decent computing capacity with a relatively modest investment, more and more people are opting for it. We now see increased usage in the financial analysis markets.

Even in scientific computing we now see increased demand, as more and more researchers seek access to supercomputing infrastructure. Weather forecasting is one area where HPC has always been used in a big way worldwide, and with increased computing capacity it is now possible to predict weather more accurately. Other areas of scientific computing that have benefitted from the use of HPC are molecular dynamics and computational chemistry.

What are the challenges of deploying HPC?

The major challenges are power and cooling. A large HPC can be a real power guzzler, and the largest ones can consume a few megawatts. In India, where we have a major power shortage, this can be a spoilsport. To add to the complication, we need large generators to take care of blackouts and brownouts, and large UPS systems to switch between the two power sources without a shutdown.

The second challenge is cooling. With the rapid strides in computing capacity in terms of density, the challenges posed by cooling have also grown. Air-based cooling is effective only up to a certain density, and people are now increasingly resorting to new methods such as liquid cooling and immersion cooling to solve these problems.

With the supercomputer being deployed at C-DAC Pune, do you think India will see more IPs in the tech field?

Certainly. As more and more researchers get access to computing, the time required for innovation will come down, and we should see more IPs in many fields. HPC is reaching more such people not only because it is now affordable, but also because connectivity (networking) has improved, so many more people have access to such infrastructure either locally or remotely.

Is it true that with higher computing capability there is a greater push to save power?

This is true. As people realized that power is one of the major factors inhibiting the growth of computing, a lot of thinking went into saving power. We now have more computing capacity for every watt of power consumed. The other area where we are seeing savings is the cooling infrastructure. Earlier, for every watt of power consumed by the computing infrastructure, the auxiliary infrastructure (such as cooling) consumed another watt, leading to a PUE (Power Usage Effectiveness) of 2.0. We now find that certain new methods can bring this auxiliary power consumption down to a fraction of that. Therefore the new mantra is PUE. Netweb has launched solutions that offer a lower PUE and can actually bring it down to 1.1.
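The PUE figures quoted above follow directly from the metric's definition: total facility power divided by the power consumed by the IT equipment itself. A minimal sketch of that arithmetic, with illustrative wattages (not measurements from any Netweb deployment):

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# The kW values below are illustrative assumptions, not real measurements.

def pue(it_power_kw, auxiliary_power_kw):
    """Total facility power divided by the power doing useful computing."""
    return (it_power_kw + auxiliary_power_kw) / it_power_kw

# Old regime: one watt of cooling/auxiliary load per watt of compute -> PUE 2.0
print(pue(500, 500))   # 2.0

# With efficient cooling, the auxiliary load drops to a tenth -> PUE 1.1
print(pue(500, 50))    # 1.1
```

A PUE of 1.0 would mean every watt drawn goes to computing; driving the figure from 2.0 toward 1.1 roughly halves the facility's total power bill for the same computing load.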