News & Updates


Bikal High Performance Computing solutions

HPC and Big Data Solutions

September 2016

Our HPC services are designed to support your core competency by helping you manage your data. By relieving you of the day-to-day management of database systems, a public or private enterprise can focus on answering people's questions with the help of a data scientist who no longer has to spend time structuring data.

While there are many good cloud services that provide very effective high-performance compute power, there are still issues in accessing it in real time and managing the costs. Matt Solnit, CTO of SOASTA, said he had previously encountered issues with Amazon's compute-storage price policy. "You can end up paying for storage and horsepower you don't need," he said. "Amazon's great in a lot of ways. But Redshift (an AWS product) didn't map with some of our needs."

Bikal, with the help of Sugon's Silicon Cube HPC, wants to tailor software services for companies moving at least some cloud services to on-premise servers. We can address concurrent-query issues that may not be handled well by some of the general-purpose, and even higher-end, cloud data warehousing offerings. The Silicon Cube provides speed and scalability of queries through multi-cluster warehousing, which better supports users' queries during periods of heavy use, and adaptive query result caching, which tunes often-repeated queries to ensure high performance for reports and the like.
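To give a feel for the caching idea, the sketch below shows a minimal query result cache for often-repeated queries: a result is reused until it goes stale, so dashboards and reports that re-run the same aggregate do not hit the warehouse every time. This is a hypothetical illustration only, not the Silicon Cube implementation; the class, parameters and the run_query placeholder are assumptions for the example.

import hashlib
import time

class QueryResultCache:
    """Minimal illustration of result caching for often-repeated queries.
    Hypothetical sketch; not the Silicon Cube implementation."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}  # query key -> (timestamp, result)

    def _key(self, sql, params):
        raw = sql + repr(sorted(params.items()))
        return hashlib.sha256(raw.encode()).hexdigest()

    def get(self, sql, params, run_query):
        """Return a cached result if it is still fresh; otherwise run the
        query, cache the result, and return it."""
        key = self._key(sql, params)
        cached = self._store.get(key)
        if cached and (time.time() - cached[0]) < self.ttl:
            return cached[1]
        result = run_query(sql, params)
        self._store[key] = (time.time(), result)
        return result


# Example: a dashboard report that re-runs the same aggregate query
cache = QueryResultCache(ttl_seconds=60)

def run_query(sql, params):
    # Placeholder for a real database call
    return [("2016-09", 1234)]

rows = cache.get("SELECT month, SUM(sales) FROM orders GROUP BY month", {}, run_query)

In practice an adaptive cache would also invalidate entries when the underlying tables change, rather than relying on a fixed time-to-live as this sketch does.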

Bikal uses AWS to test many of its own and its customers' applications, and will run initial iterations in that cloud environment. Issues can arise with the compliance requirements of Business Impact Levels and with other security concerns. We do not claim that AWS cannot cover those issues, but if an air-gapped system is required then the only alternative is an on-premise system. The scalability of HPC works both ways: a 2U Silicon Cube can hold eight NVIDIA Tesla GPUs, and if this is combined with a flash array (we use technology from Pure Storage), an on-premise unit with one petabyte of storage can fit in a footprint of just 4U.

Spark Requires Storage

There’s still a major role for Hadoop in all this. An important aspect of Spark deployment is that the technology does not provide a distributed storage system. That’s key, because distributed storage is what allows vast, multi-petabyte datasets to be stored across clusters of low-cost commodity servers. So, any company wanting to use Spark must also implement a scalable and reliable information storage layer – which, in many cases, is proving to be the Hadoop Distributed File System (HDFS).
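As a minimal illustration of that division of labour, the PySpark sketch below reads data held in HDFS and runs a simple distributed computation over it: Spark handles the processing while HDFS provides the storage layer. The host name, port and file paths are hypothetical.

# Minimal PySpark sketch: Spark does the computation, HDFS provides the
# distributed storage layer. Host names and paths here are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hdfs-backed-spark-job")
         .getOrCreate())

# Read a dataset stored across the HDFS cluster
logs = spark.read.text("hdfs://namenode:8020/data/weblogs/*.log")

# A simple distributed computation: count lines containing errors
error_count = logs.filter(logs.value.contains("ERROR")).count()
print("Error lines:", error_count)

spark.stop()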

For further information and for any queries, please contact us or call us on +44 (0)20 7193 5708.