• Facebook is about to release the hardware design for its server that it uses to train AI software


Facebook is about to release the hardware design for the server it uses to train AI software, which will allow any company exploring AI to build similar systems.

The server, code-named Big Sur, is used by Facebook to run its machine learning programs, AI software that learns and gets better with time. Facebook is contributing Big Sur to the Open Compute Project, through which companies share designs for new hardware.


Image recognition is one common use for machine learning, in which a program analyzes a photo or video to identify the objects in the frame. The same methods are being applied to all kinds of large data sets, for tasks such as spotting credit card fraud and email spam.
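As a toy illustration of what "learning from examples" means here (a hypothetical sketch, not Facebook's software), a minimal image classifier can average the pixel values of the training images it has seen for each label, then assign a new image the label of the closest average:

```python
# Toy sketch of supervised learning for image recognition (hypothetical,
# not Facebook's code): a nearest-centroid classifier over tiny
# grayscale "images" represented as flat lists of pixel intensities.

def train(examples):
    """Average the pixel values of all training images for each label."""
    sums, counts = {}, {}
    for pixels, label in examples:
        acc = sums.setdefault(label, [0.0] * len(pixels))
        for i, p in enumerate(pixels):
            acc[i] += p
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, pixels):
    """Label a new image by its closest per-label average (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, pixels))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Tiny 2x2 "images": bright frames labeled "cat", dark frames "dog".
training_data = [
    ([0.9, 0.8, 0.9, 1.0], "cat"),
    ([0.8, 0.9, 1.0, 0.9], "cat"),
    ([0.1, 0.2, 0.0, 0.1], "dog"),
    ([0.2, 0.1, 0.1, 0.0], "dog"),
]
model = train(training_data)
print(predict(model, [0.85, 0.9, 0.95, 0.9]))  # classifies a bright image
```

Real systems like Facebook's use deep neural networks rather than averaged pixels, but the workflow is the same: feed in labeled examples, build a model, then use it to label new data.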

Facebook, Microsoft, and Google are all pushing hard at AI, which they believe will help them build smarter online services. Facebook has released open-source AI software in the past, but this is the first time it is releasing AI hardware.

Big Sur relies heavily on GPUs, which are more efficient than CPUs for machine learning tasks. The server can hold up to eight high-performance GPUs, each consuming up to 300 watts, and can be configured in a variety of ways via PCIe.

According to Facebook, the GPU-based system is twice as fast as its previous generation of hardware. In a blog post on Thursday, the company said that distributing training across eight GPUs lets it scale the speed and size of its networks by another factor of two.

One notable feature of Big Sur is that it requires no special cooling or other infrastructure. High-performance computers generate a lot of heat and are expensive to cool; to keep them from overheating, some are even immersed in exotic liquids, PCWorld reported.

With Big Sur, this is not necessary. Images released so far show a large airflow unit inside the server, with fans blowing cool air across the components. Facebook said it can deploy the servers in its existing air-cooled data centers, avoiding industrial cooling systems and keeping costs down.

Facebook said it will triple its investment in GPUs to bring machine learning to more of its services. "Big Sur is twice as fast as our previous generation, which means we can train twice as fast and explore networks twice as large," it said. "And distributing training across eight GPUs allows us to scale the size and speed of our networks by another factor of two."
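The factor-of-two scaling Facebook describes comes from data parallelism: each GPU computes gradients on its own slice of a training batch, and the slices' results are averaged. A minimal pure-Python sketch of the idea (hypothetical, using a one-parameter linear model in place of a real neural network and plain functions in place of GPUs):

```python
# Hypothetical sketch of data-parallel training (not Facebook's code):
# split a batch across 8 simulated workers ("GPUs"), compute each
# worker's mean gradient of a squared-error loss for the model y = w*x,
# then average the workers' gradients. With equal-sized shards, the
# average matches the gradient computed over the whole batch at once.

def gradient(w, batch):
    """Mean gradient of (w*x - y)^2 with respect to w over (x, y) pairs."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def data_parallel_gradient(w, batch, workers=8):
    """Shard the batch evenly across workers and average their gradients."""
    shard = len(batch) // workers
    shards = [batch[i * shard:(i + 1) * shard] for i in range(workers)]
    return sum(gradient(w, s) for s in shards) / workers

batch = [(float(x), 3.0 * float(x)) for x in range(1, 17)]  # true w = 3
w = 0.5
g_full = gradient(w, batch)
g_parallel = data_parallel_gradient(w, batch)
print(abs(g_full - g_parallel) < 1e-9)  # prints True: the gradients agree
```

Because each worker only touches its own shard, the eight computations can run at the same time, which is how adding GPUs speeds up training, or, equivalently, lets the same wall-clock time handle a larger network or batch.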

Facebook did not specify when the specifications for Big Sur will be released. More may be said about the system at the next OCP Summit in the U.S., expected to take place in March.