Facebook is making its custom AI hardware open-source

Shawn Knight


Facebook on Thursday announced plans to open-source its latest Open Rack-compatible hardware design for AI computing, codenamed Big Sur.

As Facebook staffers Kevin Lee and Serkan Piantino explain, Big Sur was built to house up to eight high-performance GPUs of up to 300 watts each. Leveraging Nvidia's Tesla Accelerated Computing Platform, the duo claims Big Sur is twice as fast as the previous generation, which was built from off-the-shelf components and designs.

The speed increase means Facebook can train neural networks twice as fast and explore networks that are twice as large as before (larger also meaning faster, in this instance). What's more, because training can be distributed across all eight GPUs, network size and speed can be scaled by another factor of two.
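The scaling claim above can be illustrated with a toy sketch (plain NumPy, not Facebook's actual Torch-based stack): in data-parallel training, each batch is split across workers (one per GPU, conceptually), each worker computes a gradient on its shard independently, and the results are averaged. With equal shard sizes, the averaged gradient matches the full-batch gradient exactly, which is why adding GPUs can scale batch throughput.

```python
import numpy as np

def batch_gradient(w, X, y):
    """Gradient of the mean-squared-error loss 0.5*||Xw - y||^2 / n w.r.t. w."""
    n = len(y)
    return X.T @ (X @ w - y) / n

def data_parallel_gradient(w, X, y, workers=8):
    """Split the batch across `workers` equal shards (one per GPU, conceptually),
    compute each shard's gradient independently, then average the results."""
    X_shards = np.array_split(X, workers)
    y_shards = np.array_split(y, workers)
    grads = [batch_gradient(w, Xs, ys) for Xs, ys in zip(X_shards, y_shards)]
    return np.mean(grads, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))   # batch of 64 examples, 4 features: 8 per "GPU"
y = rng.normal(size=64)
w = rng.normal(size=4)

g_full = batch_gradient(w, X, y)
g_par = data_parallel_gradient(w, X, y, workers=8)
print(np.allclose(g_full, g_par))  # the two gradients agree
```

In a real system each shard's gradient is computed on a separate device and the average requires an all-reduce over an interconnect, which is where the custom chassis design matters; the arithmetic, however, is exactly this.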

It may seem foolish for a tech giant like Facebook to openly give away its AI secrets, but it's actually a strategic move that will help both the social network and the artificial intelligence community advance at a faster pace.

Think about it: if Facebook can hire employees who already have experience with its AI systems, that will save the company time and money on training. The move isn't unprecedented, either; Google did essentially the same thing last month when it open-sourced its artificial intelligence engine, TensorFlow.

Facebook said it will submit the design materials to the Open Compute Project (OCP) but didn't say exactly when that will happen.


 
Using Nvidia hardware it's doomed to fail...because nothing connected to green team is open source.
 
What are you talking about?
I doubt mosu (EDIT: and his echo Nitrotoxin) knows. He saw the word Nvidia and went immediately into full guerrilla marketing mode...basically a tech Pavlovian response.
Most other people would realize that Facebook's implementation isn't much different from how other vendors use the hardware. Amazon uses Tesla GPUs for its AWS EC2 service, Microsoft does likewise for Azure, and a host of HPC users run Nvidia hardware.

In fact, the only Nvidia-specific API is CUDA, and that isn't much of a hurdle considering porting the code to OpenCL isn't a particularly difficult task - and it's likely to get even simpler in due course.
Using Nvidia hardware it's doomed to fail
Lug your pulpit to Google, and then go on a worldwide tour, because it looks like more than a few government, research, and learning institutes haven't heard your message. Of the 101 HPC clusters on the TOP500 list using co-processors, 70 use Nvidia hardware, and that percentage is growing just as co-processor-equipped clusters are proliferating.
 

I was trying to say that he should have read the article more closely, not echoing him - this forum software is crap and sadly won't let you edit posts after you submit them. My point was that the Nvidia video cards are only part of the open-source solution, not the whole thing.
 
As a matter of fact, my comment was just to test your reaction, which proves me right. And yes, I did read the article; the point being that Facebook is making its design open source to attract devs working with AI. Do I have to elaborate? It seems some of you didn't realize that I don't personally hate Nvidia (or Apple, or Intel), but I strongly resent the commercial practices these companies use. The hint in this case is CUDA (the CUDA compiler, core library, and runtime library) not being open source, yet being used in an open-source project.
 
Lamest excuse ever for defending an indefensible stance. You blather on about Nvidia when its only contribution is that its GPU products are used by OCP GPU server partners (AMAX and Penguin). Meanwhile Intel, whose record on commercial practices dwarfs just about everyone's, actually supplies most of the server hardware, and is presently looking to squeeze every competing architecture (including OpenPOWER and HSA) out of the very HPC/server markets OCP will integrate with, by running roughshod over a supposedly "open" OpenHPC.

Bravo, mosu. I think your reaction speaks volumes: singling out a third-party IHV with no direct connection to the project, while omitting the one company that is both shady (your principal gripe) and attempting to subsume the entire big-iron industry to go along with its 98.5% market share in x86.

I honestly can't even see why CUDA is relevant here. The choice of GPGPU API is entirely up to the clients who rent time on the systems; this system, like Azure and AWS, is API agnostic. If you had read the article as you claim, and looked at the project's infrastructure, you would already know that.
 