For my final post on the SC'13 conference, which ended this past Friday in Denver, here are two intriguing technologies discussed toward the end.
1. Micron Automata
The Automata processor from Micron, a company best known for its RAM products, really brings the compute closer to the memory. It is not a general-purpose (Turing-complete) processor, but it is very fast at what it does: processing state machines, a domain that previously required FPGAs for the highest performance.
Developed in secret for seven years, the Automata processes non-deterministic finite automata (NFAs). Finite state machines are used in industrial control applications, text input stream parsing, robotics, network switches, and much more.
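To make the idea concrete, here is a toy software sketch of what processing an NFA means: every active state is tracked in parallel, which is what the Automata processor does directly in hardware. The NFA below is hypothetical, chosen only to illustrate the technique (it matches any input containing "ab"); the function and state names are mine, not Micron's API.

```python
def run_nfa(transitions, start, accept, text):
    """Simulate a non-deterministic finite automaton over an input stream.

    transitions: dict mapping (state, symbol) -> set of next states;
    the wildcard symbol '*' matches any input character.
    Returns True if an accepting state is ever reached.
    """
    active = {start}  # all currently active states, explored in parallel
    for ch in text:
        nxt = set()
        for s in active:
            nxt |= transitions.get((s, ch), set())
            nxt |= transitions.get((s, '*'), set())
        active = nxt
        if active & accept:
            return True
    return bool(active & accept)

# Hypothetical NFA: state 0 loops on anything, 'a' branches to state 1,
# and 'b' from state 1 reaches accepting state 2 (i.e., input contains "ab").
nfa = {
    (0, '*'): {0},
    (0, 'a'): {1},
    (1, 'b'): {2},
}

print(run_nfa(nfa, 0, {2}, "xxabyy"))  # True
print(run_nfa(nfa, 0, {2}, "xxbayy"))  # False
```

In software the inner loop over active states is the bottleneck; the Automata chip evaluates all active states simultaneously each clock cycle, which is where its speed advantage over a CPU comes from.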
2. D-Wave Quantum Computing
Colin Williams from D-Wave gave one of the last keynotes of the week, and he succeeded at his goal: dispelling the myths surrounding quantum computing. It's real, and it's here.
The D-Wave Two is large (10' x 10' x 12'), but its 512 coupled qubits occupy just a single silicon chip with layers of niobium, a superconducting metal that expresses quantum effects at low temperatures. The rest of the unit is cryogenic cooling to hold the chip at 20 millikelvin. Because the 512 qubits, through quantum effects, represent every possible state at once (by exploiting "parallel universes," as Williams put it), the machine can instantaneously find the optimum solution in a search space of 2^512 combinations. One real-world success he cited was optimizing a cancer radiation therapy plan to deliver a significant percentage (I cannot recall the exact figure from the presentation, perhaps 30%) less radiation than the best plan computed by conventional medical software.
Williams explained that the D-Wave solves just one type of problem, and any real-world problem must first be mapped onto that one problem type. Because of this limited problem class, he portrayed quantum computing as a companion to HPC (high-performance computing), which remains general-purpose.
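The one problem type D-Wave machines natively minimize is commonly described as QUBO (quadratic unconstrained binary optimization), equivalent to the Ising model. As a hypothetical illustration of what "mapping onto that one problem" produces, here is a tiny QUBO instance brute-forced classically; the coefficients are invented for the example, and the helper names are mine:

```python
from itertools import product

def qubo_energy(Q, x):
    """Energy of bit vector x under QUBO coefficients Q:
    E(x) = sum over (i, j) of Q[i, j] * x[i] * x[j]."""
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

def brute_force_minimum(Q, n):
    """Exhaustively check all 2^n bit vectors; returns a minimizer."""
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))

# Hypothetical 3-variable instance: diagonal terms reward x0 and x2
# individually, the (0, 2) coupling penalizes setting both together.
Q = {(0, 0): -1, (1, 1): 2, (2, 2): -1, (0, 2): 3}

best = brute_force_minimum(Q, 3)
print(best, qubo_energy(Q, best))  # minimum energy is -1
```

Brute force scales as 2^n and becomes hopeless quickly, which is the point of the 2^512 figure above: the annealing hardware explores that space physically rather than enumerating it.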
Concluding SC13 Remarks
The sentiments repeated several times throughout SC13 were:
- The need to bring compute closer to memory
- That more and more RAM is needed for performance
On this latter point, there is continued excitement over the Micron Hybrid Memory Cube, which stacks memory dies in 3D to provide increased density. Cramming these memory cubes onto GPU boards was a particularly popular idea.
When I toured the Janus supercomputer at CU Boulder, it was noted that its 96MB per node is today considered limiting, forcing some simulation applications onto other supercomputers in Colorado and Wyoming. HPC manufacturers are heeding the need for more RAM in new systems.
As I noted in my previous two SC13 blog posts, the road ahead for HPC is looking a lot different than it has for the past 20 years.
See: HPC Game Changer: IBM & NVidia New Architecture
See also: GPU is Common Ground between HPC and Hadoop