I have been watching advances in computing power for some time, since university really. I did research in parallel algorithms and architectures for my first position at the university and later applied it practically on Wall Street. In those days, super-expensive machines like the Intel Hypercube, the Paragon, and many other architectures were the backbone of the HPC community.

HPC (High Performance Computing) roughly breaks down into 4 categories:

- Big iron supercomputers (MIMD generally)
- Distributed computing (these days advertised as Cloud Computing)
- The emerging SIMD GPU based solutions
- Quantum Computing (not really here yet for the mainstream)

In the machine learning and optimisation world there are massive problems, some of which are not computable on von Neumann architectures, as their runtime would be astronomical. An (absurd) example of such a problem would be to simulate a large number of monkeys typing on typewriters, stopping when one produces the works of Shakespeare. The number of monkeys required to produce such a work, on average, is astronomical. This seems like an absurd problem, but it is comparable to the GP / GA approach.
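To make the "astronomical" claim concrete, here is a back-of-the-envelope sketch (my own illustration, not from the post): assuming a uniform 27-character alphabet (a–z plus space), the expected number of independent random strings needed before one matches a target of length n is 27^n.

```python
def expected_attempts(target: str, alphabet_size: int = 27) -> int:
    """Expected number of independent uniformly random strings of
    len(target) characters before one exactly matches the target.

    Each attempt matches with probability alphabet_size ** -len(target),
    so the expected number of attempts is alphabet_size ** len(target).
    """
    return alphabet_size ** len(target)

# Even a single short phrase is already out of reach for brute force:
phrase = "to be or not to be"  # 18 characters
print(expected_attempts(phrase))  # about 5.8e25 attempts
```

For the complete works (millions of characters), the exponent makes the number incomprehensibly large, which is why blind random search is hopeless and GP/GA methods rely on selection pressure rather than pure chance.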

Then, of course, there are numerous problems with high dimensionality and/or polynomial-order complexity.

**Supercomputing on the Cheap**

The FASTRA team at the University of Antwerp has put together an inexpensive multi-teraflop machine with 7 gaming cards. Check out their video.

Unfortunately, the "easy" part of these sorts of solutions is the hardware. The problem is the (often) great expense of developing one's models in a SIMD framework so that they can be applied to the GPU architecture. Although the low-level C variant used to program GPUs is now standardized, there are significant differences between GPU models: even if you manage to write a correct SIMD program, you may have to rearrange it for a specific GPU implementation. (I guess this is not all that different from my experiences with the big-iron parallel architectures of the past.)
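The mental shift involved can be illustrated with a toy example (my own sketch, using NumPy as a stand-in for the data-parallel style a GPU demands): the same computation written as an element-by-element loop versus one whole-array operation.

```python
import numpy as np

def saxpy_scalar(a, x, y):
    # One element at a time: the natural sequential (von Neumann) formulation.
    return [a * xi + yi for xi, yi in zip(x, y)]

def saxpy_simd(a, x, y):
    # Whole-array formulation: one logical operation over all elements,
    # the shape a GPU would execute across thousands of threads at once.
    return a * np.asarray(x) + np.asarray(y)
```

Restructuring a simple a*x + y is trivial; the expense the post refers to comes from recasting an entire model this way, and then re-tuning the memory layout and work partitioning for each GPU generation.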

One could have a team devoted to parallelization, tuning, and retuning/reworking for the new GPUs that come out periodically. Very time consuming!

For my work, the problems that would map well are particle filters and Monte Carlo based models, each of which has obvious fine-grained parallel operations.
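A minimal sketch of why Monte Carlo maps so well (my own illustration, not code from the post): every sample is independent, so the whole batch can be evaluated as one data-parallel operation. Here a vectorized estimate of pi stands in for a real pricing or filtering model.

```python
import numpy as np

rng = np.random.default_rng(42)

def pi_monte_carlo(n: int) -> float:
    # Draw n points uniformly in the unit square. Each point is independent,
    # so the test below is embarrassingly parallel -- a natural SIMD/GPU fit.
    pts = rng.random((n, 2))
    inside = (pts ** 2).sum(axis=1) <= 1.0
    # Fraction inside the quarter circle, scaled to estimate pi.
    return 4.0 * inside.mean()

print(pi_monte_carlo(1_000_000))  # approximately 3.14
```

Particle filters have the same structure at each step: the propagation and weighting of each particle is independent, with only the resampling step requiring coordination.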

**Quantum Computing**

The other notable announcement this week was Google’s use of quantum computing to solve pattern recognition problems. I have not done the leg-work to fully understand the algorithms in quantum computing, but broadly it seems to be a matter of framing one’s problems statistically as path integration problems (i.e., expectations), where quantum computing allows the paths to be explored simultaneously.

You have some good stuff on your blog! And I’m honored to be (mis-)classified with Derman et al on your blogroll – thank you!

I have only the most cartoon-ish understanding of quantum computing. But the link you provide isn’t about q.c. but about something called ‘quantum adiabatic algorithms.’ Sadly, I don’t have even a cartoon-ish perspective on these, as I’d never heard of them until now… and the linked paper’s abstract stumped me on the eighth word.

Thanks for helping me meet today’s quota of learning at least one new thing ;^>

Yes, you are right. I wrote this with a broad brush. There has been a dispute for some time as to whether the D-Wave computer is indeed a “Quantum Computer”.

I doubt I’ll get access to quantum devices for some time, but I can see the potential for my work.

I’m pretty sure QC is a baloney field, as far as actual hardware goes. It would be nice if it were otherwise, but there seems to be no evidence for it.

The post is a bit dated. I have to agree that it is much further away from anything realistic than my extrapolation in this post implied. The theory and physics of how best to implement a quantum computer are still in flux.

QC may only be appropriate for a subset of problems as well.

The question is where to go beyond the current technology. Perhaps the easiest next step is building silicon in the 3rd dimension, instead of on the 2D plane as is done now. I can’t claim to have studied the issues, but among them must be how to radiate the heat.

There is no point in implementing quantum algorithms on today’s computers. Besides, I did quantum computing research for a while, and I am pretty sure the D-Wave computer is a joke. The last I heard about it, maybe 4 or 5 years ago, it was an eight-bit experiment.