Von Neumann syndrome

The von Neumann syndrome is the cause of the supercomputing crisis and explains the reconfigurable computing paradox. The term was coined by Prof. C. V. Ramamoorthy after hearing a keynote by Reiner Hartenstein. For most applications, massively parallel computing systems with thousands or tens of thousands of processors achieve only disappointing performance, and programmer productivity usually declines dramatically as the number of processors grows (the "Law of More"). The problem is not the amount of available processing resources but the overhead-prone, memory-cycle-hungry inefficiency of moving data around and other communication requirements. The root cause of the von Neumann syndrome is the instruction-stream-driven computing paradigm. Migrating an application to reconfigurable computing on FPGAs or on coarse-grained reconfigurable platforms means shifting to the data-stream-driven anti machine paradigm, which uses data counters instead of a program counter (there is no instruction fetch at execution time). Instead of moving data around, the locality of execution is optimized by placement and routing.
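The contrast between the two paradigms can be sketched in a toy example. The following Python sketch (illustrative only; the function names and the generator-as-data-counter analogy are not from the source) shows an instruction-stream-style loop, where a program counter drives each step and every step implies a memory access, next to a data-stream-style loop, where a fixed "datapath" simply consumes operands streamed to it:

```python
# Toy contrast (an analogy, not real hardware): in a von Neumann machine a
# program counter steps through instructions and data is fetched on every
# step; in the anti machine model, data counters stream operands through a
# fixed, placed-and-routed datapath with no instruction fetch at run time.

def von_neumann_sum(memory, n):
    """Instruction-stream style: a program counter walks the steps."""
    acc = 0
    pc = 0  # program counter
    while pc < n:
        acc += memory[pc]  # each step implies a memory access cycle
        pc += 1
    return acc

def anti_machine_sum(stream):
    """Data-stream style: a fixed 'datapath' consumes a data stream.
    The generator plays the role of a data counter feeding operands."""
    acc = 0
    for operand in stream:  # only data flows; no instructions are fetched
        acc += operand
    return acc

memory = [1, 2, 3, 4]
print(von_neumann_sum(memory, 4))        # 10
print(anti_machine_sum(iter(memory)))    # 10
```

Both functions compute the same result; the point of the analogy is where the control resides: in the first, control state (the program counter) drives memory traffic, while in the second, control is fixed in the datapath and only data moves.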

One of the future battlefields affected by the von Neumann syndrome is the programming of many-core microprocessors. A many-core microprocessor combines multiple independent processor cores (from 4 to more than 30 cores, as pre-announced by 2007) on a single integrated circuit chip. In massive parallelism, programmer productivity declines rapidly with the number of CPU cores involved (the "Law of More": by the time the programming is ready, the hardware is obsolete). There are serious doubts whether thread-level parallelism will solve the programmer productivity problem of massive parallelism, and the few qualified high performance computing (HPC) specialists and supercomputer programmers each cover only a narrow application domain. Another solution discussed in the HPC and supercomputing community is reconfigurable computing, which promises better ways to cope with the memory wall, at the price of requiring a paradigm shift toward a dual-paradigm approach (von Neumann machine and anti machine). The conclusion: programming many-core microprocessors and FPGAs is also a problem of educational deficits, challenging the upgrade of obsolete CS-related curricula.