Overcoming the Programming Barriers to Unleash the Full Potential of Multicore Platforms
Multicore processors are now in nearly all electronic products, including smartphones, tablets, laptops, and servers, as well as the ubiquitous embedded devices that make up the Internet of Things (IoT). The shift to adding CPU cores, rather than continuing to increase clock speeds, began in 2004: clock speeds had risen to about 4 GHz, and at that speed the resulting leakage current consumed too much power and gave off far too much heat to be practical for many uses.
That shift put the burden on software developers to use these cores effectively, an effort the industry now calls software modernization. Initial efforts focused on simple partitioning for dual-core processors. Next came optimized libraries for compute-intensive functions that could run across multiple cores. As core counts increased, so did programming complexity, to the point that true parallel programming has become necessary, a difficult-to-master skill found only among the most elite programmers. The latest processors have a heterogeneous mix of cores, with specialized cores such as (GP)GPUs added alongside traditional CPU cores. This looks good on paper because GPUs deliver outstanding floating-point performance per watt. Unfortunately, it makes the programming challenge even harder: not only is parallelization required, but software must also be rewritten in a very different, and relatively low-level, programming language (e.g., CUDA or OpenCL).
Einstein was right…we were overdue to think differently.
"The significant problems we face cannot be solved using the same level of thinking we used when we created them." – Albert Einstein
Amdahl’s Law Illustrates Why This is So Critical
Parallelizing half of a software application seems like a great accomplishment. Yet as the chart shows, even parallelizing 50% of your application means it will never run more than twice as fast, no matter how many cores you add! This is also why simply substituting special libraries for some functions in an application is no longer sufficient.
The curves are asymptotic, yielding diminishing returns. As cores are added, the speedup tapers off, then effectively stops; the more processors you add, the worse the efficiency gets, falling toward zero as core counts climb. The only solution is to parallelize at near 100%. Unfortunately, humans cannot do that by hand at scale, and until now the only software tools have been manual "Band-Aid" add-ons to 1980s-era technology that did not address the underlying root issues.
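Amdahl's law makes that ceiling concrete: if a fraction p of a program is parallelized, the speedup on n cores is 1 / ((1 − p) + p/n), which can never exceed 1/(1 − p) no matter how many cores are added. A minimal sketch of the arithmetic (the function name is ours, for illustration only):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: overall speedup when a fraction p of the work
    is parallelized perfectly across n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# With only half the program parallelized, the speedup is capped at 2x:
print(amdahl_speedup(0.5, 2))          # 1.33x on 2 cores
print(amdahl_speedup(0.5, 1_000_000))  # ~2.0x, even with a million cores

# Even 99% parallelization leaves most of 64 cores' potential unused:
print(amdahl_speedup(0.99, 64))        # ~39x, not 64x
```

This is why near-100% parallelization, not library substitution for a few hot functions, is the only route to scaling with core count.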
Clearly a dramatically better approach is required so that all programmers, engineers, and scientists can program these systems faster, easier, and with high quality. SequenceL is breakthrough technology that addresses all of these needs. This patented auto-parallelizing technology is currently the only means, manual or automatic, of generating massive (near 100%) parallelisms, and it is remarkably quick and easy to learn and use.
It’s Time to Change the Game (Again)
The way humans deal with added complexity is via abstraction: machine code gave way to assembly, and assembly to high-level languages. SequenceL is the next level of abstraction for programmers, allowing them to write high-performance programs in an easy-to-use language.
Historically, each of these steps was met with great resistance, since abstraction often comes at the expense of performance. That is not the case with SequenceL: thanks to its patented CSP-NT technology, the compiler finds even fine-grained parallelisms, often where a programmer would not expect them or would not consider them worth the effort to code and test, so performance actually increases. This frees the programmer to focus on problem solving rather than worrying about parallelization, race conditions, and low-level target hardware details.
SequenceL Gets Rid of Decades of Programming Baggage
With SequenceL, the programming paradigm no longer mimics old single-step CPU hardware. The auto-parallelizing SequenceL compiler is free to decide which instructions can be executed in parallel without introducing unexpected race conditions. Compilers for most other languages cannot do this because of possible side effects and reassignment of variable values. In fact, SequenceL's parallelized C++ output provides a clean starting point for C/C++ compilers to apply further optimizations.
SequenceL is a purely functional, declarative language designed by math people for math and science people. In SequenceL, you describe the desired output in terms of the input, as functions; that is, you declare only "what" to compute, not "how" to compute it, so you never have to think about parallel execution.
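The "no side effects" property is what makes automatic parallelization safe. A small Python sketch (not SequenceL, and not from the source) illustrates why: because a pure function reads and writes no shared state, it can be mapped across workers in any order with no locks and no race conditions.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # Pure: the result depends only on the input;
    # no shared state is read or written.
    return x * x

def parallel_map(f, xs, workers=4):
    # Because f is side-effect-free, every execution order yields the
    # same result, so the work can be spread across workers freely.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(f, xs))

print(parallel_map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

In SequenceL this dispatch decision is made by the compiler rather than written by the programmer; the sketch only shows the property that makes such a decision safe.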
SQL, the widely used declarative language for databases, is a good analogy: SequenceL does for algorithmic code what SQL did for data access. Before SQL, programmers wrote their own database procedures in lower-level languages. The results were error-prone, difficult to read and write, and rarely performed as well as letting an engine such as Oracle or DB2 map the declarative query optimally onto the underlying hardware.
Provably Race-Free and Built Upon Open Standards
The SequenceL compiler automatically identifies all opportunities for parallel processing and generates massively parallel C++ code (and, optionally, OpenCL to support GPUs). SequenceL not only automatically parallelizes the code; the generated code is provably race-free, thanks to the two computational laws underpinning it. This is a huge benefit, since race conditions are the largest quality challenge a parallelized software development team faces, often a much larger effort than the code development itself.