Why SequenceL Works

SequenceL Gets Rid of Decades of Programming Baggage

With SequenceL, the programming paradigm no longer mimics old single-step CPU hardware, making it an excellent choice for software modernization projects. The auto-parallelizing SequenceL compiler is free to decide which instructions can be executed in parallel without introducing unexpected race conditions. Compilers for most other languages cannot do this because of the possibility of side effects and reassignment of variables. In fact, SequenceL’s parallelized C++ output provides a clean starting point for C/C++ compilers to perform further optimizations. And because SequenceL does not attempt to mimic hardware execution, it can easily be moved to different types of architectures.

SequenceL is a modern, purely functional, declarative language designed by math people for math and science people. In SequenceL, you describe the desired output in terms of the input, as functions; i.e., you declare only “what” to compute, not “how” to compute it, so there is no need to think about parallel execution.

SQL, the widely used declarative language for databases, is to data what SequenceL is to algorithmic code. Before SQL was created, programmers wrote their own low-level database access routines in lower-level languages. That code was error-prone, difficult to read and write, and did not perform as well as letting an engine such as Oracle or DB2 map the declarative query optimally onto the underlying hardware.

SequenceL is the next level of abstraction up in the history of programming.

No More Band Aids!


Language workarounds (example below)

  • Locks
  • Semaphores
  • Critical sections
  • Atomic operations
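
To make these concrete, here is a minimal C++ sketch (illustrative only, not taken from SequenceL or TMT material) of the lock, critical-section, and atomic-operation plumbing that hand-threaded imperative code typically carries:

    // Hand-written synchronization: the kind of band aid SequenceL code never needs.
    #include <atomic>
    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    std::mutex sum_mutex;            // lock guarding the shared total
    std::atomic<long> items_done{0}; // atomic counter avoids a data race without a lock

    int main() {
        long total = 0;
        std::vector<std::thread> workers;
        for (int t = 0; t < 4; ++t) {
            workers.emplace_back([&total, t] {
                for (int i = 0; i < 1000; ++i) {
                    {   // critical section: one thread at a time may update `total`
                        std::lock_guard<std::mutex> guard(sum_mutex);
                        total += t + i;
                    }
                    items_done.fetch_add(1, std::memory_order_relaxed);
                }
            });
        }
        for (auto& w : workers) w.join();
        std::cout << total << " (" << items_done << " items)\n";
    }

Forget the lock and the program has a race condition; forget the join and it may crash. None of this bookkeeping appears in the equivalent SequenceL source.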

Application optimizations for specific hardware platforms (example below)

  • Caching tricks
  • Different SIMD widths
  • Core counts
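
As a rough illustration (hypothetical code, not from any SequenceL release), per-platform tuning often means compile-time branches on the SIMD width and run-time probing of the core count:

    // Hand-tuned platform dispatch: another band aid the compiler and runtime can own.
    #include <cstddef>
    #include <thread>

    void scale(float* data, std::size_t n, float k) {
    #if defined(__AVX512F__)
        const std::size_t simd_width = 16;   // 512-bit registers hold 16 floats
    #elif defined(__AVX__)
        const std::size_t simd_width = 8;    // 256-bit registers hold 8 floats
    #else
        const std::size_t simd_width = 4;    // assume 128-bit SSE or NEON
    #endif
        // Real code would call width-specific intrinsics here; in this sketch the
        // width only controls how the loop is blocked.
        for (std::size_t i = 0; i < n; i += simd_width)
            for (std::size_t j = i; j < i + simd_width && j < n; ++j)
                data[j] *= k;
    }

    unsigned worker_count() {
        // Core-count probing that the application would otherwise have to do itself.
        unsigned cores = std::thread::hardware_concurrency();
        return cores ? cores : 1;   // hardware_concurrency() may return 0
    }

Every such branch has to be written, tested, and maintained per platform; the point of the list above is that this tuning moves out of the application code.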

Explicit Memory Management (example below)

  • Allocating and freeing memory
  • Pointers
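
For readers who have not had to do it, here is a tiny C++ sketch (illustrative only) of what explicit allocation and raw pointers look like, and why they are easy to get wrong:

    // Manual allocation, pointer arithmetic, and freeing: leaks, double frees,
    // and dangling pointers are all one mistake away.
    #include <cstdio>
    #include <cstdlib>

    int main() {
        const int n = 8;
        double* samples = static_cast<double*>(std::malloc(n * sizeof(double)));
        if (!samples) return 1;                // every allocation needs a failure path
        for (double* p = samples; p != samples + n; ++p)
            *p = 0.5 * (p - samples);          // raw pointer arithmetic
        std::printf("last = %f\n", samples[n - 1]);
        std::free(samples);                    // forget this and the memory leaks
        return 0;
    }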

Explicit Thread Management (example below)

  • Determining where to parallelize a program
  • Creating/deleting threads
  • Parallel extensions: OpenMP/TBB
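
As an example of the last bullet (generic OpenMP, not TMT code), even a one-loop dot product requires the programmer to decide where to parallelize and to get the sharing clauses right:

    // Hand-annotated parallelism: the pragma, the loop choice, and the reduction
    // clause are all the programmer's responsibility (build with -fopenmp).
    #include <vector>

    double dot(const std::vector<double>& a, const std::vector<double>& b) {
        double sum = 0.0;
        #pragma omp parallel for reduction(+ : sum)
        for (long i = 0; i < static_cast<long>(a.size()); ++i)
            sum += a[i] * b[i];
        return sum;
    }

Omit the reduction clause and the code still compiles, but it silently races on sum.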

TMT Does the Hard Work So You Don’t Have To

Immutable data structures enable extensive performance optimizations

  • Enable automatic parallelization that utilizes multiple CPU cores and the GPU for maximum performance
  • Patented CSP-NT technologies uncover even fine-grained parallelism, often where a programmer would not expect it to exist or consider it worth the effort to code and test
  • Far more vectorization can occur automatically than in C, C++, Fortran, etc. (see the sketch below)
  • Reassignment and memory reuse still happen, but at a lower level
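
One concrete reason behind the vectorization bullet (a generic C++ illustration, not TMT material): with mutable, aliasable data the compiler must assume an output array may overlap an input, so it either inserts runtime checks or skips vectorizing, unless the programmer adds a non-standard promise by hand. Immutable data makes that promise implicit.

    // The compiler cannot rule out a partial overlap between `out` and `in`,
    // which would create a loop-carried dependence.
    void saxpy_maybe_aliased(float* out, const float* in, float a, int n) {
        for (int i = 0; i < n; ++i)
            out[i] = a * in[i] + out[i];
    }

    // Only with the (non-standard) __restrict promise can it vectorize freely.
    void saxpy_no_alias(float* __restrict out, const float* __restrict in,
                        float a, int n) {
        for (int i = 0; i < n; ++i)
            out[i] = a * in[i] + out[i];
    }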

High-level language abstracts away the hardware and enables optimizations not possible in a low-level language

  • Allows application code to be processor/accelerator independent
  • Compiler can optimize over a larger portion of the program
  • E.g., handling parallel edge cases and fusing parallel operations (sketched below)
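
Fusing parallel operations can be pictured with a generic C++ sketch (not the compiler's actual output): two elementwise passes collapse into one, eliminating a temporary array and a second trip through memory. A whole-program compiler can apply this kind of combination automatically when the operations are free of side effects.

    #include <cstddef>
    #include <vector>

    // Unfused: two passes over the data and an intermediate vector.
    std::vector<float> scale_then_offset(const std::vector<float>& v, float k, float c) {
        std::vector<float> scaled(v.size()), result(v.size());
        for (std::size_t i = 0; i < v.size(); ++i) scaled[i] = v[i] * k;
        for (std::size_t i = 0; i < v.size(); ++i) result[i] = scaled[i] + c;
        return result;
    }

    // Fused: one pass, no temporary.
    std::vector<float> scale_then_offset_fused(std::vector<float> v, float k, float c) {
        for (std::size_t i = 0; i < v.size(); ++i) v[i] = v[i] * k + c;
        return v;
    }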

High abstraction allows the runtime to control memory layout, alignment, and access patterns

  • Optimizations take place without polluting the original algorithm
  • Compiler can instruct runtime on data usage patterns
  • Data is optimally aligned for the processors on all platforms
  • The runtime is cache-conscious and accesses data in cache-efficient patterns (illustrated below)
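
A familiar example of such a layout decision (a generic illustration, not a description of the SequenceL runtime's actual scheme) is storing data as a structure of arrays rather than an array of structures, so that a hot loop reads contiguous, cache-friendly memory:

    #include <cstddef>
    #include <vector>

    // Array of structures: a loop over only `x` strides through interleaved
    // fields, wasting cache-line space on `y` and `z`.
    struct PointAoS { float x, y, z; };
    float sum_x_aos(const std::vector<PointAoS>& pts) {
        float s = 0.0f;
        for (const auto& p : pts) s += p.x;
        return s;
    }

    // Structure of arrays: the same loop reads one contiguous array, which is
    // friendlier to both the cache and the vector units.
    struct PointsSoA { std::vector<float> x, y, z; };
    float sum_x_soa(const PointsSoA& pts) {
        float s = 0.0f;
        for (std::size_t i = 0; i < pts.x.size(); ++i) s += pts.x[i];
        return s;
    }

Because the layout lives in the runtime rather than in the algorithm, it can differ per platform without touching the original source.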

Get started now for free!

Download