Emerging multi-processor products, such as Multi-Processor Systems-on-Chip (MPSoCs), exploit concurrency to spread work across a system, improving performance. European competitiveness in microelectronics sectors such as wireless networking, telecommunications, automotive, and consumer products relies heavily on improving the reliability of such multi-processor systems. In this project, we develop predictive system-level analysis techniques to increase the reliability of concurrent multi-processor systems.
Our goal is to improve the reliability of multicore software running on heterogeneous embedded systems that use message-passing architectures, through both hardware implementations and software analysis techniques. Specifically, we are developing verification and coverage tools for new message-passing standards for embedded multicore systems, such as the Multicore Communication API (MCAPI). We have also built a benchmark generation framework for multicore software that speeds up performance evaluation of new embedded architectures.
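As a rough illustration of the kind of analysis such verification tools perform (this is a simplified sketch, not the project's actual tooling, and the two-operation channel model is a hypothetical stand-in for MCAPI-style connectionless messaging), the following explores all interleavings of abstract message-passing tasks to detect deadlocks:

```python
# Hypothetical sketch: each task is a list of (op, channel) steps, loosely
# modeled on connectionless message passing. "send" is non-blocking; "recv"
# blocks until a message is available on the channel.
def deadlocks(tasks):
    """Return True if some interleaving of the tasks' steps deadlocks."""
    def explore(pcs, pending, seen):
        key = (pcs, tuple(sorted(pending.items())))
        if key in seen:
            return False
        seen.add(key)
        enabled = []
        for t, pc in enumerate(pcs):
            if pc == len(tasks[t]):
                continue  # this task has finished
            op, chan = tasks[t][pc]
            # a send is always enabled; a recv needs a pending message
            if op == "send" or pending.get(chan, 0) > 0:
                enabled.append(t)
        if not enabled:
            # deadlock iff some unfinished task is stuck waiting
            return any(pc < len(tasks[t]) for t, pc in enumerate(pcs))
        for t in enabled:
            op, chan = tasks[t][pcs[t]]
            new_pending = dict(pending)
            new_pending[chan] = new_pending.get(chan, 0) + (1 if op == "send" else -1)
            new_pcs = tuple(pc + 1 if i == t else pc for i, pc in enumerate(pcs))
            if explore(new_pcs, new_pending, seen):
                return True
        return False
    return explore(tuple(0 for _ in tasks), {}, set())

# Two tasks that each wait for the other's message before sending: deadlock.
t0 = [("recv", "a"), ("send", "b")]
t1 = [("recv", "b"), ("send", "a")]
print(deadlocks([t0, t1]))  # True

# Reordering task 1 to send first removes the deadlock.
print(deadlocks([t0, [("send", "a"), ("recv", "b")]]))  # False
```

Real tools must additionally handle dynamic endpoint creation, bounded buffers, and state-space explosion, but the exhaustive-interleaving idea is the same.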
Our goal is to improve and speed up the development of new computer architectures, specifically embedded multicore architectures such as those in tablets and mobile phones, through the use of synthetic benchmarks. In particular, we plan to characterize existing SystemC benchmark suites and generate synthetic benchmarks from them. These benchmarks will be much smaller than the original applications, yet have similar performance characteristics. During characterization, we plan to use software architectural patterns and investigate the parallel computation behavior of existing benchmarks.
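The characterize-then-synthesize idea can be sketched in miniature (a hypothetical toy, assuming the workload is reduced to a trace of abstract operation types rather than a real SystemC model): measure the operation mix of a long trace, then emit a much shorter synthetic trace with approximately the same mix.

```python
import random
from collections import Counter

# Hypothetical sketch: characterize a "benchmark" (here, a trace of abstract
# operation types) by its operation mix, then generate a much shorter
# synthetic trace with approximately the same mix.
def characterize(trace):
    """Return the fraction of the trace taken by each operation type."""
    counts = Counter(trace)
    total = len(trace)
    return {op: c / total for op, c in counts.items()}

def synthesize(profile, length, seed=0):
    """Draw a synthetic trace of the given length matching the profile."""
    rng = random.Random(seed)
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return rng.choices(ops, weights=weights, k=length)

# A long "original" trace: 60% ALU, 30% memory, 10% branch operations.
original = ["alu"] * 6000 + ["mem"] * 3000 + ["br"] * 1000
profile = characterize(original)
synthetic = synthesize(profile, length=100)  # 100x smaller
print(profile)         # {'alu': 0.6, 'mem': 0.3, 'br': 0.1}
print(len(synthetic))  # 100
```

Real characterization would track richer features (dependence chains, communication patterns, memory locality), but the principle of matching a measured profile with a smaller generated workload is the same.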
The extremely high performance, huge memory bandwidth, and comparatively low cost of Graphics Processing Units (GPUs) are turning GPUs into a hardware platform for parallelizing many general-purpose applications, and the emergence of general-purpose programming environments for GPUs, such as CUDA, shortens the learning curve of GPU programming. Meanwhile, the complexity of electronic designs has been rapidly increasing; verifying such systems is an immense challenge, and products are often delivered with bugs. In this project, we plan to use GPUs to parallelize digital logic simulation, the most popular verification technique in industry and the one that consumes the highest percentage of the overall design cycle. Any speedup of logic simulation yields productivity gains in the design cycle and ultimately fewer bugs: with our parallel simulation techniques, substantially more design behaviors can be checked for correctness, improving verification quality, while the performance gains from parallelism shorten the overall design cycle.
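One common source of parallelism in logic simulation is levelization: gates in the same topological level of the netlist have no data dependencies on each other, so each can be evaluated concurrently (on a GPU, typically one thread per gate). The sketch below illustrates the idea sequentially in Python on a hypothetical tuple-based netlist format; it is not the project's CUDA implementation.

```python
# Hypothetical sketch of levelized (oblivious) gate-level logic simulation.
# Gates within a level are mutually independent, so a GPU could evaluate
# them in parallel; here we evaluate them in a plain loop.
GATE_FNS = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
}

def simulate(levels, inputs):
    """levels: list of levels; each level is a list of (out, gate, in1, in2)
    tuples. inputs: dict of primary input values (0/1)."""
    values = dict(inputs)
    for level in levels:
        # Every gate in a level reads only values computed in earlier
        # levels (or primary inputs), so this comprehension is the part
        # that maps to one-thread-per-gate on a GPU.
        new = {out: GATE_FNS[g](values[a], values[b]) for out, g, a, b in level}
        values.update(new)
    return values

# A 1-bit half adder: sum = a XOR b, carry = a AND b (both in level 0).
netlist = [[("sum", "XOR", "a", "b"), ("carry", "AND", "a", "b")]]
out = simulate(netlist, {"a": 1, "b": 1})
print(out["sum"], out["carry"])  # 0 1
```

A production GPU simulator must also partition the netlist across thread blocks and manage device memory traffic, which is where most of the engineering effort lies.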
Functional design verification is the task of establishing that a given design accurately implements the intended functional behavior. Today, design verification dominates the cost of electronic system design; however, designs continue to be released with latent bugs. Coverage metrics evaluate the quality of the tests used for design verification, and a major problem with design verification is the lack of good coverage metrics. We propose to attack the design verification quality problem with a combination of two approaches. First, we will use system-level design models to alleviate the complexity introduced by low-level implementation models. Second, we will develop novel coverage metrics for such system-level design models. Our work will ultimately allow verification engineers to automatically direct the combined hardware/software system into critical scenarios where coverage is low, thereby exploring the difficult corner cases, and exposing the associated bugs, that were previously hard to find.
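To make the notion of a system-level coverage metric concrete, here is a minimal sketch (a hypothetical example, not one of the novel metrics the project proposes): transition coverage over an abstract state-machine model of a design, where a test is a sequence of input events and coverage is the fraction of model transitions the test suite exercises.

```python
# Hypothetical sketch of a simple system-level coverage metric:
# transition coverage over an abstract state-machine model.
def transition_coverage(transitions, initial, tests):
    """transitions: {(state, event): next_state}; tests: lists of events.
    Returns the fraction of transitions exercised by the test suite."""
    covered = set()
    for test in tests:
        state = initial
        for event in test:
            key = (state, event)
            if key not in transitions:
                break  # event undefined in this state; test goes no further
            covered.add(key)
            state = transitions[key]
    return len(covered) / len(transitions)

# Abstract model of a bus arbiter: idle -> granted -> idle, plus an error arc.
fsm = {
    ("idle", "req"):     "granted",
    ("granted", "done"): "idle",
    ("granted", "err"):  "idle",
}
tests = [["req", "done"], ["req", "done", "req", "done"]]
print(transition_coverage(fsm, "idle", tests))  # 2/3: the "err" arc is uncovered
```

A low score like this one points the verification engineer at the uncovered behavior (here, the error arc), which is exactly the kind of feedback an automated coverage-directed flow would use to steer the system toward difficult corner cases.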