Presentation at Jornadas Sarteco

J. Gracia-Morán presented the paper entitled “Un nuevo Código de Corrección de Errores matricial con baja redundancia” (“A new matrix Error Correction Code with low redundancy”) at the Jornadas Sarteco, held in Teruel on 12-14 September 2018.

Presentation at EDCC 2018

David de Andrés has presented the paper entitled “Accurate Robustness Assessment of HDL Models through Iterative Statistical Fault Injection” at EDCC 2018.

This work has been selected as one of the “Distinguished Papers”.

Congratulations!!

Paper accepted at Computing journal

The paper entitled “Simulating the effects of logic faults in implementation-level VITAL-compliant models”, written by Ilya Tuzov, David de Andrés and Juan-Carlos Ruiz, has been accepted for publication in the journal Computing (Springer). This paper extends the paper entitled “Accurately simulating the effects of faults in VHDL models described at the implementation-level”, which received the best paper award at EDCC 2017.

Abstract:

Simulation-based fault injection (SBFI) is a well-known technique to assess the dependability of hardware designs specified using Hardware Description Languages (HDL). Although logic faults are usually introduced in models defined at the Register Transfer Level (RTL), the most accurate results can be obtained by considering implementation-level models, which reflect the actual structure and timing of the circuit. These models consist of a list of interconnected technology-specific components (macrocells), provided by vendors and annotated with post-place-and-route delays. Macrocells described in the Very High Speed Integrated Circuit HDL (VHDL) should also comply with the VHDL Initiative Towards Application Specific Integrated Circuit Libraries (VITAL) standard to be interoperable across standard simulators. However, the rigid architecture imposed by VITAL means that fault injection procedures applied at RTL cannot be used straightforwardly. This work identifies a set of generic operations on VITAL-compliant macrocells that are later used to define how to accurately simulate the effects of common logic fault models. The generality of this proposal is supported by the definition of a platform-specific fault injection procedure based on these operations. Three embedded processors, implemented using the Xilinx toolchain and SIMPRIM library of macrocells, are considered as a case study, which exposes the gap between robustness assessments performed at RTL and at the implementation level.
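For readers unfamiliar with how such injections are driven in practice, the following is a minimal sketch of the general idea: steering a standard HDL simulator from a script so that a macrocell output is forced to a faulty value for a bounded interval and then released. The signal paths, times and file names are hypothetical placeholders, and this is not the injection procedure defined in the paper; it only illustrates the kind of simulator-level operation (here, ModelSim/QuestaSim force/noforce commands generated from Python) on which SBFI frameworks commonly rely.

```python
# Illustrative only: generate per-experiment TCL scripts that emulate a
# transient stuck-at fault on a macrocell output by forcing the signal
# to a fixed value and releasing it after a given duration.
# All paths, times and values below are hypothetical.

FAULT_TARGETS = [
    # (hierarchical signal path, stuck-at value, injection time (ns), duration (ns))
    ("/tb/dut/alu/mc_42/O", 0, 1200, 40),
    ("/tb/dut/regfile/mc_7/O", 1, 3500, 40),
]

def tcl_for_fault(path, value, t_inject, duration):
    """TCL commands: simulate up to the injection instant, override the
    macrocell driver for `duration` ns, then restore it and run to the end."""
    return "\n".join([
        f"run {t_inject} ns",
        f"force -freeze {path} {value}",  # stuck-at value overrides the driver
        f"run {duration} ns",
        f"noforce {path}",                # release: the original driver resumes
        "run -all",
    ])

for i, (path, value, t, d) in enumerate(FAULT_TARGETS):
    with open(f"inject_{i}.do", "w") as script:
        script.write(tcl_for_fault(path, value, t, d) + "\n")
```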

Best paper award nomination at LADC

The paper entitled “Speeding-up robustness assessment of HDL models through profiling and multi-level fault injection”, authored by Ilya Tuzov, David de Andrés and Juan-Carlos Ruiz, has been nominated for the best paper award at LADC 2018.

Congratulations!!!

Paper accepted at LADC (II)

The paper entitled “Speeding-up robustness assessment of HDL models through profiling and multi-level fault injection”, authored by Ilya Tuzov, David de Andrés and Juan-Carlos Ruiz, has been accepted at LADC 2018.

Abstract:

Simulation-based fault injection is an indispensable technique to assess the robustness of hardware components defined by means of hardware description languages (HDL). However, the high complexity of modern hardware and its strict verification accuracy requirements lead to an unfeasible number of fault injection experiments, even when following statistical (instead of exhaustive) approaches, since accurate implementation-level models are up to three orders of magnitude slower to simulate than (inaccurate) behavioural ones. This paper proposes the combined use of multi-level fault injection in sequential logic and profiling of the use of combinational logic to guarantee the accuracy of results while keeping experimentation within reasonable time bounds. First, the sequential logic in the implementation-level model is matched with the associated structures in its related behavioural-level model. In this way, most fault injection experiments targeting sequential logic can be executed at the much faster behavioural level while maintaining the accuracy of results. Second, by profiling the implementation-level model, run-time statistics (inactive macrocells, switching activity, etc.) can be exploited to preserve the precision of results while reducing the number of experiments targeting combinational logic. The case study of three embedded processor models illustrates both approaches and quantifies the experimental speed-up derived from their combined use.
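To give a feel for the orders of magnitude involved, the short sketch below (illustrative numbers only, not the paper’s experimental setup) first computes how many experiments a statistical campaign needs for a given confidence and error margin, using the standard finite-population sample-size formula commonly used in statistical fault injection, and then shows how shifting a fraction of those experiments to a behavioural model that simulates roughly a thousand times faster shrinks the campaign duration.

```python
import math

def sample_size(population, margin=0.01, z=2.576, p=0.5):
    """Experiments needed out of `population` possible (location, time)
    fault pairs; z = 2.576 gives ~99% confidence, p = 0.5 is the worst case."""
    return math.ceil(population /
                     (1 + margin**2 * (population - 1) / (z**2 * p * (1 - p))))

n = sample_size(10_000_000)          # hypothetical fault space of 10M faults
print(f"experiments needed: {n:,}")  # ~16,600 instead of 10,000,000

# Campaign duration when a fraction of experiments runs at the ~1000x
# faster behavioural level (per-experiment times are hypothetical).
t_impl, t_beh = 60.0, 0.06           # seconds per experiment at each level
for f in (0.0, 0.5, 0.9):
    hours = n * (f * t_beh + (1 - f) * t_impl) / 3600
    print(f"{f:.0%} at behavioural level: {hours:6.1f} h")
```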

Paper accepted at LADC (I)

The paper entitled “Correction of Adjacent Errors with Low Redundant Matrix Error Correction Codes”, authored by J. Gracia-Morán, L.J. Saiz-Adalid, J.C. Baraza-Calvo and P.J. Gil-Vicente, has been accepted at LADC 2018.

Abstract:

The continuous growth of the integration scale of CMOS circuits has led to an increase in the capacity of memory systems, but also in their fault rate. The probability of suffering Single Cell Upsets (SCUs) or Multiple Cell Upsets (MCUs) has thus risen.
Traditionally, Error Correction Codes (ECCs) are used in memory systems to correct errors. However, when using ECCs, it is necessary to find a good balance between the redundancy of the code; the area, power consumption and delay overheads of the encoding and decoding circuits; and the error coverage achieved.
In this work, we present two new low-redundancy matrix ECCs that are able to correct different types of adjacent errors. Both codes provide the same error coverage but differ in redundancy. In this way, we have been able to study the influence of these different levels of low redundancy on the area, power consumption and delay overheads. We have also compared our proposals to a well-known matrix code, in terms of overhead vs. coverage, using a recently introduced metric. In all cases, our proposals achieve better scores.
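As background for readers unfamiliar with matrix codes, the sketch below shows the basic idea in its simplest textbook form: data bits arranged as a two-dimensional array protected by per-row and per-column parity, so that a single-bit upset is located at the intersection of the failing row and column. This toy code is only a generic illustration, not one of the low-redundancy codes proposed in the paper, whose point is precisely to cut this scheme’s redundancy while extending correction to adjacent errors.

```python
# Toy matrix code: 4x4 data bits plus per-row and per-column parity.
ROWS, COLS = 4, 4

def encode(bits):
    """bits: list of ROWS*COLS ints (0/1); returns the data matrix plus
    per-row and per-column parity bits."""
    m = [bits[r * COLS:(r + 1) * COLS] for r in range(ROWS)]
    row_par = [sum(row) % 2 for row in m]
    col_par = [sum(m[r][c] for r in range(ROWS)) % 2 for c in range(COLS)]
    return m, row_par, col_par

def correct(m, row_par, col_par):
    """Locate a single-bit upset at the crossing of the failing row and
    failing column parity checks, and flip it back."""
    bad_rows = [r for r in range(ROWS) if sum(m[r]) % 2 != row_par[r]]
    bad_cols = [c for c in range(COLS)
                if sum(m[r][c] for r in range(ROWS)) % 2 != col_par[c]]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        m[bad_rows[0]][bad_cols[0]] ^= 1
    return m

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1]
m, rp, cp = encode(data)
m[2][1] ^= 1                                   # inject a single cell upset
assert sum(correct(m, rp, cp), []) == data     # the upset is corrected
```

Note that even this toy scheme already spends 8 check bits on 16 data bits (50% redundancy), which motivates the search for lower-redundancy variants.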

Seminar at the INSA École d’Été 2018 on Cyber-Physical Systems

On July 18th, Juan Carlos Ruiz participated in the Summer School CPS 2018. His talk was entitled “Design and Verification of Safe and Secure VLSI Systems”.

Abstract:

Current embedded VLSI systems are widespread and operate in a multitude of applications across different markets, ranging from life support, industrial control, and avionics to consumer electronics. It is unquestionable that critical systems require different degrees of fault tolerance and security, given the human lives or great investments at stake, but nowadays the lack of robustness exhibited by consumer products (against unexpected failures and attacks) can also undermine their success in the marketplace and negatively affect the reputation of the manufacturer.

On the one hand, current practices for the design and deployment of hardware fault- and intrusion-tolerance techniques remain specific in practice (defined on a case-by-case basis) and mostly manual and error-prone. This situation is aggravated by time-to-market considerations that promote the reuse and integration of (black- and white-box) IP cores developed by third, and sometimes untrusted, parties. This seminar addresses the challenging problems of engineering HW fault tolerance strategies in a generic way and supporting their subsequent instantiation. The approach relies on metaprogramming to specify fault tolerance mechanisms and on open compilation to automate their deployment on target cores.

On the other hand, the assessment, verification, optimization and selection (benchmarking) of the resulting HW implementations is far from being properly supported by existing Electronic Design Automation (EDA) tools when dependability becomes an important design concern. This seminar will address this situation with efficiency and flexibility in mind. It will be explained how, and to what extent, the HW implementation and analysis phases can be customized while relying on existing off-the-shelf languages; synthesis, mapping, placement and routing tools; and technology-dependent libraries. Three different embedded processor models will be used to exemplify how the aforementioned challenges can be addressed in practice when considering FPGAs as final implementation devices.

Presentation slides are accessible via this link: 07.2018.ET.INSA-Toulouse.Final.UPV

Paper accepted at WIICT 2018 (III)

The paper entitled “Towards dependability-aware design space exploration using genetic algorithms”, written by Quentin Fabry, Ilya Tuzov, Juan-Carlos Ruiz and David de Andrés, has been accepted at the Workshop on Innovation on Information and Communication Technologies (ITACA-WIICT 2018).

Abstract:

The development of complex digital systems poses design optimization problems that are today automatically addressed by Electronic Design Automation (EDA) tools. Deducing optimal configurations for EDA tools according to specific implementation goals is challenging even for simple HW models. Indeed, previous research demonstrates that such configurations may have a non-negligible impact on the performance, power-consumption, occupied area and dependability (PPAD) features exhibited by the resulting HW implementations. This paper proposes a genetic algorithm to cope with the selection of appropriate configurations of EDA tools. In contrast to statistical approaches, this type of algorithm has the benefit of considering all the effects of the configuration flags and the interactions among them. Consequently, it has a great potential for finding tool configurations that lead to implementations exhibiting optimal PPAD scores. However, there also exists the risk of incurring very time-consuming design space explorations, which may limit the usability of the approach in practice. Since the behavior of the genetic algorithm is strongly conditioned by the initially selected population and by the mutation, crossover and filtering functions selected for promoting evolution, these parameters must be determined very carefully on a case-by-case basis. In this publication, we rely on a multilinear regression model estimating the impact of synthesis flags on the PPAD features exhibited by the implementation of an Intel 8051 microcontroller model. Beyond the reported results, this preliminary research shows how, and to what extent, genetic algorithms can be integrated and used in the semi-custom design flow followed today by major HW manufacturers.
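To make the loop concrete, here is a minimal, self-contained sketch of a genetic algorithm over boolean synthesis flags. The flag names and the fitness function are hypothetical stand-ins: in the setting described above, fitness would be derived from the multilinear regression model of PPAD scores (or from actual tool runs), not from the toy function used here.

```python
import random

# Hypothetical two-level synthesis flags (not actual tool options).
FLAGS = ["opt_area", "retiming", "resource_sharing", "keep_hierarchy"]

def fitness(cfg):
    """Placeholder score: pretend some flags help, with one interaction
    term, plus noise mimicking measurement variability."""
    score = 2 * cfg[0] + cfg[1] - cfg[3] + 1.5 * cfg[1] * cfg[2]
    return score + random.gauss(0, 0.1)

def evolve(pop_size=20, generations=30, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in FLAGS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(FLAGS))     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(dict(zip(FLAGS, best)))
```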

Paper accepted at WIICT 2018 (II)

The paper entitled “Energy-aware Design Space Exploration for Optimal Implementation Parameters Tuning”, written by Ilya Tuzov, David de Andrés and Juan Carlos Ruiz, has been accepted at the Workshop on Innovation on Information and Communication Technologies (ITACA-WIICT 2018).

Abstract:

Determining the optimum configuration of semi-custom implementation tools to simultaneously optimize the energy consumption, maximum clock frequency, and area of the target circuit requires navigating through millions of configurations. Existing design space exploration approaches, like genetic algorithms, try to reduce as much as possible the number of configurations that must be implemented to find the (close to) optimum configuration. However, these approaches are not suitable when dependability-related properties must also be taken into account: to accurately estimate these properties, extensive simulation-based fault injection experiments must be executed for each configuration, leading to unfeasible experimentation times. This work proposes an alternative approach, based on statistical and operational research artifacts, to drastically reduce the design space while preserving the accuracy of results, thus enabling energy-aware design space exploration for the semi-custom implementation of logic circuits.
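As a rough illustration of the scale of the problem, and of the kind of reduction a classical design-of-experiments technique can deliver, the arithmetic below contrasts an exhaustive two-level exploration with a 2^(k-p) fractional factorial design, one plausible example of the statistical artifacts alluded to above (the flag count is hypothetical).

```python
n_flags = 26                    # two-level tool options (hypothetical count)
full = 2 ** n_flags
print(f"full factorial: {full:,} configurations")   # 67,108,864

# A 2^(k-p) fractional factorial implements only a structured subset,
# confounding p of the flags with interactions of the remaining k - p.
k, p = n_flags, 18
frac = 2 ** (k - p)
print(f"2^({k}-{p}) fractional factorial: {frac} configurations")  # 256
```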

Paper accepted at WIICT 2018 (I)

The paper entitled “A comparison of two different matrix Error Correction Codes”, written by J. Gracia-Morán, L.J. Saiz-Adalid, D. Gil-Tomás and P.J. Gil-Vicente, has been accepted at the Workshop on Innovation on Information and Communication Technologies (ITACA-WIICT 2018).

Abstract:

Due to the continuous increase in the integration scale, the fault rate of computer memory systems has risen. Thus, the probability of occurrence of Single Cell Upsets (SCUs) or Multiple Cell Upsets (MCUs) also increases. A common solution is the use of Error Correction Codes (ECCs). However, when using ECCs, a good balance must be achieved between the error coverage, the redundancy introduced, and the area, power and delay overheads of the encoder and decoder circuits.

In this sense, different proposals exist to tolerate MCUs. For example, matrix codes are able to detect and/or correct MCUs by arranging data in a two-dimensional format. However, these codes introduce great redundancy, which leads to excessive area, power and delay overheads.

In this paper, we present a complete comparison of two recently introduced matrix codes.