Paper accepted at WIICT 2018 (III)

The paper entitled “Towards dependability-aware design space exploration using genetic algorithms”, written by Quentin Fabry, Ilya Tuzov, Juan-Carlos Ruiz and David de Andrés, has been accepted at the Workshop on Innovation on Information and Communication Technologies (ITACA-WIICT 2018).

Abstract:

The development of complex digital systems poses design optimization problems that are today automatically addressed by Electronic Design Automation (EDA) tools. Deducing optimal configurations for EDA tools according to specific implementation goals is challenging even for simple HW models. Indeed, previous research demonstrates that such configurations may have a non-negligible impact on the performance, power consumption, occupied area and dependability (PPAD) features exhibited by the resulting HW implementations. This paper proposes a genetic algorithm to cope with the selection of appropriate configurations of EDA tools. Unlike statistical approaches, this type of algorithm has the benefit of considering the effects of all configuration flags and their interactions. Consequently, genetic algorithms have great potential for finding tool configurations leading to implementations exhibiting optimal PPAD scores. However, there is also the risk of incurring very time-consuming design space explorations, which may limit the usability of the approach in practice. Since the behavior of the genetic algorithm will be strongly conditioned by the initially selected population and by the mutation, crossover and filtering functions selected for promoting evolution, these parameters must be determined very carefully on a case-by-case basis. In this publication, we rely on a multilinear regression model estimating the impact of synthesis flags on the PPAD features exhibited by the implementation of an Intel 8051 microcontroller model. Beyond the reported results, this preliminary research shows how and to what extent genetic algorithms can be integrated and used in the semi-custom design flow followed today by major HW manufacturers.
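A minimal sketch of such a genetic algorithm, assuming hypothetical flag names and a placeholder fitness function (a real flow would synthesize the model under each configuration and combine its measured PPAD features into one score):

```python
# Hypothetical sketch of a genetic algorithm over EDA synthesis-flag
# configurations. Flag names, values and the fitness function are
# illustrative only.
import random

VALUES = {"opt_mode": ["area", "speed"],          # assumed flags/values
          "retiming": [0, 1],
          "resource_sharing": [0, 1],
          "fsm_encoding": ["auto", "one-hot", "gray"]}
FLAGS = list(VALUES)

random.seed(0)
SCORES = {(f, v): random.random() for f in FLAGS for v in VALUES[f]}

def fitness(ind):
    # Placeholder standing in for synthesis + PPAD measurement.
    return sum(SCORES[f, ind[f]] for f in FLAGS)

def crossover(a, b):
    # Uniform crossover: each flag is inherited from either parent.
    return {f: random.choice((a[f], b[f])) for f in FLAGS}

def mutate(ind, rate=0.1):
    return {f: random.choice(VALUES[f]) if random.random() < rate else v
            for f, v in ind.items()}

def evolve(pop_size=20, generations=30):
    pop = [{f: random.choice(VALUES[f]) for f in FLAGS} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]             # filtering: keep the fittest half
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()                                   # best flag configuration found
```

The population size, mutation rate and truncation-based filtering shown here are exactly the parameters the abstract argues must be tuned per case.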

Paper accepted at WIICT 2018 (II)

The paper entitled “Energy-aware Design Space Exploration for Optimal Implementation Parameters Tuning”, written by Ilya Tuzov, David de Andrés and Juan Carlos Ruiz, has been accepted at the Workshop on Innovation on Information and Communication Technologies (ITACA-WIICT 2018).

Abstract:

Determining the optimum configuration of semicustom implementation tools to simultaneously optimize the energy consumption, maximum clock frequency, and area of the target circuit requires navigating through millions of configurations. Existing design space exploration approaches, like genetic algorithms, try to reduce as much as possible the number of configurations that must be implemented to find the (close to) optimum configuration. However, these approaches are not suitable when dependability-related properties must be also taken into account. To accurately estimate these properties, extensive simulation-based fault injection experiments must be executed for each configuration, leading to unfeasible experimentation times. This work proposes an alternative approach, based on statistical and operational research artifacts, to drastically reduce the design space while preserving the accuracy of results, thus, enabling the energy-aware design space exploration for semicustom implementation of logic circuits.
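One classical operational-research artifact for this kind of design-space reduction is the two-level fractional factorial design. The sketch below is illustrative (not necessarily the exact design used in the paper): it screens five two-level flags with 8 implementations instead of 32.

```python
# Illustrative sketch: a 2^(k-p) fractional factorial design. Base flags get
# a full two-level factorial; each extra flag is aliased to a product of base
# columns (a "generator"), shrinking the design space at the cost of
# confounding some interactions.
from itertools import product

def fractional_factorial(k_base, generators):
    """Return runs as tuples of +/-1 levels: k_base base factors followed by
    one derived factor per generator (a tuple of base-factor indices)."""
    runs = []
    for base in product((-1, 1), repeat=k_base):
        derived = []
        for gen in generators:
            level = 1
            for i in gen:
                level *= base[i]
            derived.append(level)
        runs.append(base + tuple(derived))
    return runs

# 2^(5-2): five two-level flags explored with 8 runs instead of 2^5 = 32.
design = fractional_factorial(3, [(0, 1), (0, 2)])
print(len(design))  # 8
```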

Paper accepted at WIICT 2018

The paper entitled “A comparison of two different matrix Error Correction Codes”, written by J. Gracia-Morán, L.J. Saiz-Adalid, D. Gil-Tomás and P.J. Gil-Vicente, has been accepted at the Workshop on Innovation on Information and Communication Technologies (ITACA-WIICT 2018).

Abstract:

Due to the continuous increase in integration scale, the fault rate in computer memory systems has risen. Thus, the probability of occurrence of Single Cell Upsets (SCUs) or Multiple Cell Upsets (MCUs) also increases. A common solution is the use of Error Correction Codes (ECCs). However, when using ECCs, a good balance must be achieved between error coverage, the redundancy introduced, and the area, power and delay overheads of the encoder and decoder circuits.

In this sense, there exist different proposals to tolerate MCUs. For example, matrix codes are able to detect and/or correct MCUs using a two-dimensional format. However, these codes introduce high redundancy, which leads to excessive area, power and delay overheads.

In this paper we present a complete comparison of two recently introduced matrix codes.
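The matrix-code idea mentioned above can be sketched as follows. This is an illustration of the classic scheme, not of the two specific codes compared in the paper:

```python
# Illustrative sketch of a classic matrix code: 16 data bits are arranged in
# four rows, each row is protected by a Hamming(7,4) code, and a parity bit
# per column helps locate and correct some multi-bit error patterns.

def hamming74_encode(d):
    """Append the three Hamming(7,4) check bits to 4 data bits."""
    return d + [d[0] ^ d[1] ^ d[3],
                d[0] ^ d[2] ^ d[3],
                d[1] ^ d[2] ^ d[3]]

def matrix_encode(data16):
    rows = [hamming74_encode(data16[i:i + 4]) for i in range(0, 16, 4)]
    col_parity = [rows[0][j] ^ rows[1][j] ^ rows[2][j] ^ rows[3][j]
                  for j in range(7)]
    return rows, col_parity

rows, col_parity = matrix_encode([1, 0, 1, 1] * 4)
# 16 data bits end up protected by 4*3 + 7 = 19 check bits: the high
# redundancy that low-redundancy matrix codes try to reduce.
```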

Visit from Nepal

Professor Dr. Dinesh Kumar Sharma, Professor Dr. Subarna Shakya and Professor Dr. Tri Ratna Bajracharya, from Tribhuvan University in Nepal, have visited us today.


Presentation at DSN 2018

Our PhD student, Ilya Tuzov, has presented the work “DAVOS: EDA toolkit for dependability assessment, verification, optimisation and selection of hardware models” at DSN 2018, held in Luxembourg.


Paper accepted at Jornadas SARTECO 2018

The paper entitled “Un nuevo Código de Corrección de Errores matricial con baja redundancia”, written by J. Gracia-Morán, L.J. Saiz-Adalid, D. Gil-Tomás and P.J. Gil-Vicente, has been accepted at Jornadas SARTECO 2018.

The abstract of this work (translated from Spanish) says:

Currently, due to the continuous increase in integration scale, the fault rate in computer memory systems has risen. Thus, the probability of occurrence of Single Cell Upsets (SCUs) or Multiple Cell Upsets (MCUs) increases. A common solution is the use of Error Correction Codes (ECCs). However, when ECCs are used in embedded applications, a good balance must be achieved between error coverage, the redundancy introduced, and efficiency in terms of occupied silicon area, power consumption, and delay of the encoding and decoding circuits.
In this sense, there exist different proposals to tolerate MCUs. For example, matrix codes use Hamming codes and parity checks in a two-dimensional format to detect and/or correct MCUs. However, these codes introduce high redundancy, which entails excessive overheads in area, power consumption, and delay.
In this work we present a new matrix code with low redundancy, able to correct different MCU patterns without introducing a large overhead in the encoding and decoding circuits.

Paper accepted at IEEE Transactions on VLSI

The paper entitled “Improving Error Correction Codes for Multiple-Cell Upsets in Space Applications”, written by Joaquín Gracia-Morán, Luis J. Saiz-Adalid, Daniel Gil-Tomás, and Pedro J. Gil-Vicente, has been accepted in IEEE Transactions on VLSI.

Abstract:
Currently, faults suffered by SRAM memory systems have increased due to the aggressive CMOS integration density. Thus, the probability of occurrence of single-cell upsets (SCUs) or multiple-cell upsets (MCUs) also increases. One of the main causes of MCUs in space applications is cosmic radiation. A common solution is the use of error correction codes (ECCs). Nevertheless, when using ECCs in space applications, they must achieve a good balance between error coverage and redundancy, and their encoding/decoding circuits must be efficient in terms of area, power, and delay. Different codes have been proposed to tolerate MCUs. For instance, Matrix codes use Hamming codes and parity checks in a bi-dimensional layout to correct and detect some patterns of MCUs. The recently presented column–line–code (CLC) has been designed to tolerate MCUs in space applications. CLC is a modified Matrix code, based on extended Hamming codes and parity checks. Nevertheless, a common property of these codes is the high redundancy introduced. In this paper, we present a series of new low-redundant ECCs able to correct MCUs with reduced area, power, and delay overheads. Also, these new codes maintain, or even improve, memory error coverage with respect to Matrix and CLC codes.

More info at: https://ieeexplore.ieee.org/document/8370138/

Seminar: “FPGA-based fault injection”

Next Thursday 31/05/2018 at 12h30, researcher Jose Luis Nunes (University of Coimbra) will give a talk describing his research on FPGA-based Fault Injection. The information concerning this seminar follows:
Title: FPGA-based Fault Injection
Summary: Reconfigurable embedded devices built on SRAM-based Field Programmable Gate Arrays (FPGAs) are being increasingly used in critical embedded applications. Their susceptibility to Single Event Upsets (SEUs) requires the use of fault-tolerant designs, for which fault injection is one of the most accepted verification techniques.
This talk presents the implementation details of FIRED, a fault injector targeting SRAM-based FPGAs (Virtex-5) for the dependability evaluation of critical systems. This tool is able to perform hardware fault injection in real time by inserting bit-flips in the SRAM cells through Partial Dynamic Reconfiguration (PDR).
The architecture of an updated version of the tool, targeting current state-of-the-art devices (Xilinx 7-series), is also discussed. This new tool will take advantage of the Soft Error Mitigation (SEM) core to support fault injection.
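The core operation behind this style of injection can be sketched as follows. The configuration frame is modeled simply as a list of 32-bit words; a real injector such as FIRED would read and write the frame through the FPGA's reconfiguration port, and all names here are illustrative:

```python
# Conceptual sketch of the bit-flip step behind SEU injection into an FPGA
# configuration memory. The frame layout and the access mechanism (Partial
# Dynamic Reconfiguration) are abstracted away.

def flip_bit(frame, word_index, bit_index):
    """Return a copy of the frame with one configuration bit inverted."""
    mutated = list(frame)
    mutated[word_index] ^= (1 << bit_index)
    return mutated

frame = [0x00000000, 0xFFFFFFFF, 0x12345678]
faulty = flip_bit(frame, 2, 0)            # inject an SEU into word 2, bit 0
assert faulty[2] == 0x12345679
assert flip_bit(faulty, 2, 0) == frame    # a second flip restores the original
```

The last assertion reflects why bit-flips are a convenient fault model: the injector can remove the fault and restore the golden configuration by re-flipping the same bit.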
CV of the speaker: José Luís Nunes is a professor at Coimbra Polytechnic, where he teaches Operating Systems, Digital Systems and Computer Architectures. He is a member of the Software and Systems Engineering (SSE) research team at the Center for Informatics and Systems of the University of Coimbra (CISUC), enrolled in the PhD program. His current research topics include dependability, fault injection, FPGAs and real-time embedded systems.

Seminar at LAAS

This week, Juan Carlos Ruiz has given the seminar entitled “Statistical Fault Injection: When is it enough in robustness assessment?” at LAAS.

Simulation-based fault injection is commonly used to assess the robustness of hardware components modelled using Hardware Description Languages (HDL). The complexity of modern circuits usually makes it infeasible to consider all possible combinations of fault models, targets, and times during experimentation. By assuming a confidence interval and an error margin, statistical fault injection exploits the principle of statistical sampling to reduce the number of experiments while keeping the results representative of the whole population of fault injections. Since the percentage of injected faults leading to failure is a priori unknown, the number of experiments is usually determined by selecting the value that maximizes the sample size. This presentation argues that this conservative assumption leads to a worst-case scenario that can be improved. It proposes a new iterative approach to progressively adjust the number of experiments by estimating the percentage of those leading to failure and the error of that estimation. This proposal provides new means to decide when to stop a fault injection campaign and to estimate the error in the finally reported results.
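The sampling argument can be made concrete with the classical sample-size formula with finite-population correction (a worked sketch; the population size and proportions below are illustrative):

```python
# Worked example: sample size n = N / (1 + e^2 (N-1) / (z^2 p (1-p))).
# With the conservative p = 0.5 a campaign needs the worst-case number of
# experiments; once p is estimated iteratively, far fewer injections give
# the same error margin at the same confidence level.
import math

def sample_size(N, p, z=1.96, e=0.01):
    """Experiments needed for population N, failure proportion p,
    95% confidence (z = 1.96) and error margin e."""
    return math.ceil(N / (1 + e**2 * (N - 1) / (z**2 * p * (1 - p))))

N = 10**9                        # fault models x targets x injection times
worst = sample_size(N, 0.5)      # conservative: p unknown -> ~9604 experiments
fewer = sample_size(N, 0.1)      # estimated p around 10% -> ~3458 experiments
```

The gap between the two figures is the margin the iterative approach exploits: as the estimate of p moves away from 0.5, the campaign can stop earlier without losing representativeness.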

Exhibition at Universitat Politècnica de València

Our group has shown some of our latest research to the future students of the Master’s Degree in Computer and Network Engineering.

More info: https://www.upv.es/titulaciones/MUIC/index-en.html