Paper accepted at IEEE Transactions on Dependable and Secure Computing

The paper entitled “A Multi-criteria Analysis of Benchmark Results With Expert Support for Security Tools”, written by Miquel Martínez, Juan-Carlos Ruiz, Nuno Antunes, David de Andrés and Marco Vieira, has been accepted for publication in the IEEE Transactions on Dependable and Secure Computing journal.

Abstract. Security tools are benchmarked to determine which tools are most suitable for detecting system vulnerabilities or intrusions. The analysis process is usually oversimplified by employing just a single metric out of the many available. As a result, the decision may be biased, since relevant information provided by the neglected metrics is not considered. This paper proposes a novel approach that takes into account several metrics, different scenarios, and the advice of multiple experts. The proposal relies on experts quantifying the relative importance of each pair of metrics with respect to the requirements of a given scenario. Their judgments are aggregated using group decision making techniques, and weighted according to each expert’s familiarity with the metrics and the scenario, to compute a set of weights accounting for the relative importance of each metric. Weight-based multi-criteria decision making techniques can then be used to rank the benchmarked tools. The usefulness of this approach is shown by analyzing two different sets of vulnerability and intrusion detection tools from the perspective of multiple/single metrics and different scenarios.
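As a rough illustration of the kind of computation involved (not the paper’s actual algorithm, and with entirely made-up numbers), the Python sketch below aggregates pairwise metric comparisons from two hypothetical experts with a geometric mean, derives metric weights, and ranks two hypothetical tools with a simple weighted sum; the expertise-based weighting of judgments described in the abstract is omitted.

```python
import numpy as np

# Hypothetical pairwise comparison matrices (AHP-style): entry [i][j] states how
# much more important metric i is than metric j for a given scenario.
expert_a = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 2.0],
                     [1/5, 1/2, 1.0]])
expert_b = np.array([[1.0, 2.0, 4.0],
                     [1/2, 1.0, 3.0],
                     [1/4, 1/3, 1.0]])

# Aggregate the two judgments with the element-wise geometric mean
# (a common group-decision-making choice).
group = np.sqrt(expert_a * expert_b)

# Derive metric weights from the geometric mean of each row, normalized to sum to 1.
row_gm = group.prod(axis=1) ** (1.0 / group.shape[1])
weights = row_gm / row_gm.sum()

# Rank hypothetical tools by a weighted sum of their (already normalized) metric scores.
tools = {"tool_X": np.array([0.90, 0.60, 0.40]),
         "tool_Y": np.array([0.70, 0.80, 0.90])}
ranking = sorted(tools, key=lambda t: float(weights @ tools[t]), reverse=True)
print("weights:", weights.round(3), "ranking:", ranking)
```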

Paper accepted at Electronics Journal

The paper entitled “Proposal of an Adaptive Fault Tolerance Mechanism to Tolerate Intermittent Faults in RAM”, written by J.-Carlos Baraza-Calvo, Joaquín Gracia-Morán, Luis-J. Saiz-Adalid, Daniel Gil-Tomás and Pedro-J. Gil-Vicente, has been accepted for publication in the Electronics journal.

Abstract: Due to transistor shrinking, intermittent faults are a major concern in current digital systems. This work presents an adaptive fault tolerance mechanism based on error correction codes (ECC), able to modify its behavior when the error conditions change without increasing the redundancy. As a case example, we have designed a mechanism that can detect intermittent faults and swap from an initial generic ECC to a specific ECC capable of tolerating one intermittent fault. We have inserted the mechanism in the memory system of a 32-bit RISC processor and validated it by using VHDL simulation-based fault injection. We have used two (39, 32) codes: a single error correction–double error detection (SEC–DED) and a code developed by our research group, called EPB3932, capable of correcting single errors and double and triple adjacent errors that include a bit previously tagged as error-prone. The results of injecting transient, intermittent, and combinations of intermittent and transient faults show that the proposed mechanism works properly. As an example, the percentage of failures and latent errors is 0% when injecting a triple adjacent fault after an intermittent stuck-at fault. We have synthesized the adaptive fault tolerance mechanism proposed in two types of Field Programmable Gate Arrays (FPGAs): non-reconfigurable and partially reconfigurable. In both cases, the overhead introduced is affordable in terms of hardware, time and power consumption.
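For readers unfamiliar with the idea, here is a minimal behavioural sketch in Python of such an adaptation policy (an illustration under assumptions, not the paper’s VHDL design): after repeated corrected errors at the same address, which suggests an intermittent fault, a controller swaps that word from the generic SEC-DED code to a specialised one.

```python
from collections import defaultdict

INTERMITTENT_THRESHOLD = 3  # assumed threshold, for illustration only


class AdaptiveEccController:
    """Tracks corrected errors per memory word and switches its ECC mode."""

    def __init__(self):
        self.error_count = defaultdict(int)
        self.mode = defaultdict(lambda: "SEC-DED")  # generic code by default

    def report_corrected_error(self, address):
        # Called each time the decoder corrects an error at `address`.
        self.error_count[address] += 1
        if (self.mode[address] == "SEC-DED"
                and self.error_count[address] >= INTERMITTENT_THRESHOLD):
            # Swap to a code tailored to the error-prone bit, without adding redundancy.
            self.mode[address] = "specialised"
        return self.mode[address]


ctrl = AdaptiveEccController()
for _ in range(4):
    mode = ctrl.report_corrected_error(0x1F0)
print(mode)  # -> "specialised" after repeated errors at the same address
```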

Paper accepted at Electronics journal

The paper entitled “Reducing the Overhead of BCH Codes: New Double Error Correction Codes”, authored by Luis-J. Saiz-Adalid, Joaquín Gracia-Morán, Daniel Gil-Tomás, J.-Carlos Baraza-Calvo and Pedro-J. Gil-Vicente, has been published in the Electronics journal.


The Bose-Chaudhuri-Hocquenghem (BCH) codes are a well-known class of powerful error correction cyclic codes. BCH codes can correct multiple errors with minimal redundancy. However, primitive BCH codes only exist for some word lengths, which do not frequently match those employed in digital systems. This paper focuses on double error correction (DEC) codes for word lengths that are powers of two (8, 16, 32, and 64 bits), which are commonly used in memories. We also focus on hardware implementations of the encoder and decoder circuits for very fast operation. This work proposes new low redundancy and reduced overhead (LRRO) DEC codes, with the same redundancy as the equivalent BCH DEC codes, but whose encoder and decoder circuits present a lower overhead (in terms of propagation delay, silicon area usage and power consumption). To design the new codes, we used a methodology that searches for parity check matrices based on error patterns. We implemented and synthesized the new codes and compared the results with those obtained for the BCH codes. Our implementation of the decoder circuits achieved reductions between 2.8% and 8.7% in the propagation delay, between 1.3% and 3.0% in the silicon area, and between 15.7% and 26.9% in the power consumption. Therefore, we propose LRRO codes as an alternative for protecting information against multiple errors.
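To give a feel for what “searching parity check matrices based on error patterns” means, the Python sketch below randomly searches for a parity-check matrix whose syndromes distinguish all target error patterns. It is a toy setting (single-bit errors over an 8-bit word with 4 check bits, i.e. a SEC code), not the DEC construction of the paper, and the parameters are assumptions for illustration.

```python
import random
import numpy as np

K, R = 8, 4          # data bits and redundant (check) bits -- assumed toy sizes
N = K + R
# Target error patterns to correct: all single-bit errors over the codeword.
patterns = [np.eye(N, dtype=int)[j] for j in range(N)]


def syndrome(H, e):
    return tuple(np.dot(H, e) % 2)


def valid(H):
    # Every correctable pattern must map to a distinct, non-zero syndrome.
    syndromes = [syndrome(H, e) for e in patterns]
    return all(any(s) for s in syndromes) and len(set(syndromes)) == len(syndromes)


random.seed(0)
while True:
    H = np.array([[random.randint(0, 1) for _ in range(N)] for _ in range(R)])
    if valid(H):
        break
print(H)  # one parity-check matrix satisfying the error-pattern constraints
```

Real constructions add further constraints (double errors, overhead of the resulting encoder/decoder logic) and search far more cleverly, but the validity check above is the core idea.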

DSN 2020 Conference Tracks


DSN 2020, organized by the Fault Tolerant Systems Group of the Universitat Politècnica de València, is a multi-track conference seeking contributions across its different tracks. More info at this link.

We hope to see you here!!

Paper available at IEEE Access

The paper entitled “Ultrafast Codes for Multiple Adjacent Error Correction and Double Error Detection”, authored by Luis-J. Saiz-Adalid, Joaquín Gracia-Morán, Daniel Gil-Tomás, J.-Carlos Baraza-Calvo and Pedro-J. Gil-Vicente is available at IEEE Access.

Paper accepted at IEEE Access

The paper entitled “Ultrafast Codes for Multiple Adjacent Error Correction and Double Error Detection”, authored by Luis-J. Saiz-Adalid, Joaquín Gracia-Morán, Daniel Gil-Tomás, J.-Carlos Baraza-Calvo and Pedro-J. Gil-Vicente, has been accepted for publication in IEEE Access.


Reliable computer systems employ error control codes (ECCs) to protect information from errors. For example, memories are frequently protected using single error correction-double error detection (SEC-DED) codes. ECCs are traditionally designed to minimize the number of redundant bits, as these bits are added to each word in the whole memory. Nevertheless, using an ECC introduces encoding and decoding latencies, silicon area usage and power consumption. In other computer units, these parameters should be optimized, while redundancy is less important. For example, protecting registers against errors remains a major concern for deep sub-micron systems due to technology scaling, and an important requirement for register protection is to keep encoding and decoding latencies as short as possible. Ultrafast error control codes achieve very low delays, independently of the word length, at the cost of increased redundancy. This paper summarizes previous work on Ultrafast codes (SEC and SEC-DED), and proposes new codes combining double error detection and adjacent error correction. We have implemented, synthesized and compared different Ultrafast codes with other state-of-the-art fast codes. The results show the validity of the approach, achieving low latencies and a good balance with silicon area and power consumption.
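As a small illustration of how double error detection is commonly obtained (a textbook example given here as an assumption, not the Ultrafast codes proposed in the paper), extending a (7,4) Hamming code with one overall parity bit yields a SEC-DED code whose decoder classifies outcomes as shown in this Python sketch:

```python
import numpy as np

# Inner (7,4) Hamming code in systematic form, plus one overall parity bit.
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])    # generator matrix of the inner code
H = np.hstack([P.T, np.eye(3, dtype=int)])  # parity-check matrix of the inner code


def encode(data):
    inner = data @ G % 2
    return np.append(inner, inner.sum() % 2)  # append the overall parity bit


def decode(received):
    syndrome = H @ received[:7] % 2
    overall_fails = received.sum() % 2 == 1
    if not syndrome.any():
        return "no error" if not overall_fails else "error in the parity bit only"
    if overall_fails:
        return "single error: correctable"
    return "double error: detected, uncorrectable"


cw = encode(np.array([1, 0, 1, 1]))
one_flip = cw.copy();  one_flip[2] ^= 1                    # one flipped bit
two_flips = cw.copy(); two_flips[2] ^= 1; two_flips[6] ^= 1  # two flipped bits
print(decode(cw), "|", decode(one_flip), "|", decode(two_flips))
```

In hardware, the syndrome and parity checks are flat XOR trees, which is why encoding and decoding latency can be kept to a few gate levels.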

Presentation at Jornadas SARTECO 2019

On 18 September, J. Gracia-Morán presented the paper entitled “Mejora de un Código de Corrección de Errores para tolerar fallos adyacentes bidimensionales” (improvement of an error correction code to tolerate two-dimensional adjacent faults) at the Jornadas SARTECO 2019.

Panel at CARS 2019

On 17 September, Juan Carlos Ruiz took part in the panel “Autonomous driving: safety and security issues”, held at the 5th International Workshop on Critical Automotive Applications: Robustness & Safety (CARS 2019), co-located with EDCC 2019 in Naples, Italy.

Presentation at EDCC 2019

At EDCC 2019, Juan Carlos Ruiz presented the paper entitled “Robustness-aware design space exploration by iteratively augmenting and repairing D-optimal designs”, written by Ilya Tuzov, David De Andrés and Juan Carlos Ruiz.


Design space exploration (DSE) is nowadays of utmost importance to implement HW designs with acceptable levels of performance, power consumption, area and dependability (PPAD). Electronic Design Automation (EDA) tools support the transformation of HW description models into technology-dependent implementations. Although designers can influence this process by tuning the parameters offered by EDA toolkits, determining their proper configuration is a complex and very time-consuming DSE problem that is rarely addressed from a PPAD perspective. On the one hand, the spatial and temporal complexity of the considered targets and the level of abstraction of their descriptions hinder the rapid execution of fault injection campaigns. On the other hand, the multi-level nature of the parameters offered by EDA toolkits leads to an explosion of possible configurations to exercise during experimentation. This paper shows how to combine the D-optimal design of experiments with FPGA-based and statistical fault injection to significantly reduce not only the number of such configurations, but also the number of faults to inject and the time required to perform each injection, all without compromising the statistical significance of the results. The proposal is exemplified through the Xilinx Vivado Design Suite, which integrates one of the most widely used FPGA-based EDA toolkits in today’s industry, and the MC8051 IP core, a synthesizable microcontroller from Oregano Systems.
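As a side note on the statistical fault injection part, the number of faults to inject is typically derived from a standard finite-population sample-size formula; the Python sketch below (with assumed confidence and error-margin values, not necessarily those used in the paper) shows why a few thousand injections can represent a fault space of millions.

```python
import math

# Finite-population sample-size formula commonly used in statistical fault
# injection: N is the fault-space size, e the margin of error, t the cut-off
# for the chosen confidence level (1.96 ~ 95%), and p the estimated proportion
# (0.5 is the worst case). Parameter values here are assumptions.
def sample_size(N, e=0.01, t=1.96, p=0.5):
    return math.ceil(N / (1 + e**2 * (N - 1) / (t**2 * p * (1 - p))))


print(sample_size(10_000_000))  # roughly 9.6k injections instead of 10 million
```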

Paper available at Electronics Journal

The paper entitled “Fault Modeling of Graphene Nanoribbon FET Logic Circuits”, written by D. Gil-Tomás, J. Gracia-Morán, L.J. Saiz-Adalid and P.J. Gil-Vicente, and published in the Electronics journal, is now available here.