Open Access
Modeling Soft Analytical Side-Channel Attacks from a Coding Theory Viewpoint
Author(s) -
Qian Guo,
Vincent Grosso,
François-Xavier Standaert,
Olivier Bronchain
Publication year - 2020
Publication title -
IACR Transactions on Cryptographic Hardware and Embedded Systems
Language(s) - English
Resource type - Journals
ISSN - 2569-2925
DOI - 10.46586/tches.v2020.i4.209-238
Subject(s) - computer science, mathematical proof, ciphertext, block cipher, side channel attack, theoretical computer science, plaintext, low density parity check code, cipher, cryptography, decoding methods, implementation, computer engineering, algorithm, computer security, encryption, mathematics, programming language, geometry
One important open question in side-channel analysis is to find out whether all the leakage samples in an implementation can be exploited by an adversary, as suggested by masking security proofs. For attacks exploiting a divide-and-conquer strategy, the answer is negative: only the leakages corresponding to the first/last rounds of a block cipher can be exploited. Soft Analytical Side-Channel Attacks (SASCA) have been introduced as a powerful solution to mitigate this limitation. They represent the target implementation and its leakages as a code (similar to a Low Density Parity Check code) that is decoded thanks to belief propagation. Previous works have shown the low data complexities that SASCA can reach in practice. In this paper, we revisit these attacks by modeling them with a variation of the Random Probing Model used in masking security proofs, that we denote as the Local Random Probing Model (LRPM). Our study establishes interesting connections between this model and the erasure channel used in coding theory, leading to the following benefits. First, the LRPM allows bounding the security of concrete implementations against SASCA in a fast and intuitive manner. We use it in order to confirm that the leakage of any operation in a block cipher can be exploited, although the leakages of external operations dominate in known-plaintext/ciphertext attack scenarios. Second, we show that the LRPM is a tool of choice for the (nearly worst-case) analysis of masked implementations in the noisy leakage model, taking advantage of all the operations performed, and leading to new tradeoffs between their amount of randomness and physical noise level. Third, we show that it can considerably speed up the evaluation of other countermeasures such as shuffling.
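The abstract describes representing an implementation and its leakages as an LDPC-like code decoded by belief propagation, and connects the Local Random Probing Model to the erasure channel of coding theory. The toy sketch below illustrates that connection: on the binary erasure channel, belief propagation specializes to a simple "peeling" decoder that repeatedly uses any parity check with a single erased bit to recover it. The parity-check matrix and codeword are illustrative (a [7,4] Hamming code), not taken from the paper.

```python
def peel_erasures(H, received):
    """Peeling decoder for the binary erasure channel.

    H        -- parity-check matrix as a list of 0/1 rows
    received -- codeword with erased positions marked as None
    Returns the word with all recoverable erasures filled in.
    """
    word = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            # Positions this check touches that are still erased.
            unknown = [i for i, h in enumerate(row) if h and word[i] is None]
            if len(unknown) == 1:
                # A check with exactly one erasure determines that bit
                # as the XOR of the known bits in the same check.
                i = unknown[0]
                word[i] = sum(word[j] for j, h in enumerate(row)
                              if h and j != i) % 2
                progress = True
    return word

# One common parity-check matrix of the [7,4] Hamming code.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

codeword = [1, 0, 1, 1, 0, 1, 0]
received = [1, None, 1, None, 0, 1, 0]  # two bits erased

print(peel_erasures(H, received))  # -> [1, 0, 1, 1, 0, 1, 0]
```

In the paper's setting the "erasures" play the role of leakage samples that carry no information; the LRPM analysis bounds how such local information propagates through the decoding graph of the cipher, which is what makes erasure-channel reasoning applicable.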
