Paper abstracts

Secure and Robust Cyber Security Threat Information Sharing 
Anis Bkakria, Reda Yaich and Walid Arabi
IRT SystemX
In recent years, several laws have been decreed, at both national and European levels, to mandate private and public organizations to share their cyber-security-related information. However, existing threat sharing platforms implement "classical" access control mechanisms or, at most, centralized attribute-based encryption (ABE) to prevent data leakage and preserve data confidentiality. These schemes are well known to suffer from a single point of failure on security aspects: if the central authority is compromised, the confidentiality of the shared sensitive information is no longer ensured. To address this challenge, we propose a new ABE scheme combining the advantages of centralized and decentralized ABE while overcoming their weaknesses. It overcomes the centralized ABE's single point of failure on security by requiring the collaboration of several entities for decryption key issuing. In addition, in contrast to existing decentralized ABE schemes, our construction does not require the data providers to fully trust all attribute authorities; only a single authority needs to be trusted. Finally, we formally prove the security of our ABE construction in the generic group model.

Revisiting stream-cipher-based homomorphic transciphering in the TFHE era
Adda Akram Bendoukha, Aymen Boudguiga and Renaud Sirdey
CEA

Transciphering allows working around the large ciphertext expansion of FHE-encrypted data, thanks to the use of symmetric cryptography. Transciphering is a recryption technique that delegates the effective homomorphic encryption to the cloud. As a result, a client only has to encrypt (once) a symmetric key SYM.sk under a homomorphic encryption system, while his payload data are encrypted under SYM.sk using the chosen symmetric encryption algorithm.
In this work, we study the performance of several symmetric encryption algorithms in light of the TFHE cryptosystem and its properties. This allows us to unleash the use of additional existing symmetric algorithms which were not viable candidates for efficient encrypted-domain execution with levelled FHE schemes. In particular, we provide experimental evidence that Grain128-AEAD, a well-established and well-respected stream cipher and a finalist of the NIST lightweight cryptography competition, achieves practical performance when run in the encrypted domain. As such, our work extends practical transciphering capabilities to include authenticated encryption for the first time.
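
To make the transciphering flow concrete, here is a minimal, purely illustrative Python sketch of the client/server roles. It is not the paper's TFHE/Grain128-AEAD pipeline: the stream cipher below is a toy SHA-256 counter-mode keystream, and he_encrypt is a placeholder standing in for encryption of SYM.sk under a real homomorphic scheme.

    import os, hashlib

    def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
        # Toy stream cipher (stand-in for Grain128-AEAD): SHA-256 in counter mode.
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def sym_encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
        return bytes(m ^ k for m, k in zip(msg, keystream(key, nonce, len(msg))))

    def he_encrypt(data: bytes):
        # Placeholder: in a real deployment this would encrypt SYM.sk under the FHE public key.
        return ("FHE-ciphertext-of", data)

    # Client side: one homomorphic encryption of SYM.sk, cheap symmetric encryption of the payload.
    sym_key = os.urandom(16)
    uploaded_key = he_encrypt(sym_key)
    nonce = os.urandom(12)
    uploaded_payload = sym_encrypt(sym_key, nonce, b"sensor reading: 42")

    # Server side (conceptually): homomorphically evaluate the stream cipher on uploaded_key and
    # XOR it with uploaded_payload, yielding an FHE ciphertext of the payload it never sees in clear.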

Generic Construction for Identity-based Proxy Blind Signature
Charles Olivier-Anclin, Leo Robert, Xavier Bultel and Pascal Lafourcade
[1] LIMOS, University Clermont, [2] INSA Centre Val de Loire, LIFO

Generic constructions of blind signature schemes have been studied since their appearance. Several constructions have been proposed, leading to generic blind signatures and achieving further properties such as identity-based blind signatures and partially blind signatures. We propose a generic construction for identity-based Proxy Blind Signature (IDPBS). This combination of properties has several real-world applications, in particular in e-voting or e-cash systems, and it has never been achieved before with a generic construction. Our construction only requires two classical signature schemes: an EUF-CMA blind signature and a SUF-CMA unique signature. The security of our generic identity-based proxy blind signature is proven under these assumptions.

Trade-offs between Anonymity and Performance in Mix Networks
Matthieu Jee, Ania Piotrowska, Harry Halpin and Ninoslav Marina
[1] HEC-ARC, [2] Nym Technologies, [3] World Wide Web Consortium

Mix networks were developed to hide the correspondence between senders and recipients of a communication. In order to be usable and defend user privacy, anonymous communication networks like mixnets need to be parameterized in an optimal manner. This work uses a mixnet simulator to determine reasonable packet sizes and parameters for the real-world Nym mixnet, a stratified continuous-time mixnet that uses the Sphinx packet format. We analyzed network parameters, such as the sending rate, cover traffic overhead, and mixing delay, to determine the impact of various configurations on anonymity and performance.
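
As a rough illustration of the latency side of such trade-offs (not the simulator used in the paper), the sketch below samples end-to-end delays through a three-layer continuous-time mixnet in which each mix adds an exponentially distributed delay; the hop count and mean delays are arbitrary values chosen for the example.

    import random

    def end_to_end_latency(hops=3, mean_mix_delay=0.05, link_latency=0.01):
        # Each mix holds the packet for an exponentially distributed time (continuous-time mixing).
        return sum(random.expovariate(1.0 / mean_mix_delay) + link_latency for _ in range(hops))

    samples = sorted(end_to_end_latency() for _ in range(10_000))
    print("median latency (s):", round(samples[len(samples) // 2], 3))
    print("99th percentile (s):", round(samples[int(0.99 * len(samples))], 3))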

A Comparative Analysis of Machine Learning Techniques for IoT Intrusion Detection
João Vitorino, Rui Andrade, Isabel Praça, Orlando Sousa and Eva Maia
School of Engineering of the Polytechnic of Porto (ISEP/IPP)

The digital transformation faces tremendous security challenges. In particular, the growing number of cyber-attacks targeting Internet of Things (IoT) systems restates the need for a reliable detection of malicious network activity. This paper presents a comparative analysis of supervised, unsupervised and reinforcement learning techniques on nine malware captures of the IoT-23 dataset, considering both binary and multi-class classification scenarios. The developed models consisted of Support Vector Machine (SVM), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), Isolation Forest (iForest), Local Outlier Factor (LOF) and a Deep Reinforcement Learning (DRL) model based on a Double Deep Q-Network (DDQN), adapted to the intrusion detection context. The best performance was achieved by LightGBM, closely followed by SVM. Nonetheless, iForest displayed good results against unknown attacks and the DRL model demonstrated the possible benefits of employing this methodology to continuously improve the detection. Overall, the obtained results indicate that the analyzed techniques are well suited for IoT intrusion detection.
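
The sketch below shows the general shape of such a comparison on synthetic data (not IoT-23), contrasting a supervised LightGBM classifier with an unsupervised Isolation Forest fitted on benign traffic only; it assumes scikit-learn and the lightgbm package are installed.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import IsolationForest
    from sklearn.metrics import f1_score
    from lightgbm import LGBMClassifier

    # Synthetic imbalanced flows: label 1 = attack, label 0 = benign.
    X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Supervised model trained on labelled data.
    clf = LGBMClassifier(n_estimators=200).fit(X_tr, y_tr)
    print("LightGBM F1:", round(f1_score(y_te, clf.predict(X_te)), 3))

    # Unsupervised model fitted on benign traffic only; -1 predictions are treated as attacks.
    iso = IsolationForest(random_state=0).fit(X_tr[y_tr == 0])
    print("iForest  F1:", round(f1_score(y_te, (iso.predict(X_te) == -1).astype(int)), 3))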

An automatized Identity and Access Management system for IoT combining Self-Sovereign Identity and smart contracts
Montassar Naghmouchi, Hella Kaffel and Maryline Laurent
[1] Faculty of Science of Tunis, University of Tunis El Manar, [2] Samovar, Telecom SudParis, Institut Polytechnique de Paris

Nowadays, open standards for self-sovereign identity and access management enable portable solutions that follow the requirements of IoT systems. This paper proposes a blockchain-based identity and access management system for IoT – specifically smart vehicles – as an exemplar use case, showing two interoperable blockchains, Ethereum and Hyperledger Indy, and a self-sovereign identity model.

PERMANENT: Publicly Verifiable Remote Attestation for Internet of Things through Blockchain
Sigurd Frej Joel Jørgensen Ankergård, Edlira Dushku and Nicola Dragoni
DTU Compute, Technical University of Denmark

Remote Attestation (RA) is a security mechanism that allows a centralized trusted entity (Verifier) to check the trustworthiness of a potentially compromised IoT device (Prover). With the tsunami of interconnected IoT devices, the advancement of swarm RA schemes that efficiently attest large IoT networks has become crucial. Recent swarm RA approaches work towards distributing the attestation verification from a centralized Verifier to many Verifiers. However, the assumption of trusted Verifiers in the swarm is not practical in large networks. In addition, the state-of-the-art RA schemes do not establish network-wide decentralized trust among the interacting devices in the swarm. This paper proposes PERMANENT, a Publicly Verifiable Remote Attestation protocol for Internet of Things through Blockchain, which stores the historical attestation results of all devices in a blockchain and allows each interacting device to obtain the attestation result. PERMANENT enables devices to make a trust decision based on the historical attestation results. This feature allows interaction among trustworthy devices (or devices with a trust score above a certain threshold) without the computational overhead of attesting every participating device before each interaction. We validate PERMANENT with a proof-of-concept implementation, using Hyperledger Sawtooth as the underlying blockchain. The conducted experiments confirm the feasibility of the PERMANENT protocol.
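
The trust decision itself can be pictured with a small sketch (illustrative only, not the PERMANENT protocol): a device reads a peer's historical attestation results from the ledger and interacts only if the resulting trust score clears a threshold. The scoring function and threshold below are invented for the example.

    def trust_score(history, window=10):
        # Fraction of passed attestations (1 = pass, 0 = fail) over the most recent window.
        recent = history[-window:]
        return sum(recent) / len(recent) if recent else 0.0

    ledger = {                      # toy stand-in for attestation results stored on-chain
        "device-A": [1, 1, 1, 0, 1, 1, 1, 1, 1, 1],
        "device-B": [1, 0, 0, 0, 1, 0, 0],
    }

    THRESHOLD = 0.8
    for device, history in ledger.items():
        score = trust_score(history)
        print(device, "interact" if score >= THRESHOLD else "refuse", round(score, 2))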

Why anomaly-based intrusion detection systems have not yet conquered the industrial market?
S. Seng, Joaquin Garcia-Alfaro and Youssef Laarouchi
EDF R&D, Institut Polytechnique de Paris

In this position paper, we tackle the following question: why are anomaly-based intrusion detection systems (IDS), despite providing excellent results and holding higher (potential) capabilities to detect unknown (zero-day) attacks, still marginal in the industry when compared to, e.g., signature-based IDS? We will try to answer this question by looking at the methods and criteria for comparing IDS, as well as at a specific problem with anomaly-based IDS. We will propose three new criteria for comparing IDS. Finally, we focus our discussion on the specific domain of IDS for critical industrial control systems (ICS).

Detecting Attacks in Network Traffic using Normality Models: The Cellwise Estimator
Felix Heine, Carsten Kleiner, Philip Klostermeyer, Volker Ahlers, Tim Laue and Nils Wellermann
University of Applied Sciences and Arts Hannover

Although machine learning (ML) for intrusion detection is attracting research, its deployment in practice has proven difficult. Major hindrances are that training a classifier requires training data with attack samples, and that trained models are bound to a specific network.
To overcome these problems, we propose two new methods for anomaly-based intrusion detection. Both are trained on normal-only data, making deployment much easier. The first approach is based on One-class SVMs, while the second leverages our novel Cellwise Estimator algorithm, which is based on multidimensional OLAP cubes. The latter has the additional benefit of explainable output, in contrast to many ML methods like neural networks. The created models capture the normal behavior of a network and are used to find anomalies that point to attacks. We present a thorough evaluation using benchmark data and a comparison to related approaches showing that our approach is competitive.
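
A minimal sketch of the normal-only training idea, using the paper's first approach (a One-class SVM from scikit-learn) on synthetic feature vectors; the Cellwise Estimator itself is not reproduced here.

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    normal_train = rng.normal(0, 1, size=(1000, 8))        # normal-only training traffic
    test = np.vstack([rng.normal(0, 1, size=(50, 8)),      # normal test flows
                      rng.normal(5, 1, size=(5, 8))])      # attack-like outliers

    model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(normal_train)
    labels = model.predict(test)                            # +1 = normal, -1 = anomaly
    print("flagged anomalies:", int((labels == -1).sum()), "of", len(test))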

Creation and Detection of German Voice Deepfakes
Andreas Schaad, Dominik Binder, Vanessa Barnekow and Pascal Munaretto
University of Applied Sciences Offenburg

Synthesizing voice with the help of machine learning techniques has made rapid progress over the last years [1]. Given the current increase in using conferencing tools for online teaching, we question just how easy (i.e. needed data, hardware, skill set) it would be to create a convincing voice fake. We analyse how much training data a participant (e.g. a student) would actually need to fake another participant's voice (e.g. a professor). We provide an analysis of the existing state of the art in creating voice deepfakes and align the identified as well as our own optimization techniques in the context of two different voice data sets. A user study with more than 100 participants shows how difficult it is to distinguish real from fake voices (on average, only 37 percent can recognize a professor's fake voice). From a longer-term societal perspective, such voice deepfakes may lead to disbelief by default.

A Modular Runtime Enforcement Model using Multi-Traces
Rania Taleb, Sylvain Hallé and Raphael Khoury
University of Quebec at Chicoutimi

Runtime enforcement seeks to provide a valid replacement for any misbehaving sequence of events of a running system, so that the resulting sequence complies with a user-defined security policy. However, depending on the capabilities of the enforcement mechanism, multiple possible replacement sequences may be available, and the current literature is silent on the question of how to choose the optimal one. In this paper, we propose a new model of enforcement monitors that allows the comparison of multiple alternative corrective enforcement actions and the selection of the optimal one, with respect to an objective, user-defined gradation separate from the security policy. These concepts are implemented using the event stream processor BeepBeep and a use case is presented. Experimental evaluation shows that our proposed framework can dynamically select enforcement actions at runtime, without the need to manually define an enforcement monitor.
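
The selection step can be illustrated with a toy sketch: every candidate sequence below is assumed to already satisfy the policy, and the monitor picks the one that scores best under a user-defined gradation (here, a made-up cost counting suppressed and inserted events; the paper's gradations and BeepBeep machinery are not reproduced).

    def gradation(original, candidate):
        # Hypothetical cost: number of events suppressed plus number of events inserted.
        suppressed = sum(1 for e in original if e not in candidate)
        inserted = sum(1 for e in candidate if e not in original)
        return suppressed + inserted

    observed = ["open", "write", "send_plaintext", "close"]
    candidates = [
        ["open", "write", "close"],                               # suppress the offending event
        ["open", "write", "encrypt", "send_plaintext", "close"],  # insert a protective event first
    ]

    best = min(candidates, key=lambda c: gradation(observed, c))
    print("selected corrective sequence:", best)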

A Tight Integration of Symbolic Execution and Fuzzing
Sebastien Bardin, Michaël Marcozzi and Yaelle Vincont
CEA LIST, Université Paris-Saclay

Most bug finding tools rely on either fuzzing or symbolic execution. While both work well in some situations, fuzzing struggles with complex conditions and symbolic execution suffers from path explosion and high constraint solving costs. In order to enjoy the advantages of both techniques, we propose a new approach called Lightweight Symbolic Execution (LSE) that integrates well with fuzzing. In particular, LSE does not require any call to a constraint solver and allows for quickly enumerating inputs. In this short paper, we present the basic concepts of LSE together with promising preliminary experiments.

At the bottom of binary analysis: instructions
Alexandre Talon and Guillaume Bonfante
GSCOP, LORIA

We present here a careful exploration of the set of instructions for the x86 processor architecture. This is a preliminary step towards a systematic comparison of SMT-based retro-engineering tools, which arose in the context of binary code retro-engineering. All these tools themselves rely on more elementary disassembly tools. In this contribution, we attack the problem at its most atomic level: the instructions. Trading off between the size of the list and the correctness of the future comparison, we prepare a good list of instructions.

Asset Sensitivity for Aligning Risk Assessment Across Multiple Units in Complex Organizations
Carla Mascia and Silvio Ranise
University of Trento, Fondazione Bruno Kessler

A cyber-risk assessment conducted in a large organization may lead to heterogeneous results due to the subjectivity of certain aspects of the evaluation, especially those concerning the negative consequences (impact) of a cyber-incident. To address this problem, we propose an approach based on the identification of a set of sensitivity features, i.e. certain attributes of the assets or processing activities that are strongly related to the levels of impact of cyber-incidents. We apply our approach to revise the results of a Data Protection Impact Assessment, a mandatory activity for complying with GDPR, conducted in a medium-to-large organization of the Italian Public Administration, and we obtain encouraging results.

An Extensive Comparison of Systems for Entity Extraction from Log Files
Anubhav Chhabra, Paula Branco, Guy-Vincent Jourdan and Herna Viktor
University of Ottawa

Log parsing is the process of extracting logical units from system, device or application generated logs. It holds utmost importance in the field of log analytics and forensics. Many security analytic tools rely on logs to detect, prevent and mitigate attacks. It is critical for these tools to extract information from large volumes of logs from multiple evolving sources. Log parsers typically require human intervention, as regular expressions or grammars need to be provided to extract knowledge. Teams of experts are required to keep these rules up to date, in a time-consuming and costly process that is prone to errors and fails when new logs are added. On the other hand, strategies based on machine learning can automate the parsing of logs, thereby reducing time consumption and human labour. In this paper, we perform an extensive and systematic comparison of different log parsing techniques and systems based on machine learning approaches. These include baseline learning solutions such as Perceptron, Stochastic Gradient Descent and Multinomial Naive Bayes, a graphical model (Conditional Random Fields), a pre-trained sequence-to-sequence model (NERLogParser), and a pre-trained language model (BERT). Moreover, we experiment with the Transformer Neural Network, modelling the Named Entity Recognition task as a sequence-to-sequence generation task, an approach not previously tested in this domain. An extensive set of experiments is carried out on in-scope and out-of-scope datasets, aiming to estimate the performance on log files from known and unknown log sources. We use multiple evaluation schemes in order to: (i) compare the different systems; and (ii) understand the quality of the information extracted, providing deeper insights on the advantages and disadvantages of the different systems. Overall, we found that sequence-to-sequence models tend to perform better both on in-scope and out-of-scope data.

Choosing wordlists for password guessing: an adaptive multi-armed bandit approach
Hazel Murray and David Malone
Munster Technological University, Maynooth University

A password guesser often uses wordlists (e.g. lists of previously leaked passwords, dictionaries of words in different languages, and lists of the most common passwords) to guess unknown passwords. The attacker needs to make a decision about what guesses to make and in what order. In an online guessing environment this is particularly important, as they may be locked out after a certain number of wrong guesses. In this paper, we employ a multi-armed bandit model to show that an adaptive strategy can actively learn characteristics of the passwords it is guessing, and can leverage this information to dynamically weight the most appropriate wordlist. We also show that this can be used to identify the nationality of the users in a password set, and that guessing can be improved by guessing using passwords chosen by other users of the same nationality.
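
A minimal epsilon-greedy sketch of the adaptive idea (the paper's exact bandit formulation and reward definition may differ): each wordlist is an arm, a reward is a successful guess, and the guesser keeps shifting its budget towards the wordlist that has paid off so far.

    import random

    def adaptive_guessing(targets, wordlists, budget=1000, eps=0.1):
        counts = [0] * len(wordlists)     # guesses spent per wordlist
        rewards = [0] * len(wordlists)    # successful guesses per wordlist
        cursors = [0] * len(wordlists)    # next untried candidate in each wordlist
        cracked = set()
        for _ in range(budget):
            if random.random() < eps:
                arm = random.randrange(len(wordlists))
            else:                          # exploit the best empirical success rate so far
                arm = max(range(len(wordlists)),
                          key=lambda i: rewards[i] / counts[i] if counts[i] else float("inf"))
            if cursors[arm] >= len(wordlists[arm]):
                continue                   # this wordlist is exhausted
            guess = wordlists[arm][cursors[arm]]
            cursors[arm] += 1
            counts[arm] += 1
            if guess in targets and guess not in cracked:
                cracked.add(guess)
                rewards[arm] += 1
        return cracked

    leaked = {"password", "azerty", "soleil", "123456"}
    print(adaptive_guessing(leaked, [["qwerty", "dragon", "123456"],
                                     ["azerty", "soleil", "chouchou"]]))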

Cut It: Deauthentication Attacks on Protected Management Frames in WPA2 and WPA3
Karim Lounis, Steven Ding and Mohammad Zulkernine
Queen’s University

Deauthentication attacks on the Wi-Fi protocol (IEEE 802.11) were pointed out in early 2003. In these attacks, an attacker usually impersonates a Wi-Fi access point (a.k.a. authenticator) and sends spoofed deauthentication frames to the connected Wi-Fi supplicants. The connected supplicants receive the frames and process them as if they were sent by the legitimate access point. These frames instruct the connected Wi-Fi supplicants to invalidate their current association and authentication to the access point and get disconnected from the Wi-Fi network. This is possible due to the absence of authentication for management frames (which include deauthentication frames) in the currently used Wi-Fi security mechanisms (i.e., WPA and WPA2). To thwart these attacks, as well as many other Denial-of-Service attacks, an amendment, standardized as IEEE 802.11w, was published in 2009 as a set of new security mechanisms and procedures to enforce authentication, data freshness, and confidentiality on certain management frames. This amendment uses PMF (Protected Management Frames) to provide authentication of management frames and prevent the occurrence of many management frame spoofing-related attacks, including deauthentication attacks. Although only a few Wi-Fi-certified devices have incorporated IEEE 802.11w as an optional mechanism, the new Wi-Fi security mechanism, WPA3, has made IEEE 802.11w mandatory to provide better security against those Denial-of-Service attacks. In this paper, we demonstrate through various attack scenarios the feasibility of deauthentication attacks on PMF-enabled WPA2-PSK and WPA3-PSK networks. We provide interpretations to explain the reason behind the feasibility of the attacks and describe possible countermeasures to prevent them.

Lightweight Authentication and Encryption for Online Monitoring in IIoT Environments
Armando Miguel Garcia and Matthias Hiller
Fraunhofer AISEC

Emerging industrial technologies building upon lightweight, mobile and connected embedded devices increase the need for trust and for enforcing access control in industrial environments. We propose an approach which combines Physical Unclonable Functions (PUFs), firmware fingerprinting and Attribute-Based Encryption (ABE) to enable authentication and fine-grained access control of the data generated on the IoT end nodes. This approach is evaluated using an experimental setup and its feasibility for online monitoring in industrial environments is demonstrated. The proposed architecture adds a processing overhead of under 1% on a low-cost microcontroller and a communication latency of 144 milliseconds over a long-range wireless link, while having low power consumption and protecting against multiple cyber-threats.

Employing Feature Selection to Improve the Performance of Intrusion Detection Systems
Ricardo Avila, Raphaël Khoury, Christophe Pere and Kobra Khanmohammadi
Université du Québec à Chicoutimi, La Capitale Financial Group Inc., Geotab Inc.

Intrusion detection systems use datasets with various features to detect attacks and protect computers and network systems from these attacks. However, some of these features are irrelevant and may reduce the intrusion detection system's speed and accuracy. In this study, we use feature selection methods to eliminate non-relevant features. We compare the performance of fourteen feature-selection methods combined with three ML techniques, using the UNSW-NB15, Kyoto 2006+ and DoHBrw-2020 datasets. The most relevant features of each dataset are identified, which shows that feature selection methods can increase the accuracy of anomaly detection and classification.
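
The following sketch shows the basic pattern on synthetic data (not the datasets above), using one selector (mutual information via scikit-learn's SelectKBest) out of the fourteen compared in the paper, and a random forest as the downstream classifier.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score

    # 40 features, only 8 of which are informative.
    X, y = make_classification(n_samples=2000, n_features=40, n_informative=8, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)

    print("all 40 features:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))

    X_sel = SelectKBest(mutual_info_classif, k=10).fit_transform(X, y)
    print("top 10 features:", round(cross_val_score(clf, X_sel, y, cv=5).mean(), 3))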

Implementation of Lightweight Ciphers and Their Integration into Entity Authentication with IEEE 802.11 Physical Layer Transmission
Yunjie Yi, Kalikinkar Mandal and Guang Gong
University of Waterloo, University of New Brunswick

This paper investigates the performance of three lightweight authenticated ciphers, namely ACE, SPIX and WAGE, in the WiFi and CoAP handshaking authentication protocols. We implement the WiFi and CoAP handshake protocols and the IEEE 802.11a physical layer communication protocol in software defined radio (SDR) and embed these two handshaking protocols into the IEEE 802.11a OFDM communication protocol to measure the performance of the three ciphers. We present the construction of the KDF and MIC used in the handshaking authentication protocols and provide optimized implementations of ACE, SPIX and WAGE, including the KDF and MIC, on three different (low-power) microcontrollers. The performance results of these three ciphers when adopted in the WiFi and CoAP protocols are presented. Our experimental results show that the cryptographic functionalities are the bottleneck in the handshaking and data protection protocols.
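
For readers unfamiliar with the roles of the KDF and MIC in such handshakes, the sketch below uses HMAC-SHA256 as a stand-in for the permutation-based constructions the paper builds from ACE, SPIX and WAGE; the labels and key-block split are illustrative of a WPA2-style 4-way handshake, not the paper's exact design.

    import hmac, hashlib, os

    def kdf(secret: bytes, label: bytes, nonce_a: bytes, nonce_s: bytes, n_bytes: int) -> bytes:
        # Expand a shared secret and the two handshake nonces into n_bytes of key material.
        out, counter = b"", 0
        while len(out) < n_bytes:
            out += hmac.new(secret, label + nonce_a + nonce_s + bytes([counter]),
                            hashlib.sha256).digest()
            counter += 1
        return out[:n_bytes]

    def mic(key: bytes, message: bytes) -> bytes:
        # Message integrity code over a handshake message.
        return hmac.new(key, message, hashlib.sha256).digest()[:16]

    psk = os.urandom(16)
    anonce, snonce = os.urandom(16), os.urandom(16)
    key_block = kdf(psk, b"pairwise key expansion", anonce, snonce, 48)
    kck, kek, tk = key_block[:16], key_block[16:32], key_block[32:48]
    print("MIC over message 2:", mic(kck, b"handshake message 2" + snonce).hex())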

HistoTrust: ethereum-based attestation of a data history built with OP-TEE and TPM
Dylan Paulin, Christine Hennebert, Thibault Franco-Rondisson, Romain Jayles, Thomas Loubier and Raphaël Collado
CEA

Device- or user-centric system architectures allow everyone to manage their personal or confidential data. But how can the trust required for the stakeholders of a given ecosystem to work together be established, with each preserving their own interests and business? HistoTrust introduces a solution to this problem. A system architecture separating the data belonging to each stakeholder from the cryptographic proofs (attestations) of their history is implemented. An Ethereum ledger is deployed to maintain the history of the attestations, thus guaranteeing their tamper-resistance, their timestamps and their order. The ledger allows these attestations to be shared between the stakeholders in order to create trust without revealing secret or critical data. In each IoT device, the root-of-trust secrets used to attest the data produced are protected at rest in a TPM ST33 and during execution within an ARM Cortex-A7 TrustZone. The designed solution aims to be resilient, robust to software attacks, and to present a high level of protection against side-channel attacks and fault injections. Furthermore, the real-time constraints of an embedded industrial application are respected: the integration of the security measures does not impact the performance in use.

K-Smali: an Executable Semantics for Program Verification of Reversed Android Applications
Marwa Ziadia, Mohamed Mejri and Jaouhar Fattahi
Université Laval

One of the main weaknesses threatening smartphone security is the abysmal lack of tools and environments that allow formal verification of application actions, and thus early detection of any malicious behavior before irreversible damage is done. In this regard, formal methods appear to be the most natural and secure way for rigorous and unambiguous specification, as well as for the verification, of such applications. In previous work, we proposed a formal approach to build the operational semantics of a given Android application by reverse engineering its assembly code, which we called Smali+. In this paper, we rely on the same idea and enhance it by using a language definitional framework. We choose the K framework to define the Smali semantics. We briefly introduce the K framework; then, we present a formal K semantics of Smali code, called K-Smali. The semantics includes multi-threading, thread scheduling and synchronization. The proposed semantics supports linear temporal logic model-checking, which provides a suitable and comprehensive formal environment for checking a wide range of Android security-related properties.

Homomorphic Evaluation of Lightweight Cipher Boolean Circuits
Kalikinkar Mandal and Guang Gong
University of Waterloo, University of New Brunswick

Motivated by a number of applications of lightweight ciphers in privacy-enhancing cryptography (PEC) techniques such as secure multiparty computation (SMPC), fully homomorphic encryption (FHE) and zero-knowledge proofs (ZKP) for verifiable computing, we investigate the Boolean circuit complexity of the core primitives of the NIST lightweight cryptography (LWC) round 2 candidates. In PEC, the functionalities (e.g., ciphers) often need to be expressed as Boolean or arithmetic circuits before applying PEC techniques, and the size of a circuit is one of the efficiency factors. As a use case, we consider the homomorphic evaluation of the core AEAD circuits in the cloud-outsourcing setting using the TFHE scheme, and present the performance results.
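
As a toy illustration of "circuit size as an efficiency factor", the snippet below represents a made-up two-output Boolean circuit as a gate list, evaluates it in the clear, and counts its AND and XOR gates; in homomorphic settings such counts drive the cost of evaluating a cipher gate by gate.

    from collections import Counter

    # (gate, output_wire, input_wire_1, input_wire_2) -- a made-up toy circuit, not a real S-box.
    circuit = [
        ("XOR", "t0", "x0", "x1"),
        ("AND", "t1", "x0", "x2"),
        ("XOR", "y0", "t0", "t1"),
        ("AND", "y1", "t1", "x1"),
    ]

    def evaluate(circuit, inputs):
        wires = dict(inputs)
        for gate, out, a, b in circuit:
            wires[out] = (wires[a] & wires[b]) if gate == "AND" else (wires[a] ^ wires[b])
        return {w: v for w, v in wires.items() if w.startswith("y")}

    print("outputs:", evaluate(circuit, {"x0": 1, "x1": 0, "x2": 1}))
    counts = Counter(gate for gate, *_ in circuit)
    print("AND gates:", counts["AND"], "| XOR gates:", counts["XOR"])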

Breaking Black Box Crypto-Devices using Laser-based Fault Injection
Bodo Selmke, Emanuele Strieder, Johann Heyszl, Tobias Damm and Sven Freud
Fraunhofer AISEC, Bundesamt für Sicherheit in der Informationstechnik

Laser fault injection attacks on hardware implementations are challenging, due to the inherently large parameter space of the fault injection and the unknown underlying implementation of the attacked device. In this work we report details from an exemplary laser fault attack on the AES-based authentication chip Microchip ATAES132A, which led to full secret key extraction. In addition, we were able to reveal some details of the underlying implementation. This chip claims to feature various countermeasures and tamper detection mechanisms and is therefore a representative candidate for devices to be found in many different applications. On this basis we describe a systematic approach for laser fault attacks on devices in a black-box scenario, including the determination of all relevant attack parameters such as fault locations, timings, and energy settings.

Automatic Annotation of Confidential Data in Java Code
Iulia Bastys, Pauline Bolignano, Franco Raimondi and Daniel Schoepe
Chalmers University of Technology, Amazon, Middlesex University

The problem of confidential information leaks can be addressed by using automatic tools that take a set of annotated inputs (the sources) and track their flow to public sinks. Unfortunately, manually annotating the code with labels specifying the secret sources is one of the main obstacles to the adoption of such trackers.
In this work, we present an approach for the automatic generation of labels for confidential data in Java programs. Our solution is based on a graph-based representation of Java methods: starting from a minimal set of known API calls, it propagates the labels both intra- and inter-procedurally until a fix-point is reached.
In our evaluation, we encode our synthesis and propagation algorithm in Datalog and assess the accuracy of our technique on seven previously annotated internal code bases, where we can reconstruct 75% of the pre-existing manual annotations. In addition to this single data point, we also perform an assessment using samples from the SecuriBench-micro benchmark, and we provide additional sample programs that demonstrate the capabilities and the limitations of our approach.
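
The propagation idea can be sketched in a few lines (this mirrors the description above, not the Datalog encoding used in the evaluation): starting from a seed set of known sources, the confidential label flows along data-flow edges of the graph until nothing changes.

    def propagate_labels(edges, seeds):
        # edges: (source_node, destination_node) pairs of a data-flow / call graph.
        labelled = set(seeds)
        changed = True
        while changed:                       # iterate to a fix-point
            changed = False
            for src, dst in edges:
                if src in labelled and dst not in labelled:
                    labelled.add(dst)
                    changed = True
        return labelled

    # Hypothetical example: the return value of a known sensitive API flows into other nodes.
    edges = [("getSsn()", "ssn"), ("ssn", "record.ssnField"), ("record.ssnField", "logger.arg")]
    print(propagate_labels(edges, {"getSsn()"}))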

A Quantile-based Watermarking Approach for Distortion Minimization
Maikel Lázaro Pérez Gort, Martina Olliaro and Agostino Cortesi
Ca’ Foscari University of Venice

Distortion-based watermarking techniques embed the watermark by performing tolerable changes in the digital assets being protected. For relational data, mark insertion can be performed over the different data types of the database relations' attributes. An important goal for distortion-based approaches is to minimize as much as possible the changes that the watermark embedding provokes in the data, preserving their usability, watermark robustness, and capacity. This paper proposes a quantile-based watermarking technique for numerical cover types, focused on preserving the distribution of the attributes used as mark carriers. The experiments performed to validate our proposal show a significant distortion reduction compared to traditional approaches while maintaining watermark capacity levels. Positive results regarding robustness are also visible, evidencing our technique's resilience against subset attacks.
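
A toy numeric example of the underlying intuition (not the paper's algorithm): the perturbation used to embed each mark bit is scaled to the width of the quantile bin the value falls into, so the attribute's overall distribution is barely disturbed. The bit assignment and scaling factor below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    values = rng.normal(100, 15, size=10_000)            # a numeric attribute used as mark carrier
    edges = np.quantile(values, np.linspace(0, 1, 11))   # decile boundaries

    def embed(value, bit, edges, fraction=0.001):
        i = int(np.clip(np.searchsorted(edges, value) - 1, 0, len(edges) - 2))
        delta = fraction * (edges[i + 1] - edges[i])      # tolerable change, local to the bin
        return value + delta if bit else value - delta

    marked = np.array([embed(v, i % 2, edges) for i, v in enumerate(values)])
    print("max absolute change:", float(np.abs(marked - values).max()))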

EXMULF: An Explainable Multimodal Content-based Fake News Detection System
Sabrine Amri, Dorsaf Sallami and Esma Aïmeur
DIRO, University of Montreal

In this work, we present an explainable multimodal content-based fake news detection system. It is concerned with the veracity analysis of information based on its textual content and the associated image, together with an Explainable AI (XAI) assistant. To the best of our knowledge, this is the first study that aims to provide a fully explainable multimodal content-based fake news detection system using Latent Dirichlet Allocation (LDA) topic modeling, Vision-and-Language BERT (ViLBERT) and Local Interpretable Model-agnostic Explanations (LIME) models. Our experiments on two real-world datasets demonstrate the relevance of learning the connection between the two modalities, with an accuracy that exceeds that of 10 state-of-the-art fake news detection models.
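
To give a flavour of the explainability side only, the sketch below trains a toy text-only pipeline (word counts, LDA topics, then logistic regression) and asks LIME to explain one prediction; the data and labels are invented, and the image branch (ViLBERT) of the actual system is omitted. It assumes scikit-learn and the lime package are installed.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from lime.lime_text import LimeTextExplainer

    texts = ["miracle cure revealed doctors hate it", "city council approves new budget",
             "aliens secretly control the election results", "local team wins championship game"]
    labels = [1, 0, 1, 0]                                 # 1 = fake, 0 = real (toy labels)

    pipe = make_pipeline(CountVectorizer(),
                         LatentDirichletAllocation(n_components=2, random_state=0),
                         LogisticRegression())
    pipe.fit(texts, labels)

    explainer = LimeTextExplainer(class_names=["real", "fake"])
    explanation = explainer.explain_instance("shocking miracle cure for everything",
                                             pipe.predict_proba, num_features=4)
    print(explanation.as_list())                          # tokens pushing towards fake / real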
