Open Access
AMNED: An Efficient Framework for Spiking Neuron Coding in AirComp Federated Learning
Author(s) -
Juncheng Ji,
Chan-Tong Lam,
Ke Wang,
Benjamin K. Ng
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3595270
Subject(s) - aerospace; bioengineering; communication, networking and broadcast technologies; components, circuits, devices and systems; computing and processing; engineered materials, dielectrics and plasmas; engineering profession; fields, waves and electromagnetics; general topics for engineers; geoscience; nuclear engineering; photonics and electrooptics; power, energy and industry applications; robotics and control systems; signal processing and analysis; transportation
This paper proposes the Adaptive Memristor Neuron Encoding-Decoding (AMNED) framework for AirComp Federated Learning (ACFL), enabling efficient, privacy-preserving model aggregation optimized for resource-constrained wireless environments. Over-the-Air Computation (AirComp) has emerged as a key enabler of future ACFL technologies: ACFL integrates AirComp with federated learning, enhancing data privacy and exploiting the computation available on network devices. Its aggregation scheme leverages the superposition property of wireless signals, allowing multiple devices to transmit model updates simultaneously, which reduces communication overhead and alleviates the computational burden at central aggregation points. Despite these advantages, challenges such as device location variability, channel quality instability, and limited computational and energy resources persist. Motivated by these challenges, the AMNED framework combines spiking neural networks with memristor-based techniques to improve signal reconstruction accuracy and energy utilization, supporting efficient and secure model aggregation in ACFL networks. Experiments in an ACFL setting with varying SNRs and interference across 10 devices show that AMNED achieves 79.3% communication efficiency, compared to 73.2% for the Stepwise Forward algorithm. It reaches an MMSE of 0.3044 and converges 7% faster in a toy FL task using synthetic gradients. A dynamic threshold adjustment improves efficiency by 2% at 10 dB SNR, and a hybrid temporal-rate encoding method lowers MMSE by 5% under high interference, tested on MNIST data. AMNED accurately reconstructs signals from spike sequences, enabling stable and precise model updates, and its low energy use makes it suitable for edge devices and large-scale federated learning.
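
To make the encode-superpose-decode idea in the abstract concrete, the sketch below simulates rate-based spike encoding of local gradient values, over-the-air summation on an additive noisy channel, and decoding of the received sum back into an averaged update. It is a minimal illustration under stated assumptions, not the AMNED method itself: the integrate-and-fire encoder, the SNR-dependent dynamic_threshold heuristic, the channel model, and all function names are introduced here purely for illustration.

# Illustrative sketch only: toy rate-coded spike aggregation over an AirComp-style
# additive channel. This is NOT the paper's AMNED algorithm; encoder, threshold
# heuristic, and channel model are simplifying assumptions for this example.
import numpy as np

rng = np.random.default_rng(0)

def if_encode(value, threshold, n_steps=64):
    """Integrate-and-fire rate encoding: accumulate `value` each step and emit a
    spike whenever the membrane potential crosses `threshold`.
    Assumes 0 <= value < threshold so the spike train does not saturate."""
    v = 0.0
    spikes = np.zeros(n_steps)
    for t in range(n_steps):
        v += value
        if v >= threshold:
            spikes[t] = 1.0
            v -= threshold
    return spikes

def decode(total_spike_mass, threshold, n_steps=64):
    """Invert the encoder: (spike count / n_steps) * threshold recovers the value."""
    return total_spike_mass * threshold / n_steps

def dynamic_threshold(snr_db, base=1.0):
    """Assumed heuristic (not from the paper): keep the firing threshold low at low
    SNR so devices emit more spikes and the noisy superposed signal averages out."""
    return max(base, base * (1.0 + snr_db / 20.0))

def aircomp_average(values, snr_db=10.0, n_steps=64):
    """Devices transmit spike trains simultaneously; the wireless channel adds them
    (superposition), and the server decodes the noisy sum into an averaged update."""
    threshold = dynamic_threshold(snr_db)
    spikes = np.stack([if_encode(v, threshold, n_steps) for v in values])  # (K, T)
    superposed = spikes.sum(axis=0)                      # over-the-air summation
    noise_std = np.sqrt(superposed.var() / 10 ** (snr_db / 10.0) + 1e-12)
    received = superposed + rng.normal(0.0, noise_std, n_steps)
    total_spike_mass = received.sum()                    # noisy total spike count
    return decode(total_spike_mass, threshold, n_steps) / len(values)

if __name__ == "__main__":
    local_grads = [0.2, 0.5, 0.8, 0.4]                   # toy per-device gradient values
    print("true average:     ", np.mean(local_grads))
    print("AirComp estimate: ", aircomp_average(local_grads, snr_db=10.0))

With four devices and a 10 dB channel, the estimate should land close to the true average of the local values; the superposition step is what lets all devices transmit at once instead of sequentially, which is the source of the communication savings described above.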
