Open Access
Deep Learning–Based Energy Beamforming With Transmit Power Control in Wireless Powered Communication Networks
Author(s) -
Iqra Hameed,
Pham V. Tuan,
Insoo Koo
Publication year - 2021
Publication title - IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/access.2021.3121724
Subject(s) - aerospace; bioengineering; communication, networking and broadcast technologies; components, circuits, devices and systems; computing and processing; engineered materials, dielectrics and plasmas; engineering profession; fields, waves and electromagnetics; general topics for engineers; geoscience; nuclear engineering; photonics and electrooptics; power, energy and industry applications; robotics and control systems; signal processing and analysis; transportation
In this paper, we propose deep learning–based energy beamforming for a multi-antenna wireless powered communication network (WPCN). We consider a WPCN in which a hybrid access point (HAP) equipped with multiple antennas broadcasts an energy-bearing signal to wireless devices using energy beamforming. We investigate the joint optimization of the time allocation for wireless energy transfer (WET) and wireless information transfer (WIT) together with the design of the energy beams, while minimizing the transmit power at the HAP for efficient use of its available resources. This problem is non-convex and numerically intractable to solve directly. The traditional approach in the literature relies on an iterative algorithm that incurs high computational and time complexity, which is not feasible for real-time applications. We study and analyze a deep neural network (DNN)-based scheme and propose a faster, more efficient approach that provides a fair approximation of a near-optimal solution to this problem. To train the proposed DNN, we acquire training data samples from a sequential parametric convex approximation (SPCA)-based iterative algorithm. Because acquiring data samples and training the DNN is computationally demanding, these steps are performed offline, so that at run time the trained DNN quickly solves the real-time resource allocation optimization problem. Through simulation results, we show that the proposed DNN scheme provides a fair approximation of the traditional SPCA method with low computational and time complexity.
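The sketch below is a minimal, hypothetical illustration (not the authors' code) of the supervised learning step the abstract describes: a fully connected DNN is trained offline on channel realizations labeled by an SPCA-based solver, and a single forward pass then approximates the SPCA solution (WET/WIT time split and energy-beamforming weights) at run time. The antenna and user counts, network width, input/output encoding, and the train_offline helper are illustrative assumptions.

```python
# Hypothetical sketch: a DNN that learns the mapping from channel-state inputs to the
# SPCA solver's outputs, so inference replaces the iterative optimization at run time.
import torch
import torch.nn as nn

N_ANT = 4        # assumed number of HAP antennas
N_USERS = 3      # assumed number of wireless devices

# Input: real/imag parts of the N_USERS x N_ANT downlink channel matrix (flattened).
# Output: 1 time-allocation fraction + real/imag parts of the energy beam vector.
in_dim = 2 * N_USERS * N_ANT
out_dim = 1 + 2 * N_ANT

model = nn.Sequential(
    nn.Linear(in_dim, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, out_dim),
)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_offline(channels, spca_labels, epochs=200):
    """Offline supervised training on samples labeled by the SPCA-based algorithm.

    channels:    (num_samples, in_dim) tensor of flattened channel realizations
    spca_labels: (num_samples, out_dim) tensor of SPCA solutions (time split + beam)
    """
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(channels), spca_labels)
        loss.backward()
        optimizer.step()

# At run time, a single forward pass approximates the SPCA solution:
# prediction = model(new_channel_sample)   # fast, no iterative optimization needed
```

This reflects the general workflow described in the abstract (offline training on SPCA-labeled samples, fast online inference); the actual network architecture, loss, and output parameterization used in the paper may differ.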
