A Knowledge Distillation-Based Lightweight Task Offloading Algorithm for Mobile Edge Computing
DOI: https://doi.org/10.6919/ICJE.202604_12(4).0042

Keywords: Mobile Edge Computing; Task Offloading; Deep Reinforcement Learning; Knowledge Distillation; Lightweight Models; D3QN

Abstract
Traditional deep reinforcement learning offloading strategies in mobile edge computing suffer from complex models, high inference latency, and difficulty of deployment on resource-constrained devices. To address these issues, this paper proposes a lightweight D3QN computation offloading method based on knowledge distillation. The method constructs a teacher-student network architecture and transfers the decision-making capability of the teacher model to a lightweight student network via offline knowledge distillation. An adaptive exploration strategy incorporating heuristic rules is designed to accelerate convergence, whilst LSTM-based temporal awareness and a dynamic energy consumption normalisation mechanism are introduced to optimise quality of experience (QoE). Experimental results demonstrate that, at a compression ratio of 0.4, the student model's parameter count is reduced by 77.61%, inference latency is reduced by 26.32%, and QoE reaches 829.07, approaching the performance of the teacher model and providing an efficient, feasible lightweight solution for edge intelligence offloading.
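The distillation step at the core of this approach can be illustrated with a minimal sketch. Assuming a dueling Q-network teacher and a narrower student (the network widths, temperature, state/action dimensions, and loss weighting below are illustrative assumptions, not the paper's settings), offline knowledge distillation amounts to matching the student's softened action distribution to the teacher's over a batch of replayed states:

# Minimal sketch (illustrative only): offline knowledge distillation from a
# D3QN-style teacher to a smaller student Q-network, written in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DuelingQNet(nn.Module):
    """Dueling Q-network: shared trunk with separate value and advantage heads."""
    def __init__(self, state_dim, action_dim, hidden):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.adv = nn.Linear(hidden, action_dim)

    def forward(self, s):
        h = self.trunk(s)
        a = self.adv(h)
        # Standard dueling aggregation: Q = V + (A - mean(A))
        return self.value(h) + a - a.mean(dim=1, keepdim=True)

def distill_step(teacher, student, optimizer, states, temperature=2.0):
    """One offline distillation update: KL divergence between the student's and
    the teacher's temperature-softened action distributions (soft-target loss)."""
    with torch.no_grad():
        t_logits = teacher(states) / temperature
    s_logits = student(states) / temperature
    loss = F.kl_div(F.log_softmax(s_logits, dim=1),
                    F.softmax(t_logits, dim=1),
                    reduction="batchmean") * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with hypothetical dimensions: a 0.4-width student distilled from the teacher.
# In practice the teacher would be a fully trained D3QN; here it is untrained.
teacher = DuelingQNet(state_dim=20, action_dim=6, hidden=256)
student = DuelingQNet(state_dim=20, action_dim=6, hidden=int(256 * 0.4))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
batch = torch.randn(64, 20)  # states sampled from an offline replay buffer
print(distill_step(teacher, student, opt, batch))

The KL-based soft-target loss shown here is one common choice for distilling Q-networks; the paper's exact loss formulation, adaptive exploration strategy, LSTM component, and energy normalisation mechanism are not reproduced in this sketch.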