Adaptive Sleep-Wake Control using Reinforcement Learning in Sensor Networks
Date | 2014
Abstract |
The aim of this paper is to allocate the 'sleep time' of the individual sensors in an intrusion detection application so that the energy consumption of the sensors is reduced while the tracking error is kept to a minimum. We propose two novel reinforcement learning (RL) based algorithms that attempt to minimize a long-run average cost objective. Both algorithms incorporate feature-based representations to handle the curse of dimensionality associated with the underlying partially observable Markov decision process (POMDP). Further, the feature selection scheme used in our algorithms balances the energy cost and tracking cost factors, which in turn assists the search for the optimal sleeping policy. We also extend these algorithms to a setting where the intruder's mobility model is unknown, by incorporating a stochastic iterative scheme to estimate the mobility model. Simulation results on a synthetic 2-D network setting are encouraging. |
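The energy-versus-tracking tradeoff described in the abstract can be illustrated with a much simpler stand-in. The sketch below is NOT the paper's feature-based POMDP algorithms: it uses a toy 1-D corridor (the paper uses a 2-D network), a known lazy-random-walk intruder, and tabular average-cost (SMDP) Q-learning in place of the paper's feature-based schemes. All names, costs, and parameters here are invented for illustration.

```python
import random

random.seed(7)

# Toy sleep-wake scheduling: one sensor watches cell 0 of a short 1-D
# corridor, the intruder performs a lazy random walk, and the sensor learns
# how long to sleep after each sensing step so as to trade energy spent
# awake against tracking misses while asleep.

N = 5                    # corridor cells 0..N-1; the sensor sits at cell 0
ACTIONS = (0, 1, 2)      # extra sleep steps taken after each sensing step
ENERGY, MISS = 1.0, 5.0  # cost of an awake step; per-step cost of an unseen intruder at cell 0
ALPHA, BETA, EPS = 0.1, 0.01, 0.1

def walk(pos):
    """Lazy random walk on {0, ..., N-1}."""
    return min(max(pos + random.choice((-1, 0, 1)), 0), N - 1)

def epoch(pos, sleep):
    """Sense once (awake), then sleep for `sleep` steps.

    Returns (total cost, epoch length in steps, cell sensed at the next wake-up).
    """
    cost = ENERGY
    pos = walk(pos)
    for _ in range(sleep):
        if pos == 0:            # intruder passes the sleeping sensor untracked
            cost += MISS
        pos = walk(pos)
    return cost, 1 + sleep, pos

Q = [[0.0] * len(ACTIONS) for _ in range(N)]
rho = 0.0                       # running estimate of the average cost per step
pos = random.randrange(N)
for _ in range(30000):
    s = pos
    greedy = min(range(len(ACTIONS)), key=lambda i: Q[s][i])
    ai = random.randrange(len(ACTIONS)) if random.random() < EPS else greedy
    cost, tau, pos = epoch(pos, ACTIONS[ai])
    # SMDP relative-value update: subtract the average cost accrued over tau steps
    Q[s][ai] += ALPHA * (cost - rho * tau + min(Q[pos]) - Q[s][ai])
    if ai == greedy:            # track the per-step cost rate of the greedy policy
        rho += BETA * (cost / tau - rho)

# Greedy sleep duration for each sensed intruder position
policy = [ACTIONS[min(range(len(ACTIONS)), key=lambda i: Q[s][i])] for s in range(N)]
print(policy)
```

The learned policy sleeps little when the intruder was last sensed near cell 0 and longer when it was sensed far away, which is the qualitative behavior the paper's algorithms are designed to find under the long-run average cost criterion.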
Format |
application/pdf |
Identifier |
http://eprints.iisc.ernet.in/51339/1/6th_int_con_com_sys_net_2014.pdf Prashanth, LA and Chatterjee, Abhranil and Bhatnagar, Shalabh (2014) Adaptive Sleep-Wake Control using Reinforcement Learning in Sensor Networks. In: 6th International Conference on Communication Systems and Networks (COMSNETS), JAN 07-10, 2014, Bangalore, INDIA. |
Publisher |
IEEE |
Relation |
http://dx.doi.org/10.1109/COMSNETS.2014.6734874 http://eprints.iisc.ernet.in/51339/ |
Keywords | #Computer Science & Automation (Formerly, School of Automation) #Electrical Engineering |
Type |
Conference Proceedings (non-peer-reviewed)