
Research Blog

arshadfarhad9

Updated: Jan 13, 2022

Resource allocation is one of the top trends in LoRaWAN for static and mobile IoT applications these days.


In recent times, LoRa has become a de facto technology for the Internet of Things (IoT) thanks to its long-range connectivity for many end devices (EDs), low deployment cost, and ultra-low energy consumption. LoRaWAN is an open MAC-layer protocol developed by the LoRa Alliance. The protocol defines a star-of-stars topology comprising a large number of EDs, gateways (GWs), a network server (NS), and application servers, as shown in the figure below.
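As a rough, illustrative model of that topology, here is a minimal Python sketch; the class and field names are mine for illustration, not taken from the LoRaWAN specification.

```python
# Minimal sketch of the LoRaWAN star-of-stars topology described above.
# Class and attribute names are illustrative, not from the LoRaWAN spec.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EndDevice:
    dev_eui: str
    spreading_factor: int = 12   # SF7..SF12
    transmit_power: int = 14     # dBm

@dataclass
class Gateway:
    gw_id: str
    end_devices: List[EndDevice] = field(default_factory=list)

@dataclass
class NetworkServer:
    gateways: List[Gateway] = field(default_factory=list)
    # The NS forwards decoded payloads to one or more application servers.
    application_servers: List[str] = field(default_factory=list)

# Star-of-stars: many EDs per gateway, many gateways behind one network server.
ns = NetworkServer(
    gateways=[Gateway("gw-1", [EndDevice("ed-001"), EndDevice("ed-002")])],
    application_servers=["app-server-1"],
)
```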



“Mainly, two parameters are considered for resource allocation in LoRaWAN: spreading factor and transmit power.”

EDs always initiate transmissions toward the GW using two resource parameters, i.e., spreading factor (SF) and transmit power (TP), with the ALOHA channel access mechanism. These parameters are managed by the Adaptive Data Rate (ADR) mechanism, implemented on both the ED and NS sides. ADR is the widely adopted strategy for assigning resources to EDs and is recommended for static IoT applications, such as smart grid and metering.
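To make the network-side ADR step concrete, here is a simplified Python sketch loosely following the commonly cited Semtech-recommended logic: the NS takes the best SNR of recent uplinks, computes a link margin, and spends the resulting steps on lowering SF first and TP second. The 10 dB installation margin, the per-SF SNR floors, and the 2–14 dBm power limits below are assumed defaults, not values from this post.

```python
# Simplified sketch of a network-side ADR decision. Thresholds, power limits,
# and the installation margin are illustrative and region/implementation specific.

REQUIRED_SNR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}  # dB
MARGIN_DB = 10           # assumed installation margin
TP_MIN, TP_MAX = 2, 14   # dBm, EU868-style limits (assumed)

def adr_adjust(snr_history, sf, tp):
    """Return a new (sf, tp) pair based on the best SNR of recent uplinks."""
    snr_max = max(snr_history)
    margin = snr_max - REQUIRED_SNR[sf] - MARGIN_DB
    nstep = int(margin // 3)

    # Positive steps: first lower SF (faster data rate), then lower TP.
    while nstep > 0 and sf > 7:
        sf -= 1
        nstep -= 1
    while nstep > 0 and tp > TP_MIN:
        tp -= 3
        nstep -= 1

    # Negative steps: link budget is too tight, so raise TP back up.
    while nstep < 0 and tp < TP_MAX:
        tp += 3
        nstep += 1

    return sf, max(TP_MIN, min(tp, TP_MAX))

# Example: a strong link, so the NS moves the ED to a faster SF.
print(adr_adjust(snr_history=[-2.0, 0.5, -1.0], sf=12, tp=14))  # -> (9, 14)
```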


However, ADR performance (in terms of packet success ratio) degrades significantly in highly dense networks and under variable channel conditions, owing to inefficient use of SF and TP. Both parameters therefore need to be adjusted carefully in such scenarios, which remains an open research issue in LoRaWAN.
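One reason dense networks hurt ADR is that high SFs occupy the channel far longer, so collisions become much more likely. The sketch below estimates LoRa time on air using the SX127x datasheet formula; the 125 kHz bandwidth, 4/5 coding rate, and 8-symbol preamble are assumed defaults.

```python
import math

def lora_time_on_air(payload_bytes, sf, bw_hz=125_000, cr=1, preamble=8,
                     explicit_header=True, crc=True):
    """Approximate LoRa time on air in seconds (SX127x datasheet formula)."""
    de = 1 if (bw_hz == 125_000 and sf >= 11) else 0   # low data-rate optimization
    ih = 0 if explicit_header else 1
    t_sym = (2 ** sf) / bw_hz
    payload_symbols = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + payload_symbols * t_sym

# A 20-byte payload takes roughly 20x longer at SF12 than at SF7, so pushing
# many EDs onto high SFs inflates collision probability and energy use.
for sf in range(7, 13):
    print(f"SF{sf}: {lora_time_on_air(20, sf) * 1000:.1f} ms")
```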


Recently, researchers have turned to AI approaches to tackle this issue. So, why do we need AI for resource management in LoRaWAN? I will discuss this in the next blog post.
