Hello,
We have developed our own RF modules based on the S2-LP transceiver, and we are currently characterizing their maximum usable sensitivity.
Measurement setup
- TX and RX are connected through a calibrated variable attenuator.
- The transmitter sends 500 packets with a 200 ms interval between packets.
- After each test run, we compile statistics on the successfully received packets.
- Our RF settings: base frequency = 433 MHz, data rate = 3.84 ksps, deviation = 2 kHz, channel filter = 12 kHz, preamble length = 64 bits. The remaining parameters are identical to the configuration generated by the S2-LP DK GUI (see the configuration sketch after this list).
- The only parameter that changes between the two test runs is the RX timer configuration.
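For reference, this is roughly how the radio is initialized (a minimal sketch against the S2-LP DK library; the MOD_2FSK value is an assumption based on the deviation/filter settings, and the packet/preamble setup exported by the GUI is abbreviated):

```c
#include "S2LP_Radio.h"

/* RF settings under test (values as listed above); MOD_2FSK is
   assumed here, matching the DK GUI default for these settings. */
SRadioInit xRadioInit = {
  433000000,  /* lFrequencyBase: 433 MHz             */
  MOD_2FSK,   /* xModulationSelect: 2-FSK (assumed)  */
  3840,       /* lDatarate: 3.84 ksps                */
  2000,       /* lFreqDev: 2 kHz deviation           */
  12000       /* lBandwidth: 12 kHz channel filter   */
};

void radio_configure(void)
{
  S2LPRadioInit(&xRadioInit);

  /* 64-bit preamble, sync word, CRC and all remaining packet
     parameters are applied exactly as exported by the S2-LP DK GUI
     (omitted here for brevity). */
}
```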
Observed behavior
We obtain different maximum usable sensitivity values depending on the RX timer setting:
- RX timer set to infinite (S2LPTimerSetRxTimerCounter(0)):
  → Maximum usable sensitivity ≈ –105 dBm
- RX timer set to 200 ms timeout (S2LPTimerSetRxTimerUs(200000)):
  → Maximum usable sensitivity ≈ –111 dBm
This is a significant difference (~6 dB), and we would like to understand the reason.
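In code, the two runs differ only in the RX timer setup (a simplified sketch; the IRQ-driven re-arm flow described in the closing comment is abbreviated):

```c
#include "S2LP_Timer.h"
#include "S2LP_Commands.h"  /* S2LPCmdStrobeRx() */

/* Run 1: infinite RX timeout -> sensitivity ≈ –105 dBm */
void configure_rx_infinite(void)
{
  S2LPTimerSetRxTimerCounter(0);  /* counter 0 = no RX timeout */
}

/* Run 2: 200 ms RX timeout -> sensitivity ≈ –111 dBm */
void configure_rx_timeout(void)
{
  S2LPTimerSetRxTimerUs(200000);  /* 200 ms */
}

/* In both runs the packet handling is identical: on each
   RX_DATA_READY (or RX_TIMEOUT) interrupt the result is logged
   and RX is strobed again with S2LPCmdStrobeRx(). */
```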
Question
Why does the RX timeout configuration affect the measured maximum usable sensitivity?