Indoor localization dataset

To evaluate the performance of our indoor localization algorithm, we collected two data sequences of a person walking in a test environment. The path followed during data collection (starting from the entrance of the meeting room and ending in the kitchen), along with the sensors' IDs and positions, is depicted in the figure below.

[Figure: experiments_path — the path followed during data collection, with sensor IDs and positions]

Along with the data coming from the motion sensors, we collected a ground truth to allow a quantitative evaluation of the accuracy of our approach. The ground truth records when the person moves from one area to another, as well as the name of the destination area.

The datasets are available to the community for further research and comparisons.
For each sequence there are two files, both with the .pkl extension: one containing the sensor data and one containing the corresponding ground truth. The sensor data file consists of a sequence of tuples dumped with Python's pickle module, so it must be loaded with the appropriate functions. The first element of each tuple is the timestamp of the measurement; the second is the list of sensors that detected motion at that time. The ground truth file has a similar format, but the second element of each tuple is a string naming the area the person is entering. The areas, along with their names, are depicted in the figure below.

[Figure: terzo_piano — map of the areas and their names]
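
For illustration only, the loaded records can be expected to have roughly the following shape (the timestamps, sensor IDs, and area name below are invented for the example, not taken from the actual files):

# Hypothetical sensor-data record: (timestamp, list of sensors that detected motion)
sensor_record = (1385042113.4, ['sensor_2', 'sensor_5'])

# Hypothetical ground-truth record: (timestamp, name of the area being entered)
ground_truth_record = (1385042115.1, 'kitchen')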

During the data gathering sessions we ran into two major problems which, in our opinion, reflect real-life conditions: the sensor in the fridge_sofa area detected motion very frequently even when nobody was there, and the sensor in the atrium also detected motion that was actually taking place in the dining room. These problems made the datasets very noisy. We did not suppress this noisy data, in order to test one of the main strengths of our approach: its ability to work even in the presence of realistically noisy sensor data.

The datasets, along with the ground truth, can be downloaded from the links below.

To load the datasets, the following Python code can be used:

import pickle

# Each file contains a sequence of pickled tuples appended one after another,
# so we keep calling pickle.load() until the end of the file is reached.
dataset = []
ground_truth = []

with open(dataset_filename, 'rb') as f:
    try:
        while True:
            dataset.append(pickle.load(f))
    except EOFError:
        pass

with open(ground_truth_filename, 'rb') as f:
    try:
        while True:
            ground_truth.append(pickle.load(f))
    except EOFError:
        pass
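
As a quick check that the data loaded correctly, and to get a feel for the per-sensor noise discussed above, one can count how many detections each sensor produced and print the sequence of areas from the ground truth. This is only an illustrative sketch; it assumes the dataset and ground_truth lists built by the code above.

from collections import Counter

# Count how many times each sensor reported motion; the noisy sensors
# (e.g. the one in the fridge_sofa area) should stand out here.
detections = Counter()
for timestamp, sensors in dataset:
    detections.update(sensors)
print(detections.most_common())

# Print the sequence of areas entered by the person, with timestamps.
for timestamp, area in ground_truth:
    print(timestamp, area)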