  • SNGNN

    SNGNN is a Graph Neural Network that models robots' adherence to social-navigation conventions.

    Given a scenario composed of a room with any number of walls, objects and people (who may be interacting with each other), the network outputs a social-adherence score between 0 and 1. This score can be used to plan paths for human-aware navigation; a toy sketch of the idea is given at the end of this entry.

    More information is available on the project's home page and in the associated research papers.
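
    The following is a minimal sketch of the idea, not SNGNN's actual code or API: the scenario is encoded as a graph (nodes for the room, robot, humans and objects; edges for containment, proximity and interactions) and a single function maps it to a score in [0, 1]. The networkx dependency, the attribute names and the hand-written scoring rule are all placeholders for the real trained network.

        import math
        import networkx as nx

        def toy_score(graph: nx.Graph) -> float:
            """Stand-in for the trained network.

            The real model is a graph neural network learned from labelled
            data; this function only shows the expected input (a scenario
            graph) and output (a score in [0, 1], where 1 means fully
            acceptable behaviour) by crudely penalising robot-human proximity.
            """
            rx, ry = graph.nodes["robot"]["x"], graph.nodes["robot"]["y"]
            penalty = 0.0
            for _, data in graph.nodes(data=True):
                if data.get("kind") == "human":
                    d = math.hypot(data["x"] - rx, data["y"] - ry)
                    penalty = max(penalty, max(0.0, 1.0 - d / 2.0))
            return 1.0 - penalty

        # Toy scenario: a room containing a robot, two interacting people and
        # a table.  Attribute names are illustrative, not SNGNN's real schema.
        scenario = nx.Graph()
        scenario.add_node("room", kind="room")
        scenario.add_node("robot", kind="robot", x=0.0, y=0.0)
        scenario.add_node("human_1", kind="human", x=1.2, y=0.5)
        scenario.add_node("human_2", kind="human", x=1.6, y=0.7)
        scenario.add_node("table", kind="object", x=-1.0, y=2.0)
        for node in ("robot", "human_1", "human_2", "table"):
            scenario.add_edge("room", node, relation="contains")
        scenario.add_edge("human_1", "human_2", relation="interaction")

        print(f"social adherence: {toy_score(scenario):.2f}")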

  • SocNav1

    SocNav1 dataset

    SocNav1 is a dataset to benchmark and learn social navigation conventions.

    Datasets are essential for the development and evaluation of ML and AI algorithms. SocNav1 aims at evaluating robots' ability to assess the level of discomfort that their presence might generate among humans. It contains 9280 labelled sample scenarios including not only humans, objects and walls but also their interactions, which makes it particularly well suited to benchmarking non-Euclidean machine learning algorithms such as graph neural networks (a toy benchmarking sketch is given at the end of this entry). These models can in turn be used for human-aware navigation path planning.

    More information can be found in the paper associated with the dataset, which describes the data and the methods employed in depth and provides a brief analysis.
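
    As a rough, self-contained sketch of the benchmarking protocol (split the labelled samples, fit a predictor, report an error metric), the snippet below uses a stub loader that fabricates feature/label pairs; it does not reflect SocNav1's actual file format, and the trivial mean-label baseline stands in for a graph neural network trained on the scenario graphs.

        import random

        def load_socnav1_stub(n_samples: int = 9280):
            """Stand-in loader: fabricates (features, label) pairs.

            The real dataset ships 9280 labelled scenarios (humans, objects,
            walls and their interactions, plus a discomfort label); replace
            this stub with code that parses the released files.
            """
            random.seed(0)
            return [([random.random() for _ in range(4)], random.random())
                    for _ in range(n_samples)]

        def mean_squared_error(pairs, predict):
            return sum((predict(x) - y) ** 2 for x, y in pairs) / len(pairs)

        samples = load_socnav1_stub()
        random.shuffle(samples)
        split = int(0.8 * len(samples))
        train, test = samples[:split], samples[split:]

        # Trivial baseline: always predict the mean training label.  A graph
        # neural network operating on the scenario graphs would replace it.
        mean_label = sum(y for _, y in train) / len(train)

        def baseline(_features):
            return mean_label

        print(f"{len(train)} training / {len(test)} test samples")
        print(f"baseline MSE: {mean_squared_error(test, baseline):.4f}")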

  • AGM/AGGL

    AGM is an open-source robotics cognitive architecture. Its key features are its ability to reason about perceptive tasks (e.g., how to find an object), its modularity and its reusability.

    AGM is a modular open-source robotics cognitive architecture. Its main advantage over other architectures is that it allows robots to reason about how to achieve missions that require perceptive actions, such as finding new objects. In fact, it can reason not only about the robot's own perception but also about humans' perception.

    The ability to reason about perceptive tasks comes from a new visual planning-domain definition language named AGGL, which can be used with existing PDDL planners or with an AGGL-specific planner provided with the rest of the architecture. Demonstrations are available in the demonstrations section. More information is available on the project's home page and in the associated research papers.
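
    To make "reasoning about perceptive actions" concrete, the toy planner below is a generic STRIPS-style sketch written from scratch; it is not AGGL syntax and not AGM's planner. The point it illustrates is that an action whose only effect is to gain knowledge (find_mug) can be selected by the planner because a later action (grasp_mug) requires that knowledge.

        from collections import deque

        # Each action is (name, preconditions, additions, deletions); all
        # facts, including the knowledge fact "knows(position(mug))", are
        # plain strings in a set-based state.
        ACTIONS = [
            ("goto_kitchen",
             frozenset(),
             frozenset({"in(robot, kitchen)"}),
             frozenset({"in(robot, hall)"})),
            ("find_mug",                      # perceptive action: adds knowledge
             frozenset({"in(robot, kitchen)"}),
             frozenset({"knows(position(mug))"}),
             frozenset()),
            ("grasp_mug",                     # needs the knowledge gained above
             frozenset({"in(robot, kitchen)", "knows(position(mug))"}),
             frozenset({"holding(mug)"}),
             frozenset()),
        ]

        def plan(initial: frozenset, goal: frozenset):
            """Breadth-first search over combined world + knowledge states."""
            queue, seen = deque([(initial, [])]), {initial}
            while queue:
                state, steps = queue.popleft()
                if goal <= state:
                    return steps
                for name, pre, add, delete in ACTIONS:
                    if pre <= state:
                        nxt = (state - delete) | add
                        if nxt not in seen:
                            seen.add(nxt)
                            queue.append((nxt, steps + [name]))
            return None

        print(plan(frozenset({"in(robot, hall)"}), frozenset({"holding(mug)"})))
        # -> ['goto_kitchen', 'find_mug', 'grasp_mug']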

  • RoboComp

    RoboComp is a modern open-source robotics framework that makes extensive use of technologies such as component-oriented programming and domain-specific languages.

    RoboComp provides a wide range of existing components used to communicate with the hardware of several robots. Its key features are its tool set and the ease of creating new components using domain-specific languages.

    It is currently used on several different robots. For more information, please visit RoboComp's home page.
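
    The toy below is a conceptual illustration of component-oriented design in plain Python, not RoboComp code (the real framework provides tooling to create components from its domain-specific languages and to connect them to each other and to robot hardware): one class provides a base-control interface, another requires it, and all names are made up.

        from typing import Protocol

        class BaseControl(Protocol):
            """Interface one component provides and other components require."""
            def set_speed(self, advance: float, rotate: float) -> None: ...

        class SimulatedBase:
            """Toy component providing BaseControl by printing the commands."""
            def set_speed(self, advance: float, rotate: float) -> None:
                print(f"base command: advance={advance:.2f} m/s, "
                      f"rotate={rotate:.2f} rad/s")

        class WanderBehaviour:
            """Toy component that requires a BaseControl proxy to do its job."""
            def __init__(self, base: BaseControl) -> None:
                self.base = base

            def compute(self) -> None:
                # One control step; a real component would run this periodically.
                self.base.set_speed(0.3, 0.0)

        if __name__ == "__main__":
            behaviour = WanderBehaviour(SimulatedBase())  # wiring done by hand here
            behaviour.compute()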