This page presents a non-exhaustive list of frameworks recommended by the XR4ALL developer community. If one of these frameworks meets your needs, we strongly recommend implementing your solution on top of it. If you wish to reference on this page an open framework that aims to improve the interoperability of XR technological components, please contact us at email@example.com.
Computer vision (camera pose estimation, tracking, relocalization, 3D reconstruction)
SolAR is an open source framework dedicated to augmented reality (AR). It is promoted by the XR4ALL project and hosted on GitHub. It offers a C++ and C# SDK to quickly and easily develop and use custom computer vision solutions for mixed reality applications. It provides developers with a full chain from low-level vision component development to high-level computer vision pipelines and XR service development. Currently focused on camera pose estimation, it will soon be extended to also support dense 3D reconstruction and scene understanding.
The SolAR framework is released under the Apache 2.0 license, which allows it to be used for research as well as for commercial purposes in various domains (smart factory, smart home, real estate, health, etc.).
SolAR aims at stimulating the interaction between all XR actors for the benefit of end users.
How does it work?
The SolAR Framework addresses the full chain of XR applications development related to computer vision:
- Component creation: SolAR defines a unique API for the basic components required by computer vision pipelines (feature extraction, descriptor calculation, matching, Perspective-n-Point, homography, image filters, bundle adjustment, etc.). The SolAR community can implement new components compliant with the SolAR API.
- Component publication: Components are packaged into modules to ease their publication. SolAR modules, whether royalty-free or under a commercial license, are stored on an artefact repository to be available to the SolAR community of pipeline assemblers.
- Vision pipeline assembling: Modules of components published by the SolAR community can be downloaded from the module repositories. Then, SolAR provides developers with a pipeline mechanism allowing them to assemble SolAR components to define their own vision pipeline such as a camera pose estimation solution.
- Vision pipeline publication: When a vision pipeline has been assembled and configured, it can be published in a repository of artefacts dedicated to SolAR vision pipelines to make it accessible to the SolAR pipeline users.
- AR service development: Thanks to a SolAR plugin for Unity, XR service developers can download SolAR pipelines stored on the dedicated artefact repository and integrate them into their applications in a few clicks. They can simply develop XR applications as with any AR SDK and roll them out. Since SolAR is based on a unified interface, XR application developers will be able to easily make their applications evolve with new solutions developed by the SolAR community.
Why use the SolAR framework?
AR applications developers and end users face a dilemma.
On the one hand, major IT actors have released toolkits to develop AR applications. Nevertheless, they do not always meet the specific needs of dedicated use cases or contextual environments (localization accuracy, lighting conditions, indoor/outdoor environments, tracking area range, dynamic scenes, etc.).
No solution fits all, and, generally, these toolkits do not provide the level of tuning required to optimally adapt the vision pipeline to the use case. Moreover, these closed solutions do not always ensure the confidentiality of data, and can store information concerning the environment of the user (3D maps, key frames) or the augmentations (3D meshes, procedure scenarios) that could contain crucial intellectual property and private information.
On the other hand, open source vision libraries and SLAM implementations can generally be modified and configured to optimally meet AR application requirements. However, many SLAM implementations, generally developed by academic actors, do not provide the license terms or the level of maturity required for the development of commercial solutions. Likewise, open source vision libraries offer a huge number of low-level functions but require deep expertise and significant development resources to obtain usable vision pipelines, such as camera pose estimation ready for commercial use.
To that end, SolAR offers an alternative to current commercial AR SDKs and existing open-source solutions, providing the benefits of both worlds: openness, ease of use, efficiency, and adaptability. SolAR aims at creating an ecosystem bringing researchers, developers, and end users together to help the adoption of XR.
The main feature of ApertusVR is the so-called “Distributed Plug-in Mechanism”: not only humans but any element of the Internet of Things, such as hardware, software, a robot, or any kind of smart device, can take part in a multi-user virtual reality scene.
How does it work?
The Core of ApertusVR is a programming library written in C++11, that fulfills modern software requirements as it is modular, embeddable, platform-independent, and easily configurable. It contains basic software interfaces and modules for logging, event-handling, and for loading plugins and configurations. It is also responsible for distributed data synchronization.
Plugins extend the Core of ApertusVR with XR (AR/VR/MR) capabilities, which helps to rapidly integrate XR technologies into new or existing developments and products. This creates a new abstraction layer over hardware vendors, so that different display and control devices can be used in any product or service.
Why use ApertusVR?
ApertusVR offers a brand-new “no vendor lock-in” approach for virtual and augmented reality across different operating systems and different virtual and augmented reality hardware.
This higher abstraction level means that the business logic has to be implemented only once and then works on any platform. Moreover, different virtual and augmented reality devices can share the same virtual reality scene at the same time.
ApertusVR contains only software libraries, in order to easily integrate virtual and augmented reality technologies into an already existing product.