Wi-Fi data helps researchers improve accuracy of mapping algorithms | VentureBeat


SLAM, or simultaneous localization and mapping, is a well-studied computational problem that involves updating a map of an environment while keeping track of an agent’s location within it. (Typically, said agent is a drone or robot.) The advent of cheap, ubiquitous depth sensors and sophisticated algorithms has largely solved it, but even state-of-the-art vision systems aren’t perfect: Symmetric and repetitive patterns sometimes result in faulty maps, and the aforementioned sensors tend to generate large, unwieldy volumes of data.

That’s why researchers propose using Wi-Fi sensing to supplement the technique. A newly published paper on the preprint server Arxiv.org (“Augmenting Visual SLAM with Wi-Fi Sensing For Indoor Applications”) describes a “generic way” to integrate wireless data into visual SLAM algorithms, toward the goal of improving their accuracy and minimizing hardware overhead.

“Wi-Fi radio is available on most robots or mobile devices and Wi-Fi Access Points are ubiquitous in most urban environments. Wi-Fi and visual sensing are complementary to each other,” the paper’s authors wrote. “In this work, we present a generic workflow to incorporate Wi-Fi sensing into visual SLAM algorithms in order to alleviate perceptual aliasing and high computational complexity.”

The researchers’ system associates a visual frame (an image) from a camera with a corresponding Wi-Fi signature. Every three or four meters, the robot or mobile device on which it’s running pauses for about ten seconds to aggregate signatures, and then associates a signature with any visual frames that follow until the next pause. A distribution of Wi-Fi clusters, each comprising frames whose signatures are similar to a representative signature, helps to establish spatial proximity to the current frame. (The team notes that the current frame is compared only to frames within similar clusters, so as to speed up loop closure, or the task of deciding whether or not an agent has returned to a previously visited area.) Lastly, the current frame is assigned to the correct cluster.
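The clustering idea described above can be illustrated with a short sketch. This is not the paper’s implementation; the signature representation (a map of access-point IDs to RSSI readings), the similarity measure, and the threshold are all illustrative assumptions, chosen only to show how frames might be grouped by Wi-Fi signature and how loop-closure candidates could be restricted to the current frame’s cluster.

```python
def signature_similarity(sig_a, sig_b):
    """Illustrative similarity between two Wi-Fi signatures.

    A signature is assumed to be a dict mapping access-point ID -> RSSI (dBm).
    Combines overlap of visible APs (Jaccard) with closeness of RSSI readings.
    """
    shared = set(sig_a) & set(sig_b)
    if not shared:
        return 0.0
    # 1.0 when RSSI readings for a shared AP match exactly, falling toward 0.
    closeness = sum(1.0 - min(abs(sig_a[ap] - sig_b[ap]) / 100.0, 1.0)
                    for ap in shared) / len(shared)
    jaccard = len(shared) / len(set(sig_a) | set(sig_b))
    return closeness * jaccard


def assign_cluster(signature, cluster_reps, threshold=0.6):
    """Assign a signature to the most similar cluster, or start a new one.

    cluster_reps is a mutable list of representative signatures; the
    threshold is a made-up tuning knob, not a value from the paper.
    """
    best_idx, best_sim = None, threshold
    for idx, rep in enumerate(cluster_reps):
        sim = signature_similarity(signature, rep)
        if sim >= best_sim:
            best_idx, best_sim = idx, sim
    if best_idx is None:
        cluster_reps.append(dict(signature))  # this signature seeds a new cluster
        return len(cluster_reps) - 1
    return best_idx


def loop_closure_candidates(current_idx, cluster_of_frame):
    """Only frames in the current frame's cluster are compared for loop closure."""
    current_cluster = cluster_of_frame[current_idx]
    return [i for i, c in enumerate(cluster_of_frame)
            if c == current_cluster and i != current_idx]
```

The payoff is in `loop_closure_candidates`: instead of matching the current frame against every past frame, the system matches only against frames whose Wi-Fi signatures landed in the same cluster, which is where the reported reduction in computation time comes from.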

The scientists instrumented three separate open-source visual SLAM systems — RGBD-SLAM, RTAB-Map, and ORB-SLAM — using their techniques, and in tests collected measurements (four data sets’ worth from buildings on a college campus) using a Microsoft Kinect sensor mounted on a Turtlebot robot. Analyses showed an improvement in accuracy by 11 percent on average and a reduction in computation time by 15 to 25 percent across the algorithms, and comparable (and in some cases better) performance compared with FABMAP, a Wi-Fi-augmented SLAM system.

There’s room for improvement, of course. The researchers note that Wi-Fi signal strength is beholden to “environmental dynamics” such as the number of people in a room and the number of devices connected at one time, and that rooms with few “blocking objects” like walls along the trajectory tend to produce fewer Wi-Fi clusters. Additionally, they say that the performance gain correlates with the number of access points; the proposed approach works well with as few as 40.

Still, the team believes that, with a bit of refinement (and technologies like fine-grained 60GHz sensing), their method could prove useful not only for robots of the present and future, but also for augmented reality headsets like Microsoft’s HoloLens and Magic Leap’s One. “[W]e hope to demonstrate the utility of Wi-Fi sensing for sustained long-term use in an urban space,” they wrote. “[We’ve shown] our work is useful for robots as well as mobile devices.”