My research mainly focuses on construction robotics. My current interest is in using vision-language models as prior information for robot navigation problems.
In this paper, we propose an autonomous robotic system, dubbed RoboAuditor, for identifying and localizing energy-intensive appliances in buildings given text queries from humans. RoboAuditor utilizes vision-language models to predict relevance scores between text queries and observed images for goal selection in robot navigation. It then automatically identifies and localizes the queried appliances while navigating autonomously using efficient navigation strategies.
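As a rough illustration of this idea, the sketch below scores candidate frontier views against a text query with a CLIP-style model and picks the most relevant one as the navigation goal. The model choice, function names, and frontier-image abstraction are my own assumptions for illustration, not the RoboAuditor implementation.

```python
# Minimal sketch of VLM-based goal selection (assumes a CLIP model via Hugging Face
# transformers; not the actual RoboAuditor code).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def select_goal(query: str, frontier_images: list[Image.Image]) -> int:
    """Return the index of the frontier image most relevant to the text query."""
    inputs = processor(text=[query], images=frontier_images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image has shape (num_images, 1): relevance of each view to the query.
    scores = outputs.logits_per_image.squeeze(-1)
    return int(torch.argmax(scores).item())

# Example usage (hypothetical frontier snapshots):
# frontiers = [Image.open(p) for p in ("frontier_0.png", "frontier_1.png")]
# goal_idx = select_goal("an electric space heater", frontiers)
```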
This paper describes a lightweight and reproducible robotic system, dubbed TEA-bot, that automatically detects and locates thermal leaks in ceiling environments while self-navigating.
TEA-bot is an Unmanned Ground Vehicle (UGV) designed to navigate in ceiling environments using vision-based Simultaneous Localization and Mapping (SLAM) while detecting thermal leaks from HVAC systems using a Convolutional Neural Network (CNN).
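For intuition, here is a minimal sketch of what CNN-based leak detection on thermal frames could look like, assuming single-channel thermal images and a binary leak / no-leak label. The architecture and names are illustrative assumptions, not the network used by TEA-bot.

```python
# Toy CNN for classifying thermal frames as leak / no-leak (illustrative only).
import torch
import torch.nn as nn

class LeakClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) thermal frames; returns leak / no-leak logits.
        return self.head(self.features(x))

# Example: classify a single 120x160 thermal frame.
# frame = torch.rand(1, 1, 120, 160)
# logits = LeakClassifier()(frame)
```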
In this paper, we propose a reinforcement learning-based approach to robotic motion planning using curriculum learning, which enables a robot to perform multiple construction tasks with a single trained agent.
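The sketch below shows the general shape of such a curriculum: one agent is trained on a sequence of progressively harder tasks while keeping its learned weights. The environment IDs, library choice (Stable-Baselines3 / Gymnasium), and task ordering are placeholders of my own, not the setup from the paper.

```python
# Curriculum-style training of a single RL agent across tasks (illustrative sketch;
# the environment IDs below are hypothetical placeholders).
import gymnasium as gym
from stable_baselines3 import PPO

curriculum = ["Reach-v0", "PickAndPlace-v0", "Assembly-v0"]  # placeholder task IDs

model = None
for task_id in curriculum:
    env = gym.make(task_id)
    if model is None:
        model = PPO("MlpPolicy", env, verbose=0)
    else:
        model.set_env(env)  # keep the learned weights, swap in the harder task
    model.learn(total_timesteps=100_000)
    env.close()

# model.save("curriculum_agent")  # one agent covering multiple construction tasks
```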
Feel free to steal this website's source code. Do not scrape the HTML from this page itself, as it includes analytics tags that you do not want on your own website — use the github code instead. Also, consider using Leonid Keselman's Jekyll fork of this page.