I'm a first-year Robotics Ph.D. student advised by Dr. Sehoon Ha and Dr. C. Karen Liu. I am interested in applications of vision-based reinforcement learning in real-world robotics. At the moment, I am tackling outdoor navigation and environment interaction problems for quadrupedal robots.
We train a vision-based agent to perform object searching in photorealistic 3D scenes and propose a motion synthesis mechanism for head motion re-targeting, which lets us enable object-searching behaviour with animated human characters (PFNN/NSM). [PDF] [arXiv] [video] [talk (20 min)]
We show how vision-based navigation agents can be trained to adapt to new sensor configurations with only three shots of experience. Rapid adaptation is achieved by introducing a bottleneck between the perception and control networks and by meta-adapting the perception component. [PDF] [arXiv]
Computer Animation class project that uses the off-the-shelf Soft Actor-Critic (SAC) reinforcement learning method to learn to build up momentum and swing an animated character on a pull-up bar. [short-summary]
Mobile manipulation project that uses MoveIt! and GQ-CNN to grasp an object from a table with a Fetch robot in the Gazebo simulator. [short-summary]
End-to-end (image-to-steering-wheel) control policy learned from data collected over multiple laps, with off-the-track recoveries demonstrated by a human driver. [github]
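The core idea above is behavioral cloning: fit a policy that maps camera images to the steering command a human produced. A minimal sketch with NumPy only, using a synthetic stand-in dataset and a linear least-squares "policy" (the actual project trains a CNN on real camera frames; the data generator and shapes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(n=256):
    """Illustrative stand-in data: 64-pixel 'images' whose left-half
    brightness encodes curvature, paired with the human steering label."""
    steer = rng.uniform(-1.0, 1.0, size=n)       # ground-truth steering
    imgs = rng.normal(0.0, 0.1, size=(n, 64))    # sensor noise
    imgs[:, :32] += steer[:, None]               # brighter left half => steer left
    return imgs, steer

# Collect "demonstrations" and append a bias column.
X, y = make_batch()
X = np.hstack([X, np.ones((len(X), 1))])

# Behavioral cloning as supervised regression: policy(image) -> steering.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate on held-out laps.
X_test, y_test = make_batch(64)
X_test = np.hstack([X_test, np.ones((len(X_test), 1))])
mse = np.mean((X_test @ w - y_test) ** 2)
```

Recovery demonstrations matter because a policy trained only on centered driving never sees off-track states, so small errors compound; adding human-generated recoveries puts those states into the training distribution.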
Project that uses Sobel filters, perspective transforms, and line fitting to extract and display lane boundaries from the car's front camera feed. [github]
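The gradient-then-fit part of that pipeline can be sketched in plain NumPy (the synthetic frame and threshold value are illustrative; the real project works on camera images, typically via OpenCV, and also applies a perspective transform first):

```python
import numpy as np

def sobel_x(img):
    """Horizontal-gradient magnitude via a 3x3 Sobel kernel (no OpenCV)."""
    k = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]], dtype=float)
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = abs(np.sum(k * img[y - 1:y + 2, x - 1:x + 2]))
    return out

def fit_lane(edge_mask):
    """Fit the lane as x = a*y^2 + b*y + c through the edge pixels."""
    ys, xs = np.nonzero(edge_mask)
    return np.polyfit(ys, xs, deg=2)

# Synthetic frame: a bright 3-pixel-wide lane line on a dark road.
img = np.zeros((100, 100))
for y in range(100):
    x = int(30 + 0.2 * y)
    img[y, x:x + 3] = 1.0

grad = sobel_x(img)
mask = grad > 1.0            # keep strong horizontal gradients (edges)
coeffs = fit_lane(mask)      # polynomial describing the lane boundary
```

Fitting x as a function of y (rather than y of x) is the usual choice here, since lane lines are near-vertical in a bird's-eye view and a function of x would be multivalued.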
Computational Photography project: an app that lets the user automatically crop out the object of interest and apply a panning effect based on a provided magnitude and direction.
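The panning effect amounts to a directional motion blur on the background. A minimal sketch, assuming a box-blur implementation and axis-aligned directions only (the app's actual blur kernel and angle handling are not specified here):

```python
import numpy as np

def pan_blur(img, magnitude, direction="horizontal"):
    """Directional box blur approximating a camera-panning effect.

    `magnitude` is the blur length in pixels. Only horizontal and
    vertical directions are sketched; arbitrary angles would rotate
    the kernel instead.
    """
    k = max(int(magnitude), 1)
    kernel = np.ones(k) / k                      # uniform streak kernel
    axis = 1 if direction == "horizontal" else 0
    return np.apply_along_axis(
        lambda line: np.convolve(line, kernel, mode="same"), axis, img)

# A single bright pixel is smeared into a horizontal streak.
frame = np.zeros((10, 10))
frame[5, 5] = 1.0
blurred = pan_blur(frame, magnitude=5, direction="horizontal")
```

Compositing the sharp cropped-out object back over the blurred background is what sells the effect: the subject stays crisp while the scene streaks past it.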
© 2020 Maks Sorokin
Built using Skeleton, icons by Flaticon, hosted on GitHub Pages ❤️
feel free to copy: this page