Maks Sorokin

I'm a first-year Robotics Ph.D. student advised by Dr. Sehoon Ha and Dr. C. Karen Liu. I am interested in applications of vision-based reinforcement learning in real-world robotics. At the moment, I am tackling outdoor navigation and environment interaction problems for quadrupedal robots.

News
  • MAY'21 Awarded a fellowship by the Machine Learning Center at Georgia Tech. [link]
  • FEB'21 Paper on Learning Human Search Behavior accepted at EUROGRAPHICS'2021! [PDF][arXiv]
  • DEC'20 Paper on Few-shot visual sensor meta-adaptation accepted at ICRA'2021! [PDF][arXiv]


Publications
Learning Human Search Behavior from Egocentric View

Maks Sorokin, Wenhao Yu, Sehoon Ha, C. Karen Liu - EUROGRAPHICS 2021

We train a vision-based agent to perform object search in photorealistic 3D scenes and propose a motion synthesis mechanism for head motion re-targeting, which lets us drive the object-searching behaviour with animated human characters (PFNN/NSM).

[PDF] [arXiv] [video] [talk(20 min)]
A Few Shot Adaptation of Visual Navigation Skills to New Observations using Meta-Learning

Qian Luo, Maks Sorokin, Sehoon Ha - ICRA 2021

We show that vision-based navigation agents can be trained to adapt to new sensor configurations with only three shots of experience. Rapid adaptation is achieved by introducing a bottleneck between the perception and control networks and by meta-adapting the perception component.

[PDF] [arXiv]
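A minimal PyTorch sketch of the bottleneck idea (not the paper's code): a perception network compresses the image into a small latent, the control policy consumes only that latent, and adapting to a new sensor fine-tunes the perception network alone on a few paired examples. Network sizes, the 64x64 input, and the reference-latent targets are assumptions for illustration.

```python
import torch
import torch.nn as nn

LATENT_DIM = 16  # bottleneck between perception and control (illustrative size)

class Perception(nn.Module):
    """Maps a 3x64x64 observation to a low-dimensional latent."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 13 * 13, LATENT_DIM),  # 13x13 feature map for 64x64 input
        )

    def forward(self, obs):
        return self.encoder(obs)

class ControlPolicy(nn.Module):
    """Consumes only the latent, so it never sees raw sensor data."""
    def __init__(self, action_dim=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, action_dim),
        )

    def forward(self, latent):
        return self.mlp(latent)

perception, policy = Perception(), ControlPolicy()

# Few-shot adaptation: freeze the control policy and update only the perception
# network on a handful of new-sensor observations paired with reference latents
# (random stand-ins here for the targets used during meta-adaptation).
for p in policy.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(perception.parameters(), lr=1e-4)

new_sensor_obs = torch.randn(3, 3, 64, 64)   # "three shots" from the new sensor
reference_latents = torch.randn(3, LATENT_DIM)

for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(perception(new_sensor_obs), reference_latents)
    loss.backward()
    optimizer.step()
```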


Projects
Real2Sim Image adaptation

Image domain adaptation that converts images with randomized textures (or real images) into a canonical image representation. A replication of the RCAN paper with a different loss formulation (GAN loss -> perceptual/feature loss).

[github]
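A minimal PyTorch sketch of the kind of perceptual/feature loss substituted for the GAN loss (illustrative, not the project's code); the VGG16 cut-off layer and the L1 distance on features are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """Compares VGG16 feature activations instead of raw pixels."""
    def __init__(self, feature_layer=16):
        super().__init__()
        # Frozen, pretrained feature extractor (downloads ImageNet weights).
        self.features = vgg16(weights="IMAGENET1K_V1").features[:feature_layer].eval()
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, generated, target):
        return nn.functional.l1_loss(self.features(generated), self.features(target))

loss_fn = PerceptualLoss()
fake_canonical = torch.rand(1, 3, 224, 224)  # generator output (placeholder)
real_canonical = torch.rand(1, 3, 224, 224)  # canonical-texture target (placeholder)
print(loss_fn(fake_canonical, real_canonical).item())
```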
Learning to swing

Computer Animation class project that uses an off-the-shelf Soft Actor-Critic reinforcement learning method to learn to build up momentum and swing an animated character on a pull-up bar.

[short-summary]
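A minimal sketch of how an off-the-shelf SAC implementation can be plugged in (illustrative, using Stable-Baselines3, with Pendulum-v1 as a stand-in for the custom character-swinging environment, which is not shown here).

```python
import gymnasium as gym
from stable_baselines3 import SAC

# Stand-in environment; the class project used a custom animated-character
# environment with a pull-up bar instead of Pendulum-v1.
env = gym.make("Pendulum-v1")

model = SAC("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)  # SAC learns to build up and exploit momentum

obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```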
FetchIt

Mobile manipulation project that uses MoveIt! & GQ-CNN to grasp an object from a table with a Fetch robot in the Gazebo simulator.

[short-summary]
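A minimal sketch of the grasp-execution side (illustrative, not the project's code), assuming a ROS + MoveIt! setup for Fetch in Gazebo; the hard-coded pose stands in for a grasp predicted by GQ-CNN, and the planning-group name is an assumption.

```python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("fetchit_grasp_demo")

arm = moveit_commander.MoveGroupCommander("arm")  # Fetch arm planning group (assumed name)

grasp_pose = Pose()                # placeholder for a GQ-CNN grasp prediction
grasp_pose.position.x = 0.6
grasp_pose.position.y = 0.0
grasp_pose.position.z = 0.8
grasp_pose.orientation.w = 1.0

arm.set_pose_target(grasp_pose)
arm.go(wait=True)                  # plan and execute the reach-to-grasp motion
arm.stop()
arm.clear_pose_targets()
```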
Behavioral Cloning for Autonomous Driving

End-to-end (image-to-steering) control policy learned from data collected over multiple laps, including off-the-track recoveries demonstrated by a human driver.

[github]
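A minimal PyTorch sketch of the end-to-end behavioral cloning setup (illustrative, not the project's code): a small CNN regresses the steering angle directly from camera images; the random tensors stand in for the recorded driving data.

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Regresses a steering angle directly from a 3x64x64 camera image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(36 * 13 * 13, 100), nn.ReLU(),
            nn.Linear(100, 1),  # single steering command
        )

    def forward(self, image):
        return self.net(image)

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch standing in for (camera frame, recorded steering angle) pairs.
images = torch.rand(32, 3, 64, 64)
steering = torch.rand(32, 1) * 2 - 1  # angles normalized to [-1, 1]

for _ in range(10):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), steering)
    loss.backward()
    optimizer.step()
```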
Line Marking Detection

Project that uses Sobel filters, perspective transforms, and line fitting to extract and display lane boundaries from the car's front camera feed.

[github]
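A minimal OpenCV/NumPy sketch of that pipeline (illustrative, not the project's code): Sobel-based edge thresholding, a bird's-eye perspective warp, and a 2nd-order polynomial fit to the lane pixels. The file name, source/destination points, and thresholds are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("road_frame.jpg")          # front-camera frame (assumed path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1) Sobel gradient in x, thresholded to a binary edge mask.
sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
scaled = np.uint8(255 * np.abs(sobel_x) / np.max(np.abs(sobel_x)))
binary = np.zeros_like(scaled)
binary[scaled >= 30] = 1

# 2) Perspective transform to a top-down ("bird's-eye") view.
h, w = binary.shape
src = np.float32([[w*0.43, h*0.65], [w*0.57, h*0.65], [w*0.95, h], [w*0.05, h]])
dst = np.float32([[w*0.2, 0], [w*0.8, 0], [w*0.8, h], [w*0.2, h]])
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(binary, M, (w, h))

# 3) Fit a 2nd-order polynomial x = f(y) to the activated pixels of each half.
ys, xs = warped.nonzero()
left, right = xs < w // 2, xs >= w // 2
left_fit = np.polyfit(ys[left], xs[left], 2)
right_fit = np.polyfit(ys[right], xs[right], 2)
print("left lane fit:", left_fit, "right lane fit:", right_fit)
```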
Add panning motion to any photo

Computational Photography project: an app that lets the user automatically crop out an object of interest and apply a panning effect with a user-specified magnitude and direction.
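A minimal OpenCV sketch of a panning effect of this kind (illustrative, not the app's code): blur the background with a directional kernel while keeping a masked object sharp. The image path and the rectangular object mask are assumptions; the app extracts the object mask automatically.

```python
import cv2
import numpy as np

img = cv2.imread("photo.jpg")                      # input photo (assumed path)
h, w = img.shape[:2]

# Rectangular stand-in for the automatically extracted object mask.
mask = np.zeros((h, w), dtype=np.uint8)
cv2.rectangle(mask, (w // 3, h // 3), (2 * w // 3, 2 * h // 3), 255, -1)

# Directional ("panning") blur kernel built from a magnitude and direction.
magnitude, angle_deg = 25, 0.0                     # user-provided parameters
kernel = np.zeros((magnitude, magnitude), np.float32)
kernel[magnitude // 2, :] = 1.0                    # horizontal line kernel
rot = cv2.getRotationMatrix2D((magnitude / 2, magnitude / 2), angle_deg, 1.0)
kernel = cv2.warpAffine(kernel, rot, (magnitude, magnitude))
kernel /= kernel.sum()

blurred = cv2.filter2D(img, -1, kernel)

# Composite: sharp object over the panned (blurred) background.
out = np.where(mask[..., None] == 255, img, blurred)
cv2.imwrite("panned.jpg", out)
```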




Teaching Experience
I've had an amazing experience helping teach one of the largest classes at Georgia Tech:
CS6601 Artificial Intelligence, taught by Dr. Thomas Ploetz & Dr. Thad Starner.








© 2020 Maks Sorokin
built using Skeleton, icon credits flaticon, hosted by GitHub Pages❤️

feel free to copy: this page