Maks Sorokin

I'm a fourth-year Robotics Ph.D. student at Georgia Tech, advised by Dr. Sehoon Ha and Dr. C. Karen Liu. I am interested in applications of vision-based robot learning to real-world robotics. Currently, I am working on outdoor navigation and environment interaction problems.



PhD in Robotics
2020 - Present

AI Residency
2021 - 2022

Latest Work
On Designing a Learning Robot: Improving Morphology for Enhanced Task Performance and Learning

Maks Sorokin, Chuyuan Fu, Jie Tan, C. Karen Liu, Yunfei Bai, Wenlong Lu, Sehoon Ha, Mohi Khansari
International Conference on Intelligent Robots and Systems (IROS) 2023

We present a learning-oriented morphology optimization framework that accounts for the interplay between the robot's morphology, onboard perception abilities, and their interaction in different tasks. We find that holistically optimized morphologies improve robot performance by 15-20% on various manipulation tasks and require 25x less data to match the performance of a human-expert-designed morphology.

[project page] [video overview] [pdf] [arXiv]

Human Motion Control of Quadrupedal Robots using Deep Reinforcement Learning

Sunwoo Kim, Maks Sorokin, Jehee Lee, Sehoon Ha
Proceedings of Robotics: Science and Systems (RSS) 2022

We propose a novel motion control system that allows a human user to operate various motor tasks seamlessly on a quadrupedal robot. Using our system, a user can execute a variety of motor tasks, including standing, sitting, tilting, manipulating, walking, and turning, on simulated and real quadrupeds.

[project page] [pdf] [arXiv] [video]
Relax, it doesn't matter how you get there!

Mehdi Azabou, Michael Mendelson, Maks Sorokin, Shantanu Thakoor, Nauman Ahad, Carolina Urzay, Eva L Dyer
Neural Information Processing Systems (NeurIPS) 2023 - Spotlight

We introduce Bootstrap Across Multiple Scales (BAMS), a multi-scale self-supervised representation learning model for behavior analysis. We combine a pooling module that aggregates features extracted by encoders with different temporal receptive fields, and design latent objectives to bootstrap the representations in each respective space to encourage disentanglement across different timescales.

[project page] [pdf] [arXiv]
Learning to Navigate Sidewalks in Outdoor Environments

Maks Sorokin, Jie Tan, C. Karen Liu, Sehoon Ha
IEEE Robotics and Automation Letters (RA-L) 2022

We design a system that enables zero-shot transfer of vision-based policies to real-world outdoor environments for the sidewalk navigation task. Our approach is evaluated on a quadrupedal robot navigating sidewalks in the real world, walking 3.2 kilometers with a limited number of human interventions.

[project page] [pdf] [arXiv] [video] [TechXplore article]
Learning Human Search Behavior from Egocentric View

Maks Sorokin, Wenhao Yu, Sehoon Ha, C. Karen Liu

We train a vision-based agent to perform object search in photorealistic 3D scenes, and propose a motion synthesis mechanism for head motion re-targeting, which enables object-searching behavior with an animated human character (PFNN/NSM).

[pdf] [arXiv] [video] [talk(20 min)]
A Few Shot Adaptation of Visual Navigation Skills to New Observations using Meta-Learning

Qian Luo, Maks Sorokin, Sehoon Ha
The IEEE International Conference on Robotics and Automation (ICRA) 2021

We show how vision-based navigation agents can be trained to adapt to new sensor configurations with only three shots of experience. Rapid adaptation is achieved by introducing a bottleneck between perception and control networks, and through the perception component's meta-adaptation.

[pdf] [arXiv]

Real2Sim Image Adaptation


Image domain adaptation through the conversion of images with randomized textures (or real images) to a canonical image representation. A replication of the RCAN paper with different loss modeling (perceptual/feature loss instead of GAN loss).

Learning to Swing


Computer Animation class project that uses the off-the-shelf Soft Actor-Critic reinforcement learning method to learn to build up momentum and swing an animated character on a pull-up bar.



Mobile Manipulation project that utilizes MoveIt! & GQ-CNN to grasp an object from a table using a Fetch robot in the Gazebo simulator.

Behavioral Cloning for Autonomous Driving


End-to-end (image-to-steering-wheel) control policy learning from data collected over multiple laps, with off-the-track recoveries generated by a human.


Mentoring Experience
I've had the great pleasure of working with a number of exceptional students at Georgia Tech.

  • PRESENT: Master's Student - Jiaxi Xu
  • FALL 2021: Master's Student - Arjun Krishna -> PhD student at UPenn's GRASP Lab
  • FALL 2020: Master's Student - Qian Luo -> NLP Algorithm Engineer at Alibaba Group

Teaching Experience
I had an amazing experience helping teach one of the largest classes (1000+ students) at Georgia Tech.
CS6601 - Artificial Intelligence class by Dr. Thomas Ploetz & Dr. Thad Starner.

  • FALL 2019 & SPRING 2020: Head Teaching Assistant
  • FALL 2018 & SPRING 2019: Teaching Assistant

Scholarly Activities
  • IROS 2023 - Session co-chair, Mechanism Design
  • RA-L 2023 - Reviewer at IEEE Robotics and Automation Letters
  • RSS 2023 - Reviewer at Proceedings of Robotics: Science and Systems
  • RA-L 2022 - Reviewer at IEEE Robotics and Automation Letters
  • ICRA 2021 - Reviewer at IEEE International Conference on Robotics and Automation


2023© Maks Sorokin
built using Skeleton, icon credits flaticon, hosted by GitHub Pages❤️
