FHL Vive Center for Enhanced Reality

Tutorials

ISMAR 2022 Tutorial

OpenARK — Tackling Augmented Reality Challenges via an Open-Source Software Development Kit
Presenters: Dr. Allen Yang, Dr. Mohammad Keshavarzi, and Adam Chang

Abstract

This tutorial is a revised and updated edition of the OpenARK tutorials presented at ISMAR 2019 and 2020. Its aim is to present OpenARK, an open-source augmented reality development kit founded at UC Berkeley in 2015. Since then, the project has received high-impact awards and visibility. OpenARK is currently used by several industry partners, including HTC Vive, Siemens, Ford, and State Grid. In 2018, OpenARK won the only Mixed Reality Award at the Microsoft Imagine Cup Global Finals; in the same year, it won a Gold Medal at the Internet+ Innovation and Entrepreneurship Competition in China, the largest competition of its kind in the country. OpenARK has received funding support from an Intel RealSense research grant, the NSF, and the ONR.

OpenARK includes a multitude of core functions critical to AR developers and future products, including multi-modality sensor calibration, depth-based gesture detection, depth-based deformable avatar tracking, and SLAM and 3D reconstruction. In addition, much recent work has gone into a real-time, deep-learning 3D object tracking module that addresses the so-called Digital Twin problem: overlaying a virtual augmentation on a real object with near-perfect accuracy, which enables a wide variety of AR functionalities. All functions are based on state-of-the-art real-time algorithms and are coded to run efficiently on mobile-computing platforms.
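
To make the Digital Twin idea concrete, the sketch below (our illustration, not OpenARK's actual API; all names and values are hypothetical) shows the final overlay step: given a 6-DoF object pose (rotation R, translation t) estimated by a 3D object tracker, the virtual model's points are transformed into the camera frame and projected with the camera intrinsics K, so the rendered augmentation lands on the real object.

import numpy as np

def project_model(points, R, t, K):
    # points: (N, 3) model vertices in object coordinates
    # R:      (3, 3) rotation, object -> camera (from the pose tracker)
    # t:      (3,)   translation, object -> camera, in meters
    # K:      (3, 3) pinhole camera intrinsics
    cam = points @ R.T + t            # object frame -> camera frame
    pix = cam @ K.T                   # apply intrinsics
    return pix[:, :2] / pix[:, 2:3]   # perspective divide -> (N, 2) pixels

# Toy example: the corners of a unit cube, identity rotation, the object
# 2 m in front of the camera, and a 600-pixel focal length.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
uv = project_model(cube, np.eye(3), np.array([0.0, 0.0, 2.0]), K)
print(uv.round(1))

The accuracy of the overlay is entirely determined by the accuracy of the estimated pose, which is why near-perfect tracking is the crux of the Digital Twin problem.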

Another core component of OpenARK is its set of open-source depth perception databases. We have made two unique databases available to the community: one on depth-based gesture detection, and the other on mm-accuracy indoor and outdoor large-scale scene geometry models with AR attribute labeling. We will give an overview of our efforts in designing and constructing these databases, which could benefit the community at large.

Finally, we will discuss our effort to make depth-based perception easily accessible to application developers, who may not have, and should not be required to acquire, a deep understanding of 3D point-cloud and reconstruction algorithms. The last core component of OpenARK is an interpreter of 3D scene layouts and their compatible AR attributes, based on generative design principles first invented for creating architectural design layouts. We will discuss the fundamental concepts and algorithms of generative design and how it can be used to interpret common 3D scenes and their attributes for intuitive AR application development.
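
As a rough, hypothetical sketch of the generate-and-score loop at the heart of generative design (not OpenARK's actual interpreter), the snippet below samples candidate placements for a virtual object in a 2D floor plan, rejects those that collide with occupied cells, scores the rest against a soft distance-to-user constraint, and keeps the best:

import random

def score(candidate, scene):
    # Higher is better: one hard constraint (no collision with occupied
    # cells) and one soft constraint (stay roughly 1.5 m from the user).
    x, y = candidate
    if (round(x), round(y)) in scene["occupied"]:
        return float("-inf")          # hard constraint: reject collisions
    d_user = ((x - scene["user"][0]) ** 2 + (y - scene["user"][1]) ** 2) ** 0.5
    return -abs(d_user - 1.5)         # soft constraint: ~1.5 m from the user

scene = {"user": (0.0, 0.0), "occupied": {(1, 0), (0, 1)}}
candidates = [(random.uniform(-3.0, 3.0), random.uniform(-3.0, 3.0))
              for _ in range(500)]
best = max(candidates, key=lambda c: score(c, scene))
print("chosen placement:", best)

Practical generative-design systems replace random search with constraint solvers or gradient-based optimizers and score full layouts rather than single placements, but the generate-evaluate-select structure is the same.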

The teaching material of this tutorial will be drawn from a graduate-level advanced topics course on AR/VR offered at UC Berkeley for the past three years. Our teaching material can be downloaded from two websites:

ISMAR 2022
OpenARK Github

Videos

OpenARK Tutorial by Allen Yang

Generative Modeling for Mapping the Virtual to the Physical by Dr. Mohammad Keshavarzi

OpenARK SLAM by Adam Chang

3D Reconstruction by Adam Chang

Digital Twin by Adam Chang

OpenARK Tutorial Slides

PDF

ISMAR 2020 Tutorial

OpenARK — Tackling Augmented Reality Challenges via an Open-Source Software Development Kit
Presenters: Allen Yang, Adam Chang, and Mohammad Keshavarzi

Abstract
This tutorial is a revised and updated edition of the first OpenARK tutorial, presented at ISMAR 2019. Its aim is to present OpenARK, an open-source augmented reality development kit founded by Dr. Allen Yang at UC Berkeley in 2015. Since then, the project has received high-impact awards and visibility. OpenARK is currently used by several industry partners, including HTC Vive, Siemens, Ford, and State Grid. In 2018, OpenARK won the only Mixed Reality Award at the Microsoft Imagine Cup Global Finals; in the same year, it won a Gold Medal at the Internet+ Innovation and Entrepreneurship Competition in China, the largest competition of its kind in the country. OpenARK currently receives funding support from an Intel RealSense research grant and the ONR.

OpenARK includes a multitude of core functions critical to AR developers and future products, including multi-modality sensor calibration, depth-based gesture detection, depth-based deformable avatar tracking, and SLAM and 3D reconstruction. All functions are based on state-of-the-art real-time algorithms and are coded to run efficiently on mobile-computing platforms.

Another core component of OpenARK is its set of open-source depth perception databases. We have made two unique databases available to the community: one on depth-based gesture detection, and the other on mm-accuracy indoor and outdoor large-scale scene geometry models with AR attribute labeling. We will give an overview of our efforts in designing and constructing these databases, which could benefit the community at large.

Finally, we will discuss our effort to make depth-based perception easily accessible to application developers, who may not have, and should not be required to acquire, a deep understanding of 3D point-cloud and reconstruction algorithms. The last core component of OpenARK is an interpreter of 3D scene layouts and their compatible AR attributes, based on generative design principles first invented for creating architectural design layouts. We will discuss the fundamental concepts and algorithms of generative design and how it can be used to interpret common 3D scenes and their attributes for intuitive AR application development.

ISMAR 2020
OpenARK Github

Videos

OpenARK Introduction Part 1 by Allen Yang

OpenARK Introduction Part 2 by Allen Yang

SLAM and 3D Reconstruction Part 1 by Adam Chang

3D Reconstruction Part 2 by Adam Chang

OpenARK Installation Part 3 by Adam Chang

Generative Modeling in Spatial Computing by Mohammad Keshavarzi

PDF

ISMAR 2019 Tutorial

OpenARK — Tackling Augmented Reality Challenges via an Open-Source Software Development Kit
Presenters: Allen Y. Yang, Luisa Caldas, Woojin Ko, and Joseph Menke

Abstract
The aim of this tutorial is to present OpenARK, an open-source augmented reality development kit founded by Dr. Allen Yang at UC Berkeley in 2015. Since then, the project has received high-impact awards and visibility. OpenARK is currently used by several industry partners, including HTC Vive, Siemens, Ford, and State Grid. In 2018, OpenARK won the only Mixed Reality Award at the Microsoft Imagine Cup Global Finals; in the same year, it won a Gold Medal at the Internet+ Innovation and Entrepreneurship Competition in China, the largest competition of its kind in the country. OpenARK currently receives funding support from an Intel RealSense research grant and the NSF.

OpenARK includes a multitude of core functions critical to AR developers and future products, including multi-modality sensor calibration, depth-based gesture detection, depth-based deformable avatar tracking, and SLAM and 3D reconstruction. All functions are based on state-of-the-art real-time algorithms and are coded to run efficiently on mobile-computing platforms.

Another core component of OpenARK is its set of open-source depth perception databases. We have made two unique databases available to the community: one on depth-based gesture detection, and the other on mm-accuracy indoor and outdoor large-scale scene geometry models with AR attribute labeling. We will give an overview of our efforts in designing and constructing these databases, which could benefit the community at large.

Finally, we will discuss our effort to make depth-based perception easily accessible to application developers, who may not have, and should not be required to acquire, a deep understanding of 3D point-cloud and reconstruction algorithms. The last core component of OpenARK is an interpreter of 3D scene layouts and their compatible AR attributes, based on generative design principles first invented for creating architectural design layouts. We will discuss the fundamental concepts and algorithms of generative design and how it can be used to interpret common 3D scenes and their attributes for intuitive AR application development.

The teaching material of this tutorial will be drawn from a graduate-level advanced topics course on AR/VR offered at UC Berkeley for the past three years.

Agenda and Slides

Session 1: OpenARK – Tackling AR Challenges via an Open-Source Development Kit by Allen Yang

PDF

Session 2: OpenARK – Open Source Augmented Reality by Joe Menke

PDF

Session 3: Optimization and Manipulation of Contextual Mutual Spaces for Multi-User Virtual and Augmented Reality Interaction by Woojin Ko

PDF
