Visualizing stuff using Augmented Reality

Shashank Yadav
3 min read · Nov 9, 2020

Augmented reality has been slowly gaining steam over the years, and it aims to completely eliminate the boundary between the real and the virtual. Both Apple and Google released their AR SDKs (ARKit and ARCore, respectively) quite some time ago. One of the best real-world examples of this technology is IKEA Place, which lets you see how a piece of furniture will look in your house.

IKEA Place on the App Store

This is quite a useful application and sounded like something worth building for learning purposes. So that’s what I did, for Android!

TL;DR

For the feisty ones, here’s a link to the repo: https://github.com/shashank-yadav/glimpse-android

And here’s a link to the app on the Play Store: https://play.google.com/store/apps/details?id=com.glimpse.app

The Process

The goal of this exercise was to replicate the most basic functionality:

  1. Ability to place objects in, and remove them from, the real world
  2. Ability to change the size and position of placed objects

You might be wondering how we created the 3D models for this project. We used Structure from Motion (SfM) to build a model of a sofa that was at my home :) Since SfM is too vast a topic to discuss right now, let’s just move forward and assume we already have the 3D models.

Real vs Augmented: real sofa vs 3D model of sofa through our app

Architecture

Loose architecture of the App

The app is built on two major frameworks:

  1. Sceneform (ARCore): This is an easier way of using ARCore without getting into the nitty-gritty details of OpenGL. It takes care of everything related to AR: identifying a plane, placing and interacting with objects, lifelike rendering, and much more (see the sketch after this list). You can find the list of supported devices here: https://developers.google.com/ar/discover/supported-devices
  2. Firebase: You can use any database; there is no specific reason to use Firebase other than ease of integration. It handles everything related to data: user details, authentication, product metadata, store metadata, product images, and even the 3D models themselves.
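
To make this concrete, here’s a minimal sketch of tap-to-place with Sceneform, assuming the layout hosts an ArFragment and a ModelRenderable has already been loaded. The function and parameter names are illustrative, not taken from the repo:

```kotlin
import com.google.ar.core.Plane
import com.google.ar.sceneform.AnchorNode
import com.google.ar.sceneform.rendering.ModelRenderable
import com.google.ar.sceneform.ux.ArFragment
import com.google.ar.sceneform.ux.TransformableNode

// Illustrative helper: wires up tap-to-place on a Sceneform ArFragment
fun setUpTapToPlace(arFragment: ArFragment, renderable: ModelRenderable) {
    arFragment.setOnTapArPlaneListener { hitResult, plane, _ ->
        // Only place furniture on floor-like planes
        if (plane.type == Plane.Type.HORIZONTAL_UPWARD_FACING) {
            // Anchor a node to the exact point the user tapped on the plane
            val anchorNode = AnchorNode(hitResult.createAnchor())
            anchorNode.setParent(arFragment.arSceneView.scene)

            // TransformableNode gives pinch-to-scale, drag and rotate for free
            val node = TransformableNode(arFragment.transformationSystem)
            node.renderable = renderable
            node.setParent(anchorNode)
            node.select()
        }
    }
}
```

TransformableNode is what covers goal #2 above: Sceneform’s transformation system handles pinch-to-scale, drag-to-move, and twist-to-rotate gestures out of the box. Removing a placed object boils down to detaching its anchor (anchorNode.anchor?.detach()).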

User Flow

  1. User logs in with their phone number
  2. A list of stores is shown to choose from
  3. Within each store there is a list of products along with their details
  4. User selects a product and the camera opens; the product’s 3D model is fetched from Firebase (see the sketch after this list)
  5. User points at a planar surface and moves the phone side to side for plane detection
  6. Once detection succeeds, the user taps the location where they want to place the object
  7. The placed object can now be manipulated, and more objects can be added or deleted
An example of the product screen
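
Steps 3 and 4 are where Firebase comes in. Here’s a rough sketch of how a product’s model could be fetched from Firebase Storage and turned into a renderable; the storage path, file format, and helper name are assumptions for illustration and may differ from the actual repo:

```kotlin
import android.content.Context
import android.net.Uri
import com.google.ar.sceneform.assets.RenderableSource
import com.google.ar.sceneform.rendering.ModelRenderable
import com.google.firebase.storage.FirebaseStorage
import java.io.File

// Hypothetical helper: downloads a product's GLB model from Firebase Storage
// and builds a ModelRenderable out of it. The "models/<id>.glb" path is an
// assumption, not necessarily the bucket layout the app actually uses.
fun loadModelFromFirebase(
    context: Context,
    productId: String,
    onReady: (ModelRenderable) -> Unit
) {
    val modelRef = FirebaseStorage.getInstance()
        .reference
        .child("models/$productId.glb")
    val localFile = File.createTempFile("model", ".glb", context.cacheDir)

    modelRef.getFile(localFile).addOnSuccessListener {
        // Sceneform 1.15+ can load glTF/GLB files at runtime via RenderableSource
        ModelRenderable.builder()
            .setSource(
                context,
                RenderableSource.builder()
                    .setSource(context, Uri.fromFile(localFile), RenderableSource.SourceType.GLB)
                    .build()
            )
            .build()
            .thenAccept { renderable -> onReady(renderable) }
            .exceptionally {
                // Don't crash the AR session on a bad download or parse failure
                null
            }
    }
}
```

The resulting ModelRenderable can then be handed to the tap-to-place helper sketched earlier. Downloading to a local file also means the model can be cached and reused without hitting the network on every placement.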

Conclusion

The app worked out pretty well on the Android phones I tried: a Galaxy Note 8, a OnePlus 7, and a few others.

There can be issues with plane detection on textureless surfaces, and that can’t be fully worked around: ARCore finds planes by tracking visual feature points, so a flat, featureless surface gives it very little to latch onto. Other than that, the app works pretty well. The real and virtual sofas looked almost identical, so I’d call it a success! Feel free to use the code for your own purposes. I personally see this as a great way to enrich online buying experiences.
