RYOICHIRO OKA

Demo Video Released

8/15/2016

Sightsync's concept is finally stated. However, the project is still in search of a better idea; the root concept may change soon.

This video was originally made for John; thanks for your help!

I'm looking for advice from anyone who has worked on a similar concept or project.
If you know someone like that, please help me get in touch with them.

Image Recognition Usages

8/7/2016

Sightsync will eventually move into image recognition, so I paid attention to image-recognition applications at ARcade and VRLA. This post sorts out a couple of them and gives my personal views.

Summary

Because image recognition is still energy-intensive on mobile platforms, its current usage is so limited that some projects duplicate each other. I also sense that these projects are still competing on, and focusing on, the technology itself (the accuracy of recognition) while lacking a UX or "practical" perspective. Meanwhile, the technology keeps evolving: space recognition will be available soon.

MagePrints

MagePrints at ARcade #4.
Users register an image paired with a video; the camera then plays the video on top of the captured image.
The usage is simple and the result is clear; people will immediately see how to use it and look forward to showing it off to friends.
My only concern is that an app already exists that does exactly the same job with additional functionality.
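
For my own notes, here is a minimal sketch of how an app in this category might pair registered images with videos, using OpenCV feature matching. The file names, threshold, and registry structure are my own assumptions, not MagePrints' actual design.

    import cv2

    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # User-registered pairs: trigger image -> video to overlay (hypothetical files).
    registry = {
        "birthday_card.png": "birthday_greeting.mp4",
        "wedding_photo.png": "wedding_clip.mp4",
    }
    descriptors = {}
    for image_path in registry:
        image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        _, descriptors[image_path] = orb.detectAndCompute(image, None)

    def find_video_for_frame(frame_gray, min_matches=40):
        """Return the video paired with whatever registered image the frame shows."""
        _, frame_descriptors = orb.detectAndCompute(frame_gray, None)
        if frame_descriptors is None:
            return None
        for image_path, image_descriptors in descriptors.items():
            matches = matcher.match(image_descriptors, frame_descriptors)
            # Count only close descriptor matches; enough of them means the
            # registered image is in view, so its paired video should play.
            if sum(1 for m in matches if m.distance < 40) >= min_matches:
                return registry[image_path]
        return None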

Animal 4D+

Grasp Assistive Technology at ARcade #4.
Users download the app and purchase a deck of animal cards; the camera then displays a 3D animal figure on top of each card. They also offer other topics: dinosaurs, careers, etc. The target is educational use.
I can't quite imagine how this app can be useful in education, though; I wonder what a child can do with both hands and most of their field of view occupied.
It's also unclear to me how this stands out from existing media. TV programs show animals' bodies, motion, and ecosystems with narration; this app just displays their bodies. If the animals interacted with the user or with each other, or talked about themselves when focused on, that would justify the app.

uSens Demo and Space Recognition

A presentation by uSens at VRLA on the current state of space recognition (and hand-gesture tracking) technology.
Users can draw lines in the air, and the lines remain in the same position.
The presenter mentioned that space recognition is becoming feasible on mobile platforms (in terms of energy usage), which is good news for Sightsync.
They also mentioned that VR will soon merge into mobile platforms, riding their lightning-fast growth in processing power. Mobile app design will have to take that growth into account.
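
As a note to myself on why the drawn lines stay put: the strokes are stored once in world coordinates and re-projected every frame with the device's current pose. A minimal sketch of that projection, with placeholder intrinsics standing in for whatever uSens actually uses:

    import numpy as np

    def project_point(p_world, R, t, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
        """Pinhole projection of one stored stroke point into the current frame.
        R (3x3) and t (3,) are the device pose mapping world to camera space;
        re-running this per frame is what keeps the stroke anchored in the air."""
        p_cam = R @ np.asarray(p_world) + np.asarray(t)
        if p_cam[2] <= 0:        # behind the camera, not visible this frame
            return None
        return (fx * p_cam[0] / p_cam[2] + cx,
                fy * p_cam[1] / p_cam[2] + cy)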

QR Code in Sightsync

7/25/2016


Background

Sightsync originally planned to use BLE beacons for tracking and mapping, but I had a couple of concerns:
  • installing BLE beacons would be expensive given Sightsync's wide coverage of space, and
  • BLE's latency and accuracy would be insufficient for Sightsync's requirements (see the sketch after this list).
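
To illustrate the accuracy concern: BLE distance is usually estimated from RSSI with a log-distance path-loss model, and a few dB of measurement noise swings the estimate by meters. The constants below are typical textbook values, not measurements of any specific beacon.

    def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
        """Estimate distance (m) from received signal strength.
        tx_power_dbm is the calibrated RSSI at 1 m; the path-loss exponent
        is ~2 in free space and higher indoors."""
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

    for rssi in (-70, -73, -76):   # a realistic +/-3 dB measurement spread
        print(f"{rssi} dBm -> {rssi_to_distance(rssi):.1f} m")
    # -70 dBm -> 3.5 m, -73 dBm -> 5.0 m, -76 dBm -> 7.1 m
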
Then Marty, at the last ARcade meetup, told me that QR codes (i.e. image recognition) can handle tracking and mapping as well.

Availability

I researched it a bit and fortunately found that it's a common method. Vuforia recognizes QR codes, so it does the mapping for free. Other libraries are known to work the same way, according to forum conversations. There are several demonstrations as well.
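
Vuforia's own API is different, but as a stand-in, even OpenCV ships a QR detector out of the box, which shows how commoditized this is. A minimal check (the frame file name is hypothetical):

    import cv2

    detector = cv2.QRCodeDetector()
    frame = cv2.imread("camera_frame.png")   # hypothetical captured frame

    # Returns the decoded payload plus the code's four corner points in the
    # image; the corners, not the payload, are what make mapping possible.
    payload, corners, _ = detector.detectAndDecode(frame)
    if corners is not None:
        print("payload:", payload)
        print("corner pixels:", corners.reshape(-1, 2))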

Performance

Here is an ideal working example of QR code recognition and object mapping (not my work). The video shows a QR code reader running on an Android device: it reads a QR code through the camera, fetches a 3D object from the encoded URL, and maps the object onto the original QR code. The object follows the QR code's location and rotation as it moves.
Notice how fast and stably the tracking and mapping are performed in the video. Sightsync needs this level of speed and stability.
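
I don't know how that demo is implemented, but the mapping step can be pictured as a perspective-n-point problem: recover the code's pose from its four corner pixels every frame. A sketch with OpenCV, assuming a calibrated camera and a 10 cm printed code (both placeholders):

    import numpy as np
    import cv2

    QR_SIZE = 0.10   # printed code width in meters (assumption)

    # The code's corners in its own coordinate system, flat on z = 0.
    object_points = np.array([
        [0, 0, 0], [QR_SIZE, 0, 0], [QR_SIZE, QR_SIZE, 0], [0, QR_SIZE, 0],
    ], dtype=np.float32)

    camera_matrix = np.array([[800, 0, 320],
                              [0, 800, 240],
                              [0, 0, 1]], dtype=np.float32)   # placeholder intrinsics
    dist_coeffs = np.zeros(5)

    def qr_pose(corner_pixels):
        """corner_pixels: 4x2 array from the detector, in image coordinates.
        Returns the code's rotation and translation relative to the camera;
        re-solving this every frame is what makes the 3D object follow the code."""
        ok, rvec, tvec = cv2.solvePnP(object_points,
                                      corner_pixels.astype(np.float32),
                                      camera_matrix, dist_coeffs)
        return (rvec, tvec) if ok else None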

Maintenance Cost

Reg, in yesterday's conversation about this topic, pointed out the potential cost of maintaining the surfaces of those images for stable recognition. This will be problematic in Sightsync's case because the images will be exposed to nature and children for a long time.

Problem in Sightsync

The most significant problem is that Sightsync cannot keep those images on screen: users have to look around. Tracking and mapping fail once the device moves away from the spot where the images were last caught on screen. That breaks Sightsync's requirement: the real and virtual sights have to be exactly synchronized.

The maintenance cost can be dealt with easily; the stability of sight synchronization, however, has to be secured. I came up with three solutions to this problem:
  1. Distribute a sufficient number of marker images across the physical space,
  2. Run a traditional location tracking method simultaneously, or
  3. Instruct users not to move their feet during the experience.

1. Placing images everywhere. Surely this will disturb users and everyone else. A bunch of QR codes pasted on every exhibit in a museum, or on the walls of buildings, is anarchy. It could be avoided by registering not only QR codes but also general objects as markers; however, that would make the stability sensitive to conditions.

2. Combining with traditional location tracking. The processing power of mobile devices would be the biggest concern. Also, researching, purchasing, or developing an algorithm that combines the two tracking methods would take up a lot of resources.
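
For the record, the simplest combination I can think of is a complementary filter: dead-reckon with the IMU between sightings and snap toward the absolute fix whenever a marker is seen. The blend weight below is made up, and a real system would likely need a Kalman filter:

    import numpy as np

    class FusedPosition:
        """Blend intermittent absolute fixes (QR) with continuous but
        drifting relative motion (IMU dead reckoning)."""

        def __init__(self, blend=0.9):
            self.position = np.zeros(3)
            self.blend = blend   # trust placed in the absolute fix

        def on_imu_step(self, velocity, dt):
            # Between marker sightings: integrate velocity (drifts over time).
            self.position = self.position + np.asarray(velocity) * dt

        def on_qr_fix(self, absolute_position):
            # A marker was seen: pull the drifting estimate toward the fix.
            self.position = (self.blend * np.asarray(absolute_position)
                             + (1 - self.blend) * self.position)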

3. Telling users not to move. This will restrict the range of AR experiences Sightsync can offer; storytelling gimmicks, for example, will be less novel or creative. I also have to figure out how to actually make users keep their feet still when the virtual world is so attractive.
This, however, removes many of Sightsync's technical and design difficulties. The sight can be a pre-rendered 360° image or animation, which eliminates bugs from the AI's or renderer's runtime behavior.
Most importantly, it still achieves Sightsync's goal: perfectly synchronized real and virtual sights (as long as the design succeeds in keeping users still).
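
To make the simplification concrete: with the feet fixed, showing the virtual sight reduces to converting the device's view direction into a pixel of one equirectangular panorama, with no runtime AI or 3D renderer involved. A sketch (the image size is arbitrary):

    def panorama_pixel(yaw_deg, pitch_deg, width, height):
        """Map a view direction to its pixel in an equirectangular panorama.
        yaw: -180..180 degrees, pitch: -90 (down) to 90 (up)."""
        u = (yaw_deg + 180.0) / 360.0 * (width - 1)
        v = (90.0 - pitch_deg) / 180.0 * (height - 1)
        return int(u), int(v)

    # Device facing 90 degrees east and 10 degrees up, on an 8K panorama:
    print(panorama_pixel(90.0, 10.0, 7680, 3840))   # -> (5759, 1706)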

Summary

QR codes do have a place in Sightsync: they are technically very achievable, capable of detection and mapping, and most likely more accurate than BLE. On the negative side, in Sightsync's usage they essentially cannot track, only detect an initial location, and the registered images have to be maintained throughout an exhibition.

The decision I have to make here is whether to give up the device-tracking feature and take a nice, easy win instead, or to keep pursuing stable sight synchronization while allowing users to move.

Sidenotes

Marty additionally explained how QR codes can become a secondary business, for example by displaying advertisements around the image, which is an interesting possibility.
Reg also told me that a QR code has the power of "reminding" people of an app when they see one, which is another interesting topic to study.

Thank you, Marty and Reg, for sharing your insights with me.


Contact me at hi@ryoichirooka.com.
© 2015-2021 Ryoichiro Oka