Okay, so welcome everyone to this tutorial on HoloLens, and in particular on Research Mode. Before getting there, here is roughly what we will talk about: I will first, very briefly, for those of you not familiar with HoloLens, introduce the sensors inside HoloLens and what HoloLens was built for.

That gives you some of the hardware architecture. The main part of the tutorial will then introduce the new Research Mode components, which essentially make all of the imaging sensors available to researchers. You can switch the device into a special mode: normally, for privacy reasons, the sensors are off-limits to applications (except for the front photo/video camera, I believe), but if you switch your device into Research Mode you have access to all the imaging sensors.

In terms of computer vision, this is a device that computes a lot of things on-device, all the time: the device's position in space, a 3D reconstruction of the environment, and so on. All of this keeps running, but at the same time you can get access yourself to those processed streams and then run your favorite vision algorithms, either on the device itself, or you can stream them to a PC and do the processing there, for example. That is what we will mostly talk about, and I will show you examples of it. Then at the end we will also show you our new depth sensor, which will both be integrated into the next generation of HoloLens and will also be made available independently as a standalone Kinect sensor package.
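The "stream them to a PC" workflow mentioned above needs some wire format for the frames. As a minimal, illustrative sketch (this is not the tutorial's actual sample code; the header layout is my own assumption), each frame can be sent as a small fixed header followed by the raw pixel bytes:

```python
import struct

# Header for each frame sent over a TCP socket.
# Layout is an assumption for this sketch, not an official protocol:
#   little-endian uint32 width, uint32 height, uint32 bytes per pixel.
HEADER = struct.Struct("<III")

def pack_frame(width, height, bytes_per_pixel, pixels):
    """Serialize one sensor frame as header + raw pixel bytes."""
    assert len(pixels) == width * height * bytes_per_pixel
    return HEADER.pack(width, height, bytes_per_pixel) + pixels

def unpack_frame(buf):
    """Parse a serialized frame back into metadata and pixel bytes."""
    width, height, bpp = HEADER.unpack_from(buf, 0)
    pixels = buf[HEADER.size:HEADER.size + width * height * bpp]
    return width, height, bpp, pixels
```

On the device side you would fill `pixels` from the sensor frame buffer and write the packed bytes to a socket; the PC side first reads the 12 header bytes, then exactly `width * height * bytes_per_pixel` payload bytes before processing.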

You will get a preview of that depth sensing technology towards the end of the tutorial. It would be nice if all of this keeps to the time slot.

Okay, so here is a short video; it shows you what HoloLens was made for. It essentially lets us augment the world with holograms, virtual objects that can be inserted into the world. For the games you see here, it is important that HoloLens is always aware of where it is in the world, and also of the 3D geometry of the world, so that these virtual objects actually interact with real surfaces. Beyond games there are lots of applications; in the scenarios we are working on you can, for example, look at buildings before they are built and get a realistic impression by seeing them in 3D.

So that was for those of you who were not yet familiar with the device. Now, what is inside? If you look here, the key thing, and probably what interests you most, are the sensors. There is the photo/video camera, which was always accessible, and then there are, on one hand, the four environment tracking cameras. The two central ones form a stereo pair; they are mostly used for pose estimation, for head tracking, and because they are a stereo system the tracker immediately has scale initialized. Then there are two additional lateral cameras that provide a wider field of view, so that if you quickly look around, features that are no longer visible to the central cameras can still be seen by the lateral ones and tracking keeps working. So those are the four cameras used for tracking. They are also very light sensitive, so they work much better in low light; they run at 640x480 with a global shutter. Then the other sensor is the depth camera, a time-of-flight sensor; you will hear more later about the next-generation version of this camera.
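Conceptually, each pixel of that time-of-flight camera gives a depth value that can be back-projected into a 3D point. A minimal sketch, assuming an ideal pinhole model with made-up intrinsics (the real sensor also needs distortion handling, and Research Mode actually exposes per-pixel unprojection maps rather than simple pinhole parameters):

```python
def unproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth in meters to a 3D
    camera-space point, assuming an ideal pinhole camera with
    focal lengths (fx, fy) and principal point (cx, cy)."""
    x = (u - cx) / fx * depth_m
    y = (v - cy) / fy * depth_m
    return (x, y, depth_m)

# Made-up intrinsics, for illustration only.
fx = fy = 500.0
cx, cy = 320.0, 240.0
```

Running `unproject` over every valid depth pixel of a frame yields the point cloud that feeds surface reconstruction.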

Going back to the details of the depth camera: it actually has two modes. One mode looks at near range and is there for hand tracking; if you want to click, for example, in HoloLens, that has to run at high frequency to make interaction feel responsive. At the same time we also want to capture the environment, which we need for spatial mapping. For reconstructing the environment the frame rate of the long-range mode is much lower, because the environment is not expected to change as quickly as your hands do; that limits the required compute and essentially enables us to save energy. If you look at HoloLens, essentially everything in its design is about saving energy, and in particular saving energy not even so much for the battery as for heat dissipation; that is actually the biggest challenge in the design of HoloLens.

So here are the different sensors. Notice also that there is an IMU in here, which is used for visual-inertial tracking. And there is a compute board; this is essentially a full computer dedicated to the real-time, always-on computer vision and perception algorithms that run in the background. That also means that applications, for example your own computer vision applications, get the SoC for processing; almost none of it is used for the core vision algorithms, the always-on embedded algorithms in HoloLens. Of course there is also spatial sound. Then here is an exploded view of all the hardware, quite a complicated system, and here is a more detailed system diagram; this is, of course, what Research Mode will actually enable you to get access to, these streams. A little bit more on the HPU: this is the current HPU. Last year at CVPR we announced the next-generation HPU, and we showed some demonstrations of real-time hand tracking, also leveraging a DNN core that is implemented on the next-generation HPU. So, what are the core computer vision functionalities of HoloLens?

They run in the background, essentially all the time. The first thing is knowing where you are: this is six-degrees-of-freedom visual-inertial odometry and SLAM, and it also includes visual relocalization. That uses the four cameras at the same time.
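The practical upshot of this tracker is that every sensor frame you later pull out in Research Mode comes with the device's pose, conceptually a camera-to-world rigid transform. A pure-Python sketch of applying such a pose to a camera-space point, assuming a row-major 4x4 matrix convention (the convention here is illustrative, not the API's):

```python
def apply_pose(cam_to_world, point_cam):
    """Map a 3D point from camera space to world space.

    cam_to_world: 4x4 rigid transform as nested lists, row-major.
    point_cam: (x, y, z) coordinates in the camera frame.
    """
    x, y, z = point_cam
    p = (x, y, z, 1.0)  # homogeneous coordinates
    return tuple(
        sum(cam_to_world[r][c] * p[c] for c in range(4))
        for r in range(3)
    )

# Identity pose leaves a point unchanged; a pure translation shifts it.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
move_up = [[1, 0, 0, 0], [0, 1, 0, 1.5], [0, 0, 1, 0], [0, 0, 0, 1]]
```

Chaining this transform over the depth frames is what lets per-frame point clouds from a moving device be fused into one consistent world-space reconstruction.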

The second thing is actually about the environment: the surface reconstruction we generate is then used to enable hologram placement, to allow occlusions, and to let holograms work in the environment. Okay, so that was the quick intro on HoloLens; we'll continue with Pavel, who will show you hands-on some of the things you can extract from HoloLens.

I think we'll start with a demo. If everything works, you should be able to see the Device Portal, and to see where I'm moving in the environment. The networking here is tricky, we might have to fall back to the cable. Ah, it just worked; our Wi-Fi is actually malfunctioning here. So, this is the Device Portal. A Windows 10 device, when put into developer mode, always runs a web server that exposes some device-specific information, and on HoloLens some of that information is essentially a visualization of what HoloLens sees at the moment. (I have made a tactical mistake, this is on my desktop, sorry.) Okay, so this is what HoloLens sees: I just pulled it out of the box here and looked around, and this is a visualization of the surface reconstruction. Let me put this in first person; now this is what the device sees from Marc's perspective, and that blob is probably me. It updates at a low frequency, as Marc mentioned, to conserve power; those updates run at about once per second.

If he were to walk around, sadly we are limited by the cable for the web connection since the Wi-Fi is not working, but you could see how the device is tracking through the room and updating the mesh; I just clicked refresh. This demo would work a little better over Wi-Fi. What you see here is the output of those environment tracking cameras, so the device knows where Marc is looking, and also the depth camera in the long-range mode that is used for surface reconstruction. So with that, I'm going to start talking about Research Mode.
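Since the Device Portal shown in the demo is just an HTTPS server on the device, you can also script it instead of using the browser UI. A hedged sketch of building request URLs against it; the hostname and query parameters below are placeholders, and the exact endpoint paths vary by Device Portal version, so check them against your device's REST API documentation:

```python
from urllib.parse import urlencode, urlunsplit

def portal_url(host, path, params=None):
    """Build a Windows Device Portal request URL.

    host: device address, e.g. "192.168.0.12" (placeholder).
    path: REST endpoint path; verify against your portal's docs,
          since available endpoints differ across versions.
    """
    query = urlencode(params or {})
    return urlunsplit(("https", host, path, query, ""))

# Example (the endpoint path is illustrative; confirm on your device):
url = portal_url("192.168.0.12", "/api/holographic/stream/live.mp4",
                 {"holo": "true", "pv": "true"})
```

A real request would also need the portal's username/password credentials and, typically, acceptance of the device's self-signed certificate.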

I'm going to skip over the agenda and address this directly. What I wanted to reiterate is that HoloLens is a PC, and you run your code on the device. Spec-wise it is similar to the Surface 3 (not the Pro, the Surface 3 tablets). It comes with all the Windows 10 APIs that you would expect to see on an essentially mobile device. The most relevant ones are the perception APIs, which tell you where the device is with respect to the world, and the spatial mapping APIs, which describe the surface reconstruction. Other than that, we have all the generic APIs for voice and other input modalities, cameras, microphones, graphics; it is all available to you. We actually introduced Research Mode before: we promised we were going to deliver it, and it actually shipped last month. So if you have a HoloLens and you accept the April 2018 update of Windows 10, you should be able to get access to the four cameras, the depth sensor, and the PV camera you already had before. Research Mode is something that you need to enable; as you have seen in the design, the…

