All posts by George Brown

Plans for the Reality Augmenter

If the Reality Augmenter isn’t already the best video/projection mapping app for iOS, it will be. Ever-expanding source options and new ways to process and display them will allow more complex mappings, challenging higher-end applications for most users’ needs. By combining ease of use with a low-cost setup, the Reality Augmenter has the potential to really open up video mapping to everyone.

I’m just finishing a new feature that adds another level of options for connecting your device to a projector. I’m really excited about it; more soon.

Version 1.4 out now

The new version got through review! You can now use webpages as sources! The page the source currently defaults to is a demo page I made for testing; I’ll be updating it soon as it’s a bit basic…

Apple’s WebKit isn’t really meant to be used this way, but the store guidelines are quite clear that web access must go through it. Rendering the webpage to an OpenGL texture is a very intensive process, and it’s not possible to do it in a background thread either, so I had to limit the refresh rate. However, top tip: you can put some JavaScript in your page to request a refresh periodically: webkit.messageHandlers.realityAugmenter.postMessage("redrawPage");
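On the native side, that message arrives through a WKScriptMessageHandler registered under the "realityAugmenter" name. A minimal sketch of the wiring, assuming a simple callback (the class and closure here are illustrative, not the app’s actual code):

```swift
import WebKit

/// Receives the "redrawPage" message posted from the page's JavaScript
/// and triggers a texture refresh via a callback.
final class RedrawMessageHandler: NSObject, WKScriptMessageHandler {
    let onRedraw: () -> Void

    init(onRedraw: @escaping () -> Void) {
        self.onRedraw = onRedraw
    }

    func userContentController(_ userContentController: WKUserContentController,
                               didReceive message: WKScriptMessage) {
        // The page posts the string "redrawPage"; ignore anything else.
        if message.body as? String == "redrawPage" {
            onRedraw()
        }
    }
}

// Wiring it up when creating the web view:
let configuration = WKWebViewConfiguration()
configuration.userContentController.add(
    RedrawMessageHandler { /* schedule a texture refresh here */ },
    name: "realityAugmenter")
let webView = WKWebView(frame: .zero, configuration: configuration)
```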

I’m already halfway through another update. I’ve been making a lot of optimisations to the rendering pipeline after spending some time with the OpenGL ES instruments and Apple’s guidelines. A bunch of small things add up to a 10-20% performance increase. I eliminated a lot of redundant calls, reorganised the rendering cycle to create and update all OpenGL objects before going through and drawing, and corrected a few places where we were creating textures and buffers larger than they should be.
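As an example of the oversized-buffer fix: a view’s backing texture only ever needs its bounds in points multiplied by the screen scale, rounded up to whole pixels. A minimal sketch of that calculation (an illustrative helper, not the app’s actual code):

```swift
import Foundation

/// Compute the pixel dimensions a texture backing a view actually needs.
/// Allocating anything larger than bounds-in-points times the screen scale
/// wastes memory and upload time.
func texturePixelSize(for boundsInPoints: CGSize, scale: CGFloat) -> (width: Int, height: Int) {
    // Round up so a fractional point size still gets full pixel coverage.
    let width = Int((boundsInPoints.width * scale).rounded(.up))
    let height = Int((boundsInPoints.height * scale).rounded(.up))
    return (width, height)
}
```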

I’ve got another surprise to come too, which should prove very useful. More soon.

New version submitted for review

Well, version 1.4 went in for review last night; the “Average App Review Times” website currently puts the average review time at 3 days.

The big change is being able to use webpages as sources. The performance is not as good as I would like (see some of my other dev blog posts), so I had to impose a minimum refresh interval to avoid potential problems. However, even with the performance problems, it still makes for a very useful feature, so it’s been included. As time goes on I’ll be improving it, either by working around Apple’s current implementation or, hopefully, because Apple provides some better tools.

Other than that, I started looking into a bit of OpenGL tuning. A few redundant calls have been removed, but I stopped short of a major rework to get the release out. There’s optimisation still to be done, and I think the app is ripe for some multithreading additions to further improve performance; it’s an ongoing job.

I still have quite a few new features planned but not yet implemented, enough to keep me going for a while, so the updates will keep coming for the near future.

Preparing for next release

Well, I’ve added a web page source. I had to make some compromises to fit within the limitations of WKWebView and my app. I scrapped constant rendering of the page, so we can’t really handle animated websites; the method to render the webpage to OpenGL is just too slow, or rather, the method to create an image from the web view is too slow, too intensive, and can’t be run asynchronously, so it causes havoc with the UI. Instead, the user can set a refresh interval to dictate how often the page is redrawn, from every one second to every sixty seconds.
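For illustration, keeping a user-supplied interval inside that one-to-sixty-second range is a one-liner (a hypothetical helper; the real app’s property names may differ):

```swift
import Foundation

/// Clamp a user-chosen page refresh interval to the supported
/// range of one second to sixty seconds.
func clampedRefreshInterval(_ requested: TimeInterval) -> TimeInterval {
    return min(max(requested, 1.0), 60.0)
}
```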

It’s also not possible to navigate beyond the initial address you declare. This is because we specify our own dimensions for the web view, meaning it would not render correctly for the screen, so I thought it would be better to present the rendered page, letting us see exactly what’s displayed, rather than a normal web view we could navigate but that would not represent what’s projected. I may make navigation possible in the future by enabling pan and zoom on the preview, but the app isn’t really a web browser.

I’m just going through final checks, and will submit the app for review very soon.

Adding web page sources

I’m currently adding a new feature for web page sources. I got it up and running in a day, which was a relief after the delays in the last update.

Web page display

However, it’s far from ideal: rendering is painfully slow and there doesn’t seem to be anything I can do about it. The trouble is WebKit, which must be used for web content on iOS or the app faces rejection from the App Store. WebKit only renders to a WKWebView, which causes some problems for me. The WKWebView is pretty useless to me outside of editing the source, as everything I do is eventually handled in OpenGL.

With no facilities to access the rendered web page outside of the WKWebView, I have to create an offscreen one. Then it’s a two-step process to render to OpenGL: first copy the view contents to a UIImage, then upload the UIImage to a texture. We’re supposed to use drawViewHierarchyInRect:afterScreenUpdates:, as described in Apple’s Technical Q&A. The trouble is, this method is too slow to run every frame; it interferes with animation and can make the keyboard unresponsive.
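The two-step process looks roughly like this in Swift (a sketch only: error handling, context management, and the exact pixel format of the snapshot are glossed over, and the function name is mine, not the app’s):

```swift
import UIKit
import WebKit
import OpenGLES

/// Snapshot an (offscreen) WKWebView into a UIImage, then upload the
/// pixels to an existing OpenGL ES texture. Must run on the main thread.
func uploadSnapshot(of webView: WKWebView, to texture: GLuint) {
    // Step 1: render the view hierarchy into an image.
    UIGraphicsBeginImageContextWithOptions(webView.bounds.size, true, 0)
    webView.drawHierarchy(in: webView.bounds, afterScreenUpdates: false)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    guard let cgImage = image?.cgImage,
          let data = cgImage.dataProvider?.data else { return }

    // Step 2: upload the raw pixels to the texture.
    // (Depending on the CGImage's byte order, a BGRA-to-RGBA swizzle
    // may be needed in practice.)
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA,
                 GLsizei(cgImage.width), GLsizei(cgImage.height), 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE),
                 CFDataGetBytePtr(data))
}
```

Step 1 is the expensive part; step 2 is comparatively cheap.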

OK, so can we do this in a worker thread and leave the main thread alone? Sadly not. Even though my WKWebView is offscreen, drawViewHierarchyInRect: will not render anything when called outside of the main thread. We can upload to the texture asynchronously, but not perform the slow task of copying the WKWebView to a UIImage. None of the other methods, such as renderInContext:, work either; I think I’ve tried every combination.

So at the moment I’m going to have to make some compromises. The first thing I tried was to limit updates to when they’re absolutely necessary, such as when any part of the webpage redraws itself. But is there any way to detect and observe when the WKWebView is redrawn? No, no there isn’t. I tried drilling down through the layers in the WKWebView, of which there are many, and they eventually seem to bottom out in layers backed by IOSurfaces. I suspect this underlying IOSurface is what’s being rendered to, and if this were OS X we could easily copy it or bind it to a texture, but on iOS in the App Store the APIs related to IOSurface are private and cannot be used, so no luck there.

We can monitor navigation, but just because navigation has completed doesn’t mean the screen has been redrawn yet. I’m currently looking into a solution that injects JavaScript to detect when anything changes, but from what I’m reading, we still need to allow a delay between when something is done and when it is rendered on screen. It might bring some improvements for webpages with little animation, but ultimately we’re still going to have problems with websites that contain animation or video.
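One way to do that injection is a WKUserScript that installs a MutationObserver and posts a message back whenever the DOM changes. A sketch under those assumptions (the handler name mirrors the app’s "realityAugmenter" bridge, but the rest is illustrative, and a DOM mutation still precedes the actual on-screen paint):

```swift
import WebKit

// JavaScript that watches the whole document for changes and notifies
// the native side through the script message handler.
let observerJS = """
const observer = new MutationObserver(function () {
    window.webkit.messageHandlers.realityAugmenter.postMessage("redrawPage");
});
observer.observe(document.documentElement, {
    childList: true, subtree: true, attributes: true, characterData: true
});
"""

// Inject the script once the document has finished loading.
let script = WKUserScript(source: observerJS,
                          injectionTime: .atDocumentEnd,
                          forMainFrameOnly: true)
let configuration = WKWebViewConfiguration()
configuration.userContentController.addUserScript(script)
```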

For now, I think I’m going to combine approaches: the user sets a refresh rate, we somehow look for changes to the rendered web page so we only refresh when needed, and, for cases where the user wants a fast refresh, we add facilities to suppress the more CPU-intensive actions while we’re animating from view to view or running some other main-thread-intensive process. All in all, it’s not going to be as good as I would like. Still, it will be something to work on in the future.

Version 1.3 now out

A very quick turnaround from Apple, who reviewed my app in less than 24 hours. I wonder if Apple provides a quicker turnaround once they start to trust the developer? Or maybe it’s because there weren’t actually that many code changes in this release, with most of the work in the editor view.

Anyway, the app is getting much closer to what I want it to be. Adding a zoomable view to the geometry editor makes mapping a hell of a lot easier, especially on iPhones, where the small screen could make fine mapping tricky and small changes picked up when a finger leaves the screen could mess up a finely positioned corner. It’s also much easier to control points near the edges, as I left a little leeway at closer zooms to move off the edge of the screen.

There are still a couple of features I want to include before I think it really starts to provide some seriously powerful functionality. One is the ability to crop sources and spread them across multiple surfaces; the code to do this actually already exists in the application (you can use it in the OS X version of my app), I just need to come up with a sensible UI. There are also a few extra source types I want to add based on customer feedback. I’ll probably set up an issue tracker at some point, but if you really think the Reality Augmenter is missing something, drop me a line and I’ll see what I can do.

Other fixes in this release:
Crash fix: returning to a video source and then leaving the video source view would cause the application to crash. Now fixed.

At some point during an iOS update, the masks stopped working. Looking at my code, they never should have worked because of one mistake I made, but for some reason OpenGL had carried on as if there was no problem. It turned out to be a simple mistake that was remarkably hard to track down. Now fixed.

Lastly, I removed the requirement for the app to run on a minimum chipset; the app now only checks that your device supports OpenGL ES 2 or higher. This is more future-proofing than anything else: currently the chipset requirement aligns with the OpenGL ES 2 requirement, but that may not be the case in the future. If your device wasn’t supported before, it likely still isn’t.

Development update

No updates in a while: I got stuck implementing a new feature I wanted to include in the next release, the ability to pinch-zoom and pan in the edit geometry view for finer control of geometry mapping. I got myself in a mess with the way I tried to implement it, by manipulating the model matrix; just when I thought I had zooming figured out, some other aspect would stop working, and it frustrated me for days. I eventually solved it in an afternoon by managing the view more sensibly. Sometimes I wish I was working with someone; a second pair of eyes would be so useful when you can’t see the wood for the trees.

So I’m finishing off a new release. There’s not so much in this one because of the aforementioned problems; it will fix a couple of bugs I picked up and add the new pinch and pan UI for geometry editing.

I’ve still got some major updates and new features to come that will enable some powerful new behaviours, so keep updating the app! And if you have any ideas for features you would like, drop me a mail and I’ll see what I can do.

Version 1.2 Out Now

The latest version of the Reality Augmenter for iOS is out now. It has a new slideshow feature, and you can now activate the screensaver from the initial start screen when no projector is connected. Some projectors support a feature to automatically start working when powered on, so they can be used with a timer.