All posts by George Brown

Dev updates

I’m working on some new features, but they’re in an area I’m unfamiliar with, so I’m having to do a bit of learning and have been down a few blind alleys. Once I get over the first hurdles I’ll be back in familiar territory; new updates soon.

EDIT: Further to this, I finally figured out how to get my OAuth tokens and am now working through the whole web API and JSON side of online cloud services. I found getting the token very confusing and poorly documented, but now that I have access, using the APIs seems much simpler.
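
For anyone curious, the API side really is the simpler part. Here’s a rough sketch of the kind of call involved, assuming a generic JSON web API accessed with URLSession and an already-obtained OAuth access token; the endpoint and token handling are placeholders rather than any particular service:

```swift
import Foundation

// Hedged sketch: call a JSON web API with an OAuth access token.
// The endpoint URL and token are placeholders, not a specific cloud service.
func fetchJSON(from endpoint: URL, accessToken: String,
               completion: @escaping (Result<[String: Any], Error>) -> Void) {
    var request = URLRequest(url: endpoint)
    // Most OAuth-protected APIs expect the token as a Bearer credential.
    request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Accept")

    URLSession.shared.dataTask(with: request) { data, _, error in
        if let error = error {
            completion(.failure(error))
            return
        }
        do {
            // Parse whatever JSON the service returned into a dictionary.
            let json = try JSONSerialization.jsonObject(with: data ?? Data()) as? [String: Any] ?? [:]
            completion(.success(json))
        } catch {
            completion(.failure(error))
        }
    }.resume()
}
```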

Dev Update

Been a bit quiet the last couple of weeks, haven’t I? I took some time off for the local Blue Balls festival here in Luzern (that is its real name; I’m unsure if the Swiss were aware of the double entendre…).

Anyway, I’m back to work this week and working on a new feature which should be pretty exciting. Keep checking for updates!

Update on video troubles

OK, so I’ve come up with a workaround for the orientation issue with videos filmed on Apple devices: we can apply the rotation when the video is played (which was already happening with portrait videos). This isn’t as good as converting the video to the correct orientation, as additional real-time processing is required. I will continue to look into the problem and see if I can raise a bug on it, or find a more effective workaround.
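
For the technically minded, here’s a small sketch of the playback-time idea, assuming an AVFoundation-style pipeline (the function is illustrative rather than the app’s actual code): read the rotation the device records into the video track and apply it when rendering, instead of re-encoding the file.

```swift
import AVFoundation
import CoreGraphics

// Illustrative sketch: read the capture orientation an Apple device stores in the
// video track, so it can be applied at playback/render time instead of re-encoding.
func recordedRotation(of url: URL) -> (angle: CGFloat, displaySize: CGSize)? {
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return nil }

    // preferredTransform carries the recorded orientation (e.g. a 90° rotation for
    // portrait footage); naturalSize is the un-rotated pixel size.
    let t = track.preferredTransform
    let angle = CGFloat(atan2(Double(t.b), Double(t.a)))

    let rotated = track.naturalSize.applying(t)
    let displaySize = CGSize(width: abs(rotated.width), height: abs(rotated.height))

    // The caller rotates its video layer or quad by `angle` at render time;
    // that is the extra real-time processing mentioned above.
    return (angle, displaySize)
}
```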

Next release should be out in a few days.

More video troubles

Ugh, now I’m having trouble importing non-standard-orientation videos, a problem I discovered while testing the next release. AVAssetExportSession is giving me memory problems and crashing the app when I try to correct the orientation of videos filmed on Apple devices; it works great for any normally oriented video… I’m finding a workaround, but the release is delayed.
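
For context, the kind of export-with-composition pass described above looks roughly like this; a minimal sketch assuming AVFoundation, with the preset, output type and frame rate as illustrative choices rather than the app’s actual settings:

```swift
import AVFoundation

// Minimal sketch of baking a track's preferredTransform into the exported file
// with AVAssetExportSession. Preset, output type and frame rate are illustrative.
func exportWithCorrectedOrientation(_ source: URL, to output: URL,
                                    completion: @escaping (Bool) -> Void) {
    let asset = AVURLAsset(url: source)
    guard let track = asset.tracks(withMediaType: .video).first,
          let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetHighestQuality) else {
        completion(false)
        return
    }

    // Apply the recorded rotation and size the output to the rotated frame.
    let transform = track.preferredTransform
    let rotated = track.naturalSize.applying(transform)

    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    layerInstruction.setTransform(transform, at: .zero)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
    instruction.layerInstructions = [layerInstruction]

    let composition = AVMutableVideoComposition()
    composition.instructions = [instruction]
    composition.renderSize = CGSize(width: abs(rotated.width), height: abs(rotated.height))
    composition.frameDuration = CMTime(value: 1, timescale: 30)

    export.videoComposition = composition
    export.outputURL = output
    export.outputFileType = .mp4
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}
```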

AVPlayerViewController troubles

I’ve removed the standard UIImagePickerController that let users pick photos and videos from their camera roll. While it functions pretty well at that job, it had the annoying habit of automatically displaying videos and images on the external display. For my application this is undesirable, as we’re already using the secondary display for our own purposes.

So I decided to build my own version of the picker controller, which is fairly simple to do using the Photos framework, a table view, a collection view and some views for displaying videos and images. But then I ran into trouble again with the AVPlayerViewController I was going to use so the user can check a video before importing it. While it presents properly at first, there’s a full-screen icon that can be selected, and again the video takes over the external display. The AVPlayer object you associate with the controller can be told to disable external playback, but the view controller ignores this setting. Very frustrating.
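
The asset-fetching side of a picker like this really is only a few lines with the Photos framework. A rough sketch of the idea (class and method names are just illustrative, not the app’s actual code):

```swift
import Photos
import UIKit

// Illustrative sketch: fetch photos and videos with the Photos framework and
// serve thumbnails to a collection view via PHCachingImageManager.
final class AssetGridDataSource {
    private let imageManager = PHCachingImageManager()
    private(set) var assets: PHFetchResult<PHAsset>

    init() {
        let options = PHFetchOptions()
        options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
        assets = PHAsset.fetchAssets(with: options)   // both photos and videos
    }

    func thumbnail(at index: Int, size: CGSize,
                   completion: @escaping (UIImage?) -> Void) {
        imageManager.requestImage(for: assets[index],
                                  targetSize: size,
                                  contentMode: .aspectFill,
                                  options: nil) { image, _ in
            completion(image)
        }
    }
}
```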

So I’m going to have to build my own player. It’s really not that difficult, but it will involve creating some icons, more code, time and testing, when all I really want is a way to disable the automatic takeover of the external display.
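
The core of a replacement player is actually tiny; here’s a bare-bones sketch of the idea, using an AVPlayerLayer-backed view (again illustrative, the real thing still needs controls, icons and so on):

```swift
import AVFoundation
import UIKit

// Bare-bones sketch of a home-grown preview player: an AVPlayerLayer-backed view
// with external playback disabled, so a connected display is left alone.
final class PreviewPlayerView: UIView {
    override class var layerClass: AnyClass { AVPlayerLayer.self }

    private var playerLayer: AVPlayerLayer { layer as! AVPlayerLayer }

    func play(url: URL) {
        let player = AVPlayer(url: url)
        // With our own layer there is no system UI to override this setting.
        player.allowsExternalPlayback = false
        playerLayer.player = player
        player.play()
    }
}
```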

I submitted a bug to Apple about it, but I don’t expect them to move quickly. Obviously AVPlayerViewController was designed to be easy to plug into your app, but the oversimplification of the use case is annoying: if you have an external display connected, then apparently you must want the video displayed on it. I guess, even with the recent addition of Picture in Picture, Apple only sees playing more than one video at once as a special case.

Anyway, there’s a couple of days’ delay, but the next update is coming along…

Back to work

After a bit of a tour of Switzerland in the hot weather last week, I’m back working on the next release. This one will remove the stock photo and video picker and replace it with a custom version that won’t mess with connected displays, and maybe offer extra features in the future. It’s almost done already, so I might fit in a few other enhancements; there should be a release by the end of the week.

Plans for next release

So after fixing the layers view, I’m going to address another fundamental part of the GUI: video importing. At the moment I’m using the stock Apple UIImagePickerController for importing video and images, and it’s less than satisfactory for my needs: it arbitrarily takes over the connected display, a totally undocumented behaviour which doesn’t seem to be possible to turn off. I’ll be replacing it with a custom importer (the basics are already there with the slideshow creator) and, while I’m at it, improving other aspects of video importing and usage to better conserve resources and make the Reality Augmenter more responsive.

Hopefully that won’t be too tough and I can add some extra stuff and still keep within a 2-3 week release cycle. After that, there are going to be some major new features and improvements in the coming months; you simply can’t get a better projection mapping app for your mobile device.

New Version out today

New Features
This update gets rid of the layers table view, replacing it with a new view that shows the layer geometries instead, so you can instantly see which layer is which. From this view you can tap any layer to edit its properties. Layer geometries are locked by default to stop you accidentally interfering with the projection; to unlock, either long-press on any area or touch the edit button in the top left. You can then select and edit any layer like before: pinch to zoom, pan, and long-press to unlock snaps. The geometry view also looks a bit prettier, the checker pattern lets you see whether your texture edges line up or not, and snapped nodes now appear in orange instead of blue.

The layer properties editor now has perspective correction. It’s off by default, and the Reality Augmenter uses bilinear interpolation, which ensures the edges of your textures always line up. You can turn on perspective correction for a more correct 3D mapping, but then the edges of textures are not guaranteed to line up; in many situations this is not an issue.

Bug fixes
Fixed a crash when trying to connect to a Google Cast device with an iPad.
Various internal code updates.

As usual, for any ideas, bug reports or suggestions, don’t hesitate to contact me.

Version 1.8 submitted for review

Despite my earlier post about finding a bug on some devices, I managed to find a workaround, so 1.8 has now been submitted for review.

The problem only seemed to occur on older devices that support just OpenGL ES 2, and it made no sense. The OpenGL driver would crash when performing a certain operation in the fragment shader, reporting gpus_ReturnGuiltyForHardwareRestart. After googling the error I couldn’t find any solutions; many people suggested it may have been incorrectly set up vertex buffer objects, but that was not the case for me. Others suggested a problem in the latest iOS release that was supposedly fixed in a beta.

For my layers, I combine multiple textures for the layer image, mask and overlay, with separate texture coordinate buffers for each texture, all created in the same way that has proved 100% reliable in all previous releases. This latest update adds a z coordinate to the texture coordinates in order to achieve the perspective correction feature. It worked fine for the layer image, but if a mask was introduced, it crashed; if I stopped dividing the xy values by the z for the mask only, it worked again. I hadn’t actually changed the size of the texture coordinates (I’d been using vec4s to respect Apple’s guidelines on padding coordinates to increase memory access efficiency); I’d just recalculated the z value as appropriate instead of using 0.
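
For anyone wondering what the division by z does, the projective lookup is essentially the following (notation mine, not lifted from the shader): each vertex stores (uq, vq, q) rather than plain (u, v), and the final coordinates divide the interpolated values back through:

$$
u' = \frac{\widehat{uq}}{\widehat{q}}, \qquad v' = \frac{\widehat{vq}}{\widehat{q}},
$$

where the hats denote values interpolated across the layer geometry. Setting q = 1 everywhere collapses this back to the plain edge-preserving interpolation used when perspective correction is off.
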

How was this problem solved? I did the texture coordinate recalculation in the vertex shader, all exactly the same data, same calculation, but it stopped the crash, for no reason I can fathom. Hopefully it’s a problem in iOS that will be resolved, but I don’t know if I want to go through the hassle of creating a custom project to try and reproduce the error for an apple bug report…