Progress with ROOT on iPad.

iOS from a programmer's POV: iOS is Unix, but...

No X11 (no gVirtualX, no ROOT's GUI).

No custom shared libraries (only system libraries), only static libraries for users (no CINT???). All of ROOT is one static libROOT.a.

Objective-C/Objective-C++ to create a native GUI. Rich set of libraries and frameworks: CoreGraphics, CoreFoundation, CoreText, CoreAnimation, UIKit, GLES, etc. etc.

Very specific GUI, different from Windows/X11/Mac OS X: fingers instead of a mouse, touches instead of clicks, no cursor!

iOS, Mac OS X, GLES related tasks:

Port ROOT's non-GUI graphics to iOS.

Write ROOT-based native applications for iOS: a) demo of ROOT's graphics on iOS; b) browser for ROOT's files; c) components to support users who want to develop for iOS with ROOT.

Port our OpenGL code to GLES.

Native VirtualX back-end for Mac OS X.

HCP: head-coupled perspective and its application to our Eve and OpenGL code.

1. ROOT's non-GUI graphics (June):

Mainly based on the TVirtualPad interface (gPad). TPad/TCanvas internally calls TVirtualPadPainter methods (X11/Win32 GDI/OpenGL under the hood). To port this you can either re-implement TVirtualPad or extend TPad and implement TVirtualPadPainter for the specific platform. I:

implemented IOSPad: to keep the code as simple and clean as possible (TPad is already difficult to support); to add iOS-specific algorithms and extensions (not required in a standard pad); to avoid the TCanvas loop (TPad is a base for TCanvas, but it strongly depends on TCanvas (!!!)). So IOSPad is responsible for primitive management and coordinate conversions, and it also accepts all calls from the graphical primitives.

implemented IOSPainter to do the actual painting.
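
Schematically, the pad/painter split looks like the sketch below (class and method names are simplified stand-ins, not the actual TVirtualPadPainter/IOSPad interfaces):

// Illustrative sketch of the pad/painter split; names and signatures are
// simplified stand-ins, not the real ROOT interfaces.
#include <vector>

// The painter knows nothing about ROOT primitives; it only draws
// device-level graphics (lines, filled areas, text).
class PadPainter {
public:
   virtual ~PadPainter() {}
   virtual void DrawLine(double x1, double y1, double x2, double y2) = 0;
   virtual void DrawPolyLine(const std::vector<double> &xs,
                             const std::vector<double> &ys) = 0;
   virtual void DrawText(double x, double y, const char *text) = 0;
};

// The pad owns the primitives and the coordinate conversions; every
// primitive's Paint() call ends up in the painter.
class Pad {
public:
   explicit Pad(PadPainter &painter) : fPainter(painter) {}

   // Convert user (axis) coordinates to device coordinates, then forward
   // to the painter - the painter never sees TObjects.
   void PaintLine(double x1, double y1, double x2, double y2)
   {
      fPainter.DrawLine(UserToDevX(x1), UserToDevY(y1),
                        UserToDevX(x2), UserToDevY(y2));
   }

private:
   double UserToDevX(double x) const { return x; } // real code maps axis ranges
   double UserToDevY(double y) const { return y; }

   PadPainter &fPainter;
};

On iOS, IOSPainter plays the painter role and does the drawing with Quartz 2D, as described below.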

Graphics: implementation

2D graphics on iOS can be done with GLES or Quartz 2D (or UIKit). GLES has several problems:

No tessellation (required to draw polygons and render text).

No text rendering.

No built-in anti-aliasing for lines and polygons.

Quartz 2D:

Very rich 2D API (paths, strokes, blending, gradients, etc.)

Advanced text rendering

No problems with any polygons

Anti-aliasing

Can draw to a window, to a bitmap, to a PDF (the same code).

etc. etc.

So we decided to use Quartz 2D.
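
As an illustration of why Quartz 2D fits: a filled, anti-aliased polygon is a handful of CGContext calls, and the same code works whether the context belongs to a view, a bitmap, or a PDF (a minimal sketch; the colours and the context source are placeholders):

// Sketch: filling and stroking an anti-aliased polygon with Quartz 2D.
// The CGContextRef would come from the current view (UIGraphicsGetCurrentContext()),
// from a bitmap context, or from a PDF context - the drawing code is the same.
#include <CoreGraphics/CoreGraphics.h>

void DrawFilledPolygon(CGContextRef ctx, const CGPoint *points, int nPoints)
{
   CGContextSetAllowsAntialiasing(ctx, true);          // anti-aliasing comes for free
   CGContextSetRGBFillColor(ctx, 0.2, 0.4, 0.8, 1.);   // placeholder fill colour
   CGContextSetRGBStrokeColor(ctx, 0., 0., 0., 1.);    // placeholder outline colour
   CGContextSetLineWidth(ctx, 1.);

   CGContextBeginPath(ctx);
   CGContextMoveToPoint(ctx, points[0].x, points[0].y);
   for (int i = 1; i < nPoints; ++i)
      CGContextAddLineToPoint(ctx, points[i].x, points[i].y);
   CGContextClosePath(ctx);

   // Quartz handles concave and self-intersecting polygons by itself -
   // no tessellation step is needed, unlike with GLES.
   CGContextDrawPath(ctx, kCGPathFillStroke);
}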

The first version (middle of June) worked in the iOS simulator.

With CoreText (font metrics were improved; screenshots from a real device).

Problems (2D graphics): Quartz (so cool) has nothing to work with font metrics (required by all of ROOT's text primitives - labels, titles, pave stats, etc.). So text is now done with the CoreText framework (and attributed strings from the CoreFoundation framework).

There is no Symbol font as used by ROOT. Custom fonts do not work for me at the moment (to be investigated), and they are very slow the first time you try to use such a font (probably iOS is doing some caching).

Metrics are still not ideal. Optimize fill patterns (use bitmaps).
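
For reference, the typical CoreText pattern for measuring a text line is sketched below (the font name and size are placeholders, and this is only the general pattern, not the actual iOS ROOT code):

// Sketch: measuring a line of text with CoreText and a CoreFoundation
// attributed string - the metrics Quartz alone does not provide.
// "Helvetica" and the 12-point size are placeholders.
#include <CoreText/CoreText.h>
#include <CoreFoundation/CoreFoundation.h>

double LabelWidth(const char *label)
{
   CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica"), 12., nullptr);

   CFStringRef text = CFStringCreateWithCString(kCFAllocatorDefault, label,
                                                kCFStringEncodingUTF8);
   const void *keys[] = {kCTFontAttributeName};
   const void *values[] = {font};
   CFDictionaryRef attributes =
      CFDictionaryCreate(kCFAllocatorDefault, keys, values, 1,
                         &kCFTypeDictionaryKeyCallBacks,
                         &kCFTypeDictionaryValueCallBacks);
   CFAttributedStringRef attrText =
      CFAttributedStringCreate(kCFAllocatorDefault, text, attributes);

   CTLineRef line = CTLineCreateWithAttributedString(attrText);
   CGFloat ascent = 0., descent = 0., leading = 0.;
   // Width, ascent and descent are what text primitives (labels, titles,
   // pave stats) need for alignment.
   const double width = CTLineGetTypographicBounds(line, &ascent, &descent, &leading);

   CFRelease(line);
   CFRelease(attrText);
   CFRelease(attributes);
   CFRelease(text);
   CFRelease(font);
   return width;
}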

2. GUI on iOS - tools:

UIKit (Cocoa Touch) - object-oriented libraries, written in Objective-C, to be used from Objective-C or Objective-C++ (a mixture of C++ and Obj-C). Model-view-controller pattern everywhere, reference counting everywhere. _VERY_ strange method names :) and memory management from the stone age :)

Interface Builder to create the user interface (Apple's analogue of the Qt or ROOT GUI builders, _VERY_ unusual :) ).

A mixture of CoreGraphics, CoreFoundation, QuartzCore, etc. libraries (unfortunately, you cannot limit yourself to UIKit alone, you have to use this whole zoo of libraries).

An application on iOS consists of:

UIApplication

Run loop (done by UIKit)

Window

One or several controller objects

Views, managed by controllers (views are rectangular areas, roughly speaking similar to TPad, but they contain other views - widgets or sub-views - rather than TObjects).
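
A minimal sketch of this structure in Objective-C++ (the class names are illustrative, and the plain UIView stands in for a real pad view):

// Illustrative app skeleton: UIApplicationMain() starts the run loop, the
// app delegate creates the window, a controller manages the view.
// (Assumes ARC; with manual reference counting the alloc calls would need
// matching release/autorelease.)
#import <UIKit/UIKit.h>

@interface PadViewController : UIViewController
@end

@implementation PadViewController
- (void)loadView
{
   // In the real application this view's drawRect: would ask IOSPad/IOSPainter
   // to paint the ROOT primitives with Quartz.
   self.view = [[UIView alloc] initWithFrame:[UIScreen mainScreen].bounds];
   self.view.backgroundColor = [UIColor whiteColor];
}
@end

@interface AppDelegate : UIResponder <UIApplicationDelegate>
@property (strong, nonatomic) UIWindow *window;
@end

@implementation AppDelegate
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
   self.window = [[UIWindow alloc] initWithFrame:[UIScreen mainScreen].bounds];
   self.window.rootViewController = [[PadViewController alloc] init];
   [self.window makeKeyAndVisible];
   return YES;
}
@end

int main(int argc, char *argv[])
{
   @autoreleasepool {
      return UIApplicationMain(argc, argv, nil, NSStringFromClass([AppDelegate class]));
   }
}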

The first application: 'Tutorials'

"Tutorials":

Split-view based application.

Has several demos (something like demos.C from $ROOTSYS/tutorials).

Supports zooming and scrolling (pinch gesture, taps, pan - see the sketch below).

Supports picking (initial version).

Has an editor prototype - in a popover window.

3D objects can be rotated with a pan gesture.

Different animations.

Mainly it is a testbed for different features.
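
A sketch of how pinch and pan recognizers could be attached to a pad view (class and handler names are illustrative, not the actual 'Tutorials' code):

// Sketch: pinch (zoom) and pan (scroll in 2D, rotate in 3D) gestures on a view.
#import <UIKit/UIKit.h>

@interface PadView : UIView
@end

@implementation PadView
- (instancetype)initWithFrame:(CGRect)frame
{
   if (self = [super initWithFrame:frame]) {
      UIPinchGestureRecognizer *pinch =
         [[UIPinchGestureRecognizer alloc] initWithTarget:self
                                                   action:@selector(handlePinch:)];
      UIPanGestureRecognizer *pan =
         [[UIPanGestureRecognizer alloc] initWithTarget:self
                                                 action:@selector(handlePan:)];
      [self addGestureRecognizer:pinch];
      [self addGestureRecognizer:pan];
   }
   return self;
}

- (void)handlePinch:(UIPinchGestureRecognizer *)gesture
{
   // gesture.scale would drive the pad's zoom factor.
   [self setNeedsDisplay];
}

- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
   // [gesture translationInView:self] would drive scrolling (2D) or rotation (3D).
   [self setNeedsDisplay];
}
@end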

Problems: picking

ROOT's picking requires very high precision: be off by +- 2-4 pixels and you are not able to pick a histogram.

Touches are very different from clicks - the actual touch coordinates always differ (by +- 10-20 pixels) from what you expect.
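
One way to bridge this gap is to search in an enlarged radius around the touch point and pick the nearest primitive; a sketch of the idea (names and the radius value are illustrative, not IOSPad's actual interface):

// Sketch: instead of asking "which object is exactly under the touch",
// take the nearest primitive within a radius that matches the touch
// uncertainty. DistanceTo() plays the role of ROOT's DistancetoPrimitive.
#include <vector>

struct Primitive {
   virtual int DistanceTo(int px, int py) const = 0; // distance in pixels
   virtual ~Primitive() {}
};

Primitive *PickAtTouch(const std::vector<Primitive *> &primitives,
                       int touchX, int touchY,
                       int radius = 20) // ~ the 10-20 pixel touch uncertainty
{
   Primitive *best = nullptr;
   int bestDistance = radius;
   for (Primitive *p : primitives) {
      const int d = p->DistanceTo(touchX, touchY);
      if (d < bestDistance) {
         bestDistance = d;
         best = p;
      }
   }
   return best; // nullptr if nothing is within the touch radius
}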

Picking from IOSPad POV:

Extensions to Pad to visualize picking:

ROOT browser for iPad:

Based on a navigation controller and a view hierarchy:

top view to open/close/select ROOT files (using TFile and TWebFile; see the I/O sketch below);

view with the file contents (with thumbnails for the objects contained in a file);

detailed object view with IOSPad to draw the object and an editor to modify its properties (potentially with a fit panel, etc.).

This application is a simplified iOS mixture of TCanvas (with editor) and TBrowser (1% of TBrowser, of course :))
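
Reading the objects for the file-contents view relies on standard ROOT I/O; a rough sketch (names are illustrative, error handling is minimal, and the TFile is kept alive together with the objects because many ROOT objects are owned by the directory they were read from):

// Sketch: open a file (TFile::Open() returns a TWebFile for "http://..."
// URLs) and read one object per key - one thumbnail per object.
#include "TFile.h"
#include "TKey.h"
#include "TList.h"
#include "TObject.h"
#include <memory>
#include <vector>

struct BrowserFileContents {
   std::unique_ptr<TFile> file;     // keeps the objects' directory alive
   std::vector<TObject *> objects;  // one thumbnail per key
};

BrowserFileContents ReadFileContents(const char *url)
{
   BrowserFileContents contents;
   contents.file.reset(TFile::Open(url)); // local path or http URL
   if (!contents.file || contents.file->IsZombie()) {
      contents.file.reset();
      return contents; // the real controller reports the error to the user
   }

   TIter next(contents.file->GetListOfKeys());
   while (TKey *key = static_cast<TKey *>(next()))
      if (TObject *obj = key->ReadObj())
         contents.objects.push_back(obj);

   return contents;
}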

RootFileController and its view:

FileContents controller and its view:

ROOTObjectController and its view:

Editor (iOS version of gedit): first versions

Editor (current version)

Problems to solve:

UIScrollView is not good enough and works incorrectly for my browser. I have to develop a custom scroll-view.

Work with files: big files, error handling, a large number of objects, etc.

Custom widgets and controls required for editors.

Many editors to implement (so far I only have editors for TAttLine, TAttFill, and TPad).

Bugs :)

3. GLES (now only plans and estimation)

GLES is a sibling of OpenGL, but it is very different from OpenGL 1.5 (Matevz estimates our OpenGL usage to be roughly at the level of version 1.5): no fixed pipeline, everything has to be written with GLSL (shaders); no display lists and no immediate mode (glVertex calls), so we have to use vertex arrays or vertex buffer objects everywhere.
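
In practice the difference looks like this: vertex data goes into a buffer object and a small pair of GLSL ES shaders replaces the fixed pipeline (a GLES 2.0 sketch; shader compilation and linking into 'program' are omitted):

// Sketch: drawing a triangle strip the GLES 2.0 way - no glBegin()/glVertex(),
// no fixed-function matrices. The shaders below would be compiled and linked
// into 'program' with the usual glCreateShader/glCompileShader/glLinkProgram calls.
#include <OpenGLES/ES2/gl.h>

static const char *kVertexShader =
   "attribute vec3 position;\n"
   "uniform mat4 mvpMatrix;\n"   // the fixed pipeline's matrices become a uniform
   "void main() { gl_Position = mvpMatrix * vec4(position, 1.0); }\n";

static const char *kFragmentShader =
   "precision mediump float;\n"
   "uniform vec4 color;\n"
   "void main() { gl_FragColor = color; }\n";

void DrawStrip(GLuint program, const GLfloat *vertices, GLsizei nVertices)
{
   // Vertex buffer object instead of immediate mode.
   GLuint vbo = 0;
   glGenBuffers(1, &vbo);
   glBindBuffer(GL_ARRAY_BUFFER, vbo);
   glBufferData(GL_ARRAY_BUFFER, nVertices * 3 * sizeof(GLfloat), vertices, GL_STATIC_DRAW);

   glUseProgram(program);
   const GLuint posLoc = GLuint(glGetAttribLocation(program, "position"));
   glEnableVertexAttribArray(posLoc);
   glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

   glDrawArrays(GL_TRIANGLE_STRIP, 0, nVertices);

   glDisableVertexAttribArray(posLoc);
   glDeleteBuffers(1, &vbo);
}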

I will port the GL painters for histograms easily; it will not take more than 3 weeks.

Eve/GL-viewer part will be more complex.

4. VirtualX back-end for Mac OS X

At the moment I cannot estimate the time required. It will be Cocoa based (Objective-C++), since the alternative - Carbon - is obsolete and deprecated (and has no 64-bit version). It will reuse the iOS code (since that is also Cocoa and Objective-C plus Quartz and the CoreXXX libraries).

5. HCP

The main complexity is (as I understand it) in pattern recognition methods and computer vision: using the front camera (on an iPad, a laptop, or an external camera), the program has to track the head position in space and adjust the perspective of the 3D picture accordingly. I do not know at the moment how difficult this is.
