This is the first of (hopefully) many blog entries describing my team's and my experiences in mobile development. In this part I'll point out some observations regarding app performance…
See this intro to get the context.
LL Perf1: The emulator is not your target!
We did a lot of work on our application before we ran real-world tests on target devices. This was a big mistake, because we discovered performance issues quite late. We should have known better from the many embedded projects we had done in the past – but the phones seemed so much faster than other embedded target devices. They are still devices with limited processing power and limited memory!
At the same time, the Windows Phone emulator runs lightning fast compared to real phones. One example: one of our cryptographic methods took 20 ms in the emulator, but 2 seconds on some phones and 200 ms on others.
Bottom line:
- Test on the target!
- As early as possible!
- On as many device types as possible!
LL Perf2: Don’t guess performance!
Many developers tend to judge performance on an emotional level – it "feels" good or bad. And that's fine! If performance feels good on your important target devices, you're safe, because your users will judge performance in exactly the same way.
If performance doesn't feel "right", you need to change your tactics dramatically!
To prepare well for this situation, I recommend:
- Measure and trace as much as you can (in v1 of your app)!
- Be aware of privacy when tracing!
- Identify frequently used areas of your app (in order to optimize for v2)
- Optimize these areas for optimal performance (in v2)
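As a minimal sketch of the "measure and trace" advice (in Python for brevity – the helper and operation names are invented for illustration, not from our actual app), tracing can start as simply as recording the duration of named operations, so you can later see which areas are used most and how long they take:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Hypothetical in-memory trace store; a real app would persist these
# events (anonymized -- remember the privacy caveat above).
_timings = defaultdict(list)

@contextmanager
def traced(operation):
    """Record how long a named operation takes."""
    start = time.perf_counter()
    try:
        yield
    finally:
        _timings[operation].append(time.perf_counter() - start)

def usage_report():
    """Call count and average duration per operation -- enough to spot
    the frequently used (and slow) areas worth optimizing in v2."""
    return {
        op: (len(samples), sum(samples) / len(samples))
        for op, samples in _timings.items()
    }

# Example: wrap the operations you care about.
with traced("decrypt_message"):
    time.sleep(0.01)  # stand-in for real work
with traced("decrypt_message"):
    time.sleep(0.01)

count, avg = usage_report()["decrypt_message"]
print(count)  # 2
```

The same numbers collected on real devices in v1 are what tell you where v2 optimization effort pays off.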
LL Perf3: Control loading and tell your users what currently happens!
We started our app development with a lazy loading tactic, because our XING app offers many different areas with loads of data (social network activities, private messages, birthday list, visitors, contact requests, contact list with profiles…). It is nearly impossible to load all this data within an acceptable time frame.
Our next mistake was to implement paging algorithms that worked "automatically" and weren't controllable by the user.
Both ideas were bad, and here's why:
App performance in our experience is judged in 3 main areas:
- App startup
- Scrolling performance
- Page transition performance
If your app is considered bad in one of these areas, it becomes extremely hard to earn 5 stars. It is really hard to optimize all of them, and feature-rich apps will always struggle with fast startup, but you need to keep startup time within acceptable bounds.
That means you need to control exactly when which data is loaded. "Unmanaged" lazy loading isn't an option. You must trigger data-fetching tasks at a reasonable moment and put the user in control (show progress, enable cancellation, etc.). Users need detailed feedback about what the app is trying to do. They will accept slow loading when connectivity is bad – but your app needs to explain at any time what it is trying to accomplish…
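The idea of user-controlled loading can be sketched like this (Python; the class, the page-based API, and the callback shape are assumptions made up for this example): loading is triggered explicitly, reports progress to the UI, and can be cancelled at any time.

```python
import threading

class PagedLoader:
    """Fetches data page by page only when explicitly asked to,
    reporting progress and honoring cancellation -- instead of an
    'unmanaged' automatic paging algorithm."""

    def __init__(self, fetch_page, on_progress):
        self._fetch_page = fetch_page    # e.g. one network call per page
        self._on_progress = on_progress  # UI callback: "loading page 2 of 5..."
        self._cancelled = threading.Event()
        self.items = []

    def cancel(self):
        """Wired to a cancel button -- the user stays in control."""
        self._cancelled.set()

    def load(self, pages):
        """Triggered by an explicit user action, never automatically."""
        for page in range(pages):
            if self._cancelled.is_set():
                self._on_progress("cancelled")
                return self.items
            self._on_progress(f"loading page {page + 1} of {pages}...")
            self.items.extend(self._fetch_page(page))
        self._on_progress("done")
        return self.items

# Example: a fake fetcher standing in for the real network layer.
messages = []
loader = PagedLoader(
    fetch_page=lambda page: [f"item-{page}-{i}" for i in range(3)],
    on_progress=messages.append,
)
loader.load(pages=2)
print(messages[-1])       # done
print(len(loader.items))  # 6
```

The important design choice is that both the trigger (`load`) and the abort (`cancel`) sit in the user's hands, while `on_progress` keeps them informed the whole time.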
For us, these aspects resulted in several architectural decisions:
- Offline mode is the default!
- Cache most of the loaded data to avoid unnecessary re-fetching
- Collect a lot of metadata to enable intelligent caching strategies
- Fetch important data up front (eager loading of basic information)
- Always show the user what the app is trying to do (especially important when connectivity problems occur)
- Implement self-healing mechanisms for when something goes wrong with caching (and we all know this can and will happen…)
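A compressed sketch of the caching side of these decisions (Python; the structure, TTL, and names are illustrative assumptions, not our actual implementation): each entry carries metadata, reads are offline-first, and a corrupted entry "self-heals" by falling back to a fresh fetch instead of crashing.

```python
import time

class MetaCache:
    """Offline-first cache: each entry stores metadata (fetch time),
    reads prefer the cache, and corrupted entries self-heal by
    re-fetching instead of taking the app down."""

    def __init__(self, fetch, ttl_seconds=300):
        self._fetch = fetch   # fallback data source (e.g. the network)
        self._ttl = ttl_seconds
        self._store = {}      # key -> {"value": ..., "fetched_at": ...}

    def get(self, key, online=True):
        entry = self._store.get(key)
        try:
            if entry is not None:
                fresh = time.time() - entry["fetched_at"] < self._ttl
                # Offline mode is the default: serve cached data even if
                # stale when we cannot (or need not) re-fetch.
                if fresh or not online:
                    return entry["value"]
        except (TypeError, KeyError):
            # Self-healing: a malformed entry is dropped, not fatal.
            self._store.pop(key, None)
        value = self._fetch(key)
        self._store[key] = {"value": value, "fetched_at": time.time()}
        return value

calls = []
cache = MetaCache(fetch=lambda k: calls.append(k) or f"profile:{k}")
print(cache.get("alice"))          # profile:alice  (fetched)
print(cache.get("alice"))          # profile:alice  (served from cache)
cache._store["alice"] = "corrupt"  # simulate a broken cache entry
print(cache.get("alice"))          # profile:alice  (self-healed via re-fetch)
print(len(calls))                  # 2
```

The metadata (`fetched_at` here; a real app would record more, such as usage counts) is what enables smarter eviction and eager re-fetching later on.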
Also remember: your mobile app is just one client of the server system, and typically there are others. You need to consider "background activities": the server state might change without the app noticing. The data your users see might change constantly, so "continuous client" ideas are worth following – see kellabyte's blog for inspiration.