Downloading screen time data for analytics

When I started this project I had big goals. I began 2024 on a fitness kick and, being a data person, I wanted to track as much information as I could about myself across sleep, movement, diet and time spent on devices, and to be able to answer questions that cut across all of these areas.

To kick things off I decided to start with screen time data. I have an iPhone, and exporting screen time data turned out to be more difficult than I imagined, as you will see below.

How can I export screen time data from my phone?

After some online searching and exploring the iPhone Screen Time interface, it was clear that (as of March 2024) there is no button to export the data to a file on your phone, nor is there a way to download it via iCloud. From what I could see, the options were to:

  1. Use a third-party app to track your device usage. This would mean sharing your usage data with the app's developer or company.
  2. Jailbreak your iPhone. Apple stores information about device usage in a database on the phone called knowledgeC.db (this blog post by Sarah Edwards describes it in more detail), and you need to jailbreak the phone to access it. A sketch of what querying it might look like follows this list.
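
For completeness: if you could get at knowledgeC.db (on a Mac, a copy of it also lives under ~/Library/Application Support/Knowledge/), a sketch of pulling app usage out of it with Python's sqlite3 might look like the following. The schema details - the ZOBJECT table, the /app/usage stream and the Core Data epoch offset - come from Sarah Edwards' write-up, so treat them as indicative rather than exact.

import sqlite3

# Hypothetical sketch; schema details per Sarah Edwards' research.
# Timestamps count seconds from 2001-01-01 (the Core Data epoch).
CORE_DATA_EPOCH_OFFSET = 978307200
DB_PATH = "/Users/{username}/Library/Application Support/Knowledge/knowledgeC.db"  # fill in your username

QUERY = """
SELECT
    ZVALUESTRING AS bundle_id,
    datetime(ZSTARTDATE + ?, 'unixepoch') AS usage_start,
    ZENDDATE - ZSTARTDATE AS seconds_used
FROM ZOBJECT
WHERE ZSTREAMNAME = '/app/usage'
ORDER BY ZSTARTDATE
"""

with sqlite3.connect(DB_PATH) as conn:
    for bundle_id, usage_start, seconds_used in conn.execute(QUERY, (CORE_DATA_EPOCH_OFFSET,)):
        print(bundle_id, usage_start, round(seconds_used))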

I didn't really want to jailbreak my iPhone or share my device usage data with another company, so I decided to explore a third option: using Apple Shortcuts to automate capturing the screen time data, with some data engineering magic to convert it into a CSV file ready for analysis.

Shortcuts and Automations

Shortcuts is an Apple app that allows you to create scripts that automate tasks. These scripts can then be shared with other users via iCloud. Automations let you trigger a shortcut from an event, e.g. a time of day or a change in location.

Initially, I had hoped to automatically run a shortcut at 11:59pm every night, in the background, to grab the screen time data and store it in iCloud. Running at that time would maximise the screen time data I could collect for the day.

Although automations have an option to run automatically, my shortcut takes a screenshot, so it kept getting stuck asking me to unlock my phone. There didn't seem to be a way to run it completely in the background while the phone was locked, which meant I couldn't run the shortcut consistently at 11:59pm every night (I'm an early bird, so I would most likely be asleep by then). Instead I scheduled the shortcut for 10:30pm, set it to run with a prompt, and resigned myself to having to physically hit run.

Using Shortcuts to download screen time data

Next was writing the shortcut to capture the data. I experimented with a number of different strategies before settling on the approach below.

In the end I created a shortcut that navigates to the Screen Time settings page, takes a screenshot, and stores it in my iCloud Drive. I was unable to get the shortcut to scroll, so the captured data is limited to the total time for the day plus the top three apps I spent time on. I have an iPhone 11; on a larger screen you might capture a similar amount of data, while on a smaller screen you may only get the total time spent on the phone.
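
Reconstructed from the shortcut screenshot below, the script boils down to three actions (the exact settings URL scheme is an assumption and varies between iOS versions):

  1. Open the Screen Time settings page via a settings URL (something like prefs:root=SCREEN_TIME).
  2. Take a screenshot.
  3. Save the screenshot to a folder in iCloud Drive.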

A picture of the shortcut script, showing the steps to navigate to the Screen Time settings URL, take a screenshot and store it in an iCloud folder.

A picture of the screen time data collected by the shortcut: the daily total time, a graph of usage and the time spent on four applications.

The architecture to process the screenshot

The overall architecture looks like this: the shortcut saves a screenshot to my iCloud Drive; a Python data loader runs OCR over the screenshots and writes the results to a CSV file; and a Jupyter notebook picks the CSV up for analysis and visualisation.

Once the screenshot is stored in my personal iCloud folder, it is available on my Mac via a local file reference:
/Users/{username}/Library/Mobile Documents/iCloud~is~workflow~my~workflows/Documents
I installed Tesseract on my Mac and used the pytesseract package to access it from Python. Tesseract is an optical character recognition (OCR) engine that can extract printed or handwritten text from images.

import pytesseract
from pytesseract import Output

from screentime.transforms import image_transforms


def convert_image_to_text(image) -> list[str]:
    # Apply each image transform (greyscale, black and white, masking)
    # before handing the image to tesseract.
    for transform in image_transforms:
        image = transform(image)
    # image_to_data returns a dict of lists; the "text" key holds the
    # recognised word (or empty string) for each detected box.
    text = pytesseract.image_to_data(image, output_type=Output.DICT)["text"]
    return text

Full source code and setup instructions are available here.
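
Calling this on a single screenshot looks something like the following sketch (the filename is illustrative, and convert_image_to_text is the function above):

import cv2

image = cv2.imread("screentime-2024-03-01.png")
words = convert_image_to_text(image)
# tesseract returns one entry per detected box, many of them empty
print([word for word in words if word.strip()])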

Initially the Tesseract output was very noisy and there was a high percentage of missing values in the output CSV file. The screenshot contains graph imagery, which Tesseract was translating into spurious alphanumeric characters, and the graph is the same colour as the application time text below it. That text is small and light, which made it difficult for Tesseract to pick up. A combination of converting the image to greyscale and then to black and white, and masking out the graphs, improved the accuracy of the OCR output.
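
The greyscale and black-and-white conversion is the simpler of the transforms; a minimal sketch of it with OpenCV looks like this (the threshold value of 180 is illustrative, not the value tuned for the project):

import cv2

def black_and_white_transform(image):
    # Greyscale first, then threshold to pure black and white so the
    # small, light application-time text stands out for tesseract.
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, black_and_white = cv2.threshold(grey, 180, 255, cv2.THRESH_BINARY)
    return black_and_white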

Python code for the masking transform (the mask percentage constants live in the source; the values shown below are illustrative):


import cv2
import numpy as np

# Fractions of image height to keep; tuned to the iPhone 11 screenshot
# layout (the values shown here are illustrative).
TOP_MASK_PERCENTAGE_START = 0.05
TOP_MASK_PERCENTAGE_END = 0.25
BOTTOM_MASK_PERCENTAGE_START = 0.8


def masking_graphs_transform(image):
    # Mask the image: the middle content is largely graphs
    # and is the same colour as the application time text
    # that we want to capture.

    # create a mask that will include the top section - total screen time
    mask = np.zeros(image.shape[:2], np.uint8)
    image_len = image.shape[0]
    top_mask_start = int(image_len * TOP_MASK_PERCENTAGE_START)
    top_mask_end = int(image_len * TOP_MASK_PERCENTAGE_END)
    mask[top_mask_start:top_mask_end] = 255

    # extend the mask to include the bottom section
    # (last 20% or so of screen) - app screen time
    bottom_mask_start = int(image_len * BOTTOM_MASK_PERCENTAGE_START)
    mask[bottom_mask_start:image_len] = 255

    # black out everything outside the mask
    masked_image = cv2.bitwise_and(image, image, mask=mask)
    return masked_image


The data loader function runs as a single batch: it processes all of the screenshot files in the iCloud folder and outputs a CSV file, where the data can be picked up for further analysis.
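
A minimal sketch of that batch run, reusing convert_image_to_text from above - the parse_screen_time helper and the CSV columns are stand-ins for the real parsing logic:

import csv
from pathlib import Path

import cv2

ICLOUD_DIR = Path.home() / "Library/Mobile Documents/iCloud~is~workflow~my~workflows/Documents"

def run_data_loader(output_path="screentime.csv"):
    with open(output_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["file", "total_time", "top_app", "top_app_time"])
        # assuming the shortcut saves PNG screenshots
        for image_path in sorted(ICLOUD_DIR.glob("*.png")):
            words = convert_image_to_text(cv2.imread(str(image_path)))
            writer.writerow([image_path.name, *parse_screen_time(words)])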

Spinning up a Jupyter notebook to explore and visualise the data

The last step is to spin up a Jupyter notebook to visualise the data. The notebook imports the CSV file and does some cleaning of the month values before visualising. The resulting graph shows average daily screen time for the current month, the previous month and the year to date.
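
A sketch of the notebook code; the column names ("date", "total_minutes") are assumptions about the CSV schema:

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("screentime.csv", parse_dates=["date"])
df["month"] = df["date"].dt.to_period("M")  # clean month values for grouping

# average daily screen time per month, plus a single year-to-date figure
daily_avg_by_month = df.groupby("month")["total_minutes"].mean()
print("Year to date:", df["total_minutes"].mean())

daily_avg_by_month.plot(kind="bar", ylabel="Average daily screen time (minutes)")
plt.show()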

Conclusion

This turned out to be more challenging than I expected and took more time to develop than I had originally allowed for. That was due to the restrictions on what you can automate as a shortcut, trying to find a way to export more of the screen time data (I wasn't able to find a way to export the number of pickups), and the initial noisiness of the OCR output (before masking).

There were also numerous screenshot variations to allow for, e.g. when the screenshot was for yesterday rather than today, or when no screen time data had been collected. I took a TDD (test-driven development) approach to building the data_loader, so these variations are included as test cases.
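
A flavour of those tests - the fixture names, the load_fixture helper and the parser's return shape are all assumptions here:

import pytest

@pytest.mark.parametrize(
    "fixture, expected_day_offset",
    [
        ("today.png", 0),
        ("yesterday.png", -1),  # screenshot shows "Yesterday", not "Today"
    ],
)
def test_screenshot_day_offset(fixture, expected_day_offset):
    words = convert_image_to_text(load_fixture(fixture))
    assert parse_screen_time(words).day_offset == expected_day_offset

def test_handles_missing_screen_time_data():
    words = convert_image_to_text(load_fixture("no_data.png"))
    assert parse_screen_time(words).total_time is None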

In the end, the CSV that is produced generally has 100% completeness for the total time column and close to 100% for the first application listed. Completeness starts to drop for the applications after that.



Tags: personal analytics, OCR, data engineering, project
