How to Build a Simple iOS Home Screen PWA Camera Using Vue, Tailwind, and WebRTC on CodePen

WebRTC finally comes to the iOS home screen

Lee Martin
Bits and Pieces


Both iOS and Android allow users to add websites to their home screen like bookmarks. If those websites are built as Progressive Web Apps (PWAs), they can also feel genuinely native. I was aware of this but never really pursued it because iOS home screen apps lacked support for WebRTC (access to the camera and microphone). Well, you can imagine my surprise when I found out that support was added in a recent iOS update. I quickly revisited and rebuilt my “CAMERA” app from 2018 and was very happy with the results.

In this blog, I’d like to share my approach for building the simplest PWA camera and hopefully it will serve as a base for your own experiments. Be sure to check out and fork the companion CodePen so you can build your own camera.

User Journey

The user journey is broken down into three parts. When the user initially lands on the application, they will see an intro. This helps establish the functionality of the camera and also allows you to prepare the user for granting access to their device’s camera.

Once the user grants access, we’ll redirect them directly to the camera itself. At the very least, this screen should provide a capture button fixed to the bottom center of the screen. Depending on your concept, you may want to try displaying the video feed itself in interesting ways. In the case of this simple example, I’ll be displaying it full bleed.

When the user taps the capture button, we’ll bring up the download screen which shows them a preview of the photo they just captured. From here the user may download the photo or return to the camera to take another.

Accessing Camera

The “Allow Access” button on our intro page is connected to the startCamera method in our Vue app. First, the method calls getUserMedia, which asks for permission to use the user’s device camera. Once permission is granted, we pass the stream to the srcObject of an awaiting <video> element. I am also using the stream variable to conditionally show or hide the intro and camera <section> elements.

this.stream = await navigator.mediaDevices.getUserMedia({
  audio: false,
  video: {
    facingMode: 'environment'
  }
})
this.$refs.video.srcObject = this.stream
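
For context, here is a minimal sketch of how the full startCamera method might wrap this call as an async Vue method. The try/catch for a denied permission prompt is my own assumption and not part of the original Pen.

async startCamera() {
  try {
    // Request the rear-facing camera only; no microphone needed
    this.stream = await navigator.mediaDevices.getUserMedia({
      audio: false,
      video: { facingMode: 'environment' }
    })
    // Hand the live stream to the awaiting <video> element
    this.$refs.video.srcObject = this.stream
  } catch (error) {
    // Permission was denied or no camera is available
    this.stream = null
  }
}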

We don’t want users capturing photos until the <video> element is ready. We can check for this by listening for the loadedmetadata event. I’ll use the ready boolean to enable or disable the capture button.

this.$refs.video.onloadedmetadata = () => {
  this.ready = true
}

When WebRTC is running from a home screen app, iOS shows a small red bar at the top of the screen to let users know their camera is in use. Users can tap this bar and stop the camera. We should listen for this so we can send users back to the intro page as soon as the camera stops, which we can do by handling the ended event.

this.$refs.video.onended = () => {
  this.ready = false
  this.stream = null
}
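
If the element-level ended event ever proves unreliable, an alternative (my own suggestion, not something from the original Pen) is to listen for ended on the video track itself:

// Watch the MediaStreamTrack directly; its "ended" event also fires
// when the user stops the camera from the iOS indicator.
this.stream.getVideoTracks()[0].onended = () => {
  this.ready = false
  this.stream = null
}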

Capturing Photo

Once the WebRTC stream is connected to the <video> element and the metadata is loaded, users can capture photos. As I mentioned in the user journey section, this is best handled by a well-placed button fixed to the bottom of the camera screen. Our app’s capture button is connected to the aptly named capturePhoto Vue method. Let’s break it down.
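
Since Tailwind is already part of the stack, the button can be positioned with a few utility classes. The exact classes and markup below are my own sketch rather than a copy of the Pen; the :disabled binding uses the ready boolean from earlier.

<!-- Capture button: fixed to the bottom center, disabled until the video is ready -->
<button
  class="fixed bottom-8 left-1/2 transform -translate-x-1/2 w-16 h-16 rounded-full bg-white"
  :disabled="!ready"
  @click="capturePhoto">
</button>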

First, we initialize a new temporary canvas with the exact same height and width as our video stream. The current video frame is then drawn to the awaiting canvas.

let video = this.$refs.video
let videoCanvas = document.createElement('canvas')
videoCanvas.height = video.videoHeight
videoCanvas.width = video.videoWidth
let videoContext = videoCanvas.getContext('2d')
videoContext.drawImage(video, 0, 0)

For this example app, I’ve decided to scale and crop the captured photo into a 1080x1080 square. To make things easy for myself, I’ve brought in the excellent blueimp JavaScript-Load-Image library’s scaling function.

this.photo = loadImage.scale(videoCanvas, {
  maxHeight: 1080,
  maxWidth: 1080,
  cover: true,
  crop: true,
  canvas: true
})

As soon as the photo variable is updated to this scaled canvas, the download screen will appear thanks to another conditional Vue attribute. We can then hide this screen to get back to the camera by simply setting photo to null again.
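
To make those conditionals concrete, here is a rough sketch of how the three <section> elements could be toggled in the template. The markup is my own approximation, not copied from the Pen.

<!-- Intro: shown until a camera stream exists -->
<section v-show="!stream">
  <button @click="startCamera">Allow Access</button>
</section>

<!-- Camera: shown while streaming and no capture is pending -->
<section v-show="stream && !photo">
  <video ref="video" autoplay muted playsinline></video>
  <!-- capture button lives here -->
</section>

<!-- Download: shown once a photo has been captured -->
<section v-show="photo">
  <button @click="downloadPhoto">Download</button>
  <button @click="photo = null">Retake</button>
</section>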

Downloading Photo

The download arrow button is connected to the downloadPhoto method on our Vue app. This method calls the toBlob method on our photo canvas to generate a JPEG blob of the captured photo. We then turn that blob into an object URL, attach it to a temporary download link, and programmatically click() the link to start the download.

this.photo.toBlob(blob => {
  let data = window.URL.createObjectURL(blob)
  let link = document.createElement('a')
  link.href = data
  link.download = "photo.jpg"
  link.click()
}, 'image/jpeg')
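
One small addition of my own, not part of the original Pen: once the download has been triggered, the object URL can be released so the blob doesn’t linger in memory.

// Inside the toBlob callback, after link.click():
// give the browser a moment to start the download, then free the object URL
setTimeout(() => window.URL.revokeObjectURL(data), 1000)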

Configuring PWA

For info on configuring the bare minimum meta tags required to get your PWA running as a standalone application, look no further than this excellent Appscope post.

I’ve added all of the suggested meta tags, icons, and launch screen images to the <head> section of my app by editing the Pen Settings. This can be tested by adding the Debug view to your iOS home screen.
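
As a rough guide, the <head> additions boil down to something like the following. Treat this as a hedged sketch of the commonly recommended tags rather than the exact markup from my Pen, and swap in your own icon and launch image paths.

<!-- Run as a standalone app when launched from the home screen -->
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black">
<meta name="apple-mobile-web-app-title" content="CAMERA">
<meta name="viewport" content="width=device-width, initial-scale=1, viewport-fit=cover">

<!-- Home screen icon and launch screen image (paths are placeholders) -->
<link rel="apple-touch-icon" href="icon-180.png">
<link rel="apple-touch-startup-image" href="launch-1125x2436.png">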

Next Steps

Now that you understand how to set up the base functionality of the camera, it is time to make it your own. You could evolve the capturePhoto method to filter the captured photo in some way using HTML5 canvas. You could extend the camera screen itself to include a button which switches between the front and back cameras by adjusting the facingMode property of the getUserMedia options, as sketched below. I decided to bring in TensorFlow for my “CAMERA” to identify objects in captured photos and then use HTML5 canvas to write the object name right on top. Let me know what you end up doing and happy hacking.
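
For the camera-switching idea, a method along these lines could work. The facing data property, the method name, and the overall flow are my own assumptions rather than code from the Pen, and depending on how your ended handler is wired you may need to guard against it firing when the old tracks stop.

async switchCamera() {
  // Flip the requested facing mode ("environment" = rear, "user" = front)
  this.facing = this.facing === 'environment' ? 'user' : 'environment'
  // Stop the current tracks, then ask for the other camera
  this.stream.getTracks().forEach(track => track.stop())
  this.stream = await navigator.mediaDevices.getUserMedia({
    audio: false,
    video: { facingMode: this.facing }
  })
  this.$refs.video.srcObject = this.stream
}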
