
The use of 360° video and images in recording historic sites

Always keen to try new technologies, I have recently ventured into the world of 360° cameras. I have experimented in the past with the iPhone and Google Streetview app to capture 360° images, but in March 2022 I purchased my first dedicated camera. I took this camera on a road trip to Denmark, with mixed results: I was still learning how to set up the camera for the best results, and didn't really know how it would fit into my workflow. More recently I have supplemented it with the DJI Mini 3 Pro drone, which is also capable of capturing 360° images. Nearly six months into my 360° journey, I think it is a good time to share my experiences so far. This isn't a review or an advert; I just think it's important to rationalise my understanding of the technology and hopefully put forward some points that may help you figure out if it could be of use in your own work or projects.

This article will look at the various methods of capturing 360° images and will compare the results of each method at my disposal. Capturing the images is only one part of the process, so I will also go through the methods I use to edit both images and video, and where I output the finished result.

BLUF (Bottom Line, Up Front)

Why should you read any more of this article until you know if it’s for you? I get that. So before we begin, here is what I think 360° imagery can give you, and what it can’t (IMHO).

Pros
  • Immersive user experience will only develop as Virtual Reality (VR) hardware becomes more common
  • Ability to post-analyse a scene to spot details otherwise missed
  • Minimal training is required to achieve good results. You don't have to think about the camera angle until post-processing
  • Images and video can contribute directly to Google Streetview with a GNSS-enabled camera (or a time-synchronous GPX file)
  • New capability can create virtual maps from a 360 video (utilises Visual SLAM)¹

Cons
  • Lower-resolution images and video compared to mirrorless or DSLR cameras
  • Increased workflow if 360° imagery is in addition to ordinary photography, rather than replacing it
  • Additional equipment is often required to capture the images
  • Specialist (albeit mainstream and often free) software is required to view the results
  • Due to the wide angle of the lenses, it is difficult to use the camera for picking up fine details or wide landscapes. Selecting the right environment or subject is key.

But WHY??

Take this real-world example (naturally, it's very niche). When documenting a burnt-out Second World War Nazi coastal artillery bunker on the west coast of Denmark, you come across a room that has lost almost all of its discernible features. There are some lumps and bumps left in the concrete walls and ceiling, but you aren't confident as to what their purpose could have been. To make matters worse, the atmosphere is dank and the image isn't as clear as you would have hoped. Image below:

Fortunately, you took a reference image in a preserved and restored museum that has a room just like this. No need to worry about matching the camera angle, because you took a 360° photograph! You can use the image below to interpret the shell of a room above.

Has this piqued your interest? Then read on.

What are 360° photographs?

A 360° photograph is a digital image just like any other, but the secret lies in the field of view captured. And as the name suggests, this is 360°. But of course, current mainstream image sensor technology can't directly capture images onto a curved sensor.² So instead, we have to rely on software manipulation of a flat image, stretching and squeezing the top and bottom of the image to the proportions we want.

Imagine you had to cut out a printed image in order to wrap it around a ball: which portions of your image would become distorted? This is demonstrated using the example below.
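For anyone who likes to see the geometry written down, here is a minimal Python sketch (my own illustration, not anything built into a camera) of how each pixel in a 2:1 equirectangular image corresponds to a direction on the sphere. The stretching near the top and bottom rows of the image is exactly the distortion described above.

import math

def pixel_to_direction(x, y, width, height):
    """Map a pixel in a 2:1 equirectangular image to a direction on the sphere.

    x runs left-to-right across the full 360° of longitude,
    y runs top-to-bottom across 180° of latitude (pole to pole).
    """
    lon = (x / width) * 360.0 - 180.0   # -180° .. +180°
    lat = 90.0 - (y / height) * 180.0   # +90° (straight up) .. -90° (straight down)
    return lon, lat

# Example: the centre of an 11008 x 5504 frame looks straight ahead at the horizon (0°, 0°)
print(pixel_to_direction(5504, 2752, 11008, 5504))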

It would be impossible (in 2022 at least) to capture a complete single-frame 360° image sphere with a single sensor and lens, so the image sphere we look at is actually the result of multiple images stitched together to give the illusion of a single photograph. Currently, this sort of imaging is used primarily in construction and property sales; it hasn't really crossed over into the mainstream, yet! There are a few methods employed by the industry to record immersive images:

  1. One camera, two sensors, each with a 180° lens facing in opposite directions (front and back), such as the new style of cameras including the GoPro Fusion, Insta360 X3 and Ricoh Theta range.
  2. One camera, one sensor and lens, and multiple camera positions. This can be done with most smartphones and specialist apps (such as Google Streetview) and involves tiling images in a bubble. The image to the right demonstrates how the Streetview app captures these.
  3. Multiple cameras in a fixed array, each with its own lens and sensor, one camera position. A much less cost-effective way of capturing 360 images, but arguably it provides the highest quality results. As well as requiring multiple cameras, memory cards, batteries and a bespoke geometric frame, these images require much greater post-processing.

Digital photography is all about resolution, right? MORE MEGAPIXELS! In reality, there are many more factors that go into the quality of a digital photograph, and resolution is just one of them. I won't go into the rest of them in this article, but I think it is important to explore the subject of image resolution. Resolution is measured in pixels. Think of a pixel as a single dot, and that dot represents a single colour.

This incredibly close detail from the tank image below shows the series of single pixels that make up the much larger image. Most full-resolution images now are in the 20-megapixel range, meaning they have 20 million of these square pixels making up the final image. In video terms, 4K refers to the width of the video frame, which is 3,840 pixels (k = thousand), nominally twice the width of Full HD at 1,920 pixels.

Resolution is a good measure of comparison for digital cameras and images. In fact, it's really the only quantifiable measure we have, and resolution has become something of a proxy for quality. The biggest issue with using resolution as a comparative measure between standard images and 360° images is that it only applies to the flat image.

A comparison image to scale showing a full 60-megapixel image from the Ricoh Theta X with an inset 24-megapixel image from the Sony a6600. The resolution is calculated by multiplying the horizontal and vertical dimensions (in pixels) together. These can also be represented as 11k and 6k images, although this nomenclature is generally reserved for videos.

It is probably a good time to introduce aspect ratio into the conversation. This is the ratio of width to height of an image. Traditional wet-film photographs were generally printed at 4×6 inches or 5×7 inches, giving aspect ratios of 3:2 and 1.4:1 respectively. The most common aspect ratio is 3:2, such as in the a6600 photograph above (6,000 x 4,000 pixels, i.e. 6:4 = 3:2). Images captured for use in 360° photographs would look unconventional as standard photographs. They have a ratio of 2:1 and are known as equirectangular: their width is twice their height (as a flat image at least). The example above reflects this, being 11,008 pixels wide and 5,504 pixels high (11,008 ÷ 5,504 = 2, i.e. 2:1).
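If you want to check these figures yourself, the arithmetic is simple enough to put in a few lines of Python (a throwaway sketch, nothing more): multiply the two dimensions for the megapixel count, and divide both sides by their greatest common divisor for the aspect ratio.

from math import gcd

def describe(width, height):
    """Report the megapixel count and simplified aspect ratio of an image."""
    megapixels = width * height / 1_000_000
    d = gcd(width, height)
    return f"{width} x {height} = {megapixels:.1f} MP, aspect ratio {width // d}:{height // d}"

print(describe(6000, 4000))    # Sony a6600 still: 24.0 MP, 3:2
print(describe(11008, 5504))   # Ricoh Theta X equirectangular: 60.6 MP, 2:1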

Below is a flat image extracted from the equirectangular image. This image is 2048 x 1544 pixels but is very distorted: being captured through a very wide-angle lens has resulted in a fisheye effect. We can correct this, and often the software we view the images in does this for us.

Having corrected the fisheye image above, we end up with something similar to the proportions of a 3:2 image below.

Now compare the detail in this image with the image above. They cover an almost identical portion of the scene, so their fields of view are similar, but look closely and you can spot the stretching and squeezing.

Despite the distortion in the image, it is a pretty good representation. The centre appears sharp, but the edges definitely appear softer as the pixels are stretched.
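For the curious, the correction above is essentially a reprojection: a virtual pinhole camera is pointed into the image sphere and each output pixel samples the equirectangular panorama along its own ray. The numpy/Pillow sketch below is my own rough illustration of that idea (the filename is hypothetical, and proper viewers add interpolation and lens profiles); it is not the code any particular viewer uses.

import numpy as np
from PIL import Image

def equirect_to_perspective(equi, fov_deg=90, yaw_deg=0, pitch_deg=0, out_w=2048, out_h=1536):
    """Render a flat, perspective-corrected view from an equirectangular panorama."""
    H, W = equi.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels

    # One ray per output pixel, for a pinhole camera looking along +z
    xs = np.arange(out_w) - out_w / 2
    ys = np.arange(out_h) - out_h / 2
    x, y = np.meshgrid(xs, ys)
    rays = np.stack([x, -y, np.full_like(x, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Point the virtual camera: pitch (look up/down), then yaw (turn left/right)
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    rays = rays @ (Ry @ Rx).T

    # Convert each ray to longitude/latitude, then to a pixel in the panorama
    lon = np.arctan2(rays[..., 0], rays[..., 2])          # -pi .. +pi
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))     # -pi/2 .. +pi/2
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((0.5 - lat / np.pi) * (H - 1)).astype(int)
    return equi[v, u]

# Hypothetical filenames, for illustration only
pano = np.array(Image.open("theta_x_equirectangular.jpg"))
view = equirect_to_perspective(pano, fov_deg=100, yaw_deg=30)
Image.fromarray(view).save("corrected_view.jpg")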

Capturing a 360° image is really only the first stage; selecting how it is viewed can (I have found) make the difference between a useful shot and one that has lost some of its function.

Viewing equirectangular images

As I have hopefully demonstrated through my workings above, viewing an image that has been captured for 360° display isn't as straightforward as it may initially appear. There are a few mainstream social media providers that can natively host equirectangular images and display them as immersive 360° scenes. Facebook is probably the most well-known, with YouTube hosting 360° videos. Slightly more specialist services are Flickr and Vimeo, with Google Streetview being a good site if you want to publicly host georeferenced 360° images of locations around the planet.

A number of mainstream websites natively host 360° media

So what does an equirectangular image look like when it is published online? Without the correct processing by the hosting provider, it will look like a flat image, as below. This sort of image can be hosted anywhere, saved to your computer or phone.

Images captured in 360° are in the equirectangular format, with an aspect ratio of 2:1

You may also be familiar with the iconic tiny planet view that can be generated by a 360° camera. Not very practical when it comes to reviewing the image, but certainly eye-catching!

Below is the same file as the equirectangular and tiny-planet images, but this time the flat image has been manipulated into a virtual sphere around the viewer. You should be able to tap and scroll the photograph to view the scene from every perspective. As this is a still image you can’t move around the scene, but you move the scene around you.

And finally, we get to the best part of 360° images: a site that displays the interactive image with enough power to process the equirectangular image nicely. I have a number of 360° photographs on Flickr, but I have recently signed up for a personal account with Momento360, as their processing of the images is the best I have found (as of late October 2022), and the ability to annotate images with hotspots and descriptions will really add value to this resource. The image below is exactly the same file as was used in the two images above.

Comparing various forms of 360° media capture and hosting platforms

I’ve found that the best results for viewing and sharing 360° images and video come from the right combination of devices and platforms. In this section I will show the various cameras or capture techniques combined with the different hosting platforms that I use.

There will be some limitations such as the iPhone and DJI Mini 3 Pro not being capable of recording 360 videos, and images recorded with the Google Streetview app not being available to upload to other platforms.

iPhone 11 Pro + Google Streetview

I took this image in the Google Streetview app for iOS and then made it public, having assigned a location to it. Images are captured on the device and, by default, uploaded to Google Streetview. This is an important point to note, because when you record an image in Google Streetview it also saves a local copy on your phone. If, like me, you delete that local copy hoping to download it from Streetview at a later date – you can't. Not at full resolution, anyway.

This method of capturing a 360° image uses the full resolution of the phone's camera and the processing power of Google's servers to create the image. And for most people with a smartphone, this is achievable. The app is free.

Embedding images from Google Streetview into websites is super simple with some basic web development knowledge; the app will generate the embed code, and you paste that code (as HTML) into your site.

Ricoh Theta X + Momento360

The image below, taken inside the Hanstholm Bunker Museum, is hosted on Momento360. The process of recording an image in 360° using the Ricoh Theta is super simple: turn the camera on and push a button. However, fully setting up the camera, understanding the settings and utilising the features to maximum effect takes much more learning. Simply using the Auto function will work, but I think you'll be disappointed with the results.

This was a reasonably difficult image for the small sensors of the camera to capture. With a fixed aperture of f/2.4 and a focal length of 1.37mm, I could set the ISO at 100, and the calculated exposure time was 1/2 second. I had to use the self-timer to ensure a steady shot on the monopod.
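As a rough sanity check on those settings (my own back-of-the-envelope arithmetic, not anything the camera reports), the standard exposure value formula puts f/2.4 at 1/2 second and ISO 100 at around EV 3.5, which is firmly "dim interior" territory and explains why a steady support was essential.

import math

def exposure_value(f_number, shutter_s, iso=100):
    """ISO-adjusted exposure value: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# f/2.4 at 1/2 s and ISO 100 works out to roughly EV 3.5 (a dimly lit interior)
print(round(exposure_value(2.4, 0.5), 1))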

As for the website Momento360 to host your images, it is my latest and preferred method of doing this. I have opted for the middle tier of membership, with a small monthly fee, but there is 100GB of storage and 8k resolution for viewing images.

Momento360 has an approved 3rd party plugin for WordPress (the platform I use to build this website) which allows me to use a shortcode [momento360 url= ] to embed images on the site. Super easy, and very efficient.

Image size: 10.5 MB (jpg)
Image dimensions: 11008 x 5504 pixels
X/Y Resolution: 300 dpi
Skill Level: Semi-professional

Ricoh Theta X + Flickr

The image below, taken at the Bangsbo Coastal Museum, is hosted on Flickr. And I'll be honest, it's quite a disappointing combination. The camera was set at ISO 100 and had a shutter speed of 1/2000 seconds. I also don't rate Flickr for rendering 360° photographs, certainly not compared to Streetview or Momento360.

Uploading to Flickr is super simple, and sharing the 360° photographs is also really easy. However, Flickr will only make 360° images available to explore in a desktop or laptop browser; they can't yet be explored on mobile devices.

Hack the system: after exploring the code Flickr generated to embed images on this site, I noticed that the dimensions of the embed were almost half those of the original image. By altering the dimensions in the embed code, I was able to make the full-resolution image available to view, so it's not quite as grainy as the default setting! In the copied embed code, this looks like:

width="11008" height="5504"

Image size: 10.7 MB (jpg)
Image dimensions: 11008 x 5504 pixels
X/Y Resolution: 300 dpi
Skill Level: Very easy (with a caveat)

Coastal Museum Bangsbo Fort - M270 Emplacement (Tour Building #33)

DJI Mini 3 Pro + Momento360

Capturing a 360° image is one of the automatic image modes on the Mini 3 Pro, so the drone does all the work for you. Much like taking a 360° image with a phone, the drone stitches a series of images together and outputs a single equirectangular file. Despite the image file being

Wind warning! The Mini 3 Pro captures these images by following a preprogrammed series of movements: the drone spins on the spot a full 360°, and the lens looks up and down to capture enough images to be seamlessly stitched together. What the drone cannot do is adjust for any movement caused by excessive wind. The broken horizon in the example below is the result of the drone capturing a coastal image in moderate winds; they were just enough to knock the drone out of position between a few of the shots, resulting in poor stitching of the images.

Capturing 360° images on the Mini 3 Pro requires calm conditions!

Image size: 16.9 MB (jpg)
Image dimensions: 8192 x 4096 pixels
X/Y Resolution: 72 dpi
Skill Level: Very easy

Ricoh Theta X (Video) + Google Streetview

A recent firmware upgrade to the Ricoh Theta X now facilitates capturing media for direct import into Google Streetview. Having a built-in GNSS receiver, the camera is able to record at a framerate of 2 fps (frames per second) and a resolution of 8K while location-stamping each frame. Straight from the camera, these files can be uploaded to Streetview using the Streetview Studio website.
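As noted in the pros and cons table, a camera without GNSS can instead be paired with a time-synchronous GPX file. In case the format is unfamiliar, the short Python sketch below (with made-up coordinates and timestamps) writes the kind of minimal GPX 1.1 track a phone GPS logger would produce alongside the video.

from datetime import datetime, timezone

# Hypothetical fixes: (timestamp, latitude, longitude) recorded alongside the video
fixes = [
    (datetime(2022, 6, 1, 10, 0, 0, tzinfo=timezone.utc), 57.1167, 8.6167),
    (datetime(2022, 6, 1, 10, 0, 30, tzinfo=timezone.utc), 57.1169, 8.6171),
]

# Build a minimal GPX 1.1 track: one <trkpt> per fix, each with a <time> element
points = "\n".join(
    f'      <trkpt lat="{lat}" lon="{lon}"><time>{t.strftime("%Y-%m-%dT%H:%M:%SZ")}</time></trkpt>'
    for t, lat, lon in fixes
)
gpx = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<gpx version="1.1" creator="360-walkthrough" xmlns="http://www.topografix.com/GPX/1/1">\n'
    "  <trk>\n    <trkseg>\n"
    f"{points}\n"
    "    </trkseg>\n  </trk>\n</gpx>\n"
)
with open("walkthrough.gpx", "w") as f:
    f.write(gpx)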

I can’t say the quality is mind-blowing, but it is certainly adequate for recording and exploring a site that would otherwise not be recorded or available on Google Streetview. Perhaps as processing power and algorithms improve these images will too.

Ricoh Theta X (Video) + YouTube

Finally, a sample of 360° video. And I love this! I can record a site as I walk around, paying little attention to the orientation of the camera or my framing, and still end up with a full and immersive video record of the site. There are a few notes specific to the Ricoh Theta X if you are going to do this:

  • Ensure top-bottom correction is ON. This allows the camera to stabilise the image and maintain a level horizon.
  • Set the ISO to a reasonable level to prevent the video from being too grainy. I use a maximum of ISO 800 for most videos.
  • Turn OFF WiFi and Bluetooth as these cause the camera to overheat and reduce battery life when recording. Also, set the screen to turn off once recording begins to maximise battery life.
  • Consider an external audio recorder as the camera can only record mono audio.

When watching 360° videos on YouTube, make sure you select the highest resolution possible. The player will default to Auto which is typically lower than the maximum.

vSLAM

Visual simultaneous localization and mapping (vSLAM), refers to the process of calculating the position and orientation of a camera, with respect to its surroundings, while simultaneously mapping the environment. The process uses only visual inputs from the camera.

© 1994-2022 The MathWorks, Inc.

This section on vSLAM was a bit of a last-minute addition; in fact, I have added it since publishing the article. And it really demonstrates the power of 360° media for the recording of historic sites. In the latest desktop application for the Ricoh Theta (available to download from the Ricoh website; I believe it can also be used with any equirectangular video) they have added a beta version of a feature called Route Conversion. Using vSLAM algorithms, the software analyses the video file to generate a map of the route taken in the video. This feature blew my mind! I'll share a couple of photographs below.

After processing the video through the Route Conversion process, locational data is appended to the video file. As the video is played (within the Ricoh application), the camera icon follows the path plotted and indicates the camera’s orientation.
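To give a feel for what is happening under the bonnet (I don't know what Ricoh's Route Conversion does internally, so treat this purely as an illustration), the core step of visual SLAM is estimating how the camera moved between frames from the image content alone. The OpenCV sketch below does exactly that for two hypothetical perspective frames with an assumed camera matrix; a full pipeline adds mapping, loop closure and proper handling of the equirectangular projection.

import cv2
import numpy as np

# Two consecutive perspective frames pulled from the walk-through video
# (hypothetical filenames); K is an assumed pinhole camera matrix.
frame1 = cv2.imread("frame_0001.jpg", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame_0002.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])

# 1. Detect and describe features in both frames
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

# 2. Match features between the frames
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3. Estimate how the camera rotated and translated between the two frames
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print("Rotation:\n", R)
print("Translation direction:\n", t)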

While there is no easy method to export this locational data from the video, a screenshot and snip of the plotted path let me overlay it onto an aerial image of the site. Almost perfectly (allowing for some scaling error in my overlay), the plan lies on top of the battery and follows my route. All this was done through analysis of the image frames in the video; it used no GNSS (GPS) data or inertial measurements from the camera. Incredible!
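If you want to try the same overlay trick, something as simple as the Pillow sketch below (hypothetical filenames, and you will still need to scale and align by eye) produces a semi-transparent composite to compare against the aerial photograph.

from PIL import Image

# Hypothetical filenames: a screenshot of the Route Conversion plot and an
# aerial image of the battery, roughly cropped to cover the same area
aerial = Image.open("aerial_view.png").convert("RGBA")
route = Image.open("route_conversion_screenshot.png").convert("RGBA")

# Scale the route plot to match the aerial image, then blend at 50% opacity
route = route.resize(aerial.size)
Image.blend(aerial, route, alpha=0.5).save("route_over_aerial.png")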

vSLAM is currently used as a method of localisation in robotics, self-driving cars and Augmented Reality applications. At this early stage in the consumer space I am not sure how vSLAM can benefit us, other than giving some localisation to the videos we take, perhaps of complex sites or underground sites with limited options for gathering location or position data. I am watching this space closely so that I can use this technology to the fullest.

  • 1
    Simultaneous localization and mapping, not covered in great detail in this article
  • 2
    Manufacturers like Sony are developing curved sensors which will revolutionise the way we capture and view images. Read the Digital Camera World article