#006 - Is Leica BLK2GO’s data worth the money?

Lukasz Wiszniewski
16 min read · Aug 15, 2021


This article is a continuation of the previous one, with a strong focus on the quality and usability of BLK2GO’s data.

Pict. Unfiltered BLK2GO data with waypoints generated every 2 meters.

After many hours and a few kilometers walked with the BLK2GO scanner (check PART1 of this review), it was time to take a look at the point clouds. In this article I will focus on:

  • uploading data
  • data quality
  • data usability

While I was impressed by the portability of Leica’s device, the data itself leaves me with mixed feelings.

Pict. A single run with the scanner.

Transferring data to the Leica Cyclone Register 360

First things first. Before you can see the data on your PC’s screen, you have to transfer it from the scanning device. You have only one option here: a USB-C cable. In my opinion, it is the fastest and most stable way of transferring data, and I am glad it is used in the BLK2GO. The USB socket is placed under the battery, so you turn off the scanner, pull out the battery, and connect the scanner to your computer with a cord. Cyclone Register 360 will recognize the connected device and display the projects available in its internal storage.

Now you have three options:

  • import data to the Register360 project
  • import data to the Register360 project and save raw data to B2G format
  • export directly to an e57 file without importing the data into the project
Pict. Export E57

Users can also create waypoints at a preferred interval, e.g. every 5 meters. These waypoints are nothing more than spots along the scanning walk from which panoramic images will be generated. The extracted pano images are used for TruView. This option is available only with a Publisher Pro license.

Pict. On the very bottom of the import panel in Cyclone Register360 is an option to create WayPoints
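To make the interval idea concrete, here is a minimal sketch (my own illustrative Python, not Cyclone Register 360’s actual algorithm) of how waypoints spaced every `interval` meters could be resampled from a walk trajectory:

```python
import math

def make_waypoints(trajectory, interval):
    """Resample a 2-D walk trajectory into waypoints spaced `interval`
    metres apart along the walked path. Illustrative helper only; it is
    not how Cyclone Register 360 actually places its WayPoints."""
    waypoints = [trajectory[0]]
    walked = 0.0  # distance covered since the last waypoint
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        # drop a waypoint every time the accumulated distance hits `interval`
        while walked + seg >= interval:
            t = (interval - walked) / seg
            x0, y0 = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            seg -= interval - walked
            walked = 0.0
            waypoints.append((x0, y0))
        walked += seg
    return waypoints

# a straight 10 m walk with a 2 m interval -> waypoints at x = 0, 2, 4, 6, 8, 10
print(make_waypoints([(0.0, 0.0), (10.0, 0.0)], 2.0))
```

The same logic extends to 3-D trajectories; the key point is that waypoints are spaced by distance walked, not by time.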

A few months ago, Leica’s developers implemented a new feature called Filter. It is meant to remove the spaghetti effect from the point cloud: in comparison with TLS data, the point cloud doesn’t have nicely structured rows and columns of points. Unfortunately, the filtering not only enhances the readability of the point cloud but also removes some percentage of the points, and thereby removes smaller details of the scanned space.

Pict. Data filtering during import (from the left: no filter, high filter, low filter)
Pict. Amount of imported points with used filtering (from the left: low and high filtering)

You should notice that the same raw data imported into Cyclone Register360 and directly exported to e57 gives a different number of points in the point cloud. Another odd thing is that the low and high filters applied to the same walk produce trajectories of different lengths. Why does this happen?

First look

I will briefly describe the data quality from my scanning test in April 2020, and then take a look at the data from March and May 2021.

Pict. The same office area was scanned with BLK2GO (no filters applied) and RTC360.

Generally speaking, BLK2GO data from 2020 (the first release) was useless in most cases. As you can see in the picture above, visibility and recognizability are poor, which blocks its use in modeling or in analysis/clash detection. I juxtaposed the BLK2GO point cloud with RTC360 data to show the huge difference and to let you answer the question of whether Leica’s MLS data would be easy to use.

Luckily, a year later, in 2021, the BLK2GO began to deliver much nicer point clouds. They are more consistent and have much better visibility. The availability of the filtering options in Register360 also increased the value of the data in most cases. I say in most cases because filtering drastically reduces the number of scanned details.

BLK2GO competitors

MLS systems like the Gexcel Heron MS-2 Color, Leica Pegasus Backpack, or GeoSLAM deliver very noisy point clouds and often fragmented/layered data. By layering, I mean that surfaces which are flat in reality were represented by several layers of points, so a slice through a wall’s face looked like many lines instead of a single line. This happened because of mismatches between the consecutive point clouds created by single rotations of the scanner’s head.

Pict. Orthographic view of the GeoSLAM data. Notice that the data representing the ground in front of the building, visible in the center of the picture, should appear as a single line, but you can see at least 4 extra lines below the ground.

The BLK2GO point cloud impressed me from the very beginning. Even in 2020, a slice through a wall never appeared as several lines. There was no layering whatsoever. Of course, there was noticeable noise, but not so heavy in my opinion. A horizontal cross-section through a vertical surface showed spaced-out points with a maximum distance of 18 mm measured perpendicular to the sliced surface. That is much better than the Leica Pegasus Backpack, Heron MS-2, or GeoSLAM Horizon.

Pict. Spaced-out points are visible in the Gexcel Heron MS-2 backpack data (without filtering/denoising). Notice that the average 2D distance between the extreme points representing a single surface is approx. 30mm
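Figures like the 18 mm and 30 mm above can be reproduced with a simple check: take the points of a thin slice through a wall, fit a straight line to them, and measure the peak-to-peak perpendicular residual. A minimal sketch (my own, assuming 2-D slice coordinates in meters and a roughly vertical wall):

```python
import math

def layering_spread(points):
    """Peak-to-peak perpendicular scatter of 2-D slice points around a
    least-squares line: a rough 'wall thickness' of the sliced data.
    The wall is assumed to be roughly vertical (x modelled as a*y + b)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points)
    syy = sum((y - my) ** 2 for _, y in points)
    a = sxy / syy                      # slope of the fitted line
    norm = math.hypot(1.0, a)
    residuals = [(x - mx - a * (y - my)) / norm for x, y in points]
    return max(residuals) - min(residuals)

# points scattered +/- 9 mm around a vertical wall face at x = 2 m
slice_pts = [(2.009, y) for y in range(5)] + [(1.991, y) for y in range(5)]
print(layering_spread(slice_pts))  # ~0.018, i.e. the 18 mm figure above
```

A layered point cloud shows up immediately here: instead of one tight band of residuals, you get two or more distinct clusters, and the spread jumps.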

Point cloud quality

Scans of the outdoor environment on a cloudy day are nicely colorized. Unfortunately, too-bright (direct sunlight or artificial light) or too-dark scan areas end up with noticeably overexposed colors, so the user has to rely on a single color to recognize scanned elements. The view can be switched to black & white or an intensity color palette, which partly solves the problem.

When it comes to checking the entire walk and its coherence, the results are not so promising. As long as your walk doesn’t start and end in the same spot, you are mostly safe. If you want to scan in a loop, say walking around a building, you will most likely end up with a mismatch between the first and last seconds of your walk. By loop, I mean that the start position and the end position are the same, and the scan data overlaps at the start/end spot. This happens because drifts occur along the trajectory. I assume there can be many factors that lead to distorted data and lost spatial consistency. Here are some of them:

  • human and road traffic near the scanner operator
  • movement of nearby vegetation, e.g. leaves in the wind
  • reflective, transparent, and semi-transparent surfaces
  • improper walking speed
  • a too-uniform scanned environment: long corridors or tunnels without any features that would make the space less homogenous, e.g. furniture or paintings on a wall
  • too-small features in the scanned space, which the laser beam would hit only a few times; imagine an empty room with only lamps on the ceiling, hooks in the walls, and electrical boxes: to the human eye there are plenty of elements (not just bare walls), but from the algorithm’s perspective there is nothing that can help in the data processing
  • scanner shaking
  • too-quick turns during a walk
  • and probably a few more I can’t remember now

The mentioned mismatch at the end of the walk’s loop happened every time I chose this strategy. I tested it for outdoor and indoor scanning, and in both variants the result was the same: a mismatch! You probably wonder how big it was; it varied from 20 cm to 120 cm. At that point, my impression changed 180 degrees. A very important factor is the walk duration. In my cases, walks took from approx. 7 to 23 minutes. I was assured by Leica representatives that I can walk as long as I want/need. I don’t think 7 or 15 minutes of walking is long for MLS equipment, and delivering consistent data should be doable. I also think that, in the BLK2GO’s case, a 30- or 60-minute walk is too long given the small IMU implemented in this device.
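The 20 to 120 cm figures above are simply how far apart the start and end of a nominally closed loop ended up. As a sketch (my own, with hypothetical trajectory coordinates in meters), the closure error and the drift per meter walked can be computed like this:

```python
import math

def loop_closure(trajectory):
    """Return (closure_error, drift_per_metre) for a walk that should
    start and end at the same spot. A crude stand-in for the start/end
    mismatch: real SLAM drift is judged on overlapping geometry, not
    just the trajectory end points."""
    length = sum(
        math.hypot(x1 - x0, y1 - y0)
        for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:])
    )
    (xs, ys), (xe, ye) = trajectory[0], trajectory[-1]
    closure = math.hypot(xe - xs, ye - ys)
    return closure, closure / length

# a ~40 m square loop that misses its start point by about half a metre
walk = [(0, 0), (10, 0), (10, 10), (0, 10), (0.5, 0.2)]
closure, drift = loop_closure(walk)
print(round(closure, 3), round(drift, 4))
```

Normalizing by walk length is what makes the 7-minute and 23-minute walks comparable: a fixed closure error on a short loop is a much worse sign than the same error on a long one.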

Unfortunately, Leica Cyclone doesn’t give you any chance to fix drifts, straighten a trajectory, or remove unfixable pieces of data. The user has three options:

  1. Send the data to Leica support and pray that it is repairable
  2. Take the walk again, which can be costly if you have to travel back to the scanning spot
  3. Create a super-smart workflow

A small digression: in one of Facebook’s groups, I chatted with a very satisfied BLK2GO user about the super-smart workflow. He told me that he and his colleagues created a very good workflow consisting of their own scanning techniques and BLK2GO walks of max. 3 minutes, which gives them satisfying results. He didn’t want to share the details, which I fully respect, but he mentioned that it took approx. a year to achieve a stable and reliable workflow. You, my dear reader, can decide for yourself whether that is long or not. I believe it was an expensive journey, because during that year I can only imagine how many tests they did, how many projects they redid, and how many times MLS data had to be supplemented or replaced by TLS data.

Let’s leave the digression about building a workflow and go back to the other options in the case of a mismatched point cloud. Personally, I tried both. I redid a walk of the same area twice: two floors of a parking lot connected by two separate staircases and a very wide parking ramp. Each of these three attempts failed, so I can say with a clear conscience that redoing a walk will not always deliver good scanning results. I also did the same walk twice in another area. That was outdoor scanning of building facades: a mix of bricks, concrete, glass, and steel, with a predominance of bricks. The results were the same as in the parking example. You probably wonder how I knew I should redo the scanning. In the parking lot, I noticed the mismatch in the BLK2GO Live app; in the outdoor scanning, I just redid it to be sure.

I also took advantage of the option of asking Leica support for help. I shared with them two samples of my raw data with the smallest misalignment: one from outdoor and one from indoor scanning. Almost a week later, I got back the results of the support team’s data repair. They managed to decrease the error in the outdoor walk but couldn’t do anything with the indoor walk (parking lot). Below you can read the message I got. The outdoor walk’s error was changed from a horizontal misalignment to a smaller horizontal plus a vertical misalignment.

The issues of misalignment in the point clouds are most due to the scenes that are challenging for SLAM as combination of few features, long walk and for one case also initialization phase that was not ideal.
In the following, you will find all the details for the datasets you shared:

Customer 1 — Outdoor: “outside.b2g”

No issues are found in the dataset, and it’s switched to fallback option. I believe the offset is coming due to the complexity of the scan environment where we see quite of downhill & uphill areas. Fortunately, we resolved this offset with parameter tuning as the below slices of the misalignment-free point cloud.

Customer 1 — Indoor: “Garage5–4floor.b2g”

In terms of misalignment in the SLAM solution, the scan environment is quite challenging for SLAM due to less features available (barely empty parking lot) and ramp (which is the most difficult part for SLAM). In addition, we have seen some issue during initialization of the device. This and the complexity of the scene led to SLAM to fail.

Pict. Outdoor dataset: “outside.b2g”
Pict. Outdoor dataset: “outside.b2g”
Pict. Indoor dataset: “Garage5–4floor.b2g”

As I mentioned, the fixing by Leica support still ended up with errors, which you can check in the pictures below. I also don’t agree with the support’s comment regarding the indoor walk. I did three similar walks with the same results. I would rather wait an extra 5-10 seconds before beginning a walk with the scanner than disturb the initialization process, so I suppose this could be an internal scanner failure.

Pict. 12cm error in XY plane, reduced from 130cm error by Leica support.
Pict. When the error in the XY plane was reduced, a new error was created in the Z plane - approx. 20cm.

So what would happen in the case of an unfixable point cloud? You should probably go back to the project’s area and do the acquisition one more time. Of course, it is up to you how you fix this issue, but in my case I redid the scanning with an RTC360 after the BLK2GO data failure.

Another outdoor test walk

It was one of many walks with the BLK2GO, but this one was very simple. That’s why I take it as an example of another outdoor test. I walked around a church on a Saturday afternoon.

Pict. Top view of the scanning area. In the center is the scanned church. The green line represents trajectory.

There was light car traffic on the street and some people walking near the church. I would describe it as a lazy Saturday area. The acquisition took approx. 4 minutes over a distance of 200 m, so you can assume it was a very slow walk.

The overall quality of the point cloud seemed good until I found layering in it. I found it by slicing the data with 20 cm vertical and horizontal cross-sections. The misalignment among points reached 5 cm.

Pict. The vertical cross-section through the front part of the church. Notice dY values indicate layering.
Pict. Front view of the vertical cross-section. The noticeable layering of the point cloud.
Pict. Top view. The visible layering of the point cloud (another spot than the one above).
Pict. Range of the collected data (mostly up to 20 meters).

The last test after the firmware 2.0.2 update

In May 2021, I tested the BLK2GO for the last time. The reason was an updated firmware which was supposed to fix most of the known issues. I did two walks in a parking lot (a different one than previously) and one outdoor walk.

Pict. Perspective view on the outdoor data.

Indoor point clouds looked decent, unlike the outdoor data. The walk around the parking lot’s building took approx. 10 minutes and 350 m. It was a closed loop: I finished the walk in the same spot where I started.

Pict. 350m walk took 10min and 32 sec.

On the face of it, the point cloud generated from this walk looked proper. Cross-sections, however, again showed some misalignment/layering. This time the error equaled 50 cm! The point clouds of the beginning and the very end were rotated relative to each other.

Pict. Marked spot with the 498mm error.
Pict. Top view with shown layering.

Surprisingly enough, my colleague at Leica imported the same data into Leica Cyclone 360 but couldn’t find any error in the same spot. This is odd! I decided to reproduce the error and imported the same data again. Unfortunately, I had updated Cyclone 360 to the newest version in the meantime, and the reimport didn’t contain the previous error. Was it an incompatibility of software and firmware versions? I have no idea.

Anyway, an error was still there near the spot of the walk’s beginning/end. This error was about 10 cm and lay along the trajectory. It suggests that the IMU had a shift at some point and the walk’s loop didn’t close. Yes, it is not a big deal, but when you look at the issue from a wider perspective, you cannot be sure when and how errors happen, and how to avoid them. Perhaps performing a multitude of measurements would show some correlations between the errors and the acquisition environment or the method of data collection (fast vs. slow walking, the way of scanning narrow places, etc.).

Pict. The top view shows two (the same) walls - layering.
Pict. The horizontal cross-section shows the same error.

The last issue I noticed was the differences between the indoor point clouds. Each of them comes from a different level of the parking lot. Their common area is approx. 30 m of the driveway, which was scanned in each of these walks. That means I had 30 meters of common point cloud for both levels which I could use to constrain them. I tried to register these point clouds several times and, I can promise you, it was impossible to merge them properly. First of all, they didn’t match each other because of some bending.

Pict. Bending one of the point clouds.

Second of all, over these common 30 meters, one of the scans was longer by approx. 1.5 cm. Not a big deal, but I wonder: would this error multiply over 300 meters? Would we get a 15 cm error then?

Pict. Comparison between indoor scans in the common 30 meters. Notice the yellow color on almost the entire walls in the right driveway.
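If the 1.5 cm discrepancy over the 30 m overlap behaved like a pure scale error, it would indeed grow linearly with distance. That is an assumption on my part, not something I measured, but the arithmetic is simple:

```python
# 1.5 cm over 30 m, treated as a constant relative (scale) error
scale_error = 0.015 / 30.0       # = 0.0005, i.e. 0.05 %
extrapolated = scale_error * 300.0
print(extrapolated)              # roughly 0.15 m over a 300 m walk
```

Whether BLK2GO drift really accumulates linearly like this is exactly the kind of question that would need many repeated walks to answer.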

Usability. Panorama pictures

I will start this topic by taking a look at the photos produced by the cameras. The BLK2GO has three 2 Mpx built-in cameras which are responsible for producing panoramas. My first thought was: “Really? Only 2 Mpx, at a time when even smartphones offer 108 Mpx?” Pano images don’t cover a 360-degree field of view. The area where the scanner’s operator walks is not covered in the panoramas (I think it is about 60 to 70 degrees). I think it is a good solution that reduces the time spent blurring/editing each individual picture in case we don’t want the operator in the pictures. On the other hand, the operator always has to remember to change the walking direction so that the entire scanning area is covered by pictures.

Unfortunately, the low-quality cameras are good enough for colorizing a point cloud but rather useless for TruView, which can be a huge blocker for some of Leica’s customers. In good weather/light conditions, not too bright, not too dark, the pictures are just OK, with poor readability, especially when you zoom in.

Another issue with the pano images is that most of the shakes during acquisition are visible as blurry artifacts.

The third disadvantage is the ugly joins between the single pictures in each panorama. Perhaps I am too picky, but in my opinion, if every other laser scanner from Leica’s stable (except the older C- and HDS-series) delivers beautiful pano pictures, then users are accustomed to that quality. It is 2021, and a brand like Leica cannot afford to deliver low-quality products.

  • no HDR
  • ugly joins where single pictures are stitched together
  • very low panorama quality
  • shaking artifacts visible in panoramas

A very important thing to mention: to export waypoints together with a point cloud, a user needs a Publisher Pro license! At least when exporting to e57 without importing into Cyclone.

Pict. Poor stitching between single pictures to a single panorama
Pict. 3x zoom. Visible artifacts, texture errors/missed pixels, and unstraight edges
Pict. Poor quality in low-light conditions. High overexposure.
Pict. Stitching errors and different exposure levels between subpart pictures.
Pict. Fata Morgana effect.

Usability. Point cloud

When we consider a point cloud and its quality, we have to look at two aspects: accuracy and readability. Accuracy, as you could read earlier, is random. You can’t predict how accurate the point cloud you get will be. Will it be misaligned or not, leveled or not? If you want to use it for any purpose other than visualization, you have to think twice about whether it is worth using the BLK2GO.

Readability has to be considered from the project’s perspective. In construction projects where the focus is only on relatively large elements such as walls, floors, ceilings, and beams, the filtered point cloud is mostly suitable for BIM modeling. Things get tough when the project requires the modeling of small elements, e.g. fire sprinklers. No matter what filtering level is used, the BLK2GO will not deliver details like these. This is nicely shown in the pictures presenting the underground parking lot. In the photograph, a floor drain is visible next to a pillar. The same area is shown from the same perspective in the point cloud (without any filter applied), where the mentioned floor drain is not recognizable. That’s how unreliable the data is when it comes to detail modeling.

Pict. A floor drain is visible next to the pillar.
Pict. Zoom to a floor drain
Pict. The same area in the unfiltered point cloud. The arrow shows the spot where is a floor drain.
Pict. Black and white pipes.
Pict. The same area in the unfiltered point cloud with visible only white pipes. Black color absorbed the laser light.

Usability. Targets

By now, we know that the point cloud is not highly detailed. It would be good to supplement it with a TLS point cloud. The best and most accurate way is to use targets (B/W, spheres, or blinkers). Unfortunately, Leica failed again. None of those types of targets is recognizable in the BLK2GO point cloud. No matter the walking speed (you can even stop next to a target), the distance between the scanner and a target, or the scanning environment, outdoor or indoor, neither manual nor automatic target recognition is possible in Cyclone software. It most likely happens because of the lack of points representing a target in the point cloud.
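A rough back-of-the-envelope count supports that explanation. Assuming (my numbers, purely illustrative) an effective point spacing of about 2 cm on the target’s surface, a standard 15 cm target collects only a few dozen returns, which is far too few to resolve its pattern or fit its shape reliably:

```python
import math

def returns_on_target(diameter_m, spacing_m):
    """Rough count of laser returns on a flat circular target, assuming
    a uniform grid of hits with the given spacing. Illustrative only;
    the real hit pattern of a moving scanner is far messier."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return int(area / spacing_m ** 2)

# 15 cm target, ~2 cm effective point spacing (assumed, not measured)
print(returns_on_target(0.15, 0.02))  # 44 returns
```

With the roughly 18 mm perpendicular noise discussed earlier on top of such sparse coverage, it is no surprise that neither manual nor automatic recognition finds the target.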


The BLK2GO is a cool device, and its simplicity lowers the entry threshold for most laser-scanning enthusiasts. Unfortunately, the quality and usability of its data are low. BIM modeling based on this data would be difficult or very difficult, depending on the LOD. Pano pictures (waypoints) for TruView are mostly useless. In my opinion, the user never knows when they can rely on a point cloud generated by this device. A genuine engineering project needs precise 3D data which can be used in the entire BIM cycle.



Lukasz Wiszniewski

I am a geomatician and software developer with over a decade of experience in reality capture and BIM. More info at 3d-points.com.