
Locating Street Lights in Weesp in Point Clouds using Deep Learning

Blog post by amsterdamintelligence

In a previous blog post, we discussed how we can use smart data fusion to automatically annotate 3D point clouds based on public data sources, such as elevation data and topographical maps. We also demonstrated how we were able to train a deep semantic segmentation algorithm using these annotated point clouds. This algorithm could then be used to annotate new point clouds or to improve the earlier annotations done using data fusion.

Today we will demonstrate how one can use these results to automatically locate street lights in an entire city, and how point cloud data allows us to extract meaningful properties of the found objects, such as their height and inclination. Point clouds are 3D image-like representations, consisting of a large number of points, the 3D analogue of pixels. Each point not only has RGB color information, but also <x, y, z> coordinates, which allow us, for example, to measure distances accurately.
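To make this concrete, here is a minimal sketch of reading one such point cloud tile in Python, assuming the tiles are delivered as LAS/LAZ files with RGB colors and using the laspy library (the file name is hypothetical):

```python
import numpy as np
import laspy  # pip install laspy[lazrs] for .laz support

# Read a single tile (hypothetical file name).
las = laspy.read("weesp_tile_0001.laz")

# Stack the <x, y, z> coordinates into an (N, 3) array.
points = np.vstack((las.x, las.y, las.z)).T

# Per-point RGB colors, stored as 16-bit values in the LAS format.
colors = np.vstack((las.red, las.green, las.blue)).T / 65535.0

# The coordinates make distances directly measurable, e.g. between
# the first two points in the tile:
print(np.linalg.norm(points[0] - points[1]), "m")
```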

Dataset

For this project, we looked at the urban area of Weesp. Through CycloMedia we obtained a point cloud, recorded using a mobile LIDAR (LIght Detection And Ranging, a laser scanner that records distances to points in 3D space) device mounted on a vehicle. As a result, we have coverage only of public areas that are accessible by car. The resulting point cloud encompasses an area of roughly 9 km² and over 7 billion points in total. The point cloud is divided into 3569 tiles of 50×50 meters each. Of those, we used our data fusion approach to label a training set of 109 tiles (see image), of which we used 99 for training and 10 for validation.

Point cloud tiles of Weesp (blue) with the training set shown in orange.

Deep semantic segmentation model

We trained RandLA-Net, a state-of-the-art deep semantic segmentation algorithm specifically designed for point cloud data, using the annotated training set. Training the RandLA-Net model took roughly 1 to 2 days on a single P100 GPU, by which point it had converged. Performance on the validation set is shown in the table below:

RandLA-Net performance on the validation set. 

Although the model performs very well for ground, buildings, trees, and cars, it has more difficulty identifying street lights. This is due to an imbalance in the dataset: street lights make up only 0.17% of the training set in terms of number of points.
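As an illustration, such an imbalance can be quantified directly from the labeled tiles. A minimal sketch, assuming per-point labels are available as an integer array (the class encoding is hypothetical):

```python
import numpy as np

# Hypothetical integer encoding of the classes in our training set.
CLASSES = ("ground", "building", "tree", "car", "street light")

def class_fractions(labels):
    """Per-class fraction of points; street lights are only ~0.17% here."""
    counts = np.bincount(labels, minlength=len(CLASSES))
    return {name: count / labels.size for name, count in zip(CLASSES, counts)}
```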

Finally, we used the trained model to label each point in the full point cloud of Weesp, which took 30 to 60 seconds per tile.

Extracting street lights

The next step was to locate all street lights in the annotated point cloud. For this, we first filtered out the points that were labeled as street lights by the semantic segmentation model. Then, we clustered those points, obtaining a rough division into individual objects. We ignored objects less than 3 m high or consisting of fewer than 100 points, as these are most likely noise. This way we ended up with 4244 potential street lights.
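A minimal sketch of this filtering and clustering step, using scikit-learn's DBSCAN; the label id and the clustering parameters are illustrative assumptions, not our exact settings:

```python
import numpy as np
from sklearn.cluster import DBSCAN

STREET_LIGHT = 4  # hypothetical label id for the street light class

def extract_candidates(points, labels, min_points=100, min_height=3.0):
    """Cluster street-light points into candidate objects, dropping likely noise."""
    xyz = points[labels == STREET_LIGHT]

    # Group nearby points into individual objects.
    cluster_ids = DBSCAN(eps=0.5, min_samples=10).fit_predict(xyz)

    candidates = []
    for cid in set(cluster_ids) - {-1}:  # -1 marks DBSCAN noise points
        obj = xyz[cluster_ids == cid]
        height = obj[:, 2].max() - obj[:, 2].min()
        # Ignore short clusters and clusters with too few points.
        if len(obj) >= min_points and height >= min_height:
            candidates.append(obj)
    return candidates
```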

We then performed an automatic analysis of each object to extract relevant features such as the height and inclination of the lamp post. To do so, we segmented each object into horizontal slices with a height of 10 cm and computed the variance in the X and Y directions for each slice. The idea here is that a low variance in both directions indicates that the slice captures the lamp post itself, rather than additions such as the light arm or traffic signs. This way we obtained only those points that make up the lamp post, and we used PCA to fit a straight line to the pole. Using this fitted line we could compute the height and inclination of the lamp post.
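A sketch of this slice-and-fit procedure with NumPy and scikit-learn's PCA; the variance threshold is an assumed value for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_pole(obj, slice_height=0.10, max_var=0.01):
    """Keep low-variance 10 cm slices (the bare pole) and fit a line with PCA."""
    z = obj[:, 2]
    slices = []
    for z0 in np.arange(z.min(), z.max(), slice_height):
        sl = obj[(z >= z0) & (z < z0 + slice_height)]
        # Low variance in both X and Y means the slice contains only the
        # pole itself, not the light arm or attached traffic signs.
        if len(sl) > 1 and sl[:, 0].var() < max_var and sl[:, 1].var() < max_var:
            slices.append(sl)
    pole = np.concatenate(slices)

    # The first principal component of the pole points is the pole axis.
    pca = PCA(n_components=1).fit(pole)
    direction = pca.components_[0]
    origin = pole.mean(axis=0)
    height = z.max() - z.min()
    return origin, direction, height
```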

Finally, we generated images of each individual object from different angles, with the fitted line overlaid, and inspected each manually (see an example below). Using this manual inspection we identified 13.6% false positives (mostly traffic lights, flag poles, and other vertical pole-like objects) and a further 10.5% whose point density was too low for classification. In the end, we were left with 3199 positively identified street lights.

Example of an extracted street light, with the fitted pole line shown in red.
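Images like the one above can be produced with a short matplotlib sketch, reusing the origin and direction returned by fit_pole:

```python
import numpy as np
import matplotlib.pyplot as plt

def render_views(obj, origin, direction, out_prefix="object"):
    """Save views of an object from several angles, with the fitted line in red."""
    # Extend the fitted line over the full height of the object.
    t = (np.array([obj[:, 2].min(), obj[:, 2].max()]) - origin[2]) / direction[2]
    line = origin + t[:, None] * direction

    for azim in (0, 90, 180, 270):
        fig = plt.figure()
        ax = fig.add_subplot(projection="3d")
        ax.scatter(obj[:, 0], obj[:, 1], obj[:, 2], s=1)
        ax.plot(line[:, 0], line[:, 1], line[:, 2], color="red", linewidth=2)
        ax.view_init(elev=10, azim=azim)
        fig.savefig(f"{out_prefix}_azim{azim}.png")
        plt.close(fig)
```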

Breakdown of the results

Now that we have extracted the individual street lights, we can analyze their relevant properties. First of all, we look at the height, which gives us useful information that can be used to further classify them into subtypes, such as smaller residential poles or the taller masts found along main roads. The figure below illustrates the distribution of pole heights on the horizontal axis, while the color corresponds to the number of points of each individual object.

The distribution of pole heights and the number of points in each object. Note: the vertical axis does not contain information and is merely used to spread out the points in the graph.


The height distribution clearly shows clusters around whole meters, with the majority of lamp posts being either 4 m or 6 m high. Some outliers of 15 m and 18 m are in fact floodlights around sports fields.

Next, we look at the skew of the pole, meaning the offset in degrees with respect to a vertical line. In other words, whether the pole is standing up straight or leaning over. The figure below illustrates the distribution of skew.

Distribution of skewness, or offset w.r.t. the vertical position.
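In code, the skew follows directly from the pole axis fitted earlier: it is the angle between that axis and the vertical. A minimal sketch, reusing the PCA direction from fit_pole above:

```python
import numpy as np

def skew_degrees(direction):
    """Skew of the pole: angle between the fitted axis and the vertical, in degrees."""
    v = direction / np.linalg.norm(direction)
    # abs() makes the result independent of the sign of the PCA direction.
    return float(np.degrees(np.arccos(np.clip(abs(v[2]), 0.0, 1.0))))
```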


We can see that the majority of lamp posts are more or less vertical, with a very small offset of 1 or 2 degrees, which can partially be attributed to noise in the point cloud or a small error in the PCA fit. For larger values, e.g. more than 5 degrees, there is more certainty that the pole is indeed not standing straight. The figure below shows an example where a lamp post is clearly leaning over. In total, we identified 97 lamp posts with a skew of more than 5 degrees. This information can be used for targeted maintenance.

Example of a pole for which a skew has been identified, meaning it is no longer standing up straight and might require maintenance.


Finally, we can look at the <x, y> locations of the extracted street lights on a map, and compare these to the expected locations as currently registered in the BGT (Basisregistratie Grootschalige Topografie, the Dutch topographical registry). Out of the 3199 identified objects, we find a match (less than 2 m distance) in the registry for 2727 street lights (see figure below). Of those, 2496 are within 20 cm of their expected location, and 2685 are within 1 m. This information can help to automatically correct the registry, assuming that the accuracy of the point cloud is sufficient.
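A sketch of this matching step, using SciPy's k-d tree over the registry coordinates; the 2 m threshold is from the text, while the array inputs are assumed:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_to_registry(found_xy, registry_xy, max_dist=2.0):
    """Match each extracted street light to its nearest BGT registry entry."""
    tree = cKDTree(registry_xy)
    # Queries with no neighbour within max_dist get distance = inf.
    dist, idx = tree.query(found_xy, distance_upper_bound=max_dist)

    matched = np.isfinite(dist)
    print(f"matched:      {matched.sum()} of {len(found_xy)}")
    print(f"within 20 cm: {(dist[matched] <= 0.2).sum()}")
    print(f"within 1 m:   {(dist[matched] <= 1.0).sum()}")
    return idx, dist
```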

We also find objects at locations that are not in the registry, which is mainly due to two causes: a street refurbishment or new construction project that has not yet been entered into the registry, or an object that is not maintained by the municipality (e.g. floodlights around sports fields or lamp posts along national roads). Finally, for a number of objects in the registry, we did not find a match in the point cloud. This is either due to their shape or placement (e.g. mounted on a building wall, which is harder to detect) or because of a lack of coverage in the point cloud (such as pedestrian zones, which the scanning vehicle cannot reach).

Located street lights in Weesp. Green: perfect match with BGT; Yellow: match with >20 cm difference; Red: match with >1 m difference; Purple: no match; Black: in BGT but not found in point cloud.

Code

Our code for extracting street lights from labeled point clouds is available on GitHub, along with example data and tutorial notebooks.

Wrapping up

Altogether, in this blog post we have discussed how machine learning and data science can help to update registries of street furniture, such as street lights. We have also illustrated how features of extracted objects can be used to guide targeted maintenance. At the moment we are in the process of extending this type of analysis to more types of street furniture, such as city benches and rubbish bins. In addition, we plan to scale up to the larger urban area of the City of Amsterdam.

Stay tuned for more updates!


Source: amsterdamintelligence


Image credits

Header image: street lights in point clouds - by Daan Bloembergen