digiKam is an advanced digital photo management application for KDE, which makes importing and organizing digital photos a "snap". The photos are organized in albums which can be sorted chronologically, by folder layout or by custom collections.

Tired of folder constraints? Don't worry, digiKam also provides tagging. You can tag images spread across multiple folders, and digiKam provides fast and intuitive ways to browse these tagged images. You can also add comments to your images. digiKam uses a fast and robust database to store this metadata, which makes adding and editing comments and tags very reliable.

digiKam makes use of KIPI plugins for much additional functionality. KIPI (KDE Image Plugin Interface) is an initiative to create a common plugin infrastructure for digiKam, KPhotoAlbum, and Gwenview. Its aim is to allow the development of image plugins that can be shared among KDE graphical applications.

An easy-to-use interface is provided that enables you to connect to your camera and preview, download and/or delete your images. Basic auto-transformations can be deployed on the fly during image downloading.

Another tool that most artists and photographers will be familiar with is the Light Table. This tool assists artists and photographers in reviewing their work to ensure only the highest quality. A classical light table shows the artist the places on an image that need touching up. digiKam's light table function provides a similar experience: you can import a photo, drag it onto the light table, and touch up only the areas that need it.

With digiKam you can:
  • import pictures
  • organize your collection
  • view items
  • edit and enhance
  • create (slideshows, calendar, print, ...)
  • share your creations (using social web services, email, your own web gallery, ...)

What's New:

Complete release notes here.

Deep-Learning Powered Faces Management

For many years, digiKam has provided an important feature dedicated to detecting and recognizing faces in photos. The algorithms used in the background (not based on deep learning) were old and had remained unchanged since the first release to include this feature (digiKam 2.0.0). They were not powerful enough to automate the faces-management workflow.

Until now, the complex methodologies that analyze image contents to isolate and tag people's faces used the classical feature-based Cascade Classifier from the OpenCV library. This works, but does not provide a high rate of positive results: face detection yields about 80% good results, and the analysis requires a lot of user feedback to confirm whether what it has detected is really a face. Also, according to user feedback from bugzilla, face recognition does not provide a good experience as an auto-tag mechanism for people.

During the summer of 2017 we mentored a student, Yingjie Liu, who worked on the integration of neural networks into the Face Management pipeline based on the Dlib library. The result was mostly demonstrative and very experimental, with poor computation speed. We saw this as a technical proof of concept, but not usable in production. The approach to resolving the problem took a wrong turn, and that is why the deep learning option in Face Management was never activated for users.

We tried again this year, and a complete rewrite of the code was successfully completed by a new student named Thanh Trung Dinh.

The goal of this project was to leave behind all the old ideas and port the detection and recognition engines to more modern deep-learning approaches. The new code, based on recent Deep Neural Network features from the OpenCV library, uses neural networks with pre-trained data models dedicated to Face Management. No learning stage is required to perform face detection and recognition. This saves coding time, improves run-time speed, and raises the success rate to 97% true positives. Another advantage is that it is able to detect non-human faces, such as those of dogs, as you can see in this screenshot.

But there are more improvements to face detection. The neural network model that we use is a really good one, as it can detect blurred faces, covered faces, face profiles, printed faces, faces turned away, partial faces, etc. Processing huge collections gives excellent results with a low level of false positives. See below for examples of face detection challenges handled by the neural network.

The recognition workflow is still the same as in previous versions but it includes quite a few improvements. You need to teach the neural network with some faces so that it automatically recognizes them in a collection. The user must tag some images with the same person and run the recognition process. The neural network will parse the faces already detected as unknown and compare them to ones already tagged. If new items are recognized, the automatic workflow will highlight new faces with a green border around a thumbnail and will report how many new items are registered in the face-tag. See the screenshot below taken while running the face recognition process.

Recognition can start to work with just one face tagged, where at least 6 items were necessary to obtain results with the previous algorithms. But of course, if more than one face is already tagged, recognition is more likely to return good results. The true positive recognition rate with deep learning is really excellent and rises to 95%, where the older algorithms could not reach 75% in the best of cases. Recognition also includes Sensitivity/Specificity settings to tune the results' accuracy, but we advise you to leave the default settings as you begin experimenting with this feature on your own collection.
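The recognition idea behind this workflow can be sketched as follows: the network maps each face to an embedding vector, and an unknown face is assigned to the tagged person whose embedding is closest, provided the distance falls below a threshold. The vectors and threshold here are toy values for illustration, not digiKam's.

```python
# Sketch of embedding-based face recognition: nearest tagged
# neighbour under a distance threshold. Toy vectors, not real
# network outputs.
import numpy as np

def cosine_distance(a, b):
    # 0.0 means identical direction; 2.0 means opposite.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Embeddings of faces the user has already tagged (one per person is
# enough to start, as noted above).
tagged = {
    "alice": np.array([0.9, 0.1, 0.0]),
    "bob":   np.array([0.0, 0.2, 0.9]),
}

def recognize(unknown, threshold=0.3):
    """Return the best-matching name, or None if nothing is close."""
    name, dist = min(((n, cosine_distance(unknown, e))
                      for n, e in tagged.items()),
                     key=lambda t: t[1])
    return name if dist < threshold else None

print(recognize(np.array([0.8, 0.2, 0.1])))  # close to alice's vector
```

Raising the threshold trades specificity for sensitivity, which is essentially what the Sensitivity/Specificity settings mentioned above expose to the user.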

Performance is better than in previous versions, as the implementation uses multiple cores to speed up computations. We have also worked hard to fix serious and complex memory leaks in the face management pipeline. This fix took many months to complete, as the errors were very difficult to reproduce. You can read the long story in this bugzilla entry. Resolving this issue allowed us to close a long list of older reports related to Face Management.
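The multi-core idea is simply that per-image detection is independent work that can be distributed across workers. A minimal sketch, with a dummy stand-in for the real detector (OpenCV's calls release the GIL, so even threads help in Python):

```python
# Sketch of parallel per-image face detection. detect_faces is a
# hypothetical placeholder for the real detector call.
from concurrent.futures import ThreadPoolExecutor

def detect_faces(image_path):
    # Placeholder: a real implementation would load the image and run
    # the DNN detector; here we just return a fake face count.
    return (image_path, 1)

paths = [f"photo_{i}.jpg" for i in range(8)]

# Each image is processed independently, so the work distributes
# cleanly across a pool of workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(detect_faces, paths))

print(len(results))  # one result per image, in input order
```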

To complete his project, Thanh Trung Dinh presented the new deep learning faces management at Akademy 2019 held in September in Milan. The talk was recorded and is available here.

Although Thanh’s project is complete, the whole story is not and the second stage of rewriting the Face Management workflow is an ongoing process with two new students working on it this summer.