In this project, we have experimented with applying artificial intelligence to principal component analysis of our images, using neural networks and algorithms.

Two of the results in the project are described in more detail below.

  1. The algorithms that surface compositional similarities in a new user interface on our website.
  2. The algorithm that classifies the National Museum's art by subject keywords.

A new interface within the collection online

As a starting point in training an algorithm to analyze our images, we used a neural network trained on ImageNet, written in Caffe [1], developed by the Autonomous Perception Research Lab at Berkeley [2]. We then retrained it to classify art movements using images from the WikiArt collection. We also experimented with face recognition, using the neural-net features in OpenFace [3], and analyzed composition, color and style using "neural style". Here we ran tests across parameters for figures and shapes, faces, age, gender, and typical/atypical works per decade. The algorithm then identified and marked motifs in our own collection. Finally, we added a presentation using the t-SNE algorithm (t-distributed stochastic neighbor embedding) [4], which reduces multidimensional data to a two-dimensional layout, grouping the images by similarity of motif, technique, composition and use of color. We also explored different visualization methods: 2D versus 3D t-SNE, "clustering", and a visualization of our new "neighborhood" of images using a "fish-eye" tool. The result was a new user interface on our web pages:
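The dimensionality-reduction step described above can be sketched in a few lines. This is a minimal illustration, not the project's actual pipeline: the feature vectors here are random stand-ins for the real neural-network features extracted from the images, and the parameter values are assumptions.

```python
# Sketch: project high-dimensional image features to a 2-D layout with
# t-SNE so that similar works land near each other, as in the interface
# described above. Random vectors stand in for real CNN features.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 4096))  # e.g. one 4096-d feature vector per image

tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
coords = tsne.fit_transform(features)    # shape (200, 2): x/y position per image

print(coords.shape)
```

The resulting two-dimensional coordinates can then be fed directly into a plotting or web-visualization layer.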

All information related to the artworks is harvested via the Digital Museum API. The application is updated automatically as we publish more works of art. The source code is open and available on the National Museum’s GitHub:
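Harvesting and reshaping the artwork metadata might look something like the sketch below. Note that the response fields and identifiers here are illustrative assumptions, not the actual Digital Museum API schema.

```python
# Hedged sketch of the metadata-harvesting step: parse an API response
# and keep only the fields the interface needs. The payload structure
# and field names below are assumptions for illustration.
import json

def parse_artworks(payload: str) -> list[dict]:
    """Extract id, title and artist from a (hypothetical) API response."""
    data = json.loads(payload)
    return [
        {
            "id": doc["id"],
            "title": doc.get("title", ""),
            "artist": doc.get("artist", ""),
        }
        for doc in data.get("docs", [])
    ]

# Illustrative sample response, not real API output:
sample = json.dumps(
    {"docs": [{"id": "NG.M.00001", "title": "Skrik", "artist": "Edvard Munch"}]}
)
works = parse_artworks(sample)
print(works[0]["artist"])  # Edvard Munch
```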

For a more detailed history of the field and more detailed technical descriptions, see Bengler's website:

Automated, conventional classification

One of the project’s main objectives, in addition to creating a new search interface for the public, was to train the algorithm on the classification system Iconclass [5]. Through the database Arkyves [6] we gained access to large datasets from art collections similar to our own. However, the methodology we chose turned out to be less successful for the more specific Iconclass categories: the algorithm revealed a structural irrationality in which the subcategories of the hierarchies did not necessarily correspond to the main categories in a simple machine-readable manner [7]. The analysis of the main categories, on the other hand, worked well. We therefore mapped Iconclass's more general motif types to the National Museum's own subject keyword lists as well as lists from the Norwegian Feltkatalogen [8] (A.2.5 Motivtype (B, F)). With the automatic generation of subject keywords, we can now add missing descriptive metadata to the works' default records in our catalog.
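The mapping from specific Iconclass notations to general motif types can be sketched as follows. Iconclass notations are hierarchical strings whose first digit gives the main division; the museum keywords shown here are an illustrative subset, not the actual Feltkatalogen mapping used in the project.

```python
# Sketch: collapse a detailed Iconclass notation to its top-level
# division, then translate that division to a general subject keyword.
# The keyword strings are illustrative assumptions.
MAIN_DIVISION_KEYWORDS = {
    "1": "religion and magic",   # Iconclass division 1
    "2": "nature",               # division 2: Nature
    "4": "society",              # division 4: Society, Civilization, Culture
    "9": "classical mythology",  # division 9
}

def keyword_for(notation: str) -> str:
    """Map a full Iconclass notation to a general subject keyword."""
    division = notation.strip()[0]  # the first digit is the main division
    return MAIN_DIVISION_KEYWORDS.get(division, "uncategorized")

print(keyword_for("25F23(LION)"))  # nature
```

Because only the main division is used, the ambiguities of the deeper subcategories described above never enter the mapping.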


The project has allowed us to theorize the boundaries between art history and robotics, something we have tried to convey along the way. In addition to the Arts Council network meetings, we have presented the project at the following conferences:

  1. DHN 2016 (Digital Humanities in the Nordic Countries), Blindern, Oslo, in the session Art History [9].
  2. ICOM CIDOC 2015 (International Council of Museums, Le Comité International pour la DOCumentation), at The National Museum Institute in New Delhi, in the session Techniques and Methods of Documentation [10].
  3. NORDIK 2015 (The Nordic Committee for Art History), at The Nordic House in Reykjavik, in the session Digital Art History, a New Frontier in Research [11].


We would like to thank the Arts Council Norway for the project grant and the interesting meetups in the Program for Digital Development in the museums. A big thank you also goes out to Hans Brandhorst (Editor of Iconclass & Arkyves) and Reem Weda (Information specialist terminologies, RKD – Netherlands Institute for Art History/IT & Digitisation) for giving us access to their datasets and for their highly approachable attitude to our constant inquiries. Further, we would like to thank Even Westvang at Bengler; without his outsider's view of art history, eternally optimistic ideas and solution-oriented attitude, the project could not have been realized. Finally, a special thank you to Audun Mathias Øygard for sharing his great knowledge of machine learning, his always understandable clarifications for us uninitiated and, last but not least, his endless processing of the algorithms in the project.

  1. Caffe may be compared to e.g. Torch, developed by programmers from Facebook, Twitter and Google.
  3. OpenFace:  
  4. More on t-SNE here: and
  5. Iconclass: 
  6. Arkyves contains collections from Iconclass users, such as the Rijksmuseum, RKD, Herzog August Bibliothek, and the university libraries of Milan, Utrecht, Glasgow and Illinois. http://arkyves.or
  7. For example, artists' self-portraits could not be merged, since professions and social structures are split from each other in Iconclass.
  8. Feltkatalog for kunst- og kulturhistoriske museer, Norsk Museumsutvikling (NMU), 2002. 
  9. Published abstracts located here:
  10. Published in article form here: /CIDOC 2015 
  11. NORDIK 2015

Project Team

Nasjonalmuseet: Françoise Hanssen-Bauer, Director Collection Management (project owner), Magnus Bognerud, Consultant Digital Collection Management, Dag Hensten, Senior Adviser, Comm. Dept., Gro Benedikte Pedersen, Coordinator Digital Collection Management (project manager)
Bengler: Even Westvang, technologist and designer, and Audun Mathias Øygard, Machine Learning Specialist and Data scientist. 

Project period 10 October 2015–28 April 2017.

Source code

For a more detailed history and explanation of the technical solutions see