In this project, we have experimented with artificial intelligence for principal component analysis of our images, using neural networks and related algorithms.

Two of the results in the project are described in more detail below.

  1. The algorithms that reveal compositional similarities between works, presented in a new user interface on our website.
  2. The algorithm that classifies the National Museum's art by subject keywords.

A new interface within the collection online

As a starting point for training an algorithm to analyze our images, we used a neural network trained on ImageNet, written in Caffe [1] and developed by the Berkeley Vision and Learning Center [2]. We then re-trained this network on art-movement classifications of images from the WikiArt collection. We also experimented with face recognition using neural network features from OpenFace [3], and we analyzed composition, color and style using "neural style". Here we ran tests on parameters for figures and shapes, faces, age, gender and typical/atypical works per decade, and the algorithm identified and marked motifs in our own collection. We then added a presentation based on the t-SNE algorithm (t-distributed stochastic neighbor embedding) [4], which reduces high-dimensional data to a two-dimensional layout and groups the images by motif similarity, technique, composition and use of color. Moreover, we explored different visualization methods: 2D versus 3D t-SNE, clustering, and a visualization of our new "neighborhood" of images using a "fish-eye" tool. The result was a new user interface on our web pages.
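To illustrate the t-SNE step described above, here is a minimal sketch that reduces a matrix of pre-computed image feature vectors to a two-dimensional layout with scikit-learn. The file name features.npy and the parameter values are assumptions made for this example; the project itself used features from the Caffe-based network and Bengler's own visualization pipeline.

```python
# Minimal sketch: reduce pre-computed image feature vectors to a 2D layout
# with t-SNE, as in the "neighborhood" visualization described above.
# "features.npy" is a hypothetical file holding one feature vector per image.
import numpy as np
from sklearn.manifold import TSNE

features = np.load("features.npy")        # shape: (n_images, n_dimensions)

tsne = TSNE(
    n_components=2,      # reduce to a two-dimensional layout
    perplexity=30,       # balances local versus global structure
    init="pca",          # stable starting positions
    random_state=0,
)
coords = tsne.fit_transform(features)     # shape: (n_images, 2)

# Each row of `coords` is an (x, y) position; images with similar motif,
# composition or use of color end up close to one another in the layout.
np.save("tsne_coords.npy", coords)
```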

All information related to the artworks is harvested via the Digital Museum API, and the application is updated automatically as we publish more works of art. The source code is open and available on the National Museum's GitHub: https://github.com/nasjonalmuseet/propinquity
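As a rough illustration of the harvesting step, the sketch below fetches one page of artwork metadata over HTTP. The endpoint, query parameters and credential are placeholders, not the actual Digital Museum API specification; the real harvesting code is in the propinquity repository.

```python
# Hedged sketch of harvesting artwork metadata over HTTP. The endpoint,
# parameters and response handling are placeholders for illustration only;
# consult the Digital Museum API documentation for the actual interface.
import requests

API_URL = "https://api.example.org/artworks"   # placeholder, not the real endpoint
API_KEY = "your-api-key"                       # placeholder credential

def fetch_published_artworks(page=0, rows=100):
    """Fetch one page of published artworks (hypothetical endpoint)."""
    response = requests.get(
        API_URL,
        params={"owner": "NMK", "page": page, "rows": rows},
        headers={"Api-Key": API_KEY},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Re-running the harvest picks up newly published works, which is how the
# application stays up to date as more of the collection is published.
```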

For a more detailed history of the field and further technical descriptions, see Bengler's website: http://bengler.no/principalcomponents

Automated, conventional classification

One of the project's main objectives, in addition to creating a new search interface for the public, was to train the algorithm on the classification system Iconclass [5]. Through the database Arkyves [6] we gained access to large datasets from art collections similar to our own. However, the methodology we chose turned out to be less successful for the more specific Iconclass categories: the algorithm revealed a structural irrationality in which the subcategories of the hierarchies did not necessarily correspond to the main categories in a simple machine-readable manner [7]. The analysis of the main categories, on the other hand, worked well. We therefore mapped Iconclass's more general motif types against the National Museum's own subject keyword lists as well as lists from the Norwegian Feltkatalogen [8] (A.2.5 Motivtype (B, F)). With the automatic generation of subject keywords, we can now add missing descriptive metadata to the works' default information in our catalog.
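To illustrate the final mapping step, the sketch below translates a predicted Iconclass main division into a local subject keyword. The keyword values are placeholders, not the National Museum's actual keyword lists or the Feltkatalogen motif types.

```python
# Minimal sketch of the mapping step: a predicted Iconclass main division
# is translated into a subject keyword for the catalog record. The target
# keywords below are illustrative placeholders only.
ICONCLASS_MAIN_TO_KEYWORD = {
    "1": "religion",              # 1 Religion and Magic
    "2": "nature",                # 2 Nature
    "3": "human figure",          # 3 Human Being, Man in General
    "4": "society and culture",   # 4 Society, Civilization, Culture
    "6": "history",               # 6 History
    "7": "biblical subject",      # 7 Bible
    "9": "mythology",             # 9 Classical Mythology and Ancient History
}

def keyword_for(iconclass_code: str) -> str | None:
    """Return a subject keyword for a predicted Iconclass code, if mapped."""
    main_division = iconclass_code.strip()[:1]
    return ICONCLASS_MAIN_TO_KEYWORD.get(main_division)

# Example: a work classified as "73D" (Passion of Christ) would get the
# placeholder keyword "biblical subject" added to its descriptive metadata.
print(keyword_for("73D"))   # -> "biblical subject"
```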

Theorizing

The project has allowed us to theorize the boundaries between art history and robotics, something we have tried to convey along the way. In addition to the Arts Council network meetings, we have presented the project at the following conferences:

  1. DHN 2016 (Digital Humanities in the Nordic Countries), Blindern, Oslo, in the session Art History [9].
  2. ICOM CIDOC 2015 (International Council of Museums, Le Comité International pour la DOCumentation), The National Museum Institute, New Delhi, in the session Techniques and Methods of Documentation [10].
  3. NORDIK 2015 (The Nordic Committee for Art History), The Nordic House, Reykjavik, in the session Digital Art History, a new frontier in research [11].

Acknowledgements

We would like to thank the Arts Council Norway for the project grant and the interesting meet-ups in the Program for Digital Development in the museums. A big thank you also goes out to Hans Brandhorst (Editor of Iconclass & Arkyves) and Reem Weda (Information specialist terminologies, RKD – Netherlands Institute for Art History/IT & Digitisation) for giving us access to their datasets and for their highly approachable attitude to our constant inquiries. Further, we would like to thank Even Westvang at Bengler; without his outsider's view of art history, eternally optimistic ideas and solution-oriented attitude, the project could not have been realized. Finally, a special thank you to Audun Mathias Øygard for sharing his great knowledge of machine learning, for his always understandable clarifications of the subject to us uninitiated and, last but not least, for his endless processing of the algorithms in the project.
 

  1. Caffe may be compared to e.g. Torch, developed by programmers from Facebook, Twitter and Google.
  2. http://bvlc.eecs.berkeley.edu/
  3. OpenFace: http://cmusatyalab.github.io/openface/  
  4. More on t-SNE here: http://lvdmaaten.github.io/tsne/ and http://cs.stanford.edu/people/karpathy/cnnembed/
  5. Iconclass: http://www.iconclass.nl/home 
  6. Arkyves contains collections from Iconclass users, such as the Rijksmuseum, RKD, the Herzog August Bibliothek, and the university libraries of Milan, Utrecht, Glasgow and Illinois. http://arkyves.org
  7. For example, self-portraits of artists could not be merged, since professions and social structures are kept separate in Iconclass.
  8. Feltkatalog for kunst- og kulturhistoriske museer, Norsk Museumsutvikling (NMU), 2002. http://issuu.com/norsk_kulturrad/docs/feltkatalog?mode=window&viewMode=doublePage 
  9. Published abstracts located here: http://www.hf.uio.no/iln/english/research/networks/digital-humanities/news-and-events/events/2016/pdf/bofab.pdf.
  10. Published in article form here: http://network.icom.museum/fileadmin/user_upload/minisites/cidoc/ConferenceGuidelines/2015_Cidoc_Paper__Mr_Bognerud_and_Mrs_Pedersen_with_figs_authors.pdf. CIDOC 2015: http://network.icom.museum/cidoc/
  11. NORDIK 2015 http://nordicarthistory.org/

Project Team

Nasjonalmuseet: Françoise Hanssen-Bauer, Director Collection Management (project owner), Magnus Bognerud, Consultant Digital Collection Management, Dag Hensten, Senior Adviser, Comm. Dept., Gro Benedikte Pedersen, Coordinator Digital Collection Management (project manager)
Bengler: Even Westvang, Technologist and Designer, and Audun Mathias Øygard, Machine Learning Specialist and Data Scientist.

Project period 10 October 2015–28 April 2017.

Source code

https://github.com/nasjonalmuseet/propinquity

For a more detailed history and explanation of the technical solutions see http://bengler.no/principalcomponents.