Projects on offer for the Market of 18 May 2018.

Learning Objectives:

In every project we challenge you to include:

  • Artificial Intelligence (e.g. you will learn how to use the Kairos API; see the sketch after this list)
  • Human Computation (e.g. you will learn how to use the MicroWorkers API)
  • Data Enabled & Driven Design
  • Work with real clients in real-world cases
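As a taste of the AI objective, here is a minimal sketch of calling the Kairos face detection API. The endpoint URL, header names and request body follow the Kairos REST documentation as it stood around 2018; treat them as assumptions and check the current docs. The credentials and image URL are placeholders.

```python
# Minimal sketch: face detection with the Kairos REST API (2018-era docs).
# APP_ID / APP_KEY are placeholders for credentials issued at kairos.com.
import requests

APP_ID = "your-app-id"
APP_KEY = "your-app-key"

def detect_faces(image_url: str) -> dict:
    """POST an image URL to the Kairos /detect endpoint and return the parsed JSON."""
    response = requests.post(
        "https://api.kairos.com/detect",           # endpoint per 2018 docs
        headers={"app_id": APP_ID, "app_key": APP_KEY},
        json={"image": image_url},                 # URL or base64-encoded image
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Placeholder image URL; any publicly reachable image should work.
    faces = detect_faces("https://example.com/photo.jpg")
    print(faces)
```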


Project: The Smart Safety Network

The safety and health of employees is of the utmost importance for every company; after all, no one wants to be confronted with an occupational injury or illness. Companies take many measures to make work safer and healthier, such as placing fire extinguishers, first aid kits and emergency lighting in all workspaces.

At the moment, these tools are mostly “silent forces” until the moment they are needed. Beyond this, the work environment contains parameters that are worth measuring and monitoring for employees' safety and well-being: harmful noise, light, oxygen levels, hydrocarbon levels and temperature, among others.

In this project we want to turn safety equipment into smart measuring tools that send early-warning signals, enabling early and preventive intervention for employees' safety and well-being.
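To make the idea concrete, here is a purely hypothetical sketch of such an early-warning loop: sensors are polled periodically and an alert is raised as soon as a reading crosses a safety threshold. The sensor names, baseline values and thresholds below are illustrative assumptions, not client requirements.

```python
# Hypothetical sketch of the early-warning idea: poll environmental sensors
# and raise an alert when a reading crosses a safety threshold.
# read_sensor() is a stand-in for whatever hardware or API the team chooses.
import random
import time

# Illustrative thresholds; real limits come from occupational safety norms.
THRESHOLDS = {
    "noise_db": 85.0,        # harmful noise level
    "temperature_c": 40.0,   # overheating workspace
    "oxygen_pct": 19.5,      # minimum safe oxygen level
}

def read_sensor(name: str) -> float:
    """Stand-in sensor read; replace with real sensor/API calls."""
    baseline = {"noise_db": 60.0, "temperature_c": 22.0, "oxygen_pct": 20.9}
    return baseline[name] + random.uniform(-2.0, 30.0)

def check_once() -> list:
    """Return alert messages for all readings that violate a threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = read_sensor(name)
        # Oxygen is a lower bound; the other parameters are upper bounds.
        violated = value < limit if name == "oxygen_pct" else value > limit
        if violated:
            alerts.append(f"ALERT {name}: {value:.1f} (limit {limit})")
    return alerts

if __name__ == "__main__":
    for _ in range(5):            # in practice this loop would run continuously
        for alert in check_once():
            print(alert)          # a real system would notify safety staff
        time.sleep(1)
```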

A unique opportunity for this project: students can visit Veiligheidsdag 2018 for free and are offered the opportunity to present their project during the next Veiligheidsdag.

Watch the project pitch video

Client:


Project: Crowdsourcing for Medical Imaging

Project 1: Similarity of medical imaging datasets. A lot of intuition in machine learning is based on the concept of similarity between datasets. For example, an algorithm can be expected to perform well on a specific type of dataset, but not as well on others; consequently, if two datasets are similar, we would expect the algorithm to have similar performance on both. Another example is transfer learning: learning from a different, larger (source) dataset before training the classifier for the (target) task of interest. When training on a source dataset and testing on another, target dataset, we would expect better results if the source and target datasets are similar. However, it is not always clear what is meant by dataset similarity; candidate notions include (a toy sketch of two of them follows the list):

  • Domain of the images, for example "everyday images" vs "medical images"
  • Quantitative properties of the dataset, such as the number of images, number of categories and so forth
  • Visual similarity of the images
  • Similarity of image representations by the algorithm
  • Performance of some algorithms on the datasets
  • Papers a dataset was cited in
  • Possibly others
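As a toy illustration of the second and third notions above, the sketch below compares two image datasets on a quantitative property (dataset size) and on visual similarity via average grey-level histograms. Both measures and the synthetic data are illustrative assumptions on our part, not methods from the papers cited below.

```python
# Toy sketch of two dataset-similarity notions, using NumPy only:
# (1) similarity of a quantitative property (here: dataset size), and
# (2) visual similarity via average grey-level histograms.
import numpy as np

def histogram_descriptor(images: list, bins: int = 32) -> np.ndarray:
    """Mean normalised grey-level histogram over all images in a dataset."""
    hists = []
    for img in images:
        h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        hists.append(h / h.sum())
    return np.mean(hists, axis=0)

def dataset_similarity(a: list, b: list) -> dict:
    """Return two toy similarity scores (in [0, 1]) between datasets a and b."""
    size_sim = min(len(a), len(b)) / max(len(a), len(b))
    ha, hb = histogram_descriptor(a), histogram_descriptor(b)
    visual_sim = 1.0 - 0.5 * np.abs(ha - hb).sum()  # 1 - total variation
    return {"size": size_sim, "visual": visual_sim}

if __name__ == "__main__":
    # Synthetic stand-ins for image datasets: pixel values in [0, 1].
    rng = np.random.default_rng(0)
    dark = [rng.uniform(0.0, 0.5, (64, 64)) for _ in range(20)]
    bright = [rng.uniform(0.5, 1.0, (64, 64)) for _ in range(30)]
    print(dataset_similarity(dark, bright))  # low visual similarity
    print(dataset_similarity(dark, dark))    # high similarity on both
```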

For example, Cheplygina et al. [2015] showed that datasets expected to behave in a similar way due to their domain and quantitative properties often displayed different behavior in terms of algorithm performance. Cheplygina et al. [2017] showed that subsets of the same medical imaging datasets (thus similar on the first three characteristics), where the goal was to segment scans into different anatomical structures, were not always similar based on classifier performances. In medical imaging in general, the intuition for transfer learning was that the best thing to do is to use a source dataset that is similar to the target, such as other medical images. However, several papers [Schlegl et al., 2014; Ribeiro et al., 2017] have shown that it may be more advantageous to train on a seemingly unrelated dataset of everyday, non-medical images.
The goal of this project would be to investigate how people assess similarity of image datasets. There are several questions, such as:
  • What are the different aspects of similarity that could be considered?
  • How can these aspects be assessed?
  • How should a dataset be presented to observers, given that a dataset may contain millions of images?
  • Are there differences between people with and without earlier experience with such datasets?
  • Etc.

One or more of these questions can be addressed within a student project.
Supervisor: Veronika Cheplygina
References
  • V. Cheplygina and D.M.J. Tax. Characterizing multiple instance datasets. In Feragen, A., Pelillo, M., and Loog, M., eds.: Similarity-Based Pattern Recognition (SIMBAD), pages 15–27. Springer International Publishing, 2015.
  • V. Cheplygina, P. Moeskops, M. Veta, B. Dashtbozorg, and J.P.W. Pluim. Exploring the similarity of medical imaging classification problems. In Intravascular Imaging and Computer Assisted Stenting, and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, pages 59–66. Springer, 2017.
  • E. Ribeiro, M. Hafner, G. Wimmer, T. Tamaki, J. Tischendorf, S. Yoshida, S. Tanaka, and A. Uhl. Exploring texture transfer learning for colonic polyp classification via convolutional neural networks. In International Symposium on Biomedical Imaging (ISBI), pages 1044–1048. IEEE, 2017.
  • T. Schlegl, J. Ofner, and G. Langs. Unsupervised pre-training across image domains improves lung tissue classification. In Medical Computer Vision: Algorithms for Big Data, pages 82–93. Springer, 2014.

Project 2: Machine learning offers many possibilities for the diagnosis of disease, for example, diagnosing diabetes from retinal images. For best results, many annotated images are needed, in which abnormalities such as microaneurysms have been outlined. Although for a while it was considered too difficult for the crowd to annotate retinal images, recent studies have shown promising results. Unfortunately, standard crowdsourcing platforms were not built with retinal images in mind, and lack interaction possibilities that would be desirable to (i) increase the ease of annotation and (ii) motivate the annotators, and thereby to increase annotation quality.

In this project, we are interested in investigating what a successful interface for annotating medical images would look like. In a prior project, an app was already developed that crowdsources annotations of melanoma images. In this project we want to generalize to other types of medical images and link to existing crowdsourcing platforms (e.g. Amazon Mechanical Turk). We expect that most crowdworkers, if not all, will have no knowledge whatsoever of medical image annotation. A salient task of the interface is therefore to simplify, educate and guide the annotator while ensuring the validity of the annotations.
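As a sketch of the "link to existing platforms" step: a custom annotation page can be published on Amazon Mechanical Turk as an ExternalQuestion HIT. The boto3 calls and the ExternalQuestion XML schema below are from the public MTurk API; the annotation URL, reward and other parameters are placeholder assumptions, and the sandbox endpoint is used so no real payments are made.

```python
# Hedged sketch: publishing a custom annotation interface on Amazon
# Mechanical Turk as an ExternalQuestion HIT via boto3 (sandbox endpoint).
import boto3

SANDBOX = "https://mturk-requester-sandbox.us-east-1.amazonaws.com"

# Your annotation web page, served over HTTPS; placeholder URL.
ANNOTATION_URL = "https://example.com/annotate?image_id=42"

EXTERNAL_QUESTION = f"""
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>{ANNOTATION_URL}</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

client = boto3.client("mturk", region_name="us-east-1", endpoint_url=SANDBOX)

hit = client.create_hit(
    Title="Outline abnormalities in a medical image",
    Description="Draw an outline around the highlighted structure.",
    Keywords="image, annotation, medical",
    Reward="0.05",                    # dollars, passed as a string
    MaxAssignments=3,                 # collect 3 annotations per image
    LifetimeInSeconds=24 * 3600,      # HIT visible for one day
    AssignmentDurationInSeconds=600,  # 10 minutes per annotation
    Question=EXTERNAL_QUESTION,
)
print("HIT created:", hit["HIT"]["HITId"])
```

The worker's annotations would then be posted back from the annotation page to MTurk's submit URL, and retrieved with the API's assignment-listing calls for quality checking.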

Client:


Project: Design for the “Museum in the City” of Eindhoven Museum

The Eindhoven Museum not only operates the (pre)historic village (which, by the way, will be the backdrop for the soon-to-be-released Dutch film epic on Redbad, the last king of the Frisians in the 8th century), but also owns a collection of art and historic objects. The museum, however, does not own a permanent exhibition space, and has therefore adopted a strategy for the upcoming years that is centered around organizing pop-up exhibitions. These exhibitions are scheduled to take place at events where many people already gather, such as Eindhoven Culinair, Dutch Design Week, Strijp festival, etc., but also on a more limited scale at community centers, schools, etc. This implies that exhibitions need to be flexible and adaptable in size and content to the intended venue and audience.

Both professional designers hired by the museum and design teams from our department have made, or are making, proposals for installations that could be part of such a pop-up exhibition. Such installations need to be assessed in terms of three aspects:

  • How well do they agree with the strategic goal of “Museum in the City”, which is to connect past, present and future developments in the Eindhoven region?
  • How well can museum staff, who are often not technically trained (especially not in ICT) but who have insight into the collection and the practice of making exhibitions, use the proposed installations to create an exposition?
  • How will the public experience the proposed installations?

The project can build on experiences gained in earlier student projects (the first theme was “Food” and will be shown for the first time at DDW; the theme that you will be designing for is “Mobility”).

Client:


Project: Designing for Refugees: Crowdsourcing as a step for social inclusion

The proliferation of the Internet has given rise to a new type of work-related platform, known as crowdsourcing platforms. One might have already heard of Airbnb, which sources rooms from the crowd, and Uber, which sources car rides from the crowd. Nevertheless, these two platforms are only the tip of the iceberg. Amazon Mechanical Turk [mturk.com] sources image annotations (mainly for machine learning purposes) from the crowd, Design2Gather.com sources sketches for products from the crowd, Crowd.site sources graphic designs from the crowd, and Prolific.ac sources participants for academic research from the crowd, just to name a few. Crowdsourcing is growing in all its dimensions: the number of people engaged in these platforms, the number of platforms, the amount of money paid out and the variety of platform types. For example, Mturk.com lists more than a quarter of a million tasks, Upwork.com provides more than $1 billion of income per year to its workers, and Samasource.org has lifted more than 35 thousand people out of poverty.

Leaving one’s country is a difficult thing, let alone leaving one’s country due to war and destruction. Part of integrating in a new country and culture is utilizing one’s existing knowledge and skills and feeling useful. Refugees in the Netherlands often have to wait a long time before they can utilise their skills and have a job in which they can feel useful. Our assumption is that this gap can be partially filled by utilising existing infrastructure and online platforms (both crowdsourcing and learning platforms). In this project we wish to design a “mashup” platform, which brings the best of the web together to help refugees do work, learn new skills and feel useful from day one.

Client: