I was born and raised near Hamburg, Germany. After completing my bachelor's degree in computer science and media, I worked as a software developer for five years.
I enjoy coding very much, but I also became increasingly interested in UX design. I therefore moved to Sweden to do my Master's in HCI.
I'm now on a journey to learn more about human-centred design. I aim to make useful products that empower people. Such products need to be well designed on the technical side, but perhaps even more importantly on the human side.
During the design process, I have found myself particularly engaged during the phases of user research, concept generation (sketches, wireframes, workshops), prototyping (with tools like Axure or directly in code) and user experience evaluations.
An area I am interested in, but less confident about, is visual design.
Besides that, I have a strong interest in interactions beyond the Web, as explored in my thesis (see "Works"). I also greatly enjoy hardware prototyping with Arduino, wood, or both.
I'm a very curious person who loves to learn something new every day. So, besides studying HCI, I started learning Swedish when I moved to Sweden. Lately, I have come to understand most everyday conversations. It's fascinating to see how much deeper an insight into a culture one gains when not having to rely on others' translations.
I realised that the ability to dive into a culture and build a deeper understanding is just as important for becoming a designer. It's not enough to simply ask users what they want. Only when the designer and the people they are designing for start speaking the same "language" can they come up with ideas for products that make an impact.
During my semester break I worked for a startup as a UX designer and developer. The startup develops software with which businesses can calculate how likely it is that a given end-customer will buy a product soon. This information helps sales and marketing departments make better decisions and serve end-customers better. The startup offers this functionality as software-as-a-service, based on machine learning and big data. For about five weeks I was solely responsible for identifying current issues in the frontend and designing an alternative. In the last couple of weeks I implemented it together with another developer.
I started the design process with research; afterwards I synthesized my data, sketched possible solutions and then built an interactive prototype with Axure.
First, I familiarised myself with the existing frontend. It was challenging at the beginning to understand the whole concept, because the predictions about who will buy a product rely heavily on data science. Furthermore, it is a relatively new product field, which made it hard for me, and presumably also for future users, to form a mental model of how the frontend worked. To overcome this I asked for a meeting with the developer and sketched my understanding so far on a whiteboard while he filled in the blanks and corrected me whenever necessary.
To collect more information I interviewed employees from different departments (data science, sales, customer support and marketing). With the results from the interviews I prepared a workshop with the goal of creating a shared understanding about user personas (besides the buyer personas that already existed). In the workshop we followed a given structure for personas and collected ideas from all the different departments on post-its. Afterwards, we merged those ideas and ended the workshop with four tentative personas.
Another important issue I identified was the need for a new vocabulary in the frontend. Besides collecting data about wording through the interviews, I also gathered ideas from other products with some similarity. Sometimes I got the chance to listen in on sales phone calls, which proved to be a good source for learning what words potential customers used. At some point we held another workshop in which we agreed on the new vocabulary. It also became clear that some real users should be interviewed as well, which we then planned to do.
Based on the personas and the other collected information I started to sketch out some ideas about possible workflows. During the process I took some inspiration from established interaction design patterns and a book about how to visualize information.
After going back and forth with the ideas for a while, we finally settled on one solution, which I turned into an interactive prototype with Axure. Some things turned out not to make as much sense as they had on paper, but others proved more intuitive than anticipated.
The Axure prototype was iteratively refined over about two days until everyone involved agreed on the solution. After that the implementation phase started, in which another developer and I implemented the changes. Towards the end a visual designer also supported us for a few days.
I think this project was a great opportunity to apply the skills from my studies in a real-world project. I had a lot of freedom to decide what was best to do, and I learned a lot. One thing that surprised me was the power of the Axure prototype. As I was both the designer and the developer, I had some doubts about the importance of an interactive prototype. But I decided to build one anyhow because it was quick and easy. First off, the process of prototyping with Axure helped me refine my ideas much more deeply than before. Even more important was the prototype's use as a tool to create a shared understanding among the stakeholders. Thanks to the Axure prototype, the process of developing the final solution was very straightforward, although small refinements were still made in that phase.
In the end my employer was very satisfied with the result. I had hoped to also test the results with some users outside the company, but unfortunately we did not manage to organize that due to my limited time frame and because the project took place in the midst of the holiday season.
Unfortunately I cannot show the final screens, as the startup wants to keep them confidential for now due to a very competitive market.
This project was a university group project which was developed in cooperation with Wikimedia. Wikimedia asked us to design a mobile game that would help to categorize the large number of uncategorized pictures in their image database.
I encouraged my team to start this project by analysing the workflow of uploading pictures to Wikimedia. Besides analysing the current state, we studied gamification concepts that could help us create a fun user experience. We also summarised our understanding of possible users.
After our research and planning phase we undertook a collective design session in which we created initial sketches.
After some more brainstorming sessions in which we refined our idea we had an initial concept. Our idea was to create a game that would consist of two modes, a tag mode and a check mode.
In the tag mode the user has six suggested (but configurable) image tags which have to be matched with pictures in a gallery. The game is over as soon as all tags are matched with pictures. By tagging pictures the user earns points and reaches new levels. Initially, the gallery shows a large number of already categorized pictures to make the game easy for new players. The difficulty level increases by displaying more and more uncategorized pictures.
The check mode provides a variation of the game for users but also makes sure that the pictures are being tagged correctly for Wikimedia. In the second game mode the user sees a similar gallery and tags but this time the user has a time limit of one minute to tag as many pictures as possible for each tag.
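The difficulty progression in the tag mode, starting with mostly categorized pictures and gradually mixing in more uncategorized ones, can be sketched as follows. This is only an illustration of the concept; the function names and the exact ratios are hypothetical and were not part of the actual prototype.

```python
import random

def uncategorized_share(level, base=0.1, step=0.15, cap=0.8):
    """Share of uncategorized pictures shown at a given level.

    Starts low so new players mostly see already-categorized pictures,
    then grows with each level up to a cap. All ratios are made up
    for this sketch.
    """
    return min(base + step * (level - 1), cap)

def build_gallery(level, categorized, uncategorized, size=20, seed=None):
    """Assemble a shuffled gallery of `size` pictures for the given level."""
    rng = random.Random(seed)
    n_uncat = round(uncategorized_share(level) * size)
    gallery = rng.sample(categorized, size - n_uncat) + rng.sample(uncategorized, n_uncat)
    rng.shuffle(gallery)
    return gallery
```

A gallery built this way stays easy for newcomers while still feeding Wikimedia a growing share of genuinely uncategorized pictures at higher levels.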
Our final prototype incorporated navigation through the application and playing the tag mode as shown in this video.
We developed our prototype in three iterations, each with a new prototype and a suitable evaluation method.
The first prototype was built by my team members with Balsamiq and the screens were printed out for our first evaluation. Some sample screens:
In addition to the Balsamiq screens we used printed pictures from the Wikimedia database and sticky notes to make the paper prototype appear interactive.
With the paper prototype we aimed to collect initial feedback about our idea. Potential future users were presented with the paper prototype, and we discussed our idea in an informal, unstructured session. Having a paper prototype at this step in the design process was important for presenting our idea in a clear way. The prototype also looked unfinished enough to give test users the impression that any changes they suggested could easily be implemented.
The next step was a more detailed screen design made by my team members. The screens incorporated feedback from the previous evaluation, such as the possibility to play in teams.
Those screens were used for a usability evaluation conducted as a cognitive walkthrough with scenarios. In this evaluation we discovered, for example, that the image tags in the game lacked affordance: they initially looked like clickable buttons rather than draggable tags.
Our final prototype was developed well enough to conduct a user experience evaluation. Our test users were asked to freely explore the application and we observed them quietly. Afterwards we interviewed them about their experience with a combination of closed-ended and open-ended questions.
Our results indicated that most of our users found the fun factor of the game "ok", which was a reasonable result considering that we could not implement all gamification elements. Another interesting finding was that regular gamers did not care about the fact that they were helping Wikimedia by playing the game. Non-gamers, on the other hand, were encouraged by knowing that they were helping Wikimedia.
The iterative process of developing prototypes and evaluating them helped us discover problems early. However, I wish we had spent more time learning about potential users initially and created personas, instead of jumping to a concept too quickly. But in the end the representatives of Wikimedia Sweden were pleased with our results.
This university project was developed in a team of two. Our task was to conduct an empirical study on participation in social media. We decided to focus on participation on the social coding platform GitHub. Our goal was to look at participation through a gender lens and find reasons why people do or do not participate.
One challenge was that there are significantly fewer females than males in open source software development in general, and also on GitHub. We therefore had to make sure to get a large dataset in order to be able to draw conclusions. We decided to use a questionnaire and data scraping to collect data.
We used data scraping to test whether the amount of participation on GitHub differs depending on gender. I wrote a script to collect user data from the GitHub API, sampled to represent the whole range of users by registration date. GitHub users do not state their gender, so I inferred it from their name and location (if available) using an API (genderize.io).
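The genderize.io lookup takes a first name (optionally narrowed by a country code) and returns a guess with a probability. A minimal sketch of that step, in which the 0.8 confidence threshold and the function names are my assumptions for illustration, not necessarily the values used in the study:

```python
import json
import urllib.parse
import urllib.request

GENDERIZE_URL = "https://api.genderize.io/"

def genderize_url(first_name, country_code=None):
    """Build a genderize.io request URL; country_id narrows the guess."""
    params = {"name": first_name}
    if country_code:
        params["country_id"] = country_code
    return GENDERIZE_URL + "?" + urllib.parse.urlencode(params)

def interpret(response, min_probability=0.8):
    """Map an API response to 'male'/'female'/'unidentified'.

    The confidence threshold is an assumption for this sketch.
    """
    if response.get("gender") and response.get("probability", 0) >= min_probability:
        return response["gender"]
    return "unidentified"

def guess_gender(full_name, country_code=None):
    """Fetch a gender guess for a user's first name (performs a network call)."""
    first = full_name.split()[0]
    with urllib.request.urlopen(genderize_url(first, country_code)) as resp:
        return interpret(json.load(resp))
```

Low-confidence or missing guesses fall into the "unidentified" bucket, which is where the 29% unidentified share in our sample comes from.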
Additionally, we needed data about the amount of participation. When looking at participation on GitHub, there are two aspects to consider: on the one hand code contributions, and on the other hand discussions. So I wrote two more scripts to enrich our user database with the number of code contributions and the number of comments in discussions per user.*
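At its core, the enrichment step tallies two counters per user from scraped activity records. A sketch of that aggregation, where the event type names and record shapes are illustrative, not the actual GHTorrent schema:

```python
from collections import defaultdict

# Which activity counts as code vs. discussion. These type names are
# illustrative placeholders, not GHTorrent's real event names.
CODE_EVENTS = {"push", "pull_request"}
DISCUSSION_EVENTS = {"issue_comment", "commit_comment", "pull_request_comment"}

def enrich(users, events):
    """Attach contribution and comment counts to each user record.

    `users` maps a login to a user dict; `events` is an iterable of
    (login, event_type) pairs collected by the scraping scripts.
    """
    counts = defaultdict(lambda: {"contributions": 0, "comments": 0})
    for login, event_type in events:
        if event_type in CODE_EVENTS:
            counts[login]["contributions"] += 1
        elif event_type in DISCUSSION_EVENTS:
            counts[login]["comments"] += 1
    for login, user in users.items():
        user.update(counts[login])
    return users
```

With both counters in place, the per-gender participation comparisons below reduce to grouping these enriched records.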
Of the total sample of 1195 GitHub users, 64% were male, 7% female and 29% of unidentified gender.
As the following graph shows, there are overall no large gaps in code participation depending on gender. However, the graph also shows slightly more male power users.
In contrast to that, the following graph shows that there are significant differences when looking into participation in discussions on GitHub.
Overall, females seem to participate in discussions considerably less. Surprisingly, their contribution of code does not seem to be influenced by the fact that they are in the minority.
Our questionnaire, which we designed together, had ten questions in total. The first three questions asked about gender, registration date and if they reveal their real gender on their profile page. Afterwards we asked about their participation amount in discussions and code contributions. The last questions aimed to reveal the motives behind participation and non-participation.
We collected a total number of 71 responses with our questionnaire. About 85% were male and 15% were female users.
Interestingly, the participation patterns matched those from our previous data collection. We also found no differences when it comes to revealing gender on the profile page.
The following two graphs give some insights about why people participate. The first graph shows the reasons why people participate in discussions.
From these answers we can see that females do initiate discussions but tend to avoid involving themselves in further conversations. The following graph shows why people don't participate in discussions.
Based on these answers we can assume that females tend to avoid discussions on GitHub because they feel uncomfortable, are afraid of negative feedback or tend to question their competency. Males seem to be more self-confident about participating in discussions and less scared of negative feedback.
The results obviously have to be interpreted with care because our questionnaire had only 11 female participants.
* The data was extracted by using GHTorrent and scraping the GitHub website because the GitHub API is not built to provide that kind of data in an efficient way.
This project was an individual university project in which I worked with data from the World Bank. The World Bank provides open data about the development of the world, but the data is neither very accessible nor easy to understand. The task was therefore to focus on a certain aspect of the data, create a more accessible representation and incorporate the ideas of embodied interaction.
I decided to design an artifact for a fictitious museum environment with a broader topic such as water or Africa. My idea was to create a map showing, for each country in Africa, the percentage of people with access to water in rural areas. As museums typically have a wide range of visitors, from small kids to seniors, the input device had to be very easy and intuitive to interact with. I therefore aimed to use a physical slider with which users could navigate through the years and explore the development over time.
Unfortunately I couldn't get hold of a slider, so I had to work with a turning knob instead. A quick demonstration of the result can be seen in the following video.
A live version is also available but it doesn't have Arduino support. It's also just a prototype and was therefore only optimised for the Google Chrome browser.
The prototype was developed in several iterations. During development I ran small evaluations in which I let my users explore the prototype and encouraged them to give feedback. One minor finding was, for example, that while using the turning knob users lost focus on the year range at the bottom. I therefore additionally displayed the current year above the map, as shown in this screenshot:
Later on, the evaluations showed that people immediately tried to identify which countries were constantly improving and which were not. With the chosen map visualisation this is hard to keep track of because of the large number of countries. A useful improvement would therefore have been to add, for example, a list of countries with less water access than in the previous year. This calculation is easy for software but hard for humans, and should therefore become part of the visualisation.
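That year-over-year comparison really is only a few lines of code. A sketch with made-up country values (the real prototype used the World Bank indicator data and was written in JavaScript with D3.js):

```python
def declining_countries(data, year):
    """Return countries whose water-access value dropped vs. the previous year.

    `data` maps country -> {year: percentage of the rural population with
    access to water}. Countries missing either year are skipped.
    """
    declining = []
    for country, by_year in data.items():
        current, previous = by_year.get(year), by_year.get(year - 1)
        if current is not None and previous is not None and current < previous:
            declining.append(country)
    return sorted(declining)
```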
A challenge on the technical side was to continuously listen to the Arduino input and update the website in real time. I eventually solved this by making use of WebSockets. Additionally, I learned how to use the data visualisation library D3.js. For more development-specific details, have a look at my GitHub repository.
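The core of the knob handling is mapping a raw analog reading onto the year range before pushing the new value to the browser over the WebSocket. A sketch of that mapping; the 10-bit resolution and the exact year range are assumptions for illustration and may differ from the actual prototype:

```python
def reading_to_year(reading, first_year=1990, last_year=2015, max_reading=1023):
    """Map a raw analog knob reading (0..max_reading) onto the year range.

    Readings are clamped, so turning past either end stop stays on the
    first or last year, which also matches the physical feedback of the
    knob's borders.
    """
    reading = max(0, min(reading, max_reading))
    span = last_year - first_year
    return first_year + round(reading / max_reading * span)
```

Only when the mapped year changes does a message need to be sent over the WebSocket, which keeps the browser updates cheap even though the serial input is read continuously.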
This was a very small project, but I was still able to prototype some aspects of embodied interaction. People found it fascinating to use an alternative input device for a web application, and liked being able to feel the borders of the year range in the form of physical feedback.
This was a small master's thesis for which I decided to explore user experience in ubiquitous computing. Besides writing the thesis for the university, I also spoke about what I learned at a conference. The video can be found below.
When looking at the Internet of Things the question arises how people, places and things will be connected to each other in the future. One option to create interoperability between devices and humans for the Internet of Things is to use open web standards. Researchers have named this approach the Web of Things and have studied the vision by showing the technical feasibility and by suggesting software architectures.
What has been missing so far is a designer’s view on the challenges of connecting the virtual and the physical world with web technology. This thesis therefore aims to explore how current web technologies can be used as design material for the Web of Things. The results indicate that new web technologies like push notifications work well in the context of ubiquitous computing.
Additionally, the repertory grid method was applied to evaluate how users experience the Web of Things. It was found that the prototypes were perceived as easy to use, personal and working instantly but the participants were also clearly aware of the dependency on a working smartphone.
The Internet of Things has become more and more popular in recent years and is expected to grow heavily in the upcoming years (Gartner, 2013). While more and more devices are becoming connected to the Internet it is exciting to see how those devices will be connected to each other and especially how users will interact with them.
One idea for integrating physical objects into the digital world, and vice versa, is to build a network based on the web, which would be easily accessible for users as well as developers. Some use cases for the Web of Things (WoT) have been presented, such as a guiding system at a conference, public transport information or smart parking meters (Barton & Kindberg, 2001; Google, 2015). However, the prototypes built so far have focused on demonstrating technical feasibility and measuring performance. Before designers can effectively explore this area, it is important to understand the potentials and constraints of the Web of Things from an additional perspective.
In HCI, researchers have recognized the trend of the digital moving into the physical world. In a panel discussion at CHI 2012 (Wiberg et al., 2013), the participants consequently highlighted that when designing for a combination of the digital and physical worlds, it becomes of great importance to start thinking of computing as design material. Understanding computing as design material is essential because it determines how people experience the designed artifacts and, eventually, even how they experience the world.
A materiality or user experience perspective on the Internet of Things has so far been applied in various studies in the area of discoverability with NFC, Bluetooth or QR codes (Sundström, Taylor, & O’Hara, 2011; Shin, Jung, & Chang, 2012; Meschtscherjakov, Gschwendtner, Tscheligi, & Sundström, 2013). I want to take this view one step further and look beyond the discovery process, into what kinds of current web technologies could be used to create meaningful digital interactions within the physical world.
For this thesis I have chosen Bluetooth Low Energy (BLE) beacons as the technology to make devices discoverable because Bluetooth is built into every modern smartphone and is currently named as the most promising technology for device tagging (Want, Schilit, & Jenson, 2015). Ready-made BLE beacons that broadcast URLs can be bought and a smartphone application from Google is available to discover broadcasted URLs (Google, 2015).
Based on the BLE beacons and the smartphone app for discovery, two prototypes were built to explore the materiality of current web technology for the WoT from a designer’s perspective. When studying material from the physical world, one can touch it, perhaps form it and sense it. That is difficult, however, when looking at computing as design material. By building prototypes that make Web of Things technology graspable, I aim to find out more about the material properties of this technology. The prototypes address use cases that people face today and that could possibly be improved by combining the physical and digital worlds and providing context-sensitive information and interaction. The prototypes were evaluated with the repertory grid method to gain insights into the material properties of the WoT. So, my research question is as follows:
“What are the material properties of the Web of Things and how do they influence the user experience in public spaces?”
This thesis aims to make the following contributions:
By exploring the materiality of the Web of Things, I aim to contribute to a discussion about how to bring the web into the physical world and help create an environment in which humans can communicate easily and instantly with the devices around them. Aspects that will need to be addressed in the future, such as security and privacy issues, are not part of this study.
If you're interested in reading further you can download the full thesis on the university's website.
In the summer of 2015 I got the chance to speak about the findings from my thesis, the current state of beacon technology and its use cases in my first conference talk at eurucamp in Berlin.