Visions of Couture

Interaction design
Creating an interactive artefact for a high fashion showroom that allows users to interact with holographic projections of models through gesture interaction.
Client
Malmö University
Project type
Interaction design
Project year
2023
Role

UX & Interaction designer | UI designer | Individual thesis project

Design objective

How can we use gesture interactions to create interactive fashion shows and installations?

Background

As the thesis project for my Master's in Interaction Design, I decided to create an interactive artefact for the Haute Couture fashion industry. The project lasted 10 weeks.

The problem

The fashion industry has always been a pioneer in research and innovation, exploring different materials, techniques and communication channels. In recent years, specific attention has been paid to possible applications of technology in fashion, with examples such as Nike's Back to the Future Air Mag or Coperni's spray-on dress technology. The incorporation of technological solutions into fashion shows and exhibitions is therefore extremely relevant.


Despite these recent developments, much is still uncharted when it comes to merging fashion and technology, especially regarding interactive technologies and interactive fashion shows. Recent shows use innovative and captivating technologies, but they never offer the viewer the possibility to interact directly with the show.

I therefore decided to focus this thesis project on incorporating interactive methods and technologies, in particular gesture interaction, into fashion shows and installations. The aim is to give the viewer more agency while still maintaining a clear brand identity.

Research

I started this project by analysing existing literature on three main topics:

- The relationship between fashion and technology;

- Embodied interaction and, in particular, gesture interaction;

- Holographic illusions, which I intended to explore both to enhance the viewer's experience of the show and as a medium for a shared mixed reality. This technology allows for portable and scalable 1:1 full-body projections, creating a simulation of the body in space without requiring glasses or headsets.


After this, I interviewed five people who worked in the fashion industry or had attended several fashion shows. During these interviews, I wanted to better understand the use context, collect insights about the experience of fashion shows and discuss possible design openings. I collected three main insights:

- The most important thing in a fashion show is the spectacle, not the collection itself. People do not wish to interact with the models during the show because it might spoil its magic.

- Participants wish for a more immersive and engaging experience. They would like to interact with the models to have a closer look at the collection, but outside of the actual show.

- A form of interaction with the models already happens in fashion showrooms, where selected clients are invited by brands and can ask a model to wear a look from the show to see it up close. This, though, has space and time limitations and represents a great cost for brands, as the models need to be physically present the whole time.


Finally, I conducted a bodystorming workshop and an experience prototyping workshop to understand which actions the users would want to perform and which gestures they would use to communicate these actions.

Ideation and prototyping

To simulate the effect of an actual holographic mesh, I used an insect net and tested how the projection would look on it. I then built a structure to hold the net: two wooden poles on a wooden base. At this point, I tested how many layers of mesh were necessary to optimise the projection and settled on three.


To represent the looks, I recorded a model wearing seven different outfits in front of a green screen with an iPhone 12 Pro. For each look, the recording included walking in, standing still, turning around, and walking out. Finally, the background was removed in Final Cut Pro, the videos were split into sections based on the action being performed, and the “zoom in” and “zoom out” sections were created with the same video editing software.


To prototype the gesture recognition, I focused on a method based on data gloves. I used an Arduino Nano 33 BLE to record the data sent by its onboard accelerometer and gyroscope while the gestures were being performed. Once I had collected a sufficient dataset, I trained a model to recognise which gesture was being performed from the accelerometer and gyroscope data and deployed it on the board with TensorFlow Lite, which enables on-device machine learning. The final code on the central Arduino Nano thus uses the model to recognise the gesture and sends it over Bluetooth to a peripheral Arduino Nano, which prints it on the serial port.
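
The data-collection step can be illustrated with a minimal capture sketch along these lines, which streams one block of comma-separated IMU samples per detected gesture, ready to be used as training data. It assumes the Arduino_LSM9DS1 library that exposes the Nano 33 BLE's onboard IMU; the motion threshold and window length are illustrative values, not necessarily the ones used in the thesis.

```cpp
// Streams accelerometer + gyroscope samples over serial as CSV,
// one block per detected gesture, for offline training in TensorFlow.
#include <Arduino_LSM9DS1.h>

const float ACC_THRESHOLD = 2.5;       // total acceleration (in g) that marks a gesture start
const int SAMPLES_PER_GESTURE = 119;   // ~1 s of motion at the IMU's ~119 Hz sample rate

void setup() {
  Serial.begin(9600);
  while (!Serial);
  if (!IMU.begin()) {
    Serial.println("Failed to initialise IMU!");
    while (1);
  }
  Serial.println("aX,aY,aZ,gX,gY,gZ");  // CSV header
}

void loop() {
  float aX, aY, aZ, gX, gY, gZ;

  // Wait for significant motion before recording.
  while (true) {
    if (IMU.accelerationAvailable()) {
      IMU.readAcceleration(aX, aY, aZ);
      if (fabs(aX) + fabs(aY) + fabs(aZ) >= ACC_THRESHOLD) break;
    }
  }

  // Record a fixed-length window of samples.
  int samplesRead = 0;
  while (samplesRead < SAMPLES_PER_GESTURE) {
    if (IMU.accelerationAvailable() && IMU.gyroscopeAvailable()) {
      IMU.readAcceleration(aX, aY, aZ);
      IMU.readGyroscope(gX, gY, gZ);
      samplesRead++;
      Serial.print(aX); Serial.print(',');
      Serial.print(aY); Serial.print(',');
      Serial.print(aZ); Serial.print(',');
      Serial.print(gX); Serial.print(',');
      Serial.println(gZ);
    }
  }
  Serial.println();  // blank line separates gesture recordings
}
```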

At this point, I created a bracelet that turns the central Arduino Nano into a wearable device; I replaced the common data glove with a bracelet to make the device more elegant and a better fit for the fashion context. An elastic band lets the bracelet fit different wrist sizes without rotating, which could impede correct recognition of the gestures.


Finally, to connect the gesture recognition with the videos, I used Processing, a Java-based graphics library with an integrated development environment. I wrote a sketch that plays the first video, reads the gesture that the Arduino Nano 33 BLE has printed on the serial port, and plays the video representing the action that corresponds to the gesture performed; if no gesture is performed, Processing plays the following look. A sketch of this control flow follows the gesture list below.


The gestures included in the prototype are:

- Swipe up to zoom in

- Swipe down to zoom out

- Swipe left to move to the next model

- Rotate the hand to make the model turn around
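
The playback logic can be restated as a small, self-contained C++ program. Standard input stands in for the serial port here, printing a file name stands in for playing that video segment, and the gesture tokens and segment names are placeholders, not the actual identifiers from the prototype.

```cpp
// Restates the Processing sketch's control flow in plain C++:
// read a gesture token, play the matching video segment, and
// fall through to the next look when no gesture arrives.
#include <iostream>
#include <map>
#include <string>

int main() {
    const int totalLooks = 7;  // seven outfits were recorded
    int look = 1;

    // Hypothetical gesture tokens -> video segments for the current look.
    const std::map<std::string, std::string> segmentFor = {
        {"SWIPE_UP",    "zoom_in"},
        {"SWIPE_DOWN",  "zoom_out"},
        {"ROTATE_HAND", "turn_around"},
    };

    std::cout << "play look" << look << "/walk_in\n";

    std::string gesture;
    while (std::getline(std::cin, gesture)) {  // stand-in for reading the serial port
        if (gesture == "SWIPE_LEFT" || gesture.empty()) {
            // Swipe left, or no gesture at all, moves on to the next look.
            if (++look > totalLooks) break;    // all looks have been shown
            std::cout << "play look" << look << "/walk_in\n";
        } else if (auto it = segmentFor.find(gesture); it != segmentFor.end()) {
            std::cout << "play look" << look << "/" << it->second << '\n';
        }
    }
}
```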

Design

The final design is Visions of Couture, an interactive artefact that uses a gesture recognition bracelet to let the user interact with holographic projections of models wearing looks from fashion show collections. Following the insights collected through the interviews, the artefact is designed mainly for showrooms, where selected clients are invited by brands to see the collection up close and the garments are sold or ordered. The system should ideally be located close to the actual garments from the collection, so that users can still touch the materials, but it doesn't require models to be physically present to show the looks to the clients.


Using holographic illusions as a mixed reality medium allows for a more embodied mixed reality, where users don't feel alienated from their bodies as happens in VR. Moreover, it offers the possibility of a 1:1 full-body simulation of the body in space while still being easily scalable and portable and requiring no glasses or headsets. Most importantly, holographic illusions allow the people around the user to see what they are doing and seeing, creating a shared experience and removing the awkwardness of waving their hands in the air while bystanders don't understand what they are performing and why.

Value

The value of the design lies in the possibility it offers to interact with the looks the user has seen in the fashion show. It allows for a slow experience, in contrast to the frenzy of the fashion show, and can overcome the show's space and time limitations by creating an easily scalable experience.

Testing

The prototype was tested with three people in a controlled environment where light and space could be properly arranged. Each participant was invited to freely interact with the prototype individually, first without music and then with a high-tempo electronic track. Finally, they were asked to provide oral feedback on the experience and interaction.


The participants described the experience as fun and engaging, with one of them claiming that

“Even though I’ve never been in a fashion show, I found it really fun and it was nice to be able to control the model with my gestures”.

Once the participants were taught the gestures at the beginning of the test, they were able to remember them throughout the whole interaction. It took them a couple of tries to calibrate the force needed for the system to recognise the gestures, but they adjusted easily.

The participants could tell whether a gesture had been recognised because the visual feedback of the projection was almost instant and because of the sonic feedback provided by the piezo buzzer attached to the peripheral Arduino.
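
As an illustration of how the peripheral side can provide both kinds of feedback, here is a minimal sketch assuming the ArduinoBLE library: the bracelet (acting as the BLE central) writes a gesture ID to a characteristic, which the peripheral prints to the serial port for Processing and confirms with a beep. The UUIDs, buzzer pin, and gesture encoding are placeholders.

```cpp
// Peripheral Arduino Nano 33 BLE: exposes a writable characteristic,
// prints each received gesture ID on the serial port for Processing,
// and beeps a piezo buzzer as sonic feedback.
#include <ArduinoBLE.h>

const int BUZZER_PIN = 3;  // illustrative pin for the piezo buzzer

BLEService gestureService("19B10000-E8F2-537E-4F6C-D104768A1214");         // illustrative UUID
BLEByteCharacteristic gestureChar("19B10001-E8F2-537E-4F6C-D104768A1214",  // illustrative UUID
                                  BLEWrite);

void setup() {
  Serial.begin(9600);
  while (!Serial);
  if (!BLE.begin()) {
    Serial.println("Failed to initialise BLE!");
    while (1);
  }
  BLE.setLocalName("VisionsOfCouture");
  BLE.setAdvertisedService(gestureService);
  gestureService.addCharacteristic(gestureChar);
  BLE.addService(gestureService);
  BLE.advertise();
}

void loop() {
  BLEDevice central = BLE.central();  // wait for the bracelet to connect
  while (central && central.connected()) {
    if (gestureChar.written()) {
      byte gestureId = gestureChar.value();  // e.g. 0 = swipe up, 1 = swipe down, ...
      Serial.println(gestureId);             // Processing reads this from the serial port
      tone(BUZZER_PIN, 880, 100);            // short beep confirms the gesture was received
    }
  }
}
```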


Finally, the participants confirmed the assumption that music has a strong impact on the “vibe”, as it does in a fashion show. They described the experience with music as more immersive, claiming that it felt more complete. One participant said that it felt awkward without music: the experience was still engaging, but it felt like testing a technology rather than experiencing the collections and the system.
