Module 3 Formstorming

Form Experiments – Module 3

Sarah Al-Fkeih - This template shows our weekly Formstorming work for Module 3.


Module 3

We used these activities to explore how form, structure, and interpretation can change meaning in visual design. They also helped us practice analyzing images, reflecting on our assumptions, and making more intentional design decisions.

Activity 1

In this step, I followed the tutorial and reviewed the code structure so I could fully understand how the p5.js environment, the WebGL canvas, and the BlazeFace model work together. I used this stage to study the template, read through each function, and learn how the tracking system updates variables like face position, scale, and rotation. This helped me build a proper foundation before customizing the code for my own concept.

Here, I tested Harold Sikkema’s original BlazeFace p5.js template to see how the face-tracking model behaved in real time. This experiment helped me understand how landmark detection responds to my movements, and how images can be layered and animated using face data. By experimenting with the base template first, I was able to plan how to personalize it for my own project theme.

In this step I experimented with the order of the loadImage() lines in preload(), rearranging the layers to see how changing the stacking order would affect which shapes sit on top of each other in the final composition.

This screenshot shows the visual result after I reformatted and reordered the layers: the circle-based composition now appears split and partially hidden, which helped me see how much control the layer order has over what is visible.

In this step I located the webcam = createCapture(VIDEO, initialize) line and realised this is the code that actually connects my sketch to the webcam and triggers the BlazeFace model once the video stream is ready.

Here I looked closely at the x: lerp(data.landmarks[0][0], face.rightEye.x, 0.5) line and learned that landmark index 0 is the right eye; the code uses lerp to smoothly blend between the new BlazeFace position and the previous one so the motion is less jittery.

In this screenshot I focused on both the x and y lines for the right eye and understood that each frame, the code updates the eye position by easing toward the new landmark values, which is what makes the tracked movement feel smooth and organic instead of snapping.

Following Harold’s tutorial, I read the MDN page for Math.atan2() to understand how the sketch calculates the face rotation angle from the difference between the two eye positions, which later controls how my image layers rotate.

Here I imported my own PNG shapes from Canva into the sketch and tested them with BlazeFace; at this point all seven shapes were still stacked directly on top of each other, so the cluster looked very crowded when it followed my face.

This image shows my Canva file where I placed one simple shape per page; I did this so I could export each shape as a separate PNG layer to use inside p5.js and have more control over how every piece moves and overlaps.

In this step I selected specific stock shapes from Canva (cloud, starburst, blobs, etc.) that matched the Giorgia Lupi–inspired visual language, and started colouring and arranging them so they could become the individual graphic layers in my data portrait.

In this step, I ran the original p5.js BlazeFace template to understand how the face-tracking system works. I used this test to study how the images move, scale, and rotate according to my face landmarks before customizing anything for my own project.

Here, I began experimenting with visual elements inside Canva. I created multiple artboards to compare different shapes and colours that I might use later in my p5 project. This helped me explore what visual style felt right for my concept.

This image shows me browsing Canva’s shape library to choose the base forms I wanted.
I explored simple geometric silhouettes to understand how they might translate into layered images inside p5.js. At this point, I selected the “soft rounded petal” graphic as one of my main elements. I tested how these shapes could represent emotion, softness, or childhood memories once animated inside the p5 canvas.

Here, I downloaded Meshmixer to prepare for 3D cleanup later in my workflow. Even though this tool is separate from p5, I needed it for cleaning up 3D scans connected to another part of my project.

This screenshot shows the Meshmixer interface after installation. I explored the tools available, such as sculpt, select, edit, and analysis, to understand how I could use them to refine my 3D scan later.

Scaniverse templates for scanning

Here I installed Scaniverse on my phone. This was necessary to generate a 3D scan for another portion of my project. Using this app gave me access to LiDAR scanning, which creates detailed meshes that can be exported.

This screenshot shows the Scaniverse interface right before capturing the scan. I positioned myself in the environment and prepared to move around the object to collect accurate depth data.

Once the scan was captured, I chose the “Detail” processing mode to get the most accurate texture data. This mode takes longer to process, but it provides a cleaner mesh that is easier to edit later in Meshmixer.

This screenshot shows the completed 3D scan of my plush toy in Scaniverse. After finishing the scan, I reviewed the preview and selected the processing mode to prepare it for export into Meshmixer.

Plush Toy (Strawberry Sheep) – Reference Photo
Plush Toy (Blue Bunny Doll) – Reference Photo
Johnson’s Baby Powder – Reference Photo
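To keep a record of how the pieces described above fit together, the sketch below pulls the createCapture() callback, the lerp() easing on the eye landmarks, the atan2() rotation, and the layer stacking order into one minimal example. It is my own simplified reconstruction rather than Harold Sikkema’s actual template code: it assumes p5.js plus the TensorFlow.js BlazeFace model are loaded via script tags, and names like layerA, layerB, trackedFace, and the asset paths are placeholders.

    // Minimal p5.js + BlazeFace sketch (assumes p5.js, @tensorflow/tfjs and
    // @tensorflow-models/blazeface are included via <script> tags).
    // layerA/layerB/trackedFace and the file names are placeholders.

    let webcam;          // p5 video capture
    let model;           // BlazeFace model
    let layerA, layerB;  // PNG layers exported from Canva

    function preload() {
      layerA = loadImage('assets/cloud.png');      // hypothetical file names
      layerB = loadImage('assets/starburst.png');
    }

    // Smoothed face state, eased toward each new detection with lerp()
    let trackedFace = { rightEye: { x: 0, y: 0 }, leftEye: { x: 0, y: 0 } };

    function setup() {
      createCanvas(640, 480, WEBGL);
      imageMode(CENTER);
      // The callback fires once the video stream is ready, which is when
      // it is safe to load the BlazeFace model and start detecting.
      webcam = createCapture(VIDEO, initialize);
      webcam.size(640, 480);
      webcam.hide();
    }

    async function initialize() {
      model = await blazeface.load();
      detectLoop();
    }

    async function detectLoop() {
      if (model) {
        const predictions = await model.estimateFaces(webcam.elt, false);
        if (predictions.length > 0) {
          const data = predictions[0];
          // landmarks[0] is the right eye, landmarks[1] the left eye.
          // lerp() blends the new detection with the previous value so the
          // motion eases toward the target instead of snapping.
          trackedFace.rightEye.x = lerp(data.landmarks[0][0], trackedFace.rightEye.x, 0.5);
          trackedFace.rightEye.y = lerp(data.landmarks[0][1], trackedFace.rightEye.y, 0.5);
          trackedFace.leftEye.x  = lerp(data.landmarks[1][0], trackedFace.leftEye.x, 0.5);
          trackedFace.leftEye.y  = lerp(data.landmarks[1][1], trackedFace.leftEye.y, 0.5);
        }
      }
      requestAnimationFrame(detectLoop);
    }

    function draw() {
      background(255);
      // Head tilt: angle of the line between the two eyes, via atan2(dy, dx).
      const angle = atan2(
        trackedFace.leftEye.y - trackedFace.rightEye.y,
        trackedFace.leftEye.x - trackedFace.rightEye.x
      );
      // Midpoint between the eyes, shifted because WEBGL's origin is the canvas centre.
      const cx = (trackedFace.rightEye.x + trackedFace.leftEye.x) / 2 - width / 2;
      const cy = (trackedFace.rightEye.y + trackedFace.leftEye.y) / 2 - height / 2;

      push();
      translate(cx, cy);
      rotate(angle);
      image(layerA, 0, 0);            // drawn first, sits underneath
      image(layerB, 0, 0, 180, 180);  // drawn last, sits on top
      pop();
    }

In this simplified version the stacking is set by the order of the image() calls in draw(), which mirrors what I observed when reordering the layers in the original template.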

Activity 2

Meshmixer – Removing Background Geometry
Meshmixer – Rotating and Flattening the Model
Meshmixer – Selecting the Floor for Deletion
Meshmixer – Cleaned Model Standing Alone
Meshmixer – Inspecting the Base of the Model
Cinema 4D – Wireframe Import View
Cinema 4D – Textured Model Loaded Correctly
Plush Toy Reference Photo – Spamton Plushie
Reference Photo – Decorative Skull
Reference Photo – Domo Knitted Plush

This image shows the 3D scan of me sitting down, captured in Scaniverse. The scan mainly kept the front of my body and clothes, but a lot of the background and edges did not fully render.

This is the side view of my seated 3D scan. You can clearly see how the chair and my hair scanned unevenly, showing missing geometry on the edges.

This image shows the partially captured scan of the Domo knitted plushie. Only a portion of the surface was recorded, leaving a warped and incomplete mesh around it.

This scan shows the decorative skull figurine, but most of the surrounding area failed to process. Only the top part of the skull loaded clearly while the rest stretched into a distorted shape.

This is the failed scan of the Spamton plushie. The scanner captured only fragments of the plushie, stretching textures across a broken surface and leaving most of the model incomplete.

This screenshot shows the process of selecting and deleting unwanted floor pieces in Meshmixer. I used the selection brush to highlight extra geometry before removing it.

This image shows the bunny plushie scan inside Meshmixer after removing most of the floor. The outline of the plushie is still rough, but the main form is now isolated.

This image shows the side of the bunny plushie scan while I continued to clean extra mesh. Some holes and missing parts are still visible on the side of the model.

This image shows the scan of the reference marker sheet with a tool placed on top. The edges scanned unevenly, but the calibration pattern is still visible.

This screenshot shows the auto-repair tool analyzing the broken scan that includes the baby powder bottle. The colored spheres mark holes, weak geometry, and areas needing repair.

This time I added my childhood photos into the p5.js composition to test how the layout would look when using personal imagery.

Here is what the p5.js output looks like with my childhood photos arranged inside the generative shapes.

These are template frame options I explored in Canva to help me plan different layout structures for my final composition.

This Pinterest mood board shows the visual styles I’m thinking about, like collage, fragmented portraits, and layered storytelling.

I started exploring p5.js tutorials to learn how to code my final project idea and understand how to integrate my images into generative shapes.
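Since the screenshots only show the result, here is a small sketch of one way the “photos inside generative shapes” test could work, using p5.Image.mask(). This is an illustration rather than my exact project code; the file names are placeholders for one exported Canva shape and one childhood photo.

    // One possible masking approach; asset paths are hypothetical.
    let photo, petalMask;

    function preload() {
      photo = loadImage('assets/childhood-photo.png');  // personal photo
      petalMask = loadImage('assets/petal-shape.png');  // shape exported from Canva
    }

    function setup() {
      createCanvas(640, 480);
      imageMode(CENTER);
      // Resize the mask to match the photo, then apply it: the photo stays
      // visible only where the petal graphic is opaque.
      petalMask.resize(photo.width, photo.height);
      photo.mask(petalMask);
    }

    function draw() {
      background(245);
      image(photo, width / 2, height / 2, 300, 300);
    }

Repeating this pairing for each shape and photo is one way to build up the layered collage shown in the output screenshot.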

Reflexive Workshop 1 & 2

For this activity, we chose a portrait and described the emotion it represents. We picked “Cheerful” because the person gives a warm and positive feeling. This helped us see how easily we project emotions onto someone just from their face or lighting, even if we don’t actually know them.

Here, we placed the image on a spectrum from “Disciplined” to “Impulsive.” We discussed where the photo fits based on how we see the person. This showed us how our judgments often come from our own experiences and values, not only the image itself.

For this part, we created a small story using three photos: a messy desk (the problem), writing and organizing (the action), and a clean desk (the ending). This helped us understand how photos can show emotional progress, and how simple objects can represent stress, effort, and finally calmness.

Project 3


Final Project 3 Design

This project explores how judgment and self-perception can be expressed through interactive visuals. Using p5.js and face-tracking, the animation responds to the viewer’s movement, layering images, motion, and texture to represent emotional pressure and identity. The work transforms personal experiences into a data portrait that shows how judgment can follow, react to, and gradually overpower one’s sense of self.
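As a rough illustration of this behaviour (not the final code), the fragment below shows one way the idea of judgment gradually overpowering the self could be mapped to the tracking data: a judgment layer whose opacity builds up while the tracked face keeps moving. Here faceX and faceY are assumed to be updated elsewhere by the smoothed BlazeFace position from the Activity 1 sketch, and the asset names are placeholders.

    // Illustrative fragment only; faceX/faceY would be updated by the
    // BlazeFace tracking code, and the asset paths are hypothetical.
    let selfLayer, judgmentLayer;
    let faceX = 0, faceY = 0, prevX = 0, prevY = 0;
    let pressure = 0; // builds with movement, fades when still

    function preload() {
      selfLayer = loadImage('assets/self.png');
      judgmentLayer = loadImage('assets/judgment.png');
    }

    function setup() {
      createCanvas(640, 480);
      imageMode(CENTER);
    }

    function draw() {
      background(250);
      // Movement since the last frame raises the pressure; stillness lets it decay.
      const movement = dist(faceX, faceY, prevX, prevY);
      pressure = constrain(pressure + movement * 0.01 - 0.002, 0, 1);
      prevX = faceX;
      prevY = faceY;

      image(selfLayer, width / 2, height / 2);
      tint(255, pressure * 255);   // judgment layer fades in as pressure grows
      image(judgmentLayer, width / 2, height / 2);
      noTint();
    }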
