Meet The MIT Reality Hack 2026 Winning Teams: The Dream CatchARs & SNAK
Celebrating the success of creators and developers has always been at the heart of what we do here at Lenslist. Sharing your achievements, highlighting inspiring projects, and supporting you on your journey is what drives us every day.
That’s why we’re proud to share yet another milestone from members of our creative tech community.
During MIT Reality Hack 2026, some developers teamed up with a shared goal: to demonstrate the potential of Snap Spectacles and explore how wearable AR technology can power new kinds of experiences. If anyone ever questioned whether wearable tech and Augmented Reality can drive innovation, these teams answered that question – loud and clear.
To spotlight their achievements, celebrate the creators behind them, and inspire others to experiment with emerging technologies, we spoke with two of this year’s winning teams: Dream CatchAR and SNAK.
Lucid Weave by Dream CatchAR
🏆 Winner: Best of Hardware Hack
🥉 Bronze Prize: Best in Hack
Lucid Weave is a living spatial manifestation of dreams – an immersive experience where gestures sculpt music, melodies bloom into light, and the entire interaction unfolds directly through wearable technology.
👥 Team Members: Meghna Sonie, Aishah Gudaro, Abraham Pacheco, Krunal MB Gediya
Noodle AR by Team SNAK
🏆 Winner: Founders Lab Track (Powered by Reality Hack for Startups & Sponsored by Cognitive3D)
🏆 Winner: Best Use of Spatial AI (Sponsored by Snap Spectacles)
Noodle transforms your surroundings into an infinite collaborative spatial interface. Users can move from a simple 2D sketch to a fully realized 3D creation using only their hands and voice, enabling a seamless creative workflow without the need for traditional input devices.
👥 Team Members: Ash Shah, Neha Sajja, Kavin Kumar, Stacey Cho
Can you introduce your team and share how you came together for MIT Reality Hack? What did each of you bring into the project?

We are Team DreamCatchARs, a powerhouse of four individuals who brought highly specialized, yet perfectly complementary, skills to the table:
➡️ Meghna Sonie (XR Product Designer & Prototyper): As our Spatial Product Designer, Meghna led the visual design of the AR experience, ensuring users truly felt they were painting music in the air. She also co-designed our smart LED dress.
➡️ Aishah Gudaro (XR + AI Creative Technologist): Aishah spearheaded the wearable tech dress design alongside Meghna and collaborated heavily on engineering our ESP32 and Arduino Uno Q hardware for real-time LED integration.
➡️ Abraham Pacheco (Aerospace & Explosives Engineering): Abraham bridged the gap between digital data and physical hardware for us. He programmed the interpretation software that converted our Supabase data inputs into expressive LED patterns and engineered the tether between the ESP32-S3 and Arduino Uno Q for redundancy and localized processing.
➡️ Krunal MB Gediya (XR Prototyper): Krunal developed our core AR technical integration, utilizing Snap Spectacles Lens Logic to translate movement into data. He also built the WebClient for real-time sound synthesis and dynamic 3D visualizations, which ultimately drove our smart dress’s lighting.
Our project’s genesis was actually the collision of two exciting, vastly different ideas.
Krunal and Meghna envisioned a way to make music from thin air, an experience where natural body gestures replace traditional instruments, allowing technology to adapt to human expression.
Meanwhile, Aishah and Abraham were laser-focused on an ambitious hardware concept: a “living” garment. Using fiber optics and microcontrollers, they wanted to build a dress that reacted to a wearer’s emotions via brainwaves, turning angry red or blissful pink based on the user’s state.
This project has the potential to transform live concerts into deeply immersive, multi-sensory experiences.
Imagine an artist like Taylor Swift wearing this dress: her movements actively generate music in real-time, which the audience both hears and sees through dynamic lighting embedded in the dress and synchronized AR visual effects. The performer’s body becomes the instrument.
During the team matching process, the four of us were nervously scanning the room, feeling a bit stranded. But the moment we connected, it instantly clicked.
We realized our visions could seamlessly merge: the AR Spectacles would create and visualize the music, and that output would drive the dynamic lights of our dress. It truly felt like a match made in heaven.

The team consists of myself (Rbkavin), Ash Shah, Neha Sajja, and Stacey Cho. Stacey, Neha, and I were introduced by a mutual friend. We were looking for one more teammate, and Ash joined us.
Ash and I took care of the development while Stacey and Neha took care of the design and UI/UX.
What was the track you participated in and specific category you have won?

We cast a wide net, participating across multiple tracks including the Spectacles Hardware Track, the Qualcomm Track, and the General Hardware Track. Our ambition paid off, and we proudly took home 2 awards 🏆🏆
Overall Best in Hardware Track (1st Prize)
Bronze Award in the Overall Reality Hack (3rd Prize)

We participated in the Spectacles Spatial AI track, where the goal was to build a Spectacles AR Lens that uses Snap’s Spatial AI capabilities.
How did the idea for your project take shape? Why did you decide that it had the potential to stand out among so many strong concepts?

Lucid Weave emerged from the unexpected fusion of our AR music-generation concept and our emotion-responsive smart dress. Once we realized we could combine the visual and auditory data from the Spectacles with the dress’s microcontrollers, the idea crystallized into a single, cohesive flow.
As for its potential to stand out, we didn’t engineer this concept just to beat the competition. In truth, we built it simply because it was something we deeply wanted to see exist in the world. We were driven by a very personal motivation to bring a genuinely unique idea to life.
It felt like turning a dream into reality, which ironically aligned perfectly with the event’s theme of “Dreams.” That raw, personal commitment to building something we loved is ultimately what gave Lucid Weave its distinctive edge.

The idea was inspired by ComfyUI’s node setup. We wanted to ideate in spatial AR, where everyone in a room can see what you’re working on and collaborate then and there, instead of switching between multiple applications.
We felt the idea had potential because we faced this exact problem ourselves while ideating for this project.
Walk us through your project

Our Timeline
The project was conceived and executed from the ground up during the intense 48-hour MIT Reality Hack. Development officially began on January 23rd, culminating in a “pens down” moment on the morning of January 25th, all achieved with only 2-3 hours of sleep. This intense period saw us finalize the concept, fully integrate the AR, shop for necessary materials, create the dress, and then construct and program the physical hardware.
Our Challenges
Material Sourcing & Logistics: Procuring specific physical materials (fabric, corsets, crinoline) within the tight timeframe ate up an entire day for us. Gaining access to essential equipment, like a mannequin from the MIT Media Lab, required immense coordination and persistence.
Vision Scope Reduction: Our initial vision was massive, featuring a fully fiber-optic dress, hand-gesture-triggered drones, and an unfurling mechanism. To meet the deadline, we had to make brutal cuts, deprioritizing the fiber optics and drones. We decided to use LEDs and focused on the core AR-to-dress interaction.
Real-Time Data Communication: Our early attempts to use a centralized web socket server for movement data resulted in heavy latency and a separate need for a dedicated server. We had to execute a rapid strategic pivot to the newly launched Snap Cloud feature and Supabase real-time databases to achieve low-latency data broadcasting.
Sound Synthesis & Mapping: Synthesizing melodious sound from raw data was incredibly complex. We had to experiment heavily, ultimately relying on octaves, while simultaneously tackling the difficult task of mapping hand movements to pitch, volume, and musical scales (Raag Sa Re Ga Ma / Do Re Mi).
Visual Design Iteration: Achieving an intuitive, smooth, “painting in the air” visual particle effect required multiple relentless iterations of VFX in the Spectacles Lens.
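The hand-to-pitch mapping challenge above can be sketched roughly as follows. This is a minimal, hypothetical illustration in Python (the function name, scale tables, and normalized-height input are all assumptions, not the team’s actual Lens code), showing why quantizing a continuous hand position to scale degrees keeps the output melodious:

```python
# Hypothetical sketch: map a normalized hand height to a note of a scale.
# Names and parameters here are illustrative assumptions only.

# Semitone offsets of the major scale (Sa Re Ga Ma... / Do Re Mi...)
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]
NOTE_NAMES = ["Do", "Re", "Mi", "Fa", "Sol", "La", "Ti"]

A4_HZ = 440.0
C4_MIDI = 60  # middle C as the scale root

def hand_height_to_note(height: float, octaves: int = 2):
    """Map a normalized hand height (0.0 = low, 1.0 = high) to a note.

    Returns (note_name, frequency_hz). Snapping to scale degrees keeps
    the output musical even when the raw tracking data is noisy.
    """
    height = min(max(height, 0.0), 1.0)
    total_degrees = octaves * len(MAJOR_SCALE_STEPS)
    degree = min(int(height * total_degrees), total_degrees - 1)
    octave, step = divmod(degree, len(MAJOR_SCALE_STEPS))
    midi = C4_MIDI + 12 * octave + MAJOR_SCALE_STEPS[step]
    freq = A4_HZ * 2 ** ((midi - 69) / 12)  # equal-temperament conversion
    return NOTE_NAMES[step], round(freq, 1)
```

Quantizing to discrete scale degrees, rather than mapping hand height to raw frequency, is one common way to make jittery tracking data sound intentional rather than like a theremin glide.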
Prioritization Under Pressure
The key to our survival was dividing responsibilities early and setting strict timelines for each workstream. This allowed us to move at speed without stepping on each other’s toes. Frequent check-ins surfaced blockers immediately, allowing the team to swarm problems as they arose. Crucially, we still left room for ad-hoc exploration, striking a vital balance between rigid structure and organic creative evolution.

This project was built in under 36 hours: the first day started late, and the last day’s timeline was cut short due to a snowstorm. Even so, we got it working as we expected, and it worked great.
One of the challenges we faced was capturing images from the camera using a circular motion.
The goal is that users can simply draw a circle with their hands, and whatever is inside that area is captured and brought in, enabling image-to-image editing. Making it work was a bit hard, but we found a way.
Another major problem was the node connection lines. Drawing the line between two nodes was tricky since it’s in 3D, and it has to look flawless and be easy to use.
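One way the circular-capture gesture could be recognized is with a simple geometric heuristic: check that the traced hand samples are roughly equidistant from their centroid and that the stroke closes on itself. The sketch below is an illustrative assumption in Python over 2D screen-space points (the team’s actual Lens implementation is not shown here, and the function and threshold names are invented):

```python
import math

def is_circular_path(points, tolerance=0.25):
    """Heuristic check that a traced hand path approximates a circle.

    points: list of (x, y) samples from hand tracking.
    The path counts as circular if every sample lies within `tolerance`
    (relative to the mean radius) of the centroid, and the stroke ends
    close to where it began.
    """
    if len(points) < 8:
        return False
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    if mean_r == 0:
        return False
    # All samples roughly equidistant from the centroid...
    round_enough = all(abs(r - mean_r) / mean_r < tolerance for r in radii)
    # ...and the stroke closes back near its starting point.
    closes = math.dist(points[0], points[-1]) < mean_r * 0.5
    return round_enough and closes
```

Once a circle is confirmed, its centroid and mean radius define the screen region to crop from the camera frame for the image-to-image step.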
This was your first MIT Reality Hack. What motivated you to participate? Looking back, what level of experience or skills do you think are helpful for someone considering their first hackathon?

Our motivations were deeply personal, yet they all converged on a shared love for this community and technology.
For Meghna, whose day job isn’t currently in XR, side projects and hackathons are her absolute sanctuary, her “happy place” where her true passion lies. It’s where she can surround herself with the vibe, the energy, the fire, and the sheer passion of like-minded builders. Driven by her skills and dedication, and with 5 XR hackathon wins to her name, she finds immense satisfaction in the rapid-fire process of collaborating to build entirely new concepts in such a short amount of time.
The ability to form incredibly spatial (pun intended!) friendships, while learning about new devices and tools through hands-on workshops, makes these events vital to her creative spirit. She firmly believes that anyone with a creative mind and a passion for technology can jump in. So many skills are transferable; ultimately, it requires finding the right team with the right idea so that you can contribute your best and truly enjoy the process.
For Krunal, the drive came from witnessing the krazyy growth of the XR industry after attending LensFest and Lensathon in 2025, coupled with the desire to meet like-minded folks and share that deep passion and love for XR. Although the massive hurdle of travel expenses from across the ocean had previously held him back, this year felt undeniably spatial. It drove him to finally step up and “show up” to the very events where the future of XR is being shaped.
He has always viewed XR as a technology that can positively shape our society. Being in the room to actively help shape that future was, to be completely honest, a deeply emotional experience. To finally “show up” meant everything. Looking back, he believes the most essential skill for any first-time hackathon participant is simply the will to do something krazyy and out of the box. That fearless mindset is what truly matters, empowering anyone to learn, adapt, and try new things.
Abraham and Aishah share these similar sentiments. Ultimately, we all agree that succeeding as a first-time hacker isn’t about a perfect resume or knowing a specific coding language. It’s simply about putting yourself out there, bringing that fearless mindset, and enjoying the beautiful chaos of the process. ✨

I always wanted to attend MIT Reality Hack as it’s where all the crazy minds in XR would come together and build something crazy. I would say that if you can build something decent or have a good design and user experience-related skill set, that’s all you need for your first hackathon.
Hackathons are not just for developers but also for designers, yet I’ve noticed that a lot of designers think they’re not for them. Combining design ideas with dev minds can always make a great experience.
From your perspective, what are the biggest benefits of participating in hackathons, especially for creators working in XR and spatial computing?

We’ve identified several massive advantages for spatial computing creators:
Accelerated Prototyping: The sheer intensity of a 48-hour window forces us to bypass bloated processes and focus purely on functional execution, turning highly complex concepts into working realities almost overnight.
Forced Focus: Time constraints demand ruthless prioritization. Learning to kill our darlings (like our drone concept) to save the core project is a masterclass in scope management.
High-Impact Collaboration: Hackathons are the ultimate networking crucible. Finding a synergistic team where our vastly different ideas instantly merged is a rare, high-value experience.
Bleeding-Edge Problem Solving: We are forced to learn and deploy brand-new tech on the fly. Pivoting to Snap Cloud and Supabase under pressure provided us with invaluable, high-stakes experience using the newest spatial tools.
The “Show Up” Mindset: Being physically present in rooms where the future is being shaped catalyzes genuine innovation and pushes us to build wildly outside our comfort zones.

From my perspective, some of the biggest benefits are seeing other creators and their work. You get inspired by what they build and get to meet amazing people. Networking is often cited as the main benefit of these hackathons, but the exposure you get from them matters just as much.
This hackathon had hardware tracks, which produced so many crazy projects. Most of the time, creators stick to what they know, and these hackathons help them see everything that’s possible.
Hackathons are often about more than just building projects. How did the community, mentors, or other participants influence your experience, and what advice would you give XR developers on networking effectively at these events?

Reality Hack had a strong “builder energy”, a contagious excitement for making new things that pushed everyone, including us, to do better work. It wasn’t just a technical contest; what made the community special was its focus on the reason why.
Instead of just showing off code or complex math, the focus was on the theme of the hack – “dreams” – and it was exciting to see what that meant for each team and how they translated it into their projects. This atmosphere created a place where helpful feedback and shared inspiration thrived, making the whole experience much richer than a typical hackathon.
It was a place where the story and effect of the technology mattered more than how it was built. The mentors were a constant, invaluable presence, sharing their own battle scars and providing critical technical guidance that profoundly shaped our final output.
For XR creators/developers looking to network effectively, our advice is blunt: Be fearless and shameless. Engage with everyone. Ask everything. In a hackathon environment, no question is too small to ask, and no idea is too silly or too strong to explore.

I feel that, more than the projects themselves, these hackathons help you meet amazing creators and developers who come from different domains. The exposure you get from seeing their work is great.
I was personally inspired by a lot of the projects made at the hackathon – seeing them, I went “wow, I didn’t know we could do this.” That’s what makes these hackathons so much better.
What is the MIT Reality Hack?
Every year, some of the brightest engineers, designers, entrepreneurs, thought leaders, and brand mentors gather from around the world to challenge themselves to push the boundaries of immersive technology.
The event brings together participants from diverse backgrounds who form new teams and spend several intense days building working software and hardware prototypes. The goal is clear: to contribute to the wave of innovative projects designed for XR devices and platforms.
Hosted at MIT in Cambridge, Massachusetts, the Reality Hack is co-organized by a global team of community volunteers, alumni, friends, and supporters, alongside the Reality Hack Organization (501(c)3) and VR/AR@MIT. Together, they share a common mission: to educate, empower, and enable people to become part of the experiential technology industry.
At its core, MIT Reality Hack is built around three pillars: Learn. Create. Connect.
The event is designed to bring together people with different skills, perspectives, and experiences, encouraging them to collaborate, experiment, and turn ideas into real technological solutions that contribute to a more sustainable and equitable world.
From Hackathon Prototypes to the Future of Technology
Projects like Lucid Weave and Noodle AR perfectly capture the spirit of hackathons like MIT Reality Hack, spaces where bold ideas, diverse expertise, and cutting-edge technology come together. They show that true innovation often emerges when creators from different fields collaborate and push boundaries together.
In just a few days, these teams transformed ambitious concepts into working prototypes, demonstrating how powerful spatial computing and wearable AR can become in the hands of passionate creators.