Mark Armstrong - Ph.D. Candidate

Hi, I'm Mark.

I code, and I love to teach others how to code. Sometimes I play games, sometimes I mix music, sometimes I make art, and sometimes I do all three at once. It really depends on what creative tools I have available to me. If you're here, I'd like to share some parts of my journey with you.


B2J Project - Cybernetic Being MOONSHOT Takeshiba, Japan
I co-directed the development of a robotic choreography system designed to let a performer with ALS direct a concert performance at a public venue. Collaborating closely with a leading Brain-Computer Interface (BCI) company, I engineered an interface that transforms a basic 1D input signal into a 2D matrix of animation states.

This system enables individuals with severely limited physical mobility, who may only be able to engage with the world through their eyes and thumbs, to take creative control over their own custom animation keyframes: synchronizing movements to the rhythm of the music, loading DMX lighting profiles, and engaging with fellow performers and audience members through gestures like handshakes. The project, primarily a research effort, was funded by the Japanese government, and it points toward a more inclusive future for entertainment, where technology and creativity break down barriers and open up performance opportunities for people with diverse abilities.
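For a sense of the mapping, here is a minimal sketch assuming a normalized 1D control signal and a hypothetical grid of animation states; the state names and grid size are illustrative, not the production configuration:

```python
# Illustrative sketch only: maps a single 1D control value (e.g. a normalized
# BCI or gaze signal in [0, 1]) onto a cell in a 2D matrix of animation states.
# State names and grid size are hypothetical, not the production configuration.

ANIMATION_STATES = [
    ["wave", "nod", "point", "handshake"],
    ["beat_sync_a", "beat_sync_b", "spin", "bow"],
    ["lights_warm", "lights_cool", "lights_strobe", "lights_off"],
]

def select_state(signal: float) -> str:
    """Quantize a 1D signal in [0, 1] into a (row, col) cell of the state matrix."""
    rows, cols = len(ANIMATION_STATES), len(ANIMATION_STATES[0])
    index = min(int(signal * rows * cols), rows * cols - 1)  # clamp at the top end
    row, col = divmod(index, cols)
    return ANIMATION_STATES[row][col]

print(select_state(0.42))  # e.g. "beat_sync_b"
```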

More Information Here!

2022 ~ 2023 : Tokyo, Japan


SIGGRAPH Immersive Pavilion - Los Angeles, California
As the lead programmer on my research team, I led a project that was featured at the SIGGRAPH '23 Immersive Pavilion, the foremost global conference in computer graphics. I oversaw the full project lifecycle: developing wearable biometric sensors, collecting training data, and implementing a machine learning model built on ensemble prediction. My duties also extended to projection mapping, designing activities that promote empathic interaction, programming robotic avatar movements, creating immersive VR environments, implementing hand tracking on the Oculus Quest Pro, crafting rendering effects in Unity and TouchDesigner, and collaborating with two part-time remote researchers. The project drew on the full range of my skills in computer graphics and experience design.
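As a rough illustration of the ensemble-prediction idea, here is a minimal sketch assuming hypothetical biometric features (heart rate, skin conductance), coarse affect labels, and scikit-learn's VotingClassifier; this is not the installation's actual pipeline:

```python
# Minimal sketch of ensemble prediction over biometric features.
# Feature names, labels, and model choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Each row: [heart_rate, skin_conductance]; each label: a coarse arousal state.
X = np.array([[62, 0.1], [70, 0.3], [95, 0.8], [105, 0.9], [80, 0.5], [58, 0.2]])
y = np.array(["calm", "calm", "excited", "excited", "neutral", "calm"])

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier(n_neighbors=3)),
        ("tree", DecisionTreeClassifier(max_depth=3)),
    ],
    voting="hard",  # majority vote across the three models
)
ensemble.fit(X, y)
print(ensemble.predict([[90, 0.7]]))  # e.g. ["excited"]
```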

More Information Here!

Interaction Team - teamLab Kanda, Japan
During a two-week intensive period, I conceived an interactive concept, documented it, prototyped it to completion, and deployed the experience in collaboration with interactive software engineers at the world-renowned teamLab HQ. I presented the concept, development process, techniques, pitfalls, and avenues for further improvement to computer vision experts, media artists, and creative catalysts in both English and Japanese. This experience provided critical insight into the production pipeline for large-scale artistic immersive experiences, and allowed me to demonstrate and strengthen my skills with tools like Unity and TouchDesigner, non-planar projection surfaces, vector and quaternion algebra, and compute shader languages. ***Please be advised the output media content from this collaboration is under NDA***

2021 ~ 2022 : Tokyo, Japan


Research Assistant - Sony CSL Gotanda, Japan
Prior to starting my Ph.D., I joined Project Moonshot, a research initiative funded by the Japanese government that aims to realize a cybernetically intertwined future for Japan. The project incorporates avatars and technology that allow people to be in multiple places at once, share experiences and bodies with others, and transcend physio-cultural limitations through assistive design.

Parallel Ping-Pong is a project that enables a single user to inhabit multiple robot avatars and play two games simultaneously. My role was to design an automatic switching algorithm that helps users transition between their bodies, as well as to integrate a real-time 3D point cloud view inside the HMD and an expressive LED system for spectator understanding. The project went on to receive the "Best Demonstration Award" at SIGGRAPH Asia 2021 and an "Honorable Mention" at Augmented Humans '22.
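The switching policy below is a toy sketch of the idea, assuming a simple time-to-contact heuristic with a hysteresis margin; the structure and values are illustrative, not the deployed algorithm:

```python
# Toy sketch of an automatic body-switching policy for a two-body setup.
# Time-to-contact estimation and the hysteresis margin are illustrative assumptions.

def time_to_contact(ball_distance_m: float, ball_speed_mps: float) -> float:
    """Rough time until the ball reaches the paddle plane."""
    return float("inf") if ball_speed_mps <= 0 else ball_distance_m / ball_speed_mps

def choose_body(current: str, states: dict, margin_s: float = 0.15) -> str:
    """Switch the user to whichever robot body needs attention sooner,
    with a small hysteresis margin to avoid rapid back-and-forth switching."""
    ttc = {body: time_to_contact(s["distance"], s["speed"]) for body, s in states.items()}
    best = min(ttc, key=ttc.get)
    if best != current and ttc[best] + margin_s < ttc[current]:
        return best
    return current

states = {
    "robot_a": {"distance": 1.2, "speed": 3.0},  # ball about 0.4 s away
    "robot_b": {"distance": 2.5, "speed": 2.0},  # ball about 1.25 s away
}
print(choose_body("robot_b", states))  # -> "robot_a"
```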

Olympic Broadcasting Services - Archives
As a volunteer at the #Tokyo2020 Olympic Games, I trained and operated as a member of the Olympic Archives team. My role was to review and tag live footage with metadata for rights-holding broadcasters from dozens of countries, who were tuned in 24/7 expecting the highest-quality 8K footage of their top athletes. I spent hours learning the official rules for selected sports and tagging mixed-zone media reels, interviews, and live in-game statistics.

Easily one of the highest-pressure roles I have ever taken on, it was a memorable experience to work alongside world-class leaders and international volunteers, whom I supported during their time inside and outside of the International Broadcasting Center.

R&D for Object Detection Algorithm - Sony Interactive Entertainment, Japan
In just three weeks, I was tasked with ideating an AR application (featuring a proprietary object detection algorithm), creating a simple prototype, and preparing a presentation to convey my idea to the president of the SIE R&D division. In that short span, I came up with three ideas, prototyped two, and presented all three at the end of my internship. The committee was impressed, but most importantly I had a lot of fun.

My first presentation topic was a new type of tool for DJs to use at live performances, using an object's six degrees of freedom as parameters for augmenting their sound. This was further expanded with virtual collision detection as a method to toggle effects on and off, similar to a MIDI controller. Check out my videos below to see what this looks like.
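To give a feel for the concept, here is a minimal sketch in which pose drives continuous effect parameters and a sphere collision test toggles an effect, like pressing a pad on a MIDI controller; the mappings, trigger zone, and values are hypothetical:

```python
# Illustrative sketch of the 6-DoF DJ tool idea: an object's pose drives
# continuous audio effect parameters, and entering a virtual trigger zone
# toggles an effect on or off. Mappings, zone placement, and values are
# hypothetical, not the prototype's actual configuration.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float      # position in meters
    roll: float
    pitch: float
    yaw: float    # orientation in degrees

def clamp01(v: float) -> float:
    return max(0.0, min(1.0, v))

def pose_to_params(pose: Pose) -> dict:
    """Map the six degrees of freedom onto effect parameters in [0, 1]."""
    return {
        "filter_cutoff": clamp01(pose.y / 2.0),              # raise the object -> open the filter
        "reverb_mix": clamp01((pose.roll + 180.0) / 360.0),
        "delay_feedback": clamp01((pose.yaw + 180.0) / 360.0),
    }

def inside_trigger_zone(pose: Pose, center=(0.0, 1.0, 0.5), radius=0.2) -> bool:
    """Simple sphere collision test used to toggle an effect on or off."""
    dx, dy, dz = pose.x - center[0], pose.y - center[1], pose.z - center[2]
    return (dx * dx + dy * dy + dz * dz) ** 0.5 <= radius

pose = Pose(0.05, 1.1, 0.45, 10.0, 0.0, -90.0)
print(pose_to_params(pose), inside_trigger_zone(pose))
```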

My second presentation topic used the position and orientation of objects to create virtual clones, or "Kage Bunshin", which could be instantiated without limit and each programmed with its own behavior. I can't show too much of the project, but I used this technique to let spectators in tournament scenarios drop objects and power-ups into fighting games in real time from the background. My final presentation topic was a social app that allowed users to augment their view of the world from their smartphones by scanning entire buildings and interacting with them in real time as canvases to paint and sculptures to mold.

2019 ~ 2021 : Tokyo, Japan

Zeus Garden - Stage & Lighting Engineer
My first gig in Japan was primarily as a light jockey at Zeus Garden (formerly ZEN - TOKYO) in Roppongi. I choreographed and managed routines for lighting, fog, and lasers for an international lineup of DJs, dancers, and rappers. As one of the few English-speaking staff, I also wrote web copy and served as a liaison for English-speaking talent.
Ignition Point - Software Developer Intern
I was the primary author of my first research publication, presented at VRST '20, in collaboration with a Japanese software development firm, which led to a full internship.

Here I programmed robots to serve and guide clients through an office space, collect and update infrared-scanned maps of the office, and handle natural-language commands in English, Chinese, and Japanese.
Tokyo Coding Club - Programming Tutor
I teach international students, ages 4-17, the principles of STEAM. From game design to music production, algorithms to VR, and 3D modeling and printing, I serve as a lead mentor and curriculum curator in Tokyo, Japan.

I create presentations and educational content for weekly classes, extracurricular clubs at schools supporting disadvantaged children, summer camps, organized competitions, and international hackathons.

~ 2019 : Merced, California

Science DMZ Network Intern
Awarded a $5,000 scholarship by the National Science Foundation (#1659210), I was trusted to monitor network traffic and run diagnostics on a 10-gigabit transnational research network in association with CineGrid.

I performed large data transfers from research clusters, including an entire human genome sequence and archaeological dig data, in a world-record-holding Wide Area Visualization Environment lab.

Media

The following videos and images are samples of my work for either paying clients or hobby projects. Please enjoy!

Academia

  • B.S. Computer Science & Engineering
    University of California, Merced - 2019
    Magna Cum Laude
  • M.A. Media Design
    Keio University, Graduate School of Media Design - 2021
    Summa Cum Laude
  • Ph.D. Media Design - IN PROGRESS
    Keio University, Graduate School of Media Design - 2024

Awards

Publications

  • Mark Armstrong, Chi-Lan Yang, Kinga Skiers, Mengzhen Lim, Tamil Selvan Gunasekaran, Ziyue Wang, Takuji Narumi, Kouta Minamizawa, and Yun Suen Pai. 2024. SealMates: Improving Communication in Video Conferencing using a Collective Behavior-Driven Avatar. Proc. ACM Hum.-Comput. Interact. 8, CSCW1, Article 118 (April 2024), 23 pages. https://doi.org/10.1145/3637395
  • Zhou Songchen, Ryoichi Ando, Midori Kawaguchi, Mark Armstrong, Giulia Barbareschi, Fu Zening, Ajioka Toshihiro, Hu Zheng, Ory Yoshifuji, Mikito Ogino, Masatane Muto, Kouta Minamizawa. 2024. Exploring the Potential of Robotic Arms for Enhancing Interactions of People with ALS. IEEE International Conference on Robotics and Automation 2024 (ICRA '24). https://www.ieee-ras.org/human-robot-interaction-coordination/activities
  • Songchen, Z. et al. 2023. Enhancing the Social and Physical Interaction of ALS Patients Using a BMI-Controlled Supernumerary Robotic Arm. Proceedings of the Virtual Reality Society of Japan Annual Conference (CD-ROM). Indexed in J-GLOBAL (Japan Science and Technology Agency). https://jglobal.jst.go.jp/detail?JGLOBAL_ID=202402241029538782
  • Yun Suen Pai, Mark Armstrong, Kinga Skiers, Anish Kundu, Danyang Peng, Yixin Wang, Tamil Selvan Gunasekaran, Chi-Lan Yang, Kouta Minamizawa. 2023. The Empathic Metaverse: An Assistive Bioresponsive Platform For Emotional Experience Sharing. In CHI 2023. Association for Computing Machinery, New York, NY, USA, Article 2, 1–2. https://arxiv.org/abs/2311.16610
  • Danyang Peng, Tanner Person, Kinga Skierś, Ruoxin Cui, Mark Armstrong, Kouta Minamizawa, and Yun Suen Pai. 2023. AsmVR: Enhancing ASMR Tingles with Multimodal Triggers Based on Virtual Reality. In SIGGRAPH Asia 2023 XR (SA '23). Association for Computing Machinery, New York, NY, USA, Article 2, 1–2. https://doi.org/10.1145/3610549.3614597
  • Danyang Peng, Tanner Person, Ruoxin Cui, Mark Armstrong, Kouta Minamizawa, and Yun Suen Pai. 2023. AsmVR: VR-Based ASMR Experience with Multimodal Triggers for Mental Well-Being. In SIGGRAPH Asia 2023 Posters (SA '23). Association for Computing Machinery, New York, NY, USA, Article 5, 1–2. https://doi.org/10.1145/3610542.3626146
  • Mark Armstrong, Kinga Skiers, Danyang Peng, Tamil Selvan Gunasekaran, Anish Kundu, Tanner Person, Yixin Wang, Kouta Minamizawa, and Yun Suen Pai. 2023. Heightened Empathy: A Multi-user Interactive Experience in a Bioresponsive Virtual Reality. In ACM SIGGRAPH 2023 Immersive Pavilion (SIGGRAPH '23). Association for Computing Machinery, New York, NY, USA, Article 9, 1–2. https://doi.org/10.1145/3588027.3595599
  • Kazuma Takada, Midori Kawaguchi, Akira Uehara, Yukiya Nakanishi, Mark Armstrong, Adrien Verhulst, Kouta Minamizawa, and Shunichi Kasahara. 2022. Parallel Ping-Pong: Exploring Parallel Embodiment through Multiple Bodies by a Single User. In Augmented Humans 2022 (AHs 2022). Association for Computing Machinery, New York, NY, USA, 121–130. https://doi.org/10.1145/3519391.3519408
  • Kazuma Takada, Midori Kawaguchi, Yukiya Nakanishi, Akira Uehara, Mark Armstrong, Adrien Verhulst, Kouta Minamizawa, and Shunichi Kasahara. 2021. Parallel Ping-Pong: Demonstrating Parallel Interaction through Multiple Bodies by a Single User. In SIGGRAPH Asia 2021 Emerging Technologies (SA '21 Emerging Technologies). Association for Computing Machinery, New York, NY, USA, Article 12, 1–2. https://doi.org/10.1145/3476122.3484836
  • Mark Armstrong, Lawrence Quest, Yun Suen Pai, Kai Kunze, and Kouta Minamizawa. 2021. BridgedReality: A Toolkit Connecting Physical and Virtual Spaces through Live Holographic Point Cloud Interaction. In SIGGRAPH Asia 2021 Posters (SA '21 Posters). Association for Computing Machinery, New York, NY, USA, Article 25, 1–3. https://doi.org/10.1145/3476124.3488656
  • Armstrong, M., Tsuchiya, K., Liang, F., Kunze, K., & Pai, Y. S. (2020). Multiplex Vision: Understanding Information Transfer and F-Formation with Extended 2-Way FOV. In S. N. Spencer (Ed.), Proceedings - VRST 2020: ACM Symposium on Virtual Reality Software and Technology (Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST). Association for Computing Machinery. https://doi.org/10.1145/3385956.3418954