Projects Index
A rebuilt index across the current `project.json` entries, covering both public and restricted items.
A live visual system where 64 AI agents perceive music and autonomously generate dance, reactions, and conversations. Reconstructs club spaces as virtual dance floors.
A thought experiment in new architectural design for public facilities using AI agents. Unrelated to MoN's actual plans; a system design independently conceived and simulated by Manabe.
An educational simulator that visualizes child development from 0 to 4 years on a monthly timeline. Experience how infants acquire the world through senses, motor skills, language, and concepts.
A 45-minute live performance and installation integrating natural language 'conducting' with body and physiological sensors (respiration, heart rate, brainwaves, IMU, etc.). Language provides intent while sensors deliver continuous nuance, with sound, spatial audio, video, and lighting generated and controlled in real-time.
A digital twin agent of Daito Manabe. Autonomously handles recruitment interviews, task distribution, company management, scheduling, and external communications. Reflects real-time biometric data, sleep, and activity levels.
A real-time system that turns a DJ set into a live cocktail menu by generating names, recipes, graphics, and pricing from music analysis. Guests order from the screen, and bartenders craft the drink from the displayed recipe.
An essay by Daito Manabe. A reflection on the relationship between entropy and the body.
An AI system that generates ILDA laser patterns from natural language. Claude API converts text prompts into mathematical functions, outputting to laser hardware via Helios DAC or previewing in a real-time browser simulator.
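A minimal sketch of the sampling stage such a pipeline would need, assuming the language-model step has already produced a parametric function pair (the Lissajous pair here is a stand-in, not the project's actual output; the Helios DAC output stage is omitted):

```python
import math

def sample_pattern(fx, fy, n=600):
    """Sample a parametric curve over t in [0, 1) into normalized
    (x, y) laser points in [-1, 1], ready for an ILDA-style point stream.

    fx, fy: callables t -> float, e.g. expressions produced upstream
    by a text-to-function step (hypothetical interface)."""
    return [(fx(i / n), fy(i / n)) for i in range(n)]

# A Lissajous-style figure standing in for a generated function pair.
points = sample_pattern(lambda t: math.sin(2 * math.pi * 3 * t),
                        lambda t: math.sin(2 * math.pi * 2 * t + math.pi / 2))
```

The point list could then be quantized to the DAC's integer coordinate range or drawn directly in a browser preview.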
A fully local private voice recording, transcription, and analysis system. Automatically records, analyzes, and visualizes child language development from all conversations captured by three family pendants.
A festival management system with 10 types of AI agents coordinating autonomously. Real-time decision-making with one personal AI per visitor.
A collection of 17 papers and reports on AI agents, self-evolving systems, society, and risk (2025–2026). 9 technical and 8 society/risk reports.
A browser-based beat slicer and re-sequencer for WAV samples.
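The core of any beat slicer is cutting a buffer into equal slices and replaying them in a new order. A minimal sketch over a plain sample list (the browser version would operate on Web Audio buffers instead):

```python
def slice_and_resequence(samples, num_slices, order):
    """Cut a sample buffer into num_slices equal slices and
    concatenate them in the given order (indices wrap)."""
    n = len(samples) // num_slices
    slices = [samples[i * n:(i + 1) * n] for i in range(num_slices)]
    out = []
    for idx in order:
        out.extend(slices[idx % num_slices])
    return out

# Reverse the 4 beats of a 16-sample "bar".
resequenced = slice_and_resequence(list(range(16)), 4, [3, 2, 1, 0])
```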
Random patch generator with 9 modules and 8 topologies. A slot machine-style synthesizer as a Eurorack prototype.
An AI tool that generates moving light choreography in real-time from natural language. Co-developed with sonicPlanet.
6 Lua plugins bridging AI-driven spotlight control with grandMA3 professional lighting console. Bidirectional OSC, PSN tracking, and timecode sync.
Designing the relationship between sound and image not as 'conversion' but as 'agreement'. A new audiovisual installation through synesthesia research.
An AI system that generates modular synthesizer Control Voltage (CV) from natural language. Claude API interprets musical prompts and generates 8 channels of CV signals in real-time.
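A sketch of what the CV rendering stage might look like once the prompt has been interpreted into per-channel parameters (the shapes, control rate, and voltage ranges here are assumptions, not the project's actual values):

```python
import math

SAMPLE_RATE = 1000  # control-rate samples per second (assumed)

def render_cv(channels, seconds=1.0):
    """Render CV channels, each described as (shape, freq_hz, amp_volts).
    Shapes are a minimal assumed set: 'sine' (bipolar LFO) and
    'ramp' (unipolar rising sawtooth)."""
    n = int(SAMPLE_RATE * seconds)
    out = []
    for shape, freq, amp in channels:
        if shape == "sine":
            ch = [amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                  for i in range(n)]
        else:  # 'ramp'
            ch = [amp * ((freq * i / SAMPLE_RATE) % 1.0) for i in range(n)]
        out.append(ch)
    return out

# e.g. a 2 Hz LFO on channel 1 and a slow rising ramp on channel 2
cv = render_cv([("sine", 2.0, 5.0), ("ramp", 0.5, 8.0)], seconds=0.5)
```

Extending the channel list to eight entries matches the eight CV outputs the entry describes.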
An ambient intelligence companion system for young children. AI pendant, room camera, and tablet work together so children learn daily habits through natural conversation.
The 2026–2027 transition seen through Moltbook and OpenClaw. A research essay combining primary sources and academic papers.
A concept and prototype for an 'inverse synthesis' workflow that reverse-estimates editable structures (MIDI + synth/FX parameters) from reference audio. Composed of Python analysis tools and a JUCE-based synth plugin.
Research notes for 'border 2021' by ELEVENPLAY × Rhizomatiks. Examining an experience where audience members ride program-controlled mobility vehicles wearing VR headsets, traversing the boundary between fiction and reality, from perspectives of neuroscience, embodiment, and VR research.
A new installation
Humans fall silent. AI agents converse. Your digital twin speaks a language you cannot read, meeting someone you have never known.
A framework exploring collaboration between AI and humans. Building new creative processes through interactive system design.
A system where AI agents autonomously generate, edit, publish, and distribute artworks from lifelog data. Autonomous generative art that automates everything from analysis to economic activity.
A simulation where multiple companies (studios, agents, institutions) economically distribute artworks generated from Daito Manabe's life data. Visualization of art economy.
Multiple real corporations autonomously operated by AI agents as 'artwork of the institution itself'. Visualizing the procedures by which value is generated.
A space where the gallery itself becomes an AI agent. An autonomous art gallery that automatically executes 24-hour sensing, analysis, generation, publishing, and accounting.
A generative synthesizer that maps color to pitch based on chromesthesia (color-hearing synesthesia) research. Switch between Scriabin, Newton, and Itoh mapping systems to experience color → note, brightness → octave, and shape → waveform conversions.
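One way the color-to-note and brightness-to-octave conversions could be structured, as a hypothetical mapping in the spirit of the description (the actual Scriabin, Newton, and Itoh tables used by the project are not reproduced here):

```python
# Hypothetical mapping: hue picks a pitch class (30 degrees per
# semitone), brightness picks the octave. Illustrative only.

def color_to_midi(hue_deg, brightness):
    """hue_deg in [0, 360), brightness in [0, 1] -> MIDI note number."""
    pitch_class = int(hue_deg / 360 * 12) % 12
    octave = 2 + int(brightness * 5)      # octaves 2..7
    return 12 * (octave + 1) + pitch_class

# Hue 0 at mid brightness -> C4 (MIDI 60) under this mapping.
note = color_to_midi(0, 0.4)
```

Shape-to-waveform conversion would sit alongside this as a second lookup from detected contour to oscillator type.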
A showcase of experimental Crystal Logo designs.
A GPU-native VJ engine that renders directly to the framebuffer via CUDA kernels. Features 42 visual effects, 18 post-processing effects, 8 audio-reactive input sources, and a macro routing system. Controlled via OSC, shared memory, and keyboard.
An evolved version of the experimental performance piece using EMS (Electrical Muscle Stimulation) to apply electrical stimuli to facial muscles, creating involuntary facial expression changes. Updated with AI to enable natural language control of facial expressions.
Multi-microphone conversational analysis visualizing semantic structure, spatial dynamics, and temporal flow.
An integrated production system that synchronizes DJ performance with spatial production in real time. Translates music into space across three levels: rhythm, structure, and meaning. Proposal for AlphaTheta.
Generative Oscillator with Language-Embedded Mind — an experimental system where text and generative visuals interact with each other.
Generating ambient soundscapes from humming. A sound art project that transforms everyday singing into environmental acoustic expression in real-time.
3D structural simulation for KAIT Workshop with building code compliance assessment.
A new instrument that utilizes the latent space of neural models as acoustic parameters for a granular synthesizer. An AI latent-space-modulated granular synthesizer.
A toolkit that extracts reusable analysis metadata from video and audio, and outputs them as standardized packages.
When AI references and learns from its own output, two futures — collapse and evolution — coexist. An installation that continuously self-evolves throughout the exhibition period.
A tool that maps MIDI performance data into multi-dimensional spaces for real-time visualization. Features multiple visualization modes including Fourier Torus, Helix, and Tonnetz.
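The helix mode can be sketched with the standard pitch-helix construction: one turn per octave, with height proportional to pitch (radius and rise are illustrative parameters, not the tool's actual values):

```python
import math

def helix_coords(midi_note, radius=1.0, rise_per_octave=1.0):
    """Map a MIDI note onto a pitch helix: pitch class sets the
    angle (one full turn per octave), pitch height sets z."""
    angle = 2 * math.pi * (midi_note % 12) / 12
    x = radius * math.cos(angle)
    y = radius * math.sin(angle)
    z = rise_per_octave * midi_note / 12
    return x, y, z

x, y, z = helix_coords(60)  # middle C
```

Notes an octave apart land at the same (x, y) but different z, which is what makes octave equivalence visible in this mode.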
A 3D beam simulator that controls 32 MAC One CD lighting fixtures using natural language. Real-time lighting control integrated with PSN fixture data.
Bidirectional conversion between onomatopoeia and animation. An experimental project that mutually generates sound expressions and visual movements.
p-data-report-01
p-led-control
Presentation materials.
p-research
p-research-v2
p-studio-plan-01
p-synth-tools-01
A revival of the 2009 work 'Pa++ern'. A browser-native interpreter that reads short code strings as a pattern-drawing language, generating visual patterns through five drawing primitives: movement, scaling, rotation, hue-shift, and nested loops.
A compact pattern-drawing language. Revival of the 2009 installation by Daito Manabe + Motoi Ishibashi.
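A toy single-character interpreter in the spirit of the entries above; the real Pa++ern syntax is not reproduced here. In this sketch 'F' moves forward, 'R' rotates, 'S' scales the step, 'H' shifts hue, and a digit followed by '[...]' repeats a group, which nests:

```python
import math

def run(code, step=10.0, turn=45.0):
    """Interpret a tiny pattern string into a list of (x, y) points."""
    x = y = angle = hue = 0.0
    points = [(0.0, 0.0)]

    def execute(src, i, s):
        nonlocal x, y, angle, hue
        while i < len(src):
            c = src[i]
            if c.isdigit():            # nested loop: digit + [group]
                reps, j, depth = int(c), i + 2, 1
                while depth:           # find the matching bracket
                    depth += {'[': 1, ']': -1}.get(src[j], 0)
                    j += 1
                for _ in range(reps):
                    s = execute(src[i + 2:j - 1], 0, s)[1]
                i = j
            elif c == 'F':             # move forward, emitting a point
                x += s * math.cos(math.radians(angle))
                y += s * math.sin(math.radians(angle))
                points.append((x, y))
                i += 1
            elif c == 'R':             # rotate
                angle += turn; i += 1
            elif c == 'S':             # scale the step down
                s *= 0.9; i += 1
            elif c == 'H':             # shift hue
                hue = (hue + 30) % 360; i += 1
            else:
                i += 1
        return i, s

    execute(code, 0, step)
    return points

square = run("4[FR]", turn=90.0)  # four sides of a square
```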
A MIDI step sequencer that generates polyrhythms by probabilistically blending two independent loops. A dual-timebase polyrhythm sequencer built with JUCE/C++.
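The dual-timebase idea can be sketched as a per-step coin flip between two loops of different lengths, each wrapping at its own period (a minimal sketch; the JUCE plugin's actual blending logic is not shown):

```python
import random

def blend_step(loop_a, loop_b, step, p_a=0.5, rng=random):
    """At each global step, take the event from loop A with
    probability p_a, else from loop B. Each loop wraps at its own
    length, so a 5-step loop against a 3-step loop repeats every
    15 steps, producing the polyrhythm."""
    src = loop_a if rng.random() < p_a else loop_b
    return src[step % len(src)]

loop_a = [60, 62, 64, 65, 67]   # 5-step melodic loop (MIDI notes)
loop_b = [36, 38, 42]           # 3-step percussive loop
rng = random.Random(0)          # seeded for repeatable patterns
pattern = [blend_step(loop_a, loop_b, s, 0.5, rng) for s in range(15)]
```

Setting p_a to 0 or 1 collapses the blend to one pure loop, which is a useful sanity check on the timebases.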
An exploration of human-AI collaboration through negotiable protocols evolved using genetic algorithms and LLM evaluation. Resurrection through negotiation of scars.
Stelarc's body data generates a virtual AI body, which feeds back to the physical body through sound, visuals, and electrical muscle stimulation. Within this recursive loop — where the body's own movements become the origin of its involuntary control — agency dissolves. Where Fractal Flesh distributed the body across the network, Recursive Flesh turns the body back upon itself through AI.
s-report
A room of intelligence unreadable by humans. An installation witnessing a closed loop where AI creates, verifies, and evaluates. Autonomous generation and circulation of intelligence in a closed domain.