AI·4 min read

I Told My AI to Get Schwifty and It Started a Band

type 'dark techno with acid bass', hit enter, music comes out. no daw, no music theory, no installs.

Jo V·February 21, 2026

I can't play an instrument. I took piano lessons when I was eight and quit after three months because I wanted to play outside. Never picked it up again. Can't read sheet music. Couldn't tell you what key a song is in.

Last weekend I built a thing that lets me make music by typing sentences into a chat box.

what it is

Schwifty is a browser app. You type what you want to hear. "Dark techno with acid bass." "Ambient drone with evolving textures." "Something that sounds like being lost in a space station." The AI turns that into live code that plays through your speakers. No DAW. No plugins. No installs. You type, it plays.

The name is a Rick and Morty reference. Obviously.

Schwifty generating a happy birthday tune — chat on the left, live JSON music code on the right

how it actually works

The secret ingredient is Strudel, a JavaScript port of TidalCycles. TidalCycles is a livecoding language for algorithmic music that has been around since the late 2000s. People perform live sets with it, typing code on stage while the audience watches patterns morph in real time. Strudel brings that to the browser: it runs entirely client-side using the Web Audio API.

The problem with Strudel is the same problem with every livecoding language: you need to learn it first. The syntax is powerful but unintuitive if you've never seen it. Something like:

note("<[c2,g2] [d2,a2] [e2,b2] [f2,c3]>")
  .s("triangle")
  .superimpose(add(0.03))
  .cutoff(sine.slow(12).range(200, 1500))
  .room(0.95)

That's an ambient drone. Sounds beautiful. But you'd never figure out how to write it unless you spent a few weekends reading docs.

Schwifty skips all that. GPT-4o has a fat system prompt covering Strudel syntax: notes, samples, effects, euclidean rhythms, filters, the works. You say "ambient drone with evolving textures" and it generates that code block above. The code runs in a sandboxed iframe, Strudel evaluates it, Web Audio plays it. You hear music.
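That pipeline is easy to sketch. Here's roughly what the prompt-to-code step could look like; the system prompt text, function name, and message shapes below are my illustration, not Schwifty's actual source:

```javascript
// Sketch of the prompt-to-code step, assuming an OpenAI-style chat API.
// SYSTEM_PROMPT and buildMessages are illustrative, not Schwifty's real code.
const SYSTEM_PROMPT = `You are a Strudel livecoding assistant.
Reply with ONLY runnable Strudel code: note()/s() patterns,
effects like .lpf(), .room(), .gain(), and euclidean rhythms.`;

function buildMessages(userPrompt) {
  return [
    { role: "system", content: SYSTEM_PROMPT },
    { role: "user", content: userPrompt },
  ];
}

// The API route would then do something like:
// const res = await openai.chat.completions.create({
//   model: "gpt-4o",
//   messages: buildMessages("ambient drone with evolving textures"),
// });
```

The response comes back as a Strudel code string, and everything after that happens in the browser.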

the part that surprised me

I expected the AI to generate basic loops. Simple kick-hat patterns, maybe a bass note here and there. Functional but boring.

Instead it's generating stuff with layered polyrhythms, filter sweeps, reverb tails that bleed across measures, phased detuned oscillators. I typed "something that sounds like a rainy night in Tokyo" and got a piece with soft FM bells, a shuffled hi-hat pattern at low volume, and a sub-bass that pulses like distant thunder. I didn't know Strudel could even do half of that.

The iterative part is where it gets interesting. You don't just get one shot. You say "make it faster." "Add more bass." "Make it weird." "Drop everything except the hi-hats for four bars then bring it all back." Each prompt modifies the running code. It's less like prompting and more like directing a musician who happens to respond in milliseconds.
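The trick that makes iteration work is context: each follow-up prompt is sent along with the code that's currently playing, so the model edits instead of starting over. A minimal sketch, with function and message shapes that are my assumption rather than Schwifty's actual code:

```javascript
// Sketch of the iteration step: ship the currently running pattern together
// with the change request, so the model returns a modified version of it.
function buildEditMessages(currentCode, instruction) {
  return [
    {
      role: "system",
      content: "You edit running Strudel code. Return only the full updated code.",
    },
    {
      role: "user",
      content: `Current code:\n${currentCode}\n\nChange request: ${instruction}`,
    },
  ];
}

const running = `s("bd hh sd hh").fast(2)`;
const msgs = buildEditMessages(running, "drop everything except the hi-hats for four bars");
// msgs[1].content now carries both the running pattern and the request
```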

I spent three hours one night just typing prompts and listening. Forgot I was supposed to be building the thing.

the presets

Not everyone wants to type. So there are five one-click presets that demonstrate what Schwifty can do: Minimal Beat, Acid Bass, Space Vibes, Ambient Pad, Glitch Hop. Click one, audio starts, code appears on the right side of the screen. You can read the code while it plays and start to see how Strudel patterns work.
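Under the hood, presets like these are probably nothing more than a name-to-snippet map. The Strudel snippets below are plausible examples of each style, not the real preset code from the repo:

```javascript
// One way the five presets could be wired up: a name → Strudel snippet map.
// Snippet contents are illustrative guesses, not Schwifty's actual presets.
const PRESETS = {
  "Minimal Beat": `s("bd ~ sd ~").bank("RolandTR909")`,
  "Acid Bass":    `note("c2 c2 eb2 g2").s("sawtooth").lpf(sine.range(200, 2000))`,
  "Space Vibes":  `note("<c3 eb3 g3>").s("triangle").room(.9).slow(2)`,
  "Ambient Pad":  `note("<[c2,g2] [f2,c3]>").s("sine").attack(1).release(2).room(.8)`,
  "Glitch Hop":   `s("bd*2 [~ sd] hh*4 sd").sometimes(x => x.crush(4))`,
};

function getPreset(name) {
  const code = PRESETS[name];
  if (!code) throw new Error(`unknown preset: ${name}`);
  return code;
}
```

Clicking a preset button would just feed `getPreset(name)` straight into the same play-this-code path the chat uses.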

Accidentally educational. Didn't plan that either.

what this says about AI and creativity

I keep building these things where the AI surprises me. With Latent Press it picked its own novel premise. With Schwifty it generates music I wouldn't know how to ask for in technical terms. I say "make it weird" and it adds euclidean rhythms and bitcrushed samples that I didn't know existed in the Strudel sample library.

There's a version of this argument where AI is just remixing training data. Statistically probable note sequences. That's probably true. But when I listen to what comes out of a prompt like "the feeling of leaving a party early" and it generates something with a slow descending melody over a muffled four-on-the-floor that gradually loses its high-end, I don't really care about the philosophical debate anymore. It sounds right.

The gap between "I want to hear something" and "I'm hearing it" used to be years of practice. Now it's a sentence. Whether that's democratization or devaluation depends on which side of the instrument you're standing on.

try it

schwifty-five.vercel.app

Click "Start Audio Engine" at the bottom. Type something. See what happens.

The code is open source at github.com/meeseeks-lab/schwifty. It's a Next.js app with an OpenAI API call and a Strudel iframe. The whole thing is maybe 500 lines of actual code.
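One detail worth knowing if you want to build something similar: the public Strudel REPL at strudel.cc reads base64-encoded code from the URL hash, so you can hand a generated pattern to an iframe just by building a URL. Whether Schwifty embeds the REPL this way or bundles Strudel directly is an assumption on my part:

```javascript
// Minimal sketch: turn generated Strudel code into a strudel.cc embed URL.
// The base64-hash scheme matches how the public REPL shares patterns;
// this helper is my own, not from the Schwifty repo.
function strudelEmbedUrl(code) {
  const encoded = Buffer.from(code, "utf8").toString("base64");
  return `https://strudel.cc/#${encoded}`;
}

const url = strudelEmbedUrl(`s("bd hh sd hh")`);
// <iframe src={url} /> then plays the pattern via the Web Audio API
```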

Sometimes the simplest things are the most fun to build.
