Sound Visualization Works!

Thanks to that audio tutorial I found, plus the rope script I already had on hand for the tetherball, I was able to slap together a quick and dirty sound visualization that uses a 3D line segment.

Next up is to balance the levels a bit and then add it to the theremin’s screen. I also still need to properly texture the theremin model, since it’s using a bunch of different materials right now (horribly inefficient for rendering).

Data Mapping

One bit of math I keep doing for the theremin is mapping positional data onto a set range of integers. The MIDI library uses a range from 0 to 127 for pitch, so I need to map the distance between the controller and the pitch rod onto that range.

One technique is to convert the value in range 1 into a percentage and then multiply the maximum of range 2 by that percentage to get your new value (and then round to an integer). Because I already know it’s only going to work inside the trigger box, I compare the current distance from the rod with the maximum distance to get that percentage.
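In Unity C# that looks something like this (a minimal sketch; `DistanceToPitch` and `maxDistance` are my placeholder names, with `maxDistance` standing in for the far edge of the trigger box):

```csharp
using UnityEngine;

public static class PitchMapping
{
    // Percentage technique: distance inside the trigger box -> 0..127 MIDI pitch.
    public static int DistanceToPitch(float distance, float maxDistance)
    {
        // Range 1 (0..maxDistance) as a percentage of its maximum...
        float percent = Mathf.Clamp01(distance / maxDistance);

        // ...times the maximum of range 2 (127), rounded to an integer.
        return Mathf.RoundToInt(percent * 127f);
    }
}
```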

But if one range uses negative numbers and the other uses positive numbers, that percentage trick falls apart. In that case you can use a more general mapping algorithm.
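The general form is just linear interpolation, something like this sketch (my own naming, not from any particular library):

```csharp
public static class RangeMapping
{
    // Linear remap from (fromMin..fromMax) to (toMin..toMax).
    // Handles ranges that cross zero, e.g. (-1..1) -> (0..127).
    public static float Remap(float value, float fromMin, float fromMax,
                              float toMin, float toMax)
    {
        float t = (value - fromMin) / (fromMax - fromMin); // normalize to 0..1
        return toMin + t * (toMax - toMin);                // scale into new range
    }
}
```

Unity’s built-in `Mathf.InverseLerp` and `Mathf.Lerp` do the same normalize-then-scale steps (with clamping), if you’d rather not roll your own.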

I found a pretty great forum thread where people are sharing different algorithms they use for remapping:

https://forum.unity.com/threads/re-map-a-number-from-one-range-to-another.119437/

Theremin: Day 1

Generating music by pressing buttons is one thing but I thought it might be cool to be able to play music by waving your hands in the air. I’ve always wanted to play the theremin so I figured, “why not make it in VR?”

Right now I have the controls working based on trigger volumes and basic distance measurements between the rods and the controllers. The MIDI library generates the tone, and I can switch it to any instrument the library supports (currently only from the editor).
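The distance check boils down to something like this (a trimmed-down sketch, not my actual script; the “Controller” tag and the `pitchRod` reference are stand-ins):

```csharp
using UnityEngine;

// Sits on the trigger volume around the pitch rod. While a controller
// is inside, measure its distance to the rod every physics step.
public class PitchRodTrigger : MonoBehaviour
{
    public Transform pitchRod;

    void OnTriggerStay(Collider other)
    {
        if (!other.CompareTag("Controller")) return;

        float distance = Vector3.Distance(other.transform.position,
                                          pitchRod.position);
        // distance then gets mapped onto the 0..127 MIDI pitch range.
    }
}
```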

Does it work?

The volume control works really well, but the pitch control is a bit trickier to get smooth. I’ll post my first “performance” soon!

Music from Text

There are a lot of games that give characters music-like vocalizations in place of speech. I’m exploring what might be possible using the MIDI system plus a handy script I found that creates a typewriter effect in the UI. In the end I had to write my own code to make it play nice with the MIDI library, but the script I found should work for most typical situations.

How does it work?

As the typewriter types out the text, each time it starts a new word it picks a random note from a provided range as the root of a chord. It then steps upward in a 4, 3, 5 semitone pattern to build the major chord, repeating the pattern if the word has more than three letters. When it finds a space, it picks a new random starting note.
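In rough code, the note-picking logic looks something like this (a sketch only; `PlayNote` is a placeholder since I’m not showing the MIDI library’s real API here, and the root range is arbitrary):

```csharp
using UnityEngine;

public class TypewriterChords : MonoBehaviour
{
    readonly int[] chordPattern = { 4, 3, 5 }; // 4+3 builds the major triad,
                                               // 5 carries it up to the octave
    int currentNote;
    int letterIndex;

    // Called by the typewriter effect for each character it reveals.
    public void OnCharacterTyped(char c)
    {
        if (char.IsWhiteSpace(c))
        {
            letterIndex = 0; // next word starts a fresh chord
            return;
        }

        if (letterIndex == 0)
            currentNote = Random.Range(48, 72); // random root from a provided range
        else
            currentNote += chordPattern[(letterIndex - 1) % chordPattern.Length];

        PlayNote(currentNote);
        letterIndex++;
    }

    void PlayNote(int midiNote)
    {
        // Hook into the MIDI library here.
    }
}
```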

What’s Next?

I would like to experiment with different instruments, as well as expression controls such as volume and pitch bend, to create emphasis and inflection in the generated vocalizations.

Making Music

So I started falling down a rabbit hole of VR music generation. So far I’ve been able to load in a MIDI library and link it up to the buttons I already have.

With some really janky hacks, I just finished implementing a sensitivity system: hit the button softer or harder and the note plays quieter or louder.
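Stripped of the janky hacks, the core of it is something like this (a sketch; `NoteOn` and the `maxSpeed` value are stand-ins):

```csharp
using UnityEngine;

// Sits on the button. Reads how hard the controller hit it and maps
// that impact speed onto the 0..127 MIDI velocity range.
public class VelocityButton : MonoBehaviour
{
    public int note = 60;       // middle C
    public float maxSpeed = 2f; // impact speed that maps to full volume

    void OnCollisionEnter(Collision collision)
    {
        float speed = collision.relativeVelocity.magnitude;
        int velocity = Mathf.RoundToInt(Mathf.Clamp01(speed / maxSpeed) * 127f);
        NoteOn(note, velocity);
    }

    void NoteOn(int midiNote, int midiVelocity)
    {
        // Hook into the MIDI library here.
    }
}
```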

Next up is fixing the damn buttons because they are REALLY not great. I need a better button system that can detect harder/faster presses. Basically they need to make it better, do it faster, harder better faster stronger.

Broadcasting Messages

Did more coding work on my meditation stations and had to remind myself how to broadcast messages. The reason is that part of the meditation station is actually attached to the player, letting me detect where the hands are positioned, whereas the other part is attached to the station itself, letting me make sure the player is standing in the right spot.
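For future me, here’s the bare-bones pattern (assuming the station keeps a reference to the player rig; the message name is a placeholder):

```csharp
using UnityEngine;

// Lives on the station. When the player stands in the right spot,
// tell the player rig (and everything attached to it, like the
// hand trackers) to start the meditation.
public class MeditationStation : MonoBehaviour
{
    public GameObject playerRig; // reference to the player hierarchy

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;

        // Calls BeginMeditation() on the rig and on every child that defines it.
        playerRig.BroadcastMessage("BeginMeditation",
                                   SendMessageOptions.DontRequireReceiver);
    }
}
```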

I hope to be able to show a video of the full effect soon.