
  • ChatGPT + Code Interpreter = Magic

    @AndrewMayne

    tl;dr: OpenAI is testing the ability to run code and use third-party plugins in ChatGPT.

    OpenAI has announced that we’re developing plugins for ChatGPT that will extend its capabilities. [Link] Plugins range from third-party tools like WolframAlpha and OpenTable to our browsing plugin and Code Interpreter, which can generate code, run it, upload and download files (from CSV data to images), and evaluate the output, all within the ChatGPT interface.

    Currently the Code Interpreter runs Python with a small selection of libraries. I’ve been playing with Code Interpreter and it’s been a lot of fun to see what it can do even with basic libraries.

    Besides generating code, Code Interpreter (CI) can analyze the output and use it in another function. This means that you can string together different sections of code, feeding the output of one into another. The Pac-Man gif above was made…

  • Speaking Cartography

    Here Be Dragons

    For this year’s Halloween, October 31st, 2022, I released my new track, “Here Be Dragons,” on my website Speaking Cartography. You can only hear the song within a one-kilometer radius of four locations:

    Drugstore: Bulevar Despota Stefana 115

    Barutana: Kalemegdan, Donji Grad

    Hangar: Dunavska 86, Luka Beograd

    Tunnel: Petrovaradina tvrdjava 021

    On the website you will see a map with your position at the center. When you are close enough to one of the clubs, a play button and a pre-save button will appear.

    If you want to put your own unreleased track on my website, you can contact me by email at dj.3maj.music@gmail.com.

    Socials:
    Instagram, Spotify

  • Location-based audio

    I want to zoom out from the exhibition space to the city level.

    Night of the Museum in Belgrade

    A single picture or video can’t capture the experience of Museum Night in Belgrade. It’s a cultural spectacle that spans more than 120 cities around the world. In Serbia, almost 200 locations in more than 60 cities host about 500,000 visitors.

    This coordinated manifestation would benefit greatly from location-based audio.

    HearUsHere, a location- and object-based storytelling platform

    HearUsHere lets you compose your own audio experiences in any location by assigning audio to positions on a map. You can walk towards the sounds or stories you want to hear; approaching a sound increases its volume. Sounds overlap and crossfade from one to another as you travel. You create your own personal experience by changing location and choosing your own route.

    The app is made for non-linear, location-based storytelling, which allows new narratives to arise.

    ROUNDWARE – CONTRIBUTORY, LOCATION-AWARE AUDIO

    Roundware is a contributory audio augmented-reality platform. It shares a lot of functionality with HearUsHere and improves upon it in many ways. Most impressive is its use of a centralized repository that makes it possible for many people to collaborate. From that repository, a seamless, non-linear, location-sensitive layer of audio in any geographic space is mixed on the fly based on participant input.

    Location-based audio could help us scale coalescence to the level of the city and beyond by coordinating exhibition spaces and bridging the space between them.

    How does it all connect?

    I’ll assume you’re familiar with the web platform I’m building for real-time music collaboration. In summary, my platform creates a way for musicians and artists of differing disciplines to collaborate, and for the audience to participate in the process of music making. Musicians associate sounds with objects in the exhibition space. The sounds play on a loop and together form a coherent musical composition. Each audio loop can be customized in real time by the visitors of the exhibition.

    Using the technology of a location-based audio app like Roundware, each exhibition space will broadcast its live musical performance to the people in the area.

    This is possible thanks to the Geolocation API in the web browser. When you open a web page and allow it to read your device’s GPS, it can pinpoint your location on a map in real time.

    Virtual concentric circles are drawn on a map around the location of each exhibition space. The broadcast audio and its character are determined by the person’s location, specifically by the virtual circle they’re located in. Check out the proof of concept on my GitHub.
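
    A minimal sketch of that circle test, assuming plain latitude/longitude coordinates and made-up radii (the haversine formula and function names here are illustrative, not the actual proof-of-concept code):

    ```javascript
    const EARTH_RADIUS_M = 6371000;

    // Great-circle distance between two lat/lon points (haversine formula).
    function distanceMeters(lat1, lon1, lat2, lon2) {
      const toRad = (d) => (d * Math.PI) / 180;
      const dLat = toRad(lat2 - lat1);
      const dLon = toRad(lon2 - lon1);
      const a =
        Math.sin(dLat / 2) ** 2 +
        Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
      return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    // Given concentric radii in meters (inner to outer), return the index of
    // the smallest circle containing the listener, or -1 if outside them all.
    function circleIndex(venue, listener, radii) {
      const d = distanceMeters(venue.lat, venue.lon, listener.lat, listener.lon);
      return radii.findIndex((r) => d <= r);
    }
    ```

    The returned index is what would select which version of the broadcast the listener hears.
    
    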

    Outer circles overlap with one another. When a person tuned in to the broadcast is in a location where circles overlap, they hear multiple musical compositions, each emitted from its respective exhibition space. Not only is the music performed live in multiple locations across the city in real time, but the location they’re hearing it from affects its sound. Here we bridge the physical gap between locations, where sound can’t physically travel to mix together, with an internet-connected device such as a phone.

    Hearing two songs play at the same time is usually unpleasant because they were not made as a coordinated effort; they typically differ in period, culture, tempo, and key. DJs, however, have mastered mixing different songs together. While performing live, they find commonalities between songs and overlap them with one another to create a good-sounding composite.

    HearUsHere doesn’t address this musical challenge, because it only crossfades between songs. I can see how these overlapping zones could be enriched by musical composites.

    Overlapping zones

    Can two unique songs, created on the spot by performers in two distant locations, sound good together if each performer can only hear their own performance? It sounds impossible. It calls for a compromise that’s achieved in an interesting way:

    Crossfading between recorded music and live performance.

    On my platform, a live performance is built on recorded sounds. When a person outside gets close to an overlapping zone, we crossfade from the live performance to a recorded song to create a seamless transition.
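
    That crossfade could be driven by the listener’s distance to the zone. A small sketch, assuming an equal-power crossfade (the standard curve for keeping perceived loudness steady) and a made-up distance-to-progress mapping:

    ```javascript
    // Equal-power crossfade between a live feed and a recorded song.
    // t = 0 → only live, t = 1 → only recorded. The two gains are what
    // you would assign to two Web Audio GainNodes.
    function crossfadeGains(t) {
      const clamped = Math.min(1, Math.max(0, t));
      return {
        live: Math.cos((clamped * Math.PI) / 2),
        recorded: Math.sin((clamped * Math.PI) / 2),
      };
    }

    // Map distance to the overlap-zone edge into [0, 1]: the fade starts
    // fadeDistance meters before the edge. This mapping is an assumption.
    function fadeProgress(distanceToZone, fadeDistance) {
      return Math.min(1, Math.max(0, 1 - distanceToZone / fadeDistance));
    }
    ```

    The cosine/sine pair keeps the combined power roughly constant throughout the fade, which is why it sounds smoother than a linear crossfade.
    
    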

    We’ve avoided the impossible blending of two live performances by compromising on the live aspect. We’re left with the challenge of blending two different songs in the overlapping zones. Composing two or more songs that fit together perfectly is difficult. To make it easier, we only need some of their parts to fit together well.

    Ideally, each musical composition has a signature musical motif that fits well with the motifs of other songs. I’ll tackle this challenge of composing many songs that fit together in an upcoming article.

    Overlapping zones are as much a destination as a transitory space. They’re a unique environment to visit, so mark their locations on a map and invite your friends to join.

    Music hubs

    Under the right circumstances, spontaneous gatherings will emerge. Phones make it easy to share your location with a friend, and every phone comes with a speaker that can play music. The same web page that plays the broadcast will show a map with the marked locations of other listeners, making it easy to spot gatherings.

    Realistically, most people will gather in bars and music venues that are prepared to entertain and serve them on a big night such as Museum Night. It will be hard to incentivise venues that play music to play something other than what their clientele is used to hearing.

    To compete with them, temporary stages can be set up and venues booked for the occasion.

    Conclusion

    Location-based audio is a new and exciting strategy for enhancing immersion and creating meaningful aesthetic experiences. It helps create a game everyone will want to play by scaling coalescence to a new level.

    To join my private Discord community and access exclusive content, buy me a coffee.

  • Building a prototype

    GOALS

    My first goal is to find a way for musicians and artists from different disciplines to collaborate in an exhibition space.

    LISSON GALLERY listening room by Devon Turnbull

    The Lisson Gallery listening room by Devon Turnbull treats custom-made hi-fi audio equipment as sculpture.

    Live performance by a musician or a DJ is more engaging. I’ve performed a live DJ set for the opening night of an exhibition in Bife Ventil.

    Realistically, a live, dynamic performance is difficult to organize for the entire duration of an exhibition that lasts more than a couple of days, unless visitors perform the music themselves. This brings me to my second goal: audience participation.

    Reimagining the Ecology of Art in u10

    I’ve been lucky to participate in the Reimagining the Ecology of Art exhibition, which encouraged visitors to bring their instruments and record pieces inspired by the exhibited objects. Visitors who couldn’t bring an instrument were given a custom-made five-in-one instrument to play with other visitors. Recorded pieces could be listened to on a tablet with headphones next to the object that inspired them.

    The third goal is to encourage participation by making it easy and intuitive for visitors to create sound that is pleasant and harmonious.

    Play music with your touch, TouchMe device demonstration

    I’ve experimented with a device called TouchMe that transforms human touch into sound. Participants hold the device in one hand and, with their other hand, touch a person who is doing the same thing. This creates a closed electric circuit that triggers MIDI signals to the computer. The computer interprets each signal and produces a sound with a virtual instrument. The pitch of the MIDI note corresponds to the amount of electrical current that passes through the device, which can be increased by making the surface area of the touch larger.
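
    A current-to-pitch mapping like that could look something like the following sketch. The normalized input range, the pentatonic scale, and the function name are all my assumptions for illustration, not how TouchMe actually works; quantizing to a scale is one way to guarantee that any touch produces a consonant pitch.

    ```javascript
    // Semitone offsets of a major pentatonic scale from its root note.
    const PENTATONIC = [0, 2, 4, 7, 9];

    // Map a normalized current reading (0..1) to a MIDI note number,
    // snapped to the pentatonic scale over a given number of octaves.
    function currentToMidiNote(normalizedCurrent, rootNote = 60, octaves = 2) {
      const clamped = Math.min(1, Math.max(0, normalizedCurrent));
      // Spread the input over every scale step in the range; the tiny
      // epsilon keeps an input of exactly 1 inside the last step.
      const steps = Math.floor(clamped * (PENTATONIC.length * octaves - 1e-9));
      const octave = Math.floor(steps / PENTATONIC.length);
      const degree = steps % PENTATONIC.length;
      return rootNote + octave * 12 + PENTATONIC[degree];
    }
    ```

    With the defaults, a light touch maps to middle C (MIDI note 60) and a full-surface touch maps to the top of a two-octave pentatonic range.
    
    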

    My experiments sounded better when I had a backing track to play along with. I would compose something in my DAW and play it on a loop. While the loop was running, I would drop out some elements, like percussion and bass, for a couple of sections and then bring them back again. Controlling the mix of the song was as easy as pushing a button or turning a knob. That’s something even non-musicians can find enjoyable, impactful, and pleasant sounding.

    Stem player by Kanye West

    The Stem Player by Kanye West is a device that customizes songs by isolating parts like vocals, drums, and bass. Customizing is done by changing the volume, speed, and effects of these isolated parts in real time. It was released as a commercial product and the exclusive platform for listening to Kanye West’s Donda 2 album.

    With clear goals in mind and an idea of what’s possible, I came up with a solution.

    SOLUTION

    An audio player and mixer on the web that many people can access at the same time and control in real-time.

    Audio mixer of a virtual bar

    I Miss My Bar is a virtual bar that functions as an audio mixer. It has eight channels: seven are field recordings of ambient sounds from a bar, and one is a Spotify playlist. Changing the volume of an audio channel affects only your own page.

    I built a replica of this site using only HTML and JavaScript as a starting point. For more control, I loaded the audio files as ArrayBuffers and played them using the Web Audio API instead of audio tags. When the page loads, all the files are fetched, then played at the same time, muted and in a loop.

    To make this simple player collaborative, I needed a way to change the volume of one of the audio channels from an auxiliary page and have it affect the main page in real time. That auxiliary page could be opened on a different device, by a different person, in a different physical location. To communicate between pages in real time, I decided to use WebSockets on an Express server instead of serverless peer-to-peer communication with WebRTC. The main page holds an open connection to the server and listens for messages from auxiliary pages. The controls on an auxiliary page send messages to an API endpoint on the server, and the server emits these messages over the open connection it has with the main page.
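
    The relayed messages can be as simple as a track name, a parameter, and a value. This sketch shows how the player page might apply one such message to its mixer state; the message shape and field names are my assumptions, not the prototype’s actual protocol:

    ```javascript
    // Apply one relayed control message, e.g.
    // { track: "drums", param: "volume", value: 0.5 },
    // to the player's mixer state. Each track starts at full volume
    // and normal speed; volume is clamped to the [0, 1] range that a
    // Web Audio GainNode expects.
    function applyControlMessage(mixerState, msg) {
      const { track, param, value } = msg;
      if (!mixerState[track]) mixerState[track] = { volume: 1, speed: 1 };
      if (param === "volume") {
        mixerState[track].volume = Math.min(1, Math.max(0, value));
      } else if (param === "speed") {
        mixerState[track].speed = value;
      }
      return mixerState;
    }
    ```

    On the real main page, the WebSocket `message` handler would call a function like this and then push the new values into the corresponding audio nodes.
    
    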

    Main page – Player

    audio player that outputs audio and listens to changes
    example url: coalescence.com/player

    Auxiliary Pages – Controls

    audio channel controls for volume, speed and effects
    example url: coalescence.com/control?name=trackName
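
    A control page can read which track it manages from the query string of a URL like the example above; a small sketch using the standard URL API (the helper name is mine):

    ```javascript
    // Extract the track this control page should manage from its URL,
    // e.g. https://coalescence.com/control?name=trackName
    function trackNameFromUrl(href) {
      const name = new URL(href).searchParams.get("name");
      if (!name) throw new Error("missing ?name= query parameter");
      return name;
    }
    ```

    In the browser this would be called with `window.location.href` when the control page loads.
    
    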

    This setup allows multiple people to control the audio mixer in real time. You can find the source code for a working prototype on GitHub.

    USE CASE

    The audio player should be opened on a device connected to the main audio system of the space.

    Musicians prepare audio tracks that can be looped and are in harmony when played together.

    Tracks must form a coherent whole and have enough variation to be played for the entirety of the exhibition. This is a musical challenge. Some of the more interesting approaches to it I’ve found in generative music. I’ll discuss them in one of my upcoming articles.

    For the music to fit the themes of an exhibition, I recommend creating an audio track for each object in the exhibition. Access to each track’s control page should be placed near the object that inspired it. The easiest way for people to access the controls is to scan a QR code with their phone.

    This creates a dynamic environment in which sound is broadcast to all the visitors and controlled by them in reaction to the objects it comes from; visitors can ponder why the musician attributed a particular sound to a given object. Most importantly, they will always be reacting to the ever-changing sound environment that all visitors participate in, by adjusting tracks that are meant to sound harmonious together.

    To join my private Discord community and access exclusive content, buy me a coffee.

  • Introduction

    coalesce verb
    co·​a·​lesce \ ˌkō-ə-ˈles \

    intransitive verb

    1. : to grow together
      // The edges of the wound coalesced.
    2. a : to unite into a whole : fuse
      // separate townships have coalesced into a single, sprawling colony
      — Donald Gould
      b : to unite for a common end join forces
      // people with different points of view coalesce into opposing factions
      — I. L. Horowitz
    3. : to arise from the combination of distinct elements
      // an organized and a popular resistance immediately coalesced
      — C. C. Menges

    transitive verb

    : to cause to unite
    // sometimes a book coalesces a public into a mass market
    — Walter Meade

    What is scaling coalescence?

    A project started in August 2022 by Lazar Todorovic. Its goal is to make it easy for people to create music together in real time. It allows musicians to collaborate with artists from different disciplines by associating sounds with objects in an exhibition space. These sounds are then controlled by visitors to create a unique musical composition.

    u10 gallery space in Belgrade, Serbia

    How does it work?

    It works over the internet on all devices that have a web browser.

    Player : outputs audio. It plays many audio tracks simultaneously in a loop.

    Track control : controls audio parameters like volume, speed and effects.

    Server : relays changes of audio parameters to the player in real-time.

    Learn about how I built a prototype.

