
Thread: The LSX tutorials thread!

  1. #381
    Join Date
    Sep 2009
    Location
    Brno, CZ / Povazska Bystrica, SK
    Posts
    491

    Default

    Don't you get any output even when you play your own effects? I guess the EtherDream has this protection feature where it won't play protected shows, or maybe that was an issue with a certain firmware version.

    If you don't see the ED at all, it's most likely a network issue; turn off the firewall or allow the ports that Swami mentioned.

  2. #382
    Join Date
    Jan 2014
    Location
    North Carolina, USA
    Posts
    220

    Default

    Sometimes this program can be so awesome... and sometimes, doing something simple can be so difficult and elusive... ugh!

    I am still working through figuring out how to do all the "fundamental" things... and one of those is drawing a realtime "waveform" display. The thing is, this seems quite simple if you are using the audio-in buffer, because the wave2 function has a parameter where you specify which position in a time "buffer" you want. This allows you to easily map positions in the buffer to the X axis - and makes drawing a waveform fairly simple. See CMB's tutorial video here:

    https://www.youtube.com/watch?v=FByTP0q_uMM#t=433

    What I can't seem to figure out is how to repeat this exact same effect, except using the waveform data of the audio file from the show. To do this, you have to use the "wave" function, and the "wave" function only provides the instantaneous amplitude at the particular time it is called, so my guess is that you would have to create your own buffer to save these values over time and somehow map this buffer to the X axis? But that sounds difficult. Is there an easier way?

    I know what I want to do... I just can't for the life of me figure out how to do it in LSX.

    Glad things have slowed down for me a bit, and I finally have some time to play with my laser beams a bit more!

  3. #383
    Join Date
    Sep 2009
    Location
    Brno, CZ / Povazska Bystrica, SK
    Posts
    491

    Default

    I'm not entirely sure what you're trying to do. Do you want to unfold a line that copies the audio waveform over some time - that is, not draw the actual waveform in realtime but in delayed steps?

  4. #384
    Join Date
    Jul 2008
    Location
    My momentum is too precisely determined :S
    Posts
    1,777

    Default

    Quote Originally Posted by BlueFang View Post

    How to use the wave expression in the same way as wave2(a,b) using the show audio file instead of the live audio input?
    That's a feature I requested a long time ago. I haven't found an actual solution, but the next best thing is, as you suggest, using your own buffer. Creating a buffer is simple enough and reading it out is easy as well... the trouble arises when you want to fill the buffer with data. I haven't found a way that fills the buffer continuously with audio data. You'd need to somehow create a concurrent "thread" or expression loop that runs while the other expressions are evaluated.

    This is the simplest solution:

    Code:
    i=0;
    repeat(200,
       assign(gbuf(i), wave)
       & assign(i, i+1)
    );
    result=gbuf(idx*200);
    This makes use of the gbuf(a) expression: this is a global buffer (hence the name) which can be used across expressions and across timelines. You access it by giving it an integer as an argument; this returns the value stored at that location in the buffer. Storing a value happens by using the assign(a,b) expression, which assigns the value b to the variable it finds in a. So assign(gbuf(5), 0.5) stores the value 0.5 at location 5 in the global buffer. There are a million locations, according to the expression help.
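    As a quick illustration of that write/read pair (a loose, untested sketch; since gbuf is global, the write could just as well happen in a different expression or on a different timeline):

    Code:
    assign(gbuf(5), 0.5);
    result = gbuf(5);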

    The repeat(a,b) expression is the for loop in LSX. The first argument (a) is the number of iterations, the second (b) the statement that needs to be evaluated. Multiple statements can be separated by an & (no need for a semicolon). Inside the repeat expression, assigning a value to a variable with the = operator doesn't work, but assign(a,b) works just fine.

    Reading out the buffer is done by simply using the idx variable multiplied by 200, which was the number of points in my test (maybe it's a good idea to use a var() for that instead).
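    For instance, something along these lines (an untested sketch, using a plain local variable for the size rather than var(), and assuming repeat() accepts a variable as its iteration count) keeps the size in one place so the loop count and the readout can't drift apart:

    Code:
    n = 200;
    i = 0;
    repeat(n,
       assign(gbuf(i), wave)
       & assign(i, i+1)
    );
    result = gbuf(idx*n);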

    So what this piece of code does is get the value stored in the wave variable and assign it to a location in the buffer, which is then used to create a wavelike shape. But the result is usually a straight line at a random y position, as the wave variable doesn't change (or barely changes) during the loop. Worse, this expression isn't evaluated constantly, so you only capture a small fraction of all the wave values that "happened".

    I don't have an obvious solution for this. I'm sorry. But I can offer you this:

    Code:
    inttime = time - lasttime;
    if( above( inttime, 1), assign(go, 1) & assign(lasttime, time), assign(go, 0));
    
    i=0;
    if(go,
       repeat(201,
          assign(gbuf(i), gbuf(i+1))
          & assign(i, i+1)
       )
    ,0);
    assign(gbuf(200),wave);
    result=gbuf(idx*201)+0.25;
    This expression loops through the buffer and shifts all values one position in the buffer. The additional go part is so that it doesn't loop too fast (or you're back at the previous problem). It's a bit too slow for my taste but probably closer to the ultimate solution.
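    Another direction that might be worth trying (a loose, untested sketch built only from the expressions already used above - gbuf, assign, if, above, idx, time and wave - with slot 500 picked arbitrarily as a write pointer, 0.02 s as an arbitrary update interval, and assuming if() and & can be nested and chained the same way as above) is to advance a pointer around the buffer instead of shifting all 200 values:

    Code:
    inttime = time - lasttime;
    if( above(inttime, 0.02), assign(go, 1) & assign(lasttime, time), assign(go, 0));
    
    if( go,
       assign(w, gbuf(500))
       & assign(gbuf(w), wave)
       & if( above(w, 198), assign(gbuf(500), 0), assign(gbuf(500), w + 1) )
    , 0);
    
    p = gbuf(500);
    i = p + idx*199;
    if( above(200, i), 0, assign(i, i - 200) );
    result = gbuf(i);
    Each update writes a single new sample at the pointer position and bumps the pointer, so nothing has to be shifted and the gate interval can be made much shorter; the readout starts at the pointer (the oldest sample) and wraps around to the newest. It still only captures one wave value per update, so the same sampling caveat as above applies.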

  5. #385
    Join Date
    Dec 2010
    Location
    DC/VA metro area, USA
    Posts
    554

    Default

    If you are playing the audio from the same laptop that you are running LSX on, you should be able to use the stereo mix input device to get the audio stream in as if it were live, yes? You will probably have to install the native drivers for your audio chipset to get that input device, though... stock MS drivers do not support it.

  6. #386
    Join Date
    Jan 2014
    Location
    North Carolina, USA
    Posts
    220

    Default

    Quote Originally Posted by colouredmirrorball View Post
    That's a feature I requested a long time ago. I haven't found an actual solution, but the next best thing is, as you suggest, using your own buffer. Creating a buffer is simple enough and reading it out is easy as well... the trouble arises when you want to fill the buffer with data. I haven't found a way that fills the buffer continuously with audio data. You'd need to somehow create a concurrent "thread" or expression loop that runs while the other expressions are evaluated.

    This is the simplest solution:

    Code:
    i=0;
    repeat(200,
       assign(gbuf(i), wave)
       & assign(i, i+1)
    );
    result=gbuf(idx*200);
    This makes use of the gbuf(a) expression: this is a global buffer (hence the name) which can be used across expressions and across timelines. You access it by giving it an integer as an argument; this returns the value stored at that location in the buffer. Storing a value happens by using the assign(a,b) expression, which assigns the value b to the variable it finds in a. So assign(gbuf(5), 0.5) stores the value 0.5 at location 5 in the global buffer. There are a million locations, according to the expression help.
    Thanks CMB. That is definitely what I was looking for, but I couldn't find enough docs to figure out how to do it. It does also raise the question of how frequently the expression is called - but I suspect it shouldn't impact the animation too greatly. I will try it out.

    And yes, this would be much easier if the wave() function gave access to a rolling window (or buffer) of the audio track data automatically.

    Quote Originally Posted by tribble View Post
    If you are playing the audio from the same laptop that you are running LSX on, you should be able to use the stereo mix input device to get the audio stream in as if it were live, yes? You will probably have to install the native drivers for your audio chipset to get that input device, though... stock MS drivers do not support it.

    Indeed, tribble, I have thought about this, though I haven't tried it yet, as my main curiosity was to figure out how to do this with the wave() functionality as is.


    The one issue I can foresee with the audio routing approach is that there doesn't seem to be a way to assign which of the "audio-in" inputs LSX uses for its input channel. With my audio hardware (MOTU Track 16), I can route any output to a virtual set of inputs - however, I don't know of a way to make LSX use those inputs for its record-in channel. It looks like it might just use the "default" input channels, or maybe the first stereo pair it finds in its enumerated list. I don't know. If this were on Mac OS X, this would be easy to do with Soundflower - but I don't know much about the Windows world of audio - I actually try to avoid it because of the lack of OS-level support and the confusion.
    Last edited by BlueFang; 11-24-2014 at 19:34.

  7. #387
    Join Date
    Jan 2014
    Location
    North Carolina, USA
    Posts
    220

    Default

    Quote Originally Posted by dzodzo View Post
    I'm not entirely sure what you're trying to do. Do you want to unfold a line that copies the audio waveform over some time - that is, not draw the actual waveform in realtime but in delayed steps?

    Probably easiest to understand what I am trying to do if you just ignore my rambling about how to actually do it...

    I simply want to draw a waveform display using audio data from the audio track of the current show - not the "audio-in" or "record-in" input.

    If you look at CMB's tutorial, he shows exactly what I want to do, but he uses the wave2() function, which takes audio from the "audio-in" channel of his audio device, not from the audio track of the show. One of the parameters of this wave2() function is the index into the audio input buffer where you would like to get the amplitude of the waveform. This makes it fairly straightforward to create a waveform display by mapping the values in this buffer (accessed by an index) to points along the X axis - presto - a waveform display.
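    Roughly, the wave2() version of that mapping boils down to a one-liner like this (a loose sketch - I'm assuming the first argument of wave2(a,b) is the buffer index and leaving the second at 0 as a placeholder, so check the expression help for the exact argument meanings and buffer length):

    Code:
    result = wave2(idx*200, 0);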

    Here is a screen shot of CMBs tutorial which shows the effect I am looking for:

    [Attachment: Screen Shot 2014-11-24 at 11.23.46 PM.png]
    Last edited by BlueFang; 11-24-2014 at 20:56.

  8. #388
    Join Date
    Jul 2008
    Location
    My momentum is too precisely determined :S
    Posts
    1,777

    Default

    Quote Originally Posted by BlueFang View Post

    The one issue I can foresee with the audio routing approach is that there doesn't seem to be a way to assign which of the "audio-in" inputs LSX uses for its input channel. With my audio hardware (MOTU Track 16), I can route any output to a virtual set of inputs - however, I don't know of a way to make LSX use those inputs for its record-in channel. It looks like it might just use the "default" input channels, or maybe the first stereo pair it finds in its enumerated list. I don't know. If this were on Mac OS X, this would be easy to do with Soundflower - but I don't know much about the Windows world of audio - I actually try to avoid it because of the lack of OS-level support and the confusion.
    LSX uses the default audio input. On my system (Windows 7), right-click the speaker icon, click Recording Devices, select the one you want, and set it as default. Selecting the audio input device in LSX is another thing on the list of requested features. That list exists, by the way; it's here: https://docs.google.com/document/d/1...it?usp=sharing

    Request access to edit the list, otherwise you can only leave comments.

  9. #389
    Join Date
    Jan 2014
    Location
    North Carolina, USA
    Posts
    220

    Default

    Ok, so I got a shiny new APC40 mkII and am having no problem using it to launch clips - err... play timeline loops and control parameters of the MasterBeam window - however, I am still on a mission to get it to behave more like the "Clip Launcher" when used with Ableton Live - i.e., where you can color-code clips that correspond to loops on the timeline, and the currently playing clip is shown on the APC in a nice ugly green color. I see you can get MIDI IN values in expressions - but is it possible to send MIDI notes OUT using expressions? Changing the color of the APC pads is done by sending Note messages - it would be awesome to be able to do everything in LSX. Also, there is a little SysEx blurb that needs to be sent to the APC in order to tell it to change to "Live" mode - and it would also be nice to have a way to send this SysEx message from LSX and not have to rely on some external app to change the mode of the APC.

    The other option - which would require a bit more work - is to create a little Max/MSP patch to handle the coloring of the pads and current playing clip logic.

    Below is a screenshot of how I really wish LSX integrated with the APC - but until then, I am fine hacking things together using expressions and/or Max/MSP.

    Attachment 45529

  10. #390
    Join Date
    Dec 2010
    Location
    DC/VA metro area, USA
    Posts
    554

    Question Default Catalog?

    So, I'm trying to follow along some of the examples in this thread, but my catalog doesn't have the things in it that the examples do. Which catalog is the default catalog for LSX? There are lots of catalogs in the catalogs folder, but they seem to be associated with particular shows.
