Per-Anders Edwards
Participant
Topics Started: 6
Replies Created: 14
Has Thanked: 2
Been Thanked: 0
Thank you, John. Learn something new every day!
How about the behavior of selections in the timeline? I don’t see anything in those notes about it, but prior to the update, duplicating with modifier + drag would leave the previous clips selected, which made quick work of duplicating out a long region (I’d really love to have a “loop drag/expand” feature there, ideally). Now only the duplicated item is left selected, which does make sense as well, but it slows down the previous workflow. If this is a new feature, is there an option somewhere to go back to the old behavior?
Why are you still typing, @Bear Faced Cow? You clearly don’t have a clue what you’re talking about – absolutely nothing about workflow, even less about the event system in plugins or the MIDI spec – yet here you are, insisting on blathering on. And again, it continues to be irrelevant. If I have to spell out why it’s irrelevant, then here goes – because I’m asking for a workflow; how that’s achieved quite literally doesn’t matter to me, and given that it’s already been achieved elsewhere, there is no technical excuse (no matter how defensive) to be found for its lack here. The only three reasons for not doing it are priorities/time, an unmaintainable code base, or an intentional design decision. None of which any of us out here have control over or knowledge of. Therefore: irrelevant discussion.
What I asked for was perfectly clear the first time, and it continues to be clear now. Ask anyone who uses a DAW to “automate” something and they will do the same things. You can add automation lanes to any device; you need not directly access an interface control, and if you do, it is via MIDI parameter automation. The “transformation” is a resulting internal function of the control itself, if one exists, but it need not be visible (see Mod Wheel, Pitch Bend etc., which may have no corresponding virtual control). Exposing the parameter for automation is a matter of tying it to a MIDI channel in VST2 or utilizing the VST 3 event system.
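(For the curious: “exposing” here just means the plugin advertises a named, automatable parameter to the host, which then builds the automation lane from it. A toy C++ illustration of the shape of that idea – not the actual VST2/VST3 API, the struct and names are purely my own:)

    #include <cstdio>

    // Purely illustrative descriptor – real plugin SDKs have their own
    // versions of this; the host reads it and shows the user a lane.
    struct ParamDescriptor {
        const char* name;         // readable name shown in the DAW's lane list
        double      defaultValue; // normalized, 0.0 to 1.0
        bool        canAutomate;
    };

    int main()
    {
        const ParamDescriptor velocityAttenuation = { "Velocity Attenuation", 1.0, true };
        std::printf("%s (default %.1f, automatable: %s)\n",
                    velocityAttenuation.name, velocityAttenuation.defaultValue,
                    velocityAttenuation.canAutomate ? "yes" : "no");
        return 0;
    }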
As for the claim that it’s “BS self entitlement” to go to a software company’s forum looking for technical support and to give feedback without being trolled – firstly, that’s an insanely entitled and freakishly fanboyish POV, and secondly, where exactly do you get off talking to Toontrack’s customers like that? Get over yourself. Do you work for them? I and others have sunk thousands into this product, and everyone who has is absolutely entitled to support, and should be driven by self-interest to report and offer feedback that could help Toontrack improve their product. It’s also self-evidently in the best interests of a company to foster a good relationship with its customers. There is nothing entitled about it. Everyone who’s bought their products has a right to come on here to find out from Toontrack’s support whether a basic feature exists, and if it doesn’t, to then request said feature – and also has the right not to be harassed by a clique of deeply obnoxious and technically illiterate stans, dead set on obstructing help for other users and for Toontrack. What you do is a disservice to everyone. Please stop.
What needed to be requested has been requested. What needed to be said has been said. Mods, please close this thread.
Thank you, I understand what is currently possible, then. As a user I expect to be able to add an automation lane in my DAW of choice to attenuate velocity in real time. That is all.
Competing products allow this. It’s even possible to set up a workaround to do this in most DAWs, although you lose access to SD’s play style controls in doing so (see the attached screenshot for a method that does this using a Player in Reason).
Here’s the thing: when a user requests a feature, they don’t care how it’s done. Nor should they. For all you know, I’m a random musician with zero technical knowledge. But from inference it should be evident that I do know how to use a DAW, the extremely pedestrian nomenclature of an “automation lane”, and that when I say “Velocity” I don’t mean Donald Duck. As amusing as it must be to some trolls here to waste time winding others up, the fact that this is allowed on your own forums sends a very negative message about attitudes towards paying customers.
Sure, this is what it looks like in Reason.
Thinking back, there could be another explanation:
After the OSX update, Reason’s save window has been behaving badly, so I switched to S1 for a session and launched SD as an AU for the first time ever. Possibly this caused it?
I don’t recall if I’d already tried out SD before that; at the time SD worked (although it crashed a few too many times for my liking, so in the end I gave up on that whole cursed session).
When I next loaded up SD in Reason, it started up with a message saying something like it was updating its MIDI library. After that the text was garbled.
I’ve since tried going to the advanced prefs and selecting “Restore MIDI Database” to no avail. It still looks like the attached screenshot.
So I guess you don’t know what those words mean. I described a process. The transform is a tiny part of that.
“MIDI transformation” isn’t a technical term, so you can stop holding onto it like a totem. It’s just a description – simply, the modification of MIDI data. As MIDI is a communication standard, you can transform it with hardware or software. It’s a function. Automation is the process and workflow, common to most DAW software, used to apply a function over time. MIDI need not be involved, but the common language gets used; in fact, within SD it’s doubtful the imported data is even kept in that format.
More to the point, it’s irrelevant. My posts are perfectly clear about the intended workflow and outcome. There was never a need to go into any further detail.
@mods – Please clean the fluff out of this thread.
Reading comprehension, folks. I specifically stated velocity, not volume. Velocity is not the same thing as volume, nor is it purely an input event; again, I stated automation for playback.
Automating velocity is quite common; competitors’ products offer it as standard. It doesn’t mean automating the values coming in externally, nor did I say as much, and you would have to do a heck of a lot of mental gymnastics to imagine otherwise.
This is an incredibly basic idea, and I can’t help but feel people are being purposefully obtuse for the sake of trolling. Seeing as you veritably demand a full explanation, here goes in rough technical form; try not to get lost:
SD owns its own timeline. At the same time, any VST can support a large number of automation lanes and can even offer descriptors to the host so that it can present them to the user with readable names. How the VST uses the resulting values it gets from the host is up to it. In this case it’s trivial: the note values are read from the internal timeline, and the velocity value for any new note is modified by the value passed from the host before it reaches the audio engine (sample player), which then triggers the appropriate sample playback at the appropriate volume.
The modification itself is simple. In this case it’s just note.velocity = note.velocity * attenuation (where attenuation is the normalized value from the automation lane, with a default value of 1.0) if you wanted to replicate the existing velocity clip-modifier behavior, or, if you wanted it to behave a little more like a real drummer, note.velocity = max(note.velocity - attenuation, 0), since drummers don’t tend to play with more precision the quieter they play.
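For anyone who wants it even more concrete, here’s a minimal standalone C++ sketch of those two variants – my own illustration, not Toontrack’s code; the Note struct and function names are hypothetical:

    #include <algorithm>
    #include <cstdio>

    struct Note { int velocity; };  // hypothetical stand-in for an internal note event

    // Variant 1: multiply – preserves the ratios between hits.
    // attenuation is the normalized automation value in [0.0, 1.0], default 1.0.
    int scaled(int velocity, double attenuation)
    {
        return static_cast<int>(velocity * attenuation);
    }

    // Variant 2: clamped subtraction – preserves the absolute spread between
    // hits (closer to a drummer pulling back). Here the automation value has
    // been rescaled to MIDI steps (0-127) before the call.
    int pulledBack(int velocity, int attenuation)
    {
        return std::max(velocity - attenuation, 0);
    }

    int main()
    {
        const Note notes[] = { {40}, {80}, {120} };
        for (const Note& n : notes)
            std::printf("in=%3d  scaled(0.5)=%3d  pulledBack(40)=%3d\n",
                        n.velocity, scaled(n.velocity, 0.5), pulledBack(n.velocity, 40));
        return 0;
    }

With attenuation at 1.0 for the multiply (or 0 steps for the subtraction), both variants pass velocities through untouched, so the lane is a no-op until you pull it down.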
What was your attempt at education for? You injected yourself into this topic, and you seem to have difficulty with reading comprehension and a big need to interject with straw men.
Did I say directly manipulate the MIDI? I said automation – fairly standard language.
Your apologist opinion on how ineffective automation is simply isn’t relevant to my post or request. I’d be very surprised if it weren’t there already, and even if not, what I’m asking for doesn’t obviate any ability to manually edit notes. Despite your protestations over accuracy and your desire to hold things back, SD even includes the convenient ability to manipulate a clip’s velocity before placement in the timeline, so the programmers already disagree with you – and I believe them to be more competent than you apparently believe them to be.
What works for you, works for you. That’s great. This isn’t about you.
No, I’m pretty sure I know exactly what I mean and what I want.
Velocity is the second data byte of the “note on” message in the MIDI spec; it’s well documented and understood. How a VST uses it is up to the VST. In the case of most drum libraries, including SD, velocity is mapped to a combination of sample layers and direct volume control of those samples to create a smooth transition between them.
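For reference, a tiny standalone sketch (hand-rolled, not from any SDK) showing where velocity sits in a raw note-on:

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // Raw MIDI note-on: status byte (0x90 | channel), then two data
        // bytes: note number and velocity.
        const uint8_t msg[3] = { 0x90, 38, 96 };  // channel 1, note 38 (acoustic snare in GM), velocity 96

        // A note-on with velocity 0 is conventionally treated as a note-off.
        if ((msg[0] & 0xF0) == 0x90 && msg[2] > 0)
            std::printf("note %d, velocity %d\n", msg[1], msg[2]);
        return 0;
    }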
What is the point of your interjection? The dynamics will of course be affected by adjusting velocity, either relative to the surrounding velocities or relative to the overall values, depending on whether it’s a multiplication or a clamped subtraction. That’s neither here nor there, though: a control to modify dynamics would only modify the available range of sample layers and output volume. I need a velocity control before I need an “evenness” control, though both would be great. I see no advantage to manually editing lots of MIDI events to do basic things a control curve was designed to do.
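To put numbers on that (my own illustration, nothing from SD): take two hits at velocities 30 and 90. Multiplying by 0.5 gives 15 and 45 – the 3:1 ratio between them is preserved but the absolute gap shrinks – while a clamped subtraction of 30 gives 0 and 60, keeping the 60-step spread until the quieter hit bottoms out at silence.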
No, I mean velocity. Dynamics is the available range between the quietest and loudest hit; volume is the (output) audio signal attenuation. Velocity is how hard you hit a note, represented as a MIDI value.
I’m looking to have a natural pulled-back sound. So I don’t want to make the notes more even; if anything they should be slightly more erratic at lower volume, because it’s much harder to hit evenly. I don’t want to just reduce the volume – I can automate the channel fader for that. I want to automate the relative velocity of the notes.
It’s dead simple stuff. I just don’t want to have to manually program every note’s velocity.
The second: to quickly automate the velocity of an existing track in a DAW for smooth build-ups/build-downs.
Are you saying there isn’t one for this? If so, then please add it to the suggestions list, as it would speed up workflow a great deal compared to manually editing each hit to create the smooth builds I want within a single pattern.
In an ideal world I’d really want to be able to automate both this and the “quantity” dial from the pattern editor, to effectively emulate the way e.g. a jazz drummer will simplify and lower volume when it’s time for an instrument’s solo. It would be especially useful in a live situation where you can’t necessarily pre-program, but could attach an expression pedal and give yourself a much more flexible virtual accompaniment.
Oh, and tested: apparently not a uniquely S1 issue – this also happens if I try the same thing in Reason. Seems to be a new issue (it did not happen in the past). The only local change was the addition of a MIDI keyboard and 2 mics (resulting in opening up 2 additional channels on the audio interface) to the setup.
Oh, and other info: I am at the same time running an instance of Windows 10 using Parallels in the background for unrelated CG software development; this has access/affinity to half the memory and CPU cores.
Thanks for the thoughtful response.
Re. User MIDI – Thanks. I didn’t know the folders would be used for the family pane. That helps a lot, though I’d still love to be able to tag what’s a fill, verse, chorus, etc.
Re. Adding a drum in – I’d be totally happy if it just added the default first drum in the list when beats were added, just for the extra instrument and MIDI note lane.
Re. Multipart drums. Ah, cool. How do I do that in the “Edit Play Style” tab?
Re. Tap to Find – Yeah, I think sort of that; I think we’re talking about the same thing. In my head I envisioned it being a bit like the Song Creator right now, in that you drag in a MIDI section you like that matches your “tap to find” beat to get the “family”. But rather than using prebuilt rhyme scheme/stanza patterns, it would basically assume you’d prefer to stay in the same family, and then do repeated match/find-and-fill Tap to Find for the whole rest of the input, getting the nearest possible match that also fits the family you liked. Alternatively, it could try to figure out the pattern from the source MIDI track and then just work with the Song Creator as it does now, using this “custom” user pattern. Could be a fun deep learning project (I still have to find time to play around with that stuff – got about a bazillion ideas, no follow-through, like every mudlark). Anyhow, just throwing that out there; it’s been something I’ve dreamed of having in a “virtual drummer” for yonks now, but I know how hard it is to find time and resources when you’ve got a lot on already.
Thanks again.