Experiments in Agents Bar

Dawid Laszuk published on
3 min, 567 words

A small big milestone. This is a press release without a press. You might have gotten here from our LinkedIn account, in which case, shout-out to Agents Bar's social media team.

Today we're launching Experiments. In simple terms, these are bridges between Agents and Environments: they allow Agents to interact with Environments, managing their state and learning process. We're excited about it because it mitigates the main disadvantage of using Agents Bar over doing things yourself. What is it, I hear you ask?

Main problem

Agents Bar provides deep reinforcement learning (DRL) agents and lets you train and use them quickly. It's an MLOps platform for deep reinforcement learning, although we know a few things are still missing. Focusing on (deep) reinforcement learning means we inherit one of its biggest problems: data famine (a step above hunger). Yes, all machine learning (ML) is data hungry, and one-upping that never helps, but, but, but. Reinforcement learning comes with a feedback loop: you observe, you deduce, you interact, you change, you observe. What you learned yesterday might be incorrect or insufficient today. Because the environment changes, agents need to change as well. So it isn't enough to collect data once and then do some extensive learning. We need to keep collecting data and keep on learning, extensively.
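To make that feedback loop concrete, here is a minimal sketch with a toy environment and agent. The `ToyEnvironment` and `ToyAgent` names are hypothetical stand-ins, not the Agents Bar API: the agent observes, acts, and immediately folds each reward back into what it knows, over and over.

```python
import random

random.seed(0)  # deterministic toy run

class ToyEnvironment:
    """A toy 1-D environment: the agent tries to reach position 5."""
    def reset(self):
        self.position = 0
        return self.position

    def step(self, action):
        # action is +1 or -1
        self.position += action
        reward = 1.0 if self.position == 5 else -0.1
        done = self.position == 5
        return self.position, reward, done

class ToyAgent:
    """Keeps a crude running value estimate for each action."""
    def __init__(self):
        self.value = {1: 0.0, -1: 0.0}

    def act(self, state):
        # Mostly exploit the better-valued action, sometimes explore.
        if random.random() < 0.1:
            return random.choice([1, -1])
        return max(self.value, key=self.value.get)

    def learn(self, action, reward):
        # Nudge the action's value toward the observed reward.
        self.value[action] += 0.1 * (reward - self.value[action])

env, agent = ToyEnvironment(), ToyAgent()
for episode in range(20):
    state = env.reset()
    for _ in range(50):  # observe -> act -> learn: the feedback loop
        action = agent.act(state)
        state, reward, done = env.step(action)
        agent.learn(action, reward)
        if done:
            break
```

Note how the agent is never "done" learning: if the reward structure shifted tomorrow, the same loop would keep updating its estimates, which is exactly why one-off data collection isn't enough.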

That's the main problem: the learning process is slow. Having agents somewhere on the internet, environments somewhere else, and then routing their communication through yet another place adds a lot of slowness on top of the already super slow. In that setup we completely validate the argument for doing things yourself, on your own computer. Erratum: validated. Until today.

Experiments to save you

With Experiments we not only help orchestrate the learning process, we also tremendously improve learning time. This is possible because Experiments are colocated on the same network as the Environments and Agents. The result is as if you were running them locally. Actually, it should be even better, since we can rent top-notch hosts that run only what your experiment needs.

Individual performance is great, but also consider that you can run as many experiments "locally" as you want. Oftentimes you don't know what the starting parameters for your agent should be, so... why not test many of them at the same time? Instead of waiting a day to see whether your parameters were correct, go ahead and launch 10 agents with different parameters. Getting a 10x improvement will surely impress your colleagues, your boss, and that one frenemy who is secretly competing with you in everything.
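The parallel-sweep idea can be sketched in a few lines. The `train` stub and the learning-rate grid below are hypothetical (not the Agents Bar API): each candidate is scored concurrently, and you keep the best instead of testing one per day.

```python
from concurrent.futures import ThreadPoolExecutor

def train(learning_rate):
    # Hypothetical training stub: in practice this would be a remote
    # Experiment; here a toy score that peaks at lr = 0.01.
    score = -(learning_rate - 0.01) ** 2
    return learning_rate, score

# Candidate starting parameters: 0.1, 0.01, ..., 0.00001
learning_rates = [10 ** -e for e in range(1, 6)]

# Run all candidates side by side instead of one after another.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(train, learning_rates))

best_lr, best_score = max(results, key=lambda r: r[1])
```

The design point is that the sweep's wall-clock time is roughly that of a single run, which is where the 10x claim comes from.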

Try it

I (obviously) don't know about you, but I'm (obviously) a tad bit excited about this. But, like, actually(?) excited. Yes, I just had a coffee, but that's beside the point, OK?! Scalability and performance are the two main things I wanted to achieve with Agents Bar. There are rough edges, and things aren't as customizable as I want them to be, but now the fun part begins.

Want to try? Go ahead, register, and give it a go. Let me know if you run into any problems or want to extend your free trial.

Is there anything missing? Anything particular you'd like to see? Feel free to comment here or send me a direct message. I'm happy to chat and help out in any way I can :)