
Speedy data 2

My Speedy data post generated a few comments and some discussion. I really appreciate people taking the time to get involved and share their knowledge and views.

The first comments came via Twitter from TraderBot (here and here) with a link to Stack Overflow. This is a site I’ve found really useful for getting help with programming issues in multiple languages (that reminds me, I keep meaning to do a list of the apps and sites I use). The Q&As linked to, although relating to Java, are an interesting read, with an answer to the speed question that is, in summary, “it depends on what exactly you want to do/measure”.

LiamPauling commented on the post asking where I’m hosted and whether I stream. I’m cloud hosted and not streaming. He went on to say that he thinks the bottlenecks are more likely elsewhere, which, judging by later comments, seems to be a good point.

Betfair Pro Trader asked why I wanted to use an array. It’s not that I want an array more than any other data structure; I was looking for the best solution, if such a thing exists (which is becoming less clear).

Tony, via Twitter, suggested running test code with the different structures. This could be useful, but I was initially put off by the confusion of reading differing opinions based on various implementations of arrays, collections and dictionaries (and later, lists). At this point I was thinking that the optimum structure depends on the specific use and there isn’t an exact answer to my speed question.
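Tony’s suggestion could be as simple as timing the same lookup in two different structures. A minimal micro-benchmark sketch in Python (my bot is in VB.net, so this is illustrative only – the names and data sizes are invented):

```python
import timeit

# Hypothetical micro-benchmark: lookup speed in a list vs a dictionary.
# Results depend heavily on data size and access pattern, which is
# exactly why there is no single "fastest" structure.
N = 10_000
prices = [(i, i * 1.01) for i in range(N)]    # list of (selection_id, price)
price_dict = {i: i * 1.01 for i in range(N)}  # dict keyed by selection_id

def list_lookup():
    # Linear scan: O(n) per lookup.
    for sel, price in prices:
        if sel == N - 1:
            return price

def dict_lookup():
    # Hash lookup: O(1) on average.
    return price_dict[N - 1]

list_time = timeit.timeit(list_lookup, number=1000)
dict_time = timeit.timeit(dict_lookup, number=1000)
print(f"list scan: {list_time:.4f}s, dict lookup: {dict_time:.4f}s")
```

The gap narrows or vanishes for small collections and different access patterns, which matches what my own testing later showed.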

Next, a comment from Ken. He points to lists, as they’re something he uses regularly, and talks through some of the benefits. Again, I’d previously come across articles saying lists were slow, but maybe I was too quick to dismiss them. Betfair Pro Trader has also suggested using lists and dictionaries combined. Ken adds that he codes in C# (C sharp), but I think for the purpose of data structures and speed the languages are similar (C# and VB.net compile to the same intermediate language and run against the same runtime libraries).
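The “lists and dictionaries combined” idea might look something like this – a sketch in Python with made-up names (in VB.net, `List(Of T)` and `Dictionary(Of TKey, TValue)` would play the same roles):

```python
# Sketch of combining a list and a dictionary: the list keeps an
# ordered history of price ticks, the dict indexes the latest price
# per selection for fast lookup. Names are my own illustration.
ticks = []    # ordered history of (selection_id, price) tuples
latest = {}   # selection_id -> most recent price

def record(selection_id, price):
    ticks.append((selection_id, price))  # append is cheap and keeps order
    latest[selection_id] = price         # overwrite gives O(1) current price

record(101, 3.5)
record(102, 2.8)
record(101, 3.45)

print(latest[101])  # fast lookup of the current price for selection 101
print(len(ticks))   # full tick history retained in arrival order
```

Each structure does the job it’s good at, rather than forcing one to do both.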

n00bmind added a detailed comment. He makes the point that the advantages of one structure over another are not always clear-cut, as mentioned above. He also agrees with previous comments that my speed question may be missing the main issues – the program/algorithm itself and network latency. Further advice is given about profiling (something I hadn’t come across as a specific process before) and maybe using a different language, such as Python (I have only a basic understanding of Python from messing with it on my Raspberry Pi).
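Profiling just means measuring where the time actually goes before optimising anything. A minimal sketch using Python’s built-in cProfile (the function being profiled is invented for illustration; on .NET the rough equivalent would be the Visual Studio profiler, or timing sections with the Stopwatch class):

```python
import cProfile
import io
import pstats

# Toy function standing in for a bot's price-processing loop.
def process_prices(prices):
    total = 0.0
    for p in prices:
        total += p * 0.95  # e.g. apply a commission factor
    return total

profiler = cProfile.Profile()
profiler.enable()
result = process_prices([1.5, 2.0, 3.25] * 100_000)
profiler.disable()

# Print the top entries by cumulative time to see where time was spent.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
print(stream.getvalue())
```

The point of the exercise is that the hot spot is often not where you guessed – network calls and serialization may dwarf any data-structure difference.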

Finally, Jptrader commented, agreeing mostly with n00bmind, and others, about looking at “handling network latency properly and doing performance profiling”.

Although a simple answer hasn’t been found (because there isn’t one), I’m guided by these comments to focus more on my code: handling serialization and latency, making the algorithm efficient and using the data structures that work for now, whether that’s arrays, collections, dictionaries, lists or a combination of them. Moving to another language just isn’t feasible for me at the moment; it’s taken me over a year to get a running bot in VB, with limited hobby time. I am happy to accept that another language may have its advantages, so I’d advise others to look at this for optimising their bots’ performance (for me the advantage will be seen moving from VBA to VB.net).

The testing I’ve done hasn’t shown any particular advantage for the different structures. From my searches on the web, I think this could be due to the relatively small amount of data I’m handling (many articles talk of data in the tens to hundreds of thousands of lines when comparing structures). An error on my part also had my bot making double calls for data, which added to my difficulties and questions initially.

I have plenty to be getting on with for now and will continue looking to improve my bots. Thanks again for all the comments.


Steve commented –

The simple fact is you won’t win long term gambling or trading if you’re not getting value. With trading at least one side of your trade needs to be a value bet; optimally we’d have both lay and back as value bets, but that’s harder to achieve automated than doing it manually. So whether you like it or not, your monthly £26.55 winnings are coming from some of the value bets you place. It’s not particularly hard to identify where your value is coming from in the bets you’ve placed, and from there you can start to fine tune bots so they work as efficiently as possible.

I did consider writing a much more in depth reply re: value and how to assess it, but you do seem to be more interested in cultivating some cut price Cassini blog persona than your bots for the time being.

Thanks for the comment.

Getting value when betting on an outcome means taking a better than true price. If you back at 4.0 but the true price is 3.65, you have value. If you lay the same at 3.2, you have value. Based on this, consider if I enter on this selection with a back at 3.2 and exit at 3.0: I have a green trade. My entry point bet is not at value with regard to the event outcome. My exit point bet is at value with regard to the event outcome. My trade as a whole has value, as it results in a green book. This fits with the points made in the comment above, but not with the statement “every bet sent should be allowed to stand on its own merits”. As described, my entry point, made prior to an exchange crash, is not at value in this instance – long-term losing. (The argument here could be that the random nature of these crash events would, over time, likely have a neutral (less commission) effect on my bank, but their infrequency would be more disruptive than that theoretical neutrality.) So the term “value” is relative to the actions and results intended. (Other comment points noted.)
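The definitions above reduce to quick arithmetic: the true probability is one over the true price, and a bet has value when its expected return per unit stake is positive. A sketch in Python (my own worked numbers, using the prices from the paragraph above):

```python
# Expected-value arithmetic for the value definitions above.
# These helpers are my own illustration, not code from the bot.

def back_ev(back_odds, true_odds, stake=1.0):
    """Expected value of a back bet per unit stake (commission ignored)."""
    p = 1.0 / true_odds  # true win probability implied by the true price
    return p * (back_odds - 1) * stake - (1 - p) * stake

def lay_ev(lay_odds, true_odds, backers_stake=1.0):
    """Expected value of a lay bet per unit of backer's stake."""
    p = 1.0 / true_odds
    return (1 - p) * backers_stake - p * (lay_odds - 1) * backers_stake

print(back_ev(4.0, 3.65))  # positive: backing above the true price is value
print(lay_ev(3.2, 3.65))   # positive: laying below the true price is value
print(back_ev(3.2, 3.65))  # negative: the entry bet alone is not value
```

A bet at exactly the true price has zero expected value, which is the dividing line the whole argument turns on: the entry leg can be negative on its own while the round trip still greens up.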

Tobias commented –

Hi, I am also into bot betting, but with a different twist compared to your approach… You seem to have something interesting going on; have you done some thinking about scaling up stakes? What kind of potential do your strategies have for bigger stakes? Anyway, good luck with your bots and keep us updated on your progress!

Thanks for the comment.

Stakes – I’ve allowed stakes to rise with the bank in the past and found that return falls steadily to a point at which larger variance occurs, followed by losses. I now have Oscar set to various stakes within a fixed range, just below the point where I saw more variance. I don’t intend to increase stakes on this strategy but may revisit this in the future, as things can change.

Updates – I’ve spent a lot of time in recent months programming a new bot in Visual Basic. I’ll do a post on that soon (hopefully); it’s been a journey.

On Twitter, Tony tweeted –

As much as I like the graphs and regular updates, no graphs and monthly rep = more dev time for you. Hope I get some subs money back! 😉

As mentioned, I’ve been developing a new bot in a language that’s new to me, so that’s where my time has gone. The graphs can be included in the blog; I can’t say I thought that decision through thoroughly. With a month’s worth of data on them, I was thinking they might be less detailed – a line with less variance than reality – so I’ll reassess. Subs will be discussed at the next AGM.