Status as of Friday, April 10
Forum rules
The Development forum is for discussion of development releases of Prohashing and for feedback on the site, requests for features, etc.
While we can't promise we will be able to implement every feature request, we will give them each due consideration and do our best with the resources and staffing we have available.
For the full list of PROHASHING forum rules, please visit https://prohashing.com/help/prohashing- ... rms-forums.
- Steve Sokolowski
- Posts: 4585
- Joined: Wed Aug 27, 2014 3:27 pm
- Location: State College, PA
Status as of Friday, April 10
Here's today's status:

Unfortunately, we were not successful last night in improving performance enough to release the latest code. Even though we have increased performance by as much as 100 times since earlier in the week, the server is still too slow for normal mining. We continue to work on the problem at the reduced pace that tax season allows, which is unfortunately very slow. Hopefully, the taxes will be done in 4 or 5 days and we'll be able to get back to this at a normal pace.
Re: Status as of Friday, April 10
Would it perhaps be useful to push the update to a secondary box or VM, then connect a small subset of users to it to determine more precisely what is causing which process(es) to hulk out? Maybe a few actual users would cause the process(es) in question to spike enough to identify and diagnose the problem, but not get too rowdy? You could control which users were tapped with some routing config that points traffic from their IPs (or blocks, or whatever) to your QA box as needed.
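To make the idea concrete, here is a minimal sketch of that kind of split, written as a plain TCP pass-through in front of the stratum port. The addresses, port, and allowlist below are hypothetical placeholders, not anything from Prohashing's actual setup:

```python
import asyncio

# Hypothetical addresses; real values would come from the deployment.
PROD_BACKEND = ("10.0.0.10", 3333)  # current production stratum server
QA_BACKEND = ("10.0.0.20", 3333)    # QA box running the new code
LISTEN_PORT = 3333

# IPs (or prefixes) of the volunteer miners to divert to the QA box.
QA_USERS = {"203.0.113.7", "198.51.100."}

def is_qa_user(ip: str) -> bool:
    """True if the client IP matches an exact address or prefix in QA_USERS."""
    return any(ip == entry or ip.startswith(entry) for entry in QA_USERS)

async def pump(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    """Copy bytes one way until EOF, then close the destination."""
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(reader, writer):
    client_ip = writer.get_extra_info("peername")[0]
    host, port = QA_BACKEND if is_qa_user(client_ip) else PROD_BACKEND
    upstream_reader, upstream_writer = await asyncio.open_connection(host, port)
    # Shuttle traffic in both directions until either side disconnects.
    await asyncio.gather(
        pump(reader, upstream_writer),
        pump(upstream_reader, writer),
    )

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", LISTEN_PORT)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

The same split could just as well be done at the firewall or load balancer; the point is only that a handful of consenting miners get steered at the new code while everyone else stays on production.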
- Steve Sokolowski
- Posts: 4585
- Joined: Wed Aug 27, 2014 3:27 pm
- Location: State College, PA
Re: Status as of Friday, April 10
I think that we can reproduce the problem fairly accurately now. A while back, we paid Reddit user /u/bpj1805 one bitcoin to develop a testing application that submits invalid shares, and we modified our server to add a configuration value that accepts them. That's how we were able to determine that the system can handle 600 GH/s.
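For anyone curious what such a harness involves, below is a minimal sketch of a stratum client that subscribes, authorizes, and then floods the server with bogus shares. It assumes the common stratum mining protocol, and the endpoint, worker name, and share fields are placeholders; this is an illustration, not /u/bpj1805's actual application:

```python
import json
import os
import socket

# Placeholder endpoint and credentials, not the real pool.
POOL = ("stratum.example.com", 3333)
WORKER = "testworker.1"

def send(sock, method, params, msg_id):
    """Send one newline-delimited JSON-RPC message, as stratum expects."""
    line = json.dumps({"id": msg_id, "method": method, "params": params}) + "\n"
    sock.sendall(line.encode())

def main():
    sock = socket.create_connection(POOL)
    send(sock, "mining.subscribe", [], 1)
    send(sock, "mining.authorize", [WORKER, "x"], 2)

    # Wait for the server to push work via mining.notify.
    reader = sock.makefile("r")
    job_id = None
    for raw in reader:
        msg = json.loads(raw)
        if msg.get("method") == "mining.notify":
            job_id = msg["params"][0]
            break

    # Submit shares built from random nonces. They are almost certainly
    # invalid, which is the point: with the server configured to accept
    # invalid shares, this exercises the whole share-handling path.
    # (A real harness would also keep reading the server's responses.)
    for msg_id in range(3, 10003):
        extranonce2 = os.urandom(4).hex()
        ntime = "55279c40"  # arbitrary placeholder hex timestamp
        nonce = os.urandom(4).hex()
        send(sock, "mining.submit",
             [WORKER, job_id, extranonce2, ntime, nonce], msg_id)

if __name__ == "__main__":
    main()
```

With the configuration value mentioned above enabled, share validity drops out of the picture and the submission rate becomes the only variable under test.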
The problem, however, is not the hashrate but the number of coins. We haven't found a way to simulate the coins themselves, and hosting coin daemons is extremely expensive: the production daemons alone run on about $2,500 of hardware.
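For context on what simulating a coin would entail: the pool server talks to each daemon over JSON-RPC, chiefly getblocktemplate, so a stub can answer that one call with canned data, as in the hypothetical sketch below. What it cannot fake is live chain state (reorganizations, incoming transactions, block timing) across dozens of coins at once, which is why the real daemons, and the hardware to run them, are hard to replace:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned reply covering only the fields a pool server typically reads from
# getblocktemplate. A real daemon derives these from live chain state, which
# is exactly the part a stub cannot reproduce.
FAKE_TEMPLATE = {
    "version": 2,
    "previousblockhash": "00" * 32,
    "transactions": [],
    "coinbasevalue": 5000000000,
    "bits": "1d00ffff",
    "curtime": 1428624000,
    "height": 1,
}

class StubDaemon(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        req = json.loads(body)
        result = FAKE_TEMPLATE if req.get("method") == "getblocktemplate" else None
        reply = json.dumps({"result": result, "error": None, "id": req.get("id")})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply.encode())

if __name__ == "__main__":
    # 18332 is a conventional testnet-style RPC port; purely illustrative here.
    HTTPServer(("127.0.0.1", 18332), StubDaemon).serve_forever()
```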
But on a more practical note, I don't think we would really benefit from a test environment like the one you suggest. Once per day, we can deploy the code to the live server for two minutes to see whether it holds up, without inconveniencing people much; every day we find that we've improved performance a little, but not enough. This algorithm simply requires a lot of computation and is extremely complex. While we appreciate the suggestion, a separate environment would require so much setup that the cost and time spent wouldn't be worth it.
I think if we just keep at it little by little, we'll eventually get it to a point where we can release. Fortunately, we haven't yet run out of improvements to make.