Status as of Thursday, June 18

Steve Sokolowski
Posts: 4585
Joined: Wed Aug 27, 2014 3:27 pm
Location: State College, PA

Status as of Thursday, June 18

Post by Steve Sokolowski » Thu Jun 18, 2015 9:59 am

Here's today's quick status:
  • We are being overloaded with support tickets. Rest assured that all money owed is accounted for and that we will respond as soon as possible. We want to move all the daemons over to the new servers as quickly as we can to increase reliability, if it can be increased any further.
  • We are discovering that some mining rental companies promise our customers a single miner but actually package several inferior miners together into one connection, which prevents those customers from taking advantage of all our features and earning maximum profits. We'll investigate this next week so we can notify affected users that they are not receiving the product they are paying these cloud mining companies for.
  • We've moved 40 of the 137 active daemons over to the new servers. 70 more are downloading blocks, since it is faster to download them across the Internet than to copy them off the slammed disks. As you can see, performance has improved significantly, but we don't want to rest now, given that we don't know what will happen if the hashrate rises above 30 before we've finished.
  • Chris is still sick, but despite that he stayed up until 7:10 am last night moving daemons.
  • One performance improvement I hadn't considered is that consolidating these huge wallets means the daemons produce far less data on disk, because they no longer have to track a ridiculous number of private keys. That's worth keeping in mind for anyone running a bitcoin daemon that receives lots of transactions.
  • Once we can delete the old daemon data, we are considering buying two 1TB SSDs, which would cost only about $1k, and eliminating RAID and other complex solutions. Our database is 500GB, but part of that is because we cannot delete data without overloading the disks. I never thought we would see the day when solid state disks were large enough to be the only tier of storage and CacheCade solutions were obsolete. Instead of RAID, we could simply copy the database to an attached hard drive every day; the process would take a very long time because the hard drive is slow, but the SSDs would barely be affected (a rough sketch of that nightly copy follows this list). Rootdude's comments on this would be appreciated.
  • Finally, we apologize in advance to customers who are waiting a while to receive replies. These are great times indeed - who would have thought we would ever see profitability above 8 cents again?
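
To make the nightly copy idea concrete, here is a minimal sketch of what it could look like. This is an outline rather than our actual tooling: the paths are placeholders, and rsync is just one reasonable way to mirror the data to the slow disk without compression.

Code:

    #!/usr/bin/env python3
    # Minimal sketch of the nightly SSD-to-HDD copy described above.
    # Both paths are hypothetical placeholders, not our real layout.
    import datetime
    import subprocess

    DB_PATH = "/ssd/database/"       # ~500GB database on the SSDs (assumed)
    BACKUP_PATH = "/mnt/backup/db/"  # cheap external 4TB hard drive (assumed)

    def nightly_backup():
        started = datetime.datetime.now()
        # rsync in archive mode copies only what changed and applies no
        # compression, so the SSDs barely notice while the slow hard
        # drive does all the waiting.
        subprocess.check_call(["rsync", "-a", "--delete", DB_PATH, BACKUP_PATH])
        print("backup finished in", datetime.datetime.now() - started)

    if __name__ == "__main__":
        nightly_backup()   # run from cron every night after payouts

One caveat: copying a live database this way is not crash-consistent, so the copy would have to run while the database is idle after payouts, or work from a dump taken first.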
rootdude
Posts: 76
Joined: Wed Jan 07, 2015 3:14 pm

Re: Status as of Thursday, June 18

Post by rootdude » Thu Jun 18, 2015 10:32 am

Steve Sokolowski wrote:Here's today's quick status:
  • Once we can delete the old daemon data, we are considering buying two 1TB SSDs, which would cost only about $1k, and eliminating RAID and other complex solutions. Our database is 500GB, but part of that is because we cannot delete data without overloading the disks. I never thought we would see the day when solid state disks were large enough to be the only tier of storage and CacheCade solutions were obsolete. Instead of RAID, we could simply copy the database to an attached hard drive every day; the process would take a very long time because the hard drive is slow, but the SSDs would barely be affected. Rootdude's comments on this would be appreciated.
It's true that the daemons' data (outside of wallet.dat) doesn't need redundancy, so shedding the overhead of RAID would be completely appropriate. Scheduling a wallet.dat backup to redundant storage while keeping the databases on bare disk makes a lot of sense; in fact, I wouldn't bother with RAID at all for the wallet daemons. Every once in a while (depending on wallet activity), re-downloading the blockchain makes sense too, as a regular maintenance activity. Even better, PCIe SSD storage would be very, very efficient and fast as all get-out - I'd opt for that before anything else, and I certainly wouldn't use USB-interfaced storage.
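
For the wallet.dat piece, the daemons can do that work themselves: the bitcoind family exposes a backupwallet RPC that writes out a consistent copy of wallet.dat even while the daemon is running. Here's a rough sketch of scheduling that across daemons - the ports, credentials, and destination path are all made up:

Code:

    #!/usr/bin/env python3
    # Sketch: have each daemon back up its own wallet.dat through the
    # standard bitcoind-family "backupwallet" RPC. The ports, credentials,
    # and destination below are placeholders, not Prohashing's settings.
    import base64
    import json
    import urllib.request

    RPC_USER, RPC_PASS = "rpcuser", "rpcpass"        # placeholder credentials
    DEST = "/mnt/redundant/wallets"                  # assumed RAID-backed mount
    DAEMONS = {"bitcoind": 8332, "litecoind": 9332}  # assumed RPC ports

    def rpc_call(port, method, params):
        payload = json.dumps({"method": method, "params": params, "id": 1})
        req = urllib.request.Request("http://127.0.0.1:%d/" % port,
                                     payload.encode(),
                                     {"Content-Type": "application/json"})
        token = base64.b64encode(("%s:%s" % (RPC_USER, RPC_PASS)).encode())
        req.add_header("Authorization", "Basic " + token.decode())
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode())["result"]

    for name, port in DAEMONS.items():
        # backupwallet writes a consistent copy even while the daemon runs
        rpc_call(port, "backupwallet", ["%s/%s-wallet.dat" % (DEST, name)])
        print("backed up wallet for", name)

Point the destination at the redundant volume, run it from cron, and the only data that truly needs protection gets it.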
Steve Sokolowski
Posts: 4585
Joined: Wed Aug 27, 2014 3:27 pm
Location: State College, PA

Re: Status as of Thursday, June 18

Post by Steve Sokolowski » Thu Jun 18, 2015 10:54 am

I think we would do a sort of hybrid solution. There would be a cheap external 4TB disk attached to one of the servers, and every night after payouts we would update the backup on that disk. Since the SSDs are much smaller, all servers can be copied to this one backup disk. To save CPU, we wouldn't even need compression. Wallet files would continue to go across the Internet to the offsite location.

While blockchains are not unique data, restoring them would take a week per machine, and profitability would be low the whole time. For a mining server, downstream bandwidth is more limited than upstream bandwidth, and to compound the problem, most of these coin networks have few nodes and download slowly. We can only afford to be a few days behind on blocks, or customers will leave because they earn less money.
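
Staying within a few days of the tip is at least easy to monitor: bitcoind-style daemons answer getblockcount and getpeerinfo, and each connected peer reports the chain height it advertised. Here's a quick sketch of such a check - the ports, credentials, and two-day threshold are placeholders:

Code:

    #!/usr/bin/env python3
    # Sketch: flag daemons whose block height trails what their peers
    # report, using the standard getblockcount and getpeerinfo RPCs.
    # Ports, credentials, and the lag threshold are placeholders.
    import base64
    import json
    import urllib.request

    RPC_USER, RPC_PASS = "rpcuser", "rpcpass"        # placeholder credentials
    DAEMONS = {"bitcoind": 8332, "litecoind": 9332}  # assumed RPC ports
    MAX_LAG = 2880   # roughly two days of 60-second blocks, illustrative

    def rpc_call(port, method):
        payload = json.dumps({"method": method, "params": [], "id": 1})
        req = urllib.request.Request("http://127.0.0.1:%d/" % port,
                                     payload.encode(),
                                     {"Content-Type": "application/json"})
        token = base64.b64encode(("%s:%s" % (RPC_USER, RPC_PASS)).encode())
        req.add_header("Authorization", "Basic " + token.decode())
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode())["result"]

    for name, port in DAEMONS.items():
        ours = rpc_call(port, "getblockcount")
        peers = rpc_call(port, "getpeerinfo")
        # "startingheight" is the height each peer advertised when it
        # connected, so this is a conservative estimate of the real tip.
        best = max([p.get("startingheight", 0) for p in peers] + [ours])
        if best - ours > MAX_LAG:
            print("WARNING: %s is %d blocks behind" % (name, best - ours))

It only flags the problem, but it would tell us which daemons to prioritize after a restore.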
rootdude
Posts: 76
Joined: Wed Jan 07, 2015 3:14 pm

Re: Status as of Thursday, June 18

Post by rootdude » Thu Jun 18, 2015 12:41 pm

Steve Sokolowski wrote:I think we would do a sort of hybrid solution. There would be a cheap external 4TB disk attached to one of the servers, and every night after payouts we would update the backup on that disk. Since the SSDs are much smaller, all servers can be copied to this one backup disk. To save CPU, we wouldn't even need compression. Wallet files would continue to go across the Internet to the offsite location.

While blockchains are not unique data, restoring them would take a week per machine, and profitability would be low the whole time. For a mining server, downstream bandwidth is more limited than upstream bandwidth, and to compound the problem, most of these coin networks have few nodes and download slowly. We can only afford to be a few days behind on blocks, or customers will leave because they earn less money.
This is exactly why keeping multiple wallet servers as hot standbys makes sense in the first place... Anyway, it's great to have options.