
Status as of Saturday, December 2, 2017

Posted: Sat Dec 02, 2017 8:30 am
by Steve Sokolowski
Good morning! Just a few brief comments today.
  • The infrastructure is killing us. This week alone, we lost several days to a sudden CPU failure on one of the computers we use for development, a residential Internet outage by Comcast, and the configuration and setup of the Atlassian tool suite. We're hoping to have these issues resolved soon, but we still have the bug tracker, Bitbucket source control, and Confluence wiki to set up before we can start getting this stuff under control. It's amazing that the infrastructure issues took more time this week than it took to test the parallel mining server last week. Comments on how to configure this stuff more quickly, or why the physical equipment is breaking so often, are welcome.
  • We are still debating whether to go ahead with the release today, since the infrastructure issues caused us to be backed up with support tickets. If we decide to focus on infrastructure today instead, we'll issue the mining server release tomorrow.
  • After we verify that the mining server works well, our next task will be to reopen registrations. Our limiting factor in registrations is the ability to answer support tickets from new users, so we will likely allow only a few users at a time so we can keep up with the expected large number of tickets from new customers.
  • The new ticket system is fully operational and will be used for all future tickets. The old ticket system is still online at oldsupport.prohashing.com, and I'm using it today to continue working through the 180 existing tickets there. Once tickets in the old system are closed, customers will need to use the new system for future correspondence. contact@prohashing.com now generates an automatic reply instructing customers to use the new ticketing system. The website will be updated with the new procedure shortly.

Re: Status as of Saturday, December 2, 2017

Posted: Sat Dec 02, 2017 8:56 am
by pavvappav
I'd recommend outsourcing the Atlassian and other customer support infrastructure. Services like elasity.io can handle it for you. The added benefit of moving these services off of your operational infrastructure is that you can rebuild it while retaining the business and customer relations side of the shop.

I know you guys are very security minded, so AWS hosting of Bitbucket may not be something you're comfortable with; however, it could be done with the Amazon Virtual Private Cloud so that your development infrastructure is not on the open Internet and is only accessible from your research and development network.

Re: Status as of Saturday, December 2, 2017

Posted: Sat Dec 02, 2017 2:13 pm
by AppleMiner
For security I doubt they can outsource support any more than they can anything else.

If all support had to deal with was "My new D3 can't connect to the pool," they could read a 10-page sheet on how to connect a D3.
But I would wager most of the support tickets are "I didn't get X payment on Y day."
So the support team will need access to the books and internal account info, and I don't think you want that publicly accessible.
If so, it would have to be read-only, with someone you can trust holding special write access to make changes to accounts.

I mean, if you outsource it to me, I'm going to put in a ticket saying I didn't get my last 100 BTC, and sure enough I'll have them sent to myself.
I think all the parts of the operation need to stay within the brothers' control or with trusted employees they know and can count on.

Re: Status as of Saturday, December 2, 2017

Posted: Sat Dec 02, 2017 2:20 pm
by Steve Sokolowski
AppleMiner wrote:For security I doubt they can outsource support any more than they can anything else.

If all support had to deal with was "My new D3 can't connect to the pool," they could read a 10-page sheet on how to connect a D3.
But I would wager most of the support tickets are "I didn't get X payment on Y day."
So the support team will need access to the books and internal account info, and I don't think you want that publicly accessible.
If so, it would have to be read-only, with someone you can trust holding special write access to make changes to accounts.

I mean, if you outsource it to me, I'm going to put in a ticket saying I didn't get my last 100 BTC, and sure enough I'll have them sent to myself.
I think all the parts of the operation need to stay within the brothers' control or with trusted employees they know and can count on.
Thanks for the suggestion in the second post about AWS, but AppleMiner is correct in that we can't let the tickets off our disks. Some users provide wallet addresses and personally identifiable information in tickets, and hosting them ourselves eliminates a whole class of investigations. When a customer reports that their identity was stolen, we can say it wasn't us; if the tickets were hosted remotely, we would have to consider whether someone had gained access to that system.

But the issue here isn't hiring a support person - that's something we plan to do eventually. The issue is that trivial things unrelated to mining are holding us up. For example, a residential Internet connection going out can cost us a whole day of work even though it has nothing to do with the site, and a computer failure puts us out of action until the replacement arrives from Newegg. Even the fastest shipping method took them three days, and we're in a dispute with them about that now.

Re: Status as of Saturday, December 2, 2017

Posted: Sat Dec 02, 2017 3:15 pm
by AppleMiner
1. Residential internet: pay the extra $30 a month and get both ISPs in the area, so if one goes down, the other takes over.
I pay $60 for my main internet for my miners, and I pay another $30 for a backup from the other ISP provider (Comcast and Verizon).
$30 a month for a backup internet connection for the mining operation: if Comcast goes out for one day a year and Verizon is online to handle the miners for that day, it has paid for itself. Either way, I'm writing both off as business expenses needed for the business, so I don't care. But yes, a failover ISP on the modem, so you don't lose a day because one of the services is out, may be worth the extra $360 a year. Comcast goes offline for maintenance 3-4 hours at a time at least 4 times a year; Verizon is up during those times unless it's a shared line issue, so no downtime. The other option is a mobile hotspot: I think you can get service and a 2 GB data plan for $20 a month. If the main ISP goes out, plug it into the wall for 4G WiFi, or plug it in over USB for an Ethernet-to-4G connection for the computer. A Jetpack was what I had from Verizon. Depending on your carrier and your smartphone, you may already have tethering or a hotspot on your device. Are you already carrying around internet in your pocket and forgot about it? If the ISP goes down, you don't care; you can use the smartphone to get the computer online.

2. Buy a backup PC with the same drives/ports (I had a mobo with my M.2 drive go down and realized it was my only M.2 mobo, so no drive access).
Maybe do a clone of your PC once a week and have two drives on the backup computer alternate which one gets that weekly backup, so you have a backup machine with two copies of your computer from the past two weeks. If you mess up a file and don't have versioning or a safe backup system in place, you can always boot the backup PC and selectively boot off this week's or the previous week's backup to recover a file that wasn't altered in the past two weeks. (A rough script for the alternating-drive idea is sketched after point 3.)

3. Best Buy. Drive there, buy the parts, take them home, and be up and running. Yes, Newegg has the better prices, but if you have to have it now, today, it comes down to whether I get to work for 8 hours or watch TV and relax... that's a toss-up (I go relax). Pay the extra $40, buy the part that day, and fix it then.
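
On the alternating-backup idea in point 2, here's a minimal sketch of how the rotation could be scripted, assuming a Linux box with rsync installed and both backup drives already mounted; the paths below are placeholders, not anyone's real setup:

import datetime
import subprocess

# Placeholder paths -- both backup drives are assumed to be mounted already.
BACKUP_DRIVES = ["/mnt/backup_a", "/mnt/backup_b"]
SOURCE = "/home/dev/"   # trailing slash: copy the contents of the directory


def weekly_target():
    """Alternate between the two drives based on the ISO week number."""
    week = datetime.date.today().isocalendar()[1]
    return BACKUP_DRIVES[week % 2]


def run_backup():
    target = weekly_target()
    # --archive keeps permissions and timestamps, --delete mirrors removals,
    # so each drive ends up holding a full copy from the week it was last written.
    subprocess.run(["rsync", "--archive", "--delete", SOURCE, target], check=True)
    print("Backed up", SOURCE, "to", target)


if __name__ == "__main__":
    run_backup()

Run it from cron once a week and the target flips between the two drives automatically, giving the two-week rotation described above.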

Re: Status as of Saturday, December 2, 2017

Posted: Sat Dec 02, 2017 3:22 pm
by Helotours
Are you guys really not making enough money at 5% to repair, upgrade and stay operational?

Re: Status as of Saturday, December 2, 2017

Posted: Sat Dec 02, 2017 3:31 pm
by AppleMiner
This was their at-home setup.

I've worked for multi-million-dollar companies in the past. When a server goes down and the only mobo has to be overnighted, with extra paid for faster processing and 3X more expensive than it needs to be so it arrives at 8am on the red-eye... you do it.

And you swap some drives to another system and try to get it limping along to finish out that day if it's a mission-critical system; otherwise, you take it offline until 8am the next day and then put it back up.

Things happen, machines break, and it takes time to get replacements if they're not immediately available.
And it could have been worse... it could have been PCs on the pool that kept miners from connecting, or payout systems that kept payouts from going out. If it was just a programmer's PC at his house and his home's internet connection, that's not a business-side issue.

Ahh, residential crap happens. MOAR BACKUPS!

Re: Status as of Saturday, December 2, 2017

Posted: Sat Dec 02, 2017 3:44 pm
by greenhorn0815
AppleMiner wrote:1. Residential internet: pay the extra $30 a month and get both ISPs in the area, so if one goes down, the other takes over.
I pay $60 for my main internet for my miners, and I pay another $30 for a backup from the other ISP provider (Comcast and Verizon).
Yes, and a load-balancing multi-WAN router that can handle both lines at once. ( http://www.wlanparts.com/peplink/peplin ... -balancer/ )

I've been using it for years with two lines, and my uptime since then has been 100.00%.
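
If you'd rather script the failover than buy a dual-WAN router, here's a minimal sketch of the idea in Python, assuming a Linux machine with the ip tool, root access, and both modems already configured; the gateway addresses are placeholders, not anyone's real setup:

import subprocess
import time

# Placeholder gateway addresses for the primary and backup ISP modems.
PRIMARY_GW = "192.168.1.1"   # e.g. the Comcast modem
BACKUP_GW = "192.168.2.1"    # e.g. the Verizon modem
CHECK_INTERVAL = 30          # seconds between link checks


def link_up(gateway):
    """Return True if the gateway answers pings (a rough check that the link to that modem is alive)."""
    result = subprocess.run(
        ["ping", "-c", "3", "-W", "2", gateway],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def use_gateway(gateway):
    """Point the default route at the given gateway (requires root and the Linux ip command)."""
    subprocess.run(["ip", "route", "replace", "default", "via", gateway], check=True)


def main():
    current = PRIMARY_GW
    use_gateway(current)
    while True:
        if current == PRIMARY_GW and not link_up(PRIMARY_GW):
            print("Primary line down, failing over to backup")
            current = BACKUP_GW
            use_gateway(current)
        elif current == BACKUP_GW and link_up(PRIMARY_GW):
            print("Primary line back, switching back")
            current = PRIMARY_GW
            use_gateway(current)
        time.sleep(CHECK_INTERVAL)


if __name__ == "__main__":
    main()

A dedicated dual-WAN router does the same thing in hardware and adds load balancing, but the script shows the basic idea.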

Re: Status as of Saturday, December 2, 2017

Posted: Sat Dec 02, 2017 3:47 pm
by greenhorn0815
Steve Sokolowski wrote:Good morning! Just a few brief comments today.
  • The infrastructure is killing us. This week alone, we lost several days to a sudden CPU failure on one of the computers we use for development, a residential Internet outage by Comcast, and the configuration and setup of the Atlassian tool suite. We're hoping to have these issues resolved soon, but we still have the bug tracker, Bitbucket source control, and Confluence wiki to set up before we can start getting this stuff under control. It's amazing that the infrastructure issues took more time this week than it took to test the parallel mining server last week. Comments on how to configure this stuff more quickly, or why the physical equipment is breaking so often, are welcome.
I always keep critical systems redundant and mirrored in real time; for lifeline systems, hardware redundancy is sometimes tripled.

Re: Status as of Saturday, December 2, 2017

Posted: Sat Dec 02, 2017 3:48 pm
by Steve Sokolowski
Helotours wrote:Are you guys really not making enough money at 5% to repair, upgrade and stay operational?
Money isn't the issue.

Actually, one of the things you quickly find out when you get some money is that nobody wants to take it. We gave up on Comcast, and they lost a $72,000 contract after their salesman never called us back. We're willing to pay for overnight shipping, but the quickest the vendor can get a processor out is three days. We want to use restaurants to save us time, but none of them will deliver to us because we're too far away, no matter how much we offer them. We've been working with lawyers and still don't have a signed document after five months.

It's astonishing how worthless money is. You need a certain amount of it to live, but after that, nobody seems to be interested in earning it.

The other issue is that you can't predict what will fail next. I never even considered that a processor on a computer would fail. Since we just bought a new processor, it's unlikely that the next failure is going to be a computer, so buying a backup computer isn't the right move. I'm not sure how one gets a list of things that are likely to fail.