I will ask the hosting company to start the DB upgrade process tomorrow morning. This means the DB will go offline on 29-Oct-2015, probably after 1100Z. It will be offline for some hours as the files and DB are transferred to newer servers. On top of this there will be some DNS changes. DNS changes take time to filter across the internet. There is nothing that can be done to speed this up; we all have to wait, including me.
I would have liked to give everyone more notice but we need to get things fixed and rather than wait longer I feel it is better to get on with the changes as soon as possible.
You should expect the DB to be unavailable for probably 4 to 24 hours.
Fingers and toes crossed (that gives me gyp when I walk!)
Fingers and toes crossed…
I hope everything will perform as expected.
Thank you for your dedicated work.
Good luck with the upgrade Andy.
Ed (currently in the UK preparing for two TW summit activations).
The upgrade will start around 1100Z today.
It is entirely possible - fairly easy in fact - to make DNS changes happen quickly when you need them to. But it requires a bit of advance preparation and in general hosting companies can’t be bothered to organise it, so de facto we do indeed have to put up with the unpredictable delay and inconsistencies during the transition.
Good luck with the upgrade!
Thanks Martyn, I did get a pro-forma set of “who needs to do what” instructions one of which was for me to actually do the DNS changes before the upgrade starts to minimise the time lag. But in this case the domain name is managed by the hosts so they are handling that and should have started the DNS changes.
I think Martyn was referring to the common practice of reducing the “time to live” (TTL) setting on the DNS records a couple of days before making the redirect change. That way the time it takes to propagate once you do change the redirect is much shorter. After everything is happy you can increase the TTL again to reduce the DNS server load.
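Purely as an illustration of the practice described above (the domain, host names and IP addresses here are made-up placeholders, not the actual DB setup), the TTL dance looks like this in a BIND-style zone file:

```
; A couple of days BEFORE the move: drop the TTL so resolvers
; re-query frequently instead of caching the old answer for a day.
$TTL 300                         ; 5 minutes, down from e.g. 86400
www    IN    A    192.0.2.10    ; old server

; AFTER the move, point at the new server; the short TTL means
; cached copies of the old record expire within minutes.
www    IN    A    198.51.100.20 ; new server

; Once everything has settled, raise the TTL again to reduce
; the query load on the DNS servers.
$TTL 86400                       ; back to 1 day
```

The key point is that the TTL reduction only helps if it is done at least one old-TTL period before the switch, so that no resolver is still holding the record with the long lifetime.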
Yes that is an option, when the reflector was restored the other day, our instance came back up on a new server and IP address. Jon had the TTL at 1hr so the new server was visible quickly.
Playing with DNS and the under-guts of how the internet works is one of those things I do very infrequently. In a lot of cases that I deal with, domains are provided along with hosting packages meaning someone else sets them up and tweaks them and that limits direct exposure and experience.
Hello Andy, and thanks for the work to everyone involved.
My question is: should we all donate a few bob towards any costs for servers or whatever else is needed to keep things running? I don’t imagine this system is free to run. It’s a bit like the local 2m repeater: a few pay for it or build it and keep it going, everyone uses it.
Ian vk5cz …
The upgrade was delayed a little but should be starting soon.
And there was me thinking it hadn’t worked, as the DB kept kicking me off this morning. Fingers crossed it all goes well.
Further to Ian’s comments - is there a means of donating to SOTA running costs?
There is a donate button on the home page of the shop.
I see it’s all up and running again and it’s not kicking me off, yeahhhhhh.
Just noticed something, and this stemmed from yesterday.
In the top right corner you have the activation history when you have tapped through from the SOTAwatch page and the summit info comes up. This summit activity comes up as:
Total Activations: None yet!
On all of the ones I have looked into, including ones I KNOW I have worked in the past and have history with, it seems none have history now.
Anyone else getting this?
And an even bigger OH:
| Code | Name | Alt (m) | Points | Activations | My Activations | My Chases |
|------|------|---------|--------|-------------|----------------|-----------|
| G/DC-001 | High Willhays | 621 | 4 | 0 | | |
| G/DC-002 | Brown Willy | 420 | 1 | 0 | | |
| G/DC-003 | Kit Hill | 334 | 1 | 0 | | |
| G/DC-004 | Hensbarrow Beacon | 312 | 1 | 0 | | |
| G/DC-005 | Christ Cross | 261 | 1 | 0 | | |
| G/DC-006 | Carnmenellis | 252 | 1 | 0 | | |
| G/DC-007 | Watch Croft | 253 | 1 | 0 | | |
When you go to the regions of summits, according to this none have been activated on any I have looked at this morning, yet this was OK yesterday.
Don’t worry, Karl, it is known about and will get fixed later.
The upgrade went ahead yesterday around lunchtime UK. It seems (and I stress seems) to be OK but only time will tell.
The problems we were seeing, technically loss of session state, can happen occasionally anyway. So now time needs to be spent using and observing the system to make sure that any problems can be explained and are not indicators of bigger problems.
A number of background jobs need to be set up and tested, and that is ongoing.
Any software that directly connects to the DB needs the new address of the DB server, so SOTAwatch will be affected until Jon gets the chance to change the address.
Lots to do over the weekend…
Nice one guys
Good to know you good people are on the case.