Cellular Routers are now installed and configured at each end. Next up: configuring the over-the-top tunnel.
Once that’s built, we’ll have emergency access to our data centre operations even with multiple fibre faults.
The new HPE switches are now live with link aggregation. Many thanks for your patience.
Work to finalise the internal back-end network upgrades will conclude this week.
Stacking and testing of the new HPE switches were successful this afternoon.
Tonight and tomorrow we’ll finalise the link-aggregated switching upgrade. Thanks for your patience.
From there, we’ll close this advisory and finish the remaining items as a “slow burn”. See the revised advisory for details. :-)
Due to our mounting positions for the new HPE switches, we have sourced reverse-airflow fan trays for them.
These will be swapped into service on Monday, and we’ll aim to finalise the stacked switching upgrade.
From there, the remainder can be finalised remotely. We have been working on all items behind the scenes. :-)
Thanks for your patience as we progress through major works to improve your network!
6 out of 7 major items completed.
The testing machine has had optical fibre capabilities added, giving us extra pre-production testing options.
We’re continuing work on the new HPE replacement switches, with the previously planned Dell switches removed.
Thanks for your patience as we progress through the final parts of these major improvement works. :-)
Following a wealth of work on the new Dell 10/40G optical fibre switches, we’ve decided to stop work on them. Instead, we’ll replace our existing HPE 1/10G copper/fibre switches with HPE 10/40G optical fibre switches.
We had made the move to go more “all of Dell”, and this was a good test that showed us not to use Dell for networking! Their Network Operating System proved to be an over-engineered nightmare of a NOS. :-)
HPE hardware is en route and we’ll return over the coming weeks to implement the switching upgrade.
Major upgrades to the OS on our Core Routers were successful! There was no impact to service. :-)
This introduces new functionality and better load management across our Core Routers.
We’re starting the major OS upgrade on Core Router #2.
If it’s successful, we’ll also upgrade Router #1.
Each router needs 2x reboots.
Testing of WAN Outage re-routing was successful! VRRP & iBGP re-routed traffic properly.
We’re done for today; the total impact was around 2 minutes. :-)
Please let us know if you’re seeing any issues!
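For context on how an interruption like that two-minute window can be measured from the outside, here’s a minimal sketch; the target address is a hypothetical placeholder rather than our production monitoring, and it assumes a Linux-style ping.

```python
#!/usr/bin/env python3
"""Rough outage timer: pings a target roughly once per second and reports the
longest gap between successful replies. Illustrative only - the target address
is a hypothetical placeholder, and the ping flags assume Linux."""
import subprocess
import time

TARGET = "192.0.2.1"   # hypothetical service/gateway address (TEST-NET-1)
DURATION = 600         # watch for 10 minutes

last_ok = None
worst_gap = 0.0
start = time.time()

while time.time() - start < DURATION:
    ok = subprocess.run(
        ["ping", "-c", "1", "-W", "1", TARGET],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ).returncode == 0
    now = time.time()
    if ok:
        if last_ok is not None:
            worst_gap = max(worst_gap, now - last_ok)
        last_ok = now
    time.sleep(1)

print(f"Longest gap between successful pings: {worst_gap:.1f} seconds")
```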
Optical fibre pull testing is commencing now.
Expect brief interruptions until around 11am Sydney time. Thanks for your patience.
We have a technician arriving on-site this morning, who will be conducting “optical fibre pull testing”.
The goal is to verify whether or not the upgraded configurations will behave correctly in a failure scenario.
Impact should be limited to 5-10 minutes per router. If the Router 1 test fails, we will not proceed with Router 2.
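One simple outside-in check that can accompany a test like this is to snapshot the forwarding path to a far-end point before and during the pull, so the two can be compared. A minimal sketch follows; the target address is a hypothetical placeholder, and it assumes the Linux traceroute utility is installed.

```python
#!/usr/bin/env python3
"""Snapshot the forwarding path to a far-end target so the route taken before
and during a failure test can be compared. Illustrative only - the target
address is a hypothetical placeholder; assumes Linux 'traceroute'."""
import subprocess
from datetime import datetime

TARGET = "198.51.100.1"  # hypothetical far-end address (TEST-NET-2)

# Numeric output (-n) and a short per-hop wait (-w 2) keep the snapshot quick.
trace = subprocess.run(
    ["traceroute", "-n", "-w", "2", TARGET],
    capture_output=True,
    text=True,
).stdout

print(f"--- path to {TARGET} at {datetime.now().isoformat(timespec='seconds')} ---")
print(trace)
```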
The pre-requisite changes have been successfully completed on both routers!
We’ll be testing the failover mechanism for upstream link failures this Wednesday the 30th.
From there, we’ll be working to close out the remaining works within the Sydney network.
Today we’ll be adjusting more network config to keep next week’s testing on-schedule.
It’s expected to knock connectivity offline for a short while as ARP entries repopulate.
Internal routing peer configurations have been amended, and physical tests will be run Wed 30th October.
We’ve completed enough verification to be happy that the network should recover after 5 minutes.
Once the config upgrades have been verified to work properly, we’ll update this advisory. :-)
We’re applying some changes to internal routing filters, and expect no-to-minimal impact.
Internal BGP configuration has been successfully deployed into the core network.
Thanks for your patience as we improve your experience!
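For the technically curious, a check along the lines below is one way to confirm that iBGP sessions are established after a change like this. It’s a minimal sketch only: the management address, credentials, device type and CLI command are hypothetical placeholders (this advisory doesn’t name the router platform), and it relies on the third-party netmiko package.

```python
#!/usr/bin/env python3
"""Sanity-check that iBGP sessions are up after a configuration push.
Illustrative sketch only: host, credentials, device type and CLI syntax are
hypothetical placeholders and depend on the router platform in use.
Requires the third-party 'netmiko' package."""
from netmiko import ConnectHandler

core_router = {
    "device_type": "cisco_ios",  # placeholder; use the netmiko type for the real platform
    "host": "192.0.2.10",        # hypothetical management address
    "username": "netops",        # placeholder credentials
    "password": "example-only",
}

with ConnectHandler(**core_router) as conn:
    # On IOS-style CLIs this lists each BGP neighbour and its session state;
    # an established session shows a prefix count rather than Idle/Active.
    summary = conn.send_command("show ip bgp summary")
    print(summary)
```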
The major config works (VRRP, etc.) have been performed, though failover testing was unsuccessful.
Works will continue to improve internal routing logic. Then we’ll verify through another real-world test.
Preparation work on the routers is complete, and the major works are now commencing.
We’re starting the Network Engineering work around 3pm. Impact to Radio services will come last.
M-VPS: Please note, impact to Managed Servers will come first. The secondary router will have its config upgraded and tested. Impact is expected to be around 20-30 minutes.
RADIO: From there, all going well, we’ll progress the changes to the primary router, which will take down Internet Radio connectivity for a short while. This could also be around 20-30 minutes.
All times listed are approximate. We’ll update this listing once the work on Router 2 is complete.
Major network engineering works are scheduled for tomorrow (Sunday 20th of October) and there is impact expected to all services as a result.
The outcome of these works will be a more resilient core network in Sydney, allowing easier remote maintenance of the network and better serving our clients 24/7/365.
Engineers are back on-site to continue works.
Thanks for your patience as we improve your experience.
Questions or concerns? Please don’t hesitate to reach out to our crew.
Our engineers needed to take a break after a full long weekend of work. :-)
The final changes are scheduled to take place remotely this week.
From there, we’ll re-attend site this weekend for tidy-up.
Final testing of planned network interface failover automations is underway.
We’re on track to finish cutting over to the new switches & configs tonight - this is part-way done.
Many thanks for your patience this long weekend as we improve your service!
Network Engineering works continue. Thanks for your patience.
Interruptions to service during this Public Holiday should be brief.
Questions or concerns? Reach out to our crew for more information.
Cable Management work has now been completed within the rack-space.
Final item for this long weekend is the Network Engineering work.
That’ll continue through the night. Thanks for your patience!
All relocation works within the rack space have been completed.
Cable management & network engineering will take place over the weekend.
Many thanks for your patience as we continue improving your service with us.
Internet Radio clients are about to go offline, so we can perform some shuffling within the rack space.
We’ll also be adding more cable management, improving the raceways that branch cables out to the gear.
Impact should be fairly brief, and any further impact should be minimal until work resumes tomorrow.
The 40G Dell Switches have been configured and stacked, and are now awaiting testing.
We’re now evaluating how much can be done from here without the fibre parts missing from the delivery.
The supplier has confirmed they will re-send on Monday. Thank you for your patience!
Hi there,
Following successful preparation works as well as research to determine the best ways forward with our Core Network in Sydney, we’ve now scheduled major optimisations and upgrades to be carried out.
This will involve some major works within AS138521:
While we’ll approach the works in a staggered manner, there will be impact to services due to the major nature of the network re-engineering. Completing these works will be great news for the resilience of your services moving forward, so we’re looking forward to getting them done!
Following the major works, we’ll remotely finalise some internal routing/security upgrades:
If you have any queries or concerns, please don’t hesitate to get in touch.
Cheers,
Merlot Digital
Network: AS138521