The weird gateway day … this time it wasn’t DNS

This isn’t a story where at the end I will say “yes, it was DNS.”

Two days ago, I started noticing sporadic failures from servers at one client’s office: they couldn’t be pinged for a few seconds, then operated normally for five to ten minutes. One might start by checking the switch and physically inspecting the servers and cables, which I did. What made the issue stand out was that it wasn’t just one server; it was multiple servers on multiple switches.

The investigation jumped to the routers. Nope, they were fine.

At this point, desktops began to lose connectivity to the affected servers. Not all servers, and not even all servers on the same switch! From the affected servers I found I could ping some machines only sporadically, yet ping the working servers constantly.

Magically, it all stopped (well, started working again, the problem stopped) at five o’clock on the dot. So, this was user related, not the servers, not the switches and not the routers.

I decided to set up a sting for the following day. I started watching users as they logged into the domain while running non-stop pings on the servers when suddenly, it began again! I raced across the office to the offending user’s office to find, of all things, an ancient HP printer sitting on the desk. The culprit had been discovered.

It seems the HP printer’s network card had thrown a wrench into the works. It was auto-assigning the gateway address as its own local IP address, complete with no netmask and no gateway address (how could it have one, it was its own gateway). By statically assigning an IP to the printer pool, all went back to working well.

The end result: an HP printer had assigned itself the IP address of the network’s default gateway. Servers picked this up from ARP updates and began sporadically sending packets destined for the routers to the printer instead.

I’m not sure how to prevent this from happening again other than to constantly monitor the MAC of the router and alert if the perceived gateway MAC suddenly changes.

It certainly was an entertaining hunt.

Automatic login with the Ubiquiti NVR (tested on 3.7.3)

I was looking for a way for my Raspberry Pi to automatically log into my Ubiquiti NVR and, having found a few resources online all dealing with VB script launchers, figured there had to be an easier way. I was wrong. But here is what I came up with.

I’m going to assume you know how to use SSH, nano (or vi or your favorite editor) and can search/type in a compressed javascript file.

SSH into the NVR and head on over to the /usr/lib/unifi-video/webapps/ROOT/static/js/(version.js) file – this will be a long-named javascript file for the latest build. Note – the filename changes with each patch, so get used to editing it until Ubiquiti figures out this is a needed feature. In my case, the file is called uv3.833f7b634e8027fc5fcb19f3b27e440690db4b43.min.js

Search for the /uv3/views/LoginView controller and edit the data to change the lines from:

username: this.$username.val(), password: this.$password.val()

to:

username: "(your username)", password: "(your password)"

This will automatically provide a username/password to the controller on form submission. Now, to get the controller to automatically log you in, look for the nearby function called didInsertElement and edit it from:

this.$().find("input")[0] && this.$().find("input")[0].focus()

to:

this.$().find("input")[0] && this.$().find("input")[0].focus();this.doLogin();

Now when you browse to the login screen it will automatically log you in without the need for a VB script.

Naturally, it will always log in with the given username/password, so before performing any admin functions you will need to swap the changes back out – and any patching will undo all of these changes.

Round three of my edit will add the ability to provide a different username/password based on the calling IP address, so from my administrator machine or VPN tunnel I can access the admin features while reception automatically logs in with a standard user account.
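The round-three idea could look something like this. To be clear, this is only a sketch: the client IP would have to be made available to the login view somehow (a hypothetical `clientIp` value here), and every address and account in the map is a placeholder.

```javascript
// Sketch of per-IP credential selection for the auto-login hack.
// All IPs, usernames and passwords below are placeholders; `clientIp`
// would need to be supplied by the server (hypothetical).
var CREDENTIALS_BY_IP = {
  '10.0.0.5':  { username: 'admin',     password: '(admin password)' },
  '10.0.0.50': { username: 'reception', password: '(reception password)' }
};
var DEFAULT_CREDENTIALS = { username: 'viewer', password: '(viewer password)' };

// Return the credentials for a caller, falling back to a standard account.
function credentialsFor(clientIp) {
  return CREDENTIALS_BY_IP[clientIp] || DEFAULT_CREDENTIALS;
}
```

Inside the LoginView edit, the hard-coded username/password strings would then become `credentialsFor(clientIp).username` and `credentialsFor(clientIp).password`.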

P.S. The logout button will now also do nothing unless you redirect it elsewhere.

My thoughts on server shares

Over the years I’ve installed, configured and set up hundreds of servers. I don’t get stuck naming them and I don’t get stuck securing them. I don’t really get stuck setting up file shares either, but that is where a lot of my time goes: explaining my decisions.

Some administrators and managers are of the opinion that they should set up a single file share (\\share), give everyone in the company access to it and let them go to town. This is quite possibly the worst thing you can do to your network. Over time (and a short time at that) your single file share will devolve into the wild, wild west. Your backups will include twenty to thirty empty new folders, several hundred or thousand tilde temp files and a file structure resembling the plotline of the movie Primer.

You should always split your file sharing structure into whatever works best for your users. Notice I said your users, not your network administrators. Either by department, division, operations or teams, whatever makes sense.

Security management of a file sharing scheme with one share can become a nightmare. You will, over time, find yourself removing inheritance and applying folder-specific permissions to individual users instead of maintaining security through the use of AD groups. Segmented sharing gives you finer-grained control over the users who need access to that data. A single share gives you nothing but a headache.

Part of the security control on multiple shares is the ability to designate access via GPO mapping to shared resources. If you are using “logon.bat” in 2017, it’s time to open your browser, search for “GPO disk shares” and dig into some reading with a nice cup of tea.

Segmenting will also help mitigate malware outbreaks if a ransomware application gets loose in your network. With a network segmented by department (and security locked down), any outbreak will be limited to the areas of the business the user has access to.

On the subject of segmentation, I have some clients who absolutely insist on being able to “see” the root level of all system shares on their network. I will always take time to explain why this is a terrible idea and, if I am unable to dissuade them, will demand my clients sign a separate addendum to our agreement absolving me of damage should malware hit network-wide due to the actions of the managers or owners who insist on this level of access.
With that in mind, never give any one account direct access to all network shared data. Operationally, you don’t even need the administrator account to update or manage shares. You can create domain accounts whose sole purpose is the management of network shares.

Segment your shares, secure them with groups, share them with GPO mapping and enable VSS for instant recovery of files. Spend the time now to set your shares up and you will thank me (and many others) later.

I really don’t get Verizon FIOS sometimes…

So here I am, once again, looking to upgrade FIOS service for my client and the one thing holding me back is Verizon itself. After a series of calls where I was simultaneously told static IP addresses would not change, would change and would not change, a technician who actually understands these things confirmed that yes, all 13 IP addresses would in fact change with the upgrade from the BPON ONT to the GPON ONT.

So once again, I’m on hold scratching my head as to why Verizon a) can’t change the name on an account without first deleting it (losing all IP assignments) and b) can’t seem to upgrade an ONT without throwing away the IP addresses associated with it and providing new ones.

One would think with the technology at our fingertips today, Verizon would have figured this one out.

Phishing these days…

These days it’s not a matter of if you will get phished through email but a matter of when.  This is not the normal doom and gloom reminder but a simple request.  Slow down, just a bit.  When that email arrives asking you to transfer $60,000 to an account in Belize for a summer home purchase, sit back, sip some water and ask yourself, why is the CEO of the company asking me, the marketing manager to setup a wire transfer?

This is what I see almost every day.  Emails beginning with “My Dearest” or “So Kindly” (with many variations on spelling) litter the phishing world to the point that any email starting like that should just be deleted to begin with.

There are simple steps to take to eliminate being tricked (phished) through email.

  1. Does the return address make sense or even belong to you?
    1. Anyone can fake a sending address, but the reply address is where they want to get you. You may think the email is from a trusted address, but on replying you will see a different reply-to address or some other variant. If the reply-to domain is different from yours, or the reply goes to an address you’ve never used, delete the message and pick up the phone.
  2. Does the email have links to click?
    1. Almost all email programs can show HTML, which means I can write a link whose visible text says “Good Link” but whose underlying address points somewhere bogus – you will see “Good Link” in the email but be directed to the bogus link when you click. How can you avoid that?  Put your mouse over the link and wait a second.  Email programs will show you the real destination under the HTML link, so when you see “Click Here to access your account” and the mouse-over shows a completely unrelated destination, that’s a good indication that clicking on it is a bad idea.
  3. Make a phone call.
    1. This one is staggeringly simple yet, it’s number three in the list. Every case of phishing I have investigated would have been prevented if you had just picked up the phone and confirmed.  This should be number one in your list so in your mind, move it up to the top spot.
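Check #1 is mechanical enough that a mail filter can do it for you. Here is a minimal sketch of that domain comparison in javascript; the addresses in the comments and tests are illustrative only, and a real filter would also need to handle display names and multiple recipients.

```javascript
// Sketch of check #1: flag a message whose Reply-To domain differs
// from the From domain. Example addresses are illustrative only.
function domainOf(address) {
  var at = address.lastIndexOf('@');
  return at === -1 ? null : address.slice(at + 1).toLowerCase();
}

function suspiciousReplyTo(fromAddress, replyToAddress) {
  if (!replyToAddress) return false; // no Reply-To header at all
  return domainOf(fromAddress) !== domainOf(replyToAddress);
}
```

A message claiming to be from your own domain but replying to a look-alike domain (one character swapped is the classic move) would trip this check immediately.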

So that’s it in a nutshell.  Check the return domain, mouse over any links and pick up the phone, and phishing will be a thing of the past.

To the developers of Visual Studio, common sense time…

While I don’t usually rant (yeah right) about developers on my blog, I feel I have to on this occasion. Since yesterday, Visual Studio has locked up while trying to load a project that had, until then, loaded just fine. It was stalling at project 10/12 and initially drove me a little nuts thinking I had a corrupted extension, GhostDoc had gone nuts or node.js/Git had gone off the deep end.

Now, I know what you’re saying right now (let’s face it, nobody reads this thing so I can say that): why didn’t you just look at the .SUO file and see what project 10 was and why it was failing to load? Well, I did. Turns out project 10/12 was a node.js link to a server that had been shut down on my vmware server by flying spider monkeys.

All good, right? Wrong. My rant is that Visual Studio in all of its glory simply stalled on 10/12, eventually faded to white and crashed. No error, no warning, nothing like “Hey, your project is pointing to a server that can’t be detected anymore…” which is what, these days, we should expect our software to be able to do.

I removed the entry from the .SUO file and now the project loads just fine.

Me: 1 – Visual Studio: 0

Calling a web service (.net) from node.js

Took me a little experimentation (given the number of bad examples posted online) to figure this one out, but I needed to authenticate a user’s token through the listener running in node (5.0.0).

I set up the token.js file with the following:

 var app = require('express')(),
     http = require('http').Server(app),
     io = require('socket.io')(http),
     tokens = [];

Inside the connection handler I created a listener for the ‘register’ message that calls the .NET web service to verify the authentication token. This code will be shifted to use the tedious framework in the near future. I included lodash to quickly check whether the token is already in the local array and, if not, add it.

  socket.on('register', function (token) {
    var request = require('request');
    request({
      url: 'http://xxx/authentication.asmx/VerifyToken',
      method: 'POST',
      headers: { 'Content-Type': 'application/json; charset=utf-8' },
      body: JSON.stringify({ token: token })
    }, function (err, resp, msg) {
      var body = JSON.parse(resp.body), t = JSON.parse(body.d);
      if (1 == t.flag) {
        var _ = require('lodash'), l = _.findIndex(tokens, { token: token });
        if (l === -1) { tokens.push({ token: token, id: socket.id }); }
        socket.broadcast.emit('token', true);
      } else {
        console.log('Invalid Token/Socket ID ' + socket.id + ' Token ' + token.substring(0, 15));
        socket.broadcast.emit('token', false);
      }
    });
  }); // register

Client side, once logged in, it’s as simple as passing:

 socket.on('connect', function () { socket.emit('register', user.token()); });

And providing a callback listener that will check for ‘token’ coming back with a true/false. This way the administrator can void the authentication tickets server side and the client will automatically be notified their tokens are no longer valid and sent back to the login screen.

Testing Backups…

When the call comes in that there was an explosion in the building, that is not the time to ask “is all of our data backed up?” This just happened, and it reinforces my methodology of testing backups on a routine basis.

If you are backing up your data but not testing the backups by restoring data, volumes or virtual machines, then no, you cannot say you have a working backup. It’s the same as having batteries in the flashlight only to find a pool of acid in the handle when you open it after the lights have gone out from the batteries leaking.

Depending on your volume, you should restore from multiple points at least once a month to test your backups. If your backup system supports multiple delta points, pick the oldest you can find and restore those.

  • Restore entire virtual machines into your lab and fire them up to make sure they work.
  • Restore Exchange databases to your lab and make sure Exchange Server can mount them.
  • Restore SQL databases to make sure the account credentials were also stored so you don’t have to reassign logins.

In short, backup, backup, backup and then restore or don’t be surprised if you get a pool of acid in the flashlight handle when the lights go out.