Enterprise Network Security... are we doing it right?

Are we approaching Enterprise Network Security correctly? I have firewalls, WAFs, antivirus, etc.  I have deployed NAC at the access layer and use a VPN to connect remotely.  So I'm doing everything correctly, right?  Well, maybe not wrong... but lately I've been feeling it's not right.

I've begun to ask myself whether we are still building and securing networks correctly.  To answer that question I started by asking some other questions...

Q: What do I need to secure?  A: The network.
Q: Why?  A: To protect the company's Intellectual Property (IP).
Q: Where is that IP kept?  A: On the network.
Q: Where on the network?  A: In the data centers, the cloud, and on workstations.

OK, so we really should be securing physical data centers, employee workstations (many of which travel), the cloud, and the network.  Hmm.  This should be fun!  Let's look at each in turn...

Data centers

Yes, they still exist.  How is the data center accessed from the WAN?  Is it separate from the LAN at the site?  Are there firewalls or WAFs between the users outside the data center and the servers inside it?  Do we know which servers and storage devices contain the most valuable IP?  Is the data all mixed together on the servers or separated?  Do we know where our backups are?  And are they immutable?  Is all the data in the data center the same classification (sensitive, IP, public, CUA/CDA, classified, etc.)?  Are the different application tiers separated (front end, mid-tier, database)?

My thoughts

  • Design clear locations for different types of data.  Make it clear which type of data goes on which server.  It has been my experience that if users know a) how to classify the data they have and b) where to store each classification of data, they will do it religiously.  But there need to be clear, easy instructions for users to follow.  Otherwise everything is important, which means nothing is important.
  • Take backups and make them immutable.  When backups are taken they should be made immutable so they cannot be changed.  And in these times, offline backups are also essential to survive a ransomware attack.  I think the 3-2-1 approach makes sense: 3 copies of the data, on 2 different media, with 1 kept offline.  (A minimal sketch of making a backup copy immutable follows this list.)
  • I question whether separating applications by tier makes sense.  I understand the argument that separating the tiers prevents a compromise of the front end from going directly to the database.  However, if the front end is compromised, issuing commands to the mid-tier is still possible.  How many application mid-tiers trust their front ends implicitly? (I've seen a few.)  And if the front end of application A is compromised, how easy is it to move laterally to application B?  Is each application administered by a different team?  Or the same team?  Using the same admin credentials?  I suspect a better (I did not say more secure) way to build these applications is to place each application (front end, mid-tier, database, etc.) into its own walled garden.  That way if application A gets hit, the attacker cannot move to application B.
  • And maybe the most important first step is to place a firewall and WAF between the data center and the rest of the company.  While perimeter firewalls are still important for trying to keep the bad out, once the bad gets in, a data center firewall and WAF provide another layer of defense and alerting.
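
To make the "immutable" part of the backup point concrete, here is a minimal sketch of writing a backup copy with S3 Object Lock in compliance mode, so the object cannot be altered or deleted until its retention date passes.  It assumes backups land in an S3 (or S3-compatible) bucket that was created with Object Lock enabled; the bucket name, object key, local file name, and 90-day retention are placeholders, not recommendations.

```python
# Minimal sketch: store a backup object with S3 Object Lock in COMPLIANCE mode,
# so this copy cannot be changed or deleted until the retention date passes.
# Assumes the bucket was created with Object Lock enabled; all names are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

BUCKET = "example-backup-bucket"          # hypothetical bucket with Object Lock enabled
KEY = "db/2024-06-01/full-backup.tar.gz"  # hypothetical backup object key

with open("full-backup.tar.gz", "rb") as backup:  # placeholder local backup file
    s3.put_object(
        Bucket=BUCKET,
        Key=KEY,
        Body=backup,
        # COMPLIANCE mode: nobody, including the account root, can shorten the retention.
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
    )
```

This only covers the immutable copy; the offline copy in the 3-2-1 scheme still has to live somewhere a stolen credential cannot reach.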

The cloud

We have moved to the cloud... now what?  Where?  Which provider or providers?  How is your data stored?  How is it secured?  Are there processes in place that validate that the permissions on the data are correct?  Who can create places to store the data?  How many admins can access it?  Do customers need to access the data?  How will they access it?  Do internal users access it?  How will they access it?  Do I need firewalls in the cloud?

My thoughts

  • For the right application and workload the cloud is a good place to go.  For companies that cannot afford to deploy large redundant data centers with high-capacity uplinks, the cloud is a good alternative: "rent a data center".  However, just like in real estate, renting is different from owning.  So before you move all your workloads and data into a cloud, know what you will get for your money.  Does the cloud provider back up your data, or are you responsible?  If you decide to leave, is there a data-extraction cost?  What happens during outages (yes, even cloud providers have outages)?  Who at the provider can get access to your data?
  • Encryption at rest?  It is easy now to make sure that your data is always available.  RAID, ZFS, and other technologies make sure that our data is chopped up and duplicated across many physical disks, so no data is lost during disk failures.  However, all this redundancy means the data has been replicated across multiple drives.  How many?  What happens to the bad drives when they are removed?  What are the device-destruction policies of the cloud provider?  Are they followed 100% of the time?  Really?  Depending on your data classification, the policies of the cloud provider may or may not be sufficient.  Instead of relying on everyone to "do the right thing", my preference is to encrypt the data at rest so that it doesn't matter whether the right thing is done or not.
  • Does your data need to be exposed to customers and internal users?  Is it the same data or different data sets?  If possible, a) do not expose any internal user data to customers; keep the two strictly separate, and b) if some data does need to be shared, make sure it is the minimum amount.
  • Make sure that the creation of datasets, databases, file stores, etc. can only be done by a select number of administrators.  If only a limited number of people can create a place to store data, it is easier to maintain control over where the data is.  Have processes and tools in place that continually audit the permissions on those datasets to make sure they are secure.  Emphasis on 'continual': depending on the data, hourly may not be often enough.  And when there is a misconfiguration (and there will be), large klaxons need to go off in the operations center and people need to be woken up.  (A minimal sketch of this kind of audit loop follows this list.)
  • Firewalls might not be necessary, depending on the data and how native cloud security controls are utilized.  But there needs to be some process or device filtering the traffic, and there needs to be some way to inspect it.  So if the cloud tools fall short, deploying a virtual firewall and WAF may be required, just like in the data center.
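
As a sketch of what 'continual' auditing might look like, here is a small loop that checks every S3 bucket for a complete public-access block and default encryption, and raises an alarm on anything that drifts.  The alert() function and the run schedule are placeholders, and the same pattern applies to whichever provider's storage API you actually use.

```python
# Minimal sketch: continually audit storage permissions and raise an alarm on drift.
# Uses S3 as the example; alert() and the scheduling around this script are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def alert(message: str) -> None:
    # Placeholder: wire this to the paging / klaxon system of your choice.
    print(f"ALERT: {message}")


def audit_bucket(bucket: str) -> None:
    # Every public-access setting should be blocked; anything else is drift.
    try:
        block = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        if not all(block.values()):
            alert(f"{bucket}: public access block incomplete: {block}")
    except ClientError:
        # Treat a missing or unreadable configuration as a finding too.
        alert(f"{bucket}: no public access block configured")

    # Default encryption should be present (encrypt at rest regardless of provider policy).
    try:
        s3.get_bucket_encryption(Bucket=bucket)
    except ClientError:
        alert(f"{bucket}: no default encryption configured")


for b in s3.list_buckets()["Buckets"]:
    audit_bucket(b["Name"])
```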

Employee Workstations

Do employees take home work computers?  I bet during COVID they did.  Are the hard drives encrypted?  Can employees download and save files locally?  What applications do employees use at home?  Can they be moved to the web?  VDI?  What OS do the employees use?  Does it have to be that OS?  What applications are used?  Are they installed directly on the systems?  Are there SaaS alternatives?  Where is the data of these applications stored?  Is it easier to manage the cloud provider's storage than a fleet of workstation drives?

My thoughts

  • If any thick application clients are used, then drive encryption of employee systems is a must.  Even if users work entirely in cloud-native applications or VDI, locally saved screen captures are still a possibility, so drive encryption might still serve a purpose.  (A minimal local check for drive encryption is sketched after this list.)
  • Now for something really controversial... at least on its surface.  Let's assume that all the employee's work could be performed from a SaaS solution and/or VDI.  If we make that assumption, does the workstation/laptop need to run MS Windows?  Do you need to spend all that time and energy constantly patching the PC and making sure the virus scanner signatures are up to date?  Think about it.  How much money can be saved, not only in technician time but in yearly maintenance costs for Windows?  What else is there?  macOS?  Sure, but there are also many flavors of Linux and BSD.  And with SaaS applications like O365, Google Workspace, etc., the use cases that need the heavy business applications are becoming fewer and fewer.  There will still be a need for technicians to patch and maintain OS images, but if the exposure can be limited by running more secure operating systems, will there be a cost savings?  What about risk reduction?  It depends on the specific situation, but my guess is there is.
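
On the drive-encryption point, here is a minimal local check for a Linux workstation that reports whether any dm-crypt (LUKS) volume is present.  A real fleet tool would collect this per machine and feed it into inventory; treat this as an illustration of the check, not a finished compliance agent.

```python
# Minimal sketch: report whether any block device on a Linux workstation is a
# dm-crypt (LUKS) mapping, using standard lsblk JSON output.
import json
import subprocess


def encrypted_devices() -> list[str]:
    out = subprocess.run(
        ["lsblk", "--json", "-o", "NAME,TYPE"],
        capture_output=True, text=True, check=True,
    ).stdout
    found: list[str] = []

    def walk(devices: list[dict]) -> None:
        for dev in devices:
            if dev.get("type") == "crypt":
                found.append(dev["name"])
            walk(dev.get("children", []))

    walk(json.loads(out)["blockdevices"])
    return found


if __name__ == "__main__":
    crypts = encrypted_devices()
    print("encrypted volumes:", crypts or "none found -- drive encryption missing?")
```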

Network

Now let's tie all the data centers, clouds, and workstations together.  With all those systems secure, do we really need to worry about the network?  Do we need an "inside" and an "outside" network?  Or do we treat everything like it's the internet?

Traditionally, enterprise networks have been constructed like an M&M: hard candy shell with a soft, gooey inside.  Once you are on the inside you can go anywhere.  This has worked fine for many years, and it still works well when a company is small and it is easy to keep the outside out.  But as an enterprise embraces the cloud and SaaS, the need to allow outside companies into the gooey center will only increase.

My thoughts

  • Mixing outside and inside still doesn't make sense.  Even if I can only keep 50% of the known bad things out, at least I keep 50% of the known bad things off the inside network.  There still needs to be a separation of outside and inside networks.
  • Within the inside network, there need to be different segments.  There are systems on most networks that will never need to talk to any workstation and never need to talk to the internet; think industrial and SCADA devices.  These devices should be on an isolated segment where any and all access is controlled.  There will be other types of devices that need to be on different segments.  But be careful: it is tempting to say "segment everything".  Without serious automation, managing even a handful of segments becomes challenging.  And don't forget that to cross segments, traffic must flow through firewalls.  What happens when the routes through the firewalls become asymmetric?  Routing doesn't care... firewalls do.
  • VPNs are still necessary to communicate securely over untrusted networks.  However, VPN access should not automatically mean access to everything, or even that the device is fully trusted.  Consideration should be given to the applications and data, and to whether that data should be allowed outside the confines of the building.
  • If a device cannot connect to the network, it cannot harm things.  But short of not allowing any devices on the network (the business doesn't like that), there needs to be a way to allow and classify the devices that do connect.  Some of you are screaming NAC.  But traditional NAC doesn't fit: a model where a device is simply allowed or not doesn't work in a real network.  There needs to be a way to let devices connect even when they are unknown or their posture is not perfect.  It needs to be easy for anyone to add a device.  The only requirements should be that the device has 1) an owner and 2) a way to identify it for further classification.  If adding devices is easy, it is less likely the engineers of the company will work very hard at trying to "get around" the network security.  They need to be part of the solution.  (Yeah, I know... easier said than done.)  Easy access onto the network doesn't mean easy access to everything.  If a device cannot be classified or profiled, it should have limited access to data.  So there needs to be a way for the NAC system to communicate posture levels to the data center and cloud firewalls.  (A toy sketch of that posture-to-policy mapping follows this list.)
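
To illustrate the last point, here is a toy sketch of mapping NAC classification results to coarse firewall tags, so that getting on the network never automatically means getting to the data.  The posture labels, tag names, and push_to_firewall() integration are hypothetical; real NAC and firewall products each have their own APIs for this hand-off (Cisco pxGrid, vendor REST APIs, even syslog-driven automation).

```python
# Minimal sketch: translate NAC posture/classification into coarse firewall access tags.
# Posture labels, tag names, and push_to_firewall() are hypothetical placeholders.
from dataclasses import dataclass
from typing import Optional

# Access tiers the data center / cloud firewalls understand (hypothetical names).
POSTURE_TO_TAG = {
    "managed-compliant": "corp-full",        # known device, healthy posture
    "managed-noncompliant": "corp-limited",  # known device, failed a posture check
    "registered-unprofiled": "quarantine",   # has an owner, not yet classified
    "unknown": "guest-internet-only",        # no owner, no identity
}


@dataclass
class Device:
    mac: str
    owner: Optional[str]
    posture: str


def firewall_tag(device: Device) -> str:
    # No owner means the device never gets past guest access, whatever its posture claims.
    if device.owner is None:
        return POSTURE_TO_TAG["unknown"]
    return POSTURE_TO_TAG.get(device.posture, POSTURE_TO_TAG["registered-unprofiled"])


def push_to_firewall(device: Device, tag: str) -> None:
    # Placeholder for the NAC -> firewall integration call.
    print(f"{device.mac} -> {tag}")


for d in [
    Device("aa:bb:cc:dd:ee:01", "jsmith", "managed-compliant"),
    Device("aa:bb:cc:dd:ee:02", "lab-team", "registered-unprofiled"),
    Device("aa:bb:cc:dd:ee:03", None, "managed-compliant"),
]:
    push_to_firewall(d, firewall_tag(d))
```

The point of the sketch is the shape of the policy, not the code: an owner is mandatory, an unprofiled device lands in quarantine by default, and the firewalls enforce the tier rather than the switch port.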

Wrapping Up

So where does this leave us?  While the M&M model of security isn't holding up, maybe an M&M filled with M&Ms will do the trick for a while.

What do I have right?  Wrong?  What did I miss?