Manx Technology Group (MTG) has launched “Umbrella”, a Cloud-based secure internet gateway that provides visibility and protection against internet threats. The secure internet gateway (SIG), powered by Cisco, secures internet access from the corporate network and for staff working remotely. The service is ideal for businesses of every size, capable of scaling from 1 to 10,000 users – and with the basic service, there is no need for expensive new hardware or software to be installed.
Industry 4.0 is the latest trend in manufacturing that encompasses automation, instrumentation and data. Many describe the concept of Industry 4.0 and the Smart Factory as the beginning of the fourth industrial revolution. Read more
A survey of over 2,000 security professionals has found only 42% of organisations have policies in place that restrict or monitor the use of unsanctioned cloud applications. This is despite the fact that 53% of respondents said unauthorised apps are their biggest cloud security threat. The survey, undertaken by BitGlass Inc., looked at the evolution of cloud security. Read more
More and more companies are being impacted by ransomware. Not a week goes by without cyber-attacks, malware and ransomware featuring in the news. Unfortunately, this trend does not appear to be slowing down. There are a number of steps an organisation can take to reduce the risk of infection and to defend against similar threats.
What is Ransomware?
Ransomware is an advanced piece of malware (malicious software) that, once it has infected a system, seeks to encrypt or otherwise render useless data files, office documents, images and other important files. For a business to regain access to (and use of!) these files, they are required to pay a ransom (typically in Bitcoin). The concept of ransomware is not new, and there have been incidents as far back as 1989, but the events of 1989 are a million miles away from what we see today. The internet potentially makes every user a target; malware is far more sophisticated and difficult to detect; and organisations are far more digitally enabled and connected. You also need to remember that ransomware is a lucrative business and the internet knows no bounds – so there is an added incentive for ransomware authors (rather than just kudos). Read more
There are several ways to address the risks that emerge from a risk assessment: you can avoid the risk entirely (withdraw), reduce or mitigate the risk, transfer the risk (e.g. insurance) or accept the risk.
If you consider a datacentre operator who houses a large amount of IT equipment, fire is a risk, whether that is due to an electrical malfunction or a fault with customer equipment. Read more
Shadow IT is a term often used to describe a situation where IT systems – applications, software, devices, cloud services and similar solutions – are used by an enterprise without the knowledge or approval of the business. This can present a security and operational risk to the business given it bypasses the organisation's standards, processes and procedures. This could include the enterprise's configuration, licensing setup, security controls, documentation and implementation standards. More serious scenarios could jeopardise an organisation's ability to comply with certain industry regulations such as data protection, PCI or even SOX. Imagine a serious breach relating to software even the IT department wasn't aware of!
Security incidents are not new. Data theft, DDOS attacks and website defacements have been commonplace for many years. The thing that stands out with the recent spate of attacks is the amount of time it can take an enterprise to realise they have been compromised. The attackers may have been inside for some time.
The recent UCLA breach is a prime example. They suspected something as early as October, but the FBI only identified the breach in May. Quite some time, and a reflection of how good the malware is at remaining undetected. I do not doubt UCLA had firewalls and antivirus and followed best practice; you have to assume they did.
Many commentators are quick to criticise the IT team, the lack of security investment or they blame human-error. There is no denying the fact that human error and poorly developed software are common causes, but not always.
Advanced Persistent Threats
Nowadays, there is a new class of threat, the Advanced Persistent Threat (APT).
APTs are forcing many enterprises and organisations to rethink their security strategy and revisit their approach to identifying and safeguarding against threats.
I am not going to explain what an APT is – definitions vary by vendor – but in short, it is a new type of advanced threat that can go unnoticed, bypassing existing security controls and often moving throughout an organisation's internal systems. Some describe it as custom malware. Vendors are quick to develop APT-beating solutions, analysts have a new market segment to discuss and businesses have something new to worry about!
The purpose of this brief article is to outline some of the technologies available to help safeguard your business against APTs.
(I am assuming your systems are already patched, hardened and you have a robust perimeter security policy – that is common sense.)
1) Control lateral movement with an Internal Segmentation Firewall
In the network world, your LAN-to-internet traffic can be described as North-to-South. The traffic flow between your users and servers is referred to as East-to-West. Traffic that moves East-to-West is also known as lateral movement.
Once you are infected by modern malware or an APT, it (or they!) will attempt to move laterally throughout your IT and network environment. Using network enumeration, privilege elevation and further exploitation, they will try and compromise other systems or hone-in on higher value targets. This can be automated or controlled externally by an individual with malicious intent. The end-game could be ransomware or data theft.
A common approach to prevent this lateral movement is to break up your network into zones or segments.
Think of your network as a big circle. Your network is on the inside, and the perimeter of the circle is your firewall. Outside of that firewall is the internet. Once someone is inside, they are free to move around your business. It is very similar to castle walls.
Segmentation takes a different approach. Instead of a single circle or "wall", your network still has that perimeter wall, but it is also made up of several internal zones. Think of a honeycomb structure within the circle, where each department is a zone.
Traffic passing between these zones is subject to a network security policy, traffic flows are limited and scanned for malicious content or anomalous behaviour. A breach in one zone can (hopefully) be contained to that zone.
Most firewall vendors such as Fortinet, Palo Alto and Cisco have sold solutions like this for some time. It is only recently that the terms such as internal segmentation firewall and internal network firewall have grown in popularity. SANS has a paper about internal firewalls dating back to 2001, so it’s certainly not new!
In the absence of a firewall, most modern switching platforms also support some form of IP access list or network policies that can be applied to zones (typically L3 VLANs or SVIs). These can be used to inhibit or control lateral movements. They don’t solve the problem but they can make things harder. That is the name of the game.
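To make the zone idea concrete, the policy applied between segments can be thought of as a default-deny lookup table. The zone names and rules below are hypothetical, sketched in Python purely to illustrate the logic an internal segmentation firewall or inter-VLAN ACL enforces:

```python
# Illustrative sketch: a minimal inter-zone policy check, mirroring how an
# ACL between L3 VLANs restricts East-to-West (lateral) traffic.
# The zone names and permitted flows are invented examples, not a real topology.

ALLOWED_FLOWS = {
    # (source zone, destination zone): set of permitted TCP ports
    ("finance", "servers"): {443},          # finance may reach servers over HTTPS
    ("engineering", "servers"): {22, 443},  # engineering may also use SSH
}

def is_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    """Return True if the zone policy permits this flow; default-deny otherwise."""
    if src_zone == dst_zone:
        return True  # intra-zone traffic is not filtered in this sketch
    return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())
```

Note the default-deny stance: a flow between zones is blocked unless explicitly permitted, which is what contains a compromised host to its own segment.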
The network segmentation approach is relatively inexpensive, but for high-end environments it may not scale.
For your typical enterprise or large organisation; 1GE, 10GE and 40GE solutions are available. If you are trying to secure 100Gbps of traffic between blade chassis or Hadoop clusters, then things can and will get out of hand.
Ultimately the cost will depend on your topology (e.g. the number of zones) and the volume of traffic.
Virtualisation presents another challenge. East-to-West traffic can physically move around your network (between devices), but with VMware and the like, this lateral movement takes place within the virtual environment. Fortunately, many vendors (including VMware) have virtual product equivalents and, to some extent, these are easier (and cheaper) to implement than dedicated appliances. If you run VMware or Citrix, you cannot overlook the virtual network.
2) Identify threats with DNS Intelligence and behavioural analysis
Every time one of your internal systems or servers wishes to access the internet, the DNS protocol will be used somewhere to resolve the domain name of the website or mail server they are trying to reach. When malware (or a hacker) tries to phone home, they too may use DNS to connect to their command and control (C2) servers.
A growing number of DNS and security providers are offering a new kind of DNS service. Essentially you re-point your DNS traffic towards one of these providers and they screen it.
DNS and its functionality continue as before. The key difference is that the provider checks each of your DNS queries in an attempt to identify anomalous behaviour or attributes that may indicate ill intent. Using their intelligence networks, machine learning and the power of the crowd, they can make split-second judgements on the behaviour of your DNS traffic; this could be based on known-knowns, inference or patterns.
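The screening step can be illustrated with a toy sketch: each queried domain is checked against a blocklist of known command-and-control domains, plus a crude heuristic for the random-looking names produced by domain-generation algorithms (DGAs). The domains and thresholds here are invented for illustration; real services draw on far richer intelligence:

```python
# Illustrative sketch of DNS-layer screening. The blocklist entries are
# hypothetical placeholders; the heuristic is deliberately simplistic.

KNOWN_BAD = {"evil-c2.example", "malware-drop.example"}

def looks_suspicious(domain: str) -> bool:
    """Crude DGA heuristic: very long first label with very few vowels."""
    label = domain.split(".")[0]
    vowels = sum(label.count(v) for v in "aeiou")
    return len(label) > 20 and vowels / max(len(label), 1) < 0.2

def verdict(domain: str) -> str:
    """Classify a queried domain: block known-bad, flag odd-looking, else allow."""
    domain = domain.lower().rstrip(".")
    if domain in KNOWN_BAD:
        return "block"
    if looks_suspicious(domain):
        return "flag"
    return "allow"
```

A normal query such as `www.example.com` passes straight through, while a query for a listed C2 domain is blocked before the connection is ever made.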
This is an easy service to implement and rarely requires any kit or significant change in your infrastructure. This service is more of a diagnosis tool than a fix. It can tell you something is going on, but it won't necessarily prevent it. It is a starting point though!
DNS solutions are often priced upon the number of users or on the volume of DNS queries originating from your network. This is often a simple inroad for an organisation, even if the solution is used as a barometer to gauge if something is going on.
It is worth mentioning that many firewalls support IP reputation analysis which performs a similar function. If your network assets are connecting to dodgy networks (and IP addresses), it alerts you and blocks said traffic.
3) Advanced Endpoint Threat Detection
Traditional antivirus has its limitations. It uses a database of known vulnerabilities and viruses. It attempts to identify known threats through signatures or basic behavioural observations, often using heuristics. Traditional AV has a place but in many respects it is being overtaken by more advanced solutions.
The signature-database approach is like having a register of all the bank robbers in the world. Naturally, you wouldn't want these in your bank, but at what point does a bank robber become a bank robber? After they've robbed a bank. Before they do, they are a civilian like any other. In a similar way, a virus is only a virus once someone says it is. Until then…
The latest generation of solutions, which address the limitations of traditional AV, are known as Endpoint Threat Detection systems or Endpoint Behavioural Analysis platforms.
The intelligence in these solutions typically sits in a central appliance, a software solution or the Cloud (keeping the vendor's intellectual property safely tucked away), paired with a lightweight agent installed on each endpoint.
The agent’s job is to observe behaviour, kernel system calls, privileged processes, network-traffic and file access – all the while communicating its findings to a central brain. The brain has insight into your whole IT environment so using its advanced intelligence, machine learning, pattern matching or crowd-intelligence – it can make a judgement call.
Many of these agents work in harmony with additional devices or controls, providing containment alongside detection. In the event an attack or potential incident is detected, the solution can trigger events that force other systems to take action, whether that be containment or alerting individuals.
To make this work effectively, you need an agent on each and every endpoint (workstation or server). This can be costly, but effective.
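A heavily simplified sketch of the agent/brain split described above might look like the following. The event types, weights and threshold are invented for illustration; real platforms use far more sophisticated models:

```python
# Illustrative skeleton of endpoint telemetry scoring: agents record
# observations (file access, network connections, privilege changes) and a
# central "brain" weighs them per host to decide whether to contain it.

from dataclasses import dataclass

@dataclass
class Event:
    host: str
    kind: str      # e.g. "file_access", "net_connect", "priv_escalation"
    detail: str

# Hypothetical weights: privilege escalation is far more alarming
# than an ordinary file access.
SUSPICION_WEIGHTS = {"file_access": 1, "net_connect": 2, "priv_escalation": 5}

def score_host(events: list[Event], host: str) -> int:
    """Brain-side judgement: sum the weighted events reported for one endpoint."""
    return sum(SUSPICION_WEIGHTS.get(e.kind, 0) for e in events if e.host == host)

def needs_containment(events: list[Event], host: str, threshold: int = 6) -> bool:
    """Trigger containment once a host's suspicion score crosses the threshold."""
    return score_host(events, host) >= threshold
```

The point of the sketch is the architecture: individual observations are cheap and ambiguous, but correlating them centrally across the whole estate is what turns noise into a judgement call.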
4) Sandboxing (Payload Analysis)
Sandboxing is a technology that effectively mimics your live environment. Traditionally, in an enterprise, if someone e-mails you an attachment, your mail filter would scan it for spam and viruses, perhaps check the file type (e.g. PDF) and, if OK, pass it through.
Much like traditional AV, these solutions are unable to spot advanced threats.
Sandboxing takes a different approach. When someone e-mails you a file, the sandbox will open the attachment in a secure container environment and observe its behaviour. Does it act maliciously? What does it do? What files does it access? It then makes a judgement call. This is also known as payload analysis. Rather than looking at the label on the packet, it opens the packet, pokes it, eats it, tests it and sees what happens.
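As a toy illustration of behavioural observation, the sketch below "detonates" a sample (here just a Python callable standing in for an attachment) while intercepting its file accesses, then judges it on what it touched. A real sandbox detonates the actual file in an isolated VM and watches far more than file opens; this only shows the principle:

```python
# Toy behavioural-analysis sketch: run a sample with file opens intercepted,
# record which paths it tries to touch, then make a judgement call.
# The "sensitive paths" list is an invented example.

import builtins

SENSITIVE = ("/etc/passwd", "C:\\Windows\\System32")

def observe(sample) -> list[str]:
    """Execute the sample with open() intercepted; return the paths it touched."""
    touched = []
    real_open = builtins.open
    def spy_open(path, *args, **kwargs):
        touched.append(str(path))
        raise PermissionError("sandbox: file access denied")
    builtins.open = spy_open
    try:
        try:
            sample()
        except Exception:
            pass  # the sample may crash when denied access; that's fine
    finally:
        builtins.open = real_open  # always restore the real open()
    return touched

def verdict(touched: list[str]) -> str:
    """Judge the sample on its observed behaviour, not its label."""
    if any(p.startswith(s) for p in touched for s in SENSITIVE):
        return "malicious"
    return "clean"
```

The key idea the sketch captures is that the verdict comes from observed behaviour during execution, not from a signature attached to the file in advance.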
The challenge with sandboxing is that malware is intelligent. Modern malware attempts to detect the presence of a sandbox, trying to evade detection.
Sandboxing solutions are clever too. They are aware of these evasion techniques so they employ their own anti-evasion techniques.
Crafted malware is intelligent too: it understands the evasion-detection techniques the sandboxes use, so it tries to avoid the anti-evasion-detection techniques with more magic. You get the picture. It really is a constant battle.
Sandbox solutions are rated on the number of files or messages they can process per hour. There is typically a capex purchase with ongoing support and maintenance. Some sandboxing vendors offer cloud solutions, which represent an ongoing opex instead.
With Cloud, you need to be careful from a regulatory compliance perspective (HIPAA, Data Protection, PCI-DSS), after all, you may be uploading your files to the sandbox provider who could be located in a country that falls foul of your obligations.
That was a quick run through some of the technologies available to help safeguard your organisation against APTs. It is by no means an exhaustive list but serves as a starting point in any discussion around network security.
Cyber-security is an increasingly board-level topic of discussion, and conversations about security should now happen at every level. If you are in IT, educate your board. If you are on the board, ask IT.
Now is a good time to consider the security controls your organisation employs to safeguard against these emerging threats.
If your IT budget is a challenge or if the business has other priorities, you may find your existing systems (albeit with some tweaks) are already capable of providing an additional level of security without a massive capex or the sudden onslaught of a security subscription. For a robust defence, consider a managed security service or speak to your IT Support provider.
— Technologist Joe Hughes is the CEO of Manx Technology Group, a company that provides a range of IT, network and security services to organisations of every size. Joe's key areas of interest are cyber-security, FinTech, healthcare technology and the growing use of data throughout every aspect of business.
We are regularly engaged by clients who are looking to enhance or replace their perimeter security solution (e.g. firewalls).
When we embark on a project like this, we rarely approach the problem from a purely technical or network standpoint. To implement a solution that confidently protects a customer's network and information assets, you first need to understand their business. What systems do they use? Where are their users located? How many sites? Do they permit remote access? Who and what should access the internet? Read more
There is a growing trend to outsource the IT infrastructure requirements of a business to the Cloud or to a datacentre environment. In our experience, this transition is normally prompted by one of the following events:
- Business growth or acquisition. This prompts the business to review their IT infrastructure requirements. Outsourcing, typically with a level of opex, is then a consideration rather than a capex investment in a new solution. Both cloud and datacentre providers will have some form of managed service offering culminating in an opex service model.
- Outdated infrastructure. An organisation may have historically invested in their IT setup but since then there has been little in the way of incremental upgrades or investment. This has meant the business is facing a significant capital expenditure; making the prospect of an opex scheme more attractive. The datacentre or cloud provider will typically have modern, scalable and redundant infrastructure components.
- Staff resource. With so many applications, business systems and day-to-day end-user support demands, the IT team may be too swamped with their own jobs to worry about the IT infrastructure. In these scenarios a business may choose to outsource the infrastructure (IaaS) and platform components (PaaS) to a datacentre or cloud provider, enabling their IT team to focus on application or user support.
- An IT Event. Essentially something has gone pear-shaped. A core system failure, a storage outage or a network issue – they can all prompt management to review their IT systems. Despite IT lobbying the business for some time, it often takes an event like this to prompt people into action. Often the cloud or datacentre will cover off or resolve many of the issues that triggered the event.
- Regulatory Pressures. This could be new industry regulations or a scenario where greater scrutiny is being applied, forcing businesses to take stock and ensure compliance. This could relate to DR (Disaster Recovery) or BCP (Business Continuity), information security or documentation. Datacentres and cloud platforms are often a fast-track to compliance in many areas, particularly DR, PCI-DSS and others where security and availability are considered paramount. Outsourcing is, however, not without its pitfalls, and in the quest for compliance you may inadvertently fall foul.
- New Outlook. A new CTO or IT Director will often take a fresh perspective on an organisation's IT setup. Often with the support of the board, this can lead to a raft of changes within a business.
These are just some examples of catalysts for change.
Things to consider
There are some key items to consider before transitioning to the cloud or a datacentre, or before outsourcing certain elements of your IT environment:
- Connectivity. High-speed, secure and reliable network connectivity to your IT environment is vital. All too often we see businesses adopt VDI, off-site DR or remote working but overlook the need for connectivity. The initial forecasted savings can be wiped out immediately when it becomes apparent a fibre or leased line is required. If your business is shifting its core IT operations to a datacentre or cloud, then good, solid connectivity is essential.
- Regulation. Many industries are subject to regulatory oversight. Depending on the industry and the regulatory body, they will often set down requirements or guidelines that govern how you run your IT and outsourcing operations. Financial services, banking and healthcare are particular industries where outsourcing can be a challenge. The UK's FCA, Isle of Man FSC, PCI-DSS and HIPAA all have specific requirements or guidelines around outsourcing (Cloud) and security (shared infrastructure). This is perhaps one of the reasons Private Cloud is a core offering of many datacentres, as it sidesteps many of the grey areas. In many cases public or shared IaaS is an option, but you have to demonstrate you have considered (and documented) the risks, be able to prove your outsourcer is compliant and be comfortable that you comply. For this reason, many businesses err on the side of caution.
- Paper office. If your business is a paper handling organisation, for example printing and scanning documents – you need to evaluate whether moving your back office into the datacentre will cause other issues within your organisation. Document management, imaging, faxes and retrieval can often be an after-thought once a business has moved to the cloud. Unfortunately, this afterthought is often a vital part of the business, leading to a loss in operational effectiveness and similar technical issues.
- SLA. The SLA is often not worth the paper it is written on. If 99.999% is promised, ask how those metrics are calculated and how the provider plans to meet them. Ask for historic measurements. Ensure the service credits and liabilities align with the losses your business would face. If your business demands 99.999% (five nines), then be prepared to pay the price for that level of uptime; it is simply not fair (or even possible) to provide five-nines on a shoestring.
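The five-nines figure above translates into a surprisingly small downtime allowance. A quick back-of-the-envelope calculation (using a 365-day year) shows why that level of uptime is so expensive to deliver:

```python
# How much downtime each availability level actually permits per year.
# Five nines (99.999%) works out at roughly five minutes a year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Annual downtime budget, in minutes, for a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% availability -> {allowed_downtime_minutes(pct):.1f} minutes/year")
```

At 99.9% the budget is over eight hours a year; at 99.999% it is barely five minutes, which leaves no room for even a single routine maintenance window without redundant infrastructure.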
These are just some of the issues to consider when your organisation is considering a move to a hosted, cloud or outsourced environment. The team at MTG have over a decade of experience transitioning businesses to (and from!) hosted or cloud environments. MTG’s range of network, IT and security solutions are used by organisations with their IT infrastructure on-premise or hosted in an outsourced environment. The MTG board have held previous positions at datacentre, cloud and telecoms providers – so we have a thorough understanding of the business models, pitfalls and constraints of the hosted model.
If your business is considering a change in strategy or an infrastructure upgrade, speak to the experts.
What is DLP (Data Loss Prevention)?
DLP (Data Loss Prevention) is a group of technologies whose purpose is to ensure data is not lost, misused, disclosed or accessed by unauthorised users. DLP solutions generally classify data, protect confidential information, implement controls, identify data in transit and help prevent users (or customers) from accidentally or maliciously sharing data. Read more
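As a rough illustration of the detection side, a simple DLP-style content check might scan outbound text for strings that look like payment card numbers, validating candidates with the Luhn checksum. This is a minimal sketch with a single invented pattern; real DLP products combine many classifiers, document fingerprints and policy engines:

```python
# Illustrative DLP-style check: flag outbound text containing likely payment
# card numbers (13-16 digits, optionally spaced/hyphenated, Luhn-valid).

import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum: double every second digit from the right."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_card_number(text: str) -> bool:
    """True if the text contains a plausible, checksum-valid card number."""
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            return True
    return False
```

A gateway applying a check like this could block or quarantine an outbound e-mail before confidential cardholder data leaves the network; the same pattern-plus-checksum idea extends to other identifiers.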