Fortinet’s FortiGuard Labs released an advisory relating to Flash earlier this month. Essentially, a specially crafted SWF file could allow an attacker to execute arbitrary code on a user’s PC. The exploit uses a vulnerability patched in Flash 188.8.131.52. There are more details on their website here. Fortinet classified the vulnerability as SWF/SwfDlr.BC!tr.
If your organisation requires Flash, the obvious course of action is to ensure that Flash is up to date. Beyond that, the continued use of Flash is not recommended unless you genuinely need it. With the widespread adoption of HTML5 for video and interactive web applications, there have been questions for some time regarding the longevity of Flash. With exploits appearing in the wild every so often, it is no wonder its demise is perhaps accelerating, with people turning to open standards such as HTML5.
We have talked about IPS (Intrusion Prevention Systems) in a number of articles and discussed the common misconception that these are only relevant where you host applications or e-commerce. In the case of this particular vulnerability, protecting ordinary users is exactly where an IPS excels.
FortiGuard’s own labs tested and identified this exploit, and created a signature that is deployed to Fortinet firewalls and Web Application Firewalls (WAF) in real time. These devices could be protecting the perimeter or internal network segments. Once the signature is loaded, if any user in the organisation is tricked into connecting to a malicious website (or a legitimate but compromised one), the IPS engine will identify the attempted exploit and block it. In these scenarios, the IPS is in effect blocking intrusions that are to some extent initiated by the user.
Inspecting SSL-encrypted traffic is an equally important challenge, especially if the crafted exploit code is delivered via a website over HTTPS. Common web filters or firewalls not configured for SSL inspection will simply not see it. The use of SSL inspection does need some careful consideration, given that as a business you can then look at all traffic, including potentially employee-confidential traffic. It is for this reason your staff handbook, terms of employment and security policy should make clear what you are doing and why.
10GE (10 Gigabit Ethernet) networks are commonplace in service provider, cloud and datacentre environments. Whilst there is some adoption in the enterprise, many of the solutions can be cost prohibitive or they tend to be all-or-nothing solutions. The purpose of this post is to highlight some ways you can increase network throughput within your organisation. Read more
Windows 2003 officially went EOL (end of life) in July 2015; this is not exactly news. If your organisation still operates Windows 2003 servers then we would recommend you lock them up and enforce strict visitation rights. Read more
Unified Communications is considered one of the top IT investments for European Healthcare Providers in 2015.
Unified Communications (UC) solutions are one of the most widespread communication technologies used in business today. UC platforms integrate voice, video, instant messaging, e-mail and voicemail into a seamless communications platform. Read more
We are regularly engaged by clients who are looking to enhance or replace their perimeter security solution (e.g. firewalls).
When we embark on a project like this we rarely approach the problem from a purely technical or network standpoint. To implement a solution that confidently protects a customer’s network and information assets, you first need to understand their business. What systems do they use? Where are their users located? How many sites? Do they permit remote access? Who and what should access the internet? Read more
We have been busy recently upgrading a number of our customers to vSphere 6.0. Nearly every one of our customers uses virtualisation and VMware for their core IT infrastructure requirements. The benefits of virtualisation are well understood, and the ease of DR is a key driver. VMware’s latest version improves on its predecessors, keeping pace with the latest trends in IT, storage and compute.
The key enhancements and changes are highlighted below:
- Scalability. Hosts now support 480 CPU cores, 12TB RAM and 1024 VMs per host.
- Support. The HCL has been extended to include a number of other chipsets, drivers, devices and OS.
- Graphics. Native NVIDIA GPU support provides hardware-accelerated graphics.
- Instant Clone. Technology that allows you to copy VMs up to 10x faster than before.
- Control/Traffic-Shaping. You can now provision per-VM bandwidth reservations to control bandwidth and apply limits.
- Multicast Snooping. For environments that use IGMP, snooping (and MLD for IPv6) provides greater performance and scale.
- vMotion IP Stack. vMotion now has its own IP stack/instance, enabling separate IP address and gateway management.
- vMotion. You can now vMotion guests between hosts even with 100ms latency between sites. This allows inter-continental vMotion moves.
- Replication-Assisted vMotion. Customers who use active-active replication can now leverage it to vMotion guests up to 95% more efficiently.
- Fault Tolerance. Now allows 4 x vCPU, a significant improvement.
- Content Library. A central repository to store your templates, ISOs, scripts and VMs. This can be distributed using a publish/subscribe model.
- Cross-vCenter Clone/Migration. Copy and move guests between different vCenter servers.
- UI. Web client improvements.
If you would like to learn more about vSphere 6, speak to MTG or refer to the VMware website.
There is a growing trend to outsource the IT infrastructure requirements of a business to the Cloud or to a datacentre environment. In our experience, this transition is normally prompted by one of the following events:
- Business growth or acquisition. This prompts the business to review its IT infrastructure requirements. Outsourcing, typically paid for as opex, becomes a consideration rather than a capex investment in a new solution. Both cloud and datacentre providers will have some form of managed service offering culminating in an opex service model.
- Outdated infrastructure. An organisation may have historically invested in their IT setup but since then there has been little in the way of incremental upgrades or investment. This has meant the business is facing a significant capital expenditure; making the prospect of an opex scheme more attractive. The datacentre or cloud provider will typically have modern, scalable and redundant infrastructure components.
- Staff resource. With so many applications, business systems and day-to-day end-user support demands, the IT team may be too swamped with their own jobs to worry about the IT infrastructure. In these scenarios a business may choose to outsource the infrastructure (IaaS) and platform components (PaaS) to a datacentre or cloud provider, enabling their IT team to focus on application and user support.
- An IT Event. Essentially something has gone pear shaped. A core system failure, a storage outage or a network issue – they can all prompt management to review their IT systems. Despite IT lobbying the business for some time, it often takes an event like this to prompt people into action. Often the cloud or datacentre will cover off or resolve many of the issues that triggered this event.
- Regulatory Pressures. This could be new industry regulations or a scenario where greater scrutiny is being applied, forcing businesses to take stock and ensure compliance. This could relate to DR (Disaster Recovery), BCP (Business Continuity), information security or documentation. Datacentres and cloud platforms are often a fast-track to compliance in many areas, particularly DR, PCI-DSS and others where security and availability are considered paramount. Outsourcing is however not without its pitfalls, and in the quest for compliance you may inadvertently fall foul.
- New Outlook. A new CTO/IT Director will often take a fresh perspective on an organisation’s IT setup. Often with the support of the board, this can lead to a raft of changes within a business.
These are just some examples of catalysts for change.
Things to consider
There are some key items to consider before transitioning to the cloud or a datacentre, or if you are outsourcing only certain elements of your IT environment:
- Connectivity. High-speed, secure and reliable network connectivity to your IT environment is vital. All too often we see businesses adopt VDI, off-site DR or remote working having simply overlooked the need for connectivity. The initial forecast savings can be wiped out immediately when it becomes apparent a fibre or leased line is required. If your business is shifting its core IT operations to a datacentre or cloud, then good, solid connectivity is a given.
- Regulation. Many industries are subject to regulatory oversight. Depending on the industry and the regulatory body, they will often set down requirements or guidelines that govern how you run your IT and outsourcing operations. Financial services, banking and healthcare are particular industries where outsourcing can be a challenge. The UK’s FCA, the Isle of Man FSC, PCI-DSS and HIPAA all have specific requirements or guidelines around outsourcing (cloud) and security (shared infrastructure). This is perhaps one of the reasons Private Cloud is a core offering of many datacentres, as it sidesteps many of the grey areas. In many cases public or shared IaaS is an option, but you have to demonstrate you have considered (and documented) the risks, be able to prove your outsourcer is compliant and be comfortable that you comply. For this reason, many businesses err on the side of caution.
- Paper office. If your business handles a lot of paper, for example printing and scanning documents, you need to evaluate whether moving your back office into the datacentre will cause other issues within your organisation. Document management, imaging, faxing and retrieval can often be an afterthought once a business has moved to the cloud. Unfortunately, that afterthought is often a vital part of the business, and overlooking it leads to a loss of operational effectiveness and avoidable technical issues.
- SLA. The SLA is often not worth the paper it is written on. If 99.999% is promised, ask how those metrics are calculated and how the provider plans to meet them. Ask for historic measurements. Ensure the service credits and liabilities align with the losses your business would face. If your business demands 99.999% (five nines) then be prepared to pay the price for that level of uptime; it is simply not fair (or even possible) to provide five nines on a shoestring.
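To put those percentages in perspective, the downtime budget they imply is easy to work out, assuming the SLA is measured continuously over a 365-day year (providers often measure monthly, or exclude “planned maintenance”, which changes the figures considerably):

```python
# Downtime permitted per year by common availability SLAs, assuming the
# SLA is measured continuously across a full 365-day year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of downtime per year permitted by an availability figure."""
    return MINUTES_PER_YEAR * (1 - availability)

for sla in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{sla:.3%} uptime allows {downtime_minutes_per_year(sla):.1f} min/year")
```

Five nines works out to roughly five minutes of downtime per year, which is why it commands a premium: it leaves almost no room for even a single reboot.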
These are just some of the issues to consider when your organisation is considering a move to a hosted, cloud or outsourced environment. The team at MTG have over a decade of experience transitioning businesses to (and from!) hosted or cloud environments. MTG’s range of network, IT and security solutions are used by organisations with their IT infrastructure on-premise or hosted in an outsourced environment. The MTG board have held previous positions at datacentre, cloud and telecoms providers – so we have a thorough understanding of the business models, pitfalls and constraints of the hosted model.
If your business is considering a change in strategy or an infrastructure upgrade, speak to the experts.
Accenture’s 2015 Global Risk Management Study found that 65% of American bank executives believe cyber risk is going to be increasingly severe throughout 2015.
The series of reports from consulting firm Accenture reaffirms the fact that risk management and cyber security in particular are becoming regular board-level topics. Read more
What is DLP (Data Loss Prevention)?
DLP (Data Loss Prevention) is a group of technologies whose purpose is to ensure data is not lost, misused, disclosed or accessed by unauthorised users. DLP solutions generally classify data, protect confidential information, implement controls, identify data in transit and help prevent users (or customers) from accidentally or maliciously sharing data. Read more
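A simple example of the classification side of DLP is scanning outbound text for payment card numbers. The sketch below is a minimal illustration in Python (the function names are our own, and real DLP products use far richer fingerprinting): it finds candidate digit runs and validates them with the Luhn checksum so that phone numbers and order references are not flagged.

```python
import re

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9       # equivalent to summing the two digits
        total += d
    return total % 10 == 0

# Candidate card numbers: 13-16 digits, optionally separated by spaces/hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def find_card_numbers(text: str) -> list[str]:
    """Return candidate card numbers in text that pass the Luhn check."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

A DLP gateway applying a rule like this could then block the message, quarantine it or alert the security team, depending on policy.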
Traditional firewalls with UTM-type functionality (e.g. web filtering, intrusion prevention, antivirus) often suffered from poor performance: low throughput, latency and inconsistent accuracy. As firewall technology evolved, so did the performance and scanning capabilities. The term Next Generation Firewall (NGFW) was coined to define a firewall that met the following criteria: Read more