TruStack supports NCFE in business transformation

Almost a decade after they partnered for the first time, two of the North East’s most forward-thinking and flourishing organisations are looking forward to a healthy future together.

Digital business experts at TruStack have completed a transformation of IT systems at NCFE, a leading provider of educational services for more than 170 years.

TruStack was formed last year after the merger of SITS Group, PCI Services and Pivotal Networks, with SITS installing NCFE’s IT infrastructure back in 2013.

Not-for-profit organisation NCFE, which designs and certificates technical qualifications as well as offering assessment and educational technology solutions, decided to update its systems to keep pace with the organisation’s growth.

NCFE was pleased to continue its long-standing relationship with TruStack, a business which prides itself on giving clients ‘innovative solutions and expert support’.

Nick Evans, NCFE’s Information Security Manager, said: “TruStack has always supported us on our business journey. We feel that they are almost an extension of our own team.

“Our engineers trust their engineers. That respect between the organisations from an engineering level to managerial level has been born from a long-term relationship between us.”

“There was nothing to say that we were definitely going to go with TruStack for the project, but when it came down to it, no one else could provide the same level of support and respect that we have received from them.”

“I have always felt they are not there just to make money from a client – they care. They had a vested interest in making sure the project was successful and that is what they did.”

In the past five years NCFE has seen its turnover more than double and its workforce increase from around 200 to more than 450 employees.

Security is vital to NCFE, with the organisation contributing to the success of millions of learners at all levels, in a range of sectors.

From September, it will take responsibility for delivering one of the government’s new T-Level qualifications, with five more to follow in 2021. 

Lindsey Gibson, Head of Group IT at NCFE, added: “We help learners from all walks of life to progress in their education and into employment, in line with our core purpose to ‘promote and advance learning’.

“We are firmly focused on the future and TruStack is a key partner in helping us to grow and increase our reach and impact.”

TruStack’s engineers spent eleven days planning for, and delivering, the project at NCFE’s head office at Quorum Business Park, in Newcastle-upon-Tyne, with the project going live in December.

Liam Holliday, TruStack sales manager, said: “It was a case of giving NCFE a platform to host its business applications that would last well into the future. 

“We pride ourselves on getting things right first time, and we are pleased things have turned out so well for NCFE in the latest stage of our partnership.”

He added: “We can never rest on our laurels. We see every opportunity we get like it’s a new business. 

“By treating every customer like a new customer we give ourselves the best chance possible of winning their business next time.”

TruStack works with hundreds of companies across the North East and beyond, including several of the North East’s Top 200 companies such as Unipres (UK) and Vertu Motors.

Other clients include the Natural History Museum and Collingwood Business Solutions.

TruStack has its head office on the Northumberland Business Park, Cramlington, with a branch office situated at the Evolve Business Centre, Houghton le Spring. If you are interested in finding out more please click here.

Cisco Meraki Systems Working in the Real World

Corporate networks are stretched thin by cloud services, SaaS applications, and mobility. Plus, organisations require better connections to branch offices to deliver higher-quality network services. As they transition to a digital business model, their network topologies are significantly impacted.

The adoption of cloud services, the virtualisation of the traditional network, and an increasingly mobile workforce accessing applications in the cloud are accelerating advancements in wide area networking technologies.

We held a webinar with Cisco Meraki Systems Engineer Ben Kersnovske, Lake District National Park Authority IT Manager Frank Blackburn and TruStack Commercial Director Phil Cambers to discuss the benefits of SD-WAN and how it performs in the real world.

You can recap on what was delivered by filling in the form below.

Webinar Download


Data Classification: What it is, why you should care and how to perform it.

Many organisations have limited resources to invest in safeguarding data. Knowing exactly what needs to be protected will help you develop a secure plan so you can allocate your budget and other resources wisely.

The best place to start is by classifying your data. Classification provides a solid foundation for a data security strategy because it helps to identify the data at risk in the IT network, both on premises and in the cloud.

In this article, we will define data classification and explore the steps involved in getting started.

What is data classification?

Data classification is the process of organising both structured and unstructured data into categories. It enables more efficient use and protection of critical data, including facilitating risk management, legal discovery, and compliance processes.

For years, it was up to users to classify data they created, sent, modified or otherwise touched. Today, organisations have options for automating classification of new data that users create or collect.

What is data discovery? 

Data discovery is the process of scanning repositories to locate data. It can serve many purposes, such as enterprise content search, data governance, data analysis and visualisation. When combined with data classification, it helps organisations identify repositories that might contain sensitive information so they can make informed decisions about how to properly protect that data.

Data security

To safeguard sensitive corporate and customer data adequately, you must know and understand your data. You need to be able to answer the following questions:

  • What sensitive data, such as intellectual property (IP), protected health information (PHI), personally identifiable information (PII), and credit card numbers, do you store?
  • Where does this sensitive data reside?
  • Who can access, modify and delete it?
  • How will your business be affected if this data is leaked, destroyed or improperly altered?

Having answers to these questions, along with information about the threat landscape, enables organisations to protect sensitive data by assessing risk levels, planning and implementing appropriate data protection and threat detection measures.

Regulatory compliance

Compliance standards require organisations to protect specific data such as cardholder information (PCI DSS), health records (HIPAA), financial data (SOX) or personal data (GDPR). Data discovery and classification helps to determine where these types of data are located so you can make sure that appropriate security controls are in place and that the data is trackable and searchable as required by regulations.

Guidelines for data classification

There is no one-size-fits-all approach to data classification. However, the classification process can be broken down into five key steps, which you can tailor to meet your organisation’s needs as you develop your general data protection strategy.

Step #1. Establish a data classification policy

First, you should define a data classification policy and communicate it to all employees who work with sensitive data. The policy should be short and simple and include the following basic elements:

  • Objectives – The reasons data classification has been put into place and the goals the company expects to achieve from it.
  • Workflows – How the data classification process will be organised and how it will impact employees who use different categories of sensitive data.
  • Data classification scheme – The categories that the data will be classified into.
  • Data owners – The roles and responsibilities of the business units, including how they should classify sensitive data and grant access to it.
  • Handling instructions – Security standards that specify appropriate handling practices for each category of data, such as how it must be stored, what access rights should be assigned, how it can be shared, when it must be encrypted, and retention terms and processes.
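A policy like the one above is easier to enforce when the scheme and handling instructions are captured as data that tooling can read. The four-tier scheme and the rules below are purely illustrative assumptions, not a recommendation – a minimal Python sketch:

```python
# Hypothetical four-tier classification scheme with handling rules per
# category; the labels and rule values here are illustrative only.
SCHEME = {
    "Public":       {"encrypt": False, "retention_years": 1},
    "Internal":     {"encrypt": False, "retention_years": 3},
    "Confidential": {"encrypt": True,  "retention_years": 7},
    "Restricted":   {"encrypt": True,  "retention_years": 10},
}

def handling_rules(label):
    """Look up the handling instructions for a classified asset."""
    if label not in SCHEME:
        raise ValueError(f"Unknown classification label: {label}")
    return SCHEME[label]
```

Keeping the scheme in one place means storage, sharing and retention checks can all consult the same source of truth rather than each team interpreting the policy document independently.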

Step #2. Discover the sensitive data you already store

Now it’s time to apply your classification policies to your existing data. You could choose to classify only new data, but then business-critical or confidential data you already have might be left insufficiently protected.

Rather than trying to manually identify databases, file shares and other systems that might contain sensitive information, consider investing in a data discovery application that will automate the process. Some technology tools report both the volume and potential category of the data.
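As a rough illustration of what such a discovery tool does under the hood, the sketch below scans text for sensitive patterns and maps any hits to a label. The regexes and label names are simplified assumptions; real discovery products use far more sophisticated detection (checksums, context, machine learning):

```python
import re

# Illustrative patterns only -- real tools detect many more categories.
PATTERNS = {
    "PII": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    "PCI": re.compile(r"\b(?:\d[ -]?){13,16}\b"),       # card-like numbers
}

def discover(text):
    """Return the set of sensitive-data categories found in a text blob."""
    return {cat for cat, rx in PATTERNS.items() if rx.search(text)}

def classify(text):
    """Map discovered categories to a single label from a simple scheme."""
    found = discover(text)
    if "PCI" in found:
        return "Restricted"
    if "PII" in found:
        return "Confidential"
    return "Public"
```

Run across a file share, a scanner like this reports both the volume of matches and the likely category of each repository, which is exactly the output you need for the labelling step that follows.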

Step #3. Apply labels

Each sensitive data asset needs a label in accordance with your data classification model; this will help you enforce your data classification policy.

Step #4. Use the results to improve security and compliance

Once you know what sensitive data you have and its storage locations, you can review your security and privacy policies and procedures to assess whether all data is protected by risk-appropriate measures.

Step #5. Repeat

Files are created, copied, moved and deleted every day. Therefore, data classification must be an ongoing process. Proper administration of the data classification process will help ensure that all sensitive data is protected.


Data classification is not a magic wand that ensures data security or compliance with regulatory requirements by itself. Rather, it helps organisations identify the data most critical to the business so they can focus their limited time and financial resources on ensuring appropriate data protection.

For more information contact us here.

Veeam Vanguards Attend Summit – Ian Sanderson

What is a Veeam Vanguard?

A Veeam Vanguard is someone who is committed to excellence in delivering Veeam solutions. It doesn’t necessarily have to be someone with a public presence writing blog posts and so on; it could be an internal champion at a workplace. There are a lot of Veeam partners with staff members who design and deliver solutions every day – they make up some of the Veeam Vanguards. There are roughly 70 Vanguards worldwide and about five or six in the UK, so it’s a pretty prestigious position to hold! Each year you must apply to get into the programme, and to do that you have to demonstrate what you have done to deserve a place. For example, do you promote Veeam, are you an internal champion at work, or do you work for an MSP and deliver core Veeam services?

What are the benefits of being a Veeam Vanguard?

One of the main benefits is the private chat channel with all the other Veeam Vanguards. It is a place where you can discuss any challenges or ideas you may have with an exceptionally talented group of people. You also get direct access to product management within Veeam, and our opinions on the direction products are heading are listened to a little more closely – they certainly appreciate our feedback.

As part of this you went to the Veeam Vanguard Summit – what exactly is that?

The Veeam Vanguard Summit is an event that Veeam host once a year. They gather all of the Vanguards from around the world at a particular location – this year it was in Prague – and deliver 2.5 days of solid content where you can interact with the product management team to get an idea of what Veeam’s roadmap is and what products they are working on. As you would expect from an event like this, there are a lot of NDAs covering updates on special features and new products that are due to be released soon. It is also an opportunity to catch up with your virtual Vanguard peers in person and shoot the breeze.

Is there anything that you can tell us from the event that isn’t under NDA?

There is! Veeam version 10 is imminent and currently in beta, due to be released in Q1 2020. There are a couple of new features in there which our customer base has certainly asked for. There is NAS backup – the ability to back up files from a file server or NAS device. There is continuous data protection, which, as the name suggests, allows you to protect virtual machines with a near-zero time difference between the protected copy and the live copy. Veeam Backup for Office 365 will also introduce the ability to back up to object-based storage, such as Azure Blob, AWS S3 or S3-compatible storage.

I’m guessing there is quite a lot that you can’t tell us?

Yes, there are quite a few things that I can’t say. If I could give a hint, take a look at Veeam’s messaging at the moment: they are marketing themselves as a full cloud data protection company – it’s not just your VMware environment anymore. I’m not sure when the NDA is going to be lifted on those ones, but I can keep you updated.

North East IT Company Stacked for Success

Three of the North East’s most well respected and forward-thinking IT Services Businesses have announced their intention to merge and form a new business. SITS Group, PCI Services and Pivotal Networks will merge to form a newly incorporated company called TruStack LTD.

The turnover of the new business will exceed £10M, with significant growth plans in terms of turnover and additional services post-merger.

All three companies operate across a multi-sector, UK-wide client base and share the same ethos of high-quality customer experience, delivering a range of complementary services that will benefit many of their existing customers.

All three companies provide very similar services such as cloud computing; network design, implementation and support; data centre services; managed services; unified communications and Cybersecurity solutions with a strong shared emphasis on client retention and customer service.

Existing clients include Muckle LLP, Collingwood Business Solutions LTD and many of the North East’s Top 200 companies.

The Directors of TruStack namely Joe Olabode, Richard Common, Paul Watson, Phil Cambers, Russell Henderson and Geoff Hodgson look forward to exciting times ahead.

In a joint statement, the Directors said: “There is clearly a very similar positive culture across all of the businesses. It makes sense to merge, and the joining of the businesses will, in turn, benefit our loyal and longstanding client base, who will get access to an even wider pool of commercial, administrative and technical services. We have spent many months performing due diligence on all sides and we are delighted to announce this merger. We would like to thank all parties involved in making this deal happen, including legal advice from Ward Hadaway and financial advice from Clive Owen and Haines Watts.”

Why You Need a Security Operations Centre – Paul Watson, Managing Director.

High-profile security breaches are commonplace in today’s media, driving awareness of the importance of cybersecurity across businesses of all sizes. In the first few months of 2019, the following breaches were reported:

  • Marriott Hotels suffered a breach of its reservation system, compromising the personal information of 500 million users.
  • Apollo, a sales engagement company, reported that 200 million records of prospective clients had been stolen from a database it maintained.
  • Google announced that it will shut down Google+ after discovering a bug that exposed information for 52.5 million users.
  • Quora, a popular question-and-answer website, announced that personal information of 100 million users was exposed in a data breach.

The estimated cost and impact of these breaches is staggering:

  • A study by Detica on behalf of the UK Government Cabinet Office estimates that cybercrime will cost UK Businesses £8 billion annually.
  • A study conducted by Cybersecurity Ventures estimates that cybercrime will cost the world $6 trillion annually by 2021 exceeding global trade in all major illegal drugs combined.
  • A study by PWC reported that only 39% of senior executives were confident that adequate safeguards were in place to deal with cyber threats. In addition, just 53% said they were in the process of building sufficient protection.

Cybercrime is clearly big business; the profile of attackers has changed from individual ‘hobbyists’ to well-organised and highly skilled people performing these actions as a job. The complexity of attacks and exploits has increased exponentially, with many being well planned, co-ordinated and using sophisticated methods of evasion. Add to this the fact that the number of Internet-connected devices has exploded over recent years – the estimated number of connected IoT devices in 2019 is a little over 42 billion, in addition to traditional Internet-facing services – and the scale of the problem is apparent.

To combat these exposures and minimise their attack surface, many companies are introducing multiple products into their infrastructures, each of which is designed to address a specific area of security. These products may be DNS-based security, firewalls, IPS/IDS, web filtering, email filtering, end-point protection, breach detection, cloud access security brokers (CASB), end-user behaviour analysis (EUBA); the list goes on. With an estimated 1,200+ vendors (many providing multiple products) within the cybersecurity solutions market, there is a huge number of products to choose from.

Each of the products introduced does an excellent job of mitigating cyberthreats within its specific area, and most provide a wealth of information and intelligence that companies can use for proactive protection and mitigation to further strengthen their security posture. Whilst these products provide information and intelligence, companies face many challenges when trying to leverage it, such as:

  • Much of the information is contained within log files generated by the products. Whilst these log files are generally in plain-text format, they tend not to be human-readable, and a single log file could easily contain thousands or tens of thousands of entries.
  • Log files are not generated in any particular standard; different vendors and products will produce logs with different information and formats making deciphering the contents difficult.
  • Each product will generate its own set of logs and events resulting in multiple locations of log file information for companies to decipher.
  • Because log files are scattered throughout the company’s infrastructure, correlating entries across multiple log files and multiple products manually is extremely difficult if not impossible.
  • The human resource required to achieve these tasks is significant. Most companies simply do not have, and do not have the appetite to employ, multiple people who could dedicate their time to analysing log files.
  • In addition to simply having the human resource to analyse log files, these employees need to have some form of threat intelligence to make informed decisions regarding emerging threats to really add value.

Over recent years there has been significant growth in the SIEM (Security Information and Event Management) market. These systems are designed to ingest logs and events from a diverse range of sources, index that information and enable IT departments to build visualisations (dashboards) based on their requirements and the indexed data. SIEM products form part of the foundation of a SOC service.
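The core idea – normalising differently formatted logs into common events and then correlating them – can be sketched in a few lines. Both log formats below are invented for illustration; real SIEM platforms ship parsers for hundreds of vendor-specific formats and correlate at far greater scale:

```python
import re
from datetime import datetime, timedelta

# Two made-up log formats: a firewall deny line and a web-server 401 line.
FIREWALL_RX = re.compile(r"(?P<ts>\S+ \S+) DENY src=(?P<ip>\S+)")
WEB_RX = re.compile(r"(?P<ip>\S+) - \[(?P<ts>[^\]]+)\] 401")

def normalise(line, rx, ts_format, source):
    """Turn one raw log line into a common event dict (or None)."""
    m = rx.search(line)
    if not m:
        return None
    return {
        "source": source,
        "ip": m.group("ip"),
        "time": datetime.strptime(m.group("ts"), ts_format),
    }

def correlate(events, window=timedelta(minutes=5)):
    """Flag IPs seen in more than one log source within the time window."""
    hits = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            if (a["ip"] == b["ip"] and a["source"] != b["source"]
                    and abs(a["time"] - b["time"]) <= window):
                hits.append(a["ip"])
    return hits
```

An IP that is denied at the firewall and then generates authentication failures on a web server minutes later is exactly the kind of cross-product pattern that is invisible in either log file on its own.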

SIEM products are available as both on-premises and SaaS offerings. Running an on-premises SIEM requires companies to employ skilled individuals, because installing, configuring and supporting SIEM products takes specialised skills. In addition to staff requirements, companies must provide suitably specified compute resource to process large volumes of data in near real time; architect the server and network infrastructure to cope with periods of peak activity; provide and maintain the storage needed to hold large volumes of data; and back up the normalised and historic data. The barrier for many companies entering the SIEM market is cost: most SIEM vendors license their products based on the volume of logs and events ingested. This volume is very difficult to quantify, resulting in a variable-cost service which is difficult to budget for and commercially unattractive. Consuming SIEM as a SaaS model removes the requirement for specialist hardware, although the pricing model remains the same and skilled individuals are still required to develop indexing rules and build visualisations.

Pivotal Networks are proud to announce our hosted SOC service, which leverages a mature SIEM platform augmented with robust rulesets and algorithms to highlight and correlate well-defined Indicators of Compromise (IoC). This is further enhanced by third-party threat intelligence feeds and highly skilled security analysts investigating all suspicious or malicious activity. Our hosted service provides a proven, purpose-built SOC which removes the requirement for our customers to employ additional skilled resource and specialised hardware. The Pivotal Networks hosted SOC service provides the following benefits for our customers:

  • Easy licensing model providing static monthly costs regardless of the volume of data ingested by the SOC.
  • A comprehensive hosted SOC service backed by a mature SIEM platform.
  • Twenty security analysts work in our SOC; they have an average of more than five years’ professional experience and hold a variety of security certifications, including CompTIA Security+, CISSP, CEH and ECSA, providing human intelligence.
  • Comprehensive visibility of events across all security products in your company.
  • Aggregation of logs and events from multiple sources.
  • Correlation of events across multiple systems and time zones.
  • Around 1000 product parsers are available at present. If new services are added which do not have parsers available, new parsers will be written at no additional cost.
  • Integration with on-premise applications as well as SaaS products through API integration.
  • Shared Threat intelligence across multiple companies around the world resulting in faster discovery of emerging threats.
  • Automatic creation of remediation tickets providing information about malicious or suspicious activities, involved hosts, supporting evidence, remediation steps and comprehensive activity logging.
  • Raw log retention for a 12-month period (can be extended if required).
  • Raw logs are stored encrypted and fingerprinted and can be used as “chain of custody” if required.
  • 24×7 monitoring by security experts.

Please contact us today to arrange a live demo. We can also offer a free 30-day trial of our hosted SOC service for a limited time.

Triumph in Top Tech 100 Once Again for SITS Group

Specialist IT Services (SITS) Group has once again finished within the Top 100 at the Northern Tech Awards.

Formed in 2008, SITS Group is a cloud infrastructure, security and support partner based in Northumberland with a focus on best-of-breed technologies such as VMware, Veeam, Cisco, DellEMC, Zerto & Trend Micro.

The awards organised by technology advisory and investment firm GP Bullhound – held in Manchester – championed key players in the region’s digital sector, celebrating the successful entrepreneurial path of business founders and leaders.

The achievement comes at the end of a record year for SITS Group, which finished with 20% growth on the previous financial year.

Phil Cambers, Commercial Director, said: ‘We are not a company that has ever looked back; however, as we approach our eleventh year in business I think it is important to remember the journey we’ve been on and remind ourselves of some of the fantastic achievements we have gained along the way.

The belief we have in our abilities in 2019 is as strong, if not stronger, than in 2008. We now have 11 years of experience in growing, running and adapting in business which will help pave the way for the next phase of growth we have planned.

I can say, on behalf of myself and the entire team at SITS Group, some of whom have been there for practically all 11 years of that journey, that there has never been a dull moment and we’ve enjoyed the challenge that comes with growing the company into the highly successful and mature business it is in 2019.’

For all press enquiries contact; Zoe Christopher (Marketing Manager)
[email protected]

The Truth About the Cloud Marketplace and What it Means Today.

The aim of this article is to bring some realism to the cloud messaging that has been delivered over recent years. The ‘Cloud First’ strategy that was being investigated by many businesses, and adopted by some, is disappearing and being replaced with a ‘Cloud Appropriate’ strategy. What we have seen is that much of the ‘Cloud First’ push has come from board level or senior management without any real understanding of the transitional requirements, impact, cost and potential disruption it means for the business, beyond the ‘promise’ of future cost savings. Many of these strategies stem from hearing that a peer or competitor has already adopted this approach, therefore ‘we should too’. The reality is, though, that much of the detail regarding the journey to get there, the actual costs and the day-to-day operational changes involved in adopting a ‘Cloud First’ strategy has not been thought about up front.

Today’s public cloud hyperscalers such as Microsoft Azure, AWS and Google provide a variety of platforms and services to deliver IaaS or PaaS to businesses globally; however, most businesses are not in a position to simply transpose their current applications, processes and services into these hosted platforms without significant rearchitecting. Not too long ago the public cloud message was firmly about moving all your existing workloads to the cloud and letting the supplier worry about the infrastructure. If you didn’t, you were behind the curve, because everyone else was doing it or already had. The reality, though, is that simply taking workloads that were running on premises and running them as-is in the public cloud is considered by many to be the wrong way to do it.

You won’t get the benefits of automation, resilience, scaling or cost saving by doing this. Systems such as large file servers, domain controllers and line-of-business application servers that are always running and consuming constant, consistent resources tend to cost significantly more to run in the public cloud – and that is before considering application interoperability and interconnectivity requirements, securing access and threat protection management.

In short, a simple “lift and shift” isn’t an option that will return the desired benefits.

Let’s look at an example of what ‘Cloud Appropriate’ may mean to you. I am sure you have heard of Office 365. Office 365 offers an alternative to on-premises workloads such as e-mail servers and SharePoint servers, as well as additional value with tools like workplace collaboration in the form of Microsoft Teams and Skype, task automation with Microsoft Flow, and so on. This is a Software as a Service (SaaS) solution, born in the public cloud, that can take full advantage of the infrastructure that underpins it and really does deliver business advantages: mobility for users to work from where they need to, with secure access to their applications and data, and enhanced communications – internally with video calls and IM services, as well as with customers and external contacts.

Cloud appropriate would also cover systems that are usually largely idle and consume a small amount of resource, but at times must spike to accommodate huge workloads. Think retail or online betting. An average mid-week night requires few resources, but the night of a title boxing match or a championship event sees millions of users hitting the system in a short time window. Accommodating this huge resource spike privately would be hugely expensive, so cloud scalability works here: for the majority of the time the system uses very little resource and therefore costs very little to operate.
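The arithmetic behind that spiky-workload argument is simple enough to sketch. All the figures below are hypothetical, invented purely for illustration; real cloud pricing varies by provider, region and instance type:

```python
HOURS_PER_MONTH = 730  # average hours in a month

def onprem_cost(peak_servers, cost_per_server_month):
    # On premises you must size (and pay) for the peak, all month long.
    return peak_servers * cost_per_server_month

def cloud_cost(baseline_servers, peak_servers, peak_hours,
               price_per_server_hour):
    # In the cloud you pay for the baseline all month, plus the extra
    # burst capacity only for the hours it actually runs.
    baseline = baseline_servers * HOURS_PER_MONTH * price_per_server_hour
    burst = ((peak_servers - baseline_servers)
             * peak_hours * price_per_server_hour)
    return baseline + burst
```

With invented numbers – a peak of 50 servers at £200 per server per month on premises, versus 2 baseline servers plus a 10-hour monthly burst at £0.30 per server-hour in the cloud – the always-sized-for-peak option costs many times more, which is the whole case for elastic scaling of spiky workloads.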

So where does that leave us?

Public cloud isn’t a panacea for offloading a full infrastructure easily and at a reduced cost – yet; it may get there in time. You still need to manage and secure your systems in the public cloud – the providers don’t do that for you – so system administration isn’t going to disappear overnight. The reality is, it’s a hybrid world out there. Businesses are consuming a mixture of public cloud SaaS, IaaS and PaaS systems as well as retaining private on-premises infrastructure for those workloads that are best suited to it, either financially or technically. Ultimately, a move to the public cloud will only be made if it yields a strong business advantage or a strong cost saving once all the transitional costs are factored in.

Public cloud is certainly a revolution, but for many businesses evolution of their existing IT infrastructure may better serve their needs. Hyper-converged infrastructure is a fitting example of the evolution of a traditional on-premises architecture comprising networking, compute and storage. The services being offered are the same, but the footprint in terms of space, cooling and power consumption is lower whilst offering greater levels of performance, which ultimately offers better value to the business.

In summary, a move to the public cloud does make sense for many businesses for certain services and will deliver flexibility and cost savings but it is rare that a business can move wholesale into the public cloud.

Cloud appropriate means understanding how systems operate and inter-operate, the user processes around them, the cost of transitioning to the cloud and a real appreciation of the benefits the cloud delivers. It also means understanding what needs to stay on premises and investing in the right new infrastructure that is cloud-aware and integrated.

Further reading.

CRN and IDC: Why early public cloud adopters are leaving the public cloud amidst security and cost concerns.