TruStack Lunch and Learn – Datto SaaS Protection

In 2020, businesses everywhere pivoted to remote working. As a result, we saw a rise in the adoption of cloud software and services for greater efficiency and collaboration. What many businesses may not realise is that just because data is created or stored in the cloud doesn’t mean it’s protected. Cloud migration is set to accelerate in 2021, which could put valuable data at risk of cyber attacks if no solutions are in place to keep it protected.

A big thank you to everyone who joined us on the webinar last week; you can find a copy of the recording below.

For more information on Datto SaaS Protection or backup, please follow this link to our Contact Us page!

Could You Recover From a Cyber Attack?

Data Protection – Not Just Backup

Over the last year, ransomware attacks have become more and more sophisticated in their approach. Tactics such as deleting backup files and then encrypting all other files in the system have become the norm.

This poses the question: is it enough to have a single backup and data protection vendor in your environment, or do you need a more comprehensive data protection and disaster recovery strategy?

Data Protection

A well-thought-out data protection strategy relies upon multiple layers to help protect data at the core of a business’s infrastructure. As a business, you can no longer rely solely on a local backup that is always online and readily available; doing so could lead to a complete loss of data.

There are, however, different methods that can better protect your data, as well as different ways of duplicating it. Each layer should have its own security and hardening in place to protect the data further.

As we know, your data is normally the ultimate target of any ransomware attack. Starting from the inside out, you can adjust some relatively minor aspects to help protect the data:

  • Appropriate permissions should be in place so that only users who need access to the data have it. This limits the attack surface should a ransomware attack take place.
  • Avoid making every user a global admin.
  • Follow least-privilege principles, with groups such as read-only, read-and-modify and full control.

One product that is on ‘the truck’ at TruStack and can help here is Netwrix, which assists with auditing and managing NTFS permissions.
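As a rough illustration of the kind of check such tooling automates, the short sketch below lists the NTFS ACL on a share using Windows’ built-in icacls command and flags one obvious red flag. The share path is a hypothetical placeholder, and this is a minimal sketch rather than a substitute for a dedicated product.

    # Minimal permissions-audit sketch: list the NTFS ACL on a folder using
    # the built-in Windows "icacls" command. The path is a hypothetical
    # placeholder.
    import subprocess

    def list_acl(path: str) -> str:
        """Return the raw icacls output for a path."""
        result = subprocess.run(["icacls", path], capture_output=True, text=True)
        return result.stdout

    acl = list_acl(r"D:\Shares\Finance")
    print(acl)

    # Flag any entry granting the Everyone group full control.
    for line in acl.splitlines():
        if "Everyone:" in line and "(F)" in line:
            print("WARNING: 'Everyone' has full control:", line.strip())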

Near-Line Storage/Backup

Near-line storage or backup is a target that is quick to recover from and is always online. This could range from a server to a NAS or a purpose-built platform that offers benefits such as hardware compression or deduplication. The typical use case for near-line backup is when someone deletes a file and needs to recover that data quickly.

Physically securing these devices is sensible, and, as with the data at the core, similar principles apply:

  • Access to the backup repository should always be restricted and explicitly configured
  • Do not use default admin accounts
  • Lock down firewalls
  • Don’t domain-join the repository – a quick audit sketch covering two of these points follows below
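The sketch below is a minimal, hypothetical audit of a Windows-based repository against the default-admin and domain-join points, using the standard net user and systeminfo commands; it is a starting point, not a hardening tool.

    # Hypothetical audit sketch for a Windows backup repository: check that
    # the default Administrator account is disabled and that the machine is
    # not domain-joined. Uses the standard "net user" and "systeminfo"
    # commands.
    import subprocess

    def run(cmd):
        """Run a command and return its stdout as text."""
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    # 1. The built-in Administrator account should be disabled (or renamed).
    for line in run(["net", "user", "Administrator"]).splitlines():
        if line.startswith("Account active") and "Yes" in line:
            print("WARNING: default Administrator account is enabled")

    # 2. The repository should not be domain-joined.
    for line in run(["systeminfo"]).splitlines():
        if line.strip().startswith("Domain:"):
            domain = line.split(":", 1)[1].strip()
            if domain.upper() != "WORKGROUP":
                print(f"WARNING: repository is joined to domain {domain}")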

Offsite Backup

Offsite backup targets could include cloud-based object storage, a backup target hosted in another building, or rotated hard drives.

This offsite backup acts as your insurance policy should anything happen to your data or the near-line backups mentioned previously.

Depending on where this data is stored, it can offer additional protection from ransomware and malicious attacks. If you find that someone has compromised your server and deleted the backups, what do you do?

You could use a third-party backup target. These targets can help protect your data even from a ransomware attack or internal threat. Many vendors offer this type of service, normally shortened to BaaS, or Backup as a Service. Vendors that we use include Veeam and Datto.
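With cloud object storage specifically, immutability features can stop even a compromised account from deleting backups. Below is a minimal sketch assuming an S3-compatible bucket that was created with Object Lock enabled; the bucket name, object key and 30-day retention period are hypothetical.

    # Hypothetical sketch: upload a backup with a compliance-mode retention
    # lock so it cannot be deleted or overwritten until the date passes.
    # Assumes boto3 credentials are configured and the bucket was created
    # with Object Lock enabled.
    from datetime import datetime, timedelta, timezone
    import boto3

    s3 = boto3.client("s3")
    with open("fileserver.bak", "rb") as body:
        s3.put_object(
            Bucket="example-backups",      # hypothetical bucket name
            Key="nightly/fileserver.bak",  # hypothetical object key
            Body=body,
            ObjectLockMode="COMPLIANCE",   # cannot be shortened or removed
            ObjectLockRetainUntilDate=datetime.now(timezone.utc)
            + timedelta(days=30),
        )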

Air Gap Backups

Air-gapped backups are those that are completely off the network and not online, so there is no way that anyone could log onto the device and delete the data it holds. Tape is the most common example of this and something that is still used frequently today.

However, with tape backups you still need to consider how these are going to be stored should the worst happen. At a minimum they should be stored in a fireproof safe, and preferably off-site.

Also, if you are considering tape for archiving purposes, remember that it does not last forever, and each LTO drive generation can typically read tapes from at most the two prior generations.

Snapshots

SAN snapshots are not backups; however, many SANs now offer the ability to take a snapshot of their volumes for a quick rollback. If the worst happens, rolling a SAN volume back to a known good state, as a last resort, could be exactly what is needed. The SAN volumes that many servers run from are typically not exposed to the production environment where an attacker could manipulate them and delete data.

Securing access to the SAN should also still follow the same precautions as mentioned previously.

Remember, a backup is only as good as the last time it was tested, so make sure that this is done as often as necessary.
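One simple way to test a backup is to restore a sample file and verify that it matches the original. A minimal sketch, with hypothetical placeholder paths:

    # Minimal restore-verification sketch: compare the SHA-256 checksums of
    # an original file and its restored copy. Paths are hypothetical.
    import hashlib

    def sha256_of(path: str) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest()

    original = sha256_of(r"D:\Data\contracts\msa.docx")         # live copy
    restored = sha256_of(r"R:\RestoreTest\contracts\msa.docx")  # restored copy

    print("Restore verified" if original == restored else "MISMATCH: investigate")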

For more information on Data Protection and the services that TruStack can provide, please head to our Contact us page.

Data Classification: What it is, why you should care and how to perform it.

Many organisations have limited resources to invest in safeguarding data. Knowing exactly what needs to be protected will help you develop a sound security plan and allocate your budget and other resources wisely.

The best place to start is by classifying your data. Classification provides a solid foundation for a data security strategy because it helps to identify the data at risk in the IT network, both on premises and in the cloud.

In this article, we will give the data classification definition and explore the steps involved in getting started.

What is data classification?

Data classification is the process of organising both structured and unstructured data into categories. It enables more efficient use and protection of critical data, including facilitating risk management, legal discovery, and compliance processes.

For years, it was up to users to classify data they created, sent, modified or otherwise touched. Today, organisations have options for automating classification of new data that users create or collect.

What is data discovery? 

Data discovery is the process of scanning repositories to locate data. It can serve many purposes, such as enterprise content search, data governance, data analysis and visualisation. When combined with data classification, it helps organisations identify repositories that might contain sensitive information so they can make informed decisions about how to properly protect that data.

Data security

To safeguard sensitive corporate and customer data adequately, you must know and understand your data. You need to be able to answer the following questions:

  • What sensitive data, such as intellectual property (IP), protected health information (PHI), personally identifiable information (PII), and credit card numbers, do you store?
  • Where does this sensitive data reside?
  • Who can access, modify and delete it?
  • How will your business be affected if this data is leaked, destroyed or improperly altered?

Having answers to these questions, along with information about the threat landscape, enables organisations to protect sensitive data by assessing risk levels, planning and implementing appropriate data protection and threat detection measures.

Regulatory compliance

Compliance standards require organisations to protect specific data such as cardholder information (PCI DSS), health records (HIPAA), financial data (SOX) or personal data (GDPR). Data discovery and classification helps to determine where these types of data are located so you can make sure that appropriate security controls are in place and that the data is trackable and searchable as required by regulations.

Guidelines for data classification

There is no one-size-fits-all approach to data classification. However, the classification process can be broken down into five key steps, which you can tailor to meet your organisation’s needs as you develop your general data protection strategy.

Step #1. Establish a data classification policy

First, you should define a data classification policy and communicate it to all employees who work with sensitive data. The policy should be short and simple and include the following basic elements:

  • Objectives – The reasons data classification has been put into place and the goals the company expects to achieve from it.
  • Workflows – How the data classification process will be organised and how it will impact employees who use different categories of sensitive data.
  • Data classification scheme – The categories that the data will be classified into (a minimal code example follows this list).
  • Data owners – The roles and responsibilities of the business units, including how they should classify sensitive data and grant access to it.
  • Handling instructions – Security standards that specify appropriate handling practices for each category of data, such as how it must be stored, what access rights should be assigned, how it can be shared, when it must be encrypted, and retention terms and processes.
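To make the scheme and handling-instruction elements concrete, here is a minimal sketch of how a simple scheme could be expressed in code; the four category names and their rules are illustrative assumptions, not a standard.

    # Hypothetical four-level classification scheme with per-category
    # handling rules. Names and rules are illustrative, not a standard.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class HandlingRules:
        encrypt_at_rest: bool
        may_share_externally: bool
        retention_years: int

    SCHEME = {
        "Public":       HandlingRules(False, True, 1),
        "Internal":     HandlingRules(False, False, 3),
        "Confidential": HandlingRules(True, False, 7),
        "Restricted":   HandlingRules(True, False, 10),
    }

    rules = SCHEME["Confidential"]
    print(f"Encrypt at rest: {rules.encrypt_at_rest}; "
          f"retain for {rules.retention_years} years")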

Step #2. Discover the sensitive data you already store

Now it’s time to apply your classification policies to your existing data. You could choose to classify only new data, but then business-critical or confidential data you already have might be left insufficiently protected.

Rather than trying to manually identify databases, file shares and other systems that might contain sensitive information, consider investing in a data discovery application that will automate the process. Some technology tools report both the volume and potential category of the data.
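As a rough illustration of what such tools automate, the sketch below walks a folder tree and flags files whose contents match two naive patterns; the share path and regular expressions are simplified assumptions and no substitute for a real discovery product.

    # Simplified discovery sketch: walk a folder and flag files whose text
    # matches naive patterns for a UK National Insurance number or a
    # 16-digit card number. Path and patterns are illustrative only.
    import os
    import re

    PATTERNS = {
        "NI number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
        "card number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    }

    def discover(root: str) -> None:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "r", errors="ignore") as f:
                        text = f.read()
                except OSError:
                    continue  # unreadable file: skip it
                hits = [label for label, rx in PATTERNS.items()
                        if rx.search(text)]
                if hits:
                    print(f"{path}: possible {', '.join(hits)}")

    discover(r"\\fileserver\shares")  # hypothetical file share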

Step #3. Apply labels

Each sensitive data asset needs a label in accordance with your data classification model; this will help you enforce your data classification policy.

Step #4. Use the results to improve security and compliance

Once you know what sensitive data you have and its storage locations, you can review your security and privacy policies and procedures to assess whether all data is protected by risk-appropriate measures.

Step #5. Repeat

Files are created, copied, moved and deleted every day. Therefore, data classification must be an ongoing process. Proper administration of the data classification process will help ensure that all sensitive data is protected.

Conclusion

Data classification is not a magic wand that ensures data security or compliance with regulatory requirements by itself. Rather, it helps organisations identify the data most critical to the business so they can focus their limited time and financial resources on ensuring appropriate data protection.

For more information contact us here.

Data Owner vs Data Processor – Why You Need to Protect Your Own Data

There’s a common misconception among Software as a Service (SaaS) users that backup isn’t necessary for their data because it exists in the cloud – the provider will back up and secure your data, right? Unfortunately, this is untrue. SaaS applications such as Microsoft 365 are just as vulnerable to data loss as on-premises apps.

Why? Because the number one cause of data loss is human error: staff members accidentally deleting files, opening phishing emails, downloading malware and more.
Some scenarios where customers could lose data include:

  • Malicious deletion by a disgruntled employee or outside entity
  • Malware damage or ransomware attacks
  • Operational errors such as accidental data overwrites
  • Lost data due to cancelled app licenses

SaaS platforms like Office 365 offer convenient access to email services, data storage and collaboration tools. These features were traditionally delivered from on-premises infrastructure with services like Exchange Server and SharePoint Server, where the data processor and the data owner tended to be the same organisation.

Now let’s think about what this means in a SaaS environment. The data processing task has moved to a cloud service, so you no longer need to worry about it; however, you are still the data owner. This means that you are still responsible for how the data is protected.

In this example, Microsoft’s responsibility as a data processor is bound by the Service Level Agreement it operates to, which guarantees that the service it offers will be available. As of Q1 2020, O365 had 99.98% uptime – to put that into perspective, an average of around 17 seconds of downtime per day. Microsoft operates a resilient infrastructure that meets stringent security qualifications such as Cyber Essentials PLUS, and it achieves hardware-level resilience by operating its services from multiple data centres in dedicated regions around the world.
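As a quick sanity check, the daily downtime implied by an uptime percentage is simple arithmetic:

    # Quick arithmetic check: daily downtime implied by 99.98% uptime.
    uptime = 0.9998
    seconds_per_day = 24 * 60 * 60  # 86,400
    downtime = (1 - uptime) * seconds_per_day
    print(f"{downtime:.1f} seconds of downtime per day")  # about 17.3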

All of this is great for providing a service, but it doesn’t protect the data within those services, which you as the data owner are responsible for. Let’s assume that when your email data lived on-premises you had a business requirement to retain seven years’ worth of it; that requirement doesn’t suddenly go away when you move the data to the cloud. Equally, if emails were deleted or hit by some kind of ransomware attack, you would rely on a backup to recover the data. The same still applies when the data lives in a SaaS service like O365.

This is where products like Datto SaaS Protection come into play. For more information on how we can help, or for a free demo, send us an email at [email protected] or call us on 0191 2503000.