Do You Sleep Well at Night? You Shouldn't If Your Data Is in the Cloud

For many business owners – and even for those whose computer usage is strictly personal – the decision to adopt Software as a Service (SaaS) cloud-based products such as Salesforce, Google Apps, or Office 365 is generally made as a productivity-enhancing and cost-saving measure. However, many users – especially those in the SMB space – neglect to invest in proper backup and recovery tools. They assume that products built on the backs of tech giants like Google and Microsoft must be inherently secure and resilient. Unfortunately, they're wrong.

One of the factors most often overlooked when deploying a SaaS solution is the need for backup and recovery. Many users assume that the cloud vendor provides these services or that they are included in the base offering, but that isn't the case. SaaS providers back up their systems to ensure their own business resiliency and disaster recovery, not to enable end-user data recovery. In fact, the majority of SaaS providers do not offer an easy, integrated backup solution at all, which has opened the market for third-party vendors like Datto, EMC, and others to provide this service – along with extras like archiving and eDiscovery – for a monthly fee per user.

Infrastructure as a Service (IaaS) providers operate similarly to SaaS providers and back up their infrastructure only for their own disaster recovery and system resiliency purposes. Unlike SaaS providers, however, IaaS providers like Microsoft, Google, and Amazon offer tool sets that can back up not only data but entire virtual machines (VMs) as well. Depending on the level of service, these backups may be located in the same data center as the primary server, or they may be geographically dispersed for additional security and resiliency. That said, a surprising number of deployments fail to configure backup and recovery options properly and simply rely on the platform's overall stability. This is, of course, a disaster waiting to happen.
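
As a rough illustration of what "configuring backup properly" can look like on an IaaS platform, here is a minimal Python sketch using AWS's boto3 library to snapshot a volume and copy the snapshot to a second region for geographic dispersion. The volume ID and region names are placeholders; a real job would add tagging, retention pruning, and error handling.

```python
# Hypothetical sketch: snapshot an EBS volume, then copy the snapshot to a
# second region so a regional outage cannot take out both the primary data
# and its backup. Volume ID and regions below are placeholders.
import boto3

SOURCE_REGION = "us-east-1"            # placeholder primary region
DR_REGION = "us-west-2"                # placeholder disaster-recovery region
VOLUME_ID = "vol-0123456789abcdef0"    # placeholder volume ID

ec2 = boto3.client("ec2", region_name=SOURCE_REGION)

# Create the snapshot in the primary region and wait for it to complete.
snapshot = ec2.create_snapshot(VolumeId=VOLUME_ID, Description="Nightly backup")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# Copy the finished snapshot into the DR region.
ec2_dr = boto3.client("ec2", region_name=DR_REGION)
ec2_dr.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId=snapshot["SnapshotId"],
    Description="DR copy of nightly backup",
)
```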

At this point it should be evident that, although the cloud isn't as secure as the average consumer may be led to believe, there are certainly a number of ways to ensure that your data is not only safe but recoverable should a disaster occur.

Following these simple steps will help to ensure that your data is protected and let you get a great night’s sleep:

  • All cloud deployments (SaaS, IaaS, PaaS) should have a Disaster Recovery plan
  • Data should be independently backed up to a third-party vendor or an alternate location, such as a second data center
  • Backups should meet the firm's data retention policy (for example, 30 days, 6 months, or 1 year)
  • Testing is key
    • Test restores (a minimal verification sketch follows this list)
    • Test your organization’s Disaster Recovery plan
    • Audit backups frequently to ensure functionality
  • Document the process
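
To make "test restores" concrete, here is a minimal Python sketch that verifies a file restored from backup actually matches the original, byte for byte. The paths are placeholders; a real audit would walk whole datasets on a schedule and alert on any mismatch.

```python
# Minimal restore test: restore a file from backup to a scratch location,
# then confirm its checksum matches the live original. Paths are placeholders.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(original: Path, restored: Path) -> bool:
    """A restore only counts as tested if the bytes actually match."""
    return sha256(original) == sha256(restored)

if __name__ == "__main__":
    ok = verify_restore(Path("/data/ledger.db"),              # placeholder
                        Path("/tmp/restore-test/ledger.db"))  # placeholder
    print("Restore verified" if ok else "Restore FAILED - fix it before you need it")
```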

Most importantly, remember this rule of thumb: Data protection is the responsibility of the end-user, not the vendor.

SSL & Public Certs: How They Work

Public key encryption is a powerful tool largely because the public key can be shared openly. If someone needs to send an individual an encrypted document, they can simply obtain the recipient's public key and encrypt the document with it; only the recipient can then decrypt it, using his or her private key.

Public-key certificates were created to address an obvious weakness in bare public-key distribution: any user could pretend to be user A and send a forged public key to other participants, enabling the forger to read any document intended for user A until the breach was identified. A public-key certificate typically consists of a public key, a user ID, and a signature block signed by a trusted third party. These third parties, known as certificate authorities (CAs), are private companies trusted by the Internet community; they issue certificates on a user's behalf after verifying both that the user is who they claim to be and that they are requesting a certificate for a domain they actually own. The universally accepted format for public certificates is X.509, which is used across network security applications including IP security (IPsec), SSL, Secure Electronic Transactions (SET), and S/MIME (e-mail security).
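
The encrypt-with-public-key, decrypt-with-private-key flow described above can be sketched in a few lines with Python's third-party cryptography package. Note that in practice RSA encrypts only small payloads (typically a symmetric session key rather than a whole document), but the principle is the same.

```python
# Sketch of the public-key flow: anyone can encrypt with the public key,
# but only the private-key holder can decrypt the result.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The recipient generates a key pair and publishes the public half.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# A sender encrypts with the freely available public key...
ciphertext = public_key.encrypt(b"confidential document", oaep)

# ...and only the holder of the matching private key can read it.
assert private_key.decrypt(ciphertext, oaep) == b"confidential document"
```
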
A certificate contains many fields of information, such as the serial number (used to identify the certificate), the signature algorithm, the public key, the thumbprint algorithm (the algorithm used to hash the certificate), and the thumbprint itself (the hash, used to confirm that the certificate's contents have not been altered). Let's take a step back and look at how a public certificate becomes signed: an unsigned certificate contains a user ID and the user's public key; a hash code is generated over the unsigned certificate, and that hash is then encrypted with the CA's private key to create the certificate signature. At that point, any recipient of the certificate can verify it by checking the signature against the CA's public key; the root certificates of the major CAs ship with most browsers. X.509 is an important standard, recommended by the ITU-T, because certificates are used in many arenas, not only SSL transactions. X.509 recommends RSA as the default algorithm and assumes that some sort of hash function is used in the digital signature scheme.
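
To see those fields in practice, a certificate on disk can be inspected with the same cryptography package; this is a minimal sketch, and the file path is a placeholder.

```python
# Sketch: reading the certificate fields discussed above (serial number,
# signature algorithm, public key, thumbprint) from a PEM file on disk.
from cryptography import x509
from cryptography.hazmat.primitives import hashes

with open("server-cert.pem", "rb") as fh:   # placeholder path
    cert = x509.load_pem_x509_certificate(fh.read())

print("Subject:            ", cert.subject.rfc4514_string())
print("Issuer (the CA):    ", cert.issuer.rfc4514_string())
print("Serial number:      ", cert.serial_number)
print("Signature algorithm:", cert.signature_algorithm_oid)
print("Public key size:    ", cert.public_key().key_size, "bits")
# The thumbprint is a hash of the certificate, used to detect tampering.
print("SHA-256 thumbprint: ", cert.fingerprint(hashes.SHA256()).hex())
```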

SSL was designed to create a reliable, end-to-end secure service on top of TCP. SSL provides security for higher-level protocols such as HTTP, the protocol that carries traffic between the web server and the user. Two important SSL concepts are the connection and the session: a connection is a transport that provides a suitable type of service, and in SSL connections are peer-to-peer. Each connection is associated with a session, an association between the client and the server created by the Handshake Protocol. The SSL session determines which type of encryption will be used within a connection, and one of its parameters is the peer certificate – an X.509v3 certificate, to be exact.
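
A minimal sketch of that handshake using Python's standard ssl module is below; the hostname is a placeholder for any HTTPS server. Note that create_default_context() loads the trusted CA root certificates discussed above, so the server's certificate chain is verified automatically during the handshake.

```python
# Sketch: establish an SSL/TLS session over TCP and retrieve the peer's
# X.509v3 certificate, which was agreed during the Handshake Protocol.
import socket
import ssl

HOST = "www.example.com"   # placeholder hostname

context = ssl.create_default_context()  # loads trusted CA roots

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated protocol:", tls.version())
        print("Cipher suite:       ", tls.cipher())
        print("Peer certificate:   ", tls.getpeercert()["subject"])
```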

Businesses with web servers should be encouraged to use HTTP over SSL (HTTPS) because it safeguards both client and server data. Any externally accessible resource on the network should be protected by a public certificate.

Back to Fundamentals: How Poor Firewall Configurations Invite Disaster

One of the most alarming trends in today's Information Technology landscape is the "allow all out" rule that comes as the default setting on most new firewalls. Allowing all traffic out may be easy – and less work for the IT department – but the negative effect it can have on your overall security is profound.

When reviewing the largest data breaches of the last five years, it becomes apparent that the majority of them could have been prevented by deploying security best practices such as egress rules and source and destination routing restrictions. In fact, in most cases Internet-based intrusions can be preempted simply by reconfiguring your existing firewall. Although this won't provide absolute security, it will reduce your attack surface and thereby mitigate the potential for intrusions.

A recent article posted by Naked Security discussed cryptomining and the fact that Network Attached Storage (NAS) devices are being used for distributed computing power. These types of attacks can easily be avoided by deploying a simple egress rule, which prevents the device from reaching the Internet in the first place.

Within the SMB and enterprise spaces, the benefit of egress rules is clear: by restricting access to the Internet, your organization reduces its overall exposure and threat surface. A perfect example of how an egress rule comes into play is something nearly every company has – a mail server. Implementing an egress rule would make the mail server the only device on the network allowed to send mail out. By creating a firewall rule that allows only the mail server to send outbound mail, the threat of an infected machine sending out information via email (SMTP) is eliminated.
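
The logic of such a rule can be modeled in a few lines. The following Python sketch is a toy default-deny egress evaluator – not a real firewall – and the IP addresses are placeholders for illustration.

```python
# Toy model of a default-deny egress policy: only the mail server may send
# outbound SMTP, and nothing else may leave the network at all.
from ipaddress import ip_address

MAIL_SERVER = ip_address("10.0.0.25")   # placeholder internal mail server
SMTP_PORT = 25

def egress_allowed(src: str, dst_port: int) -> bool:
    """Default deny: outbound traffic passes only if a rule explicitly allows it."""
    return ip_address(src) == MAIL_SERVER and dst_port == SMTP_PORT

# The mail server can deliver mail...
assert egress_allowed("10.0.0.25", 25)
# ...but an infected workstation (or a cryptomining NAS) cannot phone home.
assert not egress_allowed("10.0.0.99", 25)
assert not egress_allowed("10.0.0.50", 443)
```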

Strengthening your organization's network starts with the edge devices (firewalls, routers, etc.) and works all the way in, to end-user education and limitations. In today's business landscape – where nearly every user is at least somewhat technology-literate and IT departments are constantly racing to keep up with patches, updates, and user requests – it has become commonplace for end-users to administer their own devices and handle the minutiae of day-to-day computing (installing programs or setting up printer access, for example) on their own. This, however, creates a huge security hole. A user should not be an administrator of any endpoint, as administrative rights make it easier for unauthorized applications and programs – including malware and ransomware – to be inadvertently installed.

To review, there are five considerations for ensuring that your organization's network is secure at the most basic level:

–    Implement egress rules whenever possible for services like DNS, SMTP, etc.

–    Implement source and destination IP restrictions (illustrated in the sketch after this list)

–    Deploy a next-generation firewall that includes UTM services for an extra layer of security

–    Utilize third-party vendor solutions for additional security

–    Remove administrative rights from end-users
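
To tie the first two items together, here is a sketch of a rule table combining egress filtering with source and destination IP restrictions. A real firewall evaluates rules in the kernel or on dedicated hardware; this toy version just makes the first-match, default-deny behavior explicit, and all networks and ports are placeholders.

```python
# Toy first-match, default-deny rule table with source and destination
# IP restrictions. Addresses and ports are placeholders for illustration.
from ipaddress import ip_address, ip_network

# (source network, destination network, destination port, action)
RULES = [
    (ip_network("10.0.0.25/32"), ip_network("0.0.0.0/0"),    25, "allow"),  # mail out
    (ip_network("10.0.0.0/24"),  ip_network("10.0.1.53/32"), 53, "allow"),  # internal DNS
]

def evaluate(src: str, dst: str, port: int) -> str:
    """First matching rule wins; anything unmatched is denied."""
    for src_net, dst_net, rule_port, action in RULES:
        if ip_address(src) in src_net and ip_address(dst) in dst_net and port == rule_port:
            return action
    return "deny"   # default deny

print(evaluate("10.0.0.25", "203.0.113.10", 25))  # allow (mail server SMTP)
print(evaluate("10.0.0.99", "203.0.113.10", 25))  # deny  (workstation SMTP)
```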