SLAs – Who’s being served?

Life insurance, car insurance, homeowners insurance, business insurance, health insurance… Think there’s a pattern here? Let’s cut to the chase: a Service Level Agreement (SLA) is basically insurance for your cloud data service. One would assume that your business data with a cloud service provider is at least as important as your home or car. Wouldn’t you feel better with some guarantees and parameters in place that protect you? Of course, SLAs not only hold the service provider accountable, they also protect the provider. When both you and the provider are protected, there’s no guesswork, no finger-pointing and no misunderstanding. So, if a problem does arise, it can be solved quickly. And yes, dealing with insurance companies can be a pain, but dealing with a service provider who is offering you the best deal they can is a completely different experience.

It’s true, not everyone carries insurance, and there are varying levels of coverage. You have to weigh the real need against the level and type of your business. It’s not an all-or-nothing proposition. Every time you buy a big-ticket item, from a couch to a washer to a mobile phone, you’re asked, ‘Do you want a warranty with that?’ Depending on what you’re buying and how it will be used, you may mull over whether to get the warranty or take your chances. Warranties and insurance policies essentially give you peace of mind in case something should happen.

Do you really need that service agreement for your IT? Let’s address the realities of what SLAs mean to the provider and the customer, and how they address the most important component – uptime.

  • SLAs are a two-way street: Agreements generally protect both the customer and the service provider. After all, both parties are in business to make money and to provide the best service and/or product to stay ahead in their industries. So don’t expect a one-sided agreement that puts either the customer or the provider at zero risk.
  • SLAs have caveats: Some service interruptions are the provider’s fault, and the customer has to deal with the downtime, the potential loss of revenue and the anxiety we all feel when servers go down. In those cases the provider must assume responsibility and liability. Then there are interruptions ‘outside of the provider’s reasonable control.’ Make sure you read the fine print: that phrase can cover natural disasters, circumstances over which neither the customer nor the provider has any control. Finally, there’s everything in between. Providers know what they can and can’t handle and will have stipulations and disclaimers, just as customers set parameters and disclaimers on offers they make to the general public. Both sides have restrictions, and the agreement should fit both parties as well as it can.
  • SLA guarantees: The reality is, there are no absolute guarantees. Even when a product touts a one hundred percent guarantee, there are still stipulations and disclaimers in the fine print. You may ask, so why do businesses bother making that claim? Because good businesses will always aim for one hundred percent delivery and satisfaction.

Uptime in the cloud cannot carry a one hundred percent guarantee without some sort of fine print. Measure great service by the people: how they work with you, how they solve problems and how they continue to offer you their best. Remember, an SLA is an agreement between you and the provider, and both parties need to take care of themselves so they can take care of one another.
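As a rough illustration of why that fine print matters, here is a minimal Python sketch that converts a few common uptime percentages into the downtime they actually permit. The tiers and the assumed 30-day month are illustrative only, not Datotel’s published figures.

```python
# Illustrative only: translate an SLA uptime percentage into the downtime
# it actually allows over an assumed 30-day billing month.

SLA_TIERS = [99.0, 99.9, 99.95, 99.99, 99.999]  # common uptime guarantees
MINUTES_PER_MONTH = 30 * 24 * 60                # simplifying assumption

for uptime in SLA_TIERS:
    allowed_downtime = MINUTES_PER_MONTH * (1 - uptime / 100)
    print(f"{uptime:>7}% uptime -> up to {allowed_downtime:7.1f} minutes "
          f"of downtime per month without breaching the SLA")
```

Even a “three nines” guarantee still leaves room for roughly 43 minutes of downtime a month, which is exactly the kind of detail the agreement should spell out.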


Reducing Risk in the Enterprise

At Datotel we routinely conduct risk assessments on our own operations, and we also assist our clients in completing risk assessments on their environments.

Our objective is generally to identify the risks in the environment, whether they relate to confidentiality, security, privacy, reliability or availability. From those identified risks, plans are then developed to address and mitigate them.
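One simple way to turn identified risks into a prioritized plan is a likelihood-times-impact score. The sketch below is a hypothetical example of such a register; the risks, categories and scores are invented for illustration and are not a prescribed methodology.

```python
# Hypothetical example: a tiny risk register scored as likelihood x impact,
# sorted so the highest-priority risks surface first.

risks = [
    # (description, category, likelihood 1-5, impact 1-5)
    ("Unpatched internet-facing server",  "security",        4, 5),
    ("Single power feed to server room",  "availability",    2, 4),
    ("Unencrypted backup tapes offsite",  "confidentiality", 3, 5),
    ("No tested restore procedure",       "reliability",     3, 4),
]

register = sorted(
    ({"risk": d, "category": c, "score": l * i} for d, c, l, i in risks),
    key=lambda r: r["score"],
    reverse=True,
)

for entry in register:
    print(f"{entry['score']:>2}  [{entry['category']}] {entry['risk']}")
```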

Whether you’re a financial institution, a healthcare agency or a firm with no mandated regulations, you should ensure that at least the basic risks are identified and covered. Continue reading


DCIM – Better Energy Management

In our data centers, we are under constant pressure to be more efficient, from shortening new client onboarding time to reducing power and cooling costs. Our Service Level Agreements (SLAs) are our number one priority, and energy is obviously a very large expense for a data center. In fact, according to the Uptime Institute’s Annual Report: Data Center Density, the average kW consumption per data center rack rose from 2.1 kW in 2011 to 2.7 kW in 2012. The challenge becomes how we maintain our uptime and availability SLAs to our clients while being as efficient as possible with energy, all while adequately managing our capacities and maximizing our resource utilization. Additionally, the level of complexity involved in building and managing the supporting power and cooling environments is ever increasing.

Data Center Infrastructure Management (DCIM) platforms enable organizations to take a more proactive and comprehensive approach to meeting power and cooling goals by managing the attributes and increasing complexities of the data center. This centralized information system provides a historic, real-time and forecasted view of the facilities and how they are behaving, allowing optimization of assets and resources. Datotel, in partnership with CA Technologies (a provider of management solutions that help customers manage and secure complex IT environments), is now employing DCIM software to transform how we manage our data centers.

DCIM extends the more traditional systems and network management approaches to include the physical and asset-level components. The true benefit of DCIM is the data: having visibility into information we didn’t have before gives us the ability to make more informed decisions in a shorter time frame than we could previously. This visibility helps us to be more efficient, both from a time and an energy standpoint. In addition, DCIM provides more in-depth alerting capabilities, which in turn allows us to provide a higher level of service. This access to information has helped us become more proactive in capacity planning, both for ourselves and in projecting future growth and needs for our clients. For example, knowing exactly what capacity we have now and what is forecasted for the next six months allows us to plan resources and capital allocation. This helps to decrease the time needed to deploy a new environment for a client – we can project what we need, when and where.
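To give a feel for that kind of forecasting, here is a rough Python sketch that fits a straight line to recent per-rack power readings and projects six months ahead. The monthly values are invented, and a straight-line fit is only one of many possible forecasting approaches, not the method our DCIM platform uses.

```python
# Rough sketch: fit a straight line to recent average kW-per-rack readings
# (values below are invented) and project six months ahead.

from statistics import mean

readings = [2.1, 2.2, 2.2, 2.3, 2.4, 2.5]   # last six months, avg kW per rack
months = list(range(len(readings)))

# Least-squares slope and intercept for y = a + b*x
x_bar, y_bar = mean(months), mean(readings)
b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(months, readings))
     / sum((x - x_bar) ** 2 for x in months))
a = y_bar - b * x_bar

for ahead in range(1, 7):
    month = months[-1] + ahead
    print(f"Month +{ahead}: projected {a + b * month:.2f} kW per rack")
```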

Another large benefit of DCIM is what we call a “single pane of glass” view of our data center facilities: one platform, rather than multiple sources, for very granular information. This saves us time and resources and allows us to easily step back and look at the big picture. Within our DCIM platform, the operations team has a set of customized views into the data that represents their view of the world and the information that is important to them, allowing them to make key decisions quickly. That view is very different from the views at the executive level or the views our clients see, for example. With more accurate information presented in a timely fashion, we can make better decisions, plan better and, ultimately, increase availability.

In our cloud computing model, our clients pay only for what they use; the same philosophy extends to the energy consumption within our data center. By utilizing the DCIM platform, Datotel is in a unique position to enable our clients to pay for the actual amount of power consumed rather than the industry standard of paying a flat circuit fee regardless of how much power is used. In our experience, this common industry practice of flat-rate energy billing is not favorable to the client. Knowing and paying for actual energy usage is not only a significant cost savings; that knowledge also allows companies to make smart business decisions and can support sustainability efforts as well.
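The difference is easy to see with a small comparison. The sketch below uses made-up numbers (the circuit fee and per-kWh rate are assumptions, not Datotel pricing) to contrast a flat circuit fee with billing only for metered consumption.

```python
# Illustrative comparison (all figures are made up, not Datotel pricing):
# a flat monthly circuit fee vs. billing only for metered consumption.

FLAT_CIRCUIT_FEE = 450.00      # assumed USD per month for a dedicated circuit
RATE_PER_KWH = 0.11            # assumed blended energy + overhead rate

def metered_bill(avg_kw_draw: float, hours: float = 730.0) -> float:
    """Cost of the power actually consumed over an average month."""
    return avg_kw_draw * hours * RATE_PER_KWH

for draw in (0.5, 1.0, 1.5, 2.0):
    bill = metered_bill(draw)
    delta = FLAT_CIRCUIT_FEE - bill
    print(f"{draw:.1f} kW average draw: metered ${bill:7.2f} "
          f"vs flat ${FLAT_CIRCUIT_FEE:.2f} (difference ${delta:+.2f})")
```

A client drawing well below the circuit’s capacity pays for power it never used under a flat fee; metered billing removes that gap.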

Another priority for us is cooling. The data center environment is dynamic, and making sure the temperature is regulated in the right place at the right time, and keeping on top of those changes, can be challenging. With DCIM, a centralized repository of power and cooling data allows us to establish thresholds and automatic triggers for the facility, devices and server racks, so anomalies in power usage can be quickly and easily identified. For example, we are notified immediately of changes in the environment, such as humidity and temperature, which can indicate a wider problem.
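Conceptually, that alerting boils down to comparing each reading against an acceptable range. The following is a simplified sketch; the thresholds, rack name and readings are assumptions for illustration, and a real DCIM platform would evaluate live sensor feeds and route alerts to the operations team.

```python
# Simplified sketch of threshold-based alerting on environmental readings.
# Thresholds and readings below are illustrative assumptions only.

THRESHOLDS = {
    "temperature_c": (18.0, 27.0),   # assumed acceptable inlet temperature range
    "humidity_pct":  (20.0, 80.0),   # assumed acceptable relative humidity range
}

def check_reading(rack: str, metric: str, value: float) -> None:
    low, high = THRESHOLDS[metric]
    if not low <= value <= high:
        # In practice this would page the operations team or open a ticket.
        print(f"ALERT {rack}: {metric}={value} outside [{low}, {high}]")

# Example readings: one in range, one anomalous
check_reading("rack-42", "temperature_c", 24.5)
check_reading("rack-42", "humidity_pct", 12.0)
```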

We are now looking to extend DCIM beyond our four walls to our clients’ locations. With visibility into a client’s entire energy consumption, we can help them make critical decisions about how they use energy across their whole enterprise, not just the equipment within our data center. This optimization of energy use can significantly impact the bottom line. But more on that next time…

@ddbrown

Reference: Uptime Institute Annual Report: Data Center Density, Preliminary Results, 2012


10 Components of a Successful Backup and Recovery Strategy

Backing up your organization’s data sounds like a fairly simple thing to do. However, it’s often not until something can’t be retrieved that you discover the backup strategy was poorly aligned with the organization’s goals, or was poorly implemented. With organizations increasingly relying on their data and IT systems to operate and to service their clients, this is a level of risk that should be reviewed and minimized. The ten components below should be kept in mind and approached proactively to ensure that you can restore that file you need, when you need it.

  1. Classify your data – When considering a backup strategy, it’s useful to remember that not all data has the same value to you or your company. Losing the company picnic photos or an employee’s music collection versus losing the database powering your main Oracle ERP system are completely different things, and they would have correspondingly different impacts on your business and clients. To have the most efficient backup, classify the data into different groups and treat each group differently from a backup standpoint. One classification may even mean that the data is not backed up at all.
  2. Understand your data – Once your data has been classified, it is time to establish a recovery point objective (RPO) and recovery time objective (RTO) for each data class; a minimal sketch of this step follows the list. These objectives determine the frequency with which backups are conducted, along with the extent and method of backup required. The answers to other questions – what level of security needs to protect the data, how often a restore is likely to be conducted and how long the data needs to be retained – will also shape the solution to be put in place.
  3. Don’t forget about mobile devices – With more and more enterprise users relying on mobile devices as their main device for conducting business, protecting the data held on those devices – from contacts and emails through to spreadsheets, documents, photos and personalized device settings – becomes more and more critical to business operations. No longer does the loss or failure of a device simply mean that a couple of phone numbers have to be re-entered into a new phone; critical data can be lost!
  4. Choose the backup strategy and method – Different data sets may require different solutions in order to optimally meet the goals of the business. Tiered recovery, known as Backup Lifecycle Management (BLM), is the most cost-effective approach to storing data today. In most companies, more than 50% of data is older, of less value, and should cost less to protect. By setting up the correct strategy, you can align the age of your data with the cost of protecting it.
  5. Assign a responsible party for ensuring successful backups – Just because a backup strategy has been implemented doesn’t mean it’s going to run successfully every time. Contention, lack of storage media and timing issues can occur and need to be dealt with in a timely fashion to make sure the data is protected. To ensure this happens, responsibility for monitoring the backups should be clearly assigned.
  6. Secure your data – Backup data, whether it’s held on tape, on disk or offsite in the cloud, should be protected both physically and logically from those who do not need access to it. Even those actively managing the backup process often do not need to work with the data in its raw form; they can manage the process with the data encrypted, adding another level of security and privacy. Although the best practice is to encrypt the data both in flight and at rest on the backup media, this is often skipped because of the extra time required to encrypt the data during the backup process, which increases the backup window required. Ideally, look for 256-bit AES encryption.
  7. Conduct test restores – It’s always preferable to find an issue proactively rather than reactively, when data may be unrecoverable or time constraints are pressing. So conduct periodic test restores of data to ensure that the process is working as planned.
  8. Keep track of your backups and document – Documentation is key to controlling the process and the security around your data. Ensure that the backup process, methods, goals and ongoing operational status are documented, both for internal purposes and to comply with any third-party audit requirements such as FDIC, HIPAA or PCI.
  9. Destroy backup media appropriately – Whether you are handling HIPAA, credit card or “regular” corporate data, it is a best practice to keep track of the backup media through its useful life all the way to physical destruction. As common practice, and to minimize any data leakage, this life cycle should be documented both as a process and as a traceable record of activity.
  10. Things change, so review your strategy and implementation on a regular basis – If there is one thing in business and technology that holds true, it is that change is constant. With this in mind, regularly review what is being backed up (or not) and verify that the business requirements around the data are still being met.
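As promised under item 2, here is a minimal sketch of how data classes and their RPO/RTO targets might be captured and checked. The class names, hours and retention periods are invented examples, and a nightly schedule is just one candidate being tested against each RPO.

```python
# Hypothetical sketch of steps 1 and 2: classify data sets, attach an RPO/RTO
# to each class, and check whether a proposed schedule satisfies the RPO.
# Class names, hours and retention periods are invented examples.

from dataclasses import dataclass

@dataclass
class BackupClass:
    name: str
    rpo_hours: float       # maximum tolerable data loss
    rto_hours: float       # maximum tolerable time to restore
    retention_days: int    # how long copies are kept

CLASSES = [
    BackupClass("erp-database",   rpo_hours=1,   rto_hours=4,  retention_days=2555),
    BackupClass("file-shares",    rpo_hours=24,  rto_hours=24, retention_days=365),
    BackupClass("archived-media", rpo_hours=168, rto_hours=72, retention_days=90),
]

def meets_rpo(c: BackupClass, schedule_interval_hours: float) -> bool:
    """A schedule satisfies the RPO if backups run at least once per RPO window."""
    return schedule_interval_hours <= c.rpo_hours

for c in CLASSES:
    nightly_ok = meets_rpo(c, 24)   # test a simple nightly schedule
    print(f"{c.name}: RPO {c.rpo_hours:g}h, RTO {c.rto_hours:g}h, "
          f"retain {c.retention_days}d, nightly backup sufficient: {nightly_ok}")
```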

In a world where the volume of data to be backed up is increasing substantially year over year and the complexity of IT systems is not decreasing, proactive planning, management and execution are becoming ever more important. I hope these ten components of a successful backup and recovery implementation help guide you toward tackling that challenge head on.

@ddbrown

  • Why Patch Management is so important:

    You’ve seen the news and read the articles. The recent ransomware attack is being called the largest ransomware attack in internet history. But did you know that the damage could have been avoided if those computers had been properly patched? … Continue reading
