Matthew Sekol

"The basic tool for the manipulation of reality is the manipulation of words."

Category: Cloud

Bimodal IT: Creating Rifts or Opportunities?

As a result of consumer-driven conveniences via applications ‘that just work’, employees expect more flexibility, capability and speed of delivery from their IT group than ever before. Enterprise IT can address these requirements with the cloud and the efficiencies it brings, provided the business standardizes and streamlines its processes. Never before has IT had a bigger chance to make an impact. With this new opportunity, though, comes a lot of change.

IT needs to align with the business in a way it hasn’t before and develop processes designed to enable agility and speed. This is evident from a recent CIO article called “What Gartner’s Bimodal IT Means to Enterprise CIOs,” which explores Gartner’s vision of the future of IT. Supplement that with Kathleen Wilson’s “Enabling Azure Operations” session at Microsoft Ignite, and the story really starts to take shape.

What is Bimodal IT?
Bimodal IT is a way to address the increased speed of solution delivery in an Enterprise brought on by the cloud. It posits that IT needs to be broken into two groups to facilitate this, with the side effect of eliminating a third, unofficial one:

1. A traditional IT that deals with rack and stack installations and on-premises troubleshooting.
2. A new modern IT in which everyone is a generalist who can quickly organize to deliver solutions that drive the business.
3. This has the benefit of eliminating (or lowering) shadow IT because now IT is equipped with the tools to move as quickly as someone with a credit card.

Traditional IT knows what the business needs from an IT strategy perspective (example: We need more storage for our designers), but modern IT understands the business strategy (example: If we had a more robust design environment, we could deliver designs 10 times faster to our customers, resulting in a quicker R&D return).

Is This the Death of Traditional IT?
Bimodal IT builds on practices that some Enterprises have already been following for years since the advent of virtualization and, subsequently, automation. The difference is that now, with the cloud, everything is sped up and even more immediate. Lydia Leong at Gartner wrote a great blog post arguing that the similarities between virtualization and the public cloud only go so far, even with the software-defined data center. Despite this assertion, Enterprise IT should already be on the way toward a modern IT.

This phrase, though, is particularly troubling for traditional IT staff and shows that a mindset change is needed if staff are to survive.

The IT-centric individual who is a cautious guardian and enjoys meticulously following well-defined processes is unlikely going to turn into a business-centric individual who is a risk-taking innovator and enjoys improvising in an uncertain environment.

I’m not sure I agree completely, as this does sound like a death knell for traditional IT. The software-defined data center and DevOps play a critical role in both modes, and skills and processes can likely transfer. What I take from this quote is that communication between IT and the business units needs to be more immediate, but I don’t agree that speed of delivery should be allowed to get in the way of due diligence and long-term planning. Doubt me? Check your data center for those legacy yet critical applications still hosted on Windows 2003. We can’t let that type of development work continue without some amount of planning; certainly there must be a balance.

I’m hopeful that CIOs always understand the business needs, but historically, those needs haven’t been communicated to the IT managers and engineers. In an Enterprise where IT is split into traditional IT and modern IT, the issue can be exacerbated as both factions are fighting for power. Gartner seems to be suggesting that this model should be immediately implemented even if the CIO isn’t ready! Yikes! (check the Gartner agenda here).

Is that really what they are saying, though? With IT budgets still small, there is little organizational movement right now, and some IT departments are stuck with a structure put in place a decade ago. Gartner’s suggestion would be less than optimal because of the natural rift it causes, unless the idea is to build on the efficiencies that Enterprise IT groups might already have in place – infrastructure and business applications teams that work well together – taking the best of both worlds into the bimodal model. Traditional IT can offer standards and best practices, while modern IT has the agility to deliver quickly.

For traditional IT, this doesn’t represent a death knell, but an opportunity to move into a more agile way of working.

The CIO Needs to Change Too!
Effective Enterprises should be on their way to solving this problem with tight communication between traditional IT, modern IT and the business. The CIO (or some leader) needs to facilitate this relationship.

With the speed at which the cloud moves, the CIO can no longer afford to sit back and funnel business information to only one group, i.e. the business applications teams. They need to facilitate a deep understanding between IT and the business to enable agile movement, not keep traditional IT projects at arm’s length. Businesses that remain with fragmented communication will likely find competitors with an efficient edge squeezing them out over time.

There are hints that this is what Gartner is saying, but it also seems like they are encouraging two nearly separate ITs just to deal with innovation. I don’t believe that the cloud is so transformative that existing processes and standards knowledge can’t be built upon to deal with this new agility. Certainly, communication can help bridge the gap.

A Surprising Way to Get There
If you are in an Enterprise and can spare staff to move into modern IT, you will likely want to pull from those with the broadest skillsets so that they can understand the complexities of a cloud-based solution. Generalists who can understand an entire application stack are better suited than someone who troubleshoots just one component.

If you don’t have the staff, or if your IT group remains highly fragmented without effective communication with the business and you don’t know how to address it, an IT Partner can often help bridge the gap. Think about it: a good IT Partner has experience talking to different levels of the organization and getting to the real requirements and results. Their job depends on this skill!

CIOs should not fear bringing an IT Partner into business conversations. Partners have the added advantage of seeing industry trends specific to your vertical and can perhaps facilitate external references for large initiatives. Traditional IT should also embrace a partner, as partners offer two effective ways to get your IT group where it needs to be:

1. A Partner can focus on traditional IT, allowing existing IT staff to start developing processes and skillsets around modern IT practices.
2. A Partner can be the modern IT practice, interacting with the business while existing staff deal with traditional IT issues.

Not all businesses are Enterprise class, and a Partner can help smaller businesses understand and make this transition as well. Not everything is about the big players; there are cloud efficiencies for everyone!

Regardless of how you get there though, shadow IT can still come into play if you’re not careful, proving that communication is the key to this transition. Peter Sondergaard, VP and Global Head of Research for Gartner, wrote a great blog article about bimodal IT and mentions that companies ignoring this trend risk shadow IT, but I think he misses how shadow IT might crop up when applying the Gartner model.

If traditional IT is kept out of the business conversation, shadow IT moves from the end users into modern IT, and the solutions implemented will ultimately become unsupportable and fragmented themselves (harken back to the Windows 2003 example). Just because the cloud makes it easy doesn’t mean you move at breakneck speed toward it. This balance is the value traditional IT can bring. The CIO, on the other hand, must ensure communication is tight throughout the business and keep the bleeding edge reined in just enough to be secure while staying agile.

The cloud journey is complex, and getting started right is key if IT is going to shift from cost center to enabler of business agility. From the engineer up to the CIO, each level now has new roles that can be exciting, but you have to embrace the change!

Why Cloud?

Before we start, here is a quick refresher on the 3 types of cloud solutions, just in case you need it!

Someone asked me the other day why any company would move its IT infrastructure to the cloud. They could see the benefit of SaaS platforms like Office 365 and ServiceNow, since those solutions remove the OS and software management layer nearly completely and are extremely easy to set up and use. They couldn’t wrap their head around moving other workloads to the cloud, especially IaaS. For example, why would you move some proprietary internal software or Windows file shares to a cloud solution when you’ve built up your datacenter kingdom? Over the years, companies have invested a lot of money in on-premises infrastructure, so why change? Let’s dive right in!

Heavy Capital Investments
Depending on the size of your company and the workloads that you run, your IT infrastructure could be massive. Networking, server and storage companies have done a fantastic job of optimizing workloads on-premises, but at a cost.

Let’s look at a robust virtualization environment.

Your company has experienced sustained growth for several years and has had an initiative to get over 80% virtualized. Several years later, you’ve reduced the number of physical servers, thanks to blades, tiered storage, and advanced networking across datacenters that lets you do some amazing things with DR. The environment is huge, but runs well with the latest automation tools.

Phew. That’s a lot of capital invested. Accounting is less than thrilled because they are still depreciating all those assets, but you have saved money by consolidating physical servers. Your CIO has no idea that your current utilization is only 70% because you over-purchased, but at least you have capacity. Plus, there are 4 environments for every critical production workload – dev, QA, support, sandbox. Most of these servers sit online and idle 80% of the time. If they were offline, your utilization numbers would be closer to 40%.

Every year, you look at these numbers, talk to business application owners, and try to predict new capacity, but unexpected projects always come up. You always plan for 20% more, just in case, to handle those phantom workloads.

A Better Way
Here’s where the cloud comes in. Instead of all that infrastructure, those capacity issues, and that depreciation, you could use a cloud solution for some of those workloads. While capital expenses are difficult to predict and even more difficult to assign to business units across a shared infrastructure, operating expenses are extremely easy. With a cloud service, you have the flexibility to pay only for what you use and to scale up or down depending on your needs.

Here’s a great example.

Let’s assume you work for a national company and your resources are accessed only 12 hours a day, 5 days a week. On-premises infrastructure is likely online 24 hours a day in a data center with constant power and cooling. That is 720 hours of runtime per month (not including all that other stuff). If you really only need 12 hours of capacity per workday, that works out to around 264 hours. That’s over 60% fewer compute hours, plus potentially hundreds of thousands of dollars in savings from hardware costs and depreciation. Lastly, all those non-production servers can also be moved up to the cloud and only turned on when they are needed, saving you even more money!
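The arithmetic above is easy to sanity-check in a few lines (the 22-workday month is an assumption, and real savings depend on provider pricing and tiers):

```python
# Always-on (on-premises style) vs. business-hours (cloud on-demand) compute hours.
always_on_hours = 24 * 30   # 720 hours/month, running around the clock
business_hours = 12 * 22    # ~264 hours/month, 12 hours x ~22 workdays

savings = 1 - business_hours / always_on_hours
print(f"{always_on_hours} vs {business_hours} hours -> {savings:.0%} fewer compute hours")
```

Running this prints `720 vs 264 hours -> 63% fewer compute hours`, before even counting hardware and depreciation savings.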

The cloud gives you this easy scalability.

When is a Good Time to Start Your Journey?
There are two ways to look at this. If you are considering a massive all-in cloud solution, which to me sounds awfully scary, a good time to look is during your network, storage and server refresh cycles. I would imagine this is very hard to plan around, though. Is your company regimented enough to swap its entire datacenter out every 3-5 years and gain immediate savings? Probably not.

There are other ways to start on your cloud journey. SaaS-based applications are probably the easiest and most common entry point, sometimes without IT even realizing they are in play. For example, do you use ADP for your paychecks? That’s SaaS payroll right there!

When we are talking PaaS or IaaS, though, all those non-production workloads are a great place to start. Think about what you do when provisioning a development VM for someone. Typically, you give them a lower-spec machine that just sits there, powered on, all the time. You can easily move these workloads into a cloud service like Azure, allocate the full specs to match your production environment, and have your devs hit it. There are two benefits here. First, you free up on-premises capacity for production workloads, and second, you can now spin the VMs up and down and only pay for the usage (down to the minute in Azure)!
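The spin-up/spin-down idea can be sketched as a simple schedule check. The business-hours window below is an assumption to tune to your own usage, and the actual start/stop calls would go through your provider’s tooling (for Azure, the `az vm start` and `az vm deallocate` CLI commands):

```python
from datetime import datetime

def should_be_running(now: datetime, start_hour: int = 7, end_hour: int = 19) -> bool:
    """Decide whether a dev VM should be running right now.

    Assumes dev VMs are only needed Mon-Fri within a fixed window;
    a scheduler would call the provider API to start/deallocate based on this.
    """
    is_weekday = now.weekday() < 5              # Monday=0 ... Friday=4
    in_window = start_hour <= now.hour < end_hour
    return is_weekday and in_window

# A Wednesday at 10:00 -> keep running; a Saturday at 10:00 -> deallocate
print(should_be_running(datetime(2015, 6, 3, 10)))   # True
print(should_be_running(datetime(2015, 6, 6, 10)))   # False
```

Since deallocated Azure VMs stop accruing compute charges, even this naive schedule captures most of the savings for always-idle dev boxes.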

Scalability is Key
When cloud solutions first came to light, a number of cost-saving measures were touted. Among them were staffing reductions, which is always a tough topic. Staffing savings might be there, but more likely your existing IT staff will remain and need a different skillset.

There are other features the cloud can offer your workloads. The cloud can easily provide backup and recovery. Cloud providers have also worked with ISPs to provide direct connections into their services. This does come at a cost, but it has a lot of advantages for performance.

Security is another consideration. All I will say is this. Consider what you can do internally and then compare it to a company whose business interest is in protecting your data because their revenue stream depends on it. Microsoft has even gone so far as to fight the US government over it.

Putting all this aside, PaaS and IaaS really help IT organizations plan and charge back to business units more easily than ever before. This centers on scalability, but also capability. Imagine being able to move and adjust quickly as your business demands. That’s what the cloud offers.
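As a rough illustration of how metered billing simplifies chargeback, here is a toy report that rolls per-VM usage up to business units. The record fields, names, and the $0.10/hour rate are invented for the example; a real report would pull from the provider’s billing export:

```python
from collections import defaultdict

RATE_PER_HOUR = 0.10  # illustrative flat rate, not real provider pricing

usage = [
    {"vm": "design-01", "unit": "R&D",       "hours": 264},
    {"vm": "design-02", "unit": "R&D",       "hours": 120},
    {"vm": "web-01",    "unit": "Marketing", "hours": 720},
]

def chargeback(records):
    """Sum metered VM hours into a dollar total per business unit."""
    totals = defaultdict(float)
    for r in records:
        totals[r["unit"]] += r["hours"] * RATE_PER_HOUR
    return {unit: round(total, 2) for unit, total in totals.items()}

print(chargeback(usage))  # {'R&D': 38.4, 'Marketing': 72.0}
```

Contrast this with allocating a shared SAN’s depreciation across departments; with per-hour metering, the numbers fall out of the usage records directly.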

What’s the Catch?
As easy as all this sounds, a sound automation and operations model should surround the non-SaaS cloud in order to take advantage of all the cost savings. It will likely require a strong partner to get you there. This is key to a successful cloud implementation, but should be part of any successful on-premises implementation as well.

This is why Microsoft’s hybrid cloud solution is so compelling. It focuses on getting the pieces in place to automate and manage all workloads first, and then leveraging the cloud where you can using those same tools and principles. There’s no need to rip out expensive infrastructure and go all in on the cloud, but it can help you save money as you refresh environments.

The cloud can be a gradual journey that allows you to streamline existing operations before you move any workloads. Develop processes around what IT does well – enabling business success through technology – and then determine if the cloud is right for your workloads.

How to Use 15% of Office or How to Use Google Apps

For a couple of years now, people have been comparing Google Apps for Business and Office 365. One of the common perceptions from the pro-Google side has been that most people only use 15% of the functionality within the Office software. They expand the conversation to state that the most commonly used spreadsheet features are in Google Sheets, and as of late, that is possibly true. A lot of people just use the same 15% of features over and over.

Some businesses can probably get by with most of what Google Apps for Business offers. This selling premise bothers me though and raises a few questions. Is your business only operating at 15% of what it could do by going with Google Apps and, better yet, if you have Office, are people taking advantage of more than 15%?

Cost
As much as I’d like to steer this conversation away from cost, it is foremost on people’s minds. Here’s what we can talk about – the hard costs. It is fairest to compare the base Google plan to the base Microsoft plan. Guess what? Both are $5/user/month! In my experience, though, companies that choose Office 365 don’t go with this plan. People who choose Office want the Office desktop software, not just web-based productivity tools.

The price for the popular Office 365 E3 plan is $20/user/month, compared to Google Apps for Business with unlimited storage and Vault at $10/user/month. The big difference here is, of course, the Office software itself. For $20/user/month, you can run Office on up to 5 PCs or Macs and on up to 5 mobile devices and tablets anywhere (not including Office Web Apps, which work via any browser).

So, is Office worth an extra $10/user/month? Well, Google supporters would have you believe it is not. After all, that 15% usage creeps in. If Google has only focused on those features, though, why isn’t the price for the base Google Apps 15% of the Office 365 E3 plan?

Let’s look at some ways Microsoft makes up for this price difference and how the usage matters. Full Disclosure – I have used both Google Apps for Business and Office 365 in a professional setting.

Email
This is where Google Apps for Business was born and where Microsoft has dominated over the last 20 years. Google Mail has been around since 2004. The other non-mail Google services have been stacked on over the years. Heck, even Microsoft used GMail in an augmented reality game for Halo 2 (that’s how I scored an invite).

Here’s one thing Google understood early on. People get a TON of email. In order to deal with it, they need a LOT of mailbox storage. I remember watching the GB counter every day to see how much email storage I could get with my free GMail account and comparing it to my 100MB corporate account.

Google’s solution: Search your email, don’t worry about organization or filing.

On the flip side, Microsoft understood something else. People get a TON of email. Email is content. Not all content should be consumed via email and there are different ways to foster collaboration. This is what I see when I look at Office 365 today. Different solutions for different content.

Microsoft’s solution: Put content in the right location and collaborate more effectively. Besides that, organize and prioritize your email. Microsoft knows though that not every corporate culture is savvy in dealing with email content, which is why there are tools to help you, as the recipient, prioritize and clean-up your mailbox (see Clutter, Junk, Ignore Conversations, and Filter Email).

Google actually contributes to the problem of email volume under a horrible guise – search and recall. The assumption of Google is that email is just another mass repository to dump everything and, when you need it, just search for it.

Ugh.

Let’s look at the Google and Microsoft productivity suites and see what else we can do.

Instant Collaboration
Both Google and Microsoft have instant messaging solutions, but they are vastly different. Even with their differences, both work for instant and impromptu communication, determining someone’s availability, file sharing and storing conversation history in their respective mailboxes.

Microsoft’s solution: Use instant messaging as a backbone for quick collaboration, but extend the functionality into meetings, audio and video sharing. Also, make it available throughout Office. As a result, Microsoft Lync is much more than chat, Lync is everywhere across the Office platform. Within the client or within other Office software, you can instantly collaborate with someone over chat, audio, video or with desktop sharing.

Google’s Solution: Just chat, well, mostly. Google Talk, which was wildly popular, was integrated with Google Mail much as Lync is with Outlook, but the enhanced features of Lync, like video conferencing and desktop sharing, have spawned another application, Hangouts. One thing of note: Hangouts is not as ubiquitous throughout the Google suite and is still its own application. Google might be driving toward a Lync-like solution, but they aren’t there yet.

File Storage
Microsoft’s solution: Let people collaborate in teams or spawn collaboration from the individual. SharePoint/OneDrive has come a long way in reducing email volume and even replacing file share content. This software has been massively popular due to its intuitive interface backed by real-time co-authoring of documents, spreadsheets and presentations. SharePoint does much more than document management, though, and is great at other content management (Discussion Boards, Polls, Shared Calendars, Lists, etc.). OneDrive is more like your personal home drive, built on SharePoint Online, and allows for easy sharing of documents.

Google’s solution: Individual file storage and sharing via Google Docs and Drive. The organization is geared toward the individual, not teams or projects. Google Docs is really more like DropBox – a simple file repository. Google also has Sites for more team-based collaboration, but the end-user setup is confusing, requiring more web-authoring skills than SharePoint; not to mention, the samples are extremely lame and look about 15 years old.

Enterprise Social
Businesses are starting to leverage social connections within the organization to distribute information and collaborate. This adoption can drive email volume down and give employees a way to collaborate with tools familiar from their personal lives.

Microsoft’s solution: Familiar is good, natively adopt the best features of personal social media networks and develop an Enterprise class solution. Yammer is for real collaboration and simple broadcasts that are best kept out of email. Sick of ‘Congratulations’ emails? Just look to Yammer’s Praise feature. Yammer is a great place to disseminate static information and the best part is that the recipient is responsible for finding the content. This flips the email scenario on its head!

Yammer could stand some improvements, though, starting with better integration with Lync instead of its own chat client. There’s also an overlap with SharePoint that folks expect will get fleshed out soon.

Google’s solution: Well, no one really knows because everyone avoids it like the plague. With Google, we’re back to Hangouts and Google+, which, again, is just a disaster. Google seems to have a problem discerning consumer solutions from enterprise solutions. There’s a great post about Google+ from a former Googler (watch out for the language). You can see the emphasis on the consumer side throughout his article, but the enterprise conversation is missing (as is direction for Google+ in general).

Good Enough
So, with all the Office functionality, looking at content in a new way and clear cohesiveness throughout the suite, is working at 15% with Google Apps going to work for your company? Office is really worth the money, but you have to make it work for you. Don’t be content to let your end users use only 15% of the suite. Set up some governance and controls to make the most out of your investment. You will find that your users will figure out how best to use the features and they will do some amazing things. I’ve seen it happen!

If there’s still any question about what’s possible, go watch the latest Skype for Business video from Microsoft and then go re-visit Google’s intranet Site sample.

SDDC and Self Service: Two Old Problems are New Again

A while ago, I attended an excellent conference held by EMC around the software defined data center (SDDC). It was an eye opening experience into the types of efficiencies an SDDC can offer, but beyond that, it gave a glimpse into the future of IT, at least from the participating vendors’ point of view.

Over the course of that day, IT was embraced as the enabler of customization and flexibility for the end user while driving down costs. Two main outcomes of the SDDC (or similar cloud initiatives) were self-provisioned VMs/storage and highly customized, portable applications. Despite the touted benefits, both brought to mind legacy problems that centralized management strove to remove. While there are ways to mitigate the pitfalls, it harkened back to issues IT has been trying to resolve for years.

Self-Provisioning Virtual Servers and Storage
Prior to the Y2K bug, IT organizations were a little more lenient with end users. You might have found that end users would go talk to software vendors and purchase the software and servers they needed. If they needed a file server, they may have even gone out and purchased a server with some internal storage and given it to IT to implement (or stood it up under their desk). Perhaps your IT organization back then interjected, but in a lot of cases, I’m betting they didn’t. This was the time before standards and their cost savings were even realized. Folks needed what they needed and they got it through their own budgets.

At its core, this is self-provisioning in that it is end-user driven. A user had a set of requirements, found a solution, and asked IT to implement it on some infrastructure (or did it themselves). In current times, IT works with the end user to plan out resources and get them what they need. In the SDDC, we’ve reversed that. End users can now go to a portal and self-provision the infrastructure they need. IT provides the underlying infrastructure; the end user finds the solution and can implement it themselves using the automated self-provisioning portal. IT can create standards to lower costs, perform chargebacks for cost tracking, and potentially provide many diverse platforms, giving the end user the flexibility to use IT as a service.

The caution here, though, is stagnation and another divergence of standards at the end-user level. The good news is that stagnation can be mitigated with the proper controls around server and storage expiration. This does require some care and feeding on the IT side. You certainly can’t have a user leave the company and their critical VM expire and get removed. IT needs to build processes for after the self-provisioning occurs to move things into a production-supported environment. Management cannot assume that self-provisioning removes the need for IT; the IT staff’s responsibilities shift in a different direction.
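Those post-provisioning controls can be sketched as a simple triage pass over the inventory. The field names, owners, and dates below are hypothetical; the key design choice is that a VM whose owner has left is routed to human review rather than auto-deleted, which is exactly the departed-user scenario above:

```python
from datetime import date

def triage(vms, active_users, today):
    """Split self-provisioned VMs into 'reclaim' and 'needs human review'.

    Orphaned VMs (owner no longer active) are never auto-reclaimed,
    even if expired -- they may be critical and need a new owner first.
    """
    expire, review = [], []
    for vm in vms:
        if vm["owner"] not in active_users:
            review.append(vm["name"])      # orphaned: route to a person
        elif vm["expires"] < today:
            expire.append(vm["name"])      # past expiry, owner present: reclaim
    return expire, review

vms = [
    {"name": "sandbox-7", "owner": "alice", "expires": date(2015, 1, 31)},
    {"name": "qa-3",      "owner": "bob",   "expires": date(2015, 12, 31)},
    {"name": "crit-app",  "owner": "carol", "expires": date(2015, 1, 31)},
]
expire, review = triage(vms, active_users={"alice", "bob"}, today=date(2015, 6, 1))
print(expire)   # ['sandbox-7']
print(review)   # ['crit-app']
```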

IT should also investigate new applications as they come up to check for duplication. For example, if you work at an engineering company, you might have an IP management system. Let’s assume a new group has been acquired and is starting to investigate IP management systems on its own. Someone should know what this group is doing so that you can take advantage of the existing IP management system (if possible). At a large organization, this consolidation of services can be hard to see without IT’s involvement.

In-House Developed Applications
How many of us have struggled with legacy applications developed in-house? Back before massive ERP applications and APIs for everything, a lot of development work went into custom applications built around your business. As time went on, these applications traded hands and, as cost-cutting measures won the battle for IT, the developers were let go, but the applications lived on. No one wants to acknowledge these applications exist, but of course they do! Not only that, they are critical to your business and haven’t been updated in years.

What the SDDC (and the cloud) can offer, along with self-provisioning, is a new application landscape that, again, harkens back to this original flaw. A developer can now have instant platforms available and is encouraged to write portable code. We’ve only solved one problem here, namely the availability of resources, or speed to deployment.

IT departments have to commit to the developers and the applications. Cost savings cannot trump the development work needed to keep an application modern. The SDDC and PaaS-based solutions require a commitment to modernization and upkeep. If not, you will end up with yet another application that is minimally supported but still critical to your business.

SDDC, PaaS, and IT
While the SDDC and cloud offer instant access to the latest platforms quickly and without much IT involvement, there are still concerns that need to be addressed and monitored by IT. The investment in these solutions requires a long term commitment to infrastructure and the applications running on them. EMC did an excellent job of outlining this strategy and it is certainly a convincing way to manage infrastructure and empower end users if managed and planned out properly.

If nothing else, the SDDC requires a lot of planning and shouldn’t be entered into without a deep level of analysis and thought.

The Cloud and Scalability Pitfalls

Ah, the cloud. It will redefine our datacenters, lower our staffing costs and increase our scalability. If you give it a treat, it may even follow you home and provide everlasting comfort. Let’s talk scalability though. That term can be applied to the cloud in a multitude of ways. Once you build your company up to be a full-fledged Enterprise, it is important to understand this term and how it can be applied.

For the purposes of this article, I’m going to assume you know basic cloud terminology. Also, only scalability is considered, not security or other things you should think about when picking a cloud provider.

User Scalability
The first type of scalability is easy. Moving to the cloud, especially for SaaS-based applications, gives you a massive level of user scalability. Depending on your contract, you can scale the licenses you need as your company grows or shrinks. For SaaS, this is perfect because a one-size-fits-all application, like email, grows without you having to add on-premises resources.

The pitfall here is flexibility and expandability. With a one-size-fits-all application that can scale to a great number of users comes, in some cases, a lack of flexibility. The reason is often that, in trying to keep cloud subscription rates low, a SaaS provider may not invest in the resources to grow APIs or add functionality. Complexity becomes the enemy of their operating model, and as a result, customizations are limited. This could lead your company to engage multiple cloud partners providing various services to meet your requirements. Your savings have now vanished! A good contrast here is Google Apps vs. Microsoft. Take a look at each one’s investments in improvements over time. If there’s any question in your mind about what I’m talking about, go run Google Apps Sync for Microsoft Outlook. Yikes!

For expandability, you might be looking at development work for those providers willing to allow customizations. In that case, you may be hiring staff to support a development effort to stay in step with the cloud provider, but also expand functionality as requirements come up. This can be a good thing if the solution is flexible or a bad thing if they hit a wall on capabilities. Either way, it is something to consider.

Processing Scalability
This type of scalability makes a lot of sense when considering the cloud. Consider an application that does a heavy amount of processing, but only part of the time. The application is critical to your business, but you may not want to dedicate on-premises resources that will sit idle most of the time. These applications and their resources typically fall into PaaS or IaaS.

For example, consider a bank. All day long, it takes in transactions. At the end of the day, there might be processing that needs to be done. A cloud solution might be preferable to on-premises here because you could burst all that processing into the cloud after hours and bring the results back down before the next day begins. You have no expensive hardware to maintain on-premises that sits idle most of the day.
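The economics of the bank example can be sketched with a little arithmetic. This is a minimal cost model, and every number in it is an assumption for illustration, not real provider pricing: an amortized monthly cost for dedicated on-premises hardware versus renting a large cloud VM only during a nightly batch window.

```python
# Hypothetical burst-processing cost model. All figures are illustrative
# assumptions, not quotes from any cloud provider.

ONPREM_MONTHLY_COST = 4000.0   # assumed: amortized hardware + power + admin
CLOUD_HOURLY_RATE = 2.50       # assumed: per-hour rate for a large VM
NIGHTLY_WINDOW_HOURS = 3       # assumed: batch runs 3 hours after close
BUSINESS_DAYS_PER_MONTH = 22   # assumed: processing only on business days

def cloud_monthly_cost(hourly_rate, hours_per_night, days):
    """Cost of renting compute only during the nightly batch window."""
    return hourly_rate * hours_per_night * days

cloud = cloud_monthly_cost(CLOUD_HOURLY_RATE,
                           NIGHTLY_WINDOW_HOURS,
                           BUSINESS_DAYS_PER_MONTH)
print(f"Cloud burst: ${cloud:,.2f}/mo vs on-prem: ${ONPREM_MONTHLY_COST:,.2f}/mo")
```

Under these assumed rates, paying only for the burst window is a fraction of the cost of hardware that sits idle twenty-one hours a day, which is the whole appeal of processing scalability.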

Another great example can be found in companies running huge e-commerce websites or web applications. These become the lifeline of the company, so there will be massive teams dedicated to these types of applications.

The scalability pitfalls may be the same development complexity as stated above, but the ends would seem to justify the means. Even if the cloud infrastructure were affected by an outage, you have an application portable enough to move freely to another resource, and you can expand it as processing needs grow. This is one reason OpenStack is so appealing and a big reason the cloud providers operate in different regions. If one region is affected and your application is portable, it can be brought up somewhere else quickly.

The bigger pitfall here can be management’s expectations vs. reality. If management is driving the cloud initiative down, they must understand that the cloud isn’t a catch-all for every application and that other cost-saving measures like server virtualization might make more sense. Segue approaching!

Scalability as a Consideration
Be wary when management asks you to look at your internal applications for the cloud. If your company is running smaller sets of applications that are used by small groups of people throughout the day, you will most likely find that the larger IaaS providers cannot match the cost of an on-premises solution and that processing scalability is moot. The complexities involved in making your applications portable, and in backing up their critical data, might also keep you from moving. Be sure you understand how some IaaS providers offer their services, because many require recoding of custom applications: they want you to conform to a platform so they can keep their costs down (enter OpenStack again). There is a great article over at GIGAOM about customers who don’t require cloud scalability but still want to move; it recommends a potential solution for these types of application sets, and those options are worth investigating.
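The argument above can be made concrete with the same kind of back-of-the-envelope math. A small internal app used steadily through the day has no idle window to avoid paying for, so an always-on IaaS VM competes against only the marginal slice of cost on an existing virtualized host. The rates below are assumptions for the sketch, not real pricing:

```python
# Illustrative break-even check for a small, steadily used internal app.
# All rates are assumptions, not quotes from any provider.

def monthly_iaas_cost(hourly_rate, hours_on=24 * 30):
    # The app must stay up all month, so the VM runs (and bills) 24x7.
    return hourly_rate * hours_on

def monthly_onprem_cost(amortized_hw_share, admin_share):
    # On an existing virtualization host, only the marginal slice
    # of hardware and admin cost is attributable to this app.
    return amortized_hw_share + admin_share

iaas = monthly_iaas_cost(0.20)            # assumed $0.20/hr small VM
onprem = monthly_onprem_cost(60.0, 40.0)  # assumed marginal slice of a host

print(f"IaaS: ${iaas:.2f}/mo  On-prem: ${onprem:.2f}/mo  "
      f"Cloud cheaper? {iaas < onprem}")
```

With these assumed numbers the always-on VM loses to the shared on-premises host, which is exactly the case where processing scalability is moot and the cloud’s pay-per-use advantage never kicks in.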

Balance of User and Functionality
When faced with a cloud initiative, scalability is often discussed, but may not truly be understood. User and processing scalability will only get you so far in a cloud-based world, but each has its place. The solution you choose must be flexible enough to meet all your requirements to make it worth the investment. Consider the product roadmap and your own company’s long-term strategy when weighing the user side of scalability. For companies with an intensive processing requirement or the burden of servicing millions of users, the cloud can easily provide scalability and, potentially, a quick return on investment. The question is: does your Enterprise have such applications? If your management has given a cloud-first directive, scalability may fall by the wayside entirely. Other factors will start to emerge that show how your applications are really used, and you may find you already made the right decision in keeping them on-premises, whether on hardware or virtualized.

© 2017 Matthew Sekol
