Matthew Sekol

"The basic tool for the manipulation of reality is the manipulation of words."

Category: Business Processes

Bimodal IT: Creating Rifts or Opportunities?

As a result of consumer-driven conveniences via applications ‘that just work’, employees are expecting more flexibility, capability, and speed of delivery from their IT group than ever before. Enterprise IT can address these requirements with the cloud and the efficiencies it can bring to a business if it standardizes and streamlines processes. Never before has IT had a bigger chance to make an impact. With this new opportunity, though, comes a lot of change.

IT needs to align with the business in a way it hasn’t before and develop processes designed to enable agility and speed. This is evident from a recent CIO article called “What Gartner’s Bimodal IT Means to Enterprise CIOs,” which explores Gartner’s vision of the future of IT. Supplement that with Kathleen Wilson’s “Enabling Azure Operations” session at Microsoft Ignite, and the story really starts to make an impact.

What is Bimodal IT?
Bimodal IT is a way to address the increased speed of solution delivery in an Enterprise brought on by the cloud. It posits that IT needs to be broken into two groups to facilitate this, which also has the side effect of eliminating a third, unofficial one:

1. A traditional IT that handles rack-and-stack installations and on-premises troubleshooting.
2. A new, modern IT in which everyone is a generalist who can quickly organize to deliver solutions that drive the business.
3. The split has the benefit of eliminating (or lowering) shadow IT, because IT is now equipped with the tools to move as quickly as someone with a credit card.

Traditional IT knows what the business needs from an IT strategy perspective (example: We need more storage for our designers), but modern IT understands the business strategy (example: If we had a more robust design environment, we could deliver designs 10 times faster to our customers, resulting in a quicker R&D return).

Is This the Death of Traditional IT?
Bimodal IT builds on the practices that some Enterprises have already been following for years since the advent of virtualization and, subsequently, automation. The difference is that now, with the cloud, everything is sped up and even more immediate. Lydia Leong at Gartner wrote a great blog post arguing that virtualization and the public cloud aren’t as similar as they seem, even with the software-defined data center in between. Despite this assertion, Enterprise IT should already be on its way toward a modern IT.

One passage, though, is particularly troubling for traditional IT staff and shows that a mindset change is needed if staff are to survive:

The IT-centric individual who is a cautious guardian and enjoys meticulously following well-defined processes is unlikely going to turn into a business-centric individual who is a risk-taking innovator and enjoys improvising in an uncertain environment.

I’m not sure I agree completely, as this does sound like a death knell for traditional IT. The software-defined data center and DevOps play a critical role in both modes, and skills and processes can likely transfer. What I take from this quote is that communication between IT and the business units needs to be more immediate, but I don’t agree that you should introduce risk by letting speed of delivery get in the way of due diligence and long-term planning. Doubt me? Check your data center for those legacy, yet critical, applications still hosted on Windows 2003. We can’t let that type of development work continue without some amount of planning; certainly there must be a balance.

I’m hopeful that CIOs always understand the business needs, but historically, those needs haven’t been communicated to the IT managers and engineers. In an Enterprise where IT is split into traditional IT and modern IT, the issue can be exacerbated as the two factions fight for power. Gartner seems to be suggesting that this model should be implemented immediately even if the CIO isn’t ready! Yikes! (Check the Gartner agenda here.)

Is that really what they are saying, though? With IT budgets still small, there is likely little organizational movement happening, so some IT departments are stuck with a structure put in place a decade ago. Gartner’s suggestion would certainly be less than optimal because of the natural rift it causes, unless they are building on the efficiencies that Enterprise IT groups might already have in place – infrastructure and business applications teams that work well together, taking the best of both worlds to move into the bimodal model. Traditional IT can offer standards and best practices while modern IT has the agility to deliver quickly.

For traditional IT, this doesn’t represent a death knell, but an opportunity to move into a more agile way of working.

The CIO Needs to Change Too!
Effective Enterprises should be on their way to solving this problem with tight communication among traditional IT, modern IT, and the business. The CIO (or some other leader) needs to facilitate this relationship.

With the speed at which the cloud moves, the CIO can no longer afford to sit back and funnel business information to only one group, i.e. the business applications teams. They need to facilitate a deep understanding between IT and the business to enable agile movement, not keep traditional IT projects at arm’s length. Businesses that stick with fragmented communication will likely find competitors with an efficiency edge squeezing them out over time.

There are hints that this is what Gartner is saying, but it also seems like they are encouraging two nearly separate IT organizations just to deal with innovation. I don’t believe that the cloud is so transformative that existing processes and standards knowledge can’t be built upon to deal with this new agility. Certainly, communication can help bridge the gap.

A Surprising Way to Get There
If you are in an Enterprise and can spare staff to move into modern IT, you will likely want to pull from those with the broadest skillsets so that they can understand the complexities of a cloud-based solution. Generalists who can understand an entire application’s stack are better suited than someone troubleshooting just one component.

If you don’t have the staff, or if your IT group remains highly fragmented without effective communication with the business and you don’t know how to address it, an IT Partner can often help bridge the gap. Think about it: a good IT Partner has experience talking to different levels of the organization and getting to the real requirements and results. Their job depends on this skill!

CIOs should not fear bringing an IT Partner into business conversations. Partners have the added advantage of seeing industry trends specific to your vertical and can perhaps facilitate external references for large initiatives. Traditional IT should also embrace a partner, since a partner offers two effective ways to get your IT group where it needs to be:

1. A Partner can focus on traditional IT, allowing existing IT staff to start developing processes and skillsets around modern IT practices.
2. A Partner can be the modern IT practice, interacting with the business while existing staff deal with traditional IT issues.

Not all businesses are Enterprise class, and a Partner can help smaller businesses understand and make this transition as well. Not everything is about the big players; there are cloud efficiencies for everyone!

Regardless of how you get there though, shadow IT can still come into play if you’re not careful, proving that communication is the key to this transition. Peter Sondergaard, VP and Global Head of Research for Gartner, wrote a great blog article about bimodal IT and mentions that companies ignoring this trend risk shadow IT, but I think he misses how shadow IT might crop up when applying the Gartner model.

If traditional IT is kept out of the business conversation, shadow IT moves from the end users into modern IT, and the solutions implemented will ultimately become unsupportable and fragmented themselves (harken back to the Windows 2003 example). Just because the cloud makes it easy doesn’t mean you should move toward it at breakneck speed. This balance is the value traditional IT can bring. The CIO, on the other hand, must ensure communication is tight throughout the business and keep the bleeding edge reined in just enough to be secure, while remaining agile.

The cloud journey is very complex, and getting it started right is key if IT is going to shift from cost center to enabler of business agility. IT staff, from the engineer up to the CIO, have to embrace the change too; each level now has a new role that can be exciting.

How to Get People to Read your Notification


Business support groups like HR, IT, and facilities need to communicate changes and updates to employees often. Probably too often. You can’t blame these groups, though; they have a responsibility to communicate compliance rules, training, process changes, downtime, and improvements.

Your email is probably already inundated with useless information, and these notifications compound it. As a result, almost everyone filters or ignores these types of emails. Here are some tips for crafting a notification to ensure it is read.

To be fair, there will always be people who don’t read your notifications no matter what you do. These steps, though, should get people reading your email and at least shift the responsibility for reading it onto them.

Step 1: Consistency
Well, you can’t become consistent with just your next email, but you can put a good foot forward. Work with your team or organization to establish a central sending address for notifications. If you have a Corporate Communications or Internal Marketing group, work with them for reviewing, branding, and potentially sending.

For all notifications your organization sends out, use the same sending address so your employees begin to recognize the sender (and hopefully not set up a rule).

Design a reusable format that will be the standard for your organization.

Step 2: Pick Your Audience
Make sure that your audience is appropriate. For example, if you are rebooting an email server that serves only 100 people, send the notice only to those 100 people. If you communicate useless information out to the masses, it will train them to tune you out.

Step 3: Include Something Actionable if Required
It is absolutely the worst when you send a notification that requires an action from your employees, but it goes into a black hole and no one does what they need to.

In the Subject line, include an “ACTION REQUIRED:” prefix so that employees know they need to do something. This should stop them from deleting the message immediately. Include the completion deadline in the subject or body of the message.
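As a rough illustration (not a prescription), here is a minimal PowerShell sketch of that subject format; the addresses, SMTP server, training topic, and deadline are all hypothetical placeholders.

```powershell
# Minimal sketch of an actionable notification (all addresses, the server, and the
# deadline are hypothetical placeholders).
$deadline = 'Friday, Nov. 13'

$mail = @{
    From       = 'it-notifications@contoso.com'   # consistent sending address (Step 1)
    To         = 'finance-team@contoso.com'       # targeted audience (Step 2)
    Subject    = "ACTION REQUIRED: Complete expense system training by $deadline"
    Body       = "Please complete the new expense system training by $deadline. It takes about 15 minutes."
    SmtpServer = 'smtp.contoso.com'
}
Send-MailMessage @mail
```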

As a helpful side note, if there is an action required, be sure you have a way to track it and don’t be afraid to send follow ups. This may be required for tracking compliance training for example.

Step 4: Avoid the Technical Jargon
Coming from IT, I can tell you – just avoid anything technical. Chances are it will be over the employees’ heads. Stick with the impact. If we continue the example about rebooting an email server: “You will not be able to send or receive emails from 1-3 PM EDT on Saturday, Oct. 25th. Incoming messages from the internet will queue and be delivered after 3 PM EDT.”

Step 5: Leverage Your Leaders
For major changes, engage your managers and leaders with supplemental information so that they can reiterate the message if their teams ask. You can do this through leadership focused emails.

Step 6: Bonus!
This step is a bonus and includes one! If you are desperate to get people to read your notifications, include an incentive. Put a gift card reward for the 50th person to reply or some other incentive at the bottom of the email. Eventually, this will train your users to stop ignoring you.

All these steps should help you do everything you can to inform the employees through a notification. If they still refuse to read, you can always ask them for feedback on communicating better. For example, an IM broadcast might work better in an emergency.

Warning – ask people for their opinions at your own risk!

SDDC and Self Service: Two Old Problems are New Again

A while ago, I attended an excellent conference held by EMC around the software-defined data center (SDDC). It was an eye-opening experience into the types of efficiencies an SDDC can offer, but beyond that, it gave a glimpse into the future of IT, at least from the participating vendors’ point of view.

Over the course of that day, IT was embraced as the enabler of customization and flexibility for the end user while driving down costs. Two main results from the SDDC (or similar cloud initiatives) were self-provisioned VMs and storage and highly customized, portable applications. Despite the touted benefits, both of these brought to mind legacy problems that centralized management strove to remove. While there are ways to mitigate the pitfalls, it harkened back to issues IT has been trying to resolve for years.

Self-Provisioning Virtual Servers and Storage
Prior to the Y2K bug, IT organizations were a little more lenient with end users. You might have found that end users would go talk to software vendors and purchase the software and servers they needed. If they needed a file server, they may have even gone out and purchased a server with some internal storage and given it to IT to implement (or stood it up under their desk). Perhaps your IT organization back then interjected, but in a lot of cases, I’m betting they didn’t. This was the time before standards and their cost savings were even realized. Folks needed what they needed and they got it through their own budgets.

At its core, this is self-provisioning in that it is end-user driven. A user had a set of requirements, found a solution, and asked IT to implement it on some infrastructure (or did it themselves). In current times, IT will work with the end user to plan out resources and get them what they need. In the SDDC, we’ve reversed that. End users can now go to a portal and self-provision the infrastructure they need. IT provides the underlying infrastructure; the end user finds the solution and can implement it themselves using the automated self-provisioning portal. IT can create standards to lower costs, perform chargebacks for cost tracking, and potentially provide many diverse platforms, giving the end user the flexibility to use IT as a service.

The caution here, though, is stagnation and another divergence of standards at the end-user level. The good news is that stagnation can be mitigated with the proper controls around server and storage expiration. This does require some care and feeding on the IT side. You certainly can’t have a user leave the company only to have their critical VM expire and be removed. IT needs to build processes after the self-provisioning occurs to move things into a production-supported environment. Management cannot assume that self-provisioning removes the need for IT; the IT staff’s responsibilities simply shift in a different direction.
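As a sketch of what those expiration controls might look like, here is a hypothetical PowerShell pass over a self-service inventory export. The CSV path, its columns, and the 30-day grace period are assumptions for illustration, not features of any particular portal.

```powershell
# Hypothetical sketch: flag self-provisioned VMs that are past their expiration date,
# or whose owner has left the company, so IT can review them before anything is removed.
$gracePeriodDays = 30
$inventory = Import-Csv 'C:\Reports\self-service-vms.csv'   # assumed columns: Name, Owner, OwnerActive, ExpirationDate

foreach ($vm in $inventory) {
    $expires = [datetime]$vm.ExpirationDate
    if ($vm.OwnerActive -eq 'False') {
        # The owner is gone: hold the VM for review rather than letting it expire silently.
        Write-Output "$($vm.Name): owner $($vm.Owner) has left - review for production support."
    }
    elseif ($expires -lt (Get-Date).AddDays(-$gracePeriodDays)) {
        Write-Output "$($vm.Name): expired $($expires.ToShortDateString()) - candidate for reclamation."
    }
}
```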

IT should also investigate applications as they are brought up to determine whether there might be duplication. For example, if you work at an engineering company, you might have an IP management system. Let’s assume a new group has been acquired and is starting to investigate IP management systems on its own. Someone should know what this group is doing so that you can take advantage of the existing IP management system (if possible). At a large organization, this consolidation of services can be hard to see without IT’s involvement.

In-House Developed Applications
How many of us have struggled with legacy applications that were developed in-house? Back before there were massive ERP applications and APIs for everything, a lot of development work was done to build custom applications around your business. As time went on, these applications may have changed hands and, as cost-cutting measures won the battle for IT, the developers were let go, but the applications continued on. No one wants to acknowledge these applications exist, but of course they do! Not only that, they are critical to your business and haven’t been updated in years.

What the SDDC (and the cloud) can offer, along with self-provisioning, is a new application landscape that, again, harkens back to this original flaw. A developer now has instant platforms available and is encouraged to write portable code. We’ve only solved one problem here, namely the availability of resources, or speed to deployment.

IT departments have to commit to the developers and the applications. Cost savings cannot trump the development work that needs to happen to keep an application modern. The SDDC and PaaS-based solutions require a commitment to modernization and upkeep. If not, you will end up with yet another application that is minimally supported but still critical to your business.

SDDC, PaaS, and IT
While the SDDC and the cloud offer access to the latest platforms quickly and without much IT involvement, there are still concerns that need to be addressed and monitored by IT. The investment in these solutions requires a long-term commitment to the infrastructure and the applications running on it. EMC did an excellent job of outlining this strategy, and it is certainly a convincing way to manage infrastructure and empower end users if managed and planned out properly.

If nothing else, the SDDC requires a lot of thought and shouldn’t be entered into without a deep level of analysis and planning.

Roast Beef and IT

I recently came across a neglected IT system. The software itself was up to date and patched, but the processes and management of this system had fallen away. The explanation I received was a story that I hadn’t heard before, but one that is used frequently to describe such a phenomenon in a business. The story goes like this…

A newly married couple was preparing a roast beef dinner. The wife cut the ends off of the roast and placed it in the pan. The husband asked her why she was doing that. She explained that was the way her mother always did it. The next day, she rang up her mother who explained that the grandmother had done that when she was a little girl. When they asked the grandmother about it, she explained that the roasting pan she used to use was too small to fit a roast in, so she cut the ends off.

Clearly, the point of the story is to show how outdated processes can be a hindrance, or at least wasteful if not checked. When it comes to IT processes though, what can the harm be? Well, it turns out the harm can be pretty high.

The rest of this is a little technical, so hang on!

Active Directory Replication
The particular system that brought this story up was Active Directory. While working with a company, I heard a complaint that when computers lost their domain membership, Site Support couldn’t delete the computer object from the domain and add it back with the same name. There would always be a conflict that prevented it, despite the computer object being deleted from AD. As a result, their domain was riddled with computer01, computer01a, computer01b, etc.

Having gone through several Microsoft Active Directory health checks over the years, I suspected the convergence time was too high across the domain. My assumption was that the computer object deletion wasn’t replicating fast enough before a new computer was joined to the domain. I found an excellent PowerShell script to test out my theory.
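I won’t reproduce that script here, but the idea behind a convergence test is simple: stamp a change on one domain controller and time how long it takes to appear on all the others. A minimal sketch of the same idea, assuming the ActiveDirectory module and a throwaway account named convergence-test that already exists in the domain:

```powershell
# Minimal convergence test sketch (assumes the ActiveDirectory module and an existing
# throwaway account named 'convergence-test').
Import-Module ActiveDirectory

$stamp = "convergence check $(Get-Date -Format o)"
$dcs   = @(Get-ADDomainController -Filter *)
$start = Get-Date

# Stamp a change on the first DC...
Set-ADUser -Identity 'convergence-test' -Description $stamp -Server $dcs[0].HostName

# ...then poll the remaining DCs until the change has replicated everywhere.
$pending = $dcs | Select-Object -Skip 1
while ($pending) {
    Start-Sleep -Seconds 5
    $pending = @($pending | Where-Object {
        (Get-ADUser 'convergence-test' -Properties Description -Server $_.HostName).Description -ne $stamp
    })
}
"Converged in {0:N0} seconds." -f ((Get-Date) - $start).TotalSeconds
```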

After running the script, I saw that it was taking 15 minutes to replicate an object across the domain. In my own observation with manual object creation, this appeared to be upwards of 45 minutes! Furthermore, most of that time was spent just reaching one particular site. I started my investigation.

I found that the Site Links had been neglected and, in their place, manual connections had been created between the domain controllers. Digging deeper, I saw that IP subnets were incorrectly configured and that domain controllers in 3 different physical locations had been lumped into 1 site. When I asked about this, it seemed it had been part of some legacy process.

The first thing I did was fix the Site Links so that replication flowed where I wanted it to go. I also enabled change notification on all the Site Links. If your network can handle this, I highly recommend it.

After that, I split the domain controllers from those 3 physical locations into 3 separate sites, added the IP subnets accordingly, and configured new Site Links between them.
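For reference, this kind of topology cleanup can be scripted with the ActiveDirectory module’s replication cmdlets. A minimal sketch with hypothetical site names and subnets; the options value of 1 is what enables change notification on a Site Link:

```powershell
# Minimal sketch of the site topology cleanup (site names and subnets are hypothetical).
Import-Module ActiveDirectory

# Break a lumped-together location out into its own site and subnet.
New-ADReplicationSite -Name 'Plant-East'
New-ADReplicationSubnet -Name '10.20.0.0/16' -Site 'Plant-East'

# Link the new site back to the hub site with a cost and replication interval.
New-ADReplicationSiteLink -Name 'HQ-PlantEast' -SitesIncluded 'HQ','Plant-East' -Cost 100 -ReplicationFrequencyInMinutes 15

# Enable change notification (options bit 1) on every Site Link so changes replicate
# immediately instead of waiting for the schedule.
Get-ADReplicationSiteLink -Filter * |
    Set-ADReplicationSiteLink -Replace @{ options = 1 }
```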

Now that things were looking better, I waited 15 minutes for everything to replicate around. I then logged into each domain controller and deleted the manual connections that had been created. I tried to do this (as much as possible) in pairs so that when I kicked off the KCC, it would find the new pair and recreate the connection automatically, which it did.
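If you want to nudge that process along rather than wait, the built-in repadmin tool can ask the KCC to recalculate the topology and then summarize replication health. A quick sketch, assuming the AD DS management tools are installed:

```powershell
# Ask the KCC on each DC to recalculate the topology now, then check overall health
# (assumes the repadmin tool from the AD DS management tools is installed).
Import-Module ActiveDirectory

Get-ADDomainController -Filter * | ForEach-Object {
    repadmin /kcc $_.HostName
}
repadmin /replsummary   # summary of replication partners, failures, and largest deltas
```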

I gave the domain around an hour to build out the new connections. One side note: there were 2 legacy Windows 2003 domain controllers, and the 2 sites containing them were having problems automatically creating all the Site Links. These were set to be retired, so I isolated them using their own IPs as subnets, and then the Site Links were created properly.

I let everything settle down for around an hour and then ran the convergence test again. It was down to 43 seconds! No more sequential computer names need to be created!

Group Policy
Now that replication was fixed, it was time to check out Group Policy. What I found there was mind-boggling. There were at least 8 DNS-based group policies on site-based computer OUs that did the exact same thing. Several of these referenced legacy VBS scripts that no longer existed on the NETLOGON share.

There were also other Group Policies referencing VBS scripts that no longer existed, and policies with no settings configured at all. I found computer policies with the User Configuration enabled but no user settings. There were also domain-level policies with security settings configured to apply to only 1 user or a few computers (like the Domain Controllers).

Where to start? Well, I consolidated the DNS policies down into one and removed the offending VBS script. I also combined several other computer-based (not site-based) GPOs into one and disabled User Configuration processing on them. This would speed up GPO processing in general.
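One way to find candidates for that kind of cleanup is to look for GPOs whose user side has never been modified but is still being processed. A minimal sketch using the GroupPolicy module (the consolidation decisions themselves still need human review):

```powershell
# Minimal sketch: list GPOs whose User Configuration has never been modified
# (DSVersion 0) but is still enabled, so it can be disabled after review.
Import-Module GroupPolicy

Get-GPO -All | Where-Object {
    $_.User.DSVersion -eq 0 -and
    $_.GpoStatus -notin 'UserSettingsDisabled', 'AllSettingsDisabled'
} | ForEach-Object {
    Write-Output "Candidate: '$($_.DisplayName)' has an empty, enabled user side."
    # To actually disable the empty side after review:
    # $_.GpoStatus = 'UserSettingsDisabled'
}
```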

I also merged the top-level domain policy that was only applying to some Domain Controllers (because new ones hadn’t been added) into the Default Domain Controllers Policy, along with another policy that was linked to the Default Domain Controllers OU.

I moved top-level policies that only applied to one user or computer (for testing) further down to avoid someone accidentally turning them on for everyone. This should also speed things up, because users and computers would no longer even see those policies.

Needless to say, this whole exercise took about 4 hours, but the benefits were massive. Local errors were reduced, login times were decreased and simplicity was restored! The changes were communicated out to support teams so that these legacy processes were removed or updated.

No more Roast Beef!
Active Directory is a great example of something that can be so easy to manage, it falls by the wayside for support. It can be passed around to those who know just enough, and legacy issues get perpetuated over and over. This served as a good lesson that can be applied to any IT system. It is easy to assume legacy processes are still relevant enough to support the environment, but what does it take to give the users a good experience? In this case, it was buckling down for a day and sorting things out. That wasn’t so bad!

© 2017 Matthew Sekol
