Six Qualities That Make Your ITOA Tool Modern



by Richard Whitehead

I routinely meet and talk with people who are deeply concerned about the future, specifically the degree to which their current IT Operations Analytics (ITOA) architecture will be a gating factor in the adoption of new technologies, development methodologies and lines of business.

They are naturally looking for a state-of-the-art alternative, something that will future-proof them for at least the next decade. But a common question is, “What exactly makes an ITOA tool modern?”

Unlike legacy solutions built 20 years ago, truly modern tools leverage and cooperate with the present-day technologies that the modern IT organization depends on. Furthermore, they are built to withstand the rapid adoption of new technologies we are sure to see over the next decade.

In the interest of brevity, I reply that there are six core qualities that make an ITOA tool modern.

1. Design Patterns

This has a lot to do with the way people use technology. Twenty years ago, the fashion was to reinvent the wheel. This was partly because new ground was being broken in the Open Systems world, and partly because “proprietary” was considered a differentiator (though it was mostly just a crude layer of obfuscation for trade secrets).

Today, we have a vast amount of prior art to leverage. What would have been considered complex and proprietary years ago has become the lingua franca of technology. Good examples include the components that make up a product’s architecture: just mention that your system is built as a “LAMP” stack, and you’ll see heads nodding and boxes being checked. This is in stark contrast to legacy solutions, which are frequently built upon layers of proprietary technologies accreted over the years.

2. Social Networking

Social networking technologies have radically changed people’s expectations of how they interact with both technology and other people. Twenty years ago, the “NOC” was the norm: shifts of people sat in front of large screens, looking at multiple dashboards and status displays. Now, millennials are entirely comfortable being untethered, and expect notifications to be pushed to them. When it’s time to collaborate, the expectation is that communication will happen “in-app”, with a rich chat-based interface so they can multi-task.

Perhaps the most up-to-date illustration of this is the burgeoning “ChatOps” movement. When it’s time to execute run commands, diagnostics, remediation scripts and so on, it’s done right within the chat interface, where it’s documented and visible to all key stakeholders. We routinely have customers tell us how awesome it is to be able to perform these activities without “context-switching”.
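
To make the pattern concrete, here is a minimal ChatOps-style sketch in TypeScript. The ChatClient interface, the “!diag” command and the diagnostics.sh script are all hypothetical stand-ins, not any particular product’s API; a real deployment would use a Slack, Teams or similar SDK.

    // Minimal ChatOps sketch: run a whitelisted diagnostic from chat and post
    // the output back to the channel. ChatClient is a hypothetical interface.
    import { exec } from "node:child_process";

    interface ChatClient {
      onMessage(handler: (channel: string, text: string) => void): void;
      post(channel: string, text: string): void;
    }

    function registerDiagnostics(chat: ChatClient): void {
      chat.onMessage((channel, text) => {
        const match = text.match(/^!diag\s+([\w-]+)$/); // e.g. "!diag web-01"
        if (!match) return;
        // Run the canned script; results land in the channel, visible to all
        // stakeholders, with no context-switching out of the conversation.
        exec(`./diagnostics.sh ${match[1]}`, (err, stdout, stderr) => {
          chat.post(channel, err ? `diagnostic failed: ${stderr}` : stdout);
        });
      });
    }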

3. Standardized Languages

A corollary to the move away from proprietary technologies, this acknowledges that there are incredibly powerful languages in mainstream use that can be leveraged when it comes to extending the capabilities of a solution.

Where code needs to be written, modern vendors tend to adopt JavaScript. Fast, expressive and equally at home on the server or in the browser, JavaScript can accomplish in a few lines what it takes hundreds of lines to achieve with legacy tools.
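
As a hedged illustration of that economy of expression (the event shape, the “prod-” naming convention and the enrichment rule are all hypothetical, sketched here in TypeScript), a complete enrichment hook fits in a dozen lines:

    // Hypothetical event-enrichment hook: bump severity and tag events from
    // production hosts, based on an assumed "prod-" host-naming convention.
    interface Event { host: string; severity: number; tags?: string[] }

    function enrich(event: Event): Event {
      const isProd = event.host.startsWith("prod-");
      return {
        ...event,
        severity: isProd ? Math.min(event.severity + 1, 5) : event.severity,
        tags: [...(event.tags ?? []), isProd ? "production" : "lab"],
      };
    }

    // enrich({ host: "prod-db-01", severity: 3 })
    //   -> { host: "prod-db-01", severity: 4, tags: ["production"] }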

While we’re on the subject, I love JSON: a simple, lightweight data-interchange format that’s easy for both humans and machines to work with (unlike XML and the inappropriately named SOAP).

JavaScript Object Notation (JSON), especially when used in conjunction with Representational State Transfer (REST), creates Cloud-ready solutions.
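
For example, submitting an event over REST is just a JSON document and an HTTP POST. The endpoint URL, token and payload fields below are illustrative, not a documented API:

    // Posting an alert as JSON to a hypothetical REST endpoint (illustrative
    // URL, credential and payload fields; any HTTP client would do).
    const alert = {
      source: "db-03",
      severity: 4,
      description: "replication lag exceeds 30s",
      timestamp: new Date().toISOString(),
    };

    const response = await fetch("https://itoa.example.com/api/v1/events", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>", // placeholder credential
      },
      body: JSON.stringify(alert),
    });
    console.log(response.status); // e.g. 202 Accepted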

4. Open Source Integration

Up there with virtualization and social networking, Open Source has had a profound impact over the last two decades. We see it making a difference in a couple of key ways.

Firstly, by judicious use of open-source technology, companies such as Moogsoft can accelerate product development to build enterprise-class solutions. By leveraging trusted technologies in the public domain to provide basic, non-differentiated functionality, our resources can focus on the things that set us apart, like cutting-edge algorithms or seamless collaboration. In many cases, these technologies are the same ones trusted and relied upon by our biggest customers.

Secondly, by integrating with Open Source solutions, customers can deploy core OSS solutions as commodity management technologies. By using OSS “as-is”, they benefit from the lower costs without the burden of maintaining the code themselves. Customers can deploy unmodified Open Source tools and use solutions like Incident.MOOG to provide the custom enterprise “bells and whistles” so essential to their business. Truly the best of both worlds.

5. Compute

Virtual servers? On-demand resources? Public and hybrid-cloud solutions? Products built for today’s technology environment have an inherent advantage and can provide today’s enterprise with an unprecedented array of deployment options that reduce both cost and risk. That, plus the inexorable march of Moore’s Law, means it’s possible to build solutions that scale in a manner inconceivable 20 years ago.

6. Bus Technology

This is an example of how technologies that were unavailable 20 years ago are now built in and fully exploited. A powerful bus architecture allows you to maximize the benefits of virtualized, scalable computing and deliver functionality in real time.

In my previous life, when Netcool (now owned by IBM) was first conceived, we realized that to deliver the vision of a “single pane of glass” for large networks, the Relational Database Management Systems (RDBMSs) of the day wouldn’t be able to keep pace with event rates that were modest by today’s standards. To make it work, processing was handled in-memory, and data was discarded as soon as it had served its primary purpose. Even with that approach, a multi-tiered solution was necessary to provide sufficient scale, which led to multi-service architectures with servers dedicated to ingestion, processing and display. An unfortunate consequence of this architecture is the latency it creates between an event being ingested and finally being displayed in a UI where an individual can see it.

In large-scale systems this latency is measured in tens of minutes, and depending on the automations that have been configured, it’s possible for events to be cleared from the system before they’ve had a chance to be displayed. That’s an entire fault lifecycle with no operational visibility!

Bus architectures that are built into the core of the system (rather than retro-fitted) eliminate this problem. They allow scale-out architectures that are flatter and higher-performance. Used in combination with sophisticated techniques like unsupervised machine learning on today’s high-performance systems, you really do get real-time processing at scale!
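
To illustrate the contrast, here is a minimal publish/subscribe sketch in TypeScript. The Bus class is a hypothetical in-process stand-in for a real message bus such as Kafka or an AMQP broker; the point is that storage, correlation and display all react to the same message as it arrives, rather than waiting on store-and-forward tiers:

    // Minimal publish/subscribe sketch; Bus is a hypothetical in-process
    // stand-in for a real message bus (e.g. Kafka or an AMQP broker).
    type Handler<T> = (msg: T) => void;

    class Bus<T> {
      private subscribers: Handler<T>[] = [];
      subscribe(handler: Handler<T>): void { this.subscribers.push(handler); }
      publish(msg: T): void { this.subscribers.forEach((h) => h(msg)); }
    }

    interface RawEvent { host: string; text: string; receivedAt: number }

    const bus = new Bus<RawEvent>();

    // Storage, correlation and the UI subscribe independently; all three see
    // the event as soon as it is published, so there is no tier-to-tier lag.
    bus.subscribe((e) => console.log(`archive: ${e.host}`));
    bus.subscribe((e) => console.log(`correlate: ${e.text}`));
    bus.subscribe((e) => console.log(`render: ${e.host} at ${e.receivedAt}`));

    bus.publish({ host: "web-01", text: "link down", receivedAt: Date.now() });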

Richard Whitehead is the Chief Evangelist at Moogsoft.
