Beyond DevOps… APM as a Collaboration Engine

Gravitation towards fact-based constructive issue management spawned a whole new movement – DevOps

In the beginning there was a simple acronym: MTTI (mean time to innocence). Weary after years of costly and time-consuming war room battles, IT organizations turned to AppDynamics for an objective, application-level view of production incidents. As a result, application issues are swiftly pinpointed and fixed, accelerating time to repair by up to 90%.

In fact, this gravitation towards fact-based, constructive issue management spawned a whole new movement – DevOps – with the goal of ingraining this maturity and cooperative spirit into IT organizations from the ground up. Jim discussed the movement in a previous blog post. Of course, AppDynamics (or at least, easily accessible, fact-based information about application behavior in production) is a necessary prerequisite to this.

Looking back, before DevOps or even MTTI were topical buzzwords, this basic ability to foster communication between teams proved invaluable in the more drab and well-worn business realities of offshoring and outsourcing.

This blog reviews three real-life examples of this:

  • Managing external offshore development organizations
  • Facilitating near-shore development teams
  • Bringing external developments in-house

Managing external offshore development organizations

Some months ago, I did some work with a then prospect who had started a self-service trial of AppDynamics.

When I spoke to them, they were delighted with the visibility that AppDynamics provided out of the box for their .NET application, a SaaS Learning Management System.

Digging into what had sparked their interest in AppDynamics, they told me they had commissioned an outside development firm to rewrite their flagship application, which was somewhat dated and not architecturally fit to support some newer services the business wanted to offer to customers.

The good news was that the new version of the app was live and supporting around 10% of their existing customer base. The bad news? That 10% used the same hardware footprint as the remaining 90% did on the old system. Extrapolating this hardware requirement to the entire user base would not only require a new datacentre, but would also entirely break the business model for the application from an operational cost perspective (not the first time that hardware savings alone could pay for AppDynamics!).
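
To make the capacity math concrete, here is a minimal back-of-the-envelope sketch. The 10%/90% split comes from the story above; the absolute server counts are hypothetical.

```python
# Back-of-the-envelope extrapolation for the rewritten application.
# Assumption: the old system serves 90% of customers on a given hardware
# footprint, and the new system serves the remaining 10% on a footprint of
# the same size. The server counts are hypothetical.
old_servers = 40           # hypothetical footprint serving 90% of customers
new_servers = 40           # same-sized footprint serving only 10% of customers

old_cost_per_pct = old_servers / 90    # servers per 1% of the customer base (old app)
new_cost_per_pct = new_servers / 10    # servers per 1% of the customer base (new app)

print(f"New app is {new_cost_per_pct / old_cost_per_pct:.0f}x more expensive per customer")
print(f"Serving 100% of customers on the new app: ~{new_cost_per_pct * 100:.0f} servers "
      f"(vs. {old_servers} today)")
```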

For months prior to trying AppDynamics, the external developers had been under huge pressure to shrink the application footprint (and address some pretty lackluster performance too). Armed with only Windows performance counters and intuition, they had spent weeks optimizing slow database queries, which turned out to account for only 5% of the errant response times at a transaction level.
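
As a rough illustration of why OS-level counters mislead here, consider a hypothetical per-transaction timing breakdown (all numbers invented): the database, the obvious suspect from the counters, turns out to be a small slice of the end-to-end response time.

```python
# Hypothetical timing breakdown of one slow business transaction (milliseconds).
# Windows performance counters point at the database; a per-transaction view
# shows where the time actually goes.
segments_ms = {
    "web tier / serialization":  900,
    "remote service calls":     2400,
    "thread-pool / lock waits": 1200,
    "database queries":          250,  # the part that had been tuned for weeks
}

total_ms = sum(segments_ms.values())
for name, ms in sorted(segments_ms.items(), key=lambda kv: -kv[1]):
    print(f"{name:26s} {ms:5d} ms  ({ms / total_ms:5.1%})")
```

With numbers like these, weeks of query tuning can only ever recover about 5% of the response time.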

Having put AppDynamics in place, the prospect:

  • Easily found specific application bottlenecks, allowing them to focus developers on high-impact remediation
  • Could verify the developers had made the required improvements with each new release

Clearly, huge benefits at a technical level.

At a higher level, this helped lead to a more constructive relationship between the development shop and their customer – moving things away from the edge of litigation, constant finger-pointing, and blame shifting.

Facilitating near-shore development teams
Another group I have worked with recently is responsible for a settlement system within a large global investment bank based in London. The system is developed in-house and, as is typical of most financial services institutions, the development team itself is located near-shore in Eastern Europe to cut costs. The development process is Agile, with new releases every few weeks.

Inevitably, with new releases can come new production issues and – of course – the best people to deal with these during the “bedding in” period are the developers themselves.

Another thing that is very common in the financial services industry is regulation, and this poses a problem in this scenario. Nobody is permitted to directly access the production systems from outside the UK due to data privacy regulations.

This means hands-on troubleshooting must be left to the on-shore architecture staff, who are not only expensive but also less well-equipped than the developers themselves to dig into issues in new code.

Enter AppDynamics. Our agents deployed in production make all the performance data readily available to anyone with the appropriate credentials, but – critically – accessing it does not expose ANY business data from the production system. Now the near-shore development team can look directly at the non-functional behavior of their code in production, eliminating the time previously spent gathering enough log data to reproduce issues in test environments. Bingo – the business case for the AppDynamics purchase is made!
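
AppDynamics provides this separation out of the box; purely as a generic illustration of the principle (this is not AppDynamics' API or configuration), the sketch below shows what "performance data without business data" means in practice: the exported record keeps timing and topology metadata and deliberately drops payloads and identifiers.

```python
# Generic illustration (not AppDynamics' API): export only the non-functional
# metadata about a transaction, so a remote team can see *how* the code behaved
# in production without ever seeing *what* data it processed.

def to_performance_record(transaction: dict) -> dict:
    """Keep timing/topology fields; drop anything that could carry business data."""
    allowed = {"name", "tier", "duration_ms", "db_time_ms", "error", "downstream_calls"}
    return {k: v for k, v in transaction.items() if k in allowed}

raw = {
    "name": "SettleTrade",
    "tier": "settlement-service",
    "duration_ms": 5170,
    "db_time_ms": 250,
    "error": False,
    "downstream_calls": ["fx-rates", "ledger"],
    # business data that must never leave the production environment:
    "account_id": "GB00-EXAMPLE",
    "trade_payload": {"isin": "XS-EXAMPLE", "notional": 1000000},
}

print(to_performance_record(raw))
```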

There is an interesting side note to this, which applies much more widely too. Many customers have observed an "organic" improvement in service levels once AppDynamics is installed in production. For the first time, developers can see how their code actually behaves in the wild. Developer pride kicks in, and suddenly non-functional stories are added to development backlogs to fix latent issues that are observed – issues that would previously have gone unnoticed.

Bringing external developments in-house
Of course, as we all know, the only constant in life is change, so no outsourcing arrangement is a one-way journey. As a result, I have come across several organizations that are now working in-house on projects which were previously outsourced. Once these customers have completed the initial challenge of recruiting a new development team, they then need to get their arms around the existing codebase. Handover workshops can help with this, but in many cases these systems have been out- and in-sourced several times, with many changes of personnel along the way. There is only so much you can distill onto a whiteboard in a brain-dump session, however long and well-intentioned.

It is here that the high-level visibility AppDynamics provides can be invaluable. Out of the box, AppDynamics instruments previously unseen systems, automatically detecting and following transactions and drawing up flow maps. The end-to-end visibility of the entire system greatly eases the process. In fact, this system overview (and the ability to see how it changes over time) has proved invaluable to many customers for reasons well beyond wholesale in- (or out-) sourcing, such as onboarding new team members, verifying that externally developed code changes comply with architectural governance, and so forth.
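
The flow maps come straight out of the product, but conceptually they are just an aggregation of observed calls between tiers. A minimal sketch of that idea, using hypothetical trace data rather than AppDynamics' internal format:

```python
# Conceptual sketch: derive a service-dependency "flow map" from traced
# transactions by aggregating (caller tier -> callee tier) hops.
from collections import Counter

# Hypothetical traces: each is an ordered list of (caller, callee) hops.
traces = [
    [("web", "order-service"), ("order-service", "inventory"), ("order-service", "db")],
    [("web", "order-service"), ("order-service", "db")],
    [("web", "catalog-service"), ("catalog-service", "db")],
]

edges = Counter(hop for trace in traces for hop in trace)

for (caller, callee), count in edges.most_common():
    print(f"{caller:16s} -> {callee:16s}  {count} calls observed")
```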

Conclusion
In summary, AppDynamics does not have to be all about troubleshooting and MTTI.  Nor even necessarily about DevOps and brave new worlds. The easily configured deep insight that we provide into the dynamic behavior of your applications has many uses – and business cases – beyond the traditional MTTI/MTTR domain.  APM is, after all, just one use-case (albeit an important one) for our Application Intelligence Platform.

Take five minutes to get complete visibility into, and control over, the performance of your production applications with AppDynamics Pro today.


