Fixing Real Problems with Real User Monitoring

In production support it’s hard to correlate what may be happening on local servers with what users are reportedly experiencing

In production support it is often hard to correlate what might be happening on local servers with what users are reportedly experiencing. In April, the developers of a Java application that handles electronic distribution of scanned mail and electronic faxes began receiving reports that the application was running slowly from remote offices. They came to our Performance Availability and Capacity Management (PaCMan) team for help in determining the cause. From vanilla server-side dynaTrace, everything looked fine: no particular transaction was taking a substantial amount of time. This led us to believe the problem lay on the client side. Coincidentally, we were in the middle of a proof of concept for dynaTrace User Experience Management (UEM), so we decided to apply that effort to this application to help identify the issue.

Getting insight into the actual click paths of end users and the load behavior of pages on certain browsers allowed us to improve client rendering time by 47% and overall page load time by 29%, and to implement a Struts feature that prevents users from impatiently clicking Save multiple times and causing problems in our server-side implementation.

Finding #1: JavaScript Load Behavior causing problems on IE7
After configuring UEM and collecting data for a couple of days, the analysis began. The first thing to become readily apparent with UEM was that a large amount of time was being spent on the client side for rendering; this stood out once the server-side contribution, network contribution, and estimated client time were plotted on the same graph. The application developers set about correcting this immediately. The client browser was upgraded from IE 7.0 to IE 9.0, and several common web performance optimizations, such as changing the load behavior of JavaScript files, were made to reduce the render time. Figure 1 shows the amount of time spent on the client side during a typical work week before (dashed line) and after (solid line) these changes were implemented. The result was an average reduction of 608ms (47.57%) in client-side rendering time.

Figure 1: Changing JavaScript load behavior helped to improve client-side Rendering Time by 47%
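
The article doesn't show the exact change, but a common way to fix blocking JavaScript loads on older IE versions is to defer external scripts (or move them to the end of the body) so HTML parsing and rendering are not stalled. A minimal before/after sketch, with app.js as a hypothetical script name:

<!-- Before: a blocking script in <head> halts parsing and rendering
     until the file is downloaded and executed, which is especially
     costly on IE7's slower JavaScript engine -->
<script type="text/javascript" src="app.js"></script>

<!-- After: defer postpones execution until the document has been
     parsed, letting the page render first -->
<script type="text/javascript" src="app.js" defer="defer"></script>

Either approach trims the time the browser sits blocked on script execution before first render.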

Finding #2: "Impatient" users repeatedly triggering the Save action
UEM also gives you the ability to directly correlate user actions at the client with what happens on the server. Using this ability, we identified that some users were exacerbating their slowdowns by repeatedly clicking buttons and hitting refresh while experiencing an issue. The developers plan to fix this by implementing an Apache Struts feature that allows only the first press of a button to trigger an action. Figure 2 shows a user who was experiencing slowdowns due to network latency (viewable in UEM) but was compounding them by repeatedly clicking the "Save to XXXX" button and the refresh button before the page had finished loading.

Figure 2: Getting insight into end users' actions revealed problems with users impatiently clicking Save several times
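
The article doesn't name the exact Struts mechanism; a common candidate is the synchronizer-token support built into Struts 1 (saveToken/isTokenValid), which rejects every submission after the first. Below is a minimal sketch, assuming Struts 1 and a hypothetical SaveMailAction; the action that renders the form would call saveToken(request) first, so the JSP embeds a one-time token in a hidden field.

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

// Hypothetical action illustrating double-submit protection.
public class SaveMailAction extends Action {
    @Override
    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request,
                                 HttpServletResponse response) throws Exception {
        // Only the first click carries a valid token; repeat clicks and
        // refresh-resubmits are turned away before any work is done.
        if (!isTokenValid(request)) {
            return mapping.findForward("duplicateSubmit");
        }
        resetToken(request); // consume the token immediately

        // ... perform the (potentially slow) save here ...

        return mapping.findForward("success");
    }
}

A client-side guard, such as disabling the button on first click, complements this, but the server-side token is what actually prevents duplicate processing.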

For findings 3 and 4, and for further insight, see the full article.

More Stories By Derek Abing

Derek Abing is a Systems Engineer with a leading insurance company, focusing on application performance.
