Bringing Data Center Visibility to Cloud Services
The move to hybrid IT and ubiquitous cloud deployment has accelerated, and Gartner highlights the trends driving this growth and shaping the future of the public cloud. This rapid migration is causing headaches for anyone responsible for managing the network, maintaining peak application performance, and mitigating the damage from inevitable cybersecurity breaches.
Why are IT teams blinded by cloud services?
Historically, when most if not all applications were hosted on-premises, the best network performance monitoring (NPM) solutions included, at a minimum, a mixture of packet capture and analysis, NetFlow, and SNMP polling. It was straightforward to instrument enterprise data centers with TAPs, packet brokers, and flow collectors. Together with ongoing polling of key devices, this gave IT operations teams the network visibility they needed to understand how the network was performing and to troubleshoot issues. Better visibility came at a price, however: packet capture is far more expensive than flow data, so IT teams had to make hard choices, deploying packet capture only where it was needed most, at the core. Remote and branch offices were monitored mainly through the less expensive flow-based approach. Even with cloud migration happening quickly, most enterprises still host some applications internally, so this approach remains widely used.
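To make the polling half of that strategy concrete, here is a minimal sketch of querying an interface traffic counter over SNMP using the classic hlapi interface of the open-source pysnmp library (4.x). The device address, community string, and interface index are hypothetical placeholders:

```python
# Minimal sketch: poll a standard IF-MIB counter from a network device,
# the kind of ongoing device polling described above. Assumes pysnmp 4.x
# and a device reachable over SNMPv2c with a read community string.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, getCmd,
)

def poll_if_in_octets(host: str, community: str = "public", if_index: int = 1) -> int:
    """Read IF-MIB::ifInOctets (OID 1.3.6.1.2.1.2.2.1.10) for one interface."""
    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),   # mpModel=1 selects SNMPv2c
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(f"1.3.6.1.2.1.2.2.1.10.{if_index}")),
        )
    )
    if error_indication:
        raise RuntimeError(str(error_indication))
    if error_status:
        raise RuntimeError(error_status.prettyPrint())
    # var_binds is a list of (OID, value) pairs; return the raw counter value
    return int(var_binds[0][1])

if __name__ == "__main__":
    print(poll_if_in_octets("192.0.2.10"))  # hypothetical router address
```

Polling a counter like this on a schedule and differencing successive samples yields the per-interface throughput trend that NPM dashboards are built on.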
However, for Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) resources hosted in the cloud, this traditional NPM instrumentation strategy breaks down. There is no physical network to tap, so IT teams are left "flying blind," unable to see how their services are performing until a user complaint lands on their plate. ITOps can't be proactive and head off problems before they impact the business.
Let's look at a recent customer example to illustrate the problem. A major bank in the Americas had been seeing frequent issues with their online banking platform and decided to re-architect it and move it to an AWS EC2 cloud environment instead of hosting it in their own data center. This coincided with a major digital transformation initiative to transition other on-premises systems to SaaS applications. When they deployed a test version of the new banking platform, they realized they no longer had the visibility needed to keep the banking app up and running 24x7. They had been using Observer in their own data center for some time and came to VIAVI for help.
How Can Observer Restore Insight and Control?
The Observer 3D Platform comprises a suite of complementary, integrated monitoring tools that work together to provide insight both on-premises and in the cloud. For every on-premises component there is a cloud-based variant, giving you network performance visibility across your entire ecosystem. All of the data collected is turned into actionable insight by Apex, the analytics and intelligence component of the platform.
End-user experience monitoring gives ITOps teams a single numerical score that helps pinpoint potential problems affecting end users before they turn into major incidents.
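To illustrate the general idea of a single experience score, here is a hypothetical sketch that collapses a few raw network metrics into one 0-100 number. The metrics, weights, and thresholds are invented for illustration only; Apex's actual scoring model is proprietary and not shown here:

```python
# Illustrative only: a hypothetical composite "end-user experience" score.
# Higher is better; each metric contributes a capped penalty so that one
# badly degraded dimension immediately drags the score down.

def experience_score(latency_ms: float, packet_loss_pct: float,
                     retransmit_pct: float) -> float:
    """Map raw network metrics onto a single 0-100 score (hypothetical weights)."""
    latency_penalty = min(latency_ms / 500.0, 1.0) * 40     # up to 40 points
    loss_penalty = min(packet_loss_pct / 5.0, 1.0) * 35     # up to 35 points
    retrans_penalty = min(retransmit_pct / 10.0, 1.0) * 25  # up to 25 points
    return round(100.0 - latency_penalty - loss_penalty - retrans_penalty, 1)

# A healthy session scores high; a degraded one stands out at a glance.
print(experience_score(latency_ms=40, packet_loss_pct=0.1, retransmit_pct=0.5))   # 94.9
print(experience_score(latency_ms=350, packet_loss_pct=2.0, retransmit_pct=6.0))  # 43.0
```

The value of the single number is operational: it can be trended, baselined, and alerted on without anyone having to interpret each underlying metric first.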
Returning to the bank's story: using VIAVI Observer Apex for analytics, they deployed GigaStor Software Edition in their cloud environment to capture packets. We also gave them visibility into their new SaaS applications, using GigaTest active monitoring to benchmark and track application uptime, while Observer GigaFlow provided visibility into additional data sources such as AWS VPC Flow Logs.
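For context on that last data source: AWS can emit flow records for a VPC natively, and a flow collector then ingests them. Below is a minimal sketch of enabling VPC Flow Logs with boto3, assuming hypothetical resource identifiers and IAM permissions that are already in place:

```python
# Minimal sketch: turn on AWS VPC Flow Logs, delivered to S3, so that a
# flow collector can ingest them. The VPC ID and bucket ARN are
# hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],                  # hypothetical VPC
    TrafficType="ALL",                                      # accepted and rejected traffic
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-flow-log-bucket",  # hypothetical bucket
    MaxAggregationInterval=60,                              # 1-minute aggregation windows
)
print(response["FlowLogIds"])
```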
Resources
Products
- Observer Apex: Gain comprehensive cloud-to-on-premises service visibility with end-user experience scoring.
- Observer GigaFlow: More than just flow; user, machine, network, and application data in a single enriched record.
- Observer GigaStor: Enable end-user experience scoring with the best packet capture, analysis, and storage solution in the industry.