Network Troubleshooting Latency Vs Retransmission

When troubleshooting network performance problems, most analysts find themselves chasing one of two issues: latency or retransmissions. Both scenarios result in performance degradation, but have very different root causes and solutions.

I’ve been involved in many troubleshooting exercises where I get a totally different perspective by changing my test point. In this video, I’ll show you how the same problem can look like a latency problem or retransmission-related issue by simply changing your location in the network.

Here are some additional tips when trying to identify if an issue is latency or retransmission related:

  • Try to monitor from the sender’s perspective. If the sender is not physically close, then make yourself the sender by uploading a file or running iperf.
  • Pay attention to your protocols. The information presented here applies to TCP-based protocols; UDP is a totally different animal.
  • Try to leverage operating system commands like netstat -s to identify retransmissions.
  • Understand what your tools are reporting. For example, Wireshark might flag a retransmission, spurious retransmission, fast retransmission, or other notes.
  • Look for TCP-specific hints like the SACK Left Edge (SLE) and SACK Right Edge (SRE) fields.
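As a quick illustration of the netstat -s tip above, retransmission counters can be scraped from its output with a few lines of scripting. This is a minimal sketch; the exact counter wording varies by operating system, and the sample text here is illustrative, not real output:

```python
import re

def retransmission_counters(netstat_output: str) -> dict:
    """Pull TCP retransmission-related counters out of `netstat -s` text.

    Counter wording varies by OS, so we match any line mentioning
    'retransmit' and grab the first number on it.
    """
    counters = {}
    for line in netstat_output.splitlines():
        if "retransmit" in line.lower():
            m = re.search(r"(\d+)", line)
            if m:
                counters[line.strip()] = int(m.group(1))
    return counters

# Sample Linux-style output (illustrative only):
sample = """\
Tcp:
    4279 segments retransmitted
    12 fast retransmits
"""
print(retransmission_counters(sample))
```

Run this periodically from the sender's side: a counter that climbs while you transfer data points to a retransmission problem rather than plain latency.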

Super Smartphone Apps For Network Administrators

Network analysis tools are a must-have for networking professionals, providing crucial insight into performance and helping to solve bottlenecks and slowness. The right statistics and data about traffic flows, device configurations, and user behavior can identify problems quickly, or even before they actually happen.

Having that information immediately accessible — literally in the palm of your hand — can make things even easier. The use of mobile apps has exploded, and software has matured from games and entertainment to tools robust enough to use on the job. IT pros can increase their productivity and save precious time with access to network data with a simple tap on a smartphone, whether they are in the office, relaxing at home, or commuting on the train.

Here we highlight some of the highest user-rated network utilities — some developed for iOS, some for Android, and some available for both. We chose independent tools that are not simply an extension of a larger network management platform, so no other software is necessary. An extra bonus is that most of these apps are free or

Building A WAN Using Cisco DMVPN

Dynamic Multipoint VPN technology enables organizations to connect their offices via VPNs built over the Internet. Here’s how it works.


WANs have been around for a long time. The first networks were built for local needs but once they became popular, there was a need to be able to connect offices to each other. Frame Relay was one popular technology for building WANs and connecting offices. Frame Relay, a non-broadcast multiple access (NBMA) technology, was normally used to build a hub-and-spoke network. In hub-and-spoke networks, traffic is passed from spokes to the hub and then to other spokes.

Today, many organizations are turning to the Internet for their WAN needs. Why? There's potentially a lot of money to be saved by buying Internet circuits as opposed to other technologies such as Multiprotocol Label Switching (MPLS). For large organizations with many WAN circuits, the savings can run into the millions.

Dynamic Multipoint VPN (DMVPN) is a Cisco technology that’s very popular for building VPNs over the Internet. This is a design blog so

The New Network Management Tactic: Bandwidth On Demand

There’s nothing like the holiday season to give enterprise network administrators a renewed sense of urgency. This is the time of year when many enterprises see significant spikes in network activity, from the online rush of holiday shopping to the seasonal complexities of shipping logistics. Network performance is paramount during this period for many enterprises; even small hiccups in performance can have a big impact on an enterprise’s bottom line.

Network volatility, it turns out, is the new normal for many businesses. Enterprises are facing an exponential increase in data traffic driven by more mobile devices, social media, video communications, big data and other day-to-day realities of the mobile cloud era. This creates a new problem for CIOs, who now need to manage more (and different) kinds of traffic over the same WAN. Balancing network resources can be particularly challenging, especially in an environment with fixed network capacity.

In order to meet service level agreements (SLAs) for their customers, notably during problematic “peak” periods of network activity, CIOs have traditionally followed one of two courses. One option is to simply buy more bandwidth in the form of additional trunks, switches, and servers. This can be an expensive proposition and often results

Can Your Laptop Battery Be Overcharged?

Most laptops today use lithium-ion (Li-ion) batteries. Overcharging a Li-ion battery is not normally a problem and does not affect its life span. These batteries can be charged 300 to 500 times, and they have an internal circuit that stops the charging process at full charge. This control system prevents overcharging, which could cause the battery to overheat and potentially burn; it is also part of why Li-ion batteries are more expensive. The only way for a Li-ion battery to overcharge is if the charging system malfunctions, in which case the battery will heat up while in the charger. If you don't plan to use your laptop for a long period of time, you can extend the life of the battery by storing it with about a 50 percent charge. A fully discharged battery left for a long period of time will lose its charging capacity, and fully charged batteries slowly self-discharge and lose effectiveness when left unused. Unlike older nickel-based batteries, a Li-ion battery should not be run down until it is almost out; running a Li-ion battery down completely will diminish its capacity. Keep the battery in a cool place and do not store

Computer Forensics on the Fly

Incident responders regularly rely on Linux distributions like BackTrack 5 R3 (which is very stable), BackTrack Reborn, Kali Linux, and SIFT (the SANS Investigative Forensic Toolkit) for general-purpose incident response. Although these are the most stable general-purpose incident response distributions, Deft Linux is another distribution becoming more prevalent in IR forensics toolkits.

Deft Linux

Deft Linux is a forensics distribution of the Linux operating system with resident tools geared toward computer forensics and computer incident response. It also focuses on network forensics and cyber intelligence. The version of this Linux distribution currently in most common use is based on Ubuntu 11.10. To view the release, a user would open a command line and type:

% cat /etc/lsb-release

This particular Deft Linux distribution sits on top of the 11.10 version of Ubuntu. On the project site, it is available as an ISO image which can be used to create a live CD, or a live CD can be ordered. You just download Deft and use an unzip program (such as WinZip or 7-Zip) to unzip the file. You can use
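Once you have the release file's contents, confirming the base release programmatically takes only a few lines. A minimal sketch; the sample text below mimics an Ubuntu 11.10 lsb-release file and is not taken from an actual Deft image:

```python
def parse_lsb_release(text: str) -> dict:
    """Parse KEY=value lines from an lsb-release style file."""
    info = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            info[key.strip()] = value.strip().strip('"')
    return info

# Illustrative file contents (Ubuntu 11.10 style):
sample = """\
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=11.10
DISTRIB_CODENAME=oneiric
DISTRIB_DESCRIPTION="Ubuntu 11.10"
"""
info = parse_lsb_release(sample)
print(info["DISTRIB_RELEASE"])
```

In practice you would pass the real contents of /etc/lsb-release instead of the sample string.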

Link Building & Signal

For search engines that crawl the web, links are the roads between pages. Using sophisticated link analysis, the engines can determine how pages are related to each other and in what ways. Link building describes activities aimed at increasing the number and quality of inbound links to a page.

Links that are given naturally by sites and pages that want to link to your content or company act as natural links. These links require no specific activity from the SEO, other than creating worthy material (great content) and having the ability to build awareness of it. Through link analysis, search engines can evaluate not only the popularity of a website or page based on the number and popularity of pages linking to them, but also metrics like trust, spam, and authority. SEO professionals often suggest that links are a very good way of identifying expert documents on a given subject.

Link Signals Used by Search Engines

It's critical to understand the elements of a link used by the search engines, as well as how those elements factor into the weighting of links in the algorithms. Search engines use links in numerous different ways. While we don't understand all of the link attributes measured

How to Remove JS/ClickJack

JS/ClickJack is a sneaky and dangerous virus that infects both 32-bit and 64-bit Windows systems. Normally, the Trojan enters your computer by exploiting system flaws. Once successfully installed, it modifies system registry entries and creates many malicious files, which enable it to execute automatically on every Windows startup. The presence of the JS/ClickJack virus makes your machine run slowly, as it takes up a great amount of system resources. In addition, it is capable of hijacking your web browsers, blocking your downloads, disabling your programs, and so on. If not deleted in time, the Trojan will even download additional malware, such as the FBI virus or the Montera toolbar, to damage your workstation further or steal your vital information without your knowledge. So you ought to remove the JS/ClickJack virus from your PC without delay, before it causes further issues.

Problems from JS/ClickJack

When you start the system, the JS/ClickJack virus runs in the background and consumes lots of system resources. As a result, you encounter problems such as pop-up errors, insufficient system space, invalid programs, and an unstable Internet connection. Each time you connect to the network, this dangerous virus can use browser flaws to download potentially unwanted programs onto

Why Heat Dissipation Is Important To Your Trading Computer

Heat shortens the life span of your trading computer. It's like rust on a classic car: over time, without the right care, it will eat away at the metal and completely ruin it. Your internal computer components do not like to be exposed to heat, which is why cooling and heat dissipation are essential to your trading computer. There is good news here, though. The best stock trading computers have several defenses built in to battle this issue, which is why a computer built for trading comes with stronger lifespan guarantees. These types of issues are thought of in advance and built into the design of the computer.

Allowing your computer to become too hot is horrible for the internal components. Not only will this dramatically shorten the lifespan of your trading computer, it can also cause damage or even data loss. High temperatures will affect your processor and motherboard, which is why your trading computer needs a great cooling system. A workhorse like your trading computer should not be built without the proper cooling system thought out in advance.

Computer Forensics Services Organizations Offering Training and Work Experience

Business and government organizations have shifted from paper to digital records, and banks too have computerized their operations. Digitization has given rise to cases of cyber crime in which an individual may be rightly or wrongly accused. Where digital data is involved, defense attorneys as well as the prosecution turn to experts in computer forensics to gather vital data from a computer, laptop, or smartphone's memory.

In recent times there has been a rise in cyber crime and, with it, a rise in demand for computer forensic services. An individual may be wrongly accused in a cyber crime case, and in such instances his defense attorney would employ a computer forensics specialist who follows a legally accepted process to uncover evidence that will stand up in a court of law. The critical factor is that such data collection needs to be carried out in a way that accesses even encrypted or deleted data but leaves the structure and original data intact, i.e., it should be minimally invasive. This is not a task for just any IT expert; it requires specialized training and the use of specialized equipment that guarantees the original data remains unchanged while information is extracted. The process

Things to Know Before a Computer Setup and Installation

If you have just started your business organization, you may well be puzzled about how to set up all the computer systems in your office. But you do not need to worry, as there are many companies that can help you do it. These service providers have highly qualified people on their teams who will come to your office and help you set up all the systems. But before you hire any such company to set up your computers, there are a few things you should know about the services they provide.

Types of Services:

These companies provide various types of services to suit all your requirements. If your business depends heavily on computers and you need immediate solutions for everything, you can hire online services, where experts will communicate with you through email or phone and help you with the installation process. But if you want the team to visit your company in person, you can hire on-site services. They will give their undivided attention to each and every system and make sure all the programs are installed properly so that there is no trouble in the

Next Generation Cloud Analytics with Amazon Redshift

Amazon Redshift is changing the way companies collect and store big data. Services like Redshift harness the power of cloud computing for data warehouse purposes, and this Amazon cloud solution allows corporations to deploy data warehousing more effectively than ever. Redshift is Amazon's storage solution that lets business owners move their data warehouse to the cloud for much less than traditional options.

The main focus is on storage, and Redshift is prepared to meet your data warehouse needs. The available cost options are compelling: with no long-term commitments or up-front expenditure, Amazon provides "pay as you go" pricing, giving you the freedom to choose as much storage as you need. It's not always easy to gauge your resource requirements; you may provision fewer resources than needed, or you could allocate unnecessary resources and not take full advantage of the return on your investment.
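To make the pay-as-you-go tradeoff concrete, here is a toy cost comparison. The rate and node counts are hypothetical placeholders for illustration, not real Redshift pricing:

```python
def on_demand_cost(nodes: int, hourly_rate: float, hours: float) -> float:
    """Pay-as-you-go: you pay only for the node-hours actually used."""
    return nodes * hourly_rate * hours

# Hypothetical figures for illustration only (not actual Redshift rates):
RATE = 0.25  # dollars per node-hour (assumed)
always_on = on_demand_cost(nodes=4, hourly_rate=RATE, hours=720)    # a full month
scaled_down = on_demand_cost(nodes=4, hourly_rate=RATE, hours=200)  # off nights/weekends
print(f"always-on: ${always_on:.2f}/month, scaled-down: ${scaled_down:.2f}/month")
```

Run the same arithmetic against your own workload profile to see where pay-as-you-go beats a fixed, always-provisioned warehouse.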

Amazon cloud solutions allow for flexibility, so you can keep the right balance. If you decide to terminate your relationship with Amazon Redshift, you can cancel at any time. Redshift has opened doors for small businesses to make use of big data analysis and data warehousing without a large

VMware Expanding NSX Security

Non-stop news of security breaches has made data security a top-of-mind issue in the enterprise. Not surprisingly, security was a hot topic at this week’s VMworld, where VMware executives continued to push virtualization and its NSX platform as a way to tackle the security problem.

Security has been messy and complicated, where the investment in numerous security products bolted onto servers and network infrastructure isn’t paying off in increased protection, VMware CEO Pat Gelsinger said in his keynote.

“Virtualization provides the fundamental requirement allowing us to architect for security,” by allowing precise and dynamic binding of security services to applications, data and users, he said. “Now we can truly architect in security….Architected-in security allows us to be twice as secure at half the price.”

VMware has said that security has been a top use case for its NSX network virtualization platform with its micro-segmentation capabilities. This week, the company provided a view into additional security services its engineers are developing in NSX, specifically network encryption.

Tom Corn, senior VP of security products at VMware, joined Martin Casado, general manager and senior VP of

Troubleshooting MTU and ICMP Issues

Where do you draw the line between troubleshooting and optimization?

I believe the difference is related to your threshold of pain. In other words, how long are you (or your clients) willing to put up with an issue before you start investigating? I can’t count the number of times I have overheard a complaint and investigated, only to find out there was a legitimate technical issue that needed to be addressed.

The saying "if it ain't broke, don't fix it" does not apply to the networking, medical, or automotive fields. In the past 10 years, troubleshooting has changed from "break and fix" to "slow and ignored." Break and fix has a more straightforward approach and goal: find the device that failed, fix it, and test it to make sure it is back up. Performance issues can be more time-consuming, since variables such as the client's configuration, location, network, server, and other components may all contribute to an overall performance issue.

This video deals with an issue that I run into often, but is typically overlooked because it doesn’t cause an obvious outage. These issues typically rear their ugly heads when you are in the middle of something important, like upgrading
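One concrete MTU technique the title alludes to is path MTU discovery: probe with the don't-fragment bit set and search for the largest size that gets through. The sketch below models only the search logic; the probe function here is a stand-in you would replace with real DF-bit pings (e.g. via raw sockets), and the 576/9000 bounds are assumptions:

```python
def discover_path_mtu(probe, low=576, high=9000):
    """Binary-search the largest payload that traverses the path unfragmented.

    `probe(size)` should send a packet of `size` bytes with the DF bit set
    and return True if it got through (no ICMP 'fragmentation needed').
    `low` is treated as an assumed safe minimum.
    """
    best = low
    while low <= high:
        mid = (low + high) // 2
        if probe(mid):
            best = mid       # this size fits; try larger
            low = mid + 1
        else:
            high = mid - 1   # too big; try smaller
    return best

# Simulate a path whose MTU is 1400 bytes:
print(discover_path_mtu(lambda size: size <= 1400))
```

A path MTU smaller than the endpoints expect, combined with filtered ICMP "fragmentation needed" messages, is exactly the kind of silent issue that surfaces only mid-upgrade.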

Why Hyperconvergence Needs Networking

Hyperconverged systems are attractive alternatives for the data center, but the hot trend may fizzle out if vendors don’t incorporate networking into the mix.

Hyperconverged systems are challenging the status quo in the data center. All-in-one solutions from companies like Nutanix, SimpliVity, and Scale Computing are credible alternatives to traditional architecture that requires separate purchases for compute power and data storage. Beyond ease of use, the message is clear to potential customers: hyperconverged systems allow for scale-out simplicity as you grow.

But a critical piece is missing from these systems, which may end up stopping the hyperconvergence movement before it goes any further: integrated networking.

Hyperconverged products concentrate on optimizing the storage controller along with the CPU. By creating more efficient ways of writing data to disk, the vendor can control how data is accessed and stored by the system. This allows the architecture to be built around the idea that customers will eventually buy more units to grow. Traditional architectures that do not unify storage and compute do not integrate as tightly, because they were never designed from the beginning to work together. Hyperconverged systems can be

Banks Bet Their Money On Private Cloud

The hyper-connected nature of our world has been disrupting industries since the beginning of the Internet revolution, and the financial industry is no different. Financial firms including retail banks are challenged with new consumer demands and also face competition from outside the traditional banking industry. Tech companies with new disruptive business models such as Google, Apple, PayPal and Square are now seen as a very real threat.

In order to compete in this rapidly changing digital market, traditional banks must leverage their wealth of consumer information, analytics and connectivity in new ways, and must be able to transform their existing business and to enhance their customers’ experiences, both digitally and face to face.

The process of transforming a bank to achieve a consistent omni-channel experience and a competitive advantage with clients requires a solid digital transformation strategy. This strategy must take into account consumer and market demands, regulation, control, and security that will enable banks to achieve bigger profits and assure their long term survival.

To achieve this transformation, banks are building and leveraging private clouds. The cloud enables a bank to place their customer at the center of their business so they can

WLAN Spending Fuels Enterprise Network Market Growth

The enterprise networking market grew 6% in the first half of this year as organizations overhauled their network architectures with a focus on adding more wireless capability, according to Technology Business Research.

Companies spent money to modernize their networks, driving $31.5 billion in revenue for networking vendors, a 6% increase year-to-year, TBR’s Enterprise Network Vendor Benchmark research showed.

“Customers are overhauling their networks with a focus on leveraging big data, enabling mobile productivity, and improving responsiveness and efficiency of the business,” Krista Macomber, a TBR data center analyst, said in an email interview. “As a result, customers are steadily deploying wireless LAN solutions.”

TBR estimates that wireless revenues tracked in its benchmark report grew 10.4% year-to-year in the first half of 2015, outpacing all other segments, she said.

While companies are increasingly interested in software-defined networking, most aren’t making wholesale migrations to SDN, Macomber said.

“Actual deployments are more incremental as customers avoid ripping and replacing existing hardware and test the benefits of the architecture in select workloads,” she said.

Still, the stage has been set for longer term transformation. “As

3 Key WAN Architecture Considerations

Today, network organizations face a large and growing set of WAN architecture options. In my last column, I discussed one of those alternatives: where to locate key functionality. In this column, I’ll discuss other WAN architectural alternatives and challenges facing network organizations.

Dynamic multi-pathing

Being able to load-balance traffic over multiple WAN links isn't a new capability. However, in a traditional WAN, this capability was difficult to configure, and the assignment of traffic to a given WAN link was usually done in a static fashion. One of the downsides of static load balancing is that the assignment of traffic to a given WAN link can't change even in the face of adverse conditions, such as a congested link.

There's now functionality available that enables dynamic load balancing over WAN links based on a combination of policy and WAN link characteristics. One approach to leveraging this functionality is to dynamically load balance traffic over both MPLS and Internet links, with the goal of reducing the capacity, and hence the cost, of the MPLS links and replacing the reduced MPLS bandwidth with relatively inexpensive Internet bandwidth.
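The dynamic behavior described above can be sketched as a weighted selection over link headroom. This is a toy model under assumed link metrics, not any vendor's algorithm; the link names and figures are illustrative:

```python
import random

def pick_link(links):
    """Pick a WAN link weighted by measured headroom (capacity minus load).

    Toy model of dynamic multi-pathing: unlike static assignment, a
    congested link automatically attracts little or no new traffic.
    """
    weights = [max(l["capacity_mbps"] - l["load_mbps"], 0) for l in links]
    total = sum(weights)
    if total == 0:
        return random.choice(links)  # everything congested: fall back to random
    r = random.uniform(0, total)
    for link, w in zip(links, weights):
        if w and r <= w:
            return link
        r -= w
    return links[-1]

links = [
    {"name": "mpls",     "capacity_mbps": 100, "load_mbps": 95},   # nearly full
    {"name": "internet", "capacity_mbps": 500, "load_mbps": 100},  # lots of headroom
]
# New flows land mostly on the Internet link (400 vs. 5 Mbps of headroom).
```

A real implementation would also fold policy into the weights (e.g. pinning voice traffic to the MPLS link regardless of headroom), which is the "combination of policy and WAN link characteristics" the text describes.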

An alternative approach is to use

Corporate VPNs In The Bullseye

Virtual private network (VPN) connections can provide a false sense of security, and two separate, newly discovered attack campaigns exploiting the much-vaunted corporate channel serve as a wake-up call for how attackers can abuse VPNs.

Researchers at Volexity have witnessed attackers going after the corporate VPN by altering the login pages of Cisco Systems' Web-based Clientless SSL VPNs, injecting JavaScript code into the login pages in order to pilfer corporate user credentials at the VPN login phase. It's all in the name of the "P" in APT: "persistence."

Meanwhile, enSilo researchers spotted a cyber espionage attack using a remote access Trojan (RAT) that among other things allows an attacker to log into a machine it infects using the user’s legitimate credentials. The so-called Moker RAT disables and sneaks past antivirus, sandboxes, and virtual machine-based tools, as well as Microsoft Windows’ User Access Control (UAC) feature.

Moker, which attaches itself to the Windows operating system and poses as a legitimate OS process, can be used by the attacker to operate “locally,” according to enSilo. “Consider a scenario where

Paving The Way For SDN Interoperability

The beauty of software-defined networks is that they give you the freedom to program your network, down to individual flows, based on business requirements. However, too much freedom can be overwhelming.

The OpenFlow protocol provides a rich set of control capabilities, not all of which are supported by all switches. To date, SDN applications, controllers, and switches have had to sort out feature options at run-time, which has made interoperability (and predictable operation) difficult. For example, a switch typically includes one or more flow tables, which are organized as a pipeline. Currently, applications must be “pipeline aware,” which effectively makes them dependent on specific hardware.

The Open Networking Foundation and other SDN innovators recognized that some type of abstraction layer was needed to support hardware independence, and two major interoperability enablers have been developed: Table Type Patterns (TTPs) and flow objectives. These abstraction frameworks provide a foundation for full interoperability between OpenFlow v1.3-enabled switches, including hardware-based switches, making it safe for network operators of all types to start investing in SDN built on such hardware.
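To see why a pattern like a TTP helps, consider a toy model in which a TTP is just an ordered list of tables and the match fields each supports. An application coded against the pattern, rather than against a particular switch, stays hardware-independent so long as the switch advertises the same pattern. The table layout and field names below are illustrative, not taken from the ONF specification:

```python
# Toy TTP: per-table, the match fields that table supports (illustrative).
TTP_L2_L3 = [
    {"table": 0, "match_fields": {"in_port", "eth_dst", "vlan_vid"}},
    {"table": 1, "match_fields": {"ipv4_dst", "ip_proto"}},
]

def find_table(ttp, needed_fields):
    """Return the first table in the pattern that can match all needed
    fields, or None if the pattern cannot express the rule at all."""
    for entry in ttp:
        if needed_fields <= entry["match_fields"]:  # subset check
            return entry["table"]
    return None

print(find_table(TTP_L2_L3, {"eth_dst", "vlan_vid"}))  # L2 rule -> table 0
print(find_table(TTP_L2_L3, {"ipv4_dst"}))             # L3 rule -> table 1
print(find_table(TTP_L2_L3, {"tcp_dst"}))              # unsupported -> None
```

The point of the abstraction is that the lookup above runs against the agreed pattern, so the application never has to be "pipeline aware" about any one switch's hardware tables.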

TTPs and flow objectives

TTPs are an optional

Juniper Debuts Unite Architecture For Campus Networks

Juniper Networks today launched a new architecture for campus and branch networks with a new fabric that aims to simplify enterprise network management as companies shift to cloud services. As part of its new Unite architecture, Juniper also expanded its security portfolio with a new threat detection cloud service and additions to its SRX line of firewalls.

Altogether, Unite is designed to give enterprises a common, converged network providing seamless and secure campus connectivity to applications wherever they sit: in a private cloud, an on-premises data center, or a hybrid cloud.

A key part of the architecture is Junos Fusion Enterprise, which works with Juniper’s EX9200 programmable core switch to turn the campus network into a single manageable system. The fabric collapses multiple network layers to a flat tier, enabling enterprises to manage a single network across the data center and campus environments, Denise Shiffman, VP of Juniper’s development and innovation division, said in an interview.


Juniper has other network fabrics including MetaFabric, but they’re focused on the data center while Junos Fusion Enterprise is “a