TCP Configuration: Timestamp Option

The function of the timestamp option is fairly self-explanatory: it lets the sender place a timestamp value in every segment. In turn, the receiver reflects this value back in its acknowledgement, which allows the sender to calculate a round-trip time (RTT) for every received ACK. Remember this is per ACK and not per segment, because a single ACK can acknowledge multiple segments.

Initially, most implementations of TCP would only measure one RTT per window. That amounts to sampling the connection once per window, which works well enough with small windows (and few segments), but today's larger window sizes need more frequent and accurate RTT measurements. You can read the definitions of these calculations in RFC 1323, which covers the TCP extensions that enable them.
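
To make that concrete, here is a minimal sketch of the textbook smoothed-RTT estimator that such per-ACK samples feed into. The gains and the clamped granularity term follow the commonly used values later standardised in RFC 6298; they are assumptions for illustration, not anything mandated by the timestamp option itself.

```python
ALPHA, BETA = 1 / 8, 1 / 4   # the usual estimator gains

def update_rtt(srtt, rttvar, sample):
    """Fold one RTT sample (seconds) into the estimator.

    Pass srtt=None for the first measurement; returns (srtt, rttvar, rto).
    """
    if srtt is None:
        srtt, rttvar = sample, sample / 2
    else:
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * sample
    rto = srtt + max(4 * rttvar, 0.001)   # simplified granularity clamp
    return srtt, rttvar, rto

# Example: three samples taken from successive timestamp echoes
srtt = rttvar = None
for sample in (0.100, 0.120, 0.300):
    srtt, rttvar, rto = update_rtt(srtt, rttvar, sample)
    print(f"sample={sample:.3f}s  srtt={srtt:.3f}s  rto={rto:.3f}s")
```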

Accurate measurement of data transmission is often very difficult in congested and busy networks, and also when troubleshooting across networks like the internet. It's hard to isolate issues and solve problems in these sorts of environments because you have no control over, or access to, the majority of the transport hardware. For example, if you are trying to fix a Netflix VPN problem remotely, being able to check the RTT is essential to analyse where the problems potentially lie.

The sender places a 32-bit value in the timestamp field, which is echoed back by the receiver in the reply field. Using this option increases the size of the TCP header from 20 bytes to 32 bytes. The timestamp value increases on each transaction. There is no clock synchronization between the sender and the receiver, merely an increase in the value of the timestamp. Most implementations recommend that the value increment in units of one, ideally somewhere between once per millisecond and once per second.
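
On the wire the option is kind 8, length 10: two 32-bit fields (TSval, the sender's timestamp, and TSecr, the echoed reply), usually preceded by two NOP bytes so the header stays 32-bit aligned, which accounts for the 12-byte growth. The parser below is an illustrative sketch, not code from any particular stack, and assumes a well-formed options block.

```python
import struct

TS_KIND, TS_LEN = 8, 10   # timestamp option kind and length in bytes

def parse_timestamp_option(options: bytes):
    """Scan a TCP options block; return (TSval, TSecr) if present."""
    i = 0
    while i < len(options):
        kind = options[i]
        if kind == 0:             # End-of-options list
            break
        if kind == 1:             # NOP padding used for 32-bit alignment
            i += 1
            continue
        length = options[i + 1]   # assumes a well-formed options block
        if kind == TS_KIND and length == TS_LEN:
            return struct.unpack_from("!II", options, i + 2)
        i += length
    return None
```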

This option is negotiated during connection establishment and is handled the same way as the window scale option in the previous section. As you may know, the receiving end does not have to acknowledge every data segment it receives. This is simplified because only a single timestamp value is maintained per active connection, updated according to a simple algorithm.

First of all, TCP keeps track of the timestamp value to send in the next ACK in a variable called tsrecent, and the acknowledgement field from the last ACK sent in a variable called lastack. When a new segment arrives containing the byte numbered lastack (that is, the next in-order segment), its timestamp is saved in tsrecent. When a timestamp option is sent, tsrecent is placed in the echo reply field, and the acknowledgement field being sent is stored in lastack.
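
A hedged sketch of that bookkeeping, using the variable names from the text (tsrecent and lastack); real stacks apply additional PAWS checks alongside this.

```python
class TimestampState:
    """Per-connection state for the timestamp echo algorithm above."""

    def __init__(self, rcv_nxt: int):
        self.tsrecent = 0        # timestamp to echo in the next ACK
        self.lastack = rcv_nxt   # ack field of the last ACK sent

    def on_segment_arrival(self, seq: int, length: int, tsval: int) -> None:
        # Save the timestamp only if this segment contains the byte we
        # acknowledged last, i.e. it is the next expected in-order segment.
        if seq <= self.lastack < seq + length:
            self.tsrecent = tsval

    def on_ack_sent(self, ack_field: int) -> int:
        # tsrecent goes in the echo-reply field of the outgoing ACK.
        self.lastack = ack_field
        return self.tsrecent
```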

This means that in addition to allowing better RTT calculation, the timestamp option also performs another function: the receiver can use it to reject old duplicate segments via an additional feature called PAWS – Protection Against Wrapped Sequence numbers.


Security Specifications and Initiatives

Throughout the internet community there are many groups working on resolving a variety of security-related issues online. Their activities cover all aspects of internet security and networking in general, including authentication, firewalls, one-time passwords, public key infrastructure, transport layer security and much more.

Many of the most important security protocols, initiatives and specifications being developed can be researched through the following groups.

TCSEC (Trusted Computer System Evaluation Criteria)

These are requirements for secure products as defined by the US National Security Agency. They are important standards which many US and global companies use to establish baselines for their computer and network infrastructure. You will often hear these standards referred to as the 'Orange Book'.

CAPI (Crypto API)

CAPI is an application programming interface developed by Microsoft that makes it much easier for developers to create applications incorporating both encryption and digital signatures.

CDSA (Common Data Security Architecture) 

CDSA is a security reference standard primarily designed to help develop applications which take advantage of other software security mechanisms. Although not initially widely used, CDSA has since been accepted by the Open Group for evaluation, and technology companies such as IBM, Netscape and Intel have helped develop the standard further. It is important for a disparate communication medium such as the internet to have open and interoperable standards for applications and software. The standard also includes an expansion platform for future developments and improvements in security elements and architecture.

GSS-API – (Generic Security Services API)

The GSS-API is a higher-level interface that gives applications and software a uniform way into underlying security technologies. For example, it can act as a gateway into private and public key infrastructures and technologies.
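
As a purely illustrative sketch of the pattern, here is how the initiator side looks with the third-party python-gssapi bindings (assumed installed, with a working Kerberos environment; the service name is hypothetical): name the target, then step the context to produce tokens for the peer.

```python
import gssapi   # third-party python-gssapi bindings, assumed installed

# Hypothetical service name; requires a configured Kerberos environment.
target = gssapi.Name("HTTP@www.example.com",
                     name_type=gssapi.NameType.hostbased_service)
ctx = gssapi.SecurityContext(name=target, usage="initiate")
token = ctx.step()               # first token to hand to the server
print("established:", ctx.complete)
```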

This list is, of course, a long way from complete, and because of the fast-paced development of security technologies it is very likely to change greatly. It should be remembered that although there is an obvious requirement for security at the server level, securing applications and software on the client is also important. Client-side security is often more of a challenge due to differing platforms and a lack of standards – configuration settings on every computer are likely to be different.

Many people now take security and privacy extremely seriously, especially now that so much of our lives involves online activity. Using encryption and some sort of IP cloaker to provide anonymity is extremely common. Most of these security services are provided by third parties through specialised software. Again, incorporating these into some sort of common security standard is a sensible option, yet somewhat difficult to achieve.


Certificate Based Client Authentication

One of the most important features of SSL is its ability to authenticate clients based on SSL certificates. People often fail to understand that this certificate-based authentication is only available while SSL is in use; it is not accessible in other situations. Take, for example, the more common case on the web of insecure HTTP exchanges – here SSL certificate-based authentication is simply not available. The only option is to control access using basic username/password authentication. This represents possibly the biggest security issue on the internet today, because it also takes place in clear text!

Another common misconception concerns the SSL sessions themselves. SSL sessions are established between two endpoints. A session may go through an SSL tunnel, which is effectively a forward proxy server. However, secure reverse proxying is not SSL tunnelling; it is probably better described as HTTPS proxying, although this is not a commonly used term. In this case the proxy acts as the endpoint of one SSL session, terminating the client's session and forwarding the request to the origin server over a second session.

The two sessions are distinct, except of course that both will be present in the cache and memory of the proxy server. An important consequence of this is that the client's certificate-based authentication credentials are not relayed to the origin server. The SSL session between the client and the reverse proxy authenticates the client to the proxy server, while the SSL session between the proxy and the origin server authenticates the proxy itself. The certificate presented to the origin server is the reverse proxy's certificate, and the origin server has no knowledge of the client or its certificate.

To summarise: what is lost is the ability to authenticate the client to the origin server through the reverse proxy server.

In situations where certificate-based client authentication and access control are required, the role has to be performed by the reverse proxy server; in other words, the access control function is delegated to the proxy server. Currently there is no protocol available for transferring access control data from the origin server to the reverse proxy server. However, there are advanced networks where the access control lists can be stored in an LDAP server, for example in Windows Active Directory domains. This enables unverified connections to be controlled, e.g. blocking outbound client requests to particular media servers.

The reverse proxy could be described in this situation as operating as a web server. Indeed, the authentication required by the reverse proxy is actually web server authentication, not proxy server authentication. Thus, crucially, the challenge status code is HTTP 401 and not 407. This is a crucial difference and a simple way to identify exactly which authentication method is taking place on a network when you're troubleshooting.
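
A quick, hedged troubleshooting sketch: fire a request at the suspect endpoint (the host name here is hypothetical) and look at which challenge comes back.

```python
import http.client

# A 401 challenge means web-server style authentication (the reverse
# proxy acting as a web server); a 407 would mean proxy authentication.
conn = http.client.HTTPSConnection("reverse-proxy.example.com")
conn.request("GET", "/protected/")
resp = conn.getresponse()
if resp.status == 401:
    print("web server auth challenge:", resp.getheader("WWW-Authenticate"))
elif resp.status == 407:
    print("proxy auth challenge:", resp.getheader("Proxy-Authenticate"))
else:
    print("no challenge, status", resp.status)
conn.close()
```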


Video Proxy – How to Unlock the World’s Best Media Sites

When you read about the internet, it's usually about how it's constantly expanding and growing, but that's not strictly true. Although new information is being added all the time, the reality is that much of it is inaccessible, particularly when you're looking at video websites.

For instance, take one of the world's most popular websites, the BBC iPlayer. It contains thousands of programmes, videos and radio broadcasts, and is updated every single day. It's a wonderful resource which is continually refreshed, yet unfortunately the site is not accessible when you are located outside the United Kingdom unless you use something like a video proxy to help you. So why is it so difficult to access these sites? Why do people who happen to be away from home, perhaps in Roubaix in France or a seaside town in Spain, constantly have to search for ways to unblock video pages on YouTube and the big media sites?

It's an incredible situation, yet one that is becoming increasingly common – the internet is becoming compartmentalised, split into geographical sectors controlled by the internet's big players. The method used is called geo-blocking (or geo-locking), and the majority of large web sites use it to some extent. You'll find that a particular site will remove content based on your location; in fact, in some countries it's almost impossible to watch videos on any of the major platforms. The method has been criticised by all sorts of civil liberties organisations, and the EU itself has criticised it because it undermines its concept of a Single Free Market.

The technology implemented varies slightly from site to site, yet it's basically the same – record the client's IP address and look up its location in a central database of addresses. So when you try to visit the BBC web site to watch a David Attenborough documentary, if your IP address isn't registered in the UK then you'll get blocked.
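
A sketch of what the lookup side of geo-blocking might look like, using the third-party geoip2 library against a MaxMind country database; the database path and IP address are illustrative.

```python
import geoip2.database   # third-party 'geoip2' package, assumed installed

# Database path is illustrative.
reader = geoip2.database.Reader("/var/lib/GeoLite2-Country.mmdb")

def allowed(client_ip: str, required_iso: str = "GB") -> bool:
    """Return True only if the address geolocates to the required country."""
    country = reader.country(client_ip).country.iso_code
    return country == required_iso

print(allowed("203.0.113.7"))   # False unless the address maps to the UK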

Image: Planet Earth documentaries on BBC iPlayer, viewed through a video proxy.

It's extremely frustrating, especially for someone from the UK, and so workarounds were created. I mentioned above the concept of a video proxy to bypass these blocks, and it does work to some extent. You bounce your connection off an intermediate proxy server based in the location you need, which effectively hides your true IP address and location.

However, it's important to remember that from 2016 onwards simple proxies no longer work on any of the major media sites. Forget about the thousands of simple unblock sites that promise to bypass internet restrictions; they simply don't work anymore. Without even basic SSL-encrypted connections they can be detected easily, and all the major sites block them automatically. Some are still able to unblock YouTube videos, but even those are fairly rare now. Many have been blocked at the server level, and their hosting services have told them to remove scripts like Glype. Unfortunately, the days of the free proxy sites and web proxies are now gone for good, at least for accessing video sites and the large multimedia companies.

However, the concept still works just like the old video proxy method; it's just that you'll need a securely configured VPN server which cannot be detected. The encryption is useful, giving you the insurance of anonymity while still allowing cookies to flow down the connection transparently. It works in the same way, hiding your real address and instead presenting the address of the VPN server. Using this method, you can watch any media site from Hulu to Netflix and the BBC, irrespective of your location. Here's one in action, using a proxy to watch video content from the BBC:


It's a highly sophisticated program that will allow you to proxy video through a secure connection, fast enough to let you watch video without buffering. It's very easy to use to unblock video, and you'll find it can bypass internet filters too, which are also commonly implemented. A demo version is available to test it out; it won't function as a YouTube proxy, unfortunately, but you can at least use the free version to unblock Facebook.

There is one other method I should mention, discussed in the article linked here: it's called Smart DNS, and it is a simpler alternative to using a VPN service.

It's what literally millions of people around the world are doing right now: relaxing in the sun whilst watching the news on the BBC or their favourite US entertainment channel. There are a lot of these services available now, but only a few that work properly. Our recommendation doesn't look like a TV-watching VPN at first glance, simply because they keep that functionality low key. Yet for over a decade it has supported all the major media channels in a variety of countries.

It's called Identity Cloaker – you can try their 10-day trial here.

Buy US Proxy with Transparent Proxying

When discussing the technological characteristics of proxies there's one term which you will see used very often: 'transparent'. It can actually be used in two distinct ways. The first definition implies that transparent proxying ensures a user sees no difference in the response whether the request goes directly to the server or through a proxy. In an ideal world, pretty much all legitimate proxies would be considered 'transparent' in this sense.

Proxies are, however, significantly more advanced than in the early years when this original definition was created, and the term 'transparent proxying' now carries more meaning. The extended definition means that the client software is not aware of the existence of the proxy server in the communication stream at all. This is unusual because traditionally the client was explicitly configured to use a proxy, perhaps via the internet settings in its browser configuration, and the software would then distinguish between proxy and direct requests.

With transparent proxying in its modern sense, it is the router, not the client, that redirects the request through the proxy. This means the proxy can intercept and control all outbound HTTP requests. Each request can be parsed, filtered, or even redirected. This control allows the network to enforce access control rules on all outbound requests; a company could use these to ensure unsuitable requests, e.g. to illegal web sites, are not being made from the corporate network.
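
As a rough illustration, here is a minimal sketch of the proxy side of such a setup, assuming the router already redirects intercepted port-80 traffic to the listening port. It recovers the real destination from the Host header, since the client, unaware of the proxy, sends an origin-form request. A production interceptor would add error handling, HTTPS support and proper request parsing.

```python
import socket
import threading

LISTEN_PORT = 3128   # port the router redirects intercepted traffic to

def handle(client: socket.socket) -> None:
    """Forward one intercepted HTTP request to its real destination."""
    request = client.recv(65535)   # naive: assumes one recv gets it all
    # The client thinks it is talking to the origin server, so the real
    # destination must be recovered from the Host header.
    host = None
    for line in request.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().decode()
            break
    if host is None:
        client.close()             # e.g. an HTTP/1.0 request with no Host
        return
    upstream = socket.create_connection((host, 80))
    upstream.sendall(request)          # relay the request unchanged
    while chunk := upstream.recv(65535):
        client.sendall(chunk)          # relay the response back
    upstream.close()
    client.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", LISTEN_PORT))
server.listen()
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```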

This level of transparent proxying leaves the client completely unaware of the existence of an intermediate proxy server. There are some caveats, though, and the proxy can be detected in certain circumstances. For example, there is little point in investing in a USA proxy if the server only supports strict HTTP/1.1 behaviour, because that protocol makes no allowance for hiding proxy information – a compliant HTTP/1.1 proxy must announce itself, for instance in the Via header.

One of the main worries is that completely transparent proxying might cause other issues, particularly in client-side applications. For example, one of the fundamental uses of proxies in a corporate network is to reduce traffic by caching locally. This could cause all sorts of problems if the behaviour of the proxy cache affects communication between the destination server and the client application.


Optimizing Proxies – Protocol Performance

The data transport protocol is of course crucial to a global information network like the world wide web. Unfortunately the HTTP/1.0 protocol has some inherent performance issues, which have been largely addressed in version 1.1 of the protocol. It is expected that future developments will further improve its performance.

One issue relates to the three-way handshake that TCP requires before it can establish a connection. It is important to remember that during this handshake phase no application data is transferred at all; from the user's perspective the delay simply appears as latency in getting the initial connection established. The three-way handshake imposes a considerable overhead preceding data transfer and has a noticeable effect on performance, particularly on busy networks.
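
You can observe this overhead directly; the snippet below separates the pure handshake cost from the wait for the first response byte (the host is illustrative).

```python
import socket
import time

HOST = "www.example.com"   # illustrative target

t0 = time.perf_counter()
sock = socket.create_connection((HOST, 80))   # three-way handshake here
t_connect = time.perf_counter() - t0          # roughly one RTT, no app data

sock.sendall(b"GET / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
sock.recv(1)                                  # block until first reply byte
t_first_byte = time.perf_counter() - t0
print(f"handshake {t_connect*1000:.1f} ms, first byte {t_first_byte*1000:.1f} ms")
sock.close()
```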

This problem is made worse by the HTTP/1.0 protocol, which makes extensive use of new connections: every request requires a new TCP connection, complete with a new three-way handshake. This was originally intended as a performance measure, the thinking being that it would avoid long-lived idle connections being left dormant. The reasoning was that it was more efficient to establish new connections on demand, since transfers would be small and frequent.

However, the web has not developed like this, and it is much more than a series of short HTML files quickly downloaded. Instead the web is full of large documents and pages embedded with videos and images. Add to this the multitude of applets, scripts and other embedded objects and it soon adds up. What's more, each of these objects usually has its own URL and so requires a separate HTTP request. Even if you invest in a high-quality US proxy, you'll find some impact on speed using HTTP/1.0 simply due to the huge number of connection requests it generates.

Modifications were made to increase the perceived performance from the user's perspective. For one, multiple simultaneous connections were allowed, letting client software such as browsers download and render multiple components of a page at once, so the user wasn't left waiting as individual components loaded in sequence. However, although parallel connections improve performance for the individual, they generally have a negative impact on the network as a whole. The underlying process is still inefficient, and allowing parallel connections does little to mitigate that.

As any network administrator knows, focusing on a single aspect of network performance is rarely a good idea and will almost never improve overall network performance. The persistent connection feature was introduced to help solve this; it was added as a non-standard extension to HTTP/1.0 and included by default in HTTP/1.1.
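
The difference is easy to see in code: with a persistent connection, the handshake is paid once and subsequent requests reuse the socket. A minimal sketch (host and paths are illustrative):

```python
import http.client

# One TCP connection, several requests: with HTTP/1.1 persistent
# connections the second and later requests skip the three-way
# handshake entirely.
conn = http.client.HTTPConnection("www.example.com")
for path in ("/", "/about", "/contact"):
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()                    # drain the body before reusing the socket
    print(path, resp.status)
conn.close()
```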


Remote Login Methods

The ability to log in remotely to a machine that's miles away from you is perhaps one of the internet's most popular applications. It might not seem so, but being able to access a remote host without a hard-wired connection has transformed many areas of IT, particularly support and development. Obviously you need an account on the host you are trying to log in to, but being able to use the machine as if you were at the console is extremely useful in many situations.

Two of the most famous applications for remote login access on a TCP/IP-based network (such as the internet) are Telnet and Rlogin. The most famous, and probably used by every IT support technician over the age of 25, is Telnet, installed as standard in almost every TCP/IP implementation. It seems relatively simple, but this hides some great functionality, not least the ability to Telnet from one operating system to another. It's incredibly useful to be able to sit at a Microsoft Windows machine with multiple command interfaces open in separate windows to Unix and Linux machines at the same time.

Remember, these terminal windows are like physically sitting at the remote host's console. This is completely different from a web session, or from using something like an Italian proxy to stream the RAI player abroad. Each individual character that you type is sent to the remote host; there's no streaming, no relaying or filtering. Obviously there are some restrictions on running a terminal window on a completely different system, so Telnet performs an option negotiation phase between the client and server to ensure that only services supported at both ends are used.
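
To give a feel for that negotiation, here is a hedged sketch of the minimal client-side behaviour: options arrive in-band as 3-byte IAC commands, and a client that supports nothing simply refuses each one. Real clients accept and negotiate options such as terminal type; this sketch also ignores the less common 2-byte and subnegotiation commands.

```python
import socket

# Telnet option negotiation commands are the 3-byte sequence
# IAC (255), verb, option number.
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254

def refuse_all(sock: socket.socket, data: bytes) -> bytes:
    """Strip 3-byte negotiation commands from data, refusing each option."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == IAC and i + 2 < len(data):
            verb, opt = data[i + 1], data[i + 2]
            if verb == DO:                     # peer asks us to enable opt
                sock.sendall(bytes([IAC, WONT, opt]))
            elif verb == WILL:                 # peer offers to enable opt
                sock.sendall(bytes([IAC, DONT, opt]))
            i += 3
        else:
            out.append(data[i])                # ordinary terminal data
            i += 1
    return bytes(out)
```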

The other famous remote login application, Rlogin, came from Berkeley Unix. It was initially only available on Unix systems, but it has since been ported to most other operating systems, and you can now Rlogin between Windows and Linux. Both applications use the client/server model: the client is the system where the connection is initiated, and the remote server is the target.

Nowadays the more popular of the two applications, Telnet, has become much more sophisticated. Over the years a lot of functionality has been added to Telnet, whereas Rlogin remains quite simple and unmodified. It should be noted, though, that although Rlogin lacks features, it is a simple and stable remote access application.

The author, John Herrington, has worked in IT for over thirty years in a variety of roles, from support to, latterly, network manager at a large bank. He now works for himself and runs one of the largest paid VPN services on the West Coast of America. He works remotely much of the time, but will rarely use Telnet as it's too insecure!

Tracking VPN and Proxy Users

Network administrators in corporate networks face similar challenges to those running firewalls for authoritarian regimes when it comes to the use of proxies and VPN services. The issue is that not only do they allow individuals to conduct their internet activity without being tracked, a VPN will also prevent most aspects of logging from taking place.

In a company network this means an individual could potentially conduct all sorts of behaviour from a company computer while sitting in a corporate office. They could be downloading films, streaming Netflix, or perhaps doing something much more sinister. Obviously this is a risk both to the network infrastructure and potentially to the company's reputation.

So how do you block the use of VPNs and proxies? On a corporate network there are actually many options, and the simplest is probably to stop any VPN or proxy being used in the first place. You can lock down the advanced settings in a web browser quite simply; for example, the Internet Explorer Administration Kit (IEAK) allows you to configure and deploy an IE package, which cannot be modified, onto every client in your organisation. This stops proxies being configured manually, and VPN clients can be blocked by ensuring that standard users have no administrative access to their desktops.

It is certainly easier to block installation than to try to track the use of VPNs, particularly the more sophisticated ones. Although you could potentially monitor firewall and router logs for specific IP addresses which look like VPN endpoints, some services allow you to switch between a whole range of IP addresses, as the video below demonstrates:

As you can see, if a service rotates its addresses then identifying the VPN by its IP address is much more difficult. Blocking installation of the highlighted service, Identity Cloaker, can also be difficult, as it has a portable version which can be run directly from a USB disk.

You can see that simple proxies are fairly irrelevant today, as they can easily be blocked and most content filters can detect their use. Their use has dropped globally for another reason too: they are mostly detected by websites which operate regional restrictions. It is the more sophisticated virtual private networks which are the difficulty, particularly those equipped with various VPN-hiding technologies and strong encryption.

VPN Blocking on the Rise

For years people have used VPNs for all sorts of reasons, but their origin lies quite simply in the security they provide. International companies will normally insist that their employees use a VPN when remotely connecting back to company servers over the internet. It makes sense; otherwise important information and credentials would be entrusted to the owners of coffee shop wifi or the administrator of your local Premier Lodge or other hotel chain.

The concept is simple: create an encrypted tunnel which ensures that all the data which would normally pass in clear text is instead encrypted and unreadable. Of course, this security means that as well as being safe from computer criminals and identity thieves, it's also secure from intelligence services and state-controlled snoopers. It should come as no surprise that anyone who opposes free speech generally hates VPNs and the protection they give.

So when we hear stories about organisations and companies, from Netflix to the Chinese government, trying to block VPNs, what are they actually doing? Well, it depends; obviously the situation that leads to thousands of 'BBC iPlayer VPN not working' complaints is going to be slightly different from the Chinese throwing billions at the Great Firewall of China. However, the general techniques are basically the same, as both large and small organisations want to achieve the same thing.

One of the most common options is to block the ports used by these services. Most VPN tunnelling protocols operate on standard ports, e.g. PPTP on TCP 1723 or L2TP on UDP 1701. They need these connections to transfer and receive data; without them the service won't function. Other methods include identifying and blocking specific IP addresses or ranges which are being used by VPN services. It is these two methods that are mostly used by the big media companies like Hulu and the BBC.

These methods can be time consuming, though; it's possible to switch addresses, and some services allow you to configure alternative ports too. The Chinese government, as you would expect, have gone a step further and use more sophisticated techniques like deep packet inspection. This involves looking at the data itself to identify whether a VPN is being used to transport it. For example, if none of the payload is readable clear text, there is a likelihood that it is encrypted. Of course, other technologies like SSL also encrypt data, so you need to be careful not to block legitimate traffic – although that's a risk the Chinese would probably be happy to take.
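
One crude heuristic such inspection can use is payload entropy: encrypted bytes look uniformly random, while clear-text protocols do not. The sketch below illustrates the idea only; real DPI systems rely far more on protocol fingerprints (handshake opcodes, TLS characteristics) than on a bare threshold.

```python
import math
from collections import Counter

def shannon_entropy(payload: bytes) -> float:
    """Bits per byte; uniformly random (i.e. encrypted) data approaches 8."""
    counts = Counter(payload)
    total = len(payload)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_encrypted(payload: bytes, threshold: float = 7.5) -> bool:
    # Clear-text protocols (HTTP headers, SMTP commands...) score far
    # lower; require a minimum size so the estimate is meaningful.
    return len(payload) >= 256 and shannon_entropy(payload) > threshold
```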

Even these methods are not foolproof, and VPN companies can scramble things like metadata to make identifying the use of a VPN even harder. It is worth noting that many people in China still use VPNs routinely, and if the huge resources available to the Chinese state can't block their use, we should be able to keep using a BBC VPN for the foreseeable future.


TCP Extensions – Virtual Circuits

TCP provides many additional services which have been added over its lifetime, and one of the more useful is the virtual circuit transport service. There are three distinct phases in the life of any TCP connection: establishment, data transfer and termination. Many applications, including remote login and file transfer, are perfectly suited to a virtual circuit type of service. Many other applications are better suited to a transaction-based service, which is basically a client request followed by a server response. This can be explained by briefly detailing its characteristics:

1: The overhead of connection establishment and termination should be minimized. Ideally, an exchange should consist of a single request segment followed by a single reply segment.

2: Latency should be reduced to the sum of the round trip time (RTT) plus the server processing time (SPT).

3: The server should be capable of detecting duplicate requests and not processing them again.

A very important application which uses this type of service forms the very backbone of the internet: the Domain Name System (DNS). Other common applications include the VPN services many people use to bypass the numerous region-locking systems which exist online. The other important decision an application developer must make is whether to use UDP or TCP for transport. The difficulty is that TCP provides too many features for an efficient transaction, whilst UDP doesn't really provide enough. Normally UDP is used, simply because it avoids the overhead of TCP connections, but the application must then add the features it needs itself, such as retransmission, dynamic timeouts and congestion avoidance.
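
For illustration, here is a minimal sketch of the kind of reliability an application must bolt onto UDP itself; the function and parameters are hypothetical, and a real implementation would also need request IDs so the server can detect duplicates (characteristic 3 above).

```python
import socket

def udp_request(server: tuple[str, int], payload: bytes,
                retries: int = 3, timeout: float = 1.0) -> bytes:
    """Send one request, wait for one reply, retransmitting on timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for attempt in range(retries):
            sock.sendto(payload, server)
            sock.settimeout(timeout * (2 ** attempt))   # crude backoff
            try:
                reply, _ = sock.recvfrom(65535)
                return reply
            except socket.timeout:
                continue                                # retransmit
        raise TimeoutError(f"no reply after {retries} attempts")
    finally:
        sock.close()
```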

A better alternative is to provide an additional transport layer which handles transactions more efficiently. The transaction protocol commonly used by many applications is called T/TCP, defined in RFC 1379 (Extending TCP for Transactions – Concepts).

Remember that most TCP implementations require a minimum of seven segments just to open and close a connection. A request/reply exchange then adds three more segments: the request, the reply (which also acknowledges the request), and the final ACK of the reply. In addition, extra control bits and connection information may be needed to complete the transactions properly.
