There are many ways to configure how TCP/IP operates on specific networks. Some of these parameters are rarely used, but when you're running fast Gigabit networks with a wide variety of network hardware and infrastructure some options are extremely useful. One of those is the Window Scale option, which can be used to modify the definition of the TCP window from its default of 16 bits.
For example, in some environments it may be appropriate to increase the size of the TCP window to 32 bits. What actually happens is that, instead of changing the size of the header to allow the larger window, the header still holds a 16-bit value. An option then allows a scaling factor to be applied to that value, which lets TCP maintain the real 32-bit window internally.
The scaling option can only appear in the SYN segment of the transaction, which means that the scaling value is by definition fixed in both directions when the connection is initially established. For window scaling to be enabled, both ends of the connection must include the option in their SYN segments. It should be noted, though, that the scale factor can be different in each direction.
There are methods for allowing hosts with different capabilities to communicate sensibly. For example, a host can send a non-zero scale factor and then cancel the scaling if no Window Scale option is received in the return SYN. This behaviour is covered in the relevant RFC, which specifies that TCP must be able to accept these options in any segment. This applies to all sorts of connections, and remember these can be across wide areas: imagine a US IP address connecting to a Netflix server on super fast hardware. It should also be noted that TCP will always ignore any option it doesn't understand.
For illustration, suppose the window scale option is being used with a shift count of X for sending and Y for receiving. Every 16-bit window advertised to us would be left shifted by Y bits to obtain the real advertised window, and every time we send a window advertisement we take the real 32-bit window size and right shift it by X bits to produce the 16-bit value that actually goes in the TCP header.
The shift count is controlled automatically by TCP, because it depends on the size of the receive buffer, which cannot be controlled by the other side of the connection.
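A rough sketch of that arithmetic is shown below. The shift counts are arbitrary values chosen purely for illustration, not anything a real stack would necessarily negotiate:

```python
# Sketch of the window-scale arithmetic described above.
# SND_SCALE (X) is the shift we apply to windows we advertise;
# RCV_SCALE (Y) is the shift applied to windows we receive.
# Both values are made up for the example.
SND_SCALE = 2   # X
RCV_SCALE = 3   # Y

def window_to_header(real_window: int) -> int:
    """Right shift the real window down to the 16-bit header field."""
    return min(real_window >> SND_SCALE, 0xFFFF)

def header_to_window(header_value: int) -> int:
    """Left shift the received 16-bit field back up to the real window."""
    return header_value << RCV_SCALE

print(window_to_header(200_000))   # value that would sit in the TCP header
print(header_to_window(0xFFFF))    # largest window the peer could advertise
```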
LAN in networking terms stands for Local Area Network, and it refers to a shared communication system to which many computers and other devices are attached. The distinction between this and other networks is that a LAN is limited to a local area.
The first recorded use of LANs was in the 1970s, when they grew out of the very first basic networking setups. These consisted of two devices connected by a single network wire, much like a child's string-and-paper-cup model of the telephone. Computer scientists started to ask why they should stop at two devices when the same cable could theoretically connect several. There were complications though, and possibly the most basic was finding a mechanism to ensure that multiple devices didn't use the cable at the same time.
The methods used to ensure that use of the cable is shared properly are called 'medium access controls', for self-explanatory reasons. There are a variety of these, ranging from letting workstations announce their transmissions to having a central device control access and allocate bandwidth as required. In some senses it is similar to the way an individual might buy uk proxy access in order to route their connection privately whilst hiding their own IP address.
Although LANs are normally restricted to a small geographical area, there are different topologies. The simplest, and originally the most common, are the linear bus and the star configuration. The linear bus involves a cable laid throughout a building from one workstation to another, whereas the star configuration has each workstation attached to a central location or hub by its own specific cable. There are pros and cons to each configuration, and in fact if you use the most popular networking medium, Ethernet, you can use either topology.
A Local Area Network is actually a connectionless networking configuration. That definition is important: it means that once a device is ready to transmit, it simply releases the data onto the cable and 'hopes' that it reaches its destination. In this basic setup, no initial process ensures that the data reaches its recipient, nor is there any check to see whether it has been received.
When data is transmitted across the LAN it is packaged into 'frames' before being dispatched. At the basic hardware level, each frame is transmitted as a bit stream across the wire. Every device connected to the network will listen to the transmission, although only the intended recipient will actually receive the data. Normally this is the case, but it is possible to transmit on a multicast address, which specifies that all devices on the LAN should receive the data. Higher-level protocols such as IP or IPX package the data further into datagrams.
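A quick way to get a feel for this fire-and-forget behaviour is a broadcast datagram. The sketch below works at the UDP/IP layer rather than with raw Ethernet frames, and the port number is made up, but the principle is the same: the data is handed to the network with no connection set-up and no acknowledgement:

```python
# Fire-and-forget datagram, analogous to the connectionless LAN behaviour
# described above. Uses UDP broadcast rather than raw Ethernet frames.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

# 255.255.255.255 is the limited broadcast address: every host on the local
# segment sees the datagram, much like a LAN broadcast/multicast frame.
# Port 9999 is arbitrary, chosen for the example only.
sock.sendto(b"hello, anyone listening?", ("255.255.255.255", 9999))
sock.close()   # no reply is expected, so there is nothing more to do
```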
There is no doubt that TCP/IP has transformed our computer networks and played a pivotal role in the expansion of the world wide web; however, it is far from perfect. RSVP is an Internet protocol designed to alleviate some of the issues with TCP/IP, particularly around delivering data on time and in the right order. This has always been one of TCP/IP's biggest shortcomings: its 'best effort' IP delivery service comes with no guarantees, whereas TCP, which is connection orientated, does guarantee delivery but gives no assurances about the time it takes.
Guaranteed, on-time delivery is essential in many modern applications, particularly over the internet, and especially those involving voice and video. Indeed most web sites involve large amounts of video and voice data which require fast, reliable and timely delivery whenever possible. Anyone who has tried streaming or downloading from applications like the BBC iPlayer will know how frustrating slow speeds and missing data packets can be.
The issues are well known, and RSVP is an attempt to provide a suitable quality of service for video and voice delivery, particularly across the internet and other large TCP/IP based networks. RSVP works by reserving bandwidth across router-connected networks: it asks each router to keep some of its bandwidth allocated to a particular traffic flow. In some senses it is an attempt to add some of the quality features of ATM to TCP/IP in order to meet the changing requirements of modern networks.
RSVP is one of the first attempts to introduce quality of service to TCP/IP, but many vendors are looking at other options too. Most of them, like RSVP, focus on reserving bandwidth, which isn't always ideal. The obvious issue is that if you reserve network capacity for a specific traffic flow or connection, the capacity available to all other users and applications is reduced. Some of this has been mitigated by the increase in capacity of both corporate networks and individual users' connections to the internet.
RSVP works by establishing and maintaining bandwidth reservations on a specific network, so it is not normally a WAN or wide area solution. The protocol works from router to router, setting up a reservation from each end of the path. It is primarily a signalling protocol, not a routing protocol. If a specific router along the path cannot provide the requested bandwidth then RSVP will look for an alternative route. Obviously this only works if the routers have RSVP enabled, which many now do. Applications can also use this feature by making similar requests.
Any computer with network connectivity usually offers services to users both remotely and locally. Typically the computer offers these by running a number of locally hosted services. In a TCP/IP network, the services are usually available via ports on the local computer. When a computer connects to access a particular service, an end-to-end connection is normally established and a socket set up at each end. In simple terms you can think of the socket as a telephone at each end of a line and the port as a specific telephone number.
Most of the common services are found at a predetermined port number; in fact the port number can act as an identifier of the service. It's important to remember that although these port assignments are normally followed, there is no strict enforcement of the standard. Although it is likely that an FTP server is listening on port 21, there is no actual guarantee that this is true. These predetermined port assignments are commonly followed though, and following them is usually considered best practice. In some senses it makes network management much simpler than using non-standard ports, which makes identifying roles and services harder.
For instance, most people would expect a service running on port 80 to be an HTTP server, although there is nothing to stop some other service using it.
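The conventional assignments are recorded in the system's own services database, so you can look them up programmatically. A small illustration (the ports queried here are just a handful picked from the well-known range):

```python
# Looking up the conventional service name for a port number in the system's
# services database. As noted above, these are conventions only: nothing
# stops a different service from actually listening on the port.
import socket

for port in (21, 25, 80, 443):
    try:
        print(port, socket.getservbyport(port, "tcp"))
    except OSError:
        print(port, "no registered name on this system")
```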
Republished from the archive of Thomas Riemer's Port Numbers page
The Registered Ports are not controlled by the IANA and on most systems can be used by ordinary user processes or programs executed by ordinary users.
Ports are used in the TCP [RFC793] to name the ends of logical connections which carry long term conversations. For the purpose of providing services to unknown callers, a service contact port is defined. This list specifies the port used by the server process as its contact port. While the IANA cannot control uses of these ports, it does register or list uses of these ports as a convenience to the community.
To the extent possible, these same port assignments are used with the UDP [RFC768].
afs3-callback 7001/tcp callbacks to cache managers
afs3-callback 7001/udp callbacks to cache managers
afs3-prserver 7002/tcp users & groups database
afs3-prserver 7002/udp users & groups database
afs3-vlserver 7003/tcp volume location database
afs3-vlserver 7003/udp volume location database
afs3-kaserver 7004/tcp AFS/Kerberos authentication service
afs3-kaserver 7004/udp AFS/Kerberos authentication service
afs3-volser 7005/tcp volume management server
afs3-volser 7005/udp volume management server
afs3-errors 7006/tcp error interpretation service
afs3-errors 7006/udp error interpretation service
afs3-bos 7007/tcp basic overseer process
afs3-bos 7007/udp basic overseer process
afs3-update 7008/tcp server-to-server updater
afs3-update 7008/udp server-to-server updater
afs3-rmtsys 7009/tcp remote cache manager service
afs3-rmtsys 7009/udp remote cache manager service
ups-onlinet 7010/tcp onlinet uninterruptable power supplies
ups-onlinet 7010/udp onlinet uninterruptable power supplies
font-service 7100/tcp X Font Service
font-service 7100/udp X Font Service
fodms 7200/tcp FODMS FLIP
fodms 7200/udp FODMS FLIP
sd 9876/tcp Session Director
sd 9876/udp Session Director
biimenu 18000/tcp Beckman Instruments, Inc.
biimenu 18000/udp Beckman Instruments, Inc.
dbbrowse 47557/tcp Databeam Corporation
dbbrowse 47557/udp Databeam Corporation
[RFC768] Postel, J., "User Datagram Protocol", STD 6, RFC 768, USC/Information Sciences Institute, August 1980.
[RFC793] Postel, J., ed., "Transmission Control Protocol – DARPA Internet Program Protocol Specification", STD 7, RFC 793, USC/Information Sciences Institute, September 1981.
Throughout the internet community there are many groups working on resolving a variety of security related issues online. Their activities cover all aspects of internet security and networking in general, from authentication, firewalls, one-time passwords and public key infrastructure to transport layer security and much more.
Many of the most important security protocols, initiatives and specifications being developed can be researched at the following groups.
TCSEC (Trusted Computer System Evaluation Criteria)
These are requirements for secure products as defined by the US National Security Agency. They are important standards which many US and global companies use to establish baselines for their computer and network infrastructure. You will often hear these standards referred to as the 'Orange Book'.
CAPI (Crypto API)
CAPI is an application programming interface developed by Microsoft which makes it much easier for developers to create applications which incorporate both encryption and digital signatures.
CDSA (Common Data Security Architecture)
CDSA is a security reference standard primarily designed to help develop applications which take advantage of other software security mechanisms. Although not initially widely used, CDSA has since been accepted by the Open Group for evaluation, and technology companies such as IBM, Netscape and Intel have helped develop the standard further. It is important for a disparate communication medium such as the internet to have open and interoperable standards for applications and software. The standard also includes an expansion platform for future developments and improvements in security elements and architecture.
GSS-API – (Generic Security Services API)
The GSS-API is a higher level interface that gives applications and software a common interface into security technologies. For example, it can act as a gateway into private and public key infrastructure and technologies.
This list is of course, a long way from being complete and because of the fast paced development of security technologies it’s very likely to change greatly. It should be remembered that although there is an obvious requirement for security at the server level, securing applications and software on the client is also important. Client side security is often more of a challenge due to different platforms and a lack of standards – configuration settings on every computer are likely to be different.
Many people now take security and privacy extremely seriously, especially now that so much of our lives involve online activities. Using encryption and some sort of IP cloaker like this to provide anonymity is extremely common. Most of these security services are provided by third parties through specialised software. Again incorporating these into some sort of common security standard is a sensible option yet somewhat difficult to achieve.
One of the most important features of SSL is its ability to authenticate based on SSL certificates. People often fail to understand that this certificate based authentication can only be used while SSL is in operation; it is not available in other situations. Take the more common case on the web of insecure HTTP exchanges: SSL certificate based authentication is not possible there, and the only option is to control access using basic username and password authentication. This represents possibly the biggest security issue on the internet today, because it takes place in clear text.
Another common misconception concerns the SSL sessions themselves. SSL sessions are established between two endpoints. A session may go through an SSL tunnel, which is effectively a forward proxy server. However, secure reverse proxying is not SSL tunnelling; it is probably better described as HTTPS proxying, although this is not a commonly used term. In this case the proxy acts as the endpoint of one SSL session with the client and forwards the request to the origin server over a second SSL session.
The two sessions are distinct, except of course that both will be present in the cache and memory of the proxy server. An important consequence of this is that the client's certificate based authentication credentials are not relayed to the origin server. The SSL session between the client and the reverse proxy authenticates the client to the proxy. The SSL session between the proxy and the origin server, however, authenticates the proxy itself: the certificate presented to the origin server is the reverse proxy's certificate, and the origin server has no knowledge of the client or its certificate.
To summarise, what is lost is the ability to authenticate the client to the origin server through the reverse proxy server.
In situations where client certificate based authentication and access control are required, that role has to be performed by the reverse proxy server. In other words the access control function is delegated to the proxy server. Currently there is no protocol for transferring access control data from the origin server to the reverse proxy server. However, in more advanced networks the access control lists can be stored in an LDAP server, for example in Windows Active Directory domains. This enables all unverified connections to be controlled, e.g. blocking BBC VPN connections, including outbound client requests to the media servers.
The reverse proxy could be described in this situation as operating as a web server. Indeed the authentication required by the reverse proxy is actually web server authentication, not proxy server authentication. Thus the challenge status code is HTTP 401, not 407. This is a crucial difference and a simple way to identify which kind of authentication is taking place on a network if you're troubleshooting.
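If you want to check which challenge you're actually receiving, a few lines of Python will do it. The hostname and path here are hypothetical placeholders for whatever resource you're troubleshooting:

```python
# Distinguishing the two authentication challenges: 401 means a web server
# (or a reverse proxy acting as one) wants credentials, 407 means a forward
# proxy does. The host and path are placeholders, not real addresses.
import http.client

conn = http.client.HTTPConnection("intranet.example.com")  # hypothetical host
conn.request("GET", "/protected/")                          # hypothetical path
resp = conn.getresponse()

if resp.status == 401:
    print("Web-server style challenge:", resp.getheader("WWW-Authenticate"))
elif resp.status == 407:
    print("Proxy style challenge:", resp.getheader("Proxy-Authenticate"))
else:
    print("No authentication challenge, status", resp.status)
conn.close()
```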
There are actually quite a lot of reverse proxy servers in use throughout large corporate networks, serving a variety of purposes. However there are two distinct roles for which they are commonly used –
replicating content to geographically dispersed areas
replicating content for load balancing
It's a function that is not always considered for proxies, yet content distribution is a logical role for any proxy server. In fact a reverse proxy can even be used to establish multiple replica servers of a single master in diverse locations. Take, for example, a multinational company with offices in countries all over the world.
It would be difficult for a single server holding company-wide data like templates, policies and procedures to serve the entire company, yet it is imperative that the integrity of any 'copy' is maintained. Reverse proxies could be set up at each branch, each with a slightly different address, perhaps including the location in the name. These reverse proxies would pull their data from the master, ensuring they were all identical.
This is quite an efficient use of the proxy, reducing bandwidth requirements across the network. However, the reverse proxies must be configured to pull changes from the master very frequently in order to ensure any changes are replicated quickly. In fact it would usually be safer for the master server to push changes to the reverse proxies to guarantee this.
The configuration can be completed by updating specific DNS entries in each zone. This would mean that www.master.com resolves to the local reverse proxy from each of the physical locations, while a name such as london.master.com points at the master server itself.
As mentioned, the main issue is ensuring that changes are replicated efficiently and accurately. In fact replication is perhaps too strong a term, as really the proxies are merely caching information and keeping it up to date. So if the master server's content is modified, it pushes the changes out to any of the proxies that are online: messages would be sent to the uk online proxy here, then to the Asian proxy and so on.
The other main use is of course load balancing for something like a heavily loaded web server. Requests received from clients are distributed across the multiple reverse proxies using methods like DNS round robin. This ensures that the requests are spread out evenly and no single reverse proxy becomes overloaded. Overloading often happened when static lists were used in rotation, as the same proxy servers would receive requests too frequently.
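You can see round robin DNS from the client side quite easily: a name that maps to several reverse proxies simply returns several A records. The sketch below reuses the made-up www.master.com name from the example above, so substitute a real name to try it:

```python
# Listing the addresses behind a round-robin DNS name. A name served by
# several reverse proxies returns multiple A records, and clients spread
# their requests across them. "www.master.com" is the made-up name from
# the example above.
import socket

addresses = {info[4][0] for info in
             socket.getaddrinfo("www.master.com", 80, proto=socket.IPPROTO_TCP)}
for addr in sorted(addresses):
    print(addr)
```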
John Severn often sneaks off work to travel somewhere hot. After all, he just needs to change ip address to United Kingdom and no one will notice his emails are coming from the Costa del Sol, next to a pool.
When you read about the internet, it's usually about how it's constantly expanding and growing, but that's not strictly true. Although new information is being added all the time, the reality is that much of it is often inaccessible, particularly when you're looking at video websites.
For instance, take one of the world's most popular websites, the BBC iPlayer. Even leaving aside simple page titles, it contains thousands of programmes, videos and radio broadcasts and is updated every single day. It's a wonderful resource which is continually refreshed, yet unfortunately the site is not accessible when you are located outside the United Kingdom unless you use something like a video proxy to help you. So why is it so difficult to access these sites? Why do people who happen to be away from home, perhaps in Roubaix in France or a seaside town in Spain, constantly have to search for ways to unblock video pages on YouTube and the big media sites?
It's an incredible situation, yet one that is becoming increasingly common: the internet is being compartmentalised, split into geographical sectors controlled by the internet's big players. The method used is called geo-blocking, or geo-locking, and the majority of large web sites use it to some extent. You'll find that a particular site will remove content based on your location; in fact in some countries it's almost impossible to watch videos on any of the major platforms. The practice has been criticised by all sorts of civil liberty organisations. Indeed the EU itself has made criticism, which you can find here, because it also undermines its concept of a Single Free Market.
The technology implemented varies slightly from site to site, yet it's basically the same: record the IP address and look up its location in a central database of addresses. So when you try to visit the BBC web site to watch a David Attenborough documentary, if your IP address isn't registered in the UK then you'll get blocked.
Planet Earth Documentaries on BBC iPlayer
It's extremely frustrating, especially for someone from the UK, and so workarounds were created. I mentioned above the concept of a video proxy to bypass these blocks, and it does work to some extent. You bounce your connection off an intermediate proxy server based in the location you need, which effectively hides your true IP address and location and will unblock video sites easily.
However it's important to remember that, from 2016 onwards, simple proxies no longer work on any of the major media sites. Forget about the thousands of simple unblock sites or free video proxy server sites that promise to bypass internet restrictions; they simply don't work any more. Without even basic SSL encrypted connections they can be detected easily, and the major sites block them automatically. Some are still able to unblock YouTube videos, but even those are fairly rare now. Many have been blocked at the server level, and their hosting services have told them to remove scripts like Glype. Unfortunately the days of the free proxy sites and web proxies have now gone for good, at least for accessing video sites and the large multimedia companies who provide the top rated video production.
However the concept still works just like the old video proxy method; it's just that you'll need a securely configured VPN server which cannot be detected. The encryption is useful, giving you some assurance of anonymity whilst still allowing cookies to flow down the connection transparently. It works in the same way, hiding your real address and presenting the address of the VPN server instead. Using this method, you can watch any media site from Hulu to Netflix and the BBC irrespective of your location. Unfortunately most simple proxies are now blocked, so even the best free proxy sites are useless for accessing media sites like these.
Here’s one in action using a proxy to watch video content from the BBC –
It's a highly sophisticated program that will allow you to proxy video through a secure connection, fast enough to let you watch without buffering. It's very easy to use to unblock video, and you'll find it can bypass internet filters too, which are also commonly implemented. The demo version is available to test it out; it won't function as a YouTube proxy unfortunately, but you can at least use the free version to unblock Facebook. The main program works on PCs and laptops, but unlike simple unblock proxy sites you can also use it as a mobile video proxy by establishing a VPN connection on your smartphone or tablet – it's relatively simple to do. Check out a video of it in action switching IP addresses online on this page.
There is one other method I should mention, which you can find discussed in this article here: it's called Smart DNS and is a simpler alternative to using a VPN service.
It's what literally millions of people around the world are doing right now: relaxing in the sun whilst watching the news on the BBC or their favourite US entertainment channel. There are a lot of these services available now, but only a few that work properly. Our recommendation doesn't look like a TV watching VPN at first glance, simply because they keep that functionality low key, yet for over a decade it has supported all the major media channels in a variety of countries.
When we discuss the technological characteristics of proxies there's one term you will see used very often: 'transparent'. It can actually be used in two distinct ways when it comes to proxies. The first, older definition is that transparent proxying ensures the user sees no difference in the result of a request whether it goes directly to the server or through a proxy. In an ideal world pretty much all legitimate proxies would be 'transparent' in this sense.
Proxies are, however, significantly more advanced than in the early years when this original definition was created, and the term 'transparent proxying' now carries more meaning. The extended definition is that transparent proxying ensures the client software is not even aware of the existence of the proxy server in the communication stream. This is unusual because the client traditionally had to be configured to use a proxy, perhaps through the internet settings in its browser configuration, and the software would then distinguish between proxy and direct requests.
With transparent proxying in its modern sense, it is the router, not the client, that is programmed to redirect the request through the proxy. This means the proxy can be used to intercept and control all outbound HTTP requests. Each request can be parsed, filtered or redirected. This control allows the network to apply access control rules to all outbound requests; a company could use these to ensure unsuitable requests, such as those to illegal web sites, are not being made from the corporate network.
This level of transparent proxying leaves the client completely unaware of the existence of an intermediate proxy server. There are some caveats though, and the proxy can be detected in certain circumstances. For example, there is little point in investing in a USA proxy buy if the proxy strictly follows HTTP/1.1, because that protocol makes no allowance for keeping proxying information hidden: a compliant intermediary is expected to announce itself in the Via header, which can give it away.
One of the main worries is that completely transparent proxying might cause other issues, particularly in client side applications. For example, one of the main reasons for using proxies in a corporate network is to reduce traffic by caching locally, and this could cause all sorts of problems if the behaviour of the proxy cache affects communication between the destination server and the client application.
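To make the idea concrete, here is a very rough sketch of what an intercepting proxy has to do once the router has redirected a client's port-80 traffic to it. The listening port is arbitrary, and caching, filtering and error handling (the whole point of a real deployment) are left out:

```python
# Minimal sketch of an intercepting ("transparent") proxy. Because the client
# believes it is talking to the origin server, the request line carries only
# a path, so the proxy recovers the real destination from the Host header.
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 3129))   # arbitrary port the router redirects to
listener.listen(5)

while True:
    client, _ = listener.accept()
    request = client.recv(65535)   # assume the whole request fits in one read

    # Pull the destination host out of the Host header the client sent.
    host = None
    for line in request.split(b"\r\n")[1:]:
        if line.lower().startswith(b"host:"):
            value = line.split(b":", 1)[1].strip().decode()
            host = value.split(":")[0]    # drop any explicit port
            break

    if host:
        upstream = socket.create_connection((host, 80))
        upstream.sendall(request)             # forward the request unchanged
        client.sendall(upstream.recv(65535))  # relay (part of) the reply
        upstream.close()
    client.close()
```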
The data transport protocol is of course crucial to a global information network like the world wide web. Unfortunately the HTTP/1.0 protocol has some inherent performance issues, which have been largely addressed in version 1.1 of the protocol, and it is expected that future developments will improve performance further.
One issue relates to the three way handshake that TCP requires to establish a connection. It is important to remember that during this handshake phase no application data is transferred at all; from the user's perspective the delay simply appears as latency in getting the initial connection established. The three way handshake involves a considerable overhead preceding data transfer and has a noticeable effect on performance, particularly on busy networks.
This problem is made worse by the HTTP/1.0 protocol, which makes extensive use of new connections. In fact every new request requires a new TCP connection to be established, complete with a new three way handshake. This was originally intended as a performance measure, because it was thought to avoid long-lived idle connections being left dormant; the reasoning was that it was more efficient to establish new connections when required, as each data burst would be small and short-lived.
However the web has not developed like this, and it is much more than a series of short HTML files quickly downloaded. Instead the web is full of large documents and pages embedded with videos and images. Add to that the multitude of applets, code and other embedded objects and it soon adds up. What's more, each of these objects usually has its own URL and so requires a separate HTTP request. Even if you invest in a high quality US proxy you'll find some impact on speed using HTTP/1.0, simply due to the huge number of connection requests it generates.
Modifications were made to increase the perceived performance from the user's perspective. For one, multiple simultaneous connections were permitted, letting client software such as browsers download and render several components of a page at once, so the user wasn't left waiting while individual components loaded one by one. However, although parallel connections improve performance for the individual, they generally have a negative impact on the network as a whole: the underlying process is still inefficient, and parallel connections do little to mitigate that.
As any network administrator knows, focussing on a single aspect of network performance is rarely a good idea and will almost never improve overall network performance. The persistent connection feature was introduced to help solve this; it was added as a non-standard extension to HTTP/1.0 and became the default behaviour in HTTP/1.1.
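The difference is easy to demonstrate from the client side. The sketch below first opens a fresh connection (and so a fresh handshake) for every request, HTTP/1.0 style, then reuses a single persistent connection; example.com is just a stand-in for any server you control:

```python
# Rough comparison of one-connection-per-request behaviour with a reused
# persistent connection. example.com is a placeholder server.
import http.client
import time

HOST, PATH, N = "example.com", "/", 5

start = time.time()
for _ in range(N):
    conn = http.client.HTTPConnection(HOST)   # fresh connection each time
    conn.request("GET", PATH)
    conn.getresponse().read()
    conn.close()
print("one connection per request:", round(time.time() - start, 3), "s")

start = time.time()
conn = http.client.HTTPConnection(HOST)       # single persistent connection
for _ in range(N):
    conn.request("GET", PATH)
    conn.getresponse().read()                 # must read fully before reusing
conn.close()
print("persistent connection:     ", round(time.time() - start, 3), "s")
```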