Introduction to Kerberos Authentication

Kerberos is one of the most widely used methods of authentication, and this post will briefly introduce the subject. As well as being implemented in many operating systems, Kerberos is available in many commercial products too. It has been tested and refined over decades of deployment and offers some crucial benefits, but it also has a few main flaws that system administrators will want to take into consideration. Kerberos is the most frequently used example of this sort of ticket-based authentication technology.

This sort of authentication is rather simple to understand when it only involves two systems, but there are lots of things that can go wrong with Kerberos authentication. Once a client is authenticated, a session encryption key is created; if the client is misconfigured, encryption cannot be enabled at all. Transport layer encryption isn't strictly necessary if SPNEGO is used, but the client's browser has to be properly configured. Authentication is automatic in the event the domains are in the same forest. If you are failing to use Kerberos authentication under the LocalSystem account, you are more than likely going to fail to use Kerberos authentication when users visit the remote system. Finally, Kerberos is not only used for authenticating users: when your iPad connects through its VPN to watch British channels online using your AD network, it is Kerberos that authenticates the machine.
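
To make the "properly configured client" point concrete, here is a minimal sketch of SPNEGO authentication from Python using the third-party requests-kerberos package; the intranet URL is a placeholder and a working Kerberos ticket (e.g. from a domain logon) is assumed.

    # SPNEGO/Negotiate authentication sketch (pip install requests-kerberos).
    # Assumes the client already holds a valid Kerberos ticket.
    import requests
    from requests_kerberos import HTTPKerberosAuth, REQUIRED

    # Mutual authentication: the server must prove its identity to us too,
    # which is one of Kerberos' crucial benefits over password schemes.
    auth = HTTPKerberosAuth(mutual_authentication=REQUIRED)

    response = requests.get("http://intranet.example.com/report", auth=auth)
    print(response.status_code)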

If the password is incorrect, you won't be able to decrypt the message. It is extremely important that you don't forget this password. You might also be surprised how many users choose a password that is identical to their user name.

Your user name is not a good choice for a password. When using some services or clients, you may have to enter your password, which is then sent to the server. It is also quite likely that a user has set the same password for two principals for reasons of convenience. Ideally, you should only have to type your password into your own computer once, at the start of the day.

You won't be able to administer your server if you do not remember the master password. If the server cannot automatically register the SPN, the SPN has to be registered manually. It is normal for the admin server to take some time to start, so be patient. A virtual server simply means that it is not part of a dedicated host. Typical errors you may encounter include 'The specified server cannot perform the requested operation' and 'The RPC Server is not actively listening'.

'Server refused to negotiate authentication, which is needed for encryption' is another error you may encounter. Before deploying Kerberos, a server has to be selected to act as the KDC (Key Distribution Center). The network location server, by contrast, is a website used to detect whether DirectAccess clients are situated inside the corporate network.

The client may be using an old Kerberos V5 protocol that doesn't support initial connection support. If the client is unable to get the ticket, an error will be returned to that effect. In the Kerberos protocol the client authenticates against the server, and the server also authenticates itself against the client: this is mutual authentication. At a lower level, the RPC client sends the very first packet, called the SYN packet.

If each client required a unique key for every service, and each service required a unique key for every client, key distribution would quickly become a difficult problem to solve. A client will not send the job unless it receives the right response. The client cannot decrypt the service ticket, because only the server can do so, but it can nevertheless send it on. Later the client can use its ticket-granting ticket to obtain additional tickets for services using the same shared secret. Both client and server may also be called security principals.
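
The scale of the key distribution problem is easy to put into numbers. A short sketch (the counts are invented for illustration) comparing pairwise keys against the Kerberos model, where each principal shares a single key with the KDC:

    # Pairwise keys vs. the Kerberos/KDC model (illustrative counts only).
    clients, services = 1000, 50

    pairwise_keys = clients * services   # one secret per client/service pair
    kdc_keys = clients + services        # one secret per principal, held by the KDC

    print(f"Pairwise: {pairwise_keys} keys")   # Pairwise: 50000 keys
    print(f"With KDC: {kdc_keys} keys")        # With KDC: 1050 keys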

John Simmons
http://bbciplayerabroad.co.uk/uk-vpn-free-trial/

Filtering Authentication Credentials

When you use a proxy or VPN server there is a very important, and sometimes overlooked, security consideration you should be aware of: how the connection handles any authentication credentials that are sent through it. For example, if you are using a proxy for all your web browsing, you need to trust that server with any user names and passwords you supply to the websites you visit. Remember the proxy will forward all traffic to the origin server, including those user credentials.

The other consideration is the proxy server's own authentication credentials, which may also be transmitted or passed on, especially if the servers are chained. It is common for proxy credentials to be forwarded, as this reduces the need to authenticate multiple times against different servers. In these situations the last proxy server in the chain should filter out the Proxy-Authorization: header if it is present.
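
As a minimal sketch of that filtering step, here is the sort of helper the last proxy in a chain might run over a request's headers before forwarding them to the origin server; the function and header set are illustrative rather than taken from any particular proxy implementation.

    # Strip proxy credentials before forwarding a request upstream.
    # Proxy-Authorization is hop-by-hop and should never reach the origin.
    HOP_BY_HOP = {"proxy-authorization", "proxy-connection"}

    def filter_outbound_headers(headers: dict) -> dict:
        """Return a copy of the headers that is safe to forward upstream."""
        return {name: value for name, value in headers.items()
                if name.lower() not in HOP_BY_HOP}

    inbound = {"Host": "example.com",
               "Proxy-Authorization": "Basic dXNlcjpwYXNz"}
    print(filter_outbound_headers(inbound))   # {'Host': 'example.com'}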

One of the dangers is that a malicious server could intercept or capture these authentication credentials, especially if they're being passed in an insecure manner. Any proxy involved in the route has the potential to intercept usernames and passwords. Many people forget this when using random free proxies they find online: they are implicitly trusting these servers, and their unknown administrators, with any personal details leaked while using these connections. When you consider that these free servers are often merely misconfigured or 'hacked' machines, using them becomes even more risky.

How to deal with authentication details is actually a difficult question, particularly with regards to proxies. The situation with VPNs is slightly more straightforward: the details are protected during the majority of the transmission because most VPNs are encrypted. However, the last step to the target server will rely on whatever security is built into that connection, and even this can be affected, as in this article – BBC block VPN connection.

Any server can filter out and protect authentication credentials, but obviously those intended for the target itself can't be removed. It is a real risk, and it highlights one of the important security considerations of using any intermediate server such as a proxy. It is important that these servers are themselves secure and do not introduce additional security risks into the connection. Sending credentials over a plain HTTP session is already potentially insecure, even without a badly configured or administered proxy server in the path.

Most websites which accept usernames now at least use something like SSL to protect credentials. However, although VPN sessions will transport these connections effectively, many proxies are unable to support the tunnelling of SSL connections properly. Man-in-the-middle attacks are also common against these sorts of protections, and a poorly configured proxy makes them much easier than a direct connection does. Ultimately there are several points where web security and protecting the data is a concern; at the very least, make sure that a VPN or proxy doesn't introduce additional security risks into the connection.

Additional Reading on UK VPN Trial

Content Filtering and Proxies

Proxy servers are, as explained on this site, one of the most important components of a modern network infrastructure. No corporate network should allow ordinary desktop PCs or laptops to access the internet directly without some sort of protection. Proxy servers provide that protection to a certain extent, as long as their use is enforced.

Most users, especially technically minded ones, will often resent using proxies because they are aware of the control that this entails. The simplest way to enforce their use is to ensure that configuration files are delivered automatically to the desktop by network servers. In a Windows environment this can be achieved using Active Directory, which can ensure desktops and users receive specific internet configuration files. For example, you can configure Internet Explorer using a specific configuration which is delivered to every desktop on login. You can also use Active Directory to block users from installing and configuring other browsers.

However, although this allows you to control which browser and which internet route each user will take, it doesn't restrict what that user can do online. Another layer is required, and most companies will employ some sort of content filtering in order to protect their environment. As far as your proxy server is concerned, though, content filtering will almost certainly have a major impact on performance.

One of the most common forms is URL filtering, and it has one of the biggest performance impacts. This is largely because this sort of filtering inevitably involves many patterns to match against, and content filtering as a whole can severely impact the performance of a proxy server simply because of the sheer volume of data involved. Even running a nominal content filter against a UK VPN trial had a similar effect.
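
To see why pattern matching is so expensive, consider this minimal URL filter sketch: every request has to be tested against every entry in the blocklist, so the per-request cost grows with the list. The patterns are invented examples; real blocklists run to many thousands of entries.

    # Naive URL filter: per-request cost grows with the number of patterns.
    import re

    BLOCKLIST = [re.compile(p) for p in (
        r"\.example-gambling\.",          # invented example patterns
        r"/downloads/.*\.exe$",
        r"^http://[^/]*\.badsite\.org",
    )]

    def is_blocked(url: str) -> bool:
        """Worst case touches every pattern in the list."""
        return any(pattern.search(url) for pattern in BLOCKLIST)

    print(is_blocked("http://files.badsite.org/downloads/setup.exe"))  # True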

There are a variety of different types of filtering, such as HTML tag filtering, virus screening and URL screening. It can be difficult though, and the technology is developing all the time, for instance in the ability to screen things like Java or ActiveX objects.

One of the biggest problems with content filtering while maintaining performance on the proxies is the fact that entire objects need to be processed. A proxy server will need to buffer the whole file, and can therefore only proceed with the transmission after the entire file has been checked. From the user perspective this can be frustrating, as there will be long pauses and delays in browsing, especially on busy networks. This delay can obviously be justified when screening for viruses; it can be more controversial for other kinds of screening.
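
The buffering behaviour looks roughly like the sketch below: nothing is sent to the client until the final chunk has arrived and the whole object has been scanned. The scan_for_viruses argument is a stand-in for whatever scanning engine the proxy actually uses.

    # Buffer-then-scan: the client sees nothing until the whole object passes.
    def relay_with_scanning(upstream_chunks, send_to_client, scan_for_viruses):
        buffer = bytearray()
        for chunk in upstream_chunks:        # the whole object must arrive...
            buffer.extend(chunk)
        if scan_for_viruses(bytes(buffer)):  # ...then be scanned in full...
            raise RuntimeError("object rejected by content filter")
        send_to_client(bytes(buffer))        # ...before a single byte is relayed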

Further Reference: Using a Paid VPN Service

TCP Configuration: Timestamp Option

The function of the timestamp option is fairly self-explanatory: it simply lets the sender place a timestamp value in each and every segment. In turn the receiver reflects this value in its acknowledgement, which allows the sender to calculate a round-trip time for every received ACK. Remember this is indeed per ACK and not per segment, as one ACK can cover multiple segments.

Initially most implementations of TCP would only measure one RTT per window; however this has changed, and nowadays larger window sizes need more accurate RTT calculations. You can read about the definitions of these calculations in RFC 1323, which covers the TCP extensions that allow these improved RTT measurements. Sampling the RTT once per window works well with smaller windows (and fewer segments), but it under-samples badly as windows grow.

Accurate measurement of data transmission is often very difficult in congested and busy networks, and also when troubleshooting across networks like the internet. It's difficult to isolate issues and solve problems in these sorts of environments because you have no control over, or access to, the majority of the transport hardware. For example, if you are trying to fix a Netflix VPN problem remotely, being able to check the RTT is essential to analyse where the problems potentially lie.

The sender places a 32-bit value in the timestamp value field, which is echoed back by the receiver in the echo reply field. Using this option increases the size of the TCP header from 20 bytes to 32 bytes. The timestamp value increases on each transaction; there is no clock synchronisation between the sender and the receiver, merely an increase in the value of the timestamp unit. Most implementations recommend that the value increment in units of one, ideally somewhere between 1 millisecond and 1 second per tick.
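
For reference, the option on the wire is kind 8, length 10: one byte each for kind and length, then the two 32-bit timestamp fields (the remaining 2 bytes of the 12-byte header increase are padding). A small sketch of packing those ten bytes, with placeholder values:

    # TCP timestamp option (RFC 1323): kind=8, len=10, TSval, TSecr.
    import struct

    def pack_timestamp_option(ts_val: int, ts_ecr: int) -> bytes:
        """Pack the 10-byte option; the values here are placeholders."""
        return struct.pack("!BBII", 8, 10, ts_val, ts_ecr)

    print(pack_timestamp_option(ts_val=123456, ts_ecr=0).hex())
    # 080a0001e24000000000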

This option is negotiated during connection establishment and is handled the same way as the window scale option in the previous section. As you may know, the receiving end does not have to acknowledge every data segment it receives. The implementation is simplified because only a single timestamp value is maintained per active connection, and it is updated according to a simple algorithm.

First of all, TCP keeps track of the timestamp value to send in the next ACK in a variable called tsrecent, and the acknowledgement sequence number from the last ACK that was sent in a variable called lastack. The lastack value is updated as each ACK is sent, not as data is acknowledged. When a new segment arrives, if the segment contains the byte numbered lastack, the timestamp value from that segment is saved in tsrecent. Whenever a timestamp option is sent, tsrecent is placed in the echo reply field, and the sequence number field of the ACK is stored in lastack.
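
A compact sketch of that bookkeeping, with variable names mirroring the description above (sequence-number wraparound is ignored for brevity):

    # Receiver-side timestamp bookkeeping per RFC 1323 (wraparound ignored).
    class TimestampState:
        def __init__(self):
            self.tsrecent = 0   # timestamp to echo in the next ACK
            self.lastack = 0    # ack sequence number from the last ACK sent

        def on_segment(self, seg_seq: int, seg_len: int, ts_val: int):
            # Save the timestamp only if this segment contains byte lastack.
            if seg_seq <= self.lastack < seg_seq + seg_len:
                self.tsrecent = ts_val

        def on_send_ack(self, ack_seq: int) -> int:
            # Echo tsrecent and remember which ack sequence was sent.
            self.lastack = ack_seq
            return self.tsrecent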

This means that in addition to allowing better RTT calculation, the timestamp option also performs another function: the receiver can use it to reject old duplicate segments, via an additional feature called PAWS – Protection Against Wrapped Sequence Numbers.
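
At its core the PAWS test is just a comparison against tsrecent, performed with wraparound-aware 32-bit arithmetic. A rough sketch:

    # PAWS: drop a segment whose timestamp predates the last accepted one.
    def ts_older(a: int, b: int) -> bool:
        """True if 32-bit timestamp a is older than b, wraparound-aware."""
        return ((a - b) & 0xFFFFFFFF) > 0x7FFFFFFF

    def paws_reject(seg_ts_val: int, tsrecent: int) -> bool:
        return ts_older(seg_ts_val, tsrecent)

    print(paws_reject(seg_ts_val=5, tsrecent=10))    # True: old duplicate
    print(paws_reject(seg_ts_val=15, tsrecent=10))   # False: fresh segment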

Further Reading on Commercial Proxy Options – http://www.anonymous-proxies.org/2017/05/buy-uk-proxy-ip-address.html

Security Specifications and Initiatives

Throughout the internet community there are many groups working to resolve a variety of security-related issues online. Their activities cover all aspects of internet security, and networking in general, from authentication, firewalls and one-time passwords to public key infrastructure, transport layer security and much more.

Many of the most important security protocols, initiatives and specifications being developed can be researched through the following standards and interfaces.

TCSEC (Trusted Computer System Evaluation Criteria)

These are requirements for secure products as defined by the US National Security Agency. They are important standards which many US and global companies use to establish baselines for their computer and network infrastructure. You will often hear these standards referred to as the 'Orange Book'.

CAPI (Crypto API)

CAPI is an application programming interface developed by Microsoft which makes it much easier for developers to create applications which incorporate both encryption and digital signatures.

CDSA (Common Data Security Architecture) 

CDSA is a security reference standard primarily designed to help develop applications which take advantage of other software security mechanisms. Although not initially widely used, CDSA has since been accepted by the Open Group for evaluation, and technology companies such as IBM, Netscape and Intel have helped develop the standard further. It is important for a disparate communication medium such as the internet to have open and interoperable standards for applications and software. The standard also includes an expansion platform for future developments and improvements in security elements and architecture.

GSS-API – (Generic Security Services API)

The GSS-API is a higher-level interface that gives applications and software a uniform way into underlying security technologies. For example, it can act as a gateway into private and public key infrastructure and related technologies.
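
As a rough illustration of what such a higher-level interface looks like from application code, this sketch initiates a GSS-API security context using the third-party Python gssapi package; the service principal is a placeholder and a working Kerberos environment with valid credentials is assumed.

    # Initiating a GSS-API security context (pip install gssapi).
    # Assumes valid Kerberos credentials; the service name is a placeholder.
    import gssapi

    target = gssapi.Name("HTTP@server.example.com",
                         gssapi.NameType.hostbased_service)

    # The application never touches Kerberos directly; GSS-API hides it.
    ctx = gssapi.SecurityContext(name=target, usage="initiate")
    token = ctx.step()   # first token to send to the server
    print(len(token), "byte initial context token generated")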

This list is of course a long way from being complete, and because of the fast pace of security technology development it is very likely to change greatly. It should be remembered that although there is an obvious requirement for security at the server level, securing applications and software on the client is also important. Client-side security is often more of a challenge due to different platforms and a lack of standards – configuration settings on every computer are likely to be different.

Many people now take security and privacy extremely seriously, especially now that so much of our lives involves online activity. Using encryption and some sort of IP cloaker like this to provide anonymity is extremely common. Most of these security services are provided by third parties through specialised software. Again, incorporating these into some sort of common security standard is a sensible option, yet somewhat difficult to achieve.

Further Reading: Netflix VPN Problem, Haber Press, 2015

Certificate Based Client Authentication

One of the most important features of SSL is its ability to authenticate endpoints using SSL certificates. People often fail to understand that this certificate-based authentication is only available while SSL is in use; it is not accessible in other situations. Take, for example, the more common case on the web of insecure HTTP exchanges: SSL certificate-based authentication is simply not available there. The only option is to control access using basic username/password authentication, which possibly represents the biggest security issue on the internet today because it also takes place in clear text!

Another common misconception concerns the SSL sessions themselves. SSL sessions are established between two endpoints. A session may go through an SSL tunnel, which is effectively a forward proxy server. However, secure reverse proxying is not SSL tunnelling; it is probably better described as HTTPS proxying, although this is not a commonly used term. In this arrangement the proxy acts as the endpoint of one SSL session, terminating the client's connection, and then forwards the request to the origin server over a second session.

The two sessions are distinct, except of course that both will be present in the cache and memory of the proxy server. An important consequence of this is that the client's certificate-based authentication credentials are not relayed to the origin server. The SSL session between the client and the reverse proxy authenticates the client to the proxy server, while the SSL session between the proxy and the origin server authenticates the server itself. The certificate presented to the origin server is the reverse proxy's certificate, so the origin server has no knowledge of the client and its certificate.

To summarise: what is lost is the ability to authenticate the client to the origin server through the reverse proxy server.

In situations where client certificate-based authentication and access control are required, that role has to be performed by the reverse proxy server; in other words, the access control function is delegated to the proxy. Currently there is no protocol available for transferring access control data from the origin server to the reverse proxy server. However, in more advanced networks the access control lists can be stored in an LDAP server, for example in a Windows Active Directory domain. This enables all unverified connections to be controlled, e.g. blocking BBC VPN connections, including outbound client requests to the media servers.

The reverse proxy could be described in this situation as operating as a web server. Indeed, the authentication required by the reverse proxy is actually web server authentication, not proxy server authentication. Crucially, the challenge status code is therefore HTTP 401 and not 407. This is an important difference and a simple way to identify exactly which authentication method is taking place on a network if you're troubleshooting.
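
That status-code distinction is easy to check from client code. A minimal sketch using the Python requests library, with a placeholder URL:

    # 401 = web server (or reverse proxy acting as one); 407 = proxy.
    import requests

    response = requests.get("https://app.example.com/secure")

    if response.status_code == 401:
        print("Web server challenge:",
              response.headers.get("WWW-Authenticate"))
    elif response.status_code == 407:
        print("Proxy challenge:",
              response.headers.get("Proxy-Authenticate"))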

Buy US Proxy with Transparent Proxying

When discussing the technological characteristics of proxies there is one term you will see used very often: 'transparent'. It can actually be used in two distinct ways when it comes to proxies. The first refers to a definition which implies that transparent proxying makes no difference to the original request from the user's point of view, whether it goes directly to the server or through a proxy. In an ideal world pretty much all legitimate proxies would be considered 'transparent' in this sense.

Proxies are, however, significantly more advanced than in the early years when this original definition was created, and the term 'transparent proxying' now carries more meaning. The extended definition is that transparent proxying means the client software is not aware of the existence of the proxy server in the communication stream. This is unusual because traditionally the client was explicitly configured to use a proxy, perhaps through the internet settings in its browser configuration, and the software would then distinguish between proxy and direct requests.

When transparent proxying, in its modern sense, is used, it is the router rather than the client that is programmed to redirect the request through the proxy. This means the proxy can be used to intercept and control all outbound HTTP requests; each request can be parsed, filtered and even redirected. This control allows the network to apply access control rules to all outbound requests. A company could use these to ensure unsuitable requests, e.g. to illegal web sites, are not being made from the corporate network.

This level of transparent proxying leaves the client completely unaware of the existence of an intermediate proxy server. There are some caveats though, and the proxy can be detected in certain circumstances. For example, there is little point in investing in a USA proxy buy if the server fully honours HTTP/1.1, because that protocol requires a proxy to announce itself in the Via header of forwarded messages, leaving no allowance for transparency.
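
From the server side, detection usually amounts to looking for the headers a compliant (or careless) proxy adds along the way. A crude sketch; a real detector would need rather more than this:

    # Crude proxy detection based on request headers.
    PROXY_HEADERS = ("Via", "X-Forwarded-For", "Forwarded")

    def looks_proxied(request_headers: dict) -> bool:
        return any(h in request_headers for h in PROXY_HEADERS)

    sample = {"Host": "example.com", "Via": "1.1 cache-3.example.net"}
    print(looks_proxied(sample))   # True: the Via header gives the proxy away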

One of the main worries is that completely transparent proxying might cause other issues, particularly in client-side applications. For example, one of the fundamental reasons for using proxies in a corporate network is to reduce traffic by caching locally. This could cause all sorts of problems if the behaviour of the proxy cache affects communication between the destination server and the client application.

Further Reading – http://www.changeipaddress.net/us-ip-address-for-netflix/

Optimizing Proxies – Protocol Performance

The performance of the data transport protocol is of course crucial to a global information network like the world wide web. Unfortunately the HTTP/1.0 protocol has some inherent performance issues, which have been largely addressed in version 1.1 of the protocol, and it is expected that future developments will improve performance further.

One issue relates to the three-way handshake that TCP requires before it can establish a connection. It is important to remember that during this handshake phase no application data is transferred at all; from the user's perspective the delay simply appears as latency in getting the initial connection established. The three-way handshake imposes a considerable overhead before any data transfer and has a noticeable effect on performance, particularly on busy networks.

This problem is made worse by the HTTP/1.0 protocol, which makes extensive use of new connections: every request requires a new TCP connection to be established, complete with a new three-way handshake. This was originally a deliberate design choice, intended to avoid long-lived idle connections being left dormant; the reasoning was that it was more efficient to establish new connections on demand, since the data would come in small, frequent bursts.

However, the web has not developed like this: it is much more than a series of short HTML files quickly downloaded. Instead the web is full of large documents and pages embedded with videos and images. Add to that the multitude of applets, scripts and other embedded objects, and it soon adds up. What's more, each of these objects usually has its own URL and so requires a separate HTTP request. Even if you invest in a high-quality US proxy you'll find some impact on speed using HTTP/1.0, simply due to the huge number of connection requests it generates.

Modifications were made to increase the perceived performance from the user's perspective. For one, multiple simultaneous connections were allowed, letting client software such as browsers download and render several components of a page at once, so the user wasn't left waiting while individual components loaded one by one. However, although parallel connections improve performance for the individual user, they generally have a very negative impact on the network as a whole. The underlying process is still inefficient, and allowing parallel connections does little to mitigate that.

As any network administrator knows, focusing on a single aspect of network performance is rarely a good idea and will almost never improve overall network performance. The persistent connection feature was introduced to help solve this; it was added as a non-standard extension to HTTP/1.0 and is included by default in HTTP/1.1.
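
The practical effect of persistent connections is easy to see from client code. In this sketch using the Python requests library (placeholder URLs), a Session reuses one TCP connection, and therefore one handshake, across requests, whereas separate calls may each open a fresh connection:

    # Persistent (keep-alive) connections: one handshake, many requests.
    import requests

    urls = ["https://www.example.com/page%d.html" % n for n in range(1, 4)]

    # Each standalone call may set up and tear down its own TCP connection.
    for url in urls:
        requests.get(url)

    # A Session keeps the connection open, HTTP/1.1 style, and reuses it.
    with requests.Session() as session:
        for url in urls:
            session.get(url)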

Further Reading: Proxies Blocked by BBC Abroad

Remote Login Methods

The ability to log in remotely to a machine that's miles away from you is perhaps one of the internet's most popular applications. It might not seem so, but being able to access a remote host without a hard-wired connection has transformed many areas of IT, particularly support and development. Obviously you need an account on the host you are trying to log in to, but being able to use the machine as if you were at the console is extremely useful in many situations.

Two of the most famous applications for remote login on a TCP/IP based network (such as the internet) are Telnet and Rlogin. The most famous, and probably used by every IT support technician over the age of 25, is Telnet, installed as standard in almost every TCP/IP implementation. It seems relatively simple, but this hides some great functionality, not least the ability to Telnet from one operating system to another. It's incredibly useful to be able to sit at a Microsoft Windows machine with multiple command interfaces open in separate windows to Unix and Linux machines at the same time.

Remember these terminal windows are effectively like physically sitting at the remote host's console. This is completely different from just using a web session, or using something like an Italian proxy to stream RAI player abroad like this. Each individual character that you type is sent to the remote host; there's no streaming, no relaying or filtering. Obviously there are some restrictions on running a terminal window against a completely different system, so Telnet performs an option negotiation phase between the client and server to ensure that only features supported at both ends are used.
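
For a flavour of how simple scripted Telnet access is, here is a minimal sketch using Python's telnetlib module (standard library in older versions, removed in Python 3.13); the host and credentials are placeholders, and note that Telnet sends them in clear text:

    # Minimal scripted Telnet session; telnetlib was removed in Python 3.13.
    import telnetlib

    tn = telnetlib.Telnet("host.example.com")  # option negotiation happens here
    tn.read_until(b"login: ")
    tn.write(b"myuser\n")
    tn.read_until(b"Password: ")
    tn.write(b"mypassword\n")                  # sent in clear text!

    tn.write(b"uname -a\n")                    # run a command on the remote host
    tn.write(b"exit\n")
    print(tn.read_all().decode("ascii", "replace"))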

The other famous remote login application, Rlogin, originated in Berkeley Unix. It was initially only available on Unix systems, but it has since been ported to most other operating systems, and you can now Rlogin between Windows and Linux. Both applications use the client/server model: the client is the system where the connection is initiated, and the remote target is the server.

Nowadays the more popular of the two applications, Telnet, has become much more sophisticated. Over the years lots of functionality has been added to Telnet, whereas Rlogin remains quite simple and unmodified. It should be noted, though, that although Rlogin lacks features, it is a simple and stable remote access application.

The author – John Herrington – has worked in IT for over thirty years in a variety of roles, from support through to network manager at a large bank. He now works for himself and runs one of the largest paid VPN services on the West Coast of America. He works remotely a lot of the time, but will rarely use Telnet as it's too insecure!

Tracking VPN and Proxy Users

Network administrators in corporate networks face similar challenges to those running firewalls for authoritarian regimes when it comes to the use of proxies and VPN services. The issue is that not only do these tools give individuals the freedom to conduct their internet activity without being tracked, a VPN will also prevent most aspects of logging from taking place.

In a company network this means an individual could potentially conduct all sorts of behaviour from a company computer while sitting in a corporate office at work. They could be downloading films, streaming Netflix, or doing something perhaps much more sinister. Obviously this is a risk both to the network infrastructure and potentially to the company's reputation.

So how do you block the use of VPNs and proxies? On a corporate network there are actually many options, and the simplest is probably to stop any sort of VPN or proxy being used in the first place. You can lock down the advanced settings in a web browser quite simply; for example, the Internet Explorer Administration Kit (IEAK) lets you configure and deploy an IE package, which cannot be modified, onto every client in your organisation. This stops proxies being configured manually, and VPN clients can be blocked by ensuring that standard users have no administrative access to their desktops.

It is certainly easier to block any installation than to try to track the use of VPNs, particularly the more sophisticated ones. For example, although you could potentially monitor firewall and router logs for specific IP addresses that look like VPN endpoints, some services allow you to switch across a whole range of IP addresses – the 'Hide My VPN' rotation feature demonstrated in this video is a good example. A sketch of the log-monitoring approach appears below.
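
Scanning outbound connection records against known VPN address ranges with Python's ipaddress module might look like this; the ranges and log format are invented for illustration, and rotation across many ranges is exactly what defeats the approach.

    # Flag outbound connections to known VPN ranges (illustrative data only).
    import ipaddress

    KNOWN_VPN_RANGES = [ipaddress.ip_network(n) for n in
                        ("198.51.100.0/24", "203.0.113.0/24")]

    def flag_vpn_connections(log_lines):
        for line in log_lines:
            dest = ipaddress.ip_address(line.split()[-1])  # last field = dest
            if any(dest in net for net in KNOWN_VPN_RANGES):
                yield line

    logs = ["10.0.0.5 -> 198.51.100.77", "10.0.0.9 -> 8.8.8.8"]
    print(list(flag_vpn_connections(logs)))   # flags only the first entry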

As you can see, if a service rotates its addresses then identifying the VPN by its IP address becomes much more difficult. Blocking installation of the highlighted service, Identity Cloaker, can also be difficult, as it has a mobile version which can be run directly from a USB disk.

You can see that plain proxies are fairly irrelevant today, as they can be easily blocked and most content filters can detect their use. Their use has also dropped globally for a further reason: they are mostly detected by websites which operate regional restrictions. It is the more sophisticated virtual private networks that are the real difficulty, particularly those equipped with various VPN hider technologies and advanced encryption.