Anatomy of a Denial of Service Attack

Once the initial planning and reconnaissance legwork is complete, the next logical step is to make use of the accumulated information and attack the network. The traffic generated by these attacks can take many different forms. Everything from remote exploitation code to suspicious-looking but otherwise normal traffic may signify an attempted attack that needs action.

Denial of Service

A Denial of Service attack is any attack that disrupts the operation of a system so that legitimate users can no longer access it. DoS attacks are possible against most network equipment, including routers, servers, firewalls, remote access machines, and almost every other network resource. A DoS attack may be specific to a service, as in an FTP attack, or may target an entire machine. Often the attacks are aimed at commercial targets or at gaining access to useful resources. Many attacks are simply intended to enable the installation of rogue services such as VPNs or FTP servers, which are then used either to store data or to access resources like UK TV abroad like this.


The types of Denial of Service attack are varied and operate against a wide range of targets. However, they can be separated into two distinct categories that relate to intrusion detection: resource depletion and malicious packet attacks. Malicious packet DoS attacks work by sending abnormal traffic to a host in order to cause the service or the host itself to crash. Crafted packet DoS attacks succeed when an application isn't correctly coded to handle abnormal or irregular traffic; out-of-spec traffic can frequently cause an application to respond unexpectedly and crash. Attackers can use crafted-packet DoS attacks to bring down Intrusion Detection Systems too, even well-developed ones like Snort. In addition to out-of-spec traffic, malicious packets can carry payloads that cause a system to crash. A packet payload is, in effect, input to a service.

In any case, whether the target is an application or a network-enabled device, if the input isn't correctly checked the application can be DoS'ed. The Microsoft FTP DoS attack demonstrates the broad selection of DoS attacks available to attackers in the wild. The first step in the attack is to initiate a legitimate FTP connection. The attacker then issues a command containing a wildcard sequence. Inside the FTP server, the function that processes wildcard sequences in FTP commands doesn't allocate enough memory when performing pattern matching, so a command containing a carefully chosen wildcard sequence can cause the FTP service to crash. This attack, like the Snort DoS mentioned above, is just one example of the countless Denial of Service attacks that are possible and accessible to attackers. A compromised service can then be used to install malware or other code, which in turn is used for other purposes. As mentioned above, such hosts are often used to run VPN services which are used to watch British TV overseas or for other video streaming functions.
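
To make the input-validation point concrete, here is a minimal, hypothetical sketch (not the actual Microsoft FTP code) of a pattern-matching handler that rejects oversized or wildcard-heavy patterns before doing any expensive matching; the absence of exactly this kind of check is what makes the class of attack described above possible. The limits and the fnmatch-based matcher are illustrative assumptions.

```python
import fnmatch

# Arbitrary limits chosen for this illustration.
MAX_PATTERN_LEN = 256
MAX_WILDCARDS = 8

def list_files(pattern: str, filenames: list[str]) -> list[str]:
    """Hypothetical handler for an FTP-style LIST command: validate the
    wildcard pattern before doing any expensive matching. The absence of
    this kind of check is what lets a crafted pattern exhaust memory."""
    if len(pattern) > MAX_PATTERN_LEN or pattern.count("*") > MAX_WILDCARDS:
        raise ValueError("pattern rejected: too long or too many wildcards")
    return [name for name in filenames if fnmatch.fnmatch(name, pattern)]

print(list_files("*.txt", ["notes.txt", "app.log", "readme.txt"]))
# ['notes.txt', 'readme.txt']
```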

The other way to deny service is through resource depletion. A resource depletion DoS attack works by flooding a service with so much normal traffic that legitimate users can't access it. An attacker inundating a service with normal traffic can exhaust finite resources such as bandwidth, memory, and processor cycles. A classic memory exhaustion attack that will bring down a device is the SYN flood. A SYN flood takes advantage of the TCP three-way handshake. The handshake starts with the client sending a TCP SYN packet. The host then sends a SYN-ACK in response, and the handshake is completed when the client responds with an ACK.

If the host never receives the final ACK, it sits idle and waits with a session held open. Every open session consumes a certain amount of memory. If enough three-way handshakes are initiated and left incomplete, the host consumes all available memory waiting for ACKs. The traffic created by the SYN flood is normal in all other respects.
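
From the defender's side, one crude way to spot this condition is to watch how many connections are stuck half-open, waiting for that final ACK. The sketch below is a minimal, Linux-only example that counts sockets in the SYN-RECV state by reading /proc/net/tcp; any alerting threshold is left as an assumption to tune for your own host.

```python
SYN_RECV = "03"  # TCP state code for SYN-RECV in /proc/net/tcp (Linux)

def count_half_open(proc_path: str = "/proc/net/tcp") -> int:
    """Count sockets stuck in SYN-RECV, i.e. handshakes still waiting for
    the final ACK. A sudden spike is one crude indicator of a SYN flood."""
    half_open = 0
    with open(proc_path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            if len(fields) > 3 and fields[3] == SYN_RECV:
                half_open += 1
    return half_open

if __name__ == "__main__":
    print(f"half-open connections: {count_half_open()}")
```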

Securing Wireless Networks in Windows Server

Most companies now have some sort of wireless access implemented within their networks.  It's easy to see why: adding a few wireless access points can be extremely useful and save expensive cabling costs.  You can add extra clients and locations to a network for literally a few pounds, compared with the drilling through walls, laying of cables and digging up of roads that can be involved in providing traditional Ethernet access.

Yet the security implications are often ignored.  Too often you find otherwise well-designed and secure networks compromised by ad-hoc wireless access points installed with little or no thought for security.  Companies frequently just buy off-the-shelf WAPs and add them to their network.  The reality is that every access point added is an additional gateway into that network, and it is essential that it conforms to the same level of security as any other device.

There are various methods to secure these access points, but the key is to keep to a consistent standard and ensure that it can be enforced.  One common method, particularly in Windows environments, is to use Group Policy Objects to enforce the wireless network settings on access points and on the clients that authenticate to them.  For example, you can use GPOs to ensure that wireless network settings are configured correctly for EAP-TLS authentication, which is used for most 802.1X deployments.

You should assign the GPO to computer accounts, linked either to the domain or to a specific OU configured for wireless access.  The latter is the better option as it restricts and controls access to the wireless network, meaning only specifically allowed clients can use it.  Within the group policy you can configure a specific wireless network policy with settings such as the following:

  • Enforce 802.1X authentication
  • Restrict Access to WAPs only, no ad-hoc connections allowed.
  • Ensure Windows clients can configure wireless network settings automatically
  • Provide preferred and allowed SSIDs (plus block other networks)
  • Enforce encryption – either WEP or WPA as a minimum (although stronger encryption should be used)
  • Define EAP authentication methods and levels
  • Enforce mutual authentication by validating certificates issued by RADIUS servers.

This list is a long way from complete, but it does illustrate some of the minimum configuration issues that should be covered for wireless access. Obviously requirements will vary depending on the network, the applications used and the sort of access that is required over wireless connections. Most best-practice guides for securing wireless access are fairly sensible, however. For example, there is little reason not to implement the strongest form of wireless encryption available: encryption adds very little overhead, and it is unlikely to cause any issues with running remote applications or client access across the link.

Even additional layers such as a secured VPN can operate over an encrypted wireless connection. Remember, though, that these can affect external access – even sites like the BBC block some VPN access (read article) in order to enforce their region locks. Even so, external access and applications should not be allowed to dictate the levels of security of your clients and internal networks. Furthermore, through group policy you can enforce minimum levels of authentication, deploy certificates and even define more specific wireless settings. Any client accessing the network through a Wi-Fi access point would have these settings applied in order to access network resources.

Further Reading:
BBC Deutschland – A Quick Guide

Loki – How ICMP Really Can be Dangerous

Overall, ICMP has been viewed as quite a harmless and perhaps even trivial protocol. However, that all changed with the rather nasty Loki.  In case you didn't know, Loki is a figure from Norse mythology: the god of trickery and mischief.  The Loki exploit is well named, as it seeks to exploit the hitherto benign ICMP protocol.  ICMP is intended mainly to inform users of error conditions and to make very simple requests, which is one of the reasons intrusion analysts and malware researchers tended to ignore the protocol.  Of course it could be used in rather obvious denial of service attacks, but those were easily tracked and blocked.

Loki changed that situation by using ICMP as a tunnelling protocol for a covert channel. The definition of a covert channel in these circumstances is a transport method used in either a secret or an unexpected way. The transport vehicle is ICMP, but Loki acts much more like a client/server application.  Any compromised host that gets a Loki server instance installed can respond to traffic and requests from a Loki client – which would still work even if the client were spoofing its IP address to watch something like Netflix, for instance – see this.  So, for example, a Loki server could respond to a request to dump the password file to screen or file, which could then be captured and cracked by the owner of the Loki client application.

Many intrusion detection analysts would simply have ignored ICMP traffic passing through their logs, mainly because it's such a common protocol but also such an innocuous one.  Well-read analysts now know to treat such traffic with heightened suspicion; Loki really has changed the game for protocols like ICMP.
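
As a rough illustration of what "heightened suspicion" can mean in practice, the sketch below flags ICMP echo packets carrying unusually large payloads, the sort of anomaly an ICMP tunnel tends to produce. It assumes the scapy library is installed and that sniffing runs with sufficient privileges; the 64-byte threshold is an arbitrary assumption to tune for your own network.

```python
from scapy.all import ICMP, IP, Raw, sniff  # needs scapy and root privileges

MAX_NORMAL_PAYLOAD = 64  # assumed threshold; ordinary pings carry small payloads

def inspect(pkt):
    """Print a warning for ICMP echo request/reply packets whose payload is
    larger than a normal ping would carry - a crude tunnel-detection check."""
    if pkt.haslayer(ICMP) and pkt[ICMP].type in (0, 8) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        if len(payload) > MAX_NORMAL_PAYLOAD and pkt.haslayer(IP):
            print(f"suspicious ICMP from {pkt[IP].src}: {len(payload)}-byte payload")

# Watch live ICMP traffic without keeping packets in memory.
sniff(filter="icmp", prn=inspect, store=False)
```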

For those of us who spend many hours watching traffic, Loki was a real eye-opener.  You had to check those logs a little more carefully, especially to watch out for familiar protocols being used in a different context.  There's some more information on these attacks hidden on this technology blog – http://www.iplayerabroad.com/using-a-proxy-to-watch-the-bbc/.  It can take some finding though!

 

Introduction to Kerberos Authentication

Kerberos is one of the most widely used methods of authentication, and this post will briefly introduce the subject. As well as being implemented in many operating systems, Kerberos is available in many commercial products too, and it is the most frequently used example of this sort of authentication technology. It has many crucial benefits, but it also has a few main flaws that system administrators need to take into consideration.

Once a client has authenticated, an encryption key (a session key) is created for the session. Transport-layer encryption isn't necessary if SPNEGO is used, but the client's browser has to be properly configured. Authentication is automatic if the domains are in the same forest. This sort of authentication is fairly simple to understand when it only involves two systems, but there are lots of things that can go wrong with Kerberos authentication. If Kerberos authentication fails when using the LocalSystem account, it will more than likely also fail when users access the remote system. And it isn't only used for authenticating users: when your iPad connects through its VPN to watch British Channels online using your AD network, it's Kerberos that authenticates the machine.

If the password is incorrect, you won't be able to decrypt the message. It is extremely important that you don't forget this password. You might be surprised how many users choose a password that is identical to their user name.
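
The following toy sketch models just that property: a secret encrypted under a key derived from the user's password can only be recovered with the correct password. It is purely conceptual, not real Kerberos cryptography or ticket formats; it borrows PBKDF2 and Fernet from the Python cryptography package for illustration only.

```python
import base64
import os

from cryptography.fernet import Fernet, InvalidToken
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_password(password: str, salt: bytes) -> bytes:
    """Derive a symmetric key from a password (a stand-in for the key the
    KDC derives from the user's password in real Kerberos)."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=200_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

# "KDC" side: encrypt a session key under the user's password-derived key.
salt = os.urandom(16)
ticket = Fernet(key_from_password("correct horse", salt)).encrypt(b"session-key-123")

# Client side: only the correct password recovers the session key.
print(Fernet(key_from_password("correct horse", salt)).decrypt(ticket))
try:
    Fernet(key_from_password("wrong password", salt)).decrypt(ticket)
except InvalidToken:
    print("wrong password: cannot decrypt the message")
```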

Your user name isn't a good choice for a password either. When using those services or clients, you may have to enter your password, which is then sent to the server. It's very probable that a user has set the same password for two principals for reasons of convenience. Ideally, you should only have to type your password into your own computer once, at the start of the day.

You won't be able to administer your server if you don't remember the master password. If the server cannot automatically register the SPN, the SPN has to be registered manually. It's normal for the admin server to take some time to start, so be patient. Typical errors you may run into include "The specified server cannot perform the requested operation" and "The RPC server is not actively listening". A virtual server simply means that it isn't part of a dedicated host.

Another error you may see is "Server refused to negotiate authentication, which is required for encryption". Before deploying Kerberos, a server has to be selected to act as the KDC. The network location server is a site that is used to detect whether DirectAccess clients are located inside the corporate network.

The client may be using an old Kerberos V5 implementation that doesn't support the initial connection properly. If the client is unable to get a ticket, you should see an error similar to those above. In the Kerberos protocol the client authenticates against the server, and the server also authenticates itself to the client. The RPC client sends the very first packet, called the SYN packet.

If each client required a unique key for each and every service, and each service required a unique key for each client, key distribution would quickly become a challenging problem to solve. The client will not send the job unless it receives the right response. The client can't decrypt the service ticket – only the server can do that – but it can pass the ticket on. Later, the client can use this ticket to obtain additional tickets for the service (SS) using the same shared secret. Both client and server may also be called security principals.

John Simmons
http://bbciplayerabroad.co.uk/uk-vpn-free-trial/

Filtering Authentication Credentials

When you use a proxy or VPN server there is a very important, and sometimes overlooked, security consideration: how the connection handles any authentication credentials that are sent across it.  For example, if you are using a proxy for all your web browsing, you need to trust that server with any user names and passwords that you supply to websites.  Remember, the proxy forwards all traffic to the origin server, including those user credentials.

The other consideration is the proxy server's own authentication credentials, which may also be transmitted or passed on, especially if servers are chained.  It is common for proxy credentials to be forwarded, as this reduces the need to authenticate multiple times against different servers.  In these situations the last proxy server in the chain should filter out the Proxy-Authorization: header if it is present.
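
As a minimal sketch of that filtering step, the function below strips the Proxy-Authorization header (along with the other hop-by-hop headers defined in the HTTP spec) before a request is forwarded upstream, while leaving end-to-end headers such as Authorization intact. The dictionary representation is an illustrative assumption rather than any particular proxy's API.

```python
# Hop-by-hop headers that a proxy should not pass to the next hop;
# Proxy-Authorization is the one discussed above.
HOP_BY_HOP = {
    "proxy-authorization",
    "proxy-authenticate",
    "proxy-connection",
    "connection",
    "keep-alive",
    "te",
    "trailers",
    "transfer-encoding",
    "upgrade",
}

def filter_outbound_headers(headers: dict[str, str]) -> dict[str, str]:
    """Drop proxy credentials (and other hop-by-hop headers) before
    forwarding a request upstream."""
    return {k: v for k, v in headers.items() if k.lower() not in HOP_BY_HOP}

incoming = {
    "Host": "example.com",
    "Proxy-Authorization": "Basic dXNlcjpzZWNyZXQ=",
    "Authorization": "Bearer site-token",  # end-to-end, must be preserved
}
print(filter_outbound_headers(incoming))
```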

One of the dangers is that a malicious server could intercept or capture these authentication credentials, especially if they're being passed in an insecure manner.  Any proxy involved in the route has the potential to intercept user names and passwords.  Many people forget this when using random free proxies they find online: they are implicitly trusting those servers, and their unknown administrators, with any personal details sent over the connection.  When you consider that these free servers are often merely misconfigured or 'hacked' machines, using them becomes even more risky.

How to deal with authentication details is genuinely a difficult problem, particularly for proxies.  The situation with VPNs is slightly more straightforward: the details are protected during the majority of the transmission because most VPNs are encrypted.  However, the last step to the target server still relies on whatever security is built into the connection itself, and even that can be affected, as in this article – BBC block VPN connection.

Any server can filter out and protect authentication credentials, but obviously those intended for the target can't be removed.  It is a real risk and it highlights one of the important security considerations of using any intermediate server such as a proxy: it is important that these servers are themselves secure and do not introduce additional security risks into the connection.  Sending credentials over a plain HTTP session is already potentially insecure, even without a badly configured or administered proxy server in the path.

Most websites that accept user names now at least use something like SSL to protect credentials.  However, although VPN sessions will carry these connections effectively, many proxies are unable to support the tunnelling of SSL connections properly.  Man-in-the-middle attacks are also common against these sorts of protections, and using a poorly configured proxy makes them much easier than a direct connection would.  Ultimately there are several points where web security and data protection are a concern; it's best to ensure that a VPN or proxy doesn't introduce additional security risks into the connection.

Additional Reading on UK VPN Trial

 

 

Content Filtering and Proxies

Proxy servers are, as explained on this site, one of the most important components of a modern network infrastructure.  No corporate network should allow ordinary desktop PCs or laptops to access the internet directly without some sort of protection.  Proxy servers provide that protection to a certain extent, as long as their use is enforced.

Most users, especially technically minded ones, will often resent using proxies because they are aware of the control this entails.  The simplest way to enforce their use is to ensure that configuration files are delivered to the desktop automatically by network servers.  In a Windows environment this can be achieved using Active Directory, which can ensure that desktops and users receive specific internet configuration files.  For example, you can configure Internet Explorer with a specific configuration that is delivered to every desktop on login.  You can also use Active Directory to block users from installing and configuring other browsers.

However, although this allows you to control which browser and which internet route each user takes, it doesn't restrict what that user can do online.  Another layer is required, and most companies will employ some sort of content filtering to protect their environment.  As far as your proxy server is concerned, though, content filtering will almost inevitably have a major impact on performance.

One of the most common forms is URL filtering, and it has one of the biggest performance impacts, largely because this sort of filtering inevitably has many patterns to match against.  Content filtering in general can severely impact the performance of a proxy server because of the sheer volume of data involved – even running a nominal content filter against a UK VPN trial had a similar effect.
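
To give a feel for where the cost goes, here is a minimal URL-filter sketch. The pattern list is invented for illustration; the point is that compiling many rules into a single pre-built matcher, rather than looping over them per request, is one of the few levers a proxy has to keep this check fast.

```python
import re

# A handful of illustrative patterns; real deployments match against
# thousands, which is exactly where the performance cost comes from.
BLOCKED_PATTERNS = [
    r"^https?://([^/]+\.)?example-ads\.com/",
    r"^https?://[^/]+/.*\btracking\b",
    r"\.exe$",
]

# Compiling the rules once into a single alternation keeps the per-request
# cost to one regex scan instead of one scan per rule.
BLOCKLIST = re.compile("|".join(f"(?:{p})" for p in BLOCKED_PATTERNS), re.IGNORECASE)

def is_blocked(url: str) -> bool:
    return BLOCKLIST.search(url) is not None

print(is_blocked("http://cdn.example-ads.com/banner.js"))  # True
print(is_blocked("https://news.example.org/story"))        # False
```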

There are a variety of different types of filtering, such as HTML tag filtering, virus screening and URL screening.  It can be difficult, though, and the technology is developing all the time – for instance the ability to screen things like Java or ActiveX objects.

One of the biggest problems with content filtering, in terms of maintaining performance on the proxies, is the fact that entire objects need to be processed.  A proxy server has to buffer the whole file and can only proceed with the transmission after the complete file has been checked.  From the user's perspective this can be frustrating, as there will be long pauses and delays in browsing, especially on busy networks.  This delay can obviously be justified in the case of virus screening, but it can be more controversial for other kinds of filtering.

Further Reference: Using a Paid VPN Service

TCP Configuration: Timestamp Option

The function of the timestamp option is fairly self-explanatory: it simply lets the sender place a timestamp value in each and every segment.  In turn the receiver reflects this value in its acknowledgement, which allows the sender to calculate a round-trip time for every received ACK.  Remember this is per ACK and not per segment, since a single ACK can cover multiple segments.

Initially most implementations of TCP measured only one RTT per window; that has changed, and today's larger window sizes need more accurate RTT calculations.  You can read the definitions of these calculations in RFC 1323, which covers the TCP extensions that allow the improved measurements.  Taking one sample per window amounts to sampling the signal at a low rate, which works well enough with smaller windows (and fewer segments).

Accurate measurement of data transmission is often very difficult in congested and busy networks, and also when troubleshooting across networks like the internet.  It's difficult to isolate issues and solve problems in these environments because you have no control over, or access to, the majority of the transport hardware.  For example, if you are trying to fix a Netflix VPN problem remotely, being able to check the RTT is essential to analyse where the problems potentially lie.

The sender places a 32-bit value in the timestamp value field, which is echoed back by the receiver in the echo reply field. Using this option increases the size of the TCP header from 20 bytes to 32 bytes. The timestamp value increases on each transaction; there is no clock synchronisation between the sender and the receiver, merely an increase in the value of the timestamp. Most implementations recommend that the value increment in units of one, ticking ideally somewhere between once per millisecond and once per second.
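
The sketch below shows how that option is laid out on the wire – kind 8, length 10, then the two 32-bit fields – by scanning a raw TCP options block for it. The example bytes at the end are made up purely for illustration.

```python
import struct

def parse_timestamp_option(options: bytes):
    """Scan a raw TCP options field for the timestamp option (kind=8, len=10)
    and return (TSval, TSecr), or None if the option is absent."""
    i = 0
    while i < len(options):
        kind = options[i]
        if kind == 0:        # End of option list
            break
        if kind == 1:        # NOP padding
            i += 1
            continue
        length = options[i + 1]
        if length < 2:       # malformed option; stop scanning
            break
        if kind == 8 and length == 10:
            tsval, tsecr = struct.unpack("!II", options[i + 2:i + 10])
            return tsval, tsecr
        i += length
    return None

# Example: two NOPs, then a timestamp option with TSval=1000, TSecr=900
opts = b"\x01\x01" + struct.pack("!BBII", 8, 10, 1000, 900)
print(parse_timestamp_option(opts))  # (1000, 900)
```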

This option is negotiated during connection establishment and is handled in the same way as the window scale option covered in the next section. As you may know, the receiving end does not have to acknowledge every data segment it receives. The bookkeeping is kept simple because only a single timestamp value is maintained per active connection, updated according to a simple algorithm.

First of all, TCP keeps track of the timestamp value to send in the next ACK in a variable called tsrecent, and of the acknowledgement sequence number from the last ACK that was sent in a variable called lastack. When a new segment arrives containing the byte numbered lastack, the timestamp value from that segment is saved in tsrecent. Whenever a timestamp option is sent, tsrecent is placed in the echo reply field, and the sequence number field of that ACK is stored in lastack.

This means that, in addition to allowing better RTT calculation, the timestamp option performs another function: the receiver can use it to reject old duplicate segments via an additional feature called PAWS – Protection Against Wrapped Sequence Numbers.
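
Here is a toy model of that bookkeeping and the PAWS check, using the tsrecent and lastack names from the text. It is illustrative only and glosses over details a real stack must handle (timestamp wrap-around, exact segment boundaries, idle-connection rules).

```python
class TimestampState:
    """Toy model of the per-connection bookkeeping described above;
    not a real TCP implementation."""

    def __init__(self):
        self.tsrecent = 0  # timestamp value to echo in the next ACK
        self.lastack = 0   # ACK sequence number from the last ACK sent

    def segment_arrived(self, seq: int, tsval: int) -> bool:
        # PAWS: a segment carrying an older timestamp than tsrecent is
        # treated as an old duplicate and dropped.
        if tsval < self.tsrecent:
            return False
        # Simplified check that the segment contains the byte numbered
        # lastack; if so, remember its timestamp so it can be echoed back.
        if seq <= self.lastack:
            self.tsrecent = tsval
        return True

    def sending_ack(self, ack_seq: int) -> int:
        # The echo reply field carries tsrecent; the ACK's sequence
        # number is remembered in lastack.
        self.lastack = ack_seq
        return self.tsrecent

state = TimestampState()
state.sending_ack(1001)                   # we have ACKed up to byte 1001
state.segment_arrived(1001, 5000)         # its timestamp becomes the one we echo
print(state.sending_ack(2001))            # 5000
print(state.segment_arrived(1500, 4000))  # False - older timestamp, PAWS drops it
```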

Further Reading on Commercial Proxy Options – http://www.anonymous-proxies.org/2017/05/buy-uk-proxy-ip-address.html

TCP Configuration: Window Scale Option

There are many ways to configure how TCP/IP operates on specific networks.  Some of these parameters are rarely used, but when you're running fast Gigabit networks with a wide variety of network hardware and infrastructure some options are extremely useful.  One of those is the window scale option, which can be used to extend the definition of the TCP window beyond its default of 16 bits.

For example, in some environments it may be appropriate to increase the size of the TCP window to 32 bits.  What actually happens is that, instead of changing the header to allow a larger window, the header still holds a 16-bit value; an option supplies a scaling parameter to be applied to that value, which allows TCP to maintain the real window of up to 32 bits internally.

The scale option can only appear in the SYN segment of the connection, which means the scaling value is by definition fixed in both directions when the connection is initially established.  For window scaling to be enabled, both ends of the connection must include the option in their SYN segments.  It should be noted, though, that the scale value can be different in each direction.

There are rules that allow hosts of different capabilities to interoperate.  For example, a host can send a non-zero scale factor in its SYN but must drop back to a scale of zero – cancelling the scaling – if no window scale option is received in the return SYN.  The relevant RFC specifies that TCP must accept the option in any segment, and this covers all sorts of connections: remember these can span wide areas, imagine a US IP address connecting to a Netflix server on super-fast hardware.  However, TCP/IP will always ignore any option that it doesn't understand.

For illustration, suppose the window scale option is in use with a shift count of X for sending and Y for receiving.  Every 16-bit window advertisement received from the other end is left-shifted by Y bits to obtain the real advertised window.  Every time we send a window advertisement, we take our real 32-bit window size and right-shift it by X bits to get the 16-bit value that actually goes into the TCP header, as sketched below.
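
That shift arithmetic is all there is to it, as this small sketch shows; the shift count of 3 is just an example value.

```python
def advertised_to_real(advertised_16bit: int, recv_shift: int) -> int:
    """Convert the 16-bit window field received from the peer into the
    real window size using the receive shift count."""
    return advertised_16bit << recv_shift

def real_to_advertised(real_window: int, send_shift: int) -> int:
    """Convert our real (up to 30-bit) window into the 16-bit value
    that actually goes into the TCP header."""
    return (real_window >> send_shift) & 0xFFFF

# With a shift count of 3, a header value of 65535 represents roughly 512 KB.
print(advertised_to_real(65535, 3))   # 524280
print(real_to_advertised(524280, 3))  # 65535
```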

The shift count is chosen automatically by TCP, based on the size of the receive buffer, since that buffer is set locally and cannot be controlled by the other side of the connection.

Further Reading

TCP Tricks, receiving BBC iPlayer in France – http://bbciplayerabroad.co.uk/how-do-i-get-bbc-iplayer-in-france/

 

Networking Terms: LAN

LAN, in networking terms, stands for Local Area Network, and it refers to a shared communication system to which many computers and other devices are attached.   The distinction between this and other networks is that a LAN is limited to a local area.

The first recorded use of LANs was in the 1970s, when they grew out of the very first basic networking setups.  These consisted of two devices connected by a single network wire, much like a child's string-and-paper-cup model of the telephone.   Computer scientists started to ask why they should be limited to two devices when the same cable could theoretically connect many.   There were complications though, and possibly the most basic was finding a mechanism to ensure that multiple devices didn't use the cable at the same time.

The methods used to ensure that use of the cable is shared properly are called 'medium access controls', for self-explanatory reasons.  There are a variety of these, ranging from simple schemes up to arrangements where workstations announce their communications to a central device which controls access and allocates bandwidth as required – in some senses much as an individual may buy UK proxy access in order to route their connection privately whilst hiding their own IP address.

Although LANs are normally restricted to a small geographical area, there are different topologies within that.   The simplest, and originally the most common, are the linear bus and the star configuration.   The linear bus involves a cable laid throughout a building running from one workstation to another, whereas the star configuration has each workstation attached to a central location or hub by its own specific cable.  There are pros and cons to each, and in fact if you use the most popular networking medium, Ethernet, you can use either topology.

A Local Area Network is essentially a connectionless networking configuration. That definition matters: it means that once a device is ready to transmit data, it simply releases the data onto the cable and 'hopes' that it reaches its destination.    In this basic setup there is no initial process to ensure that the data reaches its recipient, nor is there any check to see whether it has been received.
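
That fire-and-forget behaviour is easiest to see one layer up, with a connectionless UDP datagram. The sketch below sends a broadcast datagram with no handshake and no delivery confirmation; the port number and message are arbitrary choices for the example.

```python
import socket

# Fire-and-forget: build a datagram, hand it to the network and hope it
# arrives; there is no handshake and no delivery confirmation.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

message = b"hello, anyone listening?"
# 255.255.255.255 is the limited broadcast address: every device on the
# local segment will see the frame carrying this datagram.
sock.sendto(message, ("255.255.255.255", 9999))
sock.close()
```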

When data is transmitted across the LAN it is packaged into 'frames' before being dispatched.  At the hardware level, each frame is transmitted as a bit stream across the wire.  Every device connected to the network will listen to the transmission, although normally only the intended recipient actually accepts the data.    It is also possible to transmit to a broadcast or multicast address, which specifies that all devices (or a defined group of them) on the LAN should receive the data. Higher-level protocols such as IP or IPX package the data further into datagrams.

Further Reading:

Network Troubleshooting – Which Smart DNS Still Works with Netflix

RSVP (Resource Reservation Protocol)

There is no doubt that TCP/IP has transformed our computer networks and played a pivotal role in the expansion of the world wide web; however, it is far from perfect.   RSVP is an Internet protocol designed to alleviate some of the issues with TCP/IP, particularly around delivering data on time and in the right order.  This has always been one of TCP/IP's biggest shortcomings – its 'best effort' IP delivery service offers no guarantees, while TCP, which is connection-oriented, does guarantee delivery but gives no assurance about how long it will take.

Guaranteed on-time delivery is essential for many modern applications, particularly over the internet, and especially those involving voice and video.  Most websites now involve large amounts of video and audio data that require fast, reliable and timely delivery whenever possible.  Anyone who has tried streaming or downloading from applications like the BBC iPlayer like this will know how frustrating slow speeds and missing data packets can be.

The issues are well known, and RSVP is an attempt to provide a suitable quality of service for video and voice delivery, particularly across the internet and other large TCP/IP based networks.  RSVP works by reserving bandwidth across router-connected networks: it asks each router to keep some of its bandwidth allocated to a particular traffic flow.  In some senses it is an attempt to add some of the quality-of-service features of ATM to TCP/IP, to meet the changing requirements of modern networks.

RSVP is one of the first attempts to introduce quality of service to TCP/IP, but many vendors are looking at other options too.  Most of them, like RSVP, focus on reserving bandwidth, which isn't always ideal: if you reserve network capacity for specific traffic or connections, the amount available to all other users and applications is reduced. Some of this has been mitigated by the increase in capacity of both corporate networks and individual users' connections to the internet.

RSVP works by establishing and maintaining bandwidth reservations on a specific network, so it isn't normally a WAN or wide-area solution. The protocol works from router to router, setting up a reservation from each end of the path; it is primarily a signalling protocol rather than a routing protocol.  If a specific router along the path cannot provide the requested bandwidth, RSVP will look for an alternative route.  Obviously this only works if the routers have RSVP enabled, which many now do.   Applications can also take advantage of this feature by making similar requests.
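
The toy simulation below captures the hop-by-hop idea: a reservation succeeds only if every router on the path can commit the requested bandwidth. The router names and capacities are made up, and real RSVP works with PATH/RESV messages and soft state that must be periodically refreshed, which this sketch ignores.

```python
from dataclasses import dataclass

@dataclass
class Router:
    name: str
    available_kbps: int

def reserve_path(routers: list[Router], required_kbps: int) -> bool:
    """Hop-by-hop reservation in the spirit of RSVP: every router on the
    path must be able to commit the requested bandwidth, otherwise the
    reservation fails and nothing is held."""
    for router in routers:
        if router.available_kbps < required_kbps:
            return False  # in practice the sender would try another route
    for router in routers:
        router.available_kbps -= required_kbps
    return True

path = [Router("edge", 10_000), Router("core-1", 4_000), Router("core-2", 8_000)]
print(reserve_path(path, 3_000))  # True - all hops can commit 3 Mbps
print(reserve_path(path, 2_000))  # False - core-1 has only 1 Mbps left
```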

Further Reading:

Watching UK TV in USA – a study in optimizing video streams using QoS enabled routers.