Take Control of your IP Address

On a computer network, much like in real life, there are different levels of access depending on a variety of factors. It may be due to the rights assigned to a username or account, perhaps an access token, or often simply your physical location. These rights are assigned in different ways, but the most popular method across the internet is based on your IP address.

The IP address is the unique number assigned to every device connected to the internet, from computers and laptops to phones and tablets and even your internet-enabled fridge. Every device that is reachable online has an IP address and can be tracked by this number. Although your IP address can ultimately be traced back to a specific location and owner, that information is not available to the websites you visit. However, even without access to ISP records, an IP address can be used to determine two pieces of information very easily: classification and location.


The first, classification, refers to the type of connection the IP address is registered to, specifically residential or commercial. This piece of information is not always used, as there can be some overlap between the two categories. The physical location, however, is used extensively by the vast majority of major websites. Some use it to help serve relevant content, perhaps supplying a specific language version depending on your location or serving up adverts which are more applicable to you. This is usually helpful, although it can be very annoying if you are genuinely trying to access different content.

The most common use, though, is to block access based on this location, a practice used by virtually every large media site on the web. If you are in the USA, for example, you will not be able to watch any of the UK media sites such as BBC iPlayer or the ITV Hub. Similarly, every one of the big American media sites will block non-US addresses. These blocks and controls are growing rapidly every year; for instance, there are now thousands of YouTube videos only accessible from specific locations.

Fortunately for the enlightened it isn’t such a big problem, because using VPNs and proxies you can actually control your own IP address. The simple step of using a British VPN server can give you access to BBC iPlayer in the USA like this. It merely hides your physical location: the website sees only the address of the VPN, and this works with the vast majority of websites.

HTTP Tracing

One of the most useful troubleshooting tools in the HTTP/1.1 protocol is the TRACE method, which can provide a great deal of information for tracing routes through proxy chains. Although it is similar to the traceroute command, it is not identical: traceroute tracks hops at the network router level, whereas TRACE tracks the intermediate proxies involved in the route.

What can we use the HTTP TRACE command for?

  • identify the route between the proxies that the HTTP request takes
  • identify each specific proxy in the chain
  • identify the server software and proxy version on each server
  • identify all versions of HTTP involved in the communication
  • detect any loops in communication
  • track invalid responses and server misconfiguration

The command uses a similar format to the GET command: you pass the target origin server URL as a parameter. One important header to be aware of is Max-Forwards:, which specifies the maximum number of proxies the request may pass through. This header is essential for detecting infinite loops in a specified chain of proxies, and it is particularly useful when there are complications such as people running VPNs or external proxies like this. If you do not use this header and a loop exists, the request could bounce between the proxies indefinitely.
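To make this concrete, here is a minimal sketch of issuing a TRACE request through a proxy from Python, using only the standard library. The proxy host, port and target URL are placeholders chosen purely for illustration.

```python
import socket

# Hypothetical proxy and target, used for illustration only.
PROXY_HOST, PROXY_PORT = "proxy.example.com", 8080
TARGET = "http://www.example.com/"

# Build a TRACE request; Max-Forwards limits how many proxies may
# forward the request before one of them has to answer it itself.
request = (
    f"TRACE {TARGET} HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Max-Forwards: 3\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((PROXY_HOST, PROXY_PORT), timeout=10) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

# The response body echoes the request as the final recipient saw it,
# including any Via: headers added by the intermediate proxies.
print(response.decode("iso-8859-1"))
```

The echoed request in the response body is what makes TRACE useful here: each proxy that handled the message should have appended itself to the Via: header, so the chain can be read straight off the output.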

Another useful facet of the TRACE method is that the command can be issued over a Telnet session, which makes it very handy for troubleshooting remote connections. If you telnet to the first proxy in the chain before issuing the command you will get more accurate results. The Via: header in the response then records the route that the request actually took through the chain of proxies.

Using the Proxy’s Cache for Troubleshooting

Sometimes an error or problem appears only intermittently; there may be a variety of reasons for this, but such faults can be extremely difficult to troubleshoot. In these situations the easiest way to find the cause is to examine the caches of the proxy servers involved, so it is essential that all key proxies are configured correctly to cache server responses.

John Williams
http://identityvoucher.co.uk/

Is Anonymity Important Online?

There are many discussions across the world about how use of the internet should be policed. Many of the less democratic countries already have rather sweeping digital laws allowing content to be blocked, services closed down and users arrested. These laws are usually phrased rather vaguely, citing justifications like national interest or public safety, and they are designed to be broad enough to cover whatever situation the authorities require without sounding unduly restrictive. The reality is that in many countries the 140 characters of a tweet are enough to earn you a hefty prison sentence.

People seek anonymity for different reasons depending on their location. Of course, in countries like Iran, China and much of the Far East you have to be very careful what you say online; criticising leaders can be enough to get you locked away for a very long time. In 2015 a Thai man ‘liked’ and ‘shared’ a Facebook photograph which was critical of the Thai royal family; he is currently awaiting trial and faces 32 years in jail. Needless to say, Thailand is a country where you should be very careful about what you do online, particularly if it involves the royal family.

In other, more democratic and arguably more civilized countries there are somewhat different concerns about privacy online. You are unlikely to get arrested for being critical of Western leaders, but do not assume that your comments are not being monitored. In most advanced countries, particularly places like the US and UK, online activity is extensively logged. In the UK legislation is being passed to legitimize this behaviour, but it is fairly safe to assume it had already been going on for many years before that.

Many of the problems with privacy stem from the fact that it is so easy to monitor people online. The internet was simply not designed for privacy: it uses insecure clear-text protocols like HTTP and email, while routing our connections through a mesh of hardware owned by all sorts of people and corporations. If you have access to network hardware in a telecommunications company, there is little you cannot see, given the right resources. The morality of this can be quite unclear, but there are other areas where legality can be used as a perfectly justifiable excuse.

For example, download a BitTorrent client, join a swarm to download a pirated copy of the latest blockbuster movie, and on your screen you will instantly see a page full of IP addresses of people illegally downloading copyrighted material. It is not hidden, not hard to find, and only one step away from being turned into a list of names and addresses. The people who use these programs are mostly unaware that they are not downloading torrents anonymously; in fact they are doing it while actively broadcasting their identities.

The important thing to remember is that whatever you are doing online, wherever you are and whoever you are, you are probably being monitored to some extent. Whether your traffic is merely being sucked up by one of the UK security services’ huge data trawls or targeted more specifically by a media company seeking damages for copyright infringement, it could be happening.

John Herrod

Technology Author and Consultant

On Demand Caching for Proxies

Caching is one of the most important functions performed by proxy servers, particularly in a corporate environment. It is especially relevant when the network provides internet connectivity to the desktop, because caching helps reduce the amount of traffic generated by accessing the web.

If you look at the logs of any corporate network and analyse which external websites are being visited, you will normally find that a large percentage of traffic goes to a small number of sites. News and social media sites, if not blocked, will often be accessed repeatedly, which means multiple requests for the same information. Using a proxy server to cache these pages locally can vastly reduce the amount of network traffic generated by these requests.

For example, in the UK you may find that a popular website like the BBC generates hundreds of requests for its news pages. If you enable on-demand caching on a proxy server, the first time a page is requested the proxy stores a copy of it locally. When the proxy receives the next request for the same page it serves the cached copy from its store and does not need to fetch the page again. In this example no external traffic is generated for the repeat requests, and the amount of external bandwidth used is heavily reduced.

This is called on-demand caching, and it means that the proxy only stores documents which are actually requested by a client. It will not attempt to store other pages from the origin server, only those specifically requested by the client browser. Having all web traffic pass through the proxy also helps you filter traffic which is not appropriate, for example if someone is using a VPN to stream Netflix to their desktop.
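The following is a minimal sketch of that on-demand behaviour, using nothing beyond the Python standard library. It deliberately ignores expiry and cache-validation headers that a real proxy would honour, and the BBC URL is simply an example.

```python
import urllib.request

# Very small in-memory, on-demand cache: a page is only stored
# after a client has actually asked for it.
cache = {}

def fetch(url: str) -> bytes:
    # Serve from the local store if we have already seen this URL.
    if url in cache:
        return cache[url]
    # First request for this page: go out to the origin server...
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read()
    # ...and keep a copy so later requests generate no external traffic.
    cache[url] = body
    return body

if __name__ == "__main__":
    page = fetch("https://www.bbc.co.uk/news")        # first call hits the network
    page_again = fetch("https://www.bbc.co.uk/news")  # served from the cache
    print(len(page), len(page_again))
```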

In bigger organisations, although proxies configured with caching can dramatically decrease network traffic, one proxy is rarely enough. However, it obviously makes little sense to have duplicate proxies all caching the same external pages. The question then is how to distribute this data efficiently within the network and stop any individual proxy from being overloaded. One of the most common models used in this scenario is the replication model, which involves a server mirroring or replicating its content to the other servers in the network, as sketched below.
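As a rough sketch of the replication idea, the fragment below pushes a freshly cached object out to peer proxies over a hypothetical internal endpoint. The peer names, the /cache path and the use of the requests library are all assumptions made for illustration; real proxy software such as Squid has its own inter-cache protocols for this job.

```python
import requests

# Hypothetical peer proxies and cache-push endpoint, illustration only.
PEERS = ["http://proxy2.internal:8080", "http://proxy3.internal:8080"]

def replicate(url: str, body: bytes) -> None:
    """Mirror a freshly cached object to the peer proxies."""
    for peer in PEERS:
        try:
            # Push the cached page so peers can serve it without refetching.
            requests.put(f"{peer}/cache", params={"url": url},
                         data=body, timeout=5)
        except requests.RequestException:
            # A peer being down should not break the local cache.
            pass
```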

John Soames, Working Netflix VPN, Cromer Press, 2015

Introduction to DNS Recursion

The Internet’s DNS structure is often (accurately) described as hierarchical, with the root servers sitting at the top of the structure. Because of this setup it is essential that DNS servers are able to communicate with each other in order to supply responses to the name queries submitted by clients.

This is because, although we would expect our company’s internal DNS server to know the addresses of all internal clients and servers, we would not expect its database to contain every external server on the internet. In the very early days of the internet a single file did hold a complete list of connected host addresses, but nowadays that would simply not be feasible, or indeed very sensible.

When a DNS server needs to find an address which is not in its database, it will query another DNS server on behalf of the requesting client in order to find the answer. The server in this instance is actually acting like a client, making a request to another DNS server for the information; this process is known as recursion.

It is actually quite difficult to detect whether a query was answered recursively or directly when troubleshooting DNS. You need to be able to listen to all of a DNS server’s traffic in order to identify a recursive query. The additional (recursive) query is generated after the DNS server has checked its local database in an attempt to resolve the query; if this is not successful the server generates the additional request before replying to the client. This also depends on the recursion desired (RD) bit being set in the initial query from the client, as this is what allows the server to ask another server when the answer is not in its own database.

The recursive query is essentially a copy of the initial DNS request, and it has the effect of turning the server into a client. If you analyse the traffic you will notice that the transaction ID numbers change, which differentiates the initial query from the recursive query sent by the DNS server. It is important to keep a note of these transaction IDs when troubleshooting DNS traffic, as it is easy to get confused when many of the packets look very similar. If you are trying to analyse something more complicated, like the modern, intelligent Smart DNS servers like these – http://www.proxyusa.com/smart-dns-netflix-its-back – then it is even more important to keep track of these transactions, because those DNS servers actually make decisions about how to route the traffic in addition to resolving queries.
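The recursion desired flag and the transaction ID are easy to inspect from Python if the dnspython package is available (an assumption; it is not part of the standard library). A minimal sketch, with 8.8.8.8 used only as an example of a recursive resolver:

```python
import dns.flags
import dns.message
import dns.query

# Build a standard A-record query; dnspython sets the Recursion
# Desired (RD) flag by default, asking the server to recurse for us.
query = dns.message.make_query("www.example.com", "A")
print("transaction id:", query.id)
print("RD flag set:", bool(query.flags & dns.flags.RD))

# Send it to a recursive resolver and inspect the reply.
response = dns.query.udp(query, "8.8.8.8", timeout=5)
print("response id matches:", response.id == query.id)
print("Recursion Available:", bool(response.flags & dns.flags.RA))
for answer in response.answer:
    print(answer)
```

Note that this only shows the client side of the exchange; the onward query the resolver sends when it recurses carries its own, different transaction ID, which is exactly why keeping track of the IDs matters when reading a capture.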


Residential IP Gateways

For anyone with a significant interest in working online, your IP address is important: it is a vital part of your online presence. Most people do not really care about their address; as long as you have a valid IP address you can get online. However, there are distinctions between addresses which can make a huge difference to your online experience.

Often the first indication people have that their IP address is of any relevance is when they find themselves getting blocked somewhere. You might click on a video and get a message saying ‘sorry, not available in your country’, or you might try to view a website and be redirected somewhere else. What is generally to blame is where your IP address is registered, and this behaviour is called ‘region locking’. It is extremely common and annoying, especially if you are settling down to watch the BBC News live while on holiday outside the UK, for example.

This is all based on the geographical location your IP address is assigned to, which is why it usually becomes evident when people travel or go on holiday and suddenly find they cannot access the websites they used to. Watching domestic TV, streaming videos or accessing online banking suddenly become very difficult when you are outside your usual location.

People have found ways around this; normally you can hide your location by using a proxy or VPN service. However this only works at a basic level, because there are other restrictions which stop these from working, mainly centred around IP classification. Many websites now look one step further than simple location: they check the classification of the address and whether it originates from a commercial or residential connection.

Anyone who makes their living online is likely to need a little more control. After all, when operating in a global market like the internet, getting blocked all the time because of your location and the sort of IP address you have is extremely inconvenient. Sure, you can use traditional proxies, which are mostly run from datacentres, but they have significant problems too. The issue is that websites increasingly block access to all but residential IP addresses (they just want ordinary home users), which means none of these proxy solutions actually work. The alternative is to use VPNs that have residential IP addresses and gateways built in (read more here).

However it is much, much harder to set up a residential IP gateway than a commercial one. You cannot just roll up to Comcast or BT and ask them to assign you a few hundred IP addresses; those are reserved for domestic customers. Residential gateways are appearing, but at the moment they are fairly hard to find and extremely expensive. You also have to be careful, as some of these ‘solutions’ actually piggyback on domestic customers’ computers, like the not-recommended Hola, which is a huge security risk to use.

World Wide Web Proxies – Web Proxy List

In the earliest days of the web in 1990, web proxy servers were usually referred to as gateways. In fact the very first web gateway was created at CERN by the original WWW team, headed by Tim Berners-Lee.

Gateways are effectively devices which forward packets between different networks. They can vary in complexity from simple pass-through devices to complex systems which are able to understand and convert different protocols. It was in 1993 that the name Web Proxy Server was chosen as the standard term to describe the different types of web gateway.

Web Proxy Server

These can be further classified into two distinct categories:

Proxy Server – internet/firewall gateways which act in response to client/PC requests.

Information Gateway – gateways which act in response to server requests.

However, these are quite broad specifications, and below you will find details of the key properties of proxy servers and associated gateways. Remember that these classifications can be affected by any application software installed on the proxies, so they are not necessarily just the simple servers you find on web proxy lists – which are normally just basic Glype installations. In particular, you may find that the destination and transparency properties are sometimes modified.

Proxy Server Properties

These are the general properties which apply to any specific proxy server; there are variations which will affect them.

Transparency: these proxies do not modify the data passing through them. They will perform any filtering specified by rules, but this does not affect the end result; the connection behaves the same whether it is made directly or through the proxy server.

Control: the client determines whether or not it uses the proxy. This is typically controlled on the client by specifying the address of the proxy or through client-based software (a minimal client-side sketch follows these properties).

Destination: the final destination of any request is not affected by any intermediate proxy.  In fact a client or user will often be completely unaware of the existence of the proxy.
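To illustrate the control property, here is a minimal client-side sketch using Python’s requests library (assumed to be installed); the proxy address is a placeholder and would normally come from browser settings, WPAD or an environment variable.

```python
import requests

# Hypothetical corporate proxy, used purely as a placeholder.
proxies = {
    "http": "http://proxy.example.local:3128",
    "https": "http://proxy.example.local:3128",
}

# The client decides whether to use the proxy simply by passing it in;
# omit the proxies argument and the request goes direct instead.
resp = requests.get("http://www.example.com/", proxies=proxies, timeout=10)
print(resp.status_code)
```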

Proxies can provide all sorts of features, some of which might affect these properties. They can be used to provide specific access controls, filtering and logging, or simply to speed up access to remote web resources through caching.

It is in corporate environments that the transparency properties of proxies have usually been modified. Frequently these firewall proxy servers sit in the DMZ (demilitarized zone) and control both inbound and outbound traffic. They accept network requests from clients and forward them out to the internet if approved, then relay the replies back to the clients. Most of them also operate caching services to ensure that duplicate requests do not generate more network traffic and bandwidth charges.

The other advantage of these dual-role proxies is that they can act as a single entry point for internet access. This means that all requests can be logged and monitored, allowing an element of control over web access through the company infrastructure. It also allows replies to be scanned for harmful code such as malware and viruses, an important extra layer of security for protecting the internal network.

Switching Digital Identities Through VPNs

Once upon a time, no one really used VPNs (Virtual Private Networks) outside the corporate environment. IT support staff would use them to dial into networks to restart servers or reset user accounts from home, and laptop users would use a VPN to tunnel back and download email or documents from their home share. Hardly anyone used the technology in their private life, except perhaps those who really understood how insecure the internet was. That has now changed, and literally millions of people use virtual private networks every single day.

The main focus of a VPN is of course security: when you use the internet via a VPN, all your data travels through an encrypted connection between you and the server. Without this protection the majority of your data flies across the shared hardware of the internet mainly in clear text. A VPN stops your emails being intercepted, hides your login details and keeps your web destinations private; however, this has not been the primary driver behind the technology’s adoption.

The real attraction is due to the way the internet has become segmented over the last decade or so. During the inception years of the internet your location was largely irrelevant: if you were online you were exactly the same as any other user. Of course, some people were browsing on fast computers over dedicated data lines while others were logging on from an ancient computer coupled to a standard telephone line and modem, yet the principles of what people could access were exactly the same. There was no discrimination or segregation based on your physical location.

This is no longer the case; in fact, where you are located will heavily influence your online experience. Browsing the web from China is very different from browsing it in downtown Chicago, and I am not talking about language localizations but about what you can access. China is of course an extreme example, as it heavily controls what you can access over the internet, but even if you are in a country whose government does not filter the web you will still find blocks and controls all over the place. Your digital identity is effectively linked to the physical location of your IP address, and website owners use it to determine what you can and cannot see. Ever tried to play a YouTube video and found that ‘this is not available in your country’? More often than not it will be down to a copyright or licensing issue. The same happens on thousands of websites across the world: your location determines your access.

This can become tiresome. It is not so bad if your digital identity is based on an American IP address, for example, because you will get access to most of the biggest media sites. Even then, however, there are plenty of popular sites your location will deny you, such as BBC iPlayer.
However, if you are somewhere a little more remote or obscure you will find yourself blocked from millions of web pages and treated somewhat like a web pariah.

It is frustrating, yet it is all easily bypassed by simply hiding your real IP address. Most people cannot modify their address because it is controlled by their ISP, but if you connect to a VPN your apparent address is determined by the location of the VPN server. This is why companies like IPVanish and Identity Cloaker have produced VPN software which lets you click on any country and choose the IP address you want.

Network Analysis Using TCPDump

If you do not need to observe IPv6 traffic in your capture, it is possible to restrict it to IPv4 only, and you can also filter by network. There are many network monitoring utilities available for debugging networked applications, but tcpdump is one of the best known: it provides a wide assortment of options to gather just the details you want from the network, although mastering the tool completely is not a simple task. These tools are especially vital for technical staff. Originally written by Van Jacobson to analyse TCP performance issues, tcpdump is still a perfectly adequate tool for that job, but a lot of features have been added since then.

As with most things on Linux, there are several ways to get this done. If you are using Solaris you can use snoop to capture the CDP packets, although it does not format the data as nicely; in terms of usage and options it is broadly comparable to tcpdump. Tcpdump gives an overview of which protocols are active at a given time, prints summary information about each packet, and can even show TCP sequence numbers.


Generally you will require root permission in order to capture packets on an interface. Filter expressions work something like if statements: if no expression is provided, all packets on the interface will be dumped; otherwise the expression, built from one or more primitives, selects which packets are kept. Typically, if the expression contains shell metacharacters, it is easier to pass it as a single, quoted argument. Negating an expression is part of the more complex expression syntax, which we will discuss a little later. Remember to capture as near to the host as possible, rather than through a switch or hub that is not directly connected. Trying to use tcpdump over an encrypted tunnel can also be confusing, as I discovered when trying to use it to resolve the Netflix VPN ban as in this post.

You can also copy and paste the correct command into the terminal application to prevent typing mistakes; the whole path to the device name is not required. Take another look at the headers and see whether you can determine the field which carries the VLAN tag information; without establishing that, you cannot be certain whether the issue lies with the client or the server. Another common annoyance is that tcpdump attempts to resolve every single IP address it meets, which can be switched off with the -n option (or -nn to skip port name lookups as well). Overall it is fantastic for tracking down network troubles or monitoring activity.

You can tell tcpdump to stop capturing after a specific number of packets using the -c flag followed by the number of packets to capture, and it is also possible to filter on Ethernet addresses. Finally, if you want to be certain you see as much of the captured information as possible, use the verbosity options (-v, -vv and -vvv). Some of the information printed by tcpdump is a little cryptic, especially since the format differs for each protocol, but it is simple to pull out packets of a specific protocol, and the tool also includes a self-explanatory manual page.

You can select packets depending on the protocol type, although tcpdump does not decode every protocol in detail. Increasing the snapshot length with the -s option ensures that even the largest packets, RIP updates for example, are captured in full. You can also specify a source or destination port using similar filter expressions. The -l option sets the output to line-buffered so that you can observe packets as soon as they arrive, and the verbose switch is especially useful if you are trying to determine the location of, say, a remote French IP address, see this.

The filter expression is placed at the end of the command line, and one of the most practical features of tcpdump is the ability to filter on different protocols. Bear in mind that the Unix shell gives special meaning to brackets, which is another reason to quote filter expressions. On the other hand, losing a valuable part of a packet to a short snapshot length can be critical. Networks can be specified in two standard ways, either with an explicit netmask or in CIDR notation, and the format is designed to be self-explanatory. Occasionally you might stumble upon a build of tcpdump that needs a special flag to be set in order to enable promiscuous mode, but typically tcpdump will attempt to enable it by default.
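Pulling these options together, the sketch below drives tcpdump from Python with a quoted filter expression. The interface name and the filter itself are illustrative, only widely documented tcpdump flags are used, and it needs root privileges to run.

```python
import subprocess

# Illustrative capture: 50 packets on eth0, no name resolution (-nn),
# verbose decoding (-v), with the filter passed as a single argument.
cmd = [
    "tcpdump",
    "-i", "eth0",          # interface to listen on
    "-c", "50",            # stop after 50 packets
    "-nn",                 # don't resolve hostnames or port names
    "-v",                  # more verbose protocol decoding
    "tcp port 443 and net 192.168.1.0/24",  # pcap filter primitives
]

# Requires root (or equivalent capture privileges); prints the summary lines.
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
print(result.stderr)
```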

Port Scanning – Information Security Skills

In the realm of information security, port scanning plays a critical part. It is a network technique that allows an attacker to gain information about the remote host they are seeking to attack, and it refers to computer networking ports rather than the physical sockets used to connect cables. Port scanning can likewise be employed to work out which hosts on a network are in use by probing them, and it is the best known reconnaissance technique used by hackers. Using hping as a method for scanning provides a lower-level example of how idle scanning is done, while SYN scanning is faster than a full connect scan because it does not establish a complete TCP handshake.
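For comparison with the SYN scan mentioned above, here is a minimal sketch of the slower connect-scan approach using only Python’s standard library. The target address and port list are placeholders, and you should only scan hosts you are authorised to test.

```python
import socket

# Placeholder target; only scan hosts you are authorised to test.
TARGET = "192.168.1.10"
PORTS = [22, 80, 443, 3389]

def connect_scan(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a full TCP handshake to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        return sock.connect_ex((host, port)) == 0

for port in PORTS:
    state = "open" if connect_scan(TARGET, port) else "closed or filtered"
    print(f"{TARGET}:{port} is {state}")
```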

Although this matters less during legitimate penetration testing, it is vital to be aware when analysing real attacks that the originating IP address is likely to be false. Any competent attacker will disguise their IP address, perhaps shifting it to a different country, so a Russian attack might appear to originate from a British IP address, for example.


Clearly, there are quite a few other techniques for detecting port scans, and there are a number of other kinds of scan that can be performed with a port scanner apart from those mentioned in this post. It is important to be aware that a basic scanner only attempts connections; it will not interpret the services behind the ports for you. What port scanners do deliver is a basic view of the way the network is laid out.

You can see the same implementation of port scanning within this project. You may also want to scan various protocols (UDP, TCP, ICMP and so on), and it is additionally feasible to string packets together to monitor a full transaction. In a UDP scan, if no packet is received at all the port is deemed open (or filtered), and if a packet is not encrypted it is possible to read the information within it.

There is a large variety of tools available for network sniffing, and among the most recognised port scanning tools is Nmap. Older reconnaissance tools such as the ToneLoc war dialler performed a similar discovery role for dial-up modems. Nmap also attempts to identify the operating system by examining certain TCP header fields, although this technique cannot tell you the precise Linux distribution, for example.

In order to learn how to guard your network from threats arriving through open ports, you first have to understand precisely what ports do and why they are important. A scanned port may be reported as open, closed or stealthed. Certain ports on someone’s personal computer are open continually, for example if they are using a service like watching the BBC News live in the background, making them a target for any hacker who is searching for people to victimize.

With a firewall you can lock down all your ports, making it impossible for any system to communicate with them, or you can open specific ports for particular uses. The main reason why you would conduct a port scan depends on your viewpoint: defenders scan to audit their own exposure, while attackers scan to find a way in. The first 1024 TCP ports are known as the well-known ports and are associated with standard services like FTP, HTTP, SMTP and DNS.
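As a quick illustration of those well-known port mappings, Python’s standard library can translate between port numbers and service names; a minimal sketch, where the exact output depends on the local services database.

```python
import socket

# The well-known ports (0-1023) map to standard services; the standard
# library can translate a port number into its registered service name.
for port in (21, 25, 53, 80, 443):
    print(port, socket.getservbyport(port, "tcp"))

# And the reverse lookup, from service name back to port number.
print("http ->", socket.getservbyname("http", "tcp"))
```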