You have just bought tickets to an exotic vacation spot. You board the flight, you land safely, you pull your netbook from your backpack, fire it up, and check for available wireless networks. Indeed there are: unencrypted, passwordless, waiting for you. So you connect to the most convenient hotspot and start surfing. Being as addicted as you are, you want to log in to your email or social network, just to check whether anything momentous happened in the world during your four-hour flight. You're about to hit the sign-in button. Stop. What you're about to do might not be safe.
This super-dramatic introduction is meant to highlight
today's topic: how to stay safe on the road. And I'm not talking about
dangers to your person, like getting robbed or kidnapped or
drinking local tap water, but rather the security of your
electronic gadgets and your online habits. Because if you bring your
computer with you, you will want to connect it to the outside world.
And this is where the trouble begins. Or not. Hopefully, this article will
highlight the perils and pitfalls of unsafe hex and how you can
avoid them.
What’s all this fearmongering?
There's none, really. But you have probably read a million
articles and scare posts telling you how this or that person's email
credentials or credit card details were stolen. You will hear a lot of
warnings about not connecting to insecure networks. You will be warned
against using public hotspots and Internet cafes.
All right, let’s try to analyze the situation one byte [sic] at a
time.
First of all, it all comes down to trust. In a nutshell, you
trust your own computing device and, at the very least, you trust
your ISP. This means the chain of communication at home is safe. Once you
leave that cushy enclave, things change.
There are several layers of potential dangers that might
compromise the security, privacy and integrity of your online
activities outside your home. We will start with the bottom layer.
Network
Whenever your computer connects to a new network, it tries to obtain an IP address. In most cases, your computer’s network device
will be configured to use DHCP, so it will broadcast a request for an
address. The available DHCP server on your network will respond, a
short exchange of information will occur, and you will become a member
of the network.
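If you're curious what that short exchange actually gave you, it's easy to inspect the result. A minimal sketch on Linux (the interface name wlan0 is an assumption; yours may differ):
ip addr show wlan0      # the IP address the DHCP server assigned you
ip route show default   # the gateway your traffic will flow through
cat /etc/resolv.conf    # the DNS servers the network pushed to you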
At this point, your machine will become a visible node in the
local network. This means that other clients will be able to
communicate with you. They might ping you or try to connect to
available services running on your machine. And here’s the interesting
part. If your machine has running services configured to listen on
external networks and accept unauthenticated requests, you may
inadvertently compromise your data. Moreover, if you are running
outdated versions of programs with known and easily exploitable
vulnerabilities, it may also be possible for determined and skilled
attackers, if present on your network, to try to gain elevated
privileges through the exploit and become administrative users on your
machine.
Examples that highlight these possibilities include sharing
drives via Samba, being logged in as root/admin while running an ancient,
buggy version of a P2P client, having SSH enabled with a trivial
password like 123456, and more. How do you go about protecting against network problems?
The simplest solution: a firewall.
This is the easiest way of ensuring your operating system is
inaccessible to other peers on your network. By default, the firewall
will block all incoming connections to your ports unless you
specifically create exclusion rules. And you're done.
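On Ubuntu and similar distributions, for example, this default-deny behavior takes a couple of commands with ufw; a minimal sketch, not the only way to do it:
sudo ufw default deny incoming    # drop unsolicited incoming connections
sudo ufw default allow outgoing   # let your own traffic out
sudo ufw enable                   # turn the firewall on
sudo ufw status verbose           # confirm the rules took effect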
Now, advanced users may want to try to minimize the exposure
vector of their machines, regardless of the firewall. This means
disabling unneeded services, like turning off file and printer sharing
when traveling or not using BitTorrent at the airport, and running as a
user with limited privileges, so even if there are problems, they will
be minimal.
A good example would be Ubuntu – it comes with no services
listening on external networks, so the firewall is not even necessary,
and you work as a normal user without root privileges.
DNS
We’re still talking about network, but one level higher. DNS
stands for Domain Name System, a protocol designed to translate domain
names, like websites, into IP addresses. For example, when you enter
osnews.com into the address bar of a browser, your DNS server, most
likely the one provided by your ISP and automatically configured for
you, will translate the query into an IP address. There will be a long
chain of packets being sent back and forth, but eventually, you will
end up reading the content as you expected, without knowing anything
about any numbers.
Whenever you connect to a network, you're assigned a DNS
server. You can check this by examining your network information. In
Windows, for instance, open the command prompt and type ipconfig /all.
In Linux, take a look at /etc/resolv.conf.
The IP addresses will change based on what network you connect to.
If you want to use only specific, trusted DNS servers, then you
will want to configure static DNS servers that are not overridden by
DHCP assignments, but this requires changing some configuration files on
your machine. More importantly, it means having the IP addresses of known
and trusted DNS servers available. Worldwide public services include
Google Public DNS and OpenDNS, but whether you want or trust them is a
different story altogether.
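If you do decide to pin trusted servers, here is a minimal sketch using NetworkManager's nmcli on Linux, with Google Public DNS as the example and "hotel-wifi" standing in for your connection's actual name:
nmcli connection modify "hotel-wifi" ipv4.dns "8.8.8.8 8.8.4.4"
nmcli connection modify "hotel-wifi" ipv4.ignore-auto-dns yes
nmcli connection up "hotel-wifi"   # reconnect so the change applies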
Assuming you go with the default option, you will be relying
on whatever DNS server is assigned to you for name-to-IP-address
translation. In theory, a rogue server could tamper with your requests and
return bogus information. So what do you do?
There are several things you can do. First, normal web
browsing. When connecting to non-secure websites, the ones normally
prefixed with http:// (unless you're using one of the modern, fancy
browsers that hide that information from you), you will not know whether
you're being forwarded to a bogus site or not. Or rather, not without
some extra work, but then you wouldn't need this article.
But just browsing is not really important. Things become
interesting when you need to input your username and password into a
login field, or better yet, provide your credit card details.
My recommendation is not to log in to non-secure sites when
connected to untrusted networks outside your home. This probably
extends to your social network, so you might be sorely tempted. We will
discuss that later on.
For secure websites, the ones starting with https://, things are a little
different. Secure connections are all about creating an encrypted
tunnel between your browser and the remote server, so that anyone
sniffing the exchange of information will see meaningless, random-looking
packets rather than streams of clear text containing private data. But
there's a catch. How do you know you're connected to the site you think
you're connected to?
This is the reason why secure connections are always
accompanied by certificates. Connecting to an HTTPS site is not
enough; you also need to be sure that the site is what it claims to be.
To that end, the idea of worldwide-trusted Certificate Authorities (CAs)
was created, with the sole purpose of issuing identification cards to
websites. When you connect to a site, it offers its certificate. Your
browser checks the certificate against its own list of trusted
authorities. If it checks out, you proceed to log in. If not, you are
warned that you are connecting to an untrusted site.
Now, the word untrusted does not imply bad or malicious. It
merely means that your browser does not know whether the site is what
it claims to be. There are two potential reasons: one, the site presents
a certificate that differs from the one it is supposed to have; two, it
has a self-signed certificate.
In this situation, you need to make the decision whether to
trust the site or not. For most people, the best choice is to stop and
consult an expert. However, there are several ways common users might
help themselves distinguish between true and bogus sites.
One of the best ways of doing this is to keep bookmarks of
important sites. This way, there’s less of a chance of being offered a
wrong site when you search for it. Moreover, if your bookmarked sites
are trusted but have self-signed certificates, then you should write
down the site’s certificate checksum, so that when you connect from
an untrusted network, you can compare the certificate presented to you with
your saved list. In theory, it is possible to forge certificate
fingerprints to match those of genuine sites, but this is extremely
unlikely.
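For example, you can record a site's fingerprint while still on your trusted home network and compare it later on the road. A minimal sketch using the openssl command-line tool, where example.com stands in for the site you care about:
openssl s_client -connect example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -fingerprint
If the fingerprint printed at the airport differs from the one you wrote down at home, do not log in.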
Another way of keeping track of certificates is a handy
Firefox extension called Perspectives.
This extension uses several online databases, known as notaries, to
compare the current site's fingerprint with results collected over the
last 30 days. If the fingerprint appears unchanged, you may assume that
the site is OK, despite the warning. If the certificate differs or has
changed many times recently, you should assume you may be connecting
to a wrong site. This should help you decide whether to proceed.
Therefore, when you go surfing somewhere outside your
home, like a restaurant or an airport Wi-Fi hotspot, you will know with
a very good degree of confidence whether you can safely connect to the
sites you need. This also extends to providing your credit card details
and other sensitive information. As long as your machine is your own, it
has a firewall, and the web connections are secure and trusted, you're
all right.
Secure connectivity
There are other ways you can connect safely to your sites,
including non-encrypted ones. This can be done by using tunneling. Your
current device becomes a viewing terminal, and all your confidential
activities happen on a trusted remote machine. For example, you may
connect to your home box using SSH. Of course, this implies being able
to connect to your home box, but we will discuss that shortly.
There are several methods you can use to establish secure
remote connections. One of these, indeed, is SSH. Encrypted VNC sessions
are another possibility. You can also use a Virtual Private Network
(VPN) solution like OpenVPN.
However, all of these require somewhat more technical knowledge.
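To give a taste of the SSH option, a single one-liner is enough to turn a reachable home box into a SOCKS proxy for your browser (a sketch; home.example.org and port 1080 are placeholders):
ssh -D 1080 -C -N user@home.example.org
Point the browser's SOCKS proxy setting at localhost:1080 and your web traffic leaves the untrusted hotspot inside the encrypted SSH tunnel.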
To be able to achieve this, you will need a second machine,
up and running in a secure location like your home or office. It must
also be accessible from outside, which means using a static, externally
routable IP address or perhaps Dynamic DNS (DDNS). It also requires
opening ports in your home network firewall, either on the router or on
the box itself, with all the additional security precautions that
entails. Your ISP must also allow that kind of remote connectivity.
Next, we come down to configuring the remote sharing services properly,
including robust passwords, limited connection attempts, non-standard
ports, and other settings.
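For the SSH case, those settings live in the server's /etc/ssh/sshd_config. A minimal hardening sketch, with arbitrary example values rather than recommendations:
Port 2222                   # non-standard port cuts down automated scans
PermitRootLogin no          # no direct root logins
MaxAuthTries 3              # limit authentication attempts per connection
AllowUsers traveler         # only this account may log in
PasswordAuthentication no   # key-based logins only, if you can manage it
Restart the SSH service after editing, and test from a second session before closing your working one.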
Data
Now we come to your personal data. Having a firewall will
prevent external access, but what if your laptop, netbook or smartphone
gets stolen? There's always a possibility someone might steal your
gadgets, and when traveling, the risk increases. To that end, you might
want to consider keeping your data inside encrypted containers on
your disk, or even encrypting the entire hard disk. For most people,
keeping the data inside an encrypted file container is sufficient.
The simplest way to configure encrypted file containers is with a
program like TrueCrypt.
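TrueCrypt does this through a friendly GUI, but to make the idea concrete, here is a minimal sketch of the same concept using cryptsetup (LUKS), a common alternative on Linux; the file name and size are arbitrary, and the commands need root:
dd if=/dev/urandom of=vault.img bs=1M count=512   # create a 512 MB container file
cryptsetup luksFormat vault.img                   # encrypt it; you choose the passphrase
cryptsetup luksOpen vault.img vault               # unlock it as /dev/mapper/vault
mkfs.ext4 /dev/mapper/vault                       # create a filesystem inside (first time only)
mount /dev/mapper/vault /mnt                      # use it like any other disk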
You may also want to consider moving your user profile or home
directory into the encrypted container, so that application settings,
browser links, saved passwords, and other potentially sensitive data are
also unavailable if the gadget gets stolen. This makes day-to-day work a
little more complicated, and there's a risk of permanent data loss if
the encrypted container gets deleted or corrupted, but the protection
outweighs the risk of damage in the case of theft.
Naturally, you should have multiple backups of your data in safe,
trusted locations, but this is true regardless of your travel plans.
You can learn more about TrueCrypt in this
tutorial.
Using a computer other than your own
All of the above only applies to your personal computing
devices. And none of these apply if you must use a public computer. In
that case, all bets are off. You have no way of knowing whether your
activities are recorded or logged in any way, even if you appear to be
savvy and can examine various settings to try to determine if
keyloggers, network sniffers or other tools might be active.
My recommendation is to refrain from providing any personal
information on public computers, including seemingly innocent logins
into forums and social networks, especially if they are not encrypted.
The recommendation also extends to not plugging your removable
devices, like digital camera memory cards, USB thumb drives, external
disks, and other gadgets, into these public computers. It's more than
just viruses and whatnot; frankly, those are overrated. It's the
simple fact of making your data accessible on machines other than your
own. If you do not trust the machine, then simply don't use it.
Best travel option
Now, let's see what your optimal traveling computing setup
should look like.
Ideally, you would want to use a Linux-based operating system
for your travel machine. Normally, almost all efforts to compromise
your machine will be focused on Windows, so you will gain by just being
different. Moreover, Linux is easier to secure out of the box, because
of its least-privilege principle, allowing you to work as a limited
user with no ill side effects.
You will have your browser with several important, secure
sites bookmarked, and a list of sites’ fingerprints saved in a text
file for comparison in the case of discrepancies,
or alternatively, run Firefox with Perspectives. You will also
keep an encrypted container for your data, and possibly the entire home
directory. You will be running a firewall to make sure there are
no ports left open by mistake.
Last but not least: YOU
No technology in the world can protect you from yourself.
There's nothing that can stop you from divulging important personal
data in web forms, chat rooms, forums, social networks, and other
sites. If you decide you want to install software, share files or enter
your private information somewhere online, then the firewall, encryption
and all other solutions become meaningless.
The safety of your digital travel starts with the concept of
discipline. If you cannot adhere to that, then you will have a hard
time ensuring the integrity and privacy of your data. But if you are
willing to follow simple principles, firewall, secure connections and
data encryption will cover some 80-90% of your needs.
Conclusion
Travel security seems complicated, but it narrows down to a
small number of protection layers – firewall for basic network
filtering, secure sites control against rogue redirection and
misidentification, data encryption, and basic discipline. Everything
else comes secondary.
Hopefully, this article clears away some of the fear mist that
shrouded your mind. It is all too easy to get lost in the media panic
generated around the risks and perils of travel, as if you’re going to
Mordor on your own. But things are much simpler. There you go.
Cheers.
About the author:
Igor Ljubuncic aka Dedoimedo is the guy behind dedoimedo.com. He makes a
living out of his very hobby – Linux, and holds a bunch of certifications
that make a nice pile in the bottom drawer.
From the title I thought this was going to be something completely different and was even thinking “why on earth would OSnews be posting an article like this??”
Nice article.
Nice article … but setting up a secure Windows 7 box using SRP and Limited User Account is really trivial these days; more so than Linux I would say.
As the author suggests, using a VPN provides the best available security while travelling and on others’ networks, whether open or secured. Unfortunately most OSes these days tend to do a lot of phoning home before they even allow you to establish a VPN connection, and on most open hotspots you’ll have to go through a captive portal before being able to establish your VPN tunnel.
What I’ve done on my personal Linux systems is to set up HTTP and SOCKS proxies on the VPN server and point everything on the local machine at those proxies. Be sure to use the system firewall to prevent traffic to those proxies from escaping unencrypted when the VPN link is not up! When I encounter a hotspot with a captive portal, I run a separate instance of Firefox with a different profile that is configured to always use private browsing, has plugins disabled, and has no proxy set. Once I log in, the OpenVPN tunnel establishes automatically. I do not have the tunnel take over the default route, since almost everything is configured to use the proxies; however, you can set that up easily enough too.
This configuration has the advantage that it is fail safe; that is, if I happen to leave a program running and connect to an untrusted network, the program won’t automatically start communicating on that network until the VPN link is up. I could imagine other ways to obtain this fail-safe configuration, but any of them would be much more difficult to implement.
Here’s how I accomplished this on Ubuntu; these instructions should work on Debian too, and will be very similar on other distributions.
To prevent proxy-bound traffic from escaping unencrypted on the wireless interface when the VPN is not up, using the "ufw" firewall management script:
ufw deny out on wlan0 from any to 192.168.202.0/24
Adjust as appropriate for your VPN address range and network interface.
Place your OpenVPN configuration in /etc/openvpn/myvpn.conf, then edit /etc/default/openvpn and set AUTOSTART="myvpn". Be sure to use proto udp in your OpenVPN configuration if possible.
I use squid and dante on my VPN host to provide HTTP and SOCKS proxies, respectively. On the client side, these proxies are configured as the default through the desktop environment’s controls. To make Thunderbird use a SOCKS proxy, go to Edit -> Preferences -> Advanced -> General and choose Config Editor. In the config editor, set network.proxy.socks and network.proxy.socks_port as appropriate, then enable network.proxy.socks_remote_dns and set network.proxy.type to 1. All other proxy settings should be the default.
For SSH, I use a program called connect-proxy which is available in the Debian and Ubuntu repositories. Instructions on configuring it are available in the man page.
I’ve added the proxy to /etc/environment so that programs like curl automatically use it on all user accounts:
http_proxy="http://192.168.202.1:8080"
HTTPS_PROXY="http://192.168.202.1:8080"
FTP_PROXY="http://192.168.202.1:8080"
ALL_PROXY="http://192.168.202.1:8080"
NO_PROXY="localhost,.local"
In addition, I’ve configured sudo to use a separate environment file /etc/environment.sudo so that commands like sudo apt-get update use the proxy as well. The contents of /etc/environment.sudo are the same as what I added to /etc/environment. To configure sudo, run visudo and add the following line near the beginning of the file:
Defaults env_file=/etc/environment.sudo
Be careful when editing the sudo configuration, since one mis-edit can ruin your day.
Using OpenSSH’s -D flag and a high port number (I usually use something in 8xxx), you can hack together your own SOCKS proxy with a shell one-liner. This is great if, like me, everything you use but HTTP is secure by default anyway (IMAPS, SSH, IRC over SSL) and you don’t have OpenVPN running on your server. I can then use FoxyProxy Standard, a great FOSS Firefox add-on, to tunnel my web traffic.
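For reference, such a one-liner might look like this (a sketch; the host name is a placeholder and 8123 is just one of those 8xxx ports):
ssh -D 8123 -C -q -N user@yourserver.example.org
-D opens the local SOCKS proxy, -C compresses the stream, -q keeps it quiet, and -N skips running a remote command; point FoxyProxy at localhost:8123 and you're tunneled.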
I did this for the last family vacation and it worked perfectly; routed all my traffic back through my home machine thanks to SSH and socks proxy settings.
For programs that do not recognize proxy settings, there is a handy little tcp2proxy utility that captures all of a program’s network traffic and redirects it through the proxy. Very handy little helper that one.
There’s also “tsocks” for *nix. Just type “tsocks foo” instead of “foo” on the command line, and it sets up tunneling, iirc through LD_PRELOAD.
A better solution is to use sshuttle:
https://github.com/apenwarr/sshuttle
You just need an SSH server with Python. sshuttle then redirects all your traffic, including DNS if you want, through your SSH server. It's extremely simple to use and extremely useful.
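Typical usage is a single command (a sketch; the host is a placeholder):
sshuttle --dns -r user@yourserver.example.org 0/0
Here 0/0 routes all IPv4 traffic through the tunnel, and --dns captures name lookups as well.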
Oh come on, the last bit was just pro-Linux FUD. Otherwise an interesting article.
Windows 7 and Vista both run under a "power user account"; even if you are Admin and have UAC turned on, it works exactly like sudo does on Ubuntu (which was pictured) and/or OS X.
Also, sudo can be set up not to require a password (I do this on my OpenBSD VM).
Yes, I know there are a ton of viruses for Windows, but with an up-to-date browser, AV and some common sense you are fine.
It may have been somewhat biased, but I think it’s stretching to call it FUD.
Asking me whether I want to do something dangerous provides little security from an outside attacker who is already working under my account, as there is no need to enter a password.
You can also set up your user account not to require a password either. Very convenient, saves a lot of time. Perhaps you should do that as well.
That last one eliminates 99.9% of your risk right there. Sadly, it’s not used as often as it should be.
It can be configured to require a password, and in that case it'll require the administrator password, not the same one you use to log in, i.e. it's actually a tad safer than the Ubuntu default behaviour. Just be sure to use a password for admin that you don't use anywhere else and that is hard to guess, and you're more or less set.
If the attacker is already in your account, you obviously allowed some dodgy code to run in the past… the same would happen if you used sudo on a *nix system.
Unfortunately, this is not the case anymore. Just recently I had to clean out a large scareware infestation that sneaked in via drive-by download from a trusted supplier's site. Our enterprise McAfee solution was totally useless, too. I have now mandated RequestPolicy, but there is little you can do when the malicious software comes from a trusted source.
To be honest, that right there is usually more than enough of a reason to pull out your hair and a 10kg sledgehammer out of your closet.
I anticipated such an answer, and I agree that McAfee Enterprise may not be the best security solution available and is certainly a pain to administer. But our other branch office is running Sophos, and they have had the exact same situation.
It is too convenient to blame such an occurrence on a single piece of software and be done with it. We need to understand that being prepared and smart has ceased to be a reliable precaution against malware. That's the unfortunate reality of today's networks, and we must learn to accept it. Leaning back and saying "common sense will prevail" is not going to help us.
However, the article isn't about corporate networks… it's about taking your laptop on holiday.
How does this invalidate the statement that common sense is not going to protect you from malware?
It doesn't… however, it isn't really the context of what I was saying or what the article was talking about.
What I mean about common sense is that 95% of the time you will be fine with only it, but the other 5% of the time the AV will be there to pick up when you have slipped up. So I'd rather have both. MSE doesn't seem to have any significant performance penalty on my computer and seems pretty good; others have other preferences.
Linux is pretty good on the whole with security (better than Mac in my opinion). But the whole system is designed to protect the system, not the user's data. Which is fine if you have lots of people using a system; it doesn't help you, however, if you use it as a personal system.
Also, Linux users are more savvy on the whole; to even contemplate using Linux you need to understand to some degree what an operating system actually is.
People on this website tend to be fairly savvy and have installed one or more operating systems themselves, and I am sure most of the people on here could run a Windows system and not get viruses for years if they had to.
In any case, people are far more savvy than they used to be. I work with many who are as good at using a modern PC as I am (I learnt a few things watching the testers tear apart my pages), and I've been using computers since the BBC Micro Model B (though the Mac guys crap themselves when I open up the terminal, which is always fun).
My brother and sister, who aren't computer savvy at all, understand not to download crap from dodgy websites and can spot something dodgy from a mile off… because it is something they have been brought up with.
Lots of people now have been brought up with PCs and aren't dumb enough to fall for scams; they are, however, clever enough to get around enterprise security, which is an entirely different thing altogether.
However, this is somewhat aside from corporate security, which is a totally different thing. Corporate security is about protecting the network and the company's data, and most places I have worked that are fairly large have very locked-down PCs and laptops.
The article is about arming yourself with knowledge so you can spot dodgy stuff on a network. If you have read this you are probably savvy enough to know how to protect yourself when using a Windows system.
Thank you for your answer. I think I understand where you are coming from and I agree. People have become much better at avoiding pitfalls, and every ounce of common sense certainly helps. However…
…I can't stress enough that being savvy is not enough anymore. Modern malware sneaks onto your system via totally legitimate and non-fishy vectors. All I really want to do here is raise awareness of that fact. And telling ourselves that we are competent enough to avoid this or that just creates a false sense of security. That's something we need to be aware of, too.
For the last infestation I investigated, I uploaded the file to virustotal.com and virscan.org.
Of all the (up to date) virus scanners at those sites, only 7 out of 35 or so detected it. And only one of the well-known brands detected it. So actually, the ones I had never heard of recognized it.
But this has been known in the security community for years.
Most of the viruses these days are just regenerated variants, every 15 minutes or so.
And virus scanners only have blacklists; they can't block viruses they don't know about.
As another example, I administer some Linux mail servers, which obviously also need to do virus scanning.
When there was a new e-mail virus, the person sending it out was obviously using a botnet and just pressing a button every 15 minutes to generate a new variant. By the time the virus scanner was updated, that variant was already no longer being sent.
I don't even run antivirus anymore on Windows. On Linux I've never run antivirus. I've decided virus scanners are not for me.
I keep my software up to date, don't download anything stupid, etc. I disable most plugins in the browser (only Flash is enabled).
One thing that I personally like to use, even at home, is HTTPS Everywhere: an addon that tries to always use HTTPS on every possible site, so that none of your details go over the wire in plaintext. I'm fairly certain most of the people here have heard of, e.g., the Firefox addon that allows you to browse Facebook as another user as long as that user is logged in on the same network. Well, this addon thwarts that one and many similar ones.
The good thing about this addon is that it requires no set-up, can be safely installed on computer luddites' devices, and at least so far I have not found a single website that experienced any glitches due to it.
Do you actually look at the certs given to your HTTPS connections? In a “hostile” environment trusting HTTPS to be secure isn’t much better and often gives a false sense of security. It’s pretty trivial to just proxy any HTTPS traffic for a user and unless you actually look at the cert you’ll never know. I will admit that if your data stream between you and siteA is legit that people in between can’t sniff it, but if you’re starting out in a hostile area it can’t be trusted.
The only way to be secure in a hostile environment is a key-based structure (SSH, VPN, etc.) where you already know the key on the other end. I.e., if you SSH to your home box, which you know you've connected to before, and get a prompt for a new key, one would be a fool to continue.
A bootable CD distro and a USB key with your various keys (SSH, VPN, etc…) pre-setup is a good way to go.
I’m not saying this isn’t a pain in the ass, but unfortunately real security normally is these days.
How do you plan to proxy SSL traffic without having the browser complain about incorrect certificate? Besides, you’d need to actually be able to intercept the data stream first to even set up a proxy, meaning that you’d need to be in control of the wifi hotspot, the machine issuing dhcp replies, or one of the machines between the user and the target website. A random machine in the same network can’t just start routing your traffic.
Well, yeah. In some random "hotspot" that you pick up while on the road, the chance that the DHCP server and/or router is compromised or maliciously set up is rather high. That's the point.
So, setting up a proxy where the user gets certificate warnings is trivial, and most people "need" to check their Facebook or whatever and would just click continue. Also, plenty of CAs around the world aren't all that great, and you could probably finagle a trusted cert out of many of them and use something like Squid and its SSLbump feature to invisibly proxy the SSL traffic for you.
WereCatf,
“…you’d need to actually be able to intercept the data stream first to even set up a proxy, meaning that you’d need to be in control of the wifi hotspot, the machine issuing dhcp replies, or one of the machines between the user and the target website. A random machine in the same network can’t just start routing your traffic.”
Actually there are some attack vectors where a random machine in the same network can just start intercepting traffic. These aren’t theoretical either, they work on many LANs.
Many wifi routers don’t firewall users from each other (though commercial hotspots should). Consequently this opens up a few attacks. DHCP is extremely vulnerable to race conditions where a client joins the wrong subnet or uses the wrong gateway. I’ve studied this method and it is pretty reliable.
With no inter-peer firewall, arp spoofing is just as feasible with WIFI as it is on a LAN. The wrong client claims ownership of another’s IP address and then forwards the packets to the real owner.
Another possibility, though this one requires some sophistication, is to spoof the hotspot’s own DNS server from the wan interface. I’ve never tried it as it requires non-egress-filtered internet access, but if you can flood the hotspot’s public ip with DNS responses before the real DNS server responds, there’s a chance it will accept the fraudulent answer, and will begin feeding the wrong IP address to hotspot clients. If I remember correctly, the odds of success were maybe only 1 in 10000 due to sequence number probabilities.
Another obvious problem with public wifi hotspots in particular is that it’s hard to verify that you are connected to the service you think you connected to. It’s easy to impersonate an SSID and even BSSID whether encrypted or not.
This says nothing of passive wireless snooping, or heck even physical wire snooping of the hotspot’s cable connection.
I usually don’t care though, it’s nice just to have internet access on the road regardless of who’s watching. I don’t particularly trust my own ISP anyways.
Several certificate authorities were just revealed to have been selling subordinate roots to IT organizations which would allow them to do just that. Here’s the letter that Mozilla sent out to all their registered CAs about this issue:
https://groups.google.com/group/mozilla.dev.security.policy/msg/57b1…
So I’m afraid that trusting SSL to prevent MITM attacks is no longer possible. You should inspect certificates or use an addon like Certificate Patrol to help automate the process, and if you are connecting to an untrusted network, consider using your own VPN as well.
That’s kind of my rule of thumb if i’m on a public wifi network.
Actually, Ubuntu has avahi-daemon listening on the network, and I'm sure a lot of people install openssh-server for remote administration.
Avahi is used to discover services on the network. It is an implementation of Apple's ZeroConf and similar: http://en.wikipedia.org/wiki/Zero_configuration_networking
While the developers of Avahi put a lot of effort into making it secure (use a chroot for part of the system, set up rlimits, run it as a separate user, etc.), mistakes are still possible.
The configuration file of avahi-daemon is /etc/avahi/avahi-daemon.conf
The configuration file of openssh-server is /etc/ssh/sshd_config
Some tips (a consolidated sshd_config sketch follows this list):
- Make sure the Ubuntu version you have installed is still supported and all the software is up to date.
- If you don't need compatibility with certain old Apple systems, you can disable 'enable-wide-area' in Avahi. This is especially useful when you are connected somewhere where IPv6 is enabled.
- Something else I haven't tried is to just remove avahi-daemon as a whole. I think it should be possible; I don't think any other part of Ubuntu depends on it. I've just disabled IPv6, and IPv4 is behind NAT.
- I think it is the default, but I also always set disable-publishing=yes in Avahi. This makes sure that any software you install on your Ubuntu installation does not announce its existence on the network.
- For SSH I always use a non-standard port, if only to prevent a lot of automated break-in attempts.
- SSH should be set up to only allow protocol 2 (the default in newer installations): Protocol 2
- SSH should be set up to not allow root logins: PermitRootLogin no (I have no idea why this still isn't the default)
- I always set up SSH to only allow certain users with: AllowGroups sshusers
or:
AllowUsers username1 username2
- If you do want to enable root login for SSH, at least set it up to only allow it from a certain IP address by adding this to the end of the configuration file:
Match address 12.12.12.12
{tab-for-readability}PermitRootLogin yes
- And a non-security tip: to speed up SSH login I also disable DNS lookups, which can really help if reverse DNS is broken or slow:
UseDNS no
(which is something that can really help if you have set up your own router and use SSH to log in to it. You don't want your router to depend on DNS)
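Pulling the SSH items of that list together, a minimal /etc/ssh/sshd_config sketch might look like this (the port number, group name and address are arbitrary examples, not recommendations):
Port 2222
Protocol 2
PermitRootLogin no
AllowGroups sshusers
UseDNS no
Match address 12.12.12.12
    PermitRootLogin yes
Restart the SSH service afterwards (sudo service ssh restart on Ubuntu) and test from a second terminal before closing your working session.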
“SSH should be setup to not allow root logins: PermitRootLogin no (I have no idea why this still isn’t the default)”
Being able to rsync over SSH as root can be very convenient since rsync via user accounts doesn’t preserve ownership. Do you know of an alternative?
“And a non-security tip: to speed up SSH-login I also disable DNS, which could really help if ‘reverse DNS’ is broken or slow:
UseDNS no”
Also, removing / disabling the following feature can eliminate a delay of a few seconds that happens on every single login (disable it in the server or the client). It won't affect anyone using password and/or RSA authentication.
GSSAPIAuthentication yes
I honestly don't know why it's always so slow even on fresh installs, but LogLevel DEBUG confirms it's the culprit. Don't know if it's a bug or if it's normal, but the following indicates it's been a problem since 2007.
https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/84899
Yes, use without-password for PermitRootLogin, and passwords will be disabled, but you can still use keys. Your rsync is most likely set up with keys anyway, ones without passphrases, if it's an automated type of solution.
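Concretely, that combination might look like this (a sketch; host and paths are placeholders): with PermitRootLogin without-password set in the target's sshd_config and your public key in root's authorized_keys, a backup like
rsync -a -e ssh /home/ root@backuphost.example.org:/backup/home/
preserves ownership and permissions, because the receiving rsync runs as root, while password-guessing against the root account stays impossible.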
GSSAPI is Kerberos authentication, I think it only causes problems when you install the libraries you need for Kerberos authentication but don’t actually configure it.
Lennie,
“GSSAPI is Kerberos authentication, I think it only causes problems when you install the libraries you need for Kerberos authentication but don’t actually configure it.”
That’s possible, however like everyone else in the earlier linked thread I wonder why a distro would come prepackaged that way considering the annoyance it causes the majority of users. Or why they don’t fix the source of the delay in kerberos itself. Unless it’s a deliberate connection throttling mechanism?
Just now I looked for kerberos packages and lib files, but I don’t see anything installed. Granted I don’t know what I’m looking for, but disabling it works well enough.
Probably because large organisations, which do use Kerberos, need a default install to be able to allow people to log in.
If there is no Kerberos installed, how can SSH be used to log in?
Must be some reasoning like that.