I use the terminal a lot, even under OS X, and use bitmap fonts there. Until recently I had been using the Proggy family of fonts, as they were the best bitmap fonts available for the Mac. However I recently updated my machine to Mavericks, which managed to mess up the font rendering on iTerm2 (truncating the bottom of dangling characters such as ‘g’, which can get a bit confusing when you use git a lot).
Rather than just adopting an officially sanctioned but less usable Apple font, I got nostalgic for the old SGI Screen font. This is my terminal font of choice under Linux, but it does not exist in a Mac-compatible format. However I dug up a copy of the PCF files online (it’s been relicensed to MIT and now ships with OpenSUSE) and ran it through FontForge (via Alistair Buxton’s bitmap2ttf wrappers). This produced a passable TTF version of the fonts that will install on OS X and is usable under iTerm2:
The glyphs have some artifacts in the Font Book previewer and when antialiased, but they work well in bitmap mode in iTerm2 (i.e. with antialiasing disabled).
The TTF and PCF files are available in this git repo:
And if anybody can tell me how to do the equivalent of Alt-Backspace (delete backwards by word) in iTerm2 I’d be eternally grateful. Note that Ctrl-W is not the same thing: e.g. given the line ‘cat myfile.txt’, Alt-Backspace will just delete ‘txt’, whereas Ctrl-W will delete ‘myfile.txt’.
Update: It turns out this can be fixed by configuring iTerm2 to send the hex codes 0x1b, 0x08 in response to Command-Delete or Ctrl-Delete. More details are available in this blog post.
A related issue is that Ctrl-Left/Right for backward-word/forward-word doesn’t work on the Mac. This is because OS X doesn’t ship with an /etc/inputrc (used by the Readline library). This can be fixed by copying this file from a Linux host (this one from Arch should suffice), either to /etc/inputrc system-wide or to ~/.inputrc for your own user.
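For the curious, the part of that inputrc doing the work is only a few lines. This is a minimal sketch rather than the full Arch file, and it assumes an xterm-style terminal; other terminals may send different codes for Ctrl-arrow:

```
# Minimal ~/.inputrc sketch: word movement with Ctrl-Left/Right.
# \e[1;5C / \e[1;5D are the xterm-style Ctrl-arrow escape sequences.
"\e[1;5C": forward-word
"\e[1;5D": backward-word
```

Run ‘bind -p | grep word’ in a new bash session to confirm the bindings took effect.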
I’ve written a small Clojure library for geolocation routines for a side project I’m working on. The routines in it so far are based on the algorithms described by Jan Philip Matuschek, but converted to be more idiomatic Clojure.
The code, etc:
While uploading the slides from my Devopsdays talks, I thought I’d upload some other talks I’ve given. The main ones of historical interest are the talk to accompany my paper on AccessGrid over XMPP for APAC ’05, and my presentation to the Sydney SIGGraph chapter on the emerging technologies we were pursuing at the Sydney University visualisation lab.
This November I represented Atlassian at Devopsdays London, giving two ignite talks. The ignite format is a 5-minute talk, with 20 slides and a fixed 15-second-per-slide cadence. It’s not a format I’m particularly comfortable with, as it removes the ability to ad-lib and go off on tangents, which I like to do when speaking. I think this shows up in the talks; the bits where I sound most comfortable are where I briefly go ‘off script’.
The first talk was intended to raise the idea that much of the perceived separation between dev/ops and other aspects of business is purely that: perception. As more and more desktop and manual tools migrate into the cloud the difference is largely moot. With this in mind, I suggest that the advantages of devops culture and tools should apply equally to other functions within the company, and provide some concrete suggestions on how to do this. After all, if we’re breaking down silos, why limit those silos to ‘dev’ and ‘ops’?
The second ignite started as a longer talk on the rebuilding of the Atlassian order system to be atomic, but I ended up paring it down to a few key points. It outlines the credit card pre-authorisation technique we use to attempt to wring robustness out of notoriously unreliable credit card gateways. The deeper point was about the necessity of anticipating the effects of catastrophic system failure and preparing for it.
The talk sparked a few conversations afterwards, with others sharing their woes at making credit card systems reliable. It turns out others have used this technique too (and I think I may be partially responsible for its adoption).
Update: The slides for the talks are up on Slideshare:
SPDY is the next big thing in web technology. Nominally it is intended to speed up websites by multiplexing multiple site requests over a single connection; however, there is some question about how effective it is at this. Personally I see its advantages in the datacenter: by reducing the number of TCP connections required to serve up a page to one, the resources required for file descriptors and firewall entries are massively reduced for high-volume sites. I suspect this is why sites such as Twitter and Facebook are adopting it before its usefulness for the end user has been proven.
Always one to jump on a passing bandwagon, Haltcondition is now being served via SPDY if your browser supports it. This is possible via the recently-released Nginx patches. Prior to this I had been testing the official Google Apache module, however this proved unstable as it is incompatible with mod_php; running WordPress under FCGI proved flaky. Adding Nginx as a caching/SPDY/SSL frontend allowed me to continue using Apache as an application container for WordPress.
To enable SPDY on Haltcondition I took the following strategy:
- Download the Nginx patches and follow the instructions to build an SSL/SPDY-enabled instance. Personally I installed it under /opt/nginx…
- Modify the existing Apache/WordPress vhost to bind to a different port; 8080 is traditional.
- Configure Nginx to serve HTTP and HTTPS, and forward requests to 8080.
- On the HTTP vhost configure Nginx to send the ‘Alternate-Protocol "443:npn-spdy/2"’ header; this tells the browser that SPDY is available on the HTTPS port.
- Configure your system to start Nginx; personally I use daemontools with Nginx in foreground mode.
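Put together, the Nginx side of the steps above looks something like this. This is a minimal sketch rather than my exact config; the server name and certificate paths are placeholders, and the ‘spdy’ listen parameter comes from the patched build:

```
# Sketch: Nginx as SSL/SPDY frontend, Apache/WordPress behind it on 8080
server {
    listen 443 ssl spdy;               # 'spdy' requires the patched build
    server_name example.com;           # placeholder
    ssl_certificate     /opt/nginx/conf/cert.pem;   # placeholder paths
    ssl_certificate_key /opt/nginx/conf/cert.key;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}

server {
    listen 80;
    server_name example.com;           # placeholder
    # Advertise SPDY on the HTTPS port to capable browsers
    add_header Alternate-Protocol "443:npn-spdy/2";
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```

The X-Forwarded-Proto header is there so WordPress can tell that the original request arrived over HTTPS.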
One gotcha is that WordPress doesn’t handle this sort of proxy-chaining very well and will tend to go into redirect loops. The workaround for this is to disable the ‘redirect_canonical’ filter; there’s no official way to do this but the ‘Fix Multiple Redirects’ plugin will do this for you.
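If you’d rather not install a plugin, the workaround amounts to one line of PHP. A sketch of it as a tiny standalone plugin (the ‘redirect_canonical’ filter and ‘template_redirect’ hook are WordPress’s own; the rest is illustrative):

```php
<?php
/* Plugin Name: Disable canonical redirects (sketch) */
// WordPress core hooks redirect_canonical on template_redirect;
// removing it stops the redirect loops behind the Nginx proxy.
remove_filter( 'template_redirect', 'redirect_canonical' );
```

This works because core registers the filter before plugins load, so a plugin can safely remove it.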
Imagine a not-too-distant future where IPv6 is starting to see widespread adoption. On Sunday evening you login to Amazon.com on your laptop and purchase some sex-toys for you and your wife for your upcoming anniversary; good for you for keeping it interesting. Naturally you enable privacy mode in Firefox so it won’t show up in your history, society being what it is.
On Monday you head into your job at a large daycare center where you’re a manager in HR. There’s an upcoming restructure and you want to make sure the employees are reassured that it’s a good thing; in between meetings you flick through some change-management books on Amazon on your laptop, but can’t see anything useful.
Congratulations! Amazon.com (and anyone they feel free to share with) now know that you have sex-toys and access to young children. No logins, no cookies; all they need to do is look in their logs for your laptop’s unique identifier and then match your work’s network block to your purchases at Amazon.
How does this work? First a bit of background (the following skips a few details but is basically true for most people)…
Every piece of network hardware in every computer, phone, etc. in the world has a unique identifier: the Media Access Control address, or MAC. This address is 48 bits long, and different from the IP address you use on the internet; it is used purely for finding machines on your local network.
Although it was never a deliberate design decision, the IPv4 internet has a few privacy mechanisms built into it, almost as a side-effect of its limitations. IPv4 addresses are 32 bits long, far too small to contain any significant portion of the MAC address or any other identifier; the MAC address is quietly dropped the moment your traffic enters the wider internet. And although the IP assigned to you or your employer by your ISP is globally unique, in practice its tracking potential is limited: your home IP is regularly reused by your ISP for other customers, and at work the public address is shared by dozens or even hundreds of employees due to NAT.
With IPv6 it’s a different story. A 128-bit IPv6 address consists of two components: a network address that identifies your whole network (usually 64 bits) and a local component that identifies your machine on your network. This local component is based on your MAC address, and by default is included in all communication with the wider internet. Because it’s bound to your physical hardware the local part always stays the same, regardless of which network you’re connected to; it is in essence a global tracking code, and can be used by remote sites to infer some interesting information about you. The example above is the simplest I could come up with; advertising providers operating across multiple sites are going to be able to do some truly stunning pattern matching. And hardware vendors will already have massive databases mapping MAC addresses to users and credit cards; some of them (e.g. Apple) have deep ties with organisations such as the RIAA, who would dearly love to be able to match an IP address to a name and mailing address without any of that inconvenient subpoena stuff.
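To make the ‘bound to your physical hardware’ part concrete, here’s a short illustration (mine, not part of the original spec text) of how a host derives its interface identifier from its MAC address under stateless autoconfiguration: insert ff:fe in the middle and flip the universal/local bit of the first octet.

```python
def eui64_from_mac(mac: str) -> str:
    """Derive the EUI-64 interface identifier an IPv6 host builds from
    its 48-bit MAC: insert 0xFFFE in the middle and flip the
    universal/local bit of the first octet."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return ":".join(
        f"{(eui[i] << 8) | eui[i + 1]:04x}" for i in range(0, 8, 2)
    )

# The same MAC always yields the same identifier, whatever network you join:
print(eui64_from_mac("00:11:22:33:44:55"))  # -> 0211:22ff:fe33:4455
```

Those last 64 bits follow you from your home network to your office network, which is exactly the correlation the example above exploits.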
Luckily this problem was anticipated during the IPv6 specification process and a solution added: RFC 3041 privacy extensions. The gist of this is that your operating system can generate a random, short-lived fake local address that is used for outgoing connections. In the example above, assuming the temporary address is set to a short enough timeout, by the time you’re at work the next day the address you used from home will have been replaced by a new one.
There’s only one problem; it’s not enabled by default in all operating systems. Here’s how to enable it in some of the common ones:
Linux desktop/server distributions
Most Linux distributions seem to have temporary addresses disabled by default. Enabling them is simple enough though:
sudo sysctl -w net.ipv6.conf.all.use_tempaddr=2
sudo sysctl -w net.ipv6.conf.default.use_tempaddr=2
echo net.ipv6.conf.all.use_tempaddr=2 | sudo tee -a /etc/sysctl.conf
echo net.ipv6.conf.default.use_tempaddr=2 | sudo tee -a /etc/sysctl.conf
Android
Temporary addresses seem to be disabled by default in Android. However if you have rooted your phone then you can use the Linux method. Either use an Android terminal app or ‘adb’ from the SDK to get a root shell:
mount -o remount,rw /system
cd /system/etc/
echo net.ipv6.conf.all.use_tempaddr=2 >> sysctl.conf
echo net.ipv6.conf.default.use_tempaddr=2 >> sysctl.conf
Then reboot your phone.
Mac OS X
As of 10.6.7 temporary addresses are disabled. Enabling them is similar to the Linux method:
sudo sysctl -w net.inet6.ip6.use_tempaddr=1
echo net.inet6.ip6.use_tempaddr=1 | sudo tee -a /etc/sysctl.conf
iOS
This security advisory implies that iOS 4.3 has this enabled by default. For older releases you’re probably out of luck though.
IPv6 temporary addresses seem to be enabled by default; if you can confirm, please comment.
Well, as of Friday the 4th of February 2011 IANA is officially out of IPv4 addresses. It’s now up to the regional registries to dole out the remaining addresses as they see fit, which they will do increasingly sparingly.
To celebrate the beginning of the end of IP as we know it, Haltcondition.net is now available over IPv6:
I’ve also added an IPv6 detection widget on the right, courtesy of Patux. The IPv6 connectivity is provided by a Hurricane Electric tunnel to my Linode box; the fact that I even need to use a tunnel at a professional hosting site is a sign of how painful the next couple of years are going to be.
Luckily my ISP are currently trialling consumer-level IPv6, so I can at least test the site. However at this point setting up IPv6 in the home is far from simple; I had to convert from DD-WRT to OpenWRT on my router and do a lot of manual configuration to get an end-to-end connection. It’s going to be a painful transition.
Update: Linode have announced provisional support for IPv6, so this blog is now native end-to-end if your ISP has support. The Linode setup is a bit odd (they only provide a single IP rather than the usual /64) but appears to work.
One of the more intriguing speculations doing the rounds is that Linode rolled this out early as Slicehost are gearing up for IPv6 as they transition into Rackspace’s cloud. If so this is promising, as I hadn’t expected IPv6 to be a product differentiator for some time.