Collapsible shipping container

Mar 4th 2010, The Economist

Transport: A collapsible shipping container could help reduce the
environmental impact of transporting goods

OVERHAULING an industry of which you know little is not easy, but
neither is it impossible. In 1956 Malcolm McLean, a trucker from North
Carolina, launched the first "intermodal" shipping container, which
could be transferred easily between lorries, trains and ships. It
revolutionised the transport of goods by abolishing the traditional
(and back-breaking) system of "break bulk" loading, and thus helped oil
the wheels of globalisation. Now another outsider to the shipping
industry is trying to get a similar change under way.

Rene Giesbers, a heating-systems engineer from the Netherlands, has
invented a collapsible plastic shipping container which, he hopes, will
replace McLean's steel design. Because it is made of a fibreglass
composite, it weighs only three-quarters as much as a standard
container but--more importantly--when empty, it can be folded down to
a quarter of its size. The composite is more resistant to corrosion
than the steel it replaces, is easier to clean and floats. It is also
greener to manufacture. Making one of the new containers produces 25%
of the carbon dioxide that would be generated by the manufacture of its
steel counterpart.

A collapsible shipping container would be useful for several reasons.
Patterns of trade mean that more goods travel from China to America,
for example, than the other way around, so ships, trains and lorries
inevitably carry some empty containers. If these were folded, there
would be more room for full containers and some vessels would be
liberated to ply different routes. If collapsed containers were bundled
together in groups of four, ships could be loaded more quickly, cutting
the time spent in ports. They would also take up less space on land,
allowing depots to operate more efficiently.

Mr Giesbers is not the first to invent a collapsible container. Several
models were experimented with in the early 1990s but failed to catch
on, mainly because of the extra work involved in folding and unfolding
them. There were also concerns about their strength. Mr Giesbers says
the Cargoshell, as he has dubbed his version, can be collapsed or
opened in 30 seconds by a single person using a forklift truck. It is
now undergoing tests to see if it is strong enough to meet
international standards.

There are currently about 26m containers in the world, and the volume
of goods they carry has risen from 13.5m "twenty-foot equivalent units"
in 1980 to almost 140m today. It is expected to reach 180m by 2015. Mr
Giesbers aims to have 1m Cargoshells plying the seas, rails and roads
by 2020, equivalent to 4% of the market.

Bart Kuipers, a harbour economist at Erasmus University in Rotterdam,
thinks that is a little ambitious, but he reckons the crate could win
2-3% of the market. He thinks it is the container's lower weight,
rather than its collapsibility, that makes it attractive. It will
appeal to firms worried about their carbon footprints--and if oil
prices rise, that appeal will widen.

Ultimately, the main obstacle to the introduction of the Cargoshell may
be institutional rather than technical. As Edgar Blanco, a logistics
expert at the Massachusetts Institute of Technology, points out,
"Everyone is vested in the current system. Introducing a disruptive
technology requires a major player to take a huge risk in adopting it.
So the question will always boil down to: who pays for the extra cost,
and takes the initial risk?"


harvesting April 1st hoaxes for future technologies

As in many previous years, a new RFC was published on April 1st:

Obviously, it's a hoax. "IPv6 over Facebook" is not something anyone is going to actually believe in, let alone implement. But wait: the idea behind this joke is actually quite a good one: define an IPv6 prefix and assign a computed value to the rest of the address. What the hoax doesn't provide is working routability and compatibility with the existing IPv4 internet.

If you think the idea is ridiculous, consider this:

"Some men see things as they are and say, 'why?' I dream of things the way they never were and say, 'why not?'"
- Robert F. Kennedy, after George Bernard Shaw

The concept of computing new, dynamic IPv6 addresses is actually available in practice through the Teredo standard. There are several implementations out in the wild, one of which is free: Miredo. Teredo provides both global addressability and routability! And it is compatible with the existing IPv4 internet infrastructure.
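The Teredo mapping is simple enough to sketch in a few lines of Python. The function name and example values below are my own illustration (not anything from Miredo); the layout follows RFC 4380: a fixed 2001:0::/32 prefix, then the Teredo server's IPv4 address, 16 flag bits, and the client's public port and IPv4 address, both stored XORed with all-ones.

```python
import ipaddress

def teredo_address(server_v4, client_v4, client_port, flags=0x8000):
    """Build a Teredo IPv6 address (RFC 4380) from its components.

    The client's public port and IPv4 address are "obfuscated" by
    XORing with all-ones, as the spec requires.
    """
    server = int(ipaddress.IPv4Address(server_v4))
    client = int(ipaddress.IPv4Address(client_v4)) ^ 0xFFFFFFFF
    port = client_port ^ 0xFFFF
    packed = (0x20010000 << 96) | (server << 64) | (flags << 48) | (port << 32) | client
    return ipaddress.IPv6Address(packed)

# Example values (mine): server 65.54.227.120, client 192.0.2.45 on port 40000
print(teredo_address("65.54.227.120", "192.0.2.45", 40000))
# -> 2001:0:4136:e378:8000:63bf:3fff:fdd2
```

Every host behind a NAT can thus be given a computed, globally routable IPv6 address, which is exactly the trick the hoax RFC gestures at.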

So how can this technology be used for a social network in which every user/participant receives a globally unique, dynamic and routable IP address? Simple: make Miredo part of the peer-to-peer software and make sure IPv6 is enabled in the operating system.

This is what etoy.CORPORATION has implemented in the ANGEL APPLICATION. The ANGEL APPLICATION NETWORK, an arcane network of computers, is loosely connected via the internet, safeguarding and sharing digital fragments of MISSION ETERNITY PILOTS. The individual ANGELS are technically routed over the existing internet via virtual IPv6 addresses, just as the RFC hoax suggests. This is loosely documented in the ANGEL WIKI.

This April Fools' hoax is interesting because ideas, even ones made in jest, can turn out to be real-world concepts and products that help us find new ways of experiencing culture, emotions, rituals, belief, life and death.

Timothy Leary's last words are reported to have been: "Why? Why not? WHY NOT? Why not? Why not? Why not?" and later, "Beautiful."


etoy got slash dotted

... and - as the cliché requires - the etoy servers collapsed under the massive traffic that the article and discussion on Slashdot (news for nerds) generated.
The result: a lot of confused hackers ;-) some even think that this is a media hack / prank by etoy.

etoy.HAEFLIGER wrote:
hi, here's how it got there: Matthias Stürmer, who works in our team at ETH, booked an office tank (etoy.CONTAINER) for their hackontest at the end of September and announced it on /.
... He misplaced the link, pointing to the SARCOPHAGUS instead of a regular etoy tank. Google is one of their sponsors.
So this confusion created the PR frenzy without giving us the opportunity to really benefit, except for the traffic...
that's great! this viral data salad shows perfectly how memetic mutation can lead to crazy stories that are not at all intentional. nobody created this to trigger a hype. a link to the wrong web address ... the rest is distributed story telling, conspiracy theory and imagination. it's all in there: dead bodies, drugs, hackers and even google!

fact is: etoy just wants to be nice and contributes a regular etoy.TANK to the event ;-)

i admit that the etoy.CONTEXT and our history probably fueled the whole madness: etoy - leaving reality behind since 1994.

actually we don't create art and code to confuse people. our stuff is just a bit closer to the edge of reality than most corporations' output. things sometimes get a bit out of our control.

thank you for helping us grow!
agent zai, ceo etoy.CORPORATION

On Apr 22, 2008, at 12:22 PM, etoy.POL wrote:

I find it rather disturbing to learn about such a huge public involvement of etoy in the hacker community through /.

especially since it could involve finding dedicated contributors for the angel-app.

Where exactly is the connection between a Google project and etoy?


silvan.zurbruegg wrote:
This whole story has been bringing us massive traffic since yesterday: / was temporarily unreachable yesterday. Things are starting to improve today ...

Roll your own Minority Report with the Wii Remote

Two videos on YouTube demonstrating seriously cool hacks with the Wii Remote.

Track your fingers:

Virtual Whiteboard:


The Y10K Problem -- Prepare while you have time

Fun little article on the supposed Y10K problem:
   The most common fix for the Y2K problem has been to switch to 4-digit
   years. This fix covers roughly the next 8,000 years (until the year
   9999), by which time everyone seems convinced that all current
   programs will have been retired. This is exactly the faulty logic
   and lazy programming practice that led to the current Y2K problem!
   Programmers and designers always assume that their code will
   eventually disappear, but history suggests that code and programs are
   often used well past their intended circumstances.
In a similar vein, but intended to be serious: A Long, Painful History of Time.
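As a playful but concrete illustration of how such limits get baked in, Python's own standard library refuses to represent dates beyond the year 9999 (this is real stdlib behavior; the snippet itself is my example):

```python
from datetime import MAXYEAR, date

# The datetime module's hard ceiling is the year 9999:
print(MAXYEAR)  # 9999

# Any attempt to go past it is rejected outright:
try:
    date(10000, 1, 1)
except ValueError:
    print("year 10000 is out of range for datetime.date")
```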

Tangible Functional Programming

In this beautiful Google tech talk, Conal Elliott makes a convincing case that API and UI should in fact be one and the same, whereas today the gap between them is one of the deepest schisms in IT. Among other things, he makes this rather outrageous statement (see 17:40):
The essence of programming has nothing to do with programs.
but of course not without thoroughly backing it up. In fact, he presents a prototypical toolkit which does just that. The talk gets a bit technical towards the end, but it should be accessible (and I recommend it) to everyone with a general interest in programming and user-interface design.



ANGEL APPLICATION 0.3.0

We are very pleased to announce the immediate availability of ANGEL APPLICATION version 0.3.0.

This update consists of stability fixes (see e.g. here), API cleanup work (see the current module import graph), as well as GUI work. See the CHANGELOG. Further information is available on the M∞ ANGEL-APPLICATION Developer Wiki.

One important thing to note: if you are upgrading from an older version, you will have to purge/empty your local repository once before being able to help safeguard MISSION ETERNITY data forever. This can be done with a single mouse-click in the File menu -> "Purge repository".

Fixing urlparse: Make the simple easy, keep the complex solvable

In my previous post, I presented netaddress, an RFC 3986 compliant (I believe) URI parser (and all the shenanigans that come with it, such as numerical IP addresses). Now, while it's good to know that that's available, it has made parsing simple URIs (the most common case) more complicated than it needs to be, because it now exposes most of the complexity inherent in URIs. But this is yet another place where parser combinators really shine. Say I want to parse URIs of the simplified form $(scheme)://$(host)$(path); then this is all you need to do:

from rfc3986 import scheme, reg_name, path_abempty
from pyparsing import Literal

# reuse the full grammar's building blocks, wired into a simpler whole
host = reg_name.setResultsName("host")
path = path_abempty.setResultsName("path")
URI = scheme + Literal("://") + host + path

And now you've got yourself a validating parser for your reduced grammar. Nice, no? I've added this as an extra module ("notQuiteURI") to netaddress, so you can use it like this:

>>> from netaddress import notQuiteURI 
>>> uri = notQuiteURI.URI.parseString("")
>>> uri.scheme
>>> uri.path
(['/', 'path', '/', 'to', '/', 'resource'], {})

Update: netaddress is now available through the python cheese shop. If you're interested, you should be able to install it by simply typing:

$ easy_install netaddress

Fixing urlparse: More on pyparsing and introducing netaddress

This is the last in a series of three posts (1, 2) discussing issues with Python's urlparse module. Here, I intend to provide a solution.

In the last post, I talked about parser combinators and parsec in particular, mentioning pyparsing towards the end. The angel-app being a Python application, parsec, while cool, is of no immediate use. pyparsing, on the other hand, provides parsec-like functionality for Python. Consider this excerpt from the RFC 3986-compliant URI parser that I'm about to present in this post (please ignore, as usual, the blog's spurious formatting):

dec_octet = Combine(Or([
        Literal("25") + ZeroToFive,            # 250 - 255
        Literal("2") + ZeroToFour + Digit,     # 200 - 249
        Literal("1") + repeat(Digit, 2),       # 100 - 199
        OneToNine + Digit,                     # 10 - 99
        Digit                                  # 0 - 9
]))
IPv4address = Group(repeat(dec_octet + Literal("."), 3) + dec_octet)

And now:

>>> from netaddress import IPv4address 
[snipped warning message]
>>> IPv4address.parseString("")
([(['127', '.', '0', '.', '0', '.', '1'], {})], {})
>>> IPv4address.parseString("350.0.0.1")
Traceback (most recent call last):
File "", line 1, in ?
egg/", line 1244, in parseImpl
raise exc
pyparsing.ParseException: Expected "." (at char 2), (line:1, col:3)

Anyhow, what I mean to say is this: we have a validating URI parser now. Apart from the bugs that are still to be expected for a piece of code at this early stage, it should be RFC 3986 compliant. You can get either the Python package, or a tarball of the darcs repository (unfortunately my Zope account chokes on the "_darcs" directory name, so I'm still looking for a good way to host the darcs).

This is how one would use it:

>>> from netaddress import URI
>>> uri = URI.parseString("http://localhost:6221/foo/bar")
>>> uri.port
>>> uri.scheme

Or, in the case of a more complex parse:

>>> uri = URI.parseString("http://vincent@localhost:6221/foo/bar")
>>> uri.asDict().keys()
['scheme', 'hier_part']
>>> uri.hier_part.path_abempty
(['/', 'foo', '/', 'bar'], {})
>>> uri.hier_part.authority.userinfo
>>> uri.hier_part.authority.port

Hope you find this useful.


Fixing urlparse: A case for Parsec and pyparsing

In a previous post, I described issues with parsing and validating URLs with the functionality provided by Python's stdlib. Let me restate it clearly: all messages exchanged by angel-app nodes must be validated for the application to work properly. What to do? First of all, I was of course not the first person to notice the module's shortcomings. However, I was surprised at the answers that popped up: it seems like no one was interested in actually coming up with a validating parser (perhaps even just for a subset of the complete URI syntax). Instead, people focussed on fixing specific cases where the parser would fail -- in essence adding new features rather than putting the whole system on a solid basis. Suggestions go so far as to propose a new URI parsing module. However, the proposed new module is again based on the premise that the input represents a valid URI; the behavior in the case of invalid input is again left undefined. WTF? Have these people never looked beyond string.split() and regexes?
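To see the complaint concretely, here is a short demonstration of the stdlib's non-validating behavior (the bogus address is my example; the module is spelled urllib.parse in Python 3):

```python
from urllib.parse import urlparse

# 350 is not a legal IPv4 octet, yet urlparse raises no error and
# cheerfully hands back the pieces of a URI that cannot exist:
parts = urlparse("http://350.0.0.1/foo")
print(parts.hostname, parts.path)  # 350.0.0.1 /foo
```

A validating parser would reject this input outright instead of leaving the check to every caller.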

Dudes, writing a VALIDATING PARSER is NOT THAT HARD if you have a reasonable grammar and good libs. Why do people keep pretending that it is? Sure, you might be afraid of having to fire up lex, yacc and ANTLR, and for good reason. But with sufficiently dynamic languages, that's usually unnecessary, provided you have a parser combinator library handy.

The key idea behind parser combinators is that you write your parser in a bottom up fashion, in just the same way that you would define your grammar. You write a parser for a small part of the grammar, then combine these partial parsers to form a complex whole. The canonical example in this context is Haskell's parsec library. Let's start out with a simple restricted URI grammar:

module RestrictedURI where

import Text.ParserCombinators.Parsec

data URI = URI {
host :: [String],
port :: Int,
path :: [String]
} deriving (Eq, Show, Read)

schemeP = string "http" <?> "scheme"
schemeSepP = string "://" <?> "scheme separator"

hostPartP = many lower <?> "part of a host name"
hostNameP = sepBy hostPartP (string ".") <?> "host name"

pathSegmentP = sepEndBy1 (many1 alphaNum) (string "/") <?> "multiple path segments"
pathP = do {
        root <- string "/" <?> "absolute path required";
        segments <- pathSegmentP;
        return (root:segments)
    } <?> "an absolute path, optionally terminated by a /"

restrictedURIP :: Parser URI
restrictedURIP =
    do {
        ignored <- schemeP;
        ignored <- schemeSepP;
        h <- hostNameP;
        p <- pathP;
        return (URI h 80 p)
    } <?> "a subset of the full URI grammar"

parseURI :: String -> (Either ParseError URI)
parseURI = parse restrictedURIP ""

(Where you should forgive me for the blog inserting break tags all over the place). But just to illustrate:

vincent$ ghci 
GHCi, version 6.8.1: :? for help
Loading package base ... linking ... done.
Prelude> :l restrictedURI
[1 of 1] Compiling RestrictedURI ( restrictedURI.hs, interpreted )
Ok, modules loaded: RestrictedURI.
*RestrictedURI> parseURI ""
Loading package parsec- ... linking ... done.
Right (URI {host = ["localhost","com"], port = 80, path = ["/","foo","bar"]})

Plus, we get composability, validation and error messages essentially for free:

*RestrictedURI> parseURI "" 
Left (line 1, column 17): unexpected "2" expecting lowercase letter,
"." or an absolute path, optionally terminated by a /

Now consider the following excerpt from Haskell's Network.URI.

--  RFC3986, section 3.1  
uscheme :: URIParser String
uscheme =
    do { s <- oneThenMany alphaChar (satisfy isSchemeChar)
       ; char ':'
       ; return $ s++":"
       }
(Again, please forgive the blog for eating my code; you can also get it from the Haskell web site.) And compare that to the ABNF found in the corresponding section of the RFC:

scheme      = ALPHA *( ALPHA / DIGIT / "+" / "-" / "." )

Note how the complete URI grammar specification in the RFC is barely a page long. So yes, implementing this grammar is a significant amount of work (of course you could always choose to support just a well-defined subset), but if you have a good parser combinator library, it's just a few hours of mechanically transforming the ABNF into your parser grammar. You can even watch the Simpsons while doing it (I did). In the case of Network.URI, this boils down to a line count of 1278, with about half of the lines being comments or empty lines. Not only that, but given the complete grammar specification, it's super easy to formulate a modified grammar.

As it turns out, Python has a library quite like parsec, it's called pyparsing and I'll bore you with it in my next (and last) post on this topic.