Wednesday, August 03, 2011

The reality distortion effect, 3 months on...

One of the things I've never talked about in this blog, mainly because I talked about it enough elsewhere, is the iPhone Tracking scandal and the bizarre reality distortion effect that went on during those couple of weeks back in April.

Mac Slocum finally managed to corner me at OSCON last week and got me to talk about things now everything has settled down.


Interview with Mac Slocum at OSCON

One of the things I mentioned during the interview as one of the really positive things to come out of the iPhone Tracking debacle is the visualisations from the crowdflow.net group. If you haven't already seen them, I'd encourage you to take a look at what they've managed to pull out of the data.

CREDIT: Crowdflow.net
The movements of 880 iPhones in Europe during April 2011

Talking about connecting iOS to the real world

I seemed to spend a lot of time in front of the camera while I was at OSCON last week. Amongst other things, I talked about connecting iOS devices to the real world using the new Redpark serial cable for iOS.


Demonstration of the cable


Interview with Mac Slocum talking about the implications

Tuesday, July 26, 2011

OSCON 2011

This week I'm in Portland for OSCON 2011, along with OSCON Data and OSCON Java. If you're not here in person, O'Reilly is streaming keynotes and interviews live from the conference floor.

The Live Stream Schedule

But if you are around, come see me talk on Thursday when I'll be discussing connecting iOS to the real world and the Internet of Things.

Monday, July 18, 2011

Connect your iPhone to the real world

The arrival of Google's Accessory Development Kit (ADK) for Android, which allows you to connect your Android handset to an Arduino-based development board, was seen by some as the beginning of the end for Apple's restrictive Made for iPod (MFi) program.

Today we discovered Apple's response to Google's ADK, and while it's still inside a crunchy MFi wrapper, the program is now a bit gooey in the middle: Redpark has released its Serial Cable for iOS.

The cable is a fully MFi-approved external accessory that allows home hobbyists to talk to external hardware, no jailbreak required. On one end of the cable is a dock connector that plugs directly into your iOS device. On the other is an RS-232 serial port that you can easily connect to anything that speaks a serial protocol.
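To give a flavour of what "speaks a serial protocol" means in practice, here's a minimal sketch of the sort of one-byte protocol you could run over a link like this. It shows the far end of the link driven from a laptop with pyserial rather than the iOS side (on the device you'd use Redpark's own SDK, which I won't try to reproduce from memory here); the port name, baud rate and command bytes are illustrative assumptions, not anything defined by the cable itself.

import serial

# Open the serial link; the device path and baud rate are assumptions.
port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)

def set_led(on):
    # Send a single command byte; the sketch at the far end of the link
    # switches its LED and echoes its current state back as one byte.
    port.write(b"1" if on else b"0")
    return port.read(1)

print(set_led(True))   # expect b"1" back if the far end echoes as assumed
port.close()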

Suddenly, connecting your iPhone to the real world has become a lot easier; easier, in fact, than using Google's ADK.

I've been working with the pre-release version of the cable for a couple of months now, and I've put up some sample code to get you started, including a rather nifty universal application for the iPhone and iPad that will let you directly control an Arduino board. I've dubbed it the "Paduino."

The "Paduino" application.

Because someone had to...?

 
A simple "Push the Button" example.

Also in the works, but not quite ready yet, is a book which will walk you through how to use the cable and how to integrate your iPhone or iPad into the Internet of Things.

Finally, if you're at OSCON next week, I'll be talking on Thursday about the cable and how to use it. We're hoping to have an early-release copy of the book ready by then.

I can't wait to see what people can do with this...

Thursday, June 16, 2011

Apple and the Web-Free Cloud

This article was originally published on the O'Reilly Radar.

The nature of Apple's new iCloud service, announced at WWDC, is perhaps more interesting than it seems. It hints very firmly at the company's longer-term strategy: a strategy that doesn't involve the web.

Apple will join Google and Amazon as a major player in cloud computing. The 200 million iTunes users Apple brings with it put the company on the same level as those other platforms. Despite that, the three companies obviously see the cloud in very different ways, and as a result have very different strategies.

Amazon is the odd man out. Their cloud offering is bare metal, contrasting sharply with Google's, and now Apple's, document-based model. To be fair, Amazon's target market is very different, with their focus on service providers. If you're a Valley start-up looking for storage and servers, you need look no further than Amazon's Web Services platform.

Google and Apple's document model contrasts sharply with Amazon's service-stack approach. Both Google and Apple have attempted to abstract away the things, like the file system, that stand between the end user and their data. An unsurprising difference, perhaps: Google and Apple are consumer-facing companies marketing to the final end user rather than to the people and companies who aim to provide services for those users.

But that's where the similarity between Google and Apple breaks down. Google sees the cloud as a way to deprecate general purpose computers in the hands of their users. In the same way that their new Chromium OS is built for the web, their cloud strategy is an attempt to move Google's users away from native applications so that their applications and data live in Google's cloud of services. Perhaps coincidentally, this also gives Google the chance to display and target their advertising even more cleverly.

Apple's approach is almost entirely the opposite. They see the cloud as a way to keep the general purpose computer on life support for a few more years until touch-based hardware is really ready to take over. Apple's new cloud platform is built for native applications, in an attempt to pull users into native apps designed for their platforms. This method also gives Apple the chance to sell hardware, applications, and content that will lock users into their platform even more firmly. This is the basis of the often remarked "halo effect."

At least on the surface things seem to be simple — the "why" of the thing is not in question. However it's what hasn't been said, at least openly, that raises the most interesting questions.

Apple is fundamentally platform orientated. It's deep in the company's genetics. The ill-fated official cloning program from the mid-'90s, which was brought to a screeching halt by the return of Steve Jobs, seems to have instilled a deep fear inside the company of letting someone else control anything that might stand between Apple and direct access to their customers.

At least to me, nothing confirms that mindset more than Apple's return to designing their own processors in-house in Cupertino. Apple has a long history of using its own custom silicon, but it's been more than five years since Apple last did so. With the move to Intel, the hope was to delegate nearly all of Apple's custom chip development. Unfortunately, that proved to be a stumbling block when Apple built the first-generation iPhone. The Samsung H1 processor in the original model wasn't quite what Apple wanted, even though it was what had been asked for, and I think the return to custom silicon probably prompted a sigh of relief in some corners of the company.

The link between custom chips and the cloud may seem tenuous at first glance, but I think Apple's return to designing their own silicon is telling. Almost as telling as spending half a billion dollars on a custom data center to support their new iCloud service. Both moves show the company is now committed more than ever to controlling the verticals. From the chips inside the devices to the data centers their customers' data ultimately resides on, Apple is committed to controlling the user experience, and the web has no place in that.

You might argue that this is because the web is "too open" and that this openness threatens Apple's platform. However, the continuing argument over openness, or lack thereof, isn't really relevant. Despite Google's protestations to the contrary, neither of these two companies is particularly open. The very document-based model they're both advocating in their cloud architectures precludes a truly open system. It's such an obvious straw man argument that it's not actually that interesting.

What is interesting is that there was little or no mention of the web, or HTML5, during Apple's WWDC keynote. I think you'll see far less emphasis on HTML5 from Apple in the future, unless someone asks to do something with Apple's platform the company disapproves of, and then the traditional answer of "Well, you can always do that in HTML5" will be rolled out again.

Apple has finally put their cards on the table. They have not yet bet the company on iCloud, but it's telling how deep the integration into both iOS and OS X appears to be. They have far too much invested in iCloud for it to fail, if only in reputation. Whether the first incarnation lives up to its promises out of the box is still to be seen, but success isn't out of the question. Despite MobileMe, Apple does know how to build large-scale, reliable backend services. You only have to look at the App Store itself for an example.

So in the future, don't be too surprised to see Apple integrate iCloud even more tightly with both iOS and OS X. For the same strategic reasons, don't be shocked to see more custom chips appear; I expect to see the arrival of ARM-based MacBooks and a transition away from Intel for Apple's laptops. That's because, for Apple, it's all about the platform.

Thursday, May 19, 2011

The next, next big thing

This article was originally published on the O'Reilly Radar.

In my old age, at least by the standards of the computing industry, I'm getting more irritated by smart young things who preach today's big thing, or tomorrow's next big thing, as the best and only solution to my computing problems.

Those who fail to learn from history are doomed to repeat it, and the smart young things need to pay more attention, because the trends underlying today's computing should be evident to anyone with a sufficiently good grasp of computing history.

Depending on the state of technology, the computer industry oscillates between thin- and thick-client architectures: either the bulk of our compute power and storage is hidden away in racks of (sometimes distant) servers, or it is spread across a mass of distributed systems closer to home. This year's reinvention of the mainframe is called cloud computing. While I'm a big supporter of cloud architectures, at least at the moment, I'll be interested to see those preaching it as the last and final solution to all our problems proved wrong, yet again, when computing power catches up to demand once more and you can fit today's data center inside a box not much bigger than a cell phone.

Think that just couldn't happen? Think again, because it already has. The iPad 2 beats most supercomputers from the early '90s in raw compute power, and it would have been on the worldwide TOP500 list of supercomputers well into 1994. There isn't any reason to suspect that, at least for now, that sort of trend won't continue.

Yesterday's next big thing

Yesterday's "next big thing" was the World Wide Web. I still vividly remember standing in a draughty computing lab, almost 20 years ago now, looking over the shoulder of someone who had just downloaded first public build of NCSA Mosaic via some torturous method. I shook my head and said "It'll never catch on, why would you want images?" That shows what I know. Although to be fair, I was a lot younger back then. I was failing to grasp history because I was neither well read enough, nor old enough, to have seen it all before. And since I still don't claim to be either well read or old enough this time around, perhaps you should take everything I'm saying with a pinch of salt. That's the thing with the next big thing: it's always open to interpretation.

The next big thing?

The machines we grew up with are yesterday's news. They're quickly being replaced by consumption devices, with most of the rest of day-to-day computing moving into the environment and becoming embedded into people's lives. This will almost certainly happen without people noticing.

While it's pretty obvious that mobile is the current "next" big thing, it's arguable whether mobile itself has already peaked. The sleek lines of the iPhone in your pocket are already almost as dated as the beige tower that used to sit next to the CRT on your desk.

Technology has not quite caught up to the overall vision and neither have we — we've been trying to reinvent the desktop computer in a smaller form factor. That's why the mobile platforms we see today are just stepping stones.

Most people just want gadgets that work, and that do the things they want them to do. People never really wanted computers. They wanted what computers could do for them. The general purpose machines we think of today as "computers" will naturally dissipate out into the environment as our technology gets better.

The next, next big thing

To those preaching cloud computing and web applications as the next big thing: they've already had their day, and the web as we know it is a dead man walking. Looking at the job board at O'Reilly's Strata conference earlier in the year, the next big thing is obvious. It's data. Heck, it's not even the next big thing anymore. It's pulling into the station, and to data scientists the web and its architecture are just a commodity, bought and sold in bulk.

Strata job board
The overflowing job board at February's Strata conference.

As for the next, next big thing? Ubiquitous computing is the thing after the next big thing, and almost inevitably the thirst for more data will drive it. But then eventually, inevitably, the data will become secondary: a commodity. Yesterday's hot job was a developer; today, with the arrival of big data, it has become a mathematician. Tomorrow it could well be a hardware hacker.

Count on it. History goes in cycles and only the names change.

Saturday, May 14, 2011

The secret is to bang the rocks together

This article was originally published on the O'Reilly Radar.

Every so often a piece of technology can become a lever that lets people move the world, just a little bit. The Arduino is one of those levers.

It started off as a project to give artists access to embedded microprocessors for interaction design projects, but I think it's going to end up in a museum as one of the building blocks of the modern world. It allows rapid, cheap prototyping for embedded systems. It turns what used to be fairly tough hardware problems into simpler software problems.

CREDIT: Arduino.cc
The Arduino UNO.

The Arduino, and the open hardware movement that has grown up with it, and at least to a certain extent around it, is enabling a generation of high-tech tinkerers both to break the seals on proprietary technology and to prototype new ideas with fairly minimal hardware knowledge. This maker renaissance has led to an interesting growth in innovation. People aren't just having ideas; they're doing something with them.

Goodbye desktop

The underlying trend is clear. The general purpose computer is a dead end. Most people just want gadgets that work, and that do the things they want them to do. They never really wanted computers. They wanted what computers could do for them.

While general purpose computers will live on, like the horse after the arrival of the automobile, these systems will be relegated to two small niches. Those of us who build the embedded systems people are using elsewhere will still have a need for general purpose computers, as will those who can't resist tinkering. But that's the extent of it. Nobody else will need them. Quite frankly, nobody else will want them.

We'll be saying a big hello to all intelligent lifeforms everywhere and to everyone else out there, the secret is to bang the rocks together, guys. - The Hitchhiker's Guide to the Galaxy, Douglas Adams

The humble Arduino is the start of that. The board has multiple form factors but a single programming interface. Sizes range from the "standard" palm-of-your-hand board for prototyping, down to thumb-sized boards for the almost-professional almost-products now starting to come out of the maker renaissance. Variants range from wearable versions like the LilyPad, sized and customised to be stitched into clothing, to specially built boards launched into space on board the new generation of nano-satellites built on a shoestring budget by hobbyists, to Google's new Android Open Accessory Kit.

CREDIT: NASA
The ANDE deployment from STS-127 in July 2009.

Every interesting hardware prototype to come along seems to boast that it is Arduino-compatible, or just plain built on top of an Arduino. It's everywhere.

Things are still open. They're just different things.

There has been a great deal of fear-mongering about the demise of the general purpose computer and the emergence of a new generation of consumption devices as more-or-less closed platforms. When the iPad made its debut, Cory Doctorow argued that closed platforms send the wrong signal:

Buying an iPad for your kids isn't a means of jump-starting the realization that the world is yours to take apart and reassemble; it's a way of telling your offspring that even changing the batteries is something you have to leave to the professionals.

I'm philosophical about the passing of the computer. What we're seeing here is a transition from one model of computing to another. We've seen that before: there were similar outcries over the death of the mainframe, as there have been over the death of the desktop. There is plenty of room for closed platforms, but the underlying trend is toward more openness, not less. It's just that the things that are open and the things that are closed are changing. The skills needed to work with the technology are changing as well.

What the Arduino and the open hardware movement have done is made hard things easy, and impossible things merely hard. Before now, getting to the prototype stage for a hardware project was hard, at least for most people, and going beyond a crude prototype was impossible for many. Now it's the next big thing.

Wednesday, April 06, 2011

.Astronomy 3

This week I'm at the .Astronomy 3 conference in Oxford, a weird mashup of astronomy, cutting-edge computer science, and social media.

The .Astronomy3 Trailer from Markus Poessel.

Shunned by their colleagues for not doing enough research, an elite team of computer geeks has gone into the Oxford underground. If you can find them, maybe you can hire... the .A Team.

Tuesday, March 22, 2011

Location Enabled Sensors for iOS

Yesterday evening I gave a talk on "Location Enabled Sensors" to the Bristol branch of the BCS; the slides from the talk are embedded below.

Updated Visualisations for Japan

Download: Fukushima-Data.zip (1.1MB), updated 2011-03-22T11:44Z

With thanks to Gemma Hobson and her ongoing efforts at data entry, we've now extended the duration of the visualisation to cover a full four days, from 17:00 on the 16th of March through till 16:00 on the 20th of March.


Environmental Radioactivity Measurement, 17:00 16th March - 17:00 20th March. For data from the Fukushima site itself see the "Readings at Monitoring Post out of 20 Km Zone of Fukushima" data sets online. Data collection from Miyagi (Sendai) ceases at 17:00 on the 17th of March and does not resume. Several other smaller-duration data drop-outs also occur during the monitored period.

It can be seen that radiation levels near Fukushima are somewhat higher than in the original visualisations: whilst the previous data showed a peak of around 0.15 µSv/h, measurements are now peaking at 0.25 µSv/h. The typical minimum and maximum values across Japan are plotted below on the same scale for comparison.



Additionally, the map embedded below again shows the environmental radioactivity measurements with respect to the typical maximum values for that locale.


Environmental Radioactivity Measurement,
Ratio with respect to typical Maximum Values

This shows that the enhancement around Fukushima is now spiking at around four times the typical maximum; the previous visualisations showed enhancements of only around twice the typical maximum.
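For the curious, the ratio map is nothing more sophisticated than each location's current reading divided by its historic typical maximum. Below is a rough sketch of the calculation; the column names and file layout are assumptions for illustration rather than the actual format of our transcribed files.

import csv

def ratios(readings_csv, baseline_csv):
    # Map each prefecture to its historic typical maximum (µSv/h)...
    with open(baseline_csv) as f:
        typical_max = {row["prefecture"]: float(row["typical_max"])
                       for row in csv.DictReader(f)}
    # ...then express every current reading as a multiple of that maximum.
    with open(readings_csv) as f:
        for row in csv.DictReader(f):
            name = row["prefecture"]
            yield name, float(row["reading"]) / typical_max[name]

Run against the numbers above, a 0.25 µSv/h reading divided by a typical maximum somewhere in the region of 0.06 µSv/h is what produces a ratio of around four.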

Finally, earlier today Sarah Novotny pointed me in the direction of an excellent visualisation by Paul Nicholls showing the series of magnitude 4 to 6 aftershocks that are still rocking Japan; over 670 have occurred to date since the original magnitude 9 earthquake.

Japan Quake Maps by Paul Nicholls

Update: I've just been pointed to this interesting visualisation of the original magnitude 9 earthquake in Japan. It was created using GPS readings from the Japanese GEONET network, which should not be confused with the similarly named GeoNet network in New Zealand.

The video shows the horizontal (left) and vertical (right) displacements recorded when the earthquake struck. The resulting ground movement ripple propagating through the country is very evident.

Saturday, March 19, 2011

Radioactivity Measurement in Japan

This article was updated to reflect the latest data at 16:29 20/Mar/2011.
With thanks to Pete Warden and Gemma Hobson.

Download: Fukushima-Data.zip (1.1MB)

Over the weekend I came across some data on levels of radiation in Japan collected by the Japanese government, and helpfully translated into English by volunteers.

Unfortunately, the data was also somewhat unhelpfully stuck in PDF format. However, between us, Gemma Hobson, Pete Warden and I transcribed, mostly by hand, some of the more helpfully formatted files into CSV format (16KB), making the data usable by Pete's OpenHeatMap service. The map embedded below shows our first results.

Environmental Radioactivity Measurement,
17:00 16th March - 17:00 18th March
For data from the Fukushima site itself see the "Readings at Monitoring Post out of 20 Km Zone of Fukushima" data sets online. Data collection from Miyagi (Sendai) ceases at 17:00 on the 17th of March and does not resume. Several other smaller-duration data drop-outs also occur during the monitored period.
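For anyone wanting to repeat the transcription, the output is nothing exotic: a flat CSV with a place name, a timestamp and a reading, which OpenHeatMap can then pick up. The sketch below shows the idea; the exact column names OpenHeatMap expects, and the single sample row, are illustrative assumptions rather than a copy of our files.

import csv

rows = [
    # (prefecture, timestamp, reading in µSv/h) transcribed by hand from the
    # PDFs; this single row is an illustrative example, not a real measurement.
    ("Fukushima", "2011-03-16T17:00", 0.15),
]

with open("fukushima-readings.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["location", "time", "value"])
    for prefecture, timestamp, reading in rows:
        writer.writerow([f"{prefecture}, Japan", timestamp, reading])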

As you can see from the visualisation, environmental radiation levels change fairly minimally over the course of the day. Most measurements are steady, and within the historic ranges, except around the troubled Fukushima plant, where readings are about double normal levels.

Things become more interesting, however, when we look at the historic baseline data. The two maps below show the typical range of background environmental radiation in Japan: the first shows typical minimum values, while the second shows typical maximum values. Put together, they illustrate the observed range for environmental radiation across Japan.

Environmental Radioactivity Measurement,
Typical Minimum

Environmental Radioactivity Measurement,
Typical Maximum

Finally, the map embedded below shows the environmental radioactivity measurements with respect to the typical maximum values for that locale. From this visualisation it is evident that the measured values throughout Japan are normal, except in the immediate area surrounding the Fukushima reactors, where readings are about double the normal maximum.

Environmental Radioactivity Measurement,
Ratio with respect to typical Maximum Values

However, when analysing this data you should bear in mind that the normal environmental range in that area is actually fairly low compared to other areas of Japan. Currently the levels of radiation at the plant boundary are actually still lower than the typical background levels in some other parts of the country. Levels also seem fairly static over time, and do not seem to be increasing as the situation progresses.

Unless the situation significantly worsens, which admittedly is always possible, human habitation in close proximity to the plant will not be affected in the medium term. From talking to people on the ground in Japan, and by looking at the actual measurements across the country, a very different picture seems to be emerging from that reported by the Western media, which seems highly skewed, and heavily politicised, by comparison.

I think everyone should take a deep breath, step back, and look at the evidence, which suggests that this is not another Chernobyl in the making. It may not even be another Three Mile Island. If the remaining functioning reactor units are decommissioned following this incident, it may well have more to do with politics than with science.

Update: A Radiation Dose Chart from the people who brought you xkcd.com. The extra dosage you would pick up in a day while in a town near the Fukushima plant is around 3.5 µSv.


Radiation Dose Chart

For comparison, a typical daily background dose is around 10 µSv, whilst having a dental X-ray would expose you to an additional 5 µSv. The exposure from a single transcontinental flight from New York to L.A. is of the order of 40 µSv.
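To put those numbers side by side, here's the same comparison as a few lines of arithmetic, using only the figures quoted above.

extra_near_fukushima = 3.5           # µSv per day, from the dose chart above

everyday_doses = {
    "typical daily background": 10.0,    # µSv
    "dental X-ray": 5.0,                 # µSv
    "New York to L.A. flight": 40.0,     # µSv
}

for label, dose in everyday_doses.items():
    print(f"{label}: {dose / extra_near_fukushima:.1f}x the extra daily dose "
          f"near the plant")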

Update: This visualisation compares the energy mix and the number of deaths related to each of the main sources of energy worldwide: coal, oil, natural gas, nuclear, hydro and biomass.

Thursday, March 03, 2011

The abandonment of technology

This article was originally posted on the O'Reilly Radar

Right now the Space Shuttle Discovery is in orbit for the last time, docked with the International Space Station (ISS). On its return to Earth the orbiter will be decommissioned and displayed in the Smithsonian's National Air and Space Museum. Just two more shuttle flights, Endeavour in mid-April and Atlantis in late June, are scheduled before the shuttle program is brought to an end.

Tracy Caldwell Dyson in the Cupola module of the International Space Station
Tracy Caldwell Dyson in the Cupola module of the International Space Station observing the Earth below during Expedition 24 in 2010.
(Credit: Expedition 24, NASA)

Toward the end of last year I came across an interesting post by Cameron Locke about the abandonment of technology. A couple of months later I read an article by Kyle Munkittrick, who argues that the future is behind us, or at least that our current visions of the future are outdated compared to current technology:

The year is 2010. America has been at war for the first decade of the 21st century and is recovering from the largest recession since the Great Depression. Air travel security uses full-body X-rays to detect weapons and bombs. The president, who is African-American, uses a wireless phone, which he keeps in his pocket, to communicate with his aides and cabinet members from anywhere in the world ... Video games can be controlled with nothing but gestures, voice commands and body movement. In the news, a rogue Australian cyberterrorist is wanted by world's largest governments and corporations for leaking secret information over the world wide web; spaceflight has been privatized by two major companies, Virgin Galactic and SpaceX.

I've been thinking about these two articles ever since, and Discovery's last flight brought those thoughts to the front of my mind. On the face of things the two posts espouse very different viewpoints; however, the underlying line of argument in both is very similar.

The future is already here and we may be standing at a crucial decision point in our history. Forces are pulling us in both directions. On one hand the rate of technological progress is clearly accelerating, on the other, the resources we have on hand to push that progress are diminishing at an ever-increasing rate. In a world of declining resources, and increasingly unreliable energy supply, you have to wonder whether our current deep economic recession is a sign of things to come. Will the next few decades be a time of economic contraction and an overall lower standard of living?

Big problems, unaddressed

At the 2008 Web 2.0 Expo, Tim O'Reilly argued that there are still big problems to solve and he asked people to go after the big, hard, problems.

And what are the best and the brightest working on? You have to ask yourself, are we working on the right things?

I think Tim was right, and I don't think much has changed in the last couple of years. I'm worried that we're chasing the wrong goals, and that we're not yet going after the big, hard problems Tim was talking about. Solving them might make all the difference.

While not all big problems are related to the space program by any means, the successful first launch of SpaceX's Falcon 9 rocket last December has given me some hope that some of our best and brightest aren't just throwing sheep at one another or selling plush toys.

Despite this, I see signs of a growth in pseudo-science, and an inability of even the educated middle classes to tell the difference between it and more trustworthy scientific undertakings. There is also a worrying smugness, almost pride, among many people that they "just don't understand computers." While some of us are pushing the boundaries, it appears we may be leaving others behind.

We abandoned the moon, then faster flight. What's next?

The fastest westbound transatlantic flight from London Heathrow to New York JFK was on the 7th of February 1996, by a BA Concorde; it made the journey in just under 3 hours. Depending on conditions, the flight typically takes between 7 and 8 hours on a normal airplane. As a regular on the LHR to SFO and LAX routes, I spend a lot of time unemployed over Greenland. I do sometimes wish it was otherwise.

A few weeks ago I watched an Airbus A380 taxi toward take-off at Heathrow, and I felt a deep sense of shame that as a species we'd traded a thing of aeronautical beauty for this lumbering giant. Despite the obvious technical achievement, it feels like a step backwards.

I'm young enough, if only just, that I don't remember the Moon landings. However, when I was a child my father told me how, as a younger man, he had avidly watched the broadcast of the Moon landings, and I have a friend whose father was in Mission Control with a much closer view. In the same way I can tell my son that we once were able to cross the Atlantic in just 3 hours, and that once it was possible to arrive in New York before you left London. I do wonder, if things go the wrong way and we enter an age of declining possibilities and narrowing horizons, whether he'll believe me.

Tuesday, March 01, 2011

The return of the Personal Area Network

This article was originally published on the O'Reilly Radar.

The recent uprisings in Egypt and across the Middle East have caused some interesting echoes amongst the great and the good back in Silicon Valley. Despite the overly dramatic language, I find Shervin Pishevar's ideas surrounding ad-hoc wireless mesh networks, embodied in the crowd-sourced OpenMesh Project, intriguing.

Back in the middle years of the last decade I spent a lot of time attempting to predict the future. At the time I was convinced that personal area networks (PANs) were going to become fairly important. I got a few things right: that things were going to be all about ubiquity and geo-location, and that mobile data was the killer application for the then fairly new 3G cell networks. But I was off the mark in thinking that convergence devices, as we were calling them back then, were a dead end. Technology moved on and the convergence devices got better, thinner, lighter. Batteries lasted longer. Today we have iOS- and Android-based mobile handsets, and both platforms pretty much do everything my theorised PAN was supposed to do, and in a smaller box than the handset I was using in the mid-noughties.

It's debatable whether the OpenMesh Project is exactly the right solution for creating secondary wireless networks to provide communications for protestors under repressive regimes. The off-the-shelf wireless networks Pishevar talks about extending operate on a known number of easily jammable frequencies. Worse yet, there has been little discussion, at least so far, about anonymity, identity, encryption and other security issues surrounding the plan. There is a very real threat that, by the very nature of an ad-hoc network, the same repressive regimes the protestors are trying to avoid could be privy to their communications.

For these reasons I'm not absolutely convinced that mesh networking is the right technical approach to this very political, and human, problem. Perhaps he should be looking at previous work on delay-tolerant networking rather than a real-time mesh protocol.

However, Pishevar's suggested architecture does seem robust enough to cope with disaster response scenarios, where security concerns are less important, or if widely deployed to provide an alternative to more traditional data carriers. Pishevar isn't of course alone in thinking about these issues: the Serval Project uses cell phone swarms to enable mobile phones to make and receive calls without the conventional cell towers or satellites.

The Bluetooth standard proved too complicated, and at times too unreliable, a base upon which to build wireless PAN architectures. The new generation of ZigBee mesh devices is more transparent, at least at the user level, possibly offering a more robust mesh backbone for a heterogeneous network of Wi-Fi, cell and other data connections.

With the web of things and the physical web now starting to emerge, and with wearables now starting to become less intrusive and slightly more mainstream, the idea of the personal area network might yet re-emerge, at least in a slightly different form. The resulting data clouds could very easily form the basis for a more ubiquitous mesh network.

I'd argue that I was only a bit early with my prediction. The technology wasn't quite there yet, but that doesn't mean it isn't coming. The 2000s proved, despite my confident predictions at the time, not to be the decade of the wireless PAN. Perhaps its time is yet to come?

Wednesday, February 16, 2011

Ignite London #4

On my way back from Strata Conference I stopped off in London and gave a 5-minute Ignite talk at Ignite London on "Who Owns Your Data?"

Who Owns Your Data? - by Alasdair Allan, Ignite London 4, 8 February 2011


Update: Edd Dumbill just posted an interesting article on data as a currency that makes some similar points to my Ignite talk.

Thursday, February 03, 2011

Interview on Machine Learning

This week I've been out at Strata, and earlier today I was interviewed by Mac Slocum for the O'Reilly Media YouTube channel about my talk on Machine Learning in the Real World and other related topics.

Machine Learning in the Real World

Today I gave a talk on machine learning in the real world at O'Reilly's Strata Conference. Can machines help us make better decisions? During the panel we looked at how machine learning is applied in industry and academia to optimise the use of resources and help with decision support.


Alasdair Allan talking about "Machine Learning in the Real World"
at O'Reilly Media's Strata Conference in 2011.