As someone who uses both Android and iOS regularly, I'm getting increasingly frustrated by fragmentation. However, it's not on my Android devices that I see this, but on the iOS ones. I install a popular, well-funded application like Instagram, Flickr, or Circa on my iPad, but when I launch it, three quarters of the screen is black bars, with a teensy little app in the middle. Or I can choose to scale it up without smoothing, so jagged pixels I haven't seen since the 1990s reinforce the sense that I am doing something wrong by attempting to run this app here. Every affordance is pushing back at me, saying I'm doing it wrong.
By contrast, on Android applications scale up to fill the space sensibly - buttons stay button sized, text and image areas expand well. Developers can add alternative layouts to better handle varying sizes, but if they don't things remain legible and touchable.
More pernicious is the artificial dichotomy that the iOS world leads our design thinking into. You're either on the held-in-one-hand phone, briefly standing somewhere, or you're sitting down in the evening using your iPad (so big and heavy that you have to rest it on your knees - Steve Jobs even brought out a sofa to sit on at the launch). This false 'mobile versus desktop' dichotomy even misled Mark Zuckerberg, who said "iPad's not mobile, it's a computer," and at the Facebook Home launch a tablet version was said to be "months away", though outside programmers hacked together a working version within days.
Meanwhile, nobody told Barnes and Noble, whose 7" Nook did so well that Amazon launched a Kindle range the same size, leading to the lovely Nexus 7 from Google and finally to the iPad Mini from Apple. This is the form factor, tested for years by paperback books, that makes one-handed long form reading comfortable. If you spend any time on public transit, being able to read standing up or in narrow bus seats is an obvious benefit. But the hermetically sealed company-coach commuters at Apple missed this for years.
Steve Jobs even said you'd have to file down your fingers to use it. The thing is, on iOS it does feel like that. The iPad-sized apps have buttons that are too small; the iPhone ones are too big when zoomed. There is no way for an app to know how big your finger is compared to the screen, let alone a website.
The supposed advantage of iOS is fewer sizes to design for, but now you need 12 different layouts to cope with the horizontal and vertical orientations of each device, and the general layout tools don't handle this as well as Android's, requiring complex updates. Chiu-Ki Chan explains the pain, whereas Android Studio has just made this even easier for developers.
The other fragmentation problem on iOS is the missing links. Not in the evolutionary sense, but the ability to readily connect between applications using URLs, as we're used to on the web. Applications can't claim parts of a URL, only a whole scheme. As Apple says:
If more than one third-party app registers to handle the same URL scheme, there is currently no process for determining which app will be given that scheme.
On Android, any app can claim any scheme, host and path, and the OS will resolve as appropriate, giving you a choice if more than one possibility is available.
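As a sketch of what that claiming looks like, here is a minimal Android manifest intent filter; the activity name and URL here are illustrative placeholders, not from any real app:

```xml
<!-- AndroidManifest.xml: claim https://example.com/photos/... links
     (hypothetical names for illustration) -->
<activity android:name=".PhotoActivity">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <!-- scheme, host and path prefix, not just a whole scheme -->
        <data android:scheme="https"
              android:host="example.com"
              android:pathPrefix="/photos" />
    </intent-filter>
</activity>
```

If two apps declare overlapping filters, the OS shows the user a chooser rather than leaving the outcome undefined.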
On iOS, each app ends up embedding webviews inside itself rather than linking to a browser, fragmenting your web history. I have to remember whether I got to a page via Twitter, Facebook or email in order to go back to it later, and I only get to share it to the iOS-approved places: Twitter, Facebook, email or iMessage. On Android, any installed app that can handle that type of media is an option to share to, or to take a photo, or make a phone call, via any of the hundreds of "intents" that act as bridges between apps and, through browsers, the web.
This means that Android apps don't end up doing everything, but hand off to each other as needed, meaning that you can replace any function cleanly across apps. Better keyboards are the obvious example here, but camera apps, browsers, and SMS apps can drop themselves in, with the choice made by the user.
On iOS, you have to explicitly link to the scheme, which is per-app. Ironically, this means that Google (or Facebook) can release a set of iOS apps that connect to each other preferentially, leaving the Apple applications out of the loop. But it also makes it harder for developers to specialise in doing one thing well and linking to the rest.
The other pernicious influence of iOS fragmentation has been the rise of the mobile-only site - the m. layout that was tuned for iPhone use, then slightly adapted for other mobile users, often giving rise to farcical single column layouts with huge whitespace on tablets. In the early iOS days this was a bonus, as it encouraged function-first design, without the organisation-chart-inspired cruft around the edges that websites accumulate over time. As the effective resolution of mobile devices has increased, now often exceeding what is available on desktops, the assumptions that drove mobile-specific sites are breaking down.
Now that Android is the dominant operating system, Google is getting serious about it as a web platform too, which is very welcome. The Android Browser was installed as part of the OS, and didn't get upgraded over time. This has changed, with Chrome now the default Android browser, and it is on a regular update pipeline like Desktop Chrome and Firefox. iOS's Safari updates are frequent too now, and Microsoft is now pleading with developers to give them modern web apps too.
Truly responsive design has succeeded mobile-first as the best choice for websites, and we're seeing this spread as browsers converge on HTML5 features. What this means is that the web platform is now evolving rapidly, without any one browser provider being a bottleneck. The installed base for SVG passed Flash a while back, and even Adobe is now backing the web fully, bringing their design know-how to HTML5 features such as regions and exclusions. Also in the pipeline for HTML5 is browser to browser audio, video and text chat via WebRTC.
This web platform revolution was catalysed by Apple with WebKit and Mozilla with Firefox, and picked up by Google, Microsoft, Adobe and others. We now have a Browser Battle to be more standards compliant and consistent, rather than a Browser War to be different and proprietary. What I'll be hoping for from Apple at next week's WWDC is a clear recognition of these design lacunae and new and better ways for developers to succeed both with native apps and on the web.
This was also published on TechCrunch.
An announcement on Friday via Brendan Eich:
ORBX.js, a downloadable HD codec written in JS and WebGL. The advantages are many. On the good-for-the-open-web side: no encumbered-format burden on web browsers, they are just IP-blind runtimes. Technical wins start with the ability to evolve and improve the codec over time, instead of taking ten years to specify and burn it into silicon.

I think the 'remote-screen viewing of videogames' use case is bogus (if anyone notices latency it's gamers), but this is a really important development for the reasons Brendan mentions and more.
Nine years ago, I wrote:
I'd say video compression is maybe 2-4 times as efficient (in quality per bit) than it was in 1990 or so when MPEG was standardised, despite computing power and storage having improved a thousandfold since then.
Not much has changed. The video compression techniques we're using everywhere are direct descendants of 1980s signal processing. They treat video as a collection of small 2D blocks that move horizontally and vertically over time, and encode all video this way. If you want to make a codec work hard, you just need to rotate the camera. Partly this is because of the huge patent thicket around video encoding; mostly it's because compression gets less necessary over time as network capacity and storage increase. However, it was obvious 10 years ago that this approach was outdated.
Meanwhile, there has been a revolution in video processing. It's been going on in video games, and in movies and TV. The beautiful photorealistic scenes you now see in video games are there because they are textured 3D models rendered on the fly for you. Even the cut scenes work this way, though their encoding is often what compression researchers dismissively call a 'Graduate Student Algorithm' - hand-tweaking the models and textures to play back well within the constraints of the device. Most movies and TV have also been through 3D modelling and rendering, from Pixar through visual effects to the mundane superimposition of yard lines on sports. The proportion of YouTube that is animation, machinima or videogame run-throughs with commentary keeps growing too.
Yet codecs remain blind to this. All this great 3D work is reduced to small 2D squares. A trimesh and texture codec seems an obvious innovation - even phones have GPUs in them now, and desktops have had them for 20 years. Web browsers have been capable of complex animations for ages too. Now that they can decode bytestreams to WebGL in real time, we may finally get codecs that are up to the standards we expect from videogames, TV and movies, with the additional advantage of resolution independence. It's time for this change.
Reading the tech news this week, there's a lot of talk about forking. Google Blink forking WebKit. Apple not forking Chromium because that would be hostile. Facebook 'forking' Android. Even Tim O'Reilly forking the memetic nature of Free Software into Open Source.
However, not all of these things are really forks, and forking is no longer necessarily a hostile act. Let's go through them. Google Blink is a fork of WebKit, or rather of WebCore. Alex explains this is to reduce the amount of time they need to spend merging back to WebKit, but it doesn't preclude anyone continuing to do this if desired. Maciej explained that the reason for the difference in multiprocessing implementations that precipitated this was Apple not wanting to do so.
Facebook did not need to fork Android, because it is designed to support substitutable components. You can swap out any OS components, and you can communicate between apps using intents. Indeed, Facebook could make a deal with handset manufacturers or carriers that don't offer the 'with Google' experience to replace it with a Facebook one. Expect to see this in overseas markets, especially the ones where Facebook Zero works with carriers.
The more subtle thing is that forking is no longer pejorative. It used to be a last resort, what you did when your open source community had broken down. It meant that people had to pick sides, and choose which fork to adopt, because open source had a hierarchical nature. Now, forking is what you do to show interest. If you go to GitHub, where much open source lives now, forking a project is a single click. Successful projects will have many forks, and will accept pull requests from some of them.
This is the real difference between the Free Software and Open Source worldviews that were debated this week - the web enables more parallel, less centralised forms of co-operation and ownership. The monolithic projects and integrations are giving way to ones with better defined boundaries between them, and the ability to combine components as needed. Which means tech companies don't get to tell each other to "Knife the baby" any more.
I linked to my boys' old blog today when Christopher asked for an April Fools prank idea, and noticed that the images were missing. This is due to the demise of iDisk, Apple's handy built-in version of DropBox that I paid them about $100/year for until they shut it down and broke all my links to it.
To fix this, I copied the old iDisk Sites folder to Google Drive, and manually changed the links in the blog posts. Then I had a thought - could I host my twitter archive this way? As you can see it works.
Here's how to do it:

1. Request your archive from Twitter and download the zip file.
2. Unzip it, and drag the whole archive folder onto Google Drive.
3. Share the folder publicly, and link to the index.html inside it.
Now you have your old tweets hosted on Google. Tweet a link to them.
Aha, I can just drag my Twitter archive onto Google Drive and publish it j.mp/kmtweets— Kevin Marks (@kevinmarks) April 2, 2013
We’re used to taking the web for granted. We expect it to be there as substrate, with its addresses, declaratory documents, universally available programming language and the links between pages.
Ah, the links. There’s the rub. How many times have you followed a link and got a 404 or a different page than you were expecting? Links rot. As Tim Berners-Lee says, eventually every domain becomes a porn site.
So we want to do better. We want to build a non-web web. A special place for ourselves and our friends that is self-contained, and where all the pages and links are in the same database, and they can’t rot.
Instead of these messy links with protocols and domains in, we just use @names or +names and #topics and tags. It's easier for people to do, and self-consistent, and grows explosively. Biz dev gets excited about the reciprocal deals we can do with other content owners.
If you’ve read Nassim Taleb’s Antifragile, you know what comes next. By shielding people from the complexities of the web, by removing the fragility of links, we’re actually making things worse. We’re creating a fragility debt. Suddenly, something changes - money runs out, a pivot is declared, an acquihire happens, and the pent-up fragility is resolved in a Black Swan moment.
The special place disappears entirely. Or, if we’re lucky, the Archive Team lights the cat signal and emergency archivists preserve it in formaldehyde somewhere else, the clock stopped, the links severed.
Meanwhile, out there on the web, people can still connect and discuss and say what went wrong, and do better next time. The web itself is antifragile. It interprets our business models as damage and routes around them. If we’ve learned, we’ll respect this next time we make something.
Back in the 1980s, storage was expensive and slow. You had a copy of your document in memory and you would be asked every time you wanted to save it out to disk, because you didn't want to fill the disk up. That paradigm is so out of date now it's embarrassing to try to explain to my sons what the little floppy disk icon is in Microsoft Word. "What's that?" "It's a floppy disk." "Oh yeah, I think I saw one in the garage once."
The world view of having to load and save is being gradually eroded. Apple has changed the operating system - Mountain Lion no longer has Save and Save As, but instead a model of going back through edit histories. Google Docs originally didn’t have a floppy disk icon, but added one because people were looking for it in user testing. Now it has been removed again.
We've had source code control as programmers for a long time. But the GitHub world takes that further: the first thing you do is fork a project into your own repository, then start changing it. You can eventually merge stuff back later, but there is the assumption that things are happening in parallel. As James Governor put it:
Open Source used to count download numbers as a measure of developer success.
Today, we increasingly use forks as the metric of traction.
Wikipedia has put this into the public consciousness by having publicly visible edit histories, so you can go back and forwards in time over the history of the article. The paradigm of "storage is not a problem, we should keep every version of everything ever" is moving through culture to be a default assumption.
This will be something we want on mobile too. The issue of "which have I got on the phone and which have I got in the cloud?" is what makes it tricky. I think there will be a battle between Apple and Google about how you present that to the user in a coherent way - Google Drive and iCloud are taking different paths here, with Dropbox actually working between both. Google Drive not storing copies of Google Documents locally is a mistake I expect them to fix.
Everything is moving in this direction, even low level system design. The growth of functional programming is all about not having contention over a single copy of things in memory but having paths through data that are modifying things in their own version of the world.
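A minimal sketch of that style in JavaScript (the document object here is invented for illustration): an update returns a fresh version instead of mutating the shared one, so old versions stay intact.

```javascript
// Each "update" returns a new object; the old version stays valid,
// so two callers never contend over one shared copy - and every
// version of the document can be kept.
function setTitle(doc, title) {
  return Object.assign({}, doc, { title: title });
}

var v1 = { title: "Draft", body: "..." };
var v2 = setTitle(v1, "Final");

console.log(v1.title); // "Draft" - the original is untouched
console.log(v2.title); // "Final"
```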
Not everything can be written that way. The core of node is in C because it has to handle raw machine contention and routing, but the spread of the functional world view is a natural fit for web apps.
The other potential that node.js makes manifest is convergence of client and server code. If you're running and writing the same code in the same language, with the same libraries running on both the client and the server you can decide to migrate bits back and forth much more easily.
You don’t have to spend so much time deciding which is which, or worrying about the boundaries of the world and the different shapes of the data structures. So if you're creating a JSON object on the server and passing it to the client, you can decide when to send it, when to render it, and when not to. That sort of fluidity is going to become more important over time. Pat Patterson showed how this can work for mobile apps too.
The programmers' world view has changed on this, and it permeates out to the world as programmers make those ideas available to the public in usable form. The invention of ‘Undo’ was a huge part of what made the Mac great - enabling users to experiment safely. Being able to retrospectively undo mistakes later, or learn from others’ public variations on a theme, is going mainstream too.
This post is based on discussions I had on This Week In Google ep 143, with Gina Trapani, Jeff Jarvis and Tom Merritt, which were transcribed by Michael Shook from this video of the show. Updated: at the same time, JP Rangaswami wrote Warning: Contains Warnings, which gives more context about how 'Undo' helps protect innovation.
Dan Porter, CEO of OMGPop, has had a good week. His game, Draw Something (it is an asynchronous Pictionary for cellphones, like Words With Friends is an asynchronous Scrabble) has taken off like mad, and Zynga bought his company for over $200 million. However, one employee didn't go along to Zynga, and Dan's been whining on twitter:
What's so interesting about success is the number of failures who try to ride on your back. Shay Pierce is just one of many...— Dan Porter (@tfadp) March 31, 2012
and kept going:
The one omgpop employee who turned down joining Zynga was the weakest one on the whole team. Selfish people make bad games. Good riddance!— Dan Porter (@tfadp) March 31, 2012
This has drawn some reactions from others, eg Notch, creator of Minecraft:
@tfadp You're an insane idiot.— Markus Persson (@notch) April 1, 2012
and Dick Costolo, CEO of Twitter:
@tfadp I'm sure you realize by now what a nitwit comment that was, but wow, what a nitwit comment that was.— dick costolo (@dickc) April 1, 2012
and the lovely Tom Coates:
I hope I never say anything this awful: @tfadp "What's so interesting about success is the number of failures who try to ride on your back."— Tom Coates (@tomcoates) April 1, 2012
Now just before this crass public display of arrogance, he said something just as telling:
4 years of HS math I got every prob right. Teachers said was wrong way to the answer. Math, business, games, I'll stick with my wrong way.— Dan Porter (@tfadp) March 30, 2012
The thing is, Draw Something has a maths problem. The so-called Birthday Paradox is kicking in. This is named for the unexpected result that if you have 23 people in a room, there's a 50:50 chance two of them have the same birthday.
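You can check that figure, and see how quickly it bites a word game, with a few lines of JavaScript. The 10,000-word list size below is my assumption for illustration, not Draw Something's actual number:

```javascript
// Probability that at least two picks collide when choosing k items
// uniformly at random (with replacement) from a pool of n.
function collisionProbability(n, k) {
  var pNoRepeat = 1;
  for (var i = 0; i < k; i++) {
    pNoRepeat *= (n - i) / n; // the i-th pick must avoid all i previous ones
  }
  return 1 - pNoRepeat;
}

console.log(collisionProbability(365, 23).toFixed(2));    // "0.51" - the birthday paradox
console.log(collisionProbability(10000, 120).toFixed(2)); // "0.51" - repeats arrive fast
```

Even with a 10,000-word list, around 120 random picks already give a 50:50 chance of a repeat, which is why the discard-pile approach below matters.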
There's a similar effect with games. If you keep randomly picking a word from a list, you'll see repeats quickly. Classic board games understand this - this is why Balderdash, Pictionary, Trivial Pursuit etc insist you use a discard pile after shuffling and picking a question, so you only pick a new card from those you haven't seen. Draw Something isn't doing this, so we're all seeing words repeat, which is discouraging play. This is all over twitter too:
Too many repeat words in DrawSomething. :|— Shreya Ghoshal (@shreyaghoshal) March 30, 2012
I've played Draw Something too much so every word now is a repeat. #firstworldpain— ∆li (@AleeZaidee) April 1, 2012
Just now coming off a week-long "Draw Something" binge. The fact that I keep getting the same words over & over is helping.— Aimee Mann (@aimeemann) March 31, 2012
One way or another, I think Draw Something has peaked.
Jonathan Zittrain posted today that he is not the source of the quote widely attributed to him:
I participated in the Berkman Center’s fascinating HyperPublic symposium in the summer of 2011. When moderating a panel I invoked the aphorism that “When something online is free, you’re not the customer, you’re the product.” It’s a way of encapsulating the idea that online free services usually make money by extracting lots of data from users — and then selling that data, or using it for targeted availability of those users for advertising, to advertisers. In that sense, the advertisers are the clients, and the users enjoying free content are what’s being sold. (Of course, sometimes that happens even when the user pays.)
I didn’t coin the phrase, and since it was featured (and attributed to me!) in wordsmith.org’s wildly popular “word a day” as a thought for the day accompanying the word “enceinte” — I sought to nail down its provenance.
The first use of the quote that we can find is as a comment within the famed MetaFilter community in August 2010. The user’s name is blue_beetle, who might be someone named Andrew Lewis. It’s entirely possible I saw it there, as MeFi is one of my five favorite sites on the Web.
I was pretty sure this idea dates back further, so I went digging. First I found Josh Klein's 2009 blogpost, which cites Philip Broughton's 2008 book Ahead of the Curve: Two Years at Harvard Business School:
My favorite moment comes in an anecdote about an MBA candidate who, not getting his way, complains to an administrator, “I’m the customer! Why are you treating me so badly?”
To which the administrator responds, “You’re not the customer. You’re the product.”
But the sense is not quite the same there - an MBA is not a free web service after all. Going back a little further, this 2006 discussion at Joel on Software Is the Magical Fairy-tale For Google Engineers about to End? (nicely prefiguring James Whittaker's Why I left Google) includes this contribution from Drew K:
Like clam pointed out, Google's customers are the advertisers. "Skooter" is a user. Just like with ad-supported broadcast TV, you're not the customer, you're the product.
From a Google perspective, you're not the customer. The ad service buyer is the customer. You're the commodity. By making you a more attractive commodity, i.e. by making sure to only serve you an ad if you are in the target population for it, they are making the ads pay better for their customers, and they can reap a large part of the difference to their competitors, the other ad services.
This isn't a new idea then, as the analogy to television makes clear. The earliest, most thorough exegesis of this idea I have found is Claire Wolfe's 1999 article Little Brother Is Watching You: The Menace of Corporate America, which opens with:
Perhaps because you're not the customer any more. You're simply a "resource" to be managed for profit. The customer is someone else now — and usually someone without your best interests at heart.

and it has a continuing refrain of “Who is the customer? Not you”, ending with
Who is the customer? Not you, whose life is reduced to someone else's salable, searchable, investigatable data. The customer is everyone who wishes to own a piece of your life.
The underlying warning is definitely worth thinking about — Maciej Ceglowski eloquently made the case for why you should pay Software Artisans on a recent TummelVision — but the deeper changes to what it means to be a customer matter too. There are other things we take part in without paying or being sold, because we find shared value in them, and the net enables those too.
People have a problem finding your URL. You post a QR Code. Now they have 2 problems. Or more:
Conversely, with a URL they could type it in, take a photograph of it and type it in later, or if they have the right app, it will recognise the URL text from the image and make it clickable.
That is the irony of this. QR Codes ignore years of research and culture on how to communicate meaning in symbolic form designed to be captured by image processing tools behind a lens. We have this technology. It is called writing.
Written language has a set of symbols that are relatively unambiguous, that are formed of curves rather than hard edges making them resilient to noise, and that have been market-tested for millennia. QR Codes don't just ignore this, they ignore the relative success of one-dimensional barcodes. Notice something about a barcode? It has the number printed on it as well, so you can type it in if the scan fails. QR Codes don't do this, so it's far too easy to put the wrong one in, or fail to replace a mockup. Which is why so many QR codes link to Justin's site instead.
The only place you should use QR codes is where you have a dedicated reader for them, like a classic barcode scanner, and a workflow designed around this that actually saves time. If you do empirical research on using QR codes with the public, you'll likely see 80% worse performance than plain text, as this museum did. By all means try the experiment and report your results. Put up a QR code and a printed URL and see which gets the most usage.
Or listen to others:
a majority of our respondents knew more or less what they were for, very few (n=2, or around 7%) were successfully able to use QR codes to resolve a URL, even when coached by a knowledgeable researcher.[..] A strong theme that emerged — which we certainly found entirely unsurprising, but which ought to give genuine pause to the cleverer sort of marketers — is that, even where respondents displayed sufficient awareness and understanding of QR codes to make use of them, virtually no one expressed any interest in actually doing so.
Is it really faster and better to use a QR code that will direct you to part of a marketing campaign rather than getting a broader sweep of information by simply using the browser that you already use all the time on your phone? In the instant cost-benefit analysis I do every time I see a QR code, it has yet to make sense for me to fire up the decoder app I have installed on my phone.
QR code at the bus stop to get time of next bus. Really useful in the dark. Not. yfrog.com/mgicpqj— Martin Geddes (@martingeddes) January 27, 2012
Today, after 7 months, Bradley Horowitz announced that Google Plus will accept some pseudonyms. Kinda. If you can prove you're already famous. And can convince their robot it looks like a name. However, Google Engineer Yonatan Zunger spills the beans in a comment on that thread:
First of all, you might ask why we have a names policy at all. (i.e., why we don’t simply go with the JWZ proposal) One thing which we have discovered, while putting some miles on the system, is that it is indeed important to have a name-based service rather than a handle-based service. This isn’t a matter of functionality so much as of community: You get a different kind of community when people are known as Mary Smith than when they are known as captaincrunch42, and for a social product in particular we decided that the first kind of community is the one we want to build. In order to do that, we want to establish a general norm that the names you put in to the system should be names, not handles.

(posted in full, in case of subsequent retraction, and because G+ doesn't have permalinks for comments)
So one thing that our name checking flow tries to catch is handles, which should normally be nicknames, shown in addition to a name. The other important thing it’s trying to catch is people who are creating individual accounts, rather than +Pages, for non-human entities such as businesses or organizations. The behavior of +Pages is deliberately restricted in the system, and we don’t want people to be creating fake human accounts to circumvent that. The name check turns out to be a very powerful tool to catch these.
Our name check is therefore looking, not for things that don’t look like “your” name, but for things which don’t look like names, period. In fact, we do not give a damn whether the name posted is “your” name or not: we will not challenge you on this basis, nor is there any mechanism for other users to cause you to be challenged for this.
There are two main cases where the name check screws up. One is false positives: people (such as you) who have unusual names which get flagged because they looked like handles. Being able to appeal via things such as drivers’ licenses is useful for this case, since it’s a simple “oh, we got this wrong.” The other case is people such as +trench coat, who are so well-known under this handle that it would be bizarre not to let them onto the system under this name. For this case, we allow appeals based on being well-known under the name: thus the ability to prove the “established pseudonym.” We’ve deliberately set the threshold for that latter case fairly high for now, but we intend to continue to tune it; the objective is that the frequency of such names should basically be the same as their frequency in meatspace.
So to answer your questions one-by-one:
(2) “Meaningful following” only applies to cases of established pseudonyms which do not look like names. The definition of “meaningful” is deliberately vague so that we can tune it, so that it behaves in a natural fashion.
(3) That’s correct; drivers’ licenses are for false positives, not pseudonyms.
(4) Unusual names will indeed hit friction, because of false positives. We’re trying to minimize that, but it’s going to take some trial and error.
(5) Google+ can absolutely be your first identity online. No matter what your language, no matter where you come from. The “established pseudonym” logic should apply to a very small subset of people. If some groups are seeing a higher false positive rate than others, that’s a bug, not a feature, and we have the data available to spot this situation and remedy it.
Not long ago, Apple boasted that its products were made in America. Today, few are. Almost all of the 70 million iPhones, 30 million iPads and 59 million other products Apple sold last year were manufactured overseas.
Why can’t that work come home? Mr. Obama asked.
Mr. Jobs’s reply was unambiguous. “Those jobs aren’t coming back,” he said.
For computers, phones and tablets, it's hard to make a real premium product, as the economies of scale work so well - Tim Cook's Apple has closed in on PC prices by a focus on costs and suppliers, and by building fewer models and relying on Chinese flexibility to ramp them up.
The Gold iPad 2 had a huge premium price, but also weighed more than 3 times as much as a normal iPad.
Instead, what if Apple made premium USA iPads, MacBooks and iPhones? They could have a distinctive look, so people knew they were US made, focus on the higher-end models, and charge a premium markup for the warm glow of supporting US jobs.
How much more would it cost? Hard to say, according to the NYT:
It is hard to estimate how much more it would cost to build iPhones in the United States. However, various academics and manufacturing analysts estimate that because labor is such a small part of technology manufacturing, paying American wages would add up to $65 to each iPhone’s expense. Since Apple’s profits are often hundreds of dollars per phone, building domestically, in theory, would still give the company a healthy reward.
Another critical advantage for Apple was that China provided engineers at a scale the United States could not match. Apple’s executives had estimated that about 8,700 industrial engineers were needed to oversee and guide the 200,000 assembly-line workers eventually involved in manufacturing iPhones. The company’s analysts had forecast it would take as long as nine months to find that many qualified engineers in the United States.
In China, it took 15 days.
A few years after Mr. Saragoza started his job, his bosses explained how the California plant stacked up against overseas factories: the cost, excluding the materials, of building a $1,500 computer in Elk Grove was $22 a machine. In Singapore, it was $6. In Taiwan, $4.85. Wages weren’t the major reason for the disparities. Rather it was costs like inventory and how long it took workers to finish a task.
Compared to the huge price disparities for other goods, these seem modest; for example, Timoni found a nice carry-on bag recently:
Couldn't find carryon I wanted. Then found a nice one: "This is good, I could get this." Price? $8,000. *doh*— timoni west (@timoni) January 9, 2012
So here's my proposition for Tim Cook:
Reopen the Elk Grove Apple factory to sell top-line Apple products, designed for those who want 'designer' luxury goods and are willing to pay more for exclusivity. Make 'made in USA' a key argument for a premium price. That way you need fewer staff than in China, and paying them well just adds to the cachet of the devices. You could cover them in Jasper Johns' Flag, visibly number them as a limited edition, or come up with something more creative. As a way of extending the product line to a new, higher price point, while quieting those who wish Apple did more in the US, it seems an obvious move.
WASHINGTON —The following is a statement by Senator Chris Dodd, Chairman and CEO of the Motion Picture Association of America, Inc. (MPAA) on the so-called “Blackout Day” protesting anti-piracy legislation:
Senator and CEO - let's lead with the revolving door promises to politicians
“Only days after the White House and chief sponsors of the legislation responded to the major concern expressed by opponents and then called for all parties to work cooperatively together,
Why are my former colleagues listening to their constituents about legislation? Don't they stay bought?
some technology business interests are resorting to stunts that punish their users or turn them into their corporate pawns, rather than coming to the table to find solutions to a problem that all now seem to agree is very real and damaging.
Maybe if we keep saying copyright infringement is a real problem without evidence, they'll believe it.
It is an irresponsible response and a disservice to people who rely on them for information and use their services.
How dare they edit their sites unless we force them to under penalty of perjury and felony convictions?
It is also an abuse of power given the freedoms these companies enjoy in the marketplace today.
Tomorrow was supposed to be different, that's why we bought this legislation.
It’s a dangerous and troubling development when the platforms that serve as gateways to information intentionally skew the facts to incite their users in order to further their corporate interests.
A so-called “blackout” is yet another gimmick, albeit a dangerous one, designed to punish elected and administration officials who are working diligently to protect American jobs from foreign criminals.
I am high as a kite
It is our hope that the White House and the Congress will call on those who intend to stage this “blackout” to stop the hyperbole and PR stunts and engage in meaningful efforts to combat piracy.”
What have the Romans done for us? Apart from instantaneous global communications, digital audio and video editing, the DVD, Blu-ray, Digital projection, movie playback devices in everyone's pockets and handbags...
By choosing images over links, and by restricting markup, Facebook, Twitter and Google+ are hostile to HTML. This is leading to the plague of infographics crowding out text, and of video used to convey minimal information.
The rise of so-called infographics has been out of control this year, though the term was unknown a couple of years ago. I attribute this to the favourable presentation that image links get within Facebook, followed by Twitter and Google+, and of course through other referral sites like Reddit. By showing a preview of the image, the item is given extra weight over a textual link; indeed, even for a URL link, Facebook and G+ will show an image preview by default.
Consequently, the dominant form of expression has become the image. This was already happening with LOLcats and other meme generators like Rage Comics, where a trite observation can be dressed up with an image or series of images.
Before this, in the blogging age, there was a weight given to prose pieces, and Facebook and Google preserve some of this, but the expressiveness of HTML through linking, quoting, using images inline, changing font weight and so on, is filtered out by the crude editing tools they make available.
Feeds and feed readers started out this way too, but rapidly gained the ability to include HTML markup. Twitter went back to the beginning, and added the extra constraint of 140 characters because of its initial SMS focus. Now it is painfully reinventing markup through the gigantic envelope and wrapper of metadata that accompanies every tweet. This now has an edit list for entities pointing into it, and instructions for how to parse it to regain the author's intent are part of the overhead of working with their API.
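To see what this reinvention of markup looks like in practice, here's a minimal sketch of turning Twitter's entity envelope back into HTML. The tweet dict is a hand-made illustration in the shape of the API payload (text plus an "entities" section with character indices), not a real API response:

```python
# Sketch: re-expanding a tweet's entity metadata into HTML links.
# The "tweet" below is an illustrative example shaped like Twitter's
# API payload; only the urls entities are handled here.

def tweet_to_html(tweet):
    """Rebuild simple HTML anchors from a tweet's entity indices."""
    text = tweet["text"]
    # Collect (start, end, replacement) for each URL entity.
    spans = []
    for url in tweet.get("entities", {}).get("urls", []):
        start, end = url["indices"]
        anchor = '<a href="%s">%s</a>' % (url["expanded_url"],
                                          url["display_url"])
        spans.append((start, end, anchor))
    # Replace from the end of the string so earlier indices stay valid.
    for start, end, anchor in sorted(spans, reverse=True):
        text = text[:start] + anchor + text[end:]
    return text

tweet = {
    "text": "Read this: http://t.co/abc123",
    "entities": {"urls": [{"indices": [11, 29],
                           "url": "http://t.co/abc123",
                           "expanded_url": "http://example.com/post",
                           "display_url": "example.com/post"}]},
}
print(tweet_to_html(tweet))
# → Read this: <a href="http://example.com/post">example.com/post</a>
```

Even this toy version has to worry about index bookkeeping; the real API adds mentions, hashtags and media entities, each with their own indices, which is exactly the parsing overhead described above.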
Image links, however — at least those from recognised partners — are given privileged treatment. Facebook and Google have emulated this too, leading to the 'trite quote as image' trope. The spillover of this to news organisations became complete this year, with blogs and newspapers falling over themselves to link to often-tendentious information presented in all-caps and crude histogram form.
So here's my plea for 2012: Twitter, Facebook, Google+: please provide equal space for HTML. And for authors and designers everywhere, stop making giant bitmaps when well-written text with charts that are worth the bytes spent on them could convey your message better.
Brilliant web essayist Maciej Cegłowski of Two Steaks and Pinboard fame has focused his considerable insight on the area of web standards I've been involved with for the past few years. You should go and read his The Social Graph is Neither now.
Maciej is spot on in his criticisms:
This obsession with modeling has led us into a social version of the Uncanny Valley, that weird phenomenon from computer graphics where the more faithfully you try to represent something human, the creepier it becomes. As the model becomes more expressive, we really start to notice the places where it fails.
Personally, I think finding an adequate data model for the totality of interpersonal connections is an AI-hard problem. But even if you disagree, it's clear that a plain old graph is not going to cut it.
Clearly you can't model human relationships exactly in software. Keeping track of a few hundred of them in all their nuanced subtlety is why our brains are so huge compared to other animals. As Douglas Adams put it:
Of course you can’t ‘trust’ what people tell you on the web anymore than you can ‘trust’ what people tell you on megaphones, postcards or in restaurants. Working out the social politics of who you can trust and why is, quite literally, what a very large part of our brain has evolved to do.
It is an act of hubris to attempt to represent such vital things as human relationships in a database, and those who have done so often do resemble Maciej's Mormon bartender - Orkut Büyükkökten, Mark Zuckerberg and Jonathan Abrams do seem to have made what danah boyd has called Autistic Social Software.
The thing is, people seem to find these attempts helpful. As Maciej points out, we're good at forming subcultures and relationships even around the most primitive of tools. He pokes fun at
opensocial.Enum.Drinker.HEAVILY, but when we were compiling the OpenSocial Person fields, we found a high degree of convergence between the 20 or so social network sites we reviewed. Despite their crudity, the billions of people using these sites do find something of interest in them.
People choose to model different relationships on different sites and applications, but being able to avoid re-entering them anew each time by importing some or all from another source makes this easier. The Social Graph API may return results that are a little frayed or out of date, but humans can cope with that and smart social sites will let them edit the lists and selectively connect the new account to the web. Having a common data representation doesn't mean that all data is revealed to all who ask; we have OAuth to reveal different subsets to different apps, if need be.
The real value comes from combining these imperfect, scrappy computerized representations of relationships with the rich, nuanced understandings we have stored away in our cerebella. With the face of your
crush next to what they are saying, your brain is instantly engaged and can decide whether they are joking, flirting or just being a grumpy poet again, and choose whether to signal that you have seen it or not.
As danah says:
While we want perfect reliability for our own needs, we also want there to be failures in the system so that we can blame technology when we don’t want to admit to our own weaknesses. In other words, we want plausible deniability. We want to be able to blame our spam filters when we failed to respond to an email that someone sent that we didn’t feel like answering. We want to blame cell phone reception when we’ve had enough of a conversation and “accidentally” hang up. The more reliable technology gets, the more we have to find new ways for blaming the technology so that we don’t have to do the socially rude thing.
So here's to approximate, incomplete social web standards.