
KevinMarks

Kevin alerted us here that CHIC was proposed as a translation of POSH ;)

photo Kevin Marks
San Jose USA
GEO : 37.2737, -121.9039

2014-10-05

  • 04:16 UTC How did Twitter become the hate speech wing of the free speech party?

    Twitter has changed. When it started out it was a haven from the unread count in your email inbox, from the tension of blogging and comments. You didn't have to write long-form essays, and you didn't have to moderate commenters. You didn't have to make all those bold lines go away to reach inbox zero. You just followed some people and typed thoughts that occurred to you. Those who followed you got them by text, or sent to them over IM. Simple.

    Then, we came up with @ replies. They were a way of making it clear who you were responding to. Twitter sensibly adopted them, but they were off to one side. You had to click on the @ tab to see them. Your primary experience was still the people you followed, a list you controlled. I wrote about Twitter theory with this in mind.

    Twitter created permeable, overlapping publics, and championed free speech and anonymity. They said they were the "free speech wing of the free speech party".

    Twitter reinforced this by hiding tweets that start with an @ unless you followed the person being talked to. This reduced serendipitous friend discovery, but also damped down arguments. (You can still see this in twitter analytics: note the difference in reach versus engagement for posts versus replies).

    However, Twitter had a perceived problem - this very contained, comfortable nature meant that it took effort to get started and find people to follow. They worked really hard at changing this, building a sign up process that made it very hard not to follow lots of people, especially famous ones. Also, they changed the way we got notified.

    Emails and app notifications of new followers and @ replies were set up to drive engagement, encouraging you to return.  That @ tab got an unread count on it, just like the email inbox, and the app got a red number on it on iPhones.

    The problem with this is that responses do not follow a smooth distribution. Sure, most tweets get no responses, but some take off. Hashtags became another way to spread tweets sideways, beyond the follow graph.

    Twitter saw this as increased engagement, and most of the time it was good. They built special tools in for "verified" users—the celebrities and brands they used to woo the rest of us with on sign up. The Verified get to damp down the notification flood, or just see other verified people.

    The problem is that by making @ replies the most visible part of the app, they'd brought us back to email and blog comments again.

    Your tweet could win the fame lottery, and everyone on the Internet who thinks you are wrong could tell you about it. Or one of the "verified" could call you out to be the tribute for your community and fight in their Hunger Games.

    Say something about feminism or race, or sea lions, and you'd find yourself inundated by the same trite responses from multitudes. Complain about it, and they turn nasty, abusing you and calling in their friends to join in. Your phone becomes useless under the weight of notifications; you can't see your friends' support amongst the flood.

    Twitter has become the hate speech wing of the free speech party.

    The limited tools available - blocking, muting, going private - do not match well with these floods. Twitter's abuse reporting form takes far longer than a tweet, and is explicitly ignored if friends try to help.

    This is where we are now. There are new attempts to remake following-focused semi-permeable publics. Known, ello, Quirell are some. In the indieweb world we are just starting to connect sites together with webmentions, and we need to consider this history as we do.

    Also published on my own site

    Also published on Known

    Also published on ello

    Also published on Medium

2014-04-23

  • 18:35 UTC Fragmentions for Poets

    Since the original fragmention implementation and discussion, we've been trying it out in various contexts, including any wordpress blog, Shakespeare's complete works and even the indiewebcamp wiki. Also there have been a lot of reactions and suggestions. Some of these occur often enough that I thought I'd write some responses down.

    What if the linked-to text changes?

    Several people pointed to the New York Times Emphasis project, which builds IDs from initial letters of sentences in a paragraph to provide some degree of resilience against the linked-to text being changed. It also tries using edit distances if it can't find the text.

    Whether you still want to link to changed text is a tricky problem - if the text has been removed completely, then the annotation or point of linking may have gone (pointing out a typo, or a misstatement). Even a small change (adding the word 'not', for example) can mean that the point of linking has changed, so my first thought is that it can be reasonable for a change to the text to break the link.

    If you want some fuzzy matching to go on, having more of the linked-to text in the fragment can only help the linked-to page identify where in the text was intended. Indeed, if enough is included, you could show the difference between what was linked to and what is there now.

    What if the linked-to text occurs more than once?

    By default, go to the first instance. If you want a different link, use more words to create a unique reference. While there have been proposals to link to the nth occurrence using more complex syntax, I don't think this is a natural choice, and it is likely to be more fragile. The NYT Emphasis tool mentioned above switched from an nth-sentence model to a content-dependent one for this reason; fragmentions simplify and extend this idea.

    I have tried to come up with a use case that fits this goal - the closest I can think of is referring to a particular repetition of a line of poetry, for example in a villanelle.

    I can link to the 3rd line of the 1st verse by citing it in full. If I wanted the final line, linking to it and the line before would work. I think this is clear, though linking to night & Rage would be enough.

    The only reason I can think of to link to specific lines would be to discuss them in the context of surrounding lines, so I think this works adequately.

    If a tool is made to let readers construct a link to a specific phrase, it may be worth indicating when that phrase is not unique, to encourage them to choose a longer one.

    Could you combine an id and a fragmention?

    A link of form #id##some+words has been suggested, but again I'm not sure I see the utility. This is the nth occurrence idea in a different guise. It combines two addressing models in one, making it harder to construct and more fragile to resolve.

    Can we get rid of ## and just use #?

    The other thought, based on closer reading of the HTML5 spec ID attribute:

    The id attribute specifies its element's unique identifier (ID). [DOM]

    The value must be unique amongst all the IDs in the element's home subtree and must contain at least one character. The value must not contain any space characters.

    There are no other restrictions on what form an ID can take; in particular, IDs can consist of just digits, start with a digit, start with an underscore, consist of just punctuation, etc

    is that we may not need the ## (which technically makes an invalid URL) at all.

    If an HTML5 id cannot contain a space, then a fragment that contains one like #two+words can never match an id (as id="two words" would be invalid). If it can't be an id it should be treated as a fragmention.

    If you really want a one-word fragmention a trailing space like #word+ could be used.

    This means that the idea of fragmention could be simplified: if a fragment contains a space it MUST be a fragmention, and should be searched for in the text. If it doesn't match any IDs in the page, it COULD be a fragmention and should be searched for in the text anyway.

    Fragmentions become a fallback to be used when an id can't be found.
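
    To make this concrete, here is a minimal sketch in browser JavaScript of the rule above (my illustration, with a made-up helper name, not a normative implementation): a fragment containing a space can never be a valid id, and a fragment that matches no id on the page is worth searching the text for anyway.

    // Decide whether a fragment should be treated as a fragmention.
    // A fragment with a space can never be a valid HTML5 id, so it MUST be one;
    // a fragment that matches no id on the page COULD be one, so fall back to
    // searching the text for it.
    function isFragmention(fragment) {
      var text = decodeURIComponent(fragment).replace(/\+/g, ' ');
      if (text.indexOf(' ') !== -1) return true;
      return document.getElementById(text) === null;
    }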

    Do not go gentle into that good night

    Do not go gentle into that good night,
    Old age should burn and rave at close of day;
    Rage, rage against the dying of the light.

    Though wise men at their end know dark is right,
    Because their words had forked no lightning they
    Do not go gentle into that good night.

    Good men, the last wave by, crying how bright
    Their frail deeds might have danced in a green bay,
    Rage, rage against the dying of the light.

    Wild men who caught and sang the sun in flight,
    And learn, too late, they grieved it on its way,
    Do not go gentle into that good night.

    Grave men, near death, who see with blinding sight
    Blind eyes could blaze like meteors and be gay,
    Rage, rage against the dying of the light.

    And you, my father, there on the sad height,
    Curse, bless, me now with your fierce tears, I pray.
    Do not go gentle into that good night.
    Rage, rage against the dying of the light.

    by Dylan Thomas

    hear him read it

    Originally on my own website

2014-04-17

  • 12:42 UTC Fragmentions - linking to any text

    A couple of weeks ago, I went to a w3c workshop about annotations on the web. It was an interesting day, hearing from academics, implementers, archivists and publishers about the ways they want to annotate things on the web, in the world, and in libraries. The more I listened, the more I realised that this was what the web is about. Each page that links to another one is an annotation on it.

    Tim Berners-Lee's invention of the URL was a brilliant generalisation that means we can refer to anything, anywhere. But it has had a few problems over time. The original "Cool URLs don't change" has given way to Tim's "eventually every URL ends up as a porn site".

    Google's huge success means that searching for text can be more robust than linking to a URL. If I want to point you to Tom Stoppard's quote from The Real Thing:

    I don’t think writers are sacred, but words are. They deserve respect. If you get the right ones in the right order, you can nudge the world a little or make a poem which children will speak for you when you’re dead.

    the search link is more resilient than linking to Mark Pilgrim's deleted post about it, which I linked to in 2011.

    Another problem is that linking in HTML is defined to address pages as a whole, or fragments within them, but only if the fragments are marked up as an id on an element. I can link to a blog post within a page by using the link:

    http://epeus.blogspot.com/2003_02_01_archive.html#post-body-90336631

    because the page contains markup:

    <div class="post-body entry-content" id="post-body-90336631" >

    But to do that I had to go and inspect the HTML and find the id, and make a link specially, by hand.

    What if instead we combined these two ideas:

    • use a fragment to identify part of a page
    • mention words in the page as the identifier

    I've named these "fragmentions"

    To tell these apart from an id link, I suggest using a double hash - ## for the fragment, and then words that identify the text. For example:

    http://epeus.blogspot.com/2003_02_01_archive.html##annotate+the+web

    means "go to that page and find the words 'annotate the web' and scroll to show them"

    If you click the link, you'll see that it works. That's because when I mentioned this idea in the indiewebcamp IRC channel, Jonathan Neal wrote a script to implement this, and I added it to my blog and to kevinmarks.com. You can add it to your site too.

    However, we can't get every site to add this script. So, Jonathan also made a Chrome Extension so that these links will work on any site if you're running Chrome. (They degrade safely to linking to the page on other browsers).

    So, try it out. Contribute to the discussion on the Indiewebcamp Fragmentions page, or annotate this page by linking to it with a fragmention from your own blog or website.

    Maybe we can persuade browser writers that fragmentions should be included everywhere.

    Originally posted on kevinmarks.com

2013-06-12

  • 23:18 UTC How Apple's iOS fragmentation problems distort design thinking

    As someone who uses both Android and iOS regularly, I'm getting increasingly frustrated by fragmentation. However, it's not on my Android devices that I see this, but on the iOS ones. I install a popular, well-funded application like Instagram, Flickr, or Circa on my iPad, but when I launch it three-quarters of the screen is black bars, with a teensy little app in the middle. Or I can choose to scale it up without smoothing, so jagged pixels I haven't seen since the 1990s reinforce the sense that I am doing something wrong by attempting to run this app here. Every affordance is pushing back at me, saying I'm doing it wrong.

    on Android Nexus 7 Instagram looks great, on iPad it looks like a Victorian death notice

    By contrast, on Android applications scale up to fill the space sensibly - buttons stay button sized, text and image areas expand well. Developers can add alternative layouts to better handle varying sizes, but if they don't things remain legible and touchable.

    One hand or on your knees?

    More pernicious is the artificial dichotomy that the iOS world leads our design thinking into. You're either on the held-in-one-hand phone, briefly standing somewhere, or you're sitting down in the evening using your iPad (so big and heavy that you have to rest it on your knees - Steve Jobs even brought out a sofa to sit on at the launch). This false 'mobile versus desktop' dichotomy even misled Mark Zuckerberg, who said "iPad's not mobile, it's a computer", and at the Facebook Home launch a tablet version was said to be "months away", though a working version was hacked together by outside programmers within days.

    Meanwhile, nobody told Barnes and Noble, whose 7" Nook did so well that Amazon launched a Kindle range the same size, leading to the lovely Nexus 7 from Google and finally to the iPad Mini from Apple. This is the form factor, tested for years by paperback books, that makes one-handed long form reading comfortable. If you spend any time on public transit, being able to read standing up or in narrow bus seats is an obvious benefit. But the hermetically sealed company-coach commuters at Apple missed this for years.

    Steve Jobs even said you'd have to file down your fingers to use it. The thing is, on iOS it does feel like that. The iPad sized apps have too-small buttons, the iPhone ones are too big if zoomed. There is no way for an app to know how big your finger is compared to the screen, let alone a website.

    The supposed advantage of iOS is fewer sizes to design for, but now you need 12 different layouts to cope with the horizontal and vertical orientations of each device, and the general layout tools don't handle this as well as Android, requiring complex updates. Chiu-Ki Chan explains the pain, whereas Android Studio just made this even easier for developers.

    No App is an island

    The other fragmentation problem on iOS is the missing links. Not in the evolutionary sense, but the ability to readily connect between applications using URLs, as we're used to on the web. Applications can't claim parts of a URL, only a whole scheme. As Apple says:

    If more than one third-party app registers to handle the same URL scheme, there is currently no process for determining which app will be given that scheme.

    On Android, any app can claim any scheme, host and path, and the OS will resolve as appropriate, giving you a choice if more than one possibility is available.

    On Android, you get a choice for links, if there is one.

    On iOS, each app ends up embedding webviews inside it rather than linking out to a browser, fragmenting your web history. I have to remember whether I got to the page via Twitter, or Facebook, or email to go back to it later on, and I only get to share it to the iOS-approved places: Twitter, Facebook, email or iMessage. On Android, any installed app that can handle that type of media is an option to share to, or to take a photo with, or to make a phone call - any of hundreds of "intents" that act as bridges between apps and, through browsers, the web.

    This means that Android apps don't end up doing everything, but hand off to each other as needed, meaning that you can replace any function cleanly across apps. Better keyboards are the obvious example here, but camera apps, browsers, and SMS apps can drop themselves in, with the choice made by the user.

    On iOS, you have to explicitly link to the scheme, which is per-app. Ironically, this means that Google (or Facebook) can release a set of iOS apps that connect to each other preferentially, leaving the Apple applications out of the loop. But it also makes it harder for developers to specialise in doing one thing well and linking to the rest.

    What of the Web?

    The other pernicious influence of iOS fragmentation has been the rise of the mobile-only site - the m. layout that was tuned for iPhone use, then slightly adapted for other mobile users, often giving rise to farcical single column layouts with huge whitespace on tablets. In the early iOS days this was a bonus, as it encouraged function-first design, without the organisation-chart-inspired cruft around the edges that websites accumulate over time. As the effective resolution of mobile devices has increased, now often exceeding what is available on desktops, the assumptions that drove mobile-specific sites are breaking down.

    Now that Android is the dominant operating system, Google is getting serious about it as a web platform too, which is very welcome. The Android Browser was installed as part of the OS, and didn't get upgraded over time. This has changed, with Chrome now the default Android browser, and it is on a regular update pipeline like Desktop Chrome and Firefox. iOS's Safari updates are frequent too now, and Microsoft is now pleading with developers to give them modern web apps too.

    Truly responsive design has succeeded mobile-first as the best choice for websites, and we're seeing this spread as browsers converge on HTML5 features. What this means is that the web platform is now evolving rapidly, without any one browser provider being a bottleneck. The installed base for SVG passed Flash a while back, and even Adobe is now backing the web fully, bringing their design know-how to HTML5 features such as regions and exclusions. Also in the pipeline for HTML5 is browser to browser audio, video and text chat via WebRTC.

    Hoping Apple continues the revolution

    This web platform revolution was catalysed by Apple with WebKit and Mozilla with Firefox, and picked up by Google, Microsoft, Adobe and others. We now have a Browser Battle to be more standards compliant and consistent, rather than a Browser War to be different and proprietary. What I'll be hoping for from Apple at next week's WWDC is a clear recognition of these design lacunae, and new and better ways for developers to succeed both with native apps and on the web.

    This was also published on TechCrunch.

2013-05-06

  • 09:21 UTC Finally, some progress in video codecs.

    An announcement on Friday via Brendan Eich:

    ORBX.js, a downloadable HD codec written in JS and WebGL. The advantages are many. On the good-for-the-open-web side: no encumbered-format burden on web browsers, they are just IP-blind runtimes. Technical wins start with the ability to evolve and improve the codec over time, instead of taking ten years to specify and burn it into silicon.
    I think the 'remote-screen viewing of videogames' use case is bogus (if anyone notices latency it's gamers), but this is a really important development for the reasons Brendan mentions and more.

    Nine years ago, I wrote:

    I'd say video compression is maybe 2-4 times as efficient (in quality per bit) than it was in 1990 or so when MPEG was standardised, despite computing power and storage having improved a thousandfold since then.

    Not much has changed. The video compression techniques we're using everywhere are direct descendants of 1980s signal processing. They treat video as a collection of small 2D blocks that move horizontally and vertically over time, and encode all video this way. If you want to make a codec work hard, you just need to rotate the camera. Partly this is because of the huge patent thicket around video encoding; mostly it's because compression gets less necessary over time as network capacity and storage increase. However, it was obvious 10 years ago that this was outdated.

    Meanwhile, there has been a revolution in video processing. It's been going on in video games, and in movies and TV. The beautiful photorealistic scenes you now see in video games are there because they are textured 3D models rendered on the fly for you. Even the cut scenes work this way, though their encoding is often what compression researchers dismissively call a 'Graduate Student Algorithm' - hand-tweaking the models and textures to play back well within the constraints of the device. Most movies and TV have also been through 3D modelling and rendering, from Pixar through visual effects to the mundane superimposition of yard lines on sports. The proportion of YouTube that is animation, machinima or videogame run-throughs with commentary keeps growing too.

    Yet codecs remain blind to this. All this great 3D work is reduced to small 2D squares. A trimesh and texture codec seems an obvious innovation - even phones have GPUs in them now, and desktops have had them for 20 years. Web browsers have been capable of complex animations for ages too. Now that they can decode bytestreams to WebGL in real time, we may finally get codecs that are up to the standards we expect from videogames, TV and movies, with the additional advantage of resolution independence. It's time for this change.

2013-04-05

  • 02:13 UTC Forking, Spooning or Knifing?

    Reading the tech news this week, there's a lot of talk about forking. Google Blink forking WebKit. Apple not forking Chromium because that would be hostile. Facebook 'forking' Android. Even Tim O'Reilly forking the memetic nature of Free Software into Open Source.

    However, not all of these things are really forks, and forking is no longer necessarily a hostile act. Let's go through them. Google Blink is a fork of WebKit, or rather of WebCore. Alex explains this is to reduce the amount of time they need to spend merging back to WebKit, but it doesn't preclude anyone continuing to do this if desired. Maciej explained that the reason for the difference in multiprocessing implementations that precipitated this was Apple not wanting to do so.

    Facebook did not need to fork Android, because it is designed to support substitutable components. You can swap out any OS components, and you can communicate between apps using intents. Indeed, Facebook could make a deal with handset manufacturers or carriers that don't offer the 'with Google' experience to replace it with a Facebook one. Expect to see this in overseas markets, especially the ones where Facebook Zero works with carriers.

    The more subtle thing is that forking is no longer pejorative. It used to be a last resort, what you did when your open source community had broken down. It meant that people had to pick sides, and choose which fork to adopt, because open source had a hierarchic nature. Now, forking is what you do to show interest. If you go to GitHub, where much open source lives now, forking a project is a single click. Successful projects will have many forks, and will accept pull requests from some of them.

    This is the real difference between the Free Software and Open Source worldviews that were debated this week - the web enables more parallel, less centralised forms of co-operation and ownership. The monolithic projects and integrations are giving way to ones with better defined boundaries between them, and the ability to combine components as needed. Which means tech companies don't get to tell each other to "Knife the baby" any more.

2013-04-02

2013-03-15

  • 18:20 UTC DTLI Panel on 1201 rulemaking
    As I'm in twitter jail for tweeting too much, here's an old-fashioned liveblog.
     Speakers

    Rob Kasunic: the DMCA was passed in 1998. The new rule-making started in 2000.
    (someone is testing a radio mic on the same frequency. Ironic given White Spaces interference lobby by radio mic users).
    Rulemaking was originally designed to be formal, like a courtroom proceeding. It became less formal, and a periodic review of exemptions. The exemptions expire every three years and must be examined again. The 2000 rulemaking was difficult as the provisions weren't yet in force. "It would be nice if legislation could be understood by the general public. Failing that it would be nice if it was understood by Copyright lawyers."
    We had to interpret what a "class of works" consisted of, as that was what we could exempt, but it was not defined in the statute.

    Marcia Hofmann: I've been involved in all rounds so far. What does a successful argument look like?

    Rob Kasunic: Seth Finkelstein documented what he had done to achieve exemptions. Look at what had been successful. Many people who have got them repeatedly were not lawyers. Presenting a very strong factual case is key, compared to making a legal argument. It helps to come to the hearing but that is not a requirement.

    Rebecca Tushnet: I work with Vidders, a community of remix artists. The hearings feel like Alice in Wonderland - the content people say that screen capture and other tools are circumvention, but they are available, and they won't go after fair use but they reserve the right to decide what isn't, and so no exemption is needed.
    Vidders are primarily women, working with popular culture, and non-commercial. Even though the Copyright Office should represent all creators, as outsiders we have to make this argument in perpetuity. Despite copyright protecting all creations, we need to make a case for creative quality and critical message, which is like explaining opera to non-fans.
    The question 'Who gets to say what tools artists can use?' is very difficult. The content people argued that screen capture software was good enough, so cracking encrypted DVDs wasn't needed, despite generational loss. They said you don't need nice-looking stuff to use in your art. They also said that if we weren't getting good enough quality from screen capture, we were doing it wrong. Hearing a bunch of guys who don't edit video telling a bunch of women who do that they are doing it wrong is a feature of the proceedings. You have to come back every 18 months and start again, and eventually people give up - like the dongle guy did. The copyright office cuts down your proposal each time, and more so if you don't come to the hearings. The burden of proof and the standards required to show that the use is substantial require you to break the law in advance to show that it should be legal, which is highly problematic. Bruce Lehman told us about the process of enacting the DMCA. People making the next generation of media don't have lobbyists; they don't even have drivers' licences today. They will surprise us like Facebook or Google did. We need to let them surprise us in future.

    Christian Genetski: The EFF brought an exemption request for jailbreaking consoles that followed on from the one for jailbreaking cellphones. As we (Video Game Manufacturers) prevailed, I see this as a fair process. In the mobile phone example there was a competition issue wrapped up in the DMCA. We made the case that this was different for game consoles, where we were protecting 3rd party creative works - the games. We didn't question the legitimacy of homebrew and indie games, but said we were trying to promote these consistent with protecting commercial ones. The evidence showed that the vast majority of the tools were used for infringement, not for development or indie distribution.
    I don't think the DMCA 1201 rule is broken per se. The use of the statute by creative litigators is not unique to the DMCA; there were other statutes cited in the same complaints.
    If we need to adjust to the reality of what is being used, taking a fresh look every 18 months seems like a good idea. This is better than going to Congress and meeting on K Street. Perhaps there is an execution and burden-shifting problem.

    Rob Kasunic: Burden-shifting is something we should consider for existing exemptions, to move the burden to opponents. The rule-making is not necessarily the answer to these issues. It's an adjunct to the statute. But for the rule-making process, Vidders would have all been unlawful. The Copyright Office is not assessing 'good' works or legitimate art, but non-infringing use. When we use the term 'substantial' we didn't mean a higher burden of proof, but it needs to be more than mere inconveniences or anecdotal use. With vidding it was questionable whether the use was non-infringing, but there needed to be a sufficient number of uses that were.
    Although 1201 exemptions only apply to the use provisions, not the trafficking ones, when they pass they mean people can buy illegal tools for a legal purpose, which is very odd.

    Rebecca Tushnet: The exemption we got said we could only use circumvention if necessary for sufficient quality. This was an artistic judgment encoded in the exemption.

    Granick: TracFone continued to use 1201 against bulk unlockers, as they were not unlocking phones to connect to a network but to profit by reselling them, and has won these cases.

    Q: Why would it be up to Congress to change the burden? Couldn't the Copyright office change this?
    Kasunic: there are a lot of expectations from Congress in the lawmaking, even if they are not in the legislation. We're trying to implement what Congress intended. We would be delighted to have Congress give us more information.

    Q: Copyright prevents actual copying and derivative works. What if we narrowed it down so derivative works weren't protected? How many problems would go away then?
    Kasunic: Many derivative works matter a lot, e.g. movie adaptations. The line between derivative and transformative is a fine one.
    Genetski: in the game industry the expansion packs and sequels are derivative and need to be protected.
    Tushnet: people have been thinking hard about this. Substantial similarity has eaten this up. I wrote an article about this.


2012-12-06

  • 19:38 UTC The Antifragility of the Web

    We’re used to taking the web for granted. We expect it to be there as substrate, with its addresses, declaratory documents, universally available programming language and the links between pages.

    Ah, the links. There’s the rub. How many times have you followed a link and got a 404 or a different page than you were expecting? Links rot. As Tim Berners-Lee says, eventually every domain becomes a porn site.

    So we want to do better. We want to build a non-web web. A special place for ourselves and our friends that is self-contained, and where all the pages and links are in the same database, and they can’t rot.

    Instead of these messy links with protocols and domains in them, we just use @names or +names and #topics and tags. It’s easier for people to do, self-consistent, and grows explosively. Biz dev gets excited about the reciprocal deals we can do with other content owners.

    If you’ve read Nassim Taleb’s Antifragile, you know what comes next. By shielding people from the complexities of the web, by removing the fragility of links, we’re actually making things worse. We’re creating a fragility debt. Suddenly, something changes - money runs out, a pivot is declared, an acquihire happens - and the pent-up fragility is resolved in a Black Swan moment.

    The special place disappears entirely. Or, if we’re lucky, the Archive Team lights the cat signal and emergency archivists preserve it in formaldehyde somewhere else, the clock stopped, the links severed.

    Meanwhile, out there on the web, people can still connect and discuss and say what went wrong, and do better next time. The web itself is antifragile. It interprets our business models as damage and routes around them. If we’ve learned, we’ll respect this next time we make something.

2012-05-24

  • 22:35 UTC Keep ALL the versions

    Back in the 1980s, storage was expensive and slow. You had a copy of your document in memory and you would be asked every time you wanted to save it out to disk, because you didn't want to fill the disk up. That paradigm is so out of date now it's embarrassing to try to explain to my sons what the little floppy disk icon is in Microsoft Word. "What's that?" "it's a floppy disk." "Oh yeah, I think I saw one in the garage once"

    The world view of having to load and save is being gradually eroded. Apple has changed the operating system - Mountain Lion no longer has Save and Save As, but instead a model of going back through edit histories. Google Docs originally didn’t have a floppy disk icon, but added one because people were looking for it in user testing. Now it has been removed.

    We've had source code control as programmers for a long time. But the github world takes that further: the first thing you do is clone a project into your own repository, then start forking it. You can eventually merge stuff back later, but there is the assumption that things are happening in parallel. As James Governor put it:

    Open Source used to count download numbers as a measure of developer success.

    Today, we increasingly use forks as the metric of traction.

    Wikipedia has put this into the public consciousness by having publicly visible edit histories, so you can go back and forwards in time over the history of the article. The paradigm of "storage is not a problem, we should keep every version of everything ever" is moving through culture to be a default assumption.

    This will be something we want on mobile too. The issue of “which have I got on the phone and which have I got in the cloud?” is what makes it tricky. I think there will be a battle between Apple and Google about how you present that to the user in a coherent way - Google Drive and iCloud are taking different paths here, with Dropbox actually working between both. Google Drive not storing copies of Google Documents locally is a mistake I expect them to fix.

    Everything is moving in this direction, even low level system design. The growth of functional programming is all about not having contention over a single copy of things in memory but having paths through data that are modifying things in their own version of the world.

    If you think about the difference between the way JavaScript handles stuff and the way Java does, Java still has a ‘data structure being passed around’ world view; JavaScript has closures that are passed around, containing the entire state of the current machine at the time of that event, and which are held as you go off and do something else and come back. One of the reasons that node.js feels so nice when you're writing web programming is that it has the feeling you’re used to on the client side: you do something and then call something, passing it a callback. You end up writing the server-side stuff in the same way, and it's just naturally parallelizable. It's easy to spin up more machines for it because you've written stuff with the presumption that each instance of it is wholly independent.
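
    As an illustration, here is a minimal Node.js sketch (using only the built-in http and fs modules; the file name is made up) of the callback style described above - the server handler reads like client-side event code, and each request's closure keeps its own state:

    var http = require('http');
    var fs = require('fs');

    http.createServer(function (req, res) {
      // Each request gets its own closure over req and res, so many requests
      // can be in flight at once without sharing mutable state.
      fs.readFile('greeting.txt', 'utf8', function (err, text) {
        if (err) {
          res.writeHead(500);
          res.end('could not read greeting');
          return;
        }
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end(text);
      });
    }).listen(8080);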

    Not everything can be written that way. The core of node is in C because it has to deal with the raw machine contention and deal with this routing, but the spread of the functional world view is a natural fit for web apps.

    The other potential that node.js makes manifest is convergence of client and server code. If you're running and writing the same code in the same language, with the same libraries running on both the client and the server you can decide to migrate bits back and forth much more easily.

    You don’t have to spend so much time deciding which is which, or worrying about the boundaries of the world and the different shapes of the data structures. So if you're creating a JSON object on the server and passing it to the client, you can decide whether to do that, at which point you do it, at which point you render it and at which point you don't. That sort of fluidity is going to become more important over time. Pat Patterson showed how this can work for mobile apps too.
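
    A hypothetical sketch of that fluidity (the function and names are invented for illustration): the same rendering code can be loaded by Node.js on the server or by the browser on the client, so where the JSON becomes HTML is a choice you can defer.

    // renderGreeting works identically on server and client.
    function renderGreeting(data) {
      return '<p>Hello, ' + data.name + '!</p>';
    }

    if (typeof module !== 'undefined' && module.exports) {
      // Server side: export it so Node.js code can require() it.
      module.exports = renderGreeting;
    } else {
      // Client side: render directly into the page.
      document.body.innerHTML = renderGreeting({ name: 'web' });
    }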

    The programmers' world view has changed on this, and it permeates out to the world as programmers make those ideas available to the public in usable form. The invention of ‘Undo’ was a huge part of what made the Mac great - enabling users to experiment safely. Being able to retrospectively undo mistakes later, or learn from others’ public variations on a theme, is going mainstream too.

    This post is based on discussions I had on This Week In Google ep 143, with Gina Trapani, Jeff Jarvis and Tom Merritt, which were transcribed by Michael Shook from this video of the show. Updated: At the same time, JP Rangaswami wrote Warning: Contains Warnings, which gives more context about how 'Undo' helps protect innovation.

2012-04-01

  • 09:23 UTC Draw Something CEO, grace and high school mathematics

    Dan Porter, CEO of OMGPop, has had a good week. His game, Draw Something (an asynchronous Pictionary for cellphones, much as Words With Friends is an asynchronous Scrabble), has taken off like mad, and Zynga bought his company for over $200 million. However, one employee didn't go along to Zynga, and Dan's been whining on twitter:

    This has drawn some reactions from others, eg Notch, creator of Minecraft:
    and Dick Costolo, CEO of Twitter:
    and the lovely Tom Coates:

    Now just before this crass public display of arrogance, he said something just as telling:

    The thing is, Draw Something has a maths problem. The so-called Birthday Paradox is kicking in. This is named for the unexpected result that if you have 23 people in a room, there's a 50:50 chance two of them have the same birthday.
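
    The arithmetic behind that is easy to check with a few lines of JavaScript (a quick sketch of the standard birthday-problem calculation, with 365 birthdays standing in for a word list):

    // Probability that n random picks from a pool of poolSize are not all
    // distinct, i.e. that at least one repeat has appeared.
    function repeatProbability(poolSize, n) {
      var allDifferent = 1;
      for (var i = 0; i < n; i++) {
        allDifferent *= (poolSize - i) / poolSize;
      }
      return 1 - allDifferent;
    }

    console.log(repeatProbability(365, 23)); // about 0.507 - better than even odds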

    There's a similar effect with games. If you keep randomly picking a word from a list, you'll see repeats quickly. Classic board games understand this - this is why Balderdash, Pictionary, Trivial Pursuit etc insist you use a discard pile after shuffling and picking a question, so you only pick a new card from those you haven't seen. Draw Something isn't doing this, so we're all seeing words repeat, which is discouraging play. This is all over twitter too:

    One way or another, I think Draw Something has peaked.

2012-03-21

  • 19:09 UTC When you're the merchandise, not the customer

    Jonathan Zittrain posted today that he is not the source of the quote widely attributed to him:

    I participated in the Berkman Center’s fascinating HyperPublic symposium in the summer of 2011. When moderating a panel I invoked the aphorism that “When something online is free, you’re not the customer, you’re the product.” It’s a way of encapsulating the idea that online free services usually make money by extracting lots of data from users — and then selling that data, or using it for targeted availability of those users for advertising, to advertisers. In that sense, the advertisers are the clients, and the users enjoying free content are what’s being sold. (Of course, sometimes that happens even when the user pays.)

    I didn’t coin the phrase, and since it was featured (and attributed to me!) in wordsmith.org’s wildly popular “word a day” as a thought for the day accompanying the word “enceinte” — I sought to nail down its provenance.

    The first use of the quote that we can find is as a comment within the famed MetaFilter community in August 2010. The user’s name is blue_beetle, who might be someone named Andrew Lewis. It’s entirely possible I saw it there, as MeFi is one of my five favorite sites on the Web.

    I was pretty sure this idea dates back further, so I went digging. First I found Josh Klein's 2009 blogpost, which cites Philip Broughton's 2008 book Ahead of the Curve: Two Years at Harvard Business School

    "My favorite moment comes in an anecdote about an MBA candidate who, not getting his way, complains to an administrator, “I’m the customer! Why are you treating me so badly?”

    To which the administrator responds, “you’re not the customer. You’re the product.”

    But the sense is not quite the same there - an MBA is not a free web service after all. Going back a little further, this 2006 discussion at Joel on Software Is the Magical Fairy-tale For Google Engineers about to End? (nicely prefiguring James Whittaker's Why I left Google) includes this contribution from Drew K:

    Like clam pointed out, Google's customers are the advertisers. "Skooter" is a user. Just like with ad-supported broadcast TV, you're not the customer, you're the product.

    The idea is pretty well-expressed there, but I think we can go back further. In 2004, Coding Forums discussed the then-new Gmail, and liorian commented:

    From a Google perspective, you're not the customer. The ad service buyer is the customer. You're the commodity. By making you a more attractive commodity, i.e. by making sure to only serve you an ad if you are in the target population for it, they are making the ads pay better for their customers, and they can reap a large part of the difference to their competitors, the other ad services.

    This isn't a new idea then, as the analogy to television makes clear. The earliest, most thorough exegesis of this idea I have found is Claire Wolfe's 1999 article Little Brother Is Watching You: The Menace of Corporate America, which opens with:

    Perhaps because you're not the customer any more. You're simply a "resource" to be managed for profit. The customer is someone else now — and usually someone without your best interests at heart.
    And has a continuing refrain of “Who is the Customer? Not you”, ending with
    Who is the customer? Not you, whose life is reduced to someone else's salable, searchable, investigatable data. The customer is everyone who wishes to own a piece of your life.

    The underlying warning is definitely worth thinking about — Maciej Ceglowski eloquently made the case for why you should pay Software Artisans on a recent TummelVision — but the deeper changes to what it means to be a customer matter too. There are other things we take part in without paying or being sold, because we find shared value in them, and the net enables those too.

2012-01-28

  • 17:40 UTC QR Codes: bad idea or terrible idea?

    People have a problem finding your URL. You post a QR Code. Now they have 2 problems. Or more:


    1. They see a chunk of robot barf on your poster, and have to realise it isn't a crossword puzzle, but a QR code.
    2. They need to take a digital photograph of it with their phone. If they have a laptop, even with a camera, this requires physical contortions.
    3. They need an application on their phone that can make sense of a QR code.
    4. They need a lot of patience as they fiddle with it.
    5. They need a working network connection to resolve it.

    Conversely, with a URL they could type it in, take a photograph of it and type it in later, or if they have the right app, it will recognise the URL text from the image and make it clickable.

    That is the irony of this. QR Codes ignore years of research and culture on how to communicate meaning in symbolic form designed to be captured by image processing tools behind a lens. We have this technology. It is called writing.

    Written language has a set of symbols that are relatively unambiguous, that are formed of curves rather than hard edges, making them resilient to noise, and that have been market-tested for millennia. QR Codes don't just ignore this, they ignore the relative success of one-dimensional barcodes. Notice something about a barcode? It has the number printed on it as well, so you can type it in if the scan fails. QR Codes don't do this, so it's far too easy to put the wrong one in, or fail to replace a mockup. Which is why so many QR codes link to Justin's site instead.

    The only place you should use QR codes is if you have a dedicated reader for them, like a classic barcode scanner, and a workflow that is designed for this and actually saves time. If you do empirical research on using QR codes with the public, you'll likely see 80% worse performance than with text, as this museum did. By all means try the experiment and report your results. Put up a QR code and a printed URL and see which gets the most usage.

    Or listen to others:

    a majority of our respondents knew more or less what they were for, very few (n=2, or around 7%) were successfully able to use QR codes to resolve a URL, even when coached by a knowledgeable researcher.[..] A strong theme that emerged — which we certainly found entirely unsurprising, but which ought to give genuine pause to the cleverer sort of marketers — is that, even where respondents displayed sufficient awareness and understanding of QR codes to make use of them, virtually no one expressed any interest in actually doing so.

    As Alexis Madrigal puts it:

    Is it really faster and better to use a QR code that will direct you to part of a marketing campaign rather than getting a broader sweep of information by simply using the browser that you already use all the time on your phone? In the instant cost-benefit analysis I do every time I see a QR code, it has yet to make sense for me to fire up the decoder app I have installed on my phone.

2012-01-23

  • 22:29 UTC Google Plus admits they want fake names

    Today, after 7 months, Bradley Horowitz announced that Google Plus will accept some pseudonyms. Kinda. If you can prove you're already famous. And can convince their robot it looks like a name. However, Google Engineer Yonatan Zunger spills the beans in a comment on that thread:

    First of all, you might ask why we have a names policy at all. (i.e., why we don’t simply go with the JWZ proposal) One thing which we have discovered, while putting some miles on the system, is that it is indeed important to have a name-based service rather than a handle-based service. This isn’t a matter of functionality so much as of community: You get a different kind of community when people are known as Mary Smith than when they are known as captaincrunch42, and for a social product in particular we decided that the first kind of community is the one we want to build. In order to do that, we want to establish a general norm that the names you put in to the system should be names, not handles.

    So one thing that our name checking flow tries to catch is handles, which should normally be nicknames, shown in addition to a name. The other important thing it’s trying to catch is people who are creating individual accounts, rather than +Pages, for non-human entities such as businesses or organizations. The behavior of +Pages is deliberately restricted in the system, and we don’t want people to be creating fake human accounts to circumvent that. The name check turns out to be a very powerful tool to catch these.

    Our name check is therefore looking, not for things that don’t look like “your” name, but for things which don’t look like names, period. In fact, we do not give a damn whether the name posted is “your” name or not: we will not challenge you on this basis, nor is there any mechanism for other users to cause you to be challenged for this.

    There are two main cases where the name check screws up. One is false positives: people (such as you) who have unusual names which get flagged because they looked like handles. Being able to appeal via things such as drivers’ licenses is useful for this case, since it’s a simple “oh, we got this wrong.” The other case is people such as +trench coat, who are so well-known under this handle that it would be bizarre not to let them onto the system under this name. For this case, we allow appeals based on being well-known under the name: thus the ability to prove the “established pseudonym.” We’ve deliberately set the threshold for that latter case fairly high for now, but we intend to continue to tune it; the objective is that the frequency of such names should basically be the same as their frequency in meatspace.

    So to answer your questions one-by-one:

    (2) “Meaningful following” only applies to cases of established pseudonyms which do not look like names. The definition of “meaningful” is deliberately vague so that we can tune it, so that it behaves in a natural fashion.

    (3) That’s correct; drivers’ licenses are for false positives, not pseudonyms.

    (4) Unusual names will indeed hit friction, because of false positives. We’re trying to minimize that, but it’s going to take some trial and error.

    (5) Google+ can absolutely be your first identity online. No matter what your language, no matter where you come from. The “established pseudonym” logic should apply to a very small subset of people. If some groups are seeing a higher false positive rate than others, that’s a bug, not a feature, and we have the data available to spot this situation and remedy it.
    (posted in full, in case of subsequent retraction, and because G+ doesn't have permalinks for comments)

    Yonatan admits what Bradley obscures: that this is an Identity Theatre issue. They don't want your name. They don't care if you have a forename in one language and a surname in another. Let me quote this exactly:

    Our name check is therefore looking, not for things that don’t look like “your” name, but for things which don’t look like names, period. In fact, we do not give a damn whether the name posted is “your” name or not: we will not challenge you on this basis, nor is there any mechanism for other users to cause you to be challenged for this.

    This is what I suspected when I wrote Google Plus must stop this Identity Theatre.

    Google+ is letting an algorithm decide what is a name and what isn't. You will be forced into its Procrustean idea of what names are, or be harassed for it. You have to pass as normal, like call centre workers forced to learn to sound American.

    You can create disposable accounts with fake names, as long as they look plausible to Yonatan's bot.


    This algorithm has allowed people called 'panel heater', 'The Phoenix Rising', 'tous les mais du monde', and 'Mehr Decent' - a bot with a well-known actress's photo, posting links to a single website - to follow me (and that's just in the most recent 30 I checked).

    So Google continues to encourage fakers and discourage those who need a pseudonym for good reasons.
  • 10:14 UTC Could Apple make premium devices in the USA?

    After This American Life's disturbing episode on Apple's Chinese factories, the NYT wrote a defence of Apple, which said it was just too expensive to build their products in the USA:

    Not long ago, Apple boasted that its products were made in America. Today, few are. Almost all of the 70 million iPhones, 30 million iPads and 59 million other products Apple sold last year were manufactured overseas.

    Why can’t that work come home? Mr. Obama asked.

    Mr. Jobs’s reply was unambiguous. “Those jobs aren’t coming back,” he said.

    For computers, phones and tablets, it's hard to make a real premium product, as the economies of scale work so well - Tim Cook's Apple has closed in on PC prices by a focus on costs and suppliers, and by building fewer models and relying on Chinese flexibility to ramp them up.

    The Gold iPad 2 had a huge premium price, but also weighed more than 3 times as much as a normal iPad.

    Instead, what if Apple made premium USA iPads, MacBooks and iPhones? They could have a distinctive look, so people knew they were US made, focus on the higher-end models, and charge a premium markup for the warm glow of supporting US jobs.

    How much more would it cost? Hard to say, according to the NYT:

    It is hard to estimate how much more it would cost to build iPhones in the United States. However, various academics and manufacturing analysts estimate that because labor is such a small part of technology manufacturing, paying American wages would add up to $65 to each iPhone’s expense. Since Apple’s profits are often hundreds of dollars per phone, building domestically, in theory, would still give the company a healthy reward.
    [...]
    Another critical advantage for Apple was that China provided engineers at a scale the United States could not match. Apple’s executives had estimated that about 8,700 industrial engineers were needed to oversee and guide the 200,000 assembly-line workers eventually involved in manufacturing iPhones. The company’s analysts had forecast it would take as long as nine months to find that many qualified engineers in the United States.

    In China, it took 15 days.
    [...]
    A few years after Mr. Saragoza started his job, his bosses explained how the California plant stacked up against overseas factories: the cost, excluding the materials, of building a $1,500 computer in Elk Grove was $22 a machine. In Singapore, it was $6. In Taiwan, $4.85. Wages weren’t the major reason for the disparities. Rather it was costs like inventory and how long it took workers to finish a task.

    Compared to the huge price disparities for other goods, these seem modest; for example, Timoni found a nice carry-on bag recently:


    So here's my proposition for Tim Cook:
    Reopen the Elk Grove Apple factory to sell top-line Apple products, designed for those who want 'designer' luxury goods and are willing to pay more for exclusivity. Make 'made in USA' a key argument for a premium price. That way you need fewer staff than in China, and paying them well just adds to the cachet of the devices. You could cover them in Jasper Johns' Flag, visibly number them as a limited edition, or come up with something more creative. As a way of extending the product line to a new, higher price point, while quieting those who wish Apple did more in the US, it seems an obvious move.


DossierPagePersonnelle


The same page on other sites:
MicroFormateurs:KevinMarks