
A couple of weeks ago, while avoiding the close attentions of a 25 bus on the Vogue Gyratory, one of my panniers clipped my rear wheel and was sent spiralling backwards towards Sainsbury’s. As this particular bag held my MacBook and all my original interview material, it is precious and costly to think about replacing, and though the bike was leaning in towards the kerb, my instinctive response was to ‘sling on the anchors’, as The Sweeney would have it, and I braked hard. The bike, now lighter than usual, rocked forward, flipping over the axis of the front wheel, and I was sent sprawling onto the road’s surface hands first. My second instinct was to yell ‘Not the face!’, but my arms broke my fall before I slid into the kerb on my left side and knee. Blood and panic washed everywhere as the blue Transit van that had been following me round the exit back onto the Lewes Road stopped and the passenger checked to see if I was OK. Busy collecting my things from up and down the near-side lane of the road, I mumbled a shocked ‘Yes, I’m fine’, and moved my bike, bags and grazed self onto the pavement.

So far, so unfortunate; I should have fitted the panniers so they don’t foul the wheels, and would have avoided putting my knee through my jeans and bruising a rib. Or so I thought. Later that day, as I attempted to cycle home (I had continued on to the University as I had meetings and a teaching commitment to attend to), I realised I couldn’t hold the handlebars with the thumb of my right hand. Broken wrist was my amateur diagnosis, and so the x-ray at A&E appeared to confirm when I was speedily seen at the Royal Sussex later that evening. Suspected fracture to the end of the ulna: common in any trauma where you land on the flat of your hand, palm open and perpendicular to your arm. I have done this before, twice; once resulting in a fractured scaphoid requiring long-term plaster, once similar to now, caused by landing heavily while playing football and resulting in wearing a splint for a month. An appointment was made at the Fracture Clinic for a hand specialist to confirm the original diagnosis and to clarify the follow-up care. Come back Tuesday morning.

Hating hospitals with the clarity of any healthy, non-health-professional human, I made sure I was seen early, so I could avoid all the sick and infirm. My own frailties are embarrassing enough, without the demand that I be exposed to other people’s frailties and the concerns of their caring relatives. See me first please: 08:30, first appointment in the book. I arrived nice and early, prompt meaning I was 10 minutes in advance of my appointment time, and took a seat in the comfortable but worryingly large waiting area. How many people do they have to cater for? Is there an epidemic of brittle-boned cyclists in Brighton? In hindsight, I can see why they have such nicely padded chairs in the waiting room; we don’t really want anyone to cause any further damage to a broken skeleton, do we? But my main concern was getting seen and walking back to North Laine and a cup of tea.

My appointment duly arrived and was swiftly aborted. The x-ray server was down. Now, for those of you who are too cautious to cycle to work of a morning, or more likely too sensible to fall off regularly, you may not know that x-rays are no longer printed. The examination of your bones is now carried out digitally, with the results stored electronically and served to a web browser for inspection. This is a national service, enabling the rapid recall of x-ray material as part of your medical notes and allowing the speedy transfer of material between departments and institutions. Sadly, on this morning, the server was down and no x-rays were retrievable. The consultant who explained the situation mentioned that the clinic had 80 patients due to visit that day, all of whom would have x-rays that needed to be inspected. Could I return to the waiting area until the software was working? Upon further investigation, it was discovered that the issue was not a local network connection problem, but that the national service was being restarted. No x-rays were currently available for inspection.

My predicament was soon remedied and the arm is mending happily, if slightly smelly, with the support of the splint. However, the failure of an x-ray server bears longer consideration. If the x-rays were taken in preparation for surgery and were required to confirm the status of a patient prior to a procedure, or worse, during the operation itself, how would the care of the patient be guaranteed? Does the theatre nurse have to go back and check a cached copy? Is there a hospital policy about making sure that the browser history is retained for future reference? What sort of server is used for this delicate information? Is it sitting on virtual machine servers, with data redundancy to ensure that a seamless service is maintained? Well, obviously not that Tuesday morning. Who specifies this stuff?
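I have no idea how the real national archive is put together, but the questions above suggest at least one mitigation worth sketching: a clinic-side client that falls back to a locally cached copy of an image when the central service is unreachable. What follows is a minimal sketch in Python, and everything in it is hypothetical (the URL, the cache path, the file naming); it illustrates the idea, not any actual NHS system.

# A clinic-side fetch that prefers the live service but degrades to a local
# cache when the national server is down. All names and URLs are placeholders.
import os
import urllib.request

CACHE_DIR = "/var/cache/xray"                        # hypothetical local cache
NATIONAL_SERVICE = "https://imaging.example.nhs.uk"  # placeholder, not a real endpoint

def fetch_xray(patient_id, timeout=5.0):
    """Return image bytes, preferring the live service over the cache."""
    cache_path = os.path.join(CACHE_DIR, f"{patient_id}.dcm")
    url = f"{NATIONAL_SERVICE}/xrays/{patient_id}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = resp.read()
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(cache_path, "wb") as fh:           # refresh the cache on success
            fh.write(data)
        return data
    except OSError:                                  # covers timeouts, DNS and HTTP errors
        if os.path.exists(cache_path):
            with open(cache_path, "rb") as fh:       # degrade gracefully to the last copy
                return fh.read()
        raise RuntimeError("No live service and no cached image available")

Even a fallback this crude would have let the clinic see yesterday’s images while the national service restarted; whether a cached copy is clinically acceptable is, of course, exactly the kind of policy question the paragraph above is asking.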

As an academic who is interested in the continual creep of the digital and virtual into the realm of the physical object, this is fascinating. How are the roles of the consultants and nurses responsible for first-line diagnosis and clinic support affected by the digitisation of records and test materials, and how often does the service fail to deliver for patients? I suspect that no one is counting, and no one is asking how the staff are reacting to these changes, and this worries me far more than this itchy splint.


The first reviews of the Kindle Fire have started to appear online. These early opinions are often reflections on the new reader written as soon as the box has been opened, as the first shipment to Amazon customers began at the start of this week. The look of the product and the performance of the software on the Android-based device are among the issues raised and examined by bloggers, tweeters and journalists alike, ever since the Amazon Kindle Team (@AmazonKindle) tweeted that the wait was over and that shipping had begun.

The launch of any new device is a spectacle for modern media to judge and participate in. The opportunity for a blogger to be there early and find their blog linked to and quoted by a multitude of other internet outlets is such that the time required for reflection is removed. The technology reviews are then aggregated to form meta-reviews, with further media outlets reviewing the relative temperature of the reviews thus far, taking a view on the relative ideological bias of the news outlet involved and its relative proximity to the internet leviathan responsible for the hardware/software/user experience being dissected. Some tech writers will be privileged and see the device early enough to form their opinions and polish their prose in advance of the first few hours of internet traffic. For the humble blogger, this is a race, entered into as a privateer, dependent on the performance of the local logistics agent.

While internet news fails to keep one’s chips out of one’s lap, it does provide, with speed and colour, information that once would have taken weeks to disseminate. Twitter is currently receiving tweets that include the words “Kindle Fire” at a rate of 160 per minute, covering all the topic areas mentioned above and far more that I have neither the guile to imagine nor the patience to document here. This flash flood of information can, in some cases, be reviewed later, with inaccuracies challenged and more likely exposed by other blogs and later postings. Monthly updates of sales achieved and anticipated will be discussed. Advertising for the product and press releases placed to influence its social shaping will be deconstructed and further analysed. How the product is adopted and appropriated by the user becomes an activity that is directed by the socialisation of the product. Its success is in some senses given by its ability to acquire meaning and relevance for those beyond the realms of people who already own the product. The technical or software object gains significance for the wider public, achieves recognition and acquires a myth. Whereas once this process was fuelled by word of mouth, and then by the different forms of media created to spread this word further and faster, it is now a process enabled and documented in text-based, web-enabled communications. The discourse created can be measured and examined, reread and appropriated, questioned and further written about. So while the bloggers are opening boxes to find that the Kindle has been set up with their name as part of the Fire welcome page, an opportunity to examine the processes through which the product’s appropriation is contested is presented to the ever-vigilant digital academic.
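That 160-per-minute figure is the kind of measurement the ever-vigilant digital academic can make for themselves. Below is a back-of-the-envelope sketch of how such a rate might be derived from a batch of timestamped posts; the sample data and the matching rule are my own assumptions, and no real Twitter API is touched.

# Count how often a phrase appears per minute in a set of timestamped posts.
# The sample tweets are invented stand-ins for harvested material.
from collections import Counter
from datetime import datetime

sample_tweets = [
    (datetime(2011, 11, 16, 9, 0, 12), "Just unboxed my Kindle Fire!"),
    (datetime(2011, 11, 16, 9, 0, 47), "kindle fire feels heavier than expected"),
    (datetime(2011, 11, 16, 9, 1, 3),  "Waiting on the courier..."),
]

def mentions_per_minute(tweets, phrase="kindle fire"):
    """Count tweets containing the phrase, bucketed by minute."""
    counts = Counter()
    for when, text in tweets:
        if phrase in text.lower():
            counts[when.replace(second=0, microsecond=0)] += 1
    return counts

for minute, n in sorted(mentions_per_minute(sample_tweets).items()):
    print(minute.strftime("%H:%M"), n)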

Also posted at http://splash.sussex.ac.uk/blog/for/rp236/2011/11/15/the-design-is-just-incredibly-unoriginal

Not exactly close to either Brighton or Hove, and not part of this year’s festival (topical reference – tick), but the video games selected for an exhibition being hosted by the Smithsonian Art Museum, entitled ‘The Art of Video Games’ have been announced. This list was collated with the help of a poll of the public, with gamers able to nominate their personal favourites.

The list is speckled with some classic titles, including work by Jeff Minter, Fumito Ueda, Hideo Kojima and Tetsuya Mizuguchi. It includes both console and PC-based gaming, and covers most formats, including the early cassette-based software that powered the likes of the Commodore 64. While it is always lovely to gaze at the visual beauty of ‘Shadow of the Colossus’, I am glad to see that the game play and originality of Worms has found a place in this exalted company.

This exhibition, which is taking place between March and September next year, is a further step in the acceptance of the video game as an aesthetic object; one where the narrative is co-determined by designer and player, and one where the graphics are frequently supplemented by the imagination of the gamer.

Video game code is all about compromise. How can the game code provide a realistic impression of the physical laws of the world, or at least a consistent rendering of the world in which the action is based, while running on a piece of hardware that is frequently underpowered and may be of a specification that is up to 5 years old? (The PlayStation 2 had a production life of 8 years, with the same hardware specifications from day one.) Graphics have to be rendered and dropped with amazing efficiency, and any lag in the controls will appear to the gamer to be a failure of the game to respond, making the game difficult to play, reducing its gaming life, and ruining the reputation of the software company responsible for creating it.
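For readers who wonder what that compromise looks like in code, here is a toy sketch of the usual answer: a fixed-timestep loop that keeps the simulation (and therefore the controls) ticking at a steady rate and drops rendered frames when the hardware falls behind. The update and render functions are placeholders, and the numbers are illustrative rather than taken from any real engine.

# Fixed-timestep game loop: input and physics always advance in 1/60s steps,
# while rendering happens as often (or as rarely) as the hardware allows.
import time

TICK = 1.0 / 60.0          # simulate at 60 Hz, come what may
MAX_CATCHUP_TICKS = 5      # drop frames rather than spiral when badly behind

def run(update, render, duration=1.0):
    previous = time.perf_counter()
    lag = 0.0
    end = previous + duration
    while time.perf_counter() < end:
        now = time.perf_counter()
        lag += now - previous
        previous = now
        ticks = 0
        while lag >= TICK and ticks < MAX_CATCHUP_TICKS:
            update(TICK)   # controls stay responsive because this never waits on drawing
            lag -= TICK
            ticks += 1
        render()           # a slow frame here costs smoothness, not responsiveness

if __name__ == "__main__":
    state = {"t": 0.0}
    run(lambda dt: state.__setitem__("t", state["t"] + dt), lambda: None)
    print(f"simulated {state['t']:.2f}s of game time")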

The most affecting video games will be remembered for smoothly rendered graphics, which slickly invite the gamer into a new paradise of ludic challenges. The games of yesteryear are remembered for the joy created by playing them, and it is always a shock to the memory to see how blocky or pixelated the graphics now appear. Time is unforgiving, and each subsequent generation of video console reduces the previous pinnacle of polygon performance to the status of a bundle of hopeless line drawings.

So speed is all; not necessarily in the game’s action, but definitely in the smooth progress of the game code. This, in part, has driven the segmentation and specialism within the game development industry, with game engine companies providing development software and game play specialists supporting designers and graphical specialists. As with all other industrial structures, specialisation is key to developing efficiencies in production. Which is great when the fruits of the development cycle are the likes of Heavy Rain or the forthcoming LA Noire, where the spirit of the auteur is channelled by producers looking to explore a new creative medium and develop narratives that take advantage of the higher levels of affect available to play with. Anyone who doubts that gaming creates an embodied response in the gamer should try to cut their finger off, as demanded in one of the games on the Smithsonian list. However, I hope there is still space in the world of social mobile gaming for the development of a new Daredevil Dennis!

This post was initiated after a meeting of the Research Centre in Material Digital Culture at the end of February. After reading the two papers and coming armed with notes, I had a view that one of the papers was defining the world wide web in a manner I felt was technologically determinist and reductive. Adamant, I shared my views with the other members of the group present, including both my supervisors, who politely refrained from commenting on my own reactionary attitude to the papers. I repeated the charge, confident that my reading of the author’s views was illuminating and progressive. A calm descended, and the quiet that ensued was just long enough to allow me to consider the silence of the other readers, and to allow the discussion to move on to the actual views expressed in the paper in question, and not the views or opinions I had inferred from my reading of the authors’ intent.

Hmm. I have reread the papers in question this evening, reviewed my notes and reconsidered the evidence I had for the charge I had in mind. On review, it’s thin. “Of what was I charging the authors?” I hear you ask. Well, I thought that the paper included a descriptive definition of what “the internet does”. Now, why had this impertinence developed into an intellectual equivalent of an ‘ear-worm’: a song so catchy it is impossible to remove it from one’s head, regardless of the gossamer-thin nature of the melody or the shallow veneer of the lyric? On reflection, and after rereading the papers, it is simply because the two papers concerned, which discuss crowd-sourcing in action on Wikipedia in one paper and the commercial application of the concept in the other, are descriptive of the process and are not interested in a critical appraisal of the product of the process. Wikipedia is critically appraised and the claims made of the website investigated through an analysis of data relating to the creation and maintenance of the site’s pages. The implementation of software bots to translate pages from English into minority languages is hailed as a process of co-authorship, removing opportunities for bias and inaccuracy to enter into the text. In the smaller languages, the bots outnumber the human editors. For me, the obvious outcome of this reliance on automated software, designed to ensure that minority languages are included in Wikipedia, is to limit the opportunities for the creation of original content in those languages. If a page for Super Furry Animals is translated into Cornish from Welsh or English, we will never receive the benefit of a page created from a Cornish perspective. All content is replicated and interlinked until it becomes part of the protocol of control active on most of the existing internet, and in this process the bots shoulder a great part of the workload.
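To make the bot’s role concrete, here is a simplified sketch of the gap it fills. Real Wikipedia bots work against the MediaWiki API; in this sketch the language coverage is hard-coded sample data, and the titles and language codes are purely illustrative.

# Find article titles that exist in a source language but are missing in a
# target language -- the pages a translation bot would mechanically fill in.
coverage = {
    "Super Furry Animals": {"en", "cy"},      # English and Welsh pages exist
    "Brighton":            {"en", "cy", "kw"},
    "Lexia to Perplexia":  {"en"},
}

def translation_candidates(coverage, target="kw", source="en"):
    """Titles with a source-language page but no page in the target language."""
    return sorted(title for title, langs in coverage.items()
                  if source in langs and target not in langs)

print(translation_candidates(coverage))
# ['Lexia to Perplexia', 'Super Furry Animals'] -- pages a bot would translate,
# and which, as argued above, may then never be written from a Cornish perspective.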

I still feel that, while the activity of bots is important to maintaining the content of Wikipedia, elevating the processes they facilitate to collaboration is akin to giving the stapler a credit on a term paper, or acknowledging the printer in the bibliography of your dissertation. These are tools, just as the robotic lawn mower tested last week on the lawns of Sussex uni is a tool. It may be able to know what is grass and what is a sleeping second year (at least I hope that was part of the programming), but it does not find a bare spot and lay new seed to fill in the missing turf unless it has been asked to by the programmer. Collaborators have to bring something of themselves to the party.

Also posted here

For the last week, I have found myself transfixed by the disaster that has befallen Japan. Mortified, concerned for friends in Niigata, Tokyo and Ichinoseki, a town of 124,000 people that sits about 40 minutes north of Sendai by train. The full bleak passage of events has been brought home by the constant updates of the news channels and newspaper websites, but most of all by a site that reports the world’s seismic activity.

The United States Geological Survey runs a site that monitors and reports on all seismic activity across the globe. I first found the site after being woken by an earthquake while on holiday in 2007. The tremor was a very minor event, enough to shake the room, but not enough for the hotel management to have noticed, as they were unaware of the tremor the following morning. The site reported that first earthquake as magnitude 4.8. The same page on the site now documents the 500 tremors experienced by Japan since 05:46 GMT on 11th March.
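For anyone who wants the same figures programmatically rather than through the browser, here is a rough sketch of a query against the USGS event web service. The endpoint, parameters and field names reflect how that service is commonly documented, but treat them as assumptions to be checked rather than a guaranteed interface; the coordinates are just a rough point off the Sendai coast.

# Query the USGS event service for earthquakes near Japan in a date range.
import json
import urllib.parse
import urllib.request

BASE = "https://earthquake.usgs.gov/fdsnws/event/1/query"

def quakes_near_japan(start="2011-03-11", end="2011-03-18", min_mag=4.5):
    params = urllib.parse.urlencode({
        "format": "geojson",
        "starttime": start,
        "endtime": end,
        "minmagnitude": min_mag,
        "latitude": 38.3, "longitude": 142.4,   # roughly off the Sendai coast
        "maxradiuskm": 1000,
    })
    with urllib.request.urlopen(f"{BASE}?{params}", timeout=30) as resp:
        events = json.load(resp)["features"]
    return [(e["properties"]["time"], e["properties"]["mag"]) for e in events]

if __name__ == "__main__":
    print(len(quakes_near_japan()), "events of magnitude 4.5 or greater")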

After the first major earthquake, a tsunami wave, created by the upward movement of the plate that the island of Japan is part of, forced its way onto the country, and the towns and cities of the coastal reach of Iwate and Miyagi Prefectures were swept away. The brutal force of the wave was captured in a video recording made from a helicopter hovering over the coastline. I can only wonder what was going through the minds of the pilot and camera operator as they saw the wave speed inland, towards the city and airport, decorated by the fires caused by the destruction of houses and gas mains in the path of the wave.

Later, video recordings made by people trying to avoid the wave were released. Closer to the ground, they focus on the vans and cars being swept away, the boats pushed into alien city centres, and the houses that appear to levitate as they start to drift, unsecured, released from their foundations. The videos coincided with the emergence of the hashtag ‘#prayforjapan’ as people used Twitter and Facebook along with email to track down information about friends and family. A Google search of the hashtag reports over 8 million results. The fight to contain the nuclear accident at the Fukushima nuclear power station, and the need to provide food and shelter for people in the north of Japan, now covered by snow, may continue for months.

If you compare this to the news reported of the protests in Libya, the difference is stark. A militaristic regime maintains control over the communication systems across a desert country, limiting access for overseas media organisations and limiting the intelligence available to the local opposition. The utopian forces of social media cannot function in a country where the infrastructure is not open to public use. Updates on the progress of the government forces are reported by family and friends using telephones. The only video footage available is that provided by the state, in the form of official speeches or celebrations with flags, and the only other announcements are restricted to interviews or statements given by phone by academics who formed part of the original demonstrations for democracy.

There is a phatic, performative aspect to social media that has been highlighted in numerous articles and books since its emergence. It is easy to place a hashtag on a tweet and show the world that you are a caring, sensitive citizen of the globe. You may be motivated by concern for the safety of friends and family, and I defy anyone not to be moved by the thought of the loss of 10,000 people, swept away from the north-east coast of Japan.

I also dare the same imaginary philistine to hear the interviews with Dr Jalal Al Gallal of the National Libyan Transitional Council as she discusses her fears for the future of the people she is seeking protection for in demanding a No Fly Zone and support in their cause against the government of Libya.  The same goes for the interview with a member of staff hiding in Salmaniya Hospital as government forces clear the area of protestors. The panic in the woman’s voice appears to throw the presenter, as she concedes that they will soon be found by the troops clearing the hospital.

While the live blogs, maps and citizen videos are illustrating the story of a human disaster in Japan, the media coverage of Libya and Bahrain falls back upon trusted modes of story acquisition. The power of internet-based media, social or institutional, has been illustrated by the speed of the information about the earthquake and tsunami, while the limits of that freedom are demonstrated by the actions in the Middle East. There is no utopia in technology without communication and agreement on its use, and even then, it will always be fragile.

As of later today, there is likely to be a story about the new iPad: its launch date, or more likely various market-specific dates, the price and finally the technical specifications of the tablet. The debate on what the new product will offer was triggered by the announcement of a press conference two weeks ago, and has been covered in detail on Apple-focused websites like 9to5mac.com and newspaper sites such as The Guardian and The Wall Street Journal.

The event has been covered in the mainstream media, and Apple, with iOS, iPhone and iPad along with their range of laptop computers, have placed themselves in the vanguard of the mobile digital revolution. These promise to be devices that will do whatever you want, wherever you are. Well, at least they will as long as Apple agrees that you can: through the ownership of iOS along with the hardware design, and their control over all the software applications that can be purchased via the App Store, Apple have control over what you may or may not do with your iPhone or iPad.

How this control exists is pertinent to all networked devices. As explored by Galloway in the concept of protocol[1], the establishment of rules, the standardisation of application activity and the limitation of a device to restrict the activities it may be used for are all aspects of protocological control. These structures enable control to be enforced without resorting to centralisation or forcing a hierarchy. The user opts in, attracted to the devices by the sense of being at a pinnacle of technology and mobility, and opts into the controls of the contract with Apple.

Not that this is a stick with which to beat Apple Inc. exclusively. All network devices are nodes on a network of convivial compliance. The limits placed on user intervention are part of a protocol for any network, and are determined by the owner of the network, not by the individual node. The recent PlayStation 3 hacking is a case in point. The defence used by the hacker “graf_chokolo” is that he wants to assert ownership over the games console he has paid for.

Videogame consoles have all been the site of a battle between the format owner and the hacker community. Several versions of Linux were developed to run on both the Xbox and the 360, and while Sony supported the option of installing a secondary operating system onto the PS3 when the console was launched, this was removed with the first redesign of the console in 2009.

Homebrew software has long been distributed for the Nintendo DS and PlayStation Portable, with software to play games and read books stored on memory cards developed to extend the use of the consoles, and to allow the console user to extend their rights of ownership. This is my console, see what I can do with it!

Beyond the world of the hacker community, the Free Software Foundation has campaigned for the free distribution of software. They describe devices such as the iPhone and iPad as a threat to user and developer freedom. The Unix-based operating system of the iPhone is a proprietary system. All software developed for the operating system is required to comply with a strict licensing agreement, and to maintain the controls implemented by Apple’s Digital Rights Management system. This restricts the opportunities for a user to share their purchased software, even if the creator of the application wishes that this should be possible. This is an example of a hardware copyright system that removes the right of the developer to have control over the object of their own labour, their own software copyright.

Richard Stallman, he of the beard, who started the FSF in 1985, will be speaking at Sussex in March. The details of the lecture are here and you can register for the event here.


[1] Galloway, Alexander R. “Protocol, or, How Control Exists after Decentralization.” Rethinking Marxism 13, no. 3/4 (2001): 81–88.

Also posted at http://splash.sussex.ac.uk/blog/for/rp236/2011/03/02/so-is-this-one-mine

In 2002, Katherine Hayles wrote a small book called ‘Writing Machines’, published by MIT Press. It is not a strictly academic text, being more autobiographical in tone, and Hayles describes it as an experiment. The text weaves a narrative of the developing academic career of ‘Kaye’ as she progresses in research that leans on a love and knowledge of literature to explore the emerging world of multimedia work.

Exploring the worlds created by Michael Joyce and Talan Memmott, Hayles examines how a text, its content and the materiality of a media work are integrated into the processes through which multimedia textual art is read. She relates this to the art books of Tom Phillips, where the materiality of the text is clearly an integral aspect of the narrative. Finally, she explores how novels such as House of Leaves by Mark Z Danielewski rely on digital techniques to create “technotexts” that stretch the perception of what a book can be, and play with contemporary subjectivity. To illustrate the concepts of the text, the book is set using different fonts, page-edge decoration and a ‘lexicon linkmap’ of key words from the text.

Following in 2008, ‘Electronic Literature’ expanded and deepened this analysis. A scholarly tome, this volume retraces some of the works discussed in the first volume, but the reference points are now expanded, encompassing the work of Friedrich Kittler, Mark B.N. Hansen and Walter Benjamin, and introducing the concept of intermediation to explain how a multimedia text carries forward the components and figurative tropes of the media texts that existed in earlier epochs (oral tradition, manuscript, print). In place of the decorated form of the first volume, here examples of the texts discussed by Hayles are collated, included on a DVD with the book, and collected together on a website called eliterature.org. Here the early experiments of ‘Twelve Blue’ and ‘Lexia to Perplexia’ can be explored and their texts read/interpreted/appropriated.

All very interesting, but why am I using 400 words to pass this on to you? Well, Volume 2 of eliterature has recently been released. Both volumes have been published by The Electronic Literature Organization (ELO), whose intention is to enable the widest possible readership for the emerging medium of electronic literature. The new collection includes work from Sharon Daniel and Erik Loyer (Public Secrets), Ton Ferret (The Fugue Book), and Daniel C. Howe and Bebe Molina (Roulette). The sixty-three pieces form a full overview of the possibilities of multimedia texts. Created using Flash and JavaScript, hypertext links and generative algorithms, Shockwave video and word play, they form texts that explore how materiality in virtual forms represents the remediation of older text forms, and creates new opportunities for a viewer to read new narratives and new textual worlds formed through the aesthetics of the emerging visual form. I invite you to come in and play…


In the world of videogame consoles, the changeover from disc-based ‘box product’ video games to downloadable content (DLC) distributed digitally is one of the current open discourses of the games industry. Do people value box product above DLC games? Do gamers play free titles differently? How does the industry provide the product the gamers want most effectively (i.e. at the greatest return)?

The current model favoured for ‘AAA pillar’ games, the blockbuster titles that receive the highest investment and are expected to generate the largest returns for the developer/publisher, is a halfway house: a boxed product for the initial release of the main game, with a DLC addition made available to gamers via the internet-based network for that console (PSN or XBLA), usually some time after the original release. Last year saw the release of a zombie episode for the cowboy epic “Red Dead Redemption” by Rockstar, and this year EA have adopted this strategy for their major game releases.

EA CEO John Riccitiello used this discussion as part of a presentation to the Goldman Sachs Technology and Internet Conference earlier this week. Arguing that EA had not made the transition to the current generation of video game hardware very successfully, he used the adoption of a ‘free-to-play’ (F2P) model for additional content to illustrate how EA has adapted to the new opportunities offered by the current video consoles. Networked consoles with internal hard drive storage enable the game publisher to distribute a game direct to the gamer, removing the wasteful distribution of plastic boxes and limiting the hurtful opportunities for game piracy. Content can be personalised to the individual console, using the system of complex keys implemented in both the Xbox 360 and the PlayStation 3.

So where does the money come from in a F2P game? Where else, but from the activity of the gamer of course!

The revenues for “FIFA Ultimate Team” quadrupled when the game became a free download. In-game advertising and purchases by the gamer generated $40 million.

So, can anyone gain access to this bonanza of profit generated from the gaming labour of your customers? It appears not. In a separate discussion of the future of game development, the size game development companies must reach to ensure their survival was a cause for concern.

David Perry, CEO of Gaikai, described the fragile nature of companies developing for mobile devices.

“People who making Kleenex games – ones that you can blow your nose on and throw away – that’s not necessarily a safe place to be betting your money. If your game design is two pages, unless you’ve found the next Tetris, I would start to worry.”

So how do social games succeed? Mike Capps of Epic, in the same article, sees this as part of the consumption cycle.

“There’s a reason FarmVille and Zynga’s games are so successful – it’s advertising dollars. Not because FarmVille’s design is particularly brilliant or they put a massive investment into it – it’s advertising dollars, followed by metrics, and watching what their users are doing.”

Tracking the activity of gamers when they are using the product is a key component in generating profits over the lifecycle of the game. Monitoring the immaterial labour of the gamer and selling this labour as eyeball time, developing new iterations of successful episodes, using happy customers to pass on the joy of the gaming experience to their friends, or buying in-game content as a gift for another gamer: all of this becomes more important for the revenue of the company as they look to extend the life of every successful game.
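A crude sketch of what that monitoring can look like at the code level is below: each significant action by the player is appended to an event log that can later be mined for the metrics Capps describes. The event names, fields and local log file are inventions for illustration; a real pipeline would batch these up and ship them to an analytics service.

# Append one JSON line per player action; the log is the raw material for
# "watching what their users are doing". All event names are invented.
import json
import time

class Telemetry:
    def __init__(self, path="events.log"):
        self.path = path

    def track(self, player_id, event, **properties):
        record = {"ts": time.time(), "player": player_id,
                  "event": event, **properties}
        with open(self.path, "a") as fh:
            fh.write(json.dumps(record) + "\n")

session = Telemetry()
session.track("player-42", "match_started", mode="ultimate_team")
session.track("player-42", "pack_purchased", price_usd=0.99)  # the monetised moment
session.track("player-42", "gift_sent", to="player-17")       # gamers recruiting gamers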

All of which is likely to spell the end of box product in those countries where the broadband network is mature and the console connection rate is high enough for the major game developers to take the jump. It is likely that a greater percentage of games will be DLC-only in the next 12 months, with any game that finds a sufficiently big audience spawning additional free episodes tuned to extend the gaming experience and harvest the fruit of the gamers’ labour. How much of the game is “yours” after you have bought it is open to question when your every move is monitored and examined to create further revenue. How many companies are prepared to risk €30 million for a new ‘Heavy Rain’ when their whole company is riding on the outcome? More generic, thin, socially phatic gaming may be the best we can expect, at least for the time being. Interestingly, some of the games industry are starting to think in a similar manner. http://is.gd/AwiLaq (registration required) Blog updated 17/02/2011.

The next instalment of the ongoing debate about the content of videogames and their impact on the people who use them has started in the US this week. Bulletstorm is a first-person shooter that EA describe as featuring “an arsenal of over-the-top combat moves and outrageously large guns” (http://www.ea.com/games/bulletstorm) as your character battles to survive “hordes of mutants and flesh eating gangs”. Described as “tongue in cheek” by CNN in January, the game has been aimed exclusively at an adult audience, and it has been granted an ‘M’ (Mature) rating under the ESRB rating system.

However, the game has been made the object of a piece run by Fox News, entitled ‘Is Bulletstorm the Worst Video Game in the World?’, where psychologist and author Carol Lieberman baldly states

“The increase in rapes can be attributed in large part to the playing out of scenes in video games.”

(quoted on MCVUK: http://is.gd/dFvAPQ). Later, Dr Jerry Weichman, a clinical psychologist at the Hoag Neurosciences Institute in Southern California, adds:

“Violent video games like Bulletstorm have the potential to send the message that violence and insults with sexual innuendos are the way to handle disputes and problems.”

EA have responded to the claims made by Fox with a press release from Vice-President of Public Relations Tammy Schachter, published by Game Informer (http://is.gd/GjQice).

“Epic, People Can Fly and EA are avid supporters of the ESA and believe in the Entertainment Software Ratings Board (ESRB) rating system. We believe in and abide by the policies put in place by the ESRB.

Bulletstorm is rated M for Mature for blood and gore, intense violence, partial nudity, sexual themes, strong language and use of alcohol. The game and its marketing adhere to all guidelines set forth by the ESRB; both are designed for people 17+. Never is the game marketed to children.”

Later, Schachter compares the game to the work of film directors Quentin Tarantino and Robert Rodriguez, where hyperreal settings offer a context for acts of comic book violence.

Game Informer has published links to trailers for the game, and includes further quotes from the Fox coverage. Confusingly, the article signs off by describing the game as “sensationally violent”. The tone of the coverage on the website glories in the sensational nature of the videogame, playing up the extreme hyperreal content on offer.

This type of media story has frequently been repeated in the UK press, and this story may well make the leap from the videogame specialist websites to the mainstream press in advance of the game release.

EA are walking a fine line with the release of a game where the target audience has been primed with trailers, promotional downloads and interviews with the creative team who designed the game since as long ago as E3, in June last year, all with the aim of maximising day-one sales. EA have experienced a period of losses, recently reported as $322 million (http://is.gd/sbR3wC – comments), and Bulletstorm is expected to be the start of a turnaround for what is one of the original videogame software companies. The current wave of media interest can only help EA in keeping the game in the public eye, and allow them to profit handsomely.

Looking further into the future, the need to enforce the age rating system may further enhance the move towards direct delivery of videogames via broadband networks. The purchase of downloadable content (DLC) can be controlled by limiting the transaction to a credit card, thus ensuring that the owner is of a certain age. Whether the owner of the card is the purchaser is another matter, and one the software company will feel they can leave to the household in question. The same is true for a streaming service like Gameplay, where the game code is streamed across a network and no disc is required for a gamer to start playing. Furthermore, the game software is protected from any threat of piracy by the removal of any disc to copy.

EA have been in the vanguard of the development of broadband delivery systems for videogame content, supporting Xbox Live Arcade and PlayStation Network from the launch of both services, and looking to gain revenue from the sale of in-game content over the full lifetime of the game. With the removal of the disc-object and the direct sale of the game to the player, a software company can ensure that they derive the return of all surplus value from the game sale.

The re-emergence of dial-up as the technology of choice to avoid the controls of a government that has maintained rule by emergency law for 42 of the last 43 years is a timely lesson in the frailties present in any centrally licensed communications system. Old-fashioned copper wire communications are harder to close down completely without wiping out all economic activity. A mobile network can be manipulated locally by the removal of power to the appropriate mast, as Vodafone customers on campus have discovered recently, but the older hard-wired communications network is a web of interconnecting exchanges, the closure of which cuts a wide swathe of the local area off from communications. Hard to disguise from the foreign press in the nearby and equally affected hotel.

A new part of the ongoing development of the use of mobile electronic devices to resist the forces of the state has been developed this year by a group of students at UCL. Sukey uses a combination of Google Maps and open-source software, GPS, email and SMS to update demonstrators on the location of amenities on the route of their demonstration, and more importantly, provides a way to communicate quickly any potential activity by the police. As reported in today’s Guardian, the system was used to avoid a potential kettling situation during a demonstration last Saturday.
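I have not seen Sukey’s source, so the following is only a loose sketch of the kind of aggregation such a service might perform: collect short location reports, however they arrive (SMS, email, web), and flag any cluster of them near a point on the planned route. The coordinates, thresholds and report formats are all assumptions made for the sake of illustration.

# Cluster crowd-sourced reports around points on a demonstration route and
# raise a warning when enough of them pile up near one location.
from math import radians, cos, sin, asin, sqrt

def km_between(a, b):
    """Haversine distance in kilometres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

route_points = {"Trafalgar Square": (51.508, -0.128), "Parliament Square": (51.501, -0.127)}
reports = [  # (lat, lon, text) -- invented examples of incoming messages
    (51.5075, -0.1279, "police lines forming"),
    (51.5079, -0.1275, "exits being closed"),
]

def warnings(reports, route_points, radius_km=0.3, threshold=2):
    alerts = []
    for name, point in route_points.items():
        nearby = [r for r in reports if km_between((r[0], r[1]), point) <= radius_km]
        if len(nearby) >= threshold:
            alerts.append((name, len(nearby)))
    return alerts

print(warnings(reports, route_points))  # e.g. [('Trafalgar Square', 2)]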

The use of the internet to subvert or control dissident populations is an extension of previously existing network behaviours. The concept of disinformation is in no way a new one, and social relations are ripe for the circulation of inaccurate and interesting stories. Social media networks are more open to this, as the element of trust, of who you are and what validity your information holds, is thinner or more difficult to establish, while the provenance of your facts may never get tested. Local “white hat” developments like Sukey can fill the void in accurate and safe information, but only as long as the authorities choose to allow the underlying infrastructure to be available to support it. In this case, that will be only so long as the cost of removing the infrastructure in question is not so large as to warrant a threat to the survival of the regime in question.

Mobile technologies and cloud services are inherently fragile, relying on power, on being a node on a network, on access to limited domain addresses, or on the activity of an individual ISP in order to function. Hence the safety of the copper cable, and of a telephone system that the regime cannot afford to be seen to turn off just to limit the communications of a minority of its population.

Also posted at: http://splash.sussex.ac.uk/blog/for/rp236/2011/02/03/the-safety-of-copper