
Tuesday, 24 February 2009

What is GBO-3?

Global Biodiversity Outlook is the flagship publication of the Convention on Biological Diversity, and preparations are currently underway for the production of its third edition. Global Biodiversity Outlook 3 will be formally launched in 2010, the year proclaimed as the International Year of Biodiversity. Several ancillary products, including, but not limited to, brochures, fliers, presentations, key messages and a web-based data portal, are also planned. Global Biodiversity Outlook 3 (GBO-3) will be an important vehicle for informing a variety of audiences of the importance of biodiversity and the progress made in meeting the 2010 Biodiversity Target.

Information regarding the status and trends of biodiversity, at both global and regional levels, will be presented, as will information regarding the progress made in mainstreaming biodiversity issues into the development agenda. There will be an emphasis on case studies that illustrate the positive actions taken to effectively conserve and sustainably use biodiversity. Global Biodiversity Outlook 3 will use information provided by Parties in their National Reports to highlight the practical actions taken to promote biodiversity initiatives. The information provided by the Parties will be supplemented by information, including biodiversity indicators, from various assessments and partner agencies.

Working Group on Access and Benefit Sharing

The Ad Hoc Open-ended Working Group on Access and Benefit-sharing (Working Group) was established by the Conference of the Parties at its fifth meeting, in May 2000, in Nairobi, Kenya. As set out in decision V/26, the initial mandate of the Working Group was to develop guidelines and other approaches to assist Parties and stakeholders with the implementation of the access and benefit-sharing provisions of the Convention.

The Bonn Guidelines on access to genetic resources and the fair and equitable sharing of the benefits arising from their utilization, developed by the Working Group, were adopted by the Conference of the Parties at its sixth meeting in The Hague, in 2002. The Working Group was then reconvened in order to further examine outstanding issues, including use of terms, other approaches, measures to support compliance with prior informed consent and mutually agreed terms, and capacity-building needs (COP decision VI/24).

The Working Group was given a new mandate at the seventh meeting of the Conference of the Parties, in Kuala Lumpur, 2004. Following the call for action by Governments at the World Summit on Sustainable Development to negotiate an international regime for the fair and equitable sharing of benefits arising out of the utilization of genetic resources, the Conference of the Parties decided “to mandate the Ad Hoc Open-ended Working Group on Access and benefit-sharing with the collaboration of the Ad Hoc Open ended Inter-sessional Working Group on Article 8 (j) and related provisions, ensuring the participation of indigenous and local communities, non-governmental organizations, industry and scientific and academic institutions, as well as intergovernmental organizations, to elaborate and negotiate an international regime on access to genetic resources and benefit-sharing with the aim of adopting an instrument/instruments to effectively implement the provisions in Article 15 and Article 8 (j) of the Convention and the three objectives of the Convention.” The COP also agreed on the terms of reference for the Working Group, including the process, nature, scope and elements for consideration in the elaboration of the regime (COP decision VII/19).

At the eighth meeting of the Conference of the Parties, in Curitiba, in 2006, the Working Group on ABS was requested to continue the elaboration and negotiation of the international regime and instructed to complete its work at the earliest possible time before the tenth meeting of the Conference of the Parties. Two meetings of the Working Group were to be held prior to COP 9.

The Ad Hoc Open-ended Working Group on Access and Benefit-sharing held its fifth meeting in Montreal, Canada, from 8 to 12 October 2007, and its sixth meeting in Geneva, Switzerland, from 21 to 25 January 2008. In line with decision VIII/4 of the eighth meeting of the Conference of the Parties, the Working Group continued the elaboration and negotiation of the international regime on access and benefit-sharing. Recommendations to the ninth meeting of the Conference of the Parties on the way forward are included in the report of the sixth meeting.

At its ninth meeting in Bonn, in May 2008, in decision IX/12, paragraph 2, the COP reiterated its instruction to the Working Group to complete the elaboration and negotiation of the international regime at the earliest possible time before the tenth meeting of the Conference of the Parties. In paragraph 3, the Conference of the Parties further instructed the Working Group to finalize the international regime and to submit for consideration and adoption by the Conference of the Parties at its tenth meeting an instrument/instruments to effectively implement the provisions in Article 15 and Article 8(j) of the Convention and its three objectives, without in any way prejudging or precluding any outcome regarding the nature of such instrument/instruments. The Conference of the Parties also decided that the Ad Hoc Open-ended Working Group on Access and Benefit-sharing should meet three times prior to the tenth meeting of the Conference of the Parties.

Cooperation

Each Contracting Party shall, as far as possible and as appropriate, cooperate with other Contracting Parties, directly or, where appropriate, through competent international organizations, in respect of areas beyond national jurisdiction and on other matters of mutual interest, for the conservation and sustainable use of biological diversity.

Too Little Vitamin D May Mean More Colds and Flu

The finding is based on an assessment of vitamin D levels, nutritional habits and respiratory infection rates among nearly 19,000 American men and women.

"We don't want to jump ahead of ourselves," said study author Dr. Adit Ginde, an assistant professor of surgery in the division of emergency medicine at the University of Colorado Denver School of Medicine. "But our study provides support that lower levels of vitamin D are associated with an increased risk for respiratory infections, such as the common cold and the flu. And people who have pre-existing respiratory disease -- like asthma an emphysema -- appear to be at an increased risk for this association."

Ginde's team, from Harvard Medical School and Children's Hospital Boston, reports its findings in the Feb. 23 issue of the Archives of Internal Medicine.

Vitamin D can be found in such foods as canned tuna, cereal and fortified milk or juice, according to the American Dietetic Association (ADA). The body can also be triggered to naturally produce vitamin D after adequate exposure to sunlight.

In addition to its well-established role as a calcium builder and bone fortifier, vitamin D has recently been touted as having a protective role against both colon cancer and multiple sclerosis, the ADA noted.

And in December, a review of studies conducted by researchers at the Mid-America Heart Institute in Kansas City suggested that those with vitamin D deficiency -- a designation estimated to include about half of American adults and nearly one in three children -- might face an increased risk for heart attack and stroke.

To gauge the specific relationship between vitamin D and respiratory risk, Ginde's team analyzed data from the Third National Health and Nutrition Examination Survey, collected from 1988 to 1994.

Participants were aged 12 and up -- with an average age of 38 -- and three-quarters were white. All completed nutritional and health surveys and had physical examinations. Blood samples were taken to measure levels of 25-hydroxyvitamin D, considered to be the optimal measure of vitamin D status.

The researchers found that those with less than 10 nanograms of vitamin D per milliliter of blood, considered low, were nearly 40 percent more likely to have had a respiratory infection than those with vitamin D levels of 30 ng/mL or higher. The finding was consistent across all races and ages.

In particular, people who had a history of asthma or some form of chronic obstructive pulmonary disease (COPD) were even more likely to suffer from vitamin D deficiencies.

Asthma patients with the lowest vitamin D levels had five times the risk for respiratory infection, and vitamin D-deficient COPD patients had twice the risk.

"We still need to do the clinical trials that we already have planned to definitely say whether supplementation with vitamin D would actually reduce the risk we found," Ginde cautioned. "But I think we can say that most Americans probably do need more vitamin D for its effects on bone health, as well as for its general benefits with respect to the immune system."

Lona Sandon, an assistant professor of clinical nutrition at the University of Texas Southwestern Medical Center and a spokeswoman for the American Dietetic Association, said that evidence of a vitamin D-immune system connection seems "pretty strong."

"There does seem to be a link because, when we're not getting enough vitamin D, our immune system appears not to function at its best," she said.

Sandon noted, however, that getting enough vitamin D from food alone can be difficult.

"The best sources are salmon with the bones, or three cups a day of milk," she said. "But not many people get that. So I would say, get outside and expose some skin to the sun. Dermatologists don't always like that advice because they're concerned with skin cancer, but just 15 minutes a day at the sun's peak -- roughly 11 to 1 -- does the trick."

The $300 Million Button

How Changing a Button Increased a Site's Annual Revenues by $300 Million

It's hard to imagine a form that could be simpler: two fields, two buttons, and one link. Yet, it turns out this form was preventing customers from purchasing products from a major e-commerce site, to the tune of $300,000,000 a year. What was even worse: the designers of the site had no clue there was even a problem.

The form was simple. The fields were Email Address and Password. The buttons were Login and Register. The link was Forgot Password. It was the login form for the site. It's a form users encounter all the time. How could they have problems with it?

The problem wasn't as much about the form's layout as it was where the form lived. Users would encounter it after they filled their shopping cart with products they wanted to purchase and pressed the Checkout button. It came before they could actually enter the information to pay for the product.

The team saw the form as enabling repeat customers to purchase faster. First-time purchasers wouldn't mind the extra effort of registering because, after all, they will come back for more and they'll appreciate the expediency in subsequent purchases. Everybody wins, right?

"I'm Not Here To Be In a Relationship"

We conducted usability tests with people who needed to buy products from the site. We asked them to bring their shopping lists and we gave them the money to make the purchases. All they needed to do was complete the purchase.

We were wrong about the first-time shoppers. They did mind registering. They resented having to register when they encountered the page. As one shopper told us, "I'm not here to enter into a relationship. I just want to buy something."

Some first-time shoppers couldn't remember if it was their first time, becoming frustrated as each common email and password combination failed. We were surprised how much they resisted registering.

Without even knowing what was involved in registration, all the users who clicked on the button did so with a sense of despair. Many vocalized how the retailer only wanted their information to pester them with marketing messages they didn't want. Some imagined other nefarious purposes behind what they saw as an obvious attempt to invade their privacy. (In reality, the site asked nothing during registration that it didn't need to complete the purchase: name, shipping address, billing address, and payment information.)

Not So Good For Repeat Customers Either

Repeat customers weren't any happier. Except for a very few who remembered their login information, most stumbled on the form. They couldn't remember the email address or password they used. Remembering which email address they registered with was problematic; many had multiple email addresses or had changed them over the years.

When shoppers couldn't remember the email address and password, they'd make multiple attempts at guessing them. These guesses rarely succeeded. Some would eventually ask the site to send the password to their email address, which is a problem if you can't remember which email address you initially registered with.

(Later, we did an analysis of the retailer's database, only to discover that 45% of all customers had multiple registrations in the system, some as many as 10. We also analyzed how many people requested passwords and found it reached about 160,000 per day; 75% of those people never tried to complete the purchase after making the request.)

The form, intended to make shopping easier, turned out to only help a small percentage of the customers who encountered it. (Even many of those customers weren't helped, since it took just as much effort to update any incorrect information, such as changed addresses or new credit cards.) Instead, the form just prevented sales - a lot of sales.

The $300,000,000 Fix

The designers fixed the problem simply. They took away the Register button. In its place, they put a Continue button with a simple message: "You do not need to create an account to make purchases on our site. Simply click Continue to proceed to checkout. To make your future purchases even faster, you can create an account during checkout."

The results: The number of customers purchasing went up by 45%. The extra purchases resulted in an extra $15 million the first month. For the first year, the site saw an additional $300,000,000.

On my answering machine is the message I received from the CEO of the $25 billion retailer, the first week they saw the new sales numbers from the redesigned form. It's a simple message: "Spool! You're the man!" It didn't need to be a complex message. All we did was change a button.

The Library in the New Age

1.

Information is exploding so furiously around us and information technology is changing at such bewildering speed that we face a fundamental problem: How to orient ourselves in the new landscape? What, for example, will become of research libraries in the face of technological marvels such as Google?

How to make sense of it all? I have no answer to that problem, but I can suggest an approach to it: look at the history of the ways information has been communicated. Simplifying things radically, you could say that there have been four fundamental changes in information technology since humans learned to speak.

Somewhere around 4000 BC, humans learned to write. Egyptian hieroglyphs go back to about 3200 BC, alphabetical writing to 1000 BC. According to scholars like Jack Goody, the invention of writing was the most important technological breakthrough in the history of humanity. It transformed mankind's relation to the past and opened a way for the emergence of the book as a force in history.

The history of books led to a second technological shift when the codex replaced the scroll sometime soon after the beginning of the Christian era. By the third century AD, the codex—that is, books with pages that you turn as opposed to scrolls that you roll—became crucial to the spread of Christianity. It transformed the experience of reading: the page emerged as a unit of perception, and readers were able to leaf through a clearly articulated text, one that eventually included differentiated words (that is, words separated by spaces), paragraphs, and chapters, along with tables of contents, indexes, and other reader's aids.

The codex, in turn, was transformed by the invention of printing with movable type in the 1450s. To be sure, the Chinese developed movable type around 1045 and the Koreans used metal characters rather than wooden blocks around 1230. But Gutenberg's invention, unlike those of the Far East, spread like wildfire, bringing the book within the reach of ever-widening circles of readers. The technology of printing did not change for nearly four centuries, but the reading public grew larger and larger, thanks to improvements in literacy, education, and access to the printed word. Pamphlets and newspapers, printed by steam-driven presses on paper made from wood pulp rather than rags, extended the process of democratization so that a mass reading public came into existence during the second half of the nineteenth century.

The fourth great change, electronic communication, took place yesterday, or the day before, depending on how you measure it. The Internet dates from 1974, at least as a term. It developed from ARPANET, which went back to 1969, and from earlier experiments in communication among networks of computers. The Web began as a means of communication among physicists in 1991. Web sites and search engines became common in the mid-1990s. And from that point everyone knows the succession of brand names that have made electronic communication an everyday experience: Web browsers such as Netscape, Internet Explorer, and Safari, and search engines such as Yahoo and Google, the latter founded in 1998.

When strung out in this manner, the pace of change seems breathtaking: from writing to the codex, 4,300 years; from the codex to movable type, 1,150 years; from movable type to the Internet, 524 years; from the Internet to search engines, nineteen years; from search engines to Google's algorithmic relevance ranking, seven years; and who knows what is just around the corner or coming out of the pipeline?

Each change in the technology has transformed the information landscape, and the speed-up has continued at such a rate as to seem both unstoppable and incomprehensible. In the long view—what French historians call la longue durée—the general picture looks quite clear—or, rather, dizzying. But by aligning the facts in this manner, I have made them lead to an excessively dramatic conclusion. Historians, American as well as French, often play such tricks. By rearranging the evidence, it is possible to arrive at a different picture, one that emphasizes continuity instead of change. The continuity I have in mind has to do with the nature of information itself or, to put it differently, the inherent instability of texts. In place of the long-term view of technological transformations, which underlies the common notion that we have just entered a new era, the information age, I want to argue that every age was an age of information, each in its own way, and that information has always been unstable.


Let's begin with the Internet and work backward in time. More than a million blogs have emerged during the last few years. They have given rise to a rich lore of anecdotes about the spread of misinformation, some of which sound like urban myths. But I believe the following story is true, though I can't vouch for its accuracy, having picked it up from the Internet myself. As a spoof, a satirical newspaper, The Onion, put it out that an architect had created a new kind of building in Washington, D.C., one with a convertible dome. On sunny days, you push a button, the dome rolls back, and it looks like a football stadium. On rainy days it looks like the Capitol building. The story traveled from Web site to Web site until it arrived in China, where it was printed in the Beijing Evening News. Then it was taken up by the Los Angeles Times, the San Francisco Chronicle, Reuters, CNN, Wired.com, and countless blogs as a story about the Chinese view of the United States: they think we live in convertible buildings, just as we drive around in convertible cars.

Other stories about blogging point to the same conclusion: blogs create news, and news can take the form of a textual reality that trumps the reality under our noses. Today many reporters spend more time tracking blogs than they do checking out traditional sources such as the spokespersons of public authorities. News in the information age has broken loose from its conventional moorings, creating possibilities of misinformation on a global scale. We live in a time of unprecedented accessibility to information that is increasingly unreliable. Or do we?

I would argue that news has always been an artifact and that it never corresponded exactly to what actually happened. We take today's front page as a mirror of yesterday's events, but it was made up yesterday evening—literally, by "make-up" editors, who designed page one according to arbitrary conventions: lead story on the far right column, off-lead on the left, soft news inside or below the fold, features set off by special kinds of headlines. Typographical design orients the reader and shapes the meaning of the news. News itself takes the form of narratives composed by professionals according to conventions that they picked up in the course of their training—the "inverted pyramid" mode of exposition, the "color" lead, the code for "high" and "the highest" sources, and so on. News is not what happened but a story about what happened.

Of course, many reporters do their best to be accurate, but they must conform to the conventions of their craft, and there is always slippage between their choice of words and the nature of an event as experienced or perceived by others. Ask anyone involved in a reported happening. They will tell you that they did not recognize themselves or the event in the story that appeared in the paper. Sophisticated readers in the Soviet Union learned to distrust everything that appeared in Pravda and even to take nonappearances as a sign of something going on. On August 31, 1980, when Lech Walesa signed the agreement with the Polish government that created Solidarity as an independent trade union, the Polish people refused at first to believe it, not because the news failed to reach them but because it was reported on the state-controlled television.

I used to be a newspaper reporter myself. I got my basic training as a college kid covering police headquarters in Newark in 1959. Although I had worked on school newspapers, I did not know what news was—that is, what events would make a story and what combination of words would make it into print after passing muster with the night city editor. When events reached headquarters, they normally took the form of "squeal sheets" or typed reports of calls received at the central switchboard. Squeal sheets concerned everything from stray dogs to murders, and they accumulated at a rate of a dozen every half hour. My job was to collect them from a lieutenant on the second floor, go through them for anything that might be news, and announce the potential news to the veteran reporters from a dozen papers playing poker in the press room on the ground floor. The poker game acted as a filter for the news. One of the reporters would say if something I selected would be worth checking out. I did the checking, usually by phone calls to key offices like the homicide squad. If the information was good enough, I would tell the poker game, whose members would phone it in to their city desks. But it had to be really good—that is, what ordinary people would consider bad—to warrant interrupting the never-ending game. Poker was everyone's main interest—everyone but me: I could not afford to play (cards cost a dollar ante, a lot of money in those days), and I needed to develop a nose for news.

I soon learned to disregard DOAs (dead on arrival, meaning ordinary deaths) and robberies of gas stations, but it took time for me to spot something really "good," like a holdup in a respectable store or a water main break at a central location. One day I found a squeal sheet that was so good—it combined rape and murder—that I went straight to the homicide squad instead of reporting first to the poker game. When I showed it to the lieutenant on duty, he looked at me in disgust: "Don't you see this, kid?" he said, pointing to a B in parentheses after the names of the victim and the suspect. Only then did I notice that every name was followed by a B or a W. I did not know that crimes involving black people did not qualify as news.


Having learned to write news, I now distrust newspapers as a source of information, and I am often surprised by historians who take them as primary sources for knowing what really happened. I think newspapers should be read for information about how contemporaries construed events, rather than for reliable knowledge of events themselves. A study of news during the American Revolution by a graduate student of mine, Will Slauter, provides an example. Will followed accounts of Washington's defeat at the Battle of Brandywine as it was refracted in the American and European press. In the eighteenth century, news normally took the form of isolated paragraphs rather than "stories" as we know them now, and newspapers lifted most of their paragraphs from each other, adding new material picked up from gossips in coffeehouses or ship captains returning from voyages. A loyalist New York newspaper printed the first news of Brandywine with a letter from Washington informing Congress that he had been forced to retreat before the British forces under General William Howe. A copy of the paper traveled by ship, passing from New York to Halifax, Glasgow, and Edinburgh, where the paragraph and the letter were reprinted in a local newspaper.

The Edinburgh reprints were then reprinted in several London papers, each time undergoing subtle changes. The changes were important, because speculators were betting huge sums on the course of the American war, while bears were battling bulls on the Stock Exchange, and the government was about to present a budget to Parliament, where the pro-American opposition was threatening to overthrow the ministry of Lord North. At a distance of three thousand miles and four to six weeks of travel by ship, events in America were crucial for the resolution of this financial and political crisis.

What had actually happened? Londoners had learned to mistrust their newspapers, which frequently distorted the news as they lifted paragraphs from each other. That the original paragraph came from a loyalist American paper made it suspect to the reading public. Its roundabout route made it look even more doubtful, for why would Washington announce his own defeat, while Howe had not yet claimed victory in a dispatch sent directly from Philadelphia, near the scene of the action? Moreover, some reports noted that Lafayette had been wounded in the battle, an impossibility to British readers, who believed (wrongly from earlier, inaccurate reports) that Lafayette was far away from Brandywine, fighting against General John Burgoyne near Canada.

Finally, close readings of Washington's letter revealed stylistic touches that could not have come from the pen of a general. One—the use of "arraying" instead of "arranging" troops—later turned out to be a typographical error. Many Londoners therefore concluded that the report was a fraud, designed to promote the interests of the bull speculators and the Tory politicians—all the more so as the press coverage became increasingly inflated through the process of plagiarism. Some London papers claimed that the minor defeat had been a major catastrophe for the Americans, one that had ended with the annihilation of the rebel army and the death of Washington himself. (In fact, he was reported dead four times during the coverage of the war, and the London press declared Benedict Arnold dead twenty-six times.)

Le Courrier de l'Europe, a French newspaper produced in London, printed a translated digest of the English reports with a note warning that they probably were false. This version of the event passed through a dozen French papers produced in the Low Countries, the Rhineland, Switzerland, and France itself. By the time it arrived in Versailles, the news of Washington's defeat had been completely discounted. The comte de Vergennes, France's foreign minister, therefore continued to favor military intervention on the side of the Americans. And in London, when Howe's report of his victory finally arrived after a long delay (he had unaccountably neglected to write for two weeks), it was eclipsed by the more spectacular news of Burgoyne's defeat at Saratoga. So the defeat at Brandywine turned into a case of miswritten and misread news—a media non-event whose meaning was determined by the process of its transmission, like the blogging about the convertible dome and the filtering of crime reports in Newark's police headquarters.


Information has never been stable. That may be a truism, but it bears pondering. It could serve as a corrective to the belief that the speedup in technological change has catapulted us into a new age, in which information has spun completely out of control. I would argue that the new information technology should force us to rethink the notion of information itself. It should not be understood as if it took the form of hard facts or nuggets of reality ready to be quarried out of newspapers, archives, and libraries, but rather as messages that are constantly being reshaped in the process of transmission. Instead of firmly fixed documents, we must deal with multiple, mutable texts. By studying them skeptically on our computer screens, we can learn how to read our daily newspaper more effectively—and even how to appreciate old books.

Bibliographers came around to this view long before the Internet. Sir Walter Greg developed it at the end of the nineteenth century, and Donald McKenzie perfected it at the end of the twentieth century. Their work provides an answer to the questions raised by bloggers, Googlers, and other enthusiasts of the World Wide Web: Why save more than one copy of a book? Why spend large sums to purchase first editions? Aren't rare book collections doomed to obsolescence now that everything will be available on the Internet?

Unbelievers used to dismiss Henry Clay Folger's determination to accumulate copies of the First Folio edition of Shakespeare as the mania of a crank. The First Folio, published in 1623, seven years after Shakespeare's death, contained the earliest collection of his plays, but most collectors assumed that one copy would be enough for any research library. When Folger's collection grew beyond three dozen copies, his friends scoffed at him as Forty Folio Folger. Since then, however, bibliographers have mined that collection for crucial information, not only for editing the plays but also for performing them.

They have demonstrated that eighteen of the thirty-six plays in the First Folio had never before been printed. Four were known earlier only from faulty copies known as "bad" quartos—booklets of individual plays printed during Shakespeare's lifetime, often by unscrupulous publishers using corrupted versions of the texts. Twelve were reprinted in modified form from relatively good quartos; and only two were reprinted without change from earlier quarto editions. Since none of Shakespeare's manuscripts has survived, differences between these texts can be crucial in determining what he wrote. But the First Folio cannot simply be compared with the quartos, because every copy of the Folio is different from every other copy. While being printed in Isaac Jaggard's shop in 1622 and 1623, the book went through three very different issues. Some copies lacked Troilus and Cressida, some included a complete Troilus, and some had the main text of Troilus but without its prologue and with a crossed-out ending to Romeo and Juliet on the reverse side of the leaf containing Troilus's first scene.

The differences were compounded by at least one hundred stop-press corrections and by the peculiar practices of at least nine compositors who set the copy while also working on other jobs—and occasionally abandoning Shakespeare to an incompetent teenage apprentice. By arguing from the variations in the texts, bibliographers like Charlton Hinman and Peter Blayney have reconstructed the production process and thus arrived at convincing conclusions about the most important works in the English language. This painstaking scholarship could not have been done without Mr. Folger's Folios.

Of course, Shakespeare is a special case. But textual stability never existed in the pre-Internet eras. The most widely diffused edition of Diderot's Encyclopédie in eighteenth-century France contained hundreds of pages that did not exist in the original edition. Its editor was a clergyman who padded the text with excerpts from a sermon by his bishop in order to win the bishop's patronage. Voltaire considered the Encyclopédie so imperfect that he designed his last great work, Questions sur l'Encyclopédie, as a nine-volume sequel to it. In order to spice up his text and to increase its diffusion, he collaborated with pirates behind the back of his own publisher, adding passages to the pirated editions.

In fact, Voltaire toyed with his texts so much that booksellers complained. As soon as they sold one edition of a work, another would appear, featuring additions and corrections by the author. Their customers protested. Some even said that they would not buy an edition of Voltaire's complete works—and there were many, each different from the others—until he died, an event eagerly anticipated by retailers throughout the book trade.

Piracy was so pervasive in early modern Europe that best-sellers could not be blockbusters as they are today. Instead of being produced in huge numbers by one publisher, they were printed simultaneously in many small editions by many publishers, each racing to make the most of a market unconstrained by copyright. Few pirates attempted to produce accurate counterfeits of the original editions. They abridged, expanded, and reworked texts as they pleased, without worrying about the authors' intentions. They behaved as deconstructionists avant la lettre.

2.

The issue of textual stability leads to the general question about the role of research libraries in the age of the Internet. I cannot pretend to offer easy answers, but I would like to put the question in perspective by discussing two views of the library, which I would describe as grand illusions—grand and partly true.

To students in the 1950s, libraries looked like citadels of learning. Knowledge came packaged between hard covers, and a great library seemed to contain all of it. To climb the steps of the New York Public Library, past the stone lions guarding its entrance and into the monumental reading room on the third floor, was to enter a world that included everything known. The knowledge came ordered into standard categories which could be pursued through a card catalog and into the pages of the books. In colleges everywhere the library stood at the center of the campus. It was the most important building, a temple set off by classical columns, where one read in silence: no noise, no food, no disturbances beyond a furtive glance at a potential date bent over a book in quiet contemplation.

Students today still respect their libraries, but reading rooms are nearly empty on some campuses. In order to entice the students back, some librarians offer them armchairs for lounging and chatting, even drinks and snacks, never mind about the crumbs. Modern or postmodern students do most of their research at computers in their rooms. To them, knowledge comes online, not in libraries. They know that libraries could never contain it all within their walls, because information is endless, extending everywhere on the Internet, and to find it one needs a search engine, not a card catalog. But this, too, may be a grand illusion—or, to put it positively, there is something to be said for both visions, the library as a citadel and the Internet as open space. We have come to the problems posed by Google Book Search.

In 2004 Google signed agreements with five great research libraries—the New York Public, Harvard, Michigan, Stanford, and Oxford's Bodleian—to digitize their books. Books in copyright posed a problem, which soon was compounded by lawsuits from publishers and authors. But putting that aside, the Google proposal seemed to offer a way to make all book learning available to all people, or at least those privileged enough to have access to the World Wide Web. It promised to be the ultimate stage in the democratization of knowledge set in motion by the invention of writing, the codex, movable type, and the Internet.

Now, I speak as a Google enthusiast. I believe Google Book Search really will make book learning accessible on a new, worldwide scale, despite the great digital divide that separates the poor from the computerized. It also will open up possibilities for research involving vast quantities of data, which could never be mastered without digitization. As an example of what the future holds, I would cite the Electronic Enlightenment, a project sponsored by the Voltaire Foundation of Oxford. By digitizing the correspondence of Voltaire, Rousseau, Franklin, and Jefferson—about two hundred volumes in superb, scholarly editions—it will, in effect, recreate the transatlantic republic of letters from the eighteenth century.

The letters of many other philosophers, from Locke and Bayle to Bentham and Bernardin de Saint-Pierre, will be integrated into this database, so that scholars will be able to trace references to individuals, books, and ideas throughout the entire network of correspondence that undergirded the Enlightenment. Many other such projects—notably American Memory sponsored by the Library of Congress[1] and the Valley of the Shadow created at the University of Virginia[2]—have demonstrated the feasibility and usefulness of databases on this scale. But their success does not prove that Google Book Search, the largest undertaking of them all, will make research libraries obsolete. On the contrary, Google will make them more important than ever. To support this view, I would like to organize my argument around eight points.

1. According to the most utopian claim of the Googlers, Google can put virtually all printed books on-line. That claim is misleading, and it raises the danger of creating false consciousness, because it may lull us into neglecting our libraries. What percentage of the books in the United States—never mind the rest of the world—will be digitized by Google: 75 percent? 50 percent? 25 percent? Even if the figure is 90 percent, the residual, nondigitized books could be important. I recently discovered an extraordinary libertine novel, Les Bohémiens, by an unknown author, the marquis de Pelleport, who wrote it in the Bastille at the same time that the marquis de Sade was writing his novels in a nearby cell. I think that Pelleport's book, published in 1790, is far better than anything Sade produced; and whatever its aesthetic merits, it reveals a great deal about the condition of writers in pre-Revolutionary France. Yet only six copies of it exist, as far as I can tell, none of them available on the Internet.[3] (The Library of Congress, which has a copy, has not opened its holdings to Google.)

If Google missed this book, and other books like it, the researcher who relied on Google would never be able to locate certain works of great importance. The criteria of importance change from generation to generation, so we cannot know what will matter to our descendants. They may learn a lot from studying our Harlequin novels or computer manuals or telephone books. Literary scholars and historians today depend heavily on research in almanacs, chapbooks, and other kinds of "popular" literature, yet few of those works from the seventeenth and eighteenth centuries have survived. They were printed on cheap paper, sold in flimsy covers, read to pieces, and ignored by collectors and librarians who did not consider them "literature." A researcher at Trinity College, Dublin, recently discovered a drawer full of forgotten ballad books, each one the only copy in existence, each priceless in the eyes of the modern scholar, though it had seemed worthless two centuries ago.

2. Although Google pursued an intelligent strategy by signing up five great libraries, their combined holdings will not come close to exhausting the stock of books in the United States. Contrary to what one might expect, there is little redundancy in the holdings of the five libraries: 60 percent of the books being digitized by Google exist in only one of them. There are about 543 million volumes in the research libraries of the United States. Google reportedly set its initial goal of digitizing at 15 million. As Google signs up more libraries—at last count, twenty-eight are participating in Google Book Search—the representativeness of its digitized database will improve. But it has not yet ventured into special collections, where the rarest works are to be found. And of course the totality of world literature—all the books in all the languages of the world—lies far beyond Google's capacity to digitize.

3. Although it is to be hoped that the publishers, authors, and Google will settle their dispute, it is difficult to see how copyright will cease to pose a problem. According to the copyright law of 1976 and the copyright extension law of 1998, most books published after 1923 are currently covered by copyright, and copyright now extends to the life of the author plus seventy years. For books in the public domain, Google probably will allow readers to view the full text and print every page. For books under copyright, however, Google will probably display only a few lines at a time, which it claims is legal under fair use.
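
To make the term arithmetic concrete, here is a minimal Python sketch encoding the simplified rule as stated above. It is an illustration under those stated assumptions, not legal logic; the actual statute has many more cases (fixed terms for works published between 1923 and 1977, works made for hire, and so on).

    def in_copyright(pub_year, author_death_year, current_year=2009):
        # Simplified rule as described above: pre-1923 publications are
        # in the public domain; later works are protected for the life
        # of the author plus seventy years.
        if pub_year < 1923:
            return False
        return current_year <= author_death_year + 70

    # A 1930 book whose author died in 1950 stays protected through 2020:
    print(in_copyright(1930, 1950))  # True (as of 2009)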

Google may persuade the publishers and authors to surrender their claims to books published between 1923 and the recent past, but will it get them to modify their copyrights in the present and future? In 2006, 291,920 new titles were published in the United States, and the number of new books in print has increased nearly every year for the last decade, despite the spread of electronic publishing. How can Google keep up with current production while at the same time digitizing all the books accumulated over the centuries? Better to increase the acquisitions of our research libraries than to trust Google to preserve future books for the benefit of future generations. Google defines its mission as the communication of information—right now, today; it does not commit itself to conserving texts indefinitely.

4. Companies decline rapidly in the fast-changing environment of electronic technology. Google may disappear or be eclipsed by an even greater technology, which could make its database as outdated and inaccessible as many of our old floppy disks and CD-ROMs. Electronic enterprises come and go. Research libraries last for centuries. Better to fortify them than to declare them obsolete, because obsolescence is built into the electronic media.

5. Google will make mistakes. Despite its concern for quality and quality control, it will miss books, skip pages, blur images, and fail in many ways to reproduce texts perfectly. Once we believed that microfilm would solve the problem of preserving texts. Now we know better.

6. As in the case of microfilm, there is no guarantee that Google's copies will last. Bits become degraded over time. Documents may get lost in cyberspace, owing to the obsolescence of the medium in which they are encoded. Hardware and software become extinct at a distressing rate. Unless the vexatious problem of digital preservation is solved, all texts "born digital" belong to an endangered species. The obsession with developing new media has inhibited efforts to preserve the old. We have lost 80 percent of all silent films and 50 percent of all films made before World War II. Nothing preserves texts better than ink imbedded in paper, especially paper manufactured before the nineteenth century, except texts written on parchment or engraved in stone. The best preservation system ever invented was the old-fashioned, pre-modern book.

7. Google plans to digitize many versions of each book, taking whatever it gets as the copies appear, assembly-line fashion, from the shelves; but will it make all of them available? If so, which one will it put at the top of its search list? Ordinary readers could get lost while searching among thousands of different editions of Shakespeare's plays, so they will depend on the editions that Google makes most easily accessible. Will Google determine its relevance ranking of books in the same way that it ranks references to everything else, from toothpaste to movie stars? It now has a secret algorithm to rank Web pages according to the frequency of use among the pages linked to them, and presumably it will come up with some such algorithm in order to rank the demand for books. But nothing suggests that it will take account of the standards prescribed by bibliographers, such as the first edition to appear in print or the edition that corresponds most closely to the expressed intention of the author.
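
Google's production algorithm is secret, but the published idea it grew from, PageRank as described in Brin and Page's 1998 paper, can be sketched in a few lines. The Python sketch below is a generic illustration of link-based ranking, not Google's actual system, and its three-page "web" is invented:

    def pagerank(links, damping=0.85, iterations=50):
        # Score pages by the scores of the pages that link to them.
        # `links` maps each page to the list of pages it links to.
        pages = set(links) | {t for ts in links.values() for t in ts}
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new = {p: (1.0 - damping) / n for p in pages}
            for page in pages:
                targets = links.get(page, [])
                if targets:
                    share = damping * rank[page] / len(targets)
                    for t in targets:
                        new[t] += share
                else:
                    # A page with no outlinks spreads its rank evenly.
                    for p in pages:
                        new[p] += damping * rank[page] / n
            rank = new
        return rank

    # Invented example: two pages link to C, so C ends up ranked highest.
    print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))

A score like this measures linkage and nothing more; it knows nothing of first editions or authorial intention, which is precisely the gap between relevance ranking and bibliographical standards described above.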

Google employs hundreds, perhaps thousands, of engineers but, as far as I know, not a single bibliographer. Its innocence of any visible concern for bibliography is particularly regrettable in that most texts, as I have just argued, were unstable throughout most of the history of printing. No single copy of an eighteenth-century best-seller will do justice to the endless variety of editions. Serious scholars will have to study and compare many editions, in the original versions, not in the digitized reproductions that Google will sort out according to criteria that probably will have nothing to do with bibliographical scholarship.

8. Even if the digitized image on the computer screen is accurate, it will fail to capture crucial aspects of a book. For example, size. The experience of reading a small duodecimo, designed to be held easily in one hand, differs considerably from that of reading a heavy folio propped up on a book stand. It is important to get the feel of a book—the texture of its paper, the quality of its printing, the nature of its binding. Its physical aspects provide clues about its existence as an element in a social and economic system; and if it contains margin notes, it can reveal a great deal about its place in the intellectual life of its readers.

Books also give off special smells. According to a recent survey of French students, 43 percent consider smell to be one of the most important qualities of printed books—so important that they resist buying odorless electronic books. CaféScribe, a French on-line publisher, is trying to counteract that reaction by giving its customers a sticker that will give off a fusty, bookish smell when it is attached to their computers.

When I read an old book, I hold its pages up to the light and often find among the fibers of the paper little circles made by drops from the hand of the vatman as he made the sheet—or bits of shirts and petticoats that failed to be ground up adequately during the preparation of the pulp. I once found a fingerprint of a pressman enclosed in the binding of an eighteenth-century Encyclopédie—testimony to tricks in the trade of printers, who sometimes spread too much ink on the type in order to make it easier to get an impression by pulling the bar of the press.

I realize, however, that considerations of "feel" and "smell" may seem to undercut my argument. Most readers care about the text, not the physical medium in which it is embedded; and by indulging my fascination with print and paper, I may expose myself to accusations of romanticizing or of reacting like an old-fashioned, ultra-bookish scholar who wants nothing more than to retreat into a rare book room. I plead guilty. I love rare book rooms, even the kind that make you put on gloves before handling their treasures. Rare book rooms are a vital part of research libraries, the part that is most inaccessible to Google. But libraries also provide places for ordinary readers to immerse themselves in books, quiet places in comfortable settings, where the codex can be appreciated in all its individuality.

In fact, the strongest argument for the old-fashioned book is its effectiveness for ordinary readers. Thanks to Google, scholars are able to search, navigate, harvest, mine, deep link, and crawl (the terms vary along with the technology) through millions of Web sites and electronic texts. At the same time, anyone in search of a good read can pick up a printed volume and thumb through it at ease, enjoying the magic of words as ink on paper. No computer screen gives satisfaction like the printed page. But the Internet delivers data that can be transformed into a classical codex. It already has made print-on-demand a thriving industry, and it promises to make books available from computers that will operate like ATMs: log in, order electronically, and out comes a printed and bound volume. Perhaps someday a text on a hand-held screen will please the eye as thoroughly as a page of a codex produced two thousand years ago.

Meanwhile, I say: shore up the library. Stock it with printed matter. Reinforce its reading rooms. But don't think of it as a warehouse or a museum. While dispensing books, most research libraries operate as nerve centers for transmitting electronic impulses. They acquire data sets, maintain digital repositories, provide access to e-journals, and orchestrate information systems that reach deep into laboratories as well as studies. Many of them are sharing their intellectual wealth with the rest of the world by permitting Google to digitize their printed collections. Therefore, I also say: long live Google, but don't count on it living long enough to replace that venerable building with the Corinthian columns. As a citadel of learning and as a platform for adventure on the Internet, the research library still deserves to stand at the center of the campus, preserving the past and accumulating energy for the future.

How to Write Killer Ads

If you're ready to start a virtual assistant business, you know you need customers. More than that, you need buyers, and you must find a way to stand apart from the competition, however you define who your competitors are. If you want to be sure that your business accomplishes these two crucial tasks, you need to know as much as you can about the following:

Personal knowledge. Understanding the industries you want to work with is vital to assessing the market for your services. For example, you may plan on catering to nonprofits or legal firms, or you may take a more general approach and only seek to help out struggling small businesses with their daily administrative tasks. Personal knowledge of the industry develops from having contacts and experience in the business areas you're targeting and a general feel for the business world you're trying to penetrate with your marketing.

Customers. Do you know who your customers really are? Do you understand why they buy your services? Some companies may need to hire virtual assistants regularly, while others only hire seasonally to help with their overflow tasks. Become familiar with how your prospects' business seasons ebb and flow, and market your services accordingly. Business owners may be willing to discuss their businesses and to share advice on how your services may fit their needs. For example, virtual assistants can easily replace temporary services for a fraction of the price a temp agency charges, and can offer the continuity a small office seeks. Let them know this and be open to their suggestions; often, potential clients have all of the insight and experience that you need to know where your services will fit into the picture.

Competition. Who are your competitors? What are your competitors' strengths and weaknesses? A quick survey of the competition's websites can easily show you how you can improve your services and create a more marketable VA service. For example, simple touches on a website such as full contact information, references, a full-blown resume, and a blog can prove your professionalism and make you stand apart from those competitors that simply aren't as web-savvy as you. What other problems do the businesses seem to have?

Becoming a virtual assistant is exciting, but remember, it is a business like any other. Keep in mind that planning, marketing, and assessing where you stand in terms of the marketplace and the competition are all parts of preserving and increasing your income as a virtual assistant. Although you do have competition, remember that there are many friendly virtual assistant forums on the internet to help you along the way; these are the places to go to trade tips and share marketing strategies without rivalry or hostility. The work-from-home crowd is actually a friendly bunch behind the scenes, and once you get to know them they will often pass along job leads and overflow work. So don't overlook your competition as a source of advice and opportunities; just remember to always be professional in your affairs, maintain your optimism, and enjoy the independence of owning your own business every step of the way.

The End

To this day, the willingness of a Wall Street investment bank to pay me hundreds of thousands of dollars to dispense investment advice to grownups remains a mystery to me. I was 24 years old, with no experience of, or particular interest in, guessing which stocks and bonds would rise and which would fall. The essential function of Wall Street is to allocate capital—to decide who should get it and who should not. Believe me when I tell you that I hadn’t the first clue.

I’d never taken an accounting course, never run a business, never even had savings of my own to manage. I stumbled into a job at Salomon Brothers in 1985 and stumbled out much richer three years later, and even though I wrote a book about the experience, the whole thing still strikes me as preposterous—which is one of the reasons the money was so easy to walk away from. I figured the situation was unsustainable. Sooner rather than later, someone was going to identify me, along with a lot of people more or less like me, as a fraud. Sooner rather than later, there would come a Great Reckoning when Wall Street would wake up and hundreds if not thousands of young people like me, who had no business making huge bets with other people’s money, would be expelled from finance.

When I sat down to write my account of the experience in 1989—Liar’s Poker, it was called—it was in the spirit of a young man who thought he was getting out while the getting was good. I was merely scribbling down a message on my way out and stuffing it into a bottle for those who would pass through these parts in the far distant future.

Unless some insider got all of this down on paper, I figured, no future human would believe that it happened.

I thought I was writing a period piece about the 1980s in America. Not for a moment did I suspect that the financial 1980s would last two full decades longer or that the difference in degree between Wall Street and ordinary life would swell into a difference in kind. I expected readers of the future to be outraged that back in 1986, the C.E.O. of Salomon Brothers, John Gutfreund, was paid $3.1 million; I expected them to gape in horror when I reported that one of our traders, Howie Rubin, had moved to Merrill Lynch, where he lost $250 million; I assumed they’d be shocked to learn that a Wall Street C.E.O. had only the vaguest idea of the risks his traders were running. What I didn’t expect was that any future reader would look on my experience and say, “How quaint.”

I had no great agenda, apart from telling what I took to be a remarkable tale, but if you got a few drinks in me and then asked what effect I thought my book would have on the world, I might have said something like, “I hope that college students trying to figure out what to do with their lives will read it and decide that it’s silly to phony it up and abandon their passions to become financiers.” I hoped that some bright kid at, say, Ohio State University who really wanted to be an oceanographer would read my book, spurn the offer from Morgan Stanley, and set out to sea.

Somehow that message failed to come across. Six months after Liar’s Poker was published, I was knee-deep in letters from students at Ohio State who wanted to know if I had any other secrets to share about Wall Street. They’d read my book as a how-to manual.

In the two decades since then, I had been waiting for the end of Wall Street. The outrageous bonuses, the slender returns to shareholders, the never-ending scandals, the bursting of the internet bubble, the crisis following the collapse of Long-Term Capital Management: Over and over again, the big Wall Street investment banks would be, in some narrow way, discredited. Yet they just kept on growing, along with the sums of money that they doled out to 26-year-olds to perform tasks of no obvious social utility. The rebellion by American youth against the money culture never happened. Why bother to overturn your parents’ world when you can buy it, slice it up into tranches, and sell off the pieces?

Citation Advantage of Open Access Articles

Open access (OA) to the research literature has the potential to accelerate recognition and dissemination of research findings, but its actual effects are controversial. This was a longitudinal bibliometric analysis of a cohort of OA and non-OA articles published between June 8, 2004, and December 20, 2004, in the same journal (PNAS: Proceedings of the National Academy of Sciences). Article characteristics were extracted, and citation data were compared between the two groups at three different points in time: at “quasi-baseline” (December 2004, 0–6 mo after publication), in April 2005 (4–10 mo after publication), and in October 2005 (10–16 mo after publication). Potentially confounding variables, including number of authors, authors' lifetime publication count and impact, submission track, country of corresponding author, funding organization, and discipline, were adjusted for in logistic and linear multiple regression models. A total of 1,492 original research articles were analyzed: 212 (14.2% of all articles) were OA articles paid by the author, and 1,280 (85.8%) were non-OA articles. In April 2005 (mean 206 d after publication), 627 (49.0%) of the non-OA articles versus 78 (36.8%) of the OA articles were not cited (relative risk = 1.3 [95% Confidence Interval: 1.1–1.6]; p = 0.001). 6 mo later (mean 288 d after publication), non-OA articles were still more likely to be uncited (non-OA: 172 [13.6%], OA: 11 [5.2%]; relative risk = 2.6 [1.4–4.7]; p < 0.001). The mean number of citations of OA articles was higher than that of non-OA articles (April 2005: 1.5 [SD = 2.5] versus 1.2 [SD = 2.0]; Z = 3.123; p = 0.002; October 2005: 6.4 [SD = 10.4] versus 4.5 [SD = 4.9]; Z = 4.058; p < 0.001). In a logistic regression model controlling for potential confounders, OA articles remained about twice as likely to be cited as non-OA articles (adjusted odds ratio = 2.1 [1.5–2.9]).
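The adjusted odds ratios reported above come out of the regression modelling described in the abstract. As a rough illustration of the mechanics only, here is a minimal Python sketch using statsmodels and simulated data; the 14.2% OA share and cohort size are taken from the study, but the citation outcomes, the confounder, and all coefficients below are invented for the example and are not the study's data or code.

# Illustrative only: simulated data, not the PNAS cohort.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1492                                      # cohort size, as in the study
df = pd.DataFrame({
    "oa": rng.binomial(1, 0.142, n),          # ~14.2% OA articles
    "n_authors": rng.poisson(4, n) + 1,       # hypothetical confounder
})
# Hypothetical outcome: OA status raises the chance of being cited.
logit_p = -0.5 + 0.75 * df["oa"] + 0.05 * df["n_authors"]
df["cited"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Unadjusted relative risk of being cited, OA versus non-OA.
p_oa = df.loc[df["oa"] == 1, "cited"].mean()
p_non = df.loc[df["oa"] == 0, "cited"].mean()
print(f"relative risk = {p_oa / p_non:.2f}")

# Logistic regression controlling for the confounder; exponentiating
# the coefficients gives adjusted odds ratios, the quantity the study
# reports for OA status.
fit = smf.logit("cited ~ oa + n_authors", data=df).fit(disp=0)
print(np.exp(fit.params))

The point of adjusting is visible in the last two lines: the raw relative risk mixes the effect of OA status with anything correlated with it, while the exponentiated regression coefficient for oa isolates that effect, holding the other covariates fixed.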

From Brick to Slick: A History of Mobile Phones

It has been more than 35 years since Martin Cooper placed the first call on a mobile phone to his rival at Bell Labs while working at Motorola. Heck, it's been nearly 20 years since Saved by the Bell’s Zack Morris placed a phone call to Kelly Kapowski from his locker. In that time, phones have come a long way.

We now live in a golden age of mobile phones. Or, perhaps more accurately, the end of the age of mobile phones. The iPhone, the G1, the N95, the Bold: These are exceptionally small mobile computers with built-in telephony features.

It has been a long trek from the monstrous, if revolutionary, Motorola DynaTAC to the elegant and refined modern devices that not only allow us to make calls, but also to send e-mails, surf the web, track our movements, listen to music, watch movies and generally handle our varied communications. Please join Wired for a look back at some of the more notable phones that took us from Zack to Android.

Revealed: the environmental impact of Google searches

Clarification added 16th January: A report about online energy consumption (Google and you'll damage the planet, Jan 11) said that "performing two Google searches from a desktop computer can generate about the same amount of carbon dioxide as boiling a kettle", or about 7g of CO2 per search. We are happy to make clear that this does not refer to a one-hit Google search taking less than a second, which Google says produces about 0.2g of CO2, a figure we accept. In the article, we were referring to a Google search that may involve several attempts to find the object being sought and that may last for several minutes. Various experts put forward carbon emission estimates for such a search of 1g-10g, depending on the time involved and the equipment used.

Performing two Google searches from a desktop computer can generate about the same amount of carbon dioxide as boiling a kettle for a cup of tea, according to new research.

While millions of people tap into Google without considering the environment, a typical search generates about 7g of CO2; boiling a kettle generates about 15g. “Google operates huge data centres around the world that consume a great deal of power,” said Alex Wissner-Gross, a Harvard University physicist whose research on the environmental impact of computing is due out soon. “A Google search has a definite environmental impact.”

Google is secretive about its energy consumption and carbon footprint. It also refuses to divulge the locations of its data centres. However, with an estimated 200m internet searches made globally every day, the electricity consumption and greenhouse gas emissions of computers and the internet are provoking concern. A recent report by Gartner, the industry analysts, said the global IT industry generated as much greenhouse gas as the world’s airlines - about 2% of global CO2 emissions. “Data centres are among the most energy-intensive facilities imaginable,” said Evan Mills, a scientist at the Lawrence Berkeley National Laboratory in California. Banks of servers storing billions of web pages require power.

Though Google says it is at the forefront of green computing, its search engine generates high levels of CO2 because of the way it operates. When you type in a Google search for, say, “energy saving tips”, your request doesn’t go to just one server. It goes to several servers competing against each other.

It may even be sent to servers thousands of miles apart. Google’s infrastructure sends you data from whichever produces the answer fastest. The system minimises delays but raises energy consumption. Google has servers in the US, Europe, Japan and China.

Wissner-Gross has submitted his research for publication to the US Institute of Electrical and Electronics Engineers and has also set up a website, www.CO2stats.com. “Google are very efficient but their primary concern is to make searches fast and that means they have a lot of extra capacity that burns energy,” he said.

Google said: “We are among the most efficient of all internet search providers.”

Wissner-Gross has also calculated the CO2 emissions caused by individual use of the internet. His research indicates that viewing a simple web page generates about 0.02g of CO2 per second. This rises tenfold to about 0.2g of CO2 a second when viewing a website with complex images, animations or videos.

A separate estimate from John Buckley, managing director of carbonfootprint.com, a British environmental consultancy, puts the CO2 emissions of a Google search at between 1g and 10g, depending on whether you have to start your PC or not. Simply running a PC generates between 40g and 80g of CO2 per hour, he says. Chris Goodall, author of Ten Technologies to Save the Planet, estimates the carbon emissions of a Google search at 7g to 10g (assuming 15 minutes’ computer use).

Nicholas Carr, author of The Big Switch: Rewiring the World, has calculated that maintaining a character (known as an avatar) in the Second Life virtual reality game requires 1,752 kilowatt hours of electricity per year. That is almost as much as the average Brazilian uses.

“It’s not an unreasonable comparison,” said Liam Newcombe, an expert on data centres at the British Computer Society. “It tells us how much energy westerners use on entertainment versus the energy poverty in some countries.”
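The per-second, per-hour and per-year figures quoted above can be cross-checked with simple arithmetic. The short Python sketch below does the unit conversions; every constant in it is one of the article's own estimates, not an independent measurement.

# Back-of-envelope checks using only the figures quoted in this article.
SEARCH_G = 7                  # g CO2 per search "session" (Wissner-Gross)
KETTLE_G = 15                 # g CO2 to boil a kettle
print(f"two searches: {2 * SEARCH_G} g vs. one kettle: {KETTLE_G} g")

SIMPLE_PAGE_G_PER_S = 0.02    # g CO2/s viewing a simple page
RICH_PAGE_G_PER_S = 0.2       # g CO2/s with animation or video
print(f"simple page: {SIMPLE_PAGE_G_PER_S * 3600:.0f} g/h, "
      f"rich page: {RICH_PAGE_G_PER_S * 3600:.0f} g/h")
# Compare Buckley's 40g-80g per hour for simply running a PC.

AVATAR_KWH_PER_YEAR = 1752    # Carr's Second Life estimate
WATTS = AVATAR_KWH_PER_YEAR / (365 * 24) * 1000
print(f"avatar draw: {WATTS:.0f} W continuous")

The last figure works out to about 200 watts of continuous draw, roughly a desktop PC plus a share of server infrastructure, so Carr's annual number is at least internally consistent with the other estimates here.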

Though energy consumption by computers is growing - and the rate of growth is increasing - Newcombe argues that what matters most is the type of usage.

If your internet use is in place of more energy-intensive activities, such as driving your car to the shops, that’s good. But if it is adding activities and energy consumption that would not otherwise happen, that may pose problems.

Newcombe cites Second Life and Twitter, a rapidly growing website whose 3m users post millions of messages a month. Last week Stephen Fry, the TV presenter, was posting “tweets” from New Zealand, imparting such vital information as “Arrived in Queenstown. Hurrah. Full of bungy jumping and ‘activewear’ shops”, and “Honestly. NZ weather makes UK look stable and clement”.

Jonathan Ross was Twittering even more, with posts such as “Am going to muck out the pigs. It will be cold, but I’m not the type to go on about it” and “Am now back indoors and have put on fleecy tracksuit and two pairs of socks”. Ross also made various “tweets” trying to ascertain whether Jeremy Clarkson was a Twitter user or not. Yesterday the Top Gear presenter cleared up the matter, saying: “I am not a twit. And Jonathan Ross is.”

Such internet phenomena are not simply fun and hot air, Newcombe warns: the boom in such services has a carbon cost.

Rocket eBook (1998), Amazon Kindle (2007)

I was pitched headfirst into the world of e-books in 2002 when I took a job with Palm Digital Media. The company, originally called Peanut Press, was founded in 1998 with a simple plan: publish books in electronic form. As it turns out, that simple plan leads directly into a technological, economic, and political hornet's nest. But thanks to some good initial decisions (more on those later), little Peanut Press did pretty well for itself in those first few years, eventually having a legitimate claim to its self-declared title of "the world's largest e-book store."

Unfortunately, despite starting the company near the peak of the original dot-com bubble, the founders of Peanut Press lost control of the company very early on. In retrospect, this signaled an important truth that persists to this day: people don't get e-books.

A succession of increasingly disengaged and (later) incompetent owners effectively killed Peanut Press, first flattening its growth curve, then abandoning all of the original employees by moving the company several hundred miles away. In January of 2008, what remained of the once-proud e-book store (now called eReader.com) was scraped up off the floor and acquired by a competitor, Fictionwise.com.

Unlike previous owners, Fictionwise has some actual knowledge of and interest in e-books. But though the "world's largest e-book store" appellation still adorns the eReader.com website, larger fish have long since entered the pond.

And so, a sad end for the eReader that I knew (née Palm Digital Media, née Peanut Press). But this story is not just about them, or me. Notice that I used the present tense earlier: "people don't get e-books." This is as true today as it was ten years ago. Venture capitalists didn't get it then, nor did the series of owners that killed Peanut Press, nor do many of the players in the e-book market today. And then there are the consumers, their own notions about e-books left to solidify in the absence of any clear vision from the industry.

The sentiment seeping through the paragraphs above should seem familiar to most Ars Technica readers. Do you detect a faint whiff of OS/2? Amiga, perhaps? Or, more likely, the overwhelming miasma of "Mac user, circa 1996." That's right, it's the defiance and bitterness of the marginalized: those who feel that their particular passion has been unjustly shunned by the ignorant masses.

Usually, this sentiment marks the tail end of a movement, or a product in decline. But sometimes it's just a sign of a slow start. I believe this is the case with e-books. The pace of the e-book market over the past decade has been excruciatingly—and yes, you guessed it, unjustly—slow. My frustration is much like that of the Mac users of old. Here's an awesome, obvious, inevitable idea, seemingly thwarted at every turn by widespread consumer misunderstanding and an endemic lack of will among the big players.

I don't pretend to be able to move corporate mountains, but I do have a lot of e-book related things to get off my chest. And so, this will be part editorial, part polemic, part rant, but also, I hope, somewhat educational. As for Apple, that connection will be clear by the end, if it isn't already. Buckle up.

Monday, 23 February 2009

let's swap ad clicks

Members must click everything on this site. If you agree, let's swap ad clicks and follow each other; I will then put your link below the "tukar klik" link post, and hopefully the visitors who follow this blog will click one another's ads in turn.
