Google Wave's underlying technology will not only enable collaboration with other people, it also makes it possible for bots to interact with what you've written. I think this is going to change the way we work. For example, any application that requires a significant amount of typing will benefit from the statistical auto-correction provided by the Wave bot Spelly. In effect, Spelly goes over the text as you type and corrects the obvious mistakes, just as you would do a bit later yourself.
In a similar vein, the proof-of-concept bot Igor watches for inserted references and automagically converts them into citations and a reference list. When writing papers, I usually insert reminders such as "REF Imming review" or "REF PMID 16007907". If I adjust this convention a bit and provide a little more detail, Igor can figure out by itself which paper is meant and fetch the citation. Google Wave and Igor save me the tiresome back-and-forth between a reference manager and the editor to insert all the citations, and they remove distractions from the process of writing and editing the paper.
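To illustrate the convention, here is a minimal sketch (mine, not Igor's actual code) of how a bot could scan a draft for "REF PMID ..." reminders, replace each with a numbered citation marker, and collect the PMIDs for the reference list; actually fetching the metadata from PubMed is omitted:

```python
import re

# Matches reminders of the form "REF PMID 16007907".
REF_PATTERN = re.compile(r"REF PMID (\d+)")

def extract_citations(text):
    """Replace PMID reminders with [n] markers; return new text and PMID list."""
    pmids = []
    def substitute(match):
        pmid = match.group(1)
        if pmid not in pmids:
            pmids.append(pmid)
        # Repeated PMIDs map to the same citation number.
        return "[%d]" % (pmids.index(pmid) + 1)
    return REF_PATTERN.sub(substitute, text), pmids

draft = "Resistance is rising (REF PMID 16007907), as reviewed in REF PMID 16007907."
text, pmids = extract_citations(draft)
# text  == "Resistance is rising ([1]), as reviewed in [1]."
# pmids == ["16007907"]
```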
Of course, this is a proof of concept, so the citation style can't yet be customized. I also think it would be helpful to quickly see "what's inside" a particular citation. I don't know whether Google Wave supports this, but it would be nice to click on a citation ("[23]") and be presented with a pop-up showing not only information about the article, but also links to PubMed or a DOI resolver.
Showing posts with label publishing. Show all posts
Monday, July 27, 2009
One step towards writing papers in Google Wave
Friday, August 15, 2008
Mendeley = Mekentosj Papers + Web 2.0 ?
Via Ricardo Vidal: Mendeley seems to be a Windows (plus Mac/Linux) equivalent of Mekentosj Papers (which is Mac OS X only, and has been described as "iTunes for your papers"). In addition to handling your PDFs, it has an online component that allows sharing your papers and other Web 2.0 features (billing itself as "Last.fm for papers").
Here, I'm reviewing the Mac beta version (0.5.6). I focus mostly on the desktop side and compare it to Papers, because I already have a working solution in place and would only switch to Mendeley if the experience were as good as with Papers. (I.e., my main problem is off-line paper management; the Web 2.0 features are icing on the cake.)
By Mac standards, the app is quite ugly. Both Mendeley and Papers allow full-text PDF searches, which is important if you want to avoid tagging/categorizing all your papers. Papers can show PDFs in the main window, copy a paper's reference, and email papers. Mendeley in principle can also copy the reference, but special characters are transformed into gibberish in this beta version. Papers allows you to match papers against PubMed, Web of Science, etc., while Mendeley only offers to auto-extract often-incomplete metadata. This matching feature is extremely useful because you get all the authoritative data from the source, and most often Papers can use the DOI in the PDF to immediately give you the correct reference. Update: Mendeley also uses DOIs to retrieve the correct metadata, if available. (Thanks, Victor, for your comment.)
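The DOI-based matching works because a DOI identifies a paper unambiguously, so finding one in the PDF's text is enough to look up authoritative metadata from a resolver. A rough sketch of the detection step (the regex below is a common heuristic, not the pattern either application actually uses):

```python
import re

# Heuristic DOI pattern: "10." + registrant code + "/" + suffix.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/\S+\b")

def find_doi(pdf_text):
    """Return the first DOI-like string in the extracted PDF text, or None."""
    match = DOI_PATTERN.search(pdf_text)
    return match.group(0) if match else None

doi = find_doi("... doi:10.1038/nrd2082 ...")
# doi == "10.1038/nrd2082"
```

Once the DOI is in hand, a single metadata lookup replaces the error-prone parsing of title and author lines out of the PDF.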
The beta version is quite rough: I just had to kill the app because I found no way to close the "About" window. Extraction of metadata and references doesn't always work, but that may be more a problem of the information stored in the PDFs.
Of course, once there's a critical mass of people using Mendeley, there'll be all the Web 2.0 features that Papers doesn't have. Judging from the talk, I think they might be trying to do too much: Connotea/CiteULike plus Dopplr plus LinkedIn. For me, a simple way to export new references from Papers to Connotea/CiteULike would be enough. More modularity is better, because it allows you to choose the best tool in each layer.
More info from the Mendeley folks: a short demo and a slightly longer talk.
CiteWeb: Following citations made easy
One good way to keep up with the literature in a field is to track which new papers are citing seminal papers of the field. Each Friday, I get lots of citation alerts from ISI Web of Science, but often enough I see the same paper again and again (citing different papers that are on my watch list). So I set out to write an app that would take ISI's RSS feeds, coalesce them, and give them back to you. For example, in the screenshot one review paper is citing five of my tracked papers:
If you're using citation alerts from Web of Science, then give CiteWeb a try at citeweb.embl.de. If you find a bug, you can either comment here, or grab the source code and fix it. :-)
I started working on this to find out whether Google App Engine was useful. It turned out that downloading many items from a remote host leads to time-outs on App Engine, so I ported the app to Django. The source code is released under the MIT License.
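The coalescing idea can be sketched in a few lines. The same citing paper may appear in several Web of Science alert feeds (once per tracked paper it cites), so grouping items by the citing paper's link collapses them into one entry listing all the tracked papers it cites. This is an illustrative sketch, not CiteWeb's actual code; feed parsing (e.g. with a library like feedparser) is omitted, and items are assumed to arrive as (citing_link, citing_title, tracked_paper) tuples:

```python
from collections import OrderedDict

def coalesce(items):
    """Group alert items by citing paper, preserving first-seen order."""
    grouped = OrderedDict()
    for link, title, tracked in items:
        entry = grouped.setdefault(link, {"title": title, "cites": []})
        if tracked not in entry["cites"]:
            entry["cites"].append(tracked)
    return grouped

items = [
    ("http://example.org/review", "A review", "Imming 2006"),
    ("http://example.org/review", "A review", "Smith 2007"),
    ("http://example.org/other", "Another paper", "Imming 2006"),
]
grouped = coalesce(items)
# grouped["http://example.org/review"]["cites"] == ["Imming 2006", "Smith 2007"]
```

Instead of reading the same review five times in five alerts, you see it once, together with the list of tracked papers it cites.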
Tuesday, August 12, 2008
Google integrates Scholar into main page
I don't know if it's just me (sitting inside a research institution), but when I search for something that returns a paper, I get info from Google Scholar:
(See also the complete screenshot with notes on Flickr.) However, the order of the results is different: Google Scholar seems to weight by citations, Google by page rank.
Tuesday, March 11, 2008
Blogging for search engines
Related to my last post about the failings of Web 2.0 in biology, I want to ask the meta-question: Why do we blog? David Crotty proposes four reasons: communication with other science bloggers, with non-specialists, with journalists, and finally with search engine users. Unless you are a fairly well-known person, your regular audience will consist of your colleagues, collaborators, and a random grad student or two. A journalist might only come by if you've managed to get a press release about a Nature/Science/... paper out. But Googlebot won't fail you: it will read all your posts!
Insightful blog posts won't stay without an audience. For one, the small circle of followers to your blog will spread the news if you write something worth sharing. Far more important are search engines. How do you survey a research area of interest? Most of us will query PubMed, but also do a Google search in the hope that some meaningful analysis is somewhere on a course website, in the text of a paper or maybe even in a blog.
Biologists use Google to query for their proteins of interest. STRING is a fairly successful database, and lots of people google for it by name. However, almost one quarter of all visitors from Google have actually searched for a protein name (random example) and found STRING. If you follow Lars J. Jensen's lead and publish your research observations and dead ends online, someone might serendipitously find them and use them for their own research. This will be the next step towards open science (with open data and open notebooks, which we might never reach): "publishing" small findings, data, and back stories about papers on your blog, enabling others to gain insight.
Tuesday, February 5, 2008
Max Planck Society signs agreement with Springer
In October, I reported that the German Max Planck Society failed to reach a new license agreement with Springer. Now, via heise.de, I learn that they have signed an agreement on January 29, 2008. Here's the press release (there's also a German version).
The details are very sparse; presumably Springer had to come down on the price, but they won't say so. However, the press release devotes a lot of space to Open Access, saying that the license agreement "also includes Open Choice™". Open Choice is Springer's author-pays OA program. Now, what does this mean? It doesn't make sense to assume the agreement merely covers access to Open Choice articles, so I guess it must mean that all MPG articles will now be published under the Open Choice model. Querying PubMed a bit, I find that the MPG accounted for 6% of the total German research output, so this is certainly an interesting development.
Thursday, October 18, 2007
Max Planck Society cancels license agreement with Springer
heise.de reports that the Max Planck Society, a major German research organization with more than 80 institutes, has called off negotiations to renew its license agreement for online access to SpringerLink; the agreement expires at the end of the year. This means that almost 20,000 scientists will then lose access to anything published in 1,200 journals. (The archive will remain available.)
The talks failed because Springer wanted twice the amount the Max Planck Society considered justifiable, although they say even that justifiable amount was higher than what other publishers charge.
The Max Planck Society is one of the main proponents of the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities. It is the first major research organization that I'm aware of that breaks free of the stranglehold of the publishing industry. The question is, of course: what will happen next? Do they stay firm and begin to cause a shift towards Open Access journals? Or will Springer go down with the price just enough that the contract will be renewed?
Update: heise.de now reports this story in English as well.
Update 2: Well, they have reached an agreement on January 29, 2008.