

a spider for Paul Smith's list

One of the best collections of links to homepages of people working in
non-commutative algebra and/or geometry is maintained by Paul Smith. At
regular intervals I use it to check up on some people, usually in vain
as nobody seems to update their homepage… So, today I wrote a simple
spider to check for updates in this list. The idea is simple: it tries
to get the link (and when this fails it reports that the link seems to
be broken) and saves a text copy of the page (using lynx) on disc,
which it will compare with diff against a fresh copy on the next
check-up. By the way, for OS X people: I got lynx from the Fink
Project. The spider then collects all data (broken links, time of last
visit, time of last change and recent updates) in RSS feeds, for which
an HTML version is maintained at the geoMetry site, again using server
side includes. If you see a 1970 date, this means that I have not yet
detected a change since I let this spider loose (today). Also, the list
of pages is not alphabetic; even to me it is a surprise how the next
list will look. As I check for changes with diff, the claimed number of
changed lines is far from accurate (the total number of lines from the
first change to the end of the file might be a better approximation of
reality… I will change this soon).
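
To make the mechanics concrete, here is a minimal sketch of what one
visit of such a spider could look like in Perl. The cache layout, the
subroutine name and the example URL are all made up for illustration,
and the RSS bookkeeping is left out:

#!/usr/bin/perl
# Sketch of one spider visit: fetch a text copy with lynx,
# diff it against the stored copy, report broken links.
use strict;
use warnings;

my $cachedir = "cache";   # hypothetical directory for the text copies

sub check_page {
    my ($name, $url) = @_;
    my $old = "$cachedir/$name.txt";
    my $new = "$cachedir/$name.new";

    # lynx -dump renders the page as plain text; a non-zero
    # exit status is treated as a broken link.
    if (system("lynx -dump '$url' > '$new' 2>/dev/null") != 0) {
        return "broken link: $url";
    }
    my $report = "first visit";
    if (-e $old) {
        # count the lines that diff flags as changed
        my @changed = grep { /^[<>]/ } `diff '$old' '$new'`;
        $report = scalar(@changed) . " changed lines";
    }
    rename($new, $old);
    return $report;
}

print check_page("somepage", "http://www.example.edu/~someone/"), "\n";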
Clearly, all of this is still experimental, so please give me feedback
if you notice something wrong with these lists. I also plan to extend
the list substantially over the next weeks (for example, Paul Smith
himself is not present in his own list…). So, if you want your pages to
be included, let me know at lieven.lebruyn@ua.ac.be.
For those on Paul's list: if you looked at your log files today you may
have noticed a lot of traffic from lievenlb.local as I was testing the
script. I'll keep my further visits down to once a day, at most…


arxiv RSS feeds available

If you are interested in getting daily RSS feeds of one (or more) of
the following arXiv sections: math.RA, math.AG, math.QA and math.RT,
you can point your news aggregator to lievenlb.local. Most of the
solution to my first Perl exercise I explained already yesterday, but
the current program has a few changes. First, my idea was to scrape the
recent files from the arXiv; for example, for math.RA I would get
http://www.arxiv.org/list/math.RA/recent, but this contains only
titles, authors and links, no abstracts of the papers. So I thought I
had to scrape for the URLs of these papers and then download each of
the abstract files. Fortunately, I found a way around this. There is a
lesser known way to get at all abstracts from math of the current day
(or the last few days) by using the Catchup interface. The syntax of
this interface is as follows: for example, to get all math papers with
abstracts posted on April 2, 2004 you have to get the page with URL

http://www.arxiv.org/catchup?smonth=04&sday=02&num=50&archive=math&method=with&syear=2004

so in order to use it I had to find a way to parse the present day into
a numeric day, month, year format. This is quite easy as there is the
very well documented Date::Manip module in Perl. Another problem with
the arXiv is that there are no posts at the weekend. I worked around
this by requesting the Catchup starting from the previous business day
(an option of the DateCalc function). This means that over the weekend
I get the RSS feeds of papers posted on Friday, on Monday I'll get
those of Friday and Monday, and on all other days I'll get those of
today and yesterday. But it is easy to change the script to allow for a
longer period, so please tell me if you want RSS feeds for the last 3
or 4 days. Also, if you need feeds for other sections, that can easily
be done, so tell me.
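
In case you want to play with the Catchup interface yourself, the date
juggling comes down to something like the following sketch. I am
assuming Date::Manip 5.x here, where passing mode 2 to DateCalc makes
the "- 1 day" step a business-day step, so weekends are skipped:

#!/usr/bin/perl
# Build the Catchup URL for the previous business day.
use strict;
use warnings;
use Date::Manip;

my $err;
# mode 2 asks DateCalc for a (business-day) calculation
my $start = DateCalc(ParseDate("today"), "- 1 day", \$err, 2);

# split the date into the numeric pieces the Catchup URL wants
my ($month, $day, $year) = UnixDate($start, "%m", "%d", "%Y");

my $url = "http://www.arxiv.org/catchup?smonth=$month&sday=$day"
        . "&num=50&archive=math&method=with&syear=$year";
print "$url\n";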
Here are the URLs to give to your news aggregator for these sections:

math.RA at https://lievenlb.local/arxivRSS/mathRA/
math.QA at https://lievenlb.local/arxivRSS/mathQA/
math.RT at https://lievenlb.local/arxivRSS/mathRT/
math.AG at https://lievenlb.local/arxivRSS/mathAG/

If your news aggregator is not clever, you may have to add an
additional index.xml at the end. If you like to use these feeds on a
Mac, a good free news aggregator is NetNewsWire Lite. To get at the
above feeds, click on the Subscribe button and copy one of the above
links in the pop-up window. I don't think my Perl script breaks the
Robots Beware rule of the arXiv. All it does is download one page a day
using their Catchup method. I still have to set up a cron job to do
this daily, but I have to find out at which (local) time at night the
arXiv refreshes its pages…
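
For the record, once I know the refresh time, the cron entry will be
something of this shape (the 2 a.m. time and the script path are
placeholders, not the real setup):

# minute hour day month weekday: run the scraper once a night
0 2 * * * /usr/bin/perl /Users/lieven/arxivRSS/scrape.pl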


my first scraper

As far as I know (but I am fairly ignorant) the arXiv does not provide
RSS feeds for a particular section, say math.RA. Still, it would be a
good idea for anyone having a news aggregator to follow some weblogs
and news channels having RSS syndication. So I decided to write one as
my first Perl exercise and, to my own surprise, after a few hours of
work I have a prototype scraper for math.RA. It is not yet perfect: I
still have to convert the local URLs to global URLs so that they can be
clicked, and at the moment I have only collected the titles, authors
and abstract links, whereas it would make more sense to include the
full abstract in the RSS feed, but give me a few more days…
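
For that local-to-global conversion, the URI module looks like the way
to go; something along these lines should do it (the relative link is a
made-up example):

use URI;
# resolve an arXiv-relative link against the site root
my $abs = URI->new_abs("/abs/math.RA/0404001", "http://www.arxiv.org/");
print $abs->as_string, "\n";   # http://www.arxiv.org/abs/math.RA/0404001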
The basic idea is fairly simple (and based on an O'Reilly hack). One
uses the Template::Extract module to extract the goodies from the
arXiv's template HTML. Maybe I am still not used to Perl documentation,
but it was hard for me to work out how to do this in detail, either
from the hack or from the online module documentation. Fortunately
there is a good Perl Advent Calendar page giving me the details that I
needed. Once one has this info, one can turn it into a proper RSS page
using the XML::RSS module.
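
Put together, the skeleton of the scraper looks roughly as follows. The
extraction template is a simplified stand-in (the real arXiv listing
markup is more involved), so take this as a sketch of how
Template::Extract and XML::RSS fit together rather than as the actual
script:

#!/usr/bin/perl
# Skeleton: pull title/author/link records out of an HTML page
# with Template::Extract and turn them into an RSS 1.0 feed.
use strict;
use warnings;
use LWP::Simple;
use Template::Extract;
use XML::RSS;

my $html = get("http://www.arxiv.org/list/math.RA/recent")
    or die "could not fetch the listing";

# NOTE: hypothetical, simplified template; the real arXiv markup differs.
my $template = <<'END';
[% FOREACH record %]<a href="[% url %]">[% title %]</a> by [% authors %]
[% ... %][% END %]
END

my $data = Template::Extract->new->extract($template, $html);

my $rss = XML::RSS->new(version => '1.0');
$rss->channel(
    title       => "math.RA recent papers",
    link        => "http://www.arxiv.org/list/math.RA/recent",
    description => "scraped arXiv listing",
);
for my $rec (@{ $data->{record} || [] }) {
    $rss->add_item(
        title       => $rec->{title},
        link        => $rec->{url},
        description => $rec->{authors},
    );
}
$rss->save("index.xml");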
In fact, I spent far more time trying to get XML::RSS installed under
OS X than writing the code. The usual method, that is via

iMacLieven:~ lieven$ sudo /usr/bin/perl -MCPAN -e shell
Terminal does not support AddHistory.
cpan shell -- CPAN exploration and modules installation (v1.76)
ReadLine support available (try 'install Bundle::CPAN')
cpan> install XML::RSS

failed, and even a manual install, for which the drill is: download the
package from CPAN, go to the extracted directory and give the commands

sudo /usr/bin/perl Makefile.PL
sudo make
sudo make test
sudo make install

failed too. A Google search didn't give immediate results either, until
I found this ADC page, which set me on the right track. It seems that
the problem is in installing XML::Parser, for which one first needs
expat to be installed. Now, the generic SourceForge page contains a
version for Linux, but fortunately expat is also part of the Fink
project, so I did a

sudo fink install expat

which worked without problems, but afterwards I was still not able to
install XML::Parser because Fink installs everything in the /sw tree.
But after

sudo perl Makefile.PL EXPATLIBPATH=/sw/lib EXPATINCPATH=/sw/include

I finally got the manual installation
going. I will try to tidy up the script over the weekend…
