Archive for the ‘Web sémantique’ Category

Let's aggregate civic service, volunteering and skills sponsorship missions

Monday, December 13th, 2010

I am watching the video of last week's Digital4Change conference at HEC, with Mr Yunus and Mr Hirsch.

Martin Hirsch's opening remarks make me think there is a technology project with strong social impact waiting to be launched by some slightly bored geek (surely such a person exists, right?). The social goal would be to make it easier for young (and not so young) people to get involved with public-interest non-profits, whether through civic service ("volontariat": full time, with a stipend), volunteering ("bénévolat": on one's free time, unpaid) or skills sponsorship ("mécénat de compétences": part time or full time, on work time). What I suggest is to aggregate and publish as linked data all the civic service missions published by the various civic service agencies, starting with the French national agency. There is even already an RSS 2.0 feed of the missions.

Then there is Unis Cités, the French pioneer. No RSS feed?

In France, there are also volunteering missions. I am thinking of Passerelles & Compétences and many other similar organizations.

Abroad, there are dozens of equivalent sites. I am thinking in particular of idealist.org (which also has an RSS feed of missions), among others.

It would be really great to publish interoperable feeds of all these volunteering databases and to publish an aggregated view of them on a central site, with multi-criteria search and alerts (via RSS or email).
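Just to make the idea concrete, here is a minimal sketch of the aggregation part in Python (the feed URLs are placeholders, and feedparser is merely one obvious library choice, not what such a project would necessarily use):

```python
import feedparser  # common Python library for parsing RSS/Atom feeds

# Placeholder URLs -- in practice these would be the mission feeds of the
# civic service agency, idealist.org, and so on.
FEEDS = [
    "http://example.org/service-civique/missions.rss",
    "http://example.org/idealist/missions.rss",
]

def aggregate(feed_urls):
    """Merge several RSS 2.0 mission feeds into one flat list of entries."""
    missions = []
    for url in feed_urls:
        for entry in feedparser.parse(url).entries:
            missions.append({
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                "published": entry.get("published", ""),
                "source": url,
            })
    return missions

if __name__ == "__main__":
    # Multi-criteria search, alerts and linked data publication would be
    # built on top of this merged list.
    for mission in aggregate(FEEDS):
        print(mission["title"], "-", mission["link"])
```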

That said, it is a bit lame of me to suggest this idea given that I will probably not take the time to launch such an initiative myself, technically speaking. But I would be delighted to take part and to include mission feeds from open skills-sponsorship platforms (Mecenova, Wecena, and why not makesense and others…). Does this inspire any hacker passing by?

Plone + Freemind = eternal love?

Thursday, June 19th, 2008

Congratulations to Plone and Freemind, two great open source software packages, which recently celebrated their wedding and promptly released a newborn "Plone Freemind v.1.0" extension product for Plone. I have been really fond of Plone and Freemind for several years now. It's good news to learn that Freemind mindmaps can now be published and managed via a Plone site… even though I have yet to imagine some valuable use for this! :)

Pierre Levy vs Tim Berners-Lee, round 0.1

Wednesday, May 14th, 2008

Yesterday, I attended a research seminar at the Université de Paris 8. Pierre Levy is a philosopher, professor and head of the collective intelligence chair at the University of Ottawa, Canada. He presented the latest developments in his work on IEML, which stands for Information Economy Meta Language. Things are taking shape on this front, and the presentation gave me the opportunity to better understand how IEML compares to the technologies of the Semantic Web (SW).

IEML: not another layer on top of the SW cake

IEML is proposed as an alternative to SW ontologies. In the SW, the basic technology is the URI (Uniform Resource Identifier), which uniquely (and hopefully permanently) identifies concepts ("resources"). Triples then combine these URIs into assertions, which in turn form a graph of meaning called an ontology. IEML introduces identifiers which are not URIs. The main difference between URIs and IEML identifiers is that IEML identifiers are semantically rich. They carry meaning. They are meaningful. From a given IEML identifier, one could derive some (or ideally all?) of the semantics of the concept it identifies. Indeed, these identifiers are composed from 6 semantic primitives. These 6 primitives are Emptiness, Virtual, Actual, Sign, Being and Thing (E, V, A, S, B and T) and were chosen to be as universal as possible, i.e. not dependent on any specific culture or natural language. The IEML grammar is a way to combine these primitives and logically build concepts with them (also using the notion of triple-based graphs). These primitives are comparable to the 4 bases of DNA (A, C, T and G) that are combined into a complex polymer (DNA): with a limited alphabet, IEML can express an astronomically huge number of concepts, in the same way the 4-letter alphabet of DNA can express a huge number of phenotypes.

Meaningfulness of identifiers

When I realized that the meaningful IEML identifiers play a role similar to URIs, my first reaction was to be horrified. I have struggled for years against "old-school" IT workers who tend to rely on database keys for deriving properties of records. In a former life, in the IT department of a big industrial corporation, I was highly paid to design and impose a meaningless unique person identifier in order to uniquely and permanently identify the 200,000 employees and contractors of that multinational company in its corporate directory. The main advantage of meaningless identifiers is probably that they can be permanent: you don't have to change the identifier of an object (of a person, for instance) when some property of this object changes over time (the color of the person's hair, or Miss Dupont getting married, being called Mrs Durand and still keeping the same corporate identifier).

The same is true for URIs whenever it is feasible: if a given resource is to change over time, its URI should not depend on its variable properties (http://someone.com/blond/big/MissDupont having to change into http://someone.com/white/big/MrsDurand is a bad thing).

The same may not be true when concepts (not people) are to be identified. Concepts are supposed to be permanent and abstract things in IEML (as in the SW, I guess). If some meaningful semantic component of a given concept changes, then… it's no longer the same concept (even though we may keep using the same word in a natural language to identify this derived concept).

In the old days, IT workers used to introduce meaning into identifiers so that (database) records could more easily be managed by humans, especially during tasks like visually classifying or sorting records in a table, or getting an immediate overview of what a given record is about. But this came to be seen as bad practice once the cost of storage (having specific fields for properties that used to be stored as part of a DB key) and the cost of computation (getting a GUI for querying/filtering a DB based on properties) got lower. More often than not, the meaningful key was not permanent, and this introduced perverse effects, including having to assign a new key to a given record when some property changed, or managing human errors when the properties "as seen in the key" were no longer in sync with the "real" properties of the record according to some field.

That's probably part of the rationale behind the best practices in URI design and web architecture: a URI should be as permanent as possible, I guess, so that it does not have to change when the properties of the resource it identifies change over time. This makes web architectures more robust over time.

With IEML, we are back to the old times of meaningful identifiers. Is it such a bad thing? Probably not, because the power of IEML lies in the meaningfulness of these identifiers, which allows all sorts of computational operations on the concepts. Anyway, that's probably one of the biggest basic differences between IEML and SW ontologies.

Matching concepts with IEML

Another aspect of IEML struck me yesterday: IEML gives no magic solution to the problem of mapping (or matching) concepts together. In the SW universe, there is this recurring issue of getting two experts or ontologies to agree on the equivalence of two resources/concepts: are they really the same concept expressed with distinct but equivalent URIs? Or are they distinct concepts? How do we solve semantic ambiguities? Unless we get a solution to this issue, the grand graph of semantic data can't be universally unified, and people remain isolated in semantic islands which are nothing more than badly interconnected domain ontologies. This is called the problem of semantic integration, ontology mapping, ontology matching or ontology alignment.

A couple of years ago, I hoped that IEML would solve this issue. IEML being such a regular and aspiring-to-be-universal language, one could project any concept onto the IEML semantic space and obtain the coordinates (identifier) of this concept in this space. A second person, expert or ontology could also project its own concepts. Then it would just be a matter of calculating the distance between these points in the IEML space (IEML provides ways of calculating such distances). And if the distance was below some threshold, the two concepts could be considered equivalent for a given pragmatic purpose.
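Purely as an illustration of that hope (this is not the actual IEML metric, and the primitive decompositions below are invented), the kind of computation I had in mind looks like this:

```python
def distance(concept_a, concept_b):
    """Toy distance between two concepts expressed as sets of primitives.

    This is NOT how IEML actually measures semantic distance; it only
    illustrates the idea of comparing concepts once both are projected
    into a common coordinate system.
    """
    a, b = set(concept_a), set(concept_b)
    if not (a or b):
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# Invented decompositions over the six IEML primitives (E, V, A, S, B, T).
expert_1 = {"S", "B", "T"}   # one expert's projection of some concept
expert_2 = {"S", "B", "A"}   # another expert's projection of the "same" concept

THRESHOLD = 0.6
d = distance(expert_1, expert_2)
print(d, "equivalent" if d < THRESHOLD else "distinct")
```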

But yesterday, I realized that the art of projecting concepts into the IEML space (i.e. assigning an identifier to a concept) is very subjective. Even if a Pierre Levy proposed a 3000-concept dictionary that assigns IEML coordinates (identifiers) to concepts that are also identified by a short natural-language sentence (as in a classic dictionary), this would not prevent a Tim Berners-Lee from coming up with a very different dictionary that assigns different coordinates to the same described concepts. Thus the distance between a Pierre-Levy-based IEML word and a TBL-based IEML word would be… meaningless.

In the SW, there is a basic assumption that anyone may come up with a different URI for the same concept, and these URIs have to be associated via a "same as" property so that they are said to refer to the very same concept. When you get to whole bunches of URIs (two ontologies, for instance), you then have to match the URIs which refer to the same concepts. You have to align these ontologies. This can be a very tedious, manual and tricky process. The SW does not unify concepts. It only provides a syntax to represent and handle them. Humans still have to interpret them and match them together when they want to communicate with each other and agree on the meaning that these ontologies carry.
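For comparison, this is roughly what that "same as" declaration looks like in practice, sketched here with Python's rdflib (the two URIs are fictitious):

```python
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

g = Graph()

# Two URIs minted independently for what humans judge to be the same concept.
banana_a = URIRef("http://example.org/ontologyA#Banana")
banana_b = URIRef("http://example.net/fruits/banana")

# The alignment itself remains a human (or semi-automatic) decision;
# the Semantic Web only provides the syntax to state it.
g.add((banana_a, OWL.sameAs, banana_b))

print(g.serialize(format="turtle"))
```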

The same is more or less true with IEML. With IEML, identifiers are not arbitrarily defined (meaningful identifiers) whereas SW URIs are almost arbitrarily defined (meaningless identifiers). But the meaningful IEML identifiers only carry human meaning if they refer to the same (or similar) human/IEML dictionary.

Hence it seems to me that IEML is only valuable if some consensus exists about how to translate human concepts into the IEML space. It is only valuable to the extent that there is some universally accepted IEML dictionary, at least for basic concepts (primitives and simple combinations of IEML primitives). The same is true in the universe of SW technologies, and there are some attempts at building "top ontologies" that are proposed as shared references with which ontology builders can align their own ontologies. But the alignment process, even if theoretically made easier by the existence of these top ontologies, is still tricky, tedious and costly. And critical mass has not been reached in the shared use of such top ontologies. There is no top consensus to refer to.

Pierre Levy proposes a dictionary of about 3000 IEML words (identifiers) that represent almost all possible low-level combinations of IEML primitives. He invites people to enhance or extend his dictionary, or to come up with their own dictionaries. Let's assume that only minor changes are made to the basic Pierre Levy dictionary. Let's assume that several conflicting dictionary extensions are made for more precise concepts (higher-level combinations of IEML primitives). Given that these conflicting extensions still share a common foundation (the basic Pierre Levy dictionary), would the process of comparing and possibly matching IEML-expressed concepts be made easier? Even though IEML does not give any automagical solution to the problem of ontology mapping, I wonder whether it makes things easier or not.

In other words, is IEML a superior alternative to SW ontologies?

Apples and bananas

Yesterday, someone asked: "If someone assigns IEML coordinates to the concept of bananas, how will these coordinates compare to the concept of apples?" The answer did not satisfy me because it was along the lines of: "IEML may not be the right tool for comparing bananas to apples." I don't see why it would be more suitable for comparing competencies to achievements than for comparing bananas to apples. Or maybe I misunderstood the answer. Anyway…

Pierre Levy has put much effort into describing the properties of his abstract IEML space so that programmers can start writing libraries for handling and processing IEML coordinates and operations. There is even a programming language being developed that allows semantic functions and operations to be applied to IEML graphs and lets quantities (economic values, energy potentials, distances) flow along IEML-based semantic graphs. Hence the name Information Economy.

So there are (or will soon be) tools and services for surviving in the IEML space. But I strongly feel that there is a lack of tools for moving back and forth between the world of humans and the IEML space. How would you say "bananas" in IEML, assuming this concept is not already in a consensual dictionary?

As far as I understand, the process of assigning IEML coordinates to the concept of "bananas" is somewhat similar to the process of guessing the "right" (or best?) Chinese ideogram for bananas. I don't speak Chinese at all, but I imagine one would have to combine existing ideograms that best describe what a banana is. For instance, "bananas" could be written with a combination of the ideograms that mean "fruits of a herbaceous plant cultivated throughout the tropics that grow in hanging clusters". It could also be written with a combination of the ideograms that mean "fruits of the plants of the genus Musa that are native to the tropical region of Southeast Asia and Australia". Distinct definitions of bananas could refer to distinct combinations of existing IEML concepts (fruits + herbaceous plant + hanging clusters + tropics, or fruits + plants + genus Musa + Southeast Asia + Australia). Would the resulting IEML coordinates be far away from each other? Could a machine infer that these concepts are closely related, if not practically equivalent? How dependent would the resulting distance be on conflicts or errors in the underlying IEML dictionaries?

I ended the day with this question in my mind: how robust is the IEML translation process to human conflicts, disagreements and errors? Is it more robust than the process of building and aligning SW ontologies? Its robustness seems to me to be the main determining factor in the feasibility of the new collective-intelligence-based civilization Pierre Levy promises. If only there were a paper comparing this process to what the SW already provides, I guess people would realize the value of IEML.

Web scraping, web mashing

Thursday, March 8th, 2007

5 Ways to Mix, Rip, and Mash Your Data introduces promising web and desktop applications that extract structured data feeds from web sites and mix them together into something possibly useful to you. Think of things like getting filtered Monster job ads as a convenient RSS feed, along with job ads from your other favorite job sites. This reminds me of my Python hacks for automating web crawling and web scraping. Sometimes, I wish I could find time to work a bit further on that…

WikiCalc: Web 2.0 spreadsheets wikified

Wednesday, July 26th, 2006

WikiCalc is a nice piece of GPLed software that publishes wiki pages structured like Excel spreadsheets: one can view and edit tables, modify calculation formulas in cells, manage their formatting through the web browser, etc. It brings to spreadsheets the inherent advantages of many wikis: ease of publication on the web, ease of modification, revision tracking for undoing unwanted changes by other users, RSS views of recent changes made to the page. It brings to wikis the inherent advantages of spreadsheets: live calculations, nice formatting, compliance with the corporate way of thinking and managing things (will we see a WikiSlides with bulletpoints and animations in some future?). More than this, WikiCalc lets spreadsheets grab input data from external web sites and do live calculations with it: some formulas generate HTTP requests to web services in order to retrieve the latest value of a stock quote, weather forecasts, and so on. Last but not least, the flexible architecture of WikiCalc allows offline use, still via the user's browser, and a synchronization mechanism lets the online version get updated once the connection is restored.

A nice 10 min long WikiCalc screencast with audio is available here.

In a former life, I managed a team of web project managers in a multinational industrial corporation. As my boss wanted simple-to-update weekly/monthly status reports about every project, we had tried using a wiki page per project to publish and update those reports. It was tedious and not nicely formatted for a corporate environment. I imagine that a nice immediate use of WikiCalc would be to let small project teams update project status reports on an intranet, including nicely formatted timelines and budget indicators. It would keep the update effort at a minimal and convenient level while preserving the wiki flexibility of linking to the project documentation and resources.

We already knew structured wiki pages for managing forms or category schemes. WikiCalc introduces spreadsheet structures while preserving the open and unstructured spirit of wikis. The next step for future wikis would be to allow semantic structures to be managed the wiki way, as in some early semantic wiki prototypes. [update: see Danny Ayers' blog entries on how WikiCalc could relate to the Semantic Web vision]

Connecting people through the Semantic Web

Wednesday, February 15th, 2006

The European research project Vikef aims at developing technologies for connecting people with one another thanks to Semantic Web technologies. The main application envisioned: connecting professionals at trade shows and scientists at conferences. Launched in April 2004, the project will end in March 2007. It is led notably by Xerox, the Fraunhofer Institute and Telefonica.
Spontaneously, I cannot help being delighted by such a project while worrying about the usability of the applications that will come out of it: will users be asked to model their own interests? That does not seem very realistic to me. I'll believe it when I see it!

Inventing an automatic coaching system on mobile phones

Tuesday, November 1st, 2005

[This is a summary of one of my professional achievements. I use it to advertise myself in the hope of attracting future partners. More on this in the story of my professional career.]

In 2005, the MobiLife IT research project, jointly run by 22 European companies and universities, has a mobile phone application that lets an athlete visualize his training context: heart rate, location, time… As a research engineer, I am in charge of inventing a system that exploits this kind of data to offer the user personalized, context-dependent recommendations. I propose a user scenario to the partners, which is accepted, and then supervise its implementation. I implement part of the system on the server side (J2EE) and on the phone side (J2ME). The application thus becomes able to learn the athlete's training habits, good or bad, to predict his next choice of exercise, to compare it with what an expert coach would recommend under the same conditions and, on this basis, to alert the athlete through short personalized video clips on his phone: "Careful, it is late and after two running exercises on the treadmill you usually tend to push too hard on the next exercise; you should rather switch to the bike for a 10-minute medium-difficulty exercise." The system can be transposed to countless mobile situations: dietary coaching, continuing education, environment-friendly behaviours, tourist guides… At a Motorola labs open house, I organize a demonstration of this application in front of 40 European journalists and analysts.

Criticisms of the Semantic Web

Tuesday, September 6th, 2005

The Semantic Web is the target of much criticism. The main reproach made to this technology-driven vision is its lack of pragmatism. Here are references to two articles illustrating these criticisms.

Clay Shirky argues that managing ontologies is no picnic and that "social tagging"/"folksonomy" technologies are a much better fit for the Internet than the ontology specialists' vision of the Semantic Web. In my opinion, the proposed alternative (social tagging) is a good one, but the anti-ontology criticism is exaggerated, because I do not think Tim Berners-Lee's vision of the Semantic Web relies as much on top-down ontological modeling of knowledge as people claim. In short, for me, the solution of the future would be something like "semantic social tagging". This article is a good starting point for discovering folksonomies, compared with ontological knowledge modeling.

In a more comical vein, there is a loose adaptation of a sketch by the British comedy troupe Monty Python that mocks the top-down approach of Semantic Web specialists; such criticism comes cheap now that Tim Berners-Lee has been knighted by the Queen of England… It is nonetheless quite amusing and interesting.

Incidentally, here is a simple way to picture the Semantic Web: imagine that the web becomes not just one big hypertext document but also one big database. With the Semantic Web, applications (agents or not) will be able to "freely" process data produced by other applications.
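To make that picture concrete, here is a minimal sketch (in Python with rdflib; the URL is a placeholder) of an application loading RDF data published by someone else and querying it as if it were a database:

```python
from rdflib import Graph

g = Graph()
# Any RDF document published on the web; this URL is a placeholder.
g.parse("http://example.org/people/foaf.rdf")

# A SPARQL query over data produced by another application.
results = g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name ?homepage WHERE {
        ?person foaf:name ?name .
        OPTIONAL { ?person foaf:homepage ?homepage . }
    }
""")

for name, homepage in results:
    print(name, homepage)
```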

From flat text to structured data

Monday, September 5th, 2005

This article shows an example of how to build structured data sets from flat text. The example given is the detection of existing relationships between (ex-)members of the British government by data-mining Wikipedia. This relates to "bubble-up folksonomies". These folks at the BBC are smart!

Comparator

Sunday, July 24th, 2005

Comparator is a small Plone product I recently hacked for my own pleasure. It's called Comparator until it gets a nicer name, if ever. I distribute it here under the GNU General Public License. It allows users to select any existing content type (object class) and to compute a personalized comparison of the instances of this class. For example, if you choose to compare "News Items", then you select the news item properties you want to base your comparison upon (title, creation date, description, …). You give marks to the values of these properties (a somewhat tedious process at the moment, but with much room for improvement there). Comparator then lets you give relative weights to these properties so that the given marks are processed and the compared instances are ranked globally.

It's a kind of basic building block for a comparison framework, for building Plone applications that compare stuff (any kind of stuff that exists within your portal, including semantically aggregated stuff). Let's say your Plone portal is full of descriptions of beers (with many details about all kinds of beers). Adding a Comparator to your portal will then let your users give weights to every beer property and rank all the beers according to their personal tastes.
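For the curious, the scoring behind it boils down to a weighted sum, something like the following simplified sketch (the beer properties, marks and weights are of course made up):

```python
# Simplified version of what Comparator computes: each property value gets a
# mark, each property gets a relative weight, and instances are ranked by
# their weighted sum.
marks = {
    "bitterness": {"low": 2, "medium": 5, "high": 9},
    "alcohol":    {"<5%": 3, "5-8%": 7, ">8%": 4},
}
weights = {"bitterness": 0.7, "alcohol": 0.3}

beers = [
    {"name": "Beer A", "bitterness": "high", "alcohol": "5-8%"},
    {"name": "Beer B", "bitterness": "low",  "alcohol": ">8%"},
]

def score(item):
    return sum(weights[p] * marks[p][item[p]] for p in weights)

for beer in sorted(beers, key=score, reverse=True):
    print(beer["name"], round(score(beer), 2))
```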

Comparator is based on Archetypes and was built from a UML diagram with ArchGenXML. Comparator fits well into my vision of semantic aggregation. I hope you can see how. Comments welcome!

Daisy vs. Plone, feature fighting

Thursday, June 9th, 2005

A Gouri-friend of mine recently pointed me to Daisy, a "wiki/structured/XML/faceted CMS" thing, he said. I answered that it might be a nice product, but not attractive enough for me at the moment to spend much time analyzing it. Nevertheless, as Gouri asked, let's browse Daisy's features and try to compare them with their Plone equivalents (given that I have never tried Daisy).

The Daisy project encompasses two major parts: a featureful document repository

Plone is based on an object-oriented repository (Zope's ZODB) rather than a document-oriented repository.

and a web-based, wiki-like frontend.

Plone has its own web-based frontend. Wiki features are provided by an additional product (Zwiki).

If you have different frontend needs than those covered by the standard Daisy frontend, you can still benefit hugely from building upon its repository part.

Plone's frontend is easily customizable, either with your own CSS, by inheriting from existing ZPT skins, or with a WYSIWYG skin module such as CPSSkin.

Daisy is a Java-based application

Plone is Python-based.

, and is based on the work of many valuable open source packages, without which Daisy would not have been possible. All third-party libraries or products we redistribute are unmodified (unforked) copies.

Same for Plone. Daisy seems to be based on Cocoon. Plone is based on Zope.

Some of the main features of the document repository are:
* Storage and retrieval of documents.

Documents are one of the numerous object classes available in Plone. The basic object in Plone is… an object, which is not fully extensible by itself unless it was designed to be so. Plone content types are more user-oriented than generic documents (they implement specialized behaviours such as security rules, workflows, displays, …). They will become very extensible when the next version of the underlying "Archetypes" layer is released (it includes a through-the-web schema management feature that allows web users to extend any existing content type).

* Documents can consist of multiple content parts and fields; document types define what parts and fields a document should have.

Plone's perspective is different because of its object orientation. Another Zope product, called Silva, is closer to Daisy's document orientation.

Fields can be of different data types (string, date, decimal, boolean, …) and can have a list of values to choose from.

Same for Archetypes-based content types in Plone.

Parts can contain arbitrary binary data, but the document type can limit the allowed mime types. So a document (or more correctly a part of a document) could contain XML, an image, a PDF document, … Part upload and download is handled in a streaming manner, so the size of parts is only limited by the available space on your filesystem (and, for uploading, a configurable upload limit).

I imagine that Daisy allows the upload and download of documents having any structure, with no constraint. In Plone, you are constrained by the object model of your content types. As said above, this model can be extended at run time (schema management), but at the moment the usual way is to define your model at design time and then comply with it at run time. At run time (even without schema management), you can still add custom metadata or upload additional attached files if your content type supports attached files.

* Versioning of the content parts and fields. Each version can have a state of ‘published’ or ‘draft’. The most recent version which has the state published is the ‘live’ version, ie the version that is displayed by default (depends on the behaviour of the frontend application of course).

The default behaviour of Plone does not include real versioning but document workflows. It means that a given content item can be in state 'draft' or 'published' and go from one state to another according to a pre-defined workflow (with security conditions, event triggering and so on). But a given object has only one version by default.
But there are additional Plone products that make Plone support versioning. These products are to be merged into a future Plone distribution, because versioning has been a long-awaited feature. Note that, at the moment, you can have several versions of a document to support multi-language sites (one version per language).

* Documents can be marked as 'retired', which makes them appear as deleted; they won't show up unless explicitly requested. Documents can also be deleted permanently.

Plone's workflow mechanism is much more advanced. The default workflow includes a similar retired state. But the admin can define new workflows and modify the default one, always with reference to user roles. Plone's security model is quite advanced and is the underlying layer of every Plone functionality.

* The repository doesn’t care much what kind of data is stored in its parts, but if it is “HTML-as-well-formed-XML”, some additional features are provided:
o link-extraction is performed, which allows to search for referers of a document.
o a summary (first 300 characters) is extracted to display in search results
o (these features could potentially be supported for other formats also)

There is no such thing in Plone. Maybe in Silva? Plone's reference engine allows you to define associations between objects. These associations are indexed by Plone's search engine (the "catalog") and can be searched.

* all documents are stored in one “big bag”, there are no directories.

Physically, the ZODB repository can take many forms (RDBMS, …). The default ZODB storage is a single flat file that can get quite big: Data.fs.

Each document is identified by a unique ID (an ever-increasing sequence number starting at 1), and has a name (which does not need to be unique).

Each object has an ID, but it is not globally unique at the moment. Objects are unfortunately stored in a hierarchical structure (Zope's tree). Some Zope/Plone developers wished for "placeless content" to be implemented. Daisy must still be superior to Plone in that respect.

Hierarchical structure is provided by the frontend by the possibility to create hierarchical navigation trees.

Zope's tree is the most important structure for objects in a Plone site; it is arguably too important. You can still create navigation trees with shortcuts. But in fact, the usual solution for getting maximum flexibility in navigation trees is to use the "Topic" content type. Topics are folder-like objects that contain a dynamic list of links to objects matching the Topic's pre-defined query. Topics are like persistent searches displayed as folders. As an example, a Topic may display the list of all the "Photo"-typed objects that are in "draft" state in a specific part (tree branch) of the site, etc.

* Documents can be combined in so-called “collections”. Collections are sets of the documents. One document can belong to multiple collections, in other words, collections can overlap.

Topics, too? I regret that Plone does not easily offer a default way to display a whole set of objects in just one page. As an example, I would have liked to display a "book" of all the contents of my Plone site as if it were one single object (so that I can print it…). But there are some additional Plone products (extensions) that support similar functionality. I often use "Content Panels" to build a page by defining its global layout (columns and lines) and by filling it with "views" of existing Plone objects (especially Topics). Content Panels mixed with Topics allow high flexibility in your site. But this flexibility has some limits too.

* possibility to take exclusive locks on documents for a limited or unlimited time. Checking for concurrent modifications (optimistic locking) happens automatically.

See versioning above.

* documents are automatically full-text indexed (Jakarta Lucene based). Currently supports plain text, XML, PDF (through PDFBox), MS-Word, Excel and Powerpoint (through Jakarta POI), and OpenOffice Writer.

Same for Plone, except that Plone's search engine is not Lucene and I don't know whether Plone can read OpenOffice Writer documents. Note that you will need additional modules, depending on your platform, in order to index Microsoft files.

* repository data is stored in a relational database. Our main development happens on MySQL/InnoDB, but the provisions are there to add support for new databases; for example, PostgreSQL support is now included.

Everything is in the ZODB, stored by default as a single file. But it can also be stored in a relational database (though this is usually pointless). You can also transparently mix several storages in the same Plone instance. Furthermore, instead of having Plone write directly to the ZODB file, you can configure Plone to go through a ZEO client-server setup so that several Plone instances can share a common database (load balancing). Even better, there is a commercial product, ZRS, that allows you to transparently replicate ZODBs so that several Plone instances set up with ZEO can use several redundant ZODBs (no single point of failure).

The part content is stored in normal files on the file system (to offload the database). The usage of these familiar, open technologies, combined with the fact that the daisywiki frontend stores plain HTML, makes that your valuable content is easily accessible with minimal “vendor” lock-in.

Everything's in the ZODB. This can be seen as lock-in. But it is not really, because 1/ the product is open source and you can script a full export with Python with minimal effort, and 2/ there are default WebDAV + FTP services that can be combined with Plone's Marshall extension (soon to be included in Plone's default distribution) to get your content out of your Plone site. Even better, you can also upload your structured semantic content with Marshall plus additional hacks, as I mentioned elsewhere.
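As an illustration of point 1/, here is a rough sketch of such an export script (hypothetical code, meant to run with access to the portal object, for instance as an External Method; only generic catalog and Dublin Core accessors are used):

```python
def export_content(portal):
    """Walk the portal catalog and return (id, title, url) for every object."""
    catalog = portal.portal_catalog
    rows = []
    for brain in catalog():          # one lightweight "brain" per indexed object
        obj = brain.getObject()      # wake up the real object from the ZODB
        rows.append((obj.getId(), obj.Title(), obj.absolute_url()))
    return rows
```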

* a high-level, sql-like query language provides flexible querying without knowing the details of the underlying SQL database schema. The query language also allows to combine full-text (Lucene) and metadata (SQL) searches. Search results are filtered to only contain documents the user is allowed to access (see also access control). The content of parts (if HTML-as-well-formed-XML) can also be selected as part of a query, which is useful to retrieve eg the content of an “abstract” part of a set of documents.

No such thing in Plone as far as I know. You may have to Pythonize, my friend… Except that Plone's tree gives a URL to every object, so you can access any part of the site, though not with a granularity similar to Daisy's. See Silva for more document orientation.

* Access control: instead of attaching an ACL to each individual document, there is a global ACL which allows to specify the access rules for sets of documents by selecting those documents based on expressions. This allows for example to define access control rules for all documents of a certain type, or for all documents in a certain collection.

Access control is based on Plone's tree, with inheritance (similar to the Windows security model in some ways). I suppose Plone's access control is more sophisticated and maintainable than Daisy's, but it would require more investigation to explain why.

* The full functionality of the repository is available via an HTTP+XML protocol, thus providing language and platform independent access. The documentation of the HTTP interface includes examples on how the repository can be updated using command-line tools like wget and curl.

Unfortunately, Plone is not ReSTful enough at the moment. But there is some hope the situation will change with Zope 3 (Zope's next major release, which is coming soon). Note that Zope (and thus Plone) supports XML-RPC over HTTP as a generic web service protocol. But this is nothing near real ReSTful web services…

* A high-level, easy to use Java API, available both as an “in-JVM” implementation for embedded scenarios or services running in the daisy server VM, as well as an implementation that communicates transparently using the HTTP+XML protocol.

Say Python and XML/RPC here.

* For various repository events, such as document creation and update, events are broadcasted via JMS (currently we include OpenJMS). The content of the events are XML messages. Internally, this is used for updating the full-text index, notification-mail sending and clearing of remote caches. Logging all JMS events gives a full audit log of all updates that happened to the repository.

No such mechanism as far as I know. But Plone of course offers fully detailed audit logs of any of its events.

* Repository extensions can provide additional services, included are:
o a notification email sender (which also includes the management of the subscriptions), allowing subscribing to individual documents, collections of documents or all documents.

No such generic feature by default in Plone. You can add scripts to send notifications on any workflow transition, but you need to write a line or two of Python, and the management of subscriptions is not implemented by default. However, folder-like objects support RSS syndication, so you can aggregate Plone's new objects in your favorite news aggregator.

o a navigation tree management component and a publisher component, which plays hand-in-hand with our frontend (see further on)

I’ll see further on… :)

* A JMX console allows some monitoring and maintenance operations, such as optimization or rebuilding of the fulltext index, monitoring memory usage, document cache size, or database connection pool status.

You have several places to look at for this kind of monitoring within Zope/Plone (no centralized monitoring). An additional Plone product helps centralize maintenance operations. Still some room for progress here.

The “Daisywiki” frontend
The frontend is called the “Daisywiki” because, just like wikis, it provides a mixed browsing/editing environment with a low entry barrier. However, it also differs hugely from the original wikis, in that it uses wysiwyg editing, has a powerful navigation component, and inherits all the features of the underlying daisy repository such as different document types and powerful querying.

Well, then we can just say the same for Plone and rename its skins "the Plonewiki frontend"… It supports WYSIWYG editing too, with a customizable navigation tree, etc.

* wysiwyg HTML editing
o supports recent Internet Explorer and Mozilla/Firefox (gecko) browsers, with fallback to a textarea on other browsers. The editor is a customized version of HTMLArea (through plugins, not a fork).

Same for Plone (except it is not an extension of HTMLArea but of a similar product).

o We don't allow for arbitrary HTML, but limit it to a small, structural subset of HTML, so that it's future-safe, output medium independent, secure and easily transformable. It is possible to have special paragraph types such as 'note' or 'warning'. The stored HTML is always well-formed XML, and nicely laid out. Thanks to a powerful (server-side) cleanup engine, the stored HTML is exactly the same whether edited with IE or Mozilla, allowing to do source-based diffs.

No such validity control within Plone. In fact, the structure of a Plone document is always valid because it is managed by Plone according to a specific object model, but a given object may contain an HTML part (a document's body, for example) that may not be valid. If your documents are to have a recurring inner structure, then you are invited to make this structure an extension of an object class so that it is no longer handled as mere document structure. See what I mean?

o insertion of images by browsing the repository or upload of new images (images are also stored as documents in the repository, so can also be versioned, have metadata, access control, etc)

Same with Plone, except for versioning. Note that Plone's Photo content type supports automatic server-side resizing of images.

o easy insertion of document links by searching for a document

Sometimes yes, sometimes no. It depends on the type of link you are creating.

o a heartbeat keeps the session alive while editing

I don't know how Plone handles this.

o an exclusive lock is automatically taken on the document, with an expire time of 15 minutes, and the lock is automatically refreshed by the heartbeat

I never tried the Plone extension for versioning, so I can't say. I know that you can use the WebDAV interface to edit a Plone object with your favorite text-processing package if you want, and I suppose this interface properly manages this kind of issue. But I never tried.

o editing screens are built dynamically for the document type of the document being edited.

Of course.

* Version overview page, from which the state of versions can be changed (between published and draft), and diffs can be requested. * Nice version diffs, including highlighting of actual changes in changed lines (ignoring re-wrapping).

You can easily move any object through its associated workflow (from one state to another, through transitions). But no versioning. Note that you can use Plone's wiki extension, which supports diffs and some versioning features, but this is not available for every Plone content type.

* Support for includes, i.e. the inclusion of one document in the other (includes are handled recursively).

No.

* Support for embedding queries in pages.

You can use Topics (persistent queries). You can embed them in Content Panels.

* A hierarchical navigation tree manager. As many navigation trees as you want can be created.

One and only one navigation tree by default. But Topics can be nested, so you can have one main navigation tree plus one or more alternatives built with Topics (though these alternatives are limited in some ways).

Navigation trees are defined as XML and stored in the repository as documents, thus access control (for authoring them, read access is public), versioning etc applies. One navigation tree can import another one. The nodes in the navigation tree can be listed explicitly, but also dynamically inserted using queries. When a navigation tree is generated, the nodes are filtered according to the access control rules for the requesting user. Navigation trees can be requested in "full" or "contextualized", this last one meaning that only the nodes going to a certain document are expanded. The navigation tree manager produces XML, the visual rendering is up to XSL stylesheets.

This is nice. Plone cannot do that easily. But what Plone can do is still done with respect to its security model and access control, of course.

* A navigation tree editor widget allows easy editing of the navigation trees without knowledge of XML. The navigation tree editor works entirely client-side (Mozilla/Firefox and Internet Explorer), without annoying server-side roundtrips to move nodes around, and full undo support.

Yummy.

* Powerful document-publishing engine, supporting:
o processing of includes (works recursive, with detection of recursive includes)
o processing of embedded queries
o document type specific styling (XSLT-based), also works nicely combined with includes, i.e. each included document will be styled with its own stylesheet depending on its document type.

OK

* PDF publishing (using Apache FOP), with all the same features as the HTML publishing, thus also document type specific styling.

Plone's document-like content types offer PDF views too.

* search pages:
o fulltext search
o searching using Daisy’s query language
o display of referers (“incoming links”)

Fulltext search is available. There is no query language for the user. Display of referrers is only available for content types that are either wiki pages or have been given the ability to include references from other objects.

* Multiple-site support, allows to have multiple perspectives on top of the same daisy repository. Each site can have a different navigation tree, and is associated with a default collection. Newly created documents are automatically added to this default collection, and searches are limited to this default collection (unless requested otherwise).

It might be possible with Plone but I am not sure when this would be useful.

* XSLT-based skinning, with reusable 'common' stylesheets (in most cases you'll only need to adjust one 'layout' xslt, unless you want to customise heavily). Skins are configurable on a per-site basis.

Plone's skins use the Zope Page Templates technology. This is a very nice and simple HTML templating technology. Plone's skins make extensive use of CSS, and in fact most of the layout and look-and-feel of a site is now in CSS objects. These skins are managed as objects, with inheritance, overriding of skins and other sophisticated mechanisms to configure them.

* User self-registration (with the possibility to configure which roles are assigned to users after self-registration) and password reminder.

Same is available from Plone.

* Comments can be added to documents.

Available too.

* Internationalization: the whole front-end is localizable through resource bundles.

Idem.

* Management pages for managing:
o the repository schema (the document types)
o the users
o the collections
o access control

Idem.

* The frontend currently doesn't perform any caching, all pages are published dynamically, since this also depends on the access rights of the current user. For publishing of high-traffic, public (ie all public access as the same user), read-only sites, it is probably best to develop a custom publishing application.

Zope includes caching mechanisms that take access rights into account. For very high-traffic public sites, a Squid frontend is usually recommended.

* Built on top of Apache Cocoon (an XML-oriented web publishing and application framework), using Cocoon Forms, Apples (for stateful flow scenarios), and the repository client API.

By default, Zope uses its own embedded web server. But the usual setup for production-grade sites is to put an Apache reverse-proxy in front of it.

My conclusion: Daisy looks like a nice product when you have a very document-oriented project, with complex documents whose structures vary a lot from one document to another; its equivalent in Zope's world would be Silva. But Plone is much more appropriate for everyday CMS sites. Its object orientation offers both great flexibility for the developer and more ease of use for the Joe-six-pack webmaster. Plone still lacks some important technical features for its future, namely ReSTful web service interfaces and the placeless content paradigm. Versioning is expected soon.

This article was written in one go, late at night, with no re-reading; it has since been reviewed once, thanks to Gouri. It may be wrong or badly lacking information on some points, so your comments are most welcome!

Semantic Web reports for corporate social responsibility

Thursday, May 26th, 2005

With that amount of buzzwords in the title, I must be ringing some warning bells in your minds. You would be right to be cautious about what I am going to say here, because this is pure speculation. I would like to imagine how annual (quarterly?) corporate reports should look in the near future.

In my opinion, they should carry on the current trend of emphasizing corporate social responsibility. In order to do so, they should both embrace innovative reporting standards and methodologies and support these methodologies by implementing them with "Semantic Web"-like technologies. In such a future, financial analysts (and eventually stakeholders) would be able to browse specialized web sites aggregating the meaningful data published in these corporate reports. On such specialized web sites, investors would be able to compare comparable data, marks and ratings regarding their favorite corporations. They would be given functionalities like the ones you find in multidimensional analysis tools (business intelligence), even if they were as simplified as in interactive purchase guides [via Fred]. In such a future, I would be able to subscribe to such a web service and set my preferences and filters in financial, social and environmental terms. This service would give me a snapshot of how the selected corporations compare to one another with regard to my preferences and filters. Moreover, I would receive an alert as an RSS feed whenever a new report is published or when some performance thresholds are reached by the corporations I monitor.

Some technological issues still stand in the way of such a future. They are fading away. But a huge number of methodological and political issues remain… What if such technologies come to maturity? Would they push corporations, rating agencies, analysts and stakeholders to change their minds and go in the right direction?

From OWL to Plone

Thursday, April 28th, 2005

I found a working path to transform an OWL ontology into a working Plone content-type. Here is my recipe :

  1. Choose any existing OWL ontology (a quick way to peek inside it is sketched just after this list)
  2. With Protege equipped with its OWL plugin, create a new project from your OWL file.
  3. Still within Protege, with the help of its UML plugin, convert your OWL-Protege project into a UML classes project. You get an XMI file.
  4. Load this XMI file into a UML project with Poseidon. Save this project in the .zuml Poseidon format.
  5. From Poseidon, export your classes as a new XMI file. It will be Plone-friendly.
  6. With a text editor, delete the accented characters that Poseidon may have added to your file (for example, the Frenchy Poseidon adds a badly accented "Modele sans titre" attribute to your XMI), because the next step won't appreciate them
  7. python Archgenxml.py -o YourProduct yourprojectfile.xmi turns your XMI file into a valid Plone product. Requires the latest stable versions of Plone and Archetypes (see doc) plus the ArchgenXML head from the subversion repository.
  8. Launch your Plone instance and install YourProduct as a new product from your Plone control panel. Enjoy YourProduct !
  9. Optionally, populate it with an appropriate marshaller.
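As announced in step 1, here is a quick (and entirely optional) way to peek inside the ontology you picked, sketched with rdflib; it assumes the file is serialized as RDF/XML and is not part of the recipe itself:

```python
from rdflib import Graph
from rdflib.namespace import RDF, OWL

g = Graph()
# Path (or URL) of the OWL file chosen at step 1, assumed to be RDF/XML.
g.parse("your_ontology.owl", format="xml")

# These OWL classes are what will end up as UML classes, then Plone content types.
for owl_class in g.subjects(RDF.type, OWL.Class):
    print(owl_class)
```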

Now you are not far from using Plone as a semantic aggregator.

Web scraping with Python (part II)

Friday, March 11th, 2005

The first part of this article dealt with retrieving HTML pages from the web with the help of a mechanize-propelled web crawler. Now your HTML pieces are safely saved locally on your hard drive and you want to extract structured data from them. This is part 2: HTML parsing with Python. For this task, I adopted a slightly more imaginative approach than for my crawling hacks. I designed a data extraction technique based on HTML templates. Maybe this could be called "reverse-templating" (or something like template-based reverse web engineering).

You may be used to HTML templates for producing HTML pages. An HTML template plus structured data can be transformed into a set of HTML pages with the help of a proper templating engine. One famous technology for HTML templating is called Zope Page Templates (because this kind of template is used within the Zope application server). ZPTs use a special set of additional HTML tags and attributes referred to by the "tal:" namespace. One advantage of ZPT (over competing technologies) is that ZPTs are nicely rendered in WYSIWYG HTML editors. Thus web designers produce HTML mockups of the screens to be generated by the application. Web developers then insert tal: attributes into these HTML mockups so that the templating engine knows which parts of the HTML template have to be replaced by which pieces of data (usually pumped from a database). As an example, web designers will write <title>Camcorder XYZ</title>, then web developers will modify this into <title tal:content="camcorder_name">Camcorder XYZ</title>, and the templating engine will produce <title>Camcorder Canon MV6iMC</title> when it processes the "MV6iMC" record in your database (it replaces the content of the title element with the value of the camcorder_name variable as retrieved from the current database record). This technology is used to merge structured data with HTML templates in order to produce web pages.

I took inspiration from this technology to design parsing templates. The idea here is to reverse the use of HTML templates. In the parsing context, HTML templates are still produced by web developers, but the templating engine is replaced by a parsing engine (known as web_parser.py, see below for the code of this engine). This engine takes as input HTML pages (the ones you previously crawled and retrieved) plus ZPT-like HTML templates. It then outputs structured data. First your crawler saved <title>Camcorder Canon MV6iMC</title>. Then you wrote <title tal:content="camcorder_name">Camcorder XYZ</title> into a template file. Eventually the engine outputs camcorder_name = "Camcorder Canon MV6iMC".
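To give a feel for the idea (this is a much simplified illustration, not the actual web_parser.py engine, which also handles anchors, conditions, ids and so on), the core of reverse-templating could be sketched like this:

```python
from bs4 import BeautifulSoup  # the real engine uses an older, modified BeautifulSoup

def reverse_template(template_html, page_html):
    """Extract the variables marked with pseudotal:content in the template
    from a page, by matching elements on tag name and position."""
    template = BeautifulSoup(template_html, "html.parser")
    page = BeautifulSoup(page_html, "html.parser")
    data = {}
    for marked in template.find_all(attrs={"pseudotal:content": True}):
        variable = marked["pseudotal:content"]
        # Position of the marked element among same-named tags in the template...
        same_tag = template.find_all(marked.name)
        index = next(i for i, el in enumerate(same_tag) if el is marked)
        # ...used to pick the corresponding element in the page being parsed.
        target = page.find_all(marked.name)[index]
        data[variable] = target.get_text(strip=True)
    return data

template = '<html><title pseudotal:content="camcorder_name">Camcorder XYZ</title></html>'
page = '<html><title>Camcorder Canon MV6iMC</title></html>'
print(reverse_template(template, page))  # {'camcorder_name': 'Camcorder Canon MV6iMC'}
```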

In order to run the engine, you just have to write a small launch script that defines several setup variables, such as:

  • the URL of your template file,
  • the list of URLs of the HTML files to be parsed,
  • whether or not you would like to pre-process these files with an HTML tidying library (useful when the engine complains about badly formed HTML),
  • an arbitrary keyword defining the domain of your parsing operation (it may be the name of the web site your HTML files come from),
  • the charset these HTML files use (no automatic detection at the moment, sorry…),
  • the output format (CSV-like file or Semantic Web document),
  • an optional separator character or string if you chose the CSV-like output format.

The easiest way to go is to copy and modify my example launch script (parser_dvspot.py) included in the ZIP distribution of this web_parser.

Let’s summarize the main steps to go through :

  1. install utidylib into your Python installation
  2. copy and save my modified version of BeautifulSoup into your Python libraries directory (usually …/Lib/site-packages)
  3. copy and save my engine (web_parser.py) into your local directory or into your Python libraries directory
  4. choose a set of HTML files on your hard drive or directly on a web site,
  5. save one of these files as your template,
  6. edit this template file and insert the required pseudotal attributes (see below for pseudotal instructions, and see the example dvspot template, template_dvspot.zpt),
  7. copy and edit my example launch script so that you define the proper setup variables in it (the example parser_dvspot.py contains more detailed instructions than the above); save it as my_script.py
  8. launch your script with python my_script.py > output_file.owl (or python my_script.py > output_file.csv)
  9. enjoy yourself and your fresh output_file.owl or output_file.csv (import it into Excel)
  10. give me some feedback about your reverse-templating experience (preferably as a comment on this blog)

This is just my first attempt at building such an engine and I don't want to create confusion between real (and mature) tal attributes and my pseudo-tal instructions, so I adopted pseudotal as my main namespace. At some point in the future, when the specification of these reverse-templating instructions is somewhat more stable (and if ever the "tal" guys agree), I might adopt tal as the namespace. Please also note that the engine is somewhat badly written: the code and internals are rather clumsy. There is much room for future improvement and refactoring.

The current version of this reverse-templating engine supports the following template attributes/instructions (see the source code for further updates and documentation):

  • pseudotal:content gives the name of the variable that will contain the content of the current HTML element
  • pseudotal:replace gives the name of the variable that will contain the entire current HTML element
  • (NOT SUPPORTED YET) pseudotal:attrs gives the name of the variable that will contain the (specified?) attribute(s?) of the current HTML element
  • pseudotal:condition is a list of arguments; it gives the condition(s) that must be verified for the parser to be sure that the current HTML element is the one being looked for. This condition is constructed as a list modelled on BeautifulSoup fetch arguments: a python dictionary giving detailed conditions on the HTML attributes of the current HTML element, some content to be found in the current HTML element, and the scope of the search for the current HTML element (recursive or not)
  • pseudotal:from_anchor gives the name of the pseudotal:anchor that is used to build the relative path leading to the current HTML element; when no from_anchor is specified, the path used to position the current HTML element is calculated from the root of the HTML file
  • pseudotal:anchor specifies a name for the current HTML element; this element can then be used by a pseudotal:from_anchor attribute as the starting point for building the path to the element that from_anchor decorates; usually used in conjunction with a pseudotal:condition; the default anchor is the root of the HTML file.
  • pseudotal:option describes optional behaviours of the HTML parser; it is a list of constants; it contains NOTMANDATORY if the parser should not raise an error when the current element is not found (it does by default); it contains FULL_CONTENT when the data being looked for is the whole content of the current HTML element (the default is the last part of the content of the current HTML element, i.e. either the last HTML tags or the last string included in the current element)
  • pseudotal:is_id_part: a special ‘id’ variable is automatically built for every parsed resource; this id variable is made of several concatenated parts; pseudotal:is_id_part gives the position at which the current variable will be used when building the id of the current resource; usually used in conjunction with pseudotal:content, pseudotal:replace or pseudotal:attrs
  • (NOT SUPPORTED YET) pseudotal:repeat specifies the scope of the HTML tree that describes ONE resource (useful when several resources are described in one given HTML file, such as in a list of items); the value of this attribute gives the name of a class that will be instantiated for each parsed resource scope, plus the name of a list containing all the parsed resources
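
To make these instructions more concrete, here is a short, purely hypothetical template fragment; the markup and the variable names (camera_name, lcd_pixels) are invented for illustration, and template_dvspot.zpt remains the authoritative example :

<div pseudotal:anchor="specs_block">
  <h2 pseudotal:content="camera_name" pseudotal:is_id_part="0">Some camera name</h2>
  <span pseudotal:from_anchor="specs_block"
        pseudotal:content="lcd_pixels"
        pseudotal:option="NOTMANDATORY">LCD pixel count</span>
</div>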

The current version of the engine can output structured data either as a CSV-like file (tab-delimited for example) or as an RDF/OWL document (of Semantic Web fame). Both formats can easily be imported and further processed with Excel. The RDF/OWL format gives you the ability to process the data with all the powerful tools that are emerging along with the Semantic Web effort. If you feel adventurous, you may thus import your RDF/OWL file into Stanford’s Protege semantic modeling tool (or into Eclipse with its SWEDE plugin) and further process your data with the help of a SWRL rules-based inference engine. The future Semantic Web Rules Language will help further process this output so that you can powerfully compare RDF data coming from distinct sources (web sites). In order to be more productive in terms of fancy buzz-words, let’s say that this reverse-templating technology is some sort of a web semantizer: it produces semantically-rich data out of flat web pages.
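
As a quick illustration of the RDF/OWL route, here is a minimal sketch (not part of the distribution) that loads the engine’s output with rdflib and lists the properties it contains. It assumes a reasonably recent version of rdflib and an RDF/XML file named output_file.owl; the actual vocabulary of the generated data is whatever the engine put in there, so we simply enumerate it :

from rdflib import Graph

g = Graph()
g.parse("output_file.owl", format="xml")   # RDF/XML syntax assumed

print("%d triples loaded" % len(g))
# list the distinct properties the engine generated for the parsed resources
for prop in sorted(set(g.predicates())):
    print(prop)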

The current version of the engine makes extensive use of BeautifulSoup. Maybe it should have been based on a more XMLish approach instead (using XML paths, i.e. XPath?). But that would have implied turning the HTML templates and the HTML files to be processed into XHTML. The problem is that I would then have had to rely on utidylib, and this library breaks some malformed HTML pages so much that they are no longer usable.

Current known limitation : there is currently no way to properly handle situations where you need to distinguish between two similar anchors. In some cases, two HTML elements that you want to use as distinct anchors have in fact exactly the same attributes and content. This is not a problem as long as these two anchors are always positioned at the same place in all the HTML pages that you will parse. But as soon as one of the anchors is not mandatory, or it is located after a non-mandatory element, the engine can get lost and either confuse the two anchors or complain that one is missing. At the moment, I don’t know how to handle this kind of situation. Example : long lists of specifications with similar names where some specifications are optional (see Canon camcorders as an example : the difference between the LCD number of pixels and the viewfinder number of pixels). The worst-case scenario is a flat list of HTML paragraphs. The engine will try to identify these risks and should output some warnings in this kind of situation.


Here are the contents of the ZIP distribution of this project (distributed under the General Public License) :

  • web_parser.py : this is the web parser engine.
  • parser_dvspot.py : this is an example launch script to be used if you want to parse HTML files coming from the dvspot.com web site.
  • template_dvspot.zpt : this is the example template file corresponding to the excellent dvspot.com site
  • BeautifulSoup.py : this is MY version of BeautifulSoup. Indeed, I had to modify Leonard Richardson’s official one and I haven’t been able to obtain any answer from him yet regarding my suggested modifications. I hope he will answer soon and maybe include my modifications in the official version, or help me overcome my temptation to fork. My modifications are based on the official 1.2 release of beautifulsoup : I added “center” as a nestable tag and added the ability to match the content of an element with the help of wildcards. You should save this BeautifulSoup.py file into the “Lib\site-packages” folder of your python installation.
  • README.html is the file you are currently reading, also published on my blog.

P2P + Semantic Web + Social Networks + Office Software = ?

Wednesday, March 9th, 2005

Take an ounce of peer-to-peer, three cubits of semantic web, two pounds of office software and a penny’s worth of social networks, knead vigorously, and you get… the “Networked Semantic Desktop”. Now that’s convergence, or I don’t know what convergence is… It’s a research project, though: move along, there is nothing to download ! Also seen here.

Identity joining and social networks

Wednesday, March 9th, 2005

IBM recently acquired SRD, the maker of a software product that matches up:

  • the various electronic descriptions of a single individual (advanced identity joining),
  • the relationships established between individuals based on what they have in common (building social networks)

In short, just the right software for anyone who wants to assemble the perfect little Big Brother toolkit.
This technology looks somewhat similar to the meta-directories that have specialised in simple joining of electronic identities. But its main applications seem to lie in data mining for marketing and CRM, in credit risk analysis, and in state intelligence and private security. It reminds me of Semagix which, while targeting similar application fields (CRM, detection of fraudulent financial transactions and deconstruction of criminal social networks), relies on a “semantic web” approach.
It would certainly be interesting to know which technological approach SRD relies on, and whether this technology is robust, reliable and automated enough to bring partial answers to very operational electronic identity management problems.

Zemantic: a Zope Semantic Web Catalog

Monday, February 14th, 2005

Zemantic is an RDF module for Zope (read its announcement). From what I read (I have not tested it yet), it implements services similar to Zope catalogs and enables universal management of references (like the Archetypes reference engine, but in a more sustainable way). It is based on RDFLib, just like ROPE.

I feel enthusiastic about this product since it sounds to me like a good future-proof solution for the management of metadata, references and structured data within content management systems and portals. Plus Zemantic would sit well in my vision of Plone as a semantic aggregator.
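
To be clear, the snippet below is not Zemantic’s API (which I have not tested); it is only a minimal rdflib sketch, with made-up URIs, of the kind of reference management such a catalog builds on: reference statements are stored as triples and can then be queried from either end.

from rdflib import Graph, Namespace

SITE = Namespace("http://example.org/content/")    # made-up content namespace
REL = Namespace("http://example.org/relations#")   # made-up "references" property

g = Graph()
g.add((SITE["plone-aggregator-post"], REL["references"], SITE["zemantic-announcement"]))
g.add((SITE["rope-post"], REL["references"], SITE["zemantic-announcement"]))

# catalog-like query : which documents reference the Zemantic announcement ?
for doc in g.subjects(REL["references"], SITE["zemantic-announcement"]):
    print(doc)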

Web scraping with python (part 1 : crawling)

Wednesday, December 29th, 2004

Example One : I am looking for my next job. So I subscribe to many job sites in order to receive email notifications of new job ads (example = Monster…). But I’d rather check these in my RSS aggregator than in my mailbox, or on some sort of aggregating Web platform. I would then be able to filter, sort, rank and compare them in order to navigate through these numerous job ads.

Example Two : I want to buy a digital camcorder. So I want to compare the available models. Such a comparison implies that I rank the most common models according to their characteristics. Unfortunately, the many sites providing reviews or comparisons of camcorders are often not comprehensive, and they don’t offer me the capability of comparing models according to my own way of ranking and weighting the camcorder features (example = dvspot). So I would prefer to pull all the technical data from these sites and manipulate it locally on my computer. Unfortunately, this data is embedded within HTML, and it may be complex to extract it automatically from all the presentation code.

These are common situations : interesting data spread all over the web and embedded in HTML presentation code. How do you consolidate this data so that you can analyze and process it with your own tools ? In the near future, I expect this data will be published so that it is directly processable by computers (this is what the Semantic Web intends to do). For now, I used to do it with Excel (importing Web data, then cleaning it, and so on) and I must admit that Excel is fairly good at it. But I’d like more automation and more scripting for this process, so that I don’t end up inventing complex Excel macros or formulas just to automate web site crawling, HTML extraction and data cleaning. With such an itch to scratch, I tried to address this problem with python.

This series of messages introduces my current hacks that automate web site crawling and data extraction from HTML pages. The current output of these scripts is a bunch of CSV files that can be further processed … in Excel. I wish I could output RDF instead of CSV, so there remains much room for further improvement (see RDF Web Scraper for a similar approach). Anyway… Here is part One : how to crawl complex web sites with Python ? The next part will deal with data extraction from the retrieved web pages, involving much HTML cleansing and parsing.

My crawlers are fully based on John J. Lee’s mechanize framework for python. There are other tools available in Python, and several other approaches are available when you want to automate the crawling of web sites. Note that you can also try to scrape the screens of legacy terminal-based applications with the help of python (this is called “screen scraping”). Some approaches to web crawling automation rely on recording the behaviour of a user equipped with a web browser and then reproducing this same behaviour in an automated session. That is an attractive and futuristic approach, but it implies that you find a way to infer the intended automatic crawling behaviour from a simple example. In other words, with this approach, either you ask the user to click on every web link (all the job postings…), which removes any value from automating the task, or your system “guesses” the expected automatic behaviour just from a recorded sample of what a human agent would do. Too complex… So I preferred a more down-to-earth solution implying that you write simple crawling scripts “by hand”. (You may still be interested in automatically recording user sessions in order to be more productive when producing your crawling scripts.) As a summary : my approach is fully based on mechanize, so you may consider the following code as examples of using mechanize in “real-world” situations.

For the sake of clarity, let’s first focus on the part of the code that is specific to your crawling session (i.e. to the site you want to crawl). Let’s take the example of the dvspot.com site, which you may try to crawl in order to download detailed descriptions of camcorders :

    # Go to home page
    #
    b.open("http://www.dvspot.com/reviews/cameraList.php?listall=1&start=0")
    #
    # Navigate through the paginated list of cameras
    #
    next_page = 0
    while next_page == 0:
     #
     # Display and save details of every listed item
     #
     url = b.response.url
     next_element = 0
     while next_element >= 0:
      try:
       b.follow_link(url_regex=re.compile(r"cameraDetail"), nr=next_element)
       next_element = next_element + 1
       print save_response(b,"dvspot_camera_"+str(next_element))
       # go back to home page
       b.open(url)
       # if you crawled too many items, stop crawling
       if next_element*next_page > MAX_NR_OF_ITEMS_PER_SESSION:
          next_element = -1
          next_page = -1
      except LinkNotFoundError:
       # You certainly reached the last item in this page
       next_element = -1
    #
     try:
      b.open(url)
      b.follow_link(text_regex=re.compile(r"Next Page"), nr=0)
      print "processing Next Page"
     except LinkNotFoundError:
      # You reached the last page of the listing of items
      next_page = -1

As you can see, the structure of this code (nested conditional loops) depends on the organization of the site you are crawling (paginated results, …). You also have to specify the rules that will trigger “clicks” from your crawler. In the above example, your script first follows every link containing “cameraDetail” in its URL (url_regex), then it follows the link containing “Next Page” in its hyperlink text (text_regex).

This kind of script is usually easy to design and write, but it can become complex when the web site is improperly designed. There are two sources of difficulty. The first one is bad HTML. Bad HTML may crash the mechanize framework. This is the reason why you often have to pre-process the HTML, either with the help of an HTML tidying library or with simple string substitutions when your tidy library breaks the HTML too much (this may be the case when the web designer improperly decided to use nested HTML forms). Designing the proper HTML pre-processor for the web site you want to crawl can be tricky, since you may have to dive into the faulty HTML and the mechanize error tracebacks in order to identify the HTML mistakes and work around them. I hope that future versions of mechanize will implement more robust HTML parsing capabilities. The ideal solution would be to integrate the Mozilla HTML parsing component, but I guess this would be quite some work. Let’s cross our fingers.

Here are useful examples of pre-processors (as introduced by some other mechanize users and developers) :

class TidyProcessor(BaseProcessor):
      def http_response(self, request, response):
          options = dict(output_xhtml=1,
                   add_xml_decl=1,
                   indent=1,
                   output_encoding='utf8',
                   input_encoding='latin1',
                   force_output=1
                   )
          r = tidy.parseString(response.read(), **options)
          return FakeResponse(response, str(r))
      https_response = http_response
#
class MyProcessor(BaseProcessor):
      def http_response(self, request, response):
          r = response.read()
          r = r.replace('"image""','"image"')
          r = r.replace('&quot;','"')
          return FakeResponse(response, r)
      https_response = http_response
#
# Open a browser and optionally choose a customized HTML pre-processor
b = Browser()
b.add_handler(MyProcessor())

The second source of difficulty comes from non-RESTful sites. As an example, the APEC site (a French Monster-like job site) is based on a proprietary web framework which means that you cannot rely on link URLs to automate your browsing session. It took me some time to understand that, once logged in, every time you click on a link you are presented with a new frameset referring to the URLs that contain the interesting data you are looking for. And these URLs seem to depend on your session. No permalinks, if you prefer. This makes the crawling process even more tricky. In order to deal with this source of difficulty when you write your crawling script, you have to open both your favorite text editor (to write the script) and your favorite web browser (Firefox of course !). One key piece of knowledge is mechanize’s “find_link” capabilities. These capabilities are documented in the _mechanize.py source code, in the find_link method docstrings. They are the arguments you will provide to b.follow_link in order to automate your crawler’s “clicks”. For more convenience, let me reproduce them here (a couple of illustrative calls follow the list below) :

  • text: link text between link tags: <a href="blah">this bit</a> (as returned by pullparser.get_compressed_text(), i.e. without tags but with opening tags “textified” as per the pullparser docs) must compare equal to this argument, if supplied
  • text_regex: link text between tags (as defined above) must match the regular expression object passed as this argument, if supplied
  • name, name_regex: as for text and text_regex, but matched against the name HTML attribute of the link tag
  • url, url_regex: as for text and text_regex, but matched against the URL of the link tag (note this matches against Link.url, which is a relative or absolute URL according to how it was written in the HTML)
  • tag: element name of opening tag, e.g. “a”
  • predicate: a function taking a Link object as its single argument and returning a boolean result, indicating whether the link matches
  • nr: matches the nth link that matches all other criteria (default 0)

Links include anchors (a), image maps (area), and frames (frame,iframe).
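
To make these criteria concrete, here are a few illustrative calls (the link texts and regular expressions are made up; b is the mechanize Browser and re the standard regexp module, as in the full script below) :

# follow the third link whose URL contains "cameraDetail"
b.follow_link(url_regex=re.compile(r"cameraDetail"), nr=2)
# follow the link whose visible text is exactly "Next Page"
b.follow_link(text="Next Page")
# follow the first anchor accepted by an arbitrary predicate
b.follow_link(tag="a", predicate=lambda link: "listall=1" in link.url)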

Enough with explanations. Now comes the full code to automatically download camcorder descriptions from dvspot.com. I distribute this code here under the GPL (legally speaking, I don’t own the copyleft of this entire code since it is based on several snippets I gathered from the web and from the wwwsearch mailing list). Anyway, please copy-paste-taste !

from mechanize import Browser,LinkNotFoundError
from ClientCookie import BaseProcessor
from StringIO import StringIO
# import tidy
#
import sys
import re
from time import gmtime, strftime
#
# The following two lines are specific to the site you want to crawl ;
# they provide your crawler with some capabilities so that it can
# understand the meaning of the data it is crawling,
# for example knowing the age of the crawled resource
#
from datetime import date
# from my_parser import parsed_resource
#
"""
 Let's declare some customized pre-processors.
 These are useful when the HTML you are crawling through is not clean enough for mechanize.
 When you crawl through bad HTML, mechanize often raises errors.
 So either you tidy it with a strict tidy module (see TidyProcessor)
 or you tidy some errors you identified "by hand" (see MyProcessor).
 Note that because the tidy module is quite strict on HTML, it may change the whole
 structure of the page you are dealing with. As an example, in bad HTML, you may encounter
 nested forms or forms nested in tables or tables nested in forms. Tidying them may produce
 unintended results such as closing the form too early or making it empty. This is the reason
 you may have to use MyProcessor instead of TidyProcessor.
"""
#
class FakeResponse:
      def __init__(self, resp, nudata):
          self._resp = resp
          self._sio = StringIO(nudata)
#
      def __getattr__(self, name):
          try:
              return getattr(self._sio, name)
          except AttributeError:
              return getattr(self._resp, name)
#
class TidyProcessor(BaseProcessor):
      def http_response(self, request, response):
          options = dict(output_xhtml=1,
                   add_xml_decl=1,
                   indent=1,
                   output_encoding='utf8',
                   input_encoding='latin1',
                   force_output=1
                   )
          r = tidy.parseString(response.read(), **options)
          return FakeResponse(response, str(r))
      https_response = http_response
#
class MyProcessor(BaseProcessor):
      def http_response(self, request, response):
          r = response.read()
          r = r.replace('"image""','"image"')
          r = r.replace('&quot;','"')
          return FakeResponse(response, r)
      https_response = http_response
#
# Open a browser and optionally choose a customized HTML pre-processor
b = Browser()
b.add_handler(MyProcessor())
#
""""
 Let's declare some utility methods that will enhance mechanize browsing capabilities
"""
#
def find(b,searchst):
    b.response.seek(0)
    lr = b.response.read()
    return re.search(searchst, lr, re.I)
#
def save_response(b,kw='file'):
    """Saves last response to timestamped file"""
    name = strftime("%Y%m%d%H%M%S_",gmtime())
    name = name + kw + '.html'
    f = open('./'+name,'w')
    b.response.seek(0)
    f.write(b.response.read())
    f.close()
    return "Response saved as %s" % name
#
"""
Hereafter is the only (and somewhat big) script that is specific to the site you want to crawl.
"""
#
def dvspot_crawl():
    """
     Here starts the browsing session.
     For every move, I could have put as a comment an equivalent PBP command line.
     PBP is a nice scripting layer on top of mechanize.
     But it does not allow looping or conditional browsing.
     So I preferred scripting directly with mechanize instead of using PBP
     and then adding an additional layer of scripting on top of it.
    """
#
    MAX_NR_OF_ITEMS_PER_SESSION = 500
    #
    # Go to home page
    #
    b.open("http://www.dvspot.com/reviews/cameraList.php?listall=1&start=0")
    #
    # Navigate through the paginated list of cameras
    #
    next_page = 0
    while next_page == 0:
     #
     # Display and save details of every listed item
     #
     url = b.response.url
     next_element = 0
     while next_element >= 0:
      try:
       b.follow_link(url_regex=re.compile(r"cameraDetail"), nr=next_element)
       next_element = next_element + 1
       print save_response(b,"dvspot_camera_"+str(next_element))
       b.open(url)
       # if you crawled too many items, stop crawling
       if next_element*next_page > MAX_NR_OF_ITEMS_PER_SESSION:
          next_element = -1
          next_page = -1
      except LinkNotFoundError:
       # You reached the last item in this page
       next_element = -1
    #
     try:
      b.open(url)
      b.follow_link(text_regex=re.compile(r"Next Page"), nr=0)
      print "processing Next Page"
     except LinkNotFoundError:
      # You reached the last page of the listing of items
      next_page = -1
    #
    return
#
#
#
if __name__ == '__main__':
#
    """ Note that you may need to specify your proxy first.
    On windows, you do :
    set HTTP_PROXY=http://proxyname.bigcorp.com:8080
    """
    #
    dvspot_crawl()

In order to run this code, you will have to install mechanize 0.0.8a, pullparser 0.0.5b, clientcookie 0.4.19, clientform 0.0.16 and utidylib. I used Python 2.3.3. The latest clientcookie version was supposed to be integrated into Python 2.4, I think. In order to install mechanize, pullparser, clientcookie and clientform, you just do it the usual way :

python setup.py build
python setup.py install
python setup.py test

Last but not least : you should be aware that you may be breaking the terms of service of the website you are trying to crawl. Thanks to dvspot for providing such valuable camcorder data to us !

The next part will deal with processing the downloaded HTML pages and extracting useful data from them.

RDF as seen by a poet

Thursday, December 9th, 2004

RDF technology is the basic building block of the semantic web. In a barn, a poet reminds you that back in kindergarten you already knew how to do RDF with tightrope-walking bears.

J2EE portals / CMS

Tuesday, October 5th, 2004

To build a corporate portal with J2EE, the choice is between buying a costly proprietary portal (IBM or BEA, to name only the leaders among J2EE application servers) or turning to an open source J2EE portal. But whereas the open source offering in J2EE application servers (JBoss, Jonas) has reached a level of maturity that makes it credible for large-scale projects, the open source offering in J2EE portals still looks largely immature. This seems to shut open source out of the portal and content management market of large companies for many years to come.

In the eyes of the J2EE community, of the sector’s consulting firms and of the big vendors, the best product on the market will necessarily be the one that supports at least the two standards of the moment: JSR 168, to guarantee the portability of portlets from one product to another, and WSRP, to guarantee the interoperability of remote portlets between their application server and the portal that aggregates and publishes them. So in this product category there is a race to be the most fashionable with respect to “SOA” (Service-Oriented Architecture). Liferay and Exo are the open source J2EE portals most frequently cited. This open source offering is no stranger to the SOA boasting (products have to be marketed, after all…). As a result, the development effort on open source J2EE portals seems to go more into climbing the SOA stack than into implementing useful features. This is surely what leads the J2EE community to observe that open source J2EE portals still lack a lot of maturity and functional richness, especially when compared with Plone, the leading open source portal / CMS. Indeed, Plone relies on a Python application server (Zope) rather than a Java (let alone J2EE) one; it therefore sits outside the JSR 168 race and seems to royally ignore the WSRP bluff.

Many companies strive to turn J2EE into an internal doctrine for application architecture. When faced with choosing a portal, they quickly eliminate the open source J2EE offering (not mature enough). And rather than picking a non-J2EE portal recognised as more mature, richer in features and less expensive, they prefer to stick to their J2EE ideology on the grounds that there is no salvation outside J2EE/.Net. Not buzzword compliant enough, my son… Pfff, I’m naming no names… :-(