The idea of a pipeline model for processing came up in an IRC conversation with
MichaelSparks some months (years?) ago. I can't remember who exactly thought of it; I think it was a convergence.
The basic idea is to support processing text through a pipeline of atomic programs, in the way that the Unix shell and its derivatives do. For example, a TWiki topic might contain:
%PIPELINE search "fred" * | grep "bloggs" | tabulate
to get a table of lines that contain "fred" and "bloggs".
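The plain-Unix analogue of that pipeline might look like the sketch below. Note that `search` and `tabulate` above are hypothetical TWiki pipeline stages, so ordinary line filters stand in here; the input data is made up for illustration:

```shell
# Feed some lines in, keep those mentioning "fred", then keep
# those that also mention "bloggs" -- each stage is an independent
# filter reading stdin and writing stdout.
printf 'fred bloggs 42\nfred smith 7\njane bloggs 3\n' \
  | grep 'fred' \
  | grep 'bloggs'
# emits the single line: fred bloggs 42
```

Each stage knows nothing about the others; the shell just connects stdout to stdin.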
Of course, search results (and other data, such as inline tables or other topics) are rarely one-dimensional, so some way of structuring the data passing through the pipeline is needed. The Unix shell cheats by requiring simple stream data, and then providing tools to rebuild structured representations.
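One common shell trick for faking structure is to flatten each record to "key TAB value" lines so ordinary line tools still work, then regroup at the end. A minimal sketch, with made-up topic names and fields:

```shell
# Flattened "search result": each line is topic<TAB>matching-text.
# Filter on the text field, then recover the set of topics that
# matched by cutting the key column back out.
printf 'WebHome\tfred bloggs\nWebHome\tfred smith\nSandbox\tfred bloggs\n' \
  | grep 'bloggs' \
  | cut -f1 \
  | sort -u
# emits: Sandbox and WebHome, one per line
```

This is exactly the "rebuild structured representations" step: the structure is encoded into the stream on the way in and decoded on the way out.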
--
CrawfordCurrie - 15 Feb 2006
One aspect of automating and structuring content is easy string processing. The Unix shell pipe model is a very good fit for this: it is KISS, easy to understand, and powerful. A good fit for the
TWikiMission.
This pipe model would also come in handy for an
event trigger feature.
--
PeterThoeny - 16 Feb 2006
Right, it's a good fit for a stream processing model. You can even use Unix itself to provide a menu of useful programs (sed, grep, find, col, uniq, sort, etc.). There is an obvious "plugin" model there as well: a plugin is simply a new atomic processor in the pipeline.
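In that plugin model, a new stage is just another stdin-to-stdout filter. A sketch of the hypothetical `tabulate` stage from the example above, written as a shell function (the TWiki table markup it emits is the real syntax; everything else here is illustrative):

```shell
# A "plugin" is any filter that reads lines on stdin and writes
# lines on stdout. This one wraps whitespace-separated fields in
# TWiki table markup: "fred bloggs" -> "| fred | bloggs |".
tabulate() {
  awk '{ printf "|"; for (i = 1; i <= NF; i++) printf " %s |", $i; print "" }'
}

printf 'fred bloggs\n' | tabulate
# emits: | fred | bloggs |
```

Because the contract is just "lines in, lines out", such a stage can be dropped anywhere in a pipeline without the other stages knowing about it.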
I never pursued it beyond a simple experiment, because my main interest/requirement has been a high-performance structured model, and this concept is IMHO fundamentally performance-limited.
--
CrawfordCurrie - 16 Feb 2006