Composing A Pipeline

With such a design, processors can simply be chained together:

A parser creates an IR, which is passed to the linker (which creates a table of contents on the fly), and then on to a formatter.

# Construct the processors (construction details elided here).
parser = ...
linker = ...
formatter = ...
# Start with an empty IR and push it through the pipeline stages.
ir = IR()
ir = parser.process(ir, input=['source.hh'])
ir = linker.process(ir)
ir = formatter.process(ir, output='html')
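
To make the chaining convention concrete, here is a minimal, self-contained sketch. The IR, Processor, Parser, Linker and Formatter classes below are toy stand-ins for illustration only, not the actual processor implementations; the only point is the shared process(ir, ...) convention that lets the output of one stage feed the next.

# Toy stand-ins, for illustration of the chaining convention only.

class IR:
    """A toy intermediate representation: just a bag of declarations."""
    def __init__(self):
        self.declarations = []

class Processor:
    """Base class: subclasses override process() and return the IR."""
    def process(self, ir, **parameters):
        return ir

class Parser(Processor):
    def process(self, ir, input=(), **parameters):
        # Pretend each input file contributes one declaration.
        ir.declarations.extend('declaration from %s' % f for f in input)
        return ir

class Linker(Processor):
    def process(self, ir, **parameters):
        # Build a table of contents on the fly.
        ir.toc = {d: i for i, d in enumerate(ir.declarations)}
        return ir

class Formatter(Processor):
    def process(self, ir, output='html', **parameters):
        print('formatting %d declarations into %r' % (len(ir.declarations), output))
        return ir

# The same chain as above, now runnable end to end:
ir = Formatter().process(Linker().process(Parser().process(IR(), input=['source.hh'])),
                         output='html')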

To make this a little more scalable, and to allow the use of dependency-tracking build tools such as make, the intermediate IRs can be persisted to files. The above pipeline is thus broken up into multiple smaller pipelines: the parser's 'output' parameter points to an IR store, and the 'input' parameter of the linker/formatter pipeline lists these IR store files.

Parse source1.hh and write the IR to source1.syn:

parser.process(IR(), input=['source1.hh'], output='source1.syn')

Parse source2.hh and write the IR to source2.syn:

parser.process(IR(), input=['source2.hh'], output='source2.syn')

Read in source1.syn and source2.syn, then link and format into the html directory:

formatter.process(linker.process(IR(), input=['source1.syn', 'source2.syn']),
                  output='html')
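
For instance, assuming parser, linker and formatter are constructed as in the snippets above (their construction is elided there), a small driver can skip re-parsing a header whose IR store is already up to date, which is the same kind of dependency tracking a make rule over the .syn files would provide. The helper parse_if_outdated is purely illustrative.

import os

def parse_if_outdated(parser, source, store):
    # Illustrative helper: regenerate the IR store only if it is missing
    # or older than its source file.
    if (not os.path.exists(store)
        or os.path.getmtime(store) < os.path.getmtime(source)):
        parser.process(IR(), input=[source], output=store)

for source, store in [('source1.hh', 'source1.syn'),
                      ('source2.hh', 'source2.syn')]:
    parse_if_outdated(parser, source, store)

# The link / format stage then only reads the up-to-date IR stores:
formatter.process(linker.process(IR(), input=['source1.syn', 'source2.syn']),
                  output='html')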