We'll probably need a better splitter later this year at NLB as we plan to use a single-HTML unzipped EPUB as our master format and then split into multiple files for distribution.
From [email protected] on July 17, 2013 11:44:27
Some producers will have existing content with only one really huge (3MB+) text content document (XHTML or DTBook format). We should have an option in our conversion scripts to split this into several smaller files in the EPUB output. Having several smaller text files improves performance dramatically in reading systems.
The html-utils module contains an XSLT that splits an XHTML document based on its structure, but it would be nice to also have an option to split the text content document based on file size (KB).
See also issue #309
Original issue: http://code.google.com/p/daisy-pipeline/issues/detail?id=351
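For the size-based option, something along these lines could work. This is only a minimal sketch in Python, not the existing html-utils XSLT or anything in the Pipeline codebase; the file names, the 250 KB threshold, and the assumption that splits only happen between direct children of `<body>` are all illustrative:

```python
# Minimal size-based splitter sketch: walks the direct children of <body> in
# document order and starts a new output file whenever the accumulated
# serialized size would exceed the threshold. It never splits inside a block,
# so a single oversized element still ends up whole in its own chunk.
import copy
from lxml import etree

XHTML_NS = "http://www.w3.org/1999/xhtml"

def split_by_size(source, out_pattern="chunk-{:03d}.xhtml", max_kb=250):
    tree = etree.parse(source)
    body = tree.find("{%s}body" % XHTML_NS)
    blocks = list(body)
    for block in blocks:
        body.remove(block)          # keep an empty <html>/<head>/<body> skeleton

    # Group consecutive blocks until the next one would push the chunk over max_kb.
    chunks, current, size = [], [], 0
    for block in blocks:
        block_size = len(etree.tostring(block, encoding="utf-8"))
        if current and size + block_size > max_kb * 1024:
            chunks.append(current)
            current, size = [], 0
        current.append(block)
        size += block_size
    if current:
        chunks.append(current)

    # Write each chunk into a fresh copy of the original document skeleton.
    names = []
    for i, chunk in enumerate(chunks, start=1):
        root = copy.deepcopy(tree.getroot())
        new_tree = etree.ElementTree(root)
        new_tree.find("{%s}body" % XHTML_NS).extend(chunk)
        name = out_pattern.format(i)
        new_tree.write(name, xml_declaration=True, encoding="utf-8")
        names.append(name)
    return names

if __name__ == "__main__":
    print(split_by_size("content.xhtml"))
```

A real conversion step would of course also have to rewrite cross-document href targets, update the package document spine, and carry over metadata, which the sketch leaves out.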