How should serializer::stream behave with compression enabled? #146

@ashtum

Description

The following is a hypothetical usage example of the stream interface of the serializer:

asio::awaitable<void>
example(
    http_proto::serializer& sr,
    http_proto::response& resp,
    asio::ip::tcp::socket& source,
    asio::ip::tcp::socket& client)
{
    auto stream = sr.start_stream(resp);

    do
    {
        if(source.is_open())
        {
            auto [ec, n] = co_await source.async_read_some(
                stream.prepare(), asio::as_tuple);

            if(ec == asio::error::eof)
                stream.close();
            else
                stream.commit(n);
        }

        co_await http_io::async_write_some(client, sr);

    } while(!sr.is_done());
}

If, for whatever reason, the user commits 0 bytes to the stream (without closing it), the call to serializer::prepare() returns error::need_data when the output buffer has been drained by previous calls, because there is nothing left to provide (note that the call to prepare happens inside async_write_some).
All of this seems logical and easy to avoid. However, when a compression algorithm is in use, even a single call to stream.commit(n) with n > 0 doesn't guarantee that any bytes will be produced in the output, since the compressor may simply buffer the input internally.
We could force a flush in the compression algorithm and guarantee output for inputs as small as 1 byte. However, this comes with inefficiency and more complex logic on the serializer side (mostly in the Brotli interface).
Considering that the stream interface of the serializer is designed for more advanced use cases, forcing a flush just to avoid a benign error::need_data might not be a good solution.
