Buffer continued packet data to reduce seeking.
In commit 41c29626 I claimed that it was not worth the machinery to buffer an extra page to avoid a seek when we have continued packet data. That was a little unsatisfying, considering how much effort we make to avoid unnecessary seeking elsewhere, but in the general case, we might have to buffer an arbitrary number of pages, since a packet can span several. However, we already have the mechanism to do this buffering: the ogg_stream_state.

There are a number of potentially nasty corner cases, but libogg's page sequence number tracking prevents us from accidentally gluing extraneous packet data onto some other unsuspecting packet, so I believe the chance of introducing new bugs here is manageable. This reduces the number of seeks in Simon Jackson's continued-page test case by over 23%.

This also means we can handle pages without useful timestamps (including those multiplexed from another stream) between the last timestamped page at or before our target and the first timestamped page after our target without any additional seeks. Previously we would scan all of this data, see that the 'page_offset' of the most recent page we read was way beyond 'best' (the end of the last timestamped page before our target), and then seek back and scan it all again. This should greatly reduce the number of seeks we need in multiplexed streams, even if there are no continued packets.
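The sequence-number safeguard relied on above can be illustrated with a toy model. This is not opusfile's or libogg's actual code; demo_stream and its functions are hypothetical stand-ins sketching how tracking page sequence numbers lets buffered partial-packet data be discarded at a gap instead of being glued onto unrelated data:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical minimal model of a stream state that buffers
 * continued-packet data, keyed by page sequence number. */
typedef struct{
  long pageno;    /* sequence number of the last accepted page, -1 if none */
  char buf[256];  /* buffered partial-packet data */
  int  buf_fill;
}demo_stream;

static void demo_init(demo_stream *s){
  s->pageno=-1;
  s->buf_fill=0;
}

/* Accept a page's packet data only if its sequence number immediately
 * follows the previous page's. On a gap, drop any buffered partial
 * packet (as libogg does), so we never join unrelated data, and report
 * a hole to the caller. */
static int demo_pagein(demo_stream *s,long pageno,const char *data,int len){
  if(s->pageno>=0&&pageno!=s->pageno+1){
    s->buf_fill=0;
    s->pageno=pageno;
    return -1;
  }
  memcpy(s->buf+s->buf_fill,data,len);
  s->buf_fill+=len;
  s->pageno=pageno;
  return 0;
}
```

In the real code the same effect comes for free from feeding raw pages into an ogg_stream_state, which performs this bookkeeping itself, so the extra buffering needs no new gluing logic of our own.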