  1. Jul 06, 2016
    • Bump soname for v0.8. · 87c3f60b
      Ralph Giles authored
      No public API changes.
    • Add support for OpenSSL 1.1.x. · 13a6a454
      Timothy B. Terriberry authored
      The OpenSSL 1.1.x API and ABI are not backwards-compatible.
      This is based on the prerelease version 1.1.0-pre5.
      It should continue to work with older versions of OpenSSL.
      
      Thanks to Ron Lee and the Debian project for reporting the build
       errors and testing the patch.
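
      The following is a minimal sketch of the common version-gating pattern for this kind of change, not the
       actual patch: OpenSSL 1.1.0 made structures such as BIO opaque and added accessor functions, so builds
       against older OpenSSL need equivalent shims.

      ```c
      #include <openssl/opensslv.h>
      #include <openssl/bio.h>

      /* Sketch only: OpenSSL 1.1.x made BIO opaque and added accessors such as
       * BIO_get_data()/BIO_set_data()/BIO_set_init(). When building against
       * older OpenSSL, provide equivalents that poke the struct directly. */
      #if OPENSSL_VERSION_NUMBER < 0x10100000L
      # define BIO_get_data(_b)        ((_b)->ptr)
      # define BIO_set_data(_b, _p)    ((_b)->ptr = (_p))
      # define BIO_set_init(_b, _init) ((_b)->init = (_init))
      #endif
      ```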
  2. Jul 04, 2016
    • d21816d6
      Timothy B. Terriberry authored
    • Fix free with uninitialized data in opus_tags_parse(). · 72f4f8a6
      Timothy B. Terriberry authored
      If the parsing fails before all comments are filled in, we will
       attempt to free any binary metadata at the position one past the
       last comment, which will be uninitialized.
      Introduced in commit 0221ca95.
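
      A simplified illustration of this class of bug rather than opusfile's actual code (the helper names are
       invented): if cleanup walks one slot past the comments that were actually filled in, every slot must
       start out NULL, or the number of filled slots must be tracked, so free() never sees an uninitialized
       pointer.

      ```c
      #include <stdlib.h>

      /* n comment slots plus one extra slot reserved for trailing binary data.
       * calloc() keeps every unwritten slot NULL, so freeing a slot that was
       * never filled in (e.g., after a parse failure partway through) is a
       * harmless no-op instead of a free() of uninitialized memory. */
      static char **alloc_comment_slots(size_t n){
        return (char **)calloc(n+1, sizeof(char *));
      }

      static void free_comment_slots(char **slots, size_t n){
        size_t i;
        if(slots==NULL)return;
        for(i=0;i<n+1;i++)free(slots[i]);
        free(slots);
      }
      ```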
    • Add missing NULL check to opus_tags_parse(). · bd607f5c
      Timothy B. Terriberry authored
      According to the API, you can pass in a NULL OpusTags object to
       simply check if the comment packet is valid, without storing the
       parsed results.
      However, the additions to store binary metadata in commit
       0221ca95 did not check for this.
      
      Fixes Coverity CID 149873.
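
      A usage sketch of the behaviour described above (the wrapper name is illustrative, and the packet is
       assumed to have been read elsewhere): passing NULL for the OpusTags argument asks opus_tags_parse()
       only to validate the packet.

      ```c
      #include <opus/opusfile.h>

      /* Returns non-zero if `packet` (a complete comment header packet of
       * `packet_len` bytes) parses cleanly. Passing NULL for the OpusTags
       * argument validates without storing the parsed result. */
      static int comment_packet_is_valid(const unsigned char *packet, size_t packet_len){
        return opus_tags_parse(NULL, packet, packet_len)==0;
      }
      ```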
    • Fix NULL check in opus_tags_add_comment(). · 66a8c158
      Timothy B. Terriberry authored
      In 0221ca95 the allocation result went from being stored
       directly in "_tags->user_comments[ncomments]" to being stored in
       the temporary "comment".
      However, the NULL check for allocation failure was not updated to
       match.
      This meant the function would almost always fail unless you had
       added binary metadata first.
      
      Fixes Coverity CID 149874.
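
      A simplified sketch of the corrected pattern, not opusfile's exact code (the helper and error code are
       invented): once the allocation result moves into a temporary, the failure check has to test that
       temporary rather than the destination slot.

      ```c
      #include <stdlib.h>
      #include <string.h>

      #define EXAMPLE_EFAULT (-1) /* stand-in error code for this sketch */

      /* Duplicate `text` into the slot at index `ncomments`. The NULL check must
       * test `comment`, the pointer just allocated, not slots[ncomments], which
       * has not been assigned yet. */
      static int add_comment(char **slots, size_t ncomments, const char *text){
        size_t len = strlen(text);
        char *comment = (char *)malloc(len+1);
        if(comment==NULL)return EXAMPLE_EFAULT;
        memcpy(comment, text, len+1);
        slots[ncomments] = comment;
        return 0;
      }
      ```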
    • Fix skipping logic for multiplexed non-Opus pages. · 78cd9bcf
      Timothy B. Terriberry authored
      This bug appears to have been present since the original code
       import.
      This was a "clever" rearrangement of the control flow from the
       _fetch_and_process_packet() in vorbisfile to use a do ... while(0)
       instead of a "while(1)".
      However, this also makes "continue" equivalent to "break": it does
       not actually go back to the top of the loop, because the loop
       condition is false.
      
      This bug was harmless, because ogg_stream_pagein() then refuses to
       ingest a page with the wrong serialno, but we can simplify things
       by fixing it.
      The "not strictly necessary" loop is now completely unnecessary.
      The extra checks that existed in vorbisfile have all been moved to
       later in the main loop, so we can just continue that one directly,
       with no wasted work, instead of embedding a smaller loop inside.
      
      Fixes Coverity CID 149875.
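
      A self-contained demonstration of the pitfall: inside do { ... } while(0), "continue" jumps to the loop
       condition, which is false, so it leaves the loop exactly like "break" instead of returning to the top
       of the body.

      ```c
      #include <stdio.h>

      int main(void){
        int iterations = 0;
        do{
          iterations++;
          if(iterations<3)continue; /* intended "go around again"; actually leaves the loop */
          puts("never reached while iterations < 3");
        }while(0);
        printf("body ran %d time(s)\n", iterations); /* prints 1, not 3 */
        return 0;
      }
      ```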
    • Note small inaccuracy in bitrate tracking. · 73909d7d
      Timothy B. Terriberry authored
      In the non-seekable case, we'll undercount some bytes at the start
       of a new link.
      Still thinking about the best way to address this, but leaving a
       comment so I don't forget.
    • Should a BOS page with no packets be an error? · 3abc3541
      Timothy B. Terriberry authored
      Going with "no" for now, but leave a reminder in the source code
       that this is a debatable question.
  3. Jun 26, 2016
  4. Jun 19, 2016
  5. Jun 16, 2016
  6. Jun 01, 2016
  7. Jan 06, 2016
  8. Jan 05, 2016
  9. Jan 04, 2016
  10. Jan 01, 2016
  11. Dec 31, 2015
  12. Dec 30, 2015
    • Update release instructions for mingw makefile. · 3f273e0c
      Ralph Giles authored
      The build has been successfully automated, but packaging
      is still manual.
    • Update release instructions for new archive locations. · 688d2e4b
      Ralph Giles authored
      ftp.mozilla.org is now the S3-backed archive.mozilla.org.
      Old URLs should redirect.
    • Add a makefile for cross-compiling on mingw. · 9f65b16e
      Ralph Giles authored
      This builds win32 versions of the library and examples
      on linux using the mingw-gcc cross toolchain and wine.
      
      It also automates downloading and building the
      required dependencies. Update the _URL and _SHA
      variables to build against newer upstream releases.
      
      Thanks to Ron and Mark Harris for help with the makefile.
    • Fix potential memory leaks with OpusServerInfo. · 0b2fe85a
      Timothy B. Terriberry authored
      In op_[v]open_url() and op_[v]test_url(), if we successfully
       connected to the URL but fail to parse it as an Opus stream, then
       we would return to the calling application without clearing any
       OpusServerInfo we might have filled in when connecting.
      This contradicts the general contract for user output buffers in
       our APIs, which is that they do not need to be initialized prior
       to a call and that their contents are untouched if a function
       fails (so that an application need do no additional clean-up on
       error).
      It would have been possible for an application to avoid these leaks
       by always calling opus_server_info_init() before a call to
       op_[v]open_url() or op_[v]test_url() and always calling
       opus_server_info_clear() afterwards (even on failure), but our
       examples don't do this and no other API of ours requires it.
      
      Fix the potential leaks by wrapping the implementation of
       op_url_stream_vcreate() so we can a) tell if the information was
       requested and b) store it in a separate, local buffer and delay
       copying it to the application until we know we've succeeded.
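
      A usage sketch of the contract described above (the helper name and URL handling are illustrative): the
       OpusServerInfo output buffer needs no initialization before the call, is left untouched on failure, and
       only needs opus_server_info_clear() after success.

      ```c
      #include <opus/opusfile.h>
      #include <stdio.h>

      static void print_server_name(const char *url){
        OpusServerInfo info;  /* no opus_server_info_init() needed beforehand */
        int err;
        OggOpusFile *of = op_open_url(url, &err, OP_GET_SERVER_INFO(&info), NULL);
        if(of==NULL){
          fprintf(stderr, "open failed: %d\n", err);
          return;  /* info was left untouched, so no clean-up is required */
        }
        if(info.server!=NULL)printf("server: %s\n", info.server);
        opus_server_info_clear(&info);  /* frees the strings filled in on success */
        op_free(of);
      }
      ```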
    • Add API to access and preserve binary metadata. · 0221ca95
      Timothy B. Terriberry authored
      This adds support for accessing any binary metadata at the end of
       the comment header, as first specified in
       <https://tools.ietf.org/html/draft-ietf-codec-oggopus-05>.
      It also allows the data to be set and preserves the data when
       doing deep copies.
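
      A sketch of reading the trailing binary metadata from a parsed OpusTags. The accessor name and signature
       are assumed here (opus_tags_get_binary_suffix()); check opusfile.h for the exact API this commit
       introduces.

      ```c
      #include <opus/opusfile.h>
      #include <stdio.h>

      /* Assumed accessor: returns a pointer to any binary data that followed the
       * comments in the header, with its length written to *len, or NULL if none. */
      static void report_binary_suffix(const OpusTags *tags){
        int len;
        const unsigned char *data = opus_tags_get_binary_suffix(tags, &len);
        if(data!=NULL)printf("binary metadata: %d bytes\n", len);
        else puts("no binary metadata");
      }
      ```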
  13. Dec 29, 2015
  14. Dec 15, 2015
    • Buffer continued packet data to reduce seeking. · 661459b4
      Timothy B. Terriberry authored
      In commit 41c29626 I claimed that it was not worth the
       machinery to buffer an extra page to avoid a seek when we have
       continued packet data.
      That was a little unsatisfying, considering how much effort we make
       to avoid unnecessary seeking elsewhere, but in the general case,
       we might have to buffer an arbitrary number of pages, since a
       packet can span several.
      However, we already have the mechanism to do this buffering: the
       ogg_stream_state.
      There are a number of potentially nasty corner-cases, but libogg's
       page sequence number tracking prevents us from accidentally gluing
       extraneous packet data onto some other unsuspecting packet, so I
       believe the chance of introducing new bugs here is manageable.
      
      This reduces the number of seeks in Simon Jackson's continued-page
       test case by over 23%.
      
      This also means we can handle pages without useful timestamps
       (including those multiplexed from another stream) between the last
       timestamped page at or before our target and the first timestamped
       page after our target without any additional seeks.
      Previously we would scan all of this data, see that the
       'page_offset' of the most recent page we read was way beyond
       'best' (the end of the last timestamped page before our target),
       and then seek back and scan it all again.
      This should greatly reduce the number of seeks we need in
       multiplexed streams, even if there are no continued packets.
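
      A sketch of the libogg mechanism relied on here, not opusfile's seeking code itself (the helper name is
       invented): an ogg_stream_state only emits complete packets, so data continued across several pages is
       reassembled internally, and its sequence-number tracking reports a hole rather than splicing unrelated
       packet data together.

      ```c
      #include <ogg/ogg.h>

      /* Feed one page into the stream state and count the complete packets that
       * become available. A packet continued over multiple pages is only returned
       * once its final page has been submitted. */
      static int drain_packets(ogg_stream_state *os, ogg_page *og){
        ogg_packet op;
        int n = 0;
        if(ogg_stream_pagein(os, og)<0)return -1; /* wrong serialno, bad page, etc. */
        for(;;){
          int ret = ogg_stream_packetout(os, &op);
          if(ret==0)break;    /* need more pages to finish the next packet */
          if(ret<0)continue;  /* gap in the page sequence; a packet was lost */
          n++;                /* got one complete (possibly multi-page) packet */
        }
        return n;
      }
      ```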