- Aug 02, 2017
Timothy B. Terriberry authored
op_fopen() and op_freopen() declare their path and mode arguments as non-NULL, so when building with mingw the compiler reasonably complains when we check whether they are NULL. We could remove the OP_ARG_NONNULL tags, but _wfopen/_wfreopen appear to crash on NULL for either parameter. On Linux, a NULL path is handled (fopen returns NULL with errno set to EFAULT, and freopen returns the passed FILE * with errno set to EFAULT), but a NULL mode crashes. Keeping the OP_ARG_NONNULL tags promises that passing NULL results in undefined behavior, which is at least consistent with the behavior differing across platforms. It is also consistent with the ABI promises of previous releases, which compilers building code against libopusfile might already have taken advantage of.
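A minimal sketch of the kind of annotation involved (the macro body shown is a common GCC idiom and an assumption here, not necessarily libopusfile's exact definition):

```c
/*With a GCC-style nonnull attribute, passing NULL for a tagged argument is
  declared undefined behavior, so the compiler may warn about (or optimize
  away) explicit NULL checks on that argument.*/
#if defined(__GNUC__)
# define OP_ARG_NONNULL(_x) __attribute__((__nonnull__(_x)))
#else
# define OP_ARG_NONNULL(_x)
#endif

/*Hypothetical declaration, for illustration only:*/
void open_something(const char *_path,const char *_mode)
 OP_ARG_NONNULL(1) OP_ARG_NONNULL(2);
```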
-
- Jul 07, 2017
Timothy B. Terriberry authored
We inherited the term "source" from vorbisfile's "datasource", but were using it interchangeably with "stream". At least one user did not even realize that the _source parameter passed to op_open_callbacks() was the same as the _stream parameter taken by those callbacks, which is reasonable, because we never said so. Consistently use "stream" instead of "source" in both the documentation and the code.
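A sketch of the relationship this wording change makes explicit: the pointer passed to op_open_callbacks() as _stream is exactly the pointer each callback later receives (assuming a stdio-backed stream; error handling omitted):

```c
#include <stdio.h>
#include <opusfile.h>

static int my_read(void *_stream,unsigned char *_ptr,int _nbytes){
  /*_stream here is the same pointer passed to op_open_callbacks() below.*/
  return (int)fread(_ptr,1,(size_t)_nbytes,(FILE *)_stream);
}

OggOpusFile *open_from_stdio(FILE *_fp,int *_error){
  OpusFileCallbacks cb={NULL,NULL,NULL,NULL};
  cb.read=my_read;
  /*The first argument is the "stream" handle given back to every callback.
    Leaving seek/tell/close NULL makes the stream unseekable and unclosed.*/
  return op_open_callbacks(_fp,&cb,NULL,0,_error);
}
```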
-
- Jun 17, 2017
Timothy B. Terriberry authored
These two asserts used to be in separate branches, but the code was consolidated into a single branch by e6350334. However, the asserts were not updated at the same time. With an IPv6 address, the sockaddr_in assert would always fail.
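An illustrative sketch (not the actual libopusfile code) of making such an assertion aware of the address family:

```c
#include <assert.h>
#include <sys/socket.h>
#include <netinet/in.h>

static void check_addr_len(const struct sockaddr *_addr,socklen_t _len){
  /*Assert against the structure size that matches the actual family instead
    of unconditionally assuming sockaddr_in, which always fails for IPv6.*/
  if(_addr->sa_family==AF_INET)assert(_len>=sizeof(struct sockaddr_in));
  else if(_addr->sa_family==AF_INET6)assert(_len>=sizeof(struct sockaddr_in6));
}
```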
-
- May 24, 2017
Timothy B. Terriberry authored
Thanks to Mark Harris for the report.
-
Timothy B. Terriberry authored
The tags have two possible representations of the binary suffix data. An implicit one: if comment_lengths and user_comments are NULL, then the length of the binary suffix data is 0 and the pointer to that data is also implicitly taken to be NULL. And an explicit one: if comment_lengths and user_comments are non-NULL, then comment_lengths[comments] and user_comments[comments] contain the length of and pointer to the binary suffix data (where "comments" is the number of user comments). The implicit one allows us to initialize a tags struct without doing any allocations, which ensures it always succeeds.

op_tags_ensure_capacity() may upgrade the implicit representation to the explicit representation. However, when doing so, it stores the implicit (0, NULL) values in their new explicit locations at the end of the array, assuming the requested capacity will become the new size. If the caller of op_tags_ensure_capacity() then fails before enlarging the array to that size, comment_lengths and user_comments will be non-NULL, but the explicit locations for the binary suffix data at the _old_ size may not have been initialized. In particular, in opus_tags_parse_impl(), if there was exactly one comment, and any of the three checks at the start of the main loop failed, then the explicit locations for the binary suffix data would remain uninitialized, and the call to opus_tags_clear() in the caller would attempt to free them. For counts larger than 1, the extra line marked "Needed by opus_tags_clear()" will update the explicit location as the array grows, but with a count of 1 this line never gets a chance to run.

This patch fixes the problem by making op_tags_ensure_capacity() promote the implicit representation to the explicit one at _both_ the old and new array sizes. We could have fixed up opus_tags_parse_impl() to do this instead, as suggested in the original bug reports, but doing it this way ensures that the tags are always in a valid state when op_tags_ensure_capacity() returns.

Thanks to the Google AutoFuzz Team for the report.

Fixes #2324
Fixes #2325
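A simplified sketch of the invariant the fix establishes, using a toy structure rather than the real OpusTags internals: when growing, the binary-suffix slot is written at both the current size and the requested capacity, so the tags stay safe to clear even if the caller fails before growing.

```c
#include <stdlib.h>

typedef struct{
  char **user_comments;
  int   *comment_lengths;
  int    comments;
}ToyTags;

/*Returns 0 on success, -1 on allocation failure.*/
static int toy_ensure_capacity(ToyTags *_tags,int _capacity){
  char **comments;
  int   *lengths;
  int    implicit;
  int    cur;
  cur=_tags->comments;
  /*Implicit representation: no arrays allocated, so the suffix is (0,NULL).*/
  implicit=_tags->user_comments==NULL;
  /*One extra slot past the comment count holds the binary suffix data.*/
  comments=(char **)realloc(_tags->user_comments,
   sizeof(*comments)*(_capacity+1));
  if(comments==NULL)return -1;
  _tags->user_comments=comments;
  lengths=(int *)realloc(_tags->comment_lengths,
   sizeof(*lengths)*(_capacity+1));
  if(lengths==NULL)return -1;
  _tags->comment_lengths=lengths;
  /*Promote the implicit suffix to an explicit (0,NULL) at the CURRENT size,
    then mirror it at the requested capacity. If the caller errors out before
    enlarging the array, slot [cur] is still valid for opus_tags_clear()-style
    cleanup; if it succeeds, slot [_capacity] is already in place.*/
  if(implicit){
    comments[cur]=NULL;
    lengths[cur]=0;
  }
  comments[_capacity]=comments[cur];
  lengths[_capacity]=lengths[cur];
  return 0;
}
```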
-
- May 19, 2017
Timothy B. Terriberry authored
op_raw_seek() will fail if you try to seek to a byte position beyond the end of the file. However, the "end" is defined in terms of _of->end, which is specifically the end of the last Ogg page found in the underlying source (excluding any trailing non-Ogg data). op_raw_total(_of,-1) returns the total size of the stream by using _of->end, but it was also subtracting the offset of the first Opus page in the first link. Since there might have been other Ogg streams concurrently multiplexed with the first Opus stream whose BOS pages appear first, or there might simply be non-Ogg junk at the start, that left the caller with no way to determine the valid range of byte offsets that could be passed to op_raw_seek(). Instead, make op_raw_total() pretend the first link starts at offset 0, and explicitly document that it's what defines the range of valid values to op_raw_seek(). This is how our own seeking_example.c was using it, anyway.
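A usage sketch of the documented range (nothing opusfile-specific beyond the two calls):

```c
#include <opusfile.h>

/*Seek to roughly the midpoint of the physical stream by byte offset. After
  this change, every offset in [0,op_raw_total(_of,-1)] is a valid target.*/
int raw_seek_halfway(OggOpusFile *_of){
  opus_int64 total;
  total=op_raw_total(_of,-1);
  if(total<0)return (int)total;/*Negative values are errors (e.g., OP_EINVAL).*/
  return op_raw_seek(_of,total/2);
}
```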
-
Timothy B. Terriberry authored
Previously, when we encountered a hole (a gap in the page sequence numbers), we would save off all of the packets from the first page after the hole, but not timestamp them. That meant when they were actually decoded, op_pcm_tell() would report a timestamp of 0 until reaching the last packet on that page. Instead, handle holes just like a raw seek. We reset the granule position tracking, and attempt to timestamp packets backwards from the end of the page. If the first page after the hole is an EOS page, we just throw it away (rather than risk playing invalid samples due to incorrect end-trimming). We also throw away the first 80 ms of audio after a hole, to allow the decoder state to reconverge. This patch also updates the example to report the hole and continue decoding, rather than immediately stopping when a hole is encountered, in order to test the above features.
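A sketch of the decode-loop behavior the updated example now demonstrates: report the hole and keep decoding instead of stopping (buffer sizing follows the example's convention; other handling omitted):

```c
#include <stdio.h>
#include <opusfile.h>

void decode_all(OggOpusFile *_of){
  opus_int16 pcm[120*48*2];
  for(;;){
    int ret;
    ret=op_read_stereo(_of,pcm,sizeof(pcm)/sizeof(*pcm));
    if(ret==OP_HOLE){
      /*Data was lost; opusfile has already resynchronized and re-timestamped
        the following packets, so it is safe to simply continue.*/
      fprintf(stderr,"Hole detected in the stream; continuing.\n");
      continue;
    }
    if(ret<=0)break;/*0 means EOF; any other negative value is a hard error.*/
    /*...consume ret samples per channel from pcm...*/
  }
}
```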
-
Timothy B. Terriberry authored
Just in case some bozo makes a chained stream with 272,389 links with 16 samples in each (coded at 16 Mbps, including overheads). This avoids quadratic behavior for simple straight-through playback: we no longer do a linear search on each chain boundary or each call to op_pcm_tell(). N seeks with M chains still requires O(N*log(M)) work.
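A sketch of the idea (illustrative layout, not the actual libopusfile structures): a binary search over each link's starting PCM offset replaces the per-call linear scan.

```c
/*Given pcm_start[0..nlinks], where pcm_start[li] is the cumulative PCM offset
  at which link li begins and pcm_start[nlinks] is the total length, find the
  link containing target in O(log nlinks) time.*/
static int find_link(const long long *pcm_start,int nlinks,long long target){
  int lo;
  int hi;
  lo=0;
  hi=nlinks;
  while(hi-lo>1){
    int mid;
    mid=lo+((hi-lo)>>1);
    if(pcm_start[mid]<=target)lo=mid;
    else hi=mid;
  }
  return lo;
}
```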
-
Timothy B. Terriberry authored
No code changes.
-
Timothy B. Terriberry authored
As of version 1.0.2, OpenSSL can finally do automatic hostname validation for us. Their implementation is likely to have received much better review than ours, and there are other good reasons to prefer it, so use it when we can.
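A sketch of the OpenSSL 1.0.2-and-later mechanism being referred to (whether libopusfile sets exactly these flags is an assumption):

```c
#include <openssl/ssl.h>
#include <openssl/x509v3.h>

/*Ask OpenSSL (1.0.2 and later) to check the peer certificate against _host
  during the handshake, instead of validating names by hand afterwards.*/
static void enable_hostname_check(SSL *_ssl,const char *_host){
  X509_VERIFY_PARAM *param;
  param=SSL_get0_param(_ssl);
  X509_VERIFY_PARAM_set_hostflags(param,X509_CHECK_FLAG_NO_PARTIAL_WILDCARDS);
  X509_VERIFY_PARAM_set1_host(param,_host,0);
  SSL_set_verify(_ssl,SSL_VERIFY_PEER,NULL);
}
```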
-
Timothy B. Terriberry authored
RFC 6125 says that if the host is an IP address, a subjectAltName of type iPAddress must (no 2119 caps) be present and must be used; we would still fall back to checking the Common Name if no subjectAltName was present. The discussion at https://marc.info/?l=openssl-dev&m=139617145216047&w=2 interprets RFC 6125 to say that if the host is a DNS name but the certificate only contains a subjectAltName of type iPAddress, then we should still fall back to checking the Common Name; we would only check the Common Name if there was no subjectAltName of any type. Restructure the hostname validation to check IP addresses up-front and fall back to checking the Common Name in the proper cases.
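A sketch of the resulting checking order (the helper functions are hypothetical, declared only to keep the sketch self-contained):

```c
#include <openssl/x509.h>

/*Hypothetical helpers, for illustration only; not opusfile or OpenSSL APIs.*/
int host_is_ip_literal(const char *_host);
int has_dns_san(X509 *_cert);
int check_san_ip(X509 *_cert,const char *_host);
int check_san_dns(X509 *_cert,const char *_host);
int check_common_name(X509 *_cert,const char *_host);

int verify_peer_name(X509 *_cert,const char *_host){
  if(host_is_ip_literal(_host)){
    /*RFC 6125: an iPAddress subjectAltName must be present and must match;
      there is no Common Name fallback for IP hosts.*/
    return check_san_ip(_cert,_host);
  }
  if(has_dns_san(_cert))return check_san_dns(_cert,_host);
  /*No dNSName entries: fall back to the Common Name, even when the
    certificate carries only iPAddress subjectAltNames.*/
  return check_common_name(_cert,_host);
}
```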
-
- Mar 10, 2017
If, e.g., a differently configured config_types.h caused a long vs. int conflict in the int32 typedef, this would have resulted in a build failure.
Signed-off-by: Timothy B. Terriberry <tterribe@xiph.org>
-
- Feb 08, 2017
This fixes a build failure from undefined references to ASN1_STRING_data in libopusurl.so. ASN1_STRING_data is deprecated in openssl-1.1.0. The new ASN1_STRING_get0_data is identical, except the returned string may not be modified, which we don't do anyway. Also include the missing asn1.h header to silence compiler warnings.
X-Gentoo-Bug: 592456
X-Gentoo-Bug-URL: https://bugs.gentoo.org/show_bug.cgi?id=592456
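One common way to keep both OpenSSL branches building is a small compatibility shim (shown as an assumption; the commit may simply have switched the call sites):

```c
#include <openssl/asn1.h>
#include <openssl/opensslv.h>

/*ASN1_STRING_get0_data() first appeared in OpenSSL 1.1.0; on older versions,
  map it onto the non-const ASN1_STRING_data().*/
#if OPENSSL_VERSION_NUMBER<0x10100000L
# define ASN1_STRING_get0_data(_s) \
   ((const unsigned char *)ASN1_STRING_data(_s))
#endif
```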
-
- Nov 04, 2016
Ralph Giles authored
Commit d0c82543 broke builds with --enable-assertions with error: 'tag_len' undeclared. Fixing the simple typo in the symbol reference resolves the issue.
-
- Sep 16, 2016
Timothy B. Terriberry authored
Some of these pointed to real potential overflows (given arbitrary inputs by the calling application). I was sad about stripping const qualifiers from the struct addrinfo pointers, but MSVC seems to erroneously think that an array of pointers to constant data is itself a pointer to constant data (or maybe that it is not compatible with a const void *?), and converting the memmove()s to for loops triggered an erroneous warning about out-of-bounds array accesses in gcc (but on only one of the two identical loops).
-
- Jul 06, 2016
Timothy B. Terriberry authored
The OpenSSL 1.1.0 API and ABI are not backwards-compatible. This is based on the prerelease version 1.1.0-pre5. It should continue to work with older versions of OpenSSL. Thanks to Ron Lee and the Debian project for reporting the build errors and testing the patch.
-
- Jul 04, 2016
Timothy B. Terriberry authored
-
Timothy B. Terriberry authored
If the parsing fails before all comments are filled in, we will attempt to free any binary metadata at the position one past the last comment, which will be uninitialized. Introduced in commit 0221ca95.
-
Timothy B. Terriberry authored
According to the API, you can pass in a NULL OpusTags object to simply check if the comment packet is valid, without storing the parsed results. However, the additions to store binary metadata in commit 0221ca95 did not check for this. Fixes Coverity CID 149873.
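A usage sketch of the documented behavior that the fix restores:

```c
#include <stddef.h>
#include <opusfile.h>

/*Returns nonzero if _data/_len is a well-formed Opus comment header, without
  allocating or keeping the parsed result.*/
int comment_packet_is_valid(const unsigned char *_data,size_t _len){
  return opus_tags_parse(NULL,_data,_len)==0;
}
```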
-
Timothy B. Terriberry authored
In 0221ca95 the allocation result went from being stored directly in "_tags->user_comments[ncomments]" to being stored in the temporary "comment". However, the NULL check for allocation failure was not updated to match. This meant this function would almost always fail, unless you had added binary metadata first. Fixes Coverity CID 149874.
-
Timothy B. Terriberry authored
This bug appears to have been present since the original code import. This was a "clever" rearrangement of the control flow from the _fetch_and_process_packet() in vorbisfile to use a do ... while(0) instead of a "while(1)". However, this also makes "continue" equivalent to "break": it does not actually go back to the top of the loop, because the loop condition is false. This bug was harmless, because ogg_stream_pagein() then refuses to ingest a page with the wrong serialno, but we can simplify things by fixing it. The "not strictly necessary" loop is now completely unnecessary. The extra checks that existed in vorbisfile have all been moved to later in the main loop, so we can just continue that one directly, with no wasted work, instead of embedding a smaller loop inside. Fixes Coverity CID 149875.
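A tiny standalone illustration of the C behavior at issue: inside do { ... } while(0), continue jumps to the (false) controlling expression and therefore leaves the loop, exactly like break.

```c
#include <stdio.h>

int main(void){
  int passes;
  passes=0;
  do{
    passes++;
    if(passes<10)continue;/*Jumps to the while(0) test below...*/
    puts("never printed");
  }
  while(0);/*...which is false, so the loop exits: continue acted like break.*/
  printf("passes=%d\n",passes);/*Prints passes=1, not 10.*/
  return 0;
}
```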
-
Timothy B. Terriberry authored
In the non-seekable case, we'll undercount some bytes at the start of a new link. Still thinking about the best way to address this, but leaving a comment so I don't forget.
-
Timothy B. Terriberry authored
Going with "no" for now, but leave a reminder in the source code that this is a debatable question.
-
- Jun 26, 2016
Timothy B. Terriberry authored
-
- Jun 19, 2016
Timothy B. Terriberry authored
-
- Jun 16, 2016
Timothy B. Terriberry authored
This was set to 1 by all callers, so we can simplify logic by just removing it. This appears to have already been true in the libvorbisfile code from which this was adapted.
-
- Dec 30, 2015
Timothy B. Terriberry authored
In op_[v]open_url() and op_[v]test_url(), if we successfully connected to the URL but fail to parse it as an Opus stream, then we would return to the calling application without clearing any OpusServerInfo we might have filled in when connecting. This contradicts the general contract for user output buffers in our APIs, which is that they do not need to be initialized prior to a call and that their contents are untouched if a function fails (so that an application need do no additional clean-up on error). It would have been possible for an application to avoid these leaks by always calling opus_server_info_init() before a call to op_[v]open_url() or op_[v]test_url() and always calling opus_server_info_clear() afterwards (even on failure), but our examples don't do this and no other API of ours requires it. Fix the potential leaks by wrapping the implementation of op_url_stream_vcreate() so we can a) tell if the information was requested and b) store it in a separate, local buffer and delay copying it to the application until we know we've succeeded.
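A usage sketch of the affected API under the fixed contract (the option-list form follows the opusfile URL API; treat the exact call shape as an assumption):

```c
#include <opusfile.h>

OggOpusFile *open_with_info(const char *_url,OpusServerInfo *_info,int *_error){
  /*_info needs no opus_server_info_init() beforehand: it is only written if
    the open succeeds, and left untouched (nothing to clean up) on failure.*/
  return op_open_url(_url,_error,OP_GET_SERVER_INFO(_info),NULL);
}
```

On success the caller uses the filled-in info, eventually calling opus_server_info_clear() on it and op_free() on the returned handle; on failure there is nothing to release.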
-
Timothy B. Terriberry authored
This adds support for accessing any binary metadata at the end of the comment header, as first specified in <https://tools.ietf.org/html/draft-ietf-codec-oggopus-05>. It also allows the data to be set and preserves the data when doing deep copies.
-
- Dec 29, 2015
Timothy B. Terriberry authored
This includes convenience routines for specifying that the album gain should be applied and for parsing the tag.
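A usage sketch, assuming the convenience routines take the shape of opus_tags_get_album_gain() plus an OP_ALBUM_GAIN gain type for op_set_gain_offset() (names as in later opusfile headers; treat them as assumptions here):

```c
#include <opusfile.h>

/*Prefer R128_ALBUM_GAIN normalization when the tag is present, otherwise
  stick with the plain header output gain.*/
void apply_album_gain(OggOpusFile *_of){
  const OpusTags *tags;
  int gain_q8;
  tags=op_tags(_of,-1);
  if(tags!=NULL&&opus_tags_get_album_gain(tags,&gain_q8)==0)
    op_set_gain_offset(_of,OP_ALBUM_GAIN,0);
  else op_set_gain_offset(_of,OP_HEADER_GAIN,0);
}
```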
-
- Dec 15, 2015
Timothy B. Terriberry authored
In commit 41c29626 I claimed that it was not worth the machinery to buffer an extra page to avoid a seek when we have continued packet data. That was a little unsatisfying, considering how much effort we make to avoid unnecessary seeking elsewhere, but in the general case, we might have to buffer an arbitrary number of pages, since a packet can span several. However, we already have the mechanism to do this buffering: the ogg_stream_state. There are a number of potentially nasty corner-cases, but libogg's page sequence number tracking prevents us from accidentally gluing extraneous packet data onto some other unsuspecting packet, so I believe the chance of introducing new bugs here is manageable. This reduces the number of seeks in Simon Jackson's continued-page test case by over 23%. This also means we can handle pages without useful timestamps (including those multiplexed from another stream) between the last timestamped page at or before our target and the first timestamped page after our target without any additional seeks. Previously we would scan all of this data, see that the 'page_offset' of the most recent page we read was way beyond 'best' (the end of the last timestamped page before our target), and then seek back and scan it all again. This should greatly reduce the number of seeks we need in multiplexed streams, even if there are no continued packets.
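A sketch of the mechanism in plain libogg terms (simplified; not the actual opusfile seek code): the buffered page is fed to the stream state, whose sequence-number tracking keeps stray data from being glued onto the wrong packet.

```c
#include <ogg/ogg.h>

/*Submit a previously buffered page plus the newly read one, then pull out any
  packets that are now complete, including a packet continued across them.*/
static void flush_packets(ogg_stream_state *_os,ogg_page *_buffered,
 ogg_page *_latest){
  ogg_packet op;
  if(_buffered!=NULL)ogg_stream_pagein(_os,_buffered);
  ogg_stream_pagein(_os,_latest);
  while(ogg_stream_packetout(_os,&op)==1){
    /*...timestamp or decode op...*/
  }
}
```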
-
Timothy B. Terriberry authored
We avoid seeking when the seek target lies within the packets buffered from the current page. However, the calculation of the page start time was _adding_ the first packet's duration to its end time, instead of subtracting it.
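The corrected relationship as a one-line sketch (names illustrative; the real code uses granule-position-safe arithmetic):

```c
#include <ogg/ogg.h>

/*The page's starting time is the first packet's ending granule position
  MINUS that packet's duration; the bug added the duration instead.*/
static ogg_int64_t page_start_gp(ogg_int64_t _first_pkt_end_gp,
 int _first_pkt_duration){
  return _first_pkt_end_gp-_first_pkt_duration;
}
```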
-
- Dec 07, 2015
Timothy B. Terriberry authored
When assertions are disabled, nothing else uses header_len.
-
- Dec 06, 2015
Timothy B. Terriberry authored
If the packet where we wanted to start decoding was continued from a previous page, and _other_ packets ended on that previous page, we wouldn't feed the previous page to the ogg_stream_state. That meant we wouldn't get the packet we wanted, and would fail with OP_EBADLINK (because the starting PCM offset of the first packet we did decode would already be after the one we wanted). Instead, check for continued packet data and feed in an extra page to prime the stream state. Thanks to Simon Jackson for the report and the excellent test case.
-
Timothy B. Terriberry authored
-
- Feb 27, 2015
Timothy B. Terriberry authored
OS X 10.5.8 does not define AI_NUMERICSERV either, so instead of trying to enumerate the platforms that don't, just test for the value itself. Patch by Dave Evans at MacPorts.
Fixes #2172
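A sketch of the "test for the value itself" approach (the exact place and fallback used by the patch are assumptions):

```c
#include <string.h>
#include <sys/socket.h>
#include <netdb.h>

static void init_hints(struct addrinfo *_hints){
  memset(_hints,0,sizeof(*_hints));
  _hints->ai_socktype=SOCK_STREAM;
  /*Feature-test the flag itself rather than enumerating platforms; where it
    is missing, the numeric-service shortcut is simply skipped.*/
#if defined(AI_NUMERICSERV)
  _hints->ai_flags|=AI_NUMERICSERV;
#endif
}
```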
-
- Mar 26, 2014
Timothy B. Terriberry authored
Makes style slightly more consistent. Also fixes the return code of op_fetch_headers() to make it consistently return OP_EBADHEADER if the stream runs out of pages after a valid OpusHead packet is found. Previously, if a valid OpusHead was found, it would return OP_ENOTFORMAT if it ran out of pages before finding one without its BOS flag set, and OP_EBADHEADER if it ran out of pages after finding one without its BOS flag set.
-
- Mar 25, 2014
Timothy B. Terriberry authored
width=height=depth=colors=0 should be allowed, but wasn't thanks to some missing parentheses. Thanks to Lithopsian for the report.
-
- Mar 16, 2014
Timothy B. Terriberry authored
We were skipping the BOS page for non-Opus streams, but never read in the next page, causing setup to fail with a duplicate serial number validation error. Thanks to Lithopsian for the report.
-
- Mar 12, 2014
Timothy B. Terriberry authored
-
- Mar 03, 2014
Timothy B. Terriberry authored
It helps to increment the comment count. Thanks to lithopsian for the report.
-