====== Multi-broadcasting ======

Google Summer of Code project by Stéphane Lepin

**Current State** (as of 2019-01-25): Multi-broadcasting released in 2.1, Opus encoding merged (likely released in the upcoming 2.3), finishing touches on the FDK-AAC encoder
  
==== Project description ====
This project adds multi-broadcasting to Mixxx as well as support for Opus and AAC/HE-AAC encoding (which has been requested by users in the past).
Multi-broadcasting is the ability to do live audio broadcasting to several streaming servers, each broadcasting connection having its own set of stream and encoding settings. One example use case would be a DJ who wants to offer several bitrates and audio formats to their listeners.
Opus and AAC encoders are implemented as recording and live streaming encoders, and the AAC encoder uses dynamic loading to avoid infringing both Mixxx's FOSS license and the rights of AAC's licensing/patent holders.
  
==== Relevant source code ====
  * **Multi-broadcasting: [[https://github.com/mixxxdj/mixxx/pull/1300|Mixxx PR #1300 on GitHub]]**
  * **Opus encoder: [[https://github.com/mixxxdj/mixxx/pull/1386|Mixxx PR #1386 on GitHub]]**
  * **AAC/HE-AAC encoder using fdk-aac: [[https://github.com/mixxxdj/mixxx/pull/1387|Mixxx PR #1387 on GitHub]]**
    * Live Broadcasting implemented with [[https://launchpad.net/~palakis/+archive/ubuntu/libshout-aac|a version of libshout]] modified for AAC streaming
    * Finds a dynamically-loadable libfdk-aac automatically. The Windows version can even find and use B.U.T.T.'s ("Broadcast Using This Tool" by Daniel Nöthen) copy of the library.
    * Supports AAC-LC (a.k.a. traditional AAC), HE-AAC and HE-AACv2
    * Left to do:
      * Add options for VBR recording
      * Add track metadata
  
==== GSoC Phase 1: Broadcasting profiles subsystem ====
  
==== GSoC Phase 2: Multiple broadcasting outputs ====
Built on top of the profiles subsystem implemented in Phase 1, these additions provide the actual multi-broadcasting functionality, allowing users to do Live Broadcasting to several Icecast/Shoutcast servers.
Management of the streaming connections is done in the Live Broadcasting Preferences panel. It shows the list of configured broadcasting outputs, and selecting a connection in the list shows its profile settings in an editable form below the connections list.
While Live Broadcasting is active, each individual connection can be enabled, disabled and re-enabled.

{{::multi-broadcasting.png?nolink&600|}}
  
=== Technical details ===
  * The streaming code originally in EngineBroadcast has been put into a new class named ShoutConnection, and EngineBroadcast has been deleted
    * ShoutConnection features:
      * Linked with a Broadcasting Profile (an instance of BroadcastProfile) passed as a constructor parameter
      * Has its own FIFO buffer filled by the engine
      * Has its own thread (based on EngineBroadcast's) to process frames made available in the FIFO buffer
  * SoundDeviceNetwork (the audio engine part responsible for Live Broadcasting) now handles several outputs
  * Management of ShoutConnection instances is done by a refactored BroadcastManager
    * Has an internal list of ShoutConnection instances, kept in sync with BroadcastSettings' profiles using signals and slots (see the sketch below)
    * Manages output workers in EngineNetworkStream (which has been reworked to have several output workers and a separate input worker)
  * The Live Broadcasting settings UI has been updated (see description above)
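To make the "kept in sync using signals and slots" idea concrete, here is a minimal sketch of the pattern, assuming hypothetical class and signal names rather than Mixxx's real API: the settings object announces profile changes, and the manager reacts by creating or tearing down one connection per profile.

<code cpp>
// Minimal sketch of the signal/slot synchronization described above.
// All class, signal and member names are illustrative assumptions.
#include <QHash>
#include <QObject>
#include <QString>

class BroadcastProfileSketch : public QObject {  // one per configured output
    Q_OBJECT
  public:
    explicit BroadcastProfileSketch(const QString& name) : m_name(name) {}
    QString name() const { return m_name; }
  private:
    const QString m_name;
};

class BroadcastSettingsSketch : public QObject {  // owns the profile list
    Q_OBJECT
  signals:
    void profileAdded(BroadcastProfileSketch* profile);
    void profileRemoved(BroadcastProfileSketch* profile);
};

class BroadcastManagerSketch : public QObject {
    Q_OBJECT
  public:
    explicit BroadcastManagerSketch(BroadcastSettingsSketch* settings) {
        // React to profile list changes announced by the settings object
        connect(settings, &BroadcastSettingsSketch::profileAdded,
                this, &BroadcastManagerSketch::onProfileAdded);
        connect(settings, &BroadcastSettingsSketch::profileRemoved,
                this, &BroadcastManagerSketch::onProfileRemoved);
    }

  private:
    void onProfileAdded(BroadcastProfileSketch* profile) {
        // In Mixxx this is where a connection with its own FIFO buffer and
        // worker thread would be created; the sketch only tracks the profile.
        m_connections.insert(profile->name(), profile);
    }
    void onProfileRemoved(BroadcastProfileSketch* profile) {
        m_connections.remove(profile->name());  // and tear the connection down
    }

    QHash<QString, QObject*> m_connections;
};
</code>

In Mixxx itself, the slot bodies would create and destroy ShoutConnection instances, each with its own FIFO buffer and worker thread.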
  
----
  
=== Live Broadcasting: UI polishing ===
  * Changes in LB preferences UI:
    * Simpler and complete "Remove connection" workflow
    * Show the state of each connection in the profile list
  * Error reporting: show an error message when one or more active connections fail to connect
  
=== Broadcasting profiles: secure password storage ===
In XML broadcasting profiles, two fields are considered sensitive: "Login" (username) and "Password".
By default, these sensitive fields are stored in plaintext in the XML document. They can optionally be encrypted (enabled/disabled by a user setting) to avoid privacy and/or security issues.
One way of securely storing credentials is to use the OS keychain via the [[https://github.com/frankosterfeld/qtkeychain|third-party QtKeychain library]], which is compatible with Windows, Linux and OS X. With secure password storage enabled, the broadcasting profile subsystem stores and fetches sensitive information in the OS keychain instead of the plaintext profile document.
Broadcasting profiles are currently not meant for import/export and sharing, so storing values outside of the XML document is fine. Users doing manual transfers of profiles from one system to another do so at their own risk and will see empty values for the sensitive fields on the target computer.
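For illustration, a write/read round trip with QtKeychain could look roughly like the sketch below; the service and key strings are invented for the example and are not the identifiers Mixxx would actually use.

<code cpp>
// Hedged illustration of storing and reading back a credential with QtKeychain.
#include <QCoreApplication>
#include <QEventLoop>
#include <QDebug>
#include <qt5keychain/keychain.h>   // header path varies with the packaging

static void waitFor(QKeychain::Job& job) {
    // QtKeychain jobs are asynchronous; block on a local event loop for brevity
    QEventLoop loop;
    QObject::connect(&job, &QKeychain::Job::finished, &loop, &QEventLoop::quit);
    job.start();
    loop.exec();
}

int main(int argc, char** argv) {
    QCoreApplication app(argc, argv);

    const QString service = QStringLiteral("Mixxx - Live Broadcasting");  // assumed name
    const QString key = QStringLiteral("profile:MyStation:password");     // assumed name

    QKeychain::WritePasswordJob write(service);
    write.setAutoDelete(false);   // jobs are stack-allocated in this sketch
    write.setKey(key);
    write.setTextData(QStringLiteral("s3cret"));
    waitFor(write);
    if (write.error() != QKeychain::NoError)
        qWarning() << "keychain write failed:" << write.errorString();

    QKeychain::ReadPasswordJob read(service);
    read.setAutoDelete(false);
    read.setKey(key);
    waitFor(read);
    if (read.error() == QKeychain::NoError)
        qDebug() << "password read back:" << read.textData();
    return 0;
}
</code>

QtKeychain jobs are asynchronous, so the real code would react to the finished() signal instead of blocking on a local event loop as this sketch does for brevity.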
  
Entries stored through QtKeychain have three attributes:
=== AAC streaming support ===
  * Confirmed on the wishlist: [[https://bugs.launchpad.net/mixxx/+bug/726991]]
  * Uses libfdk-aac via dynamic loading (see the sketch below)
    * Both AAC (LC) and AAC+ (HE-AAC and HE-AAC v2) are supported, among other AAC object types
    * Windows: extract libfdk-aac-1.dll from a B.U.T.T. installation, or let Mixxx find and use it directly from an existing B.U.T.T. installation
    * OS X: install fdk-aac from Homebrew (maybe too technical?)
    * Linux: install the libfdkaac package
  * Libshout support: needs a custom version, because upstream doesn't support AAC streams (unsupported MIME type)
  * Uses a FIFO buffer internally to always feed the encoder with a defined number of samples
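The sketch below illustrates the general dynamic-loading approach; it is not Mixxx's actual loader, the candidate library names are examples, and the two typedefs simply mirror the aacEncOpen/aacEncClose signatures from fdk-aac's aacenc_lib.h.

<code cpp>
// Minimal sketch: resolve a couple of libfdk-aac entry points at run-time
// with QLibrary, so the library never needs to be linked or shipped with Mixxx.
#include <QLibrary>
#include <QStringList>
#include <QDebug>

// Redeclared here (matching aacenc_lib.h) only so the sketch is self-contained.
typedef int (*aacEncOpen_t)(void** encoderHandle, unsigned int encModules, unsigned int maxChannels);
typedef int (*aacEncClose_t)(void** encoderHandle);

int main() {
    // Candidate library names; real code would also probe known install
    // locations such as B.U.T.T.'s folder on Windows.
    const QStringList candidates = {
        QStringLiteral("fdk-aac"),        // Linux: libfdk-aac.so
        QStringLiteral("libfdk-aac-1"),   // Windows: libfdk-aac-1.dll
    };

    QLibrary lib;
    for (const QString& name : candidates) {
        lib.setFileName(name);
        if (lib.load())
            break;
    }
    if (!lib.isLoaded()) {
        qWarning() << "libfdk-aac not found; AAC encoding stays disabled";
        return 1;
    }

    auto open  = reinterpret_cast<aacEncOpen_t>(lib.resolve("aacEncOpen"));
    auto close = reinterpret_cast<aacEncClose_t>(lib.resolve("aacEncClose"));
    if (!open || !close) {
        qWarning() << "unexpected libfdk-aac version: missing symbols";
        return 1;
    }

    void* handle = nullptr;
    if (open(&handle, 0, 2) == 0) {   // 0 = allocate all modules, 2 channels
        qDebug() << "fdk-aac encoder opened via dynamic loading";
        close(&handle);
    }
    return 0;
}
</code>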
  
=== Opus streaming support ===
  * Confirmed on the wishlist: [[https://bugs.launchpad.net/mixxx/+bug/1338413]]
  * libopus is quite easy to use
  * Opus-encoded data is muxed into an Ogg stream
  * Requires header packets specific to Opus streams: OpusHead and OpusTags (see the sketch below)
    * These headers are standardized. No library exists to generate them; however, their structure is quite simple and similar to Vorbis' header and comment packets.
  * Same as the fdk-aac encoder: uses a FIFO buffer internally to always feed the encoder a defined number of samples
    * Unlike fdk-aac, this requirement is clearly stated in the encoder's documentation
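As a rough sketch (not Mixxx's encoder code), the 19-byte OpusHead identification header for a mono or stereo stream can be assembled by hand as shown below, following the field layout from RFC 7845; the helper names are invented for the example.

<code cpp>
// Hand-build the "OpusHead" identification header (RFC 7845, channel
// mapping family 0) that an Ogg Opus stream needs as its first packet.
#include <cstdint>
#include <vector>

static void putLE16(std::vector<uint8_t>& v, uint16_t x) {
    v.push_back(x & 0xFF);
    v.push_back((x >> 8) & 0xFF);
}
static void putLE32(std::vector<uint8_t>& v, uint32_t x) {
    putLE16(v, x & 0xFFFF);
    putLE16(v, x >> 16);
}

std::vector<uint8_t> makeOpusHead(uint8_t channels, uint16_t preSkip,
                                  uint32_t inputSampleRate) {
    std::vector<uint8_t> packet;
    const char magic[8] = {'O','p','u','s','H','e','a','d'};
    packet.insert(packet.end(), magic, magic + 8); // magic signature
    packet.push_back(1);                // version, always 1
    packet.push_back(channels);         // output channel count
    putLE16(packet, preSkip);           // pre-skip in 48 kHz samples
    putLE32(packet, inputSampleRate);   // original input rate (informational)
    putLE16(packet, 0);                 // output gain, Q7.8 dB, 0 = no gain
    packet.push_back(0);                // channel mapping family 0 (mono/stereo)
    return packet;                      // wrap in an ogg_packet with b_o_s = 1
}
</code>

The OpusTags packet is built in the same spirit: the "OpusTags" magic followed by a vendor string and length-prefixed comment strings, exactly like a Vorbis comment block.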
  
----
  
Week 7's work is for the Preferences UI. Testing is already possible with the current UI, and the first WIP of the new UI will allow for testing on several streaming outputs.
// EDIT: bad idea.//
  
=== Week 7 and 8: July 10 - July 23 ===
  
=== Week 12: August 14 - August 18 ===
Opus, AAC and HE-AAC encoders are now a reality in Mixxx! These encoders are on the project's wishlist and much awaited by users. But in the process, other aspects mentioned in the previous report were slightly overlooked...

The Opus encoder uses libopus for encoding, and the resulting encoded data is muxed into an Ogg stream using libogg.
Opus streams embedded in Ogg need a special "OpusHead" header packet to make the stream recognized as Opus data, and an "OpusTags" packet to provide stream/track comments (artist/title metadata) to players in a format identical to Vorbis comments. No library exists to generate instances of these two packets, but fortunately their structure is easy to understand and simple enough to implement with bit manipulation.

Each Opus frame has a fixed duration in milliseconds, chosen by the user or developer from a set of possible values. This means the encoder requires a specific number of samples per call, no less, no more. Whether in Live Broadcasting or Recording, the engine delivers far more samples than the encoder can process in one go, so a FIFO buffer is used to store samples and pull a fixed-size chunk from it on each encoder call.
The same situation occurs in the AAC encoder, with one difference: the frame size/sample count is not configurable by the user.
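A minimal sketch of that FIFO arrangement, with invented names and no real encoder attached, could look like this:

<code cpp>
// Buffer incoming engine samples in a FIFO and hand the encoder exactly one
// frame's worth of samples per call, carrying any remainder over to the next
// engine callback.
#include <algorithm>
#include <cstddef>
#include <deque>
#include <vector>

class EncoderFifoSketch {
  public:
    // e.g. 48000 Hz stereo, 20 ms Opus frames -> 960 samples/ch * 2 channels
    explicit EncoderFifoSketch(size_t samplesPerFrame)
        : m_samplesPerFrame(samplesPerFrame) {}

    // Called from the engine with an arbitrary number of interleaved samples
    void feed(const float* samples, size_t count) {
        m_fifo.insert(m_fifo.end(), samples, samples + count);
        std::vector<float> frame(m_samplesPerFrame);
        // Drain in fixed-size chunks; leftovers stay queued for the next call
        while (m_fifo.size() >= m_samplesPerFrame) {
            std::copy(m_fifo.begin(), m_fifo.begin() + m_samplesPerFrame, frame.begin());
            m_fifo.erase(m_fifo.begin(), m_fifo.begin() + m_samplesPerFrame);
            encodeFrame(frame.data(), m_samplesPerFrame);
        }
    }

  private:
    void encodeFrame(const float* /*frame*/, size_t /*count*/) {
        // placeholder: the real code would call the Opus or fdk-aac encoder here
    }

    size_t m_samplesPerFrame;
    std::deque<float> m_fifo;
};
</code>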

The AAC encoder uses libfdk-aac, and the resulting encoded data doesn't need additional muxing or specific headers (these aspects are handled by libfdk-aac, depending on the encoder configuration). Mixxx's encoder implementation currently supports AAC-LC (traditional plain AAC), HE-AAC (previously AAC+) and HE-AAC v2.

The AAC encoder doesn't require libfdk-aac when compiling and distributing Mixxx. Instead, the external library is loaded at run-time (a process called dynamic loading) from a known name and location. This behaviour is similar to Mixxx's use of libmp3lame for MP3 encoding. The library can be placed in Mixxx's installation folder or searched for in known locations, including B.U.T.T.'s installation folder on Windows if it is installed in AppData.
  
=== Final week: August 21 - August 27 ===
Here it is.
Three months and more than 170 (and counting) commits later, the official final coding period for the Google Summer of Code is over. Remaining engine issues in multi-broadcasting and, to a lesser extent, the fdk-aac and Opus encoders have been fixed.
Actual work is not over yet! Discussion is still going on regarding a few specific details of the new Live Broadcasting user experience, and it takes place in the GitHub Pull Request for multi-broadcasting.

It's been an honor as well as a great pleasure to work on Mixxx during GSoC. In the process, I got better at C++ and Qt, and had a glimpse of what an audio engine looks like. Once the work on multi-broadcasting is done, I'd be happy to contribute other features to Mixxx outside of GSoC.
Thanks to the Mixxx Team (and Daniel, my mentor) for letting me be part of this adventure!