SCP-related tips and issue resolution

Today I had trouble sending a file to a remote target board with the following command:


jiafei427@CKUBU:~/tmp$ scp /home/jiafei427/tmp/slrclub_cert.crt root@10.177.247.35:/var/tmp/usr/ck
sh: scp: cannot execute - No such file or directory
lost connection

Damn, I had no clue what that meant, and googled for what felt like forever.

I used the “-vvv” option, but saw nothing I could recognize.

 

Finally I found the solution myself, and it was actually a bit too easy.

The solution is simply to put an “scp” binary onto the target board.

scp needs the binary on both the client and the server side. 😦

(didn’t know that..)
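In my case that meant getting an scp binary onto the board without being able to use scp itself. One rough way to do that (just a sketch, assuming ssh login works, /usr/bin on the target is writable and in the remote PATH, and you have an scp binary built for the target’s architecture; the file name “scp-armle” is a placeholder) is to pipe it over plain ssh:

$ cat ./scp-armle | ssh root@10.177.247.35 'cat > /usr/bin/scp && chmod +x /usr/bin/scp'

After that, the original scp command should go through.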

 

Here are some more examples of scp usage:

What is Secure Copy?

scp allows files to be copied to, from, or between different hosts. It uses ssh for data transfer and provides the same authentication and same level of security as ssh.

Examples

Copy the file “foobar.txt” from a remote host to the local host

$ scp your_username@remotehost.edu:foobar.txt /some/local/directory

Copy the file “foobar.txt” from the local host to a remote host

$ scp foobar.txt your_username@remotehost.edu:/some/remote/directory

Copy the directory “foo” from the local host to a remote host’s directory “bar”

$ scp -r foo your_username@remotehost.edu:/some/remote/directory/bar

Copy the file “foobar.txt” from remote host “rh1.edu” to remote host “rh2.edu”

$ scp your_username@rh1.edu:/some/remote/directory/foobar.txt \
your_username@rh2.edu:/some/remote/directory/

Copy the files “foo.txt” and “bar.txt” from the local host to your home directory on the remote host

$ scp foo.txt bar.txt your_username@remotehost.edu:~

Copy the file “foobar.txt” from the local host to a remote host using port 2264

$ scp -P 2264 foobar.txt your_username@remotehost.edu:/some/remote/directory

Copy multiple files from the remote host to your current directory on the local host

$ scp your_username@remotehost.edu:/some/remote/directory/\{a,b,c\} .
$ scp your_username@remotehost.edu:~/\{foo.txt,bar.txt\} .

scp Performance

By default scp uses the Triple-DES cipher to encrypt the data being sent. Using the Blowfish cipher has been shown to increase speed. This can be done by using option -c blowfish in the command line.

$ scp -c blowfish some_file your_username@remotehost.edu:~

It is often suggested that the -C option for compression should also be used to increase speed. The effect of compression, however, will only significantly increase speed if your connection is very slow. Otherwise it may just be adding extra burden to the CPU. An example of using blowfish and compression:

$ scp -c blowfish -C local_file your_username@remotehost.edu:~
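Note that recent OpenSSH releases have dropped the blowfish and 3des ciphers, so the commands above may fail with an “unknown cipher” error on a modern client. You can list the ciphers your build actually supports and substitute one of those, for example:

$ ssh -Q cipher
$ scp -c aes128-ctr local_file your_username@remotehost.edu:~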

 

 

Ref.

http://www.hypexr.org/linux_scp_help.php

Mount QNX file system


# df -h
/dev/emmc/uda0.ms.2 95M 15M 80M 16% /efs/
/dev/emmc/uda0.97d7 2.9G 955M 2.0G 32% /base/
/dev/emmc/rpmb0 4.0M 4.0M 0 100%
/dev/emmc/boot1 2.0K 2.0K 0 100% /dev/emmc/boot1.
/dev/emmc/boot1 4.0M 4.0M 0 100%
/dev/emmc/boot0 4.0M 4.0M 0 100%
/dev/emmc/uda0 43G 43G 0 100% /dev/emmc/uda0.a
/dev/emmc/uda0 12G 12G 0 100% /dev/emmc/uda0.1
/dev/emmc/uda0 512K 512K 0 100% /dev/emmc/uda0.9
/dev/emmc/uda0 128K 128K 0 100% /dev/emmc/uda0.f
/dev/emmc/uda0 512K 512K 0 100% /dev/emmc/uda0.d
/dev/emmc/uda0 1.0M 1.0M 0 100% /dev/emmc/uda0.8
/dev/emmc/uda0 32M 32M 0 100% /dev/emmc/uda0.6
/dev/emmc/uda0 256M 256M 0 100% /dev/emmc/uda0.5
/dev/emmc/uda0 2.0M 2.0M 0 100% /dev/emmc/uda0.0
/dev/emmc/uda0 2.0M 2.0M 0 100% /dev/emmc/uda0.e
/dev/emmc/uda0 8.0K 8.0K 0 100% /dev/emmc/uda0.2
/dev/emmc/uda0 1.0K 1.0K 0 100% /dev/emmc/uda0.5
/dev/emmc/uda0 1.0M 1.0M 0 100% /dev/emmc/uda0.6
/dev/emmc/uda0 128K 128K 0 100% /dev/emmc/uda0.3
/dev/emmc/uda0 1.0M 1.0M 0 100% /dev/emmc/uda0.2
/dev/emmc/uda0 33M 33M 0 100% /dev/emmc/uda0.a
/dev/emmc/uda0 1.0K 1.0K 0 100% /dev/emmc/uda0.1
/dev/emmc/uda0 256K 256K 0 100% /dev/emmc/uda0.e
/dev/emmc/uda0 256K 256K 0 100% /dev/emmc/uda0.e
/dev/emmc/uda0 1.0K 1.0K 0 100% /dev/emmc/uda0.6
/dev/emmc/uda0 32M 32M 0 100% /dev/emmc/uda0.3
/dev/emmc/uda0 1.0M 1.0M 0 100% /dev/emmc/uda0.4
/dev/emmc/uda0 16M 16M 0 100% /dev/emmc/uda0.7
/dev/emmc/uda0 256K 256K 0 100% /dev/emmc/uda0.8
/dev/emmc/uda0 256K 256K 0 100% /dev/emmc/uda0.8
/dev/emmc/uda0 256K 256K 0 100% /dev/emmc/uda0.7
/dev/emmc/uda0 256K 256K 0 100% /dev/emmc/uda0.7
/dev/emmc/uda0 256K 256K 0 100% /dev/emmc/uda0.4
/dev/emmc/uda0 256K 256K 0 100% /dev/emmc/uda0.4
/dev/emmc/uda0 64M 64M 0 100% /dev/emmc/uda0.9
/dev/emmc/uda0 64M 64M 0 100% /dev/emmc/uda0.2
/dev/emmc/uda0 1.0M 1.0M 0 100% /dev/emmc/uda0.4
/dev/emmc/uda0 1.0M 1.0M 0 100% /dev/emmc/uda0.4
/dev/emmc/uda0 128K 128K 0 100% /dev/emmc/uda0.a
/dev/emmc/uda0 128K 128K 0 100% /dev/emmc/uda0.a
/dev/emmc/uda0 2.0M 2.0M 0 100% /dev/emmc/uda0.6
/dev/emmc/uda0 512K 512K 0 100% /dev/emmc/uda0.c
/dev/emmc/uda0 512K 512K 0 100% /dev/emmc/uda0.c
/dev/emmc/uda0 512K 512K 0 100% /dev/emmc/uda0.e
/dev/emmc/uda0 512K 512K 0 100% /dev/emmc/uda0.e
/dev/emmc/uda0 500K 500K 0 100% /dev/emmc/uda0.0
/dev/emmc/uda0 500K 500K 0 100% /dev/emmc/uda0.0
/dev/emmc/uda0 2.0M 2.0M 0 100% /dev/emmc/uda0.a
/dev/emmc/uda0 2.0M 2.0M 0 100% /dev/emmc/uda0.a
/dev/emmc/uda0 2.0M 2.0M 0 100% /dev/emmc/uda0.d
/dev/emmc/uda0 2.0M 2.0M 0 100% /dev/emmc/uda0.d
/dev/emmc/uda0 58G 58G 0 100%

 

PS: I couldn't use those 43G of free space, so I needed to create a qnx6 filesystem on that partition and mount it.

 

# df -n /dev/emmc/uda0.97d7b011-54da-4835-b3c4-917ad6e73d74.17
Filesystem Mounted on Type
/dev/emmc/uda0.97d7 /base/ qnx6

# df -n /dev/emmc/uda0.aa9a5c4c-4f1f-7d3a-014a-22bd33bf7191.47
Filesystem Mounted on Type
/dev/emmc/uda0 /dev/emmc/uda0.a blk-partition

# mkqnx6fs /dev/emmc/uda0.aa9a5c4c-4f1f-7d3a-014a-22bd33bf7191.47

# mount -t qnx6 /dev/emmc/uda0.aa9a5c4c-4f1f-7d3a-014a-22bd33bf7191.47 /shit/

# df -nh /shit
Filesystem Mounted on Type
/dev/emmc/uda0.aa9a /shit/ qnx6

# df -Ph /shit
Filesystem Size Used Available Capacity Mounted on
/dev/emmc/uda0.aa9a 43G 1.3G 41G 4% /shit/
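As a quick sanity check that the new filesystem is actually writable (the path is just the example mount point used above):

# echo "hello qnx6" > /shit/test.txt
# cat /shit/test.txt
hello qnx6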

 

Ref.

http://dooeui.blogspot.kr/2014/11/qnx-neutrino-v6.html

 

[node.js] Error: setuid user id does not exist

 


# npm config ls -g
Error: setuid user id does not exist
at /base/usr/npm/node_modules/uid-number/uid-number.js:49:16
at ChildProcess.exithandler (child_process.js:197:7)
at emitTwo (events.js:106:13)
at ChildProcess.emit (events.js:191:7)
at maybeClose (internal/child_process.js:877:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5)

Error: ENOENT: no such file or directory, open 'npm-debug.log.3559331957'
at Error (native)

Modify the file /base/usr/npm/node_modules/uid-number/uid-number.js:

Go to line 11 and replace uidSupport = process.getuid && process.setuid with uidSupport = false.
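If sed with in-place editing (-i) is available on the target, the same one-line change can be applied non-interactively (a sketch; the exact wording of line 11 may differ between npm versions):

# sed -i 's/uidSupport = process.getuid && process.setuid/uidSupport = false/' /base/usr/npm/node_modules/uid-number/uid-number.js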

 

problem solved.

node.js dev. tips for myself

=======================================
default location for node_modules:


/usr/local/lib/node_modules
/usr/lib/node_modules

 

With the following command you can check the node_modules location that npm will install to:


$ npm prefix -g
/usr/local
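If you want the global node_modules directory itself rather than the prefix, npm can print it directly (the output will match the prefix shown above):

$ npm root -g
/usr/local/lib/node_modules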

 

When you want to develop node.js code in Sublime Text, install DocBlockr (comment generator) and Nodejs (autocomplete) for Sublime:
https://github.com/spadgos/sublime-jsdocs
https://packagecontrol.io/packages/Nodejs

 

 

CTRL+B to run the program and observe the result inside Sublime

 

 

Ref.:

http://scottksmith.com/blog/2014/09/29/3-essential-sublime-text-plugins-for-node-and-javascript-developers/

Posting Source Code in WordPress

Posting Source Code

While WordPress.com doesn’t allow you to use potentially dangerous code on your blog, there is a way to post source code for viewing. We have created a shortcode you can wrap around source code that preserves its formatting and even provides syntax highlighting for certain languages, like so:

#button {
    font-weight: bold;
    border: 2px solid #fff;
}

To accomplish the above, just wrap your code in these tags:

[code language="css"]
your code here
[/code]

The language (or lang) parameter controls how the code is syntax highlighted. The following languages are supported:

  • actionscript3
  • bash
  • clojure
  • coldfusion
  • cpp
  • csharp
  • css
  • delphi
  • diff
  • erlang
  • fsharp
  • go
  • groovy
  • html
  • java
  • javafx
  • javascript
  • latex (you can also render LaTeX)
  • matlab (keywords only)
  • objc
  • perl
  • php
  • powershell
  • python
  • r
  • ruby
  • scala
  • sql
  • text
  • vb
  • xml

If the language parameter is not set, it will default to “text” (no syntax highlighting).

Code in between the source code tags will automatically be encoded for display, so you don’t need to worry about HTML entities or anything.

Configuration Parameters

The shortcodes also accept a variety of configuration parameters that you may use to customize the output. All are completely optional.

  • autolinks (true/false) — Makes all URLs in your posted code clickable. Defaults to true.
  • collapse (true/false) — If true, the code box will be collapsed when the page loads, requiring the visitor to click to expand it. Good for large code posts. Defaults to false.
  • firstline (number) — Use this to change what number the line numbering starts at. It defaults to 1.
  • gutter (true/false) — If false, the line numbering on the left side will be hidden. Defaults to true.
  • highlight (comma-separated list of numbers) — You can list the line numbers you want to be highlighted. For example “4,7,19”.
  • htmlscript (true/false) — If true, any HTML/XML in your code will be highlighted. This is useful when you are mixing code into HTML, such as PHP inside of HTML. Defaults to false and will only work with certain code languages.
  • light (true/false) — If true, the gutter (line numbering) and margin (see below) will be hidden. This is helpful when posting only one or two lines of code. Defaults to false.
  • padlinenumbers (true/false/integer) — Allows you to control the line number padding. true will result in automatic padding, false will result in no padding, and entering a number will force a specific amount of padding.
  • title (string) — Set a label for your code block. Can be useful when combined with the collapse parameter.
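For example, several of these parameters can be combined on one shortcode (the values here are purely illustrative):

[code language="bash" title="build.sh" firstline="10" highlight="12" collapse="true"]
your code here
[/code]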

 

Ref.:

https://en.support.wordpress.com/code/posting-source-code/#toc

cross-compile node.js for QNX

I was able to cross-compile the node 0.10 version from this link (see Ref. below).

FYI, my build script differs a bit from the original, as follows:


#!/bin/bash

if [ ! -d "${QNX_HOST}" ]; then
  echo "QNX_HOST must be set to the path of the QNX host toolchain."
  exit 1
fi

if [ ! -d "${QNX_TARGET}" ]; then
  echo "QNX_TARGET must be set to the path of the QNX target toolchain."
  exit 1
fi

if [ "${QCONF_OVERRIDE}" != "" ]; then
  cp -p $QCONF_OVERRIDE /tmp/owbqsk$$.mk
  echo "all:" >>/tmp/owbqsk$$.mk
  echo ' echo $(INSTALL_ROOT_nto)' >>/tmp/owbqsk$$.mk
  STAGE_DIR=`make -s -f /tmp/owbqsk$$.mk`
  rm /tmp/owbqsk$$.mk
fi

if [ "${STAGE_DIR}" == "" ]; then
  echo Staging directory could not be determined. Using NDK.
else
  echo Using staging directory: ${STAGE_DIR}
fi

if [ "${1}" == "clean" ]; then
  make -f Makefile clean
  exit 1
fi

echo Building for aarch64
#CPU="aarch64"
CPU="arm"
CPUDIR="${CPU}le-v7"
CPUTYPE="${CPU}v7le"
BUSUFFIX="${CPU}v7"
CPU_VER="cortex-a9"
#CPU_CFLAGS="-mtune=${CPU_VER} -mfpu=vfpv3"
CPU_CFLAGS="-mtune=${CPU_VER} -mfpu=vfpv3-d16"

QNX_TOOL_DIR="${QNX_HOST}/usr/bin"
QNX_COMPILER="${QNX_TOOL_DIR}/ntoarmv7-gcc"
QNX_COMPILER="${QNX_TOOL_DIR}/qcc"
QNX_TOOL_PREFIX="${QNX_TOOL_DIR}/nto${BUSUFFIX}"

if [ "${STAGE_DIR}" == "" ]; then
  QNX_LIB="${QNX_TARGET}/${CPUDIR}/lib"
  QNX_USR_LIB="${QNX_TARGET}/${CPUDIR}/usr/lib"
  QNX_INC="${QNX_TARGET}/usr/include"
else
  QNX_LIB="${STAGE_DIR}/${CPUDIR}/lib"
  QNX_USR_LIB="${STAGE_DIR}/${CPUDIR}/usr/lib"
  QNX_INC="${STAGE_DIR}/usr/include"
fi

COMP_PATHS=" \
  -Wl,-rpath-link,${QNX_LIB} \
  -Wl,-rpath-link,${QNX_USR_LIB} \
  -L${QNX_LIB} \
  -L${QNX_USR_LIB} \
  -I${QNX_INC}"

export CC="${QNX_COMPILER}"
export CFLAGS="-V5.4.0,gcc_ntoarmv7le -g -Wformat -Wformat-security -Werror=format-security -Wl,-z,relro -fPIE -pie ${COMP_PATHS} ${CPU_CFLAGS}"
#export CFLAGS="-V5.4.0,gcc_ntoarmv7le -g -Wformat -Wformat-security -Werror=format-security -Wl,-z,relro -fPIE -pie -D__QNXNTO__ ${COMP_PATHS} ${CPU_CFLAGS}"
#export CFLAGS="-V5.4.0,gcc_ntoaarch64le -g -Wformat -Wformat-security -Werror=format-security -Wl,-z,relro -fPIE -pie ${COMP_PATHS} ${CPU_CFLAGS}"
#export CFLAGS="-Vgcc_nto${CPUTYPE} -Wformat -Wformat-security -Werror=format-security -Wl,-z,relro -fPIE -pie ${COMP_PATHS} ${CPU_CFLAGS}"
#export CFLAGS="-Vgcc_nto${CPUTYPE} -g -Wformat -Wformat-security -Werror=format-security -Wl,-z,relro -fPIE -pie ${COMP_PATHS} ${CPU_CFLAGS}"
#export CFLAGS="-Vgcc_nto${CPUTYPE} -g -Wformat -Wformat-security -Werror=format-security -Wl,-z,relro -fPIE -D__QNXNTO65__ ${COMP_PATHS} ${CPU_CFLAGS}"   # for QNX650
export CXX="${QNX_COMPILER}"
#export CXXFLAGS="-Vgcc_nto${CPUTYPE}_cpp-ne -g -lang-c++ -Wformat -Wformat-security -Werror=format-security -Wl,-z,relro -fPIE -pie ${COMP_PATHS} ${CPU_CFLAGS}"
#export CXXFLAGS="-Vgcc_nto${CPUTYPE}_cpp-ne -g -lang-c++ -Wformat -Wformat-security -Werror=format-security -Wl,-z,relro -fPIE -Wl,--export-dynamic ${COMP_PATHS} ${CPU_CFLAGS}"
#export CXXFLAGS="-Vgcc_nto${CPUTYPE}_cpp-ne -lang-c++ -Wformat -Wformat-security -Werror=format-security -Wl,-z,relro -fPIE -Wl,--export-dynamic ${COMP_PATHS} ${CPU_CFLAGS}"
#export CXXFLAGS="-V5.4.0,gcc_ntoaarch64le -g -lang-c++ -Wformat -Wformat-security -Werror=format-security -Wl,-z,relro -fPIE -Wl,--export-dynamic ${COMP_PATHS} ${CPU_CFLAGS}"
export CXXFLAGS="-V5.4.0,gcc_ntoarmv7le -g -lang-c++ -Wformat -Wformat-security -Werror=format-security -Wl,-z,relro -fPIE -Wl,--export-dynamic ${COMP_PATHS} ${CPU_CFLAGS}"
export AR="${QNX_TOOL_PREFIX}-ar"
export LINK="${QNX_COMPILER}"
export LDFLAGS="${CXXFLAGS} -lcrypto -lssl"
export RANLIB="${QNX_TOOL_PREFIX}-ranlib"

export __QNXNTO=1

# The set of GYP_DEFINES to pass to gyp.
export GYP_DEFINES="OS=qnx want_separate_host_toolset=0"
#export GYP_GENERATORS="make-linux"

CONFIGURE_OPTIONS=""

CONFIGURE_OPTIONS="--dest-cpu=arm --dest-os=qnx --with-arm-float-abi=softfp --without-snapshot --without-dtrace"

./configure --shared-openssl --shared-zlib ${CONFIGURE_OPTIONS}

if [ "${1}" == "test" ]; then
  make test
else
  make -j4
fi
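For reference, assuming the script above is saved as build-node-qnx.sh (my own name for it, not from the original post) in the root of the node source tree, it is driven roughly like this:

$ source ~/qnx700/qnxsdp-env.sh     # or however you set QNX_HOST / QNX_TARGET
$ ./build-node-qnx.sh               # configure and build (make -j4)
$ ./build-node-qnx.sh test          # run the test suite instead
$ ./build-node-qnx.sh clean         # clean the tree and exit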

 

Anyway, I couldn’t make it run on aarch64 since I ran out of time.

 

 

 

Ref.:

http://blog.hemnik.com/2014/06/nodejs-for-qnx-source-code.html

http://fastr.github.io/articles/cross-compiling-node.js-for-arm.html

Streaming media on Web

Conventional video playback (also known as Progressive) involves a single video file at a single quality that is transferred as it is being played. If the user’s playback has caught up to how much of the video has been downloaded, the player pauses and buffers. YouTube subscribes to this method of playback but offers different quality levels that you manually select. You only watch a single quality unless you manually switch it.

With adaptive media streaming, a high quality base video source (often called a Mezzanine) is converted into a set of video files of varying qualities. This process is known as encoding. For example, you can take a mezzanine file and encode low, medium, high, and ultra quality versions of a video. These encoded files are then stored for distribution on a Server or Content Delivery Network (CDN).

When the user attempts to play a video adaptively, they are given a Manifest file that lists information for all these different video qualities. Adaptive streaming technologies then alternate between the different qualities (bitrates) depending on a user’s varying connection while playing the video in order to ensure that buffering is minimized. In order to start playback as soon as possible, adaptive streaming technologies usually begin playback at the lowest quality and then scale upwards after a few seconds. You may have noticed this happening when you start watching a movie or episode on Netflix.

A video player (often referred to as a Client) that supports an adaptive media streaming technology will handle this process of switching between qualities automatically without a user’s involvement.

[Figure: adaptive media streaming vs. progressive playback]

What are some adaptive streaming technologies?

The two biggest smooth streaming technologies I’ve worked with in my time at Digiflare are Apple’s HTTP Live Streaming (HLS) and Microsoft’s Smooth Streaming (MSS) technologies. These technologies differ in terms of the video and audio formats they support as well as how they go about delivering the video content optimally.

Streaming – What does HLS, HDS and MPEG-DASH mean?

These are all ‘chunked HTTP’ streaming protocols. They work by breaking the content into small chunks (a few seconds each) that can be delivered as separate files rather than as a constant stream of content. The advantage of this method is that it allows the client to make use of the ‘bursty’ nature of the internet and does not rely on a constant bandwidth being available.

Apple’s HTTP Live Streaming (HLS)

HLS stands for HTTP Live Streaming and was developed by Apple to serve its iOS and Mac OS devices. It is also widely available for other devices, notably Android. Apple made the specification public by publishing it as a draft IETF RFC. HLS usually makes use of MPEG-2 transport stream technology, which carries a separate licensing cost that deters some manufacturers from implementing it in their devices. It is a simple protocol that is quite easy to implement.

Summary:

  • Manifest: M3U8 playlist
  • Video: H.264
  • Audio: MP3 or HE-AAC
  • Container: MPEG-2
  • Server: No special server software
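As an illustration of the manifest, a minimal M3U8 master playlist just lists the variant streams and their bitrates (the bandwidth values and URIs below are made up):

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
medium/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8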

Microsoft’s Smooth Streaming (MSS)

Microsoft’s Smooth Streaming technology also involves encoding a mezzanine into various quality levels but MSS supports slightly different formats in the encoding process. Video can be encoded using H.264 or VC-1 and audio is encoded to AAC or WMA. The encoded quality level video is wrapped in an MP4 container with a *.ismv or *.isma file extension.

During the encoding process, XML manifest files are also generated. An *.ism file is generated for use by the server in describing the available bitrates while a *.ismc file is used by the client to inform it of available bit rates and other information required in presenting the content. One such piece of information is the chunk duration.

Unlike HLS, Microsoft’s Smooth Streaming doesn’t encode the individual qualities into a series of chunks. Instead, the server cuts the full content into chunks as it’s being delivered. This requires a specially set up server using Microsoft’s Internet Information Services (IIS).

For more information regarding the setup of IIS Servers and MSS manifest formatting see Microsoft’s guide on Getting Started with IIS Smooth Streaming and Smooth Streaming Deployment guides.

Summary:

  • Manifest: XML file with *.ism/ismc file extension
  • Video: VC-1 or H.264
  • Audio: AAC or WMA
  • Container: MP4 (with *.ismv/isma file extension)
  • Server: IIS (Internet Information Services) server
  • Additional: Only the quality-level files are stored; the server virtually splits them into chunks at playback time

HDS

HDS stands for HTTP Dynamic Streaming and was developed by Adobe to serve its Flash platform. The BBC uses this protocol for its desktop browser presentations using the BBC Standard Media Player (SMP), which implements the Flash playback client. Adobe has published the HDS protocol to registered developers. It is a more complex protocol and is harder than HLS to implement.

MPEG Dynamic Adaptive Streaming over HTTP (DASH)

MPEG-DASH stands for Motion Pictures Expert Group Dynamic Adaptive Streaming over HTTP. This is a new, completely open protocol that is just starting to be adopted by content producers and client implementations. It has the simplicity of HLS whilst being free of additional licensing other than that required by the codecs.

Unlike HLS, HDS, and Smooth Streaming, DASH is codec-agnostic.
DASH is audio/video codec agnostic. One or more representations (i.e., versions at different resolutions or bit rates) of multimedia files are typically available, and selection can be made based on network conditions, device capabilities and user preferences, enabling adaptive bitrate streaming and QoE (Quality of Experience) fairness.

Summary:

  • Manifest: Media Presentation Description (MPD)
  • Video: Codec agnostic
  • Audio: Codec agnostic
  • Container: MP4 or MPEG-2

MPEG DASH is the result of a collaborative effort from some of the biggest players in adaptive bitrate streaming (i.e. Adobe, Apple, and Microsoft). From a bird’s eye view it functions similarly to the technologies previously described, but differs in the details of its delivery to end users.

In DASH, the entirety of an available stream, made up of a media portion and a metadata manifest, is known as a Media Presentation. The manifest portion of this is called a Media Presentation Description (MPD). Much like an M3U8 or Smooth Streaming manifest, an MPD contains metadata for the media available.

The media portion of a presentation is divided into one or more consecutive time ranges, each known as a Period. A period is a set of time-aligned contents (audio, video, captions, etc.) which together form one viewing interval of the content. Each period consists of a collection of different media forms, each known as an Adaptation. So a period may consist of a separate video adaptation and audio adaptation. Each encoding (quality level) of a particular adaptation is known as a Representation. Each representation is split into short chunks, dubbed segments. Using the terminology at hand, the entire stream consists of a set of periods, where each period will typically contain a representation of each type of adaptation being delivered to the user in the presentation. Adaptive playback is achieved by choosing the appropriate representation for each segment as it is downloaded, while playback is taking place and the connection speed is being monitored.
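To make that terminology concrete, a heavily stripped-down MPD might look roughly like this (element values are illustrative only, not taken from a real service):

<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     mediaPresentationDuration="PT10M" minBufferTime="PT2S"
     profiles="urn:mpeg:dash:profile:isoff-on-demand:2011">
  <Period>
    <!-- one adaptation set per media type; each Representation is one quality level -->
    <AdaptationSet mimeType="video/mp4" segmentAlignment="true">
      <Representation id="video-low"  bandwidth="800000"  width="640"  height="360"/>
      <Representation id="video-high" bandwidth="5000000" width="1920" height="1080"/>
    </AdaptationSet>
    <AdaptationSet mimeType="audio/mp4">
      <Representation id="audio" bandwidth="128000"/>
    </AdaptationSet>
  </Period>
</MPD>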

As confusing as that may have been to sort out, there is a significant theoretical advantage to this approach of using different adaptations to build up a period, versus the approaches previously described for MSS and HLS. This advantage is the codec-agnostic nature of DASH. The media is served in either an MP4 or MPEG-2 container using whatever video and audio formats the provider chooses, and the onus is put on players to be able to decode and render the video/audio/captions/etc. This eases the effort for content creators and distributors to prepare their content for adaptive streaming and also removes a lot of the restrictions associated with proprietary solutions. That includes the IIS server setup for MSS and the proprietary encoding software for HLS.

However, this large scope of supported codecs does make for more complex player development. Communities have banded together to provide a plethora of player framework options for developing for DASH on a variety of platforms and for an assortment of codecs. These frameworks vary in their supported platforms and features, so a good amount of investigation must be done in advance to find the right fit for the feature requirements of the player as well as the platform.

This is where subscribing to MPEG DASH as a solution may become problematic on more obscure platforms, and even on some of the more popular ones. This means MPEG DASH is not yet the answer to the segregation issue that exists with adaptive bitrate streaming.

[Figure: HLS pros and cons]

[Figure: DASH pros and cons]

[Figure: adaptive bitrate technology comparison]

 

Sample data flow of MS video streaming service

[Figure: streaming data flow diagram]

Ref.:

http://iplayerhelp.external.bbc.co.uk/radio/other/streaming_hls_hds_mpeg

http://www.digiflare.com/adaptive-media-streaming-hls-vs-mss-vs-dash/

https://www.sitepoint.com/html5-video-understanding-compression-drm/

http://streaminglearningcenter.com/blogs/dash-or-hls-which-is-the-best-format-today.html

http://www.internetvideoarchive.com/documentation/video-api/progressive-download-vs-adaptive-bitrate/

https://bitmovin.com/mpeg-dash-vs-apple-hls-vs-microsoft-smooth-streaming-vs-adobe-hds/

https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP