Video Streaming using Icecast/P2PSP

Cristóbal Medina López
Juan Pablo García Ortiz
Juan Alvaro Muñoz Naranjo
José Juan Sánchez Hernández
Leocadio González Casado
Max Mertens
Vicente González Ruiz

SAL, UAL

Dec 7, 2017
https://github.com/P2PSP/slides


Internet transmission models

MODE       SCOPE                           PROTOCOLS             APPLICATIONS/SYSTEMS
Unicast    Whole network                   HTTP (TCP)            YouTube/Netflix
Broadcast  Subnet (LAN)                    ARP                   -
Multicast  Defined horizon (routers/TTL)   SLP (UDP), SDP (UDP)  Movistar+, Ono TV
Anycast    Internet                        DNS protocol (UDP)    CDNs (DNS)

(TOPOLOGY column: diagrams omitted)

Streaming models

Figures (omitted): IP Multicast; IP Unicast with the client/server model; IP Unicast with the P2P model.

Lab 1: Streaming with VLC

In [1]:
!cat labs/lab1.sh
echo "Killing all VLC instances (sources and listeners)"
killall vlc
sleep 1

echo "Creating the source"
cvlc ~/Videos/LBig_Buck_Bunny_small.ogv --sout "#duplicate{dst=http{dst=:8080/LBBB.ogv},dst=display}" --loop &
sleep 1

echo "Create two listeners"
cvlc http://localhost:8080/LBBB.ogv &
cvlc http://localhost:8080/LBBB.ogv &

Lab 2: Streaming with VLC and Icecast

In [4]:
!cat labs/lab2.sh
set -x

echo "Killing all VLC instances (sources and listeners)"
killall vlc
sleep 1

echo "Create two sources"
cvlc ~/Videos/LBig_Buck_Bunny_small.ogv --sout "#duplicate{dst=std{access=shout,mux=ogg,dst=source:hackme@localhost:8000/BBB.ogv},dst=display}" --loop &
#cvlc ~/Videos/LBig_Buck_Bunny_small.ogv --sout "#std{access=shout,mux=ogg,dst=source:hackme@localhost:8000/BBBs.ogv}" --loop &
sleep 1
cvlc  ~/Videos/Lchi84_14_m4.ogv --sout "#duplicate{dst=std{access=shout,mux=ogg,dst=source:hackme@localhost:8000/chi.ogv},dst=display}" --loop &
#cvlc ~/Videos/Lchi84_14_m4.ogv --sout "#std{access=shout,mux=ogg,dst=source:hackme@localhost:8000/LLL.ogv}" --loop &
sleep 1
read

echo "Check the infrastructure"
firefox http://localhost:8000 2> /dev/null &
sleep 5

echo "Create three listeners"
cvlc http://localhost:8000/BBB.ogv 2> /dev/null &
cvlc http://localhost:8000/BBB.ogv 2> /dev/null &
cvlc http://localhost:8000/chi.ogv 2> /dev/null &

set +x

Lab 3: Relaying

  • Icecast servers can be connected following a tree structure to increase scalability.
  • All or a subset of the streams (channels) can be relayed between servers.
  • Clients, the DNS (IP Anycast) or an intermediate server (which performs HTTP redirection) are in charge of selecting the most suitable server.
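The script below assumes that the second Icecast instance has been configured as a relay of the master server. A minimal sketch of the relevant `~/icecast/icecast.xml` settings (the ports 8000/9000 and the password "hackme" come from the lab scripts; everything else keeps Icecast's defaults) could look like:

```xml
<!-- Sketch: fragment of ~/icecast/icecast.xml for the relay (slave) server.
     With <master-server> set, this instance relays all of the master's
     mounts; <relay> blocks could be used instead to relay only a subset. -->
<icecast>
    <listen-socket>
        <port>9000</port>                       <!-- the relay listens here -->
    </listen-socket>
    <master-server>127.0.0.1</master-server>    <!-- the master at port 8000 -->
    <master-server-port>8000</master-server-port>
    <master-update-interval>120</master-update-interval>
    <master-password>hackme</master-password>
</icecast>
```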
In [5]:
!cat labs/lab3.sh
set -x

echo "Killing all VLC instances"
killall vlc
sleep 1

echo "Run a second Icecast2 server listening at port 9000"
killall icecast2
sleep 1
# The file ~/icecast/icecast.xml must be configured to listen at port
# 9000 and to relay all the master's channels
/usr/bin/icecast2 -b -c ~/icecast/icecast.xml
sleep 5

echo "Feed the first (8000) icecast server"
cvlc ~/Videos/LBig_Buck_Bunny_small.ogv --sout "#std{access=shout,mux=ogg,dst=source:hackme@localhost:8000/BBB.ogv}" --loop &
sleep 1
cvlc ~/Videos/Lchi84_14_m4.ogv --sout "#std{access=shout,mux=ogg,dst=source:hackme@localhost:8000/chi.ogv}" --loop &
sleep 1

echo "Feed the second (9000) icecast server"
cvlc ~/Videos/Lhcil2003_01.ogv --sout "#std{access=shout,mux=ogg,dst=source:hackme@localhost:9000/hcil.ogv}" --loop &
sleep 1

echo "Check the infrastructure"
firefox http://localhost:8000 2> /dev/null &
sleep 10
firefox http://localhost:9000  2> /dev/null
sleep 2
echo "Please, push <enter> to continue"
read

echo "Run the listeners, one for the 8000 and two for the 9000"
cvlc http://localhost:8000/BBB.ogv 2> /dev/null &
sleep 1
cvlc http://localhost:9000/chi.ogv 2> /dev/null &
sleep 1
cvlc http://localhost:9000/hcil.ogv 2> /dev/null &

set +x

P2PSP

ALM (Application-Layer Multicast) versus NLM (Network-Layer Multicast)

Figures (omitted): network-layer multicast; client/server application-layer multicast; peer-to-peer application-layer multicast.

Push-based versus Pull-based

Figures (omitted): push-based protocol; pull-based protocol.

DBS (Data Broadcasting Set of rules)

Provides connectivity among peers using unicast infrastructure.

Definitions

  1. ${\cal P}_i$: the incoming peer.
  2. $\{{\cal P}_k\} = {\cal L}_j$: the list of incorporated peers (those that arrived before ${\cal P}_i$).
  3. ${\cal R}$: the tracker.
  4. ${\cal T}_j = \{{\cal S}_j\} \cup {\cal L}_j$: the $j$-th team.
  5. ${\cal S}_j$: the splitter of team ${\cal T}_j$.

Task ${\cal S}_j$.SERVE_JOINING_PEERS

  1. While True:
    1. Wait for connection from ${\cal P}_i$
    2. if ${\cal P}_i \notin {\cal L}_j$:
      1. for all ${\cal P}_k \in {\cal L}_j$:
        1. $[{\cal P}_k] \Rightarrow {\cal P}_i$
      2. ${\cal L}_j$.append(${\cal P}_i$)
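
The splitter's join logic above can be sketched in Python, replacing the network I/O by attribute assignment (the names `Splitter`, `Peer` and `known_peers` are illustrative, not the reference implementation):

```python
# Sketch of Task S_j.SERVE_JOINING_PEERS over in-memory objects.

class Peer:
    def __init__(self, name):
        self.name = name
        self.known_peers = []   # filled in by the splitter on arrival

class Splitter:
    def __init__(self):
        self.team = []          # L_j: the list of incorporated peers

    def serve_joining_peer(self, peer):
        if peer not in self.team:
            # [P_k] => P_i : send every incorporated peer to the newcomer
            peer.known_peers = list(self.team)
            self.team.append(peer)

splitter = Splitter()
m0, p1 = Peer("M0"), Peer("P1")
splitter.serve_joining_peer(m0)
splitter.serve_joining_peer(p1)
print([p.name for p in p1.known_peers])  # → ['M0'] (M0 arrived before P1)
```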

Task ${\cal P}_i$.JOIN_TEAM

Run by incoming peers.

  1. $[{\cal S}_j] \gets {\cal R}$ # The tracker returns the splitter's address
  2. for all ${\cal P}_k \in [{\cal L}_j] \gets {\cal S}_j$: # The splitter returns the team list
    1. $[\mathtt{hello}] \rightarrow {\cal P}_k$

Task ${\cal P}_k$.ACCEPT_NEIGHBORS

Run by incorporated peers.

  1. While True:
    1. $[\mathtt{hello}] \gets {\cal P}_i$
    2. $F[{\cal P}_k] = F[{\cal P}_k] \cup {\cal P}_i$ # Forward chunks depending on their origin
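
The hello handshake of JOIN_TEAM/ACCEPT_NEIGHBORS can be sketched as follows, with direct method calls standing in for the $[\mathtt{hello}]$ messages (all names are illustrative):

```python
# Sketch of the [hello] exchange: F[P_k] (here, forward[self]) lists the
# neighbors that must receive the chunks P_k gets directly from the splitter.

class Peer:
    def __init__(self, name):
        self.name = name
        self.forward = {self: []}    # F: origin -> destination peers

    def receive_hello(self, sender):  # Task P_k.ACCEPT_NEIGHBORS
        self.forward[self].append(sender)

    def join_team(self, team):        # Task P_i.JOIN_TEAM
        for peer in team:             # [hello] -> P_k for every P_k
            peer.receive_hello(self)

m0, p1, p2 = Peer("M0"), Peer("P1"), Peer("P2")
p2.join_team([m0, p1])                    # P2 greets the whole team
print([p.name for p in m0.forward[m0]])   # → ['P2']
```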

Task ${\cal P}_k$.CONTROL

Run by peers when receiving a control message from other peers.

  1. While True:
    1. $\mathtt{message} \gets {\cal P}_x$
    2. if $\mathtt{message} == [\text{request}, \mathtt{chunk\_number}]$:
      1. $\mathtt{origin} = \mathtt{buffer}[\mathtt{chunk\_number}].\mathtt{ORIGIN}$
      2. $F[\mathtt{origin}] = F[\mathtt{origin}] \cup {\cal P}_x$
      3. $D[{\cal P}_x] = 0$
    3. else if $\mathtt{message} == [\text{prune}, \mathtt{chunk\_number}]$:
      1. $\mathtt{origin} = \mathtt{buffer}[\mathtt{chunk\_number}].\mathtt{ORIGIN}$
      2. $F[\mathtt{origin}].\text{remove}({\cal P}_x)$
    4. else if $\mathtt{message} == [\text{hello}]$:
      1. $F[{\cal P}_k].\text{append}({\cal P}_x)$
      2. $D[{\cal P}_x] = 0$
      3. $\mathtt{neighbor} = {\cal P}_x$
    5. else if $\mathtt{message} == [\text{goodbye}]$:
      1. for all $\mathtt{list} \in F$:
        1. $\mathtt{list}$.remove(${\cal P}_x$)
      2. $D$.remove(${\cal P}_x$)
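
Under the same in-memory assumptions, the CONTROL task reduces to bookkeeping over the tables $F$ (forwarding), $D$ (debts) and the chunk buffer; a sketch (the function and variable names are illustrative):

```python
# Sketch of Task P_k.CONTROL. buffer maps chunk_number -> origin;
# F maps origin -> forwarding list; D maps neighbor -> chunk debt;
# self_peer stands for P_k.

def control(self_peer, sender, message, buffer, F, D):
    kind = message[0]
    if kind == "request":             # sender wants the chunks of this origin
        origin = buffer[message[1]]
        F.setdefault(origin, []).append(sender)
        D[sender] = 0
    elif kind == "prune":             # sender already gets them elsewhere
        origin = buffer[message[1]]
        if sender in F.get(origin, []):
            F[origin].remove(sender)
    elif kind == "hello":             # sender becomes a neighbor
        F.setdefault(self_peer, []).append(sender)
        D[sender] = 0
    elif kind == "goodbye":           # sender leaves: purge it everywhere
        for destinations in F.values():
            if sender in destinations:
                destinations.remove(sender)
        D.pop(sender, None)

buffer, F, D = {7: "origin_peer"}, {}, {}
control("me", "px", ("request", 7), buffer, F, D)
print(F["origin_peer"], D["px"])   # → ['px'] 0
control("me", "px", ("goodbye",), buffer, F, D)
print(F["origin_peer"], D)         # → [] {}
```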

Task ${\cal P}_k$.RECEIVE_CHUNK_AND_FLOOD

Run by peers when receiving a chunk.

  1. While True:
    1. $[\mathtt{chunk\_number}, \mathtt{chunk}, \mathtt{origin}] \gets {\cal P}_x$
    2. if $\mathtt{buffer}[\mathtt{chunk\_number}].\mathtt{CHUNK\_NUMBER} == \mathtt{chunk\_number}$:
      1. $[\text{prune}, \mathtt{chunk\_number}] \rightarrow {\cal P}_x$ # Duplicate chunk received, prune it
    3. else:
      1. $\mathtt{buffer}[\mathtt{chunk\_number}] = (\mathtt{chunk\_number}, \mathtt{chunk}, \mathtt{origin})$
      2. if ${\cal P}_x$ != ${\cal S}_j$: # If sender != splitter
        1. $D[{\cal P}_x] = D[{\cal P}_x] - 1$ # Decrement debt
        2. $F[{\cal P}_k] = F[{\cal P}_k] \cup {\cal P}_x$ # Consider ${\cal P}_x$ as a new neighbor
      3. for all $\mathtt{peer} \in F[\mathtt{origin}]$:
        1. $P[\mathtt{peer}] = P[\mathtt{peer}] \cup \mathtt{chunk\_number}$ # Pending chunks by peer
      4. for all $\mathtt{chunk\_number} \in P[\mathtt{neighbor}]$:
        1. $\mathtt{buffer}[\mathtt{chunk\_number}] \rightarrow \mathtt{neighbor}$
        2. $P[\mathtt{neighbor}]$.remove($\mathtt{chunk\_number}$)
        3. $D[\mathtt{neighbor}] = D[\mathtt{neighbor}] + 1$
        4. if $D[\mathtt{neighbor}] > \mathtt{MAX\_CHUNK\_DEBT}$:
          1. $D$.remove($\mathtt{neighbor}$)
          2. $F$.remove($\mathtt{neighbor}$)
      5. $\mathtt{neighbor} =$ next($P[\mathtt{neighbor}]$)
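
The debt mechanism of RECEIVE_CHUNK_AND_FLOOD can be isolated into a sketch: each chunk forwarded to a neighbor raises its debt, and a neighbor whose debt exceeds $\mathtt{MAX\_CHUNK\_DEBT}$ (the value used below is an assumption; the slides do not fix it) is unlinked:

```python
# Sketch of the flooding/debt bookkeeping in Task P_k.RECEIVE_CHUNK_AND_FLOOD.
# F: origin -> forwarding list; D: neighbor -> debt; P: neighbor -> pending
# chunk numbers; "sent" collects the (neighbor, chunk) pairs we would transmit.

MAX_CHUNK_DEBT = 8   # assumption: not specified in the slides

def flood(chunk_number, origin, neighbor, F, D, P, sent):
    # The new chunk becomes pending for every destination of its origin
    for peer in F.get(origin, []):
        P.setdefault(peer, set()).add(chunk_number)
    # Flush the pending chunks of the currently selected neighbor
    for cn in sorted(P.get(neighbor, set())):
        sent.append((neighbor, cn))
        P[neighbor].discard(cn)
        D[neighbor] = D.get(neighbor, 0) + 1
        if D[neighbor] > MAX_CHUNK_DEBT:
            # An unresponsive neighbor is unlinked from every table
            D.pop(neighbor)
            for destinations in F.values():
                if neighbor in destinations:
                    destinations.remove(neighbor)
            break

F, D, P, sent = {"s": ["n1"]}, {}, {}, []
flood(0, "s", "n1", F, D, P, sent)
print(sent, D["n1"])   # → [('n1', 0)] 1
```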

Example

Iterations 0 and 1

Monitor ${\cal M}_0$ joins the team and ${\cal S}$ has sent chunks 0 and 1 to ${\cal M}_0$:

Iteration 2

${\cal P}_1$ joins the team. ${\cal P}_1$ has sent a $[\mathtt{hello}]$ to ${\cal M}_0$:

Iteration 3

${\cal P}_2$ joins the team. ${\cal P}_2$ sends a $[\mathtt{hello}]$ to ${\cal M}_0$ and ${\cal P}_1$:

Iterations 4 to 14

(figures omitted)

Lab 4: Scaling with P2PSP

  • A splitter can be connected to the Icecast tree as a listener.
In [1]:
!cat labs/lab4.sh
echo "Killing all VLC instances"
killall vlc
sleep 1

echo "Killing all user Icecast2 instances"
killall icecast2

echo "Run a second Icecast2 server listening at port 9000"
/usr/bin/icecast2 -b -c ~/icecast/icecast.xml
sleep 1

echo "Feed all icecast servers (2 movies for 8000 and 1 for 9000)"
cvlc ~/Videos/LBig_Buck_Bunny_small.ogv --sout "#std{access=shout,mux=ogg,dst=source:hackme@localhost:8000/BBB.ogv}" --loop &
sleep 1
cvlc ~/Videos/Lchi84_14_m4.ogv --sout "#std{access=shout,mux=ogg,dst=source:hackme@localhost:8000/chi.ogv}" --loop &
sleep 1
cvlc ~/Videos/Lhcil2003_01.ogv --sout "#std{access=shout,mux=ogg,dst=source:hackme@localhost:9000/hcil.ogv}" --loop &
sleep 1

#echo "Check the infrastructure"
#firefox http://localhost:8000 2> /dev/null &
#sleep 10
#firefox http://localhost:9000 2> /dev/null
#sleep 5

echo "Run a listener connected to the master Icecast server"
cvlc http://localhost:8000/BBB.ogv 2> /dev/null &
sleep 1

echo "Run a listener connected to the relay Icecast server"
cvlc http://localhost:9000/hcil.ogv 2> /dev/null &
sleep 1

echo "Create a P2PSP team"
xterm -e "~/P2PSP/p2psp-console/bin/splitter --source_addr 127.0.0.1 --source_port 8000 --splitter_port 8001 --channel BBB.ogv --header_size 30000" &
sleep 1
xterm -e "~/P2PSP/p2psp-console/bin/monitor --splitter_addr 127.0.0.1 --splitter_port 8001" &
sleep 1
cvlc http://localhost:9999 & # Monitor's player
sleep 1
xterm -e "~/P2PSP/p2psp-console/bin/peer --splitter_addr 127.0.0.1 --splitter_port 8001 --player_port 10000" &
sleep 1
cvlc http://localhost:10000 & # The first peer

  • A source client can be connected to each peer (exercise).