Statistics: Video On Demand
We have two different sources for our Video On Demand (VOD) statistics. First, we have the Crawler, which uses a pull mechanism to gather information from Tribler peers. Second, we have a reporter, which uses a push mechanism to deliver the same information from SwarmPlayer clients. Both mechanisms are described briefly below. Finally, we give an example of the statistics that we gather.
The Crawler uses the OverlaySwarm to find Tribler peers. When a peer is found, it is asked for the VOD statistics it has gathered so far. After a successful request, either the Crawler or the peer will reconnect to its counterpart periodically so that new statistics can be delivered.
The Crawler structure allows multiple Crawler processes to crawl for the same information. To avoid duplicate requests being sent, each Tribler peer records the last time a Crawler request was received. Based on a desired frequency, given in each Crawler request, a peer can decide to delay the transmission of statistics.
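The peer-side throttling described above can be sketched as follows. This is a minimal illustration, not Tribler's actual implementation; the class and method names are assumptions.

```python
import time

# Hypothetical sketch of the peer-side throttle: a peer remembers when it last
# replied to any Crawler and skips requests that arrive too soon, based on the
# desired frequency carried in each Crawler request.
class CrawlerRequestThrottle:
    def __init__(self):
        self.last_reply_time = 0.0  # when we last sent statistics to a crawler

    def should_reply(self, desired_frequency, now=None):
        """Return True if at least desired_frequency seconds have passed
        since the last reply; otherwise delay (skip) this request."""
        now = time.time() if now is None else now
        if now - self.last_reply_time >= desired_frequency:
            self.last_reply_time = now
            return True
        return False  # another Crawler asked recently; delay this reply

throttle = CrawlerRequestThrottle()
print(throttle.should_reply(60, now=100.0))  # True: first request
print(throttle.should_reply(60, now=130.0))  # False: only 30 s elapsed
print(throttle.should_reply(60, now=170.0))  # True: 70 s since last reply
```

With this scheme, several Crawler processes can probe the same peer without the peer ever sending statistics more often than the requested frequency.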
The SwarmPlayer continuously gathers statistics and reports them periodically (currently every 60 seconds) to one of the available servers (reporter1.tribler.org through reporter9.tribler.org). Currently all reporterX.tribler.org URLs point to the same machine; however, we can spread the load when needed by changing the DNS record.
After each report is received, the client is given a new reporting frequency. If required, this allows us to reduce the load on the servers even further. In the worst case we can even disable reporting entirely.
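The adaptive reporting loop can be sketched as below. The function names and the idea that the server's reply is simply the next interval in seconds (with 0 meaning "stop reporting") are assumptions for illustration, not the actual SwarmPlayer wire format.

```python
import time

# Hypothetical sketch of the push reporter: send events, then sleep for
# whatever interval the server hands back; an interval of 0 disables reporting.
def report_loop(gather_events, send_report, sleep=time.sleep):
    interval = 60  # initial reporting frequency mentioned in the text
    while interval > 0:
        interval = send_report(gather_events())  # server assigns next interval
        if interval > 0:
            sleep(interval)

# Example: a fake server that slows the client down, then disables reporting.
replies = iter([120, 300, 0])
sent = []
def fake_send(events):
    sent.append(events)
    return next(replies)

report_loop(lambda: ["event"], fake_send, sleep=lambda s: None)
print(len(sent))  # 3 reports were sent before the server returned 0
```

Injecting the send and sleep functions keeps the control flow testable; the real client would perform an HTTP request to one of the reporterX.tribler.org servers instead.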
The following example shows the static information that we record for
each attempted playback:
- num_pieces: 1088 // number of pieces
- nat: Port Restricted Cone Firewall // detected nat / firewall
- piece_size: 262144 // size of one piece, in bytes
- bitrate: 131072 // video bitrate, from ffmpeg
- timestamp: 1242289138.45 // server side timestamp of
// (latest) report
To measure the user playback experience we gather the following events. Note that each event is accompanied by the timestamp at which it occurred:
- started download
- started playback
- paused playback
- resumed playback
- downloaded a high-priority piece
- downloaded a non high-priority piece
Because each event contains a timestamp, we can reconstruct the playback progress, download speed, and possible reasons for failures (an unavailable piece versus insufficient available bandwidth).
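Two such reconstructions, prebuffering time and total stall time, can be sketched from the timestamped event log. The event names mirror the list above; the tuple format and timestamps are illustrative assumptions.

```python
# Hypothetical event log: (timestamp in seconds, event name) pairs.
events = [
    (0.0,   "started download"),
    (58.4,  "started playback"),
    (472.1, "paused playback"),
    (476.8, "resumed playback"),
]

def prebuffer_time(events):
    """Seconds between starting the download and starting playback."""
    starts = dict((name, t) for t, name in events if name.startswith("started"))
    return starts["started playback"] - starts["started download"]

def total_stall_time(events):
    """Sum of all pause/resume gaps after playback began."""
    stalled, paused_at = 0.0, None
    for t, name in events:
        if name == "paused playback":
            paused_at = t
        elif name == "resumed playback" and paused_at is not None:
            stalled += t - paused_at
            paused_at = None
    return stalled

print(prebuffer_time(events))             # 58.4
print(round(total_stall_time(events), 1)) # 4.7
```

Download speed follows the same pattern: count the piece-download events in a time window and multiply by the piece size.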
The example below is generated from 1101 events and is divided into lines, where each line contains the events in one 60-second bin. We can change the bin size to anything we want, since we collect timestamps with millisecond accuracy.
Line 1 shows a prebuffering (s) and playback (P) state in which 65 high-priority pieces were downloaded at a rate of 1.1 pieces per second, or in terms of bandwidth, 277.3 kilobytes per second. It also shows the same information for the non-high-priority pieces.
Lines 2 through 7 show only the playback state, indicating that playback continued without stalling.
Lines 8 and 10 both show a pause during playback. By examining the events in more detail (which we can, because we have the individual events) we can make educated guesses as to the reasons for each stall, although it will mostly come down to a lack of available bandwidth.
1. sP high.65 1.1/s 277.3kb/s non-high.26 0.4/s 110.9kb/s
2. P high.9 0.1/s 38.4kb/s non-high.83 1.4/s 354.1kb/s
3. P high.90 1.5/s 384.0kb/s non-high.4 0.1/s 17.1kb/s
4. P high.89 1.5/s 379.7kb/s non-high.3 0.1/s 12.8kb/s
5. P high.86 1.4/s 366.9kb/s non-high.4 0.1/s 17.1kb/s
6. P high.91 1.5/s 388.3kb/s non-high.0 0.0/s 0.0kb/s
7. P high.92 1.5/s 392.5kb/s non-high.0 0.0/s 0.0kb/s
8. sP high.88 1.5/s 375.5kb/s non-high.4 0.1/s 17.1kb/s
9. P high.92 1.5/s 392.5kb/s non-high.0 0.0/s 0.0kb/s
10. sP high.91 1.5/s 388.3kb/s non-high.0 0.0/s 0.0kb/s
11. P high.92 1.5/s 392.5kb/s non-high.0 0.0/s 0.0kb/s
12. Ps high.54 0.9/s 230.4kb/s non-high.25 0.4/s 106.7kb/s
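The per-bin rates above follow directly from the piece counts. A minimal sketch, using the 262144-byte piece size from the static information and a 60-second bin:

```python
PIECE_SIZE = 262144  # bytes, from the static information above
BIN_SIZE = 60.0      # bin width in seconds

def format_counts(high, non_high, bin_size=BIN_SIZE):
    """Format one bin's piece counts as a piece rate and a bandwidth."""
    def part(label, n):
        kb_per_s = n * PIECE_SIZE / 1024.0 / bin_size
        return "%s.%d %.1f/s %.1fkb/s" % (label, n, n / bin_size, kb_per_s)
    return part("high", high) + " " + part("non-high", non_high)

# Line 1 of the table: 65 high-priority and 26 non-high-priority pieces.
print(format_counts(65, 26))
# high.65 1.1/s 277.3kb/s non-high.26 0.4/s 110.9kb/s
```

For example, 65 pieces of 262144 bytes in 60 seconds is 65 * 256 / 60 = 277.3 kilobytes per second, matching line 1 above.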