Figures & Experiments

2012 experiments with effort

See the attachments. These runs start 1000 Dispersy instances and measure their CPU usage.
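A sketch of how such a CPU measurement could be scripted; the process-name filter ("dispersy") and the use of psutil are assumptions about the setup, not the actual measurement code:

    import time
    import psutil  # third-party: pip install psutil

    def total_dispersy_cpu(name_fragment="dispersy", interval=1.0):
        # Collect all processes whose name or command line mentions the fragment.
        procs = [p for p in psutil.process_iter(["name", "cmdline"])
                 if name_fragment in (p.info["name"] or "").lower()
                 or any(name_fragment in arg for arg in (p.info["cmdline"] or []))]
        for p in procs:
            p.cpu_percent(None)  # prime the counter; the first call returns 0.0
        time.sleep(interval)
        alive = [p for p in procs if p.is_running()]
        # Return the summed CPU percentage and the number of live instances.
        return sum(p.cpu_percent(None) for p in alive), len(alive)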

Figure 2: Records Dissemination Speed

Status: ALMOST DONE

  • Requires: rerunning the experiments without randomization
  • Scenarios: DONE
  • Experiment-output post-processing scripts: DONE
  • Plot scripts: DONE
  • Final figure: NOT DONE

Description

  • DAS4 setup: (25 nodes, 14 minutes = 840 sec)
  • 500 peers (20 peers per node)
  • sync every 20 sec, round is 20 sec
    • number of rounds: 42
    • number of syncs: 42
  • sync response size: 5K (25 records at 199 bytes each)
  • push records on creation: variable
  • single source, generating records in only the 1st round
  • make sure we use FullSync (because with LastSync records must be unique)
  • no randomness for syncs & round time (IMPLEMENTED)
  • DO NOT IMPLEMENT: no randomness for user selection, i.e. picking (my_id + 1 + round_id) % 500 instead of a random peer; not yet implemented, and we will keep it that way (see the sketch after this list)
  • variable parameters:
    • 25 records, no-push, 5K - /var/scratch/mbardac/barter-experiment/fig-2/500p-25r_one_time
    • 25 records, push enabled, 5K - /var/scratch/mbardac/barter-experiment/fig-2/500p-25r_one_time
    • 25 records, no-push, 1K - /var/scratch/mbardac/barter-experiment/fig-2/500p-25r_one_time
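For reference, the deterministic user selection that was considered (and dropped, see above) would amount to a one-liner; a sketch using the 500-peer count of this setup:

    def deterministic_sync_target(my_id, round_id, peer_count=500):
        # Walk the ID ring: in round 0 peer i would sync with peer i + 1,
        # in round 1 with peer i + 2, and so on (mod 500), instead of
        # picking a random peer.
        return (my_id + 1 + round_id) % peer_count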

Plot

  • 3 lines, one for each experiment
  • horizontal axis: Time in seconds (starts when at least one peer has all the records, presumably the peer that created them)
  • vertical axis: number of peers having all records at each point in time
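A sketch of the post-processing behind this plot; the input layout ({peer_id: [(time, records_seen), ...]}, sorted by time) is an assumed intermediate format, not the raw experiment output:

    def dissemination_curve(peer_logs, total_records):
        # For each peer, find the moment it first holds all records.
        completion_times = []
        for samples in peer_logs.values():
            for t, count in samples:
                if count >= total_records:
                    completion_times.append(t)
                    break
        completion_times.sort()
        t0 = completion_times[0]  # the source peer completes first
        # x: seconds since the first completion, y: peers that are complete.
        return [(t - t0, i + 1) for i, t in enumerate(completion_times)]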

Figure 3: (title TBD)

Status: NOT DONE

Requires: generating the experiment scenarios

Description

  • DAS4 setup: (25 nodes, 14 minutes = 840 sec)
  • 500 peers (20 peers per node)
  • sync every 20 sec, round is 20 sec
    • number of rounds: 42
    • number of syncs: 42
  • sync response size: 5K (25 records at 199 bytes each) {JOHAN: changed to be equal to prior experiment}
  • push records on creation: yes
  • single source, generating the same number of records every round
  • use FullSync; with LastSync we could not create that many unique records
  • use randomness, to distribute load evenly
  • variable parameters:
    • 15 records injected every round
    • 25 records injected every round
    • 35 records injected every round
  • expected results:
    • peers will not be able to keep up, as records are continuously generated
    • all commentary on this figure should compare the system's performance against the theoretical maximum (see the check below)
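The theoretical maximum follows from the sync parameters: a 5K response holds 25 records and each peer completes one sync per 20-second round, so (ignoring the one-time push on creation) a peer can catch up by at most 25 records per round. A quick check of the three injection rates:

    RECORD_SIZE = 199                 # bytes per record, as in Figure 2
    SYNC_RESPONSE = 5 * 1024          # bytes per sync response
    RECORDS_PER_SYNC = SYNC_RESPONSE // RECORD_SIZE  # = 25

    for injected in (15, 25, 35):     # records injected per round
        deficit = injected - RECORDS_PER_SYNC
        print("%d/round: %s" % (injected, "can keep up" if deficit <= 0
              else "falls behind by %d records/round" % deficit))

This matches the choice of rates: one below the per-sync capacity, one equal to it, one above it.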

Plot

  • 3 graphs (a, b, c), one for each variable parameter
  • the plot spans the entire page width
  • horizontal axis: time in seconds, starting from the beginning of the experiment
  • vertical axis: total number of records of each peer + a line showing the total number of records in the system
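A minimal matplotlib sketch of this layout; load_run is a hypothetical loader (one call per injection rate), and the data shapes are assumptions:

    import matplotlib.pyplot as plt

    RATES = (15, 25, 35)  # records injected per round: panels (a), (b), (c)

    fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)  # page-wide
    for panel, ax, rate in zip("abc", axes, RATES):
        # per_peer: {peer_id: [(t, count), ...]}, total: [(t, count), ...]
        per_peer, total = load_run(rate)
        for samples in per_peer.values():
            ax.plot([t for t, _ in samples], [c for _, c in samples],
                    color="gray", linewidth=0.3)
        ax.plot([t for t, _ in total], [c for _, c in total],
                color="black", label="total in system")
        ax.set_title("(%s) %d records/round" % (panel, rate))
        ax.set_xlabel("time (s)")
    axes[0].set_ylabel("records held")
    axes[0].legend()
    fig.tight_layout()
    fig.savefig("figure3.pdf")  # assumed output name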

Mircea: possible weak point. Why have we chosen these numbers? They need to be somehow correlated to realistic record-generation rates.

JOHAN: the numbers relate to MAX-SYNC (one equal, one lower, one bigger). This is just a synthetic load to see what an overload does to our system.

Figure 4: (title TBD)

Status

  • Scenarios: DONE
  • Experiment-output post-processing scripts: DONE
  • Plot scripts: DONE
  • Final figure: NOT DONE

Description

  • DAS4 setup: (25 nodes, 14 minutes = 840 sec)
  • 500 peers (20 peers per node)
  • sync every 20 sec, round is 20 sec
    • number of rounds: 42
    • number of syncs: 42
  • sync response size: 5K (25 records at 199 bytes each) {JOHAN: changed to be equal to prior experiment}
  • push records on creation: yes
  • every round, 25 random peers generate a record, i.e. 25 new records appear in the system per round (see the scenario sketch after this list)
  • use FullSync
  • records are inserted only during the first 500 seconds of the experiment (25 rounds)
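A sketch of how this scenario could be generated; the "create-record" line format is invented for illustration:

    import random

    PEERS, INSERT_ROUNDS, PER_ROUND, ROUND_LEN = 500, 25, 25, 20

    random.seed(1)  # fixed seed so the scenario is reproducible
    for r in range(INSERT_ROUNDS):
        # 25 distinct random peers each create one record this round.
        for peer_id in random.sample(range(PEERS), PER_ROUND):
            print("%4d create-record peer-%03d" % (r * ROUND_LEN, peer_id))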

Plot

  • horizontal axis: time in seconds, starting from the beginning of the experiment
  • vertical axis: total number of records of each peer + a line showing the total number of records in the system

Figure 5: Sync Cost

Status

  • Scenarios: DISCUSS
  • Experiment-output post-processing scripts: DONE
  • Plot scripts: DONE
  • Final figure: NOT DONE

Description

  • DAS4 setup: (25 nodes, 14 minutes = 840 sec)
  • peer count: 0/100/200/300/400/500
  • sync every 10/20/30 sec, round is 20 sec
    • number of rounds: 42
    • number of syncs: 84/42/28, depending on the sync interval
  • sync response size: 32K
  • push records on creation: yes

TO DISCUSS:

  • synthetic data: /var/scratch/mbardac/barter-experiment/fig-5_6_synth
  • every round, 25 random peers generate a record (25 new records appear in the system in one round)
  • records are inserted only during the first 500 seconds of the experiment (25 rounds)

OR:

  • real Filelist scenario: /var/scratch/mbardac/barter-experiment/fig-5_6_real
  • variable parameters:
    • peer count
    • sync interval

Plot

  • 3 lines, one for each sync interval
  • horizontal axis: swarm size (0, 100, 200, 300, 400, 500)
  • vertical axis: Average Bandwidth usage (Kbps)
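A sketch of the vertical-axis computation, assuming the post-processing yields total bytes (sent plus received) per peer for the 840-second run:

    def average_bandwidth_kbps(bytes_per_peer, duration=840):
        # bytes_per_peer: {peer_id: total bytes sent + received}
        mean_bytes = sum(bytes_per_peer.values()) / float(len(bytes_per_peer))
        return mean_bytes * 8 / 1000.0 / duration  # bytes -> kbit/s

One such average per (swarm size, sync interval) pair yields the three lines.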

Figure 6: Coverage

Status

  • Scenarios: DISCUSS
  • Experiment-output post-processing scripts: DONE
  • Plot scripts: DONE
  • Final figure: NOT DONE

Description

  • DAS4 setup: (25 nodes, 14 minutes = 840 sec)
  • peer count: 0/100/200/300/400/500
  • sync every 10/20/30 sec, round is 20 sec
    • number of rounds: 42
    • number of syncs: 84/42/28, depending on the sync interval
  • sync response size: 32K
  • push records on creation: yes

TO DISCUSS:

  • synthetic data: /var/scratch/mbardac/barter-experiment/fig-5_6_synth
  • every round, 25 random peers generate a record (25 new records appear in the system in one round)
  • records are inserted only during the first 500 seconds of the experiment (25 rounds)

OR:

  • real Filelist scenario: /var/scratch/mbardac/barter-experiment/fig-5_6_real
  • variable parameters:
    • peer count
    • sync interval

Plot

  • 3 lines, one for each sync interval
  • horizontal axis: swarm size (0, 100, 200, 300, 400, 500)
  • vertical axis: Coverage in % (share of peers holding 100% of all records after the 500-second insertion window)
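The coverage metric reduces to a simple ratio; a sketch assuming per-peer record counts at the measurement point are available:

    def coverage_percent(records_at_end, total_records):
        # records_at_end: {peer_id: records held after the insertion window}
        complete = sum(1 for c in records_at_end.values() if c >= total_records)
        return 100.0 * complete / len(records_at_end)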

Attachments