6.824 Lab 1 - MapReduce


  • Each map task creates nReduce intermediate files, and each reduce task reads its (single) corresponding file from the output of every map task.
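A hedged sketch of that map-side partitioning, meant to slot into the lab's mr package where the KeyValue type and ihash helper already live; the mr-X-Y naming and JSON encoding follow the lab's hints, everything else is my own choice:

```go
import (
	"encoding/json"
	"fmt"
	"os"
)

// writeIntermediate partitions one map task's key/value output into nReduce
// files named mr-<mapTaskID>-<reduceID>, one per reduce task.
// KeyValue and ihash come from the lab's worker.go skeleton.
func writeIntermediate(kva []KeyValue, mapTaskID, nReduce int) error {
	files := make([]*os.File, nReduce)
	encs := make([]*json.Encoder, nReduce)
	for r := 0; r < nReduce; r++ {
		f, err := os.Create(fmt.Sprintf("mr-%d-%d", mapTaskID, r))
		if err != nil {
			return err
		}
		files[r] = f
		encs[r] = json.NewEncoder(f)
	}
	for _, kv := range kva {
		r := ihash(kv.Key) % nReduce // the same key always lands in the same partition
		if err := encs[r].Encode(&kv); err != nil {
			return err
		}
	}
	for _, f := range files {
		if err := f.Close(); err != nil {
			return err
		}
	}
	return nil
}
```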
  • What state does the coordinator need to store?
    • Input files (this is also the number of map tasks)
    • Number of reduce tasks
    • Status of each map task
    • Status of each reduce task
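As a sketch, that state might look like the struct below; the field names and the string statuses are my own choices, not anything the lab prescribes:

```go
import (
	"sync"
	"time"
)

// Coordinator state, guarded by one coarse mutex (see the locking note below).
type Coordinator struct {
	mu sync.Mutex

	inputFiles []string // one map task per input file
	nReduce    int      // number of reduce tasks

	mapStatus     []string    // "idle", "in-progress", or "finished" per map task
	mapStarted    []time.Time // when each in-progress map task was handed out
	reduceStatus  []string    // same three states per reduce task
	reduceStarted []time.Time

	// intermediate[r] lists the intermediate file locations reported so far
	// for reduce task r.
	intermediate [][]string
}
```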
  • What coordinator⇄worker messages need to be defined?
    • Worker polls for a job, coordinator gives it a map or reduce job (or asks it to shut itself down). Args:
      • Map: single input file for that map task
      • Reduce: reduce task ID + intermediate paths (if any exist yet)
    • Worker tells the coordinator that a map task is finished. Args:
      • Locations of nReduce intermediate files
    • Coordinator tells reduce workers that new intermediate files are available. Args:
      • Location of a single intermediate file (assuming no batching here to start with)
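A sketch of how those messages could be declared in rpc.go; the names and exact fields are mine, not anything the lab fixes:

```go
// Worker -> coordinator: poll for a job.
type GetTaskArgs struct{}

type GetTaskReply struct {
	Kind         string   // "map", "reduce", or "shutdown"
	MapInput     string   // input file for a map task
	NReduce      int      // how many partitions a map task should write
	ReduceID     int      // reduce task number for a reduce task
	Intermediate []string // intermediate paths known so far for that reduce task
}

// Worker -> coordinator: a map task finished and produced these files.
type MapDoneArgs struct {
	MapTaskID    int
	Intermediate []string // nReduce file locations, indexed by reduce task
}
type MapDoneReply struct{}

// Coordinator -> reduce worker: one newly available intermediate file
// (no batching, matching the note above).
type NewIntermediateArgs struct {
	ReduceID int
	Path     string
}
type NewIntermediateReply struct{}
```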
  • Starting with fairly coarse locks; I may make these finer-grained later.
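Concretely, "coarse" here just means every RPC handler takes the single coordinator mutex for its whole body, e.g.:

```go
// GetTask is an RPC handler; the one mutex from the Coordinator sketch above
// covers the entire handler rather than individual fields.
func (c *Coordinator) GetTask(args *GetTaskArgs, reply *GetTaskReply) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	// ...pick an idle map or reduce task, mark it in-progress, fill in reply...
	return nil
}
```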
  • Go’s lack of enums is very annoying! I just used strings (in-progress, idle, finished) for now.
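The closest idiomatic substitute is a named string type (or iota constants); a typed wrapper like this could replace the raw strings in the coordinator sketch above:

```go
// TaskStatus is as close to an enum as Go gets without iota; the underlying
// values are still just the plain strings mentioned above.
type TaskStatus string

const (
	Idle       TaskStatus = "idle"
	InProgress TaskStatus = "in-progress"
	Finished   TaskStatus = "finished"
)
```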
  • The reduce task here doesn’t gain anything by pre-fetching intermediate files before all maps are done, because there’s no network I/O involved - every intermediate file is already on the same local filesystem.
  • Ran into an issue where using rand.Intn across many workers caused conflicts without an explicit seed.
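Presumably this is because, on pre-1.20 Go, math/rand's global source starts from the same default seed in every process, so parallel workers draw identical sequences; seeding per worker process fixes it:

```go
import (
	"math/rand"
	"os"
	"time"
)

func init() {
	// Without an explicit seed, every worker process starts math/rand from the
	// same default state and produces the same "random" values.
	// (On Go 1.20+ the global source is auto-seeded and this is unnecessary.)
	rand.Seed(time.Now().UnixNano() ^ int64(os.Getpid()))
}
```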
  • Sort of a cop-out, but I wrote the reduce to just combine all intermediate data and sort in-memory. Something like merge-sort is going to be far more performant here, but I didn’t particularly want to waste time writing that up in Go.
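This is essentially what the lab's mrsequential.go does: decode every intermediate file into one slice, sort by key, then walk runs of equal keys. A sketch, reusing KeyValue from the skeleton (mr-out-X is the output naming the tests expect):

```go
import (
	"encoding/json"
	"fmt"
	"os"
	"sort"
)

// runReduce loads every intermediate file for one reduce task into memory,
// sorts the pairs by key, and calls reducef once per distinct key.
func runReduce(reduceID int, paths []string, reducef func(string, []string) string) error {
	var kva []KeyValue
	for _, p := range paths {
		f, err := os.Open(p)
		if err != nil {
			return err
		}
		dec := json.NewDecoder(f)
		for {
			var kv KeyValue
			if err := dec.Decode(&kv); err != nil {
				break // io.EOF: done with this file
			}
			kva = append(kva, kv)
		}
		f.Close()
	}

	sort.Slice(kva, func(i, j int) bool { return kva[i].Key < kva[j].Key })

	out, err := os.Create(fmt.Sprintf("mr-out-%d", reduceID))
	if err != nil {
		return err
	}
	defer out.Close()

	// Group consecutive pairs that share a key and reduce each group.
	for i := 0; i < len(kva); {
		j := i
		for j < len(kva) && kva[j].Key == kva[i].Key {
			j++
		}
		values := make([]string, 0, j-i)
		for k := i; k < j; k++ {
			values = append(values, kva[k].Value)
		}
		fmt.Fprintf(out, "%v %v\n", kva[i].Key, reducef(kva[i].Key, values))
		i = j
	}
	return nil
}
```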
  • Looks like everything but the final three edge-case plugins is now passing! Actually reading the paper first made this a lot quicker (~3h) than I expected.
  • The early_exit test was failing because the Big Sur bash version doesn’t support wait -n. Installing bash 5 via homebrew fixed this.
  • Crash safety was fun to implement; there’s a lot more depth here than I chose to explore - my impl. is simply:
    • The coordinator tracks when each task was started, and moves any task that has been in-progress for longer than 5s back to idle (sketched below this list)
    • In some cases reduce jobs use up all workers waiting for all intermediate data to show up, and there’s no room for (retried) map jobs to run to actually produce that data. Deadlock! Fixed by having reduce tasks abort if they don’t receive their full intermediate dataset in 5 or fewer retries.
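A minimal sketch of that timeout reclaim, reusing the field names from the coordinator sketch above; it could run lazily at the start of each GetTask call or from a ticker goroutine (either way with the mutex held):

```go
import "time"

const taskTimeout = 5 * time.Second

// reclaimStalled moves any task that has been in-progress longer than
// taskTimeout back to idle so another worker can pick it up.
// Must be called with c.mu held.
func (c *Coordinator) reclaimStalled() {
	now := time.Now()
	for i, s := range c.mapStatus {
		if s == "in-progress" && now.Sub(c.mapStarted[i]) > taskTimeout {
			c.mapStatus[i] = "idle"
		}
	}
	for i, s := range c.reduceStatus {
		if s == "in-progress" && now.Sub(c.reduceStarted[i]) > taskTimeout {
			c.reduceStatus[i] = "idle"
		}
	}
}
```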
