I need to scan very large JSONL files efficiently and am considering a parallel grep-style approach over line-delimited text.

Would love to hear how you would design it.

  • Jayjader@jlai.lu
    1. chunk_size := file_size / cpu_cores. Compile the regex once.

    2. spawn cpu_cores workers:
      2.a. worker #n starts at n * chunk_size bytes. If n > 0, skip bytes until a newline is encountered, so the worker begins on a line boundary (the skipped partial line belongs to the previous worker).
      2.b. worker feeds bytes from its chunk into the regex. When a match is found, write it to the output (stdout or a file, whichever performs better). When a newline is encountered, reset the regex state machine so matches never span lines.
      2.c. after reading chunk_size bytes, continue until the next newline, so that a line straddling the chunk boundary is still fully processed and the whole file is covered by the parallel search. (A rough Go sketch of these steps follows this list.)
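
    A minimal sketch of steps 1–2 in Go, under assumptions of my own: plain LF-delimited JSONL, a pattern Go's regexp understands, and one output file per worker named <path>.matches.<n> (a naming scheme I'm making up for illustration). It's a starting point, not a finished tool.

    package main

    import (
        "bufio"
        "fmt"
        "io"
        "os"
        "regexp"
        "runtime"
        "sync"
    )

    // scanChunk handles one worker's byte range [start, start+size),
    // following steps 2.a–2.c above.
    func scanChunk(path string, start, size int64, re *regexp.Regexp, out io.Writer) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        if _, err := f.Seek(start, io.SeekStart); err != nil {
            return err
        }

        r := bufio.NewReaderSize(f, 1<<20)
        w := bufio.NewWriter(out)
        defer w.Flush()

        offset := start
        end := start + size

        // Step 2.a: every worker except the first skips ahead to the next
        // newline; the skipped partial line belongs to the previous worker.
        if start > 0 {
            skipped, err := r.ReadString('\n')
            offset += int64(len(skipped))
            if err != nil {
                return nil // only a line fragment (or nothing) left in this chunk
            }
        }

        for {
            // Step 2.c: keep reading as long as the last newline we consumed
            // fell inside the chunk, so a line straddling the boundary is
            // still processed by exactly one worker.
            if offset > end {
                return nil
            }
            line, err := r.ReadString('\n')
            offset += int64(len(line))
            // Step 2.b: matching line by line is equivalent to resetting the
            // regex state machine at every newline.
            if len(line) > 0 && re.MatchString(line) {
                w.WriteString(line)
            }
            if err == io.EOF {
                return nil
            }
            if err != nil {
                return err
            }
        }
    }

    func main() {
        path, pattern := os.Args[1], os.Args[2]
        re := regexp.MustCompile(pattern) // step 1: compile once; safe to share across goroutines

        info, err := os.Stat(path)
        if err != nil {
            panic(err)
        }
        workers := int64(runtime.NumCPU())
        chunk := info.Size()/workers + 1 // step 1: chunk_size := file_size / cpu_cores

        var wg sync.WaitGroup
        for n := int64(0); n < workers; n++ {
            wg.Add(1)
            go func(n int64) {
                defer wg.Done()
                // One output file per worker; they get combined afterwards
                // (see the end of this comment).
                out, err := os.Create(fmt.Sprintf("%s.matches.%d", path, n))
                if err != nil {
                    panic(err)
                }
                defer out.Close()
                if err := scanChunk(path, n*chunk, chunk, re, out); err != nil {
                    panic(err)
                }
            }(n)
        }
        wg.Wait()
    }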

    Optionally, keep track of the byte offset of each line and attach it to the matches you output, to make it easier to later de-duplicate results and/or navigate back to a given match in the file.
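
    In the sketch above, that would mean remembering where each line starts and prefixing matches with it, similar to grep -b output (lineStart is a name I'm introducing here); the middle of the read loop would become:

            lineStart := offset // byte offset of this line's first byte
            line, err := r.ReadString('\n')
            offset += int64(len(line))
            if len(line) > 0 && re.MatchString(line) {
                // Prefix each match with its byte offset, grep -b style.
                fmt.Fprintf(w, "%d:%s", lineStart, line)
            }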

    To avoid interleaved or contended writes, have each worker output to a separate file, and only combine these output files once all the workers have finished.
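
    Sticking with the per-worker files (and made-up naming scheme) from the sketch above, combining them is an in-order concatenation once every worker is done; appended to that sketch, it could look like:

    // combine concatenates the per-worker files in worker order, which also
    // preserves the original line order of the input, then cleans them up.
    func combine(path string, workers int64, dst io.Writer) error {
        for n := int64(0); n < workers; n++ {
            name := fmt.Sprintf("%s.matches.%d", path, n)
            part, err := os.Open(name)
            if err != nil {
                return err
            }
            _, err = io.Copy(dst, part)
            part.Close()
            if err != nil {
                return err
            }
            os.Remove(name) // drop the temporary per-worker file
        }
        return nil
    }

    Calling combine(path, workers, os.Stdout) right after wg.Wait() would then stream the merged matches out in file order.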

    As others have said, it’s going to be hard to get more speedup than this, and you will ultimately be limited by your storage’s read speed and throughput if the whole file cannot fit into memory.