NAME
    Data::Log::Shared - Append-only shared-memory log (WAL) for Linux

SYNOPSIS
        use Data::Log::Shared;
        use feature 'say';

        my $log = Data::Log::Shared->new(undef, 1_000_000);
        my $off = $log->append("first entry");
        $log->append("second entry");

        # replay from beginning
        my $pos = 0;
        while (my ($data, $next) = $log->read_entry($pos)) {
            say "offset=$pos: $data";
            $pos = $next;
        }

        # iterate
        $log->each_entry(sub { say $_[0] });

        # tail: block until new entries
        my $count = $log->entry_count;
        $log->wait_for($count, 5.0);

        # file-backed / memfd
        $log = Data::Log::Shared->new('/tmp/log.shm', 1_000_000);
        $log = Data::Log::Shared->new_memfd("my_log", 1_000_000);
        $log = Data::Log::Shared->new_from_fd($fd);

DESCRIPTION
    Append-only log in shared memory. Multiple writers append
    variable-length entries via CAS on a tail offset. Readers replay from
    any position. Entries persist until explicit "reset".

    Unlike Data::Queue::Shared (where entries are consumed on read) and
    Data::PubSub::Shared (where the ring overwrites old entries), the log
    retains every entry until it is explicitly truncated. Useful for audit
    trails, event sourcing, and debug logging.

    Linux-only. Requires 64-bit Perl.
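    The internal entry layout is not part of the public API, but the replay
    model can be illustrated with a self-contained sketch. The sketch below
    assumes (hypothetically) a 4-byte little-endian length prefix per entry,
    with a prefix of 0 meaning "not yet committed", which is consistent with
    the len=0 marker mentioned under "Append":

```perl
# Illustrative sketch only: Data::Log::Shared's real layout is internal.
# Assumes each entry is a 4-byte little-endian length prefix followed by
# the payload; a length of 0 means "uncommitted", so replay stops there.
use strict;
use warnings;

sub sim_append {
    my ($buf_ref, $data) = @_;
    die "empty entries rejected (len=0 marks uncommitted)\n"
        unless length $data;
    my $off = length $$buf_ref;
    # In the real module, writers reserve space via CAS on the tail and
    # commit by writing the length last; here we just append in one step.
    $$buf_ref .= pack('V', length $data) . $data;
    return $off;
}

sub sim_read_entry {
    my ($buf, $off) = @_;
    return () if $off + 4 > length $buf;            # end of log
    my $len = unpack 'V', substr($buf, $off, 4);
    return () if $len == 0;                         # uncommitted
    my $data = substr($buf, $off + 4, $len);
    return ($data, $off + 4 + $len);                # entry, next offset
}

my $buf = '';
sim_append(\$buf, 'first entry');
sim_append(\$buf, 'second entry');

my ($pos, @seen) = (0);
while (my ($data, $next) = sim_read_entry($buf, $pos)) {
    push @seen, $data;
    $pos = $next;
}
# @seen now holds both entries in append order
```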

METHODS
  Append
        my $off = $log->append($data);  # returns offset, or undef if full

    $data must be non-empty (empty strings are rejected since len=0 is the
    internal uncommitted marker).
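    Because "append" returns undef when the log is full, a writer should
    check the result. A usage sketch, using only the methods documented
    here (note that "reset" is not concurrency-safe, so this assumes a
    single writer or external coordination at rotation time):

```perl
# Usage sketch: handle a full log by persisting, then truncating.
my $off = $log->append($record);
unless (defined $off) {
    $log->sync;     # flush current contents to the backing file
    $log->reset;    # truncate -- single writer only!
    $off = $log->append($record);
}
```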

  Read
        my ($data, $next_off) = $log->read_entry($offset);
        # returns () if no entry at offset (end of log or uncommitted)

        my $final_pos = $log->each_entry(sub {
            my ($data, $offset) = @_;
        });
        my $final_pos = $log->each_entry(\&cb, $start_offset);
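    Since "each_entry" returns the offset just past the last entry it
    visited, a reader can persist that value and resume later without
    re-reading. A usage sketch ("process" is a hypothetical handler, and
    the checkpoint store is up to the application):

```perl
# Usage sketch: incremental replay across runs.
my $checkpoint = load_checkpoint() // 0;    # hypothetical persistence
$checkpoint = $log->each_entry(sub {
    my ($data, $offset) = @_;
    process($data);                         # hypothetical handler
}, $checkpoint);
save_checkpoint($checkpoint);               # resume here next run
```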

  Status
        my $off  = $log->tail_offset;   # byte offset past last entry
        my $n    = $log->entry_count;   # number of committed entries
        my $sz   = $log->data_size;     # total data region size
        my $free = $log->available;     # remaining bytes

  Waiting
        my $ok = $log->wait_for($expected_count);           # block until count changes
        my $ok = $log->wait_for($expected_count, $timeout);  # with timeout
        my $ok = $log->wait_for($expected_count, 0);         # non-blocking poll

    Returns 1 if new entries arrived (count != expected), 0 on timeout.
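    Combining "entry_count", "wait_for", and "read_entry" gives the usual
    tailing loop. A usage sketch ("process" is a hypothetical handler):

```perl
# Usage sketch: follow the log, blocking up to 1s per iteration.
my $pos   = $log->tail_offset;      # start at the current end
my $count = $log->entry_count;
while (1) {
    if ($log->wait_for($count, 1.0)) {      # 1 => count changed
        while (my ($data, $next) = $log->read_entry($pos)) {
            process($data);                 # hypothetical handler
            $pos = $next;
        }
        $count = $log->entry_count;
    }
    # 0 => timeout; loop again, or do housekeeping here
}
```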

  Lifecycle
        $log->reset;    # clear all entries (NOT concurrency-safe)
        $log->sync;     # msync to disk
        $log->unlink;   # remove backing file

  Common
        my $p  = $log->path;
        my $fd = $log->memfd;
        my $s  = $log->stats;

  eventfd
        my $fd = $log->eventfd;
        $log->eventfd_set($fd);
        my $fd = $log->fileno;
        $log->notify;
        my $n  = $log->eventfd_consume;
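    The eventfd accessors let a tailing reader sit in select/poll alongside
    other descriptors instead of blocking in "wait_for". A usage sketch
    with core IO::Select, assuming an eventfd has been attached (via
    "eventfd"/"eventfd_set") and that writers call "notify":

```perl
# Usage sketch: multiplex log notifications with other descriptors.
use IO::Select;

my $fd = $log->fileno;                      # attached eventfd
open my $fh, '<&=', $fd or die "dup eventfd: $!";
my $sel = IO::Select->new($fh);

my $pos = 0;
while (my @ready = $sel->can_read(5.0)) {
    $log->eventfd_consume;                  # drain the counter
    while (my ($data, $next) = $log->read_entry($pos)) {
        process($data);                     # hypothetical handler
        $pos = $next;
    }
}
```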

STATS
    stats() returns: "data_size", "tail", "count", "available", "waiters",
    "appends", "waits", "timeouts", "mmap_size".

SECURITY
    The mmap region is writable by all processes that open it. Do not share
    backing files with untrusted processes.

BENCHMARKS
    Single-process (1M ops, x86_64 Linux, Perl 5.40):

        append (12B entries)     8.9M/s
        append (200B entries)    8.0M/s
        read_entry sequential   4.1M/s

    Multi-process (8 workers, 200K appends each):

        concurrent append       6.2M/s aggregate

SEE ALSO
    Data::Queue::Shared - FIFO queue (consumed on read)

    Data::PubSub::Shared - publish-subscribe ring (overwrites)

    Data::Stack::Shared - LIFO stack

    Data::Deque::Shared - double-ended queue

    Data::Pool::Shared - fixed-size object pool

    Data::Buffer::Shared - typed shared array

    Data::Sync::Shared - synchronization primitives

    Data::HashMap::Shared - concurrent hash table

AUTHOR
    vividsnow

LICENSE
    Same terms as Perl itself.

