Introduction
Welcome to the Scroll Rollup Node documentation.
What is Scroll?
Scroll is a zkRollup on Ethereum that enables scaling while maintaining security and decentralization through zero-knowledge proofs. By moving computation and state storage off-chain while posting transaction data and validity proofs to Ethereum L1, Scroll achieves higher throughput and lower transaction costs while inheriting Ethereum’s security guarantees.
What is the Rollup Node?
The rollup node is responsible for following the Scroll L2 chain using P2P data from the Scroll network, and consolidating this information with data posted to Ethereum L1. At its core, the rollup node implements a derivation function: given the L1 chain state, it deterministically produces the corresponding L2 chain. This allows it to follow the correct L2 chain in case malicious blocks are propagated on the P2P network.
Derivation Function
The derivation process works by:
- Monitoring L1: Watching for batch commitments, finalization events, and cross-chain messages posted to Ethereum
- Decoding Batches: Extracting and decoding batch data (including blob data) to reconstruct transaction lists
- Building Payloads: Constructing execution payloads with the appropriate transactions and L1 messages
- Executing Blocks: Applying payloads through the execution engine to advance the L2 state
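The ordering of these stages can be sketched as a simple pipeline. This is an illustrative sketch only; every function below is a stub standing in for the real component, not an actual rollup-node API.

```shell
# Illustrative sketch of the derivation loop; each function is a stub,
# not a real rollup-node interface.
watch_l1()        { echo "batch commitment at L1 block $1"; }
decode_batch()    { echo "decoded transaction list"; }
build_payload()   { echo "execution payload with txs and L1 messages"; }
execute_payload() { echo "L2 state advanced"; }

derive_l2_block() {
  watch_l1 "$1"       # 1. monitor L1 for batch commitments
  decode_batch        # 2. extract and decode batch (blob) data
  build_payload       # 3. construct the execution payload
  execute_payload     # 4. apply it through the execution engine
}

derive_l2_block 19000000
```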
Architecture
Built on the Reth framework, the rollup node employs an event-driven architecture where specialized components communicate through async channels:
- L1 Watcher: Tracks L1 events and maintains awareness of chain reorganizations
- Derivation Pipeline: Transforms batch data from L1 into executable L2 payloads
- Engine Driver: Interfaces with the execution engine via the Engine API
- Chain Orchestrator: Coordinates the overall flow from L1 events to L2 blocks
- Network Layer: Participates in the Scroll and Ethereum P2P networks
Node Modes
The rollup node can operate in different configurations:
- Follower Node: Follows the L2 chain via P2P propagated blocks, consolidated by processing batch data posted to L1
- Sequencer Node: Actively sequences new transactions into blocks and posts batches to L1
About This Documentation
This documentation provides comprehensive guides for operating and understanding the Scroll rollup node, including setup instructions, configuration options, architecture details, and troubleshooting guidance.
Running a Scroll Rollup Node
This guide covers how to run a Scroll rollup node as a follower node.
Hardware Requirements
The following are the recommended hardware specifications for running a Scroll rollup node:
Recommended Requirements
- CPU: 2 cores @ 3 GHz
- Memory: 16 GB RAM
These specifications are based on production deployments and should provide sufficient resources for stable operation as a follower node.
Building the Node
Prerequisites
- Rust toolchain (stable)
- Cargo package manager
Compilation
To build the rollup node binary:
cargo build --release --bin rollup-node
This will create an optimized production binary at target/release/rollup-node.
For development builds (faster compilation, slower runtime):
cargo build --bin rollup-node
Running the Node
Basic Command
To run the node as a follower:
./target/release/rollup-node node \
--chain <CHAIN_NAME> \
--l1.url <L1_RPC_URL> \
--blob.s3_url <BLOB_SOURCE_URL> \
--http \
--http.addr 0.0.0.0 \
--http.port 8545
Replace:
- <CHAIN_NAME>: The chain to follow (e.g., scroll-mainnet, scroll-sepolia, or dev)
- <L1_RPC_URL>: HTTP(S) URL of an Ethereum L1 RPC endpoint
- <BLOB_SOURCE_URL>: Blob data source URL (use Scroll's S3 URLs or your own beacon node)
Essential Configuration Flags
Chain Configuration
- --chain <CHAIN>: Specify the chain to sync (scroll-mainnet, scroll-sepolia, or dev)
- -d, --datadir <PATH>: Directory for node data storage (default: platform-specific)
L1 Provider Configuration
- --l1.url <URL>: L1 Ethereum RPC endpoint URL (required for follower nodes)
- --l1.cups <NUMBER>: Compute units per second for rate limiting (default: 10000)
- --l1.max-retries <NUMBER>: Maximum retry attempts for L1 requests (default: 10)
- --l1.initial-backoff <MS>: Initial backoff duration for retries in milliseconds (default: 100)
- --l1.query-range <BLOCKS>: Block range for querying L1 logs (default: 500)
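To see how the retry flags interact, the sketch below walks through a worst-case retry sequence. It assumes the backoff doubles on each attempt; the doubling is an illustration, only the initial value and retry count come from the flags.

```shell
# Worst-case retry timing, assuming exponential (doubling) backoff.
initial_backoff_ms=100   # --l1.initial-backoff
max_retries=10           # --l1.max-retries

delay=$initial_backoff_ms
attempt=1
total=0
while [ "$attempt" -le "$max_retries" ]; do
  total=$((total + delay))
  echo "attempt $attempt: wait ${delay}ms"
  delay=$((delay * 2))
  attempt=$((attempt + 1))
done
echo "worst-case total wait: ${total}ms"
```

With the defaults, a request that fails all 10 attempts accumulates roughly 102 seconds of backoff, which is worth keeping in mind when tuning these flags for an unreliable L1 endpoint.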
Blob Provider Configuration
The node requires access to blob data for derivation. Configure one or more blob sources:
- --blob.beacon_node_urls <URL>: Beacon node URLs for fetching blobs (comma-separated for multiple)
- --blob.s3_url <URL>: S3-compatible storage URL for blob data
- --blob.anvil_url <URL>: Anvil blob provider URL (for testing)
Scroll-Provided S3 Blob Storage:
Scroll provides public S3 blob storage endpoints for both networks:
- Mainnet: https://scroll-mainnet-blob-data.s3.us-west-2.amazonaws.com/
- Sepolia: https://scroll-sepolia-blob-data.s3.us-west-2.amazonaws.com/
These can be used as reliable blob sources without requiring your own beacon node infrastructure.
Consensus Configuration
- --consensus.algorithm <ALGORITHM>: Consensus algorithm to use
  - system-contract (default): Validates blocks against the authorized signer from L1
  - noop: No consensus validation (testing only)
- --consensus.authorized-signer <ADDRESS>: Static authorized signer address (when using system-contract without an L1 provider)
Database Configuration
- --rollup-node-db.path <PATH>: Custom database path (default: <datadir>/scroll.db)
Network Configuration
- --network.bridge: Enable bridging blocks from the eth-wire to the scroll-wire protocol (default: true)
- --network.scroll-wire: Enable the scroll-wire protocol (default: true)
- --network.sequencer-url <URL>: Sequencer RPC URL for following the sequencer
- --network.valid_signer <ADDRESS>: Valid signer address for network validation
Chain Orchestrator Configuration
- --chain.optimistic-sync-trigger <BLOCKS>: Block gap that triggers optimistic sync (default: 1000)
- --chain.chain-buffer-size <SIZE>: In-memory chain buffer size (default: 2000)
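The trigger condition can be stated concretely: if the gap between the network head and the local head exceeds the threshold, the node switches to optimistic sync. The block numbers below are made-up example values.

```shell
# Sketch of the optimistic-sync trigger condition with example heights.
trigger=1000          # --chain.optimistic-sync-trigger
local_head=4200000    # example local L2 head
remote_head=4250000   # example network L2 head

gap=$((remote_head - local_head))
if [ "$gap" -gt "$trigger" ]; then
  mode="optimistic-sync"
else
  mode="live-follow"
fi
echo "gap=$gap -> $mode"
```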
Engine Configuration
- --engine.sync-at-startup: Attempt to sync on startup (default: true)
HTTP RPC Configuration
- --http: Enable HTTP RPC server
- --http.addr <ADDRESS>: HTTP server listening address (default: 127.0.0.1)
- --http.port <PORT>: HTTP server port (default: 8545)
- --http.api <APIS>: Enabled RPC API namespaces (comma-separated)
  - Available: admin, debug, eth, net, trace, txpool, web3, rpc, reth, ots, flashbots, miner, mev
- --http.corsdomain <ORIGINS>: CORS allowed origins (comma-separated)
Rollup Node RPC
- --rpc.rollup-node=false: Disable the rollup node basic RPC namespace (default: enabled; provides rollup-specific methods)
- --rpc.rollup-node-admin: Enable the rollup node admin RPC namespace (provides rollup-specific admin methods)
Example Configurations
Scroll Mainnet Follower
./target/release/rollup-node node \
--chain scroll-mainnet \
--datadir /var/lib/scroll-node \
--l1.url https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY \
--blob.s3_url https://scroll-mainnet-blob-data.s3.us-west-2.amazonaws.com/ \
--http \
--http.addr 0.0.0.0 \
--http.port 8545 \
--http.api eth,net,web3,debug,trace
Scroll Sepolia Testnet Follower
./target/release/rollup-node node \
--chain scroll-sepolia \
--datadir /var/lib/scroll-node-sepolia \
--l1.url https://eth-sepolia.g.alchemy.com/v2/YOUR_API_KEY \
--blob.s3_url https://scroll-sepolia-blob-data.s3.us-west-2.amazonaws.com/ \
--http \
--http.addr 0.0.0.0 \
--http.port 8545 \
--http.api eth,net,web3
Logging and Debugging
The node uses Rust’s tracing framework for structured logging. Configure log levels using the RUST_LOG environment
variable.
Log Level Configuration
The general format is: RUST_LOG=<default_level>,<target>=<level>,...
Recommended Log Configuration
For production with detailed rollup-specific logging:
RUST_LOG=info,scroll=trace,rollup=trace,sqlx=off,scroll_txpool=trace ./target/release/rollup-node node ...
This configuration:
- Sets the default log level to info
- Enables detailed trace logging for scroll-specific components
- Enables detailed trace logging for rollup components
- Disables verbose sqlx database query logging
- Enables detailed trace logging for transaction pool operations
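The recommended value above can be assembled from its parts, which makes the `<default_level>,<target>=<level>,...` format easy to see:

```shell
# Composing a RUST_LOG value from a default level plus per-target overrides.
default_level="info"
targets="scroll=trace,rollup=trace,sqlx=off,scroll_txpool=trace"
RUST_LOG="${default_level},${targets}"
echo "$RUST_LOG"
```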
Useful Log Targets
For debugging specific components, you can adjust individual log targets:
L1 Watcher
RUST_LOG=scroll::watcher=debug
Monitor L1 event watching, batch commits, and chain reorganizations.
Derivation Pipeline
RUST_LOG=scroll::derivation_pipeline=debug
Track batch decoding and payload attribute streaming.
Engine Driver
RUST_LOG=scroll::engine=debug
Debug engine API interactions and forkchoice updates.
Network Layer
RUST_LOG=scroll::network=debug
Monitor P2P networking and block propagation.
Database Operations
RUST_LOG=scroll::db=debug
Debug database queries and operations.
Log Level Options
Available log levels (from least to most verbose):
- error: Only error messages
- warn: Warnings and errors
- info: General informational messages (recommended for production)
- debug: Detailed debugging information
- trace: Very detailed trace information (use for specific debugging)
Combined Example with Logging
RUST_LOG=info,scroll=debug,rollup=debug,sqlx=off \
./target/release/rollup-node node \
--chain scroll-mainnet \
--datadir /var/lib/scroll-node \
--l1.url https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY \
--blob.s3_url https://scroll-mainnet-blob-data.s3.us-west-2.amazonaws.com/ \
--http \
--http.addr 0.0.0.0 \
--http.port 8545 \
--http.api eth,net,web3
Verifying Node Operation
Check Sync Status
Once the node is running, you can verify it’s syncing properly:
curl -X POST http://localhost:8545 \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "eth_syncing",
"params": [],
"id": 1
}'
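The response shape determines the sync state: `"result":false` means the node is fully synced, while a result object reports sync progress. The sample payloads below are stand-ins for what the RPC returns.

```shell
# Interpreting eth_syncing responses; the payloads are illustrative samples.
synced_response='{"jsonrpc":"2.0","id":1,"result":false}'
syncing_response='{"jsonrpc":"2.0","id":1,"result":{"startingBlock":"0x0","currentBlock":"0x3e8","highestBlock":"0x7d0"}}'

is_synced() {
  case "$1" in
    *'"result":false'*) echo "synced" ;;
    *)                  echo "still syncing" ;;
  esac
}

is_synced "$synced_response"
is_synced "$syncing_response"
```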
Get Latest Block
curl -X POST http://localhost:8545 \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "eth_blockNumber",
"params": [],
"id": 1
}'
Check Node Health
Monitor the node logs for:
- Successful L1 event processing
- Block derivation progress
- Engine API health
- Database operations
Look for log entries indicating successful block processing and chain advancement.
Troubleshooting
Common Issues
Node not syncing:
- Verify L1 RPC endpoint is accessible and synced
- Ensure beacon node URLs are correct and responsive
- Check network connectivity
- Review logs with RUST_LOG=debug for detailed error messages
High memory usage:
- Adjust --chain.chain-buffer-size to reduce the memory footprint
- Ensure the system has adequate RAM (16 GB recommended)
Database errors:
- Verify disk space is available
- Check file permissions on datadir
- Consider using a different data directory with --datadir, or a custom database path with --rollup-node-db.path
L1 connection errors:
- Verify L1 RPC endpoint is reliable and has sufficient rate limits
- Adjust --l1.max-retries and --l1.initial-backoff for unstable connections
- Consider using a dedicated or archive L1 node
For additional support and detailed implementation information, refer to the project’s CLAUDE.md and source code documentation.
Running a Scroll Sequencer Node
This guide covers how to run a Scroll rollup node in sequencer mode. A sequencer node is responsible for ordering transactions, building new blocks, and proposing them to the network.
What is a Sequencer Node?
A sequencer node actively produces new blocks for the Scroll L2 chain. Unlike follower nodes that listen for blocks gossiped over L2 P2P and derive blocks from L1 data, sequencer nodes:
- Accept transactions from the mempool
- Order transactions and build new blocks
- Include L1 messages in blocks according to configured inclusion strategy
- Sign blocks with a configured signer (private key or AWS KMS)
- Broadcast blocks to the network
Note: Running a sequencer requires authorization. The sequencer’s address must be registered as the authorized signer in the L1 system contract for blocks to be accepted by the network.
Prerequisites
Hardware Requirements
Sequencer nodes have similar requirements to follower nodes:
- CPU: 2+ cores @ 3 GHz
- Memory: 16 GB RAM
Building the Binary
Build the rollup node binary in release mode for production use:
cargo build --release --bin rollup-node
The release binary will be located at target/release/rollup-node.
Signer Configuration
A sequencer node requires a signer to sign blocks. There are two signing methods available:
Option 1: Private Key File
Store your private key in a file and reference it with the --signer.key-file flag.
Private Key File Format:
- Hex-encoded private key (64 characters)
- Optional 0x prefix
- No additional formatting or whitespace
Example private key file (/secure/path/sequencer.key):
0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
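The format rules above can be checked mechanically before starting the node. The file path and key below are illustrative placeholders.

```shell
# Sanity-check a sequencer key file: exactly 64 hex characters,
# optional 0x prefix, no extra whitespace. Path and key are placeholders.
key_file=sequencer.key
printf '0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef' > "$key_file"
chmod 600 "$key_file"

if grep -Eq '^(0x)?[0-9a-fA-F]{64}$' "$key_file"; then
  echo "key file format OK"
else
  echo "key file format INVALID"
fi
```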
Security Best Practices:
- Store the key file in a secure location with restricted permissions (chmod 600)
- Never commit private keys to version control
- Use file system encryption for the key file
- Regularly rotate sequencer keys
- Consider using hardware security modules (HSM) for production
Option 2: AWS KMS (not thoroughly tested)
Use AWS Key Management Service (KMS) to manage your sequencer's signing key. KMS keeps key material out of the node host, but note that this integration has not yet been thoroughly tested.
Requirements:
- AWS account with KMS access
- KMS key created with signing capabilities
- IAM permissions for the sequencer node:
  - kms:GetPublicKey - Retrieve the public key
  - kms:Sign - Sign block hashes
KMS Key Format:
arn:aws:kms:REGION:ACCOUNT_ID:key/KEY_ID
Example:
arn:aws:kms:us-west-2:123456789012:key/12345678-1234-1234-1234-123456789012
Benefits of AWS KMS:
- Keys never leave AWS infrastructure
- Centralized key management and rotation
- Audit logging of signing operations
- Fine-grained access control via IAM
Test Mode (Development Only)
For development and testing, you can bypass signer requirements with the --test flag:
--test
Warning: Never use test mode in production. It disables signature verification and is only suitable for local development.
Sequencer Configuration Flags
Core Sequencer Flags
- --sequencer.enabled: Enable sequencer mode (default: false)
- --sequencer.auto-start: Automatically start sequencing on node startup (default: false)
Block Production Configuration
- --sequencer.block-time <MS>: Time between blocks in milliseconds (default: 1000)
- --sequencer.payload-building-duration <MS>: Time allocated for building each payload in milliseconds (default: 800)
- --sequencer.allow-empty-blocks: Allow production of empty blocks when no transactions are available (default: false)
Fee Configuration
- --sequencer.fee-recipient <ADDRESS>: Address to receive block rewards and transaction fees (default: 0x5300000000000000000000000000000000000005, the Scroll fee vault)
L1 Message Inclusion
The sequencer can include L1 messages in blocks using different strategies:
- --sequencer.l1-inclusion-mode <MODE>: Strategy for including L1 messages (default: finalized:2)
Available modes:
- finalized:N - Include messages from finalized L1 blocks with a block number <= current finalized block - N (e.g., finalized:2)
- depth:N - Include messages from L1 blocks with a block number <= current head - N (e.g., depth:10)
Example:
--sequencer.l1-inclusion-mode finalized:2
- --sequencer.max-l1-messages <N>: Override the maximum number of L1 messages per block (optional)
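Following the definitions of the two modes above, the inclusion cutoff for a given L1 state can be computed directly. The block heights here are made-up example values.

```shell
# Computing the L1-message inclusion cutoff for each mode,
# per the finalized:N and depth:N definitions. Heights are examples.
l1_head=21000000
l1_finalized=20999900

inclusion_cutoff() {
  mode="${1%%:*}"   # "finalized" or "depth"
  n="${1##*:}"      # the N parameter
  case "$mode" in
    finalized) echo $((l1_finalized - n)) ;;
    depth)     echo $((l1_head - n)) ;;
  esac
}

echo "finalized:2 -> include L1 messages up to block $(inclusion_cutoff finalized:2)"
echo "depth:10    -> include L1 messages up to block $(inclusion_cutoff depth:10)"
```

The finalized mode trades inclusion latency for safety against L1 reorgs, since finalized blocks cannot be reorganized.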
Signer Configuration
Mutually exclusive options (choose one):
- --signer.key-file <PATH>: Path to a hex-encoded private key file
- --signer.aws-kms-key-id <ARN>: AWS KMS key ID or ARN
Example Configurations
Development Sequencer (Test Mode)
For local development and testing:
./target/release/rollup-node node \
--chain dev \
--datadir ./data/sequencer \
--test \
--sequencer.enabled \
--sequencer.auto-start \
--sequencer.block-time 1000 \
--http \
--http.addr 0.0.0.0 \
--http.port 8545 \
--http.api admin,debug,eth,net,trace,txpool,web3,rpc,reth,ots,flashbots,miner,mev
Notes:
- Uses the --test flag to bypass the signer requirement
- Suitable for local development only
- Genesis funds available for testing
Production Sequencer with Private Key
For production deployment with private key file:
./target/release/rollup-node node \
--chain scroll-mainnet \
--datadir /var/lib/scroll-sequencer \
--l1.url https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY \
--blob.s3_url https://scroll-mainnet-blob-data.s3.us-west-2.amazonaws.com/ \
--sequencer.enabled \
--sequencer.auto-start \
--sequencer.block-time 1000 \
--sequencer.payload-building-duration 800 \
--sequencer.l1-inclusion-mode finalized:2 \
--sequencer.fee-recipient 0x5300000000000000000000000000000000000005 \
--signer.key-file /secure/path/sequencer.key \
--http \
--http.addr 0.0.0.0 \
--http.port 8545 \
--http.api eth,net,web3,debug,trace
Production Sequencer with AWS KMS
For production deployment with AWS KMS:
./target/release/rollup-node node \
--chain scroll-mainnet \
--datadir /var/lib/scroll-sequencer \
--l1.url https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY \
--blob.s3_url https://scroll-mainnet-blob-data.s3.us-west-2.amazonaws.com/ \
--sequencer.enabled \
--sequencer.auto-start \
--sequencer.block-time 1000 \
--sequencer.payload-building-duration 800 \
--sequencer.l1-inclusion-mode finalized:2 \
--sequencer.fee-recipient 0x5300000000000000000000000000000000000005 \
--signer.aws-kms-key-id arn:aws:kms:us-west-2:123456789012:key/YOUR-KEY-ID \
--http \
--http.addr 0.0.0.0 \
--http.port 8545 \
--http.api eth,net,web3,debug,trace
Sepolia Testnet Sequencer
For testing on Sepolia testnet:
./target/release/rollup-node node \
--chain scroll-sepolia \
--datadir /var/lib/scroll-sequencer-sepolia \
--l1.url https://eth-sepolia.g.alchemy.com/v2/YOUR_API_KEY \
--blob.s3_url https://scroll-sepolia-blob-data.s3.us-west-2.amazonaws.com/ \
--sequencer.enabled \
--sequencer.auto-start \
--signer.key-file /path/to/sepolia-sequencer.key \
--http \
--http.addr 0.0.0.0 \
--http.port 8545 \
--http.api eth,net,web3
Verifying Sequencer Operation
Check Sequencer is Producing Blocks
Monitor the logs for block production:
# Look for sequencer-related log entries
RUST_LOG=info,scroll=debug,rollup=debug ./target/release/rollup-node node ...
Expected log patterns:
- Built payload - Payload construction
- Signed block - Block signing
- Broadcast block - Block propagation
Query Latest Block
Verify the sequencer is producing new blocks:
# Check block number is incrementing
curl -X POST http://localhost:8545 \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "eth_blockNumber",
"params": [],
"id": 1
}'
Run this command several times, spaced by the --sequencer.block-time interval, to confirm blocks are being produced.
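Note that eth_blockNumber returns a hex quantity, so convert it to decimal before comparing successive samples. The two values below are stand-ins for real RPC results.

```shell
# Comparing two eth_blockNumber results; the hex values are sample stand-ins.
first='0x1a2b3c'
second='0x1a2b3e'

hex_to_dec() { printf '%d\n' "$1"; }

if [ "$(hex_to_dec "$second")" -gt "$(hex_to_dec "$first")" ]; then
  echo "block height advancing: $(hex_to_dec "$first") -> $(hex_to_dec "$second")"
else
  echo "block height NOT advancing"
fi
```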
Logging and Debugging
Recommended Log Configuration for Sequencer
For production sequencer operation with detailed logging:
RUST_LOG=info,scroll=debug,rollup_node::sequencer=trace,scroll::engine=debug
Sequencer-Specific Log Targets
Sequencer Operations:
RUST_LOG=rollup_node::sequencer=trace
Monitor payload building, transaction selection, and block production.
Block Signing:
RUST_LOG=rollup_node::signer=debug
Track signing operations and signer interactions.
L1 Message Processing:
RUST_LOG=scroll::watcher=debug
Monitor L1 message detection and inclusion.
Troubleshooting
Sequencer Not Producing Blocks
Symptoms: No new blocks appearing, logs show no sequencer activity
Possible causes:
- --sequencer.auto-start not enabled - start sequencing manually via RPC
- Signer configuration error - check the --signer.* flags
- Not authorized on L1 - verify the sequencer address is registered in the system contract
- Insufficient gas for transactions - check that the mempool has valid transactions
- --sequencer.allow-empty-blocks not set and no transactions available
Solutions:
- Enable auto-start: --sequencer.auto-start
- Verify the signer configuration in logs
- Check authorization on L1 system contract
- Review logs with RUST_LOG=rollup_node::sequencer=trace
Signer Errors
Symptoms: Errors mentioning “signer”, “signature”, or “key”
With --signer.key-file:
- Verify file exists and is readable
- Check file permissions (chmod 600)
- Validate the hex format (64 characters, optional 0x prefix)
- Ensure no extra whitespace or newlines
With --signer.aws-kms-key-id:
- Verify AWS credentials are configured
- Check IAM permissions (kms:GetPublicKey, kms:Sign)
- Confirm the KMS key ARN is correct
- Check network connectivity to AWS
- Review AWS CloudTrail logs for KMS denials
Block Signing Failed
Symptoms: Logs show “failed to sign”
Solutions:
- Check signer configuration is correct
- For AWS KMS, verify network connectivity and IAM permissions
- Ensure the signing key has not been disabled or deleted
- Check system time is synchronized (important for KMS)
L1 Message Inclusion Errors
Symptoms: Errors related to L1 messages, blocks rejected
Solutions:
- Verify --l1.url is accessible and synced
- Check that --sequencer.l1-inclusion-mode is configured appropriately
- Ensure L1 messages are being detected (check the L1 watcher logs):
RUST_LOG=scroll::watcher=debug
Additional Resources
- Running a Follower Node - Configuration for non-sequencer nodes
- Docker Operations - Running node in Docker
- Sequencer Migration Guide - Migrating from l2geth to rollup-node
Running with Docker Compose
This guide covers how to run the Scroll rollup node using Docker Compose. The Docker setup provides a complete environment including the rollup node, monitoring infrastructure, and optional L1 devnet for shadow-fork testing.
Overview
The Docker Compose stack includes the following services:
- rollup-node: The main Scroll rollup node
- prometheus: Metrics collection and time-series database
- grafana: Visualization dashboard for metrics
- l1-devnet (optional): Local L1 Ethereum node for shadow-fork mode
Prerequisites
- Docker installed (version 20.10 or later)
- Docker Compose installed (version 1.28 or later)
- At least 16 GB RAM
- Sufficient disk space for chain data
Quick Start
1. Navigate to Docker Compose Directory
cd docker-compose
2. Configure Environment
The docker-compose setup uses a .env file for configuration. You must configure your own L1 RPC and beacon node
endpoints.
Key environment variables:
- ENV: The network to connect to (sepolia, mainnet, or dev)
- SHADOW_FORK: Enable shadow-fork mode (true or false)
- FORK_BLOCK_NUMBER: Block number to fork from (when shadow-fork is enabled)
- L1_URL: Your L1 Ethereum RPC endpoint URL (e.g., Alchemy, Infura, QuickNode)
- BEACON_URL: Your beacon node URL for blob data
Note: You must provide your own RPC endpoints. Configure these in your .env file before starting the stack.
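A quick pre-flight check that the required values are set can save a failed startup. The variable values below are placeholders standing in for a real .env file.

```shell
# Pre-flight check for required .env variables; values are placeholders.
ENV=sepolia
SHADOW_FORK=false
L1_URL=https://eth-sepolia.example.com/v2/YOUR_API_KEY
BEACON_URL=https://eth-sepolia.example.com/v2/YOUR_API_KEY

missing=""
for var in ENV SHADOW_FORK L1_URL BEACON_URL; do
  eval "val=\$$var"
  [ -n "$val" ] || missing="$missing $var"
done

if [ -z "$missing" ]; then
  echo "all required variables set"
else
  echo "missing:$missing"
fi
```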
3. Start the Stack
For standard operation (following public networks):
docker compose up -d
For shadow-fork mode:
docker compose --profile shadow-fork up -d
4. Access the Services
Once running, the following endpoints are available:
- Rollup Node JSON-RPC: http://localhost:8545
- Rollup Node WebSocket: ws://localhost:8546
- Rollup Node Metrics: http://localhost:6060/metrics
- Prometheus UI: http://localhost:19090
- Grafana Dashboards: http://localhost:13000
Operating Modes
Standard Mode (Follower Node)
Standard mode connects the rollup node to public Scroll networks (Sepolia or Mainnet).
Sepolia Testnet
Edit your .env file with your RPC endpoints:
ENV=sepolia
SHADOW_FORK=false
L1_URL=https://eth-sepolia.g.alchemy.com/v2/YOUR_API_KEY
BEACON_URL=https://eth-sepolia.g.alchemy.com/v2/YOUR_API_KEY
Replace YOUR_API_KEY with your actual API key from your RPC provider (Alchemy, Infura, QuickNode, etc.).
Start the services:
docker compose up -d
The node will:
- Connect to your configured L1 RPC endpoint
- Connect to your configured beacon node
- Sync from the Scroll Sepolia network
- Connect to trusted peers on the network
Mainnet
Edit your .env file with your RPC endpoints:
ENV=mainnet
SHADOW_FORK=false
L1_URL=https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY
BEACON_URL=https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY
Replace YOUR_API_KEY with your actual API key from your RPC provider (Alchemy, Infura, QuickNode, etc.).
Start the services:
docker compose up -d
The node will:
- Connect to your configured L1 RPC endpoint
- Connect to your configured beacon node
- Sync from the Scroll Mainnet network
- Connect to trusted peers on the network
Shadow-Fork Mode
Shadow-fork mode runs the rollup node against a forked L1 chain, useful for testing and development without affecting the live network.
Configuration
Edit your .env file:
ENV=sepolia # or mainnet
SHADOW_FORK=true
FORK_BLOCK_NUMBER=8700000 # Adjust to your desired fork point
Starting Shadow-Fork
docker compose --profile shadow-fork up -d
This will:
- Start an L1 devnet (Anvil) forked from the specified block
- Start the rollup node configured to use the local L1 devnet
- Start Prometheus and Grafana for monitoring
How Shadow-Fork Works
The l1-devnet service uses Foundry’s Anvil to create a local fork:
anvil --fork-url <SCROLL_L1_RPC> \
--fork-block-number <FORK_BLOCK_NUMBER> \
--chain-id <CHAIN_ID> \
--host 0.0.0.0 \
--block-time 12
The rollup node then connects to this local L1 at http://l1-devnet:8545 instead of the public L1 RPC.
Use Cases
Shadow-fork mode is ideal for:
- Testing node behavior at specific block heights
- Debugging derivation issues
- Development without consuming testnet resources
- Simulating historical scenarios
Development Mode
For local development with a completely isolated environment:
ENV=dev
This mode:
- Uses the dev chain spec
- Enables sequencer mode
- Bypasses signing requirements with the --test flag
- Disables peer discovery
- Sets fast block times (250 ms)
Configuring RPC Endpoints
You must configure your own L1 RPC and beacon node URLs using a provider of your choice:
Recommended Providers:
- Alchemy - Offers free tier with generous limits
- Infura - Reliable infrastructure with free tier
- QuickNode - High-performance nodes
- Your own self-hosted L1 archive node
Requirements:
- L1 RPC endpoint must support Ethereum mainnet or Sepolia testnet
- Beacon node must provide EIP-4844 blob data access
- Both endpoints should be reliable with good uptime
Configuration Priority
The launch script determines which URLs to use in this order:
1. User-provided URLs (via L1_URL or BEACON_URL) - highest priority
2. Shadow-fork URLs (if SHADOW_FORK=true) - uses the local L1 devnet
3. Error - if no URLs are configured and shadow-fork mode is off, the node fails to start
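The decision order above can be sketched as a small selection function. This is a sketch of the logic, not the actual launch script; the devnet address comes from the shadow-fork section of this guide.

```shell
# Sketch of the URL-selection order: user-provided URL wins, then the
# shadow-fork devnet, otherwise startup fails. Not the real launch script.
select_l1_url() {
  if [ -n "$L1_URL" ]; then
    echo "$L1_URL"
  elif [ "$SHADOW_FORK" = "true" ]; then
    echo "http://l1-devnet:8545"
  else
    echo "error: no L1 URL configured" >&2
    return 1
  fi
}

L1_URL="" SHADOW_FORK=true
select_l1_url
```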
Example: Using Alchemy for Sepolia
Edit your .env file:
ENV=sepolia
SHADOW_FORK=false
L1_URL=https://eth-sepolia.g.alchemy.com/v2/YOUR_API_KEY
BEACON_URL=https://eth-sepolia.g.alchemy.com/v2/YOUR_API_KEY
Then start:
docker compose up -d
Verifying RPC Connectivity
After starting the node, verify connectivity to your RPC endpoints by checking the logs:
docker compose logs rollup-node | grep -i "l1\|beacon"
You should see logs indicating successful connection to your configured endpoints. If you see connection errors, verify your URLs and API keys are correct.
Service Details
Rollup Node Service
Image: scrolltech/rollup-node:v1.0.5
Port Mappings:
- 8545: JSON-RPC interface
- 8546: WebSocket interface
- 6060: Metrics endpoint
Volumes:
- ./volumes/l2reth: Node data directory (chain state, database)
- ./launch_rollup_node.bash: Entrypoint script (read-only)
Configuration:
The launch_rollup_node.bash script configures the node based on the ENV variable:
- dev: Local sequencer mode with fast block times
- sepolia: Sepolia follower with optional shadow-fork
- mainnet: Mainnet follower with optional shadow-fork
Key command-line flags used:
--chain <CHAIN> # Chain specification
--datadir=/l2reth # Data directory
--metrics=0.0.0.0:6060 # Metrics endpoint
--disable-discovery # P2P discovery disabled in Docker
--http --http.addr=0.0.0.0 --http.port=8545
--ws --ws.addr=0.0.0.0 --ws.port=8546
--http.api admin,debug,eth,net,trace,txpool,web3,rpc,reth,ots,flashbots,miner,mev
--l1.url <URL> # L1 RPC endpoint
--blob.beacon_node_urls <URLS> # Beacon node endpoint
--blob.s3_url <URL> # S3 Blob URL
--trusted-peers <PEERS> # Trusted P2P peers
L1 Devnet Service (Shadow-Fork Only)
Image: ghcr.io/foundry-rs/foundry:v1.2.3
Port Mappings:
- 8543: JSON-RPC interface (mapped from internal 8545)
- 8544: WebSocket interface (mapped from internal 8546)
Volumes:
- ./volumes/l1devnet: Anvil state directory
- ./launch_l1.bash: Entrypoint script (read-only)
Profile: shadow-fork (only starts when this profile is active)
Prometheus Service
Image: prom/prometheus:v3.3.1
Port Mappings:
- 19090: Prometheus web UI
Volumes:
- ./resource/prometheus.yml: Configuration file (read-only)
- ./volumes/prometheus: Time-series database storage
Configuration:
Prometheus scrapes metrics from the rollup node every 10 seconds at http://rollup-node:6060/metrics. The configuration
includes:
- Scrape interval: 10 seconds
- Retention time: 1 day
- Retention size: 512 MB
- WAL compression: Enabled
Grafana Service
Image: grafana/grafana:12.0.2
Port Mappings:
- 13000: Grafana web UI (mapped from internal 3000)
Volumes:
- ./resource/grafana-datasource.yml: Prometheus datasource config (read-only)
- ./resource/grafana-dashboard-providers.yml: Dashboard provider config (read-only)
- ./resource/dashboards/: Pre-built dashboard JSON files (read-only)
- ./volumes/grafana: Grafana database and settings
Configuration:
- Anonymous access: Enabled with Admin role (for easy local access)
- Default home dashboard: Overview dashboard
- Pre-provisioned dashboards:
- overview.json: High-level node metrics
- performance.json: Performance and throughput metrics
- state_history.json: State and history metrics
- transaction_pool.json: Transaction pool metrics
- rollup_node.json: Rollup-specific metrics
Monitoring and Observability
Accessing Grafana Dashboards
- Open http://localhost:13000 in your browser
- No login required (anonymous access enabled)
- Select from pre-configured dashboards in the sidebar
Available Dashboards
Overview Dashboard
Provides high-level metrics including:
- Block production rate
- Sync status
- L1/L2 block heights
- Network peer count
Performance Dashboard
Detailed performance metrics:
- CPU and memory usage
- Database operations per second
- RPC request latency
- Block processing time
Transaction Pool Dashboard
Transaction pool monitoring:
- Pending transaction count
- Transaction pool size
- Transaction arrival rate
- Gas price distribution
Rollup Node Dashboard
Rollup-specific metrics:
- L1 batch processing
- Derivation pipeline throughput
- Engine API calls
- Consolidation status
Prometheus Queries
Access Prometheus at http://localhost:19090 to run custom queries.
Example queries:
# Current block height
scroll_block_height
# Block processing rate (blocks per second)
rate(scroll_blocks_processed_total[1m])
# L1 batch processing time
histogram_quantile(0.95, scroll_l1_batch_processing_seconds_bucket)
# RPC request rate by method
rate(scroll_rpc_requests_total[5m])
Viewing Logs
All Services
docker compose logs -f
Specific Service
docker compose logs -f rollup-node
docker compose logs -f prometheus
docker compose logs -f grafana
With Timestamps
docker compose logs -f -t rollup-node
Last N Lines
docker compose logs --tail=100 rollup-node
Volume Management
Data Persistence
The Docker Compose setup uses local volumes in ./volumes/ to persist data:
volumes/
├── l1devnet/ # L1 devnet state (shadow-fork only)
├── l2reth/ # Rollup node data (chain state, database)
├── prometheus/ # Prometheus time-series data
└── grafana/ # Grafana configuration and dashboards
Backing Up Data
To backup your node data:
# Stop the services first
docker compose down
# Create a backup
tar -czf backup-$(date +%Y%m%d).tar.gz volumes/
# Restart services
docker compose up -d
Resetting Node Data
To completely reset and resync:
# Stop services
docker compose down
# Remove all volumes
rm -rf volumes/
# Restart (will create fresh volumes)
docker compose up -d
Resetting Specific Services
# Reset only L2 node data
docker compose down
rm -rf volumes/l2reth/
docker compose up -d
# Reset only monitoring data
docker compose down
rm -rf volumes/prometheus/ volumes/grafana/
docker compose up -d
Network Configuration
Port Usage
The Docker Compose stack uses the following host ports:
| Service | Port | Protocol | Purpose |
|---|---|---|---|
| rollup-node | 8545 | HTTP | JSON-RPC API |
| rollup-node | 8546 | WebSocket | WebSocket API |
| rollup-node | 6060 | HTTP | Metrics endpoint |
| l1-devnet | 8543 | HTTP | L1 JSON-RPC (shadow-fork) |
| l1-devnet | 8544 | WebSocket | L1 WebSocket (shadow-fork) |
| prometheus | 19090 | HTTP | Prometheus UI |
| grafana | 13000 | HTTP | Grafana UI |
Troubleshooting
Rollup Node Not Syncing
Check L1 connectivity:
For shadow-fork mode, ensure l1-devnet is running:
docker compose ps l1-devnet
For standard mode, verify L1 RPC endpoint is accessible.
Check logs for derivation errors:
docker compose logs -f rollup-node | grep -i error
Verify beacon node connectivity:
# From within the container
docker compose exec rollup-node curl http://l1reth-cl.sepolia.scroll.tech:5052/eth/v1/node/health
Shadow-Fork L1 Devnet Issues
Check Anvil is forking correctly:
docker compose logs l1-devnet
Verify fork block number exists:
Ensure FORK_BLOCK_NUMBER is not ahead of the current L1 chain tip.
Test L1 devnet connectivity:
curl -X POST http://localhost:8543 \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "eth_blockNumber",
"params": [],
"id": 1
}'
More
For general node operation and configuration, see the Running a Node guide.