Run BSP Node
Run & Build BSP Agent from Source
Install bsp-agent
Next, we will install the agent that transforms block specimens into AVRO-encoded blocks, proves that their data matches what is encoded, and uploads them to an object store.
Clone the covalenthq/bsp-agent repo in a separate terminal and build it:

```shell
git clone https://github.com/covalenthq/bsp-agent.git
cd bsp-agent
make build
```
Create envrc file
Add a `.envrc` file to `~/bsp-agent` containing the private key for your operator account address (see below for how to do this for this workshop). Here we set up the required environment variables for the bsp-agent; other values that are not secrets are passed as flags. Add the lines below to the `.envrc` file, replacing the RPC URL and private key with your own, and save the file.

```shell
cd bsp-agent
touch .envrc
export MB_RPC_URL=** moonbeam RPC url **
export MB_PRIVATE_KEY=** Your BSP Operator private key **
```
direnv allow
Allow direnv to pick up the exported variables and enable them with the `direnv allow` command:

```shell
direnv allow
```
NOTE: You should see output like the following if the env variables have been correctly exported and are ready to use. If you don’t see this prompt in the terminal, please enable/install direnv using the instructions on the install dependencies page of this guide.

```shell
direnv: loading ~/Documents/covalent/bsp-agent/.envrc
direnv: export +MB_PRIVATE_KEY +MB_RPC_URL
```
Make sure you replace $PROOF_CHAIN_CONTRACT_ADDR with the newly copied “proof-chain” contract address for the --proof-chain-address flag in the command below, and create a bin directory at ~/bsp-agent to store the block-specimen binary files.
NOTE: Moonbeam Proof-Chain Address: 0x7487b04899c2572A223A8c6eC9bA919e27BBCd36
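The bin directory mentioned above can be created with a single command (the `~/bsp-agent` path is an assumption based on cloning the repo into your home directory, as earlier in this guide):

```shell
# Create the directory the agent writes block-specimen binary files into.
# Path assumption: the repo was cloned to ~/bsp-agent.
mkdir -p ~/bsp-agent/bin
```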
Assuming the ipfs-pinner is running (see the last section) at the default http://127.0.0.1:3001/, we can now start the bsp-agent:

```shell
cd ../bsp-agent
./bin/bspagent \
  --redis-url="redis://username:@localhost:6379/0?topic=replication" \
  --avro-codec-path="./codec/block-ethereum.avsc" \
  --binary-file-path="./bin/block-ethereum" \
  --block-divisor=35 \
  --proof-chain-address=0x7487b04899c2572A223A8c6eC9bA919e27BBCd36 \
  --consumer-timeout=10000000 \
  --log-folder ./logs/ \
  --ipfs-pinner-server "http://127.0.0.1:3001"
```
Each of the agent’s flags and its function is described below (some have been omitted to simplify this workshop):

--redis-url - tells the agent where to find the BSP messages, which stream topic key to read (replication), and the name of the consumer group given after #, which in this case is replicate. A password for the redis instance can also be provided here, but we recommend instead adding the line below to the .envrc:
export REDIS_PWD=your-redis-pwd

--avro-codec-path - tells the bsp-agent the relative path to the AVRO .avsc files in the repo. Since the agent ships with the corresponding .avsc files, this remains fixed for the time being.

--binary-file-path - tells the agent whether local copies of the block-replica objects being created should be stored in a given local directory. Please make sure the path (and directory) exists before passing this flag.

--block-divisor - allows the operator to configure how many block specimens are created; only block numbers evenly divisible by this number are extracted, packed, encoded, uploaded, and proofed.

--proof-chain-address - specifies the address of the proof-chain contract deployed to the Moonbeam network.

--consumer-timeout - specifies when the agent stops accepting new messages from the pending queue for encoding, proof, and upload.

--log-folder - specifies the folder where log files are placed. In case of errors (such as permission errors), logs are not recorded to files.

--ipfs-pinner-server - specifies the HTTP URL where the ipfs-pinner server is listening. By default this is http://127.0.0.1:3001.
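To illustrate the --block-divisor behaviour described above, here is a small shell sketch (the function name is hypothetical, used only for this example; with divisor 3, block 10430550 yields a specimen while 10430549 is skipped, matching the sample logs in this guide):

```shell
# Illustration only: mirrors the agent's divisor check. A specimen is
# produced only for block numbers evenly divisible by the divisor.
is_specimen_block() {
  local block=$1 divisor=$2
  if (( block % divisor == 0 )); then
    echo "specimen"
  else
    echo "skip"
  fi
}

is_specimen_block 10430550 3   # divisible by 3 -> specimen
is_specimen_block 10430549 3   # not divisible  -> skip
```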
NOTE: if the bsp-agent command above fails with a message about permission issues accessing ~/.ipfs/*, run `sudo chmod -R 700 ~/.ipfs` and try again.
If all the CLI flags are supplied correctly (either in the Makefile or the go run command), you should see logs like the following:
```shell
time="2022-04-18T17:26:47Z" level=info msg="Initializing Consumer: fb78bb1c-1e14-4905-bb1f-0ea96de8d8b5 | Redis Stream: replication-1 | Consumer Group: replicate-1" function=main line=167
time="2022-04-18T17:26:47Z" level=info msg="block-specimen not created for: 10430548, base block number divisor is :3" function=processStream line=332
time="2022-04-18T17:26:47Z" level=info msg="stream ids acked and trimmed: [1648848491276-0], for stream key: replication-1, with current length: 11700" function=processStream line=339
time="2022-04-18T17:26:47Z" level=info msg="block-specimen not created for: 10430549, base block number divisor is :3" function=processStream line=332
time="2022-04-18T17:26:47Z" level=info msg="stream ids acked and trimmed: [1648848505274-0], for stream key: replication-1, with current length: 11699" function=processStream line=339
---> Processing 4-10430550-replica <---
time="2022-04-18T17:26:47Z" level=info msg="Submitting block-replica segment proof for: 4-10430550-replica" function=EncodeProveAndUploadReplicaSegment line=59
time="2022-04-18T17:26:47Z" level=info msg="binary file should be available: ipfs://QmUQ4XYJv9syrokUfUbhvA4bV8ce7w1Q2dF6NoNDfSDqxc" function=EncodeProveAndUploadReplicaSegment line=80
time="2022-04-18T17:27:04Z" level=info msg="Proof-chain tx hash: 0xcc8c487a5db0fec423de62f7ac4ca81c630544aa67c432131cabfa35d9703f37 for block-replica segment: 4-10430550-replica" function=EncodeProveAndUploadReplicaSegment line=86
time="2022-04-18T17:27:04Z" level=info msg="File written successfully to: /scratch/node/block-ethereum/4-10430550-replica-0xcc8c487a5db0fec423de62f7ac4ca81c630544aa67c432131cabfa35d9703f37" function=writeToBinFile line=188
time="2022-04-18T17:27:04Z" level=info msg="car file location: /tmp/28077399.car\n" function=generateCarFile line=133
time="2022-04-18T17:27:08Z" level=info msg="File /tmp/28077399.car successfully uploaded to IPFS with pin: QmUQ4XYJv9syrokUfUbhvA4bV8ce7w1Q2dF6NoNDfSDqxc" function=HandleObjectUploadToIPFS line=102
time="2022-04-18T17:27:08Z" level=info msg="stream ids acked and trimmed: [1648848521276-0], for stream key: replication-1, with current length: 11698" function=processStream line=323
```
If you see the above log, you’re successfully running the entire block specimen producer workflow: the bsp-agent is reading messages from the redis stream topic, then encoding, compressing, proving, and uploading them to the object store in segments of multiple blocks at a time.
If, however, the agent fails and isn’t able to complete the workflow, fear not! The unprocessed messages are persisted in the stream they were being read from, so when you restart the agent correctly, the same messages will be reprocessed until fully successful.
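One way to confirm that unprocessed messages are still waiting after a failure is to inspect the stream directly with redis-cli. A sketch (the stream key `replication-1` and consumer group `replicate-1` are taken from the sample logs in this guide; substitute your own):

```shell
# Number of entries currently in the stream:
redis-cli XLEN replication-1

# Summary of messages delivered to the consumer group but not yet acked:
redis-cli XPENDING replication-1 replicate-1
```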
Please note any ERR / WARN / DEBUG messages that could be responsible for the failure. The messages should be clear enough to pinpoint the exact issue. Additionally, get support from Covalent's discord community!
Sample Systemd Service Units
If the block specimen stack is running successfully and producing block specimens, congrats! To manage the services, you might want to use systemd.
We next provide sample systemd service files, so that a crash in one of the components restarts that component rather than halting the system. Don't forget to replace the placeholders in these sample files with actual values.
BSP-Geth - Service Unit File
```ini
[Unit]
Description=Bsp Geth service
Wants=network-online.target
After=network.target

[Service]
User=ubuntu
Group=ubuntu
Type=simple
WorkingDirectory=/home/ubuntu/covalent/bsp-geth/
ExecStart=./build/bin/geth --mainnet --log.debug --syncmode snap --datadir $PATH_TO_GETH_MAINNET_CHAINDATA --replication.targets "redis://localhost:6379/?topic=replication" --replica.result --replica.specimen --replica.blob --log.folder "./logs/"
Restart=always

[Install]
WantedBy=multi-user.target
```
Lighthouse - Service Unit File
```ini
[Unit]
Description=lighthouse service
Wants=network-online.target
After=network.target

[Service]
User=ubuntu
Group=ubuntu
Type=simple
WorkingDirectory=/home/ubuntu/covalent/lighthouse/
ExecStart=./build/bin/lighthouse bn --network mainnet --execution-endpoint http://localhost:8551 --execution-jwt /Users/<user>/repos/experiment/bsp_doc/bsp-geth/data/geth/jwtsecret --checkpoint-sync-url https://mainnet.checkpoint.sigp.io --disable-deposit-contract-sync
Restart=always

[Install]
WantedBy=multi-user.target
```
IPFS-Pinner - Service Unit File
```ini
[Unit]
Description=ipfs-pinner client
Wants=network-online.target
After=syslog.target network.target

[Service]
User=ubuntu
Group=ubuntu
Environment="AGENT_KEY=<<ASK_ON_DISCORD>>"
Environment="DELEGATION_PROOF_FILE=<<ASK_ON_DISCORD>>"
Type=simple
ExecStart=/opt/ipfs-pinner/bin/server
Restart=always
TimeoutStopSec=infinity

[Install]
WantedBy=multi-user.target
```
BSP-Agent - Service Unit File
```ini
[Unit]
Description=Bsp Agent service
Wants=network-online.target
After=network.target

[Service]
User=ubuntu
Group=ubuntu
Type=simple
WorkingDirectory=/home/ubuntu/covalent/bsp-agent/
ExecStart=./bin/bspagent --redis-url="redis://username:@localhost:6379/0?topic=replication" --avro-codec-path="/home/ubuntu/covalent/bsp-agent/codec/block-ethereum.avsc" --binary-file-path="/home/ubuntu/covalent/bsp-agent/data/bin/block-ethereum" --block-divisor=35 --proof-chain-address=0x7487b04899c2572A223A8c6eC9bA919e27BBCd36 --consumer-timeout=10000000 --log-folder /home/ubuntu/covalent/bsp-agent/logs/ --ipfs-pinner-server=http://127.0.0.1:3001/
Restart=always

[Install]
WantedBy=multi-user.target
```
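A sketch of installing and enabling these units (the unit file names below are assumptions; use whatever names you saved them under):

```shell
# Copy the unit files into systemd's search path and reload its configuration.
sudo cp bsp-geth.service lighthouse.service ipfs-pinner.service bsp-agent.service /etc/systemd/system/
sudo systemctl daemon-reload

# Start the services now and on every boot.
sudo systemctl enable --now bsp-geth lighthouse ipfs-pinner bsp-agent

# Follow the agent's logs.
journalctl -u bsp-agent -f
```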
Support
If you need any assistance with the onboarding process or have technical and operational concerns please contact Rodrigo or Krish in the Covalent Network Discord.