Run Archivist
At the moment, Archivist is implemented only in Nim and can be found in the archivist-node repository.
It is a command-line application which can be run in several ways:
- Using binary
- Run as a service in Linux
- Run as a service in Windows (not supported yet)
- Using Docker
- Using Docker Compose
- On Kubernetes
At startup, configuration options need to be passed to the application, which can be done in several ways.
Configuration
It is possible to configure an Archivist node in several ways:
- CLI options
- Environment variables
- Configuration file
The order of priority is the same as above:
CLI options --> Environment variables --> Configuration file.
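For example, if the same option is supplied both as an environment variable and as a CLI option, the CLI value wins (a minimal sketch, using the environment variable naming described below):

# the node starts with log level "trace", because CLI options have the highest priority
ARCHIVIST_LOG_LEVEL=info archivist --log-level=trace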
Common information
Units
For some configuration options, values can be passed using common units, like the following:
--cache-size=1m/1M/1mb/1MB
--storage-quota=2m/2M/2mb/2MB
--block-mi=1s/1S/1m/1M/1h/1H/1d/1D/1w/1W
--block-ttl=2s/2S/2m/2M/2h/2H/2d/2D/2w/2W
Logging
Archivist uses the Chronicles logging library, which allows great flexibility in working with logs. Chronicles has the concept of topics, which categorize log entries into semantic groups.
Using the log-level parameter, you can set the top-level log level like --log-level="trace", but more importantly, you can set log levels for specific topics like --log-level="info; trace: marketplace,node; error: blockexchange", which sets the top-level log level to info and then for topics marketplace and node sets the level to trace and so on.
And one useful example, to get the most important data for debugging:
--log-level="trace; warn:discv5,providers,manager,cache,libp2p,multistream,switch,transport,tcptransport,semaphore,asyncstreamwrapper,lpstream,mplex,mplexchannel,noise,bufferstream,mplexcoder,secure,chronosstream,connection,connmanager,websock,ws-session,dialer,muxedupgrade,upgrade,identify,contracts,clock,archivist,pendingblocks,peerctxstore,discoveryengine,blockexcengine,blockexcnetwork,blockexcnetworkpeer,JSONRPC-CLIENT,JSONRPC-HTTP-CLIENT"CLI options
archivist --help
Usage:
archivist [OPTIONS]... command
The following options are available:
--config-file Loads the configuration from a TOML file [=none].
--log-level Sets the log level [=info].
--log-format Specifies what kind of logs should be written to stdout (auto, colors, nocolors,
json) [=auto].
--metrics Enable the metrics server [=false].
--metrics-address Listening address of the metrics server [=127.0.0.1].
--metrics-port Listening HTTP port of the metrics server [=8008].
-d, --data-dir The directory where the node will store configuration and data.
-i, --listen-addrs Multi Addresses to listen on [=/ip4/0.0.0.0/tcp/0].
--nat Specify method to use for determining public address. Must be one of: any, none,
upnp, pmp, extip:<IP> [=any].
-u, --disc-port Discovery (UDP) port [=8090].
--net-privkey Source of network (secp256k1) private key file path or name [=key].
-b, --bootstrap-node Specifies one or more bootstrap nodes to use when connecting to the network.
--max-peers The maximum number of peers to connect to [=160].
--num-threads Number of worker threads ("0" = use as many threads as there are CPU cores
available) [=DefaultThreadCount].
--agent-string Node agent string which is used as identifier in network [=Archivist Node].
--api-bindaddr The REST API bind address [=127.0.0.1].
-p, --api-port The REST Api port [=8080].
--api-cors-origin The REST Api CORS allowed origin for downloading data. '*' will allow all
origins, '' will allow none. [=Disallow all cross origin requests to download
data].
--repo-kind Backend for main repo store (fs, sqlite, leveldb) [=fs].
-q, --storage-quota The size of the total storage quota dedicated to the node [=$DefaultQuotaBytes].
-t, --block-ttl Default block timeout in seconds - 0 disables the ttl [=$DefaultBlockTtl].
--block-mi Time interval in seconds - determines frequency of block maintenance cycle: how
often blocks are checked for expiration and cleanup [=$DefaultBlockInterval].
--block-mn Number of blocks to check every maintenance cycle [=1000].
-c, --cache-size The size of the block cache, 0 disables the cache - might help on slow hardrives
[=0].
Available sub-commands:
archivist persistence [OPTIONS]... command
The following options are available:
--eth-provider The URL of the JSON-RPC API of the Ethereum node [=ws://localhost:8545].
--eth-account The Ethereum account that is used for storage contracts.
--eth-private-key File containing Ethereum private key for storage contracts.
--marketplace-address Address of deployed Marketplace contract.
--validator Enables validator, requires an Ethereum node [=false].
--validator-max-slots Maximum number of slots that the validator monitors [=1000].
If set to 0, the validator will not limit the maximum number of slots it
monitors.
--validator-groups Slot validation groups [=ValidationGroups.none].
A number indicating total number of groups into which the whole slot id space
will be divided. The value must be in the range [2, 65535]. If not provided, the
validator will observe the whole slot id space and the value of the
--validator-group-index parameter will be ignored. Powers of twos are advised
for even distribution.
--validator-group-index Slot validation group index [=0].
The value provided must be in the range [0, validatorGroups). Ignored when
--validator-groups is not provided. Only slot ids satisfying condition [(slotId
mod validationGroups) == groupIndex] will be observed by the validator.
--reward-recipient Address to send payouts to (eg rewards and refunds).
Available sub-commands:
archivist persistence prover [OPTIONS]...
The following options are available:
-cd, --circuit-dir Directory where the node will store proof circuit data [=data/circuits].
--circom-r1cs The r1cs file for the storage circuit [=data/circuits/proof_main.r1cs].
--circom-wasm The wasm file for the storage circuit [=data/circuits/proof_main.wasm].
--circom-zkey The zkey file for the storage circuit [=data/circuits/proof_main.zkey].
--circom-no-zkey Ignore the zkey file - use only for testing! [=false].
--proof-samples Number of samples to prove [=5].
--max-slot-depth The maximum depth of the slot tree [=32].
--max-dataset-depth The maximum depth of the dataset tree [=8].
--max-block-depth The maximum depth of the network block merkle tree [=5].
--max-cell-elements The maximum number of elements in a cell [=67].
Environment variables
In order to set a configuration option using environment variables, first find the desired CLI option and then transform it in the following way:
- prepend it with ARCHIVIST_
- make it uppercase
- replace - with _
For example, to configure --log-level, use ARCHIVIST_LOG_LEVEL as the environment variable name.
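For example, a minimal sketch of starting the node with a few options provided via environment variables (the variable names follow the transformation above):

export ARCHIVIST_DATA_DIR=datadir
export ARCHIVIST_DISC_PORT=8090
export ARCHIVIST_LOG_LEVEL=trace
archivist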
Configuration file
A TOML configuration file can also be used to set configuration values. Configuration option names and corresponding values are placed in the file, separated by =. Configuration option names can be obtained from the archivist --help command, and should not include the -- prefix. For example, a node's log level (--log-level) can be configured using TOML as follows:
log-level = "trace"For option, like bootstrap-node and listen-addrs which accept multiple values we can specify data as an array
listen-addrs = [
"/ip4/0.0.0.0/tcp/1234",
"/ip4/0.0.0.0/tcp/5678"
]
The Archivist node can then read the configuration from this file using the --config-file CLI parameter, like:
archivist --config-file=/path/to/your/config.toml
Please check Run as a service in Linux for a full example of a configuration file.
Auto-discovery
Archivist supports marketplace contract auto-discovery based on the chain id; this mapping is done in the source code. That way, we can skip the --marketplace-address argument and use it only to override a discovered value.
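For example, the discovered value can still be overridden by passing the address explicitly (a sketch; the value shown is a placeholder, not a real deployment):

archivist \
  --data-dir=datadir \
  persistence \
  --eth-provider=https://rpc.testnet.archivist.storage \
  --eth-private-key=eth.key \
  --marketplace-address=<deployed Marketplace contract address>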
Run
Basically, we can run Archivist in three different modes:
- Archivist node - useful for local testing/development and basic file sharing.
- Archivist node with marketplace support - you can share files and buy storage; this is the main mode and should be used by end users.
- Archivist storage node - should be used by storage providers or if you would like to sell your local storage.
We will also say a few words about the Archivist bootstrap node.
Using binary
Archivist node
We can run Archivist in the simplest way like the following:

archivist

But it will use a default data-dir value, so we can pass a custom one:

archivist --data-dir=datadir

This will run Archivist as an isolated instance. If we would like to join an existing network, it is required to pass a bootstrap node. We can pass multiple nodes as well:
archivist \
--data-dir=datadir \
--bootstrap-node=spr:CiUIAhIhA5mg11LZgFQ4XzIRb1T5xw9muFW1ALNKTijyKhQmvKYXEgIDARpJCicAJQgCEiEDmaDXUtmAVDhfMhFvVPnHD2a4VbUAs0pOKPIqFCa8phcQl-XFxQYaCwoJBE4vqKqRAnU6GgsKCQROL6iqkQJ1OipHMEUCIQDfzVYbN6A_O4i29e_FtDDUo7GJS3bkXRQtoteYbPSFtgIgcc8Kgj2ggVJyK16EY9xi4bY2lpTTeNIRjvslXSRdN5w \
--bootstrap-node=spr:CiUIAhIhAhmlZ1XaN44zPDuORyNJV8I79x2eSXt5-9AirVagKCAIEgIDARpJCicAJQgCEiECGaVnVdo3jjM8O45HI0lXwjv3HZ5Je3n70CKtVqAoIAgQ_OfFxQYaCwoJBAWhGBORAnVEGgsKCQQFoRgTkQJ1RCpHMEUCIQCgqSYPxyic9XmOcQYtJDKNprK_Uokz2UzjVZRnPYpOgQIgQ8m96ukov4XZG-j-XH53_vuoy3GkHuneUZ1Xe0luCxk
IMPORTANT
Make sure you are using a proper value for the network you would like to join.
Also, to make your Archivist node accessible to other network participants, it is required to specify a public IP address which can be used to access your node:
archivist \
--data-dir=datadir \
--bootstrap-node=spr:CiUIAhIhA5mg11LZgFQ4XzIRb1T5xw9muFW1ALNKTijyKhQmvKYXEgIDARpJCicAJQgCEiEDmaDXUtmAVDhfMhFvVPnHD2a4VbUAs0pOKPIqFCa8phcQl-XFxQYaCwoJBE4vqKqRAnU6GgsKCQROL6iqkQJ1OipHMEUCIQDfzVYbN6A_O4i29e_FtDDUo7GJS3bkXRQtoteYbPSFtgIgcc8Kgj2ggVJyK16EY9xi4bY2lpTTeNIRjvslXSRdN5w \
--nat=any
TIP
We can set the public IP using curl and an IP lookup service, like ip.archivist.storage.
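A minimal sketch of that approach, assuming the lookup service returns a bare IP address:

archivist \
  --data-dir=datadir \
  --nat=extip:$(curl -s https://ip.archivist.storage)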
After that, the node will announce itself using your public IP, the default UDP port (discovery) and a dynamic TCP port (data transfer), which can be adjusted in the following way:
archivist \
--data-dir=datadir \
--bootstrap-node=spr:CiUIAhIhA5mg11LZgFQ4XzIRb1T5xw9muFW1ALNKTijyKhQmvKYXEgIDARpJCicAJQgCEiEDmaDXUtmAVDhfMhFvVPnHD2a4VbUAs0pOKPIqFCa8phcQl-XFxQYaCwoJBE4vqKqRAnU6GgsKCQROL6iqkQJ1OipHMEUCIQDfzVYbN6A_O4i29e_FtDDUo7GJS3bkXRQtoteYbPSFtgIgcc8Kgj2ggVJyK16EY9xi4bY2lpTTeNIRjvslXSRdN5w \
--nat=any \
--disc-port=8090 \
--listen-addrs=/ip4/0.0.0.0/tcp/8070
In that way, the node will announce itself using the specified multiaddress, and we can check that via an API call:
curl -s localhost:8080/api/archivist/v1/debug/info | jq -r '.announceAddresses'

[
  "/ip4/<your public IP>/tcp/8070"
]

Basically, for P2P communication we should specify and configure two ports:
| # | Protocol | Function | CLI option | Example |
|---|---|---|---|---|
| 1 | UDP | Discovery | --disc-port | --disc-port=8090 |
| 2 | TCP | Transport | --listen-addrs | --listen-addrs=/ip4/0.0.0.0/tcp/8070 |
It is also required to set up port forwarding on your Internet router, to make your node accessible to other participants.
So, a fully working basic configuration will look like the following:
archivist \
--data-dir=datadir \
--bootstrap-node=spr:CiUIAhIhA5mg11LZgFQ4XzIRb1T5xw9muFW1ALNKTijyKhQmvKYXEgIDARpJCicAJQgCEiEDmaDXUtmAVDhfMhFvVPnHD2a4VbUAs0pOKPIqFCa8phcQl-XFxQYaCwoJBE4vqKqRAnU6GgsKCQROL6iqkQJ1OipHMEUCIQDfzVYbN6A_O4i29e_FtDDUo7GJS3bkXRQtoteYbPSFtgIgcc8Kgj2ggVJyK16EY9xi4bY2lpTTeNIRjvslXSRdN5w \
--nat=any \
--disc-port=8090 \
--listen-addrs=/ip4/0.0.0.0/tcp/8070 \
--api-cors-origin="*"After node is up and running and port-forwarding configurations was done, we should be able to Upload a file/Download a file in the network using API.
You can also use the Archivist App UI for file upload/download.
And to be able to purchase storage, we should run an Archivist node with marketplace support.
Archivist node with marketplace support
Marketplace support makes it possible to purchase storage in the Archivist network. Basically, we just add the persistence sub-command and the required CLI options to the previous run.
For daily use, we should consider running a local blockchain node for the network you would like to join. That process is described in the Join Archivist Testnet guide, but for a quick start we can use a public RPC endpoint.
Create a file with an Ethereum private key and set proper permissions:
CAUTION
Please use the key generation service for demo purposes only.
response=$(curl -s https://key.archivist.storage)
awk -F ': ' '/private/ {print $2}' <<<"${response}" > eth.key
awk -F ': ' '/address/ {print $2}' <<<"${response}" > eth.address
chmod 600 eth.key

Show your Ethereum address:
cat eth.address

0x412665aFAb17768cd9aACE6E00537Cc6D5524Da9

Fill up your Ethereum address with ETH and tokens based on the network you would like to join.
Specify bootstrap nodes based on the network you would like to join.
Run the node:
archivist \
  --data-dir=datadir \
  --bootstrap-node=spr:CiUIAhIhA5mg11LZgFQ4XzIRb1T5xw9muFW1ALNKTijyKhQmvKYXEgIDARpJCicAJQgCEiEDmaDXUtmAVDhfMhFvVPnHD2a4VbUAs0pOKPIqFCa8phcQl-XFxQYaCwoJBE4vqKqRAnU6GgsKCQROL6iqkQJ1OipHMEUCIQDfzVYbN6A_O4i29e_FtDDUo7GJS3bkXRQtoteYbPSFtgIgcc8Kgj2ggVJyK16EY9xi4bY2lpTTeNIRjvslXSRdN5w \
  --nat=any \
  --disc-port=8090 \
  --listen-addrs=/ip4/0.0.0.0/tcp/8070 \
  --api-cors-origin="*" \
  persistence \
  --eth-provider=https://rpc.testnet.archivist.storage \
  --eth-private-key=eth.key
NOTE
We can skip the --marketplace-address argument and rely on marketplace contract Auto-discovery.
After the node is up and running, and your address has funds, you should be able to Purchase storage using the API.
You can also use the Archivist App UI for storage purchase.
Archivist storage node
An Archivist storage node should be run by storage providers, or in case you would like to sell your local storage.
For that, in addition to the Archivist node with marketplace support, we should use the prover sub-command and the required CLI options.
That sub-command makes Archivist listen for proof requests on the blockchain and answer them. To compute an answer for a proof request, Archivist uses the stored data and circuit files generated by the code in the archivist-storage-proofs-circuits repository.
Every network uses its own generated set of files, which are stored in the archivist-contracts repository and also uploaded to the CDN. The hash of the file set is also known by the marketplace smart contract.
To download the circuit files and make them available to the Archivist app, we have a stand-alone utility - cirdl. It can be compiled from the sources or downloaded from the GitHub release page.
Create an Ethereum key file, for example:
CAUTION
Please use the key generation service for demo purposes only.
response=$(curl -s https://key.archivist.storage)
awk -F ': ' '/private/ {print $2}' <<<"${response}" > eth.key
awk -F ': ' '/address/ {print $2}' <<<"${response}" > eth.address
chmod 600 eth.key

Show your Ethereum address:
cat eth.address

0x412665aFAb17768cd9aACE6E00537Cc6D5524Da9

To download the circuit files, we should pass the directory, the RPC endpoint and, optionally, the marketplace address to the circuit downloader:
# Create circuit files folder
mkdir -p datadir/circuits
chmod 700 datadir/circuits

# Download circuit files
cirdl \
  datadir/circuits \
  https://rpc.testnet.archivist.storage

Start Archivist storage node
archivist \
  --data-dir=datadir \
  --bootstrap-node=spr:CiUIAhIhA5mg11LZgFQ4XzIRb1T5xw9muFW1ALNKTijyKhQmvKYXEgIDARpJCicAJQgCEiEDmaDXUtmAVDhfMhFvVPnHD2a4VbUAs0pOKPIqFCa8phcQl-XFxQYaCwoJBE4vqKqRAnU6GgsKCQROL6iqkQJ1OipHMEUCIQDfzVYbN6A_O4i29e_FtDDUo7GJS3bkXRQtoteYbPSFtgIgcc8Kgj2ggVJyK16EY9xi4bY2lpTTeNIRjvslXSRdN5w \
  --nat=any \
  --disc-port=8090 \
  --listen-addrs=/ip4/0.0.0.0/tcp/8070 \
  persistence \
  --eth-provider=https://rpc.testnet.archivist.storage \
  --eth-private-key=eth.key \
  prover \
  --circuit-dir=datadir/circuits
NOTE
You would need to pass bootstrap nodes and a blockchain RPC endpoint based on the network you would like to join.
NOTE
We can skip the --marketplace-address argument and rely on marketplace contract Auto-discovery.
After the node is up and running, and your address has funds, you should be able to sell storage using the API.
You can also use the Archivist App UI to sell storage.
Archivist bootstrap node
Bootstrap nodes are used only to help peers with initial node discovery, so we can run Archivist with just some basic options:
archivist \
--data-dir=datadir \
--nat=any \
--disc-port=8090

To get the bootstrap node SPR we can use an API call:
curl -s localhost:8080/api/archivist/v1/debug/info | jq -r '.spr'

spr:CiUIAhIhApd79-AxPqwRDmu7Pk-berTDtoIoMz0ovKjo85Tz8CUdEgIDARo8CicAJQgCEiECl3v34DE-rBEOa7s-T5t6tMO2gigzPSi8qOjzlPPwJR0Qjv_WtwYaCwoJBFxzjbKRAh-aKkYwRAIgCiTq5jBTaJJb6lUxN-0uNCj8lkV9AGY682D21kIAMiICIE1yxrjbDdiSCiARnS7I2zqJpXC2hOvjB4JoL9SAAk67

That SPR record can then be used by other peers for initial node discovery.
We should keep in mind some important things about the SPR record (see ENR):
- It uses the node's public IP, discovery port (--disc-port) and private key (--net-privkey) for record creation
- The specified data is signed on each run, so the record will change, but it still contains the specified node data when decoded
- You can decode it by passing it to the Archivist node at run time with --log-level=trace
For a bootstrap node, it is required to forward just the discovery port on your Internet router.
Run as a service in Linux
We can run Archivist as a service via systemd using the following steps.
Create a user for Archivist:

sudo useradd \
  --system \
  --home-dir /opt/archivist \
  --shell /usr/sbin/nologin \
  archivist

In case you would like to run commands as the created user, you can do it like the following:

sudo -u archivist ls -la /opt/archivist

Install Archivist using a script or build from sources
# archivist with cirdl
curl -s https://get.archivist.storage/install.sh | INSTALL_DIR=/usr/local/bin CIRDL=true bash

Create directories
sudo mkdir -p /opt/archivist/data
sudo mkdir -p /opt/archivist/logs

Create a configuration file
sudo vi /opt/archivist/archivist.conf

data-dir = "/opt/archivist/data"
listen-addrs = ["/ip4/0.0.0.0/tcp/8070"]
nat = "extip:<Public IP>"
disc-port = 8090
api-port = 8080
bootstrap-node = [
  "spr:CiUIAhIhA5mg11LZgFQ4XzIRb1T5xw9muFW1ALNKTijyKhQmvKYXEgIDARpJCicAJQgCEiEDmaDXUtmAVDhfMhFvVPnHD2a4VbUAs0pOKPIqFCa8phcQl-XFxQYaCwoJBE4vqKqRAnU6GgsKCQROL6iqkQJ1OipHMEUCIQDfzVYbN6A_O4i29e_FtDDUo7GJS3bkXRQtoteYbPSFtgIgcc8Kgj2ggVJyK16EY9xi4bY2lpTTeNIRjvslXSRdN5w",
  "spr:CiUIAhIhAhmlZ1XaN44zPDuORyNJV8I79x2eSXt5-9AirVagKCAIEgIDARpJCicAJQgCEiECGaVnVdo3jjM8O45HI0lXwjv3HZ5Je3n70CKtVqAoIAgQ_OfFxQYaCwoJBAWhGBORAnVEGgsKCQQFoRgTkQJ1RCpHMEUCIQCgqSYPxyic9XmOcQYtJDKNprK_Uokz2UzjVZRnPYpOgQIgQ8m96ukov4XZG-j-XH53_vuoy3GkHuneUZ1Xe0luCxk",
  "spr:CiUIAhIhAnCcHA-aqMx--nwf8cJyZKJavc-PuYNKKROoW_5Q1JcREgIDARpJCicAJQgCEiECcJwcD5qozH76fB_xwnJkolq9z4-5g0opE6hb_lDUlxEQje7FxQYaCwoJBAXfFdCRAnVOGgsKCQQF3xXQkQJ1TipGMEQCIA22oUekTsDAtsIOyrgtkG702FJPn8Xd-ifEVTUSuu7fAiBv9YyAg9iKuYBhgZKsZBHYfX8l0sXvm80s6U__EGGY-g"
]
storage-quota = "8gb"
block-ttl = "24h"
log-level = "info"

Make sure to use bootstrap nodes for the network you would like to join, update the nat variable with the node's public IP, and adjust other settings to your needs.

Change folders ownership and permissions
sudo chown -R archivist:archivist /opt/archivist

Create systemd unit file
sudo vi /lib/systemd/system/archivist.service

[Unit]
Description=Archivist service
Documentation=https://docs.archivist.storage
After=local-fs.target network-online.target

[Service]
MemorySwapMax=0
TimeoutStartSec=infinity
Type=exec
User=archivist
Group=archivist
StateDirectory=archivist
ExecStart=/usr/local/bin/archivist --config-file="/opt/archivist/archivist.conf"
Restart=always
RestartSec=3
StandardOutput=append:/opt/archivist/logs/archivist.log
StandardError=append:/opt/archivist/logs/archivist.log

[Install]
WantedBy=multi-user.target

Check man systemd, man systemd.service and man systemd.directives for additional details.

Enable and start Archivist service
sudo systemctl enable archivist
sudo systemctl start archivist

Check service status
sudo systemctl status archivist

Enable logs rotation using logrotate
sudo vi /etc/logrotate.d/archivist

/opt/archivist/logs/*.log {
  daily
  missingok
  rotate 5
  copytruncate
  nocreate
  nomail
  dateext
  dateyesterday
}

Check the logs
tail -f /opt/archivist/logs/archivist.log
Run as a service in Windows
This functionality is not supported yet 🚧
Using Docker
We also ship Archivist in Docker containers, which can be run on amd64 and arm64 platforms.
Docker entrypoint
The Docker entrypoint supports some additional options, which can be used for easier configuration:
- ENV_PATH - path to a file in env=value format, which will be sourced and made available to Archivist at run time. That is useful for Kubernetes Pods configuration.
- NAT_IP_AUTO - when set to true, will set the ARCHIVIST_NAT variable to the container's internal IP address. It is also useful for Kubernetes Pods configuration, when we perform automated tests.
- NAT_PUBLIC_IP_AUTO - used to set ARCHIVIST_NAT to the public IP address using lookup services, like ip.archivist.storage. Can be used for Docker/Kubernetes to set the public IP in auto mode.
- ETH_PRIVATE_KEY - can be used to pass the Ethereum private key, which will be saved and passed as the value of the ARCHIVIST_ETH_PRIVATE_KEY variable. It should be considered an unsafe option and used for testing purposes only.
- When the prover sub-command is set, the entrypoint will run the cirdl tool to download the ceremony files required by the Archivist storage node.
- BOOTSTRAP_NODE_URL - an Archivist node API URL in the form http://bootstrap:8080, used to get its SPR as a bootstrap node. That is useful for Docker and Kubernetes configuration.
- NETWORK - a helper variable to simplify joining a specific network. It helps to automate the BOOTSTRAP_NODE_FROM_URL variable.
- BOOTSTRAP_NODE_FROM_URL - can be used to pass SPR nodes from a URL like spr.archivist.storage/testnet.
- MARKETPLACE_ADDRESS_FROM_URL - can be used to pass a Marketplace contract address from a URL like marketplace.archivist.storage/testnet/latest.
An example that uses some of these variables is sketched below.
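For illustration, a minimal sketch of a container start that relies on these entrypoint variables instead of CLI flags (the values reuse the Docker Compose example later in this guide; adjust them for your network):

docker run \
  --rm \
  -v $PWD/datadir:/datadir \
  -p 8070:8070 -p 8080:8080 -p 8090:8090/udp \
  -e ARCHIVIST_DATA_DIR=/datadir \
  -e ARCHIVIST_API_BINDADDR=0.0.0.0 \
  -e NAT_PUBLIC_IP_AUTO=https://ip.archivist.storage \
  -e BOOTSTRAP_NODE_FROM_URL=https://spr.archivist.storage/testnet \
  durabilitylabs/archivist-node:latest \
  archivist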
Docker network
When running Archivist using Docker with the default bridge network, it will create a double NAT:
- One on the Docker side
- Second on your Internet router
If your Internet router does not support Full Cone NAT, you might have an issue: peer discovery and data transport will not work, or might work unexpectedly.
In that case, we should consider the following solutions:
- Use host network for Docker, which is supported only on Linux (see the sketch after this list)
- Run Using binary
- Use VM/VPS in the Cloud to run Docker with bridge or host network
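A minimal sketch of the host-network approach on Linux; with --network host the container shares the host's network stack, so no port mapping is needed:

docker run \
  --rm \
  --network host \
  -v $PWD/datadir:/datadir \
  durabilitylabs/archivist-node:latest \
  archivist \
  --data-dir=/datadir \
  --nat=any \
  --disc-port=8090 \
  --listen-addrs=/ip4/0.0.0.0/tcp/8070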
Run using Docker
We can basically use the same options we used for the binary; additionally, it is required to mount volumes and map the ports.
Create an Ethereum key file, for example:
CAUTION
Please use the key generation service for demo purposes only.
response=$(curl -s https://key.archivist.storage)
awk -F ': ' '/private/ {print $2}' <<<"${response}" > eth.key
awk -F ': ' '/address/ {print $2}' <<<"${response}" > eth.address
chmod 600 eth.key

Show your Ethereum address:
cat eth.address

0x412665aFAb17768cd9aACE6E00537Cc6D5524Da9

Run Archivist:
docker run \
--rm \
-v $PWD/datadir:/datadir \
-v $PWD/eth.key:/opt/eth.key \
-p 8070:8070 \
-p 8080:8080 \
-p 8090:8090/udp \
durabilitylabs/archivist-node:latest \
archivist \
--data-dir=/datadir \
--bootstrap-node=spr:CiUIAhIhA5mg11LZgFQ4XzIRb1T5xw9muFW1ALNKTijyKhQmvKYXEgIDARpJCicAJQgCEiEDmaDXUtmAVDhfMhFvVPnHD2a4VbUAs0pOKPIqFCa8phcQl-XFxQYaCwoJBE4vqKqRAnU6GgsKCQROL6iqkQJ1OipHMEUCIQDfzVYbN6A_O4i29e_FtDDUo7GJS3bkXRQtoteYbPSFtgIgcc8Kgj2ggVJyK16EY9xi4bY2lpTTeNIRjvslXSRdN5w \
--nat=any \
--disc-port=8090 \
--listen-addrs=/ip4/0.0.0.0/tcp/8070 \
--api-cors-origin="*" \
--api-bindaddr=0.0.0.0 \
--api-port=8080 \
persistence \
--eth-provider=https://rpc.testnet.archivist.storage \
--eth-private-key=/opt/eth.key \
prover \
--circuit-dir=/datadir/circuits

NOTE
You would need to pass bootstrap nodes and a blockchain RPC endpoint based on the network you would like to join.
NOTE
We can skip the --marketplace-address argument and rely on marketplace contract Auto-discovery.
Using Docker Compose
For Docker Compose, it is more suitable to use environment variables for Archivist configuration, and we can reuse the commands from the Docker example above.
Create an Ethereum key file, for example:
CAUTION
Please use the key generation service for demo purposes only.
response=$(curl -s https://key.archivist.storage)
awk -F ': ' '/private/ {print $2}' <<<"${response}" > eth.key
awk -F ': ' '/address/ {print $2}' <<<"${response}" > eth.address
chmod 600 eth.key

Show your Ethereum address:
cat eth.address

0x412665aFAb17768cd9aACE6E00537Cc6D5524Da9

Create a docker-compose.yaml file:

services:
  archivist:
    image: durabilitylabs/archivist-node:latest
    container_name: archivist
    command:
      - archivist
      - persistence
      - prover
    environment:
      - ARCHIVIST_DATA_DIR=/datadir
      - NAT_PUBLIC_IP_AUTO=https://ip.archivist.storage
      - ARCHIVIST_DISC_PORT=8090
      - ARCHIVIST_LISTEN_ADDRS=/ip4/0.0.0.0/tcp/8070
      - ARCHIVIST_API_CORS_ORIGIN="*"
      - ARCHIVIST_API_PORT=8080
      - ARCHIVIST_API_BINDADDR=0.0.0.0
      - ARCHIVIST_ETH_PROVIDER=https://rpc.testnet.archivist.storage
      - ARCHIVIST_ETH_PRIVATE_KEY=/opt/eth.key
      - ARCHIVIST_CIRCUIT_DIR=/datadir/circuits
      - BOOTSTRAP_NODE_FROM_URL=https://spr.archivist.storage/testnet
    ports:
      - 8080:8080/tcp # API
      - 8090:8090/udp # Discovery
      - 8070:8070/tcp # Transport
    volumes:
      - ./datadir:/datadir
      - ./eth.key:/opt/eth.key
    logging:
      driver: json-file
      options:
        max-size: 100m
        max-file: 5

Run Archivist:
docker compose up
NOTE
You would need to pass bootstrap nodes and a blockchain RPC endpoint based on the network you would like to join.
NOTE
We can skip the ARCHIVIST_MARKETPLACE_ADDRESS variable and rely on marketplace contract Auto-discovery.
On Kubernetes
Helm chart code is available in the helm-charts repository, but the chart has not been published yet.
How-tos
NAT Configuration
Use the --nat CLI flag to specify how your archivist node should handle NAT traversal. Below are the available options:
- any (default): This option will automatically try to detect your public IP by checking the routing table or using UPnP/PMP NAT traversal techniques. If successful, it will use the detected public IP and port for the announce address.
- upnp: This option exclusively uses UPnP to detect the public IP and create a port mapping entry, if your device supports UPnP.
- pmp: This option uses only NAT-PMP to detect the public IP and create a port mapping entry, if your device supports NAT-PMP.
- extip:[Your Public IP]: Use this option if you want to manually specify an external IP address and port for the announce address (see the sketch below). When selecting this option, you'll need to configure port forwarding on your router to ensure that incoming traffic is directed to the correct internal IP and port.
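For example, a minimal sketch of manually announcing a public address (203.0.113.10 is a documentation placeholder, replace it with your public IP):

archivist \
  --data-dir=datadir \
  --nat=extip:203.0.113.10 \
  --disc-port=8090 \
  --listen-addrs=/ip4/0.0.0.0/tcp/8070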
Port Forwarding
If you're running on a private network, you'll need to set up port forwarding to ensure seamless communication between the archivist node and its peers. It's also recommended to configure appropriate firewall rules for TCP and UDP traffic (see the sketch after the list below). While the specific steps required vary based on your router, they can be summarised as follows:
- Find your public IP address by either visiting ip-archivist or running curl ip.archivist.storage
- Identify your private IP address
- Access your router's settings by entering its IP address (typically http://192.168.1.1) in your web browser
- Sign in with administrator credentials and locate the port forwarding settings
- Set up the discovery port forwarding rule with these settings:
- External Port: 8090
- Internal Port: 8090
- Protocol: UDP
- IP Address: Your device's private IP address
- Set up the libp2p port forwarding rule with these settings:
- External Port: 8070
- Internal Port: 8070
- Protocol: TCP
- IP Address: Your device's private IP address
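If the machine also runs its own firewall, the matching rules might look like the following (a sketch using ufw on Linux, assuming the default ports from the rules above):

# allow the discovery (UDP) and transport (TCP) ports
sudo ufw allow 8090/udp
sudo ufw allow 8070/tcp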
Determine your private IP
To determine your private IP address, run the appropriate command for your OS:
Linux:
ip addr show | grep "inet " | grep -v 127.0.0.1

Windows:
ipconfig | findstr /i "IPv4 Address"

MacOS:
ifconfig | grep "inet " | grep -v 127.0.0.1
