mirror of https://github.com/autistic-symposium/blockchain-data-engineering-toolkit.git

commit 45872d1dcc (parent 95da45b3da): 💾

3 changed files with 39 additions and 13 deletions
@@ -12,7 +12,7 @@ TOKEN_DECIMALS =
 #########################
 
 MAX_RETRIES = 4
-SIZE_CHUNK_NEXT = 50000
+SIZE_CHUNK_NEXT = 5000
 
 
 #########################
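for context on this change, here is a minimal sketch (not the project's actual scanner code) of how an event scanner typically walks the chain in windows of `SIZE_CHUNK_NEXT` blocks and retries each window up to `MAX_RETRIES` times; `fetch_logs` below is a hypothetical placeholder for an `eth_getLogs` RPC call:

```python
# illustrative sketch only: chunked scanning with retries, not this repo's code.
import time

MAX_RETRIES = 4
SIZE_CHUNK_NEXT = 5000


def fetch_logs(start_block: int, end_block: int) -> list:
    """Hypothetical placeholder for an eth_getLogs RPC call over a block range."""
    raise NotImplementedError


def scan_range(start_block: int, end_block: int) -> list:
    """Collect logs for [start_block, end_block] in chunks of SIZE_CHUNK_NEXT."""
    logs = []
    chunk_start = start_block
    while chunk_start <= end_block:
        chunk_end = min(chunk_start + SIZE_CHUNK_NEXT - 1, end_block)
        for attempt in range(MAX_RETRIES):
            try:
                logs.extend(fetch_logs(chunk_start, chunk_end))
                break
            except Exception:
                time.sleep(2 ** attempt)  # simple exponential backoff between retries
        else:
            raise RuntimeError(f'chunk {chunk_start}-{chunk_end} failed after {MAX_RETRIES} retries')
        chunk_start = chunk_end + 1
    return logs
```

a smaller chunk keeps each response under typical provider limits, at the cost of more requests.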
@@ -1,12 +1,21 @@
-## token scanner api and cli
+## 🛠🪙 token scanner api and cli
 
+<br>
+
+##### 👉 this project implements a cli tool that indexes transfer events for a particular token, and is deployed to a restful api for fast balance and ownership statistics retrieval. this is the first step for training machine learning models on the chains (*e.g.*, high-frequency trading with deep learning).
+
+##### 📚 more details can be found in my mirror post, **[quant #3: building a scalable event scanner for ethereum](https://mirror.xyz/steinkirch.eth/vSF18xcLyfXLIWwxjreRa3I_XskwgnjSc6pScegNJWI)**.
+
+<br>
+
+---
+
+### setting up
+
 <br>
 
 #### installing dependencies
 
-create a venv, either using virtualenv, pipenv, or poetry.
-
 because of some of the dependencies in this code, we will be developing on a python3.9 environment (install here if you don’t have that version on disk):
 
 ```
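as a quick sanity check before installing the requirements, something like the following confirms the interpreter matches the python3.9 requirement mentioned above (illustrative only, not part of the repo):

```python
# illustrative only: confirm the active interpreter is python 3.9 before installing.
import sys

if sys.version_info[:2] != (3, 9):
    raise SystemExit(f'expected python 3.9, found {sys.version.split()[0]}')
```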
@@ -18,9 +27,9 @@ pip3 install -r requirements.txt
 
 <br>
 
-#### add environment variables
+#### adding environment variables
 
-now, create an .env file and add an RPC_PROVIDER_URL to connect to ethereum mainnet nodes (you can pick from any of this list of nodes as a service):
+create a `.env` file and add an `RPC_PROVIDER_URL` to connect to ethereum mainnet nodes (for example, from [this list](https://ethereumnodes.com/)):
 
 ```
 cp .env.example .env
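for reference, this is one common way the `RPC_PROVIDER_URL` value can be read at runtime; it assumes python-dotenv and is only a sketch, so the project's actual loading code may differ:

```python
# illustrative only: load RPC_PROVIDER_URL from the .env file (assumes python-dotenv).
import os

from dotenv import load_dotenv

load_dotenv()  # copies key=value pairs from .env into the process environment

RPC_PROVIDER_URL = os.environ.get('RPC_PROVIDER_URL')
if not RPC_PROVIDER_URL:
    raise SystemExit('please set RPC_PROVIDER_URL in your .env file')
```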
@@ -33,24 +42,40 @@ vim .env
 
 ```
 make install
-indexer -h
 ```
 
 <br>
 
-#### deploying on production
+----
 
-we use vercel to deploy this app at .
+### running
 
-to deploy new changes, first install vercel:
+<br>
 
 ```
-yarn
+indexer -h
 ```
 
-then run:
+<br>
+
+---
+
+### development
+
+<br>
+
+#### deploying in production
+
+we use vercel to deploy this app:
 
 ```
 vercel login
 vercel .
 ```
+
+<br>
@@ -117,6 +117,7 @@ class TokenIndexer:
     def _process_logs(self, logs: list) -> dict:
         """Process the logs and return a dictionary with the results."""
 
+        log_info(f'Processing {len(logs)} logs...')
         processed_logs = defaultdict()
 
         try:
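for readers who want to see what this kind of log processing looks like end to end, here is a minimal, self-contained sketch of aggregating decoded erc20 transfer events into per-address balances; it is illustrative only and not the `_process_logs` implementation from this repository (the field names are assumptions):

```python
# illustrative only: aggregate decoded Transfer events into address balances.
# this is not the repository's _process_logs; the field names are assumptions.
from collections import defaultdict


def process_transfers(transfers: list) -> dict:
    """Each transfer is assumed to be a dict with 'from', 'to', and 'value' keys."""
    balances = defaultdict(int)  # int factory so unseen addresses start at 0
    for transfer in transfers:
        balances[transfer['from']] -= transfer['value']
        balances[transfer['to']] += transfer['value']
    return dict(balances)


if __name__ == '__main__':
    print(process_transfers([
        {'from': '0xaaa', 'to': '0xbbb', 'value': 10},
        {'from': '0xbbb', 'to': '0xccc', 'value': 4},
    ]))
```

note that `defaultdict(int)` gives every new address a zero starting balance, whereas `defaultdict()` with no factory behaves like a plain dict.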