NOTE: This doc is intended for administrators of Bigtrace services, not Bigtrace users. It is also intended for non-Googlers; Googlers should look at go/bigtrace instead.
There are multiple ways to deploy Bigtrace on a single machine:

1. Manually running the executables
2. Docker Compose
3. Minikube cluster
NOTE: Options 1 and 2 are intended for development purposes and are not recommended for production. For production, follow the instructions in Deploying Bigtrace on Kubernetes instead.
To build Bigtrace, first follow the Quickstart setup and building steps, but run `tools/install-build-deps --grpc` in order to install the required dependencies for Bigtrace and gRPC.
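A minimal sketch of the prerequisite steps might look like the following; the config-generation step is an assumption based on the Quickstart, so use whichever commands the Quickstart prescribes for your platform:

```bash
# Install build dependencies, including the extra gRPC dependencies
# required by Bigtrace.
tools/install-build-deps --grpc

# Generate build configurations (assumed step from the Quickstart;
# this creates configs such as out/linux_clang_release).
tools/setup_all_configs.py
```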
To manually run Bigtrace locally with the executables, first build the Orchestrator and Worker binaries:

```bash
tools/ninja -C out/[BUILD] orchestrator_main
tools/ninja -C out/[BUILD] worker_main
```
Then run the Orchestrator and Worker executables with the appropriate command-line arguments:

```bash
./out/[BUILD]/orchestrator_main [args]
./out/[BUILD]/worker_main [args]
```
For example, the following creates a service with an Orchestrator and three Workers which can be interacted with locally through the Python API (the Orchestrator and Worker processes are long-running, so run each in its own terminal or in the background):

```bash
tools/ninja -C out/linux_clang_release orchestrator_main
tools/ninja -C out/linux_clang_release worker_main
./out/linux_clang_release/orchestrator_main -w "127.0.0.1" -p "5052" -n "3"
./out/linux_clang_release/worker_main --socket="127.0.0.1:5052"
./out/linux_clang_release/worker_main --socket="127.0.0.1:5053"
./out/linux_clang_release/worker_main --socket="127.0.0.1:5054"
```
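As a quick sanity check that the service came up, you can probe the ports. This is a sketch only: ports 5052-5054 are the Worker sockets started above, while the Orchestrator port 5051 is an assumption taken from its use elsewhere in this doc.

```bash
# Probe the assumed Orchestrator port (5051) and the three Worker ports
# (5052-5054) started above to confirm each process accepts connections.
for port in 5051 5052 5053 5054; do
  nc -z 127.0.0.1 "$port" && echo "port $port is listening"
done
```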
To allow testing of gRPC without the overhead of Kubernetes, Docker Compose can be used. This builds the Dockerfiles specified in infra/bigtrace/docker and creates containerised instances of the Orchestrator and the specified set of Worker replicas.
```bash
cd infra/bigtrace/docker
docker compose up
# OR, if using the standalone docker-compose binary:
docker-compose up
```
This will build and start the Workers (three by default) and the Orchestrator, as specified in the compose.yaml.
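Once the containers are running, the usual Docker Compose commands can be used to inspect them; for example (any specific service names would come from the compose.yaml and are not assumed here):

```bash
# List the Orchestrator and Worker containers started by compose.
docker compose ps

# Follow the combined logs of all services.
docker compose logs -f
```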
A minikube cluster can be used to emulate the Kubernetes cluster setup on a local machine. This can be created with the script `tools/setup_minikube_cluster.sh`.
This starts a minikube cluster, builds the Orchestrator and Worker images, and deploys them on the cluster. The service can then be interacted with through a client such as the Python API, using the minikube IP with port 5051 (`$(minikube ip):5051`) as the Orchestrator service address.
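For example, a minimal sketch of bringing up the local cluster and deriving the Orchestrator address for a client (port 5051 as noted above):

```bash
# Start the minikube cluster, build the Orchestrator/Worker images and
# deploy them.
tools/setup_minikube_cluster.sh

# Compose the Orchestrator service address from the minikube IP and port 5051.
ORCHESTRATOR_ADDR="$(minikube ip):5051"
echo "Connect your client (e.g. the Python API) to ${ORCHESTRATOR_ADDR}"
```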