RTB4FREE - Demo Docker Swarm All Layers

Last updated: April 18, 2017

Overview

Because this is a Docker deployment, you must have a working knowledge of Docker. You need Docker and docker-compose installed. For more information on Docker, look here.
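
To confirm the prerequisites, you can check the installed versions (any reasonably recent release should work):

docker --version
docker-compose --version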

You can deploy an entire, working RTB stack on your Docker Swarm infrastructure with the following steps, which deploy all of the Docker containers defined in the compose file below.

  1. Create a Docker swarm environment.
  2. Copy the Docker compose file from here to your swarm.
  3. Deploy the Docker stack using the compose file.


The first step in creating a Docker swarm is to install docker-ce on the server you wish to deploy to. Then use the standard Docker command to initialize the swarm:

docker swarm init

If you have additional servers you want to add to the swarm as workers, use the "docker swarm join" command.
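
For example, docker swarm init prints a ready-made join command for each worker; the token and manager address below are placeholders for the values printed by your own init output:

docker swarm join --token <worker_token> <manager_ip>:2377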

Copy the following compose file to your server.

#
# Contains the full stack of the RTB4FREE stack
#
version: "3"
services:

  zookeeper:
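    # single-node ZooKeeper used to coordinate the Kafka broker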
    image: "zookeeper"

  kafka:
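    # Kafka broker carrying bid, win, and log traffic between the bidder and the ELK stack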
    image: "ches/kafka"
    environment:
      ZOOKEEPER_IP: "zookeeper"
    depends_on:
      - zookeeper

  zerospike:
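    # Zerospike, the pub/sub and shared-state service used by the bidder and crosstalk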
    image: "jacamars/zerospike:v1"
    environment:
      BROKERLIST: "kafka:9092"
    depends_on:
      - kafka
    command: bash -c "sleep 5 && ./wait-for-it.sh kafka:9092 -t 120 && sleep 1; ./zerospike"

  bidder:
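    # the RTB4FREE bidder; exchange traffic arrives on host port 80 (container port 8080)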
    image: "jacamars/rtb4free:v1"
    environment:
      BROKERLIST: "kafka:9092"
      PUBSUB: "zerospike"
      EXTERNAL: "http://localhost"
      ADMINPORT: "0"
      ACCOUNTING: "accountingsystem"
      FREQGOV: "false"
      INDEXPAGE: "/index.html"
    ports:
      - "80:8080"
    depends_on:
      - kafka
      - zerospike
    command: bash -c "sleep 5 && ./wait-for-it.sh kafka:9092 -t 120 && ./wait-for-it.sh zerospike:6000 -t 120 && sleep 1; ./rtb4free"

  crosstalk:
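    # Crosstalk, which links the MySQL campaign database to the bidder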
    image: "jacamars/crosstalk:v1"
    environment:
      REGION: "US"
      GHOST: "elastic1"
      AHOST: "elastic1"
      BROKERLIST: "kafka:9092"
      PUBSUB: "zerospike"
      CONTROL: "8100"
      JDBC: "jdbc:mysql://db/rtb4free?user=ben&password=test"
    depends_on:
      - kafka
      - zerospike

  db:
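    # MySQL, preloaded with the RTB4FREE schema and a demo campaign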
    image: ploh/mysqlrtb
    environment:
      - MYSQL_ROOT_PASSWORD=rtb4free
      - MYSQL_DATABASE=rtb4free
      - MYSQL_USER=ben
      - MYSQL_PASSWORD=test

  web:
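    # the Campaign Manager web UI (Rails), served on port 3000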
    image: ploh/rtbadmin_open
    command: bash -c "./wait_for_it.sh db:3306 --timeout=120; bundle exec rails s -p 3000 -b '0.0.0.0' -e development"
    ports:
      - "3000:3000"
    environment:
      - CUSTOMER_NAME=RTB4FREE
      - RTB4FREE_DATABASE_HOST=db
      - RTB4FREE_DATABASE_PORT=3306
      - RTB4FREE_DATABASE_USERNAME=ben
      - RTB4FREE_DATABASE_PASSWORD=test
      - RTB4FREE_DATABASE_NAME=rtb4free
      - ELASTICSEARCH_ENABLE=true
      - ELASTICSEARCH_HOST=elastic1:9200
      - ELASTICSEARCH_KIBANA_URL=http://kibana:5601/
      - RTB_CROSSTALK_REGION_HOSTS={"US" => "crosstalk"}

  elastic1:
    image: ploh/elastic_pwd    # built from docker.elastic.co/elasticsearch/elasticsearch:6.2.2, with demo data added
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"

  logstash1:
    image: ploh/logstash_pwd
    environment:
      - "XPACK_MONITORING_ELASTICSEARCH_URL=http://elastic1:9200"
      - "XPACK_MONITORING_ENABLED=true"

  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.2
    environment:
      - SERVER_NAME=elastic1
      - ELASTICSEARCH_URL=http://elastic1:9200
    ports:
      - "5601:5601"

  simulator:
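    # traffic simulator that generates bid requests, wins, pixels, and clicks against the bidder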
    image: "jacamars/rtb4free:v1"
    environment:
      BIDDER: "bidder:8080"
      WIN:    "10"
      PIXEL:  "95"
      CLICK:  "2"
      SLEEP:  "100"
    command: bash -c "./wait-for-it.sh bidder:8080 -t 120 && sleep 60;  ./load-elastic -host $$BIDDER -win $$WIN -pixel $$PIXEL -click $$CLICK -sleep $$SLEEP"

With the compose file on your server (you can also download it from here), deploy the stack:

docker stack deploy -c compose_file rtb4free
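
Docker prefixes each service name with the stack name (for example rtb4free_bidder), so you can watch the stack come up and, on recent Docker releases, follow an individual service's output:

docker stack services rtb4free
docker service logs rtb4free_bidder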

Playing with RTB4FREE

The RTB4FREE stack takes a few minutes to start all components. Once it is up, you can take the following actions.

  1. See simulated requests, bids, and wins on the Kibana view.
    • Open a browser to http://<server_ip>:5601
    • Open the Discover view and select the "request" index to see the incoming simulated requests. You can do the same for the bids, wins, status, and rtblogs indexes.
    • Open the Dashboard view to see the sample visualizations.
  2. Define or modify campaigns using the Campaign Manager application.
    • Open a browser to http://<server_ip>:3000
    • Log in using id "demo@rtb4free.com" and password "rtb4free".
    • Click on Campaigns on the left menu to see how the demo campaign is configured.
    • Create your own campaigns. As soon as you add one to the database, you should see bidding activity for that campaign.
    • You can return to the Kibana view to see each bid record for your new campaign by searching the "adid" field, which matches the database campaign ID.
  3. If you have your own exchange, connect it to the demo swarm to see how RTB4FREE performs.
    • Point your exchange traffic to http://<server_ip>/rtb/nexage (port 80); a curl sketch after this list shows a quick connectivity check.
    • Use the Kibana Discover view, requests index, to see the incoming traffic.
    • If you wish, you can first remove the simulator traffic by deleting the "simulator" service section from the compose file and re-deploying the stack. This will make it easier to view your new traffic feed.
    • Create new campaigns to match your SSP traffic and monitor the performance.
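
As a quick connectivity check before pointing real exchange traffic at the stack, you can POST a hand-built bid request to the bidder endpoint with curl. The JSON below is a minimal OpenRTB-style sketch with placeholder values, not a request the bidder is guaranteed to bid on; whether you get a bid or a no-bid depends on your campaign setup:

curl -X POST "http://<server_ip>/rtb/nexage" \
  -H "Content-Type: application/json" \
  -d '{
        "id": "test-request-1",
        "imp": [{"id": "1", "banner": {"w": 320, "h": 50}, "bidfloor": 0.01}],
        "site": {"id": "site-1", "domain": "example.com"},
        "device": {"ip": "165.0.0.1", "ua": "Mozilla/5.0"},
        "at": 2
      }'

Either a bid response or an empty no-bid reply confirms the endpoint is reachable, and the request will appear in the Kibana "request" index like any other traffic.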

Production Considerations

This demo stack is meant to let users try the basic RTB4FREE features. To run in production, consider the following steps.

  1. Make the MySQL and Elasticsearch data persistent across stack restarts by using this compose file: compose-dmp.yml. That compose file uses volumes to map container disks to your physical server's disk (a brief sketch follows this list). Make sure you create the directories before deploying it.
  2. To increase bidder capacity, scale horizontally by adding as many bidder containers as required (also sketched after this list).
  3. Front the bidders with a load balancer to distribute SSP traffic across the bidder instances. This can be done with Amazon ELB (if deploying in AWS) or with a container running HAProxy.
  4. To increase data-logging network capacity, scale horizontally by using a Kafka cluster to handle data transport between the bidder and ELK.
  5. To increase data-log storage and reporting-analytics capacity, scale horizontally with an Elasticsearch cluster. You may also add Logstash instances to handle the ingestion load.
  6. Monitor your infrastructure. We recommend open-source tools such as Grafana, Prometheus, and Kafka Manager. The Elasticsearch X-Pack monitoring add-on is freely available from elastic.co to monitor your ELK stack.
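
As an illustration of items 1 and 2, the snippets below are sketches only: the host path and replica count are arbitrary examples, and compose-dmp.yml defines the actual volume mappings used by the demo. A volume mapping for MySQL data (create /data/mysql on the host first) might look like:

  db:
    image: ploh/mysqlrtb
    volumes:
      - /data/mysql:/var/lib/mysql

Scaling the bidder service on a running stack:

docker service scale rtb4free_bidder=3

The same replica count can instead be declared in the compose file with a deploy/replicas setting on the bidder service.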

Source Code

The RTB4FREE source code for all the services is located here.