A Cloud Guru Lab
ECE Practice Exam — Part 2

In Part 2 of the Elastic Certified Engineer practice exam, you will be tested on the following exam objectives:

* Perform index, create, read, update, and delete operations on the documents of an index
* Use the Reindex API and Update By Query API to reindex and/or update documents
* Define and use an ingest pipeline that satisfies a given set of requirements, including the use of Painless to modify documents
* Diagnose shard issues and repair a cluster's health
* Write and execute a search query for terms and/or phrases in one or more fields of an index
* Write and execute a search query that is a Boolean combination of multiple queries and filters
* Highlight the search terms in the response of a query
* Sort the results of a query by a given set of requirements
* Implement pagination in the results of a search query
* Apply fuzzy matching to a query
* Define and use a search template
* Write and execute a query that searches multiple clusters
* Write and execute metric and bucket aggregations
* Write and execute aggregations that contain sub-aggregations
* Write and execute pipeline aggregations
* Back up and restore a cluster and/or specific indices
* Configure a cluster for cross-cluster search


Path Info

Level: Advanced
Duration: 4h 0m
Published: Jan 10, 2020


Table of Contents

  1. Challenge

    Diagnose and Repair the "c1" cluster.

    Start Elasticsearch

    Using the Secure Shell (SSH), log in to the c1-data-1 node as cloud_user via the public IP address.

    Become the elastic user:

    sudo su - elastic
    

    Start Elasticsearch as a background daemon and record the PID to a file:

    /home/elastic/elasticsearch/bin/elasticsearch -d -p pid
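    To confirm the node started and rejoined the cluster, you can list the nodes from the Kibana console on the c1 cluster (the exact node names depend on your environment):

    GET _cat/nodes?v

    The c1-data-1 node should appear in the output once it has rejoined.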
    

    Replicate the logs index

    Use the Kibana console tool on the c1 cluster to execute the following:

    PUT logs/_settings
    {
      "number_of_replicas": 1
    }
    

    Reduce the shakespeare index's replication

    Use the Kibana console tool on the c1 cluster to execute the following:

    PUT shakespeare/_settings
    {
      "number_of_replicas": 1
    }
    

    Remove allocation filtering for the bank index

    Use the Kibana console tool on the c1 cluster to execute the following:

    PUT bank/_settings
    {
      "index.routing.allocation.require._name": null
    }
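    Once these repairs are complete, every shard should be able to allocate. You can verify this from the Kibana console on the c1 cluster; the cluster status should return to green with no unassigned shards:

    GET _cluster/health
    GET _cat/shards?v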
    
  2. Challenge

    Transfer the "bank" index to the "c2" cluster.

    Configure the c2 cluster to remote reindex from the c1 cluster

    Using the Secure Shell (SSH), log in to the c2 cluster nodes as cloud_user via the public IP address.

    Become the elastic user:

    sudo su - elastic
    

    Add the following line to /home/elastic/elasticsearch/config/elasticsearch.yml:

    reindex.remote.whitelist: "10.0.1.101:9200, 10.0.1.102:9200, 10.0.1.103:9200, 10.0.1.104:9200"
    

    Stop Elasticsearch:

    pkill -F /home/elastic/elasticsearch/pid
    

    Start Elasticsearch as a background daemon and record the PID to a file:

    /home/elastic/elasticsearch/bin/elasticsearch -d -p pid
    

    Create the bank index on the c2 cluster

    Use the Kibana console tool on the c2 cluster to execute the following:

    PUT bank
    {
      "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 0
      }
    }
    

    Reindex the bank index on the c2 cluster

    Use the Kibana console tool on the c2 cluster to execute the following:

    POST _reindex
    {
      "source": {
        "remote": {
          "host": "http://10.0.1.101:9200",
          "username": "elastic",
          "password": "la_elastic_409"
        },
        "index": "bank"
      },
      "dest": {
        "index": "bank"
      }
    }
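    To confirm the transfer, you can compare document counts by running the following in the Kibana console on each cluster; the count on c2 should match the original on c1 before you delete anything:

    GET bank/_count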
    

    Delete the bank index on the c1 cluster

    Use the Kibana console tool on the c1 cluster to execute the following:

    DELETE bank
    
  3. Challenge

    Back up the "bank" index on the "c2" cluster.

    Configure the nodes

    Using the Secure Shell (SSH), log in to the c2-master-1 node as cloud_user via the public IP address.

    Become the elastic user:

    sudo su - elastic
    

    Create the repo directory:

    mkdir /home/elastic/snapshots
    

    Add the following line to /home/elastic/elasticsearch/config/elasticsearch.yml:

    path.repo: "/home/elastic/snapshots"
    

    Stop Elasticsearch:

    pkill -F /home/elastic/elasticsearch/pid
    

    Start Elasticsearch as a background daemon and record the PID to a file:

    /home/elastic/elasticsearch/bin/elasticsearch -d -p pid
    

    Create the local_repo repository

    Use the Kibana console tool on the c2 cluster to execute the following:

    PUT _snapshot/local_repo
    {
      "type": "fs",
      "settings": {
        "location": "/home/elastic/snapshots"
      }
    }
    

    Back up the bank index

    Use the Kibana console tool on the c2 cluster to execute the following:

    PUT _snapshot/local_repo/bank_1?wait_for_completion=true
    {
      "indices": "bank", 
      "include_global_state": true
    }
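    To verify the snapshot completed, you can inspect it with the snapshot API on the c2 cluster; the "state" field should read "SUCCESS":

    GET _snapshot/local_repo/bank_1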
    
  4. Challenge

    Configure Cross-Cluster Search.

    Use the Kibana console tool on the c1 cluster to execute the following:

    PUT _cluster/settings
    {
      "persistent": {
        "cluster": {
          "remote": {
            "c2": {
              "seeds": [
                "10.0.1.105:9300"
              ]
            }
          }
        }
      }
    }
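    To confirm the remote cluster connection was established, you can query the remote info API from the Kibana console on the c1 cluster; the c2 entry should report "connected": true:

    GET _remote/info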
    
  5. Challenge

    Create, Update, and Delete Documents.

    Delete the bank documents

    Use the Kibana console tool on the c2 cluster to execute the following:

    DELETE bank/_doc/5
    DELETE bank/_doc/27
    DELETE bank/_doc/819
    

    Update the bank document

    Use the Kibana console tool on the c2 cluster to execute the following:

    POST bank/_update/67
    {
      "doc": {
        "lastname": "Alonso"
      }
    }
    

    Create the bank document

    Use the Kibana console tool on the c2 cluster to execute the following:

    PUT bank/_doc/1000
    {
      "account_number": 1000,
      "balance": 35550,
      "firstname": "Stosh",
      "lastname": "Pearson",
      "age": 45,
      "gender": "M",
      "address": "125 Bear Creek Pkwy",
      "employer": "Linux Academy",
      "email": "[email protected]",
      "city": "Keller",
      "state": "TX"
    }
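    If you want to confirm the document was indexed, you can fetch it back by ID on the c2 cluster; the response should contain "found": true along with the source above:

    GET bank/_doc/1000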
    

    Update the shakespeare mapping

    Use the Kibana console tool on the c1 cluster to execute the following:

    PUT shakespeare/_mappings
    {
      "properties": {
        "line_id": {
          "type": "integer"
        },
        "line_number": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "play_name": {
          "type": "keyword"
        },
        "speaker": {
          "type": "keyword"
        },
        "speech_number": {
          "type": "integer"
        },
        "text_entry": {
          "type": "text",
          "fields": {
            "english": {
              "type": "text",
              "analyzer": "english"
            },
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "type": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        }
      }
    }
    

    Delete the shakespeare documents that have an empty line_number

    Use the Kibana console tool on the c1 cluster to execute the following:

    POST shakespeare/_update_by_query
    {
      "script": {
        "lang": "painless",
        "source": """
          if (ctx._source.line_number == "") {
            ctx.op = "delete"
          }
        """
      }
    }
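    To verify the deletion, you can count the documents that still have an empty line_number (this relies on the keyword sub-field defined in the mapping above); the count should be 0:

    GET shakespeare/_count
    {
      "query": {
        "term": {
          "line_number.keyword": ""
        }
      }
    }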
    

    Create the ingest pipeline

    Use the Kibana console tool on the c1 cluster to execute the following:

    PUT _ingest/pipeline/fix_logs
    {
      "processors": [
        {
          "remove": {
            "field": "@message"
          }
        },
        {
          "split": {
            "field": "spaces",
            "separator": "\\s+"
          }
        },
        {
          "script": {
            "lang": "painless",
            "source": "ctx.relatedContent_count = ctx.relatedContent.length"
          }
        },
        {
          "uppercase": {
            "field": "extension"
          }
        }
      ]
    }
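    Before reindexing through the pipeline, you can dry-run it with the Simulate Pipeline API. The document below is a made-up sample (the field values are hypothetical, not taken from the logs index):

    POST _ingest/pipeline/fix_logs/_simulate
    {
      "docs": [
        {
          "_source": {
            "@message": "raw message text",
            "spaces": "one two  three",
            "relatedContent": [],
            "extension": "html"
          }
        }
      ]
    }

    The response should show @message removed, spaces split into an array, a relatedContent_count field added, and extension uppercased to "HTML".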
    

    Create the logs_new index

    Use the Kibana console tool on the c1 cluster to execute the following:

    PUT logs_new
    {
      "settings": {
        "number_of_shards": 2,
        "number_of_replicas": 1
      }
    }
    

    Reindex the logs documents

    Use the Kibana console tool on the c1 cluster to execute the following:

    POST _reindex
    {
      "source": {
        "index": "logs"
      },
      "dest": {
        "index": "logs_new",
        "pipeline": "fix_logs"
      }
    }
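    Since none of the pipeline's processors drop documents, the reindexed copy should contain every document from the source. A quick check on the c1 cluster:

    GET logs/_count
    GET logs_new/_count

    The two counts should match.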
    
  6. Challenge

    Search Documents.

    Search the bank index

    Use the Kibana console tool on the c1 cluster to execute the following:

    GET c2:bank/_search
    {
      "from": 0,
      "size": 50,
      "sort": [
        {
          "age": {
            "order": "asc"
          }
        },
        {
          "balance": {
            "order": "desc"
          }
        },
        {
          "lastname.keyword": {
            "order": "asc"
          }
        }
      ], 
      "query": {
        "bool": {
          "must": [
            {
              "term": {
                "gender.keyword": {
                  "value": "F"
                }
              }
            },
            {
              "range": {
                "balance": {
                  "gt": 10000
                }
              }
            }
          ],
          "must_not": [
            {
              "terms": {
                "state.keyword": ["PA", "VA", "IL"]
              }
            }
          ],
          "filter": {
            "range": {
              "age": {
                "gte": 18,
                "lte": 35
              }
            }
          }
        }
      }
    }
    

    Search the shakespeare index

    Use the Kibana console tool on the c1 cluster to execute the following:

    GET shakespeare/_search
    {
      "from": 0,
      "size": 20, 
      "highlight": {
        "pre_tags": "<b>",
        "post_tags": "</b>",
        "fields": {
          "text_entry.english": {}
        }
      },
      "query": {
        "bool": {
          "should": [
            {
              "match": {
                "text_entry.english": "life"
              }
            },
            {
              "match": {
                "text_entry.english": "love"
              }
            },
            {
              "match": {
                "text_entry.english": "death"
              }
            }
          ],
          "minimum_should_match": 2
        }
      }
    }
    

    Search the logs index

    Use the Kibana console tool on the c1 cluster to execute the following:

    GET logs/_search
    {
      "highlight": {
        "fields": {
          "relatedContent.twitter:description": {},
          "relatedContent.twitter:title": {}
        }
      },
      "query": {
        "bool": {
          "must": [
            {
              "match": {
                "relatedContent.twitter:description": {
                  "query": "never",
                  "fuzziness": 2
                }
              }
            },
            {
              "match_phrase": {
                "relatedContent.twitter:title": "Golden State"
              }
            }
          ]
        }
      }
    }
    
  7. Challenge

    Aggregate Documents.

    Aggregate on the bank index

    Use the Kibana console tool on the c1 cluster to execute the following:

    GET c2:bank/_search
    {
      "size": 0, 
      "aggs": {
        "state": {
          "terms": {
            "field": "state.keyword",
            "size": 5,
            "order": {
              "avg_balance": "desc"
            }
          },
          "aggs": {
            "avg_balance": {
              "avg": {
                "field": "balance"
              }
            }
          }
        }
      },
      "query": {
        "range": {
          "age": {
            "gte": 30
          }
        }
      }
    }
    

    Aggregate on the shakespeare index

    Use the Kibana console tool on the c1 cluster to execute the following:

    GET shakespeare/_search
    {
      "size": 0, 
      "aggs": {
        "plays": {
          "terms": {
            "field": "play_name",
            "size": 10
          },
          "aggs": {
            "speakers": {
              "cardinality": {
                "field": "speaker"
              }
            }
          }
        },
        "most_parts": {
          "max_bucket": {
            "buckets_path": "plays>speakers"
          }
        }
      }
    }
    

    Aggregate on the logs index

    Use the Kibana console tool on the c1 cluster to execute the following:

    GET logs/_search
    {
      "size": 0,
      "aggs": {
        "hour": {
          "date_histogram": {
            "field": "@timestamp",
            "calendar_interval": "hour"
          },
          "aggs": {
            "clients": {
              "cardinality": {
                "field": "clientip.keyword"
              }
            },
            "cumulative_clients": {
              "cumulative_sum": {
                "buckets_path": "clients"
              }
            },
            "clients_per_minute": {
              "derivative": {
                "buckets_path": "cumulative_clients",
                "unit": "1m"
              }
            }
          }
        },
        "peak": {
          "max_bucket": {
            "buckets_path": "hour>clients"
          }
        }
      },
      "query": {
        "range": {
          "@timestamp": {
            "gte": "2015-05-19",
            "lt": "2015-05-20",
            "format": "yyyy-MM-dd"
          }
        }
      }
    }
    
  8. Challenge

    Create the Search Template.

    Use the Kibana console tool on the c2 cluster to execute the following:

    POST _scripts/accounts_search
    {
      "script": {
        "lang": "mustache",
        "source": {
          "from": "{{from}}{{^from}}0{{/from}}",
          "size": "{{size}}{{^size}}25{{/size}}",
          "query": {
            "bool": {
              "must": [
                {
                  "wildcard": {
                    "firstname.keyword": "{{first_name}}{{^first_name}}*{{/first_name}}"
                  }
                },
                {
                  "wildcard": {
                    "lastname.keyword": "{{last_name}}{{^last_name}}*{{/last_name}}"
                  }
                }
              ]
            }
          }
        }
      }
    }
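    To try the template out, you can run a templated search on the c2 cluster. The params below are illustrative; any omitted parameter falls back to the defaults defined above (from 0, size 25, wildcard *):

    GET bank/_search/template
    {
      "id": "accounts_search",
      "params": {
        "last_name": "P*"
      }
    }

    You can also preview the rendered query without executing it by calling POST _render/template with the same id and params.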
    
