Elastic Certified Analyst Practice Exam
This practice exam aims to test the readiness of someone who wishes to pass the Elastic Certified Analyst exam. All exam objectives will be tested during this practice exam. Before considering yourself ready to take the Elastic Certified Analyst exam, you should be able to complete this practice exam within the time limit and only using official Elastic documentation as a resource.
Challenge
Analyze the Filebeat Data
Visualize
- In the Default space, create a `filebeat-*` index pattern with the `@timestamp` field set as the time filter field.
- Create a saved search from the `filebeat-*` index pattern called Failed SSH Authentications that only shows events where the field `system.auth.ssh.event` has a value of `failed`. [NOTE: If `failed` is not an option, you can choose to filter by any field present in your data.] Configure the search to display the following columns:
  - `source.ip`
  - `source.geo.continent_name`
  - `source.geo.country_iso_code`
  - `source.geo.city_name`
  - `source.as.organization.name`
  - `user.name`
- Create a metric visualization from the `filebeat-*` index pattern called Failed SSH Attempts that displays a count of events where the field `system.auth.ssh.event` has a value of `failed`, labelled as Failed Attempts. [NOTE: If `failed` is not an option, you can choose to filter by any field present in your data.]
- Create a tag cloud visualization from the `filebeat-*` index pattern called Top Failed SSH Users that shows the top 25 values of `user.name` based on the count of events where the field `system.auth.ssh.event` has a value of `failed`. Configure the tag cloud to not show labels.
- Create a map from the `filebeat-*` index pattern called Failed SSH Authentication Geography with the following layers:
  - Default Road map layer.
  - EMS Boundaries layer for `World Countries` called Countries.
    - Display the `name` field in the tooltip.
    - Add a term join between the `World Countries` field `ISO 3166-1 alpha-2 code` and the `filebeat-*` field `source.geo.country_iso_code` that performs a count of events labelled as "Failed Attempts" where the field `system.auth.ssh.event` has a value of `failed`.
    - Set the fill color to use the red color schema based on the number of "Failed Attempts".
  - Documents (vector) layer on the `source.geo.location` field called Failed Attempts.
    - Display the fields `source.as.organization.name`, `source.geo.city_name`, `source.ip`, and `user.name` in the tooltip.
    - Add a filter to only plot events where the field `system.auth.ssh.event` has a value of `failed`.
    - Set the symbol to `marker`.
    - Set the fill color to use a different color for each `source.ip`.
    - Set the symbol size to `10`.
- Create a dashboard called Failed SSH Authentication Attempts that includes the saved objects Failed SSH Authentications, Failed SSH Attempts, Top Failed SSH Users, and Failed SSH Authentication Geography.
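For reference, the `failed` filter used by these visualizations corresponds to a simple term query in Elasticsearch's query DSL. This is an illustrative sketch only (the exam expects you to build the filter in Kibana, not via the API); the field and column names come from the steps above:

```python
# Sketch: the Failed SSH Authentications saved search, expressed as a
# query-DSL request body against the filebeat-* indices.
failed_ssh_query = {
    "query": {
        "bool": {
            # Filter context: match only failed SSH authentication events.
            "filter": [{"term": {"system.auth.ssh.event": "failed"}}]
        }
    },
    # The columns the saved search displays.
    "_source": [
        "source.ip",
        "source.geo.continent_name",
        "source.geo.country_iso_code",
        "source.geo.city_name",
        "source.as.organization.name",
        "user.name",
    ],
}
```

The same term clause drives the metric, tag cloud, and map layers, each just aggregating it differently.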
Analyze
- How many failed SSH authentication attempts have there been in the last 15 minutes?
- From what country (`source.geo.country_iso_code`) have there been the most failed SSH authentication attempts?
- For the country (`source.geo.country_iso_code`) with the most failed authentication attempts, what was the most attempted username (`user.name`)?
- What organization (`source.as.organization.name`) made the most recent failed SSH authentication attempt, and from what IP address (`source.ip`)?
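The country and username questions above map naturally onto a terms aggregation. A hypothetical query-DSL sketch (field names taken from the steps above; `size` values chosen for illustration):

```python
# Sketch: top country by failed SSH attempts, with the top attempted
# username nested inside that country's bucket.
top_countries_agg = {
    "size": 0,  # aggregation-only request; no hits needed
    "query": {"term": {"system.auth.ssh.event": "failed"}},
    "aggs": {
        "by_country": {
            "terms": {"field": "source.geo.country_iso_code", "size": 1},
            "aggs": {
                "by_user": {"terms": {"field": "user.name", "size": 1}}
            },
        }
    },
}
```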
Challenge
Analyze the Metricbeat Data
Visualize
- In the Default space, create a `metricbeat-*` index pattern with the `@timestamp` field set as the time filter field.
- Create a TSVB time series visualization from the `metricbeat-*` index pattern called CPU Usage Over Time.
  - Create a green-colored series called User that displays the average value of the field `system.cpu.user.pct` formatted as a percent value and visualized as a stacked and stepped line chart with a fill of 1, line width of 0, and point size of 0.
  - Create a blue-colored series called System that displays the average value of the field `system.cpu.system.pct` formatted as a percent value and visualized as a stacked and stepped line chart with a fill of 1, line width of 0, and point size of 0.
  - Create a yellow-colored series called Steal that displays the average value of the field `system.cpu.steal.pct` formatted as a percent value and visualized as a stacked and stepped line chart with a fill of 1, line width of 0, and point size of 0.
  - Create a red-colored series called IO Wait that displays the average value of the field `system.cpu.iowait.pct` formatted as a percent value and visualized as a stacked and stepped line chart with a fill of 1, line width of 0, and point size of 0.
  - Create a black-colored series called Total that displays the average value of the field `system.cpu.total.pct` formatted as a percent value and visualized as an invisible unstacked line chart by configuring a fill of 0, line width of 0, and point size of 0.
  - Configure the time interval to be greater than or equal to 10 seconds.
- Create a TSVB metric visualization from the `metricbeat-*` index pattern called System Load.
  - Create a series called Load that displays the average value of the field `system.load.1` for the latest time interval.
  - Create a series called Overall that displays the average value of the field `system.load.1` but for the entire time range.
  - Configure the time interval to be greater than or equal to 10 seconds.
  - Configure the text color to turn green if the value is less than `0.75`.
  - Configure the text color to turn yellow if the value is greater than or equal to `0.75`.
  - Configure the text color to turn red if the value is greater than or equal to `1`.
- Create a TSVB top n visualization from the `metricbeat-*` index pattern called Top Users by CPU.
  - Create a series called CPU to display the average value of the field `system.process.cpu.total.pct` formatted as a percent value and grouped by the top 5 terms of the field `user.name` ordered by the average CPU in descending order.
  - Configure the time interval to be greater than or equal to 10 seconds.
  - Configure the bar color to turn green if the value is less than `0.5`.
  - Configure the bar color to turn yellow if the value is greater than or equal to `0.5`.
  - Configure the bar color to turn red if the value is greater than or equal to `0.9`.
- Create a TSVB gauge visualization from the `metricbeat-*` index pattern called Memory Usage Meter.
  - Create a series called Memory to display the average value of the field `system.memory.used.pct` formatted as a percent value.
  - Configure the time interval to be greater than or equal to 10 seconds.
  - Set the max value for the gauge to `1`.
  - Configure the gauge color to turn green if the value is less than `0.75`.
  - Configure the gauge color to turn yellow if the value is greater than or equal to `0.75`.
  - Configure the gauge color to turn red if the value is greater than or equal to `0.95`.
- Create a TSVB markdown visualization from the `metricbeat-*` index pattern called Disk Usage.
  - Create a series called Total with a variable name of `total_bytes` that computes the average value of the field `system.filesystem.total` formatted as a bytes number.
  - Create a series called Free with a variable name of `free_bytes` that computes the average value of the field `system.filesystem.free` formatted as a bytes number.
  - Create a series called Used with a variable name of `used_bytes` that computes the average value of the field `system.filesystem.used.bytes` formatted as a bytes number.
  - Create a series called Used Percent with a variable name of `used_percent` that computes the average value of the field `system.filesystem.used.pct` formatted as a percent.
  - Configure the time interval to be greater than or equal to 1 minute.
  - Enter the following markdown to render the visualization: `You are using **{{ used.used_bytes.last.formatted }} ({{ used_percent.used_percent.last.formatted }})** of disk space out of **{{ total.total_bytes.last.formatted }}** leaving **{{ free.free_bytes.last.formatted }}** of free disk space left.`
- Create a TSVB table visualization from the `metricbeat-*` index pattern called Top Processes.
  - Group by the top 10 of the `process.name` field labelled as Process.
  - Create a series called CPU that computes the average value of the field `system.process.cpu.total.pct` formatted as a percentage with trend arrows enabled.
  - Create a series called Memory that computes the average value of the field `system.process.memory.rss.bytes` formatted as a bytes number with trend arrows enabled.
- Create a dashboard called System Telemetry that includes the saved objects CPU Usage Over Time, System Load, Top Users by CPU, Memory Usage Meter, Disk Usage, and Top Processes.
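Each TSVB series above is essentially an `avg` metric inside a date histogram. A rough sketch of the equivalent aggregation request for the CPU Usage Over Time panel, assuming a fixed 10-second interval (TSVB manages the interval itself; this only illustrates the shape of the query):

```python
# Sketch: one avg sub-aggregation per TSVB series, bucketed over time.
cpu_series_fields = {
    "User": "system.cpu.user.pct",
    "System": "system.cpu.system.pct",
    "Steal": "system.cpu.steal.pct",
    "IO Wait": "system.cpu.iowait.pct",
    "Total": "system.cpu.total.pct",
}

cpu_over_time = {
    "size": 0,
    "aggs": {
        "over_time": {
            "date_histogram": {"field": "@timestamp", "fixed_interval": "10s"},
            "aggs": {
                # One avg metric per series label.
                label: {"avg": {"field": field}}
                for label, field in cpu_series_fields.items()
            },
        }
    },
}
```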
Analyze
- What user is the top CPU consumer?
- How much CPU steal is the system currently experiencing?
- What percentage of disk space is currently being used?
- Is the current load average greater than or less than the overall average?
- How much memory is the `filebeat` process (`process.name`) using, and is it trending up or down?
- At what time in the last 15 minutes did our system experience the highest total CPU usage, and what was the total percent used?
Challenge
Analyze the eCommerce Data
Visualize
- Create a new space called eCommerce with only the following Kibana features enabled:
- Discover
- Visualize
- Dashboard
- Advanced Settings
- Index Pattern Management
- Saved Objects Management
- Machine Learning
- Maps
- In the eCommerce space, create an `ecommerce` index pattern with the `order_date` field set as the time filter field.
- Configure the `products.price` field in the `ecommerce` index pattern to display as a comma-separated two-decimal number with a leading dollar sign.
- Define a single-metric machine learning job from the `ecommerce` index pattern called sales.
  - Use the full `ecommerce` data for the time range.
  - Analyze the sum of `products.price`.
  - Use a `1h` bucket span.
  - Start the job to run in real time.
- Create a stacked bar chart with Kibana Lens from the `ecommerce` index pattern called Sales by Category Per Day.
  - Show the sum of the `products.price` field over the `order_date` with 1 day time intervals.
  - Break down the bar chart for each `order_date` by the top 10 values of the `category.keyword` field ordered by the sum of `products.price` in descending order.
- Create a pie chart from the `ecommerce` index pattern called Top Products.
  - Calculate the sum of `products.quantity`.
  - Split the chart with a terms aggregation on the `customer_gender` field.
  - Split the slices with a top 10 terms aggregation on the `products.product_name.keyword` field.
  - Hide the legend and configure the visualization to show labels.
- Create a line visualization from the `ecommerce` index pattern called Sales Over Time.
  - Calculate the sum of `products.price`.
    - Label it as Sales.
    - Use a smooth line.
    - Don't show dots.
    - Color the line green.
  - Calculate the moving average of Sales.
    - Label it as Moving Average.
    - Use a smooth line.
    - Don't show dots.
    - Color the line orange.
  - Configure the x-axis as a date histogram of the `order_date` field with an auto interval and label it as Order Date.
  - Configure the legend to show at the top of the visualization.
- Create a metric visualization from the `ecommerce` index pattern called Sales Metrics.
  - Show the count of documents labelled as Orders.
  - Show the sum of `products.quantity` labelled as Items Sold.
  - Show the unique count of `products.product_id` labelled as Unique Products.
  - Show the unique count of `products.category.keyword` labelled as Product Categories.
  - Show the unique count of `products.manufacturer.keyword` labelled as Manufacturers.
  - Show the unique count of `customer_id` labelled as Customers.
  - Show the sum of `products.price` labelled as Sales.
- Create a data table visualization from the `ecommerce` index pattern called Orders.
  - Show the sum of `products.quantity` labelled as Items Purchased.
  - Show the sum of `products.price` labelled as Amount Spent.
  - Split the rows on the top 100 of `order_id` labelled as Order ID and ordered alphabetically in descending order.
  - Split the rows on `order_date` labelled as Order Date.
  - Split the rows on `customer_id` labelled as Customer ID.
  - Split the rows on `customer_full_name.keyword` labelled as Customer Name.
  - Split the rows on `geoip.country_iso_code` labelled as Country.
  - Split the rows on `geoip.city_name` labelled as City.
  - Configure the table to show 25 results per page.
- Create a dashboard called Sales that has a "Last 7 Days" time range and includes the saved objects Sales by Category Per Day, Top Products, Sales Over Time, Orders, and Sales Metrics.
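The `products.price` formatter configured above ("comma separated two decimal number with a leading dollar sign") behaves like the following Python format string. This mirrors only the display logic, not Kibana's implementation:

```python
# Sketch: the numeral pattern $0,0.00 rendered with a Python format spec.
def format_price(value: float) -> str:
    # "," adds thousands separators; ".2f" fixes two decimal places.
    return f"${value:,.2f}"

print(format_price(1234.5))  # $1,234.50
```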
Analyze
- What is the most sold product (`products.product_name.keyword`) purchased by men (`customer_gender` is `MALE`), and how many men purchased that product in the last 7 days?
- What product category (`products.category.keyword`) had the most sales (`products.price`) 3 days ago?
- How many unique products with a price (`products.price`) of $100 or more were sold in the last 7 days?
- How many orders have been made so far today?
- How many manufacturers (`products.manufacturer.keyword`) make products priced (`products.price`) less than $19.99, and what is the top product in this price range for men (`customer_gender` is `MALE`)?
- When was the worst sales (sum of `products.price`) anomaly, and how much higher or lower was the actual value versus the typical value?
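The "unique products at $100 or more" question combines a range filter with a unique count. A hypothetical query-DSL sketch (it assumes `products.product_id` identifies a product, as in the Sales Metrics visualization above):

```python
# Sketch: count distinct product IDs among items priced at $100 or more.
expensive_products = {
    "size": 0,
    "query": {"range": {"products.price": {"gte": 100}}},
    "aggs": {
        # cardinality = "unique count" in Kibana's metric aggregations.
        "unique_products": {"cardinality": {"field": "products.product_id"}}
    },
}
```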
Challenge
Analyze the Flights Data
Visualize
- Create a new space called Flights with only the following Kibana features enabled:
- Discover
- Visualize
- Dashboard
- Advanced Settings
- Index Pattern Management
- Saved Objects Management
- Machine Learning
- Maps
- In the Flights space, create a `flights` index pattern with the `timestamp` field set as the time filter field.
- Configure the `AvgTicketPrice` field in the `flights` index pattern to display as a comma-separated two-decimal number with a leading dollar sign.
- Create a scripted field for the `flights` index pattern called `AvgTicketPricePerMile`.
  - Configure the field to be a numeric data type.
  - Divide the `AvgTicketPrice` by `DistanceMiles` to get the average ticket price per mile, but only if `DistanceMiles` is greater than 0.
  - Format the field as a comma-separated two-decimal number with a leading dollar sign.
- Define a multi-metric machine learning job from the `flights` index pattern called flights.
  - Use the full `flights` data for the time range.
  - Analyze the count of flights (each document is a flight).
  - Analyze the average of `AvgTicketPrice`.
  - Analyze the sum of `FlightDelayMin`.
  - Split the analysis on the `Carrier` field.
  - Set `Carrier` and `FlightDelayType` as influencers.
  - Use a `1h` bucket span.
  - Start the job to run in real time.
- Create a gauge visualization from the `flights` index pattern called Price Per Mile Per Carrier.
  - Show the average of `AvgTicketPricePerMile` labelled as Ticket Price Per Mile.
  - Split the gauge on the `Carrier` field and label it Carrier.
  - Set the gauge type to circle.
  - Configure 3 ranges with 0.5-point increments (0-0.5, 0.5-1, 1-1.5) and use the "Green to Red" color schema.
  - Hide both the legend and the scale.
- Create an unstacked horizontal bar visualization from the `flights` index pattern called Delayed Flights by Delay Type Per Carrier.
  - Show the count of flights labelled as Flights.
  - Split the x-axis on the `Carrier` field labelled as Carrier.
  - Split the series on the top 10 values of the `FlightDelayType` field labelled as Delay Type and exclude the value `No Delay`.
  - Hide axis lines and labels for the y-axis.
  - Configure the legend to display at the top of the visualization.
  - Order the carriers by the sum of buckets.
  - Show value labels on the chart.
- Create a vertical bar visualization from the `flights` index pattern called Ticket Price Rate of Change Over Time.
  - Show the derivative of the average of `AvgTicketPrice` labelled as Change in Ticket Price.
  - Split the x-axis with a date histogram and auto interval, but drop partial buckets.
  - Hide the legend.
- Create a controls visualization from the `flights` index pattern called Flight Controls.
  - Add an options list for the `OriginCityName` field labelled as Origin with multiselect and dynamic options enabled.
  - Add an options list for the `DestCityName` field labelled as Destination with multiselect and dynamic options enabled, and with a parent field of Origin.
  - Add a range slider for the `AvgTicketPrice` field labelled as Ticket Price with a step size of 1 and 0 decimal places.
  - Configure the controls visualization to update Kibana filters on each change.
  - Configure the controls visualization to use the time filter when determining control options.
- Create a map from the `flights` index pattern called Flight Geography with the following layers:
  - Default Road map layer.
  - EMS Boundaries layer for `World Countries` called Countries.
    - Configure the tooltip to display the country name.
    - Add a term join for the `World Countries` field `ISO 3166-1 alpha-2 code` and the `flights` field `OriginCountry` that performs a count of events labelled as Outgoing Flights.
    - Add a term join for the `World Countries` field `ISO 3166-1 alpha-2 code` and the `flights` field `DestCountry` that performs a count of events labelled as Incoming Flights.
    - Color the regions based on the value of Incoming Flights with the blue color schema.
    - Configure the border color to be solid black with a line width of 2.
  - Documents (vector) layer for the field `OriginLocation` called Origin Airports.
    - Configure the tooltip to display the `OriginAirportID`.
    - Enable top hits per entity for the `OriginAirportID` field with 1 document per entity.
    - Configure the symbol to be a yellow airport icon with a symbol size of 10.
  - Documents (vector) layer for the field `DestLocation` called Destination Airports.
    - Configure the tooltip to display the `DestAirportID`.
    - Enable top hits per entity for the `DestAirportID` field with 1 document per entity.
    - Configure the symbol to be an orange airport icon with a symbol size of 10.
  - Point to point layer for the source field `OriginLocation` and destination field `DestLocation` called Flights.
    - Add an aggregation for the average of `FlightDelayMin` labelled as Delayed Minutes.
    - Color the lines by the value of Delayed Minutes with the green to red color schema.
    - Set the line width to 2.
- Create a dashboard called Flights that has a "Today" time range and includes the saved objects Price Per Mile Per Carrier, Delayed Flights by Delay Type Per Carrier, Ticket Price Rate of Change Over Time, Flight Controls, and Flight Geography.
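The `AvgTicketPricePerMile` scripted field guards against division by zero by only computing when `DistanceMiles` is greater than 0. A sketch of the intended logic (the Painless line in the comment is an approximation for orientation, not the required script):

```python
# Sketch of the scripted field's logic. In Painless this is roughly:
#   doc['DistanceMiles'].value > 0
#     ? doc['AvgTicketPrice'].value / doc['DistanceMiles'].value
#     : 0
def avg_ticket_price_per_mile(avg_ticket_price: float, distance_miles: float) -> float:
    # Guard: return 0 rather than dividing by zero.
    return avg_ticket_price / distance_miles if distance_miles > 0 else 0.0

print(f"${avg_ticket_price_per_mile(500.0, 2500.0):,.2f}")  # $0.20
```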
Analyze
- At what time today was the greatest increase in ticket prices (`AvgTicketPrice`), and by how much did it increase?
- What is the ticket price per mile (`AvgTicketPricePerMile`) for the carrier (`Carrier`) "ES-Air"?
- For the carrier (`Carrier`) "Kibana Airlines", what is the most common delay type (`FlightDelayType`), and how many flights are delayed so far today with said type and carrier?
- How many incoming and outgoing flights were there for the country of "Canada" yesterday where the ticket price (`AvgTicketPrice`) was less than or equal to $500?
- Which carrier is the most anomalous?
- For the carrier "JetBeats", when was the worst anomaly for the sum of `FlightDelayMin`, what was the `FlightDelayType`, and how much higher or lower was the actual value versus the typical value?
Challenge
Analyze the Logs Data
Visualize
- Create a new space called Logs with only the following Kibana features enabled:
- Discover
- Visualize
- Dashboard
- Advanced Settings
- Index Pattern Management
- Saved Objects Management
- Machine Learning
- Maps
- In the Logs space, create a `logs` index pattern with the `@timestamp` field set as the time filter field.
- Configure the `bytes` field in the `logs` index pattern to display as a human-readable bytes number.
- Define a population machine learning job for the `logs` index pattern called clients.
  - Use the full `logs` data for the time range.
  - Use the `clientip` field as the population field.
  - Analyze the high count of requests (each document is a request).
  - Analyze the high sum of `bytes`.
  - Use a `15m` bucket span.
  - Set `clientip` as an influencer.
  - Start the job to run in real time.
- In the Logs space, create a `.ml-anomalies-shared` index pattern with the `timestamp` field set as the time filter field.
- Create an area visualization from the `logs` index pattern called Requests by Response Code Over Time.
  - Show the count of events labelled as Requests.
  - Split the series by the top 5 values of `response.keyword` in ascending order by Requests.
  - Split the x-axis with a date histogram and an automatic time interval.
  - Color the `response.keyword` values such that `503` is red, `404` is yellow, and `200` is green.
  - Configure the visualization to stack each split with a stepped line mode.
  - Configure the legend to display at the top of the visualization.
- Create a TSVB time series visualization from the `logs` index pattern called Bytes Over Time.
  - Create a series called Bytes that calculates the sum of the `bytes` field displayed as a human-readable bytes number, and color the series blue.
  - Configure the visualization to hide the legend.
  - Add an annotation to plot all anomalies with a red flag icon where the machine learning `job_id` is `clients` and the `function` is `high_sum`. The `clientip` and the anomaly's `record_score` should both be displayed in the annotation's tooltip.
- Create a markdown visualization called Contacts with the following markdown text: `# Contacts * For **visualization requests**, contact the Data Engineering team at <data@company.com>. * For **troubleshooting help**, contact the System Reliability Engineering team at <sre@company.com>. * For **incident reporting**, contact the Network Operations Center at <noc@company.com>.`
- Create a map from the `logs` index pattern called Client Geography with the following layers:
  - Default Road map layer.
  - Grid layer for the `geo.coordinates` field displaying as grid rectangles called Clients.
    - Add an aggregation called Clients that calculates the unique count of the `clientip` field.
    - Set the grid resolution to "finest".
    - Configure the grid fill color based on the value of Clients using the green to red color schema.
- Create a saved search from the `logs` index pattern called Requests with the following columns:
  - `clientip`
  - `url`
  - `response`
  - `bytes`
- Create a dashboard called Web Requests that has a "Last 7 Days" time range and includes the saved objects Requests by Response Code Over Time, Bytes Over Time, Contacts, Client Geography, and Requests.
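The human-readable bytes formatter applied to the `bytes` field scales raw counts into units. A rough Python equivalent (1024-based with one decimal place; Kibana's exact rounding and unit labels may differ):

```python
# Sketch: scale a raw byte count into the largest unit under 1024.
def human_bytes(n: float) -> str:
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if abs(n) < 1024:
            return f"{n:.1f}{unit}"
        n /= 1024
    return f"{n:.1f}PB"

print(human_bytes(123456789))  # 117.7MB
```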
Analyze
- In the last 7 days, when did we experience the most server errors (`response.keyword` is `503`) and how many were there?
- For clients (`clientip`) using either `osx` or `ios` operating systems, what was the highest amount of requested bytes (`bytes`) yesterday, and were there any anomalous clients around that time?
- Who was the most recent client (`clientip`) to download an RPM (`extension.keyword` is `rpm`) file, and what file did they download?
- What is the most anomalous `clientip`?
- When was the worst anomaly for the high count of requests, what was the `clientip`, and how much higher or lower was the actual value versus the typical value?
- When was the worst anomaly for the high sum of `bytes`, what was the `clientip`, and how much higher or lower was the actual value versus the typical value?
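The "most recent RPM download" question maps to a sorted top-1 search. A hypothetical query-DSL sketch using the fields named above:

```python
# Sketch: latest request for an .rpm file, newest first, single hit.
latest_rpm_request = {
    "size": 1,
    "query": {"term": {"extension.keyword": "rpm"}},
    "sort": [{"@timestamp": {"order": "desc"}}],
    "_source": ["clientip", "url"],  # who downloaded it, and what file
}
```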