This page explains how to set up and view prebuilt reports in BigQuery. These reports are built from Cloud Logging logs. Setting up reports is a one-time task: create a log sink to stream logging data into BigQuery, and then run the prebuilt reports script.
Required IAM roles
The following IAM roles are required to view prebuilt reports in BigQuery. Learn how to grant an IAM role.
Role | When to grant the role |
---|---|
Logs Configuration Writer (`roles/logging.configWriter`) or Logging Admin (`roles/logging.admin`), and BigQuery Data Editor (`roles/bigquery.dataEditor`) | To create a sink and BigQuery dataset from the Google Cloud console. |
Owner (`roles/owner`) | To create a sink and BigQuery dataset from the Google Cloud CLI. |
BigQuery Admin (`roles/bigquery.admin`) | To write custom queries or download queries. |
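For example, you can grant one of these roles from the Google Cloud CLI. This is a minimal sketch, where PROJECT_ID and USER_EMAIL are placeholders for your project ID and the user's email address:

```bash
# Sketch: grant the Logs Configuration Writer role to a user.
# PROJECT_ID and USER_EMAIL are placeholders.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/logging.configWriter"
```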
Create a sink and route logs to BigQuery
BigQuery stores only the logs that are generated after a log sink is created. The logs that are generated before creating a log sink are not visible in BigQuery. You can create the log sink from the Google Cloud console or Google Cloud CLI.
To create a sink and route logs to BigQuery, do the following:
Console
- In the Google Cloud console, go to the Log Router page:
- Select an existing Google Cloud project.
- Click Create sink.
- In the Sink details panel, enter the following fields:
  - Sink name: enter the sink name as `BackupandDR_reports_sink`. You must use the sink name `BackupandDR_reports_sink` so that Backup and DR reports can be identified from other sinks.
  - Sink description: describe the purpose or use case for the sink.
In the Sink destination panel, do the following:
- In the Select sink service menu, select the BigQuery dataset sink service.
- In the Select BigQuery dataset menu, select Create new BigQuery dataset.
- On the Create dataset page, do the following:
  - For Dataset ID, enter the dataset name as `BackupandDR_reports` to identify it from other datasets. Don't change the dataset name from `BackupandDR_reports`.
  - For Location type, choose a geographic location for the dataset. After a dataset is created, the location can't be changed.
  - Optional: If you want tables in this dataset to expire, select Enable table expiration, then specify the Default maximum table age in days.
  - Click Create dataset.
In the Choose logs to include in sink panel, do the following:
In the Build inclusion filter field, enter the following filter expression, which matches the log entries that you want to include:

```
logName=~"projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"
```

Replace PROJECT_ID with the ID of your project.
To verify you entered the correct filter, select Preview logs. This opens the Logs Explorer in a new tab with the filter prepopulated.
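Alternatively, you can sanity-check the same filter from Cloud Shell. A minimal sketch, where PROJECT_ID is a placeholder for your project ID:

```bash
# Sketch: list a few recent log entries that match the inclusion filter.
# PROJECT_ID is a placeholder.
gcloud logging read \
    'logName=~"projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"' \
    --project=PROJECT_ID \
    --limit=5
```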
Optional: In the Choose logs to filter out of sink panel, do the following:
- In the Exclusion filter name field, enter a name.
In the Build an exclusion filter field, enter a filter expression that matches the log entries you want to exclude. You can also use the sample function to select a portion of the log entries to exclude.
Select Create sink.
You can see the dataset in BigQuery Studio.
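If you also want to confirm the sink from the command line, one way (a sketch, not required by the setup) is to inspect its destination and writer identity:

```bash
# Sketch: show the sink's destination and the service account
# (writerIdentity) that Cloud Logging uses to write to BigQuery.
gcloud logging sinks describe BackupandDR_reports_sink
```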
gcloud
- Activate Cloud Shell, and then click Open Editor.
- Click the icon, select File, and then select New text file.
Copy and paste the following script.
```bash
#!/bin/bash
echo "This script will set up a log sink for BackupDR reports to be available in BigQuery"
result=$(gcloud config set project PROJECT_ID)
if [ $? -eq 0 ]; then
  current_project_id=$(gcloud config get-value project)
  if [ "$current_project_id" = "PROJECT_ID" ]; then
    echo "Successfully set the project to PROJECT_ID"
  else
    echo "The project is not set to PROJECT_ID. Exiting"
    exit 0
  fi
else
  echo "Error setting the project"
  exit 1
fi
echo "For logs to be available in BigQuery, we need to enable BigQuery service in the project if not done already. This might mean additional costs incurred. Please check the associated costs before proceeding."
read -p "Do you want to continue(Y/N)?" continue
if [ "$continue" = "y" ] || [ "$continue" = "Y" ]; then
  result=$(gcloud services enable bigquery.googleapis.com --project PROJECT_ID)
  if [ $? -eq 0 ]; then
    echo "Successfully enabled BigQuery api for PROJECT_ID"
  else
    echo "Error in setting up the BigQuery api for the project. $result"
    exit 1
  fi
else
  exit 0
fi
echo "Creating a BigQuery Dataset..."
result=$(bq mk BackupandDR_reports)
if [ $? -eq 0 ]; then
  echo "Created a BigQuery dataset BackupandDR_reports successfully"
else
  echo ""
  echo "ERROR : Failed to create a BigQuery dataset."
  echo $result
  exit 1
fi
log_filter="projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"
result=$(gcloud logging sinks create BackupandDR_reports_sink bigquery.googleapis.com/projects/PROJECT_ID/datasets/BackupandDR_reports --log-filter="logName=~\"$log_filter\"")
if [ $? -eq 0 ]; then
  echo "Created a logsink BackupandDR_reports_sink successfully. You will now be able to see the logs in big query."
else
  echo ""
  echo "ERROR : Failed to create logsink."
  echo "Performing cleanup and exiting."
  bq rm -r -f -d PROJECT_ID:BackupandDR_reports
  exit 1
fi
result=$(gcloud projects add-iam-policy-binding $(gcloud projects describe PROJECT_ID --format="value(projectNumber)") --member=serviceAccount:service-$(gcloud projects describe PROJECT_ID --format="value(projectNumber)")@gcp-sa-logging.iam.gserviceaccount.com --role=roles/bigquery.dataEditor --condition=None)
if [ $? -eq 0 ]; then
  echo "Added permission for cloud logging to write to BigQuery datasets"
else
  echo ""
  echo "ERROR : Failed to add permissions for cloud logging to write to BigQuery datasets. Please make sure that you have Owner access rights in order to be able to proceed."
  echo "Performing cleanup and exiting."
  bq rm -r -f -d PROJECT_ID:BackupandDR_reports
  gcloud logging sinks delete BackupandDR_reports_sink
  exit 1
fi
exit 0
```
Replace the following:

- `PROJECT_ID`: the ID of your project.
- Save the file with a Bash (`.sh`) file extension, for example, `script.sh`.
- Run the bash command using the file you just created, for example, `bash script.sh`.

You can see the created dataset in BigQuery Studio.
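You can also verify from Cloud Shell that the dataset exists and that dated log tables appear once new log entries are routed. A sketch, where PROJECT_ID is a placeholder:

```bash
# Sketch: list the tables that Cloud Logging creates in the dataset.
# Log tables are dated, for example
# backupdr_googleapis_com_gcb_backup_recovery_jobs_YYYYMMDD.
bq ls PROJECT_ID:BackupandDR_reports
```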
Set up prebuilt reports
You can set up prebuilt reports by running the following script against the dataset that the log sink routes logs into.
The script adds the following prebuilt reports:
- Backup and Recovery job reports
- Daily Schedule Compliance reports
- Protected Resource reports
- Storage Resource Utilization reports
- Mounted Image reports
- Unprotected Resource reports
- Connector Version reports
To set up prebuilt reports in BigQuery, do the following:
gcloud
- Activate Cloud Shell, and then click Open Editor.
- Create a new text file.
Copy and paste the following prebuilt reports script into the new file.
```bash
# SQL definitions for the prebuilt report views.
# Replace PROJECT_ID and DATASET_NAME before running.
backup_job_summary_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` WHERE DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) >= DATE_SUB(CURRENT_DATE(), INTERVAL 29 DAY) AND DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) < CURRENT_DATE() + 1 GROUP BY insertId ) SELECT DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Date, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.job_type as Job_Type, COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END)/COUNT(*) * 100 as Success_Percent, COUNT(*) as Total_Jobs, COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END) as Succeeded, COUNT(CASE WHEN jsonPayload.job_status = "failed" THEN 1 END) as Failed, COUNT(CASE WHEN jsonPayload.job_status = "canceled" THEN 1 END) as Cancelled, COUNT(CASE WHEN jsonPayload.job_status = "notrun" THEN 1 END) as Not_Run, FROM filtered_data WHERE jsonPayload.job_category = "Backup Job" AND jsonPayload.job_status != "retry" GROUP BY Date, jsonPayload.job_type, jsonPayload.resource_id, jsonPayload.resource_name, jsonPayload.resource_type ORDER BY Date, Resource_Name;'

recovery_job_summary_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` WHERE DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) >= DATE_SUB(CURRENT_DATE(), INTERVAL 29 DAY) AND DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) < CURRENT_DATE() + 1 GROUP BY insertId ) SELECT DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Date, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.job_type as Job_Type, COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END)/COUNT(*) * 100 as Success_Percent, COUNT(*) as Total_Jobs, COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END) as Succeeded, COUNT(CASE WHEN jsonPayload.job_status = "failed" THEN 1 END) as Failed, COUNT(CASE WHEN jsonPayload.job_status = "canceled" THEN 1 END) as Cancelled, COUNT(CASE WHEN jsonPayload.job_status = "notrun" THEN 1 END) as Not_Run, FROM filtered_data WHERE jsonPayload.job_category = "Recovery Job" AND jsonPayload.job_status != "retry" GROUP BY Date, jsonPayload.job_type, jsonPayload.resource_id, jsonPayload.resource_name, jsonPayload.resource_type ORDER BY Date, Resource_Name;'

failed_job_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` GROUP BY insertId ) SELECT jsonPayload.job_name as Job_Name, jsonPayload.job_category as Job_Category, jsonPayload.job_type as Job_Type, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.error_code as Error_Code, jsonPayload.error_message as Error_Message, jsonPayload.hostname as Host_Name, jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name, jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template, jsonPayload.appliance_name as Appliance_Name, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(jsonPayload.job_start_time)) as Job_Start_Time, DATE(jsonPayload.job_start_time) as Job_Date, jsonPayload.backup_type as Backup_Type FROM filtered_data WHERE jsonPayload.job_status = "failed" ORDER BY Job_Start_Time DESC, Resource_Name'

job_details_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` GROUP BY insertId ) SELECT jsonPayload.job_name as Job_Name, jsonPayload.job_category as Job_Category, jsonPayload.job_type as Job_Type, jsonPayload.log_backup as Log_Backup, jsonPayload.job_status as Job_Status, DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Job_Date, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.error_code as Error_Code, jsonPayload.error_message as Error_Message, jsonPayload.job_initiation_failure_reason as Job_Initiation_Failure_Reason, jsonPayload.appliance_name as Appliance_Name, jsonPayload.hostname as Host_Name, jsonPayload.target_appliance_name as Target_Appliance_Name, jsonPayload.target_pool_name as Target_Pool_Name, jsonPayload.target_host_name as Target_Host_Name, jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template, jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_queued_time !="" THEN jsonPayload.job_queued_time ELSE NULL END)) as Job_Queue_Time, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_start_time!="" THEN jsonPayload.job_start_time ELSE NULL END)) as Job_Start_Time, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_end_time!="" THEN jsonPayload.job_end_time ELSE NULL END)) as Job_End_Time, jsonPayload.job_duration_in_hours as Job_Duration_In_Hours, jsonPayload.backup_consistency as Backup_Consistency, jsonPayload.resource_data_size_in_gib as Resource_Data_Size_In_GiB, jsonPayload.snapshot_disk_size_in_gib as Snapshot_Disk_Size_in_GiB, jsonPayload.pre_compress_in_gib as Pre_Compress_In_GiB, jsonPayload.compression_ratio as Compression_Ratio, jsonPayload.data_sent_in_gib as Streamsnap_Data_Sent_In_GiB, jsonPayload.data_written_in_gib as Streamsnap_Data_Written_In_GiB, jsonPayload.data_copied_in_gib as Data_Copied_In_GiB, jsonPayload.data_change_rate as Data_Change_Rate_Percent, jsonPayload.backup_type as Backup_Type, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.recovery_point!="" THEN jsonPayload.recovery_point ELSE NULL END)) as Recovery_Point, jsonPayload.onvault_pool_storage_consumed_in_gib as OnVault_Pool_Storage_Consumed_In_GiB, FROM filtered_data ORDER BY Job_Start_Time DESC, Resource_Name'

unresolved_failures_query='WITH job_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` WHERE jsonPayload.job_status in ("succeeded", "failed") GROUP BY insertId ),
-- Select resources which have their latest status as failed
unresolved_failed_resources AS ( SELECT jsonPayload.appliance_name as Appliance_Name, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.job_type as Job_Type, jsonPayload.job_status as Job_Status, jsonPayload.job_start_time as Job_Start_Time, jsonPayload.hostname as Hostname, jsonPayload.error_code as Error_Code, jsonPayload.error_message as Error_Message FROM job_data j1 WHERE jsonPayload.job_start_time = (SELECT MAX(jsonPayload.job_start_time) FROM job_data j2 WHERE j1.jsonPayload.appliance_name = j2.jsonPayload.appliance_name AND j1.jsonPayload.resource_name = j2.jsonPayload.resource_name AND j1.jsonPayload.resource_type = j2.jsonPayload.resource_type AND j1.jsonPayload.job_type = j2.jsonPayload.job_type) AND jsonPayload.job_status = "failed" ),
-- Select all jobs for all the resources that have unresolved failures
all_jobs_of_unresolved_failure_resources AS ( SELECT jsonPayload.appliance_name as Appliance_Name, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.job_type as Job_Type, jsonPayload.job_category as Job_Category, jsonPayload.job_name as Job_Name, jsonPayload.recovery_point as Recovery_Point, jsonPayload.job_status as Job_Status, jsonPayload.job_start_time as Job_Start_Time, jsonPayload.hostname as Hostname, jsonPayload.error_code as Error_Code FROM job_data j1 JOIN unresolved_failed_resources fr ON j1.jsonPayload.appliance_name = fr.Appliance_Name AND j1.jsonPayload.resource_name = fr.Resource_Name AND j1.jsonPayload.resource_type = fr.Resource_Type AND j1.jsonPayload.job_type = fr.Job_Type ),
-- Select the latest successful jobs for the resources with unresolved failures
latest_success AS ( SELECT * FROM all_jobs_of_unresolved_failure_resources all_jobs WHERE all_jobs.Job_Status = "succeeded" AND all_jobs.Job_Start_Time = ( SELECT MAX(Job_Start_Time) FROM all_jobs_of_unresolved_failure_resources all_jobs_2 WHERE all_jobs_2.Appliance_Name = all_jobs.Appliance_Name AND all_jobs_2.Resource_Name = all_jobs.Resource_Name AND all_jobs_2.Resource_Type = all_jobs.Resource_Type AND all_jobs_2.Job_Type = all_jobs.Job_Type AND all_jobs_2.job_status = "succeeded" ) ),
-- Select all failed jobs after the last success for the resources with unresolved failures
failed_jobs_to_report AS ( SELECT all_jobs.Appliance_Name as Appliance_Name, all_jobs.Resource_Name as Resource_Name, all_jobs.Resource_Type as Resource_Type, all_jobs.Job_Type as Job_Type, all_jobs.Job_Name as Job_Name, all_jobs.Recovery_Point as Recovery_Point, all_jobs.Job_Status as Job_Status, all_jobs.Job_Start_Time as Job_Start_Time, all_jobs.Hostname as Hostname, all_jobs.Error_Code as Error_Code, all_jobs.Job_Category as Job_Category, FROM all_jobs_of_unresolved_failure_resources all_jobs LEFT JOIN latest_success success ON success.Appliance_Name = all_jobs.Appliance_Name AND success.Resource_Name = all_jobs.Resource_Name AND success.Resource_Type = all_jobs.Resource_Type AND success.Job_Type = all_jobs.Job_Type WHERE all_jobs.Job_Status = "failed" AND TIMESTAMP(all_jobs.Job_Start_Time) > COALESCE(TIMESTAMP(success.Job_Start_Time), TIMESTAMP(DATE_SUB(CURRENT_DATE(), INTERVAL 3 DAY))) )
SELECT failed_jobs.Resource_Name, failed_jobs.Resource_Type, failed_jobs.Job_Type, failed_jobs.Job_Category, COUNT(*) as Error_Count, failed_jobs.Error_Code, ANY_VALUE(ufr.Error_Message) as Error_Message, FORMAT_TIMESTAMP("%F %R %Z", MIN(TIMESTAMP(failed_jobs.Job_Start_Time))) as First_Failure, FORMAT_TIMESTAMP("%F %R %Z", MAX(TIMESTAMP(failed_jobs.Job_Start_Time))) as Last_Failure, TIMESTAMP_DIFF(CURRENT_TIMESTAMP,TIMESTAMP(MAX(failed_jobs.Job_Start_Time)), HOUR) as Hours_Since_Last_Failure, failed_jobs.Appliance_Name FROM failed_jobs_to_report failed_jobs LEFT JOIN unresolved_failed_resources ufr ON failed_jobs.Appliance_Name = ufr.Appliance_Name AND failed_jobs.Resource_Name = ufr.Resource_Name AND failed_jobs.Resource_Type = ufr.Resource_Type AND failed_jobs.Job_Type = ufr.Job_Type AND failed_jobs.Error_Code = ufr.Error_Code GROUP BY Appliance_Name, Resource_Name, Resource_Type, Job_Type, Job_Category, Error_Code ORDER BY Last_Failure DESC '

daily_schedule_compliance_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_daily_schedule_compliance_*` WHERE DATE(jsonPayload.date) >= DATE_SUB(CURRENT_DATE(), INTERVAL 8 DAY) AND DATE(jsonPayload.date) < CURRENT_DATE() + 1 GROUP BY insertId ) SELECT jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name, jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template, DATE(jsonPayload.date) as Date, jsonPayload.backup_window_start_time as Backup_Window_Start_Time, jsonPayload.job_type as Job_Type, jsonPayload.status as Status, jsonPayload.comment as Comment, jsonPayload.appliance_name as Appliance_Name, jsonPayload.host_name as Host_Name FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Resource_Name, Date; '

protected_data_resource_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_protected_resource_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.host_name as Host_Name, jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template, jsonPayload.protected_on as Protected_On, jsonPayload.recovery_point as Recovery_Point, jsonPayload.protected_data_in_gib as Protected_Data_in_Gib, jsonPayload.onvault_in_gib as OnVault_Data_in_Gib, jsonPayload.backup_plan_restrictions as Backup_Plan_Restrictions, jsonPayload.backup_inclusion_or_exclusion as Backup_Inclusion_or_Exclusion, jsonPayload.policy_overrides as Policy_Overrides, jsonPayload.appliance_name as Appliance_Name, jsonPayload.remote_appliance as Remote_Appliance, jsonPayload.source_appliance as Source_Appliance, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Resource_Name;'

storage_utilization_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_storage_utilization_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.appliance_name as Appliance_Name, jsonPayload.pool_name as Pool_Name, jsonPayload.storage_type as Storage_Type, jsonPayload.total_capacity_in_gib as Total_Capacity_In_GiB, jsonPayload.used_capacity_in_gib as Used_Capacity_In_GiB, jsonPayload.utilization_percentage as Utilization_percentage, jsonPayload.appliance_id as Appliance_id, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Appliance_Name;'

mounted_image_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_mounted_images_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.source_resource_name as Source_Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.appliance_name as Appliance_Name, jsonPayload.mounted_image_name as Mounted_Image_Name, jsonPayload.source_image_name as Source_Image_Name, jsonPayload.source_image_type as Source_Image_Type, jsonPayload.recovery_point_date as Recovery_Point_Date, jsonPayload.last_mount_date as Last_Mount_Date, jsonPayload.source_host_name as Source_Host_Name, jsonPayload.mounted_host_name as Mounted_Host_Name, jsonPayload.mounted_resource_name as Mounted_Resource_Name, jsonPayload.resource_virtual_size_in_gib as Resource_Virtual_Size_In_Gib, jsonPayload.storage_consumed_in_gib as Storage_Consumed_In_Gib, jsonPayload.mounted_resource_label as Mounted_Resource_Label, jsonPayload.restorable_object as Restorable_Object, jsonPayload.mounted_image_age_in_days as Mounted_Image_Age_In_Days, jsonPayload.user_name as User_Name, jsonPayload.read_mode as Read_Mode, jsonPayload.resource_size_in_gib as Resource_Size_In_Gib, jsonPayload.source_image_expiration_date as Source_Image_Expiration_Date, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Mounted_Resource_Name;'

unprotected_resource_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_unprotected_resource_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.host_name as Host_Name, jsonPayload.instance_name as Instance_Name, jsonPayload.discovered_on as Discovered_On, jsonPayload.discovered_by as Discovery_Appliance, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Resource_Name;'

connector_version_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_connector_version_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.appliance_name as Appliance_Name, jsonPayload.host_name as Host_Name, jsonPayload.host_os_type as Host_OS_Type, jsonPayload.host_ip_address as Host_IP_Address, jsonPayload.db_authentication as DB_Authentication, jsonPayload.installed_version as Installed_Version, jsonPayload.available_version as Available_Version, jsonPayload.version_check as Version_Check, jsonPayload.disk_preference as Disk_Preference, jsonPayload.transport as Transport, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Host_Name;'

# Create or update each report view, if its source log table exists in the dataset.
table_not_exists_error_flag=false

check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_backup_recovery_jobs%'" | tail -n +2 | tr -d ' ')
if [ $check_table == 'true' ]; then
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$backup_job_summary_query" DATASET_NAME.'Backup Jobs Summary')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$backup_job_summary_query" DATASET_NAME.'Backup Jobs Summary'
  else
    echo $result
  fi
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$recovery_job_summary_query" DATASET_NAME.'Recovery Jobs Summary')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$recovery_job_summary_query" DATASET_NAME.'Recovery Jobs Summary'
  else
    echo $result
  fi
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$failed_job_query" DATASET_NAME.'Failed Jobs')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$failed_job_query" DATASET_NAME.'Failed Jobs'
  else
    echo $result
  fi
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$job_details_query" DATASET_NAME.'Job Details')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$job_details_query" DATASET_NAME.'Job Details'
  else
    echo $result
  fi
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$unresolved_failures_query" DATASET_NAME.'Unresolved Failures')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$unresolved_failures_query" DATASET_NAME.'Unresolved Failures'
  else
    echo $result
  fi
else
  echo "The table backupdr_googleapis_com_gcb_backup_recovery_jobs does not exist in the dataset DATASET_NAME."
  table_not_exists_error_flag=true
fi

check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_daily_schedule_compliance%'" | tail -n +2 | tr -d ' ')
if [ $check_table == 'true' ]; then
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$daily_schedule_compliance_query" DATASET_NAME.'Daily Schedule Compliance')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$daily_schedule_compliance_query" DATASET_NAME.'Daily Schedule Compliance'
  else
    echo $result
  fi
else
  echo "The table backupdr_googleapis_com_gcb_daily_schedule_compliance does not exist in the dataset DATASET_NAME."
  table_not_exists_error_flag=true
fi

check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_protected_resource%'" | tail -n +2 | tr -d ' ')
if [ $check_table == 'true' ]; then
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$protected_data_resource_query" DATASET_NAME.'Protected Data Resource')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$protected_data_resource_query" DATASET_NAME.'Protected Data Resource'
  else
    echo $result
  fi
else
  echo "The table backupdr_googleapis_com_gcb_protected_resource does not exist in the dataset DATASET_NAME."
  table_not_exists_error_flag=true
fi

check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_backup_storage_utilization%'" | tail -n +2 | tr -d ' ')
if [ $check_table == 'true' ]; then
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$storage_utilization_query" DATASET_NAME.'Storage Resource Utilization')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$storage_utilization_query" DATASET_NAME.'Storage Resource Utilization'
  else
    echo $result
  fi
else
  echo "The table backupdr_googleapis_com_gcb_backup_storage_utilization does not exist in the dataset DATASET_NAME."
  table_not_exists_error_flag=true
fi

check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_unprotected_resource%'" | tail -n +2 | tr -d ' ')
if [ $check_table == 'true' ]; then
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$unprotected_resource_query" DATASET_NAME.'Unprotected Resource')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$unprotected_resource_query" DATASET_NAME.'Unprotected Resource'
  else
    echo $result
  fi
else
  echo "The table backupdr_googleapis_com_gcb_unprotected_resource does not exist in the dataset DATASET_NAME."
  table_not_exists_error_flag=true
fi

check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_mounted_images%'" | tail -n +2 | tr -d ' ')
if [ $check_table == 'true' ]; then
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$mounted_image_query" DATASET_NAME.'Mounted Image Details')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$mounted_image_query" DATASET_NAME.'Mounted Image Details'
  else
    echo $result
  fi
else
  echo "The table backupdr_googleapis_com_gcb_mounted_images does not exist in the dataset DATASET_NAME."
  table_not_exists_error_flag=true
fi

check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_connector_version%'" | tail -n +2 | tr -d ' ')
if [ $check_table == 'true' ]; then
  result=$(bq update --use_legacy_sql=false --expiration 0 --view "$connector_version_query" DATASET_NAME.'Connector Version Details')
  if [ $? -ne 0 ]; then
    bq mk --use_legacy_sql=false --expiration 0 --view "$connector_version_query" DATASET_NAME.'Connector Version Details'
  else
    echo $result
  fi
else
  echo "The table backupdr_googleapis_com_gcb_connector_version does not exist in the dataset DATASET_NAME."
  table_not_exists_error_flag=true
fi

# Warn if any source log table was missing.
if [ $table_not_exists_error_flag == true ]; then
  echo -e "\e[1m\e[33mAll the prebuilt reports could not be created successfully in BigQuery as one or more report logs are missing in the dataset DATASET_NAME."
  echo -e "Please ensure that you have waited for at least 8 hours after creating the sink, before running the script to create pre built reports. Try re-running the script again after some time to fix the issue."
  echo -e "Reach out to Google Cloud Support in case you are still facing this issue.\e[0m"
fi

# Check for errors while fetching data from the view
fetch_result=$(bq query --format=csv --use_legacy_sql=false "SELECT COUNT(*) FROM \`DATASET_NAME.Storage Resource Utilization\`" 2>&1)
if [[ $? -ne 0 ]]; then
  echo -e "\e[1m Error executing the BigQuery, This issue has been fixed as part of a software update that happened on 3rd June, 2024. We need to delete all the old tables prior to this date by following the next step. \e[0m"
  start_date="20240301"
  end_date="20240603"
  for ((date = start_date; date <= end_date; date++)); do
    table_name="DATASET_NAME.backupdr_googleapis_com_gcb_backup_storage_utilization_${date}"
    bq rm -f -t "$table_name"
  done
fi
```
Replace the following:

- `PROJECT_ID`: the ID of your project.
- `DATASET_NAME`: the name of your BigQuery dataset. We recommend that you use the default dataset name for Backup and DR Service reports, which is `BackupandDR_reports`. If you set a different name when you created the sink, change the dataset name to match your BigQuery dataset.
- Save the file with a Bash (`.sh`) file extension, for example, `backupdrreports.sh`.
- Run the bash command using the file you just created, for example, `bash backupdrreports.sh`.

You can see the prebuilt reports under the dataset in BigQuery Studio.
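After the script completes, you can also preview any of the views from Cloud Shell. A minimal sketch, where PROJECT_ID and DATASET_NAME are placeholders:

```bash
# Sketch: preview the "Backup Jobs Summary" prebuilt report view.
# PROJECT_ID and DATASET_NAME are placeholders.
bq query --use_legacy_sql=false \
    'SELECT * FROM `PROJECT_ID.DATASET_NAME.Backup Jobs Summary` LIMIT 10'
```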
After you set up prebuilt reports, if you have access to Google Workspace, you can view the results in Google Connected Sheets. If you don't have access to Google Workspace, you can view the reports in Looker Studio or download prebuilt reports as a CSV file.
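For the CSV route, one option (a sketch; the view and file names are illustrative) is to export query results from Cloud Shell:

```bash
# Sketch: save a prebuilt report as a CSV file from Cloud Shell.
# PROJECT_ID, DATASET_NAME, and failed_jobs.csv are placeholders.
bq query --format=csv --use_legacy_sql=false \
    'SELECT * FROM `PROJECT_ID.DATASET_NAME.Failed Jobs`' > failed_jobs.csv
```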
Pricing
You may be charged for viewing prebuilt reports in BigQuery. These charges are based on the volume of reporting data that is streamed, stored, and queried in BigQuery. For details, see Pricing.
What's next
- Learn how to Write custom queries in BigQuery
- Learn how to View prebuilt reports in Google connected sheets
- Learn how to View prebuilt reports in Looker Studio