Single-choice question: A data analyst is using Amazon QuickSight for data visualization across multiple datasets generated by applications. Each application stores files within a separate Amazon S3 bucket. AWS Glue Data Catalog is used as a central catalog across all application data in Amazon S3. A new application stores its data within a separate S3 bucket. After updating the catalog to include the new application data source, the data analyst created a new Amazon QuickSight data source from an Amazon Athena table, but the import into SPICE failed.
How should the data analyst resolve the issue?

A. Edit the permissions for the AWS Glue Data Catalog from within the Amazon QuickSight console.
B. Edit the permissions for the new S3 bucket from within the Amazon QuickSight console.
C. Edit the permissions for the AWS Glue Data Catalog from within the AWS Glue console.
D. Edit the permissions for the new S3 bucket from within the S3 console.

Related questions

Single-choice question: A software company hosts an application on AWS, and new features are released weekly. As part of the application testing process, a solution must be developed that analyzes logs from each Amazon EC2 instance to ensure that the application is working as expected after each deployment. The collection and analysis solution should be highly available, with the ability to display new information with minimal delay.
Which method should the company use to collect and analyze the logs?

A. Enable detailed monitoring on Amazon EC2, use the Amazon CloudWatch agent to store logs in Amazon S3, and use Amazon Athena for fast, interactive log analytics.
B. Use the Amazon Kinesis Producer Library (KPL) agent on Amazon EC2 to collect and send data to Kinesis Data Streams to further push the data to Amazon OpenSearch Service (Amazon Elasticsearch Service), and visualize using Amazon QuickSight.
C. Use the Amazon Kinesis Producer Library (KPL) agent on Amazon EC2 to collect and send data to Kinesis Data Firehose to further push the data to Amazon OpenSearch Service (Amazon Elasticsearch Service) and OpenSearch Dashboards (Kibana).
D. Use Amazon CloudWatch subscriptions to get access to a real-time feed of logs, and have the logs delivered to Amazon Kinesis Data Streams to further push the data to Amazon OpenSearch Service (Amazon Elasticsearch Service) and OpenSearch Dashboards (Kibana).

Single-choice question: A financial services company needs to aggregate daily stock trade data from the exchanges into a data store. The company requires that data be streamed directly into the data store, but also occasionally allows data to be modified using SQL. The solution should support complex, analytic queries running with minimal latency. The solution must provide a business intelligence dashboard that enables viewing of the top contributors to anomalies in stock prices.
Which solution meets the company's requirements?

A. Use Amazon Kinesis Data Firehose to stream data to Amazon S3. Use Amazon Athena as a data source for Amazon QuickSight to create a business intelligence dashboard.
B. Use Amazon Kinesis Data Streams to stream data to Amazon Redshift. Use Amazon Redshift as a data source for Amazon QuickSight to create a business intelligence dashboard.
C. Use Amazon Kinesis Data Firehose to stream data to Amazon Redshift. Use Amazon Redshift as a data source for Amazon QuickSight to create a business intelligence dashboard.
D. Use Amazon Kinesis Data Streams to stream data to Amazon S3. Use Amazon Athena as a data source for Amazon QuickSight to create a business intelligence dashboard.

Single-choice question: A company uses Amazon OpenSearch Service (Amazon Elasticsearch Service) to store and analyze its website clickstream data. The company ingests 1 TB of data daily using Amazon Kinesis Data Firehose and stores one day's worth of data in an Amazon ES cluster.
The company has very slow query performance on the Amazon ES index and occasionally sees errors from Kinesis Data Firehose when attempting to write to the index. The Amazon ES cluster has 10 nodes running a single index and 3 dedicated master nodes. Each data node has 1.5 TB of Amazon EBS storage attached, and the cluster is configured with 1,000 shards. Occasionally, JVMMemoryPressure errors are found in the cluster logs.
Which solution will improve the performance of Amazon ES?

A. Increase the memory of the Amazon ES master nodes.
B. Decrease the number of Amazon ES data nodes.
C. Decrease the number of Amazon ES shards for the index.
D. Increase the number of Amazon ES shards for the index.
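The shard question above lends itself to a back-of-the-envelope check. A common OpenSearch sizing guideline is to keep each shard in roughly the 10-50 GB range; a minimal sketch of the arithmetic (the 1 TB/day and 1,000-shard figures come from the question, the 30 GB target is an assumed rule of thumb):

```python
# Back-of-the-envelope shard sizing for the cluster described in the question.
daily_data_gb = 1000     # ~1 TB ingested and retained per day (from the question)
current_shards = 1000    # configured shard count (from the question)
target_shard_gb = 30     # assumed rule of thumb: aim for 10-50 GB per shard

current_shard_gb = daily_data_gb / current_shards
recommended_shards = round(daily_data_gb / target_shard_gb)

print(f"current shard size: ~{current_shard_gb:.0f} GB each")   # ~1 GB: far too small
print(f"suggested shard count: ~{recommended_shards}")           # ~33
```

With 1,000 shards over ~1 TB, each shard holds about 1 GB, and every shard carries fixed JVM overhead, which is consistent with the JVMMemoryPressure errors in the scenario.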

Single-choice question: A real estate company has a mission-critical application using Apache HBase in Amazon EMR. Amazon EMR is configured with a single master node. The company has over 5 TB of data stored on a Hadoop Distributed File System (HDFS). The company wants a cost-effective solution to make its HBase data highly available.
Which architectural pattern meets the company's requirements?

A. Use Spot Instances for core and task nodes and a Reserved Instance for the EMR master node. Configure the EMR cluster with multiple master nodes. Schedule automated snapshots using Amazon EventBridge.
B. Store the data on an EMR File System (EMRFS) instead of HDFS. Enable EMRFS consistent view. Create an EMR HBase cluster with multiple master nodes. Point the HBase root directory to an Amazon S3 bucket.
C. Store the data on an EMR File System (EMRFS) instead of HDFS and enable EMRFS consistent view. Run two separate EMR clusters in two different Availability Zones. Point both clusters to the same HBase root directory in the same Amazon S3 bucket.
D. Store the data on an EMR File System (EMRFS) instead of HDFS and enable EMRFS consistent view. Create a primary EMR HBase cluster with multiple master nodes. Create a secondary EMR HBase read-replica cluster in a separate Availability Zone. Point both clusters to the same HBase root directory in the same Amazon S3 bucket.

Single-choice question: A company has a business unit uploading .csv files to an Amazon S3 bucket. The company's data platform team has set up an AWS Glue crawler to do discovery and to create tables and schemas. An AWS Glue job writes processed data from the created tables to an Amazon Redshift database. The AWS Glue job handles column mapping and creates the Amazon Redshift table appropriately. When the AWS Glue job is rerun for any reason in a day, duplicate records are introduced into the Amazon Redshift table.
Which solution will update the Redshift table without duplicates when jobs are rerun?

A. Modify the AWS Glue job to copy the rows into a staging table. Add SQL commands to replace the existing rows in the main table as postactions in the DynamicFrameWriter class.
B. Load the previously inserted data into a MySQL database in the AWS Glue job. Perform an upsert operation in MySQL, and copy the results to the Amazon Redshift table.
C. Use Apache Spark's DataFrame dropDuplicates() API to eliminate duplicates and then write the data to Amazon Redshift.
D. Use the AWS Glue ResolveChoice built-in transform to select the most recent value of the column.
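Option A describes the staging-table pattern: land new rows in a staging table, then delete-and-insert against the main table so a rerun replaces rows instead of appending duplicates. A minimal sketch of that merge logic using SQLite in place of Amazon Redshift (table and column names are made up for illustration; in a Glue job the DELETE/INSERT statements would be supplied as postactions on the Redshift write):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE main (id INTEGER PRIMARY KEY, val TEXT)")
cur.execute("CREATE TABLE stage (id INTEGER, val TEXT)")

def merge():
    # Replace any existing rows with the same key, append the rest,
    # then clear the staging table for the next run.
    cur.execute("DELETE FROM main WHERE id IN (SELECT id FROM stage)")
    cur.execute("INSERT INTO main SELECT id, val FROM stage")
    cur.execute("DELETE FROM stage")
    conn.commit()

# First run: rows land in staging, then merge into main.
cur.executemany("INSERT INTO stage VALUES (?, ?)", [(1, "a"), (2, "b")])
merge()

# Rerun of the same job: identical rows are staged again,
# but the merge replaces rather than duplicates them.
cur.executemany("INSERT INTO stage VALUES (?, ?)", [(1, "a"), (2, "b")])
merge()

print(cur.execute("SELECT COUNT(*) FROM main").fetchone()[0])  # 2, not 4
```

The key property is idempotence: running the merge any number of times with the same staged rows leaves the main table unchanged.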

Single-choice question: A data analyst is using AWS Glue to organize, cleanse, validate, and format a 200 GB dataset. The data analyst triggered the job to run with the Standard worker type. After 3 hours, the AWS Glue job status is still RUNNING. Logs from the job run show no error codes. The data analyst wants to improve the job execution time without overprovisioning.
Which actions should the data analyst take?

A. Enable job bookmarks in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the executor-cores job parameter.
B. Enable job metrics in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the maximum capacity job parameter.
C. Enable job metrics in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the spark.yarn.executor.memoryOverhead job parameter.
D. Enable job bookmarks in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the num-executors job parameter.

Single-choice question: A streaming application is reading data from Amazon Kinesis Data Streams and immediately writing the data to an Amazon S3 bucket every 10 seconds. The application is reading data from hundreds of shards. The batch interval cannot be changed due to a separate requirement. The data is being accessed by Amazon Athena. Users are seeing degradation in query performance as time progresses.
Which action can help improve query performance?

A. Merge the files in Amazon S3 to form larger files.
B. Increase the number of shards in Kinesis Data Streams.
C. Add more memory and CPU capacity to the streaming application.
D. Write the files to multiple S3 buckets.
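The scenario above is the classic small-files problem: hundreds of shards each flushing every 10 seconds produce a flood of tiny S3 objects, and Athena pays per-object overhead on every scan. A minimal sketch of compacting many small objects into fewer larger ones (in-memory strings stand in for S3 objects, and the batch size is an arbitrary illustration; in practice you would target files on the order of 128 MB or more):

```python
# Simulate compacting many small "objects" into fewer large ones.
small_objects = [f"record-{i}\n" for i in range(600)]  # e.g. one object per shard per flush
TARGET_RECORDS_PER_FILE = 100                          # arbitrary illustrative batch size

merged_files = [
    "".join(small_objects[i:i + TARGET_RECORDS_PER_FILE])
    for i in range(0, len(small_objects), TARGET_RECORDS_PER_FILE)
]

print(len(small_objects), "->", len(merged_files))  # 600 -> 6 objects to scan
```

Fewer, larger objects mean fewer S3 GET requests and larger sequential reads per query, which is why merging (rather than adding shards or compute) addresses the degradation.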

Single-choice question: A financial company hosts a data lake in Amazon S3 and a data warehouse on an Amazon Redshift cluster. The company uses Amazon QuickSight to build dashboards and wants to secure access from its on-premises Active Directory to Amazon QuickSight.
How should the data be secured?

A. Use an Active Directory connector and single sign-on (SSO) in a corporate network environment.
B. Use a VPC endpoint to connect to Amazon S3 from Amazon QuickSight and an IAM role to authenticate Amazon Redshift.
C. Establish a secure connection by creating an S3 endpoint to connect Amazon QuickSight and a VPC endpoint to connect to Amazon Redshift.
D. Place Amazon QuickSight and Amazon Redshift in the security group and use an Amazon S3 endpoint to connect Amazon QuickSight to Amazon S3.