Snowflake SnowPro Advanced: Data Engineer - Practice Exam 2
The SnowPro Advanced: Data Engineer credential recognizes specific knowledge and skills used to apply comprehensive data engineering principles using Snowflake. Passing this certification showcases your ability to source and transform data from data lakes and APIs, design end-to-end near real-time streams, and evaluate performance metrics, among other things. Two or more years of hands-on Snowflake practitioner experience in a data engineering role is recommended before attempting this exam.
Is it a good idea to use a Resource Monitor to control credit usage for Snowpipe?
Which of the following features incur data transfer costs?
Which command can we use to list all the files that have been staged in a named external stage like Amazon S3 without copying them into Snowflake?
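As a quick illustration, the following sketch (the stage name is hypothetical) shows how staged files can be listed without loading them:

```sql
-- List all files staged in a named external stage (hypothetical name)
LIST @my_s3_stage;

-- Optionally restrict the listing to a path prefix
LIST @my_s3_stage/2024/01/;
```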
After running the following command, Snowflake returns the following information:

How can we check that the INVENTORY table is not well-clustered?
Does the Search Optimization Service involve extra storage and computing costs?
What other parameter does Snowflake recommend adding when organizing files into logical paths?
If you want your data warehouse to be available with no lag time, is it a good idea to consider disabling the auto-suspend functionality?
Which of these commands require a running warehouse?
Which of the following JOIN statements are supported by Snowflake?
Which command can we use to check the time zone in which your Snowflake account is running?
After creating the table MY_TABLE, we execute the following commands:
What will be the output of executing the following command?
How can we avoid exposing sequence-generated column values in UDFs?
What information does an API Integration object store?
Our company wants to execute a complicated ETL process inside Snowflake every day on the new data that is coming. What is the best way to automate it?
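One common automation pattern (object names here are hypothetical) combines a scheduled task with a stored procedure; a minimal sketch:

```sql
-- Hypothetical sketch: run an ETL stored procedure every day at 02:00 UTC
CREATE TASK nightly_etl
  WAREHOUSE = etl_wh
  SCHEDULE  = 'USING CRON 0 2 * * * UTC'
AS
  CALL run_daily_etl();

-- Tasks are created in a suspended state and must be resumed to run
ALTER TASK nightly_etl RESUME;
```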
Which is generally the slowest option when using the COPY INTO command?
What types of services created and managed by the Cloud Providers does an event-driven pipeline in Snowflake depend on?
Does Snowflake increase the stream’s offset data retention period if it’s less than 90 days?
In which locations can you define the file format settings for your staged data?
Does Snowpipe use file-loading metadata associated with each pipe object to prevent reloading and duplicating the same files?
Should Data Integration Tech Partner solutions be used to load the data, while Snowflake is used to deliver data to stages?
Is there any direct charge for using the Kafka connector?
When does a stream become stale?
What is the purpose of the MINS_TO_BYPASS_MFA property?
Are Scoped Transactions treated as nested transactions?
After how many days does the Snowflake data loading history (that can be checked using the COPY_HISTORY command) of the COPY INTO command and Snowpipe expire?
Are floating point data types and collations supported by the Search Optimization Service?
What two options can we specify in the FILE_FORMAT parameter of the COPY INTO command?
What are the supported values for the VALIDATION_MODE parameter while using the COPY INTO <TABLE> command?
What are some of the advantages of removing loaded data files from stages?
After executing the following CTE (Common Table Expression), you get the following error:
"Recursion exceeded max iteration count (100)" How can you solve it?
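By way of illustration, a recursive CTE needs a termination condition in its recursive term; a minimal sketch (the CTE here is hypothetical, not the one from the question):

```sql
-- Minimal sketch: the WHERE clause bounds the recursion,
-- preventing the "Recursion exceeded max iteration count" error
WITH RECURSIVE counter (n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1
    FROM counter
    WHERE n < 10          -- termination condition
)
SELECT n FROM counter;
```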
Imagine we have a table called “VISITOR” with the following data:

Which Snowflake function will allow us to return a concatenated list of the countries in which each person has been?
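For context, one way to produce such a concatenated list is an aggregate over the table (the NAME and COUNTRY column names are assumptions based on the question):

```sql
-- Hypothetical sketch against the VISITOR table described above
SELECT name,
       LISTAGG(DISTINCT country, ', ')
         WITHIN GROUP (ORDER BY country) AS countries_visited
FROM visitor
GROUP BY name;
```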
When returning the clustering information with the SYSTEM$CLUSTERING_INFORMATION and SYSTEM$CLUSTERING_DEPTH commands, is it optional to add the column names?
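For reference, both functions take the table name and, optionally, a parenthesized list of columns (table and column names below are hypothetical):

```sql
-- With explicit columns (hypothetical names)
SELECT SYSTEM$CLUSTERING_INFORMATION('inventory', '(sku, warehouse_id)');

-- Without columns: uses the table's defined clustering key, if any
SELECT SYSTEM$CLUSTERING_DEPTH('inventory');
```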
Which are the benefits of the Search Optimization Service?
Is an existing Clustering Key supported when a table is created using the CREATE TABLE… AS SELECT command?
After querying a timezone column, where we supposedly had EST timezone data, we can see that the data has a "Z" at the end. How can this be possible?
Which of the following statements are possible ways to reduce the risk of SQL injection attacks while using Stored Procedures?
What is the maximum number of files the COPY INTO command can load when providing a discrete list of files?
When should we use the matchedClause in Snowflake?
Can data be inserted multiple times, causing duplicated records, if you run multiple instances of the Kafka connector listening to the same topics?
Can the same column be specified simultaneously in a masking policy signature and a row access policy signature?
Can functions used in the definition of a materialized view be non-deterministic?
Which of the following statements are correct about the TASK_HISTORY function?
Which two commands can we use to refresh the metadata of an external table hosted on the AWS Cloud Storage?
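As an illustration (the table name is hypothetical), external table metadata can also be refreshed manually like this:

```sql
-- Refresh all of the external table's metadata (hypothetical table name)
ALTER EXTERNAL TABLE my_ext_table REFRESH;

-- Refresh only the files under a given path prefix
ALTER EXTERNAL TABLE my_ext_table REFRESH '2024/01/';
```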
A Snowflake developer has created a masking policy with the following syntax:
Which statements are correct about this policy?
What does the “SECURE” optional parameter indicate when creating an external function?
Which statement is false about the following task?
Which of the following Spark DataFrame functions does the Snowflake connector support?
Which of the following statements is true about API Integration objects?
Which are the possible search methods that the ADD SEARCH OPTIMIZATION command accepts using the SEARCH_METHOD parameter?
Which of the following statements are correct about the Snowflake Connector for Python?
How can we check the status of an asynchronous query using the Python Connector?
When should a Data Engineer enable the “STRIP_OUTER_ARRAY” file format option in the COPY INTO command?
What data will Snowflake return after running the following command on the non-clustered INVENTORY table?
When should we use the notMatchedClause in Snowflake?
When creating a file format, which file format should we specify when using the "STRIP_OUTER_ARRAY = TRUE | FALSE" parameter?
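For context, the option is defined on a file format object; a minimal sketch (the format name is hypothetical):

```sql
-- STRIP_OUTER_ARRAY is only valid for the JSON file format type
CREATE OR REPLACE FILE FORMAT my_json_format
  TYPE = 'JSON'
  STRIP_OUTER_ARRAY = TRUE;
```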
How does Snowflake prevent data duplication when a COPY statement is executed?
Can natural JOINs be combined with “ON” clauses?
A table has been created using the following syntax:
After performing a COPY operation, all the results have the same timestamp in the “created_at” column. Why is this happening?
Which of the following options is not a compression technique for AVRO file formats?
After adding the Search Optimization Service on a Snowflake table called MY_TABLE, we forgot to include two columns, so we ran the same command on the other two columns, as we can see in the following code:
What is going to be the result?
What types of tables are good candidates for implementing search optimization?
Which statements are true about the Snowflake Spark connector's internal and external transfer modes?
One query takes a lot of time, and you see the following information in the query profiler:

What might be the cause of this?
Which of the following options is a good practice if we need to recreate a pipe (using CREATE OR REPLACE PIPE to modify most pipe properties) that automates data loads using event notifications?
We tried to load several thousand files into Snowflake using COPY INTO; however, it didn’t load all of them. What is the most likely cause?