Comprehensive and Detailed Explanation (from Snowflake SnowPro Architect exam scope and publicly documented material):
Clustering keys are most beneficial when they improve micro-partition pruning for common filter patterns and when the chosen columns provide a useful ordering that co-locates related data. A common heuristic is to place lower-cardinality columns first (to quickly narrow the set of partitions) and then add a higher-cardinality column that further reduces scanned partitions for selective access paths. Here, C_DATE has a very low NDV (110), making it an excellent leading key: it organizes data by date and enables strong pruning for the time-bound queries typical of event tables. Next, A_ID (11K) has moderate cardinality and can further segment data within a date range, helping point lookups or narrow scans by identifier. For the third key, the options force a choice between very high-cardinality event activity columns; EVENT_ACT_0 (1.1G) is preferable to EVENT_ACT_4 (2.2G) because it is comparatively less distinct while still supporting additional pruning when queries filter by that attribute. This ordering aligns with Snowflake guidance: keep keys few, order them to match common predicates, and avoid excessively high-cardinality keys unless they directly match frequent selective filters.
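As a minimal sketch (assuming a hypothetical table named EVENTS with the columns described above), the resulting clustering key definition and a follow-up check of clustering quality could look like this:
ALTER TABLE EVENTS CLUSTER BY (C_DATE, A_ID, EVENT_ACT_0);
-- Inspect how well micro-partitions are co-located on the chosen key
SELECT SYSTEM$CLUSTERING_INFORMATION('EVENTS', '(C_DATE, A_ID, EVENT_ACT_0)');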
=========
QUESTION NO: 2 [Performance Optimization and Monitoring]
An Architect has implemented the search optimization service for a table. A user adds a new column to the table and there is a decrease in query performance. The Architect executes the DESCRIBE SEARCH OPTIMIZATION command and finds that the newly added column was not included in the search access path. Why did this occur?
A. The new column has a data type that is not supported by the search optimization service.
B. The new column is automatically included in the search access path, but there is a time delay before it takes effect.
C. The ON clause was used when enabling the search optimization service, which does not automatically include new columns.
D. The search optimization property needs to be dropped and then added back for the changes to take effect.
Answer: C
Comprehensive and Detailed Explanation (from Snowflake SnowPro Architect exam scope and publicly documented material):
Search Optimization Service (SOS) builds and maintains a “search access path” for predicates on specific columns. When SOS is enabled for the whole table (no ON clause), Snowflake maintains search access paths for all eligible columns; however, if SOS is enabled using an ON clause (targeting particular columns or expressions), the access path is explicitly defined and does not automatically expand to include newly added columns. That means queries filtering on the new column won’t benefit from SOS, and if workload patterns shift toward that column, perceived performance can drop relative to expectations. DESCRIBE SEARCH OPTIMIZATION showing that the new column is absent from the access path is consistent with SOS being configured for a fixed set of columns. In SnowPro Architect terms, this is an operational and design consideration: SOS requires deliberate selection of access paths aligned to query predicates, and schema evolution (adding columns) may require revisiting the SOS configuration to include new predicates that matter. This also reinforces the cost/performance tradeoff: SOS accelerates selective point lookups and highly selective filters, but it must be targeted and maintained as query patterns and schemas change.
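As an illustrative sketch (the table and column names T1, COL_A, and NEW_COL are hypothetical), enabling SOS with an ON clause fixes the covered columns, so a later schema change requires explicitly extending the configuration:
-- Original configuration: only COL_A is covered by the search access path
ALTER TABLE T1 ADD SEARCH OPTIMIZATION ON EQUALITY(COL_A);
-- After ALTER TABLE T1 ADD COLUMN NEW_COL ..., the new column is not covered automatically;
-- it must be added to the configuration explicitly
ALTER TABLE T1 ADD SEARCH OPTIMIZATION ON EQUALITY(NEW_COL);
-- Verify which columns and search methods are included in the search access path
DESCRIBE SEARCH OPTIMIZATION ON T1;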
=========
QUESTION NO: 3 [Architecting Snowflake Solutions]
An Architect is defining transaction rules to adhere to ACID properties to ensure that executed statements are either committed or rolled back. Based on this scenario, what characteristics of transactions should be considered? (Select TWO).
A. The autocommit setting can be changed inside a stored procedure.
B. Explicit transactions should contain DDL, DML, and query statements.
C. Explicit transactions should contain only DML statements and query statements. All DDL statements implicitly commit active transactions.
D. An explicit transaction can be started by executing a BEGIN WORK statement and can be ended by executing a COMMIT WORK statement.
E. An explicit transaction can be started by executing a BEGIN TRANSACTION statement and can be ended by executing an END TRANSACTION statement.
Answer: C, D
Comprehensive and Detailed Explanation (from Snowflake SnowPro Architect exam scope and publicly documented material):
Snowflake supports transactional behavior for DML (and SELECT statements within a transaction context), but DDL statements have special behavior: many DDL operations implicitly commit the current transaction. Because of that, mixing DDL with DML inside an explicit transaction undermines the “all-or-nothing” rollback expectations and complicates ACID-driven rules. Therefore, a key design characteristic is that explicit transactions should generally be limited to DML and query statements, while recognizing that DDL can implicitly commit and break transactional boundaries (Choice C). Additionally, Snowflake supports explicit transaction control statements using standard SQL forms; BEGIN WORK and COMMIT WORK are valid ways to open and close an explicit transaction (Choice D). From an architecting standpoint, this matters for pipeline design, error handling, and idempotency: multi-step loads/merges should group logically atomic DML steps into an explicit transaction so they commit together or roll back together, but DDL (like CREATE/ALTER) should be separated or executed with full awareness of implicit commits. This is consistent with SnowPro Architect expectations around reliable data engineering patterns and correctness under failure.
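A minimal sketch of this pattern (the ORDERS and ORDER_AUDIT tables are hypothetical) groups related DML into one explicit transaction using BEGIN WORK / COMMIT WORK, while DDL stays outside the transaction because it would implicitly commit:
-- DDL executed outside the transaction (it would implicitly commit if placed inside)
CREATE TABLE IF NOT EXISTS ORDER_AUDIT (ORDER_ID NUMBER, ACTION STRING);
BEGIN WORK;
UPDATE ORDERS SET STATUS = 'SHIPPED' WHERE ORDER_ID = 1001;
INSERT INTO ORDER_AUDIT (ORDER_ID, ACTION) VALUES (1001, 'SHIPPED');
COMMIT WORK;  -- both DML statements commit together; a ROLLBACK here would undo both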
=========
QUESTION NO: 4 [Snowflake Data Engineering]
A Snowflake account has the following parameters:
MIN_DATA_RETENTION_TIME_IN_DAYS is set to 5 at the account level.
DATA_RETENTION_TIME_IN_DAYS is set to 4 on database DB1, and 6 on Schema1 in DB1.
DATA_RETENTION_TIME_IN_DAYS is set to 5 on database DB2, and 8 on Schema2 in DB2.
What will be the result?
A. DB1 and Schema1 retained 5 days; DB2 and Schema2 retained 5 days.
B. DB1 and Schema1 retained 4 days; DB2 and Schema2 retained 5 days.
C. DB1 retained 4 days and Schema1 6 days; DB2 retained 5 days and Schema2 8 days.
D. DB1 retained 5 days and Schema1 6 days; DB2 retained 5 days and Schema2 8 days.
Answer: D
Comprehensive and Detailed Explanation (from Snowflake SnowPro Architect exam scope and publicly documented material):
Snowflake Time Travel retention is governed by a parameter hierarchy and by edition-dependent limits, but the key behavior in this scenario is the minimum retention guardrail. MIN_DATA_RETENTION_TIME_IN_DAYS sets a floor: objects cannot effectively use a DATA_RETENTION_TIME_IN_DAYS value below that minimum. With an account-level minimum of 5 days, a database-level setting of 4 days is below the permitted minimum, so the effective retention for DB1 cannot be 4 days; it must be at least 5 days. Meanwhile, Schema1 is set to 6 days, which is above the minimum and should apply at the schema level (schemas can override their parent database settings within allowed bounds). For DB2, the database setting is 5 days, which matches the minimum and is valid; Schema2 at 8 days is also valid and applies to objects under that schema where applicable. From an architectural and operational perspective, this highlights two exam-relevant points: (1) retention is controlled via hierarchical parameters (account → database → schema → table), and (2) minimum retention settings enforce governance constraints across environments, preventing overly aggressive reductions that might violate recovery requirements.
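A minimal sketch of the scenario's settings (object names taken from the question) and how to inspect them:
ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 5;
ALTER DATABASE DB1 SET DATA_RETENTION_TIME_IN_DAYS = 4;  -- effective retention is raised to the 5-day minimum
ALTER SCHEMA DB1.SCHEMA1 SET DATA_RETENTION_TIME_IN_DAYS = 6;
ALTER DATABASE DB2 SET DATA_RETENTION_TIME_IN_DAYS = 5;
ALTER SCHEMA DB2.SCHEMA2 SET DATA_RETENTION_TIME_IN_DAYS = 8;
-- Inspect the retention parameter as set at a given level
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN SCHEMA DB1.SCHEMA1;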
=========
QUESTION NO: 5 [Performance Optimization and Monitoring]
An Architect has configured the search optimization service on a table, but metrics show that performance of a number of regularly executed queries is not improving. What could be causing this? (Select TWO).
A. The queries contain SEARCH functions.
B. The queries exceed the predicate limit.
C. The queries contain a predicate mismatch.
D. The queries are running against tables that contain semi-structured data.
E. The queries have a scalar subquery that queries the same table as the table in an outer query.
Answer: B, C
Comprehensive and Detailed Explanation (from Snowflake SnowPro Architect exam scope and publicly documented material):
Search Optimization Service improves performance by maintaining additional search structures to accelerate selective predicates for supported query patterns. Two common reasons it may not help are (1) exceeding the predicate limit and (2) predicate mismatch. If queries include more predicates than SOS can leverage effectively (or beyond documented limits for the access path), Snowflake may not apply the optimization and will revert to normal micro-partition pruning and scanning behavior (Choice B). Predicate mismatch occurs when the query’s filter conditions do not align with the SOS access path definition—examples include using different columns than those optimized, applying functions/casts that prevent using the access path, or using non-supported predicate forms—so SOS cannot accelerate those queries (Choice C). Semi-structured data alone does not inherently prevent SOS benefits; SOS can be used with certain patterns, including on some expressions, but it still must match the configured access path. Likewise, the presence of SEARCH functions is not the core issue; SOS is about access paths and supported predicates, not requiring SEARCH() usage. Architecturally, this is why SOS should be selectively enabled for columns and predicate shapes that dominate high-cost workloads, and monitored/adjusted as query patterns evolve.
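As an illustrative sketch (assuming a hypothetical EVENTS table with SOS enabled for equality on USER_ID), the same column can hit or miss the access path depending on the predicate shape:
ALTER TABLE EVENTS ADD SEARCH OPTIMIZATION ON EQUALITY(USER_ID);
-- Matches the access path: simple equality predicate on the optimized column
SELECT * FROM EVENTS WHERE USER_ID = 'U12345';
-- Predicate mismatch: wrapping the optimized column in a function generally prevents use of the access path
SELECT * FROM EVENTS WHERE UPPER(USER_ID) = 'U12345';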