Snowpark workloads are often memory- and compute-intensive, especially when executing complex transformations, large joins, or machine learning logic inside stored procedures. In Snowflake, the MAX_CONCURRENCY_LEVEL warehouse parameter controls how many concurrent queries can run on a single cluster of a virtual warehouse. Lowering concurrency increases the amount of compute and memory available to each individual query.
Setting MAX_CONCURRENCY_LEVEL = 1 ensures that only one query can execute at a time on the warehouse cluster, allowing that query to consume the maximum possible share of CPU, memory, and I/O resources. This is the recommended configuration when the goal is to optimize performance for a single Snowpark job rather than maximizing throughput for many users. Higher concurrency levels would divide resources across multiple queries, reducing per-query performance and potentially causing spilling to remote storage.
For SnowPro Architect candidates, this reinforces an important cost and performance tradeoff: concurrency tuning trades overall warehouse throughput for per-query resources. When running batch-oriented or compute-heavy Snowpark workloads, architects should favor lower concurrency to maximize per-query resources, even if that means fewer workloads can run at the same time.
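As a minimal sketch, the configuration is a single warehouse parameter change; the warehouse name SNOWPARK_WH below is illustrative:
-- Dedicate the full cluster to one query at a time so the Snowpark job
-- receives the maximum share of CPU, memory, and I/O.
ALTER WAREHOUSE snowpark_wh SET MAX_CONCURRENCY_LEVEL = 1;
-- Verify the current setting.
SHOW PARAMETERS LIKE 'MAX_CONCURRENCY_LEVEL' IN WAREHOUSE snowpark_wh;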
=========
QUESTION NO: 12 [Cost Control and Resource Management]
An Architect executes the following query:
SELECT query_hash,
       COUNT(*) AS query_count,
       SUM(QH.EXECUTION_TIME) AS total_execution_time,
       SUM((QH.EXECUTION_TIME / (1000 * 60 * 60)) * 8) AS c
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY QH
WHERE warehouse_name = 'WH_L'
  AND DATE_TRUNC('day', start_time) >= CURRENT_DATE() - 3
GROUP BY query_hash
ORDER BY c DESC
LIMIT 10;
What information does this query provide? (Select TWO).
A. It shows the total execution time and credit estimates for the 10 most expensive individual queries executed on WH_L over the last 3 days.
B. It shows the total execution time and credit estimates for the 10 most expensive query groups (identical or similar queries) executed on WH_L over the last 3 days.
C. It shows the total execution time and credit estimates for the 10 most frequently run query groups executed on WH_L over the last 3 days.
D. It calculates relative cost by converting execution time to minutes and multiplying by credits used.
E. It calculates relative cost by converting execution time to hours and multiplying by credits used.
Answer: B, E
This query groups results by QUERY_HASH, a fingerprint of the canonicalized query text that identifies repeated executions of the same (or logically identical) statement. As a result, the aggregation is performed at the query group level, not at the individual execution level, which lets architects identify patterns where the same logical SQL repeatedly consumes a large amount of compute (Answer B).
The cost calculation converts execution time from milliseconds to hours by dividing by (1000 * 60 * 60) and then multiplies the result by 8, the hourly credit consumption of the warehouse size (8 credits per hour corresponds to a Large warehouse, which the name WH_L suggests). This yields a relative estimate of credit usage per query group: not an exact billing value, but a useful approximation for cost analysis (Answer E).
The query does not identify the most frequently executed queries; although COUNT(*) is included, the ordering is done by calculated cost (c), not by frequency. This type of analysis is directly aligned with SnowPro Architect responsibilities, helping architects optimize workloads, refactor expensive query patterns, and right-size warehouses to control costs.
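As a quick sanity check of the cost expression, the standalone statement below (with an illustrative value) reproduces the formula for a query group that accumulated one hour of execution time on an 8-credit-per-hour warehouse:
-- 3,600,000 ms / (1000 * 60 * 60) = 1 hour; 1 hour * 8 credits/hour = 8 credits.
SELECT (3600000 / (1000 * 60 * 60)) * 8 AS estimated_credits;  -- returns 8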
=========
QUESTION NO: 13 [Architecting Snowflake Solutions]
An Architect is designing a disaster recovery plan for a global fraud reporting system. The plan must support near real-time systems using Snowflake data, operate near regional centers with fully redundant failover, and must not be publicly accessible.
Which steps must the Architect take? (Select THREE).
A. Create multiple replicating Snowflake Standard edition accounts.
B. Establish one Snowflake account using a Business Critical edition or higher.
C. Establish multiple Snowflake accounts in each required region with independent data sets.
D. Set up Secure Data Sharing among all Snowflake accounts in the organization.
E. Create a Snowflake connection object.
F. Create a failover group for the fraud data for each regional account.
Answer: B, C, F
Mission-critical, near real-time systems with strict availability and security requirements require advanced Snowflake features. Business Critical edition (or higher) is required to support failover groups and cross-region replication with higher SLA guarantees and compliance capabilities (Answer B). To meet regional proximity and redundancy requirements, multiple Snowflake accounts must be deployed in each required region, ensuring independence and isolation between regional environments (Answer C).
Failover groups are the core Snowflake mechanism for disaster recovery. They replicate selected account objects, such as databases, shares, and roles, across accounts and allow controlled promotion of a secondary account to primary during failover events (Answer F). Secure Data Sharing alone does not provide DR or replication, and a connection object (used for Client Redirect) only redirects client connections after a failover has occurred; it does not replicate data and is not sufficient on its own.
This design aligns with SnowPro Architect best practices for multi-region disaster recovery, enabling low-latency regional access, controlled failover, and strong isolation without exposing systems to the public internet.
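A minimal sketch of this setup is shown below; the organization, account, database, and group names are illustrative, and the replication schedule is an assumption:
-- On the primary account: group the fraud database (and roles) for replication and failover.
CREATE FAILOVER GROUP fraud_fg
  OBJECT_TYPES = DATABASES, ROLES
  ALLOWED_DATABASES = fraud_db
  ALLOWED_ACCOUNTS = myorg.dr_account_emea, myorg.dr_account_apac
  REPLICATION_SCHEDULE = '10 MINUTE';
-- On each secondary (regional) account: create a replica of the group and refresh it.
CREATE FAILOVER GROUP fraud_fg
  AS REPLICA OF myorg.primary_account.fraud_fg;
ALTER FAILOVER GROUP fraud_fg REFRESH;
-- During a regional outage, promote the local secondary to primary.
ALTER FAILOVER GROUP fraud_fg PRIMARY;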
=========
QUESTION NO: 14 [Snowflake Data Engineering]
What transformations are supported in the following SQL statement? (Select THREE).
CREATE PIPE … AS
COPY INTO …
FROM ( … )
A. Data can be filtered by an optional WHERE clause.
B. Columns can be reordered.
C. Columns can be omitted.
D. Type casts are supported.
E. Incoming data can be joined with other tables.
F. The ON_ERROR = ABORT_STATEMENT command can be used.
Answer: A, B, D
Snowflake’s COPY INTO statement (including when used with Snowpipe) supports a limited but useful set of transformations. Data can be filtered using a WHERE clause when loading from a staged SELECT statement, enabling simple row-level filtering (Answer A). Columns can also be reordered by explicitly selecting fields in a different order than they appear in the source (Answer B). Additionally, type casting is supported, allowing raw data to be cast into target column data types during ingestion (Answer D).
However, COPY INTO does not support joins with other tables; it is designed for ingestion, not complex transformations. Columns can be omitted implicitly by not selecting them, but this is not treated as a transformation feature in the context of this question. The ON_ERROR option is an error-handling copy option, not a transformation, and ON_ERROR = ABORT_STATEMENT is not supported in a pipe definition (Snowpipe defaults to SKIP_FILE).
SnowPro Architect candidates are expected to recognize that COPY INTO and Snowpipe are ingestion-focused tools. More complex transformations should be handled downstream using streams and tasks, dynamic tables, or transformation frameworks like dbt.
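For reference, a minimal pipe sketch that applies column reordering and type casts during ingestion might look like the following; the stage, table, and column names are illustrative:
-- Reorder and cast staged CSV columns while loading.
CREATE OR REPLACE PIPE orders_pipe AS
COPY INTO orders (order_id, order_date, amount)
FROM (
    SELECT $1::NUMBER,          -- cast the first staged column to the target type
           $3::DATE,            -- reorder: the third staged column feeds the second target column
           $2::NUMBER(10,2)     -- cast and reorder
    FROM @raw_stage
)
FILE_FORMAT = (TYPE = 'CSV');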
=========
QUESTION NO: 15 [Security and Access Management]
A company wants to share selected product and sales tables with global partners. The partners are not Snowflake customers but do have access to AWS.
Requirements:
Data access must be governed.
Each partner should only have access to data from its respective region.
What is the MOST secure and cost-effective solution?
A. Create reader accounts and share custom secure views.
B. Create an outbound share and share custom secure views.
C. Export secure views to each partner’s Amazon S3 bucket.
D. Publish secure views on the Snowflake Marketplace.
Answer: A
When sharing data with partners who are not Snowflake customers, Snowflake reader accounts provide the most secure and cost-effective solution. Reader accounts allow data providers to host and govern access within their own Snowflake environment while allowing consumers to query shared data without owning a Snowflake account (Answer A). This ensures strong governance, centralized billing, and no data movement.
By sharing custom secure views, the company can enforce row-level and column-level security so that each partner only sees data from its authorized region. An outbound share on its own requires the consumer to have a Snowflake account, which is not the case here; with reader accounts, the provider still uses a share, but the consuming account is provisioned and managed by the provider. Exporting data to S3 introduces unnecessary data duplication, security risk, and operational overhead. Snowflake Marketplace is designed for broad distribution, not partner-specific regional restrictions.
For the SnowPro Architect exam, this question highlights best practices in secure data sharing, governance, and cost control when collaborating with external, non-Snowflake partners.
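A minimal sketch of this pattern, with all object names, the region value, and the password placeholder illustrative, might look like the following:
-- 1. Secure view restricting rows to a single partner's region.
CREATE SECURE VIEW sales_db.share_schema.partner_emea_products AS
    SELECT product_id, product_name, sales_amount
    FROM sales_db.core.product_sales
    WHERE region = 'EMEA';
-- 2. Provider-managed reader account for the partner.
CREATE MANAGED ACCOUNT partner_emea_reader
    ADMIN_NAME = 'partner_admin',
    ADMIN_PASSWORD = '<strong-password>',
    TYPE = READER;
-- 3. Share the secure view and grant it to the reader account.
CREATE SHARE partner_emea_share;
GRANT USAGE ON DATABASE sales_db TO SHARE partner_emea_share;
GRANT USAGE ON SCHEMA sales_db.share_schema TO SHARE partner_emea_share;
GRANT SELECT ON VIEW sales_db.share_schema.partner_emea_products TO SHARE partner_emea_share;
ALTER SHARE partner_emea_share ADD ACCOUNTS = myorg.partner_emea_reader;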