| Exam Name: | Google Professional Data Engineer Exam |
| Exam Code: | Professional-Data-Engineer |
| Vendor: | Google |
| Certification: | Google Cloud Certified |
| Questions: | 400 Q&A's |
| Shared By: | krish |
Flowlogistic is rolling out their real-time inventory tracking system. The tracking devices will all send package-tracking messages, which will now go to a single Google Cloud Pub/Sub topic instead of the Apache Kafka cluster. A subscriber application will then process the messages for real-time reporting and store them in Google BigQuery for historical analysis. You want to ensure the package data can be analyzed over time.
Which approach should you take?
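For context on the pipeline this question describes, the subscriber's message-handling step might look like the sketch below: it parses one tracking message and attaches an ingestion timestamp so the resulting BigQuery rows carry the time dimension needed for historical analysis. The JSON payload fields (`device_id`, `package_id`, `status`) and the function name are illustrative assumptions, not part of the question.

```python
import json
from datetime import datetime, timezone

def build_row(message_data: bytes) -> dict:
    """Turn a raw Pub/Sub tracking message into a BigQuery-ready row dict.

    Assumes a hypothetical JSON payload such as:
        {"device_id": "d-1", "package_id": "p-42", "status": "IN_TRANSIT"}
    """
    payload = json.loads(message_data)
    return {
        "device_id": payload["device_id"],
        "package_id": payload["package_id"],
        "status": payload["status"],
        # An explicit ingestion timestamp lets the BigQuery table be
        # time-partitioned, which is what makes analysis over time practical.
        "ingest_ts": datetime.now(timezone.utc).isoformat(),
    }

# Example: process one message the way a subscriber callback would.
msg = b'{"device_id": "d-1", "package_id": "p-42", "status": "IN_TRANSIT"}'
row = build_row(msg)
print(row["package_id"])  # → p-42
```

In a real subscriber, `build_row` would be called inside the Pub/Sub callback and the rows streamed into a timestamp-partitioned BigQuery table.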
Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they cannot move to BigQuery. They are unsure where to store the data that both sets of workloads need to access. What should they do?