Performance in create table #25392
ericfreitas3 asked this question in Q&A
Hi, I'm setting up Trino in production to work together with DBT, Snowflake, and SQL Server. I'm having problems with the CREATE TABLE AS SELECT command (used by DBT depending on the materialization type).
The problem is this: in Snowflake, the CREATE for a table (~16MM) on an X-Small warehouse runs in about 10 seconds, while the same table built through Trino on the same warehouse takes several hours. The planned use of Trino is to extract data from SQL Server and Snowflake and write it back to Snowflake through DBT commands (hence the tests with CREATE TABLE).
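To make the test concrete, what DBT ends up running is essentially a cross-catalog CTAS like the sketch below; the catalog, schema, and table names are placeholders rather than my real ones, and wrapping the same statement in EXPLAIN ANALYZE is how I plan to check whether the time goes into reading the source or writing into Snowflake.

```sql
-- Rough shape of what DBT runs for a table materialization in Trino;
-- catalog, schema, and table names below are placeholders.
CREATE TABLE snowflake.analytics.my_model AS
SELECT *
FROM sqlserver.dbo.source_table;

-- Running the same statement under EXPLAIN ANALYZE (it still executes)
-- should show how much wall time is spent scanning the source versus
-- writing the new table into Snowflake.
EXPLAIN ANALYZE
CREATE TABLE snowflake.analytics.my_model_explain AS
SELECT *
FROM sqlserver.dbo.source_table;
```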
I would like to know whether I can optimize this process through Trino's settings. Regarding cluster size, the current Trino deployment is quite small [4 workers (CPU: 2, memory: 9 GB), 1 coordinator (CPU: 1, memory: 4 GB)], and I'm already planning to resize the cluster to increase the pods. The plan so far is [8 workers (CPU: 8, memory: 16 GB), 1 coordinator (CPU: 4, memory: 8 GB)], but I believe that resizing alone, while necessary, won't be enough. Could you shed some light?
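For context, this is the kind of session-level write tuning I'm considering alongside the resize. I'm assuming the generic Trino JDBC connector write properties also apply to the Snowflake connector, and `snowflake` is simply the catalog name in my setup, so please treat the sketch below as a guess to validate against the docs for the Trino version in use rather than something I've confirmed.

```sql
-- Sketch of session-level settings I intend to try; 'snowflake' is just the
-- catalog name here, and these are the generic JDBC connector properties as
-- I understand them (catalog-level equivalents would be
-- insert.non-transactional-insert.enabled and write.batch-size, I believe).

-- Skip the per-query temporary/staging table on inserts.
SET SESSION snowflake.non_transactional_insert = true;

-- Send larger JDBC batches per round trip (the default is fairly small).
SET SESSION snowflake.write_batch_size = 10000;
```

If these are not the right levers, any pointers to the settings that dominate CTAS throughput for JDBC-based connectors would help.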