Database query - slow the first couple of times.

Hi All,

We have a field on the interface; when the user enters a value and tabs out, we query a table to get information and load the rest of the page.

The issue is that when a user logs in for the first time and enters the value, the query takes a lot of time (7-8 seconds to load, sometimes 10+ seconds, and the screen errors out).

But on subsequent attempts (2nd or 3rd time), the query becomes very fast (100-200 ms).

The logs show that the execute time is what takes the maximum time.

Operation Type Operation Name Operation Detail Prepare Time (ms) Execute Time (ms) Transform Time (ms)
QUERY queryentity expression "[FAILED] Query[Selection[sl as sl (show), price as price (show), quantity as quantity (show), enter_date as enter_date (show), plant as plant (show), matnr as matnr (show), knumh as knumh (show)], criteria[(matnr = TypedValue[it=3,v=713-312126-002])], PagingInfo[startIndex=0, batchSize=-1, sort=[Sort[sl asc]]]]" 0 10013 0
QUERY queryentity expression "[FAILED] Query[Selection[sl as sl (show), price as price (show), quantity as quantity (show), enter_date as enter_date (show), plant as plant (show), matnr as matnr (show), knumh as knumh (show)], criteria[(matnr = TypedValue[it=3,v=713-335743-002])], PagingInfo[startIndex=0, batchSize=-1, sort=[Sort[sl asc]]]]" 0 10016 0

The table has 1.5 million rows and the column we are searching on is indexed, so we believe the table is properly optimized.

What could be the possible reasons for it being really slow the first couple of times (1-2 attempts)?


  • Hi Siddharth

    Having an index on a table is in itself no guarantee that the database engine is using it! Have you asked the DBA to perform an EXPLAIN on that table? If the database's internal statistics don't reflect the actual data in the table (i.e. if the database thinks there are only 100 rows rather than 1.5 million), then it will perform a full table scan and ignore the index, because it believes that is the most efficient way of retrieving the data. The DBA should be performing a RUNSTATS (that's the DB2 command; use whatever the equivalent is for your database platform) so that the statistics tables get updated. Even if they are up to date, the EXPLAIN should still help you determine what the data-retrieval plan is and whether other indexes need to be applied.
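    As a quick illustration of the point above (a sketch using Python's built-in sqlite3 as a stand-in database; the table and column names are made up, and sqlite's ANALYZE plays the role of DB2's RUNSTATS), the plan output, not the mere existence of the index, tells you what the engine will actually do:

```python
# Sketch: EXPLAIN shows whether the engine scans the table or uses the index.
# sqlite3 stands in for the real database; all names here are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prices (matnr TEXT, sl TEXT, price REAL)")
con.executemany(
    "INSERT INTO prices VALUES (?, ?, ?)",
    [(f"713-{i:06d}", "A", 1.0) for i in range(10_000)],
)

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is a human-readable step
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT * FROM prices WHERE matnr = '713-000001'"
print(plan(q))          # no index yet: a full table SCAN

con.execute("CREATE INDEX idx_matnr ON prices (matnr)")
con.execute("ANALYZE")  # refresh optimizer statistics (DB2 equivalent: RUNSTATS)
print(plan(q))          # now: SEARCH ... USING INDEX idx_matnr
```

    The same EXPLAIN-then-fix loop applies whatever the engine is; the plan is the ground truth.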

  • Hi Stewart, thanks for responding.

    My DBA is looking into the indexes to see if the engine is using them.

    I had two follow up questions:

    1. Appian errors out after 10 seconds (the Cloud default limit) if it does not get a response to a query. What happens to the query at the DB end after those 10 seconds: does it still run to completion, or is it aborted?

    2. We are comparing the Appian logs with the DB trace to see where the delay is. For some queries, the DB execution takes most of the time; for others, we see a significant difference between the two. For example, the Appian log shows an execute time of 9 seconds for a query, but the DB trace shows 6.5 seconds for the same query. Is the 3-second difference network latency?

  • A Score Level 2
    in reply to siddharths

    We see that you are using batchSize = -1, which pulls all of the rows. We suggest using a limited batch size. If that does not help, you could also think about an alternative such as a materialized view (MV).
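    To make the batching suggestion concrete (a sketch in Python with sqlite3 standing in for the real database; table and column names are hypothetical), paging through the result set replaces the single "give me everything" call that batchSize = -1 produces:

```python
# Sketch: fetch matching rows in fixed-size pages instead of all at once.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prices (sl TEXT, matnr TEXT)")
con.executemany(
    "INSERT INTO prices VALUES (?, ?)",
    [(f"sl{i:04d}", "713-312126-002") for i in range(250)],
)

def fetch_in_batches(matnr, batch_size=100):
    # Analogous to looping a!pagingInfo(startIndex, batchSize) in Appian
    start = 0
    while True:
        rows = con.execute(
            "SELECT sl FROM prices WHERE matnr = ? ORDER BY sl LIMIT ? OFFSET ?",
            (matnr, batch_size, start),
        ).fetchall()
        if not rows:
            break
        yield rows
        start += batch_size

sizes = [len(b) for b in fetch_in_batches("713-312126-002")]
print(sizes)  # 250 matching rows in pages of 100 -> [100, 100, 50]
```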

  • Certified Lead Developer
    in reply to Stewart Burchell

    This could be as simple as the sort you're applying... Are you actively applying a sort within the query entity?

  • No, just a!pagingInfo(1, -1).

    The logs seem to show that the query is sorting on a specific column:

    PagingInfo[startIndex=0, batchSize=-1, sort=[Sort[sl asc]]]

  • Certified Lead Developer
    in reply to siddharths

    I'm guessing that's the primary key? Is the table already sorted in a particular order? Not sure it'll make a difference but might be worth checking. Also can you run the equivalent query directly on the database?

  • That's not the primary key.

    Actually, the table does not have a primary key on a single column; it has a composite primary key on 3 columns. I made one of those 3 columns the primary key when I mapped the CDT to the table.

  • Certified Lead Developer
    in reply to siddharths

    Well that will be a separate problem then - you may have missing values when you retrieve from this table, as Appian requires a single unique primary key.

    If you run the query equivalent to the one appian is running directly on the database, what sort of performance do you get?
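    To illustrate the missing-values risk (a toy sketch, not Appian's internals; the data is made up), keying rows by only one column of a composite key silently collapses rows, whereas the full composite key keeps them all:

```python
# Sketch: a non-unique "primary key" makes later rows overwrite earlier ones.
rows = [
    {"matnr": "713-312126-002", "plant": "P1", "sl": "A", "price": 10.0},
    {"matnr": "713-312126-002", "plant": "P2", "sl": "A", "price": 12.0},
    {"matnr": "713-312126-002", "plant": "P1", "sl": "B", "price": 11.0},
]

# Keying on a single column of the 3-column composite key loses rows:
by_single = {r["matnr"]: r for r in rows}
print(len(by_single))     # 1 -- two of the three rows are gone

# Keying on the full composite key keeps every row distinct:
by_composite = {(r["matnr"], r["plant"], r["sl"]): r for r in rows}
print(len(by_composite))  # 3
```

    One common workaround (not discussed in this thread) is a surrogate key: add an auto-increment identity column to the table and map that as the CDT's primary key.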

  • We are not able to recreate the slowness directly in the database; there it's pretty quick (200 ms). Even in Appian it happens only the first time (10+ seconds): the screen errors out, and if we try again it's pretty quick.

    We are comparing the Appian logs against the DB trace and are in the process of getting more data points to come to a better conclusion.

    So far we have only 2 data points:

    Appian log shows 6.5 seconds execute time, DB trace shows 6.4 seconds.

    Appian log shows 9.5 seconds execute time, DB trace shows 6.5 seconds (not sure where the 3-second delay is).

  • A Score Level 1
    in reply to siddharths

    Hey there Siddharth -

    I've been dealing with similarly sized tables on a recent project. I can't give you much direction on the database side, beyond potentially seeing whether there's any way to implement a custom view, but with regard to your Appian build there's one particular item I came across recently that saved me a lot of time and headache. I would evaluate whether you could benefit from using a!queryAggregation() in your query, in order to query only distinct rows and pull only the columns/fields you need. That could shave a second or two off your query time, which could make all the difference. Are you loading the data into a local variable? That might explain why your initial query is the only slow one and subsequent fetches are considerably faster.

    Take a look at https://docs.appian.com/suite/help/19.1/Query_Recipes.html#overview to see if there's any way your Appian query could be optimized. If your DB is optimized, it could be worth reviewing the Appian query.
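    On the local-variable point, here's a rough sketch of the caching effect (Python, with a made-up function standing in for the expensive query; this is an analogy, not Appian's actual mechanism):

```python
# Sketch: cache the result of an expensive lookup so only the first call pays.
from functools import lru_cache

db_calls = []

@lru_cache(maxsize=None)
def query_prices(matnr):
    db_calls.append(matnr)  # stands in for the slow database round trip
    return (("A", 10.0), ("B", 11.0))

query_prices("713-312126-002")  # first call hits the "database"
query_prices("713-312126-002")  # repeat is served from the cache
print(len(db_calls))            # 1 -- the expensive query ran only once
```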