Hi,
We are facing the error below when executing an expression rule that fetches data from a database table:
An error occurred while evaluating expression: listofIDtodelete:rule!abc(cons!xyz) (Expression evaluation error in rule 'abc' at function a!queryEntity [line 5]: An error occurred while retrieving the data. Details: Memory threshold reached during output conversion (rule: [queryentity expression], type: [DataReportDT7042], threshold: [1,048,576 bytes], objects successfully converted: [131,071])) (Data Outputs)
Please help us resolve this issue.
Is this part of a larger DB cleanup process? Will it run periodically? Why is there so much to clean up? Could it be prevented?
Is the query to fetch the IDs a simple one, like "select id from table where tobedeleted=1"? If yes, did you consider using the Query Database node with a statement like "delete from table where tobedeleted=1"? There are some drawbacks to that approach, but it might be worth it.
Hi Stefan,
Please find the expression rule we are using to fetch the data from the database table:
a!entityDataIdentifiers(
  entity: ri!Entity,
  identifiers: {
    if(
      a!queryEntity(
        entity: ri!Entity,
        query: a!query(
          selection: a!querySelection(
            columns: { a!queryColumn(field: "id") }
          ),
          pagingInfo: a!pagingInfo(startIndex: 1, batchSize: -1)
        ),
        fetchTotalCount: true
      ).totalCount = 0,
      "",
      a!queryEntity(
        entity: ri!Entity,
        query: a!query(
          selection: a!querySelection(
            columns: { a!queryColumn(field: "id") }
          ),
          pagingInfo: a!pagingInfo(startIndex: 1, batchSize: -1)
        ),
        fetchTotalCount: false
      ).data.id
    )
  }
)
Please let me know how I can optimize it, as it fetches all the IDs that need to be deleted.
Hi Stewart,
We are fetching a single column (ID) from one database table, so would creating a view help in this case?
First, I strongly suggest not running that query twice inside the if() statement. Create a local variable and store the result there.
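As an illustration, a sketch of that refactor (assuming the same ri!Entity rule input as in your rule; untested):

```
a!localVariables(
  /* Run the query exactly once and keep the whole datasubset */
  local!result: a!queryEntity(
    entity: ri!Entity,
    query: a!query(
      selection: a!querySelection(
        columns: { a!queryColumn(field: "id") }
      ),
      pagingInfo: a!pagingInfo(startIndex: 1, batchSize: -1)
    ),
    fetchTotalCount: true
  ),
  a!entityDataIdentifiers(
    entity: ri!Entity,
    identifiers: if(
      local!result.totalCount = 0,
      {},
      local!result.data.id
    )
  )
)
```

This halves the database work, though it still pulls every ID into memory, so it does not fix the threshold error on its own.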
Next: it seems you want to delete all rows in that table. MariaDB has a special command for this, called TRUNCATE. I suggest writing a small stored procedure and using that.
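A minimal sketch of such a stored procedure for MariaDB (the procedure and table names here are placeholders, not from your application):

```sql
-- Placeholder names: adjust for your schema.
DELIMITER //
CREATE PROCEDURE sp_empty_my_table()
BEGIN
  -- TRUNCATE removes all rows in one fast operation,
  -- without logging individual row deletions
  TRUNCATE TABLE my_table;
END //
DELIMITER ;
```

You can then call it from your process model with the Execute Stored Procedure smart service.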
Hi Stefan,
How can I store the single column of the data store entity in a local variable?
https://docs.appian.com/suite/help/21.2/Local_Variables.html
academy.appian.com/
But, again, this will not solve your root problem. Trying to delete hundreds of thousands of rows in the DB with that method is not a good idea.
In addition to what Stefan said: in your first query, there is no need to pull back all rows ("batchSize: -1") when you're just checking that the total count is zero. Instead, set your batchSize to zero, which makes the query return metadata (including totalCount) without returning any rows of data.
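A sketch of that metadata-only check (assuming the same ri!Entity input as above; untested):

```
a!queryEntity(
  entity: ri!Entity,
  query: a!query(
    /* batchSize: 0 returns no data rows, only metadata
       such as totalCount */
    pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 0)
  ),
  fetchTotalCount: true
).totalCount
```

Comparing this value to zero gives you the same emptiness check without ever loading the IDs into memory.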
I agree with Stefan. You're making this about two hundred thousand times harder than it needs to be. You're gathering 100,000 IDs, storing them in a PV, then running 100 subprocesses that each handle a Delete from Data Store Entity node with MNI set to 1,000 instances. Or perhaps trying a forEach with a delete-from-datastore rule that gets cloned up to a million times. For what? Since you're not filtering, the end result will be an empty table.
Just use the TRUNCATE command. One step: no Appian looping, no PV with a mountain of data eating your RAM. Just a few seconds and an empty table. Get a DBA to TRUNCATE the table if you want it to have 0 rows, or use "Execute Stored Procedure" where the stored procedure in question has only one line of code: TRUNCATE TABLE <name>;
Good point Mike, but we're in the realm of optimizing code before removing it.
Overall I agree with you guys about how best to truncate a table (assuming they want their PK IDs to start back at zero, unless I'm mistaken about what TRUNCATE does). But this inefficient querying is something I see across all sorts of use cases, so I always try to point it out anyway ;-)
Agreed with the solutions suggested for truncating a table, but we still haven't gotten to the fundamental problem: why do you need to delete all of this data? Is this a temporary table that you're using for ETL purposes? How is the data getting populated in this table?
I'm concerned that this is an XY problem where we're trying to solve the wrong problem.