Can we calculate the batch size based on the data length (bytes) of a table and the amount of data we can retrieve from an Appian query? I have a requirement to send data from multiple tables to an API.
Example: for one table
I have a table with 235,417 rows and a data length of 18,366,464 bytes. As we know, a query entity can only retrieve about 1 MB (1,000,000 bytes) of data at a time, so I am expecting the batch size to be decided dynamically from the data length (18,366,464) and the amount of data retrieved at a time (1,000,000 bytes), so that the whole data set gets sent in chunks. If I hardcode a value such as 10k or 5k, that may again cause issues for some other table with more rows or larger columns. So my thought is that it is better to decide the batch size dynamically based on the table's data length.
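Roughly what I have in mind is something like this (just a sketch in Appian expression language; cons!MY_ENTITY, ri!batchSize and rule!sendToApi are placeholder names, not real objects in my application):

```
/* Sketch only: cons!MY_ENTITY, ri!batchSize and rule!sendToApi are placeholders. */
a!localVariables(
  local!totalRows: 235417,
  /* how many chunks are needed for the whole table */
  local!batchCount: tointeger(ceiling(local!totalRows / ri!batchSize)),
  a!forEach(
    items: enumerate(local!batchCount),
    expression: a!localVariables(
      local!page: a!queryEntity(
        entity: cons!MY_ENTITY,
        query: a!query(
          pagingInfo: a!pagingInfo(
            startIndex: fv!item * ri!batchSize + 1,
            batchSize: ri!batchSize
          )
        ),
        fetchTotalCount: false
      ),
      /* hand one chunk of rows to the API call */
      rule!sendToApi(rows: local!page.data)
    )
  )
)
```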
That seems a bit over-engineered.
Why not just divide 1 MB by the maximum size of a row, round that down to some reasonable number, and go with that?
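Something like this (just a sketch; the 4,000 bytes maximum row size is a made-up example value, not taken from your table):

```
/* Sketch: 4000 bytes is an assumed maximum row size, not a real value from your table. */
a!localVariables(
  local!maxBytesPerQuery: 1000000,
  local!maxRowBytes: 4000,
  /* rows that safely fit into one query, rounded down */
  local!batchSize: max(1, floor(local!maxBytesPerQuery / local!maxRowBytes)),
  local!batchSize  /* = 250 */
)
```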
I tried this and got an output of 1. I also tried ceiling(18366464/1000000) and got 19. Neither looks like a usable batch size.
Is there any other function you would suggest other than ceiling()?
18,366,464 B = 17,936 KB ≈ 17.5 MB
So you are telling me that you have 17.5 MB of data in each row of your DB?!
That does not seem plausible!
I was asking for the max row size.
OK, got it. I will use Avg_row_length for the calculation instead of Data_length.
As I said, I would go with the max row length to be on the safe side.
Can you please suggest how to find the max row length? I couldn't find it in my output above, only Avg_row_length and Data_length.
Thanks
I would just add up the numbers manually. You need to do that only once, right?
Or, if you have a larger number of bigger text fields and a smaller number of integers, just take the text fields and do some guessing and estimating.
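For example, something along these lines (the per-column sizes here are pure guesses, just to illustrate the idea):

```
/* Sketch: the per-column maximum sizes in bytes are made-up estimates. */
a!localVariables(
  /* e.g. two varchar(255) fields, one large text field, three numeric columns */
  local!maxRowBytes: sum(255, 255, 4000, 8, 8, 8),
  local!batchSize: floor(1000000 / local!maxRowBytes),
  local!batchSize  /* roughly 220 rows per query */
)
```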