<?xml version="1.0" encoding="UTF-8" ?>
<?xml-stylesheet type="text/xsl" href="https://community.appian.com/cfs-file/__key/system/syndication/rss.xsl" media="screen"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:wfw="http://wellformedweb.org/CommentAPI/"><channel><title>Best way to handle large dataset in cloud database</title><link>https://community.appian.com/discussions/f/best-practices/38700/best-way-to-handle-large-dataset-in-cloud-database</link><description>Hello, 
 I am having trouble with the following use case: we receive 3 large Excel files daily (~60 000 rows and 40 columns) that we need to read and write into the Appian Cloud Database. So far, we've been doing it this way: 
 1. Use an ETL to read</description><dc:language>en-US</dc:language><generator>Telligent Community 12</generator><item><title>RE: Best way to handle large dataset in cloud database</title><link>https://community.appian.com/thread/146280?ContentTypeID=1</link><pubDate>Sun, 16 Mar 2025 17:42:17 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:b9ce00ad-67ad-4192-ad03-b7b78fb1c0bb</guid><dc:creator>Mathieu Drouin</dc:creator><description>&lt;p&gt;&amp;nbsp;&lt;a href="/success/w/article/3048/how-to-create-memory-efficient-models"&gt;https://community.appian.com/success/w/article/3048/how-to-create-memory-efficient-models&lt;/a&gt;&amp;nbsp;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use activity class parameters (as opposed to process variables) when possible to limit the process history size.&amp;nbsp;&lt;/li&gt;
&lt;li&gt;Release the system memory used by completed nodes by checking the&amp;nbsp;&lt;a href="https://docs.appian.com/suite/help/latest/Other_Tab.html#execution-options"&gt;Delete previously completed/cancelled instances&lt;/a&gt;&amp;nbsp;setting.&lt;/li&gt;
&lt;li&gt;If operating on large data sets frequently, for an extended period of time, or across many steps, configure the process variable as&amp;nbsp;&lt;code&gt;Hidden&lt;/code&gt;. This way, frequent changes to these variables are not reflected in process history, which reduces the process instance&amp;#39;s memory footprint.&lt;/li&gt;
&lt;/ul&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item><item><title>RE: Best way to handle large dataset in cloud database</title><link>https://community.appian.com/thread/146264?ContentTypeID=1</link><pubDate>Fri, 14 Mar 2025 19:46:41 GMT</pubDate><guid isPermaLink="false">d3a83456-d57b-489c-a84c-4e8267bb592a:c82dd129-14d1-493d-b117-4ff4d98458c9</guid><dc:creator>Stefan Helzle</dc:creator><description>[quote userid="60156" url="~/discussions/f/best-practices/38700/best-way-to-handle-large-dataset-in-cloud-database"]I am wondering what would be the best way to deal with this issue according to best practices. I&amp;#39;ve thought about using&amp;nbsp;a!writeToMultipleDataStoreEntities directly in our API instead of triggering a process, to avoid having too many processes running, but I&amp;#39;m not sure if this would really be more efficient memory-wise?[/quote]
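&lt;p&gt;A minimal sketch of the direct-write approach described in the quote above (the entity constant and the local variable holding the parsed rows are placeholder names, not from this thread):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;a!writeToMultipleDataStoreEntities(
  valuesToStore: {
    a!entityData(
      entity: cons!MY_DSE_ENTITY,  /* placeholder data store entity constant */
      data: local!parsedRows       /* rows parsed from the incoming Excel file */
    )
  },
  onSuccess: a!httpResponse(
    statusCode: 200,
    body: "Rows stored"
  ),
  onError: a!httpResponse(
    statusCode: 500,
    body: "Write failed"
  )
)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Used inside a Web API expression, the &lt;code&gt;onSuccess&lt;/code&gt; / &lt;code&gt;onError&lt;/code&gt; responses become the HTTP response directly, so no process instance (and no process history) is created for the write.&lt;/p&gt;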
&lt;p&gt;Yes, do this. It will dramatically reduce the overhead.&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item></channel></rss>