Given a list of files and their respective sizes, I need to iterate over the list, grouping files until a maximum total size is reached (for example 10MB), then start the same process again from the next file, continuing until the end of the list.
Can you describe in more detail what you want to achieve?
Sure. I have a list of generated documents that need to be zipped and sent via email. The mail server will, I expect, have its own size limit. Let's say that limit is 10MB to be safe: I should be able to zip (let's assume 0% compression) groups of up to 10MB, producing a series of archives that can each be sent in a single email. I looked for zip plugins that directly support file splitting, but I couldn't find any (and in that case the loss of just one of the parts would generally make extraction impossible anyway).
Something like this? You will have to adapt this to your specific use case.
a!localVariables(
  /* The zip group currently being filled is the last entry in ri!map.zips */
  local!lastZip: ri!map.zips[count(ri!map.zips)],
  if(
    /* Adding this file would exceed the size limit
       (adjust 1000000 to your actual limit, e.g. 10MB) */
    local!lastZip.size + ri!file.size > 1000000,
    /* Start a new zip group containing just this file */
    a!update(
      ri!map,
      "zips",
      append(
        ri!map.zips,
        a!map(files: {ri!file.name}, size: ri!file.size)
      )
    ),
    /* Otherwise add the file to the current group and update its running size */
    a!update(
      ri!map,
      "zips",
      a!update(
        ri!map.zips,
        count(ri!map.zips),
        a!map(
          files: append(local!lastZip.files, ri!file.name),
          size: local!lastZip.size + ri!file.size
        )
      )
    )
  )
)
a!localVariables(
  /* Generate a random number of test files with random sizes */
  local!files: a!forEach(
    items: enumerate(tointeger(rand() * 100)),
    expression: a!map(name: char(64 + fv!index), size: rand() * 100000)
  ),
  /* Feed each file through the helper, starting from a single empty zip group */
  reduce(
    rule!SSH_ZipBuilderHelper(map: _, file: _),
    a!map(
      zips: {
        a!map(files: {}, size: 0)
      }
    ),
    local!files
  )
)
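To explain the design choice: reduce() calls the helper once per file, passing in the accumulated map (with its zips array) and the current file, and whatever the helper returns becomes the accumulator for the next file. At the end you get a map whose zips field holds one entry per archive, each listing the file names it should contain and their running total size.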
Thank you very much indeed. I have the "vague impression" that I need to better understand the power of reduce(), because it seems to be the perfect solution to slot into the development of the rest of the process. Thank you very much!
You are welcome :-)
Find more here: https://appian.rocks/2022/08/29/complex-algorithms-in-appian/
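In case it helps to see reduce() in isolation first, here is a minimal sketch of the same accumulator pattern with a plain running total (the sample sizes are made up for illustration, and this assumes the built-in sum can be passed as a function reference via fn!):

a!localVariables(
  local!sizes: {3, 1, 4, 1, 5},
  /* reduce(function, initial, list) calls the function with
     (accumulator, item) for each item and feeds the result back in:
     sum(0, 3) -> 3, sum(3, 1) -> 4, ... -> 14 */
  reduce(fn!sum, 0, local!sizes)
)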