How to avoid MNI

Hi All,

Is it best practice to use MNIs in Appian? If we need to avoid them, could you suggest some ways to do so?

Thanks,

Bhargavi


Parents
  • I would also add that a lot of things which used to require MNI to handle in-process can now be handled with a!forEach calls inside a script task. Not everything of course, but smartly designing this sort of thing can vastly cut down on MNI calls compared to what we had to do back in the day.
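    For example (a rough sketch only - pv!orders and its quantity/unitPrice fields are hypothetical names, not something from this thread): instead of MNI-ing a script task once per record, a single script task output expression can process the whole array in one pass:

      /* Single script task output expression replacing an MNI'd script task:
         returns one computed line total per item in the (hypothetical) pv!orders array */
      a!forEach(
        items: pv!orders,
        expression: fv!item.quantity * fv!item.unitPrice
      )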
  • I am generating a PDF file for each record in an array of data records, using MNI on the "Generate PDF from Template" smart service to iterate through the array. There may be 1 to 10 records in the array at any time when it is invoked, and I am setting the instances to run one at a time. I don't see any performance issues; I am using MNI only because it is convenient. Is this a bad idea from a design perspective? Should I replace it with an XOR gateway?

  • No, I wouldn't go with an XOR loop - this is a reasonable case for MNI usage, as long as you don't end up with problems storing your generated documents into a PV.

  • Yeah, keep in mind that an XOR loop is actually worse than using MNI. Part of the reason MNI is often not recommended is that it requires Appian to store execution information for every time the node executes. If a node ends up running 500 times, that can be quite a bit of data. However, an XOR loop is worse because now you have to run your node 500 times AND run the XOR gateway 500 times. So the general recommendation is: try a!forEach() first; if you can't use that, use MNI; and use an XOR looping flow as a last resort.

  • Certified Lead Developer
    in reply to Mike Schmitt

    In general, MNI isn't a BAD practice or something to AVOID or anything like that.

    It's simply SLOW and limited to 1,000 iterations.

    Looping can be a good alternative at times, if it makes more sense for your process model.  In general I would use looping when I expect several operations, not just one, to be done on each item in sequence.  For example, if I need to run script task A, then script task B, then write-to-database node C, and then run all three on the next record, and the next after that, and it's impossible to do those steps out of order and get the correct result, I would loop.  Looping also has the same limitation of 1,000 iterations, and it's just as SLOW, if not a little slower.

    a!forEach has 2 advantages.  It's FAST, maybe even several hundred times as fast, and it's limited to 1,000,000 iterations instead of 1,000.  It has 3 severe limitations, though.  First, it can only evaluate expressions.  If Appian hasn't ported the smart service node you need to an expression function yet, you can't a!forEach a solution.  Second, you have to code all the logic; you can't model it.  If the logic needs a looping or branching structure, one that you could easily model in the process modeler but can't wrap your head around writing as a rule, it's going to be painful to build an a!forEach solution.  Third, you can't a!forEach a subprocess; a!startProcess is restricted from working inside any looping function.
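    To make the first and third limitations concrete (rule!FormatInvoiceLine and pv!invoiceLines below are made-up examples, not anything from this thread):

      /* Works: an expression rule can be evaluated once per item */
      a!forEach(
        items: pv!invoiceLines,
        expression: rule!FormatInvoiceLine(line: fv!item)
      )

      /* Doesn't work: there is no expression equivalent for a smart service
         node like "Generate PDF from Template", and a!startProcess() is not
         allowed inside looping functions, so those cases still need MNI
         (or a looping flow) in the process model. */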

    If you've got an MNI working and the performance isn't a problem, congratulations!  Don't change anything.

    If you've got an MNI working and the performance is a problem, try to determine whether it would be worth taking a crack at replacing it with an a!forEach, or just optimizing the MNI or loop.  If both seem about equally feasible, go with the a!forEach.

Children
  • Mike, Pete, David - thank you all for the detailed responses!  I will stick with the MNI for now.  While the array of records may hold 10 to 12 items at most (and as few as 1), the process itself may be invoked by about 100 users every day on average, depending on the business that day - hopefully this is not an issue!  The number of users may grow to 200 someday, but not in the near future.

    Dave, the Generate PDF from Template smart service is not available as an expression rule, and I am glad that it isn't :)

    Mike, I am not sure what you meant about storing the documents in a PV causing an issue, because that's exactly what I am doing now -

    1.  Generating a document for each record in the records array.

    2.  Appending each generated doc to a document array which I am using in the next node (no MNI here) to merge the documents and create one master document which the user can download.  

    3.  I am planning on adding another node to delete the documents in the document array since they are not needed anymore!  I will probably try using a!forEach for this (rough sketch of the idea below).
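    (A rough sketch of where a!forEach could help with that cleanup, with pv!generatedDocs and pv!masterDoc as made-up PV names: the delete itself would still be a Delete Document step in the process model, but a script task output expression can build the exact list of temporary documents to remove, e.g. filtering out the merged master if it ever ends up in the same array.)

      /* Script task output: every generated temp document except the merged
         master, ready to feed into the delete step */
      reject(
        fn!isnull,
        a!forEach(
          items: pv!generatedDocs,
          expression: if(fv!item = pv!masterDoc, null, fv!item)
        )
      )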

    Pete, I specifically wanted to know whether spawning multiple node instances (within the same execution engine, I hear, though I'm not sure of that!) is going to be OK performance-wise, as opposed to executing the same node multiple times.  Hope it doesn't matter?

  • Just to be clear - spawning multiple node instances is the same as executing the same node multiple times. MNI just runs the same node based on different parameters you provide for each iteration.

    That being said, I don't foresee performance issues based on your scenario. It's a fairly low number of iterations, and you don't really have another option when generating documents.

    As you mentioned, using MNI does spawn all instances on the same execution engine. This can be an issue with a large number of instances (especially if your MNI is run on a sub-process node) because it can create a load imbalance between multiple execution engines. However, I don't think you need to worry about this either, because again it is a small number of instances being generated.

  • Certified Lead Developer
    in reply to karthip0001

    Unfortunately, I think all 3 options are going to leave you stuck running the entire sequence on the same execution/analytics engine pair (the term I hear is "shard").  The only way to get different instances load-balanced onto different shards is a!startProcess or the Start Process smart service, which also have problems returning your PVs to the process that started them, so no dice there.

    You're also dealing with documents, which hits your Collaboration / Content engine mostly, and you only get one of those.  There's no real way I can think of to do this better than what you've suggested. 

    You're looking at about 2,400 documents a day on the high end, assuming you meant the process might get run a total of 200 times a day, or roughly 875,000 documents a year if they pull some weekend shifts.  So after about 20 years you'll hit the roughly 16 million object mark, which is where we started to see the slightest degradation of performance - maybe 10 years with all the other objects your system makes.  Is 10 years without slowdown good enough?
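    (Rough arithmetic behind those figures, using the upper bounds mentioned earlier in the thread - about 200 runs a day and roughly 12 documents per run:)

      200 runs/day x 12 docs/run          = 2,400 docs/day
      2,400 docs/day x 365 days           = ~876,000 docs/year (the ~875,000 above)
      16,000,000 docs / ~876,000 per year = ~18 years of headroom, or closer to 10 counting everything else the system creates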