Filter large data set for use in process model

I need help creating a process model to evaluate and batch-update the status of a large group of users (30,000+ records) based on detailed criteria. The workflow logic is as follows, with any identifying info removed; the expression sketches under the sections below use placeholder field and rule-input names for the same reason:

User Status Evaluation Process

Initial Conditions

  1. If either of the following conditions is true:

    • There is an entry in a related table where the status is not equal to "denied", OR
    • A specific approval flag is set to true.

    Then: No further action is needed for this record — it can be skipped.
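For that first check, this is roughly the per-record expression I have in mind. It is only a sketch: ri!relatedEntryStatus and ri!approvalFlag are placeholder rule inputs standing in for the anonymized fields.

  /* True when the record can be skipped: a related entry exists and is not
     "denied", or the approval flag is already set. */
  or(
    and(
      not(isnull(ri!relatedEntryStatus)),
      ri!relatedEntryStatus <> "denied"
    ),
    ri!approvalFlag = true
  )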


Handling Rejected Entries

  1. If an entry exists in the related table but its status is equal to "denied":
    • Check if there is another flag indicating confirmation of a related process or action.

No Entry and No Approval

  1. If no entry exists in the related table and the specific approval flag is false:
    • First, check if a secondary task or decision exists with a status of "approved".
    • If no approved secondary task or decision exists, check for the confirmation flag mentioned above.

Actions for Confirmed Cases

  1. If the confirmation flag is set to true:
    • If the user has active records for a particular year (e.g., 2024):
      • No action needed — the record can be skipped.
    • If the user does not have active records for the specific year:
      • Update a related field or status to "Unused."
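Taken together with the previous section (no related entry and the approval flag false), I picture the per-record outcome looking roughly like the sketch below. Again, the ri! inputs are placeholders, and I have assumed that an approved secondary task means no action is needed, which the rules above do not state explicitly.

  /* Outcome for a record with no related entry and no approval flag. */
  if(
    ri!secondaryTaskStatus = "approved",
    "SKIP",          /* assumption: an approved secondary task means no action */
    if(
      and(
        ri!confirmationFlag = true,
        contains(ri!activeYears, ri!targetYear)
      ),
      "SKIP",        /* confirmed and active in the target year, e.g. 2024 */
      "MARK_UNUSED"  /* confirmed but inactive, or no confirmation (last rule below) */
    )
  )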

No Confirmation or Denied Task/Decision

  1. If all the following conditions are true:

    • No entry exists in the related table,
    • The specific approval flag is false, AND
    • The confirmation flag is not set, or the related task/decision was denied:

    Then: Update a related field or status to "Unused."
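Assuming all of the checks above get wrapped into a single expression rule that returns "SKIP" or "MARK_UNUSED" for one record (rule!UTIL_decideUserStatus is a hypothetical name, as is the id field), collecting the IDs to update from one queried batch could look like the sketch below, with the result feeding something like a Write to Data Store Entity node in the process model.

  /* ri!batch is a placeholder rule input holding one page of user records.
     Returns the IDs whose decision came back as "MARK_UNUSED". */
  a!localVariables(
    local!decisions: a!forEach(
      items: ri!batch,
      expression: rule!UTIL_decideUserStatus(userRecord: fv!item)
    ),
    index(
      ri!batch.id,
      wherecontains("MARK_UNUSED", local!decisions),
      {}
    )
  )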


The goal is to create a process model that evaluates these conditions for each record and performs the appropriate updates. Any guidance on how to structure this process model or handle complex filtering conditions efficiently would be greatly appreciated!

Initially, I thought of creating an expression rule that performs the necessary filtering using a query with nested logical expressions. However, given the size of the dataset (30,000+ records), I am concerned this approach may be inefficient. I'm wondering if there is a more effective or scalable way to handle the filtering and integrate it into the process model.
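For reference, this is the shape of the nested query I was considering: push the first skip condition into the database so those records never come back, and page through the rest in fixed-size batches. The constant, field names, and ri!startIndex are placeholders, and it assumes the related entry's status is exposed on the same entity or on a database view.

  a!queryEntity(
    entity: cons!USER_STATUS_DSE,        /* placeholder constant */
    query: a!query(
      logicalExpression: a!queryLogicalExpression(
        operator: "AND",
        logicalExpressions: {
          /* approval flag not already set */
          a!queryLogicalExpression(
            operator: "OR",
            filters: {
              a!queryFilter(field: "approvalFlag", operator: "is null"),
              a!queryFilter(field: "approvalFlag", operator: "=", value: false)
            }
          ),
          /* no related entry, or the related entry was denied */
          a!queryLogicalExpression(
            operator: "OR",
            filters: {
              a!queryFilter(field: "relatedEntryStatus", operator: "is null"),
              a!queryFilter(field: "relatedEntryStatus", operator: "=", value: "denied")
            }
          )
        }
      ),
      pagingInfo: a!pagingInfo(
        startIndex: ri!startIndex,       /* advanced by a loop in the process model */
        batchSize: 1000
      )
    ),
    fetchTotalCount: true
  )

Even with the filtering pushed to the database, I am not sure whether looping through roughly 30 batches like this inside a process model is a sound pattern, or whether there is a better one.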
