Populate CDT with values of Array

Hi,

 

I have a process variable of type CDT (array); the CDT has these fields: {Title1=, Title2=, Title3=}.

I also have an array with the values {3,4,7,q,a,1,3,9,0}.

I want the PV populated like this: {{Title1=3,Title2=4,Title3=7},{Title1=q,Title2=a,Title3=1},{Title1=3,Title2=9,Title3=0}}

Do you have any suggestions?

It has to be a flexible solution, because it will be used with different CDTs in the process.

 

Thanks


  • The code below will give you the expected output for the provided input, but the part about making it generic enough to work with any CDT is a lot trickier. Maybe you can define a rule like the example here for each of the CDTs you currently know about, and then create a wrapper rule that acts as a dispatcher (a rough sketch of that follows the code).

    = load(
      /* ri!data is List of Text String, ri!fieldCount is Integer */
      local!count: count(ri!data) / ri!fieldCount,

      a!forEach(
        enumerate(local!count),
        {
          Title1: index(ri!data, 1 + ri!fieldCount * fv!item, null),
          Title2: index(ri!data, 2 + ri!fieldCount * fv!item, null),
          Title3: index(ri!data, 3 + ri!fieldCount * fv!item, null)
        }
      )
    )
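
    A rough sketch of that dispatcher idea (the rule names, CDT names, and field counts below are hypothetical - it assumes one population rule exists per known CDT):

    = if(
      ri!cdtName = "ParseTest",
      rule!populateParseTest(ri!data, 3),
      if(
        ri!cdtName = "OtherCdt",
        rule!populateOtherCdt(ri!data, 4),
        null()
      )
    )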
  • Certified Lead Developer
    in reply to Carlos Santander

    One thought I'd had: if the CDT fields are known, maybe we could pass them in as an array too.

    It can be tricky to set dictionary field names dynamically, but one trick I've seen used before is to manually construct a JSON string and then back-translate that into actual dictionary data.  I've had moderate success so far with this approach (note the rule inputs here are "fields" and "data", both text strings):

    with(
      local!numFields: length(ri!fields),

      /* for the count we need to do "ceiling" on the division result or else we
      risk truncating any remainders in the data array.  doing it this way, the
      data array doesn't need to be an even multiple of the number of fields. */
      local!count: ceiling(count(ri!data) / local!numFields),

      local!jsonString: "[" &
        joinarray(
          a!forEach(
            enumerate(local!count),
            with(
              /* I assume this is the only way we can access the fv!item value from
              inside the lower nested a!forEach call - pretty painless at the end of
              the day, but it took me a while to figure out that I should try this */
              local!cdtCount: fv!item,
              "{" &
              joinarray(
                a!forEach(
                  ri!fields,
                  """" & fv!item & """:""" &
                  index(
                    ri!data,
                    local!cdtCount * local!numFields + fv!index,
                    null()
                  )
                  & """"
                ),
                ","
              )
              & "}"
            )
          ),
          ","
        )
        & "]",

      a!fromJson(
        local!jsonString
      )
    )

    Side note 1: a!forEach really is a savior.
    Side note 2: you really want to use with() instead of load() in this rule, at least if there's even the slightest chance it will be called from a SAIL form or anything, because load() will cause weird side effects that can catch you off guard.

  • A Score Level 1
    in reply to Mike Schmitt
    To get the fields of a CDT dynamically, this code snippet should work (a usage example follows below):
    apply(
      xpathsnippet(
        _,
        "local-name(/*)"
      ),
      xpathsnippet(
        toxml(
          ri!cdt
        ),
        "/*/*"
      )
    )
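
    If the expression above is saved as its own expression rule, its output could be fed into the "fields" input of the JSON-building rule posted earlier - for example (both rule names here, getCdtFieldNames and buildCdtFromArray, are hypothetical):

    rule!buildCdtFromArray(
      fields: rule!getCdtFieldNames(ri!cdt),
      data: ri!data
    )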
  • Mike,
    Thanks very much for the help. Since I am using version 7.10, I don't have the a!forEach function. Any idea how I can get the same result with apply() or doforeach?

    Thanks
  • Just create a new rule with the body of the a!forEach function, and pass that to apply(). For example, something like the sketch below.
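
    A minimal sketch of that idea, assuming a helper expression rule (named here buildCdtEntry, with rule inputs data, fieldCount, and item) holds what used to be the a!forEach body, using ri!item in place of fv!item:

    = load(
      /* ri!data is List of Text String, ri!fieldCount is Integer */
      local!count: count(ri!data) / ri!fieldCount,

      apply(
        rule!buildCdtEntry(ri!data, ri!fieldCount, _),
        enumerate(local!count)
      )
    )

    /* hypothetical body of rule!buildCdtEntry */
    {
      Title1: index(ri!data, 1 + ri!fieldCount * ri!item, null),
      Title2: index(ri!data, 2 + ri!fieldCount * ri!item, null),
      Title3: index(ri!data, 3 + ri!fieldCount * ri!item, null)
    }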

  • Certified Lead Developer
    in reply to Sachin
    Good point - though do you happen to know how well this would work if an empty CDT is passed in? I'd be afraid (though I'm not sure) that it might only pick up fields that are currently populated.
  • Right now I am using this solution:

    = load(
      /* ri!data is List of Text String, ri!fieldCount is Integer */
      local!count: count(ri!data) / ri!fieldCount,

      apply(
        'type!{urn:com:appian:types}ParseTest'(_, _, _),
        merge(
          index(ri!data, 1 + ri!fieldCount * enumerate(local!count), null),
          index(ri!data, 2 + ri!fieldCount * enumerate(local!count), null),
          index(ri!data, 3 + ri!fieldCount * enumerate(local!count), null)
        )
      )
    )

    But now I have asked another question on the forum: how to replicate the index(ri!data, 1 + ri!fieldCount * enumerate(local!count), null) call depending on the number of fields. I was trying something like this:

    = load(
      /* ri!data is List of Text String, ri!fieldCount is Integer */
      local!count: count(ri!data) / ri!fieldCount,
      local!numListFields: 1 + enumerate(ri!fieldCount),

      apply(
        type!ParseTest(_, _, _),
        apply(arraysForCDTPopulate(ri!data, ri!fieldCount, local!count, _), local!numListFields)
      )
    )

    where arraysForCDTPopulate(ri!data, ri!fieldCount, ri!count, ri!numListFields) is defined as:

    index(ri!data, ri!numListFields + ri!fieldCount * enumerate(ri!count), null)

    But it's not working, because it only returns one array and I wanted it to return 3 arrays with the corresponding data...

  • Certified Lead Developer
    in reply to Guilherme Canteiro

    2 points:

    1) I'm not really sure what you're asking for now, or what you mean by "replicate" - maybe you could provide an example of the result you'd expect to get from a given set of input data?

    2) the non-indented code you've pasted is pretty hard to read in this format; you should consider editing your comment, formatting your SAIL code with 2-space indents (as is used in the interface/expression rule editors), and applying the "Pre" block formatting style so it comes out nicely, like the code posted above by myself and others.

  • A Score Level 1
    in reply to Sachin
    Just a heads up on apply() with xpathsnippet() - for a medium to large amount of data this can perform quite poorly.