I'm facing an issue related to timestamp precision in an Oracle table that’s used with Appian. The table has two timestamp columns:
one with zero precision (Datatype - TIMESTAMP(0))
one with six precision (Datatype - TIMESTAMP(6))
When I insert data into these columns without any triggers enabled, everything works as expected: the zero-precision column stores no fractional seconds, and the six-precision column stores fractional seconds down to microseconds.
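For reference, here is a minimal sketch of the setup (demo_ts, ts0_col, and ts6_col are placeholder names, not my real schema):

    CREATE TABLE demo_ts (
      ts0_col TIMESTAMP(0),  -- no fractional seconds
      ts6_col TIMESTAMP(6)   -- fractional seconds down to microseconds
    );

    -- With no trigger in place, a plain insert behaves as described:
    -- Oracle implicitly rounds each value to the column's declared precision.
    INSERT INTO demo_ts (ts0_col, ts6_col) VALUES (SYSTIMESTAMP, SYSTIMESTAMP);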
However, when I add a trigger on the zero-precision column, I get an error saying "the actual timestamp precision is 7, but received 11", even though the column is defined with zero precision and the trigger only defaults the column to SYSTIMESTAMP when it is null.
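The trigger is essentially of this shape (again a sketch with the same placeholder names, not the real definition):

    CREATE OR REPLACE TRIGGER demo_ts_bi
    BEFORE INSERT ON demo_ts
    FOR EACH ROW
    BEGIN
      -- Default the zero-precision column only when no value was supplied.
      IF :NEW.ts0_col IS NULL THEN
        :NEW.ts0_col := SYSTIMESTAMP;  -- a TIMESTAMP WITH TIME ZONE value
      END IF;
    END;
    /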
Why would the trigger suddenly cause this exception? I'd like to understand the logic better.
I am not 100% sure, but my guess would be: when Appian sees SYSTIMESTAMP in your trigger, it thinks the column will receive timestamp data with time zone info. But your column is set up for a basic timestamp without a time zone (TIMESTAMP(0)). These two don't match, so it throws an error.
I understand that in Oracle, SYSTIMESTAMP has a higher precision (and includes timezone) compared to a TIMESTAMP(0) column, and this mismatch can cause errors.
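If that mismatch is indeed the cause, I assume the trigger could make the conversion explicit, along these lines (same placeholder column name as above):

    -- The CAST strips the time zone and rounds the fractional seconds, so the
    -- assigned value already matches the column's declared type.
    :NEW.ts0_col := CAST(SYSTIMESTAMP AS TIMESTAMP(0));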
What I'm trying to understand is this:
Why does a database trigger’s behavior come into play during an Appian write to data store entity?
Since triggers are database-level components, I would expect Appian to only care about its own insert statement and let Oracle handle the trigger logic internally. But in this case, the trigger seems to affect what Appian "sees" and causes the insert to fail.
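For comparison, this is the behavior I'd expect at the plain SQL level, where the trigger really is transparent (same placeholder table as above):

    -- ts0_col is omitted, the trigger fires, and Oracle implicitly converts
    -- SYSTIMESTAMP down to TIMESTAMP(0) on assignment.
    INSERT INTO demo_ts (ts6_col) VALUES (SYSTIMESTAMP);
    SELECT ts0_col, ts6_col FROM demo_ts;  -- ts0_col shows no fractional seconds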
Can anyone explain why Appian is sensitive to this trigger behavior, instead of treating it as transparent database logic?