We are not able to bring up the Tomcat server even though the Data Server starts successfully.
The Data Server health flips from true to false almost immediately.
Kindly help us resolve this issue.
The Tomcat log is given below.
2022-10-07 14:48:42,814 [wait-for-component] INFO com.appiancorp.common.startup.WaitForStatefulComponents - Waiting for Appian component Data Server to be healthy...
2022-10-07 14:48:53 Appian in Tomcat is stopped via stop command
And watchdog.log shows the error below.
{"level":"ERROR","time":"2022-10-07T15:14:30.569Z","logger":"watchdog","caller":"BranchCleanup.java:56","thread":"branch-cleaner","msg":"Error executing branch cleanup."}
com.appian.data.client.AdsException: APNX-3-0100-004: No-op historical store gateway will not process write requests
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
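Each watchdog.log entry is a single JSON object per line, so the most recent ERROR can be surfaced with a short script. This is only an illustrative sketch, not an Appian-provided tool; feed it the lines of your own watchdog.log.

```python
import json

def last_error(lines):
    """Return the msg of the most recent ERROR-level watchdog entry, or None."""
    last = None
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # stack-trace lines are not JSON; skip them
        if entry.get("level") == "ERROR":
            last = entry.get("msg")
    return last

# Example using entries shaped like the ones in this thread:
sample = [
    '{"level":"INFO","time":"2022-10-07T15:14:00.000Z","logger":"watchdog","msg":"startup"}',
    '{"level":"ERROR","time":"2022-10-07T15:14:30.569Z","logger":"watchdog","msg":"Error executing branch cleanup."}',
]
print(last_error(sample))  # -> Error executing branch cleanup.
```

Running this over the full log makes it easier to spot the first fatal entry rather than the last symptom.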
Hi, I am getting the same error, and health.bat shows false for the healthy status. Any resolutions?
Hi, I fixed it by following the steps below:
Hi Abdullah, thank you for your reply. I have a small doubt about the 2nd point: you said to execute 2 scripts, but I see the same script name mentioned twice. Can you send the 2 scripts that I need to execute?
Sorry, that was a mistake.
Thank you. I will execute those and get back to you.
Hi Abdullah, after following the steps you mentioned I got the following errors:
ill start up in no-op mode"}
{"level":"INFO","time":"2024-10-24T13:00:57.950Z","logger":"watchdog","caller":"TransactionLogRecovery.java:80","thread":"main","msg":"checking for transaction effects Kafka topic existence"}
{"level":"INFO","time":"2024-10-24T13:01:47.979Z","logger":"watchdog","caller":"TransactionLogRecovery.java:97","thread":"main","msg":"waiting for transaction effects Kafka topic to be available"}
{"level":"INFO","time":"2024-10-24T13:02:38.019Z","logger":"watchdog","caller":"TransactionLogRecovery.java:97","thread":"main","msg":"waiting for transaction effects Kafka topic to be available"}
{"level":"INFO","time":"2024-10-24T13:03:28.034Z","logger":"watchdog","caller":"TransactionLogRecovery.java:97","thread":"main","msg":"waiting for transaction effects Kafka topic to be available"}
{"level":"INFO","time":"2024-10-24T13:04:18.054Z","logger":"watchdog","caller":"TransactionLogRecovery.java:97","thread":"main","msg":"waiting for transaction effects Kafka topic to be available"}
{"level":"INFO","time":"2024-10-24T13:05:08.071Z","logger":"watchdog","caller":"TransactionLogRecovery.java:97","thread":"main","msg":"waiting for transaction effects Kafka topic to be available"}
{"level":"INFO","time":"2024-10-24T13:05:58.097Z","logger":"watchdog","caller":"TransactionLogRecovery.java:97","thread":"main","msg":"waiting for transaction effects Kafka topic to be available"}
{"level":"INFO","time":"2024-10-24T13:06:48.112Z","logger":"watchdog","caller":"TransactionLogRecovery.java:97","thread":"main","msg":"waiting for transaction effects Kafka topic to be available"}
{"level":"ERROR","time":"2024-10-24T13:06:58.116Z","logger":"watchdog","caller":"Watchdog.java:1483","thread":"main","msg":"Shutting down with exit code 1..."}
java.lang.IllegalStateException: Historical store directory (C:\appian24.1\appian\data-server\bin\..\data\hs) and/or snapshot root directory (C:\appian24.1\appian\data-server\bin\..\data\ss) are not empty, however, kafka topic doesn't exist. Is the kafka log lost or corrupted?
	at com.appian.data.server.TransactionLogRecovery.checkTopicExistsWithRetry(TransactionLogRecovery.java:106)
	at com.appian.data.server.TransactionLogRecovery.verify(TransactionLogRecovery.java:63)
	at com.appian.data.server.Watchdog.initializeData(Watchdog.java:471)
	at com.appian.data.server.Watchdog.startup(Watchdog.java:354)
	at com.appian.data.server.Watchdog.main(Watchdog.java:1478)
{"level":"ERROR","time":"2024-10-24T13:06:58.721Z","logger":"watchdog","caller":"Watchdog.java:1483","thread":"main","msg":"Shutting down with exit code 1..."}
com.appian.data.server.UnrecoverableError: Lost WebSocket connection with localhost:5400. The target JVM process exited abnormally. Check for earlier errors (e.g. JVM startup errors).
	at com.appian.data.server.HeartbeatListener$SyncHeartbeatListenerBuilder.create(HeartbeatListener.java:165)
	at com.appian.data.server.JvmStarter.startSync(JvmStarter.java:41)
	at com.appian.data.server.Watchdog.startWatchdogDaemon(Watchdog.java:1348)
	at com.appian.data.server.Watchdog.main(Watchdog.java:1472)
Caused by: java.lang.Exception: WebSocket closed.
	at com.appian.data.server.HeartbeatListener$HeartbeatListenerFuture.lambda$null$1(HeartbeatListener.java:215)
	at io.vertx.core.http.impl.WebSocketImplBase.handleClosed(WebSocketImplBase.java:620)
	at io.vertx.core.http.impl.WebSocketImpl.handleClosed(WebSocketImpl.java:47)
	at io.vertx.core.http.impl.Http1xClientConnection.handleClosed(Http1xClientConnection.java:812)
	at io.vertx.core.net.impl.VertxHandler.lambda$channelInactive$3(VertxHandler.java:153)
	at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:366)
	at io.vertx.core.impl.EventLoopContext.execute(EventLoopContext.java:43)
	at io.vertx.core.impl.ContextImpl.executeFromIO(ContextImpl.java:229)
	at io.vertx.core.impl.ContextImpl.executeFromIO(ContextImpl.java:221)
	at io.vertx.core.net.impl.VertxHandler.channelInactive(VertxHandler.java:153)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:303)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:281)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:274)
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:411)
	at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:376)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:305)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:281)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:274)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1405)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:301)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:281)
	at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:901)
	at io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:813)
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:566)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:750)
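For what it's worth, the IllegalStateException in that log is a startup consistency check: the historical store (data\hs) and snapshot (data\ss) directories contain data on disk, but the Kafka topic that backs them no longer exists, so the watchdog refuses to start rather than serve state it cannot replay. A rough Python sketch of that guard (this is not Appian's code; the directory roles are taken from the error message, and the Kafka-topic lookup is reduced to a boolean):

```python
import os
import tempfile

def check_startup_state(hs_dir, ss_dir, kafka_topic_exists):
    """Mimic the watchdog guard: on-disk state without its Kafka topic is fatal."""
    has_state = any(os.scandir(hs_dir)) or any(os.scandir(ss_dir))
    if has_state and not kafka_topic_exists:
        raise RuntimeError(
            "Historical store and/or snapshot directories are not empty, "
            "however, kafka topic doesn't exist. Is the kafka log lost or corrupted?"
        )
    return "ok"

# Demo: empty directories start cleanly; leftover state with a missing topic fails.
with tempfile.TemporaryDirectory() as hs, tempfile.TemporaryDirectory() as ss:
    print(check_startup_state(hs, ss, kafka_topic_exists=False))  # -> ok
    open(os.path.join(hs, "segment.dat"), "w").close()  # simulate leftover state
    try:
        check_startup_state(hs, ss, kafka_topic_exists=False)
    except RuntimeError as e:
        print("refused:", e)
```

The practical implication is that either the Kafka log directory must be restored together with hs/ss, or all of them must be cleaned out together; fixing only one side keeps tripping this check.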