[OnPremKubernetes] java.lang.IllegalMonitorStateException when committing changes to repo

Hey again,

I get the following errors in the Space and VCS deployments when trying to commit changes to a repo:

java.util.concurrent.ExecutionException: java.lang.IllegalMonitorStateException: attempt to unlock lock, not locked by current thread by node id: 2ce5b355-2610-4b0a-8a18-2f3c5216fab9 thread-id: 0
        at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396)
        at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2096)
        at jetbrains.vcs.server.redis.RedisDfsDbCache.timedWait(RedisDfsDbCache.kt:70)
        at jetbrains.vcs.server.redis.RedisDfsDbCache.access$timedWait(RedisDfsDbCache.kt:91)
        at jetbrains.vcs.server.redis.RedisDfsDbCache$locks$1.lock$lambda$0(RedisDfsDbCache.kt:29)
        at kotlin.jdk7.AutoCloseableKt.closeFinally(AutoCloseable.kt:61)
        at jetbrains.vcs.server.index.OrgIndexer.updateRepoFilesIndex(OrgIndexer.kt:268)
        at jetbrains.vcs.server.index.OrgIndexer.access$updateRepoFilesIndex(OrgIndexer.kt:131)
        at jetbrains.vcs.server.index.OrgIndexer$updateRepoFilesIndex$1.invokeSuspend(OrgIndexer.kt)
        at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
        at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at libraries.io.DaemonThreadFactory.newThread$lambda$0(DaemonThreadFactory.kt:22)
        at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.IllegalMonitorStateException: attempt to unlock lock, not locked by current thread by node id: 2ce5b355-2610-4b0a-8a18-2f3c5216fab9 thread-id: 0
        at org.redisson.RedissonBaseLock.lambda$unlockAsync$2(RedissonBaseLock.java:323)
        at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:934)
        at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:911)
        at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
        at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
        at org.redisson.command.CommandBatchService.lambda$executeAsync$7(CommandBatchService.java:322)
        at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
        at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
        at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
        at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
        at org.redisson.command.RedisCommonBatchExecutor.handleResult(RedisCommonBatchExecutor.java:162)
        at org.redisson.command.RedisExecutor.checkAttemptPromise(RedisExecutor.java:478)
        at org.redisson.command.RedisExecutor.lambda$execute$3(RedisExecutor.java:172)
        at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
        at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
        at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
        at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
        at org.redisson.client.handler.CommandDecoder.decodeCommandBatch(CommandDecoder.java:318)
        at org.redisson.client.handler.CommandDecoder.decodeCommand(CommandDecoder.java:210)
        at org.redisson.client.handler.CommandDecoder.decode(CommandDecoder.java:137)
        at org.redisson.client.handler.CommandDecoder.decode(CommandDecoder.java:113)
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510)
        at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:366)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        ... 1 common frames omitted
2023-01-17 13:11:45.787 [IndexWriter-1] ERROR j.vcs.server.index.OrgIndexer [repoId=automation-worker-test-test-repository-1-9f5ad80d, requestId=index-index-cec3-4] - Org indexing repo automation-worker-test-test-repository-1-9f5ad80d
        (same ExecutionException and stack trace as in the first error above)

Changes seem to commit anyway, though.

5 comments

Till Rother, thanks for this as well. We'll optimize the code to handle such exceptions gracefully, but again, nothing critical here :)
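
For context, the usual defensive pattern around a Redisson lock looks roughly like the sketch below. This is a minimal illustration, not the actual Space code; the lock name and the withRepoLock helper are made up:

    import org.redisson.api.RedissonClient

    fun withRepoLock(redisson: RedissonClient, body: () -> Unit) {
        val lock = redisson.getLock("vcs:repo-index")  // hypothetical lock name
        lock.lock()
        try {
            body()
        } finally {
            // The lock's lease can expire before this point (e.g. under heavy load
            // or a long-running index update), in which case a blind unlock() throws
            // exactly this IllegalMonitorStateException. Checking ownership first
            // lets the operation finish cleanly instead.
            if (lock.isHeldByCurrentThread) {
                lock.unlock()
            }
        }
    }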


Hey Pavel Boger, I keep getting this error in computeservice as well:

 

2023-01-20 06:38:43.904 [ComputeK8sAutoscaler-1] ERROR c.p.aws.ec2.WorkerAutoscalerBase [] - []-[k8s.xlarge] Error during scaling
java.lang.IllegalMonitorStateException: attempt to unlock lock, not locked by current thread by node id: 47f17080-a4ef-46ee-8470-ce8b86bfc6f0 thread-id: -9223372036854775796
        at org.redisson.RedissonBaseLock.lambda$unlockAsync$2(RedissonBaseLock.java:323)
        at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:934)
        at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:911)
        at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
        at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
        at org.redisson.command.CommandBatchService.lambda$executeAsync$7(CommandBatchService.java:322)
        at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
        at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
        at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
        at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
        at org.redisson.command.RedisCommonBatchExecutor.handleResult(RedisCommonBatchExecutor.java:162)
        at org.redisson.command.RedisExecutor.checkAttemptPromise(RedisExecutor.java:478)
        at org.redisson.command.RedisExecutor.lambda$execute$3(RedisExecutor.java:172)
        at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
        at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
        at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
        at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
        at org.redisson.client.handler.CommandDecoder.decodeCommandBatch(CommandDecoder.java:318)
        at org.redisson.client.handler.CommandDecoder.decodeCommand(CommandDecoder.java:210)
        at org.redisson.client.handler.CommandDecoder.decode(CommandDecoder.java:137)
        at org.redisson.client.handler.CommandDecoder.decode(CommandDecoder.java:113)
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510)
        at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:366)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Thread.java:833)

The workers error out with the following logs:

[WARN  tini (242)] Tini is not running as PID 1 and isn't registered as a child subreaper.
07:02:06.765 INFO  [DInD] Zombie processes will not be re-parented to Tini, so zombie reaping won't work.
07:02:06.857 INFO  [DInD] To fix the problem, use the -s option or set the environment variable TINI_SUBREAPER to register Tini as a child subreaper, or run Tini as PID 1.
07:02:07.861 INFO  Process DInDRunner-PG-1[pid 242] finished in 3803 ms. Exit code 1. Still waiting for process stream readers...
07:02:07.862 INFO  Process DInDRunner-PG-1[pid 242] handling completed in 3803 ms. Exit code 1. DInDRunner-PG-1[pid 242]::stdout 790 bytes. DInDRunner-PG-1[pid 242]::stderr 0 bytes
07:02:07.862 INFO  [DInD] time="2023-01-20T07:02:07.764815881Z" level=info msg="Starting up"
07:02:07.863 INFO  [DInD] time="2023-01-20T07:02:07.766377403Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
07:02:07.863 INFO  [DInD] failed to load listeners: listen tcp 0.0.0.0:2376: bind: address already in use

Another thing regarding the error in the first post of the thread: I think it's occurring due to programming mistakes in the committed .kts file.


Till Rother, I just had a conversation with the team; one of the possible reasons for this exception is a lack of CPU resources. Could you please check whether that's the case and try increasing the limits if necessary?
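
A quick way to compare actual usage against the configured requests/limits (this assumes metrics-server is installed in the cluster; the namespace and pod names are placeholders):

    # Current CPU/memory consumption per pod
    kubectl top pods -n space

    # Shows the configured requests/limits and recent events (restarts, OOM kills)
    kubectl describe pod <vcs-pod-name> -n space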


@...

I'm using a horizontal autoscaler with modified request values, based on historical usage:

        # VCS
        resources:
          limits:
            cpu: "1"
            memory: 1500Mi
          requests:
            cpu: 300m
            memory: 800Mi

        # Space
        resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 500m
            memory: 2Gi

        # Packages
        resources:
          limits:
            cpu: "1"
            memory: 2Gi
          requests:
            cpu: 250m
            memory: 800Mi

        # Computeservice
        resources:
          limits:
            cpu: "2"
            memory: 4Gi
          requests:
            cpu: 100m
            memory: 800Mi

 

I also checked the tables vcsdfs.vcsrepoindexfiles and vcsdfs.vcsorgindexfiles and removed every file from S3 that wasn't referenced in either table. After doing that, the error no longer appeared in the VCS pods.
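
In case it helps anyone else, the check I ran was roughly the following (the file_key column name is hypothetical; verify the actual schema and back up the bucket before deleting anything):

    -- Collect all S3 object keys still referenced by the index tables
    SELECT file_key FROM vcsdfs.vcsrepoindexfiles
    UNION
    SELECT file_key FROM vcsdfs.vcsorgindexfiles;

I then compared that list against the objects in the bucket and deleted only the unreferenced ones.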

