Issues with Space On-Premises installation.

Hello,
I'd like to present your solution in my company, but I need to install it on my VM.
I've followed this guide (https://www.jetbrains.com/help/space/production-installation.html) and ran into this issue:

LAST SEEN   TYPE      REASON             OBJECT                                MESSAGE
2m28s       Warning   FailedScheduling   pod/jb-space-space-549ddb8bcd-6888x   0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
2m28s       Warning   FailedScheduling   pod/jb-space-space-549ddb8bcd-l5sfj   0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

I'm on Ubuntu 22.04 with microk8s.
Do you have any ideas?
How should this step be done: "Configure the TCP proxy for the VCS Ingress, namely, configure the Ingress Controller map. If you use Kubernetes Nginx Ingress Controller, follow this guide."?

15 comments

bartiszosti, could you please make sure you have at least 5 worker nodes in your cluster, then try to downscale the number of replicas for all Space pods to 1, and let us know about the results?

As for your question about the Ingress Controller, could you please elaborate if you had any particular issues when going through the guide?

https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
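For reference, that guide boils down to one entry in the controller's tcp-services ConfigMap; below is a minimal sketch (the namespace, service name, and ports are placeholders, not values from this installation):

```yaml
# Sketch of a tcp-services ConfigMap for ingress-nginx.
# All names and ports below are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # "<external port>": "<namespace>/<service>:<internal port>"
  "2222": "my-namespace/my-ssh-service:22"
```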


Hello Pavel,
thanks for your answer.
I had only one worker, but now I have 5 workers in the cluster.

bartiszosti@bsvsa:~$ kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
bsvsd   Ready    <none>   6d17h   v1.25.5
bsvsb   Ready    <none>   6d17h   v1.25.5
bsvsc   Ready    <none>   6d17h   v1.25.5
bsvse   Ready    <none>   6d17h   v1.25.5
bsvsa   Ready    <none>   6d18h   v1.25.5

I also added to the values.yaml

...
space:
  replicaCount: 1
  autoscaling:
    minReplicas: 1
    maxReplicas: 1
...
packages:
  replicaCount: 1
  autoscaling:
    minReplicas: 1
    maxReplicas: 1
...
vcs:
  replicaCount: 1
  autoscaling:
    minReplicas: 1
    maxReplicas: 1
...
langservice:
  replicaCount: 1
  autoscaling:
    minReplicas: 1
    maxReplicas: 1
...

It helped, but now my pods got stuck in the "Init" state.

bartiszosti@bsvsa:~$ kubectl get pods -n kube-space
NAME                                    READY   STATUS     RESTARTS      AGE
jb-space-space-549ddb8bcd-5nssd         0/1     Init:0/5   2             6d4h
jb-space-vcs-6ffd6fd75c-9hx84           0/1     Init:0/2   2             6d4h
jb-space-packages-d4867d4f6-hh8h9       0/1     Init:0/3   2             6d4h
jb-space-langservice-69c8dd8f84-b74r7   1/1     Running    2 (22m ago)   6d4h

During initialization I saw only these warnings:

 

About the Ingress Controller: the guide you sent is clear, but I have no idea which service and which port I should add to the ConfigMap. Could you tell me? I feel I'm close to running it. It's the last thing I haven't done.


Hey bartiszosti,

if it's only for presentation purposes, you can edit the deployments and delete the anti-affinity settings:

kubectl edit deployment -n kube-space

find and delete the following lines:

        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app.kubernetes.io/component
                    operator: In
                    values:
                    - ${space|packages|vcs|langservice}
              topologyKey: kubernetes.io/hostname
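If you prefer not to hand-edit each deployment, the same change can be scripted with kubectl patch; a sketch assuming the deployment names and namespace shown earlier in this thread (verify them against your own cluster first):

```shell
# Remove the podAntiAffinity block from each Space deployment (sketch).
# Deployment names below are assumed from this thread; check with:
#   kubectl get deployments -n kube-space
for d in jb-space-space jb-space-vcs jb-space-packages jb-space-langservice; do
  kubectl patch deployment "$d" -n kube-space --type=json \
    -p='[{"op": "remove", "path": "/spec/template/spec/affinity/podAntiAffinity"}]'
done
```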

Just another question: did you install PostgreSQL, Redis, Elasticsearch, and MinIO beforehand, or do you already have those services?


Hi. Okay, I didn't have these services.
Currently, I have other issues...
1. The login button doesn't work. If I enter credentials and click the button, nothing happens, but after opening Space in another tab I'm on the onboarding page. Why?
2. I'm not able to upload the company logo during onboarding, nor the user's profile picture. I always get "Something went wrong. If the problem persists, please contact the Space support team." or something about logs from the server. Why? How can I get these logs?

Another question: can I have only two workers? I have only two physical machines, so it makes no sense to duplicate workers on the same machine.


bartiszosti, for the sake of better organization, could you please create separate support requests here? The following information would be useful to start the investigation:

Case 1: Are there any errors in the browser console and network tabs? Is Space installation configured behind any reverse proxy? Do you use self-signed or CA certificates?

Case 2: Please open the browser console and check if there are any errors there. Also, check for any errors in Space and MinIO pods.

As for the number of worker nodes, please note they could be virtual and hosted on a single machine. Below is the link to the docs with more details about k8s nodes:

https://kubernetes.io/docs/concepts/architecture/nodes/ 


Hello,
I found the issue. When I was trying to log in, I saw an error about mixed HTTP and HTTPS content. The solution is to delete the space.altUrls value; I had an HTTP address there. I have no idea why it took this URL.
The second issue still exists. When I try to upload the company's logo or a user's profile picture, I see the following output.
Of course I can create separate requests, but so far the support here is great.


bartiszosti, glad to hear you managed to fix this issue. As for the issue with uploading images, please reproduce it once again and then provide the logs from the Space and MinIO pods. The command should be `kubectl -n ${namespace} logs ${pod_id}`.


Hello,
the issue was that I didn't create buckets in MinIO.
Of course, I have another question. How do I run the VCS SSH server on port 22?
I moved the host's SSH server to a different port (10022), changed vcs.service.ports.ssh in the values.yaml to 22, and ...

bartiszosti@bsvsa:~/space$ kubectl get pods --namespace space
NAME                                READY   STATUS             RESTARTS          AGE
space-langservice-bfcd7f99d-q2zkp   1/1     Running            0                 11h
space-langservice-bfcd7f99d-4vwl4   1/1     Running            0                 11h
space-packages-94b57c9d6-km7pw      1/1     Running            0                 11h
space-packages-94b57c9d6-ftzdd      1/1     Running            0                 11h
space-space-7458b87644-qnq87        1/1     Running            0                 11h
space-space-7458b87644-w2vzv        1/1     Running            0                 11h
space-vcs-bf8ff8959-cj5nd           0/1     CrashLoopBackOff   117 (4m50s ago)   11h
space-vcs-bf8ff8959-bw6th           0/1     CrashLoopBackOff   117 (3m49s ago)   11h
bartiszosti@bsvsa:~/space$ kubectl logs --namespace space space-vcs-bf8ff8959-cj5nd
Defaulted container "vcs" out of: vcs, check-postgresql (init), check-redis (init)
2023-02-02 05:40:25.652 [main] INFO  org.eclipse.jetty.util.log [] - Logging initialized @1197ms to org.eclipse.jetty.util.log.Slf4jLog
2023-02-02 05:40:25.755 [main] INFO  Application [] - Autoreload is disabled because the development mode is off.
2023-02-02 05:40:26.346 [main] INFO  Application [] - Application is starting up
2023-02-02 05:40:26.364 [main] INFO  jetbrains.vcs.server.VcsServer [] - VCS-Server: detected production environment, server root /home/space/git/vcs-hosting
2023-02-02 05:40:26.379 [main] INFO  r.container.ReflectionsClassScanner [] - found urls in plugins:
+ auth-circlet: 
  - file:/home/space/git/vcs-hosting/lib/plugins/auth-circlet/auth-circlet.jar
+ dfs: 
  - file:/home/space/git/vcs-hosting/lib/plugins/dfs/dfs.jar
+ dfs-redis: 
  - file:/home/space/git/vcs-hosting/lib/plugins/dfs-redis/dfs-redis.jar
+ dfs-s3: 
  - file:/home/space/git/vcs-hosting/lib/plugins/dfs-s3/dfs-s3.jar
+ git-backend: 
  - file:/home/space/git/vcs-hosting/lib/plugins/git-backend/git-backend.jar
+ graph-index: 
  - file:/home/space/git/vcs-hosting/lib/plugins/graph-index/graph-index.jar
+ layout: 
  - file:/home/space/git/vcs-hosting/lib/plugins/layout/layout.jar
+ metrics: 
  - file:/home/space/git/vcs-hosting/lib/plugins/metrics/metrics.jar
+ rest-api: 
  - file:/home/space/git/vcs-hosting/lib/plugins/rest-api/rest-api.jar
+ signature: 
  - file:/home/space/git/vcs-hosting/lib/plugins/signature/signature.jar
+ ssh-server: 
  - file:/home/space/git/vcs-hosting/lib/plugins/ssh-server/ssh-server.jar
2023-02-02 05:40:26.422 [main] INFO  j.v.s.s.ConfigurationPropertiesImpl [] - Loading application settings from: /home/space/git/vcs-hosting/app.conf
2023-02-02 05:40:27.139 [main] INFO  r.container.ReflectionsClassScanner [] - Async reflections took 697 ms to scan 12 urls, producing 470 keys and 5601 values
2023-02-02 05:40:27.523 [main] INFO  r.container.ReflectionsClassScanner [] - Loaded 125 classes in parallel in 377 ms
2023-02-02 05:40:27.525 [main] INFO  circlet.platform.a.b.b [] - Found 125 extensions in 12 modules in 1084 ms
2023-02-02 05:40:27.525 [main] INFO  circlet.platform.a.b.b [] - Found plugins: vcs hosting server, graph-index, metrics, dfs-s3, git-backend, dfs-redis, signature, rest-api, auth-circlet, dfs, ssh-server, layout
2023-02-02 05:40:28.942 [main] INFO  org.redisson.Version [] - Redisson 3.17.4
2023-02-02 05:40:29.434 [redisson-netty-2-7] INFO  o.r.c.p.MasterPubSubConnectionPool [] - 1 connections initialized for redis-master.redis.svc.cluster.local/10.152.183.124:6379
2023-02-02 05:40:29.532 [redisson-netty-2-20] INFO  o.r.c.pool.MasterConnectionPool [] - 24 connections initialized for redis-master.redis.svc.cluster.local/10.152.183.124:6379
2023-02-02 05:40:30.839 [main] INFO  jetbrains.vcs.server.s3.S3Storage [] - Bucket space-bucket already exists
2023-02-02 05:40:31.643 [main] INFO  o.a.s.c.u.s.b.BouncyCastleSecurityProviderRegistrar [] - getOrCreateProvider(BC) created instance of org.bouncycastle.jce.provider.BouncyCastleProvider
2023-02-02 05:40:31.645 [main] INFO  o.a.s.c.u.s.e.EdDSASecurityProviderRegistrar [] - getOrCreateProvider(EdDSA) created instance of net.i2p.crypto.eddsa.EdDSASecurityProvider
2023-02-02 05:40:33.554 [main] ERROR r.c.JvmTypeBasedSingletonDescriptor [] - Error creating instance of jetbrains.ssh.server.SSHServer
java.lang.reflect.InvocationTargetException: null
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
    at runtime.container.JvmTypeBasedSingletonDescriptor$createInstanceOfImpl$2.invoke(JvmTypeBasedSingletonDescriptor.kt:35)
    at libraries.basics.ClassLoaderUtilsKt.withContextClassLoader(ClassLoaderUtils.kt:9)
    at runtime.container.JvmTypeBasedSingletonDescriptor.createInstanceOfImpl(JvmTypeBasedSingletonDescriptor.kt:32)
    at runtime.container.SingletonDescriptor.createInstanceOf(SingletonDescriptor.kt:128)
    at runtime.container.SingletonDescriptor.constructInstance$suspendImpl(SingletonDescriptor.kt:43)
    at runtime.container.SingletonDescriptor.constructInstance(SingletonDescriptor.kt)
    at runtime.container.ListDescriptor.constructInstance(ListDescriptor.kt:10)
    at runtime.container.ResolveKt.bindArguments(Resolve.kt:56)
    at runtime.container.JvmTypeBasedSingletonDescriptor.createInstanceOfImpl(JvmTypeBasedSingletonDescriptor.kt:30)
    at runtime.container.SingletonDescriptor.createInstanceOf(SingletonDescriptor.kt:128)
    at runtime.container.SingletonDescriptor.constructInstance$suspendImpl(SingletonDescriptor.kt:43)
    at runtime.container.SingletonDescriptor.constructInstance(SingletonDescriptor.kt)
    at runtime.container.JvmTypeBasedComponentStorage.composeDescriptors(Storage.kt:139)
    at runtime.container.JvmTypeBasedComponentStorage.compose(Storage.kt:128)
    at runtime.container.StorageComponentContainer.compose(Container.kt:40)
    at circlet.platform.a.b.b.a(b.java:45)
    at circlet.platform.a.b.b.a(b.java:39)
    at jetbrains.vcs.server.VcsServerKt.appContainer(VcsServer.kt:81)
    at jetbrains.vcs.server.VcsServerKt.access$appContainer(VcsServer.kt:76)
    at jetbrains.vcs.server.VcsServerKt$mainImpl$container$1.invokeSuspend(VcsServer.kt:1)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
    at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:284)
    at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85)
    at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59)
    at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source)
    at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38)
    at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source)
    at jetbrains.vcs.server.VcsServerKt.mainImpl(VcsServer.kt:96)
    at jetbrains.vcs.server.VcsServerKt.main(VcsServer.kt:65)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at kotlin.reflect.jvm.internal.calls.CallerImpl$Method.callMethod(CallerImpl.kt:97)
    at kotlin.reflect.jvm.internal.calls.CallerImpl$Method$Static.call(CallerImpl.kt:106)
    at kotlin.reflect.jvm.internal.KCallableImpl.call(KCallableImpl.kt:108)
    at kotlin.reflect.jvm.internal.KCallableImpl.callDefaultMethod$kotlin_reflection(KCallableImpl.kt:159)
    at kotlin.reflect.jvm.internal.KCallableImpl.callBy(KCallableImpl.kt:112)
    at io.ktor.server.engine.internal.CallableUtilsKt.callFunctionWithInjection(CallableUtils.kt:119)
    at io.ktor.server.engine.internal.CallableUtilsKt.executeModuleFunction(CallableUtils.kt:36)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$launchModuleByName$1.invoke(ApplicationEngineEnvironmentReloading.kt:331)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$launchModuleByName$1.invoke(ApplicationEngineEnvironmentReloading.kt:330)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.avoidingDoubleStartupFor(ApplicationEngineEnvironmentReloading.kt:355)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.launchModuleByName(ApplicationEngineEnvironmentReloading.kt:330)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.access$launchModuleByName(ApplicationEngineEnvironmentReloading.kt:32)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$instantiateAndConfigureApplication$1.invoke(ApplicationEngineEnvironmentReloading.kt:311)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$instantiateAndConfigureApplication$1.invoke(ApplicationEngineEnvironmentReloading.kt:309)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.avoidingDoubleStartup(ApplicationEngineEnvironmentReloading.kt:337)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.instantiateAndConfigureApplication(ApplicationEngineEnvironmentReloading.kt:309)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.createApplication(ApplicationEngineEnvironmentReloading.kt:150)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.start(ApplicationEngineEnvironmentReloading.kt:276)
    at io.ktor.server.jetty.JettyApplicationEngineBase.start(JettyApplicationEngineBase.kt:49)
    at io.ktor.server.jetty.JettyApplicationEngine.start(JettyApplicationEngine.kt:24)
    at io.ktor.server.jetty.EngineMain.main(EngineMain.kt:31)
    at jetbrains.vcs.server.VcsServer.main(VcsServer.kt:4)
Caused by: java.net.BindException: Permission denied
    at java.base/sun.nio.ch.Net.bind0(Native Method)
    at java.base/sun.nio.ch.Net.bind(Net.java:555)
    at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:337)
    at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
    at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:141)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:562)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1334)
    at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506)
    at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491)
    at io.netty.handler.logging.LoggingHandler.bind(LoggingHandler.java:230)
    at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506)
    at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491)
    at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973)
    at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:260)
    at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:356)
    at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:833)
Exception in thread "main" java.lang.reflect.InvocationTargetException
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
    at runtime.container.JvmTypeBasedSingletonDescriptor$createInstanceOfImpl$2.invoke(JvmTypeBasedSingletonDescriptor.kt:35)
    at libraries.basics.ClassLoaderUtilsKt.withContextClassLoader(ClassLoaderUtils.kt:9)
    at runtime.container.JvmTypeBasedSingletonDescriptor.createInstanceOfImpl(JvmTypeBasedSingletonDescriptor.kt:32)
    at runtime.container.SingletonDescriptor.createInstanceOf(SingletonDescriptor.kt:128)
    at runtime.container.SingletonDescriptor.constructInstance$suspendImpl(SingletonDescriptor.kt:43)
    at runtime.container.SingletonDescriptor.constructInstance(SingletonDescriptor.kt)
    at runtime.container.ListDescriptor.constructInstance(ListDescriptor.kt:10)
    at runtime.container.ResolveKt.bindArguments(Resolve.kt:56)
    at runtime.container.JvmTypeBasedSingletonDescriptor.createInstanceOfImpl(JvmTypeBasedSingletonDescriptor.kt:30)
    at runtime.container.SingletonDescriptor.createInstanceOf(SingletonDescriptor.kt:128)
    at runtime.container.SingletonDescriptor.constructInstance$suspendImpl(SingletonDescriptor.kt:43)
    at runtime.container.SingletonDescriptor.constructInstance(SingletonDescriptor.kt)
    at runtime.container.JvmTypeBasedComponentStorage.composeDescriptors(Storage.kt:139)
    at runtime.container.JvmTypeBasedComponentStorage.compose(Storage.kt:128)
    at runtime.container.StorageComponentContainer.compose(Container.kt:40)
    at circlet.platform.a.b.b.a(b.java:45)
    at circlet.platform.a.b.b.a(b.java:39)
    at jetbrains.vcs.server.VcsServerKt.appContainer(VcsServer.kt:81)
    at jetbrains.vcs.server.VcsServerKt.access$appContainer(VcsServer.kt:76)
    at jetbrains.vcs.server.VcsServerKt$mainImpl$container$1.invokeSuspend(VcsServer.kt:1)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
    at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:284)
    at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85)
    at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59)
    at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source)
    at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38)
    at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source)
    at jetbrains.vcs.server.VcsServerKt.mainImpl(VcsServer.kt:96)
    at jetbrains.vcs.server.VcsServerKt.main(VcsServer.kt:65)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at kotlin.reflect.jvm.internal.calls.CallerImpl$Method.callMethod(CallerImpl.kt:97)
    at kotlin.reflect.jvm.internal.calls.CallerImpl$Method$Static.call(CallerImpl.kt:106)
    at kotlin.reflect.jvm.internal.KCallableImpl.call(KCallableImpl.kt:108)
    at kotlin.reflect.jvm.internal.KCallableImpl.callDefaultMethod$kotlin_reflection(KCallableImpl.kt:159)
    at kotlin.reflect.jvm.internal.KCallableImpl.callBy(KCallableImpl.kt:112)
    at io.ktor.server.engine.internal.CallableUtilsKt.callFunctionWithInjection(CallableUtils.kt:119)
    at io.ktor.server.engine.internal.CallableUtilsKt.executeModuleFunction(CallableUtils.kt:36)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$launchModuleByName$1.invoke(ApplicationEngineEnvironmentReloading.kt:331)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$launchModuleByName$1.invoke(ApplicationEngineEnvironmentReloading.kt:330)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.avoidingDoubleStartupFor(ApplicationEngineEnvironmentReloading.kt:355)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.launchModuleByName(ApplicationEngineEnvironmentReloading.kt:330)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.access$launchModuleByName(ApplicationEngineEnvironmentReloading.kt:32)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$instantiateAndConfigureApplication$1.invoke(ApplicationEngineEnvironmentReloading.kt:311)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$instantiateAndConfigureApplication$1.invoke(ApplicationEngineEnvironmentReloading.kt:309)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.avoidingDoubleStartup(ApplicationEngineEnvironmentReloading.kt:337)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.instantiateAndConfigureApplication(ApplicationEngineEnvironmentReloading.kt:309)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.createApplication(ApplicationEngineEnvironmentReloading.kt:150)
    at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.start(ApplicationEngineEnvironmentReloading.kt:276)
    at io.ktor.server.jetty.JettyApplicationEngineBase.start(JettyApplicationEngineBase.kt:49)
    at io.ktor.server.jetty.JettyApplicationEngine.start(JettyApplicationEngine.kt:24)
    at io.ktor.server.jetty.EngineMain.main(EngineMain.kt:31)
    at jetbrains.vcs.server.VcsServer.main(VcsServer.kt:4)
Caused by: java.net.BindException: Permission denied
    at java.base/sun.nio.ch.Net.bind0(Native Method)
    at java.base/sun.nio.ch.Net.bind(Net.java:555)
    at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:337)
    at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
    at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:141)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:562)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1334)
    at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506)
    at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491)
    at io.netty.handler.logging.LoggingHandler.bind(LoggingHandler.java:230)
    at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506)
    at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491)
    at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973)
    at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:260)
    at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:356)
    at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:833)
2023-02-02 05:43:53.520 [VcsServer shutdown hook] WARN  Application [] - VcsServer shutdown hook called

Another question. I have the following services created by Space.

bartiszosti@bsvsa:~$ kubectl get services --namespace space
NAME                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
space-vcs           ClusterIP      10.152.183.190   <none>        19084/TCP,12222/TCP   4h46m
space-space         ClusterIP      10.152.183.201   <none>        8084/TCP,9084/TCP     4h46m
space-langservice   ClusterIP      10.152.183.114   <none>        8095/TCP              4h46m
space-packages      ClusterIP      10.152.183.128   <none>        8390/TCP,9390/TCP     4h46m
space-vcs-ext       LoadBalancer   10.152.183.248   <pending>     12222:31412/TCP       4h46m

I know I must add the space-vcs service to the nginx-ingress-tcp-microk8s-conf to expose the SSH server. So far I use the file below, but I'm not sure which service I should use: space-vcs or space-vcs-ext. Could you tell me? If it's space-vcs-ext, how should the file below look?

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
data:
  12222: space/space-vcs:12222

The next step to expose the SSH server is to add a record to the nginx-ingress-microk8s-controller. I do it with the command below, but do you know a better method? This tutorial doesn't work as expected -> https://microk8s.io/docs/addon-ingress.

kubectl patch ds -n ingress nginx-ingress-microk8s-controller --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/ports/-", "value":{"containerPort":12222,"name":"space-vcs-ssh","hostPort":12222,"protocol":"TCP"}}]'

bartiszosti, space-vcs is the correct service name. Instead of patching the ConfigMap directly, we'd recommend editing it through the values.yaml file of the Nginx Ingress Controller Helm chart (at the same time, the best point of contact would be the microk8s project maintainers).

If I'm not mistaken, you're trying to change both internal and external ports. If that's true, please note that the internal one is defined by the app and shouldn't conflict with any others. It means it'd be enough to specify an external port only (22222 in the example below):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
data:
  22222: space/space-vcs:12222

 


Thanks for your answer.
You're only half right, because your solution enables connections on the external port, but on the website the address is shown with the internal one.

What is the purpose of the space-vcs-ext service?


bartiszosti, oh, I see. Could you now change the `vcs.service.ports.ssh` parameter to your external port value? As for `space-vcs-ext` service, it could be disabled with `vcs.externalService.enabled` set to false.


The change has the same effect as I described before.
The container has no permission to use port 22.
Do you know how to fix that?


bartiszosti, could you please clarify how you applied these changes? Please try to scale the pods down to 0, apply the changes with helm upgrade, then scale the pods back up.


Hello.

I did it using the commands below.

kubectl scale deployments --all --namespace space --replicas 0
helm upgrade space jetbrains-space-onpremises/space --namespace space --values ./space/values.yaml

Finally, I managed to bind the SSH server to port 22.
Unfortunately, I had to run the container as the root user, which isn't good, and add the NET_BIND_SERVICE capability.

vcs:
  service:
    ports:
      ssh: "22"
  podSecurityContext:
    enabled: true
    fsGroup: 10001  
  containerSecurityContext:
    enabled: true
    runAsUser: 0
    runAsNonRoot: false
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    capabilities:
      drop:
        - all
      add:
        - NET_BIND_SERVICE

I read on the Internet that adding the NET_BIND_SERVICE capability should be enough, but not here. Could you tell me why?

vcs:
  service:
    ports:
      ssh: "22"
  podSecurityContext:
    enabled: true
    fsGroup: 10001
  containerSecurityContext:
    enabled: true
    runAsUser: 10001
    runAsNonRoot: true
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    capabilities:
      drop:
        - all
      add:
        - NET_BIND_SERVICE
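One possible explanation (an assumption, not confirmed in this thread): capabilities granted via the container securityContext end up in the process's permitted and bounding sets, but when the process runs as a non-root user the effective set is cleared on execve unless the executable itself carries file capabilities, so the Java process may never actually hold NET_BIND_SERVICE. A sketch of how this could be inspected inside the image, assuming libcap tools are available and the Java binary path is a placeholder:

```shell
# Sketch: check and grant file capabilities on the Java binary.
# The path below is a placeholder; libcap tools are assumed in the image.
getcap /opt/java/bin/java
setcap 'cap_net_bind_service=+ep' /opt/java/bin/java
getcap /opt/java/bin/java
```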
