How to extract KINC network with container? #190

Open
feltus opened this issue Apr 12, 2023 · 0 comments
feltus commented Apr 12, 2023

Hi all.

I am unable to install KINC, so I need to use the container. I can get KINC to output the EMX/CCM/CMX files, but I cannot perform the network extraction step. Below is what I did; when the container launches MPI processes, it fails trying to write to a read-only file system. Please help, as my students are understandably frustrated and I would like to leave them with KINC rather than WGCNA. Alex

#### Run the KINC-nf pipeline with Singularity
# nextflow version 22.10.7.5853
module add openjdk/11.0.15_10-gcc/9.5.0
module add openmpi/4.1.4-gcc/9.5.0-ucx
nextflow run systemsgenetics/KINC-nf -with-singularity
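
For reference, I assume the equivalent Singularity settings in nextflow.config would look roughly like the following (this is the standard Nextflow singularity scope; I have not checked what KINC-nf actually ships with, so treat it as a sketch):

singularity {
    enabled    = true
    autoMounts = true
}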

#### Pull the KINC Docker image and convert it to a Singularity SIF
singularity pull kinc-3-4-2.sif docker://systemsgenetics/kinc:3.4.2-cpu

singularity run -B ${PWD} ./kinc-3-4-2.sif kinc run extract \
  --emx "500.merged-gtex-kirp-kich-kirp-gem.log2.quantile.emx" \
  --ccm "500.merged-gtex-kirp-kich-kirp-gem.log2.quantile.ccm" \
  --cmx "500.merged-gtex-kirp-kich-kirp-gem.log2.quantile.cmx" \
  --format "tidy" \
  --output "500.merged-gtex-kirp-kich-kirp-gem.log2.quatile.800000-gcn.txt" \
  --mincorr 0.800000 \
  --maxcorr 1

[WARN  tini (997058)] Tini is not running as PID 1 and isn't registered as a child subreaper.
Zombie processes will not be re-parented to Tini, so zombie reaping won't work.
To fix the problem, use the -s option or set the environment variable TINI_SUBREAPER to register Tini as a child subreaper, or run Tini as PID 1.
--------------------------------------------------------------------------
A call to mkdir was unable to create the desired directory:

  Directory: /local_scratch
  Error:     Read-only file system

Please check to ensure you have adequate permissions to perform
the desired operation.
--------------------------------------------------------------------------
[node0077.palmetto.clemson.edu:997090] [[30396,0],0] ORTE_ERROR_LOG: Error in file util/session_dir.c at line 106
[node0077.palmetto.clemson.edu:997090] [[30396,0],0] ORTE_ERROR_LOG: Error in file util/session_dir.c at line 382
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_session_dir failed
  --> Returned value Error (-1) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
[node0077.palmetto.clemson.edu:997083] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ess_singleton_module.c at line 582
[node0077.palmetto.clemson.edu:997083] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ess_singleton_module.c at line 166
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_ess_init failed
  --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[node0077.palmetto.clemson.edu:997083] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
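
My reading of the error is that Open MPI inside the container tries to create its session directory under /local_scratch, which is read-only in this Singularity environment. A sketch of the workaround I would expect, binding a writable host directory over /local_scratch (the /scratch/$USER path is only a placeholder for any writable location on the node):

mkdir -p /scratch/$USER/kinc-tmp
singularity run -B ${PWD} -B /scratch/$USER/kinc-tmp:/local_scratch ./kinc-3-4-2.sif kinc run extract \
  --emx "500.merged-gtex-kirp-kich-kirp-gem.log2.quantile.emx" \
  ... (remaining --ccm/--cmx/--format/--output/--mincorr/--maxcorr arguments as above)

I have not been able to verify this, so I would appreciate guidance on whether it is the intended way to run the extract step from the container.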