Download a copy of the vignette to follow along here: a_less_simple_example.Rmd

We recommend you go through the simple example before working through this one!

This vignette walks through how this package can be used across a complete SNF subtyping pipeline.

Data set-up and pre-processing

1. Load the library and data into the R environment

Your data should be loaded into the R environment in the following format:

  • The data is in one or multiple data.frame objects
  • The data is in wide form (one row per patient)
  • Each dataframe should have one column that uniquely identifies which patient each row corresponds to

It is fine to have missing data at this stage.
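For example, a toy dataframe in the expected format (a sketch with made-up values, not package data) might look like:

```r
# Wide form: one row per patient, a unique identifier column ("patient"),
# and missing values (NA) permitted at this stage
toy_df <- data.frame(
    patient = c("p01", "p02", "p03"),
    score_a = c(2.6, NA, 2.5),
    score_b = c(1, 3, 2)
)
```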

The package comes with a few mock dataframes based on real data from the Adolescent Brain Cognitive Development study:

  • abcd_anxiety (anxiety scores from the CBCL)
  • abcd_depress (depression scores from the CBCL)
  • abcd_cort_t (cortical thicknesses)
  • abcd_cort_sa (cortical surface areas in mm^2)
  • abcd_subc_v (subcortical volumes in mm^3)
  • abcd_h_income (household income on a 1-3 scale)
  • abcd_pubertal (pubertal status on a 1-5 scale)

Here’s what the cortical thickness data looks like:

library(metasnf)

class(abcd_cort_t)
#> [1] "tbl_df"     "tbl"        "data.frame"

dim(abcd_cort_t)
#> [1] 188 152

str(abcd_cort_t[1:5, 1:5])
#> Classes 'tbl_df', 'tbl' and 'data.frame':    5 obs. of  5 variables:
#>  $ patient : chr  "NDAR_INV0567T2Y9" "NDAR_INV0GLZNC2W" "NDAR_INV0IZ157F8" "NDAR_INV0J4PYA5F" ...
#>  $ mrisdp_1: num  2.6 2.62 2.62 2.6 2.53
#>  $ mrisdp_2: num  2.49 2.85 2.29 2.67 2.76
#>  $ mrisdp_3: num  2.8 2.78 2.53 2.68 2.83
#>  $ mrisdp_4: num  2.95 2.85 2.96 2.94 2.99

abcd_cort_t[1:5, 1:5]
#>            patient mrisdp_1 mrisdp_2 mrisdp_3 mrisdp_4
#> 1 NDAR_INV0567T2Y9    2.601    2.487    2.801    2.954
#> 2 NDAR_INV0GLZNC2W    2.619    2.851    2.784    2.846
#> 3 NDAR_INV0IZ157F8    2.621    2.295    2.530    2.961
#> 4 NDAR_INV0J4PYA5F    2.599    2.670    2.676    2.938
#> 5 NDAR_INV0OYE291Q    2.526    2.761    2.829    2.986

The first column “patient” is the unique identifier (UID) for all subjects in the data.

Here’s the household income data:

dim(abcd_h_income)
#> [1] 275   2

str(abcd_h_income[1:5, ])
#> Classes 'tbl_df', 'tbl' and 'data.frame':    5 obs. of  2 variables:
#>  $ patient         : chr  "NDAR_INV0567T2Y9" "NDAR_INV0GLZNC2W" "NDAR_INV0IZ157F8" "NDAR_INV0J4PYA5F" ...
#>  $ household_income: num  3 NA 1 2 1

abcd_h_income[1:5, ]
#>            patient household_income
#> 1 NDAR_INV0567T2Y9                3
#> 2 NDAR_INV0GLZNC2W               NA
#> 3 NDAR_INV0IZ157F8                1
#> 4 NDAR_INV0J4PYA5F                2
#> 5 NDAR_INV0OYE291Q                1

Putting everything in a list will help us get quicker summaries of all the data.

abcd_data <- list(
    abcd_anxiety,
    abcd_depress,
    abcd_cort_t,
    abcd_cort_sa,
    abcd_subc_v,
    abcd_h_income,
    abcd_pubertal
)

# The number of rows in each dataframe:
lapply(abcd_data, dim)
#> [[1]]
#> [1] 275   2
#> 
#> [[2]]
#> [1] 275   2
#> 
#> [[3]]
#> [1] 188 152
#> 
#> [[4]]
#> [1] 188 152
#> 
#> [[5]]
#> [1] 174  31
#> 
#> [[6]]
#> [1] 275   2
#> 
#> [[7]]
#> [1] 275   2

# Whether or not each dataframe has missing values:
lapply(abcd_data,
    function(x) {
        any(is.na(x))
    }
)
#> [[1]]
#> [1] TRUE
#> 
#> [[2]]
#> [1] TRUE
#> 
#> [[3]]
#> [1] FALSE
#> 
#> [[4]]
#> [1] FALSE
#> 
#> [[5]]
#> [1] FALSE
#> 
#> [[6]]
#> [1] TRUE
#> 
#> [[7]]
#> [1] TRUE

Some of our dataframes have missing values, and not all of them contain the same number of participants.

Generating the data list

The data_list structure is a structured list of dataframes (like the one already created), but with additional metadata about each dataframe. It should contain only the dataframes we want to use directly as inputs for the clustering.

Out of all the data available to us, we may be working in a context where the anxiety and depression data are especially important patient outcomes, and we want to know whether subtypes built from the rest of the data still do a good job of separating out patients by their anxiety and depression scores. We’ll set aside anxiety and depression for now and use the rest of the data as inputs for our subtyping, which means loading them into the data_list.

# Note that you do not need to explicitly name every single named element
# (data = ..., name = ..., etc.)
data_list <- generate_data_list(
    list(
        data = abcd_cort_t,
        name = "cortical_thickness",
        domain = "neuroimaging",
        type = "continuous"
    ),
    list(
        data = abcd_cort_sa,
        name = "cortical_surface_area",
        domain = "neuroimaging",
        type = "continuous"
    ),
    list(
        data = abcd_subc_v,
        name = "subcortical_volume",
        domain = "neuroimaging",
        type = "continuous"
    ),
    list(
        data = abcd_h_income,
        name = "household_income",
        domain = "demographics",
        type = "continuous"
    ),
    list(
        data = abcd_pubertal,
        name = "pubertal_status",
        domain = "demographics",
        type = "continuous"
    ),
    uid = "patient"
)

This process removes any patients who did not have complete data across all provided input dataframes. If you’d like to keep track of that information, you can set the “return_missing” parameter to TRUE and receive a list containing the data_list as well as the removed patients:

generate_dl_results <- generate_data_list(
    list(
        data = abcd_cort_t,
        name = "cortical_thickness",
        domain = "neuroimaging",
        type = "continuous"
    ),
    list(
        data = abcd_cort_sa,
        name = "cortical_surface_area",
        domain = "neuroimaging",
        type = "continuous"
    ),
    list(
        data = abcd_subc_v,
        name = "subcortical_volume",
        domain = "neuroimaging",
        type = "continuous"
    ),
    list(
        data = abcd_h_income,
        name = "household_income",
        domain = "demographics",
        type = "continuous"
    ),
    list(
        data = abcd_pubertal,
        name = "pubertal_status",
        domain = "demographics",
        type = "continuous"
    ),
    uid = "patient",
    return_missing = TRUE
)

# You only need to do this if you've set `return_missing` to TRUE!
data_list <- generate_dl_results$"data_list"
removed_subjects <- generate_dl_results$"removed_subjects"

removed_subjects[1:10]
#>  [1] "NDAR_INV0GLZNC2W" "NDAR_INV0OYE291Q" "NDAR_INV0SM1JLXQ" "NDAR_INV0Z87UJDR"
#>  [5] "NDAR_INV1PK0JH8Z" "NDAR_INV1R56TIVS" "NDAR_INV1RX7QO0L" "NDAR_INV2DLP49QK"
#>  [9] "NDAR_INV2EGX5UMV" "NDAR_INV2UD95T0K"

We can get a summary of our constructed data_list with the summarize_dl function:

summarize_dl(data_list)
#>                    name       type       domain length width
#> 1    cortical_thickness continuous neuroimaging    100   152
#> 2 cortical_surface_area continuous neuroimaging    100   152
#> 3    subcortical_volume continuous neuroimaging    100    31
#> 4      household_income continuous demographics    100     2
#> 5       pubertal_status continuous demographics    100     2

Each input dataframe now has the same 100 subjects with complete data.

Generating the settings matrix

The settings_matrix stores all the information about the settings we’d like to use for each of our SNF runs. Calling the generate_settings_matrix function with a specified number of rows will automatically build a randomly populated settings_matrix.

settings_matrix <- generate_settings_matrix(
    data_list,
    nrow = 20,
    min_k = 20,
    max_k = 50,
    seed = 42
)
#> [1] "The global seed has been changed!"

settings_matrix[1:5, ]
#>   row_id alpha  k  t snf_scheme clust_alg cont_dist disc_dist ord_dist cat_dist
#> 1      1   0.5 29 20          2         1         1         1        1        1
#> 2      2   0.4 26 20          1         1         1         1        1        1
#> 3      3   0.3 44 20          2         2         1         1        1        1
#> 4      4   0.3 43 20          1         1         1         1        1        1
#> 5      5   0.5 29 20          2         2         1         1        1        1
#>   mix_dist inc_cortical_thickness inc_cortical_surface_area
#> 1        1                      1                         0
#> 2        1                      1                         1
#> 3        1                      1                         0
#> 4        1                      1                         1
#> 5        1                      1                         1
#>   inc_subcortical_volume inc_household_income inc_pubertal_status
#> 1                      1                    0                   1
#> 2                      1                    1                   1
#> 3                      0                    1                   1
#> 4                      0                    1                   1
#> 5                      1                    1                   1

The columns are:

  • row_id: Integer to keep track of which row is which
  • alpha: A hyperparameter for SNF (a variable that influences the subtyping process)
  • k: A hyperparameter for SNF
  • t: A hyperparameter for SNF
  • snf_scheme: The specific way in which input data gets collapsed into a final fused network (discussed further in the SNF schemes vignette)
  • clust_alg: Which clustering algorithm will be applied to the final fused network produced by SNF
  • *_dist: Which distance metric will be used for the different types of data (discussed further in the distance metrics vignette)
  • inc_*: Binary columns indicating whether an input dataframe is included (1) or excluded (0) from the corresponding SNF run (discussed further in the settings matrix vignette)

Without specifying any additional parameters, generate_settings_matrix randomly populates these columns and ensures that no generated rows are identical.

What’s important for now is that the matrix (technically a dataframe in the R environment) contains several rows, each outlining a different but reasonable way the raw data could be converted into patient subtypes. Further customization of the settings_matrix will let you access the broadest possible space of reasonable cluster solutions your data can produce using SNF, and ideally get you closer to a generalizable and useful solution for your context. More on settings_matrix customization can be found in the settings matrix vignette.

Setting the optional seed parameter (which will affect the seed of your entire R session) ensures that the same settings matrix is generated each time we run the code.
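To illustrate (a sketch reusing the arguments from the call above), generating the matrix twice with the same seed should reproduce it exactly:

```r
# Identical seeds should yield identical settings matrices
sm1 <- generate_settings_matrix(data_list, nrow = 20, min_k = 20, max_k = 50, seed = 42)
sm2 <- generate_settings_matrix(data_list, nrow = 20, min_k = 20, max_k = 50, seed = 42)
identical(sm1, sm2)
```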

While we end up with a random set of settings here, there is nothing wrong with manually altering the settings matrix to suit your needs. For example, if you wanted to know how much of a difference one input dataframe made, you could ensure that half of the rows included this input dataframe and the other half didn’t. You can also add random rows to an existing settings matrix using the add_settings_matrix_rows function (further discussed in the settings matrix vignette).
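As a sketch of that kind of manual tweak (assuming the inc_household_income column shown above, and that the matrix has an even number of rows), you could force the first half of the runs to include household income and the second half to exclude it:

```r
# Hypothetical manual edit: include household income in the first half of
# the SNF runs (1) and exclude it from the second half (0)
half <- nrow(settings_matrix) %/% 2
settings_matrix$"inc_household_income" <- rep(c(1, 0), each = half)
```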

Running SNF for all the rows in the settings matrix

The batch_snf function integrates the data in the data_list using each of the sets of settings contained in the settings_matrix. The resulting structure is a solutions_matrix, an extension of the settings_matrix with added columns specifying the cluster each subject was assigned to for the corresponding settings_matrix row.

solutions_matrix <- batch_snf(data_list, settings_matrix)
#> [1] "Row: 1/20 | Time remaining: 4 seconds"
#> [1] "Row: 2/20 | Time remaining: 4 seconds"
#> [1] "Row: 3/20 | Time remaining: 3 seconds"
#> [1] "Row: 4/20 | Time remaining: 3 seconds"
#> [1] "Row: 5/20 | Time remaining: 3 seconds"
#> [1] "Row: 6/20 | Time remaining: 3 seconds"
#> [1] "Row: 7/20 | Time remaining: 3 seconds"
#> [1] "Row: 8/20 | Time remaining: 2 seconds"
#> [1] "Row: 9/20 | Time remaining: 2 seconds"
#> [1] "Row: 10/20 | Time remaining: 2 seconds"
#> [1] "Row: 11/20 | Time remaining: 2 seconds"
#> [1] "Row: 12/20 | Time remaining: 2 seconds"
#> [1] "Row: 13/20 | Time remaining: 1 seconds"
#> [1] "Row: 14/20 | Time remaining: 1 seconds"
#> [1] "Row: 15/20 | Time remaining: 1 seconds"
#> [1] "Row: 16/20 | Time remaining: 1 seconds"
#> [1] "Row: 17/20 | Time remaining: 1 seconds"
#> [1] "Row: 18/20 | Time remaining: 0 seconds"
#> [1] "Row: 19/20 | Time remaining: 0 seconds"
#> [1] "Row: 20/20 | Time remaining: 0 seconds"
#> [1] "Total time taken: 4 seconds."

colnames(solutions_matrix)[1:30]
#>  [1] "row_id"                    "alpha"                    
#>  [3] "k"                         "t"                        
#>  [5] "snf_scheme"                "clust_alg"                
#>  [7] "cont_dist"                 "disc_dist"                
#>  [9] "ord_dist"                  "cat_dist"                 
#> [11] "mix_dist"                  "inc_cortical_thickness"   
#> [13] "inc_cortical_surface_area" "inc_subcortical_volume"   
#> [15] "inc_household_income"      "inc_pubertal_status"      
#> [17] "nclust"                    "subject_NDAR_INV0567T2Y9" 
#> [19] "subject_NDAR_INV0IZ157F8"  "subject_NDAR_INV0J4PYA5F" 
#> [21] "subject_NDAR_INV10OMKVLE"  "subject_NDAR_INV15FPCW4O" 
#> [23] "subject_NDAR_INV19NB4RJK"  "subject_NDAR_INV1HLGR738" 
#> [25] "subject_NDAR_INV1KR0EZFU"  "subject_NDAR_INV1L3Y9EOP" 
#> [27] "subject_NDAR_INV1TCP5GNM"  "subject_NDAR_INV1ZHRDJ6B" 
#> [29] "subject_NDAR_INV2EJ41YSZ"  "subject_NDAR_INV2PK6C85M"

It goes on like this for some time.

Just like that, the clustering is done!

You can pull the clustering results out of each row using the get_cluster_solutions function:

cluster_solutions <- get_cluster_solutions(solutions_matrix)

head(cluster_solutions)
#>                 subjectkey 1 2  3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
#> 1 subject_NDAR_INV0567T2Y9 1 2  1 2 1 1 1 5 2  2  3  1  1  2  1  2  6  1  1  1
#> 2 subject_NDAR_INV0IZ157F8 2 1 10 1 2 2 2 4 1  1  6  7  2  1  1  1  2  7  8  3
#> 3 subject_NDAR_INV0J4PYA5F 1 2  8 2 3 2 3 2 2  3  7  6  2  2  2  2  3  5  6  2
#> 4 subject_NDAR_INV10OMKVLE 1 1  3 1 4 2 2 1 1  1  8  4  2  1  1  1  5  3  5  3
#> 5 subject_NDAR_INV15FPCW4O 1 2  5 2 7 2 2 1 2  1  1  3  2  2  2  2  1  8  3  2
#> 6 subject_NDAR_INV19NB4RJK 1 2  7 2 2 2 2 3 2  4  5  7  2  2  2  2  4  7  8  3

Note: Parallel processing is available on an older release of the package (v0.2.0) and will be integrated into the latest release shortly. See the “processes” parameter in ?batch_snf.

Picking a solution

Now that we have access to 20 different clustering solutions, we’ll need to find some way to pick a favourite (or a few). In this case, plotting or running stats manually on each of the solutions might be a reasonable way to determine which ones we like the most. But when the number of solutions generated goes up into the hundreds (or thousands), we’re going to need some more automated approaches.

Below are some different tools that you can use to try to pick a solution that is the best for your purposes.

1: Examining “meta clusters”

This is the approach introduced by the original meta clustering paper. It is a good approach to use when you can’t quantitatively describe what makes one cluster solution better than another, but an expert can compare two solutions and “intuit” which is more desirable.

The idea is to cluster the clustering solutions themselves to arrive at a small number of qualitatively different solutions. From there, a user can manually pick out some representative solutions and do the evaluations themselves.

The first step is to calculate the adjusted Rand index (ARI) between each pair of cluster solutions. This metric tells us how similar the solutions are to each other, thereby allowing us to find clusters of cluster solutions.

solutions_matrix_aris <- calc_om_aris(solutions_matrix)
#> 52.63158% completed...
#> Done.

We can visualize the resulting inter-solution similarities with a heatmap. First, call get_matrix_order to get a hierarchical clustering-based ordering of the rows in the matrix of adjusted Rand indices. This order can be passed into the adjusted_rand_index_heatmap function to get a clearer view of any meta clusters.

meta_cluster_order <- get_matrix_order(solutions_matrix_aris)

# Just a vector of numbers
meta_cluster_order

ari_hm <- adjusted_rand_index_heatmap(
    solutions_matrix_aris,
    order = meta_cluster_order
)

save_heatmap(
    heatmap = ari_hm,
    path = "./adjusted_rand_index_heatmap.png",
    width = 400,
    height = 500,
    res = 100
)

The clustering solutions lie along the rows and columns of the above figure, and the cell at the intersection of two solutions shows how similar (higher ARI) those solutions are to each other. The diagonal should always be red, representing the maximum value of 1, as it shows the similarity between each clustering solution and itself. Complete-linkage, Euclidean-distance-based hierarchical clustering is applied to these solutions to obtain the row ordering. This is also the default approach used by the ComplexHeatmap package, the backbone of all heatmap functions in metaSNF.

If you see something interesting in your heatmap, you may be curious to know how that corresponds to the settings that were in your settings matrix.

sm_hm <- settings_matrix_heatmap(
    settings_matrix,
    order = meta_cluster_order
)

save_heatmap(
    heatmap = sm_hm,
    path = "./settings_matrix_heatmap_ordered.png",
    width = 400,
    height = 500,
    res = 75
)

This heatmap rescales all the columns in the settings_matrix to have a maximum value of 1. The purpose of re-ordering the settings matrix in this way is to see if any associations exist between certain settings values and pairwise cluster solution similarities.

This heatmap can show you if any particular settings levels were associated with the formation of distinct meta clusters.

2. Quality measures

Quality metrics are another useful heuristic for the goodness of a cluster solution, one that doesn’t require contextualizing results in the domain they may be used in. metaSNF provides silhouette scores, Dunn indices, and Davies-Bouldin indices. To calculate these values, we’ll need not only the cluster results but also the final fused networks (the similarity matrices produced by SNF) that the clusters came from. These similarity matrices can be collected from batch_snf using the return_similarity_matrices parameter:

batch_snf_results <- batch_snf(
    data_list,
    settings_matrix,
    return_similarity_matrices = TRUE
)
#> [1] "Row: 1/20 | Time remaining: 3 seconds"
#> [1] "Row: 2/20 | Time remaining: 4 seconds"
#> [1] "Row: 3/20 | Time remaining: 3 seconds"
#> [1] "Row: 4/20 | Time remaining: 3 seconds"
#> [1] "Row: 5/20 | Time remaining: 3 seconds"
#> [1] "Row: 6/20 | Time remaining: 3 seconds"
#> [1] "Row: 7/20 | Time remaining: 3 seconds"
#> [1] "Row: 8/20 | Time remaining: 2 seconds"
#> [1] "Row: 9/20 | Time remaining: 2 seconds"
#> [1] "Row: 10/20 | Time remaining: 2 seconds"
#> [1] "Row: 11/20 | Time remaining: 2 seconds"
#> [1] "Row: 12/20 | Time remaining: 2 seconds"
#> [1] "Row: 13/20 | Time remaining: 2 seconds"
#> [1] "Row: 14/20 | Time remaining: 1 seconds"
#> [1] "Row: 15/20 | Time remaining: 1 seconds"
#> [1] "Row: 16/20 | Time remaining: 1 seconds"
#> [1] "Row: 17/20 | Time remaining: 1 seconds"
#> [1] "Row: 18/20 | Time remaining: 0 seconds"
#> [1] "Row: 19/20 | Time remaining: 0 seconds"
#> [1] "Row: 20/20 | Time remaining: 0 seconds"
#> [1] "Total time taken: 4 seconds."

solutions_matrix <- batch_snf_results$"solutions_matrix"
similarity_matrices <- batch_snf_results$"similarity_matrices"

This time, the output of batch_snf is a list. The first element is a single solutions_matrix, like what we usually get. The second element is another list containing one final fused network (AKA similarity matrix) per SNF run. Using those two objects, we can calculate the quality metrics mentioned above:

silhouette_scores <- calculate_silhouettes(
    solutions_matrix,
    similarity_matrices
)

dunn_indices <- calculate_dunn_indices(
    solutions_matrix,
    similarity_matrices
)

db_indices <- calculate_db_indices(
    solutions_matrix,
    similarity_matrices
)

The first function is a wrapper around cluster::silhouette, while the second and third come from the clv package. clv is not a mandatory dependency of metasnf, so you’ll need to install it yourself to calculate these two metrics.
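A one-time install check (a small base-R sketch) before calling calculate_dunn_indices or calculate_db_indices might look like:

```r
# Install clv only if it is not already available
if (!requireNamespace("clv", quietly = TRUE)) {
    install.packages("clv")
}
```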

The original documentation on these functions can be helpful for interpreting and working with them:

  1. cluster::silhouette documentation
  2. clv::clv.Dunn documentation
  3. clv::clv.Davies.Bouldin documentation

3. Stability measures

metaSNF offers tools to evaluate two different measures of stability:

  1. Pairwise adjusted Rand indices (across resamplings of the clustering, on average, how similar was every pair of solutions according to the adjusted Rand index?)
  2. Fraction clustered together (what is the average fraction of times that patients who clustered together in the full results clustered together in resampled results?)

To calculate either of these, you’ll need to first generate subsamples of the data_list.

data_list_subsamples <- subsample_data_list(
    data_list,
    n_subsamples = 30, # calculate 30 subsamples
    subsample_fraction = 0.8 # for each subsample, use random 80% of patients
)

data_list_subsamples is a list that now contains 30 smaller subsamples of the original data_list.

Then the stability calculations:

pairwise_aris <- subsample_pairwise_aris(
    data_list_subsamples,
    settings_matrix
)

fraction_together <- fraction_clustered_together(
    data_list_subsamples,
    settings_matrix,
    solutions_matrix
)

Be warned: the second function in particular is extremely slow. As the number of patients and the number of solutions you’re evaluating grow, these functions can take a long time to run. Consider using them only after eliminating solutions that you are certain you don’t want to characterize further.
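One way to do that elimination (a sketch; the shortlist of row IDs here is made up, and subsetting the settings_matrix by row is an assumption about the workflow) is to run the stability calculations on only a few rows:

```r
# Restrict the stability calculations to a shortlist of candidate solutions
shortlist <- c(1, 5, 12) # hypothetical row_ids of interest
pairwise_aris_subset <- subsample_pairwise_aris(
    data_list_subsamples,
    settings_matrix[settings_matrix$"row_id" %in% shortlist, ]
)
```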

4. Evaluating separation across “target variables” of importance

Warning: This approach can very easily result in overfitting your data and producing clustering results that generalize poorly to subjects outside of your dataset. Consider setting aside some data to validate your results to avoid this issue.

If you can specify a metric or objective function that may tell you how useful a clustering solution will be for your purposes in advance, that makes the cluster selection process much less arbitrary.

There are many ways to go about doing this, but this package offers one way through the target_list structure. The target_list contains dataframes that we can evaluate our clustering results against through linear regression (continuous data), ordinal regression (ordinal data), or the Chi-squared test (categorical data).

target_list <- generate_target_list(
    list(abcd_anxiety, "anxiety", "ordinal"),
    list(abcd_depress, "depressed", "ordinal"),
    uid = "patient"
)

summarize_target_list(target_list)
#>        name    type length width
#> 1   anxiety ordinal    275     2
#> 2 depressed ordinal    275     2

The target_list is like the data_list, but without the domain attribute. At this time, each dataframe used to build a target_list must contain a single feature.

Just like when generating the initial data_list, we use the uid parameter to specify the name of the column in the provided dataframes that uniquely identifies each observation.

We will next extend our solutions_matrix with p-values from regressing the target_list features onto our generated clusters.

extended_solutions_matrix <- extend_solutions(solutions_matrix, target_list)
#> [1] "Processing row 1 of 20"
#> [1] "Processing row 2 of 20"
#> [1] "Processing row 3 of 20"
#> [1] "Processing row 4 of 20"
#> [1] "Processing row 5 of 20"
#> [1] "Processing row 6 of 20"
#> [1] "Processing row 7 of 20"
#> [1] "Processing row 8 of 20"
#> [1] "Processing row 9 of 20"
#> [1] "Processing row 10 of 20"
#> [1] "Processing row 11 of 20"
#> [1] "Processing row 12 of 20"
#> [1] "Processing row 13 of 20"
#> [1] "Processing row 14 of 20"
#> [1] "Processing row 15 of 20"
#> [1] "Processing row 16 of 20"
#> [1] "Processing row 17 of 20"
#> [1] "Processing row 18 of 20"
#> [1] "Processing row 19 of 20"
#> [1] "Processing row 20 of 20"

colnames(extended_solutions_matrix)[1:25]
#>  [1] "row_id"                    "alpha"                    
#>  [3] "k"                         "t"                        
#>  [5] "snf_scheme"                "clust_alg"                
#>  [7] "cont_dist"                 "disc_dist"                
#>  [9] "ord_dist"                  "cat_dist"                 
#> [11] "mix_dist"                  "inc_cortical_thickness"   
#> [13] "inc_cortical_surface_area" "inc_subcortical_volume"   
#> [15] "inc_household_income"      "inc_pubertal_status"      
#> [17] "nclust"                    "subject_NDAR_INV0567T2Y9" 
#> [19] "subject_NDAR_INV0IZ157F8"  "subject_NDAR_INV0J4PYA5F" 
#> [21] "subject_NDAR_INV10OMKVLE"  "subject_NDAR_INV15FPCW4O" 
#> [23] "subject_NDAR_INV19NB4RJK"  "subject_NDAR_INV1HLGR738" 
#> [25] "subject_NDAR_INV1KR0EZFU"

# Looking at the newly added columns
head(no_subs(extended_solutions_matrix))
#>   row_id alpha  k  t snf_scheme clust_alg cont_dist disc_dist ord_dist cat_dist
#> 1      1   0.5 29 20          2         1         1         1        1        1
#> 2      2   0.4 26 20          1         1         1         1        1        1
#> 3      3   0.3 44 20          2         2         1         1        1        1
#> 4      4   0.3 43 20          1         1         1         1        1        1
#> 5      5   0.5 29 20          2         2         1         1        1        1
#> 6      6   0.4 26 20          2         1         1         1        1        1
#>   mix_dist inc_cortical_thickness inc_cortical_surface_area
#> 1        1                      1                         0
#> 2        1                      1                         1
#> 3        1                      1                         0
#> 4        1                      1                         1
#> 5        1                      1                         1
#> 6        1                      1                         1
#>   inc_subcortical_volume inc_household_income inc_pubertal_status nclust
#> 1                      1                    0                   1      2
#> 2                      1                    1                   1      2
#> 3                      0                    1                   1     10
#> 4                      0                    1                   1      2
#> 5                      1                    1                   1      8
#> 6                      1                    1                   1      2
#>   cbcl_anxiety_r_pval cbcl_depress_r_pval   min_pval mean_pval  max_pval
#> 1   0.758497782121108   0.253657581105355 0.25365758 0.5060777 0.7584978
#> 2   0.436382083446635  0.0851033924288317 0.08510339 0.2607427 0.4363821
#> 3   0.590342125663898   0.331219583636934 0.33121958 0.4607809 0.5903421
#> 4   0.672455644238922   0.863299444565211 0.67245564 0.7678775 0.8632994
#> 5   0.224206099694958   0.528033268858081 0.22420610 0.3761197 0.5280333
#> 6   0.677526005779245   0.127648264212471 0.12764826 0.4025871 0.6775260

If you just want the p-values:

target_pvals <- get_pvals(extended_solutions_matrix)

head(target_pvals)
#>   row_id cbcl_anxiety_r_pval cbcl_depress_r_pval   min_pval mean_pval  max_pval
#> 1      1           0.7584978          0.25365758 0.25365758 0.5060777 0.7584978
#> 2      2           0.4363821          0.08510339 0.08510339 0.2607427 0.4363821
#> 3      3           0.5903421          0.33121958 0.33121958 0.4607809 0.5903421
#> 4      4           0.6724556          0.86329944 0.67245564 0.7678775 0.8632994
#> 5      5           0.2242061          0.52803327 0.22420610 0.3761197 0.5280333
#> 6      6           0.6775260          0.12764826 0.12764826 0.4025871 0.6775260

There is a heatmap for visualizing this too:

pval_hm <- pval_heatmap(target_pvals, order = meta_cluster_order)

save_heatmap(
    heatmap = pval_hm,
    path = "./pval_heatmap_ordered.png",
    width = 400,
    height = 500,
    res = 100
)

These p-values hold no real meaning in the traditional hypothesis-testing sense, but they are reasonable proxies of the magnitude of the effect size / separation of the clusters across the variables in question. Here, they are just a tool to find clustering solutions that are well-separated according to the outcome measures you’ve specified. Finding a cluster solution like this is similar to a supervised learning approach, but where the optimization method is just random sampling. The risk of overfitting your data with this approach is considerable, so make sure you have some rigorous external validation before reporting your findings.
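For instance, a quick way to shortlist by separation (a sketch, assuming the extended_solutions_matrix and the mean_pval column shown in the output above) is to pick the row whose targets are best separated on average:

```r
# Pick the solution with the smallest mean p-value across targets
best_idx <- which.min(extended_solutions_matrix$"mean_pval")
extended_solutions_matrix[best_idx, "row_id"]
```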

We recommend using label propagation (provided by the SNFtool package in the groupPredict function) for validation: take the top clustering solutions found in some training data, assign predicted clusters to some held out test subjects, and then characterize those test subjects to see how well the clustering solution seemed to have worked.

Validating results with label propagation

Here’s a quick step through of the complete procedure, from the beginning, with label propagation to validate our findings.

The metasnf package comes equipped with a function to do the training/testing split for you :)

# All the subjects present in all dataframes with no NAs
all_subjects <- data_list[[1]]$"data"$"subjectkey"

# Remove the "subject_" prefix to allow merges with the original data
all_subjects <- gsub("subject_", "", all_subjects)

# Dataframe assigning 80% of subjects to train and 20% to test
assigned_splits <- train_test_assign(train_frac = 0.8, subjects = all_subjects)

# Pulling the training and testing subjects specifically
train_subs <- assigned_splits$"train"
test_subs <- assigned_splits$"test"

# Partition a training set
train_abcd_cort_t <- abcd_cort_t[abcd_cort_t$"patient" %in% train_subs, ]
train_abcd_cort_sa <- abcd_cort_sa[abcd_cort_sa$"patient" %in% train_subs, ]
train_abcd_subc_v <- abcd_subc_v[abcd_subc_v$"patient" %in% train_subs, ]
train_abcd_h_income <- abcd_h_income[abcd_h_income$"patient" %in% train_subs, ]
train_abcd_pubertal <- abcd_pubertal[abcd_pubertal$"patient" %in% train_subs, ]
train_abcd_anxiety <- abcd_anxiety[abcd_anxiety$"patient" %in% train_subs, ]
train_abcd_depress <- abcd_depress[abcd_depress$"patient" %in% train_subs, ]

# Partition a test set
test_abcd_cort_t <- abcd_cort_t[abcd_cort_t$"patient" %in% test_subs, ]
test_abcd_cort_sa <- abcd_cort_sa[abcd_cort_sa$"patient" %in% test_subs, ]
test_abcd_subc_v <- abcd_subc_v[abcd_subc_v$"patient" %in% test_subs, ]
test_abcd_h_income <- abcd_h_income[abcd_h_income$"patient" %in% test_subs, ]
test_abcd_pubertal <- abcd_pubertal[abcd_pubertal$"patient" %in% test_subs, ]
test_abcd_anxiety <- abcd_anxiety[abcd_anxiety$"patient" %in% test_subs, ]
test_abcd_depress <- abcd_depress[abcd_depress$"patient" %in% test_subs, ]

# A data list with just training subjects
train_data_list <- generate_data_list(
    list(train_abcd_cort_t, "cortical_thickness", "neuroimaging", "continuous"),
    list(train_abcd_cort_sa, "cortical_sa", "neuroimaging", "continuous"),
    list(train_abcd_subc_v, "subcortical_volume", "neuroimaging", "continuous"),
    list(train_abcd_h_income, "household_income", "demographics", "continuous"),
    list(train_abcd_pubertal, "pubertal_status", "demographics", "continuous"),
    uid = "patient"
)

# A data list with training and testing subjects
full_data_list <- generate_data_list(
    list(abcd_cort_t, "cortical_thickness", "neuroimaging", "continuous"),
    list(abcd_cort_sa, "cortical_sa", "neuroimaging", "continuous"),
    list(abcd_subc_v, "subcortical_volume", "neuroimaging", "continuous"),
    list(abcd_h_income, "household_income", "demographics", "continuous"),
    list(abcd_pubertal, "pubertal_status", "demographics", "continuous"),
    uid = "patient"
)

# Construct the target list
train_target_list <- generate_target_list(
    list(train_abcd_anxiety, "anxiety", "ordinal"),
    list(train_abcd_depress, "depressed", "ordinal"),
    uid = "patient"
)

# Find a clustering solution in your training data
settings_matrix <- generate_settings_matrix(
    train_data_list,
    nrow = 5,
    seed = 42,
    min_k = 10,
    max_k = 30
)
#> [1] "The global seed has been changed!"

train_solutions_matrix <- batch_snf(
    train_data_list,
    settings_matrix
)
#> [1] "Row: 1/5 | Time remaining: 0 seconds"
#> [1] "Row: 2/5 | Time remaining: 0 seconds"
#> [1] "Row: 3/5 | Time remaining: 0 seconds"
#> [1] "Row: 4/5 | Time remaining: 0 seconds"
#> [1] "Row: 5/5 | Time remaining: 0 seconds"
#> [1] "Total time taken: 1 seconds."

extended_solutions_matrix <- extend_solutions(
    train_solutions_matrix,
    train_target_list
)
#> [1] "Processing row 1 of 5"
#> [1] "Processing row 2 of 5"
#> [1] "Processing row 3 of 5"
#> [1] "Processing row 4 of 5"
#> [1] "Processing row 5 of 5"

extended_solutions_matrix |> colnames()
#>   [1] "row_id"                   "alpha"                   
#>   [3] "k"                        "t"                       
#>   [5] "snf_scheme"               "clust_alg"               
#>   [7] "cont_dist"                "disc_dist"               
#>   [9] "ord_dist"                 "cat_dist"                
#>  [11] "mix_dist"                 "inc_cortical_thickness"  
#>  [13] "inc_cortical_sa"          "inc_subcortical_volume"  
#>  [15] "inc_household_income"     "inc_pubertal_status"     
#>  [17] "nclust"                   "subject_NDAR_INV0567T2Y9"
#>  [19] "subject_NDAR_INV0IZ157F8" "subject_NDAR_INV0J4PYA5F"
#>  [21] "subject_NDAR_INV10OMKVLE" "subject_NDAR_INV15FPCW4O"
#>  [23] "subject_NDAR_INV19NB4RJK" "subject_NDAR_INV1HLGR738"
#>  [25] "subject_NDAR_INV1KR0EZFU" "subject_NDAR_INV1L3Y9EOP"
#>  [27] "subject_NDAR_INV1ZHRDJ6B" "subject_NDAR_INV2PK6C85M"
#>  [29] "subject_NDAR_INV2XO1PHCT" "subject_NDAR_INV3CU5Y9BZ"
#>  [31] "subject_NDAR_INV3MBSY16V" "subject_NDAR_INV3N0QFDLO"
#>  [33] "subject_NDAR_INV3Y027GVK" "subject_NDAR_INV40Z7GVYJ"
#>  [35] "subject_NDAR_INV49UPOXHJ" "subject_NDAR_INV4G7032E6"
#>  [37] "subject_NDAR_INV4N5XGZE8" "subject_NDAR_INV4OWRB536"
#>  [39] "subject_NDAR_INV4X80QUZY" "subject_NDAR_INV50JL2RXP"
#>  [41] "subject_NDAR_INV5BRNFYQC" "subject_NDAR_INV6RVH5KZS"
#>  [43] "subject_NDAR_INV6WBQCY2I" "subject_NDAR_INV752EFAQ0"
#>  [45] "subject_NDAR_INV7QO93CJH" "subject_NDAR_INV84G9ONXP"
#>  [47] "subject_NDAR_INV8EHP6W1U" "subject_NDAR_INV8MJFUKIW"
#>  [49] "subject_NDAR_INV8WGK6ECZ" "subject_NDAR_INV915R2Z67"
#>  [51] "subject_NDAR_INV94AKNGMJ" "subject_NDAR_INV9GAZYV8Q"
#>  [53] "subject_NDAR_INV9IREH05N" "subject_NDAR_INV9KC3GVMU"
#>  [55] "subject_NDAR_INV9NFKZ82A" "subject_NDAR_INV9S1BMDE5"
#>  [57] "subject_NDAR_INVA68OU0YK" "subject_NDAR_INVADCYZ38B"
#>  [59] "subject_NDAR_INVAYM8WTIN" "subject_NDAR_INVB4MU1PDA"
#>  [61] "subject_NDAR_INVB8O4LAQV" "subject_NDAR_INVBAP80W1R"
#>  [63] "subject_NDAR_INVBTRW1NUK" "subject_NDAR_INVCI0KZWMN"
#>  [65] "subject_NDAR_INVCIXE0496" "subject_NDAR_INVCYBSZD0N"
#>  [67] "subject_NDAR_INVD37Z9N61" "subject_NDAR_INVD61ZUBC7"
#>  [69] "subject_NDAR_INVEO3JM5CB" "subject_NDAR_INVEQ1OBNSM"
#>  [71] "subject_NDAR_INVEVBDLSTM" "subject_NDAR_INVEY0FMJDI"
#>  [73] "subject_NDAR_INVFLU0YINE" "subject_NDAR_INVFNZPWMSI"
#>  [75] "subject_NDAR_INVFY76P8AJ" "subject_NDAR_INVG3T0PXW6"
#>  [77] "subject_NDAR_INVG8BRLSO9" "subject_NDAR_INVH1KV76BQ"
#>  [79] "subject_NDAR_INVH3P4T8C2" "subject_NDAR_INVH4FZC2XB"
#>  [81] "subject_NDAR_INVH8QN7WLT" "subject_NDAR_INVHERPS382"
#>  [83] "subject_NDAR_INVHM3XS68O" "subject_NDAR_INVI1RKT9MX"
#>  [85] "subject_NDAR_INVIZFV08RU" "subject_NDAR_INVJ4THFRD2"
#>  [87] "subject_NDAR_INVJ574KX6A" "subject_NDAR_INVJEV61XIU"
#>  [89] "subject_NDAR_INVJR3S271G" "subject_NDAR_INVK3FL5CP2"
#>  [91] "subject_NDAR_INVKB0CYO1H" "subject_NDAR_INVKHWS26UN"
#>  [93] "subject_NDAR_INVKTUMPLXY" "subject_NDAR_INVL4NIUZYF"
#>  [95] "subject_NDAR_INVLF3TNDUZ" "subject_NDAR_INVLI58ERQC"
#>  [97] "subject_NDAR_INVLIQRM8KC" "subject_NDAR_INVLXDP1SWT"
#>  [99] "subject_NDAR_INVMBOZVEA4" "subject_NDAR_INVMIWOSHJN"
#> [101] "cbcl_anxiety_r_pval"      "cbcl_depress_r_pval"     
#> [103] "min_pval"                 "mean_pval"               
#> [105] "max_pval"

# The fifth row had the lowest minimum p-value across our outcomes
lowest_min_pval <- min(extended_solutions_matrix$"min_pval")
which(extended_solutions_matrix$"min_pval" == lowest_min_pval)
#> [1] 5
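Equivalently, base R's `which.min()` collapses the `min()` + `which()` pair into one step (the vector below is a toy stand-in for the `min_pval` column):

```r
# which.min() returns the index of the first smallest element,
# so it can replace the min() + which() combination in one call
min_pvals <- c(0.41, 0.07, 0.22, 0.13, 0.03) # toy values for illustration
best_row_index <- which.min(min_pvals)
best_row_index
#> [1] 5
```

Note that `which.min()` ignores `NA`s and returns only the first index in the event of a tie, so it behaves slightly differently from `which(x == min(x))` when ties are possible.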

# Keep track of your top solution
top_row <- extended_solutions_matrix[5, ]

# Use the solutions matrix from the training subjects and the data list from
# the training and testing subjects to propagate labels to the test subjects
propagated_labels <- lp_solutions_matrix(top_row, full_data_list)
#> [1] "Processing row 1 of 1..."

head(propagated_labels)
#>                 subjectkey group 5
#> 1 subject_NDAR_INV0567T2Y9 train 1
#> 2 subject_NDAR_INV0IZ157F8 train 2
#> 3 subject_NDAR_INV0J4PYA5F train 2
#> 4 subject_NDAR_INV10OMKVLE train 2
#> 5 subject_NDAR_INV15FPCW4O train 2
#> 6 subject_NDAR_INV19NB4RJK train 2
tail(propagated_labels)
#>                   subjectkey group 5
#> 95  subject_NDAR_INVGDBYXWV4  test 2
#> 96  subject_NDAR_INVHEUWA52I  test 2
#> 97  subject_NDAR_INVK9ULDQA2  test 1
#> 98  subject_NDAR_INVKYH529RD  test 2
#> 99  subject_NDAR_INVL045Z1TY  test 2
#> 100 subject_NDAR_INVLDQH8ATK  test 2

You could, if you wanted, propagate all of your clustering solutions to the test set, but doing so would mean reusing your test set and losing the protection against overfitting that this procedure provides.

propagated_labels_all <- lp_solutions_matrix(
    extended_solutions_matrix,
    full_data_list
)
#> [1] "Processing row 1 of 5..."
#> [1] "Processing row 2 of 5..."
#> [1] "Processing row 3 of 5..."
#> [1] "Processing row 4 of 5..."
#> [1] "Processing row 5 of 5..."

head(propagated_labels_all)
#>                 subjectkey group 1 2  3 4 5
#> 1 subject_NDAR_INV0567T2Y9 train 5 2 10 2 1
#> 2 subject_NDAR_INV0IZ157F8 train 4 1 10 1 2
#> 3 subject_NDAR_INV0J4PYA5F train 6 2  8 2 2
#> 4 subject_NDAR_INV10OMKVLE train 1 1  3 2 2
#> 5 subject_NDAR_INV15FPCW4O train 1 2  7 2 2
#> 6 subject_NDAR_INV19NB4RJK train 3 2 10 2 2
tail(propagated_labels_all)
#>                   subjectkey group 1 2 3 4 5
#> 95  subject_NDAR_INVGDBYXWV4  test 1 2 7 2 2
#> 96  subject_NDAR_INVHEUWA52I  test 4 2 6 2 2
#> 97  subject_NDAR_INVK9ULDQA2  test 3 2 6 2 1
#> 98  subject_NDAR_INVKYH529RD  test 6 2 2 2 2
#> 99  subject_NDAR_INVL045Z1TY  test 1 2 1 2 2
#> 100 subject_NDAR_INVLDQH8ATK  test 1 2 1 2 2
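A quick way to check whether a solution's cluster proportions look similar across the split is to cross-tabulate the propagated labels by group. The sketch below uses a small mock data frame mirroring the `propagated_labels_all` structure shown above (the values are illustrative, not taken from the real output):

```r
# Mock of the propagated-labels structure shown above (illustrative values);
# check.names = FALSE keeps the numeric solution column name "5" as-is
mock_labels <- data.frame(
    subjectkey = c("s1", "s2", "s3", "s4"),
    group = c("train", "train", "test", "test"),
    "5" = c(1, 2, 2, 2),
    check.names = FALSE
)

# Cross-tabulate cluster assignment by train/test membership
table(mock_labels$"group", mock_labels$"5")
```

On the real output, `table(propagated_labels_all$"group", propagated_labels_all$"5")` would give the analogous train-vs-test breakdown for the fifth solution.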

That’s all!

If you have any questions, comments, suggestions, or bug reports, feel free to post an issue at https://github.com/BRANCHlab/metasnf.

References

Caruana, Rich, Mohamed Elhawary, Nam Nguyen, and Casey Smith. 2006. “Meta Clustering.” In Sixth International Conference on Data Mining (ICDM’06), 107–18. https://doi.org/10.1109/ICDM.2006.103.

Wang, Bo, Aziz M. Mezlini, Feyyaz Demir, Marc Fiume, Zhuowen Tu, Michael Brudno, Benjamin Haibe-Kains, and Anna Goldenberg. 2014. “Similarity Network Fusion for Aggregating Data Types on a Genomic Scale.” Nature Methods 11 (3): 333–37. https://doi.org/10.1038/nmeth.2810.