### Determining Epoch Selection Parameters

Although we were not responsible for data collection, our analysis required a procedure for epoch selection, i.e., sampling specific time intervals of the data. Given the microscopic scale at which brain activity originates, the intervals needed to be short enough to resolve relatively small fluctuations in amplitude. Epoch lengths and counts in current resting-state connectivity studies range from one second to a few minutes and from one epoch to over 100 epochs (van Diessen et al., 2015). Furthermore, the stability of the computed connectivity measures depends strongly on epoch length: longer epochs may yield lower connectivity values because of the asymmetrical distribution of phase differences, while shorter epochs, if sampled sequentially, can be useful for studying dynamic properties (van Diessen et al., 2015). Since our focus was on causal relationships between pairs of channel recordings, we sampled random epochs rather than consecutive epochs, which would be better suited to studying time-dependent patterns. Following the precedent set by Dr. Ching’s research, we focused on epochs between 20 and 60 seconds and generated separate networks for each patient with 20-, 40-, and 60-second epochs. We then determined the optimal epoch length and number through trial and error, assessing the validity of the resulting networks. For simplicity, we chose the minimum (20 seconds) and maximum (60 seconds) of this range as the two epoch-length variables for comparing the resulting patient graphs.
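The random epoch sampling described above can be sketched as follows. This is an illustrative Python/NumPy sketch, not the original analysis code; the function name, sampling rate, and one-dimensional signal layout are all assumptions.

```python
import numpy as np

# Sketch of random epoch selection. `signal` is a 1-D array of samples
# from one channel, `fs` its sampling rate in Hz. Start points are drawn
# uniformly at random, so epochs may overlap (the text does not specify).
def sample_epochs(signal, fs, epoch_len_s, n_epochs, seed=None):
    rng = np.random.default_rng(seed)
    epoch_samples = int(epoch_len_s * fs)
    max_start = len(signal) - epoch_samples
    starts = rng.integers(0, max_start + 1, size=n_epochs)
    return np.stack([signal[s:s + epoch_samples] for s in starts])

# e.g., 30 twenty-second epochs from a 10-minute recording at 256 Hz:
epochs = sample_epochs(np.random.randn(600 * 256), fs=256,
                       epoch_len_s=20, n_epochs=30, seed=0)
```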

### Connectivity Measure Selection Process

In this project, the functional connectivity between interacting brain regions was quantified using cross-correlation as the connectivity measure between two channel signals. Functional connectivity, which relates two signals by their statistical interdependency, is distinct from effective connectivity, which measures the causal relationship between two regions through their directed information flow (Blinowska et al., 2016). Cross-correlation estimates the degree to which two time series, $f$ and $g$, are linearly correlated; its normalized value at lag $\tau$ is calculated using the following equation:

$$\rho_{fg}[\tau] = \frac{\sum_{n} f[n]\, g[n+\tau]}{\sqrt{\left(\sum_{n} f[n]^2\right)\left(\sum_{n} g[n]^2\right)}}$$

Using Matlab’s built-in xcorr function, the normalized cross-correlation was computed over discrete time intervals for each pair of channels, and the maximum correlation value across all possible time lags was extracted. Since the polarity of the correlation does not affect the perceived similarity between two signals, the absolute value of each normalized correlation was taken. As such, the cross-correlation value computed for each pair of signals ranged from 0 to 1.
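A minimal sketch of this computation, written here in Python/NumPy rather than MATLAB (the function name `max_abs_xcorr` is illustrative), mirroring xcorr’s coefficient normalization:

```python
import numpy as np

def max_abs_xcorr(f, g):
    """Maximum absolute normalized cross-correlation over all lags.

    Mirrors MATLAB's xcorr(f, g, 'coeff'): the raw cross-correlation is
    divided by sqrt(sum(f^2) * sum(g^2)), so the result lies in [0, 1].
    """
    f = np.asarray(f, dtype=float)
    g = np.asarray(g, dtype=float)
    full = np.correlate(f, g, mode="full")   # one value per possible lag
    denom = np.sqrt(np.sum(f ** 2) * np.sum(g ** 2))
    return float(np.max(np.abs(full)) / denom)
```

Identical signals yield a value of 1 at zero lag, and the absolute value keeps anti-correlated pairs from being scored as dissimilar.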

### Graph Construction

After determining the data selection process, the next phase was to generate graphs representing the network interactions of the brain. Given that the normalized cross-correlation between two signals ranges from -1 to 1, the existence of an edge was determined by comparing the cross-correlation value between two channels to a chosen threshold. Although the cross-correlation between two signals can be negative, our focus was on quantifying the causal relationship between two signals regardless of polarity.

For each pair of channels, a cross-correlation value was computed for each of the corresponding epochs, so the number of cross-correlation values per channel pair equaled the total number of epochs sampled. To compare these values to a single threshold, we assessed the following functions for consolidating the per-epoch cross-correlation values: the maximum, the mean, and a percentile. Since the maximum of each set of cross-correlation values tended to be 1, it did not provide a strict enough bound to represent a strong correlation between two channels. We therefore used the mean and percentile of the per-epoch cross-correlation values; both provide stability by removing the effect of strong outliers on edge determination. For the percentile metric, we chose the cross-correlation value that fell at the 75^{th} percentile.

After obtaining a value to represent the connectivity between two channels, we selected threshold values for determining the existence of an edge. A total of 12 graphs was generated for every subject in both the control and test groups: one for each combination of threshold value (0.4, 0.6, and 0.8), consolidation metric (mean and 75^{th} percentile), and sampling parameterization (30 20-second epochs and 10 60-second epochs).
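The consolidation-and-threshold step can be sketched as follows; the function name and the array layout (per-epoch absolute correlations stacked along the last axis) are illustrative assumptions, not the original code:

```python
import numpy as np

def build_adjacency(epoch_corrs, threshold, metric="percentile", q=75):
    """Consolidate per-epoch |cross-correlation| values and threshold them.

    epoch_corrs: array of shape (n_channels, n_channels, n_epochs), one
    absolute normalized cross-correlation per channel pair per epoch.
    """
    if metric == "mean":
        consolidated = epoch_corrs.mean(axis=-1)
    else:  # value at the q-th percentile (75th in the text)
        consolidated = np.percentile(epoch_corrs, q, axis=-1)
    adj = (consolidated >= threshold).astype(int)
    np.fill_diagonal(adj, 0)  # no self-loops
    return adj
```

Note how the two metrics can disagree near the threshold: a pair correlated strongly in most epochs but weakly in a few can pass the 75^{th}-percentile test while failing the mean test.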
The resulting graphs are included in section A1 of the Appendix.

### Global Graph Construction

After generating network graphs for each subject, the next phase was a comparative analysis between the control and test groups. To characterize the network of a healthy subject, we sought a “global” graph generalizing the network interactions for each of the control and test data sets. Comparing the graphs generated by the mean and 75^{th} percentile cross-correlation values, we found that the graphs using the 75^{th} percentile retained connectivity at higher thresholds. Because these graphs contained more network information, we simplified our approach by using only the 75^{th} percentile in global graph construction. First, two sets of graphs were created for the 10 control subjects, using sampling parameters of 30 20-second epochs and 10 60-second epochs, each across threshold values of 0.4, 0.6, and 0.8 with the 75^{th} percentile cross-correlation value. These graphs are included in section A1 of the Appendix. The control graphs generated from the 20-second epochs exhibited higher global connectivity, while the graphs generated from the 60-second epochs featured islands of node clusters. Since network connectivity appeared more consistent with the 20-second epochs, we streamlined our approach by sampling the data set using only 30 20-second epochs when generating the global graphs.

A global map was created for each of the 0.4, 0.6, and 0.8 thresholds, holding the epoch length (20 seconds), the number of epochs (30), and the number of trials (10) constant. A trial is defined as a single graph generated through random selection of the specified number of epochs, such that the epochs selected differ between trials. Trials were run on an individual-patient basis to generate a single “stable” graph for each patient; the global graph was then generated from these stable graphs, one per patient.

To determine the relationship between the number of trials and the average number of connections per control subject, we ran 2, 5, 10, and 20 trials for each control subject. To assess the stability of the results, we calculated the mean and standard error of the average number of connections across the control set for each trial count tested. The average number of connections per node, along with its standard error, was also calculated for Control Subject 1 at each trial count. Tables 2 and 3 contain these results below. Because the mean number of connections and the standard error across the control subjects were relatively stable across all trial counts, we concluded that the number of trials did not have a significant impact on the stability of the number of connections per patient. Since the duration and number of epochs were kept constant for each trial run, this consistency across trial counts indicates that the epoch selection parameters deliver stable results.

Moving forward, we generated the global graphs using 10 trials per patient, with the 75^{th} percentile as the cross-correlation metric. If an edge appeared in 80% of a patient’s trials, it was counted in that patient’s final graph; then, if an edge appeared in 80% of the patients’ final graphs, it was counted in the global graph.
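The two-level 80% consensus rule described above can be sketched as follows, assuming binary adjacency matrices; the function names are illustrative:

```python
import numpy as np

def consensus(adjacencies, frac=0.8):
    """Keep an edge iff it appears in at least `frac` of the binary matrices."""
    stack = np.asarray(adjacencies, dtype=float)   # (n_graphs, n, n)
    return (stack.mean(axis=0) >= frac).astype(int)

def global_graph(trials_per_patient, frac=0.8):
    # Level 1: each patient's stable graph keeps edges in >= 80% of trials.
    stable = [consensus(trials, frac) for trials in trials_per_patient]
    # Level 2: the global graph keeps edges in >= 80% of stable graphs.
    return consensus(stable, frac)
```

With 10 trials per patient, an edge must therefore survive in at least 8 trials to enter a patient’s stable graph, and in at least 8 of the 10 stable graphs to enter the global graph.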
The global control graph is expected to be better connected than the graphs of the test patients, while its network metrics characterize healthy brain interactions. Global graphs were generated for both the mean and 75^{th} percentile epoch consolidation metrics for the control and test sets; these graphs are included in section A2 of the Appendix. For both the 0.6 and 0.8 thresholds in both data sets, the mean graphs had no edges, meaning all nodes appeared unconnected. The 75^{th} percentile graphs, in comparison, retained edges at the 0.6 threshold but were completely unconnected at the 0.8 threshold. Because the 75^{th} percentile graphs provided more network information, we used only the 75^{th} percentile as the edge metric for the rest of our analysis.

### Head-Map Construction

The network graphs generated enabled visualization of the relative clustering and subnetworks within each patient’s individual graph. To study the spatial interactions within the brain, we mapped the nodes of the global graphs to their corresponding electrode positions on a head-map using the standard 10-20 configuration defined in the Data section. Because a bipolar montage was used, the location of a node was determined by the first electrode of each pairing, the second electrode serving as the reference. The head-maps were created in Java using the StdDraw library. For a given graph, the adjacency matrix, hub list, and island list were passed into the program. A visualization of the head with labeled nodes, each placed at the first reported electrode location of its bipolar pairing, was constructed first. Then, each edge in the adjacency matrix was drawn as a red line between its two nodes. Lastly, if a node was a hub, it was colored green; if it was an island, it was colored yellow; otherwise, it was colored blue.
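The node-coloring rule can be sketched as follows; this is a Python sketch of the logic only (the actual drawing was done in Java with StdDraw), and the function name is illustrative:

```python
def node_colors(nodes, hubs, islands):
    """Hub -> green, island -> yellow, otherwise blue (hub wins if both)."""
    hubs, islands = set(hubs), set(islands)
    return {n: "green" if n in hubs
               else "yellow" if n in islands
               else "blue"
            for n in nodes}
```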