Network Analysis

Effectiveness of varying thresholds to explore network dynamics

As the threshold increased, graphs had fewer connections involving a smaller fraction of nodes, so the average number of connections per node became very low. As a result, a greater portion of the connected nodes were assigned as hubs. Consequently, hubs identified in these sparser graphs are not as significant as hubs identified in more densely connected graphs. Across all subjects, hubs tended to lie in the frontal-center or frontal-right regions. At the 0.6 threshold, the range in the number of hubs per graph increased as a result of the smaller average number of connections per node. This also indicates that the relative significance of a node, including whether or not it is a hub, depends on the connectivity of the graph as a whole. The 0.8 threshold highlighted the greatest inconsistencies in the concentration of connections, especially within the test set. The loss of connectivity in the test subjects was apparent even at the lowest threshold value of 0.4, with islands appearing in the test set despite both sets having a high number of random connections overall.
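As a minimal sketch of this thresholding step, the snippet below binarizes a connectivity matrix at a chosen threshold and labels hubs and islands. The hub rule used here (degree at least one standard deviation above the mean degree of connected nodes) is an illustrative assumption, not necessarily the rule used in this project.

```python
# Sketch: threshold a (channels x channels) connectivity matrix into a binary
# undirected graph, then flag islands (degree 0) and hubs (unusually high degree).
import numpy as np
import networkx as nx

def build_graph(conn, threshold):
    """Binarize a symmetric connectivity matrix at the given threshold."""
    adj = (np.abs(conn) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)                      # no self-connections
    return nx.from_numpy_array(adj)

def hubs_and_islands(G):
    islands = [n for n, d in G.degree() if d == 0]
    connected = np.array([d for _, d in G.degree() if d > 0])
    if connected.size == 0:
        return [], islands
    cutoff = connected.mean() + connected.std()   # assumed hub criterion
    hubs = [n for n, d in G.degree() if d >= cutoff and d > 0]
    return hubs, islands

# Example with a synthetic 19-electrode correlation matrix at threshold 0.6
rng = np.random.default_rng(0)
signals = rng.standard_normal((19, 1000))
G = build_graph(np.corrcoef(signals), threshold=0.6)
hubs, islands = hubs_and_islands(G)
print(len(hubs), "hubs,", len(islands), "islands")
```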

 

The trend of the standard error with respect to increasing threshold value indicated how effective each network metric was in characterizing the subject set as a whole. In other words, a relatively small standard error indicates that the network metric is consistent, or stable, across all subjects in that set. The 0.4 threshold yielded the lowest standard error for the percentage of islands, which makes sense because these graphs were mostly unconnected or featured a high percentage of islands. The 0.6 threshold yielded the lowest standard error for the percentage of hubs, which ties in with the observation that these graphs featured the most moderate number of connections and were most homogeneous in the density of connections. The 0.8 threshold yielded the lowest standard error for the average number of edges, average clustering coefficient, and characteristic path length. All three of these metrics relate to the connectivity of a graph, which was observed to be more random at lower thresholds. Although these were the results observed in this project, the stability of these network metrics could also be verified using alternative connectivity measures (only cross-correlation was considered in this project) and multiple data sets.
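The sketch below illustrates how these per-subject metrics and their standard error across a subject set could be computed; it assumes one binarized graph per subject (as in the earlier sketch) and computes characteristic path length on the largest connected component when islands are present, which is an assumption made here for illustration.

```python
# Sketch: per-subject network metrics and the standard error of each metric
# across all subjects at a fixed threshold.
import numpy as np
import networkx as nx

def network_metrics(G):
    n = G.number_of_nodes()
    degrees = np.array([d for _, d in G.degree()])
    islands_pct = 100.0 * np.sum(degrees == 0) / n
    avg_edges = degrees.mean()                         # average connections per node
    clustering = nx.average_clustering(G)
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    cpl = (nx.average_shortest_path_length(giant)
           if giant.number_of_nodes() > 1 else 0.0)    # path length on giant component
    return islands_pct, avg_edges, clustering, cpl

def standard_error(values):
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / np.sqrt(len(values))

# Usage (graphs = one graph per subject at the chosen threshold):
# per_metric = list(zip(*(network_metrics(G) for G in graphs)))
# sems = [standard_error(v) for v in per_metric]
```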

Identification of potential biomarkers of disease

The potential to distinguish between EEG recordings of healthy brains and those that have undergone traumatic brain injury can be explored by comparing network metrics as features for classification. The key features investigated were the percentage and frequency of hubs and islands, the average number of edges per node, the characteristic path length, and the average clustering coefficient. Stronger distinctions can be made between the control and test patients by comparing the average clustering coefficient and characteristic path length after dividing them by the average number of connections. The percentage and frequency of hubs and islands were not significantly different between the control and test sets. However, the subgraphs formed by the hub and island sets were visible upon inspection of the head-maps. As such, further research could explore how network metrics of these subgraphs differ between the control and test patient sets. Similarly, the characteristic path length and average clustering coefficient were relatively similar in value for both data sets. However, the average number of connections, which was significantly higher for the control set, can be used to scale the average clustering coefficient and characteristic path length in proportion to the overall connectivity of the graph. In doing so, these scaled network metrics become more clearly distinguishable between the control and test sets. The effectiveness of a given network metric in distinguishing between the control and test groups was assessed by comparing the mean and standard error across multiple thresholds. As such, the threshold should be taken into strong consideration when using a given network metric as a feature for classification. Certain network metrics may be more sensitive to variation in threshold value, which was not explored in this project since threshold intervals and values were kept constant for simplicity. Additionally, the variety of network metrics available as potential features depends on the type of graph created. Therefore, directed or weighted graphs could offer additional network metrics that were not considered in this project.
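The scaling suggested above can be expressed as a small helper; the numbers in the usage comment are purely illustrative placeholders, not values from this project.

```python
# Sketch: normalize clustering coefficient and characteristic path length by the
# average number of connections per node before comparing control and test groups.
def scaled_features(avg_clustering, char_path_length, avg_connections):
    """Return connectivity-normalized features for classification."""
    return (avg_clustering / avg_connections,
            char_path_length / avg_connections)

# Illustrative usage (placeholder values only):
# control = scaled_features(avg_clustering=0.45, char_path_length=2.1, avg_connections=6.2)
# test    = scaled_features(avg_clustering=0.42, char_path_length=2.3, avg_connections=3.8)
```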

Methodological challenges and considerations for future work

Given the time constraints of this project, we used cross-correlation because it requires the fewest additional parameters and is the simplest connectivity measure to implement. A key limitation of cross-correlation as a functional connectivity measure is its dependence on both the neural currents of the underlying cortical region and those generated by remote cortical regions, also known as the effects of volume conduction (Blinowska et al., 2016). As a result, two brain regions may show correlated activations due to common feeding from a third cortical or sub-cortical source (Blinowska et al., 2016). These volume-conduction effects can lead to a false estimate of the actual connectivity between different brain regions. Multiple studies aim to combat this issue by projecting the activity measured by an electrode back to the underlying sources, effectively mapping “signal space” to “source space.” Also known as the inverse problem, these methods attempt to capture the communication between brain areas more accurately. However, the constraints and assumptions required to solve the inverse problem mean that it lacks a unique solution. Because volume conduction and field spread, in which multiple electrodes measure activity from a single source, can still affect source-space analysis of neurophysiological signals, such analysis can be enhanced by implementing a connectivity measure that is less sensitive to volume conduction (Hillebrand et al., 2012).
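For reference, a common way to turn cross-correlation into a connectivity matrix is sketched below: the maximum absolute normalized cross-correlation over a window of lags is taken as the edge strength between two channels. The maximum lag of 50 samples is an assumed parameter for illustration and is not necessarily the value used in this project.

```python
# Sketch: pairwise cross-correlation connectivity for an EEG array of shape
# (n_channels, n_samples), assuming signal length well above max_lag.
import numpy as np

def max_cross_correlation(x, y, max_lag=50):
    """Maximum absolute normalized cross-correlation within +/- max_lag samples."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    full = np.correlate(x, y, mode="full")          # correlation at every lag
    mid = len(full) // 2                            # index of zero lag
    window = full[mid - max_lag: mid + max_lag + 1]
    return np.max(np.abs(window))

def connectivity_matrix(data, max_lag=50):
    """Symmetric channel-by-channel connectivity matrix."""
    n = data.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            C[i, j] = C[j, i] = max_cross_correlation(data[i], data[j], max_lag)
    return C
```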

Although this project focused on the simplest graph type, undirected (binary) and unweighted, other types of graphs may offer additional insight into brain network interactions. Directed graphs require a connectivity measure that can capture causal or directed information flow, also known as effective connectivity (Friston, 2011). Examples of effective connectivity measures not explored in this project include Granger causality, in which the future of one signal can be predicted from the past of another and vice versa, and the directed transfer function, which gives the relation between the outflow of one node towards another in the frequency domain (van Diessen et al., 2015). In comparison to cross-correlation, other measures such as imaginary coherence, partial directed coherence, the phase slope index, and the phase lag index are more robust to the effects of volume conduction. These measures can be adjusted to reflect directed information as well and can be used to generate directed graphs with variable weights. In order to use different connectivity measures, we would need to determine experimental parameters appropriate to the connectivity measure chosen.
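As one example of a volume-conduction-robust alternative mentioned above, the phase lag index (PLI) quantifies how consistently the instantaneous phase of one signal leads or lags another; the sketch below follows the standard formulation and is offered only as an illustration of what adopting such a measure would involve.

```python
# Sketch: phase lag index between two equal-length signals.
# PLI = |mean(sign(sin(phase difference)))|; 0 = no consistent lag, 1 = fully consistent lag.
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x, y):
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.sign(np.sin(phase_diff))))
```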

Furthermore, studying the connectivity of networks generated from different frequency bands could offer insight into how connectivity is affected by the distinct cognitive functions associated with each band. To do so, we could filter the provided data into the following frequency bands: delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), and gamma (30-90 Hz). Then, networks could be constructed after selecting an appropriate connectivity measure, using an approach similar to that used in this project.
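A minimal sketch of this band-specific preprocessing is shown below; the use of a fourth-order Butterworth filter, a 256 Hz sampling rate, and zero-phase filtering are assumptions for illustration rather than choices made in this project.

```python
# Sketch: band-pass filter an EEG array (n_channels, n_samples) into standard bands
# before building band-specific networks.
from scipy.signal import butter, filtfilt

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 90)}

def bandpass(data, low, high, fs, order=4):
    """Zero-phase Butterworth band-pass filter applied along the sample axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=-1)

# Usage (assumed sampling rate of 256 Hz):
# alpha_data = bandpass(eeg, *BANDS["alpha"], fs=256)
```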