Results

For the second model, we analyzed how the underlying graph affects the resulting loss distribution. We found that the α = 0 loss distributions for the ‘Spoke’, ‘Wheel and Spoke’, ‘Spoke with Three Centers’, and ‘Wheel and Spoke with Three Centers’ graphs are identical to the original model’s α = 0 loss distribution, because when α = 0 the banks are all disconnected from each other. Examples of the fully connected, ‘Wheel and Spoke’, and ‘Spoke’ loss distributions are shown below. For α = 1, the loss distributions still looked similar to the original model’s. For α = 10, however, the loss distributions for all four graphs retained a rough bell shape, while the original model’s distribution flipped: it became most likely that either very few banks defaulted or all of them did.

Finally, the loss distributions for the α = 100 models were quite similar. For the four graphs, the probability that all banks failed was significantly lower (by at least 10 percent) than in the original model. The trade-off is a significantly larger chance that between 2 and 8 banks fail. The graphical models have less than half the connections between nodes compared to the original model, so the original model has a higher chance of cascading failure. This can also be seen in the random graph’s loss distribution (Figure 12), whose underlying graph has on average 22 connections between nodes (just half of the original model’s) and a correspondingly higher probability that all banks will fail.
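The edge counts above can be checked directly from the adjacency matrices of the graphs. The sketch below builds the ‘Spoke’ and ‘Wheel and Spoke’ topologies and counts their connections; a network of 10 banks is assumed here, which is consistent with the fully connected graph having 45 connections (about twice the random graph’s average of 22).

```python
import numpy as np

def spoke(n):
    """Hub bank 0 connected to every peripheral bank."""
    A = np.zeros((n, n), dtype=int)
    A[0, 1:] = A[1:, 0] = 1
    return A

def wheel_and_spoke(n):
    """Spoke graph plus a ring through the peripheral banks."""
    A = spoke(n)
    for i in range(1, n):
        j = i + 1 if i < n - 1 else 1  # wrap the ring back to bank 1
        A[i, j] = A[j, i] = 1
    return A

def num_edges(A):
    # Each undirected edge appears twice in the adjacency matrix.
    return int(A.sum()) // 2

n = 10  # assumed number of banks
full = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)
print(num_edges(spoke(n)), num_edges(wheel_and_spoke(n)), num_edges(full))
# → 9 18 45
```

This matches the connectivity ordering discussed in the text: ‘Spoke’ has half the connections of ‘Wheel and Spoke’, which in turn has less than half the connections of the fully connected graph.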

It seems that increasing the interconnectivity of the graph increases the probability that all banks will fail, but decreases the chance that a medium number of banks will fail. The original model follows this rule, as it can be thought of as a fully connected graph.
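The connectivity effect described above can be reproduced with a small Monte Carlo sketch. The dynamics below are an assumption, since the report does not restate the model’s equations here: each bank’s state follows a coupled diffusion dX_i = (α/N) Σ_j A_ij (X_j − X_i) dt + σ dW_i, and a bank defaults if X_i ever drops below a hypothetical barrier D.

```python
import numpy as np

def simulate_losses(A, alpha=1.0, sigma=1.0, D=-0.7, T=1.0,
                    n_steps=100, n_trials=2000, seed=0):
    """Monte Carlo loss distribution for the assumed coupled-diffusion model.

    Returns an array p where p[k] estimates the probability that exactly
    k of the N banks default. D, sigma, and the dynamics are placeholders.
    """
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    dt = T / n_steps
    counts = np.zeros(N + 1, dtype=int)
    for _ in range(n_trials):
        X = np.zeros(N)
        defaulted = np.zeros(N, dtype=bool)
        for _ in range(n_steps):
            # (alpha/N) * sum_j A_ij (X_j - X_i): attraction toward neighbors
            drift = (alpha / N) * (A @ X - A.sum(axis=1) * X)
            X = X + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
            defaulted |= X < D
        counts[defaulted.sum()] += 1
    return counts / n_trials

# With alpha = 0 the drift vanishes, so the banks are effectively
# disconnected regardless of the underlying graph.
A_full = np.ones((10, 10)) - np.eye(10)
loss_dist = simulate_losses(A_full, alpha=0.0)
```

Rerunning `simulate_losses` with the ‘Spoke’ or ‘Wheel and Spoke’ adjacency matrices and larger α values is how one would probe the interconnectivity trend numerically.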

Next we looked at the initial capital distribution model. This model addresses a limitation of the previous models, which only looked at the banks’ returns rather than the nominal changes in capital. We initially assumed that the banks’ capital was exponentially distributed, then tried a uniform distribution and a random distribution. The loss distribution for this model is similar to that of the original model, except that the probability that all banks fail is suppressed. This is due to the extreme sensitivity of the Initial Capital Distribution model: changing either the connectivity or the diffusion coefficient greatly impacted the results. We chose to change the diffusion coefficient, as our original value caused the returns to exceed double precision.
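The key change in this model is the default rule: a bank fails when the nominal change in capital exhausts its initial buffer, rather than when its return crosses a fixed barrier. A minimal sketch of the three initial-capital choices follows; the scale parameters and the one-period shock are placeholders, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(1)
n_banks = 10  # assumed network size

# Candidate initial-capital distributions (scales are hypothetical).
initial_capital = {
    "exponential": rng.exponential(scale=1.0, size=n_banks),
    "uniform":     rng.uniform(0.5, 1.5, size=n_banks),
    "random":      rng.random(n_banks),
}

def n_defaults(capital, nominal_change):
    """A bank defaults when the nominal change wipes out its capital buffer."""
    return int(np.sum(capital + nominal_change <= 0.0))

# Illustrative one-period nominal shocks applied to every capital profile:
shock = rng.normal(-0.5, 0.5, size=n_banks)
defaults_by_dist = {name: n_defaults(cap, shock)
                    for name, cap in initial_capital.items()}
```

Because the same shock hits differently sized buffers, the number of defaults now depends on the initial capital distribution, which is exactly the sensitivity discussed above.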

 

[Figures: loss distributions for the Random, Exponential, and Uniform initial capital distributions]

As seen in the first two figures below, the ‘Wheel and Spoke’ and fully connected graphs were quite similar, except for the higher probability that all banks fail in the ‘Wheel and Spoke’ case at α = 100. This is probably because the large central bank, which trades with all of the peripheral banks, drags the other banks down and causes a cascading failure when the interconnectivity is that high. For the ‘Spoke’ graph, however, the loss distribution followed a binomial distribution for every α. Since the interconnectivity of the ‘Spoke’ graph is approximately half that of the ‘Wheel and Spoke’, even at α = 100 the ‘Spoke’ loss distribution remained binomial. Thus even our initial capital distribution model follows the trend that as interconnectivity decreases, a medium number of failures becomes more likely.

[Figures: loss distributions for the Wheel and Spoke and Spoke graphs]

As can be seen above, our implementation of the first two models was successful, and our results conform closely to those of the previous groups. The advantages of these two models are that they are extremely ‘lightweight’ and have relatively few parameters, yet still exhibit the behavior we want (mean reversion and the appearance of systemic risk). However, the models have some shortfalls. First, the results can be difficult to quantify, and it can be very difficult to isolate the effect of individual parameters, such as the arrangement of the graphs. Second, the models implicitly make some assumptions that are not realistic in banking networks. Our third model helped address both of these issues, but still had some of the same problems.

One possible extension to this project would be to mathematically derive equations for the extension that has all of the properties we want. This was outside our abilities at the time, but could provide some improvement. Another possible extension would be to increase the number of banks in the network. The original model focused on taking the number of banks to infinity; we, however, found the model unwieldy as we increased the number of banks. Treating the number of banks as a parameter could give some new insight, but would likely require significantly more computing power. Lastly, one could implement some form of cost system for the performance of each model, taking into account the mean number of defaults and the severity/chance of a systemic failure. Such a cost system would need to be based on a model of social utility or on historical data on risk preferences. It would help quantify the performance of the models from the perspective of a government or society, but falls well outside the realm of an ESE project.
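The cost-system extension could be prototyped directly on the loss distributions our models already produce. The sketch below scores a distribution by its mean number of defaults plus a penalty on the probability of total systemic failure; the weights are placeholders that a real version would calibrate to a social-utility model or historical risk preferences.

```python
import numpy as np

def expected_cost(loss_dist, per_default_cost=1.0, systemic_penalty=10.0):
    """Score a model's loss distribution (lower is better).

    loss_dist[k] is the probability that exactly k of the N banks default.
    Both weights are hypothetical; they stand in for a calibrated
    social-utility function.
    """
    p = np.asarray(loss_dist, dtype=float)
    k = np.arange(p.size)
    mean_defaults = float((k * p).sum())
    p_systemic = float(p[-1])  # probability that every bank defaults
    return per_default_cost * mean_defaults + systemic_penalty * p_systemic

# Toy comparison on a 4-bank system: shifting probability mass from
# "a few banks fail" into "all banks fail" raises the cost.
benign = [0.5, 0.3, 0.15, 0.05, 0.0]
tail   = [0.5, 0.3, 0.05, 0.0, 0.15]
```

Comparing `expected_cost(tail)` with `expected_cost(benign)` shows the penalty term at work: the fat all-banks-fail tail scores worse even though both distributions share the same probability of zero or one default.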