## Abstracts

Paper Title | Authors | Time
A 1D model for collective dynamics of swimming bacteria | Alessandro Ravoni and Luca Angelani | Tuesday, 18:00-20:00

Recently, studies of active systems (e.g. bacteria) in confined geometries have attracted great interest, since such systems show extraordinary collective behavior [1,2,3,4]. While collective dynamics is an interesting case of study from a theoretical point of view, it can also serve as a solid foundation for developing new technologies [5,6,7,8].
Therefore, developing computational methods capable of efficiently reproducing the dynamics of confined systems is fundamental.

In this context, we develop a discrete model to perform a parametric study of a confined active system, spanning a wide range of possible values of the characteristic quantities of the system, such as the geometrical configuration or the particle properties. In this way, it is possible to associate these properties with the emergent collective dynamics.
We follow the dynamics of self-propelled active particles confined in a channel with single-file condition. The channel is represented by a 1D lattice, and active particles move within it with a constant velocity. We also account for run-and-tumble dynamics by considering a random reorientation of a particle with rate λ.
In particular, we consider a system consisting of two microchambers, containing a number of particles N, connected through a microchannel with a length L.
This geometrical configuration has already been studied in molecular dynamics simulations (MDSs) [1], showing an interesting self-sustained density oscillation of particles. However, since MDSs are computationally expensive, the above study refers only to a single geometrical configuration and a fixed set of values for N, λ and L. This limitation is overcome in this work by the discrete model approach, which is computationally advantageous and allows a parametric study of the collective emergent behaviour over a wide range of values of N, λ and L, while reproducing all the results found in MDSs.
We take into account short-range interactions, namely the excluded volume effect and the relative pushes between adjacent particles, responsible for the formation of aggregated states. In fact, self-propelled particles generally interact through non-reflecting collisions, with a subsequent formation of clusters [9]. We also consider long-range interactions amongst particles in the same cluster, which leads to the rise of collective dynamics.
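As an illustration of the kind of lattice dynamics described above, here is a minimal sketch; the update rules (random sequential updates, a per-step tumbling probability, excluded volume only) are assumptions made for the sketch, not the authors' exact model, and pushes and cluster interactions are omitted.

```python
import random

def step(positions, directions, L, lam, rng):
    """One update sweep of a minimal 1D run-and-tumble lattice gas.

    positions: lattice sites (ints in [0, L)), one per particle
    directions: +1/-1 run direction per particle
    lam: tumbling rate (per-step reorientation probability)
    """
    occupied = set(positions)
    order = list(range(len(positions)))
    rng.shuffle(order)  # random sequential update
    for i in order:
        if rng.random() < lam:          # tumble: pick a new direction at random
            directions[i] = rng.choice((-1, 1))
        target = positions[i] + directions[i]
        # excluded volume: move only onto an empty site inside the channel
        if 0 <= target < L and target not in occupied:
            occupied.remove(positions[i])
            positions[i] = target
            occupied.add(target)
    return positions, directions

rng = random.Random(0)
L, N, lam = 50, 10, 7e-3
positions = rng.sample(range(L), N)                  # distinct initial sites
directions = [rng.choice((-1, 1)) for _ in range(N)]
for _ in range(1000):
    step(positions, directions, L, lam, rng)
```

The single-file condition is enforced by the occupancy set: a particle blocked by a neighbour simply waits, which is the mechanism behind cluster formation in such models.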

We find that the density oscillations rely on the formation of long active clusters. These clusters must be long enough to allow the formation of long-lasting flows of particles in the channel.
Results show that in channels with L < L* = 25l (where l is the size of the active particle), long-lasting fluxes of particles are not sustained and the oscillatory dynamics does not appear. We also find a threshold value of λ* = 7 x 10^-3, above which the random reorientation of the particles causes a rapid fragmentation of the cluster. Therefore, efficient tumbling prevents the emergence of periodicity.
It is also very interesting to note that the emergence of the oscillatory behavior does not depend on the number of swimmers N. However, high values of N (N > 300) increase the period of oscillation, since the larger number of swimmers in the chambers reduces their motility.
These results can be the starting point for designing microstructured devices based on the collective dynamics of living organisms. Such devices would be capable of, for instance, controlling bacterial diffusion or transporting passive matter.

References
[1] Paoluzzi M., Di Leonardo R., Angelani L., Self-sustained density oscillations of swimming bacteria confined in microchambers, Phys. Rev. Lett. 115(18), 188303, 2015.
[2] Wioland H., Woodhouse F. G., Dunkel J., Kessler J., Goldstein R., Confinement Stabilizes a Bacterial Suspension into a Spiral Vortex, Phys. Rev. Lett. 110, 268102, 2013.
[3] Fily Y., Baskaran A., Hagan M. F., Dynamics and density distribution of strongly confined noninteracting nonaligning self-propelled particles in a nonconvex boundary, Phys. Rev. E 91, 012125, 2015.
[4] Paoluzzi M., Di Leonardo R., Marchetti M. C., Angelani L., Shape and Displacement Fluctuations in Soft Vesicles Filled by Active Particles, Sci. Rep. 6, 34146, 2016.
[5] Costanzo A., Di Leonardo R., Ruocco G., Angelani L., Transport of self-propelling bacteria in micro-channel flow, J. Phys.: Condens. Matter 24, 065101, 2012.
[6] Di Leonardo R., Angelani L., Ruocco G., Iebba V., Conte M. P., Schippa S., De Angelis F., Mecarini F., Di Fabrizio E., Bacterial ratchet motors, Proc. Natl Acad. Sci. USA 107, 9541, 2010.
[7] Sokolov A., Apodaca M. M., Grzybowski B. A., Aranson I. S., Swimming bacteria power microscopic gears, Proc. Natl Acad. Sci. USA 107, 969, 2010.
[8] Koumakis N., Lepore A., Maggi C., Di Leonardo R., Targeted delivery of colloids by swimming bacteria, Nature Communications 4, 2588, 2013.
[9] Locatelli M., Baldovin F., Orlandini E., Pierno M., Active Brownian particles escaping a channel in single file, Phys. Rev. E 91, 022109, 2015.

Accuracy and Robustness of Machine Learning Predictions | Kenric Nelson | Wednesday, 15:40-17:00

Machine learning algorithms are typically trained and tested based on classification or regression error. While the Kullback-Leibler divergence or other information-theoretic metrics may be utilized, these metrics often measure relative performance without a clear sense of what constitutes an absolute standard of success. The interpretation of information-theoretic metrics is clarified by translating them into a probability which can be compared with the classification metrics. Furthermore, via the weighted generalized mean of predicted probabilities, which is a translation of the Tsallis and Rényi generalizations of entropy, the contrast between decisive and robust algorithms can be measured. These probabilistic metrics of learning performance can be split into components related to the discrimination power of the underlying features and the accuracy of the learned models. The components are related to the entropy and divergence components of the cross-entropy between the model and source distributions. The probabilistic metrics are embedded in a plot of the calibration curves contrasting predicted and measured distributions, providing a clear visualization of the accuracy and robustness of machine learning algorithms.
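The weighted generalized mean mentioned above can be sketched as a simple power mean over the probabilities a classifier assigned to the true class; the exponent values below are illustrative only, not the specific risk parameters used in the paper.

```python
import math

def generalized_mean(probs, p, weights=None):
    """Weighted generalized (power) mean of predicted probabilities.

    p = 0 is the limit case: the weighted geometric mean, which is the
    probability translation of the cross-entropy. p < 0 emphasizes
    poorly predicted events (robustness), p > 0 rewards confident
    predictions (decisiveness).
    """
    n = len(probs)
    if weights is None:
        weights = [1.0 / n] * n
    if p == 0:  # geometric-mean limit
        return math.exp(sum(w * math.log(x) for w, x in zip(weights, probs)))
    return sum(w * x**p for w, x in zip(weights, probs)) ** (1.0 / p)

reported = [0.9, 0.8, 0.3]  # probabilities assigned to the true class
acc = generalized_mean(reported, 0)       # central accuracy-like score
robust = generalized_mean(reported, -1)   # penalizes over-confident misses
decisive = generalized_mean(reported, 1)  # rewards decisive predictions
```

By the power-mean inequality the three scores are ordered, robust ≤ acc ≤ decisive, which is what makes the spread between them a usable measure of the decisive/robust contrast.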

Adaptive Safety stock policies for robust pharmaceutical supply chains | Rana Azghandi, Jacqueline Griffin and Mohammad Jalali | Tuesday, 14:00-15:20

Over the past few years, the vulnerability of pharmaceutical supply chains to disruption has affected health care throughout the United States. Designing a system which is robust to these disruptions is complex and requires an adaptive decision process that dynamically changes over time. Apart from the inevitable occurrence of exogenous disruptions, which can happen in any system, endogenous disruptions such as irrational decisions can reinforce the vulnerability of the supply chain and cause the system to collapse.
Using system dynamics simulation helps us to capture the complex interactions among the components of a pharmaceutical supply chain and to design safety stock policies for varying exogenous stochastic shocks to the system. In addition, we characterize how dispersed events (spatially and temporally) can propagate disruptions through a system.

Agent cognition through micro-simulations: Adaptive and tunable intelligence with NetLogo LevelSpace | Bryan Head and Uri Wilensky | Wednesday, 14:00-15:20

We present a method of endowing agents in an agent-based model with sophisticated cognitive capabilities and a naturally tunable level of intelligence. Often, agent-based models use random behavior or greedy algorithms for maximizing objectives (such as a predator always chasing after the closest prey). However, random behavior is too simplistic in many circumstances and greedy algorithms, as well as classic AI planning techniques, can be brittle in the context of the unpredictable and emergent situations in which agents may find themselves. Our method centers around representing agent cognition as an independently defined, but connected, agent-based model. To that end, we have implemented our method in the NetLogo agent-based modeling platform, using the recently released LevelSpace extension, which we developed to allow NetLogo models to interact with other NetLogo models.

Our method works as follows: The modeler defines what actions an agent can take (e.g., turn left, turn right, go forward, eat, etc.), what the agent knows (e.g., what the agent can see), what the agent is trying to do (e.g., maximize food intake while staying alive), and how the agent thinks the world works via a cognitive model defined by a separate agent-based model. Similar to Monte Carlo tree search methods used in game AI, during each tick of the simulation each agent runs a settable number of micro-simulations using its cognitive model, with initial conditions based on its surroundings, tracking what actions it takes and how well it meets its objectives as a consequence of those actions. The agent then selects an action based on the results of these micro-simulations. A significant upshot of this method is that it gives researchers several tunable parameters that precisely control agents’ “intelligence”, such as the number of micro-simulations to run and the length of each micro-simulation. Having such control over agents’ intelligence allows modelers to, for instance, naturally adjust agents’ cognitive capabilities based on what is reasonable for those agents, or have an evolvable “intelligence” parameter that directly correlates to the agents’ cognitive capabilities.
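The selection loop just described can be sketched generically; `simulate` and `score` below stand in for the modeler-defined cognitive model and objective, and all names and the toy world are hypothetical illustrations, not the LevelSpace API.

```python
import random

def choose_action(state, actions, simulate, score, n_sims, horizon, rng):
    """Pick the action whose micro-simulations score best on average.

    simulate(state, action, horizon, rng) -> terminal state of one rollout
    of the agent's (simplified) cognitive model; score(state) -> objective.
    n_sims and horizon are the tunable "intelligence" knobs.
    """
    best_action, best_value = None, float("-inf")
    for a in actions:
        total = 0.0
        for _ in range(n_sims):
            total += score(simulate(state, a, horizon, rng))
        value = total / n_sims
        if value > best_value:
            best_action, best_value = a, value
    return best_action

# toy usage: an agent on a line wants to reach food at x = 5
def simulate(x, a, horizon, rng):
    for _ in range(horizon):
        x += a + rng.choice((-1, 0, 1))  # the agent's noisy world model
    return x

def score(x):
    return -abs(x - 5)

rng = random.Random(1)
action = choose_action(0, [-1, +1], simulate, score, n_sims=20, horizon=3, rng=rng)
```

Raising `n_sims` averages out the rollout noise and raising `horizon` lets the agent anticipate further ahead, which is exactly the sense in which these two parameters act as an intelligence dial.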

As an illustrative example, and to begin to understand how this type of cognition interacts with complex systems, we present a modification of a classic predator-prey model, in which the animals have been equipped with the cognitive faculties described above. Based on the Wolf-Sheep Predation model included with NetLogo, the model contains wolves, sheep, and grass. In the classic model, wolves and sheep move randomly and reproduce when they have sufficient energy. Sheep eat grass and wolves eat sheep. Grass grows back at a set rate. In our extension, we define a simplified version of the model that represents how the wolves and sheep think the world works. The cognitive model has mechanics similar to those of the Wolf-Sheep Predation model, but with two key simplifications: 1) the subject agent’s field of vision defines what other agents are included in the cognitive model, and similarly 2) the mechanics that are reasonable for the agent to know define what mechanics are in effect (e.g., the internal states of other agents are unknown to the subject agent). We then use NetLogo LevelSpace to equip each of the wolves and sheep with this cognitive model, which they then use to perform short-term simulations of their surroundings and select actions that lead to the best outcomes. This cognitive model naturally allows sheep, for instance, to realize that if they move towards a particular patch of grass, a wolf might eat them or another sheep may arrive there first. However, the cognitive model also automatically adapts to special circumstances that the sheep may find itself in. For instance, if a sheep is about to starve to death, it will be more willing to risk being eaten by a wolf if that means getting food. The naturally adaptive capabilities and emergent decision making set this agent cognition method apart from traditional agent AI.

To understand the impact of this cognitive model on the dynamics of the predator-prey model, we performed experiments in which the number and length of the micro-simulations that the sheep use is varied, while the wolves are left with their original random behavior. We find that only two short micro-simulations of the sheep’s cognitive model are needed to dramatically improve the performance of the sheep, as measured by their mean population when the system reaches periodicity. More broadly, increasing the number of simulations monotonically improves the sheep’s performance as a group, tending towards an asymptote at the system’s carrying capacity. Simulation length, however, achieves peak performance at a relatively small number of ticks; when the simulations are too long, sheep performance drops. Thus, we find that giving the agents even limited cognitive capabilities results in dramatic changes to the system's long-term behavior.

Agent-based models for assessing social influence strategies | Zachary Stine and Nitin Agarwal | Wednesday, 14:00-15:20

Motivated by the increasing attention given to automated information campaigns and their potential to influence information ecosystems online, we argue that agent-based models of opinion dynamics provide a useful environment for understanding and assessing social influence strategies. This approach allows us to build theory about the efficacy of various influence strategies, forces us to be precise and rigorous about our assumptions surrounding such strategies, and highlights potential gaps in existing models. We present a case study illustrating these points in which we adapt a strategy, viz., amplification, commonly employed by so-called ‘bots’ within social media. We treat it as a simple agent strategy situated within three models of opinion dynamics using three different mechanisms of social influence. We present early findings from this work suggesting that a simple amplification strategy is only successful in cases where it is assumed that any given agent is capable of being influenced by almost any other agent, and is likewise unsuccessful in cases that assume agents have more restrictive criteria for who may influence them. The outcomes of this case study suggest ways in which the amplification strategy can be made more robust, and thus more relevant for extrapolating to real-world strategies. We discuss how this methodology might be applied to more sophisticated strategies and the broader benefits of this approach as a complement to empirical methods.

AI in The Real World, Year 2030: Some Game-Changing Application Domains for AI, Machine Learning and Data Science | Predrag Tosic | Tuesday, 15:40-17:00

We are interested in practical applications of AI and Data Science across different industries and aspects of our daily lives, and in particular in how AI and Big Data are disrupting different industries as well as various aspects of economic, social and other human endeavor. Based on the observed technology trends of the past 10-15 years, as well as recent and current progress in AI and Data Science R&D, we make some short-to-medium-term predictions about which industries and social domains are to be among the most disrupted by AI over the next decade or so -- as well as how that disruption will change those industries.

In that context, we specifically identify health care, the energy sector and education to be among those domains in which the AI- and Big Data-triggered disruption is already in progress, with much more to come in the future. We first briefly discuss how the landscape of each of these three domains (from technology use to business models to the impact on people working in those industries) is already being considerably reshaped by the emergence of scalable, practical applied AI and "big data" analytics; some of the discussion is based on our own research addressing some of the major challenges those industries face. We then outline our predictions on further changes that we think are very likely to befall these industries. While most technology-driven (and especially AI- and Big Data-driven) changes that health care, the energy sector and education (esp. higher education) should expect will in our view be overall very positive, many practices as well as business models in these three areas will need to change as well. In particular, those changes will require forward-looking, technology-aware industry leaders and policy makers capable of and willing to embrace change and re-invent their organizations and industries, in order to not merely survive but actually thrive while riding the wave of the ongoing, not-slowing-down-anytime-soon AI- and Big Data-driven technology revolution.

An alternative for calculating the edge PageRank. Application for analysing human mobility in metro networks | Regino Criado, Miguel Romance and Angel Perez | Tuesday, 15:40-17:00

Complex networks theory is a formal tool for describing and analyzing the interaction backbone of a wide range of real complex systems. The concept of the line graph offers a good representation of the network properties when it is appropriate to give more importance to the edges of a network than to its nodes. It is possible to consider two different approaches on a directed and weighted network G in order to define the PageRank of each edge of G:

-By obtaining the PageRank of each edge from the PageRank of its nodes.
-By computing the PageRank (as usual) in a new auxiliary network (the line graph of the network) in which each edge of the original network becomes a node.

We can show that both approaches are equivalent, although one of them has clear computational advantages over the other.
As an application, we analyze human mobility in the Madrid Metro System in order to locate the segments with the highest passenger flow on a standard working day, distinguishing between the morning and the afternoon time periods.
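The line-graph route (the second approach above) can be sketched as follows; the power-iteration PageRank and the tiny example graph are illustrative assumptions, not the authors' implementation, and edge weights are omitted for brevity.

```python
def pagerank(succ, d=0.85, iters=100):
    """Plain power-iteration PageRank on a dict: node -> list of successors."""
    nodes = list(succ)
    n = len(nodes)
    r = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1.0 - d) / n for v in nodes}
        for u in nodes:
            out = succ[u]
            if out:
                share = d * r[u] / len(out)
                for v in out:
                    nxt[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    nxt[v] += d * r[u] / n
        r = nxt
    return r

def line_graph(succ):
    """Auxiliary network whose nodes are the edges (u, v) of the original;
    (u, v) -> (v, w) whenever w is a successor of v."""
    edges = [(u, v) for u in succ for v in succ[u]]
    return {(u, v): [(v, w) for w in succ.get(v, [])] for (u, v) in edges}

# edge PageRank of a toy directed network via the line-graph approach
g = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
edge_rank = pagerank(line_graph(g))
```

In this toy graph the edge (c, a) receives flow from both (a, c) and (b, c), so it ends up with the highest edge PageRank; in the metro application, such top-ranked edges correspond to the segments with the highest passenger flow.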

Analysis of Recurrence Distance Distributions in Bacterial and Archaeal Complete Genomes | Zuo-Bing Wu | Tuesday, 18:00-20:00

Symbolic dynamics and recurrence plots are basic methods of nonlinear dynamics for analyzing complex systems. Although the conventional methods have made great strides in understanding genetic patterns, they are still required to analyze the so-called junk DNA with its complex functions. In this presentation, firstly, the metric representation of a genome, borrowed from symbolic dynamics, is proposed to form a fractal pattern in a plane. Based on the metric representation method, the recurrence plot technique of the genome is established to analyze the recurrence structures of nucleotide strings. Then, by using the metric representation and recurrence plot methods, the recurrence distance distributions in bacterial and archaeal complete genomes are identified, and the mechanism of the recurrence structures is analyzed. Further, the Synechocystis sp. PCC6803 genome, as one of the oldest unicellular organisms, is taken as an example for a detailed analysis of the periodic and non-periodic recurrence structures. The periodic recurrence structures are generated by periodic transfer of several substrings in long periodic or non-periodic nucleotide strings embedded in the coding regions of genes. The non-periodic recurrence structures are generated by non-periodic transfer of several substrings covering or overlapping with the coding regions of genes. In the periodic and non-periodic transfer, some gaps divide the long nucleotide strings into the substrings and prevent their global transfer. Most of the gaps are either the replacement of one base or the insertion/deletion of one base. By comparing the relative positions and lengths, the substrings concerned with the non-periodic recurrence structures are found to be almost identical to the mobile elements annotated in the genome. The mobile elements are thus endowed with the basic results on the recurrence structures. This research is supported by the National Science Foundation through Grants No. 11172310 and No. 11472284.
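To make the first step concrete, here is a minimal chaos-game-style sketch of a metric representation of a nucleotide string; the particular assignment of bases to corners of the unit square is an assumption for illustration.

```python
def metric_representation(seq):
    """Chaos-game-style metric representation of a nucleotide string:
    each base maps to a corner of the unit square and the point moves
    halfway toward that corner, so every prefix gets a unique point."""
    corners = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}
    x, y = 0.5, 0.5
    points = []
    for base in seq:
        cx, cy = corners[base]
        x, y = (x + cx) / 2, (y + cy) / 2
        points.append((x, y))
    return points

points = metric_representation("ACGTACGT")
```

The key property exploited by recurrence plots is contraction: two positions preceded by the same k-mer map to points within 2^-k of each other in each coordinate, so recurrent nucleotide strings show up as near-coincident points in the plane.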

Analytical approach to network inference: investigating the degree distribution | Gloria Cecchini and Bjoern Schelter | Wednesday, 15:40-17:00

Networks are one of the most frequently used modelling paradigms for dynamical systems. Investigations of synchronization phenomena in networks of coupled oscillators have attracted considerable attention, and so has the analysis of chaotic behaviour and corresponding phenomena in networks of dynamical systems, to name just a few. Here, we discuss another related challenge that originates from the fact that network inference in the inverse problem typically relies on statistical methods and selection criteria. When a network is reconstructed, two types of errors can occur: false positive and false negative errors about the presence or absence of links. We analyse analytically the impact of these two errors on the vertex degree distribution. Moreover, an analytic formula for the density of the biased vertex degree distribution is presented. In the inverse problem, the aim is to reconstruct the original network. We formulate an equation that enables us to calculate analytically the vertex degree distribution of the original network if the biased one and the probabilities of false positive and false negative errors are given. When the dimension of the network is relatively large, numerical issues arise, and consequently the truncated singular value decomposition is used to calculate the original network's vertex degree distribution. The outcomes of this work are general results that enable us to reconstruct analytically the vertex degree distribution of any network. This method is a powerful tool since the vertex degree distribution is a key characteristic of networks.
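The forward direction of this relation (from the original degree distribution to the biased one) can be sketched under a simple independent-error assumption; this is an illustrative model, not necessarily the authors' exact formula, and the inverse problem described above then amounts to inverting this linear map, e.g. with a truncated singular value decomposition.

```python
from math import comb

def biased_degree_distribution(p_true, n, alpha, beta):
    """Degree distribution after network-reconstruction errors.

    p_true[k] = P(true degree = k) in a network of n nodes. Assumed model:
    each true link is missed with probability beta (false negative) and
    each absent link is spuriously detected with probability alpha
    (false positive), independently. Returns P(observed degree = k')."""
    def binom_pmf(m, q, j):
        return comb(m, j) * q**j * (1 - q) ** (m - j)

    p_obs = [0.0] * n
    for k, pk in enumerate(p_true):
        if pk == 0.0:
            continue
        for kept in range(k + 1):          # true links that survive
            for added in range(n - k):     # spurious links among n-1-k absent pairs
                p = pk * binom_pmf(k, 1 - beta, kept) * binom_pmf(n - 1 - k, alpha, added)
                p_obs[kept + added] += p
    return p_obs

p_true = [0.0, 0.5, 0.5, 0.0, 0.0]  # toy network of n = 5 nodes
p_obs = biased_degree_distribution(p_true, 5, alpha=0.05, beta=0.1)
```

With alpha = beta = 0 the map is the identity; for nonzero error rates the observed distribution is a smeared version of the true one, which is exactly the bias the abstract proposes to undo.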

The Application of (Complex) Systems Theory to the Impact Sector | Tanuja Prasad | Monday, 15:40-17:00

The impact sector is the sector that uses business to achieve environmental and social positive impact in a sustainable manner. There are several popular terms in this sector that you may have come across: impact investing, social enterprise, double-bottom-line, people-planet-profit (the 3 Ps), purpose-driven business, etc. These terms, and this sector, have been receiving a lot of air time in recent years. And, rightfully so.

In keeping with this conference’s theme, we will take a more first-principles approach where we will look at how “impact” creates impact.

The impact sector has two aspects to it: the attempt to understand impact, and the attempt to design for desired impact. These are two sides of the same coin of course -- but quite distinct in terms of the skillsets needed. One is analysis, the other is synthesis. Science is analysis, design is synthesis. Unfortunately, traditional science and engineering education has not focused on synthesis.

In many ways, the impact sector is a leader in the application of complex systems theory. In fact, the impact sector is born from the realization that traditional models of analysis, business and implementation have not solved the larger problems as expected, and a more holistic and comprehensive approach was needed.

Our (current) mechanistic and reductionist understanding of the universe has led us to linear thinking, simple-cause-and-effect paradigms, negation of context and to silo-ed solutions. We see this thinking applied everywhere: in medicine, in government, in corporations. But, climate change, poverty, illness -- none of these can be attributed to a single cause.

The nature of change (or, evolution) is of the individual system experiencing change as a result of its internal processes, and, as a result of its selective responses to external stimuli. Thus the individual system expresses itself, as itself, within its environment thereby effecting change.

Consider what that means for “scaling”. Science has taught us to test a solution in a “lab”, if it works, then to scale it. Scaling not only assumes that the context is constant, but it also negates the role of the individual system (perhaps a person). Instead of scaling, we must connect. Instead of applying a tested solution across individual systems, we must have the individual systems apply the solution.
The story of complex systems theory is that it is unifying and universal. Patterns of behavior that appear in chemical reactions also appear in cognition. Or, those in financial systems, also appear in social systems. The underlying theme of the universe is process, not things.

But then, how do we intervene when everything is connected to everything else? And in so doing, will we break more than we create? How do we even create within a multi-causal structure?

Those are the challenges the impact sector is innovating within and innovating for.
This submission will bring a series of examples on how those challenges are being met in various industries.

Applying Complexity Science with Machine Learning, Agent-Based Models, and Game Engines: Towards Embodied Complex Systems Engineering | Michael Norman, Matthew Koehler, Jason Kutarnia, Paul Silvey, Andreas Tolk and Brittany Tracy | Wednesday, 14:00-15:20

The application of Complexity Science, an undertaking referred to here as Complex Systems Engineering, often presents challenges in the form of agent-based policy development for bottom-up complex adaptive system design and simulation. Determining the policies that agents must follow in order to participate in an emergent property or function that is not pathological in nature is often an intensive, manual process. Here we will examine a novel path to agent policy development in which we do not manually craft the policies, but allow them to emerge through the application of machine learning within a game engine environment. The utilization of a game engine as an agent-based modeling platform provides a novel mechanism to develop and study intelligent agent-based systems that can be experienced and interacted with from multiple perspectives by a learning agent. In this paper we present results from an example use-case and discuss next steps for research in this area.

Approaching comparative Semantic Change in Wikipedia Corpus, through doublet words, in Romance Languages | M. Àngels Massip-Bonet, Irene Castellón and Gemma Bel-Enguix | Wednesday, 15:40-17:00

Distributional semantics introduced the idea that the meaning of a word is given by a vector consisting of the meanings of its neighbouring items. Distributional models have been applied mainly to large amounts of texts and data in a synchronic sense.

We propose a methodology for the discrimination of senses of patrimonial words, applying unsupervised machine learning algorithms to corpora compiled from the Wikipedia of five languages: Catalan, Spanish, French, Italian and Portuguese. The distributional hypothesis states that capturing semantic relations between words is possible based on their contexts. It specifically states that similar contexts indicate similar meanings (Harris 1954, Clark 2015). In this research, these methods are applied in order to group examples that have similar contexts as variants of the same word, thanks to a neural-network based model.
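A toy sketch of the distributional hypothesis: raw co-occurrence counts and cosine similarity stand in for the neural-network model actually used, and the tiny corpus is invented for illustration.

```python
from collections import Counter
from math import sqrt

def context_vector(corpus, target, window=2):
    """Count the words co-occurring with `target` within +-window tokens."""
    vec = Counter()
    for sent in corpus:
        for i, w in enumerate(sent):
            if w == target:
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        vec[sent[j]] += 1
    return vec

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = sqrt(sum(c * c for c in u.values()))
    nv = sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = [
    "the cat chased the mouse".split(),
    "the dog chased the mouse".split(),
    "the stock market fell today".split(),
]
sim_cat_dog = cosine(context_vector(corpus, "cat"), context_vector(corpus, "dog"))
sim_cat_stock = cosine(context_vector(corpus, "cat"), context_vector(corpus, "stock"))
```

"cat" and "dog" share their contexts here, so their vectors are far more similar than those of "cat" and "stock"; the same principle, scaled up, is what lets the method separate the senses of popular/cultism word pairs.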

The final objective of this research is to compare the relation between pairs of words in different Romance languages. We use as target words popular/cultism pairs of words, like Cat. doblegar/duplicar, Sp. doblar/duplicar, Fr. plier/dupliquer, It. piegare/duplicare, all coming from the Latin DUPLICARE, that appear in different languages. We explain the accommodation of the different senses of both words, popular and cultism, documenting the semantic field of each word (or sense).

This research is relevant in the complexity frame in two senses: it takes into account a complex linguistic subsystem through different methodologies (Cilliers et al. 2013) and it considers a linguistic subject with an interdisciplinary approach (Bastardas 2016, Miller 1982) that will be able to be applied to other languages and linguistic problems.

Artificial Intelligence and Legal Personality: Any Rescue from Salomon v. Salomon? | Damilola Odumosu R and Grace Solomon | Tuesday, 18:00-20:00

Most scientific conferences tend to ignore the contributions of law to the development of Artificial Intelligence (AI). This paper aims to bridge this gap by considering the multidisciplinary perspective of Artificial Intelligence as a developing discipline and the legal quandary it has thrown at the law courts. Considering the feats and advancements attained by Artificial Intelligence so far, should the courts, upon trial, find the manufacturers liable for defects of an autonomous device, or should a robot simply be confiscated and destroyed? Our legislators, policy makers, academicians, legal scholars and judges should be properly guided by providing, from a laboratory of refined thoughts and practical ideas, a definite approach towards ensuring that a robot which cannot be sued today will not only be able to be sued tomorrow but also be trained to cause minimal or no damage. As one court observed, “robots cannot be sued,” even though they can cause devastating damage. The courts from time immemorial have always forged new paths by ensuring that the principle of ‘ubi jus ibi remedium’ is applied in circumstances where it presents itself as just and fair. It is widely believed that within the ambit of every court there lies the power to find a defaulting party liable. In 2010, it was reported that a robot made by a Swiss art group purchased arms from a black-market website and was later arrested by the Italian police, who could not prosecute further because no law recognized such a situation. This has intensified the debate on whether Artificial Intelligence should be considered a legal person. In considering Artificial Intelligence as a person, would a piece of legislation solve this question, or would pronouncements from a court of law solve the legal puzzle? Can we also take a bold step by understanding why exactly Artificial Intelligence should be considered an artificial person, by examining how it learns and adapts to human society?

Artificial Intelligence in the 21st Century | Jiaying Liu, Xiangjie Kong, Feng Xia, Xiaomei Bai, Lei Wang, Qing Qing and Ivan Lee | Tuesday, 15:40-17:00

Artificial Intelligence (AI) has grown dramatically and become more and more institutionalized in the 21st Century. The evolution of AI has advanced the development of human society in our own time, with dramatic revolutions shaped by both theories and techniques. However, the fast-moving, complex and dynamic nature of the field makes AI difficult to understand well. To fill this gap, relying on the power of complex network topologies, we study the evolution of AI in the 21st Century from the perspective of network modeling and analysis along the following dimensions:
The evolvement of AI based on the volume of publications over time;
Analysis of impact and citation pattern to characterize the referencing behavior dynamics;
Identifying impactful papers/researchers/institutions and exploring their characteristics to quantify the milestone and landmark;
The inner structure by investigating topics evolution and interaction.
Our study is performed on a large-scale scholarly dataset which consists of 58,447 publications and 1,206,478 citations spanning from 2000 to 2015. The publication metadata is obtained from Microsoft Academic Graph, which contains six entity types of scholarly data: authors, papers, institutions, journals, conferences, and fields of study. To construct and analyze the citation network of AI, we select the articles published in the top journals and conferences recommended as international academic publications by the China Computer Federation and by the Computing Research and Education Association of Australasia under the category “Artificial Intelligence”. Based on the analysis, the main findings are:
In the context of AI's growth, we discover that the number of publications, citations as well as the length of the author list has been increasing over the past 16 years. It suggests that the collaboration in the field of AI is becoming more and more common.
From the perspective of reference behavior, the decrease in self-references including author self-references and journal/conference self-references indicates the science of AI is becoming more open-minded and more widely sharing. The development of techniques and tools (evidenced by the citing behavior of new literature) in AI leads the area getting diverse.
We use the average number of citations per paper as an indicator to evaluate the importance of papers, authors, and institutions. The influential entities identified this way are consistent with our intuitions.
Finally, we explore the inner structure of AI in the 21st Century. We identify the hot keywords and topics from the perspective of how they change with time. Some topics have attained “immortality” in this period, such as computer vision, pattern recognition, and feature extraction. Furthermore, based on the co-presence of different topics and the citation relationships among them, we find interconnections and unveil the trend of development in this complex discipline.
Overall, our findings demonstrate that AI became more collaborative, diverse, and challenging during the first sixteen years of the 21st Century. These results not only explain the development of AI over time, but also identify the important changes, with the ultimate goal of advancing the evolution of AI.

Aspects of Complex Systems Relevant to Medical PracticeStefan TopolskiTuesday, 18:00-20:00

Purpose: To improve the understanding and communication between complex systems scientists and physicians practicing medicine in order to improve medical practice.
Problem: In the absurdist world of medical education, the preparation of idealistic young people to work in the healthcare business lacks discussion of the core complexity aspects of human health and illness. For their part, complexity scientists often produce abstract science of poor utility when they lack understanding of the nature of medicine and healthcare.
Methods: Sharing a consensus physician view regarding the roles of complexity in health, illness, and healthcare systems drawn from the experience of Family Medicine thought leaders of the North American Primary Care Research Group.
Results: An improved understanding of 1) how complexity is understood by physician teachers in real medical practice, 2) applications of complexity in healthcare practice, and 3) improved, more useful directions for researchers in complex systems to pursue.

Asymmetric return rates and wealth distributions induced by introduction of technical analysis into a behavioral agent based modelFischer Stefan and Allbens FariaTuesday, 15:40-17:00

Behavioral Finance has become a challenge to the scientific community. Based on the assumption that behavioral aspects of investors may explain some features of the Stock Market, we propose an agent based model to study this relationship quantitatively. In order to bring the simulated market closer to the complexity of real markets, we consider that the investors are connected to one another through a scale-free network; each investor, represented by a node, has his own psychological profile (Imitation, Anti-Imitation, Random). We consider two different strategies for decision making: one based on the trusted neighborhood of the investor, and the other on technical analysis, namely the momentum of the market index. We analyze the market index fluctuations, the wealth distribution of the investors according to their psychological profiles, and the rate-of-return distribution. In addition, we analyze the influence of changing the psychological profile of the hub of this complex network and report interesting results which show how and when the anti-imitation strategy becomes the most profitable strategy for investment. Moreover, an intriguing asymmetry of the return rate distribution is explained by considering the behavioral aspects of the investors. This asymmetry is quite robust, being observed even when a completely different algorithm for the investors' decision making is applied; to our knowledge, this result has never been reported before.

Avalanche dynamics and correlations in neural networksFabrizio Lombardi, Dietmar Plenz, Lucilla de Arcangelis and Hans Jürgen HerrmannWednesday, 15:40-17:00

Avalanche dynamics characterize many complex systems. In the brain, the near-synchronous firing of many nerve cells gives rise to so-called neuronal avalanches, a collective phenomenon that is a key feature of resting and evoked activity of cortical networks. Experiments at all spatial scales have shown that neuronal avalanche size and duration follow power-law distributions, a typical feature of systems at criticality. Yet avalanche dynamics in neuronal systems remain poorly understood. In this talk, I'll focus on the relationship between criticality and the temporal as well as correlative organization of neuronal avalanches recorded over many hours with high-density microelectrode arrays in cortex slice cultures. I'll first show that waiting time distributions for avalanches follow a peculiar non-monotonic behaviour featuring a power law at short (< 1 s) time scales, followed by a hump and an exponential decay at long time scales (Phys. Rev. Lett. 108, 2012). Using numerical simulations, I'll demonstrate that this peculiar behavior arises from the alternation of two different network states, the up- and down-state. Importantly, the specific temporal organization of neuronal avalanches is closely related to the criticality of the system. Finally, I'll discuss the dynamical relationship between avalanche size and waiting time in the context of specific features of networks with balanced excitation and inhibition (Chaos 27, 2017).
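The waiting-time analysis described above can be illustrated with a minimal Python sketch: given a list of avalanche onset times, the waiting times are simply the gaps between consecutive events. The toy generator below, which alternates a dense "up" state with a sparse "down" state, is an illustrative assumption of ours, not the recorded data or the model of the talk; all rates, durations, and bin edges are arbitrary.

```python
import random

random.seed(1)

# Toy event-time generator: alternating "up" states (dense events,
# short waits) and "down" states (sparse events, long waits).
# All parameters are illustrative.
def generate_avalanche_times(n_states=200):
    times, t = [], 0.0
    for s in range(n_states):
        up = (s % 2 == 0)
        rate = 10.0 if up else 0.5                  # events per second
        duration = random.expovariate(1.0 if up else 0.3)
        end = t + duration
        while t < end:
            t += random.expovariate(rate)           # Poisson events within the state
            times.append(t)
    return times

def waiting_time_histogram(times, bins):
    """Count waiting times (gaps between consecutive events) per bin."""
    waits = [b - a for a, b in zip(times, times[1:])]
    counts = [0] * (len(bins) - 1)
    for w in waits:
        for i in range(len(bins) - 1):
            if bins[i] <= w < bins[i + 1]:
                counts[i] += 1
                break
    return counts

times = generate_avalanche_times()
bins = [0.0, 0.1, 0.5, 1.0, 5.0, 20.0]
hist = waiting_time_histogram(times, bins)
print(hist)  # short waits dominate, with a tail contributed by down-states
```

With a generator of this kind, short waits from up-states dominate the histogram while down-states contribute a separate long-wait component, a crude analogue of the non-monotonic shape the abstract describes.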

The Beauty of Self-Referential ScienceScott Harris Tuesday, 18:00-20:00

In early 1955, John von Neumann agreed to deliver the 1956 Silliman Memorial lectures at Yale. He chose as his topic the mathematics of reason. He died before finishing this work. Yale University Press posthumously published his unfinished lectures as THE COMPUTER AND THE BRAIN. In the culmination of THE ASCENT OF MAN, former colleague and fellow polymath Jacob Bronowski described what he sought in his mathematics of reason as “a procedure, as a grand overall way of life—what in the humanities we would call a system of values.” To complete his study, von Neumann needed to expand his theory of games into the realm of grand strategy. Doing so calls for applying this process to itself, hence for knowing something about what we do not know. From a logical view, this is a contradiction. From a strategic view, this logical contradiction is the result of too simple a concept of reason. Intelligent beings decide well by finding problems that “ring true” with all that they currently know about deciding well. They then use logical models that predict well within the domains of these problems to evaluate alternative solutions to them. This combination of beauty and logic allows them to change what Albert Einstein called the whole of science from the products to the process of refining everyday thinking. This “rings true” with Bronowski’s claim that “[i]t is not the business of science to inherit the earth, but to inherit the moral imagination; because without that, man and beliefs and science will perish together.”

Beliefs – Attitudes – Behavior as a Complex SystemKrishnan RamanMonday, 15:40-17:00

West Hartford, Connecticut, USA

The human mind and its workings, in individuals and groups, have been discussed in different ways by philosophers, sociologists, psychologists, and other thinkers.
This paper analyzes the dynamics of the interacting system of Beliefs, Attitudes and Behavior and related entities in the framework of current thinking about complex systems.

Although a large amount of work has been done by social psychologists in this field over the last several decades, there are unresolved questions that merit further analysis.
Also, in today’s society, new questions are raised by changes in existing political systems, changes in the commercial world, the new ways in which information is presented to people by the Media, and the advent of the Internet, which has radically changed the modes of communication.
In the political arena, questions have been raised about the fair working and efficacy of voting systems, which determine the results of major elections.
In the commercial world, advertising using new media techniques has a major effect on attitudes and beliefs which determine consumer decisions on a large scale.
The dependence on Mass Media for news can strongly influence decision-making of all types.

A fresh Systems perspective on what governs beliefs, attitudes and behavior can help answer questions of importance. Examining this field using the viewpoints suggested by the body of work on Complex Systems may shed light on the mechanisms involved in Beliefs, Attitudes and Behavior, and on how they can be influenced. This paper is an attempt in that direction.

We point out the basic components of the Belief-Attitude-Behavior interacting system, the working of each subsystem, and the causal relations among the elements of the system, in particular the mutual influences of Attitudes, Behavior and Beliefs. We examine the feedback effects that occur in this system, and their role in determining the characteristics of the system.
We point out the different types of inputs to this open system, and their possible effects on the dynamics of the system.

We discuss the various factors involved in the formation of Beliefs, Attitudes, and Behavior in modern society, the diverse mechanisms by which these entities change, and the extent to which they can be controlled or influenced.
We discuss the important roles of Affect and Emotion, and of Information Processing, in the working of the system, as well as the role of Communication in persuasion and in changing attitudes and beliefs.
We present the idea of Attitudes as an emergent phenomenon.

We outline summary examples of Political, Business, Religious and Daily-Life Belief and Attitude Systems, especially in the context of modern societies.
We discuss the effects of modern technology and communication, including the Internet, on the structure and evolution of these Systems.

Bi-SOC-states in one-dimensional random cellular automatonZbigniew Czechowski, Agnieszka Budek and Mariusz BialeckiTuesday, 15:40-17:00

Many phenomena and experiments display SOC-like evolution. In order to understand the source of this behavior, different models, especially cellular automata, were constructed. They exhibit spontaneous organization towards a single dynamical critical point in which fluctuations manifest a typical power-law scaling. However, in some phenomena, e.g. in neuronal networks, switching between two stable SOC-like states can be observed.
We constructed a simple 1D cellular automaton in which migration between two SOC-like states is possible. Profiting from this simplicity, we derived analytical equations for the model in a mean-field-like approximation. The existence of the two SOC-like states is shown analytically and confirmed by computer simulations. Linear stability analysis showed that one of the steady states, S1, is a spiral saddle and the second, S2, a saddle with index 1. The system fluctuates mainly around S1, but sometimes it can migrate slowly to S2, which is less stable. The period in which the system fluctuates around S2 is therefore short, and through large avalanches the system returns to S1; the process repeats statistically. Avalanche distributions around the steady states are inverse power laws with the same exponent (about 3.28). The emergence of these two critical states and the migration between them are the result of the spontaneous evolution of the automaton. Hence, the model exhibits two SOC-like states.
The cellular automaton, being a recurrent slowly driven system with avalanches, can be treated as a toy model of earthquake supercycles. In this geophysical phenomenon, numerous small and large earthquakes, which release only part of the accumulated strain energy, may contribute to bringing the stress towards a critical state in the entire seismogenic zone. Then a megathrust earthquake nucleates and releases the stored energy in a rupture of the large zone. In our model we observe rare episodes of growth in the density of occupied cells during the long migration to S2, which can be seen as a parallel of increasing strain energy. The return to S1 in a large avalanche can be treated as a megathrust earthquake.
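A slowly driven 1D automaton of this general family can be sketched in a few lines. The sketch below is a generic random-domino-style automaton (particles arrive at random cells; a particle hitting an occupied cell topples the whole contiguous occupied cluster) and is our illustrative assumption: it is not the authors' bi-SOC model and will not, by itself, show two SOC-like states.

```python
import random

random.seed(7)
N = 1000                      # lattice size (illustrative)
lattice = [0] * N             # 0 = empty, 1 = occupied
avalanche_sizes = []

def cluster_bounds(i):
    """Extent of the contiguous occupied cluster containing cell i."""
    lo = i
    while lo > 0 and lattice[lo - 1]:
        lo -= 1
    hi = i
    while hi < N - 1 and lattice[hi + 1]:
        hi += 1
    return lo, hi

for step in range(200_000):
    i = random.randrange(N)   # slow drive: one incoming particle per step
    if lattice[i] == 0:
        lattice[i] = 1        # an empty cell becomes occupied
    else:
        lo, hi = cluster_bounds(i)
        avalanche_sizes.append(hi - lo + 1)
        for j in range(lo, hi + 1):    # avalanche: the whole cluster empties
            lattice[j] = 0

density = sum(lattice) / N
print(density, max(avalanche_sizes))
```

Recording `avalanche_sizes` over a long run gives the avalanche-size statistics; in the authors' model, tracking the density of occupied cells over time is what reveals the migration between the two steady states.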

Bionic Engineering Governance as the dominant success factor for Systems EngineeringMarkus JungingerTuesday, 14:00-15:20

– Growing complexity in product engineering in large-scale OEM engineering operations –

The car industry has come under massive public pressure, found guilty of polluting the environment, lying about the efficiency of its products, and taking an active role in decreasing the wellbeing of inner cities. The irrational and harmful cry for safer, cleaner, and more electric cars, matched with an unprecedented expectation of individualization and even more comfort in the cars, is having a massive impact on the industry.
Additional services and safety standards, such as remote updating, autonomous driving, and entertainment connectivity, are taking their toll on the established engineering organizations, processes, and the individual engineer. The number of hardware variants to be offered to downstream production processes easily amounts, for a standard car such as the VW Golf, to the range of 7 to 8 million. This number explodes exponentially if electronic and software variants are added to the equation.
Development processes are getting longer and longer due to the need to implicitly buffer for time and cost overruns. The increase in costs cannot be absorbed by higher retail prices and erodes car companies' profits. Beyond the increase in overall cost, customers have only limited patience to wait for their car and expect a competitive flow of new models. The car industry is a fashion industry.
One of the most prevailing symptoms of proliferating complexity is the baffling inability of engineers to explain the emergent behavior of the complex systems being put together. The adoption of Systems Engineering and Model-Based Systems Engineering had been identified as the silver bullet to tame internal complexity. Largely neglected has been the fact that the complexity of the product needs to be catered for and absorbed by the complexity of the engineering organization. The hierarchical organization and command-and-control management systems, despite having been efficient in the past, render the whole undertaking ineffective and fragile.
The paper presented will touch on these experiences, explain the challenges, and describe the road to solutions that are delivering significant gains in due-date performance and superior engineering systems and products.
At the core of the program is the working principle of Ashby's law of requisite variety. A massive takeaway for senior management was the recognition of the abyss between the power of decision making and the technical competence to exercise it.
The principle of recursion between independent levels of autonomous control reflects the basic building blocks of a central nervous system. The lower levels of recursion prevent the higher levels from complete overload and from falling into the trap of human nature: ignoring the most obvious risks.
In order to create a language of control, and to make the narrow local optimization characteristic of human nature less harmful, we went back to Stafford Beer and his viable system model, which had been published in its first draft after a country-wide application in Chile under the presidency of Salvador Allende.
The institutionalization of an “Engineering Craftsmanship” is helping the leading engineers to reconnect to the wholeness of the engineering value chain, addressing redundancies under lifecycle criteria and reducing information latency early in the process.
Systems Engineering can now deliver on its hailed promises, and human nature, as a factor of both ingenuity and ignorance, is not left to a Darwinian process.

Bitcoin ecology: Quantifying and modelling the long-term dynamics of the cryptocurrency marketAbeer Elbahrawy, Laura Alessandretti, Anne Kandler, Romualdo Pastor-Satorras and Andrea BaronchelliWednesday, 14:00-15:20

The cryptocurrency market surpassed the barrier of 100 billion market capitalization in June 2017, after months of steady growth. Despite its increasing relevance in the financial world, however, a comprehensive analysis of the whole system is still lacking, as most studies have focused exclusively on the behaviour of one (Bitcoin) or few cryptocurrencies. Here, we consider the history of the entire market and analyse the behaviour of 1,469 cryptocurrencies introduced between April 2013 and June 2017. We reveal that, while new cryptocurrencies appear and disappear continuously and their market capitalization is increasing (super-) exponentially, several statistical properties of the market have been stable for years. These include the number of active cryptocurrencies, the market share distribution and the turnover of cryptocurrencies. Adopting an ecological perspective, we show that the so-called neutral model of evolution is able to reproduce a number of key empirical observations, despite its simplicity and the assumption of no selective advantage of one cryptocurrency over another. Our results shed light on the properties of the cryptocurrency market and establish a first formal link between ecological modelling and the study of this growing system. We anticipate they will spark further research in this direction.
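The neutral model invoked above can be sketched as a standard Wright-Fisher copying process with innovation: each unit of market capitalization copies a randomly chosen existing currency, or, with a small probability, a brand-new currency enters. The population size, innovation rate, and run length below are illustrative assumptions of this sketch, not fitted values from the paper.

```python
import random
from collections import Counter

random.seed(42)
N = 500          # units of market capitalization (illustrative)
mu = 0.01        # probability a unit goes to a brand-new currency
generations = 300

population = [0] * N          # start with a single currency
next_label = 1
for _ in range(generations):
    new_pop = []
    for _ in range(N):
        if random.random() < mu:
            new_pop.append(next_label)                 # innovation: new currency enters
            next_label += 1
        else:
            new_pop.append(random.choice(population))  # neutral copying: no selective advantage
    population = new_pop

shares = Counter(population)   # final market-share distribution
print(len(shares))             # number of "active" currencies at the end
```

Despite every currency being statistically equivalent, a model of this kind maintains a roughly stable number of active types and a steady turnover, the kind of empirical regularities the abstract reports.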

Bitcoin is robust by design, but cryptocurrency markets are fragile systemsPercy Venegas and Tomas KrabecTuesday, 18:00-20:00

Value in algorithmic currencies resides literally in the information content of the calculations; but given the constraints of consensus (security drivers) and the necessity for network effects (economic drivers), the definition of value extends to the multilayered structure of the network itself – that is, to the information content of the topology of the nodes in the blockchain network, and to the complexity of the economic activity in the peripheral networks of the web, mesh-IoT networks, and so on. It is in this boundary between the information flows of the native network that serves as the substrate to the blockchain and those of real-world data that a new “fragility vector” emerges. Our research question is whether factors related to market structure and design, transaction and timing cost, price formation and price discovery, information and disclosure, and market maker and investor behavior are quantifiable to a degree that can be used to price risk in digital asset markets. We use an adaptive artificial intelligence method based on evolutionary algorithms to study the adaptive system of cryptocurrency markets. The results obtained show that, while in the popular discourse blockchains are considered robust and cryptocurrencies anti-fragile, cryptocurrency markets are in fact fragile. This research is pertinent to the regulatory function of governments, which are actively seeking to advance the state of knowledge regarding systemic risk and to develop policies for crypto markets, and to investors, who need to expand their understanding of market behavior beyond explicit price signals and technical analysis.

The brain gut microbiome axis in Fragile X syndromeFrancisco Altimiras, Bernardo Gonzalez and Patricia CogramTuesday, 18:00-20:00

The human microbiome is the internal ecosystem of microorganisms that live in the human body. The gut microbiome represents the part of the human body with the greatest abundance and diversity of microorganisms. Several works have recently established the importance of the gut microbiome and its essentiality for human health, in association with metabolic functions, the immune and nervous systems, and even behaviour. Fragile X syndrome (FXS) is a genetic condition that causes a range of nervous system and behavioural problems and is the most common known cause of inherited autism. FXS is characterized by intellectual disability, behavioural and learning challenges, and various physical characteristics. For the study of FXS, the fmr1 gene knockout (fmr1-KO2) mouse reproduces most of this phenotype and represents a preclinical model for the identification of new biomarkers and the assessment of potential drug treatments for FXS. This research focused on the application of different bioinformatics methods to identify potential interactions between the gut microbiome and the brain in FXS, paying attention to their influence on behaviour and gene expression. Data analysis and integration of behavioural tests, brain transcriptome experiments, and the microbial taxonomy of the FXS model (the fmr1-KO2 mouse compared with wild-type controls) were used to improve our knowledge of FXS. Different behavioural tests were applied to evaluate the performance of these animals, including the open field, successive alleys, contextual fear conditioning, social recognition, and two species-typical behaviour tests. The fmr1-KO2 mouse showed several behavioural deficits in comparison with wild-type animals, associated with learning, memory, anxiety, hyperactivity, and social interaction, as observed in human FXS patients, as well as differences in species-typical behaviours as a measure of hippocampal dysfunction.
Multiple brain transcriptome datasets from the fmr1-KO2 mouse were analysed and compared with wild-type controls. Several differentially expressed genes were associated with the immune system and were subsequently used for gene ontology analysis, identifying affected gene pathways. Microbial analysis using 16S ribosomal RNA gene sequencing was applied for the taxonomy and diversity characterization of the fmr1-KO2 gut microbiome. The main objective was to identify potential links of the fmr1-KO2 gut microbiome with the behavioural and brain transcriptome data. A distinctive profile of the fmr1-KO2 gut microbiome in comparison with wild-type controls was found, characterized by different bacterial groups, including increased levels of the Akkermansia and Sutterella genera. The interactions of these microorganisms with the host are largely unknown, and their influence on health remains unclear. A potentially harmful mechanism of interaction may be related to the excessive mucin degradation produced by these intestinal bacterial groups. Altogether, the findings of this research contribute to a better understanding of the FXS pathology, to the characterization of the fmr1-KO2 mouse model, and to proposing novel possible interactions between the brain and gut microbiome in FXS.

Bursting the Filter Bubble: Strategic Diffusion between Dissimilar CommunitiesMarcin Waniek, César Hidalgo and Aamena AlshamsiTuesday, 14:00-15:20

Homophily is the tendency of individuals to associate with those who are similar to them. Yet, if individuals are more likely to accept ideas or behaviors from those who are similar to them, homophily can limit the spread of ideas or behaviors, giving rise to echo chambers or filter bubbles. Here we ask, how can we spread an idea or behavior more effectively when the probability of transmission is affected by homophily?

Previous literature on the diffusion of information or behaviors has focused on either weak links between communities, or thick bridges, when spreading requires social reinforcement. Despite these advances, we know little about how to build these bridges when information diffusion is not only limited by social reinforcement, but also, by homophily.

In this work we investigate how to build bridges between communities to maximize the diffusion of information or behaviors in a network where social reinforcement and homophily modulate diffusion. We model homophily by assigning to each node a vector of attributes; links between similar nodes have a higher weight in the diffusion of information. We solve the problem of selecting the minimal number of edges that should be added to the network to optimize the speed of contagion in the whole network.

Since finding an optimal solution is an NP-hard problem, we focus on the effectiveness of various polynomial-time heuristic strategies. We base these algorithms on both topological features, such as node degree, and the similarity of nodes in terms of their characteristics. Our strategies connect pairs of nodes from different communities that are either the most similar or the least similar to each other in terms of their attributes. Another criterion for choosing pairs of nodes is based on either the maximal or minimal sum of their degrees.

Our results suggest that the best way of building a bridge between communities depends both on the type of network topology considered and on the distribution of attribute values in the different communities. We found that the most effective strategy selects a small group of already-linked individuals in each community that are similar to each other, and then connects these two groups into one densely connected cluster, thus forming a bridge.
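One of the similarity-based heuristics described above can be sketched as follows: score all cross-community node pairs by attribute similarity and add edges between the top-k pairs. The toy communities, attribute model, and similarity function below are illustrative assumptions of this sketch, not the authors' experimental setup.

```python
import random

random.seed(3)

# Two toy communities; each node carries a 4-dimensional attribute vector
# clustered around a community-specific center (illustrative).
def make_community(n, center):
    return {f"c{center}_{i}": [center + random.gauss(0, 0.3) for _ in range(4)]
            for i in range(n)}

A = make_community(10, 0.0)
B = make_community(10, 1.0)

def similarity(u, v):
    # negative squared Euclidean distance: higher means more similar
    return -sum((a - b) ** 2 for a, b in zip(u, v))

def most_similar_bridge(A, B, k):
    """Heuristic: connect the k most similar cross-community node pairs."""
    pairs = [(similarity(attrs_a, attrs_b), a, b)
             for a, attrs_a in A.items()
             for b, attrs_b in B.items()]
    pairs.sort(reverse=True)                 # best (most similar) pairs first
    return [(a, b) for _, a, b in pairs[:k]]

bridge = most_similar_bridge(A, B, 3)
print(bridge)
```

Swapping `reverse=True` for ascending order gives the least-similar variant, and replacing the similarity score with the sum of node degrees gives the degree-based criteria the abstract mentions.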

Can adoption of rooftop solar panels trigger a utility death spiral? A tale of two U.S. citiesIqbal AdjaliTuesday, 14:00-15:20

The growing penetration of distributed energy generation (DEG) is causing major changes in the electricity market. One key concern is that existing tariffs incentivize ‘free riding’ behavior by households, leading to a cycle of rising electricity prices and DEG adoption, eroding utility revenues and starting a death spiral. We developed an agent-based model using data from two cities in the U.S. to explore this issue. Our model shows that worries about a utility ‘death spiral’ due to the adoption of rooftop solar, under current policies and prices in the U.S., are unfounded. We found, consistently across a number of scenarios, that while the residential segment is impacted more heavily than the non-residential segment, the scale of PV penetration is minimal in terms of overall demand reduction and subsequent tariff increases. Also, the rate of adoption would probably be smooth rather than sudden, giving the physical grid, the utility companies, and government policies enough time to adapt. Although our results suggest that fears of a utility death spiral from solar systems are premature, regulators should still monitor revenue losses and the distribution of losses from all forms of DEG. These concerns should lead to more focus on tariff innovations.

Can Complex Systems Science Help Rationalize Cross-Border Governance in Cyberspace?Cameron F. KerrySunday, 11:40-12:20

Microsoft’s Satya Nadella recently spoke of “the world as a computer.” This discussion asks: What is the operating system of this computer? Who sets the rules? And how? These questions have taken on greater political and economic importance in international relations as global connectivity and reliance on information and communications technology increase. The discussion explores the various dimensions of the questions from a law and public policy perspective in order to invite contributions from across diverse fields to the understanding of the complex systems at play.

The world as a computer is conceivable because of our networked world, and the resulting information Big Bang. Moore’s Law on the doubling of processor power every two years may be reaching diminishing returns, but it is being compounded by (1) the multiplication of computers in billions of devices generating and using data, (2) increases in the capacity to store that data, (3) increased bandwidth enabling wider and faster data transmission, and (4) more powerful software to manage and analyze all that data, aided by machine learning.

Because information flows cross national borders, they strain established forms of nation-state control over the content of information and, to a lesser extent, over the means of transmission exercised within sovereign territories, and generate conflicts when states assert extraterritorial jurisdiction over these movements or act outside their territories. The national interests involved are various – national security, economic development, consumer protection, social control, geopolitical ambition, among others – and mechanisms and norms to address transnational concerns are evolving. This evolution is taking place horizontally on broad questions of “who governs the internet,” and vertically with regard to specific issues such as intellectual property or cyber-warfare.

Privacy/data protection and cybersecurity are two notable areas of such conflicts. Privacy has been a source of conflict between the US and EU in particular. The EU regulates to which countries personal information can be transferred and, in the wake of the Snowden leaks on US intelligence surveillance, the Court of Justice of the European Union invalidated an agreement between the US and EU enabling data transfers. Negotiation of a new agreement involved an EU review of US privacy laws and of US laws and practices on government access to information. This process is repeating itself as the EU negotiates new bilateral agreements or reviews existing agreements with countries such as Japan, South Korea, Israel, and Canada (and, coming soon with Brexit, the UK). At bottom, such agreements are a sui generis form of trade agreement negotiated between governments, with unstructured consultation with a variety of stakeholders.

In cybersecurity, national responses are evolving, and international norms with them. In important respects, cybersecurity presents greater transnational challenges than privacy because of the threat actors that are able to operate outside their national borders and because of shared vulnerabilities across global networks. In turn, many countries and organizations (companies in particular) have common interests because they rely on the same systems, same software and hardware, and face the same vulnerabilities and threats.

Cybersecurity therefore presents a significant opportunity for collaboration and development of international norms across borders and across nongovernmental sectors. Such conversations are progressing episodically on various tracks – multilateral and bilateral government discussions and other discussions in a variety of public-private or nongovernmental forums. These operate in ways that are both complementary and conflicting.

The study of governance of transnational norms and policies has been largely the province of law and public policy specialists, with the exception of technical standards and protocols. My hypothesis is that understanding of the challenges in this arena can be helped by additional disciplines to understand the interaction of the numerous different systems involved – from mapping networks, to gauging the traffic that runs on these networks, to the social architecture of the various stakeholders in the decision making. This will not solve the difficult political problems presented, but may help to chart a path toward solutions.

Characterizing cues for collective construction by Macrotermes termitesDaniel Calovi, Nicole Carey, Ben Green, Paul Bardunias, Scott Turner, Radhika Nagpal and Justin WerfelWednesday, 14:00-15:20

Termites build meters-high, complex mounds that play a role in functions like nest atmospheric regulation as well as protection against predators. These mounds are built by the collective actions of millions of independent, decentralized workers. By contrast, traditional human engineering projects are built using extensive preplanning and central coordination of effort. An understanding of how low-level rules result in high-level system outcomes could help us understand the functioning of many natural systems, as well as elucidating design principles useful in creating artificial distributed systems. However, a limiting factor in this undertaking is a lack of data on low-level termite construction behavior.

To advance our understanding of mound-building termite behavior and the mechanisms termites use to coordinate their activity, we conducted experiments at a field site in Namibia. These studies place termites in known conditions and observe their behavior using visual recording, real-time 3D scanning, and automated tracking of individuals. This work is leading to a revision of the classic understanding, which is based on a hypothesized “cement pheromone” whereby deposited soil contains a chemical that triggers further deposition. Instead, our studies point to other factors having a more important influence on termite actions, with chemical signaling playing a secondary or absent role in early construction activity. One primary organizing mechanism is based on excavation: digging sites provide templates for soil deposition and focus activity, with new workers attracted according to the number of active excavators already present. Another strong cue is local soil geometry, with surface curvature strongly predictive of where building activity occurs.

This work helps to build a more complete picture of termite building behavior, which will help further our understanding of collective systems in nature as well as providing principles to inform future artificial collectives.

References:

B. Green, P. Bardunias, J. S. Turner, R. Nagpal and J. Werfel. Excavation and aggregation as organizing factors in de novo construction by mound-building termites. Proceedings of the Royal Society B, 284, 20162730 (2017).

N. Carey, R. Nagpal and J. Werfel. Fast, accurate, small-scale 3D scene capture using a low-cost depth sensor. 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, 1268-1276 (2017).

City Scanner: Analysis of "drive-by" sensing
Kevin O'Keeffe, Carlo Ratti and Paolo Santi
Thursday, 14:00-15:20

Today's cities contain myriad sensors. Capable of measuring quantities as diverse as water quality, traffic congestion, and noise pollution, these sensors empower urban managers to monitor a city’s health with unprecedented scope. Yet in spite of their ubiquity, traditional sensors have distinct limitations. Airborne sensors scan wide areas, but only during limited time windows. Land-based sensors have complementary properties; they collect data over long periods of time, but with finite spatial range. In this work we analyze mobile sensors, whose coverage is good in both space and time. Such sensors "piggy-back" on third-party vehicles (taxis/buses/garbage trucks), which explore the spatiotemporal profile of a city as they execute their default (non-sensing) functions. We mathematically examine the feasibility of this “drive-by sensing” approach, and show that a remarkably small number of vehicles need to be tagged to scan an appreciable fraction of a city (e.g. ~10 taxis cover ~50% of Manhattan's street network). We support our analysis with taxi data from NYC, Chicago, Vienna, Singapore, and Shanghai. Our results have direct utility for city planners, councilors, and other decision makers who wish to maintain functional urban environments.
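The rapid saturation of coverage with fleet size can be illustrated with a deliberately crude sketch, not the paper's model: each vehicle's trip is treated as a set of uniformly random street-segment visits, and the segment count and trip length below are illustrative assumptions.

```python
import random

def covered_fraction(n_vehicles, n_segments, trip_length, rng):
    """Toy drive-by-sensing model: each vehicle visits `trip_length` street
    segments chosen uniformly at random (a crude stand-in for real taxi
    routes); return the fraction of segments seen by at least one vehicle."""
    seen = set()
    for _ in range(n_vehicles):
        seen.update(rng.randrange(n_segments) for _ in range(trip_length))
    return len(seen) / n_segments

rng = random.Random(42)
# Coverage rises steeply with the first few vehicles, then saturates.
curve = [covered_fraction(k, 1000, 200, rng) for k in (1, 5, 20)]
```

Even this uniform-sampling caricature reproduces the qualitative point: marginal coverage per added vehicle shrinks quickly, so a small fleet already scans a large fraction of segments.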

CityScope: A Data-Driven Interactive Simulation Tool For Urban Design. Use Case Volpe
Luis A. Alonso-Pastor, Yan Ryan Zhang, Arnaud Grignard, Ariel Noyman, Yasushi Sakai, Markus Elkatsha, Ronan Doorley and Kent Larson
Thursday, 14:00-15:20

The MIT City Science (CS) group studies the interaction of social, economic and physical characteristics of urban areas to understand how people use and experience cities, with the goal of improving urban design practices to facilitate consensus between stakeholders. Long-established processes of engagement around urban transformation have relied on visual communication and complex negotiation to coordinate stakeholders, including community members, administrative bodies and technical professionals. The City Science group proposes a novel methodology of interaction and collaboration called CityScope, a data-driven platform that simulates the impacts of interventions on urban ecosystems prior to detailed design and execution. As stakeholders collectively interact with the platform and understand the impact of proposed interventions in real time, consensus building and optimization of goals can be achieved. In this article, we outline the methodology behind the basic analysis and visualization elements of the tool and its tangible user interface, to demonstrate an alternative approach to urban design strategies as applied to the Volpe Site case study in Kendall Square, Cambridge, MA.

Classification and prediction of the fourth industrial revolution technologies: Analyzing the patent citation network
Ohsung Kwon, Sangmin Lim and Duk Hee Lee
Tuesday, 18:00-20:00

The fourth industrial revolution (4IR) is represented by super-connected and super-intelligent technologies, such as artificial intelligence, the Internet of Things, 3D printing, virtual and augmented reality, big data analytics and cloud computing. Technological innovation has recently become faster and more disruptive, making the direction of emerging technologies more difficult to predict. Nonetheless, predicting new technology has become more important from both micro and macro perspectives, as it is connected with the profits of firms as well as the management of the national innovation system. Analyzing the patent citation network is a suitable methodology for providing insights into the direction of technology, so we use USPTO patent data to compose the network. According to our results, first, there are clusters for each of the technologies. Second, some specific technologies cluster together, so that the emergence of new technology groups can be observed. The results provide insight into emerging 4IR technologies that firms and authorities can exploit.

Collective Intelligence and its Use in Preparing for Black Swans
Rebecca Law, Garth Jensen and Matthew Largent
Tuesday, 14:00-15:20

As described by Nassim Taleb (Taleb, 2007), Black Swan events are unanticipated events that have a large effect or impact and are incorrectly rationalized after the fact as something that should have been predicted. In 2015, the authors designed a MMOWGLI game focused on the components of Black Swan events. MMOWGLI (Massively Multiplayer Online War Game Leveraging the Internet) is a collective intelligence platform that uses a conversation-based method of interaction to encourage collaboration in addressing difficult or wicked problems. The game centered on identifying the elements of Black Swan events (called precursors) and then determining ways to anticipate, and to be antifragile (Taleb, 2012) to, these precursors. This collaboration of players from around the globe allowed for a broader exploration of the space of possibilities than would come from a monolithic group or organization. This paper highlights some of the examples from the Black Swan MMOWGLI game that indeed came to fruition later, examines the concept of collective intelligence as a low-cost method for identifying potential Black Swan events, and reflects on the 2015 MMOWGLI game as a model for that identification process.
Taleb, N. N. (2007). The black swan: The impact of the highly improbable (Vol. 2). Random House.
Taleb, N. N. (2012). Antifragile: Things that gain from disorder (Vol. 3). Random House Incorporated.

Combining preferences, rewards, and ethical guidelines in AI systems
Francesca Rossi
Friday, 13:40-14:20

Recently, considerable attention has been devoted to ethical issues arising around the design, implementation, and use of artificial agents. This is because humans and machines increasingly collaborate to decide on actions to take or decisions to make. Such decisions should not only be correct and optimal with respect to the overall goal to be reached, but should also conform to some form of shared ethical principles aligned with human ones. Examples of such scenarios can be seen in autonomous vehicles, medical diagnosis support systems, and many other domains where humans and artificial intelligent systems cooperate. In this talk I will discuss the possible use of compact preference models as a promising approach to model, reason about, and embed ethical principles in decision support systems. I will also describe an approach that combines online reward-based decision systems with ethical policies.

Communicating and Performing Simulation Validation Methodically
Megan Olsen and Mohammad Raunak
Wednesday, 14:00-15:20

Modeling and simulation is used in many fields to study interesting and complex problems, including biology, sociology, psychology, computer science, and many more. For the results of a simulation model to be trusted, the model must be validated. Validation in this case refers to determining whether the simulation model adequately represents the system being studied. For instance, a simulation of gossip propagation should correctly represent the social network among people, their tendency to share gossip, and how accurate that gossip tends to be over time. Only when the model accurately represents that real-world system can “what if” questions be asked of it, to explore scenarios that may be difficult to study in the real world.

Many techniques exist for validating simulation models, and they have been discussed for a number of decades. Some techniques are readily available to modelers, such as the animation abilities of NetLogo, whereas others require significant data, as is the case with ‘results validation’, where a simulation model’s output is compared with data from the real world. The maturity of a scientific field is often measured by the level of reuse achieved by the researchers and practitioners of that field. In the case of modeling and simulation, the reuse of a simulation model is directly dependent on how well the model matches the real-world system or the system being studied. How well the model matches the studied system is closely tied to the simulation’s validation. However, we do not have a standard for discussing the level of validation of a simulation model. In fact, many papers on simulation models either do not discuss whether the model has been validated, or defer validation to later work.

We propose a framework for quantifying the amount of validation performed on a simulation model, such that it is possible to answer the question of how well the simulation matches the system being studied in a consistent way across all simulation models. Our framework provides guidelines on determining the structure, behavior, and data that should be validated, a mechanism for tracking successful validation, and a metric for calculating the level of confidence gained in the model via validation for agent-based and discrete-event simulation models. Additionally, we provide a web-based tool to aid in applying this framework to a model. This work provides a suggested template for discussing validation within simulation papers, which is currently lacking in the field. With this framework, we provide one aspect that is necessary to treat our published simulation models as trustworthy and re-usable.

Comparative Online Behavior Analysis of Extremist Groups and Correlation with Violent Real-world Events
Juan Botero, Minzhang Zheng, Thomas Holt, Joshua Freilich, Steven Chermak and Neil Johnson
Tuesday, 15:40-17:00

Hate, extremist and supremacy groups come from different ideologies, but share a common theme of extreme rhetoric and behavior — and, ultimately, possible violent events. Such groups have been widely studied in the social sciences, but less so from the perspective of disciplines associated with Complex Systems. Here we attempt to address this gap by analyzing their behaviors on online platforms. Such online environments present a unique platform for the expression of radical beliefs, since the global nature of the Internet enables individuals to examine and interact with messages and beliefs that are in opposition to their daily lives. Yet it is unclear how far-right radical messaging gets transmitted to others via web forums. Inside online forum communities, participation follows a J-curve, with the majority of respondents posting infrequently. Such patterns of use suggest there may be considerable noise produced in these communities, with frequent posters potentially generating content that blurs the perceived value of signals generated by infrequent posters. This has particular importance when examining radical messaging, as infrequent posters who espouse radical beliefs may be connected to the broader forum population via frequent posters who amplify their messaging.

In this work, we carry out an analysis of data from three online alt-right forums: NSM, Tightrope and White News. We propose a time-directed network analysis in order to recognize pathways of possible causality between postings and individuals, using a classification of posts based on the number of posts on the days before and after. We identify users according to their behavior and posts according to their ideology. We show that this identification helps to effectively track and even forecast events based on who posts, and/or on the ideology of the post.

We also compare the time series of the web forums to a dataset of real-world terrorist events of an alt-right nature. Real-world terrorist attacks seem to have a short-term (up to two weeks) causal effect on Internet activity, while behavior on the Internet seems to have a medium-term (between two weeks and six months) effect on terrorist attacks. Finally, in the long term (half-yearly and yearly scales), the number of posts on the Internet is highly correlated with the number of terrorist events perpetrated by alt-right groups. In short, this provides three important time scales on which the dynamics in extremist Internet forums seem to interplay with violent events in the real world.

Finally, we find common patterns between online violence promoters, particularly between Alt-Right Internet forums and pro-ISIS groups in the social network VK. We identify that in both situations the posting activity follows an approximate power-law with a scale parameter close to 2. They also exhibit similar boundaries in a Burstiness-Memory phase diagram.
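The burstiness-memory coordinates mentioned above are conventionally computed from the inter-event times of a posting sequence (the Goh-Barabási construction; the Poisson test sequence below is illustrative, not the forum data):

```python
import numpy as np

def burstiness_memory(event_times):
    """Burstiness B = (sigma - mu)/(sigma + mu) over inter-event times, and
    memory M = Pearson correlation between consecutive inter-event times:
    the two coordinates of the burstiness-memory phase diagram."""
    taus = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    mu, sigma = taus.mean(), taus.std()
    b = (sigma - mu) / (sigma + mu)
    m = np.corrcoef(taus[:-1], taus[1:])[0, 1]
    return b, m

# A memoryless Poisson process sits near the origin of the B-M diagram;
# bursty, correlated activity (as in the forums) moves away from it.
rng = np.random.default_rng(0)
poisson_times = np.cumsum(rng.exponential(1.0, size=5000))
b, m = burstiness_memory(poisson_times)
```

Groups occupying similar regions of this (B, M) plane, as the abstract reports for alt-right forums and pro-ISIS groups, share both heavy-tailed timing and correlation structure.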

A Comparative Study of Various AI Based Cancer Detection Techniques
Sujatha Alla, Christopher Knight, Akshara Kapoor and Leili Soltanisehat
Thursday, 14:00-15:20

As artificial intelligence and learning techniques make great leaps in the detection of abnormalities in tumors, it is important to determine which technique is most efficient in terms of accuracy. This in turn will allow for timely detection and reduce physician burnout. Supervised learning, deep learning, and data visualization techniques are some of the common methods used to improve the accuracy of cancerous tumor detection. Choosing the best detection method can be a complex problem due to the different characteristics of the methods and the features of the data. This paper analyzes the literature and develops a comprehensive comparison between various models that can differentiate between malignant and benign breast tumors. The results will help determine which method is most accurate for diagnosing the cancer stage. To validate the findings, data sets will be extracted from the Cancer Imaging Archive (CBIS-DDSM) and tested with the chosen methods.

Comparison of brain vasculature network characteristics between wild type and Alzheimer’s disease mice
Mohammad Haft-Javaherian, Victorine Muse, Jean Cruz Hernández, Calvin Kersbergen, Iryna Ivasyk, Yiming Kang, Gabriel Otte, Sylvie Lorthois, Chris Schaffer and Nozomi Nishimura
Tuesday, 18:00-20:00

There is a strong clinical correlation between Alzheimer’s disease (AD) and microvascular disorders. Reliable delivery of oxygen and nutrients to neurons is vital for brain health, favoring a robust, redundant vascular network. However, growing and maintaining blood vessels costs space and resources. In mouse models of AD, our lab has found blood flow dysfunction in brain capillaries, suggesting the need to study the structure of vascular networks at the capillary level. Tools to quantitatively describe and compare these complex structures are lacking. Here, we use network analysis to characterize the connectivity of brain capillary networks in AD and control mice.

We imaged cortical vascular networks using in vivo two-photon excited fluorescence microscopy through a cranial window in APP/PS1 (AD) and littermate wild type (WT) mice (3 mice per group; ~1,000 vessels per mouse) and extracted the vascular networks. Two metrics suggested interesting differences between WT and AD mice. The average shortest path length is the mean of the smallest number of vessel segments that joins all pairs of vessel junctions. In AD animals this metric was 10.5 ± 0.6 vessels (mean ± standard deviation), 8% lower (p = 0.07, t-test) than in WT animals (11.4 ± 0.7 vessels). The average clustering coefficient measures the tendency of vessels in the network to group together; higher numbers imply that vessels and their neighbors have more connections to each other. In AD animals this metric was 37% (p = 0.09) lower than in WT animals. This metric is related to the redundancy of network connections, which enables the vascular system to maintain blood supply even with occluded vessels, suggesting that capillary networks in AD mice are less connected and less redundant than in WT animals. These metrics further provide a quantitative way to compare vascular topology across animals and disease states.
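Both metrics are standard graph quantities and can be computed directly from an adjacency structure; the sketch below uses a toy graph (a triangle attached to a short chain), not the authors' vascular data:

```python
from collections import deque
from itertools import combinations

def average_shortest_path(adj):
    """Mean shortest-path length over all connected node pairs,
    via breadth-first search from every node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

def average_clustering(adj):
    """Mean local clustering coefficient: for each node, the fraction of its
    neighbour pairs that are themselves connected (0 if fewer than 2 neighbours)."""
    coeffs = []
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

# Toy network: triangle 0-1-2 plus chain 2-3-4.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
```

On the toy graph the average shortest path is 1.7 and the average clustering is 7/15; in the study, lower values of the latter in AD mice indicate fewer redundant loops in the capillary network.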

In order to study how the brain blood flow is affected during the progression of AD, we developed a novel formalism to compare and describe the differences in the brain capillary vascular networks. Further investigation with larger datasets is underway to make more solid conclusions using this new formalism.

Comparison of multipartite networks to projected networks and generating synthetic network models
Hyojun Lee and Luis Amaral
Wednesday, 14:00-15:20

The past four decades have witnessed tremendous developments in our understanding of complex networks. While the initial focus was on unweighted, undirected networks, it has widened to include weighted, directed and multiplexed networks. However, the theoretical study of large complex networks has remained focused primarily on “static” networks, that is, networks that do not change with time.
A particularly important type of network that has not received the attention it deserves is the multipartite network. Systems that are most naturally represented as multipartite networks include metabolic reactions, collaborations, romantic relationships, and product recommendation systems. These systems comprise multiple types of nodes — movies, actors, directors, producers, and so on, in movie-production networks — and edges can only connect nodes of different types — actors connect through movies.
Multipartite networks are typically analyzed after projection onto a single type of nodes. However, the unipartite projection of a multipartite network is dramatically less information-rich than the original network. Moreover, for many multipartite networks, at least one type of nodes has a strong temporal component, that is, these nodes present discrete timestamps and finite durations — movies have a release date, and a production schedule.
Recent studies have recognized that the temporal properties of a network are also crucial for accurately characterizing system dynamics. Indeed, in almost every real-world system, the connectivity between individuals changes over time. The effects of this temporal patterning are unlikely to be negligible when studying epidemic outbreaks, opinion formation, or the spread of innovations. However, the vast majority of network-based epidemic models still investigate transmission dynamics using static networks or, at best, sequences of time-aggregated snapshots. This is presumably due to two reasons: it is extraordinarily difficult to obtain large-scale, society-wide data that simultaneously captures dynamic information on individual movements and social interactions; and the complexity of the analysis increases dramatically.
Here, we investigate the extent to which unipartite projections of multipartite networks can be used to accurately predict the transmission dynamics taking place on time-evolving multipartite networks. We simulate transmission dynamics using two well-known transmission models — susceptible-infected (SI) and general contagion (GC) — to investigate the impact of using different unipartite projection methods of a complex multipartite network with temporal properties on the observed system dynamics. We find that the unipartite projections yield grossly incorrect predictions, prompting us to investigate the critical amount of information that must be retained from a multipartite network in order to generate synthetic multipartite networks that accurately reproduce the dynamics observed on the original network.
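The projection step and the SI model described above can be sketched minimally as follows; the toy actor-movie network, node names, and infection probability are illustrative assumptions, not taken from the paper:

```python
import random

def project(bipartite_edges):
    """Unipartite projection of a bipartite network onto the first node type:
    two 'actors' are linked if they share at least one 'movie'."""
    by_movie = {}
    for actor, movie in bipartite_edges:
        by_movie.setdefault(movie, set()).add(actor)
    adj = {}
    for cast in by_movie.values():
        for a in cast:
            adj.setdefault(a, set()).update(cast - {a})
    return adj

def si_spread(adj, seed, beta, steps, rng):
    """Discrete-time SI model: each infected node infects each susceptible
    neighbour with probability beta per step; returns the infected set."""
    infected = {seed}
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and rng.random() < beta:
                    new.add(v)
        infected |= new
    return infected

edges = [("a1", "m1"), ("a2", "m1"), ("a2", "m2"),
         ("a3", "m2"), ("a3", "m3"), ("a4", "m3")]
adj = project(edges)
final = si_spread(adj, "a1", beta=1.0, steps=10, rng=random.Random(1))
```

Note how the projection discards which movie mediated each contact and when it occurred; this is exactly the temporal information whose loss the study shows can make projected-network predictions grossly incorrect.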

Competing Information Spreading Processes over Facebook and Twitter: Preference and Broadcasting
Dong Yang, Yang Lou, Tommy Wai-Shing Chow and Guanrong Chen
Wednesday, 14:00-15:20

Complex network of citations to scientific papers - measurements and modeling
Michael Golosovsky
Tuesday, 18:00-20:00

We consider the network of citations of scientific papers and use a combination of theoretical and experimental tools to uncover microscopic details of this network's growth. Namely, we develop a stochastic model of citation dynamics based on a copying/redirection/triadic-closure mechanism. In a complementary and coherent way, the model accounts both for the statistics of references of scientific papers and for their citation dynamics. We validate the model by measuring the citation dynamics of Physics papers and the age composition of their reference lists. Our model includes the notion of fitness in the sense of Caldarelli et al. We discuss different ways to measure fitness. In particular, we estimate how fitness is related to the timeliness of the paper.
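The copying/redirection mechanism can be sketched as a toy growth model; the parameter values and the simple uniform-or-copy rule below are illustrative assumptions, not the authors' calibrated model (which also includes fitness):

```python
import random

def grow_citation_network(n_papers, refs_per_paper, p_direct, rng):
    """Toy copying/redirection growth model: each new paper picks references
    either uniformly at random among existing papers (probability p_direct)
    or by copying a reference of an already-selected paper; the copying step
    produces an effective rich-get-richer (preferential attachment) bias.
    Returns the citation count received by each paper."""
    references = [[] for _ in range(n_papers)]  # references made by each paper
    citations = [0] * n_papers                  # citations received
    for new in range(1, n_papers):
        chosen = set()
        while len(chosen) < min(refs_per_paper, new):
            target = rng.randrange(new)
            if rng.random() >= p_direct and references[target]:
                target = rng.choice(references[target])  # redirection/copying
            chosen.add(target)
        for t in chosen:
            references[new].append(t)
            citations[t] += 1
    return citations

cites = grow_citation_network(500, 5, 0.5, random.Random(7))
```

Even this minimal version generates a heavy-tailed citation distribution, with a few early papers accumulating far more citations than the mean, which is the qualitative signature the full model reproduces quantitatively.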

Our measurements revealed nonlinear citation dynamics, the nonlinearity being intricately related to network topology. The nonlinearity has far-reaching consequences, including non-stationary citation distributions, diverging citation trajectories of similar papers, and runaways or “immortal papers” with infinite citation lifetime. We show how our results can be used for quantitative probabilistic prediction of the citation dynamics of individual papers.

Complex Networks of Collaborative Reasoning on Value-Laden Topics
Sarah Shugars
Wednesday, 15:40-17:00

Collaborative reasoning - whether successful or unsuccessful - is a core facet of human society. In business, politics, and everyday life, individuals with varying opinions, experience, and information attempt to collaborate and make decisions. If the topic under consideration is factual, the reasoning process can be well-modeled under an explore/exploit framework where agents attempt to find a global optimum given local information [7, 8]. However, if the topic is value-laden, if agents hold their own subjective opinions of the solution space, existing models can neither explain nor predict the process of collaborative reasoning. Yet, a great deal of real-world reasoning relies upon agents’ normative beliefs. In the political realm, for example, a policy solution is only optimal if it results in outcomes an agent would qualify as “good.” Political polarization, in this sense, does not represent agents unable to map the solution space, but rather indicates agents who are unable to come to consensus regarding the contours of the solution space itself. Despite skepticism to the contrary, numerous empirical studies demonstrate that people are able to productively discuss value-laden matters [4, 6, 11], indicating a growing need to understand the conditions under which these conversations succeed.

This paper presents a novel framework for modeling the value-laden process of collaborative reasoning, drawing upon group problem-solving literature as well as work around cultural convergence [13, 2, 5]. We model human reasoning as a complex network where nodes represent beliefs and edges represent the logical connections between those beliefs. This model builds upon scholarship across numerous fields showing that cognitive processes as diverse as reasoning [1], arguing [14], remembering [3], and learning [12] are best modeled as ‘conceptual networks’ in which ideas are connected to other ideas. Through the process of collaborative reasoning, discussants move through their interconnected webs of belief, offering up related arguments and attempting to find disconnects in others’ reasoning. In this deliberative game of ‘giving and asking for reasons’ [10], each player tries to map the others’ belief system, using their own conceptual network to translate the signals they receive from other players.
To model this process, we take the solution space to be an NK landscape, initialized as a weighted, signed network. This rough terrain represents the true connections and trade-offs related to a single policy question. We take the heaviest path through this network as the optimal policy solution, i.e. the best set of policies to efficiently reach a given outcome. Each agent is initialized with their own belief space, representing their personal interpretation of the policy space. Each time step t represents a single speech-act, as each agent either shares an opinion (the weight of a single edge) or receives someone else’s opinion. When an agent receives an opinion, they must decide whether or not to incorporate this new information. Good-faith discussants [9] should incorporate new information into their thinking if it seems reasonable given their existing knowledge. We model this by having an agent move towards a received opinion if there is a positive cosine similarity between the first eigenvector of the agent’s existing beliefs and the first eigenvector of the proposed beliefs.
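The eigenvector acceptance rule described above can be sketched as follows; the step size and the eigenvector sign convention are assumptions of this sketch, not specified in the abstract:

```python
import numpy as np

def leading_eigenvector(W):
    """Leading eigenvector of a symmetric belief matrix, sign-fixed so that
    its largest-magnitude entry is positive (eigenvector sign is otherwise
    arbitrary, which would scramble the cosine test)."""
    vals, vecs = np.linalg.eigh(W)
    v = vecs[:, np.argmax(vals)]
    return -v if v[np.argmax(np.abs(v))] < 0 else v

def receive_opinion(beliefs, i, j, weight, step=0.5):
    """One speech-act: the agent hears an opinion about edge (i, j) and moves
    toward it only if the proposed belief network's leading eigenvector has
    positive cosine similarity with that of the current beliefs."""
    proposed = beliefs.copy()
    proposed[i, j] = proposed[j, i] = weight
    u, v = leading_eigenvector(beliefs), leading_eigenvector(proposed)
    cos = float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    if cos <= 0:
        return beliefs, False  # opinion rejected
    return beliefs + step * (proposed - beliefs), True

# Path-graph belief network; the incoming opinion closes the triangle.
beliefs = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
updated, accepted = receive_opinion(beliefs, 0, 2, 1.0)
```

Here the proposed edge is consistent with the agent's existing belief structure, so the opinion is accepted and the corresponding edge weight moves partway toward the heard value.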

Modeling this with small groups of deliberators, we find that if agents reason together in good faith, they can reach consensus and identify optimal solutions, even when their views are initially divergent. We measure the convergence of agents’ beliefs as the Frobenius distance between their respective belief networks, and we measure the success of the reasoning process as the difference between a group’s selected solution - the path which they believe is optimal - and the ‘true’ path which is optimal from the solution landscape. Interestingly, even when agents fail to converge to the ‘true’ solution space, they may still select good policy solutions – e.g. a deliberating group may ultimately select the right choice for the wrong reasons. Ultimately, this work helps us better understand the dynamic process of collaborative reasoning around value-laden topics.

References
[1] R. Axelrod. Structure of decision: The cognitive maps of political elites. Princeton University Press, 1976.
[2] R. Axelrod. The dissemination of culture: A model with local convergence and global polarization. Journal of Conflict Resolution, 41(2):203–226, 1997.
[3] A. M. Collins and E. F. Loftus. A spreading-activation theory of semantic processing. Psychological Review, 82(6):407, 1975.
[4] J. Fishkin. Reviving deliberative democracy. In “Democracy Gridlocked?” Colloquium. Royal Academy of Belgium, 2014.
[5] N. E. Friedkin, A. V. Proskurnikov, R. Tempo, and S. E. Parsegov. Network science on belief system dynamics under logic constraints. Science, 354(6310):321–326, 2016.
[6] K. R. Knobloch, J. Gastil, J. Reedy, and K. Cramer Walsh. Did they deliberate? applying an evaluative model of democratic deliberation to the oregon citizens’ initiative review. Journal of Applied Communication Research, 41(2):105–125, 2013. ISSN 0090-9882 1479-5752. doi: 10.1080/00909882.2012.760746.
[7] D. Lazer and A. Friedman. The network structure of exploration and exploitation. Administrative Science Quarterly, 52(4):667–694, 2007.
[8] W. Mason and D. J. Watts. Collaborative learning in networks. Proceedings of the National Academy of Sciences, 109(3):764–769, 2012.
[9] H. Mercier and H. Landemore. Reasoning is for arguing: Understanding the successes and failures of deliberation. Political Psychology, 33(2):243–258, 2012.
[10] M. A. Neblo. Deliberative Democracy Between Theory and Practice. Cambridge University Press, 2015.
[11] M. A. Neblo, K. M. Esterling, R. P. Kennedy, D. M. Lazer, and A. E. Sokhey. Who wants to deliberate - and why? American Political Science Review, 104(3):566–583, 2010.
[12] R. J. Shavelson. Methods for examining representations of a subject-matter structure in a student’s memory. Journal of Research in Science Teaching, 11(3):231–249, 1974.
[13] E. H. Spicer. Persistent cultural systems. Science, 174(4011):795–800, 1971. ISSN 0036-8075
[14] S. E. Toulmin. The uses of argument. Cambridge University Press, 1958.

Complex Quantum World: Tower of Scales
Antonina N. Fedorova and Michael G. Zeitlin
Tuesday, 18:00-20:00

We present a family of methods which can describe complex behaviour in quantum ensembles. We demonstrate the creation of nontrivial (meta)stable states (patterns), localized, chaotic, entangled or decoherent, from the basic localized modes in various collective models arising from the quantum hierarchy described by Wigner-like equations. The advantages of such an approach are as follows: (i) the natural realization of localized states in any proper functional realization of a (Hilbert) space of states; (ii) the representation of the hidden symmetry of a chosen functional realization, which describes the (whole) spectrum of possible states via the so-called multiresolution decomposition. The effects we are interested in are as follows:
1. a hierarchy of internal/hidden scales (time, space, phase space);
2. non-perturbative multiscales: from slow to fast contributions, from the coarser to the finer level of resolution/decomposition;
3. the coexistence of the levels of hierarchy of multiscale dynamics with transitions between scales;
4. the realization of the key features of the complex quantum world such as the existence of chaotic and/or entangled states with possible destruction in "open/dissipative" regimes due to interactions with quantum/classical environment and transition to decoherent states.
The numerical simulation demonstrates the formation of various (meta)stable patterns or orbits generated by internal hidden symmetry from generic, highly localized fundamental modes. In addition, we can control the type of behaviour at the purely algebraic level by means of properly reduced algebraic systems (generalized dispersion relations).

Complex Spatio-Temporal Behavior in Chemical Systems
Irving Epstein
Monday, 11:40-12:20

The vast majority of chemists once believed that the second law of thermodynamics essentially forbids any dynamical behavior more complex than monotonic approach to equilibrium in reacting systems. The occurrence of temporal oscillations, spatial pattern formation, waves, chemical chaos, and life itself provide counterexamples to disprove this notion. I will give an overview of complex behavior in a variety of chemical systems and discuss some promising directions for future research in this field.

A Complex Systems Approach to Countering Mind Control, Trafficking and Terrorism
Steven Hassan
Sunday, 10:20-11:00

Destructive mind control is a systematic social influence process that typically includes deception, hypnosis, and behavior modification techniques to subvert an individual’s identity to create a new pseudo-identity that is dependent and obedient.
The Influence Continuum model locates specific values along the spectrum from ethical to unethical influence. Quantitative research on the BITE model has been initiated to show which factors are most important in the control of behavior, information, thoughts and emotions used by destructive cults (pyramid-structured, authoritarian regimes).
Furthermore, the presentation will describe how a complex systems model called the Strategic Interactive Approach can be used to mobilize social networks to empower impacted individuals to reassert their own identity and independence and break free from the pseudo-identity. These networks include policy makers, politicians, educators, health professionals and law enforcement, as well as ordinary citizens. Inoculation through education will be vital, as will specialized far-reaching training programs for each of these critical areas.

Complexity and Ignorance as Determinants of Security in Information Society in the 21st CenturyCzeslaw MesjaszTuesday, 14:00-15:20

“Fake news” – a buzzword which has recently become popular in the language of politics – can be perceived as a symbol of a new phenomenon which is shaping the sociopolitical reality of the modern world. Staying with that term for a moment, a question should be asked: what does “fake news” mean? To what is it related – to absolute truth, or to the constructed truth of post-modern society? Remaining on the ground of moderate constructivism, it may be stated that “fake news” is just another social construct. If so, the process of constructing “fake news” should be studied – the actors, their interpretations and the processes of negotiating intersubjective meaning.
The most significant feature of the modern society is not the quantitative information overabundance understood as the production and necessary reception of measurable information (signals, impulses, etc.), but the need to assign meaning to that superfluous information (sensemaking). In this situation a conjecture is put forward that only broadly defined complexity studies (being familiar with the field, I purposively avoid the term “complexity science”) can be helpful in better understanding the modern society.
The following main conjecture will be presented and scrutinized: under the impact of information overabundance and its consequences, it is not only knowledge but also ignorance that has to be taken into account in studying modern social systems.
The paper is developed upon the following assumptions.
1. All uses of the term “complexity” – whether “hard” complexity based on mathematical modelling, the various types of “soft” complexity built upon analogies and metaphors derived from “hard” complexity, or qualitative ideas of complexity such as Luhmann's – reflect the ignorance of the observer (participant) (Mesjasz, 2010) (“It’s complex, so I am not able to comprehend it to a certain extent”). Therefore more attention should be paid to a deeper analysis of ignorance. A new interpretation of the complexity of social systems relating to ignorance will be proposed.
2. Two types of ignorance can be distinguished: negative ignorance – the lack of knowledge, whatever knowledge may mean – and positive ignorance, which results from the development of science. The more we know, the better we are aware of what we do not know, and the better we are aware that we do not know that we do not know. Examples of this kind are known from the past and are broadly discussed at present (Nicolaus of Cusa, 1985; Smithson, 1989; Ravetz, 1990, 1993; Proctor, 2008; Roberts, 2012).
3. When the constructivist character of knowledge and ignorance is taken into account, the problem of the “structure” and hierarchy of ignorance acquires a new sense. It is not sufficient to realize what is known and what is not known. It is also becoming important why something is not known – why do I not know that I do not know? This concerns knowledge of the characteristics upon which knowledge and ignorance are defined. This idea can be called second-level ignorance, and it is different from ignorance of ignorance (the unknown unknowns). The sense of second-level ignorance can be described as follows. Reflections about knowledge lead to a multi-level hierarchy of recursion – I know that I know that I know... ad infinitum. It is worth recalling that ignorance occurs only at the first and second levels of reflexivity. It is not logical to state: I do not know that I do not know that I do not know (three levels). Not knowing may thus have only two levels. At the first level of ignorance, I know some features of an object of my cognitive process, and of the environment of that object, which allow me to declare what I know and what I do not know about that object (ignorance arising from the absence or incompleteness of knowledge). It is summarized by the sentence: I do not know that I do not know (first level). At the second level, I do not know precisely those characteristics of the object, and the methods of their identification, which would allow me to declare what I know and do not know about the object at the first level (ignorance about knowledge of the sources and methods of gaining knowledge). Thus I try to answer the question: why do I not know that I do not know?
4. A new approach to the complexity of social systems should be based not on the question “what do I (we) know?” but on the question “what and why do I (we) not know, so that I (we) must use the notion ‘complexity’?”
5. A question arises as to how many levels of reflexivity are necessary in studying knowledge and ignorance in social studies. In second-order cybernetics (Foerster, 1982), which considers the role of the observer, and in modern sociology, which includes the role of reflection (reflexivity), analysis usually ends at the second level – the meta-level, or double hermeneutics (Giddens, 1993). Higher levels of reflection about knowledge could be applicable in more advanced hermeneutical discourse. In this paper only two levels of reflexive knowledge and two levels of reflection upon ignorance are proposed.
Applying the above assumptions about the reinterpretation of complexity, a new universal concept is proposed, which can be called the Complexity-and-Ignorance-Sensitive Systems Approach (incidentally, it has the symbolic acronym CAISSA). It may have multiple applications in studying complex social systems. When applied in modern management it can be called CAISM (Complexity-and-Ignorance-Sensitive Management) – not Ignorance Management, as was once proposed (Gray, 2003), since that is logically incorrect.
The proposed concept, resulting from long-standing research on social complexity, is not designed as another far-reaching “theory of everything” in social studies. It is a modest attempt based upon cautiousness and self-criticism.
The main aim of the paper is to present an application of this new approach based upon complexity research in studying new security challenges in modern society affected by the information overabundance.
The following example concerning the role of social complexity and ignorance will be developed.
First, the meaning of the terms “security of information society” and/or “security in information society” will be explained. These concepts are not identical with the security of IT systems, cyber-security, etc., but rather concern the negative consequences of information overabundance. It will be shown how two-level self-reflexive knowledge and two-level ignorance lead to situations that are harmful to specific social collectivities, e.g. ignorance about environmental threats, self-delusion before economic crises, etc. It will be shown that analysis of such situations and phenomena can be accomplished only with ideas drawn from complexity studies enhanced with a deeper understanding of the links between complexity and ignorance. Since social systems are always affected by ignorance stemming from lack of information, cultural limitations (e.g. taboos, prohibitions of access to information) and purposive activities of limiting, distorting and creating false information, the model of two-level reflexive knowledge and two-level reflexive ignorance will show the potential complexity of societal threats under the conditions of information overabundance. The case studies will embody patterns of the intersubjective social construction of the concepts of “real news” and “fake news” under the conditions of two-level knowledge and two-level ignorance. Distortions stemming from potential sources of ignorance will be described and scrutinized.

PRELIMINARY BIBLIOGRAPHY
Foerster, H. von. 1982. Observing Systems. A Collection of Papers by Heinz von Foerster. Seaside, CA: Intersystems Publications.
Giddens, A. 1993. New Rules of Sociological Method: A Positive Critique of Interpretative Sociologies, 2nd Edition. Stanford: Stanford University Press.
Gray, D. 2003. Wanted: Chief Ignorance Officer. Harvard Business Review, 81(11): 22-24.
Mesjasz, C. 2010. Complexity of Social Systems. Acta Physica Polonica A, 117(4): 706–715. http://przyrbwn.icm.edu.pl/APP/PDF/117/a117z468.pdf, retrieved 10 April 2012.
Nicolaus of Cusa, 1985. On Learned Ignorance (De Docta Ignorantia). Books I, II, III. Minneapolis, MN: The Arthur J. Banning Press.
Proctor, R. N. 2008. Agnotology. A Missing Term to Describe the Cultural Production of Ignorance (and Its Study). In R. N. Proctor, & L. Schiebinger (Eds.). Agnotology: The Making and Unmaking of Ignorance: 1–35. Stanford, CA: Stanford University Press.
Ravetz, J. R. 1990. Usable knowledge, usable ignorance: Incomplete science with policy implications. In J. R. Ravetz (Ed.), The Merger of Knowledge with Power: 260-283. London: Mansell Publishing Limited.
Ravetz, J. R. 1993. The sin of science. Ignorance of ignorance. Science Communication, 15(2): 157–165.
Roberts, J. 2012. Organizational ignorance: Towards a managerial perspective on the unknown. Management Learning, 44(3): 215–236.
Smithson, M. 1989. Ignorance and Uncertainty: Emerging Paradigms. New York: Springer Verlag.

Complexity of Maxmin-ω Cellular AutomataEbrahim PatelTuesday, 18:00-20:00

We present an analysis of an additive cellular automaton (CA) under asynchronous dynamics. The asynchronous scheme employed is maxmin-$\omega$, a deterministic system introduced in previous work with a binary alphabet. Extending this work, we study the impact of a varying alphabet size, i.e., more states than the binary ones often employed. Far from finding a simple positive correlation between complexity and alphabet size, we show that there is an optimal region of $\omega$ and alphabet size where the complexity of the CA is maximal. Thus, despite employing a fixed additive CA rule, the complexity of this CA can be controlled by $\omega$ and the alphabet size. The flavour of maxmin-$\omega$ is, therefore, best captured by a CA with a large number of states.
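For concreteness, an additive CA over an alphabet of size k can be sketched as follows. This is a synchronous baseline only – the maxmin-$\omega$ scheme replaces the global clock with a deterministic asynchronous update order, which is not reproduced here – and the rule, neighbourhood and function names are illustrative assumptions rather than the paper's exact model:

```python
def step(cells, k):
    """One synchronous update of an additive CA: each cell becomes the
    sum of its neighbourhood modulo the alphabet size k (periodic
    boundaries). Illustrative rule, not the paper's maxmin-omega scheme."""
    n = len(cells)
    return [(cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n]) % k
            for i in range(n)]

def run(cells, k, steps):
    """Return the space-time history: initial row plus `steps` updates."""
    history = [list(cells)]
    for _ in range(steps):
        cells = step(cells, k)
        history.append(cells)
    return history
```

Varying k here changes the state space available to each cell, which is the knob whose interaction with $\omega$ the abstract investigates.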

A compositional lens on the drivers of complexityHarshal HayatnagarkarMonday, 15:40-17:00

Complexity science provides a scientific and philosophical framework for understanding systems and phenomena that are called complex. The term complex has no single agreed-upon definition, but it certainly implies the difficulty of understanding a complete system or phenomenon by understanding its individual parts – and by this semantics much of the world is complex. It is even more difficult to compare the complexity of unrelated entities, such as a human body, a large piece of computer software, an economy, and a star. Our particular interest is in characterizing the source of complexity in the sense of the challenge involved in effectively achieving various purposes, such as understanding, explaining, engineering and managing. We believe that having an articulated model of the source of complexity – an answer to “What makes a system complex?” – will enable us to better manage the complexity of human-built systems, and perhaps even to grapple better with the challenges involved in understanding complex natural systems and phenomena.
In this paper we propose that the complexity of a system emerges from the interplay of four primary concepts, namely scale, diversity, network, and dynamics. Although such ideas have been discussed before, we bring a compositional lens to the discussion. We examine the impact of various combinations of these fundamental concepts and the different flavors of complexity emerging out of this alchemy. We then map these concepts to the existing terminology of complexity science, to see whether its terms can be explained using our compositional lens. For example, complexity as non-linear dynamics in a system emerges from a special arrangement of that system's parts, and network properties can help us further study such arrangements. We look at how computational complexity theory deals with scale, diversity, network, and dynamics in a manner specific to algorithms, with emphasis on the scale of dynamics. By way of contrast, we also consider complexity without dynamics, as in the cases of static networks and static program analysis. We hope that this lens will facilitate new insights into the nature of complexity in engaging with systems, and new tools and methods to manage it, as we have witnessed in our practice across domains as diverse as developing human behavior models and building control systems for modern scientific apparatus.

A Computational Approach to Independent Agent Segmentation in the Insurance SectorBurcin Bozkaya and Selim BalcisoyWednesday, 14:00-15:20

We perform a computational analysis to determine whether there exist behavioral groups that reflect insurance agent channel dynamics. To this end, we have developed segmentation-based models to evaluate independent agents who work for an insurance company. We present three separate segmentation models based on three metrics: utilization, response and governance. The utilization model proposes that each agent has a potential based on its location, determined by several local socio-economic factors, and addresses the question of how much of its potential an agent uses. We consider points of interest (POIs) around an agent as such a factor, which is a novel approach in this sector as well as in the literature. Our response model is based on the idea that a responsive agent must follow, in terms of monthly customer contracts and premiums produced, the overall trends of the company. To this end, we calculate the correlation of changes in an agent’s monthly production with those of the company. Our governance model explains the contribution level of an agent to the total premium production of the company. Altogether, we propose an agent segmentation model that provides an understanding of insurance agent behavior in these three dimensions. We have validated our model with the company officials and confirmed that our proposed model is an improvement over the current value-based segmentation model.

Computational Complexity of Restricted Diffusion Limited AggregationNicolas Bitar, Eric Goles and Pedro MontealegreMonday, 14:00-15:20

Diffusion Limited Aggregation (DLA) is a kinetic model for cluster growth described by Witten and Sander (Phys. Rev. Lett. 1981), which consists of an idealization of the way dendrites or dust particles form, where the rate-limiting step is the diffusion of matter to the cluster.

The DLA model consists of a series of particles that are thrown one by one from the top edge of a two- (or more) dimensional grid. The sites in the grid can be occupied or empty. Initially all sites in the grid are empty except the bottom line, which is occupied. Each particle follows a random walk in the grid, starting from a random position on the top edge, until it neighbors an occupied site, or until it escapes through one of the lateral or top edges.

We study a restricted version of DLA, consisting of a limitation on the directions a particle is allowed to move, as if it were affected by an external force such as wind. In two dimensions, we consider three scenarios: the particles can move in three directions (downwards, left and right); in two directions (downwards and right); or in one direction (downwards only).

Theoretical approaches to the DLA model are usually in the realm of fractal analysis, renormalization techniques and conformal representations. In this research we take a perhaps unusual approach to the DLA model, related to its computational capabilities. Machta and Greenlaw (J. Stat. Phys. 1996) studied a prediction problem from the point of view of computational complexity. This problem consists in deciding whether a given site on the grid becomes occupied after the dynamics have taken place, i.e. after all the particles have either been discarded or have stuck to the cluster. We call this problem DLA-Prediction.

Machta and Greenlaw showed that the (unrestricted version of) DLA-Prediction is P-Complete in two (or more) dimensions. We show that, restricted to two or three directions, the prediction problem is still P-Complete. Furthermore, the case restricted to one direction can be solved by a fast parallel algorithm, i.e. it is not P-Complete unless a widely believed conjecture in computer science is false. Finally, we discuss the possible shapes realizable by the restricted DLA model introduced in this presentation.
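As an illustration of the model just described, the restricted dynamics can be simulated directly. The sketch below (the function name, grid parameters and seed are illustrative; the abstract's results are complexity-theoretic, not simulation-based) implements the sticking rule with a configurable move set, so the one-, two- and three-direction variants differ only in `moves`:

```python
import random

def restricted_dla(width, height, n_particles, moves, seed=0):
    """Toy restricted-DLA simulation. `moves` lists allowed (dr, dc)
    displacements, e.g. [(1, 0), (0, -1), (0, 1)] for the three-direction
    (down/left/right) variant, or [(1, 0)] for downwards only."""
    rng = random.Random(seed)
    # bottom line starts occupied
    occ = {(height - 1, c) for c in range(width)}
    for _ in range(n_particles):
        r, c = 0, rng.randrange(width)  # drop from a random top-edge site
        while True:
            # stick as soon as any lattice neighbour is occupied
            if any((r + dr, c + dc) in occ
                   for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]):
                occ.add((r, c))
                break
            dr, dc = rng.choice(moves)
            r, c = r + dr, c + dc
            if not (0 <= r < height and 0 <= c < width):
                break  # particle escapes through an edge
    return occ
```

In the one-direction case every particle deterministically stacks on its own column, which is why prediction there admits a fast parallel algorithm.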

Computational Social Science in Big Scholarly DataFeng XiaWednesday, 14:00-15:20

We are entering the new era of big data. With the widespread deployment of various data collection tools and systems, the amount of data that we can access and process is increasing at an unprecedented speed, far beyond what we could imagine even a decade ago. This is happening in almost all domains, including healthcare, research, finance, transportation, and education. In particular, the availability of big data has created new opportunities for transforming how we study social science phenomena. Data-driven computational social science emerges from the integration of computer science and the social sciences, and has been attracting more and more attention from both academia and industry.

The second topic concerns understanding the role of academic conferences in promoting scientific collaborations. While previous research has investigated scientific collaboration mechanisms based on triadic closure and focal closure, in this work we propose a new collaboration mechanism named conference closure: scholars involved in a common conference may collaborate with each other in the future. We analyze the extent to which scholars meet new collaborators, at both the individual and community levels, using 22 conferences in the field of data mining extracted from the DBLP digital library. Our results demonstrate the existence of conference closure, and this phenomenon is more remarkable in conferences with a high field rating and large-scale attendance. Scholars involved in multiple conferences encounter more collaborators through them. Another interesting finding is that although most conference attendees are junior scholars with few publications, senior scholars with many publications may gain more collaborations during the conference.

Conjoining Uncooperative Societies to Promote Evolution of CooperationBabak Fotouhi, Naghmeh Momeni, Benjamin Allen and Martin NowakMonday, 15:40-17:00

Network structure affects the evolution of cooperation in social networks. Under the framework of evolutionary game theory, we demonstrate analytically that cooperation-inhibiting social networks can be sparsely conjoined to form composite cooperation-promoting structures. To this end, we introduce a method based on the equivalence between evolutionary games on graphs and coalescing random walks. We consider several random and non-random network topologies, as well as empirical networks.

Consequences of Changes in Global Patterns of Human InteractionAnne-Marie Grisogono, Roger Bradbury, John Finnigan, Nicholas Lyall and Dmitry BrizhinevThursday, 15:40-17:00

Recent rapid and extensive changes in global patterns of interaction between individuals, resulting from exponential growth in the proportion of populations participating in social media and other interactive online applications, suggest a number of possible consequences – some of which are concerning for the future of democratic societies and for the stability of the global order.

We draw insights from the scientific study of collective phenomena in complex systems. Changes in interaction patterns often bring about phase transitions – system re-arrangements that are sudden and transformative, through the emergence and self-amplification of large-scale collective behaviour.

We see a parallel here in the possible effects of changes in human interaction patterns on the structure of global social systems – including the traditional structures of nation states, and national and cultural identities. In particular, the instantaneous and geographically agnostic nature of online interaction is enabling the emergence of new forms of social groupings with their own narratives and identities which are no longer necessarily confined by traditional geographic and cultural boundaries.

Moreover, the growth of these new groupings is largely driven by the recommender algorithms implemented in social applications, whereby people become more and more connected to like-minded others and have less and less visibility of alternative perspectives. The possible trend we are concerned about is therefore towards increasing global fragmentation into large numbers of disjoint groupings, accompanied by an erosion of national identities and a weakening of the democratic base.

We study these new long-range interactions and their disruptive potential and draw conclusions about the risks and their consequences.

A cooperative game model of human-AI interactionTed TheodosopoulosWednesday, 14:00-15:20

We present an iterated cooperative game of chance that challenges players to infer the state of a non-ergodic Markov chain while coordinating their actions. We consider a neural network that employs a deep Q-learning algorithm to learn to play this game, including the rules and strategies. The primary question we explore is how to adjust the optimal strategy for two human players when one of the players is our autonomous neural network. We explore this framework as a toy model of trust-building in human-AI strategic interactions.

Coupling Dynamics of Epidemic Spreading and Information Diffusion on Social NetworksZi-Ke ZhangTuesday, 15:40-17:00

Recently, the coupling effect of information diffusion (or awareness) and epidemic spreading has given rise to an interdisciplinary research area. When a disease begins to spread in physical society, the corresponding information is also transmitted among individuals, which in turn influences the spreading pattern of the disease. In this paper, we study the coupling dynamics between epidemic spreading and the diffusion of relevant information. Empirical analyses of representative diseases (H7N9 and Dengue fever) show that the two kinds of dynamics can significantly influence each other. In addition, we propose a nonlinear model to describe such coupling dynamics based on the SIS (Susceptible-Infected-Susceptible) process. Both simulation results and theoretical analyses reveal the underlying coupling phenomenon: a high prevalence of the epidemic leads to slow information decay, and the resulting high level of awareness in turn inhibits the epidemic spreading. Further theoretical analysis demonstrates that a multi-outbreak phenomenon emerges via the effect of the coupling dynamics, in good agreement with empirical results.
The findings of this work may have various applications in network dynamics. For example, since it has been shown that preventive behaviors introduced by disease information can significantly inhibit epidemic spread, information diffusion can be utilized as a complementary measure to efficiently control epidemics. Governments should therefore make an effort to maintain public awareness, especially during the quiet periods when the epidemic seems to be under control. In addition, in this work we only consider the general preventive behavioral response of the crowd. However, the dynamics of an epidemic may differ greatly depending on people's behavioral responses, such as adaptive processes, migration, vaccination, and immunity. This work provides a starting point for understanding the coupling effect between the two spreading processes; a more comprehensive and in-depth study of personalized preventive behavioral responses will require further effort.
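A minimal mean-field sketch of such a coupled model can make the feedback loop concrete. The parameter names and the specific coupling terms below are illustrative assumptions, not the paper's model: awareness reduces the effective infection rate, while prevalence drives information spreading and slows its decay.

```python
def coupled_sis(rho, phi, beta=0.4, mu=0.2, alpha=0.5, delta=0.3,
                c=0.8, dt=0.1, steps=1000):
    """Mean-field coupled SIS sketch.
    rho: infected fraction, phi: informed (aware) fraction.
    beta/mu: infection/recovery rates; alpha/delta: information
    spread/decay rates; c: strength of awareness-induced protection."""
    traj = []
    for _ in range(steps):
        # awareness damps infection by the factor (1 - c*phi)
        drho = beta * (1.0 - c * phi) * rho * (1.0 - rho) - mu * rho
        # prevalence feeds awareness and slows its decay via (1 - rho)
        dphi = (alpha * phi * (1.0 - phi) + rho * (1.0 - phi)
                - delta * (1.0 - rho) * phi)
        rho = min(max(rho + dt * drho, 0.0), 1.0)  # Euler step, clamped
        phi = min(max(phi + dt * dphi, 0.0), 1.0)
        traj.append((rho, phi))
    return traj
```

Sweeping the coupling strength in a sketch like this is one way to see regimes where high awareness suppresses the outbreak versus regimes where the epidemic persists.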

Critical dynamics in the Belousov−Zhabotinsky reactionHarold Hastings, Richard Field, Sabrina Sobel and Maisha ZahedMonday, 15:40-17:00

The Belousov−Zhabotinsky (BZ) reaction is the prototype oscillatory chemical reaction. In reaction mixtures in a Petri dish, with ferroin/ferriin as catalyst and bromide added initially for production of bromomalonic acid, one sees the “spontaneous” formation of target patterns of concentric waves of oxidation (blue, high-ferriin) in a red/reduced/low-ferriin reaction medium, following an initiation period of several minutes in a red steady state. In analogous manganese (Mn(ii)/Mn(iii))-catalyzed reactions one typically sees bulk oscillations. The transition between bulk oscillations in manganese catalyzed reactions and pattern formation in ferroin-catalyzed reactions can be understood by chemical interpolation using mixed catalysts. We shall describe here the use of the BZ reaction as an experimental testbed for exploring spatio-temporal dynamics near criticality in a variety of related biological systems (the brain and the cardiac electrical system).
As observed by Glass and Mackey (From Clocks to Chaos: The Rhythms of Life, 1988) and Winfree (The Geometry of Biological Time, 2001), there are strong analogies among BZ dynamics (as described by the Oregonator model (Field, Körös and Noyes, J. Am. Chem. Soc. 1972; Field and Noyes, J. Chem. Phys. 1974)), neuronal dynamics (as described by the FitzHugh-Nagumo model) and cardiac dynamics (where the FitzHugh-Nagumo model has been used as a highly simplified starting point). In fact, the generic Boissonade-De Kepper model (J. Phys. Chem. 1980) for excitable chemical systems (including the BZ reaction) is essentially the FitzHugh-Nagumo neuronal model (cf. Hastings et al., J. Phys. Chem. A, 2016). All of these systems display a wide range of patterns of excitability when operated close to criticality (the excitable/oscillatory boundary).
A variety of abnormal states, from epilepsy through ventricular fibrillation, have been characterized as dynamical diseases (Glass, Nature 2001; Chaos 2015 and references therein), whose control and treatment require control of global dynamics. See, e.g., Weiss et al. (Circulation 1999) and Garfinkel et al. (Proc. Nat. Acad. Sci. 2000) for the dynamics of fibrillation; Meisel et al. (PLoS Comp. Bio. 2012), Hesse and Gross (Frontiers Systems Neuro. 2014) and Massobrio et al. (Frontiers Systems Neuro. 2015) for criticality in the normal and epileptic brain; and Gosak et al. (Frontiers Physiol. 2017) for the role of criticality in pancreatic beta cells.
We report here on an experimental and theoretical study of synchronization and, more generally, pattern formation in the Belousov-Zhabotinsky reaction. At the generic level, BZ and related dynamics derive from the interplay between a fast activator species, here bromous acid, and a slow inhibitory species, bromide, generated by the slow catalytic oxidation of a brominated organic acid, here bromomalonic acid. The dynamics of pacemaker formation in the BZ reaction can be described as slow passage through a Hopf bifurcation from excitable to auto-oscillatory dynamics, in which an effective stoichiometric factor including all bromide production serves as a bifurcation parameter. For the ferroin-catalyzed reaction, this Hopf bifurcation is subcritical, allowing a combination of heterogeneities and random fluctuations to initiate pacemakers. The transition between bulk oscillations in manganese-catalyzed reactions (where the background substrate is oscillatory) and pattern formation in ferroin-catalyzed reactions (where the background substrate is excitable, thus supporting traveling waves of excitation) can be understood as a change in the rate of production of the inhibitory species bromide, from chemical interpolation using mixed catalysts. A second bifurcation, the Showalter-Noyes criterion for oscillations – that the concentration ratio [H+][bromate]/[malonic acid] exceeds a critical value – arises because [H+][BrO3–] parametrizes the ratio of the time scale of the fast bromous acid (activator) dynamics to that of the slow redox dynamics of the catalyst, and thus the effective time scale of production of the inhibitory species bromide. The existence of near-critical dynamics supporting such a bifurcation is demonstrated with small perturbations (removal of bromide), and also by adding one catalyst (e.g., ferroin) to a reaction parametrized by another catalyst (e.g., Mn). Related and future research directions will also be described.

A Critique of A Critique of Transfer Entropies. Or: Untangling concepts of information transfer, storage, causality, unique and synergistic effectsJoseph LizierTuesday, 18:00-20:00

Transfer entropy is a measure of the predictive gain from the past state of a source time-series variable on the future sample of a target time-series variable, in the context of the past state of that target. It has been widely used recently to study information processing (as the information transfer or flow component) in various complex systems domains, in particular in computational neuroscience as well as biological networks and financial market analysis. Transfer entropy, and how to understand it, is of central importance to the "Causality and Information Flow within Complex Systems" workshop.
The transfer entropy, and interpretations thereof, were the subject of criticism in the recent paper “Information Flows? A Critique of Transfer Entropies” by James et al., PRL 116, 238701 (2016). That article perceived fatal flaws in the transfer entropy: (i) it does not capture the causal effect between the source and target; (ii) it does not differentiate direct from indirect effects; and (iii) it incorporates synergistic effects of the source with the target's past. Yet while all of these properties of transfer entropy are true, they were also all already known and incorporated into interpretations of transfer entropy in the literature.
In this talk I synthesise these existing results, along with new examples and perspectives, into a coherent view of what the transfer entropy is and how it should be interpreted in capturing information transfer in the context of modelling of information processing in complex systems. First, I will discuss the differing perspectives and results given by measuring information transfer and causal effect, arguing for why both perspectives are important but for different reasons. Next, I will discuss the different nature of pairwise and conditioned transfer entropies, again arguing for the complementary utility of both. Finally, I argue for why synergistic effects from the source with the target past should indeed be considered in conjunction with unique effects from the source in a comprehensive understanding of information transfer.
The holistic perspective of modelling information processing that is used to interpret transfer entropy is quite important, because it marks a distinction in perspectives between information transfer and causal effect, and also engages a fundamental juxtaposition between information transfer and storage. Indeed, the nature of information storage is also central in this perspective, since measures introduced by the authors of the aforementioned critique (e.g. excess entropy) are subject to corresponding criticisms, although we argue similarly in favour of such measures in this perspective of modelling information processing.
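As background for the quantity under discussion, transfer entropy for discrete time series can be estimated with a simple plug-in (histogram) estimator. The sketch below, with an illustrative function name, computes T(source → target) with history length k; it ignores the bias-correction and embedding issues that matter in serious applications (toolkits such as JIDT handle those):

```python
from collections import Counter
from math import log2

def transfer_entropy(source, target, k=1):
    """Plug-in estimate, in bits, of the transfer entropy from `source`
    to `target` with history length k: the conditional mutual information
    I(Y_next ; X_past | Y_past). Minimal sketch for short discrete series."""
    triples = Counter()  # counts of (y_next, y_past, x_past)
    for t in range(k, len(target)):
        y_next = target[t]
        y_past = tuple(target[t - k:t])
        x_past = tuple(source[t - k:t])
        triples[(y_next, y_past, x_past)] += 1
    n = sum(triples.values())
    # marginal counts needed for the conditional probabilities
    c_ypx, c_yp, c_nyp = Counter(), Counter(), Counter()
    for (y1, yp, xp), c in triples.items():
        c_ypx[(yp, xp)] += c   # counts of (y_past, x_past)
        c_yp[yp] += c          # counts of y_past
        c_nyp[(y1, yp)] += c   # counts of (y_next, y_past)
    te = 0.0
    for (y1, yp, xp), c in triples.items():
        p_joint = c / n
        p_cond_full = c / c_ypx[(yp, xp)]          # p(y_next | y_past, x_past)
        p_cond_self = c_nyp[(y1, yp)] / c_yp[yp]   # p(y_next | y_past)
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te
```

Note how the conditioning on the target's own past is built into the definition: a target that is perfectly predictable from its own history yields zero transfer entropy regardless of the source, which is exactly the "context of the past state of that target" emphasized above.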

Crypto Economy Complexity and Market FormationPercy Venegas and Tomas KrabecWednesday, 14:00-15:20

We demonstrate that attention flows manifest knowledge, and that the distance (similarity) between crypto economies has predictive power for understanding whether a fork or fierce competition within the same token space will be a destructive force or not. When dealing with hundreds of currencies and thousands of tokens, investors face a very practical constraint: attention quickly becomes a scarce resource. To understand the role of attention in trustless markets we use Coase’s theorem. For the theorem to hold, the crypto communities that will split should meet the following conditions: (i) well-defined property rights: the crypto investor owns his attention; (ii) information symmetry: it is reasonable to assume that up to the moment of the hard fork, market participants are on level ground in terms of shared knowledge. Specialization (who becomes the expert on each new digital asset) will come later; (iii) low transaction costs: just before the chains split there is no significant cost in switching attention. Other factors (such as mining profitability) will play a role after the fact, and any previous conditions (e.g. options sold on the future new assets) are mainly speculative. The condition of symmetry refers to the “common knowledge” available at t−1, where all that people know is the existing asset. Information asymmetries do exist at the micro level: we cannot assume full efficiency because transaction costs are never truly zero. Say’s Law states that at the macro level, aggregate production inevitably creates an equal aggregate demand. Since a fork is really an event at the macroeconomic level (in this case, the economy of bitcoin cash vs the economy of bitcoin), the aggregate demand for output is determined by the aggregate supply of output: there is a supply of attention before there is demand for attention.
The Economic Complexity Index (ECI) introduced by Hidalgo and Hausmann allows one to predict future economic growth by looking at the production characteristics of the economy as a whole, rather than as the sum of its parts; i.e., the present information content of the economy is a predictor of future growth. Say’s Law and the ECI approach are both about aggregation of dispersed resources, and that is what makes them relevant to the study of decentralized systems. While economic complexity is measured by the mix of products that countries are able to make, crypto economy complexity depends on the remixing of activities. Some services are complex because few crypto economies consume them, and the crypto economies that consume those tend to be more diversified. We should differentiate between the structure of output (off-chain events) and aggregated output (on-chain, strictly transactional events). It can be demonstrated that crypto economies tend to converge to the level of economic output that can be supported by the know-how that is embedded in their economy, which is manifested by attention flows. Therefore, it is likely that crypto economy complexity is a driver of prosperity when complexity is greater than what we would expect at a given level of investment return. As members of the community specialize in different aspects of the economy, the structure of the network itself becomes an expression of the composition of attention output. We use genetic programming to find drivers; in other words, to learn the rankings. Such a ranking score function has the form returns(tokenA) > returns(tokenB) = f(sources(tokenA) > sources(tokenB)). Ultimately, the degree of complexity is an issue of trust or lack thereof, and that is what the flow of attention and its conversion into transactional events reveal.

Culture Meets Artificial Intelligence and StorytellingDavar ArdalanTuesday, 18:00-20:00

Artificial intelligence, big data, and deep learning will transform how we engage with our past and present, providing tools for harnessing the power of data on world cultures for modern stories. As the daughter of a Harvard-educated architect and a Chatham College-educated scholar, I spent many summer days on cultural excursions to the Parthenon in Greece, the Pyramids in Egypt, and Persepolis in Iran. In my 25-year career as a journalist in public media, I designed stories anchored in multiculturalism. In 2015, my last position at NPR News was senior producer of the Identity and Culture Unit. Looking ahead, I see a gaping hole in the emerging tools that will train future storytellers: the lack of prolific cultural content in artificial intelligence (AI) algorithms.

With a team of journalists, educators, and machine learning experts, we have just formed IVOW, to research and develop automated, intelligent stories based in world culture and tradition — tagged through citizen engagement and strategic partnerships. This means we apply algorithms to leverage data for smart and culturally rich stories. IVOW stands for Voices of Wisdom and also a vow to design the next generation of intelligent machines to be culturally conscious.

Over the past few years the AI community has been mining vast amounts of data and leveraging algorithms in an effort to foster innovation in self-driving cars, cybersecurity, healthcare applications, and facial recognition. IVOW is focusing on the same science for future AI consumer storytelling applications — a virtually untapped market.

It's imperative that we enhance the use of AI in topics related to culture and traditions, and diminish bias in algorithmic identification and reporting on related issues by diversifying available sources. At IVOW, we will begin with a database on world cultures, traditions, and history, and will design and build towards future consumer storytelling applications that will play a significant role in shaping the future of media. In this way, we amplify voices of wisdom and the cultural traditions underlying modern civilizations. We engage the public and corporate allies who can contribute to further improvement of code and the performance of robots. It's important to include culture, as significant as the standard economic, environmental and social criteria, when gathering data.

By focusing on central themes of multiculturalism and tangible and intangible cultural traditions, IVOW brings an inclusive knowledge-sharing approach to natural language processing and machine learning. What excites me is working collaboratively with technologists, data scientists, journalists, educators and development agencies to create a new way of preserving and celebrating the art of storytelling. So far our partners include Code for Africa, SAP Next Gen, and our academic partners, Morgan State University and the Southwestern Indian Polytechnic Institute.

I’m excited to share more at the 9th International Conference on Complex Systems. Thought leaders across the New England Complex Systems Institute network will want to discuss the future of culturally conscious storytelling in AI.

Data driven vs. model driven approaches to predicting and controlling macroeconomic dynamicsThomas Wang, Tai Young-Taft and Harold HastingsTuesday, 18:00-20:00

In the past, a variety of empirical models, such as the Taylor rule and various auto-regression models, have been used as guidelines for determining the interest rate. This paper compares prediction metrics of the nonlinear nonparametric methods developed by George Sugihara, Hao Ye, and colleagues [1-2], applied to GDP measures, the yield curve, and possibly other macroeconomic variables related to the business cycle, such as productivity, interest rates, asset stocks, investment-consumption cycles, capacity utilization, velocity of money, and credit measures. Their approach, called Empirical Dynamic Modeling (EDM), arises from the Takens embedding theorem [3], has been developed in the context of network dynamics in problems arising in ecology, and has proven relatively successful in challenging problems in fishery dynamics, such as forecasting salmon recruitment [1]. Given the known similarities between economic and ecological modeling [4], it is interesting to inquire whether EDM would prove useful in economic forecasting. Preliminary analysis indicates EDM is sometimes superior and other times inferior to more traditional models. This may be because local deterministic dynamics dominate certain macroeconomic relations over certain time scales and periods, while aggregate linear relations may dominate over others, inclusive of process (for example, variable selection) and observation noise. Additionally, the inclusion of lags and other model parameters (such as time step, weighting function, embedding dimension, and time to predict) may significantly influence these comparisons, and thoroughness in identifying such effects has been sought.
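The core EDM forecasting step can be sketched as simplex projection over a Takens delay embedding. This is a minimal sketch, not the authors' pipeline: the rEDM package [2] implements the full method, and the parameters E and tau below correspond to the embedding dimension and lag mentioned above.

```python
import numpy as np

def simplex_forecast(x, E=3, tau=1):
    """One-step simplex projection: delay-embed the series with lag tau and
    dimension E, find the E+1 nearest neighbours of the final delay vector,
    and forecast the next value as their distance-weighted successor average."""
    x = np.asarray(x, dtype=float)
    start = (E - 1) * tau
    query = x[len(x) - 1 - start : len(x) : tau]   # delay vector at the last point
    lib_idx = np.arange(start, len(x) - 1)         # indices with a known successor
    lib = np.stack([x[i - start : i + 1 : tau] for i in lib_idx])
    d = np.linalg.norm(lib - query, axis=1)
    nn = np.argsort(d)[: E + 1]                    # E+1 nearest neighbours
    w = np.exp(-d[nn] / max(d[nn][0], 1e-12))      # exponential distance weighting
    w /= w.sum()
    return float(np.dot(w, x[lib_idx[nn] + 1]))    # weighted successor average
```

On a noise-free periodic series the forecast tracks the true continuation closely; the comparison in the abstract amounts to running such out-of-sample forecasts against linear benchmarks on the macroeconomic series.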

Previous work on the yield curve [5-7] considered the spread between the 10-year and three-month Treasury yields. We consider the term spread measured by the ten-year Treasury yield minus the federal funds rate [8]. This provides us with a wider spread relative to liquid assets, represented by the 10-year Treasury yield, as well as access to the control variable, the federal funds rate, which we hypothesize affects the dynamics of the system. This can be considered relative to the efficacy of our control mechanism.

References:

[1] Ye, H, et al. (2015) Equation-free mechanistic ecosystem forecasting using empirical dynamic modeling. Proc Natl Acad Sci USA 112:E1569-E1576.

[2] Ye, H., et al. (2017). rEDM: Applications of Empirical Dynamic Modeling from Time Series. R package version 0.6.9. https://CRAN.R-project.org/package=rEDM.

[3] Takens, F. (1981). “Detecting Strange Attractors in Turbulence.” Lecture Notes in Mathematics: Dynamical Systems and Turbulence, Warwick 1980, pp. 366-381. doi:10.1007/bfb0091924.

[4] May, R.M., Levin, S.A. and Sugihara, G. (2008). “Complex systems: Ecology for bankers.” Nature, 451(7181), pp.893-895.

[5] Estrella, A., Mishkin, F. (1998). “Predicting U.S. Recessions: Financial Variables as Leading Indicators,” The Review of Economics and Statistics, 80, 45.

[6] Estrella, A.,Trubin, M. (2006). “The Yield Curve as a Leading Indicator: Some Practical Issues,” Current Issues in Economics and Finance, 12(5).

[7] Liu, W. Moench, E. (2014). “What Predicts U.S. Recessions?” Federal Reserve Bank of NY. Staff Report 691.

[8] Hastings, H.M., Young-Taft, T., Landi, A., & Wang, T. (2018). “When to ease off the brakes (and hopefully prevent recessions).” Draft paper.

Data Science to tackle Urban ChallengesMarta GonzalezThursday, 11:00-11:4

I present a review of research related to the applications of big data and information technologies in urban systems. Data sources of interest include probe/GPS data, credit card transactions, and traffic and mobile phone data. Key applications are modeling the adoption of new technologies and traffic performance measurements. I propose a novel individual mobility modeling framework, TimeGeo, that extracts all required features to model daily mobility from ubiquitous and sparse digital traces. Based on that framework, I present a multi-city study to unravel traffic under various conditions of demand and translate it to the travel time of individual drivers. First, we start with current conditions, showing that there is a characteristic time it takes a representative group of commuters to arrive at their destinations once maximum density has been reached. While this time differs from city to city, it can be explained by the ratio of vehicle miles traveled to the available street capacity. We identify three states of urban traffic, separated by two distinctive transitions: the first describes the appearance of the first bottlenecks, and the second the transition to a complete collapse of the system. The transition to the second state measures the resilience of the various cities and is characterized by a non-equilibrium phase transition.

Data Sciences Meet Machine Learning and Artificial Intelligence: A Use Case to Discover and Predict Emerging and High-Value Information from Business News and Complex SystemsYing Zhao and Charles ZhouThursday, 15:40-17:00

In this presentation, we will show an innovative machine learning (ML) and artificial intelligence (AI) process combining lexical link analysis (LLA) and a rule-based reinforcement learning method in Soar (Soar-RL) to discover and predict emerging and high-value information from big data.

In LLA, a complex system can be expressed as a list of attributes or features, with specific vocabularies or lexicon terms describing its characteristics. LLA is a text analysis method that can also be extended to structured data, such as attributes and their values from databases.
Soar is a cognitive architecture that scalably integrates a rule-based system with many other capabilities, including reinforcement learning and long-term memory. Soar has been used in modeling large-scale complex cognitive functions for warfighting processes.

In this paper, we show how to combine LLA and Soar-RL to discover and predict emerging and high-value information from business news and complex systems. The LLA is used to discover patterns, rules and associations from big data of heterogeneous sources. The Soar-RL utilizes the discovered rules to perform precise learning and prediction. The use case includes the big data of business news and stock performance for a large collection of public companies for a long period of time. The use case is used to demonstrate the ML and AI process that is data-driven, deep and explainable.

Data-driven extraction and classification of convectively coupled equatorial wavesJoanna Slawinska and Dimitrios GiannakisWednesday, 15:40-17:00

We extract patterns of convective organization using a recently developed technique for feature extraction and mode decomposition of spatiotemporal data generated by ergodic dynamical systems. The method relies on constructing low dimensional representations (feature maps) of complex signals using eigenfunctions of the Koopman operator. This operator is estimated from time-ordered unprocessed data through a Galerkin scheme applied to basis functions computed via the diffusion maps algorithm. Koopman operators are a class of operators in dynamical systems theory that govern the temporal evolution of observables. They have the remarkable property of being linear even if the underlying dynamics is nonlinear, and provide, through their spectral decomposition, natural ways of extracting intrinsic coherent patterns and performing statistical predictions.
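In the standard notation (not specific to the authors' Galerkin scheme), the Koopman operator acts on an observable $g$ by composition with the flow map $\Phi^t$, and its eigenfunctions $\varphi_j$ evolve by pure phase rotation at intrinsic frequencies $\omega_j$:

```latex
\left(U^t g\right)(x) = g\!\left(\Phi^t(x)\right),
\qquad
U^t \varphi_j = e^{i \omega_j t}\, \varphi_j .
```

The linearity noted in the abstract is the identity $U^t(\alpha f + \beta g) = \alpha\, U^t f + \beta\, U^t g$, which holds even when $\Phi^t$ itself is nonlinear; the extracted coherent patterns correspond to projections of the data onto the eigenfunctions $\varphi_j$.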

We apply this approach to brightness temperature data from the CLAUS archive and extract a multiscale hierarchy of spatiotemporal patterns on timescales spanning years to days, including the dominant intraseasonal mode of tropical variability (the MJO), but also traveling waves on temporal and spatial scales characteristic of convectively coupled equatorial waves (CCEWs). In particular, we examine whether the activity of these coherent structures is modulated by low-frequency atmospheric and oceanic variability. We discuss various properties of waves in our hierarchy of modes, focusing in particular on their across-scale interactions and temporal evolution. As an extension of this work, we discuss the deterministic and stochastic aspects of the variability of these modes.

Dealing with Complexity in Mali: Wicked Problems, Self-Organization and Loose CouplingErik De WaardTuesday, 14:00-15:20

Commercial and public organizations alike are increasingly facing wicked problems (Camillus, 2008). Such multi-faceted problems exceed common organizational understanding of the relationship between means and ends (Lyles, 2013; Wijen, 2014; Von Hippel & Von Krogh, 2015; Brook, Pedler, Abbott & Burgoyne, 2016). Examples are the global financial crisis that struck the world as of 2008, global warming, acts of terrorism, and refugee streams. Such problems typically require that a deep understanding and possible solutions be developed jointly (Conklin, 2006).
Wicked problems have, so far, been addressed by social scientists as an isolated, unique kind of phenomenon (Rittel & Webber, 1973). This is remarkable, since the notion of ‘wickedness’ strongly resembles the concept of complexity, which since the 1950s has developed into an overarching perspective within the academic world. Scholars with different areas of expertise became increasingly convinced that the reductionist Newtonian linear paradigm of science was insufficient for dealing with key questions about the fundamentals of life and the functioning of societies (Anderson, Arrow & Pines, 1998; Gell-Mann, 1994; Holland, 1998; Kauffman, 1993; Waldrop, 1992). In essence, the complexity perspective refers to individual agents (be they particles, neurons, molecules, algorithms, species, or organizations) obeying quite simple rules, whereas in populations, ecosystems and groups, where all these simple rules are combined and interact in many different ways, an intricate pattern emerges that is highly adaptable, but also very unpredictable and difficult to grasp.
Organization and management scientists that have adopted the complexity lens, argue that organizations need to become ‘complex adaptive systems’ to be able to deal with an increasingly hypercompetitive, volatile business environment (Anderson, 1999; Boisot and Child, 1999; Axelrod & Cohen, 2001; Palmberg, 2009). In this article, we draw on the growing body of knowledge on complex adaptive systems (CAS) to offer a novel perspective on how organizations deal with wicked problems. In so doing, our study also helps to avoid the risk of reification of the (highly ambiguous) concept of ‘wicked problem’ by approaching it from the perspective of complexity science (Lane, Koka & Pathak, 2006).
We concentrate on self-organization as a key tenet of the CAS perspective. More specifically, the assumption that self-organization is associated with the system’s interpretative capacity, facilitated by its structural characteristics, will be closely investigated (Galbraith, 1973; Weick, 1977; Anderson, 1999; Ashmos, Duchon & McDaniel, 2000). To analyze this relationship we use Weick’s (1979) organizing model. Central to this model is the concept of enactment. According to Weick, enactment strongly influences an organization’s sensemaking ability: only by acting can organizations genuinely probe their environment and gain the necessary feedback for decision-making and further action.
Empirically, we focus on the context of complex emergency operations, in which frontline actors struggle daily to manage the symptoms of wicked problems. The United Nations’ Multidimensional Integrated Stabilization Mission in Mali (MINUSMA) serves as the empirical setting. As part of the overarching MINUSMA organization, a tailor-made intelligence unit was established by the Netherlands armed forces to gather, analyze and disseminate intelligence on societal issues, such as illegal trafficking and the narcotics trade, ethnic dynamics and tribal tensions, and corruption and bad governance (Rietjens & De Waard, 2017). The causal ambiguity and interrelatedness of these problem areas provide a highly interesting setting for investigating the organizational reality of coping with wicked problems. Matters become even more wicked when taking into account the complex, ad hoc UN organizational constellation that has been created to help the government of Mali with the endeavor of getting the country back on its feet.
One of our main findings indicates that Weick’s (1979) enactment philosophy, in which organizing is a matter of conscious, ongoing experimenting behavior, only partially took hold within the Mali case. In short, the mental models which the Netherlands armed forces had developed based on prior experience were important frames of reference for designing a customized solution for the new “wicked problem” encountered by the organization in Mali. However, this solution evolved into an idée fixe, resulting in a situation where the confrontation between the paper, strategic-level solution and the real-life, operational-level outcome did not lead to the fertile reciprocal relationship necessary for coping with the ambiguity and volatility of the security situation in Mali. Interestingly, the pitfall of slipping back into a blueprint-like approach appeared to happen partly tacitly, caused by unavoidable internal issues, including resource restrictions set by the political decision-makers, a fixed schedule of personnel rotating in and out, and the number and availability of technological assets.

Descartes, Gödel and Kuhn: Epiphenomenalism Defines a Limit on Reductive LogicJ. Rowan ScottTuesday, 15:40-17:00

René Descartes’ enduring contribution to philosophy, natural science and mathematics includes the unresolved residue of Cartesian dualism, as well as the persistence in modern science of Descartes’ four precepts defining a method for conducting natural science, which included the implementation of reductive logic. The continued difficulty resolving the Cartesian brain/mind split may be directly related to Descartes’ interpretation of reductive logic, which has been sustained within the modern structure of the reductive natural science paradigm.
Practical application of reductive logic is limited in many contexts involving complicated and convoluted system dynamics, but may also be limited in principle by fundamental attributes of reductive logic, as it was composed by Descartes and as it is implemented in the modern reductive science paradigm. If reductive logic is fundamentally limited, this may reveal one way in which reductive logic may not provide a close enough approximation of the natural ‘logic’ instantiated by evolution, self-organization, and the emergence of complexity. Consequently, it is important to identify reductive propositions that reveal fundamental limits on the use of reductive logic. Spelling out the implications of the limits associated with particular propositions may alter the future application of reductive logic in natural science. This could also consequently reveal novel solutions to particular unsolved or anomalous problems in science.
Kurt Gödel’s two famous incompleteness theorems provide a logical platform and a set of predictable implications, from which it is possible to construct a parallel analogy within reductive natural science. The analogy demonstrates that certain reductive propositions define a hard limit on reductive logic in science, in the form of reductive incompleteness. Specifically, reductive epiphenomenalism of consciousness is an unresolved reductive proposition characterized by theoretical contradiction and conceptual paradox most obvious when strong reductive logic supporting the proposition is demonstrated to be true. Paradoxically, epiphenomenalism reduces consciousness to quantum mechanics and erases from the universe the participatory consciousness that intentionally composed the proposition in the first place. This paradox, and other contradictions and inconsistencies cannot be avoided or resolved from inside the modern reductive science paradigm employing the singular ‘bottom-up’ application of reductive logic.
However, the paradox, contradiction and inconsistency generated by epiphenomenalism can be subtly sidestepped by declaring reductive epiphenomenalism of consciousness to be an undecidable reductive proposition. Taking this step serves to protect the logic of reductive science from paradox and inconsistency and allows the development of a theoretical position in which truth and proof in reductive science can be separated, just as in the case of abstract logic in formal mathematical systems of sufficient complexity. It can be shown that reductive logic employed in science creates a sufficiently complex logical system that it can exhibit fundamental reductive incompleteness. Epiphenomenalism, therefore, reveals the presence of a fundamental hard limit on the application of reductive logic in natural science. Reductive incompleteness and its implications are analogically similar to Gödel’s formal incompleteness in abstract logic and mathematical systems of sufficient complexity.
Thomas Kuhn’s conception of a scientific revolution, as well as modern explorations of scientific theory or paradigm adaptations intended to open exploration of new domains of research, provides a framework within which the implications of reductive incompleteness can be spelled-out. Among the implications of the limit on reductive logic set by reductive incompleteness, is the potential for an unresolvable and undecidable reductive proposition, stated in the paradigm and logic of reductive natural science, to become a resolvable and decidable reductive proposition, within a closely related meta-reductive paradigm, employing slightly different assumptions and premises. This opens the door to the creation of adjacent possible meta-reductive paradigms in which previously unresolvable or anomalous reductive scientific problems might find novel solutions.
An example of an adaptive adjacent possible meta-reductive paradigm is presented which preserves strong reductive logic up to the limit of formal reductive incompleteness and then encompasses the modern reductive paradigm as a special case within a meta-paradigm. Within the framework of the meta-paradigm, it is possible to resolve and decide epiphenomenalism in favor of consciousness and mind being causally efficacious emergent agents in the Universe. A number of predictions and hypotheses arising from the meta-reductive paradigm address further unresolved problems and anomalies within reductive science. The predictions and hypotheses make the meta-paradigm a falsifiable theoretical proposition, which can be tested against the success and limits of the modern reductive science paradigm.

This abstract is based on work related to the book chapter: “Dynamical Systems Therapy (DST): Complex Adaptive Systems in Psychiatry and Psychotherapy,” published in the Handbook of Research Methods in Complexity Science: Theory and Application, Mitleton-Kelly, E. et al. (eds), London: Edward Elgar Publishing, 2018.

Design of the Artificial: lessons from the biological roots of general intelligenceNima DehghaniTuesday, 15:40-17:00

Our desire for and fascination with intelligent machines dates back to antiquity’s mythical automaton Talos, Aristotle’s mode of mechanical thought (syllogism), and Heron of Alexandria’s mechanical machines and automata. But the quest for Artificial General Intelligence (AGI) has been troubled by repeated failures of strategies and approaches throughout its history. The recent decade has seen a shift of interest towards bio-inspired software and hardware, with the assumption that such mimicry entails intelligence. Though these steps are fruitful in certain directions and have advanced automation, their singular design focus renders them highly inefficient in achieving AGI. Which set of requirements has to be met in the design of AGI? What are the limits in the design of the artificial? Here, a careful examination of computation in biological systems hints that evolutionary tinkering of contextual processing of information, enabled by a hierarchical architecture, is the key to building AGI.

Detecting Emergency Braking Intention using Recurrence Plot and Deep Learning: Analysis of Combined EEG/EMG and Behavioral Data in Simulated DrivingMiaolin Fan, Zhiwei Yu and Chun-An ChouWednesday, 15:40-17:00

The main objective of the present study is to propose a novel framework for quantifying the information flow in human physiological responses using multimodal sensing data. Previous studies have suggested a delayed coupling mechanism between EEG and EMG signals, reflecting the pathway from the sensorimotor cortex to the muscles in neural systems. In our study, we used the recurrence plot (RP) method to represent the nonlinear dynamics in (neuro-)physiological data as square matrices. Furthermore, deep learning was applied to the unthresholded RPs for detecting the boundary of transitional states. Our method was tested on a public database and successfully detected the braking intention after 400 ms in EEG and 540 ms in EMG. These results indicate that the proposed method can be used as an effective tool for assessing the directional information flow to reflect the motor control command.
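A minimal sketch of the RP construction for a scalar signal follows, assuming for illustration a plain absolute-difference distance; the study's unthresholded RPs correspond to eps=None here, and multichannel EEG/EMG would use a vector norm instead.

```python
import numpy as np

def recurrence_plot(x, eps=None):
    """Unthresholded RP = pairwise distance matrix of the signal;
    thresholding with Theta(eps - d_ij) gives the classical binary RP."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    d = np.abs(x - x.T)              # |x_i - x_j| distance matrix
    if eps is None:
        return d                     # unthresholded RP, as fed to deep learning
    return (d <= eps).astype(int)    # binary recurrence matrix
```

The RP is symmetric with a unit main diagonal, and transitions between dynamical states appear as block-boundary structure that a CNN can learn to localize.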

Detecting seasonal migrations with mobile phone dataSamuel Martin-Gutierrez, Javier Borondo, Alfredo Morales, Juan Carlos Losada, Ana Maria Tarquis and Rosa M. BenitoTuesday, 15:40-17:00

Agriculture workers in Senegal represent over 70% of its labor force. The seasonal nature of an agriculture-based economy implies the alternation of periods of higher and lower labor activity, which triggers the seasonal migration of workers. We have been able to detect mobility patterns that seem to be associated with these phenomena. In particular, we have found an increase in the number of migrants throughout the country during the harvest season.
In order to do that, we have built a special kind of mobility network that we call a migration network. In a migration network the nodes are locations that, depending on the scale, can correspond to cities, districts, regions, etc. By looking at the times and places from which a user makes her calls, we can infer the location that corresponds to her regular residence, as well as detect whether she has changed her residence for a significant period of time; that is, whether she has migrated, and where. This second location corresponds to her temporary residence. The links of the migration network are directed and traced from usual-residence nodes to temporary-residence nodes. This has enabled us to show the influence of the harvest season on the migration flows and to study the impact of religious events on the trajectory networks.
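The residence-inference step might be sketched as follows. This is a simplified stand-in, assuming a reduced call-record format of (day, location) pairs per user; the study's actual criteria (call times, stay-duration thresholds, spatial scale) are richer than the modal-location rule used here.

```python
from collections import Counter

def detect_migration(call_days, min_stay=30):
    """call_days: time-ordered (day, location) pairs inferred from one user's calls.
    Home = modal location overall; a directed migration-network link
    home -> temp is emitted when another location dominates at least
    min_stay consecutive observed days (the temporary residence)."""
    home = Counter(loc for _, loc in call_days).most_common(1)[0][0]
    edges, run_loc, run_len = set(), None, 0
    for _, loc in call_days:
        run_loc, run_len = (loc, run_len + 1) if loc == run_loc else (loc, 1)
        if run_loc != home and run_len >= min_stay:
            edges.add((home, run_loc))   # directed link of the migration network
    return home, edges
```

Aggregating such edges over all users, per season, yields the migration networks whose flows are compared across the harvest period and religious events.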

Developing adaptive data analysis approaches for protein researchMing-Chya WuTuesday, 18:00-20:00

The biological activities and functional specificities of proteins depend on their native three-dimensional structures, which are defined by the corresponding chemical compositions or sequences. Statistics on the structures suggests that there are universal geometric factors acting as constraints on native conformations, while the combinations of properties of amino acids in sequences lead to a diversity of functions. Here, we introduce a novel approach to study the structure, stability, and dynamics of biomolecules, based on adaptive data analysis approaches developed to reconcile these two concepts. Application of the idea to the analysis of intrinsically disordered proteins will also be discussed.

Differentiation in Convergence Spaces: Heterotic Dynamical SystemsHoward BlairTuesday, 18:00-20:00

With topology, continuity of functions generalizes from the context of classical analysis to a huge collection of structures, the topological spaces. Less well known, but increasingly so [1], are convergence spaces, built on the work of Henri Cartan, who began with filters of sets as the basic notion to treat convergence, as distinct from open sets; this is elucidated in full in Bourbaki [2]. Built as they are on the notion of filters, several key properties of convergence spaces, individually and collectively, allow one to conservatively extend the notion of differentiation from normed spaces to convergence spaces. By conservatively we mean that the notion of differentiation remains unchanged on normed spaces, thus in no way altering definitions of differentiation in familiar Euclidean and Hilbert spaces, for example.
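For readers less familiar with the filter-based setting, one common axiomatization (conventions vary slightly across the literature) equips a set $X$ with a relation $\mathcal{F} \to x$ between filters on $X$ and points of $X$ satisfying

```latex
[x] \to x
\quad \text{(the principal filter at $x$ converges to $x$)},
\qquad
\mathcal{F} \to x \ \text{and}\ \mathcal{F} \subseteq \mathcal{G}
\ \Longrightarrow\ \mathcal{G} \to x ,
```

and a function $f : X \to Y$ between convergence spaces is continuous when $\mathcal{F} \to x$ implies $f(\mathcal{F}) \to f(x)$, where $f(\mathcal{F})$ is the filter generated by the images of the sets in $\mathcal{F}$.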

This is not a work in category theory; however, the collection of all convergence spaces forms a large cartesian closed category. This result [3] has several important consequences, among them: (1) topological spaces are convergence spaces, and thus convergence spaces preserve the notion of continuity on topological spaces; (2) directed graphs are convergence spaces, and thus convergence spaces preserve the notion of homomorphism on directed graphs; (3) there is a uniform way of regarding function spaces of continuous functions on convergence spaces as convergence spaces (enabling heterotic functional analysis); and, key to differentiation in convergence spaces, (4) the composition operation on finite products of function spaces is continuous, as is the evaluation function. Moreover, (5) there are several ways to decompose or embed convergence spaces into homogeneous convergence spaces with commuting regular actions on them: a convergence space with a commuting regular action is a module over a ring. With (5) we recover the ability to have linear functions serve as differentials, but where the nonzero scalars are now endomorphisms of the regular action.

This development of differentiation in convergence spaces - the original motivation for the work reported here - allows differentiation of functions involving diverse types, in particular, differentiation of functions from the real numbers into discrete structures.
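For readers less familiar with the filter-based approach, the standard definitions underlying the abstract (following Bourbaki's treatment) can be stated briefly:

```latex
\textbf{Definition.} A \emph{convergence space} is a set $X$ together with a
relation $\mathcal{F} \to x$ between filters $\mathcal{F}$ on $X$ and points
$x \in X$ such that the principal filter $[x] \to x$ for every $x \in X$, and
$\mathcal{G} \to x$ whenever $\mathcal{F} \to x$ and $\mathcal{G} \supseteq \mathcal{F}$.

\textbf{Definition.} A map $f\colon X \to Y$ between convergence spaces is
\emph{continuous} at $x$ if $\mathcal{F} \to x$ implies $f(\mathcal{F}) \to f(x)$,
where $f(\mathcal{F})$ denotes the filter generated by $\{\, f(F) : F \in \mathcal{F} \,\}$.
```

When the convergence relation is the one induced by a topology, this notion of continuity coincides with the usual topological one, which is the sense in which the extension is conservative.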

Distinguishing functional brain states through the analysis of extreme events in human EEGKanika Bansal, Javier Garcia, Jean Vettel and Sarah MuldoonTuesday, 18:00-20:00

It has been proposed that the brain is functionally organized to operate near criticality, which, in principle, could provide a mechanism to facilitate efficient processing and storage of information. Often, this criticality is demonstrated by the power law distributions of neuronal avalanches – large bursts of neuronal activity – that are expected to display theoretically predicted universal exponents and features. Here, we analyzed large fluctuations in electroencephalogram (EEG) recordings of twenty-six healthy humans in order to study how these ‘extreme events’, which represent the ongoing neuronal activity (avalanches) within the brain, relate to human behavior and task performance. We observe that the spatiotemporal distributions of extreme events are dissimilar across individuals and show deviations from theoretically proposed criticality features, both during rest and task performance. Further, these deviations vary across individuals and are predictive of the individual’s ability to perform certain tasks. We therefore propose that our analysis paradigm is useful in characterizing functional brain states and making predictions about human task performance.

Distributed Computational Intelligence for the Next-Generation Internet-of-ThingsPredrag TosicWednesday, 15:40-17:00

Internet-of-Things (IoT) is one of the most important new paradigms and technological advances in the realm of pervasive Internet-powered cyber-physical systems of this decade and likely many years to come. We are interested in the software and AI enabling technologies and architectures that will be the driving force behind the Next-Generation IoT; in particular, given our background in AI and Multi-Agent Systems research, we focus on the distributed intelligence, agent-based programming and multi-agent coordination & cooperation aspects of what it will take to enable a reliable, secure, inter-operable and human-friendly Next-Generation IoT.

In this work, we focus on identifying the programming abstractions and distributed intelligence paradigms for, as well as enabling the self-healing capabilities of, the NG IoT. In that context, we discuss several software design and computational intelligence aspects that hold significant promise for the NG IoT (as well as other heterogeneous, open, very large scale cyber-physical systems). First, we discuss suitable programming models for the software agents that would provide the IoT's inter-operability, enabling different users, devices and platforms to effectively communicate, coordinate and share data and other resources with each other. To address the cyber-security aspects of the NG IoT holistically, we outline some elements of distributed computational intelligence that would enable the self-healing and self-recovery capabilities of the NG IoT. We argue that the design of the NG IoT's cyber-defense, self-healing and self-recovery mechanisms would greatly benefit from exploring and applying paradigms from biology, more specifically, from the immune systems of living organisms. If such highly adaptable self-healing capabilities were built into the NG IoT, future cyber-attacks would cause much less disruption to the infrastructure and the end-users than has been the case with some recent cyber-attacks on the contemporary IoT (such as the Distributed Denial of Service attacks in the eastern United States in October 2016).

Diversity and collaboration in EconomicsSultan OrazbayevTuesday, 14:00-15:20

Publications written by authors from different countries, on average, are published in better journals, have higher citation counts, and are evaluated more positively by peers. Similar ‘diversity premia’ exist for collaborations across ethnicities and genders. Based on collaborations among 34 thousand economists, this paper examines the role of authors' social network properties in explaining the positive quality-diversity correlation. After controlling for a range of relevant factors, the authors' position in the global research network plays an important role in explaining variation in the quality of a collaboration, proxied by citation counts and the simple impact factor of the journal in which the article is published. Access to non-redundant social ties in the global research network is associated with greater quality of the collaboration. This suggests that diversity is important only to the extent that it correlates with non-redundancy of social ties.

Dynamics & kinematics at small scales: from micro & nano bubbles to nanotubulationBalakrishnan AshokMonday, 14:00-15:20

We discuss the behaviour of systems at the micro and nanoscales, looking at three interesting examples in particular. We first discuss our theoretical work on charged micro- and nano-bubbles undergoing radial oscillations in a liquid due to ultrasonic forcing. We obtain charge, frequency & pressure thresholds for the system. We show how the electric charge crucially affects the nonlinear oscillations of the bubble and limits the influence of the other control parameters, such as the pressure amplitude & frequency of the driving ultrasound, on the bubble dynamics. We believe our work has ramifications both for medical diagnostics and for industrial applications.
We then report our theoretical work on the behaviour of nanotubes drawn out from micrometre-scale vesicles, which exhibit very interesting dynamics. Our theoretical model captures and reproduces all aspects of the force-extension curves reported in the experimental literature, explaining the dynamics of vesicular nanotubulation for the first time.
Lastly we shall touch upon our theoretical modelling of multi-walled carbon nanotubes and show how our simple theoretical model reproduces the results obtained from quantum-mechanical calculations, while successfully predicting the elastic constant for the system and the functional dependence of interaction energy on the dimensions of the nanotubes.

Dynamics of Community Structure in an Adaptive Voter ModelPhilip Chodrow and Peter MuchaMonday, 14:00-15:20

Modern social network platforms offer users unprecedented volumes of interactions, while also offering them unprecedented control over the content, people and opinions to which they are exposed. This combination of factors is often blamed as a contributor to the current, fractured state of American political culture. However, we lack a quantitative understanding of how agent-level forces such as local influence and homophily drive system-level outcomes such as fragmentation and polarization. This is a natural problem for agent-based modeling; however, the prohibitive cost of massive network simulations calls for models of social fragmentation that can be both simulated and analyzed mathematically.

We therefore develop a novel analysis of the Adaptive Voter Model (AVM). In this model, agents argue with friends who hold differing opinions, thereby seeking uniformity in their local networks. In addition, agents display homophilic preferences, and may sever ties to friends with whom they disagree. In traditional models, the interplay of local influence and homophily eventually pushes the network into an unrealistic, fully fragmented state, which may be either egalitarian or dominated by a single opinion depending on system parameters. To more appropriately model social networks, we introduce external forces to the model, which, even at low magnitude, resist full fragmentation and introduce the possibility of persistent dialogue between disagreeing agents.

We survey the model’s behavior in different regions of parameter space, showing a first- order phase transition marking the passage from full fragmentation to nonzero levels of persistent dialogue. We then develop a novel, Markovian approximation for the link densities that allows us to compute this phase transition, as well as the magnitude of persistent disagreement. Compared to existing methods for similar systems, ours achieve superior levels of accuracy and vastly superior scaling properties, allowing analytical calculations for denser graphs with greater numbers of opinions than previously possible. We close with a discussion of various generalizations, including asymmetric external forces and structured opinion spaces.
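The rewire-or-adopt dynamics at the heart of the AVM can be sketched in a few lines. The sketch below is illustrative only: the graph size, the rewiring probability, and the restriction to like-minded rewiring targets are our assumptions, not the authors' implementation, and the external forces discussed above are omitted.

```python
import random

def adaptive_voter_step(adj, opinion, rewire_prob, rng):
    """One update of the adaptive voter model: pick a discordant edge,
    then either rewire it (homophily) or copy the neighbour's opinion."""
    discordant = [(i, j) for i in adj for j in adj[i]
                  if i < j and opinion[i] != opinion[j]]
    if not discordant:
        return False  # fully fragmented: no disagreeing neighbours remain
    i, j = rng.choice(discordant)
    if rng.random() < rewire_prob:
        # Sever the tie and reconnect i to a like-minded non-neighbour.
        candidates = [k for k in adj if k != i and k not in adj[i]
                      and opinion[k] == opinion[i]]
        if candidates:
            k = rng.choice(candidates)
            adj[i].remove(j); adj[j].remove(i)
            adj[i].add(k); adj[k].add(i)
    else:
        opinion[i] = opinion[j]  # local influence: adopt the neighbour's view
    return True

rng = random.Random(0)
n = 30
adj = {i: set() for i in range(n)}
while sum(len(s) for s in adj.values()) < 2 * 60:  # 60 random edges
    a, b = rng.sample(range(n), 2)
    adj[a].add(b); adj[b].add(a)
opinion = {i: rng.randint(0, 1) for i in range(n)}
for _ in range(500):
    if not adaptive_voter_step(adj, opinion, rewire_prob=0.3, rng=rng):
        break
n_edges = sum(len(s) for s in adj.values()) // 2
```

Note that rewiring conserves the number of edges, which is what makes pair-approximation analyses of the link densities tractable.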

Dynamics of Financial Flows in the Developmental ContextAabir Abubaker Kar and Yaneer Bar-YamMonday, 14:00-15:20

The effect of removing the self-loops on cell cycle networkShu-Ichi Kinoshita and Hiroaki YamadaMonday, 15:40-17:00

Recently, we numerically investigated the role of degenerate self-loops on the attractors and their basin sizes in the cell-cycle network of the budding yeast. We found that, for the point attractors, removing the self-loops from the network induces a simple division rule of the state space. We report the validity of this simple division rule for the cell-cycle network of the fission yeast and the ES-cell network of C. elegans, comparing with the results for the budding yeast.
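As a toy illustration of the kind of computation involved, the following sketch enumerates point attractors and their basin sizes for a small synchronous threshold network, with a switch to drop the self-loops; the 3-node wiring is hypothetical, not the yeast network, and the update rule is one common convention for cell-cycle Boolean models.

```python
from itertools import product

def step(state, weights, self_loops=True):
    """Synchronous threshold update: s_i <- 1 if weighted input > 0,
    0 if < 0, unchanged if == 0 (a common cell-cycle network rule)."""
    n = len(state)
    nxt = list(state)
    for i in range(n):
        total = sum(weights[j][i] * state[j] for j in range(n)
                    if self_loops or j != i)
        if total > 0:
            nxt[i] = 1
        elif total < 0:
            nxt[i] = 0
    return tuple(nxt)

def basin_sizes(weights, n, self_loops=True):
    """Map every initial state to its attractor by iterating until a state
    repeats; tally basin sizes for the point attractors only."""
    basins = {}
    for init in product((0, 1), repeat=n):
        state, seen = init, set()
        while state not in seen:
            seen.add(state)
            state = step(state, weights, self_loops)
        if step(state, weights, self_loops) == state:  # point attractor
            basins[state] = basins.get(state, 0) + 1
    return basins

# Hypothetical 3-node wiring: activation (+1), inhibition (-1), one self-loop.
W = [[-1, 1, 0],   # node 0: self-inhibition, activates node 1
     [0, 0, 1],    # node 1 activates node 2
     [1, -1, 0]]   # node 2 activates node 0, inhibits node 1
with_loops = basin_sizes(W, 3, self_loops=True)
without_loops = basin_sizes(W, 3, self_loops=False)
```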

Effects of peers influence on voting and health-risk behaviour at the population level without using link dataAntonia Godoy-Lorite and Nick JonesWednesday, 14:00-15:20

The social network, understood as the structure of influence between individuals, is of major interest in the study of public opinion formation. It is, however, impossible to know the complete social network at the population level. Sociological studies typically analyse the relation of opinion to socio-demographic variables such as age, gender, ethnicity, religion or income, while correlations between individuals are often treated as confounds rather than primitives. In this project we reverse these priorities and attempt to privilege the inference of universal inter-individual dependencies (common across multiple datasets) over understanding factors specific to any one opinion.
Our mathematical framework represents population opinion outcomes (specifically Brexit remain/leave and London mayoral election conservative/liberal data) as binary spins embedded in a high-dimensional space, called Blau space, that combines spatial location with social variables such as age, gender and income. We model the spin configuration with an Ising-like model. This approach allows us to infer the social network structure by fitting the parameters of a connectivity kernel that captures the scale of decorrelation of peer influence along the different socio-spatial dimensions of the Blau space. We also find that peer influence accounts on average for 30% of election outcomes.
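A minimal sketch of the modelling idea: spins coupled through a distance kernel in a synthetic Blau space, sampled with Metropolis dynamics. The kernel form, the per-dimension scales, and all parameter values are illustrative assumptions, not the fitted model.

```python
import math, random

rng = random.Random(1)
n = 40
# Hypothetical Blau-space coordinates: (spatial position, age, income), rescaled.
coords = [(rng.random(), rng.random(), rng.random()) for _ in range(n)]
scales = (0.2, 0.5, 0.3)  # illustrative decorrelation scales per dimension

def coupling(a, b):
    """Connectivity kernel: influence decays with anisotropic Blau distance."""
    d2 = sum(((a[k] - b[k]) / scales[k]) ** 2 for k in range(3))
    return math.exp(-math.sqrt(d2))

J = [[coupling(coords[i], coords[j]) if i != j else 0.0
      for j in range(n)] for i in range(n)]

spin = [rng.choice((-1, 1)) for _ in range(n)]
for _ in range(2000):  # Metropolis single-spin-flip sampling
    i = rng.randrange(n)
    dE = 2 * spin[i] * sum(J[i][j] * spin[j] for j in range(n))
    if dE <= 0 or rng.random() < math.exp(-dE):
        spin[i] = -spin[i]
magnetization = sum(spin) / n
```

In the actual inference problem one would fit the kernel scales to observed opinion correlations rather than fix them by hand.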

Efficient detection of hierarchical block structures in networksMichael Schaub and Leto PeelWednesday, 14:00-15:20

Many complex systems can be meaningfully coarse-grained into modules according to the pattern of interactions between the components or entities of the system. This coarse-graining provides a more interpretable view of the system. One such approach is to represent the system as a network and perform community detection. However, many complex systems contain relevant information at multiple resolutions, but most previous work on community detection has focused on discovering flat community structures, thus allowing information to be captured at a single resolution. Here, we circumvent this issue by developing a hierarchical community detection method to capture network structure across all resolutions.

While previous approaches exist for detecting hierarchical communities, these either rely on approximate heuristics or Markov chain Monte Carlo methods, for which convergence can be slow and difficult to diagnose. We take advantage of recently developed efficient (scaling linearly with the size of the network) spectral methods based on the regularised Laplacian and Bethe Hessian operators. These approaches provide approximate inference for the stochastic blockmodel (SBM), a popular model of network community structure in which nodes are assigned to one of k groups such that the probability of a link within and between groups can be accurately and compactly described by a k × k mixing matrix Ω. The SBM allows us to capture a wide range of mesoscopic structures that can be assortative, disassortative, core-periphery or a combination thereof.

These methods are of particular interest as they have been shown to be "optimal", detecting communities right down to the theoretical limit of detectability. We extend these results and use them to develop an efficient spectral algorithm that determines whether hierarchical structure exists and accurately recovers it when it does. We achieve this by considering the geometry of spectral clustering in the context of hierarchically structured stochastic blockmodels. Specifically, we show that a hierarchical arrangement gives rise to a so-called external equitable partition at each level of the hierarchy. This implies certain properties of the eigenvectors, namely that there exists a set of eigenvectors constant on each of the blocks of the partition. We use these spectral properties to develop an efficient hierarchy detection scheme. By combining such spectral approaches with a model-based technique we obtain fast and easily implementable results, while still being able to relate our results to a specific generative model and all its advantages, such as the ability to do link prediction, obtain confidence estimates, or generate surrogate and bootstrap samples after having fit the model.
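To make the Bethe Hessian ingredient concrete, here is a minimal sketch on a deterministic toy graph (two disconnected 5-cliques), where the number of negative eigenvalues of H(r) recovers the number of communities. The choice r ≈ sqrt(average degree) is the usual heuristic; nothing here reproduces the hierarchical scheme itself.

```python
import numpy as np

def bethe_hessian(A, r):
    """Bethe Hessian operator: H(r) = (r^2 - 1) I - r A + D,
    with D the diagonal degree matrix."""
    n = A.shape[0]
    D = np.diag(A.sum(axis=1))
    return (r * r - 1) * np.eye(n) - r * A + D

# Toy graph: two 5-cliques with no edges between them (2 clear communities).
n = 10
A = np.zeros((n, n))
for block in (range(0, 5), range(5, 10)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0

r = np.sqrt(A.sum() / n)      # r ~ sqrt(average degree), the usual choice
H = bethe_hessian(A, r)
eigvals = np.linalg.eigvalsh(H)
k = int(np.sum(eigvals < -1e-9))  # negative eigenvalues ~ number of communities
```

For this graph r = 2 and H = 7I - 2A, whose spectrum has exactly two negative eigenvalues, one per clique.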

Eigenvector-Based Centrality Measures for Multiplex and Temporal NetworksDane Taylor, Peter Mucha and Mason PorterWednesday, 14:00-15:20

Quantifying the importance of nodes in biological, technological and social networks is a central pursuit for numerous applications, and it is important to develop improved techniques for more comprehensive data structures such as multilayer network models, in which layers encode different edge types, such as a network at different instances in time or data obtained from complementary sources. In this work, we present a principled generalization of eigenvector-based centralities---which include PageRank, hub/authority scores, and non-backtracking centrality, among others---for the analysis of multiplex and temporal networks with general schemes for the coupling between layers. A key aspect of this approach involves studying joint, marginal and conditional centralities, which are derived from the dominant eigenvector of a supracentrality matrix. We characterize these centralities in the strong- and weak-coupling limits using singular perturbation theory and apply this approach to empirical and synthetic datasets. Among other insights, this work reveals how fine-tuning the interlayer coupling can enhance the performance of centrality and ranking.
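A rough sketch of the supracentrality construction for a temporal network: layer matrices on the block diagonal, uniform interlayer coupling along a temporal chain, and joint/marginal/conditional centralities read off the dominant eigenvector. The synthetic layers, the chain coupling, and the use of raw adjacency matrices (rather than, say, per-layer centrality matrices) are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 5, 3           # nodes, time layers
omega = 1.0           # interlayer coupling strength

# Synthetic nonnegative layer adjacency matrices.
layers = [rng.random((n, n)) for _ in range(T)]

# Supracentrality matrix: block-diagonal layer matrices plus omega times
# identity blocks coupling each layer to its temporal neighbours (a chain).
C = np.zeros((n * T, n * T))
for t in range(T):
    C[t*n:(t+1)*n, t*n:(t+1)*n] = layers[t]
    if t + 1 < T:
        C[t*n:(t+1)*n, (t+1)*n:(t+2)*n] = omega * np.eye(n)
        C[(t+1)*n:(t+2)*n, t*n:(t+1)*n] = omega * np.eye(n)

# Dominant eigenvector by power iteration (nonnegative, so Perron-Frobenius
# gives a positive dominant eigenvector).
v = np.ones(n * T)
for _ in range(500):
    v = C @ v
    v /= np.linalg.norm(v)

joint = v.reshape(T, n) / v.sum()        # joint centrality of (layer, node)
marginal_node = joint.sum(axis=0)        # marginal node centrality
marginal_layer = joint.sum(axis=1)       # marginal layer centrality
conditional = joint / joint.sum(axis=1, keepdims=True)  # within each layer
```

Varying `omega` between the weak- and strong-coupling regimes is what the singular perturbation analysis in the abstract characterizes.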

Election Methods and Collective DecisionsThomas CavinMonday, 15:40-17:00

This paper presents some simulation results on various collective decision methods in the context of Downsian proximity electorates. I show why these results are less than ideal, and contrast these different voting systems with a new system called Serial Approval Vote Elections (SAVE), which produces better outcomes that approach the ideal represented by the median voter theorem. I show how SAVE works in both normal and unusual electorates, how SAVE can be easily integrated into committee procedures, and how SAVE can be used in larger elections.

Embodied Cognition and Multi-Agent Behavioral EmergencePaul Silvey, Michael Norman and Jason KutarniaWednesday, 15:40-17:00

Autonomous systems embedded in our physical world need real-world interaction in order to function, but they also depend on it as a means to learn. This is the essence of artificial Embodied Cognition, in which machine intelligence is tightly coupled to sensors and effectors and where learning happens from continually experiencing the dynamic world as time-series data, received and processed from a situated and contextually-relative perspective. From this stream, our engineered agents must perceptually discriminate, deal with noise and uncertainty, recognize the causal influence of their actions (sometimes with significant and variable temporal lag), pursue multiple and changing goals that are often incompatible with each other, and make decisions under time pressure. To further complicate matters, unpredictability caused by the actions of other adaptive agents makes this experiential data stochastic and statistically non-stationary. Reinforcement Learning approaches to these problems often oversimplify many of these aspects, e.g., by assuming stationarity, collapsing multiple goals into a single reward signal, using repetitive discrete training episodes, or removing real-time requirements. Because we are interested in developing dependable and trustworthy autonomy, we have been studying these problems by retaining all these inherent complexities and only simplifying the agent's environmental bandwidth requirements. The Multi-Agent Research Basic Learning Environment (MARBLE) is a computational framework for studying the nuances of cooperative, competitive, and adversarial learning, where emergent behaviors can be better understood through carefully controlled experiments. In particular, we are using MARBLE to evaluate a novel reinforcement learning long-term memory data structure based on probabilistic suffix trees. Here, we describe this research methodology, and report on the results of some early experiments.
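The long-term memory structure mentioned above is based on probabilistic suffix trees; a minimal suffix-context predictor in that spirit (illustrative only, not MARBLE's actual design) might look like:

```python
from collections import defaultdict

class SuffixPredictor:
    """Minimal probabilistic-suffix-style memory: count next-symbol
    frequencies after every context up to max_depth, and predict from the
    longest context previously seen."""
    def __init__(self, max_depth=3):
        self.max_depth = max_depth
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, seq):
        for i in range(1, len(seq)):
            for d in range(1, self.max_depth + 1):
                if i - d < 0:
                    break
                ctx = tuple(seq[i - d:i])
                self.counts[ctx][seq[i]] += 1

    def predict(self, history):
        # Back off from the longest matching context to shorter ones.
        for d in range(min(self.max_depth, len(history)), 0, -1):
            ctx = tuple(history[-d:])
            if ctx in self.counts:
                nxt = self.counts[ctx]
                total = sum(nxt.values())
                return {s: c / total for s, c in nxt.items()}
        return {}

m = SuffixPredictor(max_depth=3)
m.train("abcabcabc")
p = m.predict("ab")
```

In a reinforcement-learning setting the symbols would be observation/action events and the predicted distribution would feed value estimation under non-stationarity.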

Emergence of encounter networks due to human mobilityJose L. Mateos and Alejandro Pérez RiascosThursday, 15:40-17:00

There is a burst of work on human mobility and encounter networks; however, the connection between these two important fields has only recently begun to be explored. It is clear that both are closely related: mobility generates encounters, and these encounters might give rise to contagion phenomena or even friendship. We model a set of random walkers that visit locations in space following a strategy akin to Lévy flights. We measure the encounters in space and time and establish a link between walkers after they coincide several times. This generates a temporal network that is characterized by global quantities. We compare these dynamics with real data for two cities: New York City and Tokyo. We use data from the location-based social network Foursquare and obtain the emergent temporal encounter network for these two cities, which we compare with our model. We find long-range (Lévy-like) distributions for traveled distances and time intervals that characterize the emergent social network due to human mobility. Studying this connection is important for several fields like epidemics, social influence, voting, contagion models, behavioral adoption and diffusion of ideas.

Riascos AP, Mateos JL (2017) Emergence of encounter networks due to human mobility. PLoS ONE 12(10): e0184532.
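A toy version of the mobility-to-encounters pipeline described above: walkers take Pareto-distributed (Lévy-like) jumps on a periodic grid, and a link forms after a pair co-locates a threshold number of times. Grid size, exponent, and threshold are illustrative assumptions.

```python
import math, random
from collections import defaultdict

rng = random.Random(3)
n_walkers, n_steps, grid = 20, 300, 50
alpha, threshold = 1.5, 2   # Levy exponent; encounters needed to form a link

pos = [(rng.randrange(grid), rng.randrange(grid)) for _ in range(n_walkers)]
meetings = defaultdict(int)
edges = set()

for _ in range(n_steps):
    for w in range(n_walkers):
        step = rng.paretovariate(alpha)          # heavy-tailed jump length
        angle = rng.uniform(0, 2 * math.pi)
        x = int(pos[w][0] + step * math.cos(angle)) % grid
        y = int(pos[w][1] + step * math.sin(angle)) % grid
        pos[w] = (x, y)
    # Record pairwise encounters: walkers occupying the same grid cell.
    cells = defaultdict(list)
    for w, p in enumerate(pos):
        cells[p].append(w)
    for group in cells.values():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                pair = (group[i], group[j])
                meetings[pair] += 1
                if meetings[pair] == threshold:
                    edges.add(pair)  # link after repeated encounters
```

Time-stamping each encounter instead of only counting it would yield the temporal network studied in the paper.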

Emergence of network effects and predictability in the juridical systemEnys Mones, Simon Thordal, Piotr Sapiezynski, Henrik Palmer and Sune LehmannTuesday, 14:00-15:20

Courts constitute one of the three branches of power (the judiciary branch), and as such, they are fundamental to the functioning of our democracies. Supreme courts are distinguished as they interpret the basic laws at the highest level and their decisions are then referred to as precedents in courts at other levels. As the courts strive to remain self-consistent and able to adapt to new legal challenges, the network of references connecting verdicts and rulings continues to grow exponentially and exhibits an ever-increasing level of complexity. Due to the importance of references to previous cases within legal reasoning, knowledge regarding the underlying patterns of citations between rulings can help us understand the mechanisms shaping the legal system. Here we investigate the citation patterns of The Court of Justice of the European Union (CJEU) in order to understand the underlying factors that affect the decision making process.

We consider the network of citations in the period between 1955 and 2014, where the network consists of the individual cases, and directed links signify the citations between the cases. As the court evolves, the network grows and the structure of citations becomes increasingly complex. Our main question is whether---and to what extent---the observed structure of the citation network can account for the citation patterns seen in the court. We pose the question as a link prediction problem. More precisely, we define six contextual and structural quantities and use these as input variables to predict each link in the network separately. The prediction is implemented as a recommender system: for a single link, we assign a score to all possible links and calculate the position of the original link in the sorted predictions.

This process provides, not only a measure of predictability of the court itself, but by interpreting the importance and the predictive power of individual properties, we learn about the nature of the underlying mechanism of citations. We show that the court’s citations are predictable to a surprisingly high extent. Further, we investigate the temporal evolution of the performance and importance of single features and show that contextual properties---such as the similarity between the content of the cases or their age---have a decreasing significance in describing the observed citations, compared to the increasing predictive power of structural similarities such as common citations. Then, we study the heterogeneity of the court with respect to its communities defined solely on structural basis.

Our content analysis shows these network communities are coherent sub-fields of the court.
We perform the link prediction procedure restricted to the communities and find that the court is highly heterogeneous with respect to the significant properties that predict the citations.
Each community is characterized by a particular set of feature preferences that are descriptive of the references inside the community. The implications of the results are two-fold: they allow us to better understand the complex structure of the court decisions, but also to build recommendation systems aiding the work of practitioners.
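The recommender-style evaluation is easy to sketch: score every candidate target for a given case and report the position of the true citation in the ranking. The common-neighbours score below stands in for the paper's six contextual and structural features, and the toy citation dictionary is hypothetical.

```python
def common_neighbors_score(cites, a, b):
    """Structural similarity: number of cases cited by both a and b."""
    return len(cites.get(a, set()) & cites.get(b, set()))

def rank_of_true_link(cites, source, true_target):
    """Recommender-style evaluation: score every candidate target and
    return the position of the actual citation in the sorted list."""
    candidates = [c for c in cites if c != source]
    scored = sorted(candidates,
                    key=lambda c: common_neighbors_score(cites, source, c),
                    reverse=True)
    return scored.index(true_target) + 1  # 1 = perfectly predicted

# Toy citation network: case -> set of cases it cites (hypothetical).
cites = {
    "A": {"X", "Y", "Z"},
    "B": {"X", "Y"},      # shares two citations with A
    "C": {"X"},           # shares one
    "D": set(),           # shares none
    "X": set(), "Y": set(), "Z": set(),
}
rank = rank_of_true_link(cites, "A", "B")
```

Averaging such ranks over all held-out links gives the overall predictability measure of the court.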

The emergence of social-ecological regime shifts and transformationsMaja Schlüter, Emilie Lindkvist, Romina Martin, Kirill Orach and Nanda WijermansMonday, 15:40-17:00

Social-ecological systems (SES) are complex adaptive systems of humans embedded in ecosystems. Change or persistence of SES, such as regime shifts, transformations or traps, emerges from multiple interactions between diverse people and dynamic ecosystems within and across scales. Uncovering mechanisms that explain emergent SES phenomena remains challenging. It requires approaches that can capture the interdependence between people and ecosystems and view macro-level SES outcomes as both shaping and being shaped by micro-level actions and adaptations in a continuously evolving process. This paper will discuss the application of agent-based modelling to uncover mechanisms that link collective properties and behaviours with the actions of individual entities, and vice versa, to produce macro-level outcomes such as the adaptation of a policy to environmental change or a transition to cooperative self-governance in small-scale fisheries. We will particularly focus on how micro-level interactions, e.g. between fishers in a cooperative, can create feedbacks that influence emergent outcomes such as the stability of cooperative self-governance. These dynamics, however, only emerge under certain macro-level conditions which are themselves emergent results of meso-level interactions between cooperatives. We will discuss methodological and theoretical challenges of unravelling the complex, multi- and cross-level causalities that determine the emergent dynamics of human-environment systems.

The Emergence of Trust and Value in Public Blockchain NetworksMichael Norman, Yiannis Karavas and Harvey ReedTuesday, 18:00-20:00

Public blockchain networks, such as Bitcoin and Ethereum, appear to be complex systems, due to the emergence of certain properties which only become observable at a system level from the choices made by a set of decentralized participants. Two of the most salient properties of these complex public blockchain networks are trust and value. We propose that a complex systems-based perspective should be applied to public blockchain networks and offer a number of possible applications of complexity science in this space.

Emergent collective motion in one-dimensional systems of interacting Brownian particlesVictor Dossetti and Iván Fernando Herrera-GonzálezTuesday, 15:40-17:00

We study a one-dimensional system of off-lattice Brownian particles that interact among themselves through a local velocity-alignment force that does not affect their speed. These conditions restrict the implementation of the aligning forces to a time-based scheme; as a consequence, two different cases are analyzed: synchronous and asynchronous. In the first, velocity alignment is implemented periodically throughout the whole system while, in the second, it is applied probabilistically at every time-step in a Monte Carlo fashion. As the frequency of alignment increases in the synchronous case, or the probability of alignment in the asynchronous one, the system is driven from stationary states close to thermal equilibrium to far-from-equilibrium ones, where it exhibits spontaneous symmetry breaking and self-organization characterized by long-range order and giant number fluctuations. Our results show that self-propulsion is not necessary to induce the flocking transition, even in one-dimensional systems. Moreover, the order parameter in the synchronous version of our model shows a regular spiking and resetting activity as the system approaches the thermodynamic limit, even for low densities. This behavior resembles, for example, the ordering and relaxation processes in Axelrod's model for social influence with cultural drift, driven by a periodic mass media campaign.
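A minimal sketch of the asynchronous case: Brownian particles on a ring whose direction, but never speed, is realigned to the local majority with some probability per time-step. All parameter values are illustrative.

```python
import random

rng = random.Random(4)
n, L = 50, 100.0
p_align, radius, dt, noise = 0.3, 1.0, 0.1, 0.5

x = [rng.uniform(0, L) for _ in range(n)]
v = [rng.choice((-1, 1)) * 1.0 for _ in range(n)]  # unit speed, random sign

for _ in range(200):
    # Brownian motion with drift on a periodic domain.
    for i in range(n):
        x[i] = (x[i] + v[i] * dt + rng.gauss(0, noise) * dt) % L
    if rng.random() < p_align:  # asynchronous (Monte Carlo) alignment step
        for i in range(n):
            # Align direction with local neighbours; speed is untouched.
            local = [v[j] for j in range(n)
                     if min(abs(x[j] - x[i]), L - abs(x[j] - x[i])) < radius]
            s = sum(1 if u > 0 else -1 for u in local)
            if s != 0:
                v[i] = abs(v[i]) * (1 if s > 0 else -1)

order = abs(sum(1 if u > 0 else -1 for u in v)) / n  # polar order parameter
```

Sweeping `p_align` from 0 to 1 traces the passage from near-equilibrium states to the ordered, symmetry-broken regime described above.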

Emergent patterns of chimera and synchrony in data-driven models of brain dynamicsKanika Bansal, Timothy Verstynen, Jean Vettel and Sarah MuldoonWednesday, 14:00-15:20

The human brain is a complex dynamical system, and it functions through the emergence of spatiotemporal patterns of coherent and incoherent activity as regional neuronal populations interact. Often, separate domains of coherence and incoherence can be observed to coexist, forming a state referred to as a ‘chimera’ in the complex systems framework. Recent studies indicate that chimeras appear due to an interplay between the characteristic dynamics, topology, and coupling functions of interacting network elements. In this work, we constructed data-driven network models of brain dynamics and studied the emergent patterns of synchrony and chimera that are constrained by the underlying anatomical connectivity of the brain. To construct the brain models, we obtained the anatomical connectivity of individuals' brains from diffusion weighted imaging data across a cohort of thirty subjects. For each subject, we modeled regional brain dynamics using nonlinear Wilson-Cowan oscillators coupled through the individual's observed connectivity. We then sequentially studied the effects of applying computational regional activation across brain regions and across the cohort of subjects. We analyzed the spread of the activation and the separation of synchronized and de-synchronized populations within the brain by calculating network synchrony, revealing distinct patterns of coherence, incoherence, and chimera states depending on the specific activation site. Our results indicate that different regions of the brain are structurally constrained to produce patterns that can fulfill their functional roles. We further describe the utility of the chimera framework in aiding our understanding of the anatomical organization of human brain networks and in probing individual variability.
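A bare-bones version of the modelling setup: Wilson-Cowan-style excitatory/inhibitory populations per region, coupled through a (here random) structural matrix and integrated with Euler steps. The parameter values, the random connectivity, and the synchrony proxy are illustrative assumptions, not the fitted models used in the study.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(5)
n = 10
A = rng.random((n, n)) * (rng.random((n, n)) < 0.3)  # sparse random coupling
np.fill_diagonal(A, 0.0)
c = 0.5  # global coupling strength

# Excitatory (E) and inhibitory (I) population activity per region.
E = rng.random(n)
I = rng.random(n)
dt, tau = 0.01, 1.0
for _ in range(5000):
    inp = A @ E  # excitatory drive from structurally connected regions
    dE = (-E + sigmoid(12 * E - 10 * I + c * inp + 1.0)) / tau
    dI = (-I + sigmoid(10 * E - 2 * I)) / tau
    E = E + dt * dE
    I = I + dt * dI
sync = 1.0 - E.std()  # crude synchrony proxy: low dispersion = coherent
```

In the study, A would come from a subject's diffusion imaging connectome and synchrony would be tracked region by region to locate chimera states.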

Emergent Physiological Processes: A new type of emergence to understand the maintenance of homeostasis in biological systemsAlan A Cohen, Francis Dusseault-Bélanger, Vincent Morissette-Thomas, Diana L Leung and Tamàs FülöpMonday, 14:00-15:20

Emergence is often discussed in terms of properties, not processes. Recent data on biological systems, however, suggest that important aspects of regulation happen through emergent physiological processes (EPPs): processes that cannot be easily measured or understood via their components. In particular, our lab has identified three such EPPs: inflamm-aging and metabolic syndrome are known from the literature, and integrated albunemia (IA) is a newly discovered process. We present data on all three processes, focusing on IA as an example. IA was detected as the first principal component in an analysis of 43 clinical biomarkers; to our surprise, it integrated multiple systems we expected to fall out as separate components: anemia/oxygen transport, protein transport, inflammation, and calcium, among others. Its structure is precisely replicable in multiple independent data sets, and it is much more stable across datasets than its components or their pairwise relationships. It increases with age and predicts various health outcomes net of age. We believe it provides the clearest example of an EPP, confirming a prediction made by Kauffman in 1993 for genetic networks. The implication of the existence of such EPPs is that evolution structures regulatory networks to integrate across multiple systems coherently, but often without recourse to linear or simply predictable regulatory control. We expect many more such EPPs to be present, though not all will be as easy to detect.
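The detection step described above amounts to extracting a first principal component from standardized biomarkers. A generic sketch on synthetic data, with a single shared latent factor standing in for the integration seen across the 43 clinical biomarkers:

```python
import numpy as np

rng = np.random.default_rng(6)
n_subj, n_marker = 200, 8  # synthetic stand-in for 43 clinical biomarkers

# Synthetic data: one shared latent factor loading on all markers plus noise.
latent = rng.normal(size=(n_subj, 1))
loadings = rng.uniform(0.5, 1.0, size=(1, n_marker))
X = latent @ loadings + 0.3 * rng.normal(size=(n_subj, n_marker))

# Standardize, then take the first principal component via SVD.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
pc1_scores = Z @ Vt[0]                  # subject scores on the first PC
explained = S[0] ** 2 / (S ** 2).sum()  # share of variance on PC1
```

The replicability claim in the abstract corresponds to the loading vector (here `Vt[0]`) being stable across independent datasets.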

Empirical Mode Decomposition in Defence Data AnalysisPeter Dobias and James WanlissThursday, 15:40-17:00

Repetitive, but not strictly periodic, trends in the temporal data can present a challenge to the analysis of short-term patterns. Military examples of such time-series include violence data or vessel detections. Empirical mode decomposition (EMD) has been used across a variety of different fields such as biology and plasma physics to deal with non-stationarities in the data. This methodology enables separation of different modes intrinsic to the data and it does not require a priori assumptions about time dependence of various data sub-components, such as periodicity of variations. We show the application of this methodology to two distinct types of data. The Afghanistan violence data between 2009 and 2013 provided an example of a relatively sparse, limited dataset. With EMD we were able to identify a multi-year cycle, without the skewed trend in the vicinity of turning points. In contrast, ship detection data for the Canadian West coast provide an example of a very large data set. We have conducted analysis of the summary detection data; unfortunately, this led to the presence of noise that limited our ability to identify specific temporal patterns in the data. The analysis could be improved by geographically dividing the data into a number of small areas and conducting separate analysis for each area. Despite this, the EMD demonstrated its usefulness and applicability, enhancing the analysis of these two datasets compared to more conventional approaches.
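The core of EMD is the sifting loop: subtract the mean of the upper and lower extrema envelopes until the remainder qualifies as an intrinsic mode function. Below is a rough sketch of a single sifting pass; standard EMD uses cubic-spline envelopes and repeated sifting with a stopping criterion, whereas piecewise-linear envelopes and one pass are used here only to keep the example self-contained.

```python
import math

def local_extrema(x):
    """Indices of the local maxima and minima of a 1-D sequence."""
    maxima, minima = [], []
    for i in range(1, len(x) - 1):
        if x[i - 1] < x[i] > x[i + 1]:
            maxima.append(i)
        elif x[i - 1] > x[i] < x[i + 1]:
            minima.append(i)
    return maxima, minima

def envelope(indices, values, n):
    """Piecewise-linear envelope through (index, value) points, extended flat at the ends."""
    pts = list(zip(indices, values))
    if pts[0][0] != 0:
        pts.insert(0, (0, pts[0][1]))
    if pts[-1][0] != n - 1:
        pts.append((n - 1, pts[-1][1]))
    env = [0.0] * n
    for (i0, v0), (i1, v1) in zip(pts, pts[1:]):
        for i in range(i0, i1 + 1):
            t = (i - i0) / (i1 - i0) if i1 > i0 else 0.0
            env[i] = v0 + t * (v1 - v0)
    return env

def sift_once(x):
    """One EMD sifting pass: subtract the mean of the upper and lower envelopes."""
    maxima, minima = local_extrema(x)
    if len(maxima) < 2 or len(minima) < 2:
        return x[:]  # too few extrema: the signal is already a residual trend
    upper = envelope(maxima, [x[i] for i in maxima], len(x))
    lower = envelope(minima, [x[i] for i in minima], len(x))
    return [xi - (u + l) / 2.0 for xi, u, l in zip(x, upper, lower)]

# toy signal: fast oscillation riding on a slow trend (the trend is what
# sifting strips away, leaving the oscillatory mode)
signal = [math.sin(0.5 * i) + 0.05 * i for i in range(200)]
imf_candidate = sift_once(signal)
```

In full EMD, the pass above is iterated until the candidate satisfies the IMF conditions, the IMF is subtracted from the signal, and the procedure repeats on the residue, yielding the data-driven modes the abstract refers to.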

Empirical scaling and dynamical regimes for GDP: challenges and opportunitiesHarold Hastings, Tai Young-Taft and Thomas WangTuesday, 18:00-20:00

Scaling laws in economic distributions have been considered for some time, for example relative to cities [1], companies [1-2], asset prices [3], and wage income [3]. Scaling laws may be considered relative to network dynamics with respect to the division of labor [4], the growth of cities [1, 5-6], and competition relative to imperial urban centers [5-6]. This paper considers scaling of GDP relative to rank, scaling of per capita GDP, as well as scaling of trade (the gravity law [7,8]). An initial analysis of GDP data and per capita GDP data from 1980 and 2016 (and many years in between) finds three scaling regions. The GDP of the largest ~25 economies (nations, EU) follows a power law GDP ~ 1/rank (cf. [9]); this is followed by a second scaling region in which GDP falls off exponentially with rank, and finally a third scaling region in which GDP falls off exponentially with the square of rank. The distribution of per capita GDP also displays these three scaling regions in 2016, but only the first two in 1980. The broad pattern holds despite significant changes in technology (enormous growth in computing power, “intelligent” automation, the Internet), the size of the world economy, the emergence of new economic powers such as China, and world trade (almost free communication, containerized shipping yielding sharp declines in shipping costs, trade partnerships, growth of the EU, the effect of multinationals displacing the traditional economic role of the nation-state [10]).
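The first scaling region (GDP ~ 1/rank) corresponds to a slope of about -1 in log-log coordinates of GDP versus rank. A minimal sketch of how such a rank-size exponent can be estimated (using synthetic data, not the actual GDP figures analyzed in the paper):

```python
import math

def loglog_slope(values):
    """Least-squares slope of log(value) versus log(rank) for a descending-sorted list."""
    xs = [math.log(r) for r in range(1, len(values) + 1)]
    ys = [math.log(v) for v in values]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# synthetic "GDPs" following the pure power law GDP ~ 1/rank for the top 25
gdp = [1000.0 / r for r in range(1, 26)]
slope = loglog_slope(gdp)  # ≈ -1 for a pure 1/rank law
```

For the second and third regions described above one would instead regress log(GDP) against rank and against rank squared, respectively, and compare goodness of fit across the three candidate forms.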

Thus, empirically, these patterns may be universal [11-15], in which case one of the targets for growth of potentially less developed economies (those in the second and third scaling regions) may be to identify and target causative differences between these economies and those in the first (power-law) scaling region. To that end, we undertake data analysis to identify the salient features of these national economies and their evolution in time. Finally, we comment on the relationship between efficiency and size, and the effect of other related variables on GDP.

References

[1] West, G. (2017). Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies. Penguin Press.

[2] Duboff, R. (1989). Accumulation and Power: Economic History of the United States. Routledge.

[3] Dos Santos, P. (2017). The Principle of Social Scaling. Complexity.

[4] Smith, A. (1977). An Inquiry into the Nature and Causes of the Wealth of Nations. University of Chicago Press.

[5] Braudel, F. (1984). Capitalism and Civilization. Harper & Row.

[6] Arrighi, G. (2010). The Long Twentieth Century: Money, Power, and the Origins of our Times. Verso.

[7] Tinbergen, J. (1962). Shaping the World Economy: Suggestions for an International Economic Policy. Twentieth Century Fund, New York.

[8] Pöyhönen, P. (1963). A Tentative Model for the Volume of Trade between Countries. Weltwirtschaftliches Archiv 90, 93-99.

[9] Garlaschelli, D., Di Matteo, T., Aste, T., Caldarelli, G., Loffredo, M.I. (2007). Interplay between topology and dynamics in the world trade web. European Physical Journal B 57, 159-164.

[10] Weber, M. (2003). General Economic History. Dover.

[11] Solomon, S. and Richmond, P., (2002). Stable power laws in variable economies; Lotka-Volterra implies Pareto-Zipf. The European Physical Journal B-Condensed Matter and Complex Systems, 27, 257-261.

[12] Solomon, S. and Richmond, P., (2001). Power laws of wealth, market order volumes and market returns. Physica A: Statistical Mechanics and its Applications 299, 188-197.

[13] Mitzenmacher, M., (2004). A brief history of generative models for power law and lognormal distributions. Internet Mathematics 1, 226-251.

[14] Yakovenko, V.M., (2009). Econophysics, statistical mechanics approach to. In Encyclopedia of Complexity and Systems Science (pp. 2800-2826). Springer New York. https://arxiv.org/pdf/0709.3662

[15] Yakovenko, V.M. and Rosser Jr, J.B. (2009). Colloquium: Statistical mechanics of money, wealth, and income. Reviews of Modern Physics 81, 1703. https://arxiv.org/pdf/0905.1518

Employee effort and collaboration in organizations: A network data science approachNan Wang and Evangelos (Evan) KatsamakasTuesday, 18:00-20:00

Estimating the performance of employees is an important consideration in all organizations. This paper proposes a network data science approach to the estimation and visualization of employee effort, productivity and collaboration patterns. Using data from a software development organization, the paper models developers’ contributions to project repositories as a bipartite weighted graph. This graph is projected into a weighted one-mode developer-to-developer network to model collaboration. Techniques applied include graph-theoretic metrics, power-law estimation, community detection, and network dynamics. Among other results, we validate the existence of power-law relationships in project sizes (number of developers). We discuss implications for managers and future research directions. As a methodological contribution, the paper demonstrates how network data science can be used to derive a broad spectrum of insights about employee effort and collaboration in organizations.
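The bipartite-to-one-mode projection step described above can be sketched as follows; the toy data and the min-contribution weighting convention are assumptions for illustration, and the paper may use a different weighting.

```python
from collections import defaultdict
from itertools import combinations

def project(contrib):
    """Project a developer->project weighted bipartite graph onto a
    developer-to-developer collaboration network. Edge weight between two
    developers = sum over shared projects of the smaller contribution
    (one simple convention among several possible)."""
    by_project = defaultdict(dict)
    for dev, projects in contrib.items():
        for proj, w in projects.items():
            by_project[proj][dev] = w
    edges = defaultdict(float)
    for devs in by_project.values():
        for a, b in combinations(sorted(devs), 2):
            edges[(a, b)] += min(devs[a], devs[b])
    return dict(edges)

contributions = {           # commits per repository (toy data)
    "alice": {"core": 50, "ui": 10},
    "bob":   {"core": 30},
    "carol": {"ui": 20, "docs": 5},
}
collab = project(contributions)  # edge weights: alice-bob 30, alice-carol 10
```

The resulting weighted one-mode graph is what community detection and graph-theoretic metrics would then be run on.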

Enabling Constraints for the Emergence of Complex MulticellularityPedro Márquez-Zacarías and William RatcliffMonday, 15:40-17:00

The evolution of multicellularity is one of the major transitions in evolution, and one that allowed the diversification and further evolution of biological complexity [1]. There are two modes of forming a multicellular organism: by aggregation of independent cells (aggregative multicellularity), and through the cohesion of dividing cells (clonal multicellularity). These two modes exhibit qualitatively distinct properties and have evolved independently across the major lineages of life. Notably, clonal multicellularity is the mode that exhibits pervasive complex traits, such as division of labor (cell differentiation) and intricate life cycles [2]. Despite decades of work on molecular and genomic comparative studies, we still lack an understanding of the underlying principles that drove the emergence, stabilization, and diversification of multicellular complexity.
We propose that a set of two simple constraints is sufficient to enable the emergence of complex multicellularity: a constraint on cell motility (through cell-cell cohesion/adhesion) and a constraint on information transfer among cells. We tested this hypothesis using a spatially explicit model, where cells act as information-processing agents forming permanent or transient multicellular collectives. In our model, cells process information through an internal threshold Boolean network, and they can reproduce or die. Cells communicate with adjacent cells through a subset of the nodes in their internal network working as input/output nodes. After updating the networks in each cell, we can calculate the Hamming distance between cellular network states as a proxy for cell differentiation. We modeled clonal and aggregative multicellular development as irreversible and reversible cell-cell adhesion mechanisms, respectively. If irreversible, cells always adhere to their offspring after division, and each of these links can break irreversibly. If reversible, cells can associate or dissociate with any adjacent cell, regardless of its lineage. To summarize our modeling framework, we simulated multicellular collectives where we controlled two important aspects: the amount of communication between cells, and the persistence of cell-cell cohesion links.
From these two simple constraints, we observe the emergence of complex multicellular traits similar to those observed in extant multicellular organisms [2]. When cell-cell cohesion is irreversible, cells are always part of multicellular entities, and therefore there is readily a transition to multicellular organismality. Furthermore, cell death and cell-cell link severance can become simple mechanisms by which these multicellular collectives can reproduce. In contrast, reversible cell adhesion implies that there is no reproduction at the collective level: only the lower-level entities reproduce. Furthermore, collectives in aggregative development are always variable in cell number and cell types, which would preclude natural selection from acting on collective traits. Finally, we observe that moderate intercellular communication ensures cell differentiation in both the spatial and temporal dimensions, but only clonally developing collectives can stabilize cell types (defined as distinct and stable internal network states).
In the present work, we argue that simple constraints on cell motility and cell communication enabled the evolution of multicellularity, and therefore represent a template upon which adaptive processes could further increase and diversify complexity. Our framework and results contrast with the dominant idea that the evolution of biological complexity implies intricate generative mechanisms through adaptive tinkering, and they align with the view that evolutionary novelties represent the enablement of expanding phenotypic spaces [3]. Understanding how multicellularity emerged and evolved can further our understanding of biological complexity in general and, in particular, extend our knowledge of the organizing principles that guided other major evolutionary transitions.
[1] Szathmáry, Eörs (2015). Toward major evolutionary transitions theory 2.0., Proc. Natl. Acad. Sci. USA. 112 (33) 10104-10111
[2] King, Nicole (2017). The Origin of Animal Multicellularity and Cell Differentiation. Developmental Cell. Volume 43, Issue 2, 124 - 140
[3] Longo G, Montévil M, Kauffman SA (2012). No entailing laws, but enablement in the evolution of the biosphere. arXiv 1201.2069v1
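The differentiation proxy described in this abstract can be sketched as follows; the network wiring, the threshold rule, and the sizes below are illustrative assumptions, not the authors' exact model. Cells are represented as Boolean states updated by a shared threshold network, and the Hamming distance between states measures differentiation.

```python
import random

def make_network(n_nodes, k, seed=0):
    """Random threshold Boolean network: each node reads k random inputs
    and fires if at least `threshold` of them are on (illustrative stand-in
    for the internal cellular networks described above)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n_nodes), k) for _ in range(n_nodes)]
    thresholds = [rng.randint(1, k) for _ in range(n_nodes)]
    return inputs, thresholds

def step(state, inputs, thresholds):
    """Synchronous update of all nodes."""
    return tuple(int(sum(state[j] for j in inputs[i]) >= thresholds[i])
                 for i in range(len(state)))

def hamming(a, b):
    """Hamming distance between two network states: proxy for cell differentiation."""
    return sum(x != y for x, y in zip(a, b))

inputs, thresholds = make_network(16, 3, seed=42)
cell_a = tuple(random.Random(1).randint(0, 1) for _ in range(16))
cell_b = tuple(random.Random(2).randint(0, 1) for _ in range(16))
d = hamming(step(cell_a, inputs, thresholds), step(cell_b, inputs, thresholds))
```

In the full model, a subset of nodes would additionally act as input/output channels to adjacent cells, so that communication couples the updates of neighboring networks.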

Engineered Complex Adaptive Systems of Systems: A Military ApplicationBonnie JohnsonTuesday, 15:40-17:00

Tactical warfare is complex. It requires agile, adaptive, forward-thinking, fast-thinking and effective decision-making. Advancing threat technology, the tempo of warfare, and the uniqueness of each battlespace situation, coupled with increases in information that is often incomplete and sometimes egregious, are all factors that cause human decision-makers to become overwhelmed. Automated battle management aids become part of a solution to address the tactical problem space—to simplify complexity, to increase understanding/knowledge, and to formulate and provide quantitative analyses of decision options. The other part of the solution is engineering an adaptive architecture of distributed weapons and sensors that can act independently or as collaborative systems of systems. This paper proposes a systems approach to the complex tactical problem space. The approach is based on a complex systems engineering strategy that views the decision space holistically in the context of capability enablers for managing future distributed warfare assets as complex adaptive systems of systems.

Enterprises as Complex Systems: Effecting Transformative Changes in OrganisationsPeter Midgley Monday, 15:40-17:00

The Author has 30 years of experience transforming the operational performance of manufacturing facilities. The same principles have been applied in other industry sectors.

This is not an academic paper. This is a case study on the practical achievement of significant results in over 200 factories, across ~50 countries and a range of product sectors. It uses a common approach developed in collaboration with UK universities, government and industries. The participating businesses have typically been around $500M turnover. What has typified most of these interventions is that they have been with what are generally referred to as ‘distressed assets’. One of the principal benefits of this is that people pay attention and are willing to embrace a whole-systems approach – frankly, anything. Part of the ‘fun’ of this for the Author is that there is no money to spend and it must be done now. This abstract offers some of the key insights of the approach; the paper will outline a more comprehensive description, and the presentation will provide case studies for attendees to question, criticise and help improve the process through their rigour in this subject area – a continuation of the ‘tinkering’ approach to improvement over decades. Some of the key objective insights are as follows:
• The best performers in terms of outcomes are independent of industry, sector or region.
• Measurement of effects, not causes, is more important – and they are non-commutative.
• Causes underlying the best-performing effects are transferrable to the best and to others.
• The principal operating criterion is global optimisation of the whole – for those you serve.
However, from a subjective perspective, the ‘problem’, as presented, is rarely the problem. The social dynamics, both within and without the facility, have the greatest impact on the effectiveness of a business as a whole, and on that which it has been created to serve. Objective measurement will get you so far, and is broadly independent of external factors.
What makes the most significant difference between the best and the rest is the way that networks of people and things, and their stewardship & governance, both intrinsic & extrinsic, operate in harmony – or not. These causal factors are common to the best performers.

Epistemological Constraints when Evaluating Ontological Emergence with Computational Complex Adaptive SystemsAndreas Tolk, Matthew Koehler and Michael NormanMonday, 14:00-15:20

Natural complex adaptive systems may produce something new, like structures, patterns, or properties, that arises from the rules of self-organization. These novelties are emergent if they cannot be understood as a property of the components, but only as a new property of the system. One of the leading methods for better understanding complex adaptive systems is the use of their computational representation. In this paper, we make the case that emergence in computational complex adaptive systems cannot be ontological, as the epistemological constraints of computer functions do not allow for ontological emergence. As such, computer representations of complex adaptive systems are limited, but nonetheless useful for better understanding the relationship between emergence and complex adaptive systems.

Evidence for a conserved quantity in Human MobilityLaura Alessandretti, Piotr Sapiezynski, Vedran Sekara, Sune Lehmann and Andrea BaronchelliThursday, 15:40-17:00

Recent seminal works on human mobility have shown that individuals constantly exploit a small set of repeatedly visited locations. A concurrent literature has emphasized the explorative nature of human behavior, showing that the number of visited places grows steadily over time. How to reconcile these seemingly contradictory facts remains an open question. Here, we analyze high-resolution multi-year traces of ~40,000 individuals from 4 datasets and show that this tension vanishes when the long-term evolution of mobility patterns is considered.
We reveal that mobility patterns evolve significantly yet smoothly, and that the number of familiar locations an individual visits at any point is a conserved quantity with a typical size of ~25 locations. We use this finding to improve state-of-the-art modeling of human mobility. Furthermore, shifting the attention from aggregated quantities to individual behavior, we show that the size of an individual's set of preferred locations correlates with the number of her social interactions. This result suggests a connection between the conserved quantity we identify, which as we show cannot be understood purely on the basis of time constraints, and the ‘Dunbar number’ describing a cognitive upper limit to an individual's number of social relations. We anticipate that our work will spark further research linking the study of Human Mobility and the Cognitive and Behavioral Sciences.

The Evolution of Complex Societies: Old Theories and New DataPeter TurchinTuesday, 11:40-12:20

Over the past 10,000 years human societies evolved from “simple”—small egalitarian groups, integrated by face-to-face interactions—to “complex”—huge anonymous societies with great differentials in wealth and power, extensive division of labor, elaborate governance structures, and sophisticated information systems. One aspect of this “major evolutionary transition” that continues to excite intense debate is the origins and evolution of the state—a politically centralized territorial polity with internally specialized administrative organization. Theories proposed by early theorists and contemporary social scientists make different predictions about the causal processes driving the rise of state-level social organization. I will use Seshat: Global History Databank to empirically test the predictions of several such theories.
I will present results of a dynamic regression analysis that estimates how the evolution of specialized governance structures was affected by such factors as social scale (population, territorial expansion), social stratification, provision of public goods, and information systems.

The evolution of death: mortality is favored by selection in spatial systems with limited resourcesJustin Werfel, Donald Ingber and Yaneer Bar-YamWednesday, 15:40-17:00

Standard evolutionary theories of aging and mortality, implicitly based on assumptions of spatial averaging, hold that natural selection cannot favor shorter lifespan without a direct compensating benefit to individual reproductive success. However, a number of empirical observations appear as exceptions to, or are difficult to reconcile with, this view, suggesting explicit lifespan control or programmed death mechanisms inconsistent with the classic understanding. Moreover, evolutionary models that take into account the spatial distributions of populations have been shown to exhibit a variety of self-limiting behaviors, maintained through environmental feedback. We show, through spatial modeling of lifespan evolution, that both theory and phenomenology are consistent with programmed death. Spatial models show that self-limited lifespan robustly results in long-term benefit to a lineage; longer-lived variants may have a reproductive advantage for many generations, but shorter lifespan ultimately confers long-term reproductive advantage through environmental feedback acting on much longer time scales. Numerous model variations produce the same qualitative result, demonstrating insensitivity to detailed assumptions; the key conditions under which self-limited lifespan is favored are spatial extent and locally exhaustible resources. Factors including lower resource availability, higher consumption, and lower dispersal range are associated with the evolution of shorter lifespan.
A variety of empirical observations can parsimoniously be explained in terms of long-term selective advantage for intrinsic mortality. Classically anomalous empirical data on natural lifespans and intrinsic mortality, including observations of longer lifespan associated with increased predation, and evidence of programmed death in both unicellular and multicellular organisms, are consistent with specific model predictions. The generic nature of the spatial model conditions under which intrinsic mortality is favored suggests a firm theoretical basis for the idea that evolution can quite generally select for shorter lifespan directly.
References:
Programmed death is favored by natural selection in spatial systems. Justin Werfel, Donald E. Ingber, and Yaneer Bar-Yam. Physical Review Letters 114: 238103 (2015).
Theory and associated phenomenology for intrinsic mortality arising from natural selection. Justin Werfel, Donald E. Ingber, and Yaneer Bar-Yam. PLoS ONE 12(3): e0173677 (2017).

Evolutionary Development: A Universal Perspective on Evolution, Development, and Adaptation in Complex SystemsJohn SmartTuesday, 15:40-17:00

This paper offers a general systems definition of the phrase "evolutionary development", and an introduction to its application to the universe as a system. Evolutionary development, evo devo or ED, is a term that can be used by philosophers, scientists, historians, and others as a replacement for the more general term “evolution”, whenever a scholar thinks experimental, selectionist, contingent and stochastic or “evolutionary” processes, and also convergent, statistically deterministic (probabilistically predictable) or “developmental” processes, including replication, may be simultaneously contributing to selection and adaptation in any complex system, including the universe as a system. Like living systems, our universe broadly exhibits both stochastic and deterministic components, in all historical epochs and at all levels of scale.
It has a definite birth and it is inevitably senescing toward heat death. The idea that we live in an “evo devo universe,” one that has self-organized over past replications both to generate multilocal evolutionary variation (experimental diversity), and to convergently develop and pass to future generations selected aspects of its accumulated complexity ("intelligence") is an obvious hypothesis. Living systems harness stochastic evolutionary processes to produce novel developments, especially under stress, in a variety of systems and scales. If our universe is an adaptive replicator, it makes sense that it would do the same. Today, only a few cosmologists or physicists, even in the community that theorizes universal replication and the multiverse, have entertained the hypothesis that our universe may be both evolving and developing (engaging in both unpredictable experimentation and goal-driven, teleological, directional change and a replicative life cycle), as in living systems. Our models of universal replication, like Lee Smolin's cosmological natural selection (CNS), do not yet use the concept of universal development, or refer to development literature. I will argue that some variety of evo devo universe models must emerge in coming years, including models of CNS with Intelligence (CNS-I), which explore the ways emergent intelligence can be expected to constrain and direct “natural” selection, as it does in living systems. Evo devo models are one of several early approaches to an Extended Evolutionary Synthesis (EES), one that explores adaptation in both living and nonliving replicators. 
They have much to offer as a general approach to adaptive complexity, and may be required to understand several important phenomena under current research, including galaxy formation, the origin of life, the fine-tuned universe hypothesis, possible Earthlike and life fecundity in astrobiology, convergent evolution, the future of artificial intelligence, and our own apparent history of unreasonably smooth and resilient acceleration of both total and “leading edge” adapted complexity and intelligence growth, even under frequent and occasionally extreme past catastrophic selection events. If they are to become better validated in living systems and in nonliving adaptive replicators, including stars, prebiotic chemistry, and the universe as a system, they will require both better simulation capacity and advances in a variety of theories, which I shall briefly review.

Exploiting the collective intelligence of human groups as a novel optimization methodIlario De Vincenzo, Giovanni Francesco Massari, Ilaria Giannoccaro and Giuseppe CarboneMonday, 14:00-15:20

We propose a novel optimization algorithm belonging to the class of swarm intelligence optimization methods. The algorithm is inspired by the decision-making process of human groups, and exploits the dynamics of this process as an optimization tool for combinatorial problems. The algorithm is based on a decision-making model, recently developed by some of the authors, that describes how humans with cognitive limits modify their opinions, driven by self-interest and consensus-seeking. The dynamics of this process is characterized by a phase transition from low to high values of consensus and group fitness. We recognize this phase transition as being associated with the emergence of a collective superior intelligence of the group. The proposed methodology is tested on a combinatorial NP-complete problem defined in terms of the Kauffman NK complex landscape.
The results are compared with those of Genetic Algorithms (GA), Simulated Annealing (SA) and Multi-Agent Simulated Annealing (MASA).

Exploring the true relationship between countries from flow data of international trade and migrationKedan Wang, Xiaomeng Li, Xi Wang, Qinghua Chen and Jianzhang BaoTuesday, 14:00-15:20

The relationship among the various entities in socio-economic systems is an important part of complexity research. Here we combine the general gravity model with the minimum-reverse-flows idea to propose a general framework for revealing comprehensive relationships among entities, in terms of intimacy and hierarchy, based on flow data. We apply this method to comprehensively analyze the international trade network and the population migration network. Based on the empirical flow data, we calculate the effective distance among countries and the rank or grade of countries, which reveal the true relationships among them. The countries in global trade are clustered but not hierarchical, while the relationships among countries in international migration are just the opposite: hierarchical but not clustered.

Failing Students and Teachers: Examining Education as a Complex SystemJamey HeitThursday, 15:40-17:00

Education is a network of some size, yet it has been relatively unexamined through the lens of complex systems theory. Given its structured patterns of behavior, this seems like a missed opportunity. This paper will examine a specific element of our education system: how students learn to write. After analyzing the inefficiencies in how writing is taught through the tenets of Complex Systems Theory, the paper will articulate why similar tenets point to artificial intelligence (AI) technologies as a way to overcome the shortcomings that define the current educational system.
The goal of this paper is to reimagine education as a dynamic system fueled by positive feedback loops, based on redefining the strategies we encourage teachers to use in light of new metrics of success. The specific analysis of the problem, and the possible strategies that technology can provide, will be framed by examining education through four key questions (Axelrod and Cohen, 2001): Are problems long-term or widespread? Can the problem be evaluated through fast feedback loops? Is the problem a low risk for catastrophe from exploring new strategies? Is the problem a looming disaster? The answers to these four questions will structure the examination of education’s shortcomings as a system and identify specific ways in which AI software can lead to immediate and lasting improvements.

Fast and accurate detection of spread source in large complex networksRobert Paluch, Xiaoyan Lu, Krzysztof Suchecki, Bolesław Szymański and Janusz HołystTuesday, 14:00-15:20

Spread over complex networks is a ubiquitous process with increasingly wide applications. Locating spread sources is often important, e.g. finding patient zero in an epidemic, or the source of a rumor spreading in a social network. Source detection is now a very popular topic, and many variants of this problem have been studied. However, current methods are too computationally expensive and cannot be used for quick identification of the propagation source. Here we propose a new detector-based approach in which observers with low-quality information (i.e. with large spread encounter times) are ignored, and potential sources are selected based on the likelihood gradient from high-quality observers. Our Gradient Maximum Likelihood Algorithm (GMLA) has computational complexity O(N^2 log N) and is capable of processing, in a timely manner, large networks consisting of tens of thousands of nodes.
The accuracy of GMLA, which is comparable with that of other methods, strongly depends on the infection rate and network topology. Indeed, we found that scale-free topology, which facilitates spread over the network, also impedes detection of the spread source.

A Flow-Based Heuristic Algorithm for Network Planning in Smart GridsGeorge Davidescu, Andrey Filchenkov, Amir Muratov and Valeriy VyatkinThursday, 15:40-17:00

The smart grid is envisioned as a reconfigurable energy exchange network that is resilient to failures. An expected feature of the future smart grid is optimal power distribution from energy producers to consumers, otherwise known as "network planning". This involves allocating finite energy resources to customers in order to optimally satisfy all customer demands, subject to constraints on the topology of the graph. We model this problem as the Capacitated Spanning Forest (CSF) problem, namely the graph optimization problem of creating a spanning forest with a capacity constraint on each tree limiting its total weight. We present a new heuristic algorithm for solving CSF based on computing the minimum-cost maximum flow on the graph. We find that our algorithm outperforms state-of-the-art approaches with respect to solution quality and running time.

Forest complexity in the green tonalities of satellite imagesJuan Antonio López-Rivera, Ana Leonor Rivera and Alejandro FrankWednesday, 15:40-17:00

Forest complexity is associated with biodiversity and provides information about ecosystem health. A healthy forest must be in a scale-invariant state of balance between robustness and adaptability, reflected in the tonalities present in its vegetation. Remote imaging can be used to determine forest complexity based on the scale invariance of green tones in the images. We propose a simple technique to monitor changes in a forest using statistical moments and spectral analysis of the green tones in satellite images.
Formal descriptors of functional constructivism and examples of their applicationsIrina TrofimovaWednesday, 14:00-15:20 The Functional Constructivism (FC) paradigm considers the behavior of natural systems as being generated every time anew, based on an individual’s capacities, environmental resources and demands. Referencing evolutionary theory, several formal descriptors of such processes were proposed. These FC descriptors refer to the most universal aspects of constructing consistent structures: expansion of degrees of freedom, integration processes based on internal and external compatibility between systems, and maintenance processes, all given in four different classes of systems: 1) Zone of Proximate Development (poorly defined) systems; 2) peer systems with emerging reproduction of multiple siblings; 3) systems with internalized integration of behavioral elements (“cruise controls”); and 4) systems capable of handling low-probability, not-yet-present events. The recursive dynamics within this set of descriptors is conceptualized as diagonal evolution, or dievolution. Three examples applying these FC descriptors to taxonomy are given: 1) classification of the functionality of neurotransmitters; 2) classification of temperament traits; and 3) classification of mental disorders. From Consensus to Polarization of Opinions in Complex ContagionFlavio Pinheiro and Vítor V. VasconcelosThursday, 14:00-15:20 The study of how opinions, innovations, behaviors, and knowledge spread has long been a central topic in the physical, social, and ecological sciences. In the past, these have been commonly studied through simple contagion processes, in which information flows through contact between two individuals. The inability of simple contagion models to account for the plethora of dynamical patterns observed in the real world, such as the polarization of opinions, has led to a search for additional mechanisms.
Recent empirical evidence suggests that different matters spread in different ways, namely, that some require a dependence on the whole neighborhood of an individual to propagate. In that sense, the process of information acquisition requires reinforcement from multiple contact sources. This phenomenon became known as Complex Contagion. Although widely investigated in the literature on cascading effects, complex contagion has only recently received some attention in the context of population dynamics, i.e., when multiple competing opinions co-evolve over time in a population. Complex Contagion has commonly been modeled by a process of fractional thresholds. This implies that there is a well-defined threshold fraction of neighbors needed for an opinion/idea/innovation to be adopted by an individual. Dynamically, this results in a deterministic process that either percolates through the system or remains contained to a few elements of the system. In this context, it was found that complex contagion spreading is sped up by the clustering of individuals in populations (triangular closures) but that a modular structure of the population can halt the propagation of an opinion. Indeed, the study of opinion dynamics has examined under which conditions a consensus is formed. Typical questions involve the time to consensus and how likely it is for a new opinion to invade a population. Here, we introduce a new class of complex contagion processes inspired by recent empirical findings in the literature on innovation and knowledge diffusion. [1] We consider different opinions coevolving in a population with potentially asymmetric properties of contagion. We assume that the probability that an opinion spreads to a new individual grows as an arbitrary power of the density of neighbors that already share that same opinion. We explore analytically and computationally the properties of this model in well-mixed and structured populations.
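The spreading rule just described, an adoption probability that grows as a power of the density of agreeing neighbors, can be sketched for the well-mixed, two-opinion case. This is a minimal illustrative simulation, not the authors' model; the exponent names `alpha_a` and `alpha_b` and the asynchronous update scheme are assumptions (alpha = 1 recovers simple contagion, alpha > 1 gives complex contagion).

```python
import random

def step(opinions, alpha_a, alpha_b):
    """One asynchronous update: a randomly chosen individual switches
    to the other opinion with probability (density of that opinion)**alpha."""
    n = len(opinions)
    i = random.randrange(n)
    rho_a = opinions.count("A") / n
    rho_b = 1.0 - rho_a
    if opinions[i] == "A":
        if random.random() < rho_b ** alpha_b:
            opinions[i] = "B"
    else:
        if random.random() < rho_a ** alpha_a:
            opinions[i] = "A"

def run(n=200, steps=20_000, alpha_a=2.0, alpha_b=2.0, seed=1):
    """Return the final fraction holding opinion A, starting from 50/50."""
    random.seed(seed)
    opinions = ["A"] * (n // 2) + ["B"] * (n - n // 2)
    for _ in range(steps):
        step(opinions, alpha_a, alpha_b)
    return opinions.count("A") / n
```

A short stability argument shows why exponents matter: near the 50/50 state, the drift on the density of A is proportional to (alpha - 1) times the deviation, so with symmetric exponents above one the mixed state is unstable and the population tends toward dominance of one opinion, while asymmetric exponents bias which opinion wins.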
Namely, we test well-mixed, homogeneous random, random, scale-free, and modular networks of influence. We show that these populations span a dynamical space that exhibits patterns of polarization, consensus, and dominance. We map these patterns to topologically equivalent ones found in the literature on evolutionary games of cooperation. We find that these dynamical properties are robust to different population structures. Finally, we show how modular topologies can create different dynamics and additional dependencies on the initial configuration of opinions. Our results are general and of relevance not only for the study of opinions and ideas but also when considering propagation in more abstract networks derived from data, such as those of product complexity and product adoption by countries, among others. [2,3] Future Concepts for Increasing Organizational Carrying Capacity in the U.S. NavyGarth Jensen, Matthew Largent and Rebecca Law Monday, 15:40-17:00 Many military organizations are created or have evolved with a hierarchical organizational construct. This construct allows for unity of purpose and for clear lines of responsibility and authority. In this paper the authors build on the argument made by Dr. Yaneer Bar-Yam (Bar-Yam, 2002) that hierarchical organizations are limited in the complexity they can handle by the carrying capacity of the relatively small number of individuals who make decisions at the top of the hierarchy. As the environment and the battlespace become more complex, hierarchical military organizations will find themselves in a state where traditional organizational constructs limit their ability to process this complexity and thereby limit their effectiveness. The authors, sponsored by the U.S. Office of Naval Research, created a study to explore this concept of organizational complexity using the collective intelligence platform MMOWGLI (Massively Multiplayer Online War Game Leveraging the Internet).
Over a one-week timeframe, players from all over the world collaborated, developing concepts that explored how the U.S. Navy could change to either mitigate or even embrace the increase in complexity. The themes that emerged from that event paint a picture of how the Navy might adjust so that it could ride the wave of increasing complexity instead of being swamped. This paper will explore the question of military organizational complexity, the themes that emerged from the MMOWGLI game, and lightly touch on a more detailed concept fleshed out in a follow-on workshop. Bar-Yam, Y. (2002). General features of complex systems. Encyclopedia of Life Support Systems (EOLSS), UNESCO, EOLSS Publishers, Oxford, UK, 1. Future Governance of Biotech Risks: Black Swans, Precaution and Planned AdaptationKenneth OyeWednesday, 11:40-12:20 What should be done now to prepare for future management of the benefits and risks of emerging biotechnologies? This talk will suggest that planned adaptive risk governance is better suited to conditions of uncertainty and controversy than traditional permissive and precautionary approaches. The talk will treat future applications of biotechnology raised by previous speakers, including gene-drive-based control of vector-borne diseases and invasive species, brewing opiates, xenotransplantation of vascularized tissue, restoration of extinct species, and human germline modification. The talk will differentiate between examples of low-probability systemic risks and applications with more conventional risk profiles; make the case for programs of research and observation that should start now to inform future decisions; and discuss evidentiary triggers for invoking private and governmental adaptive risk governance. The Future of Human and Artificial Intelligence Teaming in the U.S.
NavyMatthew Largent, Garth Jensen and Rebecca LawWednesday, 15:40-17:00 The concept of the technological singularity, where Artificial Intelligence (AI) could rapidly expand in capability past human intelligence, has been hypothesized by some to mean that the AI would leave humanity behind, possibly making humankind irrelevant. Standing at cross-purposes to this concept is the example of Freestyle Chess, where human-AI teams have been shown to be more effective than AI alone. The authors, sponsored by the U.S. Office of Naval Research, created a study to explore this concept of human-machine, or human-AI, teaming using the collective intelligence platform MMOWGLI (Massively Multiplayer Online War Game Leveraging the Internet). Over a one-week timeframe, players from all over the world collaborated, developing concepts that explored how the U.S. Navy could encourage this teaming and what kinds of teaming would be most beneficial. The themes that emerged from that event paint a picture of how the Navy might adjust so that it could ride the wave of technological change instead of being swamped. This paper will explore this question of human-AI teaming, the themes that emerged from the MMOWGLI game, and two more detailed concepts fleshed out in a follow-on workshop. Genomic classification system applied for identifying grapevine growth stagesFrancisco Altimiras, Leonardo Pavez and Jose GarciaTuesday, 18:00-20:00 In agricultural production, it is fundamental to characterize the phenological stage of the plants to ensure a good evaluation of the development, growth and health of the crops. Phenological characterization allows early detection of nutritional deficiencies in the plants, which diminish growth and productive yield and drastically affect the quality of the fruit.
Currently, the phenological estimation of development in the grapevine Vitis vinifera is done using four different schemes (Baillod and Baggiolini, Eichhorn and Lorenz, and their derivatives), which require exhaustive evaluation of the crops, making the process intensive in terms of the labor, personnel and time required for its application. In this work we propose a phenological classifier based on transcriptional measures of certain genes to accurately estimate the stage of development of the grapevine. There are several genomic information databases for Vitis vinifera, and the function of their thousands of genes has been widely characterized. The application of advanced molecular biology, including massively parallel sequencing of RNA (RNA-seq), and the handling of large volumes of data provide state-of-the-art tools for the determination of phenological stages on a global scale of molecular functions and processes of plants. With this aim, we created a bioinformatic pipeline for high-throughput quantification of RNA-seq datasets. We identified differentially expressed genes in several datasets of grapevine phenological stages of development. Differentially expressed genes were classified using count-based expression analysis and multidimensional scaling, and annotated using gene ontology enrichment. This work contributes to the use of genomic analysis for the classification of plants, with a wide range of industrial applications in agriculture. Glassy states of aging social networksLeila Hedayatifar and Foroogh HassanibesheliTuesday, 18:00-20:00 Yesterday’s friend/enemy rarely becomes tomorrow’s enemy/friend. Relations do not change easily in the presence of memory. In fact, the ability of human beings to remember the history of relations gives rise to social concepts such as commitment and allegiance, leading to the formation of cultural communities, alliances, and political groups.
In order to investigate this effect on the dynamics of social networks, we introduce a temporal kernel function into Heider balance theory, allowing the quality of past relations to contribute to the evolution of the system. In this theory, relations between agents are considered as positive/negative links referring to friendship/animosity, profit/nonprofit, etc. The theory proposes a model based on triadic configurations in which relations evolve to reduce the number of unbalanced triads and attain minimum-tension states (balanced or jammed states). In this regard, assigning a potential energy allows a more quantitative view of the network’s dynamics. Considering memory results in the emergence of aged links, which measure the aging process of the society. As some relations age, some nodes get older, resulting in the formation of a skeleton under the skin of the society. Even though the network’s dynamics are affected by memory, the general trend still goes towards obtaining stable states. The resistance of aged links against change decelerates the evolution of the system and traps it in long-lived glassy states, which can survive as unstable states, in contrast to stable configurations. Graph reduction by edge deletion and edge contractionGecia Bravo Hermsdorff and Lee GundersonTuesday, 14:00-15:20 How might one “compress” a graph? That is, generate a reduced graph that approximately preserves the structure of the original? Spielman and collaborators developed the concept of spectral graph sparsification, i.e. deleting a fraction of the edges and reweighting the rest so as to approximately preserve the Laplacian quadratic form. Interestingly, for a planar graph, edge deletion corresponds to edge contraction in its planar dual (and more generally, for a graphical matroid and its dual). This duality suggests a way to further reduce a graph.
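The two operations in this duality can be made concrete on a weighted graph Laplacian: deleting an edge corresponds to sending its weight to zero, while contracting it (merging its endpoints) is the infinite-weight limit. The sketch below is illustrative only, assuming an edge list of `(i, j, w)` triples; it is not the edge-importance measure the abstract proposes.

```python
import numpy as np

def laplacian(n, edges):
    """Weighted graph Laplacian from an edge list of (i, j, w) triples."""
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    return L

def delete_edge(edges, i, j):
    """Deletion: the zero-weight limit -- simply drop the edge."""
    return [e for e in edges if (e[0], e[1]) != (i, j)]

def contract_edge(n, edges, i, j):
    """Contraction: the infinite-weight limit -- merge node j into node i
    (assumes i < j). Remaining nodes are relabeled to 0..n-2; self-loops
    created by the merge vanish, as they do not affect the Laplacian.
    """
    relabel = lambda k: i if k == j else (k - 1 if k > j else k)
    merged = [(relabel(a), relabel(b), w) for a, b, w in edges
              if relabel(a) != relabel(b)]
    return n - 1, merged
```

For example, contracting one edge of a unit-weight triangle leaves a two-node graph whose Laplacian carries the combined weight of the two remaining, now parallel, edges.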
Indeed, with respect to the dynamics induced by the Laplacian (e.g., diffusion), deletion and contraction are physical manifestations of two opposite limits: edge weights of zero and infinity, respectively. In this work, we propose a measure of edge importance with respect to these two operations. Based on this measure, we provide a unifying framework by which one can systematically reduce a graph, not only in the number of edges, but also in the number of nodes, while preserving its large-scale structure. A Gravity Model Of Market Share Based On Transaction RecordsYoshihio Suhara, Mohsen Bahrami, Burcin Bozkaya and Alex 'Sandy' PentlandThursday, 15:40-17:00 For years, companies have been trying to understand customer patronage behavior in order to place a new store of their chain in the right location. Customer patronage behavior has been widely studied in market share modeling contexts, an essential step in solving facility location problems. Existing studies have conducted surveys to estimate merchants’ market share and their factors of attractiveness for use in various proposed mathematical models. Recent trends in big data analysis enable us to understand human behavior and decision making in a deeper sense. This study proposes a novel approach to transaction-based patronage behavior modeling. We use the Huff gravity model together with a large-scale transactional dataset to model customer patronage behavior on a regional scale. Although the Huff model has been well studied in the context of facility location-demand allocation, this study is the first to use the model in conjunction with a large-scale transactional dataset to model customer retail patronage behavior. This approach enables us to easily apply the model to different regions and different merchant categories. As a result, we are able to evaluate indicators that are correlated with the Huff model’s performance.
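The classical Huff model the abstract builds on has a compact closed form: the probability that a customer patronizes store j is its distance-discounted attractiveness, normalized over all stores. A minimal sketch follows; the parameter defaults (`alpha`, `beta`) are illustrative assumptions, and in the abstract's setting attractiveness and distances would be estimated from transaction records.

```python
def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    """Classical Huff gravity model.

    For store attractiveness S_j and customer-to-store distance d_j:

        P_j = (S_j**alpha / d_j**beta) / sum_k (S_k**alpha / d_k**beta)

    Returns the patronage probabilities for one customer location.
    """
    utilities = [s ** alpha / d ** beta
                 for s, d in zip(attractiveness, distances)]
    total = sum(utilities)
    return [u / total for u in utilities]
```

A store's market share is then the sum of these probabilities (optionally weighted by spend) over all customers in the region, which is what a large transactional dataset makes cheap to recompute for new regions or merchant categories.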
Experimental results show that our method performs robustly in modeling customer shopping behavior for a number of shopping categories, including grocery stores, clothing stores, gas stations, and restaurants. Regression analysis verifies that demographic diversity features, such as the gender diversity and marital status diversity of a region, are correlated with the model’s performance. The contributions and advantages of our approach include the following: 1) Merchants and business owners can implement our model in different geographical regions with different settings to determine which locations are suitable for new stores. 2) One can use different merchant categories to compare the cross-category performance of shopping behavior models. 3) A deeper analysis is possible of shopping behavior models and of multiple factors derived from transaction data, such as demographic diversity, mobility diversity, and merchant diversity. 4) It is computationally inexpensive to rebuild a model: one can simply replace the transaction data and fit models in the same manner as before. This eliminates the need, and associated costs, of conducting surveys for data collection under different settings. Health complexity loss in addictionStefan TopolskiTuesday, 15:40-17:00 Purpose: To apply definitions of health complexity to improve the understanding of the relative severity of illness among individuals with a diagnosis of addiction. Method: Qualitative and semi-quantitative definitions of health from the complex sciences literature [Sturmberg, Topolski, et al.] are applied to published health self-assessments by individuals who suffer severe addiction or severe locked-in syndrome, respectively.
Results: Individuals with the profound physical debility of locked-in syndrome appear to report more complex, stronger family and community social support together with emotional health, while individuals with severe addiction appear to have less complex, disrupted, lost, or absent family and community support networks. Individuals with locked-in syndrome report higher subjective personal assessments of their overall health than do individuals suffering with addiction. The physical and emotional responses of individuals with addiction appear to be more predictable and stereotypical, i.e. less complex, than those of individuals with locked-in syndrome. The physical disability of individuals with severe addiction can begin to resemble the marked physical disability and mortality rates of individuals with locked-in syndrome. Addiction may constrain volitional activity to a degree comparable with locked-in syndrome. Conclusion: The loss of health complexity, by quantitative, qualitative, subjective and objective measures, may be greater than previously appreciated among individuals with severe addiction. The illness and disability of severe addiction may have the potential to approach or exceed the disability and suffering of individuals with profoundly physically disabling locked-in syndrome. Appreciating a complex systems approach to understanding and defining illness may produce an unexpected and paradoxical result relative to the accepted wisdom in physicians’ assessment of human illness from addiction. Heart Rate Variability of healthy individuals depends on sex and age: a complex approachAna Leonor Rivera, Juan Antonio López-Rivera, Bruno Estañol, Juan Claudio Toledo-Roy, Ruben Fossion and Alejandro FrankTuesday, 14:00-15:20 Heart rate variability depends not only on age, but also on sex for young healthy subjects.
Time series analysis, in the time and frequency domains, of subjects watching Disney’s Fantasia movie shows statistically significant differences between young men and women that are lost in the elderly. This may be due to hormonal modulation of the heart. However, entropy is similar for young people independent of sex, and is statistically different between old and young subjects. This is related to the loss of complexity of heart rate variability. High Performance Computing-Enabled Machine Learning for Biosynapse Neuromorphic DevicesCharles Collier, Joseph Najem, Alex Belianinov and Catherine SchumanWednesday, 15:40-17:00 We propose to integrate experiment and simulation, drawing inspiration from biologically based approaches to computing and the high-performance computational capabilities at Oak Ridge National Laboratory, to develop and demonstrate a new class of neuromorphic devices focused on soft, low-power, multifunctional material systems driven by the structure (cell membrane) and transport functionality (ion channels) of the synapses separating neurons in the brain. The neuromorphic devices are two-terminal, membrane-based, biomolecular memristors (memory resistors) consisting of alamethicin-doped synthetic biomembranes that are 3-5 nanometers in thickness, which we call "biosynapses". These physical devices will offer basic learning and data processing functionalities capable of autonomous pattern recognition and decision making. Our proof-of-principle micro-fabricated device will consist of up to 30 soft synthetic biosynapses connected to solid-state neurons, running a pre-simulated and pre-trained neural network capable of recognizing the onset of an epileptic episode based on electroencephalographic (EEG) data.
Higher Precision Implementation of Chaotic Maps for CryptographyAmir Akhavan, Afshin Akhshani and Azman SamsudinTuesday, 14:00-15:20 The evident similarities between chaotic maps and pseudo-random number generators (PRNGs), and consequently cryptography, have been a strong motivation for researchers in the past few decades, and numerous PRNGs and cryptosystems based on various chaotic maps have been designed in this period. The most noticeable similarities between chaos and PRNGs and cryptosystems are their strong sensitivity to initial conditions and control parameters, aperiodicity, random-like behavior, and ergodicity. Yet, there are also several drawbacks in the application of chaotic maps to the design of secure random number generators and cryptosystems that could lead to disaster if not treated carefully. In particular, dozens of cryptographic algorithms based on the Logistic map have been cryptanalyzed since the emergence of chaos-based cryptography, and the adequacy of the Logistic map has been questioned by researchers in this area. Almost all chaos-based cryptography algorithms use chaotic maps as an easy and deterministic source of entropy. Thus, these algorithms, directly or indirectly, use the characteristic of sensitivity to control parameters and initial conditions to reflect the cryptographic requirement of sensitivity to the keys. Therefore, the low sensitivity of the Logistic map to a range of control parameters and the three windows in the bifurcation diagram of the Logistic map are two of the major issues in the application of this map for the purposes of cryptography. Surprisingly, despite all these issues, the Logistic map is still one of the most popular maps in the field of chaos-based cryptography, and each year several algorithms based on it are proposed.
In addition to the low sensitivity, the limited size of the variables in computer simulations of chaotic maps is another issue that arises in the application of chaotic maps to cryptography. Many chaotic maps are derived from natural phenomena and are based on real numbers. Thus, their implementation requires the simulation of differential equations with real numbers in their structure. The approximation of real numbers in the computer results in rounding and approximation errors. Palmore and Herring (1990) investigated the effect of the computer realization of chaotic maps and fractals and concluded that rounding could “destroy all accuracy” of the results within 30 to 49 iterations. Reviewing hundreds of papers and proceedings published on chaos-based cryptographic algorithms from 1998 to 2015 shows that the majority of the proposed algorithms are implemented at a finite precision of $10^{-13}$ to $10^{-16}$. The reason lies in the difficulty, and possibly lower speed, of higher-precision implementations of chaotic maps. Perhaps another reason is Blackledge and Ptitsyn’s research in 2010, which states that an increase in precision does not guarantee a sufficiently long trajectory in chaotic maps. In this study, we investigate the trajectories generated by various chaotic maps, including the Logistic map, using statistical methods, and propose a new PRNG based on the Logistic map that passes all the statistical randomness tests, including the Big Crush test. The results of the analysis demonstrate that the proposed algorithm, with proper implementation, not only passes all the tests but also is fast and can provide a huge key space for the cryptographic random number generator.
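The rounding effect Palmore and Herring describe is easy to reproduce: iterate the Logistic map at two different working precisions from the same initial condition and watch how quickly the orbits separate. This is a toy illustration using Python's standard `decimal` module, not the paper's proposed PRNG; the precision values, parameters, and threshold below are arbitrary assumptions.

```python
from decimal import Decimal, getcontext

def logistic_orbit(x0, r, steps, digits):
    """Iterate the Logistic map x -> r*x*(1-x) at a given decimal precision."""
    getcontext().prec = digits
    x, r = Decimal(str(x0)), Decimal(str(r))
    orbit = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

def precision_divergence(x0, r=3.99, steps=200, lo=16, hi=40, tol=0.1):
    """First iterate at which the same orbit, computed at `lo` and `hi`
    decimal digits, differs by more than `tol`.  Round-off error alone
    is amplified exponentially by the chaotic dynamics."""
    a = logistic_orbit(x0, r, steps, lo)
    b = logistic_orbit(x0, r, steps, hi)
    for t, (u, v) in enumerate(zip(a, b)):
        if abs(u - v) > Decimal(str(tol)):
            return t
    return None
```

Raising `lo` pushes the divergence point later, which is the motivation for higher-precision implementations; the trade-off is slower arithmetic, echoing the speed concern raised above.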
Homo Potens: A Species Most Complex and PowerfulMichael Francis McCulloughTuesday, 18:00-20:00 We are a species whose astounding powers of creativity and innovation are matched by destructive powers so enormous we could easily subvert -- for ourselves and all other species -- the very conditions of life on Earth. We are, as Edgar Morin says, Homo Sapiens-Homo Demens, a whirling mix of the wise and the foolish, the rational and the irrational. In a word, we are “potens”, the Latin for “powerful”. As Homo Potens, we are a species whose extraordinary potential, for better or worse, is realized through the exercise of power. Averting the perils that lurk in our Demens and nurturing the immense promise of our Sapiens will depend to a large extent on how well we understand ourselves as Potens. There is yet another essential respect in which humans far surpass other species, one we would do well to try to understand: our complexity. As with power, we partake in complexity for better or worse. Failure to deal with complexity tends to transform small problems into larger ones. Complexity can overwhelm. But it also poses challenges that, once mastered, make it possible to explore complex problems in greater depth. Advances in recent decades in understanding the nature of complex dynamical systems raise hopes that, over the long term, novel approaches to science itself can help us navigate the promise and perils of complexity. As a means to cast light on Homo Potens as a most complex and powerful species, this essay proposes a complexity theory of power, a combination of power theory and complexity theory. The proposed theory correlates the ability of one party to exercise power over another (A. Allen; R. Dahl; S. Lukes) with disorganized complexity (W. Weaver), and the power to collaborate (A. Allen; H. Arendt; T. Parsons) with self-organized complexity (I. Prigogine).
In this view, power exercised by one party to dominate another is a disorganizing process, and power exercised by different parties to collaborate with one another is a self-organizing process. These processes can occur across scales in human systems. Whether at the level of interpersonal, national or global politics, self-organizing is a democratizing process through which the disorganizing effects of domination and authoritarianism can be countered and overcome. While complexity perspectives teach us that nothing is inexorable or guaranteed about the future generally, and the advance of self-organization more specifically, they also offer the hope that we can better diagnose the debilitating effects of imposed power and learn how to exercise power with, not over, others. How do we consciously see, hear, feel, and know?Stephen GrossbergThursday, 14:00-14:40 What happens in our brains when we consciously see, hear, feel, and know something? The Hard Problem of Consciousness is the problem of explaining how this happens. A theory of how the Hard Problem is solved needs to link brain to mind by modeling how brain dynamics give rise to psychological experiences, notably how emergent properties of brain dynamics generate properties of individual experiences. This talk summarizes evidence that Adaptive Resonance Theory, or ART, is accomplishing this goal. ART is a cognitive and neural theory of how advanced brains autonomously learn to attend, recognize, and predict objects and events in a changing world. ART predicted that “all conscious states are resonant states” and specified mechanistic links between the processes of consciousness, learning, expectation, attention, resonance, and synchrony. It thereby provides functional and mechanistic explanations of data ranging from individual spikes and their synchronization to the dynamics of conscious and unconscious perceptual, cognitive, and cognitive-emotional behaviors.
ART proposes how and why evolution has created conscious states. In brief, sensory data are typically ambiguous and incomplete, and incapable of supporting effective action. Only after cortical processing streams that obey computationally complementary laws interact through hierarchical resolution of uncertainty do sufficiently complete and stable representations form with which to control effective action. Consciousness is an “extra degree of freedom” in the resonant states that are triggered by these representations, thereby enabling successful predictive actions to be based upon them. Different resonances support seeing, hearing, feeling, and knowing. The talk will describe where these resonances occur in our brains, and how they interact. Both normal and clinical psychological and neurobiological data will be explained and predicted that have not been explained by alternative theories. The talk will also mention why some resonances do not become conscious, and why not all brain dynamics are resonant, including the brain dynamics that control action. Stephen Grossberg is Wang Professor of Cognitive and Neural Systems; Professor of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering; and Director of the Center for Adaptive Systems at Boston University. He is a principal founder and current research leader in computational neuroscience, theoretical psychology and cognitive science, and neuromorphic technology and AI. In 1957-1958, he introduced the paradigm of using systems of nonlinear differential equations to develop models that link brain mechanisms to mental functions, including widely used equations for short-term memory (STM), or neuronal activation; medium-term memory (MTM), or activity-dependent habituation; and long-term memory (LTM), or neuronal learning. His work focuses upon how individuals, algorithms, or machines adapt autonomously in real time to unexpected environmental challenges.
These discoveries together provide a blueprint for designing autonomous adaptive intelligent agents. They include models of vision and visual cognition; object, scene, and event learning and recognition; audition, speech, and language learning and recognition; development; cognitive information processing; reinforcement learning and cognitive-emotional interactions; consciousness; visual and path-integration navigational learning and performance; social cognition and imitation learning; sensory-motor learning, control, and planning; mental disorders; mathematical analysis of neural networks; experimental design and collaborations; and applications to neuromorphic technology and AI. Grossberg founded key infrastructure of the field of neural networks, including the International Neural Network Society and the journal Neural Networks, and has served on the editorial boards of 30 journals. His lecture series at MIT Lincoln Lab led to the national DARPA Study of Neural Networks. He is a fellow of AERA, APA, APS, IEEE, INNS, MDRS, and SEP. He has published 17 books or journal special issues and over 550 research articles, and has 7 patents. He was most recently awarded the 2015 Norman Anderson Lifetime Achievement Award of the Society of Experimental Psychologists (SEP) and the 2017 Frank Rosenblatt computational neuroscience award of the Institute of Electrical and Electronics Engineers (IEEE).
See the following web pages for further information: sites.bu.edu/steveg http://en.wikipedia.org/wiki/Stephen_Grossberg http://cns.bu.edu/~steve/GrossbergNNeditorial2010.pdf http://scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en http://www.bu.edu/research/articles/steve-grossberg-psychologist-brain-research/ http://www.bu.edu/research/articles/stephen-grossberg-ieee-frank-rosenblatt-award/ Organizing Data in a Dynamic Flexible Tagging SystemMahboobeh HarandiTuesday, 18:00-20:00 There are various projects that need to apply collective intelligence and machine learning algorithms for the analysis of massive datasets. These projects would not get the desired results if they relied on either collective intelligence or machine learning alone. The Gravity Spy (GS) project, one of the citizen science projects supported by the Zooniverse platform, is an example of a project that supports human-machine collaboration to analyse a large-scale and complex dataset. Gravity Spy shares a huge number of glitch images with the public: citizen scientists. Citizen scientists are volunteers who have different motivations for joining the project, such as learning about the science and contributing to it. The glitch data, non-cosmic, non-Gaussian disturbances, are recorded by the Laser Interferometer Gravitational-Wave Observatory (LIGO) due to the high sensitivity needed for detecting gravitational waves. The LIGO Scientific Collaboration needs to know the characteristics and origins of glitches in order to either eliminate them from the dataset or resolve the issue at the LIGO site. While the GS project relies on citizen scientists to classify the glitch images, it also applies machine learning algorithms to classify them. The GS system uses high-confidence results from the machine learning algorithm to train novice volunteers. Experienced volunteers help the system to retire images based on their votes and the ML results.
As glitch types are not bounded to the primary classes, volunteers are asked to find new classes of data in addition to performing the primary classification. The project also provides a forum including several boards where volunteers can ask questions about glitches, classification and the system, brainstorm new approaches to handle unknown glitches, and discuss any topics about gravitational waves and the progress of the project. They specifically use the Note board to tag glitch images. As in other social software applications, volunteers use hashtags to organize the data individually (personomy), and eventually they should all choose the same tag for a particular class of glitches (folksonomy) to organize the data collectively. However, since there are no fixed rules and no explicit authority for tagging, there is a high chance that volunteers use different or overlapping tags for the same glitch morphology. Consequently, they cannot aggregate their findings to propose a new glitch class. I would like to study whether accessing relevant information about tags through a conversation with a chatbot supports folksonomy for organizing images in a dynamic and flexible tagging system. How Rankings Go Wrong: Structural Bias in Common Ranking Systems Viewed as Complex SystemsPatrick Grim, Jared Stolove, Natalia Jenuwine, Jaikishan Prasad, Paulina Knoblock, Callum Hutchinson, Chengxi Li, Kyle Fitzpatrick, Chang Xu and Catherine MingWednesday, 14:00-15:20 We use agent-based techniques to analyze inherent structural bias in abstract models of common ranking systems such as PageRank, HITS, and Reddit. Using the simpler case of university rankings such as U.S. News and World Report as a first example, we show that reputational loops are a core source of ranking distortion across a variety of ranking systems. 
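The algorithmic core abstracted by such models can be illustrated with a standard power-iteration PageRank. The sketch below is a generic textbook implementation, not the authors' agent-based model; the example graph shows how a mutual-link (reputational) loop concentrates rank relative to a page outside the loop.

```python
def pagerank(links, damping=0.85, iters=100):
    """Basic PageRank by power iteration. links[p] lists the pages p
    links to; a dangling page spreads its rank uniformly."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # every page gets the teleport share, then link shares are added
        nxt = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    nxt[q] += share
            else:  # dangling node: distribute its rank over all pages
                for q in pages:
                    nxt[q] += damping * rank[p] / n
        rank = nxt
    return rank

# A and B form a mutual-link loop; C links into the loop but receives
# no links itself, so its rank stays at the teleport floor.
ranks = pagerank({"A": ["B"], "B": ["A"], "C": ["A"]})
```

Even this minimal example exhibits the loop effect the abstract describes: the pages inside the loop accumulate rank from each other, while the equally "honest" page outside it cannot.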
In the complex dynamics of reputational loops, an element’s ranking itself influences the factors in terms of which rank is calculated, resulting in the amplification of divergence and the exaggeration of small random and path-dependent differences. We construct agent-based models of the basic algorithms employed in PageRank, HITS, and Reddit, which allow comparison of common forms and effects of intrinsic structural bias across a number of parameters. Various ranking systems have been criticized on the grounds that they rely on data of dubious value, such as opinion-survey rankings of law schools by university presidents. Ranking systems have also been criticized on the grounds that they can be intentionally ‘gamed’, and in some cases have been manipulated by simple fraud. Neither of these, nor other shortcomings traceable to human error or deliberate manipulation, will be our target here. Our concern is with the intrinsic structural bias evident within a handful of common ranking mechanisms viewed as complex systems, even when those systems are employed with the best of human intentions and the cleanest and most relevant of input data. Reputational loops appear as a perennial problem in the agent-based models we construct using core elements of the PageRank, HITS, and Reddit algorithms. Our modeling allows us to compare these ranking algorithms for the relative threat and impact of this form of intrinsic bias. We conclude with a discussion of the extent to which ranking systems might be able to eliminate, dampen, or compensate for intrinsic structural bias effects. How the division of knowledge creates coworker complementaritiesFrank NeffkeTuesday, 14:00-15:20 Division of labor allows people to reap the benefits of specialization. What is often underappreciated is that division of labor does not just mean that people specialize; it means that people specialize in different things. 
As a consequence, in larger firms, no single individual disposes of all the knowledge required to keep operations going. The knowledge base of such firms is typically spread out across many different workers. When know-how takes this shape of "distributed expertise", the value of human capital is dependent on the ecosystem of complementary knowledge in which it is embedded. For workers, this means that the benefits of acquiring a certain specialization depend on having access to coworkers who can complement their skills and know-how. In this paper, I quantify complementarities among coworkers’ skill sets and show how they affect careers, returns to schooling and the large-plant and urban wage premiums. I focus on the skills workers acquire through education. To be precise, I use information on the educational tracks, classified into 491 different categories that describe the content and the level of education for each individual living in Sweden between 1990 and 2010. I use this information to quantify the extent to which the human capital acquired in these educational tracks is complementary or similar to human capital acquired in other educational tracks. The intuition behind this quantification is that, if knowledge and skills are distributed over many different workers, a firm must hire teams of workers that together cover the full range of expertise its production processes require. In other words, firms will hire teams of complementary workers. This suggests that one can infer which educational tracks are complementary to one another by studying who works with whom. To measure complementarity, I therefore mine the information provided in the co-occurrences of educational tracks in the workforces of Swedish establishments that together cover 75% of Swedish employment, keeping the remaining 25% of the sample for testing the effects of the thus-measured complementarity. 
However, because workers may also become coworkers because they have similar skills, I also need to assess how similar educational tracks are to one another. I measure the similarity between two tracks by quantifying to what extent these tracks allow carrying out similar sets of tasks, as expressed in the degree to which they give access to the same occupations. In this way, I construct complementarity and similarity indices for each pair of educational tracks. These indices are subsequently used to assess the complementarity and similarity of a worker to her team of coworkers. Complementarity and similarity prove to be important in explaining various aspects of a worker’s career. First of all, workers who work with complementary coworkers stay longer with the same employer than those who don’t. Moreover, complementarity-rich work environments are associated with substantially higher wages – for college-educated workers having complementary coworkers yields returns that are of the same order of magnitude as the returns to college education themselves. In contrast, working with similar workers is associated with lower wages. Using a supply-shift instrument based on predicted local graduation rates, I show that this effect is causal. Moreover, complementarity not only raises wages, but the returns to education turn out to be contingent on finding complementary work environments. For instance, college-educated workers in Sweden typically earn about 60% more than workers who only finished secondary school. However, college-educated workers who work in the 20 percent least complementary work environments do not exhibit any returns to their education whatsoever: they earn the same income as workers who only finished secondary school. In contrast, college-educated workers in the 20 percent most complementary work environments receive wages that are over 80% higher than those of primary-school educated workers. 
Similar patterns are found when analyzing the urban wage premium. Although doubling the size of the city in which a worker works is, on average, associated with 5% higher wages, this elasticity varies from below 2% when restricting the sample to the lowest quintile of complementarity to above 9% for the highest complementarity quintile. How Value is Created in Tokenized AssetsNavroop Sahdev, John Hargrave and Olga FeldmeierWednesday, 14:00-15:20 A tidal wave of change is coming to the world of economic science. Digital tokens, including bitcoin, altcoins, and cryptocurrencies, will require a fundamental rethinking of valuation, in the same way that the introduction of the stock market required a new understanding of value. As of this writing, the total value of all tokens stands at $500 billion. How do investors place value on computer code, with no central bank or physical asset to support it? Drawing from the literature on behavioral economics and tools from cognitive psychology, we aim to provide the first anchor for understanding the criteria that investors are deploying to value new digital assets, making this the first study of applied behavioral economics on token valuation. Using a new instrument called the Framework for Token Confidence, we show how value can be created out of “thin air,” and how tokens, and indeed the entire economic system, operate as something like a “vote of confidence.”

Human Dynamics with Limited Complexity - An AbstractMark Levene, Trevor Fenner and George LoizouTuesday, 18:00-20:00

Human dynamics and sociophysics suggest statistical models that may explain and provide us with better insight into human behaviour in various social settings. One of the principal ideas in sociophysics is that, in a framework similar to that of statistical physics, individual humans can be thought of as "social atoms", each exhibiting simple individual behaviour and possessing limited intelligence, but nevertheless collectively giving rise to complex social patterns. In this context, we propose a multiplicative decrease process generating a rank-order distribution [FENN17a], having an attrition function that controls the rate of decrease of the population at each stage of the process. The discrete solution to the model takes the form of a product, and a continuous approximation of this solution is derived via the renewal equation that describes age-structured population dynamics [FENN15].
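As an illustrative sketch (assumed functional forms, not the derivation in [FENN17a]), the product-form solution can be computed directly: the value at rank k is the initial value times the survival factors 1 − a(i) of the preceding stages, for an assumed attrition function a.

```python
def rank_order(x0, attrition, n_ranks):
    """Multiplicative decrease process: the value at rank k is
    x0 * prod_{i=1..k} (1 - a(i)), with attrition a(i) in (0, 1)."""
    values = [x0]
    for i in range(1, n_ranks):
        values.append(values[-1] * (1.0 - attrition(i)))
    return values

# Constant attrition yields a geometric (exponential) rank-order law;
# attrition a(i) = 1/(i+1) yields the power-law-like profile 1/(k+1).
geometric = rank_order(1.0, lambda i: 0.2, 10)
powerlike = rank_order(1.0, lambda i: 1.0 / (i + 1), 10)
```

The two example attrition functions show how the shape of a(i) controls the resulting rank-order distribution, which is the role the abstract assigns to the attrition function.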

Identities by genes: How real are they?Pierre Zalloua and Daniel PlattMonday, 15:40-17:00

Populations, not individuals, within certain geographic boundaries are assigned certain genetic signatures, commonly referred to as ethnicities. These populations may previously have been a collection or a mix of many smaller, genetically distinct subpopulations or communities that amalgamated (through admixture) and adopted a geographic location and a set of cultures, including language. DNA markers provide signatures or attributes indicating, with a certain probability, that an individual belongs to a particular population. Populations, however, are not static systems: they mix, interact, evolve, transform (genetic drift), disappear or get replaced. Hence, ethnicity is not a firm or certain concept; as they evolve or change, specific populations adopt certain habits and become culturally distinct, and identities arise from these culturally distinct units. In my lecture, I will demonstrate through modern and ancient DNA analyses how various populations in the Near East were ancestrally derived, and expose the complexity and perils of translating DNA findings into distinct identities without the notion of population variability or the assimilation of cultures.

We analyzed genetic signatures among ancient and modern populations of the Near East and Asia Minor to investigate how the Arabian Peninsula and the Levant were initially populated, and how population expansion events shaped this region of southwest Asia. We observe genetic population variations in the Arabian Peninsula that appear to reflect distinct ancient ancestral populations. We show that the expansion out of Africa to the Neolithic Iran region is most closely aligned with an expansion through the Arabian Peninsula, and that the path through the Levant was a distinct expansion, through the Sinai rather than the Gate of Tears, with little alignment with the evolution of Arabian Peninsula genetics. We further suggest that the Levant differentiated from Arabia and Yemen before exiting Africa. The Neolithic Anatolian and Natufian populations appear to be the basal populations for the Neolithic Levant. F4 statistic results show some level of correlation between Iran and the Levant, suggesting that the Neolithic Levant was subsequently admixed with the Neolithic Iranian population, yielding the modern Levant.
Interestingly, while it is possible to resolve two paths populating the Levant and the Arabian Peninsula through to the Neolithic and the modern Persian Gulf, the details of where an ancestral population may have resided are still ambiguous. Since this is further clouded by examples of essentially complete genetic replacement, the question of origins will require analysis of aDNA not currently available.

The impact of communication on individual and population behaviorMoriah Echlin, Boris Aguilar, Chris Kang and Ilya ShmulevichTuesday, 14:00-15:20

We are investigating how communication between individuals influences the diversity of individual-level and population-level behavior; for example, how cell-cell signalling alters cellular and tissue behavior in multicellular organisms. We are primarily focused on the effects of the communication bandwidth, i.e. the number of unique signals available, but also explore other communication parameters such as the degree of signal diffusion within the population. In order to mechanistically understand these complex phenomena, all of our research is conducted by means of in silico simulation of populations of interacting agents, represented by an agent-based mathematical model.
At the individual level, we ask whether behaviors are lost, altered, or gained as a function of communication and the individual’s underlying behavioral capacity. Preliminary work suggests that some existing behaviors become less frequent and novel behaviors may arise when comparing individuals in an independent setting to those influenced by their social environment.
At the population level, we ask how heterogeneity is linked to communication between individuals. We explore two facets of heterogeneity, the overall diversity of and spatial patterns in expressed behaviors within a population. Current findings indicate that homogeneity is more likely as communication increases. In those populations where multiple behaviors are retained, spatial patterns that might not otherwise be present can persist due to communication.
Simple observation will show that the behavior of individuals is highly dependent on their social environment. However, it is a multifaceted relationship that is difficult to dissect. Our research aims at an in-depth understanding of one of those facets - communication - and how its effects begin at the individual level and propagate through to the population level.

Impact of Cyber-attacks on Valuation of Public CompaniesOmer Poyraz, Ozkan Serttas, Omer Keskin, Ariel Pinto and Unal TatarWednesday, 14:00-15:20

As cyber has become an operational domain rather than a technological enabler, dependency on cyber has been increasing [WEF, 2016]. Organizations move their operations to the cloud for the advantages of efficiency, productivity, economy, cybersecurity, and state-of-the-art technology. They keep their confidential data in either private or public clouds, and they serve their customers through the cloud as well. Therefore, interruption or compromise of a company's cloud service may cause direct or indirect costs. Moreover, this cyber cost can create a cascading effect involving individuals, partner companies, and government. In the aftermath of an attack, a company may face litigation, penalties, loss of revenue, loss of reputation, hardware and software replacement costs, business interruption, loss of physical devices, loss of trade secrets, and so on. Although the U.S. Securities and Exchange Commission urges public companies to disclose more about their cyber compromises, public companies are reluctant to report, or they under-report, cybersecurity events [Schroeder & Finkle, 2018]. Since cybersecurity incidents can cause direct and indirect costs, investors need to know about such significant events at the companies they put their money in. Also, because cyber now serves as a domain in which organizations manage and trade their assets, investors should know what is happening to the company assets that help determine the value of a company. In this paper, we will review data on public companies that suffered cyber incidents, how their stock prices were affected, and how investors responded to these events.

Reference:
1. World Economic Forum. (2016). The Global Risks Report 2016 | World Economic Forum. Retrieved from https://www.weforum.org/reports/the-global-risks-report-2016

2. Schroeder, P., & Finkle, J. (2018). U.S. SEC updates guidance on cyber-attack disclosure for companies. Retrieved from https://www.nasdaq.com/article/us-sec-updates-guidance-on-cyber-attack-disclosure-for-companies-20180221-01098

The impact of new mobility modes on a city: A generic approach using ABMArnaud Grignard, Luis Alonso, Patrick Taillandier, Tri Nguyen-Huu, Benoit Gaudou, Wolfgang Gruel and Kent LarsonThursday, 15:40-17:00

Mobility is a key issue for city planners. Evaluating the impact of its evolution is very complex and involves many factors, including new technologies like electric cars and autonomous vehicles, and also new social habits like vehicle sharing. We need a better understanding of different scenarios to improve the quality of long-term decisions. Computer simulations can be a tool to better understand this evolution, to discuss different solutions, and to communicate the implications of different decisions. In this paper, we propose a new generic model that creates an artificial micro-world which allows the modeler to create and modify new mobility scenarios in a quick and easy way. This not only helps to better understand the impact of new mobility modes on a city, but also fosters a better-informed discussion of different futures. Our model is based on the agent-based paradigm and uses the GAMA Platform. It takes into account different mobility modes, people profiles, congestion and traffic patterns. We review an application of the model to the city of Cambridge.

Implicit Learning and Creativity in Human Networks: A Computational ModelMarwa Shekfeh and Ali MinaiThursday, 14:00-15:20

Creativity, or the generation of novel ideas, is an important and distinctive characteristic of the human mind. With rare exceptions, new ideas necessarily emerge in the minds of individuals, and the process is thought to depend crucially on the recombination of existing ideas in the mind. However, the epistemic repertoire for this recombination is supplied largely by the ideas the individual has acquired from external sources. Among these, one of the most important is the set of ideas acquired through interaction with peers. As people exchange ideas or information over their social networks, it changes the epistemic content of their own minds through implicit learning, i.e., the implicit acceptance of received ideas. This, in turn, provides the raw material for the generation of new ideas through recombination. An interesting way to think of this is to consider the knowledge present in a human network as an ecology of ideas, and the generation of new ideas as an evolutionary process.
In this research, we use a computational model we have developed – Multi-Agent Network for the Implicit Learning of Associations (MANILA) – to study how the generation of novel ideas by individual agents undergoing implicit learning in a social network is influenced by two crucial factors: 1) The structure of the social network; and 2) The selectivity of agents allowing themselves to be influenced by peers. Every agent in the model generates ideas, which are received by social peers and evaluated by an Oracle that is assumed to be the reference for ground reality. These evaluations generate rewards that accrue to the social status, or fitness, of the agents generating the ideas. This fitness is then the basis on which other peers choose to be influenced by the ideas each agent expresses.
We look at novel ideas at two levels: a) new ideas that are expressed for the first time; and b) new ideas that have formed implicitly in the mind of an agent but have not yet been discovered or expressed by the agent. The latter, termed latent ideas, are hypothesized to be a critical aspect of creativity. A community that generates more latent ideas is likely to also express more novel ideas in the long run. Our results indicate that, while social network structure has only a subtle effect on creativity, selectivity of influence is much more decisive.

Indispensable Quorum Sensing Proteins of Multidrug Resistant Proteus mirabilis: The Strategy Players of Urinary Tract InfectionShrikant Pawar, Md. Izhar Ashraf, Shama Mujawar and Chandrajit LahiriTuesday, 18:00-20:00

Catheter-associated urinary tract infection (CAUTI) is an alarming hospital-based disease whose threat is growing with the increase of multidrug-resistant (MDR) strains of Proteus mirabilis. Cases of long-term hospitalised patients with multiple episodes of antibiotic treatment, along with urinary tract obstruction and/or catheterization, have been reported to be associated with CAUTI. The cases are complicated by the opportunist approach of the pathogen, which has robust swimming and swarming capability. The latter gives rise to biofilms, probably induced through autoinducers, making the scenario quite complex. The high prevalence of long-term hospital-based CAUTI, along with a moderate percentage of morbidity due to ignorance and treatment failure due to MDR, necessitates an immediate intervention strategy effective enough to combat this deadly disease. Several reports and reviews focus on revealing the important genes and proteins essential to tackle CAUTI caused by P. mirabilis. Despite longitudinal countrywide studies and methodical strategies to circumvent the issues, effective means of unearthing the most indispensable proteins to target for therapeutic uses have been meagre. Here we report a strategic approach for identifying the most indispensable proteins from the complete set of proteins of the whole genome of P. mirabilis, besides comparing the interactomes comprising the autoinducer-2 (AI-2) biosynthetic pathway along with other proteins involved in biofilm formation and responsible for virulence. Essentially, we have adopted a computational network-model-based approach to construct a set of small protein interaction networks (SPIN) along with the whole-genome network (GPIN) to identify, albeit theoretically, the most significant proteins, which might actually be responsible for the phenomenon of quorum sensing (QS) and biofilm formation and thus could be therapeutically targeted to fight the MDR threats of P. mirabilis. 
Our approach shows eigenvector centrality coupled with k-core analysis to be a better measure for addressing these pressing issues.
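The two network measures named above can be sketched generically. The following is an illustrative implementation on a toy graph with illustrative protein labels, not the authors' SPIN/GPIN pipeline: eigenvector centrality by power iteration, and the k-core by iterative pruning of low-degree nodes.

```python
from collections import defaultdict

def eigenvector_centrality(adj, iters=200):
    """Eigenvector centrality by power iteration on an undirected
    graph given as {node: set of neighbors}."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        nxt = {v: sum(x[u] for u in adj[v]) for v in adj}
        norm = max(nxt.values())
        x = {v: nxt[v] / norm for v in adj}
    return x

def k_core(adj, k):
    """Iteratively strip nodes with fewer than k surviving neighbors;
    return the set of nodes in the k-core."""
    live = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(live):
            if v in live and sum(1 for u in adj[v] if u in live) < k:
                live.discard(v)
                changed = True
    return live

# Toy interactome (illustrative labels only): a triangle "core"
# (luxS, lsrB, lsrK) plus a peripheral chain lsrK - ureA - flhD.
edges = [("luxS", "lsrB"), ("lsrB", "lsrK"), ("luxS", "lsrK"),
         ("lsrK", "ureA"), ("ureA", "flhD")]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

cent = eigenvector_centrality(adj)
core = k_core(adj, 2)
```

On this toy graph the two measures agree in the way the abstract exploits: the highest-centrality node sits inside the densest (2-core) subnetwork, while peripheral proteins are pruned away.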

Inferring hierarchical structure of complex networksStanislav SobolevskyTuesday, 14:00-15:20

Our recent paper [Grauwin et al, 2017] demonstrates that the community and hierarchical structure of networks of human interactions, expressed through a hierarchical distance, largely determines the intensity of the interactions and should be taken into account while modeling them. However, the way the hierarchical distance is defined relies on knowledge of the nested network community structure. The latter could be constructed by iteratively applying a partitioning algorithm like the one suggested in [Sobolevsky et al, 2014], or by applying a hierarchical partitioning algorithm constructing the entire hierarchical structure at once [Peixoto, 2014]; but either way, detailed knowledge of the network itself is required in order to define the hierarchical distance. Sometimes an existing structure underlying the set of interacting objects, like the regional structure of a country, could be used instead, and the associated hierarchical distance also turns out to be useful for modeling purposes [Grauwin et al, 2017].

In the present work the following question is considered: could the hierarchical distance be inferred so as to best serve an appropriate modeling framework for generic complex networks, and spatial networks in particular? The network model resembles a combination of the gravity model, accounting for geographical distance, and the hierarchical model from [Grauwin et al, 2017], but is based on an arbitrary hierarchical distance to be fitted. This will not only supplement the model but will also reveal the hierarchical and community structure of the network, implied by the inferred hierarchical distance, without any additional need for community detection algorithms.
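A minimal sketch of the kind of model combination described, under assumed functional forms (the exponential decay in hierarchical distance and all numbers below are illustrative, not the paper's fitted model): interaction intensity is modeled as a gravity term damped by a decay in a tree-based hierarchical distance.

```python
import math

def tree_distance(path_a, path_b):
    """Hierarchical distance between two nodes given their root-to-leaf
    paths in a hierarchy: number of steps above the deepest common
    ancestor, summed over both nodes."""
    common = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        common += 1
    return (len(path_a) - common) + (len(path_b) - common)

def predicted_intensity(m_i, m_j, geo_dist, hier_dist,
                        gamma=2.0, beta=0.5):
    """Gravity term m_i*m_j/d^gamma, damped exponentially in the
    hierarchical distance; beta plays the role of a fitted parameter."""
    return m_i * m_j / geo_dist ** gamma * math.exp(-beta * hier_dist)

# Two towns in the same region vs. towns in different regions of a
# hypothetical country, at the same geographic distance.
same = predicted_intensity(100, 200, 50.0, tree_distance(
    ["SE", "North", "A"], ["SE", "North", "B"]))
cross = predicted_intensity(100, 200, 50.0, tree_distance(
    ["SE", "North", "A"], ["SE", "South", "C"]))
```

Fitting the hierarchical distance itself, rather than fixing the tree in advance as here, is precisely the inference problem posed above.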

Inferring the phase response curve from observation of a continuously perturbed oscillatorRok Cestnik and Michael RosenblumMonday, 14:00-15:20

Phase response curves are important for the analysis and modeling of oscillatory dynamics in various applications, particularly in neuroscience. The standard experimental technique for determining them requires isolation of the system and application of a specifically designed input. However, isolation is not always feasible, and we are compelled to observe the system in its natural environment under free-running conditions. To that end, we propose an approach relying only on passive observations of the system. We illustrate it with simulation results for an oscillator driven by a stochastic force.

Influence Maximization for Fixed Heterogeneous ThresholdsPanagiotis D. Karampourniotis, Boleslaw K. Szymanski and Gyorgy KornissTuesday, 15:40-17:00

Introduction

Influence Maximization (IM) [Kleinberg] is an NP-hard problem of selecting the optimal set of influencers in a network. Here [Karamp], we study the problem of IM for non-submodular functions, in particular for a classical opinion contagion model designed to capture peer pressure, namely the Linear Threshold Model (LTM). Yet our methods can be used for any percolation-based model. The LTM is a binary-state, deterministic model, in which a node i has either adopted a new product/state/opinion or not. The spreading rule is that an inactive node, with in-degree k_i^in and threshold φ_i, adopts the new opinion only when the fraction (or absolute number) of its neighbors holding the new opinion is higher than the node's threshold (or resistance r_i).
Methods and Results

We introduce two very different methods for IM. The first metric, termed the Balanced Index (BI), is fast to compute and assigns top values to two kinds of nodes: those with high resistance to adoption and those with large out-degree. This is done by linearly combining three properties of a node: its degree, its susceptibility to new opinions, and the impact its activation will have on its neighborhood. Controlling the weights of these three terms has a huge impact on performance, and by varying them we are able to study the importance of each feature for a variety of network structures (degree assortativity) and threshold distributions. We discovered that resistance is the most impactful of the features; hence, the cascade size is governed not by initiators with high network-centrality measures but by low-resistance nodes.
Figure 1: Performance of strategies. The GPI strategy, as well as BI (and its variations RD and RT), outperforms all other strategies on the fraction of active nodes S_eq vs. the initiator fraction p. Applied to an ER graph (N = 10000, ⟨k⟩ = 10) with uniform threshold φ = 0.5.
The second metric, termed the Group Performance Index (GPI), measures the performance of each node as an initiator when it is part of a randomly selected initiator set. In each such selection, the score assigned to each teammate is inversely proportional to the number of initiators causing the desired spread. Our results show that the GPI metric performs better than any strategy we compared it against, for almost any initiator-set size, threshold distribution, and network assortativity. At the very least, GPI serves as a benchmark for (synthetic) graphs and sets a lower bound on the performance of the optimal initiator set.
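The LTM spreading rule itself is simple enough to sketch directly. The following generic simulation (uniform thresholds and a toy path graph are assumed for illustration; this is not the BI or GPI strategy) shows how the threshold level gates cascade size:

```python
def ltm_cascade(neighbors, thresholds, seeds):
    """Linear Threshold Model: an inactive node adopts the new opinion
    when the fraction of its neighbors already holding it is strictly
    higher than the node's threshold."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in neighbors.items():
            if node not in active and nbrs:
                frac = sum(1 for u in nbrs if u in active) / len(nbrs)
                if frac > thresholds[node]:
                    active.add(node)
                    changed = True
    return active

# Path 0-1-2-3: a single seed activates the whole line when thresholds
# are below 1/2, while thresholds above 1/2 block the cascade at once.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
full = ltm_cascade(path, {v: 0.4 for v in path}, {0})
stuck = ltm_cascade(path, {v: 0.6 for v in path}, {0})
```

The sharp dependence of cascade size on the threshold distribution, visible even in this four-node example, is what makes initiator selection under the LTM non-submodular and hard.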

The Influence of Collaboration Networks on Programming Language AcquisitionSanjay Guruprasad and César HidalgoTuesday, 15:40-17:00

Many behaviors spread through social contact [1]. Numerous studies of diffusion dynamics have demonstrated that network topology can play a significant role in the patterns of social behavior that emerge [2][3]. Is an individual’s decision to learn a new programming language strictly driven by the individual’s tastes and specific language features? Or do collaborators heavily influence the programming language one chooses to learn?

Programming language adoption follows a power law, with many languages having niche user bases [4]. We assess the influence of collaboration networks on programming language acquisition by analyzing the learning paths of hundreds of thousands of developers from the social coding platform Github, which houses millions of projects. We study collaboration networks for individual developers before each language acquisition event, to measure the influence of pre-existing knowledge in the collaboration network on the choice of programming language learnt, while controlling for factors like language popularity trends and language complementarity. We also study the complex contagion effect that seems to be present in the acquisition of programming languages.

This research sheds light on how programming languages diffuse through software communities and organizational networks. Individuals can use these ideas to curate collaborators to attain learning goals, while organizations can leverage collaboration data from Github to drive faster technology adoption.

[1] Centola, D. M., & Macy, M. (2007). Complex Contagions and the Weakness of Long Ties.
[2] Newman, M., Barabási, A.-L., & Watts, D. J. (2006). The Structure and Dynamics of Networks.
[3] Granovetter, M. S. (1973). The Strength of Weak Ties.
[4] Meyerovich, L. A., & Rabkin, A. S. (2013). Empirical Analysis of Programming Language Adoption.

International Security Challenges in the Information AgeTheresa WhelanTuesday, 11:00-11:40

Scores of foreign policy academics, influencers, and experts have spilled ink declaring that the world is “increasingly complex” or “as complex as it has ever been.” These statements do not help illuminate the true nature and source of our current challenges. The reality is that the world has always been complex. There has always been some level of interconnectivity, although in the past we could often only observe or understand those connections in retrospect. So while the world and the fundamental nature of interactions among its peoples have not changed, what has changed is our ability to gather, access, and utilize information. Consequently, a key driver of today’s extremely dynamic security environment is information over-saturation, the almost inevitable result of generations of technological advancement: from the printing press, to the telegraph, to flight, to television, to the internet, to cellular communications. These technologies have progressively exposed us to exponentially more information about the world around us, at increasing speed. Meanwhile, our ability to understand and assess that information to determine its meaning or consequences has not improved nearly as fast. To further complicate matters, information technologies of the digital age, combined with new manufacturing technologies, are changing the distribution of power in the world by expanding the number of actors who can affect global events. Power that once required economies of scale possessed only by state-level entities is now diffused among state populations and non-state entities. This has both empowered once-marginalized classes and wrested normative control away from state organizations. 
Taken together, these trends demonstrate that the untapped potential to better manage today’s security challenges at the local, national and international level is not necessarily new technology, but developing the ability to learn from and positively leverage existing technology and exponentially increasing amounts of data in ways that foster a more stable world.

Introduction to Decision Process TheoryGerald ThomasMonday, 14:00-15:20

We believe that decisions are not random but are a connected network rippling from the past into the future. Is it not commonly accepted that human decision making is the result of free will and is therefore not deterministic? Yet there is an opposing view based on the theory of games that some aspects are deterministic. When this view is extended to include the system dynamics of space and time, an engineering approach to decision-based social structures becomes possible, with potentially profound consequences. This talk describes our approach to pulling these ideas together into a unified theory based on the mathematics of differential geometry.

Recently I taught a course for junior and senior engineering students with the aim of presenting a theory of differential games and giving the students the ability to apply such a theory to decision problems in areas such as economics. The presentation for this conference is based in part on that course, in part on a recent book on decision process theory, as well as on other published work, including some preliminary concepts that we presented at the NECSI 2004 Conference.

There, we explained the theoretical ideas that may be new to some, though they rely on concepts each well-known in its own area, from physics, including string theory and gravity, to game theory and differential geometry. We believe it is the application of these dynamical ideas to the field of game theory that is new.

We note the difficulties students have using ideas based on differential geometry because they are not familiar with the computational techniques commonly associated with them. In the course, Wolfram Mathematica® was used to help focus attention on the ideas so as not to be unduly distracted by the computations. This was successful with the students, as they turned their focus to problems they were familiar with and applied the ideas to those problems. This approach may be more generally successful, which is part of the current project.

Inventive Novelty and Technological Recombination in Artificial Intelligence: Evidence from the US Patent System, 1920–2018Deborah StrumskyMonday, 14:00-15:20

Artificial intelligence has emerged as a significant source of innovation. The rate of patented inventions in intellectual property systems suggests there is significant creative novelty taking place in artificial intelligence. Concurrently, a key promise of artificial intelligence is its potential to transform or displace pre-existing technologies in many industries, implying that the source of novelty may be driven by the pairing of artificial intelligence with pre-existing technologies. Inventive novelty comes in many flavors: it can occur as the arrival of an unprecedented invention, through the combining of technologies that have never previously been combined, or through improvement of an existing technology. With regard to artificial intelligence, it is unclear which source of inventive novelty dominates. This research finds that combinatorial innovation is the primary source of inventive novelty, that the roots of AI invention can be traced back farther than many may suspect, and that, relative to other technologies, the source of AI inventions is surprisingly concentrated in a subpopulation of inventors and corporations.

Investigation of Dynamics in Nonlinear Coupling Systems: An Exploratory Study with Applications in Multimodal Physiological Data FusionMiaolin Fan and Chun-An ChouMonday, 15:40-17:00

The pattern of interactions between components is an essential feature of complex systems. In many real-world systems, e.g. neural systems of human brains, global climate systems or interpersonal communication, the coupling dynamics which capture the driver-responder interrelationships among components provide important information regarding the global topological structure of systems embedded within temporal variability. Accordingly, our proposed method aims to characterize the nonlinear coupling structure based on complex network models. Moreover, cross-recurrence plots and conditional probabilities were applied for network construction, where the directional coupling can be assessed. Finally, we developed a novel approach for fusing multimodal physiological data using a network representation.
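The cross-recurrence construction mentioned above can be sketched as follows. This is a minimal scalar version (no delay embedding), and the function name and threshold are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def cross_recurrence_plot(x, y, eps):
    """Binary cross-recurrence matrix: CR[i, j] = 1 when the state of
    series x at time i lies within eps of the state of series y at time j.
    Directional (driver-responder) coupling can then be assessed from
    asymmetries in the conditional recurrence probabilities."""
    d = np.abs(x[:, None] - y[None, :])  # pairwise distances between scalar states
    return (d <= eps).astype(int)

t = np.linspace(0, 8 * np.pi, 200)
x = np.sin(t)
y = np.sin(t + 0.5)  # y is a phase-shifted copy of x
cr = cross_recurrence_plot(x, y, eps=0.1)
print(cr.shape)  # (200, 200)
```

In practice the states would be delay-embedded vectors rather than scalars, with `eps` chosen as a fraction of the signal range.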

Investigation of Library Communities on FlickrOlga Buchel and Margaret KippTuesday, 14:00-15:20

Social media systems enable a wide array of interactions with digital objects: tagging, commenting, liking, and other modes of interaction. Understanding interactions between users and such systems is crucial to the success of social media adoption. In this study we investigate community interactions with Flickr Commons collections contributed by the Library of Congress. Through the analysis of user generated data we show that interactions with collections exhibit collective properties and behaviours constrained by the geography of the contributors. Our findings help ground our understanding of behaviours in online communities and thus contribute to the management of online communities.

Specifically, we focus on the geography, semantics, and social ties in user interactions with a collection of 4,500 images from the Library of Congress (LC) Flickr pilot project (http://www.flickr.com/photos/library_of_congress/). This collection has drawn the attention of a large community of taggers and commenters on Flickr. Flickr communities crowdsource tags and comments. The latter represent local knowledge (LK) and non-local knowledge (NLK). LK comments contain factual information, such as names and places depicted in a photo, links to maps and Wikipedia articles, and stories about the people or places depicted in a photo. Such comments add value to collections as they enrich image descriptions. NLK comments are the opposite of local knowledge comments: they usually take the form of opinions, sentiments, invitations to groups, and so on.

We consider comments as a product of three elements: images, users, and locations. We assume that commenting occurs mainly because of triggers in artifacts (e.g., images reminding commenters of past experiences trigger responses from the users). Studying triggers in artifacts and their relationship with the quality of responses and locations gives us a deeper understanding of the complexity of crowdsourced contributions, and helps us predict when and where the local knowledge contribution might fail. We are also investigating what role geographic distance and location play in the production of LK and how LK and NLK activities differ across geographic space.

In our study we use a mixed methodology that allows us to separate LK comments from NLK comments, compare their localness, and analyze commentary triggers. We develop a custom Naive Bayes classifier for classifying comments as LK and NLK and employ geovisualization methods for geospatial analysis: a) interactive geographically-weighted summary (GWSummary) for exploring spatial heterogeneity of LK and NLK comment distributions; b) geospatial metrics for comparing the diffusion of LK and NLK activities of individual users. Finally, we develop a log-linear model for finding associations between geospatial areas, LK/NLK classification, and triggers that make people contribute LK or NLK comments.
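The custom Naive Bayes classifier for separating LK from NLK comments can be sketched as follows; the tokens, labels, and bag-of-words features here are toy stand-ins for illustration, not the study's actual feature set:

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (tokens, label) pairs with label in {'LK', 'NLK'}."""
    word_counts = {"LK": Counter(), "NLK": Counter()}
    class_counts = Counter()
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def classify(tokens, word_counts, class_counts, vocab):
    """Pick the class maximizing log P(class) + sum log P(word | class)."""
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for c in class_counts:
        lp = math.log(class_counts[c] / total)
        n_c = sum(word_counts[c].values())
        for w in tokens:
            # Laplace smoothing over the shared vocabulary
            lp += math.log((word_counts[c][w] + 1) / (n_c + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

docs = [
    (["photo", "taken", "1910", "chicago"], "LK"),
    (["map", "link", "street", "name"], "LK"),
    (["beautiful", "love", "this"], "NLK"),
    (["please", "join", "our", "group"], "NLK"),
]
model = train_nb(docs)
print(classify(["street", "map", "chicago"], *model))  # LK
```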

This article extends the work of Kipp and collaborators (2012, 2013) with a more detailed characterization of the collective and individual spatial behaviours of commenters and of the reasons for such behaviours. It also extends the theory of localness (Goodchild, 2007; Lieberman & Lin, 2009; Hecht & Gergle, 2010, 2011; Park et al., 2014) by offering a better interpretation of the meaning of geospatial clusters and agglomerations in social media and by showing statistically that local knowledge is elicited by different triggers at different locations.

References:

Goodchild, M. F. (2007). Citizens as sensors: the world of volunteered geography. GeoJournal, 69(4), 211-221.

Hecht, B., & Gergle, D. (2011). A beginner’s guide to geographic virtual communities research. IGI Global.

Kipp, M., Buchel, O., & Rasmussen, D. (2012). Exploring Digital Information Using Tags and Local Knowledge. ASIST Proceedings 2011.
Kipp, M., Buchel, O., Beak, J., Choi, I., & Rasmussen, D. (2013). User motivations for contributing tags and local knowledge to the Library of Commons Flickr collection. 41st Annual Conference of Canadian Association for Information Science 2013.
Lieberman, M. D., & Lin, J. (2009). You are where you edit: Locating Wikipedia contributors through edit histories. In ICWSM.

Park, S., Kim, Y., Lee, U., & Ackerman, M. (2014). Understanding localness of knowledge sharing: A study of Naver KiN "Here". In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services (pp. 13-22). ACM.

Is the world chaotic or merely complex?Walter Clemens and Stuart KauffmanMonday, 15:40-17:00

Everything is interdependent—linked so closely that each element conditions the other in ways big and small. How and to what degree each element affects the other is difficult or even impossible to gauge. The butterfly effect suggests a world in chaos—with linkages so nuanced that just to understand them is virtually impossible. To predict how they will interact is even less feasible. Against this view, complexity science seeks to identify patterns in interactive relationships. A comparison of political entities across the globe, for example, points to the key factors that conduce to societal fitness. A survey of states that have declined in fitness suggests why their strengths turned to weaknesses. A survey of societies that were relatively democratic points to four factors that contribute to their becoming authoritarian dictatorships.
Whether chaos or patterned complexity ensues is difficult to predict for many reasons, including the role played by adjacent possibles that often are not knowable: every new technology creates opportunities to push society in some new direction or another. We stumble forward, sucked into the opportunities we keep creating. One could hope, for example, that the Internet and social media could enhance education and democratic self-rule. But the impact of raw emotion, bias, ignorance, bots and trolls can contribute to misinformation and poor decision-making. Tweets by the president of a large country can contribute to fake news and destabilize both state and non-state relations across the globe. Some of these forces are embedded in our universe and are hard to change. But human agents also play key—sometimes decisive—roles. We do not always know the games we play, but play them nonetheless.

Clemens is professor emeritus of political science, Boston University;
Kauffman is professor emeritus of biochemistry, University of Pennsylvania

Just-in-Time Traffic Model Adaptation to Non-Recurrent IncidentsInon Peled and Francisco C. PereiraThursday, 14:00-15:20

Traffic networks are complex systems, where tens of thousands of vehicles interact dynamically. Proper performance of traffic networks has great socio-economic impacts on urban environments. As such, cities worldwide invest great resources in designing and implementing effective traffic management strategies, in which reliable prediction schemes play a main role.

Under typical road conditions, traffic tends to follow repeated patterns, and can thus be effectively predicted. Alas, short-term traffic prediction models often deteriorate greatly under non-recurrent traffic disruptions -- such as road accidents or unforeseen weather extremes -- just when accurate prediction is most needed for effective incident management. The problem is further exacerbated by the typically low system-wide observability of traffic networks.

Consequently, immediate adaptation of short-term traffic prediction models to sudden disruptions has so far been a largely unsolved problem. The key issue is that when an incident happens, the correlation structure between target variables and predictors changes abruptly, in a manner which is unique to the incident characteristics. So far, no systematic method has been found for updating traffic prediction models based mainly on real-time information about the incident itself.

Nowadays, however, In-Vehicle Monitoring Systems (IVMS) are increasingly penetrating vehicle markets worldwide. IVMS devices constantly monitor and report the status of the vehicles in which they are installed and, in particular, deliver distress signals in case of breakdown. The global adoption of IVMS technologies thus offers unprecedented levels of real-time network observability.

In this work, we formulate a novel, model-based framework for timely adaptation of traffic prediction models under incidents, leveraging real-time IVMS signals from affected vehicles. Our methodology is to simulate multiple "what-if" scenarios of the affected road, based on information in the received IVMS signals. The data obtained from the simulations are then fed to data-driven machine learning methods, which yield an adapted prediction model. The new, just-in-time model is better fitted to predict how traffic evolves shortly after the onset of the particular incident.

We test our framework in a case study of a highly utilized Danish highway, and the results show that our approach potentially improves traffic prediction in the first critical minutes of road incidents. Because only a few dozen simulations are required, such real-time computation is well within the capacity of commercially available computational clusters. Our findings suggest that, given immediate incident information, the hitherto unsolved problem of just-in-time model adaptation is becoming tractable.

A large scale analysis of team success in a scientific competitionMarc Santolini, Abhijeet Krishna, Leo Blondel, Thomas Landrain, Albert-Laszlo BarabásiWednesday, 15:40-17:00

Science of science – Science of success – Science of team science – collaboration network – coopetition

Overview. This work investigates criteria of performance and success of teams in a scientific context. We leverage laboratory notebooks, edited on wiki websites by student teams participating in the international Genetically Engineered Machines (iGEM) synthetic biology competition, to uncover which features of team work best predict short-term quality (medals, prizes) and long-term impact (how the biological parts that teams engineer are re-used by other teams).

For over 10 years, iGEM has been encouraging students to work together to solve real-world challenges by building genetically engineered biological systems with standard, interchangeable parts, or BioBricks. Student teams design, build and test their projects over the summer and gather to present their work and compete at the annual Jamboree. A condition of participation in iGEM is that teams document their progress and results on an open wiki website. Given the underlying structure of wikis, it is possible to know which team member has edited which part of the wiki, and at what time. Teams also collaborate with one another, forming a collaboration network. Finally, teams are awarded medals and special prizes (short-term impact), and the BioBricks that they engineer can be re-used by other teams in later years (long-term impact). In this work, we investigate how features of team organization (obtained through the wiki) affect team success (medals, prizes, etc.) in this model of science.

We extracted team information at multiple levels of granularity, as shown in Figure 1A. First, we extracted the wiki history and content for 1,551 teams from 2008 to 2016. This information was used to build internal team interaction networks. For each team, a bipartite network was constructed between the wiki editors (the team students) and the sections edited. Team networks were then reconstructed by projecting the bipartite network onto the user space, counting the number of wiki subsections co-edited by any two students of a team. The obtained count was compared to the co-editing expected under a hypergeometric distribution, and a Z-score was computed. Finally, edges with Z > 2 were deemed significant and kept for further analyses. Teams also collaborate with one another, and we extracted the team collaboration network for each year. Teams produce BioBricks, and we extracted the number of BioBricks produced and their re-use. Finally, success measures were collected, consisting of the type of medal (None, Bronze, Silver or Gold), the number of special prizes, and nomination as a Finalist or Winner of the competition.
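The significance-filtered projection described above can be sketched as follows; the function name and the dense-matrix implementation are illustrative, assuming a binary editor-by-section incidence matrix and the hypergeometric mean and variance for the expected overlap:

```python
import numpy as np

def coedit_network(B, z_thresh=2.0):
    """B: binary editor-by-section matrix (1 = editor touched that section).
    Projects onto editors, keeping edges whose co-edit count exceeds the
    hypergeometric expectation by more than z_thresh standard deviations."""
    n_editors, M = B.shape
    k = B.sum(axis=1)          # sections edited per editor
    overlap = B @ B.T          # observed co-edited sections per editor pair
    edges = []
    for i in range(n_editors):
        for j in range(i + 1, n_editors):
            mean = k[i] * k[j] / M
            var = mean * (M - k[i]) / M * (M - k[j]) / (M - 1)
            if var > 0:
                z = (overlap[i, j] - mean) / np.sqrt(var)
                if z > z_thresh:
                    edges.append((i, j, z))
    return edges

# Editors 0 and 1 co-edit heavily; editor 2 works on disjoint sections.
B = np.zeros((3, 20), dtype=int)
B[0, :10] = 1
B[1, :10] = 1
B[2, 10:15] = 1
edges = coedit_network(B)
print(edges)  # only the (0, 1) edge survives the Z > 2 filter
```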

Analysis of the data showed a saturation of team productivity, as measured by the total number of BioBricks produced, for teams larger than size 10. We then observed two trends for small (<10) and large (≥10) teams (Figure 1B). For small teams only, we observed that team size (measured by the number of wiki editors), wiki size (number of wiki sections) and degree in the team collaboration network (number of collaborations with other teams) were decisive factors in winning high-quality medals. For both small and large teams, we found that higher productivity per capita, both in wiki editing (number of edits, number of sections edited) and in BioBricks (number of BioBricks produced per editor), as well as prior experience in the competition (fraction of the team that previously participated in iGEM), were significant predictors. Finally, while team size and wiki size did not matter for large teams, we observed that those with higher team network density and a larger largest connected component were significantly more successful in the competition.

In summary, we present a unique multi-scale dataset capturing team performance and success in a scientific context. The longitudinal aspect of the data exhibits the role of prior experience as well as the importance of scaling (small teams) and organizing (large teams) to meet productivity standards and achieve team success in the context of the iGEM competition.

The Laws of Complexity and Self-organization: A Framework for Understanding NeoplasiaNat PernickMonday, 14:00-15:20

Background: Current biologic research is based on a reductionist approach. Complex systems, including organisms and cells, are presumed to merely be combinations of simpler systems, which can then be studied more readily. The whole is equal to the sum of the parts, interacting in a predictable, linear way. This approach, although adequate for understanding some diseases, has failed to bring about the knowledge necessary to substantially reduce cancer-related deaths. Complexity theory suggests that emergent properties, based on unpredictable, nonlinear interactions between the parts, are important in understanding fundamental features of systems with large numbers of independent agents, such as living systems. Applying complexity theory to neoplasia may yield a greater understanding of physiologic systems that have gone awry.

Methods and Findings: The laws of complexity and self-organization are summarized and applied to neoplasia:

1. In life, as in other complex systems, the whole is greater than the sum of the parts.
2. There is an inherent inability to predict the future of complex systems.
3. Life emerges from non-life when the diversity of a closed system of biomolecules exceeds a threshold of complexity.
4. Much of the order in organisms is due to generic network properties.
5. Numerous biologic pressures push cellular pathways towards disorder.
6. Organisms resist common pressures towards disorder through multiple layers of redundant controls, many related to cell division.
7. Neoplasia arises due to failure in these controls, with histologic and molecular characteristics related to the cell of origin, the nature of the biologic pressures and the individual’s germ line configuration.

Conclusions: In the framework of the laws of complexity and self-organization, cells maintain order by redundant control features that resist inherent biologic pressures towards disorder. Neoplasia can be understood as the accumulation of changes that undermine these controls, leading to network states associated with dysregulated growth and differentiation. Studying neoplasia within this context may generate new therapeutic approaches by focusing on the underlying pressures on cellular networks and changes to associated cofactors, instead of tumor-specific molecular changes.

Learning Bayesian network classifiers with applications to Twitter sentiment analysisGonzalo A. Ruz, Pablo A. Henríquez, Aldo Mascareño and Eric GolesThursday, 15:40-17:00

A Bayesian network is a directed acyclic graph whose nodes represent discrete attributes and whose edges represent probabilistic relationships among them. An interesting feature of Bayesian networks is that they satisfy the Markov condition, thus enabling the computation of the joint probability distribution of all the attributes (variables) in factorized form. Learning Bayesian networks from data has two components that must be handled: 1) the structure of the network, and 2) the parameters (conditional probability tables). This is a difficult task (NP-complete), so several approximate learning approaches have been devised to simplify the learning process. Probabilistic classification consists in computing a posterior probability given an input data point. By Bayes' rule, the posterior probability can be computed from the joint probability distribution of the attributes with the class variable; if we use Bayesian networks to compute this joint distribution, we obtain Bayesian network classifiers. Although many Bayesian network classifiers have been proposed, two models are the most popular: the naive Bayes classifier and the tree augmented naive Bayes classifier (TAN). In this work, we extend the TAN algorithm to an incremental version by incorporating a complexity measure, the Bayes factor, during the tree construction process. In so doing, we assess whether the addition of an edge is supported by this measure. Depending on the data, the procedure can end with a forest structure (rather than always a tree), thereby increasing predictive performance in cases where the data do not fit a tree structure. To evaluate and compare the performance of the proposed learning algorithm through classification accuracy, we first considered standard machine learning benchmark datasets, and second, the application to sentiment analysis.
Sentiment analysis can be defined as a process that automates the mining of attitudes, opinions, views, and emotions from text, speech, tweets, and database sources through natural language processing. Sentiment analysis involves classifying opinions in text into categories like positive, negative, or neutral. For this application, we assessed the performance of the proposed algorithm using two Twitter datasets in Spanish: the Catalan referendum of 2017 and the Chilean earthquake of 2010. All the tweets were programmatically searched and extracted from Twitter using the twitteR package, written in the R programming language. Dataset 1 contains a collection of 60,000 tweets from the Catalan referendum of 2017. For this, we used the corresponding keywords for the event: #cataluña, #IndependenciaCatalunya, #2Oct, #CatalanReferedendum, #L6Nenlaencrucijada, and others. Dataset 2 contains a collection of 2,187 tweets from the Chilean earthquake of 2010. This dataset was obtained from Cobo et al. (2015). Several preprocessing techniques were applied to eliminate inconsistencies and redundancies. Overall, the predictive performance measured through accuracy for Dataset 1 and Dataset 2 obtained competitive results when compared to other popular machine learning techniques. Also, it is important to point out that the resulting network structures obtained by the Bayesian network classifier showed interesting relations amongst words and terms during these critical events, a desirable property that black box machine learning techniques do not have.
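The tree-versus-forest construction described above can be sketched as follows. This is a simplified stand-in: the edge-admission gate here is a plain conditional-mutual-information threshold (`min_gain`), not the Bayes factor the authors use, but it shows how rejecting weakly supported edges turns the TAN spanning tree into a forest:

```python
import numpy as np
from itertools import combinations

def conditional_mi(x, y, c):
    """I(X;Y|C) for discrete 1-D arrays, in nats."""
    mi = 0.0
    for cv in np.unique(c):
        m = c == cv
        pc = m.mean()
        xs, ys = x[m], y[m]
        for xv in np.unique(xs):
            for yv in np.unique(ys):
                pxy = np.mean((xs == xv) & (ys == yv))
                px, py = np.mean(xs == xv), np.mean(ys == yv)
                if pxy > 0:
                    mi += pc * pxy * np.log(pxy / (px * py))
    return mi

def tan_forest(X, c, min_gain=0.01):
    """Kruskal-style maximum spanning construction over attribute pairs,
    admitting an edge only when its class-conditional MI clears min_gain;
    weak edges are skipped, so the result may be a forest, not a tree."""
    d = X.shape[1]
    scored = sorted(
        ((conditional_mi(X[:, i], X[:, j], c), i, j)
         for i, j in combinations(range(d), 2)),
        reverse=True)
    parent = list(range(d))          # union-find to avoid cycles
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    edges = []
    for w, i, j in scored:
        if w < min_gain:
            break  # edge not supported: leave components disconnected
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            edges.append((i, j))
    return edges

# Attributes 0 and 1 vary together within each class; attribute 2 does not.
X = np.array([[0, 0, 0], [1, 1, 0], [0, 0, 1], [1, 1, 1],
              [0, 0, 0], [1, 1, 0], [0, 0, 1], [1, 1, 1]])
c = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(tan_forest(X, c))  # [(0, 1)] -- attribute 2 is left as its own component
```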

The authors would like to thank CONICYT-Chile, Fondecyt grant 1180706 (G.A.R.), CONICYT Doctoral scholarship (2015-21150790) (P.H.), Basal (CONICYT)-CMM (G.A.R., E.G.), and the Research Center Millennium Nucleus Models of Crises (NS130017) (G.A.R., A.M., E.G.) for financially supporting this research.

Learning Graph Representations Using Recurrent Convolutional Autoencoder Networks for Anomaly-Based Forecasting of Ethereum PricesYam PelegWednesday, 15:40-17:00

Recently, a number of papers have revisited the problem of generalizing neural networks to work on arbitrarily structured graphs, some achieving very promising results in domains previously dominated by shallower algorithms. Graph convolutions are a generalization of convolutions and are easiest to define in the spectral domain, but the general Fourier transform used to represent them scales poorly with the size of the data. Therefore, a first-order approximation in the Fourier domain is used to obtain efficient linear-time graph CNNs. However, this approximation severely impoverishes the modelling power of the resulting graph convolutional networks. Another approach for learning graph representations requires the repeated application of contraction maps as propagation functions until node representations reach a stable fixed point. We combine these approaches and propose a recurrent version of graph convolutional networks. We then construct two models, a Recurrent Variational Graph AutoEncoder and a Recurrent Graph Convolution Regressor, and show that on an Ethereum blockchain relational graph dataset they outperform the traditional Graph Convolution Network.
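The first-order approximation in question is the standard GCN propagation rule, H' = σ(D^(-1/2)(A+I)D^(-1/2) H W); a minimal NumPy sketch (dense matrices and a ReLU nonlinearity are assumptions made for illustration):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One first-order graph convolution: renormalized adjacency times
    node features times a weight matrix, followed by ReLU. With sparse
    matrices this is linear in the number of edges, avoiding the full
    spectral (Fourier) transform."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

A = np.array([[0, 1], [1, 0]], dtype=float)   # two connected nodes
H = np.eye(2)                                  # one-hot node features
W = np.ones((2, 3))                            # toy weights
out = gcn_layer(A, H, W)
print(out)  # every entry is 1.0 here
```

A recurrent variant in the spirit described above would reapply such a layer repeatedly with shared weights, letting node representations propagate until they settle.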

A Lexical Network Approach for Identifying Suicidal Ideation in Clinical Interview TranscriptsUlya Bayram, Ali A. Minai and John PestianThursday, 14:00-15:20

The timely prevention of suicide attempts is possible through identifying the existence of any suicidal ideation. However, this is a challenging task because suicidal individuals have a bias towards hiding their suicidal tendencies. This work proposes a novel approach to recognizing latent suicidal ideation using differential analysis through lexical networks constructed from textual and verbal self-expression data collected from suicidal and non-suicidal individuals.
Previous work on identifying suicidal ideation from verbal data (such as clinical interviews) has focused on the use of individual words (unigrams) or two-word associations (bigrams) and has proved invaluable. However, ideas are built from words used in combination. In previous work, we proposed using lexical association graphs obtained from the work of individual authors or specific knowledge domains to model cognitive spaces, identifying ideas as compact, strongly connected components in these graphs. Here, we apply the reverse of this approach to evaluate whether a given expression, obtained from individuals, can be classified as coming from a suicidal or non-suicidal mindset by checking its consistency with distinct lexical graphs constructed from corpora of suicidal and control texts.
To test this approach, we use 1,500 clinical transcriptions, based on interviews with more than 450 suicidal and control subjects. The texts obtained from each group are combined into separate text corpora - one for suicidal cases (S) and the other for control cases (C). Each corpus is used to generate a lexical network where each node is a word and the weight of the edge between every word-pair indicates how strongly the words are associated in that corpus. Several metrics of association and the effect of the spreading activation approach are evaluated in this work.
Given an expression by a subject with an unknown classification, the expression is mapped onto both lexical networks, and the normalized association of its words in each network is checked. It is classified as being more consistent with the network in which it has greater strength of association. A threshold calculated from the training data used to build the networks is used to perform binary classification on the differential associative strength. Cross-validated results indicate a classification accuracy of 75% on novel test data. We analyze the structure of the lexical networks and identify central words and motifs as potential features for further improving the classification and also explore a relaxation-based approach with network dynamics.
In general, this approach is not limited to the evaluation of suicidal ideation and can be used in stylometric analysis, text authentication, and a variety of other applications across different domains. Future work will seek to enhance classification accuracy by enriching the feature space and applying additional machine learning methods.
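The differential-association classification described above can be sketched as follows; the toy networks, the simple mean-pairwise-weight score, and the zero threshold are illustrative assumptions, not the authors' exact association metric:

```python
from itertools import combinations

def association_score(words, graph):
    """Mean edge weight over all word pairs of the expression; pairs absent
    from the network contribute zero (weak consistency with that corpus)."""
    weights = [graph.get(a, {}).get(b, 0.0) for a, b in combinations(words, 2)]
    return sum(weights) / len(weights) if weights else 0.0

def classify(words, g_suicidal, g_control, threshold=0.0):
    """Assign the expression to whichever lexical network binds its
    words more strongly, relative to a trained threshold."""
    diff = association_score(words, g_suicidal) - association_score(words, g_control)
    return "S" if diff > threshold else "C"

# Toy networks: edge weights stand in for corpus association strengths.
g_s = {"alone": {"dark": 0.9, "end": 0.8}, "dark": {"end": 0.7}}
g_c = {"alone": {"walk": 0.6}, "walk": {"park": 0.5}}
print(classify(["alone", "dark", "end"], g_s, g_c))  # S
print(classify(["alone", "walk", "park"], g_s, g_c))  # C
```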

Linking the many and the few: an experimental-theoretical analysis of multiagent coordinationMengsen Zhang, Christopher Beetle, J. A. Scott Kelso and Emmanuelle TognoliWednesday, 14:00-15:20

How do the many components of a complex social/biological system coordinate with each other to form spatiotemporal patterns? And what is the nature of the organization that emerges? Previous work has concentrated mostly on very large (N→∞) or very small (N≤4) systems. The study of large systems has focused on gross statistical features (e.g. collective synchrony), while that of small systems has focused on multiple fine-grained coordination patterns and transitions between them (e.g. limb coordination). Theoretical models of large- and small-scale coordination differ. For example, the Kuramoto model for large systems, when scaled down to N≤4, is not equivalent to the extended Haken-Kelso-Bunz (HKB) model for small systems. This begs the question whether microlevel features observed in small systems (e.g. multi- and metastability) disappear for larger N. Here we approach this question along a middle path. We (1) conducted an experiment on a system of neither too few nor too many components (N=8), (2) found a model (scalable to arbitrary N) that captures experimental observations of both microlevel phase relations and statistical features for N=8, and (3) compared the properties of this model to existing models for smaller and larger systems. In the experiment, ensembles of eight people (×15=120 subjects total) spontaneously coordinated rhythmic movements in all-to-all networks. Diversity in movement frequencies was manipulated by pacing participants with metronomes prior to social interaction: metronome assignments effectively partitioned each ensemble into two groups of four with frequencies identical within but different between groups by δf = 0.0, 0.3 or 0.6 Hz. We observed that: (a) at the micro level, participants coordinated with each other metastably, with a preference for inphase and antiphase coordination (i.e. bistable tendencies), a preference that weakened as diversity (δf) increased; and (b) at the macro level, the two frequency groups turned from being integrated to segregated at a critical value of diversity (δf* = 0.5 Hz). We show that the Kuramoto model captures the latter feature of the data, but fails to account for the coexistence of inphase and antiphase coordination at the micro level. Only when a second-order (HKB-like) coupling is added to the Kuramoto model are we able to account for all observed experimental effects. Analytically, we found that the critical coupling strength for multistability (associated with the second-order coupling) was invariant to scaling (increasing N), and thus still relevant to large-scale models of coordination. Our findings suggest that the coexistence of inphase and antiphase coordination, along with related metastability and order-order transitions, may be uncovered in large-scale natural systems through a multiscale approach. In sum, not only does our model capture key experimental observations, it also reconciles models of coordination for very large and very small systems, and yields theoretical predictions that can be tested in further studies of coordination across scales. [This work was supported by NIMH Grant MH080838, The FAU Foundation, and the Davimos Family Endowment for Excellence in Science.]
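A minimal simulation of a Kuramoto model extended with a second-order (HKB-like) coupling term can be sketched as follows; the coupling strengths a and b, the time step, and the run length are illustrative choices, not fitted values from the study:

```python
import numpy as np

def step(theta, omega, a, b, dt=0.01):
    """One Euler step for N all-to-all oscillators:
    dtheta_i/dt = omega_i + mean_j[a sin(theta_j - theta_i)
                                   + b sin(2 (theta_j - theta_i))].
    The b term is the second-order coupling that supports antiphase
    (and hence bistable) coordination."""
    diff = theta[None, :] - theta[:, None]  # theta_j - theta_i
    coupling = (a * np.sin(diff) + b * np.sin(2 * diff)).mean(axis=1)
    return theta + dt * (omega + coupling)

rng = np.random.default_rng(0)
N = 8
omega = np.concatenate([np.full(4, 2 * np.pi * 1.0),    # group 1: 1.0 Hz
                        np.full(4, 2 * np.pi * 1.3)])   # group 2: delta f = 0.3 Hz
theta = rng.uniform(0, 2 * np.pi, N)
for _ in range(5000):
    theta = step(theta, omega, a=1.0, b=0.5)
order = abs(np.exp(1j * theta).mean())  # Kuramoto order parameter in [0, 1]
print(round(order, 2))
```

Sweeping the frequency gap between the two groups in such a simulation is one way to probe the integration-to-segregation transition reported in the experiment.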

Long-term Link Detection in the CO2 Concentration Climate Network. Na Ying, Dong Zhou, Qinghua Chen, Zhangang Han and Qian Ye. Wednesday, 15:40-17:00

Carbon dioxide (CO2) is a prominent anthropogenic greenhouse gas and its increase is expected to be the major factor in global warming. Here we develop a network theory-based approach to find and quantify the path of influence propagation between remote regions. Based on CO2 concentration retrievals from the Atmospheric Infrared Radiation Sounder, we find that the connectivity pattern of the networks shows a dense stripe of links in the latitudinal bands of 45°–60°N and 40°–60°S. Meanwhile, the locations of the outgoing and incoming weighted degree hubs of the climate network are qualitatively similar to the transient heat flux. Careful analyses further reveal that long-distance links spreading from west to east are the most dominant in the climate network. Specifically, there is an obvious indication of long-range transport from North America to the western North Atlantic and even to the Mediterranean by warm conveyor belts. Continental outflow off the coast of East Asia is also apparent. In addition, links along the pathway of atmospheric Rossby waves are also evident in the southern hemisphere. The results and methodology reported here provide a useful way to understand the sources and sinks of CO2 as well as its subsequent transport around the globe.
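The abstract does not spell out the link-detection estimator; a common choice in climate networks, sketched here under that assumption, is the peak of the lagged cross-correlation between two grid-point series, with the sign of the lag at the peak setting the link direction.

```python
import numpy as np

def directed_link(x, y, max_lag=30):
    """Peak lagged cross-correlation between two series (standardized
    internally).  Returns (peak correlation, lag at the peak); a positive
    lag means x leads y, i.e. a putative directed link x -> y."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = list(range(-max_lag, max_lag + 1))
    corrs = []
    for lag in lags:
        if lag >= 0:
            c = float(np.mean(x[: len(x) - lag] * y[lag:]))
        else:
            c = float(np.mean(x[-lag:] * y[: len(y) + lag]))
        corrs.append(c)
    k = int(np.argmax(np.abs(corrs)))
    return corrs[k], lags[k]

# Synthetic demo: y is x delayed by 5 samples plus noise, so the method
# should recover a directed link x -> y with lag 5.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.concatenate([np.zeros(5), x[:-5]]) + 0.05 * rng.normal(size=500)
peak, lag = directed_link(x, y, max_lag=10)
```

In a real analysis the peak would additionally be tested for significance (e.g. against shuffled surrogates) before a link is drawn.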

Mahalanobis Distance as a Proxy for Physiological System Dysregulation: Application to Expression Data. Frédérik Dufour, Pierre-Étienne Jacques and Alan A Cohen. Monday, 14:00-15:20

Aging is a complex physiological process that is still poorly understood. We hypothesize that aging could be driven in large part by dysregulation (i.e. loss of homeostasis) in cellular and organismal regulatory networks. Our work on clinical blood biomarkers demonstrated the usefulness of the Mahalanobis distance as a proxy for system dysregulation. Here, we seek to expand this approach using gene expression data, which permits the definition of a much larger number of systems at a finer biological scale. We conducted secondary data analysis with gene expression data from the Rotterdam Study (RS-III, GEO accession: GSE33828), which comprises 880 whole blood samples profiled on the Illumina Whole-Genome Expression BeadChips (HT12-v4). Genes were split into systems using Gene Ontology annotations. For each gene set, we calculated the correlation between Mahalanobis distance and age. Among the ~13,189 systems tested, ~1,029 showed evidence of age-related increases in dysregulation, substantially more than the ~369 expected by chance. Furthermore, these systems were highly clustered, with a strong representation of systems related to immune cell function, protein localization and maintenance of proteostasis, ribosomal RNA metabolism, and regulation of signaling pathways, such as the MAP kinase pathway or cell cycle phase transition pathways. These results imply that loss of homeostasis with aging is widespread but not universal in biological systems and that, in the longer term, it should be possible to map this dysregulation and identify key systems in the process.
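The core statistic is straightforward to compute; a toy sketch follows, with synthetic data standing in for one expression gene set whose dispersion grows with age (the data and effect size are illustrative, not from the Rotterdam Study).

```python
import numpy as np

def mahalanobis_dm(x):
    """Distance of each row of x from the multivariate centroid, using
    the sample's own covariance as the reference distribution."""
    centered = x - x.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(x, rowvar=False))
    # D_M^2 = (x - mu)^T S^{-1} (x - mu), computed row-wise
    d2 = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return np.sqrt(d2)

# Toy "gene set": 5 expression values per sample, with dispersion that
# increases with age, standing in for age-related dysregulation.
rng = np.random.default_rng(1)
age = rng.uniform(45, 90, 400)
spread = 1.0 + 0.03 * (age - 45)          # older -> farther from centroid
x = rng.normal(0, 1, (400, 5)) * spread[:, None]

dm = mahalanobis_dm(x)
r = np.corrcoef(dm, age)[0, 1]            # the age-Dm correlation tested per set
```

In the actual analysis this correlation would be computed once per Gene Ontology gene set and compared against a chance expectation.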

Mainstreaming Complexity Science - Practical Decisions in a Chaotic World. James Thompson and David Slater. Monday, 14:00-15:20

The terms complex systems, non-linear dynamics, chaos, and multifractals have a predictable effect on most people; academics get excited, and practical decision-makers stop paying attention. The reason for the former is that many of the unanswered questions in science require an understanding of these difficult fields; and the reason for the latter is that using these concepts for immediate practical decision-making is not yet tenable. However, the problems facing the United States government and the environment in which it operates are complex and require tools to measure this complexity and present it in such a manner that well-informed decisions can be made quickly.

A central recurring theme across these problems is the ubiquity of datasets that exhibit self-similar behavior that is best described as fractal or multifractal. These include magnetic resonance images, electrocardiograms, cyber-traffic, the operations of malware, the raw water inflows to critical infrastructure, the dynamics of power loss in the grid, stock prices, and the human genome. The prevalence of these self-similar datasets raises a number of questions: How can multifractals be used to model the systems that produce them? What can we predict using non-linear time series analysis? Can we control or alter a system to our benefit once we know it is chaotic? Can we use the concepts of self-similarity as a design precept when constructing new systems?

At MITRE, we are researching ways to incorporate multifractal analysis into decision-making by testing the accuracy, precision, and computational burden of the leading algorithms for conducting multifractal analysis. Simultaneously, we are exploring practical applications for the tools of complexity science in general. We will discuss the results of our algorithm analysis, identify some of the pitfalls of using the leading methods on near real-time data streams, and expound on a few of the most pressing opportunities for employing multifractals and complexity science in mainstream decision-making.

A manifold learning approach to chart the brain dynamics of humans using resting EEG signals. Hiromichi Suetani, Yoko Mizuno and Keiichi Kitajo. Wednesday, 14:00-15:20

Individuals display innate traits not only in their appearance, such as faces and fingerprints, but also in their internal physiological dynamics. The use of such physiological dynamics has received great attention for the purpose of automatic person identification.
In this study, we propose an approach based on “manifold learning”, a general framework for finding nonlinear coordinates that describe the low-dimensional manifold on which data lie within high-dimensional observations, in order to identify the individuality of human electroencephalography (EEG) signals.
In the experiments, over 100 volunteers participated after giving informed consent; the study was approved by the ethics committee of RIKEN. For each subject, we measured EEG signals from 63-channel electrodes at a sampling rate of 1 kHz while he/she was in a resting state. We first calculate the power spectral density of the EEG signal per window (10 seconds here) and per channel by employing the Fast Fourier Transform (FFT), and we divide it into the five typical frequency bands (delta: 1-3.5 Hz, theta: 3.5-8 Hz, alpha: 8-13 Hz, beta: 13-25 Hz, gamma: over 25 Hz) as features of the EEG signal of a particular channel. Because there are 63 channels in the measurement, a total of 315 features are used per participant and per window as an input sample for manifold learning.
In the analysis, we employ several representative manifold learning methods, including ISOMAP (isometric feature mapping), LLE (locally linear embedding), and t-SNE (t-distributed stochastic neighbor embedding), to map the 315-dimensional data into a low-dimensional space (2D or 3D); we also employ conventional linear methods such as PCA (principal component analysis) for comparison.
We show how our proposed method is useful for discriminating a particular subject from the other subjects, and how a low-dimensional “chart” of the human EEG dynamics obtained by manifold learning can be interpreted from the viewpoint of neuroscience.
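The feature-extraction step can be sketched with the periodogram; this is a minimal stand-in for the paper's FFT pipeline, and the upper gamma edge (45 Hz here) plus the synthetic 10 Hz test signal are our assumptions.

```python
import numpy as np

BANDS = {"delta": (1, 3.5), "theta": (3.5, 8), "alpha": (8, 13),
         "beta": (13, 25), "gamma": (25, 45)}       # upper gamma edge assumed

def band_powers(window, fs=1000):
    """Power in the five classical EEG bands for one channel window,
    estimated from the periodogram |FFT|^2 / N."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2 / len(window)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in BANDS.values()])

def features(channels, fs=1000):
    """Stack band powers over 63 channels -> one 315-dimensional sample,
    ready for ISOMAP/LLE/t-SNE or PCA."""
    return np.concatenate([band_powers(ch, fs) for ch in channels])

# One synthetic 10 s "EEG" window: 63 channels dominated by a 10 Hz
# (alpha-band) rhythm plus noise.
rng = np.random.default_rng(0)
t = np.arange(10 * 1000) / 1000.0
eeg = [np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)
       for _ in range(63)]
x = features(eeg)
```

A full pipeline would collect one such vector per window and per subject before the embedding step.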

Mapping population dynamics by using spatial networks. A case study focused on the traditional pattern of settlement in northwestern Spain. José Balsa Barreiro, Alfredo Morales Guzman and Alex Sandy Pentland. Tuesday, 18:00-20:00

Nowadays, around half of the world population lives in urban areas. Near-future predictions estimate that this percentage will increase to two thirds of the world population by the middle of this century. From a demographic point of view, this worldwide urbanization phenomenon is the result of both quantitative and qualitative processes. From a quantitative perspective, the increasing lack of economic opportunities in rural areas has led to massive migrations from these areas towards cities (rural exodus), which tend to concentrate most of the job opportunities. From a qualitative perspective, urban populations show more favorable indexes and dynamics, mainly because of lower demographic aging.
These population dynamics are leading to an increasingly marked segregation between rural and urban societies. Most studies and official reports have focused on the analysis of these dynamics at a large scale, polarized between urban metropolises at one end and completely abandoned rural settlements at the other.
However, these population dynamics should be studied more exhaustively at lower scales. In this paper we map and analyze these population dynamics in a small municipality located in the region of Galicia, in northwestern Spain. This region presents a traditional settlement pattern based on high levels of spatial dispersion, evidenced by the fact that the region concentrates almost 60% of Spain's population entities while representing only 5.8% of the Spanish population.
The population dynamics of each human settlement within our study area are represented. The data used are extracted from the Nomenclator published by the National Institute of Statistics, a comprehensive inventory database for demographics. The period analyzed extends from the late nineteenth century to today. We map the whole set of human settlements using spatial networks, where nodes are hierarchically connected according to their total population. The resulting patterns show an increasingly marked rural-urban segregation based solely on quantitative data.

Mapping the Higher Education System: The Applicants' Perspective. Cristian Candia-Castro-Vallejos, Sara Encarnação, Carlos Rodriguez Sickert, Cesar Hidalgo and Flavio Pinheiro. Thursday, 15:40-17:00

Comparable statistical data is of fundamental importance for policy making in Higher Education. Current classification schemes of education fields that support data production are based on UNESCO's International Standard Classification of Education (ISCED). This scheme focuses on the comparability of educational programs and, as such, of different majors. However, this does not always overlap with the similarity between majors when one takes into account students' choices when applying to Higher Education. Here, we study two such networks of similarities between majors, the Higher Education Spaces, using data on applicants from the Portuguese and Chilean Higher Education Systems. Each network exhibits eight communities. The internal composition of each community shows that there are new potential complementarities within institutions that should be weighted in policy making and higher education management. Furthermore, we have found that gender is a determinant factor in the structure under analysis: it constrains the employment opportunities of candidates, since the structure exhibits a strong pattern of assortment in relation to unemployment levels. Future research concentrates on understanding the range of phenomena captured by this structure and how it impacts the opportunities for graduates in the labor market.

Maximum entropy sparse random graphs with given average degree and clustering. Pim van der Hoorn, Dmitri Krioukov, Gabor Lippner and Johan van Leeuwaarden. Wednesday, 14:00-15:20

Statistical analysis of real-world networks requires random graph models that generate graphs with structural constraints, corresponding to the network of interest, while being maximally unbiased with respect to all other structural properties. In particular, we are interested in such models for sparse graphs with scale-free degree distribution and strong clustering, since these structures are commonly present in most complex networks. Recently, we used the concept of maximum entropy from information theory and ideas from graph limits and inhomogeneous random graphs to develop a theoretical approach to the problem of maximally unbiased sparse graph ensembles with given structural limit constraints. We then successfully applied our approach to analyze maximally unbiased graphs with given scale-free degree distributions. In this work we take a first important step toward including the clustering constraint, by considering the entropy optimization problem for sparse random graphs with given average degree and clustering. From the vast literature on large deviations we know that analyzing the structure of graphs under constraints given by triangle densities is hard in general. Therefore, we first restrict ourselves to ensembles of sparse stochastic block models. Although this seems to be a significant restriction, we conjecture that among all sparse graphs with given average degree and clustering, stochastic block models have the largest entropy. Let t denote the target average number of triangles and k ≥ √(2t) the target average degree. Then our main result is that the entropy-maximizing block model for graphs of size n consists of n/√(2t) complete graphs of size √(2t), with edges between them present with probability (k − √(2t))/n.
In particular, we see that the triangle constraint is completely absorbed into these complete graphs, while additional edges are created between them to match the average degree. In addition to increasing our understanding of typical graphs with given degrees and clustering, our results also open up interesting questions and new insights regarding the structure of large sparse graphs.
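The conjectured optimizer is easy to sample and sanity-check: disjoint cliques of size m = √(2t) overlaid on an Erdős–Rényi background with edge probability (k − m)/n. Agreement is asymptotic, since a clique of size m contributes (m−1)(m−2)/2 ≈ t triangles per node; the parameters below are illustrative.

```python
import numpy as np

def block_model(n, k, t, seed=0):
    """Sample the conjectured entropy-maximizing ensemble: complete
    graphs of size m = sqrt(2t), plus independent inter-clique edges
    with probability (k - m)/n."""
    rng = np.random.default_rng(seed)
    m = int(round(np.sqrt(2 * t)))
    n = (n // m) * m                                   # whole cliques only
    a = rng.random((n, n)) < (k - m) / n               # sparse background
    a = np.triu(a, 1)
    a = a | a.T
    for start in range(0, n, m):                       # overlay the cliques
        a[start:start + m, start:start + m] = True
    np.fill_diagonal(a, False)
    return a.astype(float)

a = block_model(n=1000, k=14, t=50)
avg_degree = a.sum() / len(a)
# diag(A^3)_i counts closed 3-walks = 2 * triangles through node i
avg_triangles = float(np.trace(a @ a @ a)) / (2 * len(a))
```

With t = 50 the clique size is m = 10, so each node sits in 36 clique triangles (the finite-size shortfall against the target t vanishes as t grows), while the background edges push the average degree toward k.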

Maxmin-omega: A New Threshold Model on Networks. Ebrahim Patel. Tuesday, 15:40-17:00

We introduce the maxmin-omega system, an intuitive model of asynchronous dynamics on a network. Each node in this system updates its state upon receiving a proportion omega of inputs from neighbourhood nodes. A crucial motivation for this study is that the system is deterministic - the update of node states depends on local exchanges until the fraction omega is fulfilled. Potential applications include neural network dynamics, epidemic spreading and Twitter tweeting. There are intriguing unanswered questions regarding the theory and such applications. For example, the key difference between maxmin-omega and traditional threshold models is that of feedback. In threshold models, once the threshold is achieved, the nodes stop processing, whereas once the threshold omega in the maxmin-omega model is achieved, the system waits for the next cycle before iterating the same process, continuing until some periodic behaviour is reached; this periodicity defines 'viral' behaviour in a maxmin-omega system. Maxmin-omega thus provides a new and compelling view of dynamics on networks of the real world.
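One way to formalize the update rule is as a recurrence on firing times: a node fires again once the ⌈ω·d⌉-th input from its d neighbours has arrived. The sketch below is our reading of that rule; the edge delays and the unit processing time are illustrative assumptions, not the paper's exact formulation.

```python
import math

def maxmin_omega_step(times, neighbors, delays, omega):
    """One synchronous update of node firing times: node i fires at the
    ceil(omega * d_i)-th smallest arrival time from its neighbours, plus
    a unit processing time.  `times[i]` is the last firing time of node
    i; `delays[i][j]` is the transmission delay on edge j -> i."""
    new_times = []
    for i, nbrs in enumerate(neighbors):
        arrivals = sorted(times[j] + delays[i][j] for j in nbrs)
        k = max(1, math.ceil(omega * len(nbrs)))   # how many inputs to await
        new_times.append(arrivals[k - 1] + 1.0)    # +1.0: processing time
    return new_times

# A hub (node 0) listening to three sources with unequal delays:
# omega = 1 makes it wait for all inputs (max-like behaviour), while a
# small omega lets it fire on the first arrival (min-like behaviour).
neighbors = [[1, 2, 3], [0], [0], [0]]
delays = [{1: 1.0, 2: 2.0, 3: 3.0}, {0: 1.0}, {0: 1.0}, {0: 1.0}]
t_wait_all = maxmin_omega_step([0.0] * 4, neighbors, delays, omega=1.0)
t_wait_one = maxmin_omega_step([0.0] * 4, neighbors, delays, omega=0.3)
```

Iterating the step and watching for repeated firing-time differences would expose the periodic ('viral') regimes described above.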

Measuring loss of homeostasis during aging. Diana L Leung and Alan A Cohen. Tuesday, 15:40-17:00

Individual biomarkers are often studied as indicators of abnormality, but a complex systems perspective suggests that further insight may be gained by considering biomarker values in the context of others. The concept of homeostasis implies that normal levels of a biomarker may be abnormal in relation to the levels of other biomarkers, and vice versa. On the premise that healthy physiological dynamics are constrained through regulation and thus converge towards certain profiles, results from our lab suggest that Mahalanobis distance (Dm), or the distance from the center of a distribution, can be used as a measure of physiological dysregulation. Specifically, Dm increases with age, and predicts mortality and many other age-related health outcomes. The increase in signal with the inclusion of more biomarkers, and the lack of sensitivity to biomarker choice, confirm that dysregulation is indeed an emergent phenomenon. This approach can be applied at the organismal level or to specific physiological/biochemical systems. Here, in order to improve the signal measured by Dm, we draw on the observation that the last few principal components (PCs) of the biomarkers were particularly stable, despite explaining minimal variance. We thus compared Dm calculated from different subsets of PCs to assess biological significance. Omitting these very low variance PCs in the calculation of Dm consistently improved the Dm signal. Subsequent analyses support the hypothesis that the very low variance PCs represent noise. For example, their loadings often represent the contrast of highly correlated biomarkers, which may largely contain measurement error. Principal component analysis is often used as a dimensionality reduction method for simplicity or limited computing power, but for a measure like Dm where the PCs are weighted equally, careful dimensionality reduction can significantly boost the biological signal.
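The PC-subset construction can be sketched as follows (toy data; `keep` selects which whitened PCs enter the distance). Using all PCs recovers the classical Mahalanobis distance, and dropping the trailing low-variance PCs implements the proposed trimming.

```python
import numpy as np

def dm_from_pcs(x, keep):
    """Mahalanobis-type distance from a subset of principal components:
    standardize, project onto PCs, whiten each kept PC by its singular
    value.  With all PCs kept this equals the classical D_M."""
    z = (x - x.mean(axis=0)) / x.std(axis=0)
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    scores = z @ vt.T / (s / np.sqrt(len(x) - 1))   # whitened PC scores
    return np.sqrt((scores[:, keep] ** 2).sum(axis=1))

# Toy correlated "biomarkers": 300 subjects x 6 markers.
rng = np.random.default_rng(2)
x = rng.normal(size=(300, 6)) @ rng.normal(size=(6, 6))

d_all = dm_from_pcs(x, keep=np.arange(6))    # classical Mahalanobis distance
d_trim = dm_from_pcs(x, keep=np.arange(5))   # omit the last, noise-like PC
```

Because each whitened PC contributes a non-negative term, trimming can only reduce each subject's distance; the claim above is that what is removed is disproportionately measurement noise.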

Meritocracy in market networks. Florentino Borondo, Javier Borondo and Cesar Hidalgo. Tuesday, 15:40-17:00

A system is said to be meritocratic if the compensation and power available to individuals is determined by their abilities and merits. A system is topocratic if the compensation and power available to an individual is determined primarily by her position in a network. Here we introduce a model that is perfectly meritocratic for fully connected networks but that becomes topocratic for sparse networks, like the ones in society. In the model, individuals produce and sell content, but also distribute the content produced by others when they belong to the shortest path connecting a buyer and a seller. The production and distribution of content defines two channels of compensation: a meritocratic channel, where individuals are compensated for the content they produce, and a topocratic channel, where individual compensation is based on the number of shortest paths that go through them in the network. We solve the model analytically and show that the distribution of payoffs is meritocratic only if the average degree of the nodes is larger than the square root of the total number of nodes. We conclude that, in the light of this model, the sparsity and structure of networks represent a fundamental constraint on the meritocracy of societies.
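The two compensation channels can be sketched in a few lines. The payment-splitting rule below (the price of 1 shared equally between the seller and the intermediaries on one shortest path) is an illustrative simplification of the model, not its exact analytic form.

```python
from collections import deque

def shortest_path(adj, s, t):
    """BFS shortest path from s to t, returned as a node list."""
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, u = [], t
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]

def payoffs(adj):
    """Every node sells one unit of content to every other node for a
    price of 1, split equally between the seller and the strict
    intermediaries on one shortest path.  Returns per-node production
    (meritocratic) and intermediation (topocratic) income."""
    n = len(adj)
    produce, relay = [0.0] * n, [0.0] * n
    for s in range(n):
        for t in range(n):
            if s == t:
                continue
            inter = shortest_path(adj, s, t)[1:-1]   # strict intermediaries
            share = 1.0 / (1 + len(inter))
            produce[s] += share
            for u in inter:
                relay[u] += share
    return produce, relay

# Star network: the hub earns mostly through the topocratic channel.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
produce, relay = payoffs(star)
```

On a complete graph every sale is direct, so the topocratic channel vanishes and the system is perfectly meritocratic, matching the dense-network limit described above.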

A meta-stable coupled-oscillator dynamics is implicated in the task of maintaining the upright posture of the human body. Steven Harrison and Jeffrey Kinsella-Shaw. Tuesday, 18:00-20:00

Controlling the posture of the human body is achieved via the effective coordination of neural, muscular, and skeletal degrees of freedom. Effective postural coordination requires a balance between stability and flexibility, with stability concerning the formation of persisting low degree-of-freedom control solutions, and flexibility concerning the potential to quickly switch between control solutions so as to adapt to changing task demands. Consistent with these requirements, meta-stable dynamics have been observed to emerge from parameterizations of non-linear systems of coupled oscillators. In a meta-stable state, stable solutions no longer exist. Nevertheless, a tendency remains for the collective variable dynamics to dwell in the regions where the stable states used to be (i.e. ghost attractors). The small degree of attraction that exists towards ghost attractors produces transient periods of dwelling in the vicinity of the weakly stable states (phase trapping), and periods of uncoordinated behavior (phase wandering) in which all potential relative phase relations can be explored. In this meta-stable regime, system states are at once transiently stable and highly flexible. These qualities have led to the description of meta-stable dynamics as “creative” dynamics, and the hypothesis that self-organized metastable dynamics are necessary for complex adaptive behavior in complex biological systems (Harrison & Stergiou, 2015).

We used cross-wavelet analyses to assess coordination between the center of pressure locations under the left and right feet during quiet standing. We hypothesized that coordination between the limbs would take the form of a coupled oscillator dynamics operating in a meta-stable regime. The Haken, Kelso, and Bunz (1985) model of the collective variable dynamics of two bidirectionally coupled oscillators has been successfully applied to a wide variety of coordinated human actions. This model predicts in-phase and anti-phase coordination patterns as stable solutions. It also predicts that asymmetries in the intrinsic dynamics of the coupled oscillatory subsystems (i.e. detuning parameters) will produce reliable phase leads in observed coordination patterns.
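The collective-variable equation behind these predictions can be sketched numerically: dφ/dt = δω − a·sin(φ) − 2b·sin(2φ), where φ is the relative phase and δω the detuning. The coefficient values below are illustrative; with b/a large enough, both φ = 0 (in-phase) and φ = π (anti-phase) are attractors, and a small δω shifts them into a reliable phase lead.

```python
import numpy as np

def hkb_flow(phi, d_omega=0.0, a=1.0, b=0.5):
    """HKB collective-variable equation for relative phase phi:
    dphi/dt = d_omega - a*sin(phi) - 2*b*sin(2*phi)."""
    return d_omega - a * np.sin(phi) - 2 * b * np.sin(2 * phi)

def settle(phi0, d_omega=0.0, dt=0.01, steps=5000):
    """Euler-integrate from phi0 to a (near-)steady relative phase."""
    phi = phi0
    for _ in range(steps):
        phi += dt * hkb_flow(phi, d_omega)
    return float(np.mod(phi, 2 * np.pi))

# With no detuning, initial phases settle to in-phase (0) or anti-phase (pi).
finals = [settle(p) for p in np.linspace(0.1, 6.1, 12)]
# A small detuning shifts the in-phase attractor: a reliable phase lead.
lead = settle(0.2, d_omega=0.3)
```

The detuning term is what produces the phase leads attributed above to biomechanical and functional asymmetries between the legs.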

Consistent with the predictions of the Haken, Kelso, and Bunz model we observed distinct preferences for in-phase and anti-phase coordination patterns between the center of pressure locations of the left and right foot, as well as phase leads arising from both biomechanical (i.e. standing asymmetrically on an uneven surface) and functional asymmetry (i.e. a preference for using one leg for tasks such as kicking a ball and standing on one leg). Consistent with the predictions of a meta-stable regime, we observed coordination taking the form of transient epochs of stable phase relations, and switches in the form and stability of coordination pattern accompanying changes in context (i.e. the availability of vision and the symmetry of stance). Our results suggest that the non-linear dynamics of coupled oscillators operating in a meta-stable regime are relevant to understanding the dynamics of quiet standing postural control (a seemingly non-oscillatory task).

Haken, H., Kelso, J. S., & Bunz, H. (1985). A theoretical model of phase transitions in human hand movements. Biological cybernetics, 51(5), 347-356.

Harrison, S. J., & Stergiou, N. (2015). Complex adaptive behavior and dexterous action. Nonlinear dynamics, psychology, and life sciences, 19(4), 345-394.

Metamorphic Testing to Improve Simulation Validation. Megan Olsen and Mohammad Raunak. Tuesday, 14:00-15:20

Metamorphic testing is a software testing technique that has shown great success in testing programs or algorithms lacking an oracle. An oracle is a reasonably easy-to-compute expected answer or behavior with which one can test the correctness of a program. The process of simulation validation also often lacks an oracle, as there may not be data to compare against for determining if a simulation adequately represents the system being studied. Although there are simulation validation techniques that do not require copious data, such as validation by a domain expert, animation of the simulation, and sensitivity analysis on the simulation parameters, validating a simulation model by comparing the simulation output to real world data can be very powerful.

Metamorphic testing avoids the need for an oracle by creating pseudo-oracles. A pseudo-oracle is defined by a change to an input or algorithm that results in a predictable change in output. For instance, we may not know what the exact output to our function should be for a given input, but we may know how that output should (or should not) change if the input value is changed in a particular way. The existence of such properties in the underlying function or algorithm is called a Metamorphic Relation. These metamorphic relations between two or more inputs and their corresponding outputs can then be used to test for correctness or discovery of potential anomalies in cases where a precise answer isn’t known or is too computationally expensive to create (Chen 2018).

We provide guidelines for applying metamorphic testing to simulation validation. Although metamorphic testing is a testing technique, we show how to adapt the process for the purpose of simulation validation. In the case of simulation validation, the metamorphic relations describe how a definable change in simulation parameters or algorithms results in a specific change in output (Olsen 2016). Although we may not know the correct simulation output given a set of inputs, we may be able to predict how the output should change given specific changes in the simulation algorithms or inputs. Metamorphic relations can be defined and tested to validate that the simulation correctly represents the system being studied. Our work demonstrates how this approach can be applied to validating simulation models, increasing our ability to demonstrate a model’s accuracy or to discover flaws in how it represents the system being studied.
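A concrete metamorphic relation can be sketched on a toy simulation under test; the single-server queue below is our hypothetical example, not a model from the paper. We cannot state the exact mean wait without an oracle, but the relation "a faster server must not increase the mean wait" is checkable on its own.

```python
import random

def mm1_mean_wait(arrival_rate, service_rate, n_customers=20000, seed=42):
    """Mean waiting time in a single-server FIFO queue with exponential
    interarrival and service times (the simulation under test)."""
    rng = random.Random(seed)
    t_arrive = 0.0
    server_free = 0.0
    total_wait = 0.0
    for _ in range(n_customers):
        t_arrive += rng.expovariate(arrival_rate)
        start = max(t_arrive, server_free)         # wait if server busy
        total_wait += start - t_arrive
        server_free = start + rng.expovariate(service_rate)
    return total_wait / n_customers

# Metamorphic relation: with the arrival rate fixed, speeding up the
# server must not increase the mean wait.  No exact oracle is needed.
w_slow = mm1_mean_wait(arrival_rate=0.8, service_rate=1.0)
w_fast = mm1_mean_wait(arrival_rate=0.8, service_rate=2.0)
assert w_fast <= w_slow   # a violation would flag a likely defect
```

A suite of such relations (scaling arrivals, swapping distributions with equal means, etc.) plays the role that output-to-data comparison plays in conventional validation.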

References
T. Y. Chen, F.-C. Kuo, H. Liu, P.-L. Poon, D. Towey, T. H. Tse, and Z. Q. Zhou, “Metamorphic testing: A review of challenges and opportunities,” ACM Computing Surveys, vol. 51, no. 1, pp. 4:1–4:27, Jan. 2018.

M. Olsen and M. Raunak, “Metamorphic validation for agent-based simulation models,” in Proceedings of the Summer Computer Simulation Conference (SCSC ’16), July 2016.

Mining the Temporal Structure of Thought from Text. Mei Mei, Zhaowei Ren and Ali Minai. Wednesday, 14:00-15:20

Thinking is a self-organized dynamical process and, as such, interesting to characterize. However, direct, real-time access to thought at the semantic level is still impossible. The best that can be done is to look at spoken or written expression. The question we address in this research is the following: Is there a characteristic pitch of thought? Does thinking have a typical semantic correlation length?
To begin answering this complex question, we look at text documents at the sentence level – i.e., using sentences as the units of meaning – and consider each document to be the result of a random process in semantic space. Given a large corpus of multi-sentence documents, we build a lexical association network representing associations between words in the corpus. This network is used to induce a semantic similarity metric between sentences, and each document is analyzed to generate time-series of windowed forward and backward sentence similarities. The correlational structure of these time series provides one way to characterize the semantic dynamics of documents. However, most expression is semantically “chunky”, i.e., it is a sequence of multi-sentence semantically coherent chunks with occasional connecting text between the chunks. To analyze this, we segment the documents into semantic chunks – each termed an idea – and intermediate gap text, and model each document as a sticky Markov chain at the sentence level. Based on several datasets of research publications, our preliminary results indicate that:
1. People tend to write 6 to 7 consecutive semantically coherent sentences before shifting to a new chunk.
2. Texts typically have no more than two or three consecutive semantic chunks before transitioning to a semantic gap, though in rare cases, it is possible to have as many as ten consecutive semantic blocks.
3. The data across a large number of documents is fit quite well by a Markov process under the stipulation that a semantic chunk must not be less than three consecutive sentences in length.
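Point 1 is consistent with a simple sticky chain: with self-transition probability p, dwell times are geometric with mean 1/(1−p), so p ≈ 0.85 yields chunks of 6-7 sentences. The two-state reduction below is our simplification of the sentence-level model, for illustration only.

```python
import random

def simulate_chunks(p_stay=0.85, n_steps=200000, seed=7):
    """Two-state sticky Markov chain over sentences ('idea' vs 'gap');
    returns the mean length of consecutive 'idea' runs, which should be
    close to the geometric mean 1 / (1 - p_stay)."""
    rng = random.Random(seed)
    state, run, runs = "idea", 0, []
    for _ in range(n_steps):
        if state == "idea":
            run += 1
            if rng.random() >= p_stay:      # leave the chunk
                runs.append(run)
                run, state = 0, "gap"
        else:
            if rng.random() >= p_stay:      # start a new chunk
                state = "idea"
    return sum(runs) / len(runs)

mean_len = simulate_chunks()    # expected near 1 / 0.15 = 6.67 sentences
```

The minimum-chunk-length stipulation in point 3 would correspond to truncating this geometric distribution below three sentences.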

Modeling Complexity as Decentralized Information Processing. Loren Demerath and James Reid. Monday, 15:40-17:00

An agent-based model of a theory of complexity as decentralized information processing is presented. The theory conceptualizes complexity as the phenomenon of emergence and evolution due to agents of information reducing their entropy through interaction. In the model, agents vary in the meaningfulness of their information as a function of the frequency, stability, and impact of issues they know about. As agents become more confident in their knowledge over the course of interaction, they standardize the most stable issues. Such emergent code lowers the entropy of agents' knowledge and increases the cohesiveness of the communities of agents that use it. The model is able to simulate the emergence and evolution of social order and language.

Modeling the Robustness of the Gene Network of Cell Cycle. Ataur Katebi and Mingyang Lu. Tuesday, 18:00-20:00

Development of cancer and many other diseases involves genetic and epigenetic alterations in normal cellular pathways. In particular, disruptions of the cell cycle can cause uncontrolled cell proliferation, a hallmark of cancer [1]. A substantial amount of work has been done to map the gene network of the cell cycle [2-5]. However, how the combinatorial interactions of these genes enable robust function in normal cells, and how this robustness is disrupted in cancer, are not well understood. To this end, we utilized a computational systems biology approach to model the dynamic behavior of the core gene network of the cell cycle. We specifically focused on the yeast cell cycle, because its genetic pathways are well studied and experimental evidence is available. Moreover, the cell cycle network is conserved across multiple eukaryotic organisms, so our framework is readily extendable to mammalian cell cycles.
The dynamics of the cell cycle network were studied with our recently developed method, random circuit perturbation (RACIPE) [6]. RACIPE uniquely takes a fixed circuit topology as input and generates an ensemble of random kinetic models, from which generic features of the network behavior can be identified by statistical analysis. To better handle oscillatory dynamics, we devised a new high-throughput algorithm to extract limit cycles from the time dynamics.
Our analyses show that the cell cycle network has a more than fifty percent chance of generating oscillatory dynamics. As a comparison, a synthetically designed repressilator circuit motif [7] has only about a five percent chance of oscillating [6], suggesting the cell cycle network is more robust as an oscillator. The analyses further show that the network has access to multiple stable steady states as well. The coexistence of oscillatory and multi-stable behaviors explains the gene regulatory mechanism of the cell cycle.
These results suggest that the cell cycle gene network is robustly designed to allow switching between release and arrest of the cell cycle. Our modeling approach allows identifying the gene components that enhance or reduce the robustness of the oscillatory dynamics and the components that enable checkpoints during the cell cycle. These findings provide insights into the interplay between the cell cycle network and different checkpoints, and the gained insights can be further utilized to study the interaction between the cell cycle network, oncogenic signaling pathways, and tumor suppressive mechanisms.
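The RACIPE-style workflow can be caricatured on the repressilator motif of [7]: fix the topology, draw random kinetic parameters, integrate, and score the fraction of oscillatory models. The parameter ranges, Euler integrator, and variance-based limit-cycle detector below are our illustrative choices, not those of [6].

```python
import numpy as np

def repressilator_traj(beta, n, t_end=200.0, dt=0.01):
    """Integrate a symmetric three-gene repression cycle
    dx_i/dt = beta / (1 + x_{i-1}^n) - x_i  (Euler scheme)."""
    x = np.array([1.0, 1.5, 2.0])
    traj = []
    for _ in range(int(t_end / dt)):
        x = x + dt * (beta / (1.0 + np.roll(x, 1) ** n) - x)
        traj.append(x[0])
    return np.array(traj)

def oscillates(traj, tail=5000, tol=0.05):
    """Crude limit-cycle detector: sustained variation at late times."""
    late = traj[-tail:]
    return bool(late.max() - late.min() > tol)

def racipe_like_fraction(n_models=20, seed=3):
    """RACIPE-flavoured ensemble for a fixed topology: random kinetic
    parameters, then the fraction of models that oscillate."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_models):
        beta = rng.uniform(1.0, 50.0)    # random production strength
        n = rng.uniform(1.0, 6.0)        # random Hill coefficient
        hits += oscillates(repressilator_traj(beta, n))
    return hits / n_models

frac = racipe_like_fraction()
```

Strong repression with a steep Hill function (e.g. beta = 10, n = 3) destabilizes the fixed point and yields a limit cycle, while weak, shallow repression settles to a steady state; sweeping random parameters turns this into an ensemble statistic like the oscillation probabilities quoted above.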
REFERENCES
[1] Hanahan D and Weinberg RA (2011) Hallmarks of cancer: the next generation. Cell, 144, pp. 646-671.
[2] Gerard C and Goldbeter A (2009) Temporal self-organization of the cyclin/Cdk network driving the mammalian cell cycle. PNAS, 106(51), pp. 21643-21648.
[3] Li C and Wang J (2014) Landscape and flux reveal a new global view and physical quantification of mammalian cell cycle, PNAS, 111(39), pp. 14130-14135.
[4] Li F, Long T, Lu Y, Ouyang Q, and Tang C (2004) The yeast cell-cycle network is robustly designed, PNAS, 101(14), pp. 4781-4786.
[5] Tyson J.J, Chen K, and Novak B (2001) Network dynamics and cell physiology, Nature Reviews Mol. Cell Biol., 2(12), pp. 908-916.
[6] Huang B, Lu M, Jia D, Ben-Jacob E, Levine H, and Onuchic JN (2017) Interrogating the topological robustness of gene regulatory circuits by randomization, PLOS Comput. Biol., 13(3), pp. e1005456.
[7] Elowitz MB and Leibler S (2000) A synthetic oscillatory network of transcriptional regulators, Nature, 403(6767), pp. 335-338.

Modelling Fractal-Based Climate-Smart Urban Land Use. Roger Cremades and Philipp Sommer. Thursday, 15:40-17:00

Cities are fundamental to the mitigation of climate change, but the relationship between emissions and urban form is currently poorly understood and so far has not been used to provide planning advice for urban land use. Here we present climate-smart urban forms that cut emissions from urban transportation in half. Furthermore, we show the complex features that go beyond the usual debates about urban sprawl vs. compactness. Our results show how to reinforce fractal hierarchies and population density clusters within climate risk constraints to significantly decrease the energy consumed in transportation in cities. Our modelling framework produces new fractal-based advice about how cities can combat climate change.

Modular-anti-modular graphs and the spread of sexually transmitted diseasesAradhana Singh and Sitabhra SinhaThursday, 14:00-15:20

The realization that the connection topology of social networks plays a crucial role in determining the spread of infectious diseases has resulted in an enormous growth of studies in this topic, both theoretical and empirical [1]. One of the specific features investigated in this context is the impact on epidemics of mesoscopic structural features, such as the organization of networks into communities (characterized by dense intra-connectivity and relatively sparse inter-connectivity) [2]. For instance, it has recently been shown that an optimal level of modularity in the contact network can significantly promote the persistence of recurrent epidemic outbreaks [3]. However, if one considers the class of sexually transmitted diseases (STDs), one must also take into account a further structural feature, viz., the segregation of each community into male and female subpopulations. Depending on the sexual orientation of the individuals being considered, interactions could be primarily between the two subpopulations, resulting in a nearly bipartite organization in each community. This results in a connection topology that is modular at one scale and anti-modular at another. We present a model for such an interaction topology in which a tunable parameter r, the ratio between intra- and inter-modular connection densities, allows us to investigate the behavior of spreading processes in such networks as a function of their mesoscopic organization while keeping the connectivity (average degree) fixed. Elucidating the properties of this model class of networks allows us to obtain a number of results pertinent to understanding the spreading of STDs. In particular, we find the occurrence of diffusion with three distinct classes of time-scales, governing the spreading within each gender in a community, between the two genders in a community, and globally over the entire population, respectively.
We connect this result to the characteristic spectral signature of the corresponding Laplacian matrix. While modular networks have earlier been associated with two distinct time-scales (viz., intra- and inter-modular) [4], we believe that the occurrence of an additional time-scale in the case of social networks pertinent for STDs would have significant consequences for understanding their spreading dynamics. As STDs constitute a persistent challenge to public health, especially after the advent of HIV/AIDS in the 1980s, understanding the distinct pattern of spreading of such diseases may be an important step towards their eventual control and containment [5].
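A minimal sketch of such a topology can be built directly: communities whose internal edges run mostly between two "gender" groups (near-bipartite, hence anti-modular) and whose external edges are sparse (hence modular). The three connection probabilities below are illustrative parameters of our own choosing, not the single ratio r of the authors' model.

```python
import itertools
import random

import numpy as np

def modular_antimodular(n_comm, comm_size, p_cross_gender, p_same_gender, p_between_comm, seed=0):
    """Toy modular/anti-modular adjacency matrix: n_comm communities of
    comm_size nodes, each split into two equal "gender" groups. Within a
    community, edges run mostly between the two groups; edges between
    communities are sparse."""
    rng = random.Random(seed)
    N = n_comm * comm_size
    A = np.zeros((N, N), dtype=int)

    def comm(i):
        return i // comm_size

    def group(i):
        return (i % comm_size) < comm_size // 2

    for i, j in itertools.combinations(range(N), 2):
        if comm(i) == comm(j):
            p = p_cross_gender if group(i) != group(j) else p_same_gender
        else:
            p = p_between_comm
        if rng.random() < p:
            A[i, j] = A[j, i] = 1
    return A

# three communities of 10 nodes each: dense cross-gender links,
# sparse same-gender and inter-community links
A = modular_antimodular(3, 10, p_cross_gender=0.5, p_same_gender=0.05, p_between_comm=0.02)
```

The eigenvalues of the graph Laplacian built from such a matrix would then exhibit the separated groups of modes associated with the distinct diffusion time-scales discussed above.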

The Moral Machine Experiment: 40 Million Decisions and the Path to Universal Machine EthicsIyad RahwanFriday, 13:00-13:40

With the rapid development of Artificial Intelligence technology come widespread concerns about how machines will behave in morally charged situations. Addressing these concerns raises the major challenge of quantifying societal expectations about the ethical principles that should guide machine behavior. This talk describes the Moral Machine, an Internet-based experimental platform that is designed to explore the multi-dimensional moral dilemmas faced by autonomous vehicles. This platform enabled us to gather 40 million decisions in ten languages from over 2.3 million people in 233 countries and territories, and thus to assess the paths and obstacles to machine ethics, including notable cross-cultural variations that undermine the possibility of singular, universal machine ethics.

Moran model on structured populations and effective mutation ratesGabriella Franco, Marcus de Aguiar and Lucas FernandesWednesday, 15:40-17:00

The Moran model describes the evolution of a single biallelic gene in a population of N individuals. The model has two regimes of genetic distribution, with low and high diversity respectively, that depend on the balance between mutation and drift. The regimes are separated by a critical value of the mutation rate which, for well-mixed populations, depends only on the population size and is given by 1/2N. Here we study the transition between the low and high diversity regimes in spatially structured populations. We define the critical mutation rate as the value that maximizes the Shannon entropy of the allelic distribution. By placing the population on a ring network we show that this transition point decreases with the number of neighbors on the ring. This implies that mutations have larger effects on structured populations than on well-mixed ones. Since the Moran model on regular networks can be mapped onto the Voter model with opinion makers, larger effective mutation rates translate into a larger influence of opinion makers on structured populations of voters.
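A minimal sketch of the well-mixed biallelic Moran model with mutation, for orientation (one possible implementation, not the authors' code; the structured version would restrict the replaced individual to the reproducer's neighborhood on the ring):

```python
import random

def moran_step(pop, mu, rng):
    """One Moran event: a random individual reproduces (the offspring
    switches allele with mutation probability mu) and replaces a
    uniformly chosen individual, keeping the population size fixed."""
    offspring = rng.choice(pop)
    if rng.random() < mu:
        offspring = 1 - offspring
    pop[rng.randrange(len(pop))] = offspring

def simulate(N, mu, steps, seed=0):
    """Run `steps` Moran events on a well-mixed population of size N."""
    rng = random.Random(seed)
    pop = [rng.randrange(2) for _ in range(N)]
    for _ in range(steps):
        moran_step(pop, mu, rng)
    return pop

N = 50
pop = simulate(N, mu=1 / (2 * N), steps=10_000)  # mu at the well-mixed critical value 1/2N
```

Sweeping mu and locating the maximum of the Shannon entropy of the resulting allelic distribution recovers the critical mutation rate described above.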

A Multi-Agent Approach to Aid in Developing CountriesLawrence De Geest and William GibsonTuesday, 15:40-17:00

The debate on aid is ancient, but it has recently taken some fairly sharp methodological turns. On one hand, specialists now recommend randomized controlled trials to confirm that it is better to give the poor cash rather than specific goods, such as chickens and bed-nets. A newer direction in aid thinking begins with a bottom-up approach, suggesting that the marginal aid dollar be invested in promoting thinking about how barriers to economic development can be overcome. The team approach emphasizes communication between various coalitions within the aid community, both local and international, as a way of addressing corruption, the lack of public goods, and the need for more streamlined and transparent rules to promote physical and human capital accumulation. Traditionally, the literature has been mixed, but mostly pessimistic, about the prospects of international aid promoting growth or reducing poverty. This pessimism is supported by game-theoretic models but is mildly contradicted by the empirical literature, in which institutional variables seem to matter. This paper aims to bridge this gap between the theoretical and empirical literature. It embeds a popular game-theoretic construct within a multi-agent computational model in which agents are subject to institutional pressure to cooperate. The conclusion is that even the simplest forms of institutional analysis, modeled here by way of the genetic algorithm, show that the theoretical literature may be too quick to conclude that aid does not work.

Multi-Layer Policy Networks: Untangling the Web We WeaveDavid Slater, Hettithanthrige Wijesinghe, Alexander Lyte, Shaun Michel, Karl Branting, Haven Liu and Matt Koehler Monday, 15:40-17:00

The myriad laws, regulations, policies, and procedures (collectively defined as "rulesets") constrain how the United States Federal Government implements services and makes decisions. Moreover, these rulesets are so intertwined and interdependent that the operational impact of changes or additions to them is often poorly understood or appreciated at the legislative stage. Our team is working to improve the agility of the government in responding to this ever-changing corpus of documents by treating them as a multi-layer network woven together (primarily through citations). We created tools to parse the ruleset corpora, such as the United States Code and the Code of Federal Regulations, as well as internal agency policy documents. The result is a network of hundreds of thousands of nodes and edges. We are developing interactive tools to query, view, and ultimately annotate the network. In this way, we seek to characterize the structure of the rulesets and measure the impact of new bills, regulations, and inter-agency policy changes. Our overall goal is multifaceted: (1) help policy makers better understand the connective structure of their rulesets, (2) classify redundant, inconsistent, and critical rule "pathways," (3) analyze the impact of the network structure on outcomes for citizens, and (4) trace the impacts of new laws and regulations through agencies down to their organizations, systems, and business functions. We will discuss use cases, including the implementation of the fiscal year 2018 tax law.
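The core representation is simple: rules as nodes, citations as directed edges, with impact queries as reachability. A minimal sketch (the rule names below are hypothetical placeholders, not real identifiers from the corpus):

```python
from collections import defaultdict, deque

def build_citation_graph(edges):
    """Directed graph of citation links between rules."""
    graph = defaultdict(set)
    for src, dst in edges:
        graph[src].add(dst)
    return graph

def reachable(graph, start):
    """All rules reachable from `start` via citation links (BFS) --
    a crude proxy for the parts of the corpus a change might touch."""
    seen, frontier = set(), deque([start])
    while frontier:
        node = frontier.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# hypothetical rule identifiers, for illustration only
edges = [("Statute A", "Regulation B"), ("Regulation B", "Policy C"), ("Statute A", "Policy D")]
graph = build_citation_graph(edges)
impact = reachable(graph, "Statute A")  # {"Regulation B", "Policy C", "Policy D"}
```

Running the same traversal on reversed edges would instead answer "which rules depend on this one," the other direction of impact analysis mentioned above.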

Multi-scale IntentAnne-Marie GrisogonoMonday, 14:00-15:20

A multi-scale framework of intents is described for purposeful agents as a tool to explore the dynamics of cooperation, collaboration, conflict and competition between humans, both as individuals and as groups. The framework spans multiple timescales over which intentions persist - from the most transient (the next action to be taken) to the most enduring (deeply held values to be upheld), with several implicit scales in between. It is argued that each agent's behaviour in a particular situation is shaped by its own intent framework and by certain aspects of its conceptual model of the situation (including itself). This model suggests a systematic approach to identifying the scope for cooperative behaviours between two or more agents, and moreover where and how the domain of cooperation could be enlarged, with applications to improving cooperation and reducing conflict in human groups.

Multiscale Complexity Analysis of Natural ScenesMin Kyung Seo, In-Seob Shin and Seung Kee HanMonday, 14:00-15:20

The natural scenes we face every day look very complex, seemingly without any regularities, yet we are able to recognize them almost immediately, as we do simple objects. In order to reveal the statistical regularities underlying the spatial organization of natural scenes, we introduced an information-theoretic method of multiscale complexity analysis in which the compositional information of the color or intensity distribution in visual images is computed as a function of the scale of observation [1]. It is inspired by the multiscale profile proposed by Y. Bar-Yam [2], where the complexity of a system is characterized by a scale-dependent spectrum instead of the single quantity used in conventional complexity analysis [3]. We observed that the spatial organization of forest images is characterized by a complexity spectrum distributed uniformly in logarithmic-scale space. In terms of the complexity density computed on a logarithmic scale, an exponential scaling behavior with a scaling index alpha close to zero is observed. On the other hand, for sky images large scales dominate the complexity spectrum (an exponential scaling with a positive index alpha), while for grassland images small scales dominate (an exponential scaling with a negative index alpha). From the multiscale complexity analysis, we observe that the scaling index alpha can be used to characterize the complexity of natural images: complex (alpha close to zero), simple (positive alpha), random (negative alpha). We also point out that the scaling behaviour of the complexity spectrum of natural images could be utilized for the classification or identification of natural scenes.
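One simple way to compute a scale-dependent spectrum of this kind is to coarse-grain the image at each scale and measure the Shannon entropy of the resulting intensity histogram. This is a generic sketch of the idea, not the authors' exact compositional-information measure:

```python
import numpy as np

def complexity_spectrum(img, scales, bins=16):
    """Shannon entropy (bits) of the intensity histogram after
    coarse-graining the image at each scale of observation."""
    spectrum = []
    for s in scales:
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        # block-average into (s x s) cells to observe at scale s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        hist, _ = np.histogram(blocks, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        spectrum.append(float(-(p * np.log2(p)).sum()))
    return spectrum

img = np.random.default_rng(0).random((64, 64))  # stand-in for a natural scene
spectrum = complexity_spectrum(img, scales=[1, 2, 4, 8])
```

Plotting such a spectrum against log-scale and fitting an exponential would yield a scaling index analogous to the alpha discussed above.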

NETCAL: An interactive platform for large-scale, network and complexity analysis of calcium imaging recordings in neuroscienceJavier Orlandi, Sara Fernández-García, Andrea Comella-Bolla, Mercè Masana, Gerardo García-Díaz Barriga, Josep M. Canals, Michael A. Colicos, Jordi Alberch, Jordi Soriano and Jörn DavidsenTuesday, 18:00-20:00

Systems neuroscience has become one of the best representatives of complexity in living systems. Neurons, connected to each other in complex topologies and interacting through non-linear dynamics, are the paradigm of a complex system. In recent years, calcium imaging has become the best technique for the large-scale analysis of neuronal networks. Its ability to record the activity of thousands of neurons simultaneously has become invaluable for the study of systems neuroscience and complex systems.

We present NETCAL, a MATLAB-built, dedicated software platform to record, manage and analyze high-speed, high-resolution calcium imaging experiments. Its ease of use, interactive graphical interface and exhaustive documentation are aimed at wet-lab researchers, but it will also meet the needs of any experienced data scientist through its plugin and scripting system. We have developed a large set of tools and incorporated state-of-the-art algorithms and toolboxes for large-scale analysis of network and population dynamics. Analyses include: automated cell detection (both static and dynamic); trace and population sorting through machine learning, clustering and pattern recognition; bursting dynamics; spike detection; network inference (from functional networks to causal relations); and many more. Several of these tools are also available in real time, e.g. cells and spikes can be monitored during the actual recording, giving the researcher extensive feedback on the progress of the experiment.

Network Modularity and Hierarchical Structure in Breast Cancer Molecular SubtypesSergio Antonio Alcala-Corona, Jesus Espinal-Enriquez, Guillermo de Anda Jáuregui, Enrique Hernandez-Lemus and Hugo TovarTuesday, 14:00-15:20

Breast cancer is the malignant neoplasm with the highest incidence and mortality among women worldwide. It is a heterogeneous and complex disease; its classification into different molecular subtypes is a clear manifestation of this. The recent abundance of genomic data on cancer makes it possible to propose theoretical approaches to model the process of genetic regulation. One of these approaches is gene transcriptional networks, which represent the regulation and co-expression of genes as well-defined mathematical objects. These complex networks have global topological and dynamic properties. One of these properties is modular structure, which may be related to known or annotated biological processes.

In this way, different modular structures in transcription networks can be seen as manifestations of regulatory structures that closely control certain biological processes. In this work, we identify modular structures in gene transcriptional networks previously inferred from microarray data of the molecular subtypes of breast cancer: luminal A, luminal B, basal, and HER2-enriched. Using a methodology based on the identification of functional modules in transcriptional networks, we analyzed the modules (communities) found in each network to identify particular biological functions (described in the Gene Ontology database) associated with them. We also explored the hierarchical structure of these modules and their functions to identify unique and common characteristics that could allow a better level of description of these molecular subtypes of breast cancer. This approach and its findings are leading us to a better understanding of the molecular cancer subtypes and may even help direct experiments and design treatment strategies.

Network roles and risk: Opportunity dynamics in stressed plant-pollinator networksRichard Lance, Carina Jung, Denise Lindsay, Afrachanna Butler, Nathan Harms, Michael Mayo, Martin Schultz and Joshua ParkerWednesday, 15:40-17:00

Risk-reward tradeoffs are fundamental to many of the species-to-species interactions within ecological communities, and within the context of broader, more complex species interaction networks. In most habitats there are many plants that rely on insects and other animals for pollination and many pollinators that rely on those plants as food resources. Plant species that interact with relatively diverse suites of pollinators are expected to benefit from redundancy in that critical resource (i.e., pollinators). However, pollinators are also known to move a rich array of nectar pathogens (e.g., yeasts, bacteria) among plants. We surmise that plant and pollinator species with differing numbers of direct and indirect species interactions - and that exhibit differing levels of network connectedness (e.g., degree and power centrality) - must experience differing levels of pathogen risk. It is also known that nectars of different plant species have different capacities (e.g., antimicrobial enzymes) for resisting these pathogens. We therefore hypothesize that plants will have adopted network-driven strategies for optimizing the risks and rewards of pollinator "generalism," and that network roles will correspond to plant attributes, such as attractiveness to pollinators and anti-pathogen biochemical defense. We are further interested in how changing environmental stress could cause significant changes to these network-based risk strategies. In our study, we are assessing how drought and soil contamination with RDX (1,3,5-Trinitroperhydro-1,3,5-triazine; a compound often used in explosives formulations), singly and in combination, might impact the structures and dynamics of such networks. Negative impacts from either or both stressors may cause plants to minimize risk by reducing attractiveness to pollinators, to maximize protective measures (e.g., anti-microbial nectar chemistry), or simply to become more susceptible to pathogen attack.
Finally, we are interested in how adaptive responses to stressors within these natural networks might be observed in or applied to human resource-supply networks.

The Network Topology of Locally Interacting Agents and System-Level Distributions of Agent ActionsJanelle SchlossbergerTuesday, 14:00-15:20

This work nuances theories of aggregation in economic systems, and it develops a set of tools for the analysis of complex economic systems. This work considers an economy with N networked agents and a fixed aggregate feature, and it asks whether there exists a non-degenerate distribution of possible paths along which the economy can evolve. The N agents in the system each possess a binary-valued attribute, and their decision-making depends on the local relative frequency of the attribute’s unit value; agents’ network positions here determine their local environments. Holding the prevalence of the attribute’s unit value fixed in the system, there are combinatorially many possible configurations of the attribute among agents that are consistent with this fixed aggregate feature. If the system exhibits configuration dependence, then the distribution of paths along which the economy can evolve is non-degenerate. This work first characterizes those network structures for which configuration is irrelevant; in this case, the path of the economy solely depends on the system’s aggregate feature. This work then studies those network structures for which configuration is relevant; now, the aggregate feature of the system is no longer sufficient for determining the path of the economy. For every feasible population size, global prevalence of the attribute’s unit value, and network topology, this work maps the underlying network structure to a distribution of possible paths for the system. This work determines which network topologies maximize the variance of this system-level distribution, and this work characterizes the properties of the distribution when configurations are both equally and not equally likely to occur. 
The theoretical findings developed in this work offer new insights into the formation of macroeconomic sentiment in an economy, the possibility for variations in aggregate sentiment absent changes in economic fundamentals, and the effects of fluctuations in such sentiment on the outcome of an election.
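The role of configuration can be made concrete on a tiny example: on a 4-cycle with two agents holding the attribute's unit value, different placements with the same aggregate produce very different local environments. This is a toy illustration of configuration dependence, not the paper's formal mapping:

```python
def local_frequencies(adj, attrs):
    """Each agent's locally observed relative frequency of the
    attribute's unit value among its network neighbours."""
    return [sum(attrs[j] for j in nbrs) / len(nbrs) if nbrs else 0.0
            for nbrs in adj]

# a 4-cycle: two configurations with the same aggregate (two agents hold value 1)
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
adjacent_ones = local_frequencies(adj, [1, 1, 0, 0])  # ones placed next to each other
opposite_ones = local_frequencies(adj, [1, 0, 1, 0])  # ones placed across the cycle
```

Here `adjacent_ones` gives every agent the same local frequency 1/2, while `opposite_ones` gives agents local frequencies of 0 or 1: identical aggregates, different local environments, hence potentially different paths for the system.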

Neural oscillations and spiking assemblies drive each other in the brainPeter Stratton, Francois Windels, Allen Cheung and Pankaj SahWednesday, 14:00-15:20

Oscillations in activity are hallmarks of all neural systems. These oscillations are implicated in almost every sensory, motor and cognitive function, and are perturbed in characteristic ways in brain diseases. However, little is known about how these oscillations are controlled, or how they are causally connected with the underlying activities of individual neurons. In particular, rapid fluctuations in oscillatory power and phase are observed continuously across the brain, and are often presumed to reflect reconfiguration of brain networks to meet ongoing processing demands. However, the questions of if, and how, this network reconfiguration is reflected in short-lived spike-to-spike correlations between neurons remain unexplored. We recorded local field potential (LFP) oscillations and neural spiking activity using tetrodes in the rat basolateral amygdala during fear conditioning. We show that brief spike correlations between neurons continuously form short-lived neuronal assemblies. Significantly, the assembly couplings drive oscillations across frequencies which, in turn, reconfigure the assemblies. This interplay of population oscillations with individual neuronal coupling establishes reciprocal control across spatial and temporal scales. Neither oscillations nor spikes fully dictate the patterns of neural activity, but each instead continuously influences the other. These results help explain how the brain can dynamically reconfigure itself to process information much faster than plasticity mechanisms allow.

Neural-inspired Anomaly DetectionStephen Verzi, Craig Vineyard and James AimoneThursday, 15:40-17:00

Anomaly detection is an important problem in various fields of complex systems research, including image processing, data analysis, physical security and cybersecurity. In image processing, it is used for removing noise while preserving image quality, and in data analysis, physical security and cybersecurity, it is used to find interesting data points, objects or events in a vast sea of information. Anomaly detection will continue to be an important problem in domains intersecting with "Big Data". In this paper we provide a novel algorithm for anomaly detection that uses phase-coded spiking neurons as basic computational elements.

Neuronal avalanche dynamics leading to diverse universality classes in neuronal culturesJavier Orlandi, Mohammad Yaghoubi, Daniel Korchinski and Jörn DavidsenWednesday, 14:00-15:20

Neuronal avalanches have become a ubiquitous tool to describe the activity of large neuronal assemblies. The emergence of scale-free statistics with well-defined exponents has led to the belief that the brain might operate near a critical point. Yet little is known about how the different exponents arise or how robust they are. Using calcium imaging recordings of dissociated neuronal cultures, we show that the exponents are not universal and that significantly different exponents arise with different culture preparations, implying the existence of different universality classes. At the same time, neuronal systems are often dominated by the presence of spontaneous activity, with neurons randomly firing and triggering avalanches. Multiple avalanches can intertwine, and no clear separation of time scales exists anymore between spontaneous activation and propagation. We show that in those cases the classical definition of a neuronal avalanche has to be called into question, and other measures have to be introduced to reveal the real statistics of those systems.
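For reference, the classical avalanche definition whose validity is questioned here bins the population spike train and treats each maximal run of non-empty bins as one avalanche. A minimal sketch of that standard procedure (a common convention, not the authors' replacement measure):

```python
def avalanche_sizes(spike_counts):
    """Classical definition: given binned population spike counts,
    each maximal run of non-empty bins (separated by empty bins) is
    one avalanche; its size is the total number of spikes in the run."""
    sizes, current = [], 0
    for count in spike_counts:
        if count > 0:
            current += count
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

sizes = avalanche_sizes([0, 2, 3, 0, 1, 0, 0, 4])  # -> [5, 1, 4]
```

When spontaneous activity is dense, empty bins become rare and this segmentation merges independent cascades, which is precisely why the definition breaks down without a separation of time scales.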

A New Era of Emergence? Exploring 24 Recent Top-Tier Articles on EmergenceBenyamin LichtensteinMonday, 15:40-17:00

Emergence—the coming into being of new patterns and systems—has been examined more fully over the past decade or more, increasingly using analytic tools from complexity science to understand dynamics in leadership and teamwork, entrepreneurship and organizational growth, strategy and supply chains, international and institutional change, and more. The past three years have seen a surprising surge of articles on emergence: top-tier organizational journals have published 24 articles on emergence from 2015 until now—in AMJ, ASQ, AMR, Organization Science, SMJ, JAP, and others. Having spent much of my own career examining complexity and emergence (Lichtenstein 2000; Lichtenstein, Dooley & Lumpkin, 2006; Lichtenstein 2011; Lichtenstein, 2014; Lichtenstein 2016), I find the sheer number of new articles rather remarkable, as is their placement in top-tier journals, where such topics have mostly been avoided. As a whole, they might lead us to a new understanding of emergence and its influence in the complex social world.

For the conference I propose a careful review and analysis of these articles, around the following questions: (1) How is emergence construed in each paper, i.e. what levels of analysis are captured across these new articles? (2) How does emergence happen, i.e. are there specific dynamics and processes that are common across these articles? (3) Why does emergence occur, i.e. can we identify any underlying causes or leverage points which explain the presence of emergence across levels of analysis? My methodology will include content analysis and a categorization using the 15 core complexity sciences (e.g. NK Landscapes, CAS, dissipative structures). My aim is to uncover one or more key insights about emergence that can help us bring emergence into a new era of understanding and enactment.

The New Field of Network Physiology: Mapping the Human PhysiolomePlamen Ch. IvanovSunday, 11:00-11:40

The human organism is an integrated network where complex physiological systems, each with its own regulatory mechanism, continuously interact to optimize and coordinate their function. Organ-to-organ interactions occur at multiple levels and spatiotemporal scales to produce distinct physiologic states: wake and sleep; light and deep sleep; consciousness and unconsciousness. Disrupting organ communications can lead to dysfunction of individual systems or to collapse of the entire organism (coma, multiple organ failure). Yet, we know almost nothing about the nature of interactions among diverse organ systems and sub-systems, and their collective role in maintaining health.

Systems biology and integrative physiology focus on the 'vertical integration' from sub-cellular level to tissues to single organs.

There is an absence of knowledge and research effort in the direction of 'horizontal integration' of organ interactions, which is essential in health and disease. We do not know the basic principles and mechanisms through which complex physiological systems dynamically interact over a range of space and time scales and horizontally integrate to generate behaviors at the organism level. There are no adequate analytic tools or theoretical frameworks to probe these interactions.

The emerging new field of Network Physiology aims to address these fundamental questions. In addition to defining health and disease through structural, dynamical and regulatory changes in individual systems, the network physiology approach focuses on the coordination and interactions among diverse organ systems as a hallmark of physiologic state and function.

Through the prism of concepts and approaches originating in statistical and computational physics and nonlinear dynamics, we will present basic characteristics of individual organ systems, distinct forms of pairwise coupling between systems, and a new framework to identify and quantify dynamic networks of organ interactions.

We will demonstrate how physiologic network topology and systems connectivity lead to integrated global behaviors representative of distinct states and functions. We will also show that universal laws govern physiological networks at different levels of integration in the human body (brain-brain, brain-organ and organ-organ), and that transitions across physiological states are associated with specific modules of hierarchical network reorganization.

We will outline implications for new theoretical developments, basic physiology and clinical medicine, novel platforms of integrated biomedical devices, robotics and cyborg technology.

The presented investigations are initial steps in building a first atlas of dynamic interactions among organ systems and the Human Physiolome, a new kind of BigData of blue-print reference maps that uniquely represent physiologic states and functions under health and disease.

New formalisms for emergent phenomena and diagonal evolutionIrina TrofimovaMonday, 14:00-15:20

In order to understand emergent phenomena it is sometimes useful to look not to the dynamics at the micro level of these phenomena but to the macro level. To understand physical and chemical processes therefore, it might be useful to see emergent properties in psychological, biological and social systems. Commonly discussed similarities between such processes are: stochasticity, distributed actions of multi-party systems, energy transfer and transformations, multiplicity of and changes in degrees of freedom, boundary conditions, etc. Yet, neurophysiological processes have properties that, perhaps, models in physics can benefit from, as summarized in the Functional Constructivism (FC) paradigm and in evolutionary theories. FC considers natural processes as being generated every time anew, based on capacities of contributing parties and environmental demands. Formal FC descriptors overlap with quantum mechanical principles but include several unique features: 1) Zone of Proximate Development (serving as boundary conditions); 2) (peer) systems with multiple siblings and ensembles (differing from those in statistical mechanics); 3) systems with internalized integration of behavioral elements (“cruise controls”); 4) systems capable of handling low-probability, future events; 5) functional differentiation within systems. The recursive dynamics among these descriptors which act on (traditional) downward, upward and horizontal directions of evolution is conceptualized as diagonal evolution, or dievolution. Several analogies between these FC descriptors and emergent QM constructs are given.

Nonlinear Datapalooza 2.0: A New Kind of Conference for a New Kind of ScienceLisa Taylor-Swanson, Lisa Conboy, Jonathan Butner, Mary Koithan, Rumei Yang and David PincusTuesday, 18:00-20:00

Background Most conferences are designed for presentations of completed or in process scientific or technical work. The goals for attending are for dissemination, citation, and critical feedback on a presenter's work. Secondary goals are networking to build colleagues with different skillsets or from different disciplines, exploring new methods, and coming up with new and creative avenues to explore. Traditional conferences are hierarchical, with experts presenting keynote talks and workshops in a didactic format.
Introduction The Nonlinear Datapalooza turns this process upside-down: no hierarchy, no finished work, and no boundaries to forming new collaborations and learning new methods. As the name suggests – the Nonlinear Datapalooza is all about getting together to analyze data. This conference sold out in 2015, and participants reported it to be a very effective way to learn nonlinear analyses and to develop new working collaborative networks.
Methods Datapalooza brought together three ‘kinds’ of scholars in 2015: (A) methods experts to obtain new data sets to analyze (B) people with data sets who would like to learn new approaches to analysis and (C) content experts and students who want to learn new methods and contribute as co-authors on more high quality publications. Essentially – the conference matched people who would like to write up “methods” and “results” with those who would like to write up “introductions” and “discussions.” In the process, methodologists learned about other disciplines and further validated their tools, while content experts and students learned how to apply new methodologies to their work. Each attendee was asked to be able to try out at least one new method and become an author on at least one original publication produced during the conference.
Results Datapalooza post-conference survey results were favorable. Every participant (100%) found the conference to be “very much” or “exceptionally” more interdisciplinary and innovative compared with other conferences they had attended, and 94% found it to be “very much” or “exceptionally” more tailored to participant needs. When asked to rate the conference’s relative effectiveness in different areas, 90% found it more effective in teaching new methods and facilitating professional networking. In terms of productivity, 100% of participants expected the Datapalooza to be more effective than the typical conference at increasing their output of publications and conference presentations, and 79% expected a significantly greater increase in grant applications. At the same time, the work produced made a serious impact on the field of nonlinear science, as we produced multiple papers with the potential to combine different methodologies.
Discussion This presentation will focus on outcomes of the first Datapalooza conference and will present information about the upcoming Datapalooza 2.0 conference in 2019.

Nonlinear growing complex network of citations of scientific papersMichael GolosovskyWednesday, 15:40-17:00

Many models of complex network growth have been proposed by theoretical physicists, mathematicians, and computer scientists, but hardly any of them have been validated against measurements according to accepted physical standards. Our goal is to take one well-documented complex network and to establish its growth mechanism through modeling and model-inspired measurements.

We focused on citation networks of Physics, Economics, and Mathematics papers [1]. Using modeling and model-based measurements we uncovered the citation dynamics of these research fields [2]. Contrary to the common belief that citation dynamics is determined by linear preferential attachment (a Markov process), we found that it follows nonlinear autocatalytic growth (a Hawkes process). The nonlinearity stems from a synergistic effect in the propagation of citation cascades (a kind of social reinforcement) and is intricately related to local network topology and network motifs. The nonlinearity results in non-stationary citation distributions, diverging citation trajectories of similar papers, and runaways or "immortal papers" [3]. The nonlinearity is the reason why the ideas advocated in highly-cited papers undergo viral propagation in the scientific community.
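The autocatalytic growth described above can be illustrated with a minimal simulation, a sketch only: the function names, the additive rate form (fitness plus a term proportional to accumulated citations), and all parameter values are illustrative assumptions, not the authors' calibrated model.

```python
import math
import random

def draw_poisson(lam, rng):
    # Knuth's inversion method; adequate for the small rates used here
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_citations(fitness, alpha, years, rng):
    """Toy autocatalytic (Hawkes-like) citation growth: each year the
    expected number of new citations is the paper's fitness plus
    alpha times the citations accumulated so far."""
    total, history = 0, []
    for _ in range(years):
        total += draw_poisson(fitness + alpha * total, rng)
        history.append(total)
    return history
```

With alpha > 0 the rate feeds back on itself, so papers with similar fitness can diverge sharply, which is the runaway ("immortal paper") behavior the abstract describes; setting alpha = 0 recovers a memoryless Poisson accumulation.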

We present a stochastic model of citation dynamics based on fitness and recursive search (triadic closure). The model has been validated against the measurements and does not include any ad hoc parameters. Interestingly, while we did not assume a preferential attachment mechanism, we demonstrate that this mechanism follows from our model. Moreover, the initial attachment traces its origin to the shape of the fitness distribution.

We demonstrate that the apparent power-law citation distribution is a consequence of the wide fitness distribution and nonlinear dynamics. We find that this distribution is non-stationary and, contrary to intuition, has an internal scale related to the nonlinearity. Thus, power-law citation distributions are not scale-free!

Our calibrated model can serve as a probabilistic predictive tool allowing forecasting of the future citation behavior of a paper or of a group of papers. We trace similarities between this model and the Bass model of diffusion of innovations, epidemiological models, and viral marketing.

1. S. Redner, “How popular is your paper? An empirical study of the citation distribution”, European Physical Journal B 4, 131 (1998).
2. M. Golosovsky and S. Solomon, “Growing complex network of citations of scientific papers: Modeling and measurements”, Physical Review E 95, 012324 (2017).
3. M. Golosovsky, “Power-law citation distributions are not scale-free”, Physical Review E 96, 032306 (2017).

A Nonlinear Model of the State as a Complex Adaptive System, Applied to Policy-makingUjjwall UppuluriTuesday, 18:00-20:00

Traditionally, Economics as taught at university and practised in academia revolves around the concept of equilibrium and its application to the study of social phenomena, and involves the study of scarcity. When applied to policy-making, the models used by economists have been constrained by assumptions such as rationality. These assumptions are then turned into causal models which are tested using regression analysis or other forms of statistical analysis.

This paper argues that this way of formulating models and evaluating policy choices is outdated. Recent advances in the field of complex systems science have led to the creation of interdisciplinary institutes which apply the principles of evolutionary biology and system dynamics (drawn from Physics) to the study of social phenomena. More specifically, Complex Systems Theory provides a framework policymakers can use to pursue strategies aimed at maximising productivity while minimising risks, creating an environment in which their constituents are employed in activities that pay salaries commensurate with the local cost of living.

A Complex System is defined as a system made up of a large number of constituent entities that interact with each other and with the environment. Such systems exhibit non-linear behaviour (even seemingly insignificant causes can snowball into significant effects) and are intrinsically difficult to model due to the dependencies, relationships, and interactions between their parts and between the system and its environment. This presentation argues that a state is, metaphorically, a dynamic living system: an organism made up of agents at the micro level who interact with each other over time and space. By characterising states as biological systems, one can account for heterogeneity and complexity, two factors that traditional economic models do not address. By treating the economy as a dynamic system, one that is ever changing and evolving, the paradigm allows one to think about systems as being in disequilibrium. How can such a system be modelled?

The answer lies in creating a cubic parametric function describing the system as one which exists on a surface, β_t(X_t)^3 − β_t(Y_t)^3 = Z_t. In this model, β_t refers to the rate of change of accumulation of intellectual capital, X refers to the effectiveness of a state at managing her resources, Y refers to the ability of a state to manage risks (externalities and exogenous shocks), and Z refers to employment adjusted for cost of living.

This model states that the dependent variable Z (employment) is directly affected by changes in the ability of a state/agent to manage the resources available to it and to mitigate risks. What drives changes in resource and risk management efficiency? This paper argues that the driver is the change in the rate of accumulation of intellectual capital within a state. Intellectual capital is defined by this paper as the sum of the accumulated knowledge in a society. Mathematically, it is defined as the anti-derivative of an index of dissimilarity.
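The cubic relation stated above is straightforward to evaluate; the snippet below is only an illustration of the algebra, with the function name and all numerical inputs being hypothetical.

```python
def employment_surface(beta, x, y):
    """Evaluate Z = beta * X^3 - beta * Y^3 (the paper's cubic relation),
    with beta the rate of intellectual-capital accumulation, X the state's
    resource-management effectiveness, and Y its risk-management ability."""
    return beta * x ** 3 - beta * y ** 3

# When resource management outpaces risk exposure (X > Y), Z is positive;
# when X == Y the two cubic terms cancel and Z is zero.
```

Note the built-in symmetry: scaling beta rescales Z linearly, while the cubic exponents make Z highly sensitive to small changes in X or Y, consistent with the non-linear snowballing the paper emphasises.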

Nonlinear Modeling of Acupuncture Treatment in Complex Medical IllnessLisa Conboy, Lisa Taylor-Swanson, Rumei Yang and Johnathan ButnerTuesday, 14:00-15:20

Gulf War Illness is a Complex Medical Illness characterized by multiple symptoms, including fatigue, sleep and mood disturbances, cognitive dysfunction, and musculoskeletal pain, affecting veterans of the first Gulf War. It is commonly seen with a highly individualistic presentation, associated with clusters of symptoms and co-morbid medical diagnoses. No standard-of-care treatment exists.
In 2013, our study team completed the Congressionally Directed Medical Research Program grant “The Effectiveness of Acupuncture in the Treatment of Gulf War Illness” (W81 XWH). The results of this Phase II Randomized Controlled Trial (n=104) support the use of acupuncture to treat the symptoms of Gulf War Illness. Veterans with diagnosed symptoms of Gulf War Illness were randomized to either six months of biweekly acupuncture treatments (group 1, n=52) or 2 months of waitlist followed by weekly acupuncture treatments (group 2, n=52). Measurements were taken at baseline, 2, 4 and 6 months. The primary outcome was the SF-36 physical component scale score (SF-36P) and the secondary outcome was the McGill Pain scale. A clinically and statistically significant average improvement of 9.4 points (p=0.03) in the SF-36P was observed for group 1 at month 6 compared to group 2, adjusting for baseline pain. The secondary outcome of the McGill pain index produced similar results; at 6 months, group 1 was estimated to experience a reduction of approximately 3.6 points (p=0.04) compared to group 2 (Conboy et al. 2016, PLoS).
Currently we are examining relationships among the secondary outcomes; measurements include validated instruments collecting multi-level information on baseline and study-related changes that may occur in the subjects’: 1) biology, such as markers of stress and inflammation; 2) psychosocial attitudes and functioning, such as perceived social support and improvements in social functioning; and 3) symptomatology, such as changes in presenting symptoms. Using multi-level modeling of the relationship between study dose and the main outcome, we have found that depression level acts as an attractor. Further, we found effects of SF-36 by dose; for the lower-dose group the coupling goes away. These and other results will be discussed in the context of Chinese Medicine Theory and the Complex Medical Illness literature.
This work was supported by the Office of the Assistant Secretary of Defense for Health Affairs through the Gulf War Illness Research Program under Award No. W81XWH-09-2-0064. Opinions, interpretations, conclusions and recommendations are those of the author and are not necessarily endorsed by the Department of Defense. ClinicalTrials.gov registration number NCT01305811.

On Individual Rationality, Social Welfare, and Complex Strategic Interactions: Regret Minimization and Competitive-Cooperative SpectrumPredrag TosicTuesday, 14:00-15:20

We are interested in complex strategic interactions between boundedly rational, self-interested agents, where those interactions provide implicit incentives for the agents to cooperate with each other. We model such interactions as 2-player strategic games that are "far from zero-sum". It has been observed for such games, from the now-classical work on the (Iterated) Prisoner's Dilemma in the 1980s to much more recent analyses of the (Iterated) Traveler's Dilemma, the Centipede Game and a few other formal 2-player strategic games, that classical game theory based on solution concepts such as Nash Equilibria fails to capture what, both intuitively and based on experiments with actual human subjects, appears to be the "most rational" behavior. Alternative concepts of "acting rationally" in such scenarios have been proposed in recent years, including "Regret Minimization" (Halpern and Pass, 2009). It has been argued that regret minimization is a more adequate solution concept for such strategic interactions.

We study an interesting 2-player game, the Generalized (Iterated) Traveler's Dilemma (Tosic and Dassler, 2011), and analyze which strategies tend to do well in it as a function of key parameters (the range of allowable bids, the bonus value and the minimum bid increment value), focusing on regret-minimizing strategies. We are interested in finding for which parameter choices regret minimization strategies do very well, and for which choices other adaptable (or even non-adaptable/oblivious) strategies do better. We undertake this analysis in the context of a (simulated) round-robin tournament involving a number of strategies previously studied in the context of the Iterated Traveler's Dilemma (ITD), as well as a handful of regret-minimization-based strategies. Our simulated tournament and the analysis of its results not only provide new insights into the challenging game of ITD itself, but also shed more light on the practical usefulness of regret minimization as a relatively new model of rational behavior in complex non-zero-sum games whose structures provide implicit incentives for agent cooperation.

On the influence of the personalization vector in PageRank centrality: Classic and Biplex casesMiguel Romance, Esther Garcia, Francisco Pedroche and Regino CriadoTuesday, 14:00-15:20

In this poster we compare the controllability of the PageRank algorithm using personalization vectors for the classic and the biplex models. The new biplex PageRank centrality is inspired by multiplex networks; it introduces a new centrality measure for classic complex networks and extends the usual PageRank algorithm to multiplex networks. Some analytical results about the controllability of the biplex model are presented, which are similar to the corresponding classic case, and some numerical tests are included showing that the biplex model is less controllable than classic PageRank. The controllability of centrality in multiplex networks is also considered in terms of personalization vectors.
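For readers unfamiliar with the role of the personalization vector in the classic case, the sketch below shows standard personalized PageRank by power iteration; it is a generic illustration only (the function name, adjacency-matrix input format, and dangling-node convention are our assumptions, not the poster's biplex construction).

```python
def personalized_pagerank(adj, v, alpha=0.85, iters=100):
    """Power iteration for classic PageRank with personalization vector v:
    p <- alpha * (link-following step) + (1 - alpha) * v,
    where adj is a 0/1 adjacency matrix (adj[i][j] = 1 for edge i -> j)."""
    n = len(adj)
    out_deg = [sum(row) for row in adj]
    p = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - alpha) * v[j] for j in range(n)]
        for i in range(n):
            if out_deg[i] == 0:
                # dangling node: redistribute its mass via v
                for j in range(n):
                    new[j] += alpha * p[i] * v[j]
            else:
                share = alpha * p[i] / out_deg[i]
                for j in range(n):
                    if adj[i][j]:
                        new[j] += share
        p = new
    return p
```

Concentrating v on a node boosts that node's score relative to the uniform-teleportation case, which is exactly the "controllability" knob the poster studies: how much the ranking can be steered by the choice of v.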

Ontological determinism, non-locality, quantum equilibrium and post-quantum mechanicsMaurice Passman, Philip Fellman, Jonathan Post and Avishal PassmanMonday, 15:40-17:00

In this paper, we extend our previous discussion on ontological determinism, non-locality and quantum mechanics to that of a post-quantum mechanics (PQM) perspective. We examine the nature of quantum equilibrium/non-equilibrium and uncertainty to extend the statistical linear unitary quantum mechanics for closed systems to a locally-retrocausal, non-statistical, non-linear, non-unitary theory for open systems. We discuss how the Bohmian quantum potential has a dependence upon the position of its Bell ‘beable’ and how Complexity mathematics describes the self-organising feedback between the quantum potential and its beable allowing nonlocal communication.

Open Innovation in the Public SectorLeonardo Ferreira de Oliveira and Carlos D. Santos Jr.Monday, 14:00-15:20

Innovation in the Public Sector has been seen as a way to respond to the current challenges posed to society. However, the literature shows that little attention has been given to linking public sector innovation to existing theories based on systems and complexity. This essay aims to help fill this gap by developing a theoretical framework focused on the relation between organizational capabilities and public value generation in a systemic context. We argue that the concept of Dynamic Capabilities (Sensing, Seizing and Transforming) is well suited to analyzing public sector innovation, especially open innovation in government. In this context, the development of these organizational capabilities contributes to public value creation and can be analyzed through System Dynamics. We further argue that these theoretical fields can be joined to elucidate the complex and non-linear relationships involved in open innovation in the public sector, making it possible for scholars and policymakers to understand aspects of the systemic environment that can increase or decrease the creation of public value for providers, users, and beneficiaries.

Opinion Dynamics on Networks under Correlated Disordered External PerturbationsMarlon Ramos, Marcus Aguiar and Dan BrahaTuesday, 14:00-15:20

We study an influence network of voters subjected to correlated disordered external perturbations, and solve the dynamical equations exactly for fully connected networks. The model has a critical phase transition between disordered unimodal and ordered bimodal distribution states, characterized by an increase in the vote-share variability of the equilibrium distributions. The random heterogeneities in the external perturbations are shown to affect the critical behavior of the network relative to networks without disorder. The size of the shift in the critical behavior essentially depends on the total fluctuation of the external influence disorder. Furthermore, the external perturbation disorder also has the surprising effect of amplifying the expected support of an already biased opinion. We show analytically that the vote-share variability is directly related to the external influence fluctuations. We extend our analysis by considering a fat-tailed multivariate lognormal disorder, and present numerical simulations that confirm our analytical results. Simulations for other network topologies demonstrate the generalizability of our findings. Understanding the dynamic response of complex systems to disordered external perturbations could account for a wide variety of networked systems, from social networks and financial markets to amorphous magnetic spins and population genetics.
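While the abstract's results are exact and analytical, the qualitative effect of a biased external perturbation can be seen in a toy simulation; everything below (update rule, parameter names, values) is an illustrative assumption on a fully connected network, not the authors' disordered model.

```python
import random

def voter_with_external_field(n, steps, p_ext, bias, rng):
    """Minimal sketch of a fully connected voter model with an external
    influence: at each update a random agent either adopts opinion +1 with
    probability `bias` under the external perturbation (prob p_ext), or
    copies a uniformly chosen agent (prob 1 - p_ext).
    Returns the final mean opinion in [-1, 1]."""
    state = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        if rng.random() < p_ext:
            state[i] = 1 if rng.random() < bias else -1
        else:
            state[i] = state[rng.randrange(n)]
    return sum(state) / n
```

Even a modest external bias pulls the equilibrium opinion distribution toward the favored side, a toy analogue of the amplification effect described above; introducing heterogeneous (disordered) bias values per agent would be the next step toward the model in the abstract.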

Opinion leaders on social media: A multilayer approachJavier Borondo, Alfredo Morales, Juan Carlos Losada and Rosa M. BenitoWednesday, 15:40-17:00

Twitter is a social media outlet where users are able to interact in three different ways: following, mentioning, or retweeting. Accordingly, one can define Twitter as a multilayer social network where each layer represents one of the three interaction mechanisms. We analyzed user behavior on Twitter during politically motivated events: the 2010 Venezuelan protests, the death of a Venezuelan president, and the Spanish general elections. We found that the structure of the follower layer conditions the structure of the retweet layer. A low number of followers constrains the effectiveness of users in propagating information. Politicians dominate the structure of the mention layer and shape large communities of regular users, while traditional media accounts were the sources from which people retweeted information. Such behavior is manifested in the collapsed directed multiplex network, which does not present a rich-club ordering. However, when considering reciprocal interactions the rich-club ordering emerges, as elite accounts preferentially interacted among themselves and largely ignored the crowd. We explored the relationship between the community structures of the three layers. At the follower level, users cluster in large and dense communities containing several hubs, which break into smaller and more segregated ones in the mention and retweet layers. We also found clusters of highly polarized users in the retweet networks. We analyze this behavior by proposing a model to estimate the propagation of opinions on social networks, which we apply to measure polarization in political conversations. Hence, we argue that to fully understand Twitter we have to analyze it as a multilayer social network, evaluating the three types of interactions.
J Borondo, AJ Morales, RM Benito, JC Losada, Multiple leaders on a multilayer social media, 2015, Chaos, Solitons & Fractals 72, 90-98.
AJ Morales, J Borondo, JC Losada, RM Benito, Measuring political polarization: Twitter shows the two sides of Venezuela, 2015, Chaos: An Interdisciplinary Journal of Nonlinear Science 25 (3), 03311
Rosa M. Benito orcid.org/0000-0003-3949-8232

Optimal deployment of resources for maximizing impact in spreading processesAndrey LokhovTuesday, 15:40-17:00

The effective use of limited resources for controlling spreading processes on networks is of prime significance in diverse contexts, ranging from the identification of “influential spreaders” for maximizing information dissemination and targeted interventions in regulatory networks, to the development of mitigation policies for infectious diseases and financial contagion in economic systems. Solutions for these optimization tasks that are based purely on topological arguments are not fully satisfactory; in realistic settings, the problem is often characterized by heterogeneous interactions and requires interventions in a dynamic fashion over a finite time window via a restricted set of controllable nodes. The optimal distribution of available resources hence results from an interplay between network topology and spreading dynamics. In this contribution, we introduce a new probabilistic targeting formulation which incorporates the dynamics and encompasses previously considered optimization problems. We show how the resulting set of problems can be addressed as particular instances of a universal analytical framework based on two ingredients: scalable dynamic message-passing equations which allow for an efficient linear-time computation of marginal probabilities, and forward-backward propagation, a gradient-free optimization method inspired by the techniques used in artificial neural networks and implemented on top of the constrained message-passing scheme, scaling to networks with millions of nodes. We demonstrate the efficacy of the method on very large synthetic graphs, as well as on a variety of real-world examples and models, including Susceptible-Infected-Recovered and Independent Cascade type models.

Based on:
[1] Andrey Y. Lokhov and David Saad, "Optimal deployment of resources for maximizing impact in spreading processes", Proceedings of the National Academy of Sciences, 114 (39) E8138-E8146 (2017)
[2] Andrey Y. Lokhov and David Saad, "Scalable Influence Estimation without Sampling", submitted to KDD'2018.
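The Independent Cascade model mentioned in the abstract can be sketched in a few lines; this is a plain Monte Carlo illustration of the spreading model itself (function names and the uniform activation probability are our assumptions), not the authors' message-passing framework, which avoids sampling entirely.

```python
import random

def independent_cascade(adj, seeds, p, rng):
    """One run of the Independent Cascade model: each newly activated node
    gets a single chance to activate each inactive neighbour with prob p.
    adj maps each node to a list of its out-neighbours."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def expected_spread(adj, seeds, p, runs, rng):
    """Monte Carlo estimate of the expected number of activated nodes."""
    return sum(len(independent_cascade(adj, seeds, p, rng))
               for _ in range(runs)) / runs
```

Estimating `expected_spread` by sampling is exactly what becomes expensive at scale; the linear-time marginal computation via dynamic message passing in [1, 2] is the authors' sampling-free alternative.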

Optimality Proof For Community Structure In Complex NetworksStanislav Sobolevsky, Alexander Belyi and Carlo RattiTuesday, 15:40-17:00

Community detection is one of the pivotal tools for discovering the structure of complex networks. The majority of community detection methods rely on optimization of certain quality functions characterizing the proposed community structure. Perhaps the most commonly used of these quality functions is modularity [Newman, 2006]. Many heuristics are claimed to be efficient at modularity maximization, which is usually justified in relative terms by comparing their outcomes with those of other known algorithms (a comprehensive review can be found in [Fortunato, 2010]; one recent efficient algorithm was proposed in [Sobolevsky et al, 2014]). However, since all these approaches are heuristics and complete brute force is not feasible, there is no straightforward way to know whether the obtained partitioning is truly optimal.
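The quality function in question is Newman's modularity, Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j); a direct (O(n²), unoptimized) evaluation is sketched below, with the function name and adjacency-matrix input format being our illustrative choices.

```python
def modularity(adj, communities):
    """Newman modularity Q for an undirected graph given as a symmetric
    0/1 adjacency matrix and a partition given as lists of node indices."""
    n = len(adj)
    two_m = sum(sum(row) for row in adj)   # 2m = total degree
    deg = [sum(row) for row in adj]
    label = {}
    for c, nodes in enumerate(communities):
        for v in nodes:
            label[v] = c
    q = 0.0
    for i in range(n):
        for j in range(n):
            if label[i] == label[j]:
                q += adj[i][j] - deg[i] * deg[j] / two_m
    return q / two_m
```

Maximizing this score over all partitions is the NP-hard problem the heuristics attack; the framework in this abstract certifies a heuristic's answer by bounding the maximum from above instead.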

In this article we address the modularity maximization problem from the other side: finding an upper-bound estimate for the possible modularity values within a given network. Whenever the constructed upper-bound estimate matches the achieved modularity score of a known partitioning, this proves that the suggested community structure is indeed optimal (we call networks with a proven optimal partitioning resolved).

One known approach, suggested in [Agarwal&Kempe, 2008] and further developed in [Miyauchi&Miyamoto, 2013], constructs an upper-bound modularity estimate through a linear programming relaxation. However, unless the optimal solution of the relaxation problem provides a valid optimal partitioning (which, with rare exceptions, is usually not the case), such an estimate does not really help with resolving the network.

We propose a widely flexible algorithmic framework for building upper-bound modularity estimates by combining proven estimates for smaller subnetworks. For small subnetworks, such a proof can be achieved using the proposed partial brute-force algorithm or a linear programming relaxation as above. For others, the proof can leverage the existence of specific edge structures, like chains with all edge modularity scores positive but one. The algorithm seeks the decomposition of the original network into a set of smaller subnetworks with proven upper-bound estimates that yields the smallest possible overall estimate.

The proposed framework is applied to a set of well-known classical and synthetic networks, proving the optimality of the best known partitionings constructed by the Combo algorithm [Sobolevsky et al, 2014] for many networks, including the well-known Zachary Karate Club.

Organizational Behavior and Human Decision-Making under Risk and Uncertainty in an Incentivized Knowledge Creation and Diffusion GameJessica Gu and Yu ChenTuesday, 18:00-20:00

Many studies have examined knowledge creation and diffusion with incentives; however, none has yet bridged short-term effort to long-term effect. Besides monetary costs and benefits, bottom-up emergent social norms and their impacts on organizational performance are hardly mentioned. Additionally, the risks and uncertainty involved in KM decision-making in daily operations are rarely discussed. Our study aims to elucidate the complex causality between microscopic human decision-making under risk and uncertainty and macroscopic emergent social norms and organizational behavior through strategic and dynamic social interactions in an incentivized knowledge creation and diffusion game. We first designed an organizational KM model with induced monetary incentives that include: high risk, high return for independent effort on innovation (creating new knowledge); low risk, low return for dependent effort on imitation (acquiring shared knowledge); and a knowledge bonus which is contributed by collective cooperation and shared based on individual knowledge uniqueness. Since risks and uncertainty are incorporated, agents have to exercise bounded rationality, reason upon equal expected utilities, and form strategies when facing two dilemmas: risk seeking vs. loss aversion, and competition vs. cooperation. Second, we developed gaming software and implemented the KM model in behavioral experiments to trace the endogenously evolving choices of agents under exogenous policies, capture the dynamic interactions, and observe emergent properties at the macroscopic level through iterations. On top of a baseline setting, three additional treatments with different intervention designs, namely one with a big monetary bonus, one with a reputation-based social incentive, and one with human-resource diversity, were carried out and compared against the baseline.
Preliminary results display different individual behaviors and intangible-tangible interplays, exhibit emergent social structures and norms, and reveal various characteristics of each intervention policy. With the empirical evidence obtained for verification, the proposed KM incentive system demonstrated fairness, practicability, and effectiveness. Third, we implemented the KM model in an agent-based simulation, in which the effects of micromotives on macrobehaviors are explored in further depth.

Organizational climate, knowledge sharing; the importance of understanding the social mediaElnaz DarioMonday, 14:00-15:20

Nowadays, people with knowledge are incredibly valuable because the use of collective expertise helps them become more innovative. Hence, employees are not just mechanisms that work in industry, expecting to be assigned a task; they seek knowledge to improve themselves. In many organizations, social media can be seen as a distraction or as a platform to enhance intra-organizational knowledge sharing. We theorize that the perception of organizational climate, which includes policies, practices and members, influences employee engagement in using social media to share knowledge. This paper aims to answer the following research question: “How can organizational climate affect employee knowledge sharing through social media platforms?”

PEG [Plastic Earth Game]: Ocean Plastic as a Complex SystemZann GillWednesday, 15:40-17:00

earthDECKS [DECKS – Distributed, Evolving, Collaborative, Knowledge System], under fiscal sponsorship of The Ocean Foundation, is developing PEG [Plastic Earth Game], an app addressing ocean plastic as a complex systems problem. Our app launches for beta testing in September 2018. The potential to pilot test in the California school system led us to ask whether the target audience could be broadened to include adults, and how the complex systems community might engage with this problem.

PEG is inspired by Buckminster Fuller’s concept for “World Game” (1961), a learning game addressing systemic environmental challenges. World Game preceded the global internet, distributed computing, social networks, and token offerings, and was fifty years ahead of the burgeoning “deep game movement,” in which players score points for empathy and creative problem-solving rather than for killing other players. In the pre-Internet era, when massive multiplayer online games did not exist, several hundred players assembled in university gymnasia to play World Game. A huge Dymaxion map was laid out on the floor as the gameboard, and a day-long improvisational theater experiment in collaborative problem-solving unfolded. All players were given hard-copy manuals, instructions and assigned roles, either as officials or as citizens of countries (with the number of citizen gamers proportional to each country’s actual population).

earthDECKS’ flagship media experience, PLASTIC, tackles ocean plastic, a problem widely acknowledged and documented. Participants join the experience via a media portal that captures the unique profile of each user entering the system and, based on that profile, matches each user to resources, challenges, and other users with complementary interests. Unlike collective intelligence, where performers are anonymous, in PEG’s collaborative intelligence contributors have identifying signatures (their original profile in the system) and footprints. Their profiles evolve based on the actions of agents in the system. DECKS in earthDECKS are clusters of “infocards” that players use to decide how to act. Each player receives an earthDECK customized for that user’s profile. The alternative currency movement offers a decentralized infrastructure for collaborative intelligence dApps and for reviving the concept of World Game as more than a game: a global problem-solving tool.

In its simplest implementation, PEG [Plastic Earth Game] is a new type of MOOC (Massive Open Online Course) offering a unique way for students to navigate their own learning paths. After watching the introductory film PLASTIC and clicking on topics of interest, they take a CIQ (Collaborative IQ) quiz to see their customized earthDECK. Although they define their profiles at initial signup, their choices in the system expand their profiles. Based on these profiles, PEG recommends stories.

In the first iteration of PEG for application in schools, every Story Contributor has the role of “Reporter” in an environmental news agency network. Every story reader is a “News Analyst,” learning critical thinking skills by writing comments about the story. Players earn tokens by reading and writing stories, which receive critique and ratings from other students and teachers (determining token awards). The best stories are posted and become part of PEG’s growing story collection. The backend analytics allow us to observe emergent behaviors.

Personalized Workload Assignment in Software Development: A Two-Level Hybrid Collaborative FilteringNan Wang and Evangelos KatsamakasTuesday, 18:00-20:00

People analytics is gaining popularity because it is expected to eliminate biases that exist in all sorts of people-related issues, including recruitment and performance evaluation, promotion and compensation, as well as talent assessment and development. We propose applying the technique of recommender systems to optimize talent usage by personalizing workload assignment. In the paper, we demonstrate the feasibility of this approach in a software development environment.

We introduce a two-level hybrid (2LH) approach to build the recommender system. We empirically validate its predictive accuracy and recommendation effectiveness even on sparse data. Major merits of this approach include scalability, flexibility and extensibility.

Lastly, we discuss limitations of 2LH and its future extensions as realistic applications to drive business value.

Phase Transitions and Criticality in Biological Networks: Implications for Genes and NeuronsMichelle GirvanMonday, 9:50-10:30

Experimental evidence suggests that, in order to maximize performance, biological networks often operate near the brink of failure. Because of the connections between such "tipping points" and the critical points of second order phase transitions, the methods of statistical and nonlinear physics are useful for studying these systems. My research in this area explores phase transitions and critical dynamics in both networks of genes and networks of neurons. Modeling phase transitions in gene regulatory networks has led us to propose a general mechanism underlying some cancers. Modeling phase transitions in neuronal networks has allowed us to identify features of the brain's wiring that are key for optimal information processing. For both networks of genes and networks of neurons, studying how evolution shapes the path to criticality gives us insights into robustness and fragility in these systems.

The Phenotype-Genotype-Phenotype Map: The Role of Signal Processing in Evolutionary TheoryNayely Velez-Cruz and Manfred LaubichlerWednesday, 15:40-17:00

Here we introduce a robust mathematical and data analytic framework for a mechanistic explanation of phenotypic evolution that is conceptually rooted in developmental evolution theory. We respond to the lack of evolutionary models that integrate multiple simultaneously-occurring mechanisms of inheritance with developmental mechanisms in order to explain the origins of evolutionary novelty. We explore a re-conceptualization and an associated mathematical formalism of the Phenotype-Genotype-Phenotype (PGP) Map, which is based on Laubichler & Renn’s framework for extended evolution. Conceptually, rather than beginning with the genotype, as is the case with the genotype-phenotype map, we instead begin with a phenotype—an agent in Laubichler and Renn’s extended regulatory network model. A phenotype can be a single trait, a complex of traits, an organism, or a system at any scale. The phenotype is then “decomposed” into a unit of inheritance (genotype, or “features”) which passes the generational divide and is then “reconstructed” via developmental processes. Examples of features include, but are not limited to, gene regulatory network motifs, specific interactions between molecular agents (e.g. transcription factor modules), developmental mechanisms, epigenetic interactions, and of course, an organism’s genotype. This abstraction avoids later post-hoc assumptions about the genotype-phenotype map in exchange for a model of phenotypic evolution that places the explanatory power in the processes of inheritance and development. The PGP Map framework is thus capable of uniting the proximate/mechanistic explanation with the evolutionary explanation by providing a mechanistic explanation of phenotypic evolution.
To accomplish this, we have developed a mathematical and associated computational framework for the PGP Map based on digital signal processing (DSP) and wavelet analysis, as it ensures that the conceptual framework, mathematics, and computational implementation are as identical in structure and logic as possible. The framework integrates concepts and methods from wavelet theory, machine vision, and graph theory and is thus a flexible tool that facilitates the conceptual interpretation and multi-scale modeling of known phenomena of phenotypic evolution (e.g. multiple mechanisms of inheritance, gene regulatory network dynamics, among others). The PGP Map is implemented in TensorFlow, a machine learning interface used for data analysis via custom designed computational graphs. This makes the PGP Map amenable to empirical test by allowing for the integration of multiple types of biological data, such as single-cell genomics and epigenomics data, gene expression data, and/or phenotype-environment interaction data, to list a few.

Physician Distribution in a Network Between Medical Schools and HospitalsPanagiotis Karampourniotis, Yoonyoung Park, Issa Sylla and Amar DasTuesday, 15:40-17:00

Introduction
The distribution of healthcare resources in the US is not homogeneous across the country. The number of physicians and the availability of specialists are known to vary between states. Understanding the location preferences of physicians, or how physicians move from their original training site to their current workplace, is of great interest for those who want to guide related policy [1]. The movement of physicians from their alma mater to their currently affiliated hospitals forms a natural bipartite network. Using publicly available data, we examined the mobility pattern of physicians participating in Medicare using network analysis.
Data and Results
The main data source we used in this study is the ‘Physician Compare National Downloadable File’, a public dataset from data.medicare.gov that contains information about medical doctors (MDs) who provide care for Medicare patients, including the names of their medical schools and affiliated hospitals [2]. We also used other public resources including the US News Ranking of medical schools in the US. We restricted our study to the 101,000 physicians (out of the 208,097) who graduated from one of the top 175 ranked US schools and are affiliated with the 3462 hospitals that had performance rate information.
Using this information, we constructed an aggregated, weighted, directed, bipartite graph with medical schools and hospitals as nodes. The number of physicians E_ij who graduated from school i and are currently affiliated with hospital j represents the weight of that pair. On the school side, the average out-degree is 290.54, with the average number of MDs per school being <N> = 577.14. On the hospital side, the average in-degree is 14.68, with the average number of affiliated MDs being <M> = 29.17. The number of affiliated physicians and the degree distribution of the hospital nodes follow a power law with γ = 1.79 and γ = 1.68, respectively. To study the impact of distance on the work location preferences of physicians, we applied the Gravity model, given by E′_ij = N_i M_j f(r_ij), where E′_ij, N_i, and M_j are the number of expected transfers, the total number of MDs of school i, and the total number of MDs of hospital j, respectively, and f(r) is the function of distance r to parameterize. We applied maximum likelihood estimation to an exponential (a e^(−n r)) and a power-law (b r^(−γ)) function of distance. The best exponential fit is a = 0.00019 and n = 0.00067, while the best power-law fit is b = 0.000286 and γ = 0.136. The Gravity model fails to capture the variance in the weights of the pairs. Our next steps include applying the Radiation model in a county-level analysis of physician mobility, as well as incorporating the school ranking and hospital rating information into our analysis.
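A Gravity model fit of this kind can be sketched in a few lines; the toy data, Poisson likelihood, and coarse parameter grid below are illustrative assumptions, not the authors' actual data or estimation pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real data: school sizes N_i, hospital sizes M_j,
# pairwise distances r_ij, and observed transfer counts E_ij.
n_schools, n_hospitals = 5, 8
N = rng.integers(100, 1000, n_schools)               # MDs per school
M = rng.integers(10, 60, n_hospitals)                # MDs per hospital
r = rng.uniform(1, 3000, (n_schools, n_hospitals))   # distances
E = rng.poisson(0.0002 * np.outer(N, M) * np.exp(-0.0007 * r))

def gravity_expected(params, kernel):
    """E'_ij = N_i * M_j * f(r_ij) for a chosen distance kernel f."""
    return np.outer(N, M) * kernel(r, *params)

def neg_log_likelihood(params, kernel):
    """Poisson negative log-likelihood of observed counts E under the model."""
    lam = np.clip(gravity_expected(params, kernel), 1e-12, None)
    return np.sum(lam - E * np.log(lam))

exp_kernel = lambda r, a, n: a * np.exp(-n * r)      # a * e^{-n r}
pow_kernel = lambda r, b, g: b * r ** (-g)           # b * r^{-gamma}

# Crude grid search in place of a proper MLE optimizer.
best = min(((a, n) for a in [1e-4, 2e-4, 3e-4] for n in [5e-4, 7e-4, 1e-3]),
           key=lambda p: neg_log_likelihood(p, exp_kernel))
print("best exponential params (a, n):", best)
```

In practice one would replace the grid search with a continuous optimizer and fit both kernels to the real transfer matrix.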
References
1. Seifer, S.D., Vranizan, K. and Grumbach, K., 1995. Graduate medical education and physician practice location. Jama, 274(9), pp.685-691.

Portuguese Venture Capital Ecosystem – Visualising actors and investment flows across timeDiana FernandesTuesday, 18:00-20:00

The aim is to present the Portuguese venture capital ecosystem over time, using national ecosystem data to understand the investment type under analysis.

The usual reference is the paper by Kaplan and Stromberg (2000), “Financial Contracting Theory Meets the Real World: An Empirical Analysis of Venture Capital Contracts”. The authors analyse 200 venture capital investments in 118 target companies by 14 investment firms. The data source was the investment firms themselves.

Similarly, when designing cities, houses or infrastructure, one does not use weather data from places with considerably different weather and population patterns (e.g. North Pole data to build a city in Southeast Asia). Still, in this case, we keep assuming that the empirical evidence from different systems will fit our own.

Many studies carried out on this type of investment in Portugal usually choose to analyse a given particular case, or take data on all investment in venture capital and private equity. Secondly, most perform their analysis on annual reports rather than over an extended period.

It is therefore important to characterise the ecosystem as well as the actors and how they interact with one another.

First, there was no official systematised data on the types of contracts carried out, or the instruments used, since the information is scarce and contradictory (apart from some references in case studies). Several data sources were used (seeking reliable ones) to build a report on the ecosystem of this type of investment.

This type of event can also be observed through other external signs, namely through mandatory publications (i.e. the validity of certain events depends on their externalisation, e.g. publication in the companies' registry or the annual reports of VC funds).

Instead of relying on individual cases, data were retrieved, collected, and treated to present a reasonable sample.

The ecosystem was analysed by mapping the agents (investors and promoters) as well as the contracts (investments) between them over time. For the system under analysis, the theory of complex systems takes as its object the connections between several objects, usually dynamic ones. One advantage of this method is the possibility of easily visualising the connections and interdependence of the elements in a given system, such as the evolution and dynamics of the network.

Characterisation of the Ecosystem structure was done by:

1.1 The nodes (agents) were classified by type:

i. Targeted companies (start-ups);
ii. Venture Capital Funds;
iii. Limited Partners (LPs)/SCRs;
iv. Limited Partners (LPs)/SCRs - Outside PT;
v. SPVs;
vi. Corporate;
vii. Universities;
viii. Banks;
ix. Other.

1.2 They were also classified by:

i. NACE;
ii. Country of origin;

2.1 The edges (directed) were also classified by type of connection:

i. Investment;
ii. VC Fundraising;
iii. Management entity (or controlling entity);

The present work is limited to venture capital investment operations, namely through venture capital funds, with national companies as the target companies.

Possible Evolutionary Network Structure Scenarios of the Dark Web and Corresponding Policy ImplicationsLiz Johnson and Jacob StevensTuesday, 15:40-17:00

The Dark Web is largely an unpoliced part of the Internet and a means to proliferate ideas anonymously. The Dark Web is evolving in a way that offers even more secrecy and power. Consequently, this domain of the Internet is seeing expanding utilization by criminals, law enforcement, military, intelligence communities, countries, and individuals who are just curious. A network analysis and an agent-based model will be developed to capture types of Dark Web network structures and encryption thresholds, how they could impact the balance of power between agents in commerce, intelligence communities, political activism, and government, and what the policy considerations are. First, network structures will be graphed and analyzed to determine the potential strength of connections and the range of possibilities of evolutionary growth, from quantum computing, to a Dark Lord takeover, to dissolution. Next, an ABM will take into account the network structures and the role of encryption in the rate of change of system dynamics and the type of structural changes in the networks. This is an exploratory work in progress.

A Practical Approach in the Management of Dynamic Complex SystemsCaryl Johnson, Trevor Gionet and Kay AikinTuesday, 18:00-20:00

Mankind, as never before, is faced with the challenges of effective decision making in the face of complex systems that evolve even as those decisions are being made. In many if not most spheres of human endeavor, at some point a limit is reached where the unintended consequences negate the positive outcomes to the extent that forward progress becomes impossible. Examples are easy to find in fields as diverse as economics, politics, environment, corporate management, and investment decisions, to name just a few. While the concept of managing dynamic complexity is in itself a bit of an oxymoron, we have developed a self-evolving (AI-based) dynamical framework that enables more effective decision making in the face of dynamic complexity. The solution presented here is called ‘xGraph’: an executable graph framework that presents a homoiconic approach to representing and, to some extent, ‘taming’ dynamically complex situations. This approach allows both simulation and control that is biomimetically similar to the balance of collaboration and competition in a natural ecosystem. To date this self-aware and reflective technology has been applied in the fields of global seismology, bioinformatics, swarms of autonomous entities, strategic gaming and the creation of a modern fractal architecture for the national electrical grid.

Precision Systems Genomics and Epigenetics for Optimal Health and Human PerformanceMickra Hamilton and Daniel SticklerTuesday, 15:40-17:00

A precision, whole systems genomics approach to thriving health and wellbeing has enormous clinical applications in the emerging field of environmental epigenetics research. We can now look at all aspects of an individual’s life, their medical and family history, occupation, their lifestyle, the environments they function in, individual systems diagnostics and genetics along with real time markers from sensor and mobile data to provide precise lifestyle interventions to optimize and enhance gene expression. This new precision offers high specificity on health, tracks how individual choices affect health now and how that translates to the future. It also provides new insights about how we are interacting with our environment, in real time and in detail. The interplay of our genes and our experiences, of nature and how it interacts with nurture, has now moved from the mysterious to the knowable.

The science of epigenetics assists us in creating precise optimization strategies by taking the reins of gene expression to adapt and thrive under modern environmental pressures. Every decision we make contributes to this process in some way. The food we eat, the quality of sleep we experience, the cars we drive, the products we clean with and put on our skin, the thoughts we think, the levels of stress we carry and the chemicals and medications we dump into our water supply all have an effect.

This discussion will detail the evidence-based use of precision systems epigenetics and genomics as strategies to mitigate, optimize and enhance the effects of the modern environment on the human system. Additionally, we will discuss actionable lifestyle modifications and system support processes to fine-tune and enhance our human experience as we interact with our environment.

Predictability vs. Efficiency of Boolean Networks and Graph AutomataPredrag TosicWednesday, 15:40-17:00

Understanding complex dynamic systems such as large-scale networks of physical, computational, biological or social agents, their collective dynamics and emerging behavior is a multi-faceted endeavor. On one hand, one can investigate how (un)predictable are various properties of such systems and their dynamics. In the “long run”, a network of agents may end up in a stationary state of some sort, or exhibit cyclic behavior, or it may fail to “converge” in any meaningful sense of that term altogether. In some instances, determining the ultimate destiny of a complex network, if some of the local nodes ("agents") are capable of sufficiently complex behaviors, may turn out to be formally undecidable. In other situations, only probabilistic statements about the long-term behavior can be made. When each agent acts in a simple deterministic manner (specifically, as a deterministic finite-state automaton), and the interaction pattern among the agents in the network (that is, the underlying graph structure) is “fixed”, answering any questions about the asymptotic dynamics is certainly decidable – since the phase spaces of such systems are finite and therefore can be exhaustively searched. However, quite often, answering those questions about asymptotic or other aspects of discrete dynamics turns out to be computationally prohibitively costly for all but the rather small and/or rather structurally simple networks of interacting agents. Thus, for the deterministic systems with finite phase spaces, a reasonable computational measure of the network dynamics’ predictability is whether questions about the underlying systems’ asymptotic dynamics, cyclical behavior and other phase space properties can be answered within reasonable computational resources as a function of the network size.

On the other hand, consider the fundamental issue of efficiency of dynamic networks and other dynamical systems. We approach this fundamental issue in complexity science from a computational standpoint, as well. We propose the notion that a complex network of interacting agents is efficient, if it reaches its ultimate destiny (whatever that destiny may be) relatively fast – that is, if it converges quickly. There are well-known classical notions of (fast) convergence in e.g. ergodic theory and statistical physics; we are however interested in investigating quick vs. not-so-quick convergence in purely deterministic systems, such as (finite) Boolean Networks, Cellular and Graph Automata. Since the dynamics of those systems is deterministic and the underlying phase spaces are finite, eventual convergence to either a stable state (“fixed point”) or a finite temporal cycle is guaranteed. The interesting question, then, is when is this convergence fast (as a function of the total number of agents) vs. when it takes “too long”. In the spirit of the fundamental theory of computing, we posit that within that framework, those discrete dynamical systems that converge in the number of steps that is linear in the number of agents are very efficient, and those whose convergence takes polynomial time are “efficient enough”. In contrast, those systems that take exponential time to converge to a stable state or a temporal cycle, can be reasonably argued to be computationally inefficient.
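The notion of convergence time used here can be made concrete with a toy synchronous Boolean network: iterate the deterministic global map from a start state until a state repeats, and count the steps. The random network construction and sizes below are illustrative choices, not taken from the talk:

```python
import random

def random_boolean_network(n, k=2, seed=1):
    """Each node reads k random inputs through a random Boolean function."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous deterministic update of all nodes."""
    return tuple(tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
                 for i in range(len(state)))

def convergence_time(state, inputs, tables):
    """Steps until the trajectory first revisits a state (transient + cycle)."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, tables)
        t += 1
    return t, t - seen[state]   # (first repeat time, cycle length)

inputs, tables = random_boolean_network(8)
t, cycle = convergence_time((0,) * 8, inputs, tables)
print(f"reached a repeat after {t} steps; cycle length {cycle}")
```

Because the phase space of n Boolean nodes has 2^n states, a repeat is guaranteed within 2^n steps; the question posed in the abstract is whether that repeat typically arrives in time linear or polynomial in n, or only after exponentially many steps.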

After setting the stage with respect to the proposed computational notions of (un)predictability and (in)efficiency of complex networks and other discrete dynamical systems, we review results obtained by us (and others) on the dynamics of several classes of Boolean Networks, finite Cellular Automata, and Discrete Hopfield Networks. We demonstrate that predictability and efficiency of such dynamical systems may, but need not, come “hand in hand”. In particular, we show some classes of Boolean Networks and Graph Automata that are very efficient in the sense discussed above, yet in general still exhibit unpredictable dynamics: their asymptotic dynamics cannot be predicted within reasonable computational resources, under the usual assumptions from computational complexity theory. One implication is that the properties of predictability and efficiency of dynamical systems cannot even remotely be treated as synonymous, in the sense that one necessarily implies the other. Consequently, to truly understand the behavior of complex networks, one needs to investigate both the (un)predictability and the (in)efficiency of that network’s dynamics.

Predictive Patterns among Microorganisms: Data Sciences for Screening Smart Bacteria for Microbial Fuel Cells, Methanogenesis and Spider Silk ProteinsCharles Zhou and Shuo HanMonday, 15:40-17:00

Microorganisms are one of the most prolific organisms on Earth. Harnessing the power of “smart” microorganisms for energy generation and a host of other life-improving applications has become more and more crucial to a sustainable world.
“Smart” microorganisms are aptly named for their extraordinary ability to generate energy and materials like electricity, hydrogen, methane and proteins from organic sources. Enhancing how we use the unique capabilities of these microbes is important. Unlocking predictive patterns between a microorganism's genetic fingerprint and its possible “smart” metabolic capabilities opens the door to improving the interpretation of information in compiled databases of existing research, which could lead to revolutionary new screening methods. For scientists, it means that microbe analysis for specific bioenergy and biotechnological uses becomes more efficient. A machine learning method, Computer-Assisted Strain Construction and Development Engineering (or CASCADE), is the basis for the technology, focused on predicting certain known properties. This big data methodology was able to uncover predictive relationships between a microorganism's genetic information and its metabolic behavior.
We applied metabolic pathways from public databases, such as the Kyoto Encyclopedia of Genes and Genomes (KEGG) and investigated metabolic reconstructions for the organisms with only genomic information. Our selection included 327 bacteria in 13 groups.
We applied the big data biology technology to microorganism populations using a defined measure that we termed average metabolic efficiency (AME), a gauge that correlates highly with metabolic capabilities observed in real life. This measurement allowed us to explore electrogenicity for improving microbial fuel cells (MFC), methanogenicity for methane generation in anaerobic digesters, and protein production for spider silk.
One notable result in our work occurred with methane experiments. Here, CASCADE-selected microorganisms were not only consistent (5/7 overlap) with current scientific selection, but also allowed the prediction of two additional microorganisms not previously selected by conventional methods.
This big data technology promises to help researchers find a cocktail of mixed microorganisms that could work more efficiently and more powerfully than a single microorganism. Data science research in predictive metabolomics and computational biology has the potential to speed up the discovery and prediction process and lower the expense of lab work and experimental trials.
Microbes selected for anaerobic digesters were cultured in our lab and tested in both China and the USA, systematically generating 25%-40% more methane from either human or animal wastes.

Priority Challenges for Social-Behavioral ModelingPaul Davis and Angela O'MahonyTuesday, 18:00-20:00

Modeling and simulation, if well rooted in social-behavioral (SB) science, can inform planning about some of the most vexing national problems of our day. Unfortunately, the current state of SB modeling and related analysis is not yet up to the job. This report diagnoses the problems, identifies the challenges, and recommends ways to move ahead so that SB modeling will be more powerfully useful for aiding decisionmaking. The paper is based on a report for DARPA that will be released in April 2018. It includes lessons learned from a workshop on priority challenges held in 2017 and subsequent work for an edited book planned in late 2018. The topics emphasized include the following:

1. Improving the research cycle by tightening links among theory, modeling, and experimentation

2. Seeking more unifying and coherent theories while retaining alternative perspectives and confronting multidimensional uncertainty

3. Invigorating experimentation with modern data sources and emphasis on theory-observation iteration

4. Modernizing ways to use social-behavioral models for analysis to aid decisionmaking, in particular by drawing on lessons learned from other domains about using models to assist planning under deep uncertainty

A Process Algebra Model of QEDWilliam SulisMonday, 15:40-17:00

The process algebra approach to quantum mechanics applies ideas of complex systems theory and emergence to fundamental physical processes. It posits a finite and discrete ontology of primitive events which are generated by processes (in the sense of Whitehead). In this ontology, primitive events serve both as elements of an emergent space-time as well as of emergent fundamental particles and fields, which are viewed as discrete waves formed from coherently generated sub-collections of primitive events. These events are generated using only local information propagated at no more than luminal speed. Each process generates a set of primitive elements, forming a causal space termed a “causal tapestry”. Each causal tapestry can be thought of as a discrete and finite space-like sub-hyper-surface of an emergent causal manifold (space-time) M. Interactions between processes are described by a process algebra which possesses 8 commutative operations (sums and products) together with a non-commutative concatenation operator. The process algebra possesses a representation via nondeterministic combinatorial games through which causal tapestries are constructed. The process covering map associates each causal tapestry with a Hilbert space over M, providing the connection to non relativistic quantum mechanics. The probability structure of non-relativistic quantum mechanics emerges from interactions between processes. The process algebra model has been shown to reproduce many features of the theory of non-relativistic scalar particles to a high degree of accuracy. The process ontology appears to avoid the paradoxes and divergences that plague standard quantum mechanics. This paper reports on an extension of the process algebra model to vector particles, in particular photons. Light is represented as a discrete wave whose local amplitude is described by a 4-vector corresponding to the values of the scalar and vector potential. 
The information entering into the construction of primitive elements (photons) is propagated locally at luminal and sub-luminal speeds via a discrete version of the usual propagator. As in the scalar case this yields a high degree of accuracy. Both classical and quantum mechanical versions of light are discussed. Explicit extension of the model to include relativistic constraints is presented, paving the way for an extension of the process model to quantum field theory.

A Qualitative Approach for Detection of Emergent Behaviors in Systems of Dynamical SystemsShweta Singh and Mieczyslaw (Mitch) M. KokarTuesday, 18:00-20:00

The swarms of collaborating autonomous agents, performing a specific mission, can solve problems better than collections of agents that are controlled centrally. However, managing a large number of such agents is not only very complicated but also requires many humans to be part of the control loop. The swarms of autonomous agents can exhibit unpredictable and often undesirable behaviors, termed emergent behaviors. The emergent behaviors/properties appear when a number of simple entities operate and interact in an environment, forming more complex dynamic behaviors as a collective. Consequently, before giving control over their missions to such swarms of agents, it is necessary to establish some mechanisms to detect that an undesirable behavior is imminent and provide ways to control such swarms so that such behaviors can be avoided. In the current research, we address such issues and present techniques to detect undesirable emergent behaviors in swarms of autonomous agents. Our approach relies on the theory of similitude (or physical similarity). This theory has been used extensively in physics and engineering, in particular, to model behaviors of phenomena that occur due to the interactions of particles, e.g., in heat and mass exchange. We apply similar methods to the modeling and analysis of behaviors of autonomous agents, treated as a complex dynamical system. The main idea of the similitude theory is that similar behaviors occur when the values of the system variables are in a specific relation. The so-called dimensionless quantities can capture such relations. Each such relation defines a hypersurface in the space spanned over the system variables. Knowing such relationships will allow the system to distinguish qualitatively different behaviors. The approach uses a structure, termed the Q2 (Quantitative-Qualitative) system, which integrates a quantitative dynamical system with a qualitative dynamical system.
The use of this structure allows the analysis of the quantitative dynamical system in the qualitative domain. This approach lowers the computational complexity of the algorithms for detecting undesirable emergent behaviors with respect to a more traditional approach based on just a general quantitative dynamical system. As a case study, we work with swarms of UAVs (Unmanned Aerial Vehicles) performing a specified mission. We formalized a specific scenario of persistent surveillance where the aim is to provide monitoring of the plume (targeted search area) such that the metric of “information age” is minimized. The agent-based modeling and simulation of the multi-UAV system is performed to implement and analyze different types of emergent behaviors. The outcomes of this research include (1) simulations exemplifying undesirable behaviors in swarms of UAVs for the specified scenario of plume monitoring by swarms of UAVs; (2) global control policies that result in the emergence of different types of behaviors, including both desirable and undesirable; (3) algorithms for learning critical hypersurfaces for partitioning the system spaces into qualitative inputs, states, and outputs and for constructing qualitative state machines; and (4) results of formal analysis of the complexity and efficiency of the approach.

Quantifying economic activity on financial transaction networksCarolina MattssonTuesday, 18:00-20:00

Much of the modern economy operates within financial ecosystems hosted by major financial service providers (FSPs), where the basic unit of account is a financial transaction. Whether such ecosystems are run by FinTech companies, distributed ledgers, or more traditional players, the economic activity they support can be elegantly described by financial transaction networks. Better measures for summarizing these weighted, directed, and temporal networks can help us better understand financial ecosystems.
Of particular interest is a measure of overall economic activity. The financial ecosystems supported by FSPs generally do not encode details of transactions that would be required to construct a direct reference to well-known macroeconomic measures such as GDP. On the other hand, it is clear from actual data that not all transactions amongst users of a financial service are independently meaningful. It is often the case that two or more financial transactions likely refer to the same economic transaction.
This work constructs a rolling measure of recent overall economic activity, active volume, inspired by theoretical conceptions of GDP as value-added and by the data analysis needs of FSPs. We use time-discounting at a fixed scale k to approximate the value-added by subsequent financial transactions. Active volume can be tracked over time for any network described by financial transactions of the form (i,j,t,w) encoding the accounts that sent and received the transaction, its timestamp, and its value.
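As a rough, hypothetical illustration (not the authors' exact definition of active volume), a rolling measure with exponential time-discounting at a fixed scale k over transactions of the form (i, j, t, w) could look like:

```python
import math

def active_volume(transactions, now, k):
    """Toy rolling measure: each transaction (i, j, t, w) contributes its
    value w, discounted exponentially by its age at a fixed time scale k."""
    return sum(w * math.exp(-(now - t) / k)
               for _, _, t, w in transactions if t <= now)

txns = [("a", "b", 0.0, 100.0),
        ("b", "c", 1.0, 100.0),   # possibly the same economic transaction passed on
        ("a", "c", 9.0, 50.0)]

print(round(active_volume(txns, now=10.0, k=5.0), 2))
```

A value-added-style measure as described in the abstract would additionally have to recognize that the second transaction largely passes on the value of the first, rather than discounting each transfer independently as this sketch does.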

Quantifying genuine multipartite correlations and their pattern complexityDavide Girolami, Tommaso Tufarelli and Cristian SusaMonday, 14:00-15:20

We propose an information-theoretic framework to quantify multipartite correlations in classical and quantum systems, answering questions such as: what is the amount of seven-partite correlations in a given state of ten particles? We identify measures of genuine multipartite correlations, i.e. statistical dependencies which cannot be ascribed to bipartite correlations, satisfying a set of desirable properties. Inspired by ideas developed in complexity science, we then introduce the concept of "weaving" to classify states which display different correlation patterns, but cannot be distinguished by correlation measures. The weaving of a state is defined as the weighted sum of correlations of every order. Weaving measures are good descriptors of the complexity of correlation structures in multipartite systems.
Reference: D. Girolami, T. Tufarelli and C. Susa, Phys. Rev. Lett. 119, 140505 (2017)
LANL Report: LA-UR-17-29988
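The weaving definition above can be written schematically (the symbols $\omega_k$ and $C_k$ are our notation for illustration, not necessarily the paper's):

```latex
W(\rho) = \sum_{k=2}^{N} \omega_k \, C_k(\rho)
```

where $C_k(\rho)$ quantifies the genuine $k$-partite correlations of an $N$-particle state $\rho$ and $\omega_k \ge 0$ are weights fixed by the chosen weaving measure, so that states with identical total correlations but different correlation patterns receive different weaving values.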

Quantifying people's priors over graphical representations of tasksGecia Bravo Hermsdorff, Talmo Pereira and Yael NivThursday, 14:00-15:20

In this work, we focused on tasks that have a natural mapping to graphs, and developed a method to quantify people's priors over “task graphs” that combines modelling with Markov Chain Monte Carlo with People (MCMCP; Griffiths & Kalish, 2007) -- a process whereby an agent learns from data generated by another agent, who themselves learned it in the same way. Additionally, we created an online experimental platform with a game-like interface that instantiates this algorithm and allows participants to interactively draw the task graphs.

Figure 1 illustrates our algorithm for generating experiments. This back-and-forth between data seen by the participants (“partial graphs”) and the resulting hypothesis they infer (completed task graphs) can be marginalized over the partial graphs to create a Markov Chain (MC) over the space of task graphs. Assuming that participants are Markovian and share the same fixed decision rule, this MC is time-homogeneous. If, in addition, we assume that participants are Bayesian and respond by sampling from their posterior (and that the chain is ergodic), this MC converges to a stationary distribution that is equal to the participants' shared prior over the task graphs.
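The convergence argument can be illustrated with a toy chain: when learners respond with posterior samples, the chain obtained by marginalizing over the partial data has the shared prior as its stationary distribution. The two-hypothesis numbers below are invented for illustration.

```python
import numpy as np

# Toy setup: 2 hypotheses ("task graphs") and 2 possible partial-data items.
prior = np.array([0.7, 0.3])          # the participants' shared prior
lik = np.array([[0.9, 0.2],           # lik[d, h] = P(data d | hypothesis h);
                [0.1, 0.8]])          # each column sums to 1 over d

# A learner sees data d and responds with a sample from P(h' | d).
post = lik * prior
post /= post.sum(axis=1, keepdims=True)

# Marginalizing over the partial data gives the chain over hypotheses:
# T[h, h'] = sum_d P(d | h) P(h' | d).
T = lik.T @ post

p = np.array([0.5, 0.5])              # start the chain anywhere
for _ in range(200):
    p = p @ T
print(p.round(2))                     # converges to the prior: [0.7 0.3]
```

One can check directly that `prior @ T == prior`, i.e. the shared prior is exactly stationary for the marginalized chain.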

In standard MCMC, one uses samples generated by the algorithm to reconstruct the target distribution. In our case, the implicit Bayesian assumption provides additional structure. Thus, we proposed a method that fits a Bayesian model to participants' choices to recover the prior, and demonstrated that it does so more precisely than the standard sampling method.

The number of non-isomorphic graphs grows super-exponentially in the number of nodes; given limited data, we cannot sufficiently sample each graph. In these cases, to obtain informative priors, we need to extend the probabilities to graphs that were not sampled. One approach is to find a natural low-dimensional parameterization of the prior. Specifically, we propose to use as a basis the eigenvectors of the MC over graphs that results from a uniform prior. Hence, “smoother” priors are obtained by including only the longer-decaying modes. Figure 3 illustrates this idea applied to experimental data.
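The smoothing step can be sketched as follows: take the chain's transition matrix under a uniform prior, keep only its slowly-decaying eigenmodes as a basis, and project a rough empirical prior onto that basis. The 4-state matrix, the retained-mode count, and the rough prior below are illustrative assumptions.

```python
import numpy as np

# Toy transition matrix of the chain over 4 "graphs" under a uniform
# prior (numbers are illustrative).
T0 = np.array([[0.7, 0.1, 0.1, 0.1],
               [0.1, 0.7, 0.1, 0.1],
               [0.1, 0.1, 0.7, 0.1],
               [0.1, 0.1, 0.1, 0.7]])

# Eigenvectors of the chain form a basis for priors; retaining only the
# slowly-decaying modes (largest |eigenvalue|) gives "smoother" priors.
vals, vecs = np.linalg.eig(T0.T)
order = np.argsort(-np.abs(vals))
basis = np.real(vecs[:, order[:2]])          # keep 2 longest-decaying modes

p_rough = np.array([0.5, 0.3, 0.15, 0.05])   # noisy empirical prior
coef, *_ = np.linalg.lstsq(basis, p_rough, rcond=None)
smooth = np.clip(basis @ coef, 0, None)      # project, then re-normalize
smooth /= smooth.sum()
print(smooth.round(3))
```

Dropping the fast-decaying modes discards sampling noise while keeping the structure the chain itself preserves over many iterations.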

Finally, we showed that our analysis for MCMCP over graphs can be extended to the more general case of exchangeable sequences of random variables, where the partial data are generated by randomly obscuring a given fraction of the sequence, and the relevant parameters are: $|\mathcal{A}|$, the size of the alphabet; $l$, the string length; $m$, the number of relations obscured; and $G$, the group under which the sequence is exchangeable.

Quantifying Reputation and Success in ArtSamuel Fraiberger, Roberta Sinatra, Christoph Riedl and Laszlo BarabasiThursday, 14:00-15:20

Quantifying the relation between urban environment, socio-economic conditions, mobility and crime in multiple citiesMarco De Nadai, Yanyan Xu, Emmanuel Letouzé, Marta Gonzalez and Bruno LepriThursday, 15:40-17:00

Nowadays, more than 50% of the world population lives in urban areas and crime rates are much higher in big cities than in either small ones or rural areas.
Thus, understanding which factors influence urban crime is a pressing need. Mainstream studies analyze crime records through historical panel data, spatial analysis of crime with ecological factors, and exploratory mapping. More recently, new machine learning methods have informed crime prediction over time. However, these studies focus on a single city and consider only a limited number of factors (such as socio-economic characteristics). Hence, the resulting interpretations present ambiguities and are inconclusive.
Here we propose a spatial Bayesian model to explore how crime is related not only to socio-economic characteristics but also to spatial and mobility characteristics of neighborhoods.
To that end, we integrate open data and sources with mobile phone traces in diverse cities. We find that the inclusion of mobility information and physical characteristics of the city effectively explain the emergence of crime, and improve the performance of the traditional approaches. Moreover, we show that the ecological factors of neighborhoods relate to crime very differently from one city to another. Thus there is not a universal explanation of crime and, clearly, no "one fits all" model. However, these results demonstrate that urban diversity and natural surveillance theories play an important role in the proliferation of crime, and the knowledge of this role can be exploited by policymakers to reduce crime.

Rate of Recovery from Perturbations Reflects Future Stability of Natural PopulationsAmin Ghadami, Eleni Gourgou and Bogdan I. EpureanuThursday, 14:00-15:20

Regime shifts in complex ecological and living systems have been receiving growing attention, since the cumulative human impact on the environment is increasing the risk of ecological regime shifts. Anticipating such critical transitions is a crucial need, because it is often difficult to restore a system to its pre-transition state once the transition occurs. Hence, it is necessary to develop reliable methods capable of forecasting upcoming transitions, as part of a preventive plan against possible detrimental consequences. For example, there is a need for methods capable of predicting catastrophic events in natural populations, because they can lead to irreversible consequences, such as extinction of species. Such methods also have high potential impact when applied to disease eradication (populations of infectious diseases).
To address this important topic, we introduce and experimentally evaluate a unique method to forecast critical transitions in ecological systems and natural populations. The method enables us to forecast critical points and post-critical system dynamics using measurements collected only in the pre-transition regime. The method is evaluated using a model ecological system, namely a population of budding yeast with cooperative growth. The population exhibits a catastrophic transition as the environment deteriorates, which resembles an ecological collapse.
The exciting experimental results of this study address some of the most important challenges in forecasting safety and stability of natural populations. Results highlight that by monitoring the rate of recovery of the system's response to perturbations, it is possible to gain crucial information about the system’s future stability, such as the quantitative distance to the upcoming transition (collapse), the type of upcoming transition (catastrophic/non-catastrophic) and future equilibria. We envision this approach to be a valuable tool used in stability analysis of natural populations, which is exceedingly important in ecological management.

Reading the Media’s MindSarjoun Doumit and Ali MinaiWednesday, 15:40-17:00

“There’s no art to find the mind’s construction in the face,” wrote Shakespeare in Macbeth, but trying to infer what someone is really thinking is arguably the essence of interaction between cognitive agents. The only basis for doing so is what an agent expresses in writing, speech, or behavior. Reading thoughts and intentions from these external signals is what humans and other sentient animals do as a matter of course. Turning this intuitive expertise into a quantitative or computational model, however, is a challenging goal – and one that neither cognitive science nor engineering has come close to achieving. The best that can be done is to look at individual aspects of the mind through controlled experiments, such as the study of memory, recognition, problem-solving, etc. One useful, though still simplistic, way to try and infer mental models from expressions is to look at the pattern of associations between conceptual elements. Both theory and experiment support the idea that mental representations and processes are fundamentally associative: Memories are stored and recalled by association, and complex concepts are represented as associations between simpler ones. In an influential 1962 paper, Sarnoff Mednick even proposed that creativity is rooted in the presence of unusual associations in the minds of creative individuals. It is also reasonable to assume that the language of an agent’s written or spoken expression would, to some degree, reflect the pattern of associations in their mind. Motivated by these ideas, we have previously presented some evidence that there is a significant difference in the pattern of association between words in creative (poetic) writing and non-creative writing. 
In the current paper, we apply the same approach to news reports from individual media sources over the same period of time, with the goal of looking for differential patterns of association, distinctive motifs, and qualifiers associated with high-centrality concepts to infer potential biases in the reporting of the same news stories by different media sources. The underlying assumption is that the associative patterns in the expressions of a media source will reflect its “mind” and “personality,” just as they do for individuals. We also compare the structure of associative networks from each media source to characterize potential differences in linguistic style.

Recognizing complex behavior emerging from chaos in cellular automataGabriela M. Gonzalez, Genaro J. Martinez, M.A. Aziz Alaoui and Fangyue ChenMonday, 15:40-17:00

In this research, we explain and show how a chaotic system displays non-trivial behavior as a complex system. This result is reached by modifying the chaotic system with a memory function; the result is a new system with elements of the original function that are not evident at first sight. We show that this phenomenology can be uncovered by selecting a typical chaotic function in the domain of elementary cellular automata and discovering complex dynamics within it. We illustrate by simulations how a number of gliders emerge in this automaton and how some controlled subsystems can be designed in this complex system.
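As an illustration of the general construction (not the authors' exact system), an elementary CA can be endowed with memory by feeding the majority of each cell's recent states back into the rule. Rule 126 with majority memory is a typical example from the literature on CA with memory; the memory depth `tau` below is an illustrative choice.

```python
import numpy as np

def step_rule(state, rule=126):
    """One synchronous update of an elementary CA with periodic boundaries."""
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    idx = 4 * np.roll(state, 1) + 2 * state + np.roll(state, -1)
    return table[idx]

def run_with_memory(init, steps, tau=4):
    """Apply the rule to the majority of each cell's last tau states
    (majority memory; tau and the tie-break toward 1 are illustrative)."""
    history = [np.asarray(init, dtype=np.uint8)]
    for _ in range(steps):
        window = np.stack(history[-tau:])
        majority = (window.mean(axis=0) >= 0.5).astype(np.uint8)
        history.append(step_rule(majority))
    return np.stack(history)

space = np.zeros(21, dtype=np.uint8)
space[10] = 1                                 # single seed cell
print(run_with_memory(space, 10).shape)       # (11, 21)
```

Plotting the returned space-time array (rows as time steps) makes emergent localized structures visible that plain Rule 126 does not show.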

References

Y. Bar-Yam (2003) Dynamics Of Complex Systems, CRC Press.

S.A. Kauffman (1993) The Origins of Order: Self-Organization and Selection in Evolution, Oxford University Press, New York.

M. Mitchell (2009) Complexity: A guided tour, Oxford University Press.

G.J. Martínez, A. Adamatzky, R.A. Sanz (2015) On the Dynamics of Cellular Automata with Memory, Fundamenta Informaticae 137 1-16.

G.J. Martínez, A. Adamatzky, R.A. Sanz (2013) Designing Complex Dynamics in Cellular Automata with Memory, International Journal of Bifurcation and Chaos 23(10) 1330035.

G.J. Martínez, A. Adamatzky, J.C.S.T. Mora, R.A. Sanz (2010) How to make dull cellular automata complex by adding memory: Rule 126 case study, Complexity 15(6), 34-49.

S. Wolfram (2002) A New Kind of Science, Wolfram Media, Inc., Champaign, Illinois.

Regular and chaotic fractional (with power-law memory) dynamicsMark EdelmanMonday, 14:00-15:20

Nonlinear systems with power-law memory appear in many areas of the natural and social sciences. Their dynamics can be described by nonlinear fractional differential/difference equations. The presence of power-law memory results in new behavior typical only of fractional systems. Some of the new features include: 1. Overlapping of chaotic attractors. 2. Self-intersections of trajectories in continuous systems of orders less than two; as a result, non-existence of chaos in such systems is only a conjecture. 3. Fractional systems may have no periodic solutions except fixed points; instead, they may have asymptotically periodic solutions. Periodic sinks may exist only in the asymptotic sense, and asymptotically attracting points may not belong to their own basins of attraction. 4. Cascade-of-bifurcations-type trajectories exist only in fractional systems. The periodicity of such trajectories changes with time: they may start converging to a period-2^n sink, but then bifurcate and start converging to a period-2^(n+1) sink, and so on. 5. Fractional extensions of volume-preserving systems are not volume preserving. If the order of a fractional system is less than the order of the corresponding integer system, the behavior of the system is similar to the behavior of the corresponding integer system with dissipation. Correspondingly, the types of attractors which may exist in fractional systems include sinks, limit cycles, and chaotic attractors. 6. Bifurcations in a fractional system may occur when the nonlinearity parameter changes, and they also depend on the order of the system (the memory parameter). As a result, systems with power-law memory may be described by two-dimensional (nonlinearity parameter vs. memory parameter) bifurcation diagrams.
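The distinguishing feature — every new state depending on the whole history through power-law weights — can be sketched generically as follows. This is an illustrative map with a power-law kernel, not any of the specific fractional equations discussed in the abstract; the nonlinearity and parameter values are invented.

```python
from math import gamma

def power_law_memory_map(f, x0, alpha, n_steps):
    """Iterate x_n = x_0 + (1/Gamma(alpha)) * sum_{k<n} (n-k)^(alpha-1) f(x_k):
    each new state depends on the full history with weights decaying as a
    power law in the age of each past state (generic illustration)."""
    xs = [x0]
    for n in range(1, n_steps + 1):
        mem = sum((n - k) ** (alpha - 1) * f(xs[k]) for k in range(n))
        xs.append(x0 + mem / gamma(alpha))
    return xs

# Logistic-type nonlinearity; K, alpha, and x0 are illustrative values.
K = 0.1
xs = power_law_memory_map(lambda x: K * x * (1 - x), 0.3, 0.5, 20)
print(len(xs))  # 21
```

Because the whole trajectory enters each step, the map is not a one-step dynamical system, which is the source of the asymptotic (rather than exact) periodicity described above.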

Relatedness, Knowledge Diffusion, and the Evolution of Bilateral TradeBogang Jun, Aamena Alshamsi, Jian Gao and Cesar HidalgoTuesday, 14:00-15:20

Resilience Modeling for Petrochemicals and Refineries Exposed to a Vector of Natural/ Technological/ Human ThreatsFarinaz Sabz Ali Pour and Adrian GheorgheTuesday, 14:00-15:20

Petrochemical and refinery units are highly vulnerable to threats such as natural disasters, technological failures, and human-induced insecurity, and incidents can have catastrophic consequences. A systematic approach is required to determine the vulnerability index of a petrochemical and refinery site, including all the tangible factors (e.g., reliability of safety devices) and intangible factors (e.g., annual budget changes for maintenance actions). Moreover, the relevant factors may vary with the location and region of the site. This study proposes a model to identify and analyze all the possible threat scenarios that a refinery unit can face and all the potential consequences, in order to manage the resiliency of the petrochemical unit. A proper overview of the potential threats can provide the plant with a systematic structure for disaster management. Three vulnerability levels (low, medium, high) are defined in this study. Data mining is used to analyze the empirical data and discover the relationships among the attributes of risk factors. Because some of the indices are interdependent, a small change in one index might result in a dramatically different score and change the vulnerability category. The model can provide a framework of the relevant indices and their correlations with the potential threats. Elements of artificial intelligence techniques are applied to process the large amount of output data. By employing this model, the volatility of underwriting results can be reduced, and the potential negative impact of identified threats can be minimized. Moreover, the model can provide the industry with innovation opportunities in its processes, while at the same time helping minimize vulnerability arising from novel products.

Rethinking Banking Branch NetworksOscar GranadosWednesday, 14:00-15:20

In recent years, banks have expanded their digital services. However, they still require physical points of customer attention, and the branches may never definitively disappear. What is the best way to optimize branch banking networks in megacities? This document proposes an alternative for branch banking network optimization that uses genetic algorithms on information from the multi-layer structure of mobility, transportation, crime, cellular coverage, traffic, and construction licenses. The results identify those locations where it may be most appropriate to establish a branch, as well as the need to merge or close other branches.

Reverse Engineering Methods to Study Osteochondral Regulatory NetworksRaphaelle Lesage, Johan Kerkhofs and Liesbet GerisMonday, 14:00-15:20

Chondrocyte differentiation involves a genetic switch from a proliferative cell state towards a hypertrophic cell state. The control of this process is tuned precisely by a complex network of signalling molecules and is essential for bone formation during development. Dysregulation of articular chondrocytes in pathogenic circumstances can lead to a recovery of the articular chondrocytes’ ability to switch towards hypertrophy. Understanding these regulatory mechanisms is of primordial importance in order to identify pathogenic factors as potential therapeutic targets in degenerative disease, as well as for the development of tissue engineered (TE) constructs for osteochondral regeneration. Since biological systems are complex, we chose to use computational models to meet that need. Here we propose to use reverse engineering approaches to unravel the complexity of the regulation of chondrocyte differentiation by using high-throughput expression data. On one hand, network inference methods served to build a consensus gene regulatory network based on gene expression profiles, in order to validate a previously developed literature-derived topology. The consensus network topology arises from processing various published micro-array data sets with multiple inference algorithms using a consensus approach. Inference showed that, with the consensus network topology as gold standard, the literature-derived model performs better in uncovering regulatory interactions than a network created using the STRING database. On the other hand, we used a genetic optimisation algorithm to discover the parameters enabling the network’s output to fit known expression profiles. This led to the construction of an ensemble of reverse-engineered additive models enabling the identification of key molecular factors involved in the stability of the chondrocyte state.
Indeed, analysing the dynamics of this system and its behaviour under perturbations such as in silico knockout or over-activation might highlight key factors to act on to trigger or modulate chondrocyte differentiation. Currently, this model is used to design experimental strategies favouring robust osteochondral differentiation in the context of bone tissue engineering.

Acknowledgments: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement No 721432

Rise of the NeostrategistNoah KomnickTuesday, 14:00-15:20

A general theory of strategy (i.e., abstract strategy) has endured since the 18th century. It posits that strategy is the reconciliation of ends and means in order to determine the ways. Inductively, this paradigm was consistently proven valid and strong for over two hundred years, and thus labelled "enduring," by luminary military strategists from Carl von Clausewitz to Colin S. Gray. Yet, as this monograph will prove, strategy fundamentally assumes a system is deterministic and thus fails to properly account for the ramifications of complexity. As a result, a new paradigm is proposed: neostrategy. Just like the observation of one black swan proves that not all swans are white, neostrategy highlights that strategy is not always useful nor is it enduring. Borrowing from the works of Kenneth O. Stanley, Joel Lehman, and Yaneer Bar-Yam, neostrategy offers planners an alternative to the traditional, objective-seeking strategy that we are so familiar with and instead proposes a strategy of novelty for some cases. In the process, this paper explains that some objectives, such as organizational innovation, are intrinsically uncertain and thus better served by a strategy of novelty instead of objective. Alas, in a Thomas S. Kuhn sense, neostrategy replaces strategy as a more complete paradigm for planning in today's interconnected, globalized, and thus increasingly complex world.

Robots as Complex SystemsJosh BongardFriday, 11:40-12:20

AI has recently enjoyed a renaissance due to algorithm advances and the arrival of Big Data. However, most current AI systems lack a body with which to push against the world, literally and figuratively, and observe how the world pushes back. Many argue that the “robots are coming”, but before they can in large numbers, we need to understand robots as complex systems: like animals, they are collections of interacting morphological and neurological components, and may themselves, in future, serve as components in ever more complex human/machine ecosystems. In this talk I will investigate how the study of complex systems is helping us realize such robots, and how in turn robots can serve as new and unique model systems for the study of complex systems.

Scenario set as a representative of possible futuresPeter DobiasTuesday, 14:00-15:20

Because of the inherent disconnect between the length of the procurement cycle and the dynamic time scales of the security environment, future military capability planning presents a unique challenge. While the security environment can change very rapidly, and the extent of the changes, and thus the future environment, is potentially unpredictable, the procurement cycles responsible for the renewal of capabilities are typically on the order of decades. The traditional approach to capability acquisition consists in developing a scenario set supposed to represent possible futures, and then planning capabilities to address the challenges of that scenario set. This approach is based on the implicit assumption that the scenario set is sufficiently representative of all possible futures, i.e., that a solution to this scenario set will be a solution for any of the feasible futures. A similar approach, with similar assumptions, appears in almost any future planning where an optimized solution to an unknown future is sought. Intuitively, if the scenarios are selected somewhat randomly, i.e., there is no significant selection bias that would eliminate a certain class of futures, then the more planning scenarios a solution works for, whether in an optimization or capability-scheduling sense, the more likely it is to work for any feasible future. In this paper we examine this assumption in a more systematic way, and we attempt to express the likelihood that a solution to a limited set of scenarios is a solution for all possible futures, at least for some special cases. In particular, we look at a discrete case with a limited number of feasible scenarios, and then compare it with a continuous case (an infinite number of feasible scenarios). We then look at the implications of scenario selection for optimization problems, as well as for general capability planning.

Segmentation of 3D, in vivo, multiphoton images of cortical vasculature in Alzheimer's mouse models using deep convolutional neural networksMohammad Haft-Javaherian, Linjing Fang, Victorine Muse, Chris Schaffer, Nozomi Nishimura and Mert SabuncuThursday, 14:00-15:20

There is a strong correlation between neurodegenerative diseases, e.g. Alzheimer’s disease (AD), brain microvascular dysfunction, and reduced brain blood flow. Therefore, a functional understanding of any changes in the brain's vascular network in AD is essential. Imaging methods such as multiphoton microscopy can generate 3D images of the brain vascular network with micrometer resolution in vivo from animal models. However, the quantitative analysis of these complex structures requires image segmentation, which is hampered by features of live animal images such as motion and poor contrast. The segmentation of vessels is a bottleneck that has prevented the systematic comparison of 3D vascular architecture across experimental populations. We explored the use of convolutional neural networks to segment 3D vessels within volumetric in vivo images of fluorescently-labeled blood vessels acquired by multiphoton microscopy in mouse brain. We evaluated different network architectures and machine learning techniques in the context of this segmentation problem. We show that our optimized convolutional neural network architecture, which we call DeepVess, yielded a segmentation accuracy that was better than both the current state-of-the-art and a trained human annotator, while also being orders of magnitude faster. To explore the effects of aging and AD on capillary structure, we applied DeepVess to 3D images of cortical blood vessels in young and old mouse models of AD and wild type littermates. We found little difference in the distribution of capillary diameter or tortuosity between these groups, but did note a decrease in the number of longer capillary segments (>75 μm) in aged animals as compared to young, in both wild type and Alzheimer's disease mouse models.

Selecting Information in Financial Markets: Herding and Opinion Swings in a Heterogeneous Mimetic Rational Agent-based modelAymeric ViéTuesday, 15:40-17:00

As expectations are driven by information, its selection is central in explaining common knowledge building and unraveling in financial markets. This paper addresses this information selection problem by proposing imitation as a key mechanism to explain opinion dynamics. Behavioral and cognitive approaches are combined to design mimetic rational agents able to infer and imitate each other’s choices and strategies in the opinion-making process. Model simulations tend to reproduce stylized facts of financial markets such as opinion swings, innovation diffusion, social differentiation and existence of positive feedback loops. The influence of imitation reliability and information precision on opinion dynamics is discussed. The results shed light on two competing aspects of imitation behavior: building collective consensus and favoring innovation diffusion. The role of contrarian and individualistic attitudes in triggering large-scale changes is highlighted. From the results, some policy recommendations to reach better financial markets stability through opinion dynamics management are finally presented.

Sequential Planning and Machine Learning for Adaptive Training and Team TrainingAlan CarlinWednesday, 15:40-17:00

Modern research in adaptive training and education often considers both AI Planning and Machine Learning approaches. Regarding AI Planning, training can be considered as an optimization problem by assigning value to training objectives, and formalizing notions such as the relationship between training content and Knowledges, Skills, and Experiences (KSE’s). The relationship includes the ability of differing content to train different KSE’s, and the properties of in-training measures of KSE attainment. Regarding Machine Learning, past educational data stored in a Learning Management System (LMS) can be mined so as to construct the optimization problem. In this paper, we report on work to represent the training process as a Dynamic Bayesian Network (DBN), and to optimize training content selection using approaches from the Partially Observable Markov Decision Process (POMDP) literature. However, there are challenges associated with these approaches, related to the complexity of the domain. In practice KSE’s are often correlated, sometimes at a single point in time and sometimes at differing times due to predecessor/successor relationships in KSE attainment. Furthermore, there is growing interest in extending these approaches to team training and team measurement, adding difficulty to the machine learning process and adding additional constraints to the AI planners. In this paper, we discuss the relevant approaches, the sources of complexity in the domain, areas of recent work, and challenges ahead.

Shadow Capital: Emerging Patterns of Money Laundering and Tax EvasionLucas AlmeidaTuesday, 18:00-20:00

Among the many social structures that cause inequality, one of the most jarring is the use of loopholes to both launder money and evade taxation. Such resources fuel the "offshore finance" industry, a multi-billion dollar sector catering to many of those needs. As part of the push towards greater accountability, it is crucial to understand how these decentralized systems structure themselves and operate. As is usually the case with emergent phenomena, the efforts of law enforcement have had limited effects at best.

This challenge is compounded by the fact that they run under the logic of "dark networks," avoiding detection and oversight as much as possible. While there are legitimate uses for offshore services, such as protecting assets from unlawful seizures, they are also a well documented pipeline for money stemming from illegal activities. These constructs display a high degree of adaptiveness and resilience, and the few studies done had to use incomplete information, mostly from local sources of criminal proceedings.

The goal of this work is to analyze the network of offshore accounts leaked in the “Panama Papers” report by the International Consortium of Investigative Journalists. The leak registers the activities of the Mossack Fonseca law firm in Panama, one of the largest in the world in the offshore field. It spans over 50 years and provides us with one of the most complete overviews thus far of how these activities are networked. There are over 3 million links and 1.5 million nodes, with accompanying information including time of operation, ownership, and country of registry.

The preliminary analysis already performed by cleaning the dyadic relations allows for a snapshot of this universe, which is very receptive to the metrics already current in network science. The betweenness centrality of nodes is extremely skewed, with fewer than a hundred nodes forming the “backbone” of the system, mostly in countries already known to be tax havens (such as the Cayman Islands, the Bahamas, and Jersey). The degree distribution is very similar to the power law produced by the Bianconi-Barabasi model of preferential attachment with varying fitnesses. These patterns will be explored in order to better understand the evolution of the system. This work will also model how different strategies of law enforcement intervention can disrupt the flow of illegal resources by testing local (and network-level) targeting metrics.
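For reference, the growth mechanism being compared against can be sketched as follows: in the Bianconi-Barabasi model, each new node attaches preferentially to existing nodes with a high fitness-degree product. The seed size, uniform fitness distribution, and multigraph simplification below are illustrative choices, not details from the study.

```python
import random

def bianconi_barabasi(n, m=2, seed=42):
    """Growing network in which existing node i attracts new links with
    probability proportional to fitness_i * degree_i. Fitnesses are drawn
    uniformly on (0, 1); multi-edges are allowed for simplicity."""
    rng = random.Random(seed)
    deg = [m] * (m + 1)                  # seed: complete graph on m+1 nodes
    eta = [rng.random() for _ in range(m + 1)]
    for _ in range(m + 1, n):
        weights = [e * d for e, d in zip(eta, deg)]
        for t in rng.choices(range(len(deg)), weights=weights, k=m):
            deg[t] += 1                  # preferential attachment with fitness
        deg.append(m)
        eta.append(rng.random())
    return deg

deg = bianconi_barabasi(2000)
print(len(deg))  # 2000
```

High-fitness early nodes become extreme hubs, producing the skewed, power-law-like degree distribution against which the empirical network is compared.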

By crossing the methods from data analysis with the public policy perspective, we expect to contribute to the literature of compliance, as well as the growing field of dark networks. It can also provide an important baseline for understanding other recently uncovered schemes such as the “Car Wash” scandal in Brazil. The perspective of complexity is uniquely well-positioned to shed light on this enigma that neither economics nor law alone have been able to tackle.

Signature forecasting in the context of predictability of infectious disease outbreaksElena Naumova, Aishwarya Venkat, Ryan Simpson, Anastasia Marshak and Irene BoschThursday, 14:00-15:20

The threat of high-profile disease outbreaks has drawn continuous attention to the ability to predict and detect an infectious outbreak early. Outbreaks are triggered by, and emerge from, the complex interaction of hosts and pathogens, which can be altered by the shared environment. These interactions are likely to be depicted by models capable of describing complex adaptive systems. The task of outbreak prediction can be viewed from two perspectives: general predictability of an outbreak and near-term forecasting. With respect to outbreak predictability over long time series, Scarpino and Petri have demonstrated that the forecast horizon varies by disease, and that these differences likely occur due to both shifting model structures and social network heterogeneity (Scarpino and Petri, 2017). These insights should be informative for developing forecasting models and understanding the latent processes governing periodicity and synchronization of outbreaks. Counterintuitively, near-term forecast models do not rely on long-memory processes. They often lack flexibility due to overly restrictive assumptions, suffer from misspecification and data reporting lags, require information not universally available in real time, and have high computational requirements.

In our prior work, we explored a method of sequential combinatorial decomposition of an outbreak (Fefferman and Naumova, 2006). This method embodies a compromise between the complexity of individual behavior and the broad-brush assumptions of mass action and population averages, and is based on a set of clinically measurable parameters. It also allows us to incorporate an understanding of the different temporal distributions governing the transition from susceptible to infected to recovered associated with each population. We also went beyond a classic time series approach, developing a non-linear, non-parametric extension by modeling an outbreak signature (Naumova and MacNeill, 2005). We smoothed historical outbreak data with a loess-type smoother, updating the estimates upon receipt of each new datum to produce a near-term forecast. Recent data and the near-term forecasts were used to compute a warning index quantifying the level of concern. The algorithm for computing the warning index was designed to balance Type I errors (false prediction of an outbreak) and Type II errors (failure to correctly predict an outbreak).
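A minimal sketch of the near-term step — a loess-type local fit extrapolated one step ahead, combined with a standardized warning index — is given below. The tricube weights, window span, scale, and toy counts are illustrative assumptions, not the published method's exact form.

```python
import numpy as np

def local_linear_forecast(y, span=7):
    """Fit a tricube-weighted straight line to the last `span` points
    (a minimal loess-type smoother) and extrapolate one step ahead."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)[-span:]
    w = (1 - (np.abs(t - t[-1]) / span) ** 3) ** 3    # tricube weights
    slope, intercept = np.polyfit(t, y[-span:], 1, w=np.sqrt(w))
    return slope * (t[-1] + 1) + intercept

def warning_index(y, forecast, scale):
    """Standardized excess of the newest count over its forecast; a
    threshold on this index trades off false alarms vs. missed outbreaks."""
    return (y[-1] - forecast) / scale

counts = [3, 4, 3, 5, 4, 6, 5, 7, 9, 14]              # toy weekly counts
f = local_linear_forecast(counts[:-1])
print(warning_index(counts, f, scale=2.0) > 0)        # elevated concern: True
```

Each new datum updates the local fit, so the forecast and the warning index can be recomputed in real time as surveillance data arrive.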

We now consider a system in which an adaptive forecast can be made by assuming a latent process that defines an “outbreak potential” over a given spatial extent. Whenever a pre-specified “outbreak potential” is reached, a longer forecast is triggered, based on a ‘signature’ curve selected from a library of historically observed signatures, e.g. a flexible four-parameter family of Johnson distributions. We are testing recently developed apps to produce signature forecasts for a broad range of infections, including influenza, rotavirus, Zika, and dengue. We are also exploring entropy-based metrics to characterize forecast quality by assessing its reliability, accuracy, skill, resolution, sharpness, uncertainty, and value.

References:

Scarpino, S.V., and Petri, G. (2017). On the predictability of infectious disease outbreaks. arXiv:1703.07317 [physics.soc-ph].

Fefferman, N.H., and Naumova, E.N. (2006). Combinatorial decomposition of an outbreak signature. Mathematical Biosciences 202, 269-287.

Naumova, E.N., and MacNeill, I.B. (2005). Signature‐forecasting and early outbreak detection system. Environmetrics 16, 749-766.

Signatures of microevolutionary processes in phylogenetic patternsMarcus Aguiar, Carolina Lemes, Paula Lemos-Costa, Flavia Marquitti, Lucas Fernandes, Marlon Ramos, David Schneider and Ayana MartinesWednesday, 15:40-17:00

Phylogenetic trees represent evolutionary relations among species and contain signatures of the processes responsible for the speciation events they display. Inferring processes from tree properties, however, is challenging. Here we considered a genetically explicit model of speciation based on assortative mating in which genome size and mating range can be controlled. We simulated parapatric and sympatric radiations and constructed their phylogenetic trees, computing structural properties such as tree balance and speed of diversification. We showed that parapatry and sympatry are well separated by these characteristics. Balanced trees with constant rates of diversification only originate in sympatry, and genome size affected both the balance and the speed of diversification of the simulated trees. Comparison with empirical data showed that many radiations considered to have developed in parapatry or sympatry are in good agreement with model predictions.

Simulating and Stimulating Social Intelligence Networks: an Agent Based ApproachHamza Zeytinoglu, Bulent Ozel and Leo HalepliThursday, 14:00-15:20

Social Intelligence Networks (Battiston & Zeytinoglu, 2016) are dynamic movements that facilitate trust-based collective action through value alignment. Social groups can identify and gather around certain issues they would like to address. By nature, these social issues require social solutions in which not only the actors and their counterparties have a chance to win together (the classical win-win scenario), but the society they operate in wins as well (a triple win). Thus individuals form groups, and groups make a positive difference on the issues they prioritize. Each subject within a group has her own values, which may and will differ from those of others. Deliberative processes are in place, and certain events translate into milestones where such experiences challenge existing values, seeding full transformations or modifications.

We have employed an SIN-based approach in an ongoing EU-funded project, OpenMaker, which aims to enable a value-aware community around maker and manufacturer partnerships. The project aims at creating a transformational and collaborative ecosystem that fosters collective innovation within the European manufacturing sector and drives it towards more sustainable business models, production processes, products and governance systems. In other words, the project is an initiative that seeks to catalyse the ideation, discovery, design & prototyping of business models, production processes, products, and governance systems, driving the radical distribution, decentralisation and mass collaboration between manufacturers and makers. In that sense, the OpenMaker project as a whole is an attempt to mediate and stimulate SINs for the emerging maker movement.

One of the methodological objectives of the project is to develop an agent-based model (ABM) to improve our understanding of the critical ingredients necessary for the emergence of a cooperative, collectively aware environment conducive to innovation. The approach entails a set of integrated components in which agent- and community-specific knowledge, values, and socio-geographic interactions are made explicit with the support of advanced machine learning algorithms, which in turn may increase awareness of community values and establish trust for further collaborations and value-consistent actions.

We employ a combination of collaborative filtering and machine learning procedures within an agent-based modeling approach to overcome obstacles or reduce the burdens of information processing while complying with principles of data privacy. ABMs of complex socioeconomic systems are generally developed to run policy scenarios under controlled settings and assumptions. From an ABM perspective, we conceive of community formation as the emergent result of members' interactions in complex social, cultural, geographical, and political environments. In that sense, the agent-based simulation model may improve our understanding of the impacts of individual and collective choices on the emerging values and social network structures of maker communities. The outcome of such an exercise (i) may enable the enablers to compare different offline strategies for accelerating their community formation, (ii) may lead to informed insights for adjusting features of the OpenMaker digital social platform, and (iii) may help derive policy recommendations for promoting sustainable maker communities elsewhere.

Simulation of Scale-Free Correlation in Swarms of UAVsShweta Singh and Mieczyslaw M. KokarTuesday, 15:40-17:00

Natural phenomena such as flocking in birds, a well-known example of emergence, have been shown to be scale-invariant: flocks of birds exhibit scale-free correlations that give them the ability to mount an effective collective response to external conditions and environmental changes, allowing them to survive predator attacks. However, the role of scale-free correlations in artificially simulated systems is not clearly understood, and more investigation is warranted. In this paper, we present an attempt to mimic this scale-free behavior in swarms of autonomous agents, specifically Unmanned Aerial Vehicles (UAVs). We simulate an agent-based model, with each UAV treated as a dynamical system, performing persistent surveillance of a search area. The evaluation results show that the correlations in swarms of UAVs can be scale-free, and we discuss the conditions under which this happens. Since this is part of ongoing research, open questions and future directions are also discussed.

A Simulator for the Dynamics of Project ManagementBurak Gozluklu, Nelson Repenning and John Sterman Monday, 15:40-17:00

Complex projects including product development, construction, software and others are chronically late, over budget, and fail to meet customer requirements for functionality and quality. For decades, system dynamics simulation models have been successfully used to improve project management and to help resolve disputes. While these models have been very successful, there is a greater need to help managers and students learn how to manage complex projects more effectively before they tackle the real thing. Here, we describe a new management flight simulator that represents modern, complex projects, and present data evaluating its use with human participants. The interactive, web-based simulator extends the project model of Ford and Sterman (1998) to include the endogenous and stochastic introduction of new features in the market affecting customer expectations, precedence constraints affecting the degree of concurrency of different tasks, the ability to compress schedules and the consequences of doing so, the endogenous generation of errors and rework, and other realistic features of modern projects. The model can be parameterized to represent projects in a wide array of settings (civilian and defense; software, consumer products, construction projects, etc.). We calibrate the model to capture typical projects in three important industries: software, hardware and large-scale construction, each differing in timeline, engineering and skilled-labor needs, the rate at which technology evolves, the cost of rework, the ability to repair in the field, customer tolerance for errors, and other characteristics. Players make decisions regarding schedule, hiring, task concurrency, scope changes, and so on, and receive continuous, realistic feedback on project performance including progress, labor productivity, costs, errors and rework, etc. We report results of sessions with management and engineering students from different backgrounds, and compare their strategies and performance to simulated players.
The decision rules used by the bots are obtained by optimizing their parameters with simulated annealing and Monte Carlo methods.

Size increases produce coordination trade-offs in decentralized body plansMircea Davidescu, Pawel Romanczuk, Thomas Gregor and Iain CouzinMonday, 15:40-17:00

A fundamental question of complex systems is how the behavior of a system emerges from the behavior of its components, and how such emergent behavior changes as systems increase in size. Among the most well-known examples of emergent behavior is coordination in collective decision-making, which occurs in biological systems ranging from quorum sensing in bacteria to group decisions made by social primates. The problem of coordinated decision making is especially apparent in primitive multicellular organisms such as the earliest-diverged multicellular animals, which have body plans with no central organization or specialized tissue for coordination and can grow to indeterminate sizes. Such organisms also exhibit varying degrees of coordinated motility, from the small, motile Placozoa to the large and immobile adult marine sponges. An important question in the evolution and ecology of such organisms is whether coordination at the scale of the organism is impacted by their tremendous size variation.

We used a mixture of high-resolution tracking microscopy, spatio-temporal correlation analyses, and computer simulations to analyze the multicellular dynamics of Placozoa, among the earliest-diverged and arguably simplest motile animals. Placozoa are free-living marine organisms that are effectively a disk of cells three layers thick, which crawls along substrates by the coordinated beating of tens of thousands of ciliated cells on its ventral surface. Their ability to coordinate locomotion under such rudimentary anatomical conditions makes them an ideal model system for understanding if and how coordination can be achieved in the absence of complex network structure. We used an automated stage and tracking system so that individual organisms could be continuously monitored over long timespans, allowing us to observe how tissue dynamics scale with organism size. We determined how size affects coordination in such animals by measuring the correlation of movement fluctuations and identifying how the correlation length and strength scale with organism size. We find that such animals face a trade-off whereby increasing in size decreases the ability to effectively coordinate locomotion. We further used simulations of fluid and elastic collective particle systems to determine whether this trade-off between size and coordination is inevitable in such multicellular sheets, or whether it can be overcome under certain conditions. One such condition could be criticality, at the phase transition between ordered and disordered behavior, where the susceptibility of a system to external perturbations and the correlation length may be maximized, which could be advantageous for coordination. We find that the dynamics of T. adhaerens resemble those of multicellular sheets tuned to criticality, but that it is precisely in this critical state that the trade-off between size and coordination is accentuated.
We hypothesize that this is because cells are only able to interact with their nearest neighbors, and that this limitation may have been a driver for the evolution of hierarchical structure and nervous systems.

Social Complex Contagion in Music Listenership: A Natural Experiment with 1.3 Million ParticipantsJohn Ternovski and Taha YasseriMonday, 14:00-15:20

Social influence has been an important topic of research in the social sciences. We use Last.fm's song-listen data to quantify social influence on music listenership around live events. We study live events performed by “Hyped” (i.e. trending) and “Top” (i.e. most popular) artists. We analyse how listenership changes around the time of the live event, both for users who attended the event and, more importantly, for those who did not but are friends with someone who attended. We use three distinct types of data: (1) event attendance, (2) track listens, and (3) the Last.fm friends network. We extracted all live events in 2013 and 2014 by the most popular Hyped and Top artists. We tracked the listenership of 1.3 million users over a two-month horizon: one month of listenership data prior to attendance of an event and one month after. To assess the direct impact of live events on attendees' listenership, we use a regression discontinuity design. We find strong evidence of direct impacts on listenership among concert attendees of both Top and Hyped artists. As seen in Figure 1, the impacts are comparable across the two categories: a Top Artist live event increases listenership by 1.13 song plays (z-test p-value < .001), while a Hyped Artist live event increases listenership by 1.05 song plays (z-test p-value < .001). To determine whether attendees, in turn, influence their friends who did not attend the same event, we apply the same regression discontinuity design to all friends of the attendees. The indirect effect on friends of Top Artist attendees is 0.060 additional song plays, more than 5% of the direct impact on listenership. While this is a trivial impact on its own, it is important to emphasize that the mean Top Artist attendee has 81.3 friends. This means that one user's attendance translates into 4.8 additional song plays on average.
The indirect effect on friends of Hyped Artist attendees was not significant. To determine whether the indirect impact grows with the number of friends who attended the event, we run a series of regression discontinuity analyses across non-attending users with various numbers of attending friends. We see that as the number of friends who attended the event increases, the influence on listenership increases monotonically. It is possible that individuals with more friends are simply more persuadable. To better discern whether this pattern stems from multiple attendees exerting influence on the non-attender, we perform a permutation test. Namely, we compute the discontinuity estimates by number of attendee friends in our actual dataset and compare them to a synthetic dataset in which we held the friend network constant but randomly assigned each non-attending friend to a different live event date of the same artist in the same 60-day period, such that no participant in their friend network attended that live event. We observe that the indirect effects vanish in the shuffled network, confirming the presence of social influence within the friendship network.
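The permutation logic can be sketched on synthetic data as follows. The simple before/after mean-difference estimator and the placebo-date scheme here are illustrative assumptions, not the authors' exact regression discontinuity implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def discontinuity(listens, event_idx, window=7):
    """Jump in mean daily listens across the event date."""
    pre = listens[event_idx - window:event_idx]
    post = listens[event_idx:event_idx + window]
    return post.mean() - pre.mean()

def permutation_pvalue(listens, event_idx, placebo_idx, n_perm=2000):
    """Share of placebo event dates whose discontinuity is at least
    as large as the one observed at the true event date."""
    observed = discontinuity(listens, event_idx)
    null = np.array([discontinuity(listens, i)
                     for i in rng.choice(placebo_idx, size=n_perm)])
    return float(np.mean(null >= observed))

# Synthetic check: a jump of 6 plays/day on day 30 of a 60-day series,
# with placebo event dates drawn from the pre-event period.
listens = rng.poisson(3.0, 60).astype(float)
listens[30:] += 6.0
p = permutation_pvalue(listens, 30, np.arange(8, 23))
```

A small p indicates the observed jump is unlikely under shuffled event dates, mirroring the shuffled-network comparison above.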

Social Fragmentation in Various ScalesLeila Hedayatifar, Alfredo J. Morales and Yaneer Bar-YamThursday, 15:40-17:00

Despite the world becoming highly connected, society seems to be increasingly polarized and fragmented. All over the globe, nationalist currents and regionalisms gain strength and threaten to radically transform the composition of countries and States as we know them. The basis of this phenomenon lies in the complex structure and dynamics of social systems. Far from homogeneously mixing with each other, we self-organize into groups that can span multiple scales, from families or friends up to cities and cultures. We study the modular structure of society using mobility and communication data obtained from social media. We see that societies are organized in geographical patches where people meet and interact both offline and online, revealing the geometry of the social tissue. The smaller patches can span neighborhoods, while the largest can span countries. We found that cultural, political and economic factors underlie the emergence of these patches, whose behaviors seem to be increasingly differentiated from one another. Given the challenges that we face as a global society, it is imperative that we learn how societies are actually structured.
The emergence of the structure we observe in the data can be explained with a network growth model that combines the preferential attachment mechanism, the human mobility gravity model, and a spatial growth process based on nearest neighbors. In our model, we consider a regular lattice representing a map of geographical locations that are connected to each other, simulating the way people travel. Three exponents, alpha, beta and gamma, control the effects of the preferential attachment mechanism, the geographical distance, and the spatial growth process, respectively. The model reveals that, as highly connected places, cities act like gravity centers, attracting human displacements from surrounding areas and creating spatial patches which people inhabit and within which they interact. The model is able to reproduce the heterogeneous spatial patterns of the degree distribution as well as the geographical modular structure of the resulting network. A quantitative analysis of the degree distribution and modularity confirms this similarity.
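A toy version of the attachment step, combining preferential attachment with a gravity-style distance decay on a lattice, might look like the sketch below. It omits the nearest-neighbor spatial growth term (the gamma exponent), and all names and defaults are illustrative assumptions rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def grow_network(side=10, steps=300, alpha=1.0, beta=1.0):
    """Toy growth process on a side x side lattice of locations.

    At each step a random origin is linked to a destination drawn with
    probability proportional to (degree + 1)^alpha / distance^beta,
    mixing preferential attachment (alpha) with a gravity-model
    distance decay (beta).
    """
    n = side * side
    xy = np.array([(i // side, i % side) for i in range(n)], dtype=float)
    adj = np.zeros((n, n), dtype=int)
    deg = np.zeros(n)
    for _ in range(steps):
        i = rng.integers(n)
        d = np.linalg.norm(xy - xy[i], axis=1)
        w = (deg + 1.0) ** alpha / np.maximum(d, 1.0) ** beta
        w[i] = 0.0                            # no self-loops
        j = rng.choice(n, p=w / w.sum())
        adj[i, j] = adj[j, i] = 1             # undirected link
        deg[i], deg[j] = adj[i].sum(), adj[j].sum()
    return adj, deg
```

Large alpha makes the degree distribution heavy-tailed (gravity centers), while large beta keeps links short-ranged, producing the spatial patches described above.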

Social Neointeraction on Facebook, Presidential Campaign Mexico 2018Carlos Augusto Jiménez ZarateTuesday, 18:00-20:00

Social Signals in the Blockchain Trading NetworkYaniv Altshuler and Shahar SominMonday, 14:00-15:20

Blockchain technology, until recently known mostly within small technological circles, is bursting throughout the globe. Its economic and social impact could fundamentally alter traditional financial and social structures.
The issuing of cryptocurrencies on top of the Blockchain system by startups and private-sector companies is becoming a ubiquitous phenomenon, driving exponential growth in the trading of these crypto-coins.
Apart from being a trading ledger for cryptocurrencies, the Blockchain can be viewed as a social network; analyzing and modeling it can enhance our understanding of cryptocurrency-world dynamics, and specifically its high volatility.

In this work, we investigate trading data of ERC20-protocol-compliant cryptocurrencies on top of the Ethereum Blockchain. The dataset comprises over 15 million transactions spanning January 2017 to February 2018, depicting the trading activity of 3.8 million wallets in 350 different crypto-coins.
We report our preliminary findings, focused on analyzing the network dynamics of this Blockchain traffic. We reviewed crypto-coins' popularity, examining the number of unique buyers and sellers per coin. As expected, both exhibit a power-law distribution, presented in Figure 1. We further constructed a network with all ERC20 trading wallets as nodes, and edges formed by buy or sell trades across all crypto-coins. We examined both the incoming and outgoing degrees of nodes in the network, where the in-degree of a node is the number of unique wallets that ever sold ERC20 crypto-coins to the wallet represented by that node, and vice versa for the out-degree. We established that both the in-degree and out-degree distributions exhibit a strong power-law pattern, presented in Figure 2.

We validated our findings across 20 different points in time and analyzed periods of varying length, from 3 days to 3 months. In all cases the power-law distribution persists, with roughly similar γ values.
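As an illustration of this kind of degree-distribution analysis, the exponent γ can be estimated from a degree sequence with the continuous maximum-likelihood (Hill) estimator. This sketch, checked on synthetic Pareto samples, is an assumption about method, not necessarily the estimator the authors used.

```python
import numpy as np

rng = np.random.default_rng(2)

def powerlaw_exponent(degrees, k_min=1.0):
    """Continuous maximum-likelihood (Hill) estimate of gamma in
    p(k) ~ k^(-gamma) for the tail k >= k_min."""
    tail = np.asarray([k for k in degrees if k >= k_min], dtype=float)
    return 1.0 + len(tail) / np.sum(np.log(tail / k_min))

# Synthetic check: inverse-CDF samples from p(k) ~ k^(-2.5) with k >= 1,
# so the estimator should recover gamma close to 2.5.
samples = (1.0 - rng.random(50_000)) ** (-1.0 / 1.5)
gamma_hat = powerlaw_exponent(samples)
```

Applying the same estimator to in-degree and out-degree sequences over different time windows is one way to check that γ stays roughly stable.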

The sources of complexity of urban phenomena: implications of a new theory of urban scalingAndres Gomez-Lievano and Oscar Patterson-LombaThursday, 15:40-17:00

Network-based explanations of the power-law scaling of socioeconomic phenomena are fundamentally correct, but they are statistically limited. Their main limitation is their inability to parsimoniously account for the different scaling patterns across phenomena without invoking a different network for each phenomenon. We recently proposed a different angle to explain these patterns, based on two hypotheses (Gomez-Lievano, Patterson-Lomba, & Hausmann, 2016): that socioeconomic phenomena can only occur if a multiplicity of different, but complementary, factors are simultaneously brought together and combined, and that the diversity of factors scales logarithmically with population size. The first hypothesis is grounded in the theory of economic complexity, the second in the theory of cultural evolution. The unification of these two theories stands as a promising new theory of urban phenomena. Instead of considering social phenomena as the result of what happens when pairs of people interact, we consider social phenomena as the result of multiple elements (material or informational, social or biological, etc.) co-occurring in a social environment following a specific “recipe”. In this new theory the notion of “complexity” acquires a central role in determining the probability that a specific socioeconomic phenomenon occurs in a city. The equations born out of the model suggest that “complexity” is a separate property of phenomena, cities, and people, in the sense that each can facilitate or hinder the probability of occurrence. Some phenomena are inherently easier to bring about than others, some cities foster certain phenomena more than others, and some people are more capable of contributing to certain phenomena than others. We quantify all these sources of complexity and investigate their implications.
We show that this new way of modeling cities has better explanatory and predictive power as compared to other alternative models in a variety of out-of-sample predictive tasks.

The Spatial Network Analysis based on Chinese SurnamesYongbin Shi, Le Li, Jiawei Chen, Yougui Wang and H.Eugene StanleyMonday, 15:40-17:00

Surnames are inherited through the paternal line, paralleling the inheritance of the Y chromosome. As cheap substitutes for Y-chromosome typing, surnames can be considered a genetic metaphor and a useful research tool for analysing complex historical and temporal population migration. Because of their long history and traditional culture, Chinese surnames are an excellent data source. In this paper, we propose a network-based clustering approach to study geographical surname affinity and map the geography and ethnicity of surnames. The surname dataset contains 1.28 billion census records from China’s National Citizen Identity Information System, including the surnames and regional information of all Chinese registered citizens. We use the isonymic distance to characterize the dissimilarity of surname structure between the people of two regions. We then develop an innovative Multi-layer Minimum Spanning Tree (MMST) to construct a spatial network, and use the fast unfolding algorithm to detect communities on the network. The distinctness of the resulting communities on the map suggests that this classification method is an effective tool for studying geo-genealogy. The findings also provide evidence regarding historical population migration and the origins and evolution of modern humans in China.
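The abstract does not specify the exact form of the isonymic distance; as a hedged illustration, one common choice, a Lasker-type distance (minus the log of the surname-frequency overlap between two regions), can be computed as follows.

```python
import math
from collections import Counter

def surname_freqs(records):
    """Relative surname frequencies for one region's list of surnames."""
    counts = Counter(records)
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

def isonymic_distance(region_a, region_b):
    """Lasker-type isonymic distance: minus the log of the
    shared-surname frequency overlap between two regions."""
    pa, pb = surname_freqs(region_a), surname_freqs(region_b)
    overlap = sum(pa[s] * pb.get(s, 0.0) for s in pa)
    return float("inf") if overlap == 0.0 else -math.log(overlap)
```

Pairwise distances of this kind between regions would supply the edge weights from which a minimum spanning tree, and a multi-layer extension of it, can be built.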

Special Operations Forces: A Global Immune System?Joseph Norman and Yaneer Bar-YamWednesday, 15:40-17:00

The use of special operations forces (SOF) in war fighting and peacekeeping efforts has increased dramatically in recent decades. A scientific understanding of the reason for this increase would provide guidance as to the contexts in which SOF can be used to best effect. Ashby's law of requisite variety provides a scientific framework for understanding and analyzing a system's ability to survive and prosper in the face of environmental challenges. We have developed a generalization of this law that extends the analysis to systems that must respond to disturbances at multiple scales. This analysis identifies a necessary tradeoff between scale and complexity in a multiscale control system. As with Ashby's law, the framework applies to the characterization of successful biological and social systems in the context of complex environmental challenges. Here we apply this multiscale framework to provide a control-theoretic understanding of the historical and increasing need for SOF, as well as for conventional military forces. We propose that the essential role distinction lies in the separation between high-complexity fine-scale challenges and large-scale challenges. This leads to a correspondence between the role SOF can best serve and that of the immune system in complex organisms, namely the ability to respond to fine-grained, high-complexity disruptors and preserve tissue health. Much like a multicellular organism, human civilization is composed of a set of distinct and heterogeneous social tissues. Responding to disruption and restoring health in a system with highly diverse local social conditions is an essentially complex task. SOF have the potential to mitigate harm without disrupting normal social tissue behavior. This analysis suggests how SOF might be leveraged to support global stability and mitigate cascading crises.

Spread of Zika virus in the AmericasAna Pastore Y Piontti, Qian Zhang, Kaiyuan Sun, Matteo Chinazzi, Natalie Dean, Diana Rojas, Stefano Merler, Dina Mistry, Piero Poletti, Luca Rossi, M. Elizabeth Halloran, Ira Longini and Alessandro VespignaniThursday, 14:00-15:20

We use a data-driven global stochastic epidemic model to project past and future spread of the Zika virus (ZIKV) in the Americas. The model has high spatial and temporal resolution, and integrates real-world demographic, human mobility, socioeconomic, temperature, and vector density data. We estimate that the first introduction of ZIKV to Brazil likely occurred between August 2013 and April 2014 (90% credible interval). We provide simulated epidemic profiles of incident ZIKV infections for several countries in the Americas through February 2017. The ZIKV epidemic is characterized by slow growth and high spatial and seasonal heterogeneity, attributable to the dynamics of the mosquito vector and to the characteristics and mobility of the human populations. We project the expected timing and number of pregnancies infected with ZIKV during the first trimester, and provide estimates of microcephaly cases assuming different levels of risk as reported in empirical retrospective studies[1].

Currently, numerous Zika virus vaccines are being developed. However, identifying sites to evaluate the efficacy of a Zika virus vaccine is challenging due to the general decrease in Zika virus activity. We identify areas that may have increased relative risk of Zika virus transmission during 2017 and 2018. The analysis focuses on eight priority countries (i.e., Brazil, Colombia, Costa Rica, Dominican Republic, Ecuador, Mexico, Panama, and Peru). The model projected low incidence rates during 2017 and 2018 for all locations in the priority countries but identified several sub-national areas that may have increased relative risk of Zika virus transmission in 2017 and 2018[2].

[1] Zhang et al., Proceedings of the National Academy of Sciences May 2017, 114 (22) E4334-E4343; DOI: 10.1073/pnas.1620161114
[2] The ZIKAVAT Collaboration, bioRxiv 187591; doi: https://doi.org/10.1101/187591

Spreading and influence of misinformation and traditional fact-based news in TwitterAlexandre Bovet and Hernan MakseTuesday, 15:40-17:00

Recent social and political events, such as the 2016 US presidential election, have been marked by a growing volume of so-called “fake news”, i.e. fabricated information that disseminates deceptive content or grossly distorts actual news reports, shared on social media platforms. While misinformation and propaganda have existed since ancient times, their importance and influence in the age of social media are still not clear.

Here, we characterize and compare the spread of information from websites containing fake news with the spread of information from traditional news websites on the social media platform Twitter, using a dataset of more than 170 million tweets concerning the two main candidates of the 2016 US presidential election. We find that 29% of the tweets linking to news outlets point to websites containing fake or extremely biased news.

Analyzing the information diffusion networks, we find that users diffusing fake news form more connected and less heterogeneous networks than users in traditional news diffusion networks. Influencers in each news website category are identified using the collective influence algorithm. While influencers of traditional news outlets are journalists and public figures with verified Twitter accounts, most influencers of fake news and extremely biased websites are unknown users or users with deleted Twitter accounts.

A Granger-causality analysis of the activity dynamics of influencers reveals that influencers of traditional news drive the activity of most of Twitter, while fake news influencers are, in fact, mostly reacting to the activity of Trump supporters.

Our investigation provides new insights into the dynamics of news diffusion on Twitter. In particular, our results suggest that fake and extremely biased news are governed by a different diffusion mechanism than traditional center and left-leaning news. The diffusion of center and left-leaning news is driven by a small number of influential users, mainly journalists, and follows diffusion cascades in a network with a heterogeneous degree distribution, typical of diffusion in social networks. The diffusion of fake and extremely biased news, by contrast, does not appear to be controlled by a small set of users, but rather takes place within a tightly connected cluster of users that does not influence the rest of Twitter activity. Our results therefore show that fake and extremely biased news, although present in considerable quantity, do not significantly influence Twitter opinion, and that traditional center and left-leaning news outlets drive Twitter activity.

A State-Space Modeling Framework for Engineering Blockchain-Enabled Economic SystemsMichael Zargham, Zixuan Zhang and Victor PreciadoTuesday, 14:00-15:20

Decentralized Ledger Technology (DLT), popularized by the Bitcoin network, aims to keep track of a ledger of valid transactions between agents of a virtual economy without the need of a central institution for coordination. In order to keep track of a faithful and accurate list of transactions, the ledger is broadcast and replicated across the machines in a peer-to-peer network. To enforce that the transactions in the ledger are valid (i.e., there is no negative balance or double spending), the network ‘as a whole’ coordinates to accept or reject new transactions according to a set of rules aiming to detect and block the operation of malicious agents (i.e., Byzantine attacks).
Consensus protocols are particularly important for coordinating the operation of the network, since they are used to reconcile potentially conflicting versions of the ledger. Regardless of the architecture and consensus mechanism used, the resulting economic networks remain largely similar, with economic agents driven by incentives under a set of rules.
Due to the intense activity in this area, proper mathematical frameworks to model and analyze the behavior of blockchain-enabled systems are essential. In this paper, we address this need and provide the following contributions: (i) we establish a formal framework, using tools from dynamical systems theory, to mathematically describe the core concepts in blockchain-enabled networks, (ii) we apply this framework to the Bitcoin network and recover the key properties of the Bitcoin economic network, and (iii) we connect our modeling framework with powerful tools from control engineering, such as Lyapunov-like functions, to properly engineer economic systems with provable properties.
Apart from the aforementioned contributions, the mathematical framework proposed herein lays a foundation for engineering more general economic systems built on emerging Turing-complete networks, such as the Ethereum network, through which complex alternative economic models are being explored.

Statistical Complexity in Biomedical SystemsWilliam SulisTuesday, 14:00-15:20

Biomedical systems count among the most complex systems currently known to science. As such, they collectively exhibit all of the features of complex systems, including multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. These have profound implications for the statistical analysis of such systems. In spite of this, researchers continue to use statistical tools designed for stable, simple, linear systems. These are inadequate and frequently inappropriate for dealing with complex biomedical systems. This paper discusses the problem of stochastic complexity in complex biomedical systems, including: intrinsic randomness, non-Gaussian probability distributions, the absence of mean and/or variance, non-stationarity, contextuality, non-Kolmogorov probabilities, and the absence of conditional probabilities. Some alternatives to standard methods are suggested, including time-series and fluctuation-spectrum analysis and the use of Khrennikov’s contextual probability.

Step by Step to Stability and Peace in SyriaRaphael Parens and Yaneer Bar-YamTuesday, 18:00-20:00

The revolution and Civil War in Syria have led to substantial death and suffering, a massive refugee crisis, and growth of ISIS extremism and its terror attacks globally. Conflict between disparate groups is ongoing. Here we propose that interventions should be pursued to stop specific local conflicts, creating safe zones that can be expanded gradually and serve as examples for achieving a comprehensive solution for safety, peace and stable local governance in Syria.

Stochastic algebra of interaction networksTed TheodosopoulosWednesday, 15:40-17:00

We consider spin processes on networks that evolve in response to the stochastic spin dynamics. We present an algebraic framework that allows these interaction networks to act on one another. The resulting algebraic action couples local and global topological components, allowing us to probe non-ergodic properties of the limiting behavior. We discuss applications of these techniques in managing the evolving network complexity in economics and biology.

A structural approach to visualization of biotic bipartite networksJavier Garcia-Algarra and Mary Luz MouronteTuesday, 15:40-17:00

Biotic interactions among two different guilds of species are very common in nature and are modelled as bipartite networks.

The usual ways to visualize them, the bipartite graph and the interaction matrix, produce messy plots when the number of nodes rises above 50–60.

We have developed two new types of visualization, exploiting an observed structural property of these networks called nestedness: there is a core of strongly interconnected nodes, while low-degree nodes are tied to this core.

Using the k-core decomposition, we group species by their connectivity. With the results of this analysis we build one plot based on information reduction (the Polar Plot) and another that takes the groups as elementary blocks for spatial distribution (the Ziggurat Plot).
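The grouping step can be sketched with networkx (a toy illustration with hypothetical species names, not the authors' implementation):

```python
import networkx as nx

# Toy bipartite plant-pollinator network (hypothetical species names).
edges = [("plant1", "pol1"), ("plant1", "pol2"), ("plant1", "pol3"),
         ("plant2", "pol1"), ("plant2", "pol2"), ("plant3", "pol1"),
         ("plant4", "pol4")]
G = nx.Graph(edges)

# k-core decomposition: a node's core number is the largest k such that
# it belongs to a subgraph where every node has degree >= k.
core = nx.core_number(G)

# Group species by connectivity shell, the input to the Polar/Ziggurat plots.
shells = {}
for node, k in core.items():
    shells.setdefault(k, []).append(node)
print(shells)
```

Here the strongly interconnected species end up in the highest shell (the nested core), while the low-degree species fall into the outer shells.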

The structure of complex neural networks and its effects on learningPau Vilimelis Aceituno, Gang Yan and Yang-Yu LiuThursday, 14:00-15:20

Introduction:
Reservoir Computing (RC) is one of the rare computing paradigms which can be used both as a theoretical neuroscience model [2] and as a machine learning tool [1]. The key feature of the RC paradigm is its reservoir, a directed and weighted network that represents the connections between neurons. Despite extensive research efforts, the impact of the reservoir topology on RC performance remains unclear. Here we explore this fundamental question and show, both analytically and computationally, how structural features determine the type of tasks that these recurrent neural networks can perform.

Methods:
Computational methods: We create large recurrent networks of sigmoidal neurons and test them on different tasks: the capacity to retain previous inputs, precision in forecasting time series, and voice recognition. The training is done by feeding the network an input signal and then using linear regression on the states of the neurons to obtain an output that matches our training data. Performance is then measured as an error function on test data.
Analytical methods: We consider every network as a high-dimensional dynamical system in which each dimension (corresponding to a neuron) has a probability distribution that depends on the network structure. We then derive the effects of changing parameters of the network structure, and combine this with the tasks studied by taking the error function associated with each task and deriving the effects that different probability distributions will have.

Results and Discussion:
We focus on two network properties: First, by studying the correlations between neurons we demonstrate how the degree distribution affects the short-term memory of the reservoir. And second, after showing that adapting the reservoir to the frequency of the time series to be processed increases the performance we demonstrate how this adaptation is dependent on the abundance of short cycles in the network. Finally, we leverage those results to create an optimization strategy to improve time series forecasting performance. We validate our results with various benchmark problems, in which we surpass state-of-the-art reservoir implementations.
Our approach provides a new way of designing more efficient recurrent neural networks and of understanding the computational role of common network properties.

Supervised machine learning algorithm for accurately classifying cancer type from gene expression dataShrikant PawarTuesday, 15:40-17:00

Intelligent optimization algorithms have been widely used to deal with complex nonlinear problems. In this paper, we have developed an online tool for accurate cancer classification using an SVM (Support Vector Machine) algorithm, which can predict a lung cancer type with an accuracy of approximately 95 percent. Based on the user specifications, we chose to write this suite in Python and HTML on top of a MySQL relational database. A Linux server supporting a CGI interface hosts the application and database. The hardware requirements of the suite on the server side are moderate. Bounds and ranges have also been considered and need to be used according to the user instructions. The developed web application is easy to use, and data can be quickly entered and retrieved. It is accessible through any web browser connected to the firewall-protected network. We have provided adequate server and database security measures. Notable advantages of this system are that it runs entirely in the web browser with no client software needed, runs on an industry-standard server supporting major operating systems (Windows, Linux and OSX), and allows uploading external files. The developed application will help researchers utilize machine learning tools for classifying cancer and its related genes. Availability: The application is hosted on our personal Linux server and can be accessed at: http://131.96.32.330/login-system/index.php
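As an illustrative sketch only (the published tool and its gene-expression data are not reproduced here), an SVM classifier of this kind can be set up with scikit-learn on synthetic stand-in data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for gene expression: 200 samples x 50 genes, with
# two cancer types separated by a shift in a subset of genes.
X = rng.normal(size=(200, 50))
y = np.repeat([0, 1], 100)
X[y == 1, :10] += 1.5  # hypothetical differentially expressed genes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

# Standardize expression values, then fit an SVM with an RBF kernel.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(accuracy)
```

In a real pipeline the held-out accuracy, not training accuracy, is what supports a claim like the 95 percent figure above.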

Support for a Developmentally-Based Model of Cognition and Implications for Smarter MachinesMichael Commons and Patrice MillerWednesday, 15:40-17:00

In 1950, the British mathematician Alan Turing proposed a method to test whether a computer possessed human-like intelligence. The method was that a panel of humans conversed with an unknown entity via text. If the entity was a computer and the panel believed that it was human, the computer passed the Turing test. At present, computers are still unable to emulate human behavior. This presentation suggests that in order to more precisely emulate a human, a computer must possess both the cognitive-developmental and the emotional smarts involved. In order to develop such smarts, a learning machine would have to learn from its environment the way that human and nonhuman organisms do, rather than the way current computers do.

This paper describes a mathematically-based model of human cognition, the Model of Hierarchical Complexity (MHC). The MHC provides an analytic, a priori measurement of the difficulty of task actions. The difficulty is represented by the Orders of Hierarchical Complexity (OHC) (Commons & Pekker, 2008). There are seventeen known OHCs. Development itself is conceptualized then as changes in the difficulty of tasks that can be completed over the lifetime of an organism. Initially, organisms address only rudimentary tasks, and some organisms never develop beyond this basic level. As one moves up the evolutionary ladder, increasing numbers of organisms are likely to complete increasingly hierarchically complex tasks.

MHC describes a form of information that is different from traditional information theory (Shannon & Weaver, 1948) in which information is coded as bits that increase quantitatively with the amount of information. Here, a task action is defined as more hierarchically complex when: 1) A higher-order task is defined in terms of two or more tasks at the next lower OHC, 2) Higher-order tasks organize the lower order actions, and 3) The lower order tasks are coordinated non-arbitrarily, instead of as an arbitrary chain.

Every task completed by an organism or entity has an OHC associated with it. When an organism or a machine completes an action at a given OHC, they are said to be performing at the Stage of Development with the same number and name as the associated OHC. This important separation distinguishes the structure of the task from the performance on the task.
This presentation provides evidence in support of this model of cognitive development. It then discusses implications of the model for the design of truly intelligent machines. It highlights that there can be no one valid Turing test. Instead, the Turing test itself has to be considered in developmental terms. Some entities and organisms might only solve very simple tasks; nevertheless, if a well-designed machine solves such a task, a panel would be unlikely to detect it as different from a living organism. Conversely, the Model also suggests that humans solve tasks at an OHC far beyond that of other organisms, and that a true Turing test for humans would have to go well beyond Turing’s original suggestion.

Synthecracy: "NewPower Deep Organization Structure for Collaboration" the #WeaveletJaap van TillWednesday, 15:40-17:00

A new organization structure is presented, based on a scaling synthetic aperture, self-governing by interconnecting people in parallel, to combine and cooperate with very diverse skills and different points of view. Such a #Weavelet can cope with complexity by distributing and interconnecting models of reality. And it can, as a collective intelligence with distributed authority implementing a Neural Network (AI) structure, react fast to unexpected situations. All connected participants have a hologram-like overview of the total field, and by orthogonalization complex situations are sorted into stacks of issues that are agnostic of the rest, so they can be improved and matched by correlation and convolution. Patterns can be recognized and value can be created by trans-discipline and trans-tribal combining of solutions that work and by the resulting synergy, which gives incentives to the individuals that benefit from the general interest. Examples can be found in universities, in R&D, and, for instance, in "flocks" of interconnected self-driving cars. Weavelets, like swarms of birds, are very resilient (they can cope with imperfections and incompleteness, like lenses do), can scale up by adding people, and can themselves be interconnected towards forming a Global Brain.

Synergistic Selection: A Bioeconomic Theory of Complexity in EvolutionPeter CorningWednesday, 15:40-17:00

The rise of complexity in living systems over time has become a major theme in evolutionary biology, and a search is underway for a “grand unified theory” (as one theorist characterized it) that can explain this important trend, including especially the major transitions in evolution. As it happens, such a theory already exists. It was first proposed more than 30 years ago and was re-discovered independently by biologists John Maynard Smith and Eörs Szathmáry in the 1990s. It is called the Synergism Hypothesis, and it is only now emerging from the shadows as evolutionary theory moves beyond the long-dominant, gene-centered approach to evolution known as neo-Darwinism. The Synergism Hypothesis is, in essence, an economic theory of complexity. It is focused on the costs and benefits of complexity, and on the unique creative power of synergy in the natural world. The theory proposes that the overall trajectory of the evolutionary process over the past 3.8 billion years or so has been shaped by functional synergies of various kinds and a dynamic of Synergistic Selection (in effect, a sub-category of natural selection). The synergies produced by various forms of cooperation create interdependent “units” of adaptation and evolutionary change. Cooperation may have been the vehicle, but synergy was the driver. I will also briefly describe some of the highlights of my new book on the subject, "Synergistic Selection: How Cooperation Has Shaped Evolution and the Rise of Humankind."

System Dynamics to Improve Success & Sustainability of K-12 Educational InterventionsRoxanne Moore and Michael HelmsThursday, 15:40-17:00

K-12 schools and school systems are highly complex, exhibiting many interconnected relationships and feedback loops, making it difficult to predict the outcomes of potential policy changes. While system dynamics and agent-based modeling have rarely been applied to educational settings, new representations and system descriptions may enable more effective policies to be enacted. Currently, schools looking to change their performance or trajectory, or to implement new curricula, apply some type of intervention, or an external partner does so on their behalf. School settings are often crudely described using statistics about socioeconomic status, demographics, and test performance, while interventions are described in terms of their intended student outcomes. However, rarely is the compatibility of a particular school and a particular intervention taken into account, nor are the pathways or mechanisms for change clearly described.

Many educational interventions have proven to be unsustainable or do not scale adequately across diverse K-12 school settings. While other complex disciplines such as healthcare have embraced systems thinking and modeling to improve outcomes, education as a discipline has not adopted that strategy. Promoting usage of tools from complex systems theory within the education community requires frameworks and strategies that can be employed within educational settings and understood by various stakeholders.

In the context of such educational interventions, models could be used to depict the intricate relationships between students, teachers, administration, local and federal policies, the community in which the school is situated, and the intervening agency. Successful interventions require management strategies on multiple ‘levels’: the student level, the teacher level, the administrator level, and the community level. Our research shows that representing K-12 school settings using visual diagrams that depict the feedback loops present among their actors and attributes can facilitate discussion across various stakeholders and improve the chances of a successful, sustainable intervention.

In this work, causal loop diagrams of interactions between school settings and interventions are used to inform intervention development and iteration and to improve sustainability outcomes. The models were developed and refined over a 4-year grant period where the intervention is a novel computer science module for teaching programming using music remixing. These models highlight attributes that both help and hinder success in achieving the desired student-level outcomes, including improved content knowledge in programming and greater interest and engagement in computing. The models and their development facilitated discussion and decision-making among the intervention team, as illustrated by a qualitative analysis of interview data. Curriculum and teacher training changes were made as a direct result of models created from classroom observation data. Finally, we will discuss the ‘predictive’ power of these models to characterize the strengths and weaknesses of school settings and assess a school’s fitness for implementing this particular intervention. Understanding a school environment prior to implementing a particular intervention will lead to fewer interventions that are incompatible with a school’s climate or are not sustainable.

Temporal metrics for exponential random graph models of time-evolving networksCatalina Obando and Fabrizio De Vico FallaniWednesday, 14:00-15:20

Recently we have seen a growing interest in temporal networks, as they are a more accurate representation of real-life complex systems. Temporal networks estimated from experimentally obtained data represent one instance of a larger number of realizations with similar dynamic topology. A modeling approach is therefore needed to support statistical inference.

In this work, we adopted a statistical model based on temporal exponential random graphs (TERGM) to reproduce time-evolving networks. TERGMs were composed of two different local (temporal) graph metrics: temporal triangles and two-paths. We first validated this approach on synthetic networks generated using a correlated version of the Watts-Strogatz model, where each temporal network is generated from the previous one by rewiring links with an increasing probability, starting from a lattice.
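Such a correlated Watts-Strogatz sequence might be generated as follows (a sketch with networkx; the rewiring schedule is an assumption, not the authors' exact procedure):

```python
import random
import networkx as nx

random.seed(0)
n, k, T = 100, 4, 10  # nodes, lattice neighbors, number of snapshots

# Start from a ring lattice (Watts-Strogatz with p = 0); each snapshot
# rewires links of the previous snapshot with a growing probability.
snapshots = [nx.watts_strogatz_graph(n, k, p=0.0)]
for t in range(1, T):
    p_t = t / T  # assumed schedule for the increasing rewiring probability
    G = snapshots[-1].copy()
    for u, v in list(G.edges()):
        if random.random() < p_t:
            w = random.randrange(n)
            if w != u and not G.has_edge(u, w):
                G.remove_edge(u, v)
                G.add_edge(u, w)
    snapshots.append(G)

# Rewiring adds shortcuts, so global efficiency rises along the sequence.
eff = [round(nx.global_efficiency(g), 3) for g in snapshots]
print(eff[0], eff[-1])
```

Because each snapshot is rewired from its predecessor rather than from the lattice, consecutive networks stay correlated, which is what a temporal model must reproduce.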

Results showed that the model including temporal two-paths and triangles statistically reproduced the main properties of synthetic networks assessed by link prediction capacity and global- and local-efficiency.

We finally applied our approach to the Facebook temporal network, and we show that only by including temporal two-paths is our TERGM able to generate a good fit and to statistically reproduce the evolution of the global efficiency of the network.

These preliminary results support the development of alternative tools to model the evolution of complex networks based on temporal connection rules, with applications ranging from social science to neuroscience.

There is something about networks: Effects of political and regulatory pressure on women's board networksRuth Mateos de Cabo, Pilar Grau, Patricia Gabaldón and Ricardo GimenoTuesday, 18:00-20:00

This paper analyzes the impact of political and regulatory pressure by various European countries to increase the presence of women on boards on female directors’ centrality in the European board-member network. We use a longitudinal approach, following the evolution of the main topological measures of a European global director network (made up of listed firms of 39 countries and 4 territories in Europe, obtained from BoardEx) from 1999 to 2014. This results in an extensive sample of 425,322 observations of board-of-director positions, corresponding to 41,107 different directors, of which 11.9% are women. The results of the panel data models show that although affirmative action has accelerated the representation of women on boards, it has had different effects on their location in the network. Board gender quotas and corporate governance code recommendations to promote gender diversity on boards positively influence the number of connections women directors have (degree), as well as the number of times they connect two other directors (betweenness). However, the effects on the other common measures of centrality (closeness and eigencentrality) are mixed. Using as dependent variables the factors resulting from a factorial analysis, which help us group and interpret the different centrality measures in the context of women on boards, panel results show that corporate governance code mentions of board diversity have a positive direct effect on those centrality measures that are more related to visibility (degree) and closeness in the network (what we interpret as measures of ‘soft’ influence), whereas board gender quotas produce a clear increase in those other measures that denote real power (betweenness, meaning control of flows, and eigencentrality, measuring how well connected a director is).

“To be or not to be” vs. “From being to becoming”: Inequality as a property of complex social systemsCzeslaw MesjaszMonday, 15:40-17:00

The challenges of social and economic inequality have been known since the onset of civilizations. Already in the 20th Century several major works on that topic were published by Sen, but a new significant impulse was given to the discussion after the publication of research by Piketty and co-authors. Those publications were followed by further works of Stiglitz and Milanovic, and accompanied by more or less “shocking” reports and results of empirical research illustrating dramatic discrepancies in the distribution of income and wealth in the world society (OECD, UNDP, UNU/WIDER, World Bank). The discussion on inequality includes two major approaches. The first embodies narrow empirical approaches, often without a deeper explanation of causes. In the second approach, inequality is analyzed within a framework of broad ideological and political considerations.
There exists a research gap in which the middle-range theoretical discourse based on systems thinking, and complex systems studies in particular, can be placed. Analogies, metaphors and mathematical models deriving from complex systems studies can be helpful for a better understanding of the causes and effects of socio-economic inequality. Narrowing the discussion to some preliminary issues, the paper aims at showing how modern systems thinking, and especially the ideas dealing with the complexity of social systems, can be helpful for a better understanding of the phenomenon of sociopolitical inequality. The validity of complex systems approaches for studying socio-economic inequality is of special importance under the conditions of the modern Information Society.
The following, to some extent provocative, hypothesis, or perhaps conjecture, will be discussed: in a society in which the basic needs of the population are fulfilled (this concerns the developed countries), and social activities take place predominantly in the symbolic sphere, social and economic inequality is a natural, and perhaps even a stable, state. In other words, it may lead to social tensions, but large-scale crises would be caused by other factors and not by inequality alone. The main reason for this situation is that the mechanisms of increasing inequality operate in the symbolic sphere. It may even be hypothesized that this is a kind of Baudrillard simulacrum-like society. A preliminary argument for this thesis can be illustrated with simple examples. First, the market values of Facebook and Google are purely symbolic. In addition, in the broadest sense, money is also just information, and derivative instruments are the “purest” information. Moreover, the emergence of hierarchies is different in physical systems and in purely symbolic (information) systems, as depicted in the works of H. Simon.
To argue preliminarily that in an information-dominated society inequality is a natural phenomenon, a survey of applications of the following concepts from complex systems will be developed: causes and consequences of the Pareto distribution, the Lorenz curve, Zipf’s law, scale-free networks, thermodynamic models and analogies, different types of hierarchical structure of systems, holarchy, heterarchy, functional differentiation of systems, and other formal models. In parallel, qualitative ideas about the complexity of social systems, such as Luhmann’s concepts and others, will also be used. The results of this preliminary study can be treated as a point of departure for more detailed models and empirical research.
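As a small numerical illustration of the first items on this list (a sketch, not part of the paper), the Gini coefficient of a Pareto-distributed wealth sample can be computed from the Lorenz ordering:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wealth sample from a Pareto distribution with shape alpha.
alpha = 1.5
wealth = rng.pareto(alpha, 100_000) + 1.0  # Pareto with minimum wealth 1

# Sample Gini coefficient, computed from the sorted (Lorenz) ordering.
w = np.sort(wealth)
n = len(w)
i = np.arange(1, n + 1)
gini = (2.0 * np.sum(i * w)) / (n * np.sum(w)) - (n + 1) / n

# For a Pareto distribution the theoretical Gini is 1 / (2*alpha - 1),
# i.e. 0.5 for alpha = 1.5; the sample estimate should be close to that.
print(round(gini, 2))
```

The point of the exercise is that heavy-tailed (Pareto-like) distributions produce high inequality indices generically, without any distribution-specific mechanism.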
An additional, introductory explanation is needed for the title of the paper. It is designed to reflect, in a metaphorical way, the sense of the paper. The first part, drawn from a very well-known source, reflects a society and social strata that find themselves at a low level of economic development. In such a society the basic, material factors play a crucial role, and people are able to describe their situation precisely with material well-being indicators. Having their needs fulfilled only at a low level creates a “to-be-or-not-to-be” situation.
The second part of the title depicts the situation in the Information Society, with its information overabundance, where the basic needs have been fulfilled and the real problems of social life concern non-tangible aspects. The most significant feature of such a modern society is not the quantitative information overabundance, understood as the production and necessary reception of measurable information (signals, impulses, etc.), but the need to assign meaning to that superfluous information (sensemaking). In such a society, the dominant approach is based upon intersubjective construction of the meaning of the majority of characteristics, with only fuzzy constraints (no “to-be-or-not-to-be” material constraints).
It is proposed that the sense of the functioning of such a society, in which traditional concepts of binary distinctions, equilibrium and stability have lost their meaning, can be captured by a metaphor drawn from far-from-equilibrium thermodynamic systems: “from being to becoming”. This is the title of the book by Prigogine, and its use under the described circumstances can easily be shown to be a metaphor.

Toward a Quantitative Approach to Data Gathering and Analysis for Nuclear Deterrence PolicyLaura Epifanovskaya, Kiran Lakkaraju, Joshua Letchford, Mallory Stites, Janani Mohan and Jason ReinhardtTuesday, 15:40-17:00

The doctrine of nuclear deterrence and a belief in its importance underpins many aspects of United States policy; it informs strategic force structures within the military, incentivizes multi-billion-dollar weapon-modernization programs within the Department of Energy, and impacts international alliances with the 29 member states of the North Atlantic Treaty Organization (NATO). With revolutions in technology since the Cold War, deterrence has become “cross-domain”, where conflict escalation dynamics that might lead to nuclear use can also include cyber, economic, and space elements, interplaying with each other in potentially complex and unanticipated ways. As the nature of strategic deterrence has changed, so must the approach to studying and developing deterrence doctrine. The technologies that have transformed society can also help to simplify highly complex problems and, used properly, bolster our understanding of the complicated interplay of forces that, whether unintentionally or through deliberate policy action, we harness to shield the nation from an act of nuclear aggression by an adversary state. Our team of researchers at the University of California, Berkeley, Sandia National Laboratories, and Lawrence Livermore National Laboratory is developing a capability that leverages the remarkable power of the internet and advancements in machine learning to study nuclear deterrence in a new, data-driven way, using a war game conducted online that explicitly ties back to academic conflict research.
In order to comprehensively and quantitatively study nuclear deterrence and conflict escalation that might lead to the use of a nuclear weapon, we have formulated an approach that combines the strengths of a war game (including the ability to explore a broad decision-space, record interactions among multiple players, and examine decisions in real time, capturing the available metadata) with the reproducible, mathematical methodology of international relations literature. Our online game enables us to track and store data so as to compare analysis of war game data directly to academic research. To demonstrate that results from our game can be compared directly to results from the academic literature, we performed a similar analysis on 739 days of serious game data that had been collected by Sandia National Laboratories researchers as part of an unrelated project. This data was “operationalized” to populate the economic, conflict, and control variables for direct comparison of our analysis to the academic literature. A paper was chosen from the body of international relations literature that defines a set of economic variables and measures their effects on conflict using linear regression techniques. We used the same economic variable definitions as this paper, and performed analysis on the data using linear mixed effects regression models run with the lme4 package for R.
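The regression setup can be sketched as follows. The study used the lme4 package in R on real game data; this is only a Python analogue via statsmodels on synthetic stand-in data, with hypothetical variable names (`conflict`, `trade`, `player`):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in: repeated observations per player, with a conflict
# score driven by an economic variable plus a random player intercept.
n_players, n_obs = 20, 30
player = np.repeat(np.arange(n_players), n_obs)
trade = rng.normal(size=n_players * n_obs)
conflict = (1.0 - 0.6 * trade
            + rng.normal(0, 0.5, n_players)[player]   # player effect
            + rng.normal(0, 0.3, n_players * n_obs))  # observation noise
df = pd.DataFrame({"conflict": conflict, "trade": trade, "player": player})

# Linear mixed-effects model: fixed effect for trade, random intercept
# per player (analogous to an lme4 formula like conflict ~ trade + (1 | player)).
model = smf.mixedlm("conflict ~ trade", df, groups=df["player"]).fit()
print(round(model.params["trade"], 2))
```

The random intercept absorbs player-level idiosyncrasies, so the fixed-effect estimate for the economic variable can be compared directly across the game data and the observational literature.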
Our results demonstrate a correlation between the economic variables and conflict outcomes in the game setting, as is seen in the academic literature. Implications for the development of a serious game to measure the effects of various factors, including economic variables, on deterrence are discussed.

Towards a meta-theory of scientific knowledgeDaniele FanelliTuesday, 15:40-17:00

Scientific knowledge is a complex adaptive system, and it is increasingly a subject of empirical and theoretical investigation. In particular, problems that stifle research progress, such as publication bias and irreproducibility, are the subject matter of the burgeoning field of meta-science. This talk will present elements of a meta-theory of scientific knowledge based on classic and algorithmic information theory. This theory suggests mathematical answers to meta-scientific questions including “how much knowledge is produced by research?”, “how rapidly is a field making progress?”, “what is the expected reproducibility of a result?”, “what do we mean by soft science?”, “what demarcates a pseudoscience?”, and many others. The core claim is that the essence of knowledge is captured by a function

K(y; xτ) = (y − y|xτ) / (y + x + τ)   (1)

which quantifies the Shannon entropy contained in the finite description of an object explanandum y ≡ n_y H(Y), which is lossless or lossy compressed via an explanans composed of an information input x ≡ n_x H(X) and a “theory” component τ ≡ log(1/P_u(τ)). The latter is a factor that conditions the relationship between y and x, with an information “cost” equivalent to the description length of the relationship itself. Combined with two operations that allow information to be expanded and cumulated, this “K-function” is proposed as a simple and universal tool to understand and analyse knowledge dynamics, scientific or otherwise. The talk will offer three arguments to support this claim. First, equation (1) is a natural translation of the widely accepted notion of knowledge as information compression. Second, every statistical measure of effect size can be converted to K(y; xτ), making this quantity a universal measure of the magnitude of scientific findings. Third, equation (1) has an immediate physical interpretation as a measure of negentropic efficiency.
Examples and arguments will be presented to suggest that this function is compatible with all forms of information compression and all manifestations of knowledge, and that knowledge is just a pattern-encoding compression activity, as are all other forms of biological adaptation. Scientific knowledge is analysed exactly as ordinary knowledge, with τ encoding a characteristic “methodology” component. “Soft” sciences are shown to be simply fields that yield relatively low K values. Bias turns out to be information that is concealed in ante-hoc or post-hoc methodological choices, thereby reducing K. Disciplines typically classified as pseudosciences are suggested to be sciences that suffer from extreme bias: their informational input is greater than their output, yielding K(y;xτ) < 0. All knowledge-producing activities can be ranked in terms of a parameter Ξ ∈ (−∞, ∞), measurable in bits, which subsumes all quantities presented in this essay and defines a hierarchy of sciences and pseudosciences. This approach yields numerous general results, some of which may be counter-intuitive. For example, it suggests that reproducibility failures in science are inevitable, offering predictions as to where they will occur. It also suggests that the value of publishing negative results may vary across fields and within a field over time, predicting conditions in which the costs of reproducible research practices such as publishing negative results and sharing data may outweigh the benefits. The theory makes several testable predictions concerning science and cognition in general, and it may have numerous applications that future research could develop, test and implement to foster progress on all frontiers of knowledge.
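As a purely illustrative numeric sketch of equation (1), the K-function can be evaluated for hypothetical entropy budgets (all bit values below are invented for illustration, not taken from the talk):

```python
def k_function(y_bits, y_given_xtau_bits, x_bits, tau_bits):
    """Equation (1): K(y; xtau) = (y - y|xtau) / (y + x + tau), in bits."""
    return (y_bits - y_given_xtau_bits) / (y_bits + x_bits + tau_bits)

# Hypothetical numbers: a 100-bit explanandum whose conditional description
# shrinks to 10 bits given a 20-bit input and a theory costing
# tau = log(1/P_u) = 5 bits.
print(k_function(100, 10, 20, 5))   # 0.72

# An explanans that fails to compress (conditional description longer than
# the object itself) yields K < 0, the extreme-bias regime the abstract
# associates with pseudoscience.
print(k_function(10, 12, 100, 5) < 0)   # True
```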

Towards Robustness in Machine Learning and OptimizationStefanie JegelkaFriday, 11:00-11:40

When critical decisions and predictions rely on observed data, robustness becomes an important aspect in learning and optimization. Robust formulations, however, can lead to more challenging, e.g., nonconvex, optimization problems, and appropriate notions of robustness are not well understood for all machine learning models. In this talk, I will summarize some recent ideas across learning, optimization and robustness.
First, while “adversarial examples” are well-known to affect supervised deep learning methods, and many approaches are being developed to aid stability, suitable concepts of robustness are much less studied in generative models that learn distributions. We introduce a definition of robustness for the popular Generative Adversarial Networks (GANs), which, despite their name, are not robust per se. We show, theoretically and empirically, conditions that affect and improve robustness in GANs.
Second, submodular functions have emerged as a beneficial concept for many machine learning applications, thanks to their widely applicable definition of “diminishing returns”, and their good optimization properties, making submodularity a “discrete analog of convexity”. We show how submodular and robust optimization benefit each other to achieve algorithms with guarantees and better generalization for stochastic submodular optimization, where only samples from an unknown distribution can be observed, and for robust influence maximization in networks.

Transient Induced Global Response Synchronization in Dispositional Cellular AutomataWilliam SulisTuesday, 14:00-15:20

Synchronization has a long history in physics where it refers to the phase locking of identical oscillators. This notion has been applied in biology to such widely varying phenomena as the flashing of fireflies and the binding problem in the brain. The relationship between neural activity and the behaviour of the organism is complex and still poorly understood. There have been attempts to explain this using the notion of synchronization, but the participating neurons are fungible, their activity transient and stochastic, and their dynamics highly variable. In spite of this, the behaviour of the organism may be quite robust. The phenomenon of transient induced global response synchronization (TIGoRS) has been used to explain the emergence of stable responses at the global level in spite of marked variability at the local level. TIGoRS is present when an external stimulus to a complex system causes the system’s responses to cluster closely in state space. In some models a 10% input sample can result in a concordance of outputs of more than 90%. This occurs even though the underlying system dynamics is time varying and inhomogeneous across the system. Previous work has shown that TIGoRS is a ubiquitous phenomenon among complex systems. The ability of complex systems exhibiting TIGoRS to stably parse environmental transients into salient units to which they stably respond led to the notion of Sulis machines which emergently generate a primitive linguistic structure through their dynamics. This paper reviews the notion of TIGoRS and its expression in several complex systems models including driven cellular automata, cocktail party and particularly dispositional cellular automata. These automata modify both local states and local rules in response to external stimuli. The presence of TIGoRS is marked by a non-linear response curve relating input sampling rates and the degree of synchronization between responses. 
TIGoRS can be distinguished from passive matching of outputs at high input frequencies since TIGoRS has a non-linear response curve while passive matching shows a linear response. Thus TIGoRS is a real and robust phenomenon arising from the collective action of the agents of an automaton.

Transitivity vs Preferential Attachment: Determining the Driving Force behind the Evolution of Scientific Co-authorship NetworksMasaaki Inoue, Thong Pham and Hidetoshi ShimodairaTuesday, 14:00-15:20

We propose a method for the non-parametric joint estimation of preferential attachment and transitivity in complex networks, as opposed to conventional methods that either estimate one mechanism in isolation or jointly estimate both assuming some functional forms. We apply our method to three scientific co-authorship networks between scholars in the complex network field, physicists in high-energy physics, and authors in the Strategic Management Journal. The non-parametric method revealed complex trends of preferential attachment and transitivity that would be unavailable under conventional parametric approaches. In all networks, having one common collaborator with another scientist increases the chance that one will collaborate with that scientist at least fivefold. Finally, by quantifying the contribution of each mechanism, we found that while transitivity dominates preferential attachment in the high-energy physics network, preferential attachment is the main driving force behind the evolution of the remaining two networks.

Triadic closure amplifies homophily in social networksAili Asikainen, Gerardo Iñiguez, Kimmo Kaski and Mikko KiveläTuesday, 14:00-15:20

Much of the structure in social networks can be explained by two seemingly separate network evolution mechanisms: triadic closure and homophily. While it is typical to analyse these mechanisms separately, empirical studies suggest that their dynamic interplay might be responsible for the striking homophily patterns seen in real social networks. By defining a mechanistic network model with tunable amounts of homophily and triadic closure, we find that their interplay produces a myriad of effects such as amplification of latent homophily and memory in social networks (hysteresis). Using our model we estimate how much observed homophily could actually be amplification induced by triadic closure in empirical networks, and whether these networks have reached a stable state in terms of their homophily patterns. Beyond their role in characterizing the origins of homophily, our results may be useful in determining the processes by which structural constraints and personal preferences determine the shape and evolution of society.

Trust AsymmetryPercy VenegasTuesday, 18:00-20:00

In the traditional financial sector, players profited from information asymmetries. In the blockchain financial system, they profit from trust asymmetries. Transactions are a flow, trust is a stock. Even if the information asymmetries across the medium of exchange are close to zero (as is expected in a decentralized financial system), there exists a “trust imbalance” in the perimeter. This fluid dynamic follows Hayek’s concept of monetary policy: “What we find is rather a continuum in which objects of various degrees of liquidity, or with values which can fluctuate independently of each other, shade into each other in the degree to which they function as money”. Trust-enabling structures are derived using Evolutionary Computing and Topological Data Analysis; trust dynamics are rendered using Fields Finance and the modeling of mass and information flows of Forrester’s System Dynamics methodology. Since the levels of trust are computed from the rates of information flows (attention and transactions), trust asymmetries might be viewed as a particular case of information asymmetries – albeit one in which hidden information can be accessed, of the sort that neither price nor on-chain data can provide. The key discovery is the existence of a “belief consensus” with trust metrics as the possible fundamental source of intrinsic value in digital assets. Applications include the anticipation of cryptocurrency market crises using measures of collective distrust; the contribution to the complex systems literature is in the field of data-driven information measures of complexity. This research is relevant to policymakers, investors, and businesses operating in the real economy, who are looking to understand the structure and dynamics of digital asset-based financial systems. Its contributions are also applicable to any socio-technical system of value-based attention flows.

Understanding Group Dynamics in Social Coding CommunitiesMona Alzubaidi, Bedoor Alshebli, Talal Rahwan and Aamena AlshamsiWednesday, 15:40-17:00

Open source software development projects are rapidly gaining momentum in developers’ communities, whereby individuals participating in a project can be part of an ever more expanding group whose collaborative effort is merged toward serving a common goal. One of the most popular platforms for open source software development is GitHub, which uses a “pull-based” model whereby developers can make their own copy—or “fork”—of a repository and then submit a pull request if they wish the project core members to add their changes into the main project. In addition to coding, participants can take on multiple roles that contribute towards the outcomes of the project, such as reporting issues and merging pull requests. Although group dynamics in crowdsourced projects could play a key role in the success of a project, previous studies that looked at group dynamics in GitHub focused only on core members and coding activities, despite the importance of other activities such as reporting issues, requesting changes or assigning coding tasks.
In contrast to previous work, we focus on open source development at the “crowd” level rather than the team level, bearing in mind that most open source software projects are built upon crowdsourced contributions. In this work, we analyze the group dynamics of forty thousand open-source software development projects hosted on GitHub. Our aim is to understand how the success of any given project is related to the distribution of workload among participants, as well as the overall engagement of participants in the GitHub community. As a measure of success, we consider the number of forks, which can be indicative of the recognition that a project receives from the GitHub community.
To quantify the distribution of workload among participants, we first identified the set of tasks that each group is involved in. We considered each user participating in the project, what tasks she/he performs, and how many times she/he performed these tasks. Using this information, we defined two factors associated with each project. Task diversity captures how diverse the set of tasks performed by each participant in a project is. Workload equality captures how the overall efforts are distributed in a team, which reflects whether the group tends to distribute the workload equally among themselves. To quantify the overall engagement of participants in the GitHub community, we identified two characteristics of participants in a project in terms of how they participate in other projects. For each project, we looked at each participant and measured his/her engagement level—the total number of other projects he/she participated in, and language diversity—the total number of primary languages of other projects he/she participated in.
Overall, we find that a user participating in highly-successful projects tends to be more specialized in the set of tasks she/he performs, compared to those participating in less successful projects. Moreover, we find that users tend to divide the workload more uniformly in highly-successful projects compared to less successful ones. Finally, we show that users participating in highly-successful projects tend to work with fewer languages. These findings contribute towards understanding the group dynamics in virtual crowdsourcing teams and how these dynamics are related to the success of the group.

Understanding stabilization operations as complex systemsBen GansTuesday, 15:40-17:00

Since the end of the Cold War most Western governments and International Organizations (IOs) invested heavily in the ability to conduct expeditionary operations that are focused on the stabilization and recovery of post-conflict zones (UN, 2001; NATO 2008; Woollard, 2013; De Coning, 2017). Examples of post-conflict zones are the Former Yugoslavia, Iraq, Afghanistan and Mali. IOs such as the United Nations (UN), North Atlantic Treaty Organization (NATO) and the European Union (EU) designed a normative framework to respond to the increasingly complex situations that characterize post-conflict zones (Watkin, 2009). This normative framework is better known as stabilization operations. In its simplest form, stabilization operations are defined as “military and civilian activities conducted across the spectrum from peace to conflict to establish or maintain order in States and regions” (DoD, 2005, p. 2). Moreover, stabilization operations are characterized by international efforts to establish an integrated and comprehensive approach between the many military and civilian actors involved (De Coning and Friis, 2011; Egnell, 2013; Feldmann, 2016; Verweijen, 2017). According to many scholars and practitioners the successful integration of IOs, Non-Governmental Organizations (NGOs), host nation governments, local actors both state and non-state as well as the private sector are key to successful stabilization operations (Dutch Ministry of Defense, 2000; De Coning and Friis, 2011; Smith, 2012; Zelizer et al., 2013).
However, one of the primary lessons learned from the interventions in the Former Yugoslavia, Iraq and Afghanistan was that it is impossible to construct a model that can serve as a blueprint for such an integrated or comprehensive approach, since the interactions between the actors involved often show complex and dynamic patterns (Manning, 2003; Rathmell, 2005; Paris, 2009). The first and most obvious complicating factor is the number of actors involved. While actors share the common goal of stabilization, they often must cope with extreme cultural differences causing daily friction (Bollen, 2002; Abiew, 2003; Autesserre, 2014; Holmes-Eber, 2016), and behave strategically to maximize their own interests (Williams, 2011). This can easily lead to opportunistic behavior. As a result, the number of potential interrelationships, coalitions, issues and conflicts increases exponentially as the number of actors increases. Furthermore, in an environment that is characterized by its uncertainty and ambiguity, actors also develop differences in problem perception and conflicting moral judgments - about right and wrong, and about who is right or wrong - which further deepens the contradictions and conflicts of interest (Tajfel and Turner, 1979). This social complexity is boosted by interdependencies and differences in power, knowledge and information levels. Coordination between the many actors and across hierarchies is therefore the main obstacle to overcome (de Coning and Friis, 2011; Rietjens et al., 2013; Verweijen, 2017). These inevitable paradoxes can be best explained by the definition of the primary unit of analysis in this article: the organizational system.
Researchers and practitioners focus on strengthening coordination and integration efforts amongst the actors of the various sub-systems involved in stabilization operations (Patrick and Brown, 2007; De Coning and Friis, 2011; Schnaubelt, 2011; Smith, 2012). This reasoning is based upon the Newtonian paradigm with its linear thought processes in which inputs and outputs are proportional and cause and effect relationships can be mathematically predicted (Von Bertalanffy, 1968; Prigogine, 1984). However, endogenous and exogenous factors influence stabilization operations in a non-linear fashion, resulting in the dynamic equilibrium conditions of the complex system. Indeed, as demonstrated in this article, the conditions of the systems are highly uncertain and ambiguous. During stabilization operations, a profusion of information circulates by different means amongst the actors involved (Rathmell, 2005; Williams, 2011; Autesserre, 2014; Rietjens et al., 2017). To cope with such uncertainty and ambiguity, complex systems require not only quantity but quality of information (Galbraith, 1973; Gell-Mann, 1992; Holland, 1995). Additionally, conflicting interests, coupled with incentives to mistrust information, add complexity to the dynamics and uncertainty of stabilization operations (Eriksson, 1999). Congruently, information asymmetry has been identified as the main challenge to be embraced (Manning, 2008; Chandler, 2016; Rietjens et al., 2017).
Little is known regarding the impact of the non-linear sciences on stabilization operations. To better control or predict the impact of information asymmetry in such a context, this article focuses on gaining a greater understanding of how concepts and principles operate in theory and practice. In particular, this article first explores how endogenous and exogenous factors influence the predictability of stabilization operations as a complex system. Second, it addresses subsequent influences on the self-organizing ability of the system to differentiate and integrate its various sub-systems, their organizational resources and competencies. Third, this article regards the development and adjustment of condition-dependent capabilities as key in reaching a state of dynamic equilibrium while processing, distributing and exchanging information.

Understanding the patterns of success in online petitioningTaha YasseriWednesday, 14:00-15:20

As people go about their daily lives using social media, such as Twitter and Facebook, they are invited to support myriad political causes by sharing, liking, endorsing, viewing and following. Chain reactions caused by these tiny acts of participation form a growing part of collective action today, from neighbourhood campaigns to global political movements. Political Turbulence shows how most attempts at collective action online fail. Those that succeed can do so dramatically, but are unpredictable, unstable, and often unsustainable. Petition platforms, where citizens may create and sign petitions to the government or legislature, have been adopted in many liberal democracies, including the US, the UK and Germany. These platforms can be mined to generate ‘big data’ on petition signing which provides new insight into the ecology of this form of political participation, data of a kind rarely seen in political science. This paper visualizes and models such data for all petitions created on the UK government petition platform over a three-year period (including a comparison with similar data for the US). We have collected the counts of signatures to more than 20,000 petitions at hourly resolution, and by analysing these data we try to reveal the patterns and mechanisms of success on such platforms. The first observation is that success is quite rare: only a few petitions gain considerable numbers of signatures. More surprising is that the success or failure of a petition is determined within a few days of its creation, indicating the very fast-paced dynamics of the online platform. We model the dynamics of the system using a multiplicative growth model and are thereby able to quantify this pace. We further discuss the role of social media in the success of online petitions by collecting and analysing social media mentions of petitions as well as web traffic data provided to us by the UK government.
Here, we find that social media activity is the main driving force behind the success of petitions. We also investigate the role of social information and peer influence through a natural experiment involving a design change on the petitioning platform. We show that the promotion of trending petitions indeed has a significant effect, increasing the success rate even further.

Publications:
Yasseri, T., Hale, S. A., & Margetts, H. Z. (2017). Rapid rise and decay in petition signing. EPJ Data Science, 6(1), 20.
Margetts, H., John, P., Hale, S., & Yasseri, T. (2015). Political turbulence: How social media shape collective action. Princeton University Press.
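A minimal sketch of the kind of multiplicative growth model the abstract alludes to (the damping form, the parameter values, and the 30-day horizon are all illustrative assumptions, not the authors' fitted model): daily signature counts grow by a random multiplicative factor whose variance shrinks as attention fades, so the outcome is largely decided in the first days.

```python
import math
import random

def simulate_petition(days=30, n0=10.0, sigma=1.0, rng=random):
    """Multiplicative growth: N(t+1) = N(t) * exp(r_t), where the random
    daily growth rate r_t is damped as attention decays after creation."""
    n, traj = n0, [n0]
    for t in range(days):
        r = rng.gauss(0.0, sigma) / (1 + t)   # the first days dominate
        n *= math.exp(r)
        traj.append(n)
    return traj

rng = random.Random(7)
runs = [simulate_petition(rng=rng) for _ in range(2000)]

# With the 1/(1+t) damping, the first three days already contribute most
# of the variance of log(N_final) -- the "success decided within days" effect:
early = sum(1.0 / (1 + t) ** 2 for t in range(3))
total = sum(1.0 / (1 + t) ** 2 for t in range(30))
print(round(early / total, 2))   # 0.84
```

Any damping schedule that decays faster than 1/t would concentrate the variance in the early days even more strongly; the point of the sketch is only that multiplicative dynamics with fading attention reproduce rare, early-determined success.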

A unifying dimensionality function for fractal to non-fractal stochastic growth morphodynamicsJosé Roberto Nicolás-Carlock, José Luis Carrillo-Estrada and José Manuel Solano-AltamiranoMonday, 14:00-15:20

In his celebrated book “On Growth and Form”, D'Arcy Thompson suggested that natural selection was not the only factor shaping the biological development of species, but that in nature, “no organic forms exist save such as are in conformity with physical and mathematical laws”. Since then, some of the most important scientific endeavors of the modern era have dealt with the exploration of the fundamental physical processes behind morphogenesis, the establishment of the mathematical tools for their analysis, and the development of appropriate control mechanisms for their further scientific and technological application. As complex as this problem is, fractal to non-fractal morphological transitions generated by simple (although non-trivial) stochastic growth models allow for the systematic study of the physical mechanisms behind fractal morphodynamics in nature. In these systems, the fractal dimension is considered a non-thermal order parameter commonly computed from the scaling of quantities such as the two-point density radial or angular correlations. However, persistent discrepancies found during the analysis of basic growth models, using these two radial-angular quantification methods, have not yet been fully clarified. In this work, considering three fundamental fractal/non-fractal morphological transitions in two dimensions, we show that the unavoidable emergence of growth anisotropies is responsible for the breakdown of the radial-angular equivalence, rendering the angular correlation scaling crucial for establishing appropriate order parameters. Specifically, we show that the angular scaling behaves as a critical power-law, whereas the radial scaling behaves as an exponential. Under the fractal dimension interpretation, these quantities resemble first- and second-order transitions, respectively. Remarkably, these and previous results (which include radius of gyration and mean-field results) can be unified under a single fractal dimensionality equation.

References:
[1] J. R. Nicolás-Carlock, J. M. Solano-Altamirano, J. L. Carrillo-Estrada. Angular and radial correlation scaling in stochastic growth morphodynamics: a unifying fractality framework. http://arxiv.org/abs/1803.03715
[2] J. R. Nicolás-Carlock, J. L. Carrillo-Estrada, and V. Dossetti, Universal fractality of morphological transitions in stochastic growth processes, Scientific Reports 7; doi: 10.1038/s41598-017-03491-5

Universal Decay Patterns in Human Collective MemoryCristian Candia-Castro-Vallejos, Cristian Jara-Figueroa, Carlos Rodriguez Sickert, Albert-Laszlo Barabasi and Cesar HidalgoWednesday, 15:40-17:00

The theoretical literature on human collective memory proposes that memory decays through two mechanisms, one involving communicative memory--the memory sustained by oral communication--and another involving cultural memory--the memory sustained by the physical recording of information. Yet, there is no statistical evidence supporting the decay of collective memory through these two mechanisms, or exploring the universality of these decay patterns. Here, we use time series data on citations to papers and patents, and on the popularity of songs, movies, and biographies, to test the hypothesis that the temporal dynamics of human collective memory involve the decay of communicative and cultural memory, and to explore the universality of these decay patterns.
We derive a mathematical model from first principles by formalizing these two mechanisms, and we contrast the bi-exponential function predicted by this model with other decay functions proposed in the literature. Our results support the hypothesis that the decay of human collective memory involves the combined decay of communicative and cultural memory, predicting a bi-exponential decay function that is universal across multiple cultural domains.
These findings allow us to explain the dynamics of the attention received by a piece of cultural content during its lifetime, and suggest that the dynamics of human collective memory follows universal mechanisms across a variety of domains.
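A hedged sketch of a two-compartment model of the kind the abstract describes (the rate names p, r, q and all values are illustrative assumptions): communicative memory decays quickly while feeding cultural memory, which decays slowly, and their sum is bi-exponential.

```python
import math

def attention(t, n=100.0, p=1.0, r=0.2, q=0.05):
    """S(t) = u(t) + v(t): communicative memory u decays at rate p and is
    converted into cultural memory at rate r; cultural memory v then decays
    slowly at rate q. The closed-form sum is a bi-exponential in t."""
    u = n * math.exp(-(p + r) * t)
    v = n * r / (p + r - q) * (math.exp(-q * t) - math.exp(-(p + r) * t))
    return u + v

print(attention(0.0))                        # 100.0: attention starts communicative
print(attention(1.0) > attention(20.0) > 0)  # True: monotone bi-exponential decay
```

The fast exponent (p + r) dominates the early drop in attention, and the slow exponent q governs the long tail, which is the qualitative signature the model predicts across cultural domains.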

The unobserved backbone of international influenceIsabella Loaiza, Morgan Frank and Alex 'Sandy' PentlandTuesday, 18:00-20:00

How can we prevent the outbreak of war and all of its devastating effects? How can we incentivize cooperation between nations to mitigate climate change? These perennial questions at the heart of international relations have puzzled scholars and policymakers alike. Many strategies have been devised to this end, with varying levels of success. Perhaps one of the oldest and most widely used is the creation of supranational institutions that, through membership criteria, scope of action and economic benefits, help forge alliances and shape the international outlook. This approach has given way to organizations like the United Nations, NATO, the European Union, the International Monetary Fund, Mercosur, and the Paris Agreement, just to name a few.

However, relationships between countries, like relationships between people, change over time and reflect complex hierarchies, power dynamics and political or economic interests. Even between members of an international institution there are clear asymmetries in countries' abilities to influence the decision-making process. Thus, binary attributes like membership in formal government institutions do not allow us to capture the richness of nations' behavior.

In other domains, like organization studies, it is common knowledge that formal organizational structures - like organizational charts - provide a limited picture of organizational behavior. The 'informal' organization is what truly drives company dynamics and outcomes. One of the best-known articles on this topic was published in 1993 in the Harvard Business Review by David Krackhardt and Jeffrey Hanson. In their article, titled 'Informal Networks: The Company Behind the Chart', Krackhardt and Hanson argued that the network of relationships between employees across divisions and ranks can help or hinder management's most carefully designed plans. Thus, learning to map these unobserved connections can help managers 'harness the real power' in their companies by designing formal structures that channel or build on the existing informal structures. If this is true for organizations - which can be larger than entire villages or towns - could the same logic be scaled up to international institutions? Is there an 'informal' backbone in international relations that drives dynamics of conflict and cooperation? Luckily, network science can help uncover this structure while providing a more detailed account of how and with whom nations relate.

Network science has been applied in a variety of domains to address social problems like the spread of disease and outages in energy grids, and to understand phenomena like the uptake of innovative products.

A networked view of international relations can help inform future policies by creating maps of influence, which go beyond the traditional bilateral or multilateral ways of thinking about international agreements. Networks can also prove useful in the study of the spread of conflict, since they can reveal the main avenues through which it travels.

In order to unearth the hidden backbone of international relations, I used data about the interactions between countries that was extracted from over six thousand news sources around the world. This data spans 20 years from 1995 to 2015 and includes approximately 175 countries.

Using a technique called Convergent Cross Mapping, I reconstructed the network of international influence between countries. The network confirmed that nations differ in their ability to influence the global outlook not just in the number of countries a given nation has influence over, but in the strength of that influence.

Going from asserting that there are asymmetries in the power that countries have to influence others to asserting that there is an informal structure which drives international dynamics is hard. Where to begin?

A good starting point would be to look for what network scientists call the 'Rich Club Phenomenon'. This is just a way to describe a network in which a fraction of its nodes - the richest ones - are connected to each other. Other real-world networks, like the network of international trade or the network of the Italian interbank money market, have been found to have this kind of structure, making it a good starting point for the search for the unobserved backbone of international relations.
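The rich-club idea can be made concrete with the standard unweighted, undirected coefficient phi(k): the fraction of possible links actually realized among nodes of degree greater than k. The sketch below is a minimal pure-Python illustration on a toy network (the country names are purely illustrative and not taken from the study's data):

```python
from itertools import combinations

def rich_club_coefficient(edges, k):
    """phi(k): density of links among nodes with degree > k.
    edges: iterable of undirected (u, v) pairs."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    rich = {node for node, d in degree.items() if d > k}
    if len(rich) < 2:
        return None
    links = sum(1 for u, v in edges if u in rich and v in rich)
    return links / (len(rich) * (len(rich) - 1) / 2)

# Toy example: a fully connected core plus peripheral nodes attached
# only to the core.
core = ["US", "UK", "CN"]
edges = list(combinations(core, 2))
edges += [(c, "P%d" % i) for i, c in enumerate(core * 2)]
print(rich_club_coefficient(edges, k=2))   # 1.0: every pair of hubs is linked
```

In practice phi(k) is usually normalized against degree-preserving randomizations of the network, and for a directed, weighted influence network one would substitute a strength-based notion of richness; NetworkX offers `rich_club_coefficient` for the undirected case.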

While a node's 'richness' can be measured in any way, in this context richness can be either the number of countries a country has influence over or the total amount of influence it exerts on the world, regardless of how many actors it influences. Using both metrics for richness, I analyzed the network of international influence shown in figure 2A. I discovered that there is in fact a rich club of influential countries. Both richness metrics point to an almost identical rich club comprising the US, the UK, China, Russia, France, Germany, Japan, and Turkey. Italy becomes a part of the rich club only if richness is measured as the total amount of influence exerted in the world. Figure 2B shows the rich club members and the strength and directionality of influence among them.

The rich club's presence suggests that power might be more concentrated than expected. This finding can also reveal the different influence strategies employed by countries. For example, the US is the top influencer both in terms of the number of countries it reaches and in terms of strength of influence. However, it seems to preferentially influence countries not in the rich club. China seems to influence rich club members and non-members alike, while Italy's influence is heavily targeted towards other rich club members. While there is still much to learn about the consequences of the rich club's presence, its existence reveals the need to take 'the world behind the institutions' into account when making policy. Perhaps this is what Kant meant all along; after all, he never mentioned that the membership that promoted peace between nations was to an official government institution.

Unpacking the polarization of workplace skillsAhmad Alabdulkareem, Morgan Frank, Lijun Sun, César Hidalgo and Iyad Rahwan.Tuesday, 15:40-17:00

Economic inequality is the “defining challenge of our time”, with potential for “corrosive effects on social and political cohesion”. This concern manifested itself in recent political developments in Europe and the US, producing what some call a “fading of the American dream.” Inequality has recently been exacerbated by growth in high- and low-wage occupations at the expense of middle-wage occupations, leading to a 'hollowing' of the middle class, which is one of the most-cited causes of inequality. But we know very little about how this process takes place. The traditional study of labor operates at high levels of aggregation, without attention to the complex structure of skills and tasks that make up jobs, so our understanding of how workplace skills drive this process is limited. Specifically, how do skill requirements distinguish high- and low-wage occupations, and does this distinction constrain the mobility of individuals and local labor markets (cities)? Using unsupervised clustering techniques from network science, we show that skills exhibit a striking polarization into two clusters that highlight the specific social-cognitive skills and sensory-physical skills of high- and low-wage occupations, respectively. The connections between skills explain various dynamics: how workers transition between occupations, how cities acquire comparative advantage in new skills, and how individual occupations change their skill requirements. We also show that the polarized skill topology constrains the career mobility of individual workers, with low-skill workers 'stuck' relying on the low-wage skill set. Together, these results provide a new explanation for the persistence of occupational polarization and inform strategies to mitigate the negative effects of automation and off-shoring of employment.

Urban School Leadership & Adaptive Change: The “Rabbit Hole” of Continuous EmergencePatrick McQuillanThursday, 15:40-17:00

Urban Security Analysis in The City of Bogotá Using Complex NetworksGuillermo Rubiano Galindo and André Cristovao Neves FerreiraThursday, 14:00-15:20

In an increasingly globalized and borderless world, fast access to reliable information about cities has become almost a necessity. From tourism to business trips and emigration, one should have good knowledge about the destination to avoid problems and to ensure good adaptation to the local region. As such, by exploring complex networks concepts and open data initiatives, this study focuses on the city of Bogotá as a model for security analysis, defined by official crime records and social strata percentages. In addition, a comparison of the previous data can be made with the location of police stations, as well as an urban traffic analysis. Finally, it is possible to perform a regional quality classification and a quicker and safer route recommendation, as a function of the reliable data extracted from databases obtained from specialized institutions with national accreditation for this work.

User Identity Linkage Across Websites By Neural NetworksYuanyuan Qiao, Fan Duo and Jie YangThursday, 15:40-17:00

The Internet provides us with digital traces of people in cyberspace. Many achievements have been made in applying complex network theory to World Wide Web (WWW) data since the beginning of the 21st century. Researchers usually test models on each kind of social media website separately because people's online behavior is isolated between websites. Different websites on the Internet usually target specific user interests; e.g., Facebook links friends and lets people share their lives with each other, while LinkedIn links co-workers and enhances professional networks. In order to fully understand the structure of the online world from the user's viewpoint, the User Identity Linkage (UIL) problem, which focuses on linking a user's online identities across websites, becomes crucial.

Traditionally, features extracted from the profiles, content, and networks of online social platforms are taken into consideration when constructing models to solve the UIL problem. A classification model is then applied to predict whether a pair of user identities, chosen from different social networks, belongs to the same real person. Based on the idea that an individual keeps similar intrinsic characteristics in daily life, models have been proposed to examine the intrinsic structure of the underlying user in different network platforms, i.e., to learn a mapping function from heterogeneous network platforms to a homogeneous space. In some studies, user mobility traces have proven very useful for linking user identities across domains when the source data has rich spatio-temporal information. The latest studies have pointed out the challenges of the UIL problem for various mining and learning tasks, including dynamic and cross-domain UIL.

Based on the above observations, we introduce a UIL framework with neural networks, which aims to learn the intrinsic structure of users by dynamically considering content features of browsed webpages. To the best of our knowledge, we are the first to study the UIL problem on multiple heterogeneous online domains at a scale of dozens of websites. The proposed model can also be extended to an unsupervised learning framework with a seed rule. Furthermore, since spatio-temporal data is too sensitive to collect at fine granularity and large scale, sparse user trajectory data is input to the verification module of our framework, which further optimizes and adjusts the model automatically.

For experiments, we collected two real datasets from Internet traffic over several months: one dataset covers campus users, the other covers users in a province of China. The datasets have different geographic scales and distinct user interest preferences. Compared with baseline and state-of-the-art methods, experiments with real ground-truth data demonstrate the effectiveness of our framework and the possibility of linking all online behaviors generated by a real user. Our study of the UIL problem contributes to fully understanding users' online interests, which may bring new opportunities for providing better recommendations, solving cold-start and data-sparsity problems, and further driving the study of information diffusion and network dynamics.

Using Fractals to Measure ImmunosenescenceElena Naumova, Yuri Naumov and Jack GorskiTuesday, 14:00-15:20

Scale-free behavior has been observed in many living systems at micro- and macro-scales. These systems continue to stimulate interest in theoretical studies, including the understanding of aging and specifically of immunosenescence. The immune system represents the major system with a large cellular component dedicated to the generation of adaptive memory to pathogens. It is this component of immunity which is the most instructive in understanding the life stages of humans.

In experimental studies of the adaptive immune system, we observed a scale-free network governing the repertoire of memory T-cells (Naumov et al, 2003). At the molecular level, we observe that a memory immune response to influenza virus becomes diverse upon repeated exposures to the virus, which can be modeled as a fractal self-similar system. A theoretical explanation of the experimental findings has been given by the small-world construction (Ruskin and Burns, 2006) as a special case of the scale-free network (Albert and Barabasi, 2002). We then simulated the fractal behavior mimicking immune memory - its generation, maintenance, and senescence (Naumova et al, 2008) and experimentally illustrated the general stability of the power-law structures and age-related changes. Our recent theoretical work confirms the assumption that multiple expansion-contraction cycles define the robustness of the immune response and correspond to memory formation (Saito and Narikiyo, 2011). Saito and Narikiyo proposed that the dynamical network of the adaptive immune system is a self-organized critical state in which avalanche feedback reinforcement may reduce immunosenescence.

At the population level, we also observed evidence of exposure to influenza as a marker of “immunological age.” In the cohort of healthy donors, each encounter with an infectious agent was unique for every person. Yet, the commonality in responses formed an “immunological kinship” among all affected individuals, manifested by a preserved T-cell clonal pool. The diverse responses to flu, and changes in that diversity, allow us to make inferences about “immunological kinship” and “immunological age.” Our experimental data indicate that at a certain point the continuing exposures to influenza begin to decrease the diversity of the immune response. These observations led us to explore theoretical conditions governing the “stable” and “volatile” components of the T-cell repertoires via dynamic neural networks. Such separation allowed us to detect a condition indicative of acceleration of immune aging. We derived the initial network parameters based on a specially designed anchored power-law regression fit of experimental data from middle-aged and older donors over time, and illustrated age acceleration and immunosenescence in humans.
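The power-law fitting step above can be illustrated in miniature. The authors' "anchored" regression is not specified here, so the following is only a hedged sketch of the simplest variant, a plain least-squares fit in log-log space on invented clone-size data; the coefficients and data are illustrative, not the study's:

```python
import numpy as np

def fit_power_law(x, y):
    """Least-squares fit of y ~ c * x**alpha in log-log space."""
    lx, ly = np.log(x), np.log(y)
    alpha, logc = np.polyfit(lx, ly, 1)  # slope = exponent, intercept = log(c)
    return alpha, np.exp(logc)

# synthetic clonotype-frequency data following y = 100 * x**-1.5 exactly
x = np.arange(1, 50, dtype=float)
y = 100.0 * x ** -1.5
alpha, c = fit_power_law(x, y)
print(round(alpha, 3), round(c, 1))  # recovers the exponent -1.5 and prefactor 100.0
```

On real repertoire data one would fit noisy frequencies, and an anchored variant could additionally constrain the curve through a fixed reference point.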

References:

Albert, R., and Barabási, A.L. (2002). Statistical mechanics of complex networks. Reviews of Modern Physics 74, 47-97.

Naumov, Y.N., Naumova, E.N., Hogan, K.T., Selin, L.K., and Gorski, J. (2003). A fractal clonotype distribution in the CD8+ memory T cell repertoire could optimize potential for immune responses. J Immunol 170, 3994-4001.

Naumova, E.N., Gorski, J., and Naumov, Y.N. (2008). Simulation studies for a multistage dynamic process of immune memory response to influenza: an experiment in silico. Ann Zool Fennici 45, 369-384.

Ruskin, H.J., and Burns, J. (2006). Weighted networks in immune system shape space. Physica A: Statistical Mechanics and its Applications 365, 549-555.

Saito, S., and Narikiyo, O. (2011). Scale-free dynamics of somatic adaptability in the immune system. Biosystems 103, 420-424.

Using Local Force Cues to Guide a Distributed Robotic Construction SystemNathan Melenbrink and Justin WerfelTuesday, 18:00-20:00

Due to the irregular and variable environments in which most construction projects take place, the topic of on-site automation has previously been largely neglected in favor of off-site prefabrication. While prefabrication has certain obvious economic and schedule benefits, a number of potential applications would benefit from a fully autonomous robotic construction system capable of building without human supervision or intervention – for example, building in remote environments, or building structures whose form changes over time.
Construction of spatially extended, self-supporting structures requires a consideration of structural stability throughout the building sequence. For collective construction systems, where independent agents act with variable order and timing under decentralized control, ensuring stability is a particularly pronounced challenge. Previous research in this area has largely neglected stability during the building process. Physical forces present throughout a structure may be usable as a cue to inform agent actions as well as an indirect communication mechanism (stigmergy) to coordinate their behavior, as adding material leads to redistribution of forces, which then informs the addition of further material. Here we consider in simulation a system of decentralized climbing robots capable of traversing and extending a two-dimensional truss structure, and explore the use of feedback based on force sensing as a way for the swarm to anticipate and prevent structural failures. We consider a scenario in which robots are tasked with building an unsupported cantilever across a gap, as for a bridge, where the goal is for the swarm to build any stable spanning structure rather than to construct a specific predetermined blueprint. We show that access to local force measurements enables robots to build cantilevers that span significantly farther than those built by robots without access to such information. This improvement is achieved by taking measures to maintain both strength and stability, where strength is ensured by paying attention to forces during locomotion to prevent joints from breaking, and stability is maintained by looking at how loads transfer to the ground to guard against toppling. We show that swarms that take both kinds of forces into account have improved building performance, in both structured settings with flat ground and unpredictable environments with rough terrain.

Using machine learning to increase research efficiency: A new approach in environmental sciencesGeovany Ramirez, Debra Peters and Lucas JoppaTuesday, 18:00-20:00

Data collection has evolved from tedious in-person fieldwork to automatic data gathering from multiple remote sensors. Scientists in the environmental sciences have not fully exploited this data deluge, including legacy and new data, because the traditional scientific method is focused on small, high-quality datasets instead of more complex and larger datasets. We present a system that helps with the implementation of a new scientific approach based on a knowledge-driven, open-access system that learns and becomes more efficient and easier to use as data streams increase in variety and size. Our Knowledge, Learning, and Analysis System (KLAS) implements a recommendation system based on multiple users' behavior, using machine learning to serve as a guide during the experimentation process. The learning mechanism should be able to improve the accuracy and quality of recommendations as more users interact with KLAS. We implemented tools to help the scientific community reuse data, methods, and models. Another feature of KLAS is the ability to improve the efficiency of field data collection. For instance, we have interconnected KLAS with an automatic system for data harvesting and data QA/QC from a network of meteorological stations. Users can easily perform experiments with data collected by sensors spatially distributed in the field, and user decisions can be guided by past user experiences. In addition to simplifying the research process, we see KLAS as a learning tool for students, where they can learn from experienced users.

Using Variational Auto-Encoders to interpret Single Cell Transcriptomic DataHelena Andres Terre and Pietro LioMonday, 14:00-15:20

The introduction of single cell RNA-seq data was a major breakthrough in the field of biology, and is particularly useful for research in areas like comparative transcriptomics or disease studies.
Stem cell differentiation research has also benefited from this new technique: it is now possible to characterise gene expression levels for individual cells and analyse the different stages of the differentiation process. Computational analysis of such data is essential to understand the experimental results; therefore, new techniques are needed to adapt and interpret the data.

These datasets are known to be sparse and high-dimensional, with a large number of genes describing each cell. One of the main objectives is to identify the most relevant features for the underlying processes contained in the data. After discarding a large number of genes due to low variability, current methods use dimensionality reduction techniques such as Principal Component Analysis (linear) or tSNE (non-linear). The new components are then used to plot the data, perform further analysis for classification tasks, or describe differentiation processes.
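The linear dimensionality-reduction step mentioned here can be sketched with numpy alone. This is a hedged illustration (PCA via SVD on an invented toy expression matrix, not the authors' pipeline; real analyses typically use dedicated libraries):

```python
import numpy as np

def pca(X, n_components):
    """Project rows of X (cells x genes) onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)                    # center each gene (column)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T            # shape: (cells, n_components)

rng = np.random.default_rng(0)
# toy expression matrix: 100 "cells" x 50 "genes", variance concentrated in 2 directions
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 50)) + 0.01 * rng.normal(size=(100, 50))
Z = pca(X, 2)
print(Z.shape)  # (100, 2): each cell reduced to two components
```

The two recovered components capture essentially all of the planted variance, which is the property the plotting and clustering steps described above rely on.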

While these methods have proved capable of addressing the aforementioned challenges, they also introduce some restrictions when trying to characterise intermediate states of differentiation, due to the assumptions they are based on. The linear nature of the dimensionality reduction and the initial cut on the number of genes are factors that influence the amount of information retained by the new variables.

Here we present an unsupervised machine learning technique for dimensionality reduction of single cell data. We used Variational Autoencoders to extract a number of significant components or features that characterise individual cells based on their gene expression, using a deep learning 'bottleneck' approach.

The new representation is evaluated both by its reconstruction accuracy, or the ability to recover an original gene expression given its encoded vector, and also by its performance on cell type classification.

We have shown that the use of Variational Autoencoders not only provides a good characterisation of cell features (i.e. cell types or cell markers) but also an easy reconstruction of the data in its original space, and allows the study of stochasticity and the relevance of specific genes.

This methodology can be further generalised to extract and identify new features from Single Cell gene expression data. It can also be combined with other techniques such as Graph Attention Networks or Multi-Agent models to integrate data from different sources and achieve a broader picture of the biological processes.
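Two VAE ingredients implied by the abstract - the reparameterization trick and the KL regulariser that shapes the latent bottleneck - can be written down compactly. This is only a hedged numpy sketch of those two terms under a diagonal-Gaussian assumption, not the authors' model (which would also include encoder/decoder networks and a reconstruction loss):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """VAE reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.zeros(4), np.zeros(4)     # posterior already equals the prior
print(kl_to_standard_normal(mu, log_var))  # prints 0.0: no divergence penalty
z = reparameterize(mu, log_var, rng)
print(z.shape)  # (4,): one sampled latent code
```

During training, the total loss would combine this KL term with the reconstruction error of the decoded gene-expression vector.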

Utilizing hierarchy to approximate dynamics on networksNima Dehmamy, Yanchen Liu and Albert-Laszlo BarabasiWednesday, 15:40-17:00

Dynamical processes, such as diffusion, can be prohibitively computationally expensive on large networks, costing time proportional to the number of nodes squared for each step. Many real-world networks, however, are highly modular and sometimes possess a hierarchy of modules. In the extreme case of a binary tree, knowing the hierarchical structure would allow us to iterate a diffusion process with O(N log(N)) time complexity, significantly better than O(N^2). Thus, extracting a hierarchy of modules can lead to more efficient computation of a dynamical process on a network. We demonstrate how this can be achieved via a new method for hierarchical clustering which is based on the objective function of the dynamical process in question. Our method exploits the separation of time scales associated with different eigenmodes of the dynamical process to find the groups of nodes that should be clustered together. In diffusion, for instance, the objective function contains the Laplace matrix (or normalized forms of it) and the time scale for equilibration of a community becomes the inverse of the community eigenvalue. A large spectral gap between eigenvalues therefore leads to a separation of time scales. Degenerate eigenvectors define the degrees of freedom, or clusters, that equilibrate at a given time scale [1]. We “coarse-grain” the network by identifying these clusters and replacing them with supernodes, as well as aggregating their links. We then repeat the process on the coarse-grained network until no cluster eigenvalues can be found anymore. The existing literature on hierarchical clustering either relies on modularity maximization or on the stochastic block model, requiring construction of a stochastic block ensemble and doing inference on the ensemble. These methods are also not directly related to a dynamical process.
The goal here is not to find a statistically validated set of clusters, but rather to find modes in the dynamical process that exist at different time scales, which would allow us to speed up the computations of the dynamical process. The advantage of our method is that it does not rely on inference from a large ensemble, which is computationally expensive and generally disregards most network properties save for the degree or block structure. Our method can also be used for efficiently solving problems such as graph layout and other problems relying on gradient descent, including classification of nodes in a network.
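The spectral-gap/time-scale picture the abstract relies on can be seen on a toy modular graph. A hedged numpy sketch (the two-clique graph is invented for illustration): diffusion modes relax on time scales of order 1/λ, so the small Fiedler eigenvalue of two bridged cliques, separated by a clear gap from the fast within-clique modes, marks the communities that would be merged into supernodes:

```python
import numpy as np

def laplacian_spectrum(A):
    """Eigenvalues of the graph Laplacian L = D - A, sorted ascending."""
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))

# two 4-node cliques joined by a single bridge edge
A = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
A[3, 4] = A[4, 3] = 1  # the bridge

ev = laplacian_spectrum(A)
# ev[1] is the slow inter-clique mode; ev[2] onwards are fast intra-clique modes
print(ev[1], ev[2])
```

The gap between ev[1] (about 0.35 here) and ev[2] (4.0) is the separation of time scales: the slow mode equilibrates roughly ten times more slowly, identifying the two cliques as clusters.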

Validation and performance of effective network inference using multivariate transfer entropy with IDTxlLeonardo Novelli, Patricia Wollstadt, Pedro A.M. Mediano, Joseph T. Lizier and Michael WibralWednesday, 15:40-17:00

IDTxl is a new open source toolbox for effective network inference from multivariate time series using information theory, available from Github at http://github.com/pwollstadt/IDTxl. The primary application area for IDTxl is the analysis of brain imaging data (import tools for common neuroscience formats, e.g. FieldTrip, are included); however, the toolkit is generic to analysing multivariate time-series data from any discipline and complex system.
For each target node in a network, IDTxl employs a greedy iterative algorithm to find the set of parent nodes and delays which maximise the multivariate transfer entropy. Rigorous statistical controls (based on comparison to null distributions from time series surrogates) are used to gate parent selection and to provide automatic stopping conditions for the inference.

We validated the IDTxl Python toolkit on different effective network inference tasks, using synthetic datasets where the underlying connectivity and the dynamics are known. We tested random networks of increasing size (10 to 100 nodes) and an increasing number of time-series observations (100 to 10000 samples). We evaluated the effective network inference against the underlying structural networks in terms of precision, recall, and specificity in the classification of links. In the absence of hidden nodes, we expected the effective network to reflect the structural network. Given the generality of the toolkit, we chose two dynamical models of broad applicability: a vector autoregressive (VAR) process and a coupled logistic maps (CLM) process; both are widely used in computational neuroscience, macroeconomics, population dynamics, and chaotic systems research. We used a linear Gaussian estimator (i.e. Granger causality) for transfer entropy measurements in the VAR process and a nonlinear model-free estimator (Kraskov-Stoegbauer-Grassberger) for the CLM process.

Our results showed that, for both types of dynamics, the performance of the inference increased with the number of samples and decreased with the size of the network, as expected. For a smaller number of samples, the recall was the most affected performance measure, while the precision and specificity were always close to maximal. For our choice of parameters, 10000 samples were enough to achieve nearly perfect network inference (>95% according to all performance measures) in both the VAR and CLM processes, regardless of the size of the network. Decreasing the threshold for statistical significance in accepting a link led to higher precision and lower recall, as expected. Since we imposed a single coupling delay between each pair of processes (chosen at random between 1 and 5 discrete time steps), we further validated the performance of the algorithm in identifying the correct delays. Once again, 10000 samples were enough to achieve nearly optimal performance, regardless of the size of the network.

We emphasise the significant improvement in network size and number of samples analysed in this study, with 100 nodes / 10000 samples being an order of magnitude larger than what has been previously demonstrated, bringing larger neural experiments into scope. Nonetheless, analysing large networks with 10000 samples and using the model-free estimators is computationally demanding; therefore, we exploited the compatibility of IDTxl with parallel and GPU computing on high-performance clusters.
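The linear-Gaussian estimator mentioned above (equivalent to Granger causality) reduces, in the bivariate case, to comparing residual variances of two autoregressions. The following is a hedged numpy sketch of that quantity on a synthetic coupled process - not IDTxl's actual API, and without the surrogate-based significance testing or multivariate parent selection the toolkit performs:

```python
import numpy as np

def gaussian_te(x, y, lag=1):
    """Bivariate transfer entropy X->Y under a linear-Gaussian model:
    0.5 * log( var(y_t | y_past) / var(y_t | y_past, x_past) )."""
    yt, yp, xp = y[lag:], y[:-lag], x[:-lag]
    def resid_var(target, *preds):
        P = np.column_stack([np.ones_like(target)] + list(preds))
        beta, *_ = np.linalg.lstsq(P, target, rcond=None)
        return (target - P @ beta).var()
    return 0.5 * np.log(resid_var(yt, yp) / resid_var(yt, yp, xp))

# synthetic VAR-like pair: X drives Y with one step of delay, not vice versa
rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

print(gaussian_te(x, y), gaussian_te(y, x))  # large for X->Y, near zero for Y->X
```

A full inference as in IDTxl would additionally compare each estimate against surrogate-data null distributions before accepting the link X→Y and rejecting Y→X.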

Vector Representation Learning with Text Information and Structural Identity for Paper RecommendationXiangjie Kong, Mengyi Mao, Wei Wang, Jiayiing Liu and Bo XuWednesday, 14:00-15:20

Scholars need to search, read, and analyze many scientific papers in their research field to find previous related work that might be insightful for starting a specific piece of research. However, finding relevant papers through bibliographic search is a non-trivial problem for scholars due to the tremendous amount of academic information in the fast-moving and complex citation network. Scientific paper recommendation systems have been developed to solve this problem by recommending relevant papers to scholars. Although there are many techniques for scientific paper recommendation systems, including content-based filtering, collaborative filtering, and graph-based recommendation, these previous recommenders calculate paper similarity either from the research topic extracted from paper content or from the network structure extracted from the citation network, based on hand-engineered features, which are inflexible.
To address this problem, the main goal is to capture and preserve more features in low-dimensional latent vectors from complex network topologies through unsupervised deep learning. Inspired by the idea of network representation learning, we develop a scientific paper recommendation system in this paper, namely VOPRec, by vector representation learning of papers in citation networks. VOPRec takes advantage of recent research in both text and network representation learning for unsupervised feature design. The main steps are as follows:
In VOPRec, the text information is represented with embeddings of paper content (e.g., title, abstract, or the main body of the full paper) based on doc2vec, to find papers of similar research interest.
Then, the structural identity is converted into vectors based on struc2vec to find papers of similar network topology structure.
The m-nearest text-based neighbours and n-nearest structure-based neighbours are connected to each paper, along with its citations and references, to build a new citation network. In this citation network, edges may represent not only citations but also textual similarity and structural similarity. The weights of these edges differ according to the relationship between the two papers.
After bridging text information and structural identity with the citation network, vector representations of papers can be learned with network embedding. By optimizing the citation network in the Skip-Gram model, VOPRec can preserve both text information and network structure.
Finally, a top-Q recommendation list is generated based on the cosine similarity calculated with paper vectors. Papers in the recommendation list are likely to be connected to the target paper and are evaluated through the link prediction method.
Through a real-world data set, the APS data set, we conduct a sensitivity analysis of VOPRec with respect to three important parameters: the number of nearest text-based neighbours m, the number of nearest structure-based neighbours n, and the vector dimension k, and find that VOPRec is parameter-sensitive. We also show that VOPRec outperforms state-of-the-art paper recommendation baselines for different recommendation-list lengths and different test-set ratios, measured by precision, recall, F1, and NDCG, which are metrics related to information retrieval in the recommendation list.
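The final ranking step described above, cosine similarity over learned paper vectors, can be sketched directly. This is a hedged illustration on random invented vectors, standing in for the doc2vec/struc2vec embeddings VOPRec would actually learn:

```python
import numpy as np

def top_q(paper_vecs, target, q=3):
    """Rank papers by cosine similarity to the target paper's embedding."""
    V = paper_vecs / np.linalg.norm(paper_vecs, axis=1, keepdims=True)
    t = target / np.linalg.norm(target)
    sims = V @ t                      # cosine similarity to every paper
    order = np.argsort(-sims)         # descending
    return order[:q], sims[order[:q]]

rng = np.random.default_rng(2)
vecs = rng.normal(size=(10, 8))       # 10 "papers", 8-dimensional embeddings
idx, sims = top_q(vecs, vecs[0], q=3)
print(idx[0], round(float(sims[0]), 3))  # the target itself ranks first with similarity 1.0
```

In practice the target paper would be excluded from its own list, and the remaining top-Q entries become the recommendations evaluated via link prediction.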

Vector-Valued Spectral Analysis of Climate VariabilityJoanna Slawinska and Dimitrios GiannakisWednesday, 15:40-17:00

We study Indo-Pacific climate variability using a recently developed framework for spatiotemporal pattern extraction called Vector-Valued Spectral Analysis (VSA). This approach is based on the eigendecomposition of a kernel integral operator acting on vector-valued observables (spatially extended fields) of the dynamical system generating the data, constructed by combining elements of the theory of operator-valued kernels for multitask machine learning with delay-coordinate maps of dynamical systems. A key aspect of this method is that it utilizes a kernel measure of similarity that takes into account both temporal and spatial degrees of freedom (whereas classical techniques such as EOF analysis are based on aggregate measures of similarity between 'snapshots'). As a result, VSA has high skill in extracting physically meaningful patterns with intermittency in both space and time, while factoring out any symmetries present in the data. We demonstrate the efficacy of this method with applications to various model and observational datasets of oceanic and atmospheric variability in the Indo-Pacific sector. In particular, the recovered VSA patterns provide a more realistic representation of dominant climate modes, in particular ENSO diversity, than conventional kernel algorithms.

Visualizing urban versus rural sentiments in real timeJackson HowellThursday, 14:00-15:20

Discrepancies in sentiment between urban and rural communities represent a divide which has garnered much media attention yet so far has yielded little research or analysis. In this research, we use sentiment analysis to parse tweets in order to reveal the mood of each demographic group when discussing specific topics. We expose this method through a publicly accessible web application for sentiment tracking. Users are able to track specific keywords on Twitter in order to collect data at different scales, filtering by country, state, or even neighborhood. Using this tool, we find that across a broad range of topics generally believed to be polarizing, urban and rural groups actually express very similar sentiment scores. The only two areas where significant differences were found were related to religion and the perception of time. These results suggest that even though two demographic groups might hold completely opposite views on an issue, there is usually a certain symmetry in the emotion that both groups bring to the discourse.

When to ease off the brakes (and hopefully prevent recessions)Harold Hastings, Tai Young-Taft, Amanda Landi and Thomas WangTuesday, 14:00-15:20

Increases in the federal funds rate (FF) aimed at stabilizing the economy have inevitably been followed by recessions, and recently, peaks in the federal funds rate occurred 6-16 months before the start of recessions (data from https://fred.stlouisfed.org/). Apparently, reductions in interest rates occurred too late to prevent recession. It would therefore be useful to find early leading indicators for when to halt or reverse these increases in FF.

Many authors have found the yield curve (the spread between the interest rates on the ten-year Treasury note and the three-month Treasury bill) to be a useful predictor of recessions [1-4]. We modify and extend this approach in several ways. First, we investigate the effective federal funds rate (FF) as a control variable on recessions in the United States, and compare it to the three-month Treasury bill and ten-year Treasury note, employing a control theory paradigm to analyze its influence. Second, we use Takens embedding and lag plots to replace the use of multiple variables. The lag is determined using the standard method (the time scale of the decay in autocorrelation [5]) and in particular is notably longer than the lag used in Liu and Moench [3]. Third, we smooth and detrend the data with the Locally Weighted Scatterplot Smoothing (LOWESS) method [6,7] prior to Takens embedding. LOWESS yields a non-linear separation of time scales so that Takens embedding focuses on an empirically appropriate timescale, yielding a ~2D attractor in 3-space.
Although more analysis is needed, the dynamics of the 10-year Treasury - FF spread appears to be a good leading predictor of recessions. Since declines in this spread are a leading predictor, and FF is a control variable, the Fed might consider easing off the brakes by restricting increases in FF if the spread appears to narrow excessively. The spread has a relatively long correlation time, ~7Q, and the “core dynamics” after LOWESS smoothing and detrending can be realized in the 3D embedding {(spread, spread-7Q, spread-14Q)}, yielding a ~2D attractor in 3-space. Many other models, including the Lotka-Volterra model (in ecology as well as economics) [8] and the Bar-Yam model [9], appear to display low-dimensional dynamics. Future work includes alternative non-parametric approaches: Empirical Data Modelling (EDM) [10] and the Takens-Kalman filter [11].
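The delay-coordinate construction {(spread, spread-7Q, spread-14Q)} can be sketched directly. A hedged numpy illustration on an invented periodic "spread" series (the LOWESS smoothing/detrending step is omitted here, and the toy data is not the FRED series):

```python
import numpy as np

def delay_embed(x, lag, dim):
    """Takens delay-coordinate embedding: rows are (x_t, x_{t-lag}, ..., x_{t-(dim-1)*lag})."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[(dim - 1 - i) * lag : (dim - 1 - i) * lag + n]
                            for i in range(dim)])

# toy quarterly "spread" series; lag of 7 quarters as in the abstract
t = np.arange(120)
spread = np.sin(2 * np.pi * t / 28.0)
E = delay_embed(spread, lag=7, dim=3)   # the {(spread, spread-7Q, spread-14Q)} points
print(E.shape)  # (106, 3): 106 embedded points in 3-space
```

Plotting the rows of E in 3-space would trace out the attractor; for this toy sinusoid it is a closed curve, whereas the abstract reports a ~2D attractor for the smoothed spread data.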

References:
[1] Estrella, A., Mishkin, F. (1998). Review of Economics and Statistics, 80, 45.
[2] Estrella, A., Trubin, M. (2006). Current Issues in Economics and Finance, 12(5).
[3] Liu, W., Moench, E. (2014). Federal Reserve Bank of New York. Staff Report 691.
[4] Berge, T.J. (2015). Journal of Forecasting, 34, 455.
[5] Constantine, W., Percival, D. (2017). fractal: A Fractal Time Series Modeling and Analysis Package. R package version 2.0-4. https://CRAN.R-project.org/package=fractal
[6] Cleveland, W. S. (1979). J. Amer. Stat. Assoc. 74, 829.
[7] Warnes, G.R., Bolker, B., Bonebakker, L., et al. (2009). gplots: Various R programming tools for plotting data. R package version 2(4). https://cran.biodisk.org/web/packages/gplots/gplots.pdf
[8] Goodwin, R. (1990). Chaotic Economic Dynamics. Oxford University Press.
[9] Bar-Yam, Y., Langlois-Meurinne, J., Kawakatsu, M., Garcia, R. (2017). Preliminary steps toward a universal economic dynamics for monetary and fiscal policy. arXiv preprint arXiv:1710.06285.
[10] Ye H., Beamish R.J., Glaser S.M., et al. (2015). Proceedings National Academy of Sciences 112, E1569.
[11] Hamilton, F., Berry, T., Sauer, T. (2017). European Physical Journal Special Topics 226, 3239.

Who buys Who?: A study on the entrepreneurship network based on the investor relationshipSangmin Lim, Ohsung Kwon and Duk Hee LeeTuesday, 18:00-20:00

The number of startups and investors forming the startup-investor network has grown greatly within the past few years. The main nodes (startups, investors) and links have increased in number and size as investment amounts and types have evolved. Recently, with more diverse methods being developed for initial investments, the network has become more complex and dense. Predicting that this network also follows a power law, since certain startups dominate investment in both amount and links, we use a network approach to analyze the topology and characteristics of the network.
We use the amounts of investment, differentiated by type (angel; series A, B, C), as links to compose the network. Our network analysis shows, first, that all series of investment follow a power law: as the rounds proceed, investment concentrates on a few startups. Second, clusters form around heavy investors: investors that are mutually connected also tend to invest in the same startups, forming clusters. Finally, the network topology is very fragile, since connectivity is concentrated extremely in a few major investors and startups.
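The power-law claim above can be checked on an investor-startup edge list by tabulating startup degrees and fitting the exponent. The sketch below uses a toy edge list and the standard continuous maximum-likelihood (Hill) estimator; the names and figures are illustrative, not from the study's dataset.

```python
import numpy as np

# toy bipartite edge list (investor -> startup); illustrative only
edges = [("invA", "s1"), ("invA", "s2"), ("invB", "s1"), ("invC", "s1"),
         ("invB", "s3"), ("invD", "s1"), ("invE", "s2"), ("invF", "s1")]

# startup degree: how many investors back each startup
degree = {}
for _, s in edges:
    degree[s] = degree.get(s, 0) + 1

# continuous MLE (Hill) estimate of the power-law exponent alpha,
# assuming P(k) ~ k^-alpha for k >= kmin
k = np.array(sorted(degree.values()), dtype=float)
kmin = k.min()
alpha = 1 + len(k) / np.log(k / kmin).sum()
print(dict(degree), round(alpha, 2))
```

On real data one would also compare the fit against alternatives (e.g. log-normal) before claiming a power law; the tiny sample here is only to show the mechanics.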

Why are the US parties so polarized? A "satisficing" dynamical model for political electionsVicky Chuqiao Yang, Georgia Kernell, Daniel Abrams and Adilson MotterThursday, 14:00-15:20

Since the late 1950s, the Democratic and Republican members of the US Congress have become increasingly polarized, while the ideology distribution of the US public has stayed relatively stationary. Here, we propose a mathematical model as a possible origin of this phenomenon. Most existing voter models assume voters are "maximizers": they exhaustively seek the best option. However, psychology research suggests that when making decisions, many people tend to be "satisficers": they settle for what is "good enough" and do not obsess over other options. In this work, we incorporate "satisficing" decision making into a mathematical framework, and derive a differential equation system for political party positions in response to "satisficing" voters.

We validate our model using a dataset of Congress members' ideology positions inferred from their roll-call records with the DW-NOMINATE method. Our model reaches good agreement with the historical trajectory of the two parties, and finds that party polarization can be a consequence of increasing ideological homogeneity in the parties' election platforms.
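The abstract does not give the model's equations, but the qualitative mechanism — parties adjusting positions in response to satisficing voters — can be caricatured in a few lines. In the sketch below (all parameters and the decision rule are illustrative assumptions, not the authors' published system), a voter turns out for the nearest party only if it lies within a satisficing radius r, and each party climbs the gradient of its own vote share; with a stationary public the parties nonetheless drift apart.

```python
import numpy as np

# minimal caricature of satisficing-voter party dynamics (illustrative)
rng = np.random.default_rng(1)
voters = rng.normal(0.0, 1.0, 20000)   # stationary public ideology distribution
r, eps, step = 0.8, 0.01, 0.1          # satisficing radius, FD step, learning rate

def vote_share(xs, i):
    """Share of voters whose nearest party is i AND within radius r (else abstain)."""
    d = np.abs(voters[:, None] - np.asarray(xs)[None, :])
    nearest = d.argmin(axis=1)
    satisfied = d.min(axis=1) <= r
    return np.mean((nearest == i) & satisfied)

x = [-0.1, 0.1]                        # initial party positions, nearly centrist
for _ in range(200):
    for i in range(2):
        up, dn = x.copy(), x.copy()
        up[i] += eps
        dn[i] -= eps
        # finite-difference gradient ascent on own vote share
        x[i] += step * (vote_share(up, i) - vote_share(dn, i)) / (2 * eps)
print([round(v, 2) for v in x])
```

The parties separate because, once a voter is "satisfied", a party gains more at its outer flank than it loses at the contested center; this only illustrates the flavor of the mechanism, not the paper's actual differential equations.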

World Human Dynamics: Evidence from the Changing Patterns of Centers of Gravity Since the 1960sJose Balsa-Barreiro, Yingcheng Li, Alex "Sandy" PentlandTuesday, 18:00-20:00

For years, companies have been trying to understand customer patronage behavior in order to locate a new store of their chain in the right place. Customer patronage behavior has been widely studied in market share modeling contexts, which is an essential step in solving facility location problems. Existing studies have conducted surveys to estimate merchants' market share and their factors of attractiveness for use in various proposed mathematical models. Recent trends in big data analysis enable us to understand human behavior and decision making in a deeper sense. This study proposes a novel approach of transaction-based patronage behavior modeling. We use the Huff gravity model together with a large-scale transactional dataset to model customer patronage behavior at a regional scale. Although the Huff model has been well studied in the context of facility location-demand allocation, this study is the first to use the model in conjunction with a large-scale transactional dataset to model customer retail patronage behavior. This approach enables us to easily apply the model to different regions and different merchant categories. As a result, we are able to evaluate indicators that are correlated with the Huff model performance. Experimental results show that our method robustly performs well in modeling customer shopping behavior for a number of shopping categories, including grocery stores, clothing stores, gas stations, and restaurants. Regression analysis verifies that demographic diversity features such as gender diversity and marital status diversity of a region are correlated with the model performance. The contribution and advantages of our approach include the following:
1-Merchants and business owners can implement our model in different geographical regions with different settings to determine what locations are suitable for new stores.
2-One can use different merchant categories to compare cross-category performance of shopping behavior models.
3-A deeper analysis is possible on evaluating shopping behavior models and multiple factors derived from transaction data such as demographic diversity, mobility diversity, and merchant diversity.
4-It is computationally inexpensive to rebuild a model. One can simply replace transaction data and fit models in the same manner as previous models. This eliminates the need and associated costs to conduct surveys for data collection under different settings.
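The Huff gravity model at the heart of this approach assigns customer i a probability of patronizing store j that grows with store attractiveness and decays with distance: P_ij = (A_j^alpha / d_ij^beta) / sum_k (A_k^alpha / d_ik^beta). A minimal sketch (attractiveness values, distances, and exponents are illustrative, not from the study):

```python
import numpy as np

def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    """Huff model: rows are customer locations, columns are stores; rows sum to 1."""
    utility = attractiveness[None, :] ** alpha / distances ** beta
    return utility / utility.sum(axis=1, keepdims=True)

A = np.array([10.0, 4.0, 6.0])      # store attractiveness, e.g. floor area
d = np.array([[1.0, 2.0, 3.0],      # distances from customer 1 to each store
              [3.0, 1.0, 2.0]])     # distances from customer 2 to each store
P = huff_probabilities(A, d)
print(P.round(3))
```

In the transaction-based setting described above, the attractiveness and distance-decay parameters would be fitted per region and merchant category from the observed transaction flows rather than from surveys.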