Monday

7:50-8:30 Breakfast Networking: Un-Conference

8:30-10:00 Plenary Session

Session Chairs: Emma Towlson and Derrick Van Gennep

Sandra Chapman
(University of Warwick)

Nassim Nicholas Taleb
(Real World Risk Institute, NYU Tandon School of Engineering)

10:00-10:20 Coffee Break & Hallway Chat

10:20-12:00 Parallel Sessions & Workshops

12:00-12:40 Lunch Networking: Un-Conference

12:40-14:10 Plenary Session

Session Chair: Hiroki Sayama

Sara Walker
(Arizona State University)

Mark W. Moffett
(Smithsonian Institution)

14:10-14:30 Coffee Break & Hallway Chat

14:30-16:10 Parallel Sessions & Workshops

16:15-17:15 Un-Conference

Remarks: Yaneer Bar-Yam

Tuesday

7:50-8:30 Breakfast Networking: Un-Conference

8:30-10:00 Plenary Session

Session Chair: Carlos Gershenson

Irena Vodenska
(Boston University)

Melanie Moses
(University of New Mexico)

10:00-10:20 Coffee Break & Hallway Chat

10:20-12:00 Parallel Sessions & Workshops

12:00-12:40 Lunch Networking: Un-Conference

12:40-14:10 Plenary Session

Session Chair: Ali Minai

Orit Peleg
(University of Colorado Boulder)

Anna Orlova
(Tufts University)

14:10-14:30 Coffee Break & Hallway Chat

14:30-16:10 Parallel Sessions & Workshops

16:10-17:00 Herbert A. Simon Award Ceremony

Session Chair: Yaneer Bar-Yam

Award Recipient & Keynote: Melanie Mitchell

17:00-18:00 Speed Networking

Wednesday

7:50-8:30 Breakfast Networking: Un-Conference

8:30-10:30 Special Session: Complexity Aware Economics: Understanding and Reclaiming Agency

Session Chair: Garth Jensen

Rana Foroohar
(Financial Times)

Lina Khan
(U.S. House Subcommittee on Antitrust, Commercial, and Administrative Law)

Sarah Miller
(American Economic Liberties Project)

10:30-10:50 Coffee Break & Hallway Chat

10:50-12:30 Parallel Sessions & Workshops

12:30-13:10 Lunch Networking: Un-Conference

13:10-14:40 Plenary Session

Session Chair: Marcus Aguiar

W. Brian Arthur
(Santa Fe Institute)

Stephen Wolfram
(Wolfram Research)

14:40-15:00 Coffee Break & Hallway Chat

15:00-16:40 Parallel Sessions & Workshops

16:40-18:00 Poster Session

Thursday

7:50-8:20 Breakfast Networking

8:20-10:15 Plenary Session

Session Chair: Emma Towlson

Elsa Arcaute
(University College London)

Tina Eliassi-Rad
(Northeastern University)

Una-May O'Reilly
(MIT Computer Science & Artificial Intelligence Laboratory)

10:15-10:30 Coffee Break & Hallway Chat

10:30-12:00 Parallel Sessions & Workshops

12:00-12:30 Lunch Networking

12:30-14:40 Special Session: Complexity & Military Application

Session Chairs: Bonnie Johnson and Garth Jensen

Wayne Porter
(Naval Postgraduate School)

Gary Langford
(Portland State University)

Antulio Echevarria
(Army War College)

14:40-14:50 Coffee Break & Hallway Chat

14:50-17:35 Speaker Panels

17:40-19:00 CGT: Complexity's Got Talent

Friday

7:45-8:15 Breakfast Networking

8:20-10:25 Special Session: COVID-19 Research & Applications

Session Chairs: Elena Naumova and Derrick Van Gennep

Jack Gorski
(Versiti WI)

Aistis Šimaitis
(Office of the Government of the Republic of Lithuania)

Elena N. Naumova
(Tufts University)

10:25-10:35 Short Break

10:35-12:05 Special Session (cont.)

JP James and Sandeep Prabhakara
(Hive Financial Systems, Libreum Research Institute)

Joa Ja'keno Okech-Ojony
(World Health Organization)

Stephen Ataro Ayella
(Uganda Medical Association, Public Health Consultant)

12:05-12:40 Lunch Networking

12:40-13:45 Special Session (cont.)

Carlos Gershenson
(National Autonomous University of Mexico)

Closing Remarks: Yaneer Bar-Yam

13:50-15:00 Special Workgroup: Letter to the Community @ COVID-19

Session Chairs: Jeremy Rossman, Joaquin Beltran, Elena Naumova

13:50-15:00 Post-conference Zoomy Social


Plenaries

Elsa Arcaute (University College London) | Thursday 8:30-9:05

W. Brian Arthur (Santa Fe Institute) | Wednesday 13:20-14:00

Stephen Ataro Ayella (Uganda Medical Association, Public Health Consultant) | Friday 11:00-11:35

COVID-19 Pandemic Response in Uganda: Experiences, Learning, and Harnessing Opportunities Amidst the Crisis

Sandra Chapman (University of Warwick) | Monday 8:40-9:20

Antulio Echevarria (Army War College) | Thursday 14:00-14:40

Tina Eliassi-Rad (Northeastern University) | Thursday 9:05-9:40

Rana Foroohar (Financial Times) | Wednesday 8:45-9:25

Jack Gorski (Versiti WI) | Friday 8:40-9:15

Evolution and ecology of a pandemic

JP James (Hive Financial Systems, Libreum Research Institute) | Friday 10:35-11:00

Rise of the COVID Wars: Agent-based modeling of the Collapse of Modern Society and Rise of Global Conflict as a function of the virus

Lina Khan (U.S. House Subcommittee on Antitrust, Commercial, and Administrative Law) | Wednesday 9:25-10:05

Gary Langford (Portland State University) | Thursday 13:15-13:55

Sarah Miller (American Economic Liberties Project) | Wednesday 10:05-10:45

Melanie Mitchell (Santa Fe Institute) | Tuesday 16:20-17:00

Mark W. Moffett (Smithsonian Institution) | Monday 13:30-14:10

Melanie Moses (University of New Mexico) | Tuesday 9:20-10:00

Space matters: from robots to viral pandemics, spatial interactions dominate dynamics

Elena N. Naumova (Tufts University) | Friday 9:50-10:25

Challenges of forecasting seasonality of coronavirus circulation

Una-May O'Reilly (MIT Computer Science & Artificial Intelligence Laboratory) | Thursday 9:40-10:15

Anna Orlova (Tufts University) | Tuesday 13:20-14:00

Representation of Computable Data, Information and Knowledge in Complex Systems in Healthcare: Towards Digital Twins Technology

Orit Peleg (University of Colorado Boulder) | Tuesday 12:40-13:20

Wayne Porter (Naval Postgraduate School) | Thursday 12:40-13:10

Sandeep Prabhakara (Hive Financial Systems, Libreum Research Institute) | Friday 10:35-11:00

Rise of the COVID Wars: Agent-based modeling of the Collapse of Modern Society and Rise of Global Conflict as a function of the virus

Aistis Šimaitis (Office of the Government of the Republic of Lithuania) | Friday 9:15-9:50

Nassim Nicholas Taleb (Real World Risk Institute, NYU Tandon School of Engineering) | Monday 9:20-10:00

Irena Vodenska (Boston University) | Tuesday 8:40-9:20

Sara Walker (Arizona State University) | Monday 12:50-13:30

Stephen Wolfram (Wolfram Research) | Wednesday 14:00-14:40

Towards a Fundamental Theory of Physics, and a Surprising New View of Complexity


Parallel Sessions

City Science | Thursday 10:30-12:00

Cognitive and Neuroscience 1 | Monday 14:30-16:10

Cognitive and Neuroscience 2 | Tuesday 10:20-12:00

Session Chair: Dahui Wang

Assessing effect of sleep deprivation in crayfish using information theory | Mireya Osorio-Palacios, Laura Montiel-Trejo, Iván Oliver-Domínguez, Rodrigo Aguayo-Solis, Jesús Hernández-Falcón and Karina Mendoza-Ángeles

Critical dynamics of active and quiescent phases in brain activity across the sleep-wake cycle | Fabrizio Lombardi, Manuel Gomez-Extremera, Jilin Wang, Plamen Ivanov and Pedro Bernaola-Galvan

Just stop believing: need and consequences of probabilistic induction | André Martins

A Process Algebra Model of Collective Intelligence Systems and Neural Networks | William Sulis

Collective Behaviors 1 | Tuesday 10:20-12:00

Collective Behaviors 2 | Wednesday 15:00-16:40

Computational Social Science 1 | Monday 10:20-12:00

Session Chair: Alfredo J. Morales

A Novel Viewpoint on Social Complexity and the Evolution Model of Social Systems based on Internal Mechanism Analysis | Wei Wang

Polarization during dichotomous Twitter conversations | Juan Carlos Losada, Gastón Olivares Fernández, Julia Atienza-Barthelemy, Samuel Martin-Gutierrez, Juan Pablo Cárdenas, Javier Borondo and Rosa M. Benito

Extremism definitions in opinion dynamics models | André Martins

An agent-based simulation of corporate gender biases | Paolo Gaudiano and Chibin Zhang

Computational Social Science 2 | Tuesday 14:30-16:10

Session Chair: Derrick Van Gennep

Communicative Vibration: A Graph-Theoretic Approach to Group Stability in an Online Social Network | Matthew Sweitzer, Robert Kittinger, Casey Doyle, Asmeret Naugle, Kiran Lakkaraju and Fred Rothganger

A Visual Exploratory Tool for Migration Analysis | Mert Gürkan, Hasan Alp Boz, Alfredo Morales and Selim Balcisoy

Enhanced Information Gathering May Intensify Disagreement Among Groups | Hiroki Sayama

Computational Social Science 3 | Wednesday 10:50-12:30

Session Chair: Flavio Pinheiro

Multi-level Co-authorship Network Analysis on Interdisciplinarity: A Case Study on the Complexity Science Community | Robin Wooyeong Na and Bongwon Suh

Emergent regularities and scaling in armed conflict data | Edward D. Lee, Bryan C. Daniels, Chris Myers, David Krakauer and Jessica C. Flack

Towards Novel, Practical Reasoning based Models of Individual Rationality in Complex Strategic Encounters | Predrag Tosic

Building Bridges: Analyzing Venezuelan integration in Colombia with mobile phone metadata | Isabella Loaiza, Germán Sánchez, Lina Ramos, Felipe Montes and Alex Pentland

Covid-19: Modeling | Monday 14:30-16:10

Covid: Multi-Disciplinary | Thursday 10:30-12:00

Covid: Strategy | Wednesday 15:00-16:40

Session Chair: Aaron Green

Motion and Emotion: Tracking Sentiment and Spread of COVID-19 in New York City | Elizabeth Marsh, Dawit Gebregziabher, Nghi Chau and Shan Jiang

Eliminating COVID-19: The Impact of Travel and Timing | Alexander Siegenfeld and Yaneer Bar-Yam

COVID-19 lessons for the future of governance | Anne-Marie Grisogono

Scientific Logic, Natural Science, Polarized Politics and SARS-CoV-2 | J. Rowan Scott

CHAOS COMPLEXITY COMPLEX SYSTEMS COVID-19: 30 Years Teaching Health Professionals chaos and complexity | Vivian Rambihar, Sherryn Rambihar and Vanessa Rambihar

Economics 1 | Monday 10:20-12:00

Session Chair: Sanith Wijesinghe

The Economy as a Constraint Satisfaction Problem | Dhruv Sharma, Jean-Philippe Bouchaud, Marco Tarzia and Francesco Zamponi

Foundations of Cryptoeconomic Systems | Michael Zargham and Shermin Voshmgir

Trustable risk scoring for non-bank gateways in blockchain and DLT financial networks | Percy Venegas

Symmetric Information but Asymmetric Trust? The revealed and tacit knowledge of markets under pandemic risk | Percy Venegas and Tomas Krabec

Innovation Ecosystem as a Complex Adaptive System; Implications for Analysis, Design and Policy Making | Alireza Valyan, Jafar Taheri Kalani and Mehrdad Mohammadi

Economics 2 | Tuesday 14:30-16:10

Economics 3 | Wednesday 10:50-12:30

Session Chair: Oscar Granados

Crisis contagion in the world trade network | Célestin Coquidé, José Lages and Dima L Shepelyansky

Hey, What Happened? The Impact Sector as a Complex Adaptive System: A Conceptual Understanding | Tanuja Prasad

Complexity and Ignorance in Social Sciences | Czeslaw Mesjasz

Does a gap-filling phenomenon exist in stock markets? | Xiaohang Liu, Yan Wang, Ziyan Jiang, Qinghua Chen and Honggang Li

Economics 4 | Thursday 10:30-12:00

Session Chair: Sanith Wijesinghe

Empirical scaling and dynamical regimes for GDP: challenges and opportunities | Harold Hastings and Tai Young-Taft

What does theoretical physics tell us about Mexico’s December Error crisis? | Oliver López Corona and Giovanni Hernández

Cyborgization of Modern Social-Economic Systems: Accounting for Changes in Metabolic Identity | Ansel Renner, A. H. Louie and Mario Giampietro

The devil is in the interactions: SDGs networks to enhance coherent policy | A. Urrutia, O. Rojo-Nava, C. Casas Saavedra, C. Zarate, G. Magallanes-Guijón, B Hernández-Cruz and Oliver López Corona

Emergence | Thursday 10:30-12:00

Session Chair: Bill Sulis

Anthro-technical Decision Making | Justin Shoger

Complexity and Corruption Risk in Human Systems | Jim Hazy

Circle Consciousness | Jiyun Park

Chinese Medicine Clinical Reasoning is Complex | Lisa Conboy, Lisa Taylor Swanson and Tanuja Prasad

Exploring Complex Systems Through Computation | Patrik Christen

Engineering | Wednesday 15:00-16:40

Session Chair: Babak Ravandi

Complexity-inspired Innovation Trajectories for Product Development Teams | David Newborn

Complexity and Scrum: A Reappraisal | Czeslaw Mesjasz, Katarzyna Bartusik, Tomasz Malkus and Mariusz Soltysik

Bio-inspired understanding and engineering of business innovations | Nelson Alfonso Gómez-Cruz and David Anzola

Exploring Order-Theoretic Measurement for Complex System Operations | Christopher Klesges

Evolution & Ecology | Thursday 10:30-12:00

Machine Learning & AI 1 | Monday 10:20-12:00

Machine Learning & AI 2 | Wednesday 10:50-12:30

Machine Learning & AI 3 | Thursday 10:30-12:00

Military & Defense 1 | Monday 10:20-12:00

Military & Defense 2 | Tuesday 10:20-12:00

Network Cascades | Wednesday 15:00-16:40

Session Chair: Chen Shen

Information Dynamics in Neuromorphic Nanowire Networks | Ruomin Zhu, Joel Hochstetter, Alon Loeffler, Mike Li, Joseph Lizier and Zdenka Kuncic

Searching for Influential Nodes in Modular Networks | Zakariya Ghalmane, Chantal Cherifi, Hocine Cherifi and Mohammed El Hassouni

Diffusion on Multiplex Networks with Asymmetric Coupling | Zhao Song and Dane Taylor

Networks 1 | Monday 10:20-12:00

Session Chair: Leila Hedayatifar

Utilizing complex networks with error bars | Istvan Kovacs

Hypernetwork Science: From Multidimensional Networks to Computational Topology | Cliff Joslyn, Sinan Aksoy, Tiffany Callahan, Lawrence Hunter, Brett Jefferson, Brenda Praggastis, Emilie Purvine and Ignacio Tripodi

Network Subgraphs in Real Problem-Solving Networks | Dan Braha

Distributed and Centralized Regimes in Controllability of Temporal Networks | Babak Ravandi

Visualizing Community Structured Complex Networks | Zhenhua Huang, Zhenyu Wang, Wentao Zhu, Junxian Wu and Sharad Mehrotra

On Efficiency and Predictability of the Dynamics of Discrete Boolean Networks | Predrag Tosic

Networks 2 | Tuesday 14:30-16:10

Session Chair: Flavio Pinheiro

Neuromorphic Nanowire Networks: Topology and Function | Alon Loeffler, Ruomin Zhu, Joel Hochstetter, Mike Li, Adrian Diaz-Alvarez, Tomonobu Nakayama, James M Shine and Zdenka Kuncic

The darkweb: a social network anomaly | Kevin O'Keeffe, Virgil Griffith, Yang Xu, Paolo Santi and Carlo Ratti

Eigenvalues of Random Graphs with Cycles | Pau Vilimelis Aceituno

Emergence of Hierarchy in Networked Endorsement Dynamics | Philip Chodrow, Nicole Eikmeier, Mari Kawakatsu and Daniel Larremore

Decomposing Bibliographic Networks into Multiple Complete Layers | Robin Wooyeong Na, Bryan Daniels and Kenneth Aiello

Networks 3 | Wednesday 10:50-12:30

Session Chair: Morgan Frank

Naruto Complex Network: A mathematical approach to a fictional universe | Aidee Lashmi García-Kroepfly, Iván Oliver-Domínguez, Jesús Hernández-Falcón and Karina Mendoza-Ángeles

Localization of Hubs in Modular Networks | Zakariya Ghalmane, Chantal Cherifi, Hocine Cherifi and Mohammed El Hassouni

Dynamical, directed networks from physical data: auroral current systems from 100+ ground based SuperMAG magnetometers | Sandra Chapman, Lauren Orr and Jesper Gjerloev

Multiplex Markov Chains | Dane Taylor

The Capacitated Spanning Forest Problem is NP-complete | George Davidescu

Non-Linear Dynamics 1 | Monday 14:30-16:10

Session Chair: Amir Akhavan Masoumi

Cardiorespiratory activity in the auto-organization of the hierarchical order in crayfish | Iván Oliver-Domínguez, Aidee Lashmi García-Kroepfly, Mireya Osorio-Palacios, Jesús Hernández-Falcón and Karina Mendoza-Ángeles

A 2D Ising Model Cellular Automaton Mapped Onto Catenary Involute | Goktug Islamoglu

Spiral defect chaos in Rayleigh-Bénard convection: asymptotic and numerical study of flows induced by rotating spirals | Eduardo Vitral, Saikat Mukherjee, Perry H. Leo, Jorge Viñals, Mark R. Paul and Zhi-Feng Huang

Inferring the phase space of a perturbed limit cycle oscillator from data using nearest neighbor prediction | Rok Cestnik and Michael Rosenblum

Nonlinearity, time directionality and evolution in Western classical music | Alfredo González-Espinoza, Gustavo Martínez-Mekler, Lucas Lacasa and Joshua Plotkin

Non-Linear Dynamics 2 | Tuesday 10:20-12:00

Security | Wednesday 15:00-16:40

Social Systems 1 | Monday 14:30-16:10

Social Systems 2 | Tuesday 10:20-12:00

Systems Biology 1 | Monday 10:20-12:00

Session Chair: Andrew Zamora

Ribosome disruption in Luminal A breast cancer revealed by gene co-expression networks | Diana García-Cortés, Jesús Espinal-Enriquez and Enrique Hernandez-Lemus

A model of cells recruited by a spatiotemporal signalling pattern inducing cell cycle reduction during axolotl spinal cord regeneration | Emanuel Cura Costa, Aida Rodrigo Albors, Elly M. Tanaka and Osvaldo Chara

Understanding the hierarchical organization of the cell through cancer mutational rewiring | Eiru Kim and Traver Hart

Mapping human aging with longitudinal multi-omic and bioenergetic measures in a cellular lifespan system | Gabriel Sturm, Jeremy Michelson, Meeraj Kothari, Kalpita Karan, Andres Cardenas, Marlon McGill, Michio Hirano and Martin Picard

Systems Biology 2 | Tuesday 14:30-16:10

Systems Biology 3 | Wednesday 10:50-12:30

Session Chair: Hui Li

Identifying Shared Regulators & Pathways Between Idiopathic Diseases | Tuck Onn Liew and Chandrajit Lahiri

Immunodietica: Interrogating interactions between autoimmune disorders and diet | Iosif Gershteyn and Leonardo Ferreira

Multifractal-based Analysis for Early Detection of Central Line Infections | David Slater, James Thompson, Haven Liu and Leigh Nicholl

Mahalanobis Distance as proxy of Homeostasis Loss in Type 2 Diabetes Pathogenesis | Jose L. Flores-Guerrero, Margery A. Connelly, Marco A. Grzegorczyk, Peter R. van Dijk, Gerjan Navis, Robin P.F. Dullaart and Stephan J.L. Bakker

Emergent collective behaviors in coupled Boolean networks | Chris Kang and Nikolaos Voulgarakis


Panels

The Highly Complex Defense Environment: Challenges and Potential Consequences | Thursday 14:50-16:05

Dr. Sean Lawson, University of Utah and Modern War Institute at West Point

Mr. Byron Mushrush, US Central Command

Dr. Tonya Henderson, US SPACECOM

Solutions Leveraging Complexity Science: Methods, Tactics, and Engineered Systems | Thursday 16:10-17:35

Dr. Ying Zhao, Information Sciences, Naval Postgraduate School

Mr. Jack Crowley, Forge AI

Dr. Philip Brown, NORAD & USNORTHCOM


Workshops

Workshop Organizers: Percy Venegas and Liz Johnson

Confirmed Speakers:
John C. Havens (World Economic Forum, IEEE)
Dr. Sabrina Martin (Oxford University)
Severin Engelmann (Technical University of Munich)
Dr. Wojciech Samek (Heinrich Hertz Institute)
Dr. Atif Farid Mohammad (University of North Carolina, NASA)
Dr. Biplav Srivastava (University of South Carolina, IBM)
Dr. Anand Rao (PricewaterhouseCoopers)
Mark Caine (Lead, AI and Machine Learning, World Economic Forum)
Dr. Bonnie Johnson (US Naval Postgraduate School)

Complex Social Networks and Social Sciences | Tuesday 10:20-12:00

Complex Systems Dynamics in Aging Biology 1 | Tuesday 10:20-12:00

Workshop Organizer: Alan Cohen

Divergent trajectories of single-cell aging | Nan Hao

Methylation Landscapes in Aging Tissues and Cells | Morgan Levine

Alzheimer: a Complex Systems Research Challenge | Marcel MG Olde Rikkert

Complex Systems Dynamics in Aging Biology 2 | Tuesday 14:30-16:10

Workshop Organizer: Alan Cohen

Quantifying the state and dynamics of sparsely sampled biological networks to understand aging | Alan Cohen

Using synthetic biology to learn genome-scale causal relationships with applications to aging | Scott McIsaac

Interdependency in physiologic networks | Nicholas Stroustrup

Workshop Organizers: Edna Pasher PhD and Boaz Tadmor MD

Why we embraced complexity in the Rabin medical center | Boaz Tadmor MD

Embracing complexity before and during the COVID-19 pandemic in the ICU | Shaul Lev MD

Cyber & Physical Resilience | Thursday 10:30-12:00

Workshop Organizers: Jianxi Gao and Shouhuai Xu

A Resilience Approach to dealing with COVID-19 and future systemic shocks | Igor Linkov

Dynamic resilience of complex networks | Baruch Barzel

Reconstructability Analysis, a Complex Systems Data Modeling Methodology | Wednesday 15:00-16:40


Posters

Wednesday 16:40-18:00

Session Chairs: Naoki Masuda and Leila Hedayatifar

16:40-17:20 Part 1

  • Modelling any complex system using Hilbert’s World Formula approach | Troy Vom Braucke and Norbert Schwarzer

  • A Receptor-Centric Model for Emergence Phenomenon | Michael Ji and Hua Ji

  • Toward Conceptualizing Race and Racial Identity Development Within an Attractor Landscape | Sean Hill

  • An Introduction to Complex Systems Science and its Applications | Alexander Siegenfeld and Yaneer Bar-Yam

  • Addressing the Challenge of Climate Change: Seeking Synergy with the Challenges of Aging Populations and Automation | Harold Hastings, Tai Young-Taft and Chris Coggins

  • Influential Spreaders in Networks with Community Structure | Zakariya Ghalmane, Chantal Cherifi, Hocine Cherifi and Mohammed El Hassouni

  • The impact of composition on the dynamics of autocatalytic sets | Alessandro Ravoni

17:20-18:00 Part 2

  • Final states of Threshold based Complex-contagion model and Independent-cascade model on directed Scale-free Networks Under Homogeneous Conditions | Chathura Jayalath, Chathika Gunaratne, Bill Rand, Chathurani Senevirathna and Ivan Garibay

  • Design of controllable networks via memetic algorithms | Shaoping Xiao, Baike She and Zhen Kan

  • Addressing Climate Change Using the Carbon Individual Retirement Account (C-IRA) | Jason Makansi

  • Effect of voluntary distancing measures on the spread of SARS-CoV2 in Mexico City | Guillermo de Anda-Jáuregui, Concepción Gamboa-Sánchez, Diana García-Cortés, Ollin D. Langle-Chimal, José Darío Martínez-Ezquerro, Rodrigo Migueles-Ramirez, Sandra Murillo-Sandoval, José Nicolás-Carlock and Martin Zumaya

  • Describing the complexity of Chinese Medicine Diagnostic Reasoning: the example of suspected COVID-19 | Lisa Conboy, Lisa Taylor-Swanson and Tanuja Prasad

  • CHAOS COMPLEXITY COMPLEX SYSTEMS COVID-19: 30 Years Teaching Health Professionals chaos and complexity | Vivian Rambihar, Sherryn Rambihar and Vanessa Rambihar


Abstracts

A 2D Ising Model Cellular Automaton Mapped Onto Catenary Involute | Monday 14:30-16:10

Goktug Islamoglu

In this work, the author proposes a two-dimensional Ising model that pushes the automaton to evolve with ferromagnetic alignment along one axis and antiferromagnetic alignment along the normal axis. With this setup, the periodic boundary condition acts in one direction only, mapping the lattice onto a catenoid instead of a torus. The model reaches an arrested state, and its critical values correspond exactly to a tractrix, or catenary involute. Roots of a logistic map coded into the model return the critical temperature.
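
A minimal sketch of the kind of anisotropic lattice described (our reading of the abstract, not the author's code): ferromagnetic coupling Jx along the axis with the periodic boundary, antiferromagnetic coupling Jy along the open axis, evolved with Metropolis updates; all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
L, Jx, Jy, T = 64, 1.0, -1.0, 2.0   # assumed couplings and temperature
spins = rng.choice([-1, 1], size=(L, L))

def site_energy(s, i, j):
    # Periodic boundary along x (columns) only, so the lattice closes into a
    # cylinder/catenoid rather than a torus; the y (row) boundary is open.
    e = -Jx * s[i, j] * (s[i, (j + 1) % L] + s[i, (j - 1) % L])
    if i + 1 < L:
        e -= Jy * s[i, j] * s[i + 1, j]
    if i > 0:
        e -= Jy * s[i, j] * s[i - 1, j]
    return e

for _ in range(100_000):  # Metropolis single-spin-flip updates
    i, j = rng.integers(L), rng.integers(L)
    dE = -2 * site_energy(spins, i, j)  # energy change if spin (i, j) flips
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i, j] *= -1
```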

A Simple Model for Scalable Pattern-spotting in Open Office Settings | Tuesday 10:20-12:00

Tonya Henderson and Philip Brown

Complexity science adherents generally accept the premise that human systems function as complex adaptive systems (CAS), a belief that leads us to examine the ways these systems can be generatively influenced. The literature addressing complex human systems typically targets formally empowered leaders. Yet, as with much of the organizational literature, it may very well do so at the expense of rank-and-file workers. In light of the grassroots nature of many significant changes in CAS across the globe, we are compelled to explore the lived experience of those in technical, non-managerial roles. Our emphasis area for this excursion focuses on civilian and contract workers in the Department of Defense (DoD) and their abilities to better appreciate hidden power structures, create space and time for reflection, and craft decisions.

The average cubicle-bound DoD civilian or contractor faces difficult challenges in recognizing the emergence and dissipation of informal power structures that often affect how (and if) work gets done. Yet this kind of observation, coupled with self-awareness and sense-making, offers a key to long-term, productive employment. Building on our earlier works (e.g. Henderson and Boje, 2015), we propose the adaptation of a simple tool for multi-level awareness as a means of pattern-spotting (with a view toward influencing) that is accessible to individuals and teams at all levels of government organizations.

This framework for observation, particularly relevant to DoD workers, has utility in typical open office/cubicle environments. These office layouts necessitate the adaptation of one’s personal and professional persona to work constructively in the absence of privacy. Although teleworking is on the rise due to COVID-19, the topic remains relevant as we contemplate a return to some semblance of normalcy and a return to working in close proximity to others. We reason that rank-and-file DoD workers, lacking the ability to experience, process, and manage any sense of displeasure or disappointment without an audience, endure constant observation by their peers. Potentially, this situation steals any chance for reflection or personal decision making. A simple tool for multi-level observation may aid these kinds of workers in more effectively managing themselves and navigating the various levels of interaction occurring in the modern workplace. Such observation enables the user to better identify and contextualize patterns of individual and group behaviors amid the ever-shifting fabric of informal power structures affecting rank-and-file workers.

This tool’s concept originated as “relational introspection.” It emerged as the most common theme discovered in a storytelling study of socio-material fractals, wherein nonprofit leaders were asked to describe any instances of self-similar repetition observed in their work (Wakefield, 2012). It was first defined as “the threefold, dynamic exercise of self-awareness, regard for others, and ecosystem knowledge” (Wakefield, 2012, p. 114). The model was subsequently explored as “fractal relationality” (Boje & Henderson, 2015) and finally used in consulting settings as the more approachable “SOS” model: Self-Others-Situation.

In this presentation we will briefly discuss informal power structures from a socio-material fractal perspective, consider the need for sense-making in the context of open plan offices, and introduce a simple, complexity-derived tool for reflection and decision-making.

References:

Boje, D., & Henderson, T. L. (2015). Fostering awareness of fractal patterns in organizations. In B. Burnes (Ed.), Change into practice. Routledge.

Henderson, T. L. (2013). Fractals & relational introspection. Paper presented at the 3rd Annual Quantum Storytelling Conference, Las Cruces, NM.

Henderson, T. L., & Boje, D. M. (2015). Organization development and change theory: Managing fractal organizing processes. Routledge.

Mills, J.H., Thurlow, A., & Mills, A. J. (2010). Making sense of sensemaking: The critical sensemaking approach. Qualitative Research in Organizations and Management, 5(2), 182-195.

Wakefield, T. H. (2012). An ontology of storytelling systemicity: Management, fractals and the Waldo Canyon fire. (Doctorate of Management Doctoral dissertation), Colorado Technical University, Colorado Springs, CO.

Wakefield, T. H. (2013). Fractal Management Theory. Paper presented at the Academy of Management, Orlando, FL.

Weick, K. E. (1995). Sensemaking in organizations. Thousand Oaks: Sage Publications.

Adaptive Neural Networks Accounted for by Five Instances of “Respondent-Based” Conditioning | Wednesday 10:50-12:30

Patrice Marie Miller and Michael Lamport Commons

Neural networks may be made faster and more efficient by reducing the amount of memory and computation used. In this paper, a new type of neural network, called an Adaptive Neural Network, is introduced. The proposed neural network comprises 5 unique pairings of events. Each pairing is a module, and the modules are connected within a single neural network. The pairings are simulations of respondent conditioning. However, the simulations do not necessarily represent conditioning in actual organisms. The specific pairings are as follows. The first pairing is between the reinforcer and the neural stimulus that elicits the behavior. This pairing strengthens and makes salient that eliciting neural stimulus. The second pairing is that of the now salient neural stimulus with the external environmental stimulus that precedes the operant behavior. The association of the neural event (srb) with the US/SR+ makes the internal neural event more salient and thereby helps to strengthen the operant response. The third is the pairing of the environmental stimulus event with the reinforcing stimulus. This is a “when” pairing because the cue or cues in the environment elicit the neural stimulus, srb, determining when it occurs. The fourth is the pairing of the stimulus elicited by the drive with the reinforcement event, changing the strength of the reinforcer. Pairing the environmental stimulus with the reinforcing stimulus establishes the environmental stimulus SEnvironment as an incentive (see Killeen, 1982). The fifth pairing is that, after repeated exposure, the external environmental stimulus is paired with the drive stimulus. After multiple trials of this type of pairing, the properties of the environment or a similar environment are paired with the drive stimulus SDrive. This drive stimulus is generated by an intensifying drive. Within each module, a “0” means no occurrence of a Pairing A of Stimulus A and a “1” means an occurrence of a Pairing A of Stimulus A. Similarly, a “0” means no occurrence of a Pairing B and a “1” means an occurrence of a Pairing B, and so on for all 5 pairings. To obtain an output, one multiplies the values of Pairings A through E. In one trial or instance, all 5 pairings will occur. The results of the multiplications are then accumulated and divided by the number of instances. In the theory presented here, the pairings in respondent conditioning become aggregated together to form a basis for operant conditioning. The aggregation is derived from the Model of Hierarchical Complexity. In the model, the difficulty of a task is operationalized based on what is called the Order of Hierarchical Complexity (Commons et al., 1998). Operant conditioning is one order more complex than respondent conditioning, as it consists of three instances of respondent conditioning that are coordinated together in a nonarbitrary fashion (Commons and Giri, 2016). Stacked Neural Networks operate in a similar way, in that the lower stacks operate at a lower Order of Complexity and their outputs are fed into the next higher stack, where the outputs are processed and coordinated as inputs, and the outputs are fed into even higher stacks. Meanwhile, there is a feedback loop to simulate the third pairing event. The use of these simple respondent pairings as a basis for neural networks reduces errors. Examples of problems that may be addressable by such networks are included.
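
Read literally, the module arithmetic described above (each trial records five 0/1 pairing occurrences, the five values are multiplied, and the products are averaged over trials) can be sketched as below; the function name and data layout are our illustration, not the authors' implementation.

```python
from typing import Sequence

def module_output(trials: Sequence[Sequence[int]]) -> float:
    """Each trial holds five 0/1 values for pairings A through E."""
    products = []
    for pairings in trials:
        value = 1
        for p in pairings:  # multiply pairings A..E; one absent pairing zeroes the trial
            value *= p
        products.append(value)
    return sum(products) / len(products)  # accumulate, divide by number of instances

# Example: all five pairings occurred in 3 of 4 trials.
print(module_output([[1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 0, 1, 1], [1, 1, 1, 1, 1]]))  # 0.75
```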

Addressing Climate Change Using the Carbon Individual Retirement Account (C-IRA) | Wednesday 16:40-18:00

Jason Makansi

"Nudge" (incentive-based) economic concepts are popular in academic circles (e.g., Thaler, University of Chicago) and some have proven practical. Strangely, for addressing climate change, no one has proposed rewarding the individual who reduces his/her carbon footprint over many years by converting the avoided carbon into funds deposited into a retirement account or other long-term financial obligation (e.g., school loan, residential mortgage). The C-IRA addresses both carbon-induced climate change and the anemic rate of retirement savings and indebtedness of most Americans. Its primary economic attribute is that it finally overcomes fluctuating energy prices as the primary driver of energy-related consumer choices and lifestyle behaviors. As a response to climate change, the C-IRA supports carbon reduction without necessarily requiring new taxes; it is a constant, long-term reward, albeit with delayed gratification, and thus supports permanent lifestyle modifications. It's a "shove," in other words, more than a "nudge."

Economic and climate systems are naturally complex, even at the regional level. Testing and simulating the C-IRA concept as a response to climate disruption requires complex modeling. However, it has the great advantage of potentially reversing the traditional economic goal of energy-fueled growth by actually and permanently rewarding people for consuming less. This is absolutely critical, at least until non-carbon renewable energy substantially displaces carbon-laden fossil fuels, if climate disruption is to be reversed, rather than merely accommodated through resilience strategies.

This presentation/paper elaborates on the Carbon IRA concept and how to implement it.
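
A back-of-the-envelope sketch of the mechanism (every number below is an illustrative assumption, not a figure from the abstract): avoided carbon is priced, deposited annually, and compounds in the retirement account.

```python
# Hypothetical parameters for illustration only.
tons_avoided_per_year = 5     # assumed household carbon reduction (tons CO2/yr)
carbon_price = 50.0           # assumed $/ton credited to the C-IRA
annual_return = 0.05          # assumed investment return
years = 30

balance = 0.0
for _ in range(years):
    balance = balance * (1 + annual_return) + tons_avoided_per_year * carbon_price
print(f"C-IRA balance after {years} years: ${balance:,.0f}")  # ~ $16,610
```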

Addressing the Challenge of Climate Change: Seeking Synergy with the Challenges of Aging Populations and Automation | Wednesday 16:40-18:00

Harold Hastings, Tai Young-Taft and Chris Coggins

Global climate change poses an existential threat as well as a significant challenge to our national security. A 2018 IPCC special report argued that global warming should not exceed 1.5 degrees, of which 1 degree has already occurred. Recent record high worldwide average temperatures and severe weather events add emphasis to the urgency of slowing climate change by reducing worldwide carbon emissions.

The world also faces two other challenges: first, global population decline due to declining birth rates and an aging population in much of the developed world, and second, the threat of sharply reduced employment as automation and artificial intelligence (AI) rapidly replace routine components of manufacturing and some service jobs. However population declines can help reduce carbon emissions, thus slowing climate change. In this sense, the demographic transition to low birth rate, low death rate dynamics becomes not a problem, but in fact a partial solution to global warming – the consequent reduction and reversal of population growth will help reduce carbon emissions. The world should therefore welcome this transition and look to addressing inevitable and potentially serious side effects, namely aging and a decline in the relative size of the workforce. It is here that a carefully managed application of automation and AI can play a key role.

One major challenge to reducing carbon emissions is the large variation among nations: 40 tons of carbon dioxide per capita in Qatar, compared with the US: 16 tons, China: 8 tons, the EU: 7 tons, Indonesia: 2 tons, less developed parts of Africa: less than 1 ton (Wikipedia and World Bank). Reducing global carbon emissions while the developing world industrializes will require both convergence toward rates achieved in the most efficient industrialized countries, and a global reduction in these rates, a strategy the IPCC called “converge and contract”. But this strategy with realistic levels of contraction appears to fall far short of what is needed.
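
To see why convergence alone falls short, a quick illustrative computation (world population and the current total are rough circa-2019 figures that we assume; the per-capita rates are those quoted above):

```python
population = 7.8e9                     # assumed world population, ~2019
per_capita_tons = {"Qatar": 40, "US": 16, "China": 8, "EU": 7, "Indonesia": 2}
current_world_total_gt = 36            # assumed ~36 Gt CO2/yr, ~2019

# If everyone emitted at the relatively efficient EU rate, the world total
# would still exceed today's, so contraction is needed on top of convergence.
converged_total_gt = population * per_capita_tons["EU"] / 1e9
print(f"converged at EU rate: {converged_total_gt:.0f} Gt vs ~{current_world_total_gt} Gt today")
# -> converged at EU rate: 55 Gt vs ~36 Gt today
```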

On the other hand, declines in population due to declining birth rates in much of the developed world, will clearly yield proportionate reductions in carbon emissions, synergistic with the “converge and contract” strategy. Birth rates have declined well below the replacement rate (2.1) in much of the developed world and are approaching the replacement rate in many less developed countries, as these total fertility rates show: Indonesia 2.32, India 2.30, US 1.89, China 1.64, Italy 1.49, Japan 1.48, South Korea 1.32 (worldpopulationreview.com).

The transition to a longer life expectancy, low birth rate world has both favorable and challenging consequences. The resulting population decline clearly helps address global warming – earth’s carrying capacity appears less limited by resources than by carbon emissions - but also brings an aging population, with a declining fraction of working age. Attempting to reverse this trend, as China proposes by ending their one child policy, will not only exacerbate carbon emission and thus climate change, but is likely to be unsuccessful – birth rates decline when women gain opportunity and families gain enough confidence in social safety nets and invest more in each child. Local population and workforce declines associated with this demographic transition are better addressed by immigration from high birth rate regions, which also accelerates worldwide population decline, helping address climate change, as immigrant birth rates converge to local rates.

It is in addressing declines in workforce that automation and AI show great potential. Exponential growth of automation and AI in replacing low and intermediate-skilled, routine tasks may free up more of the population to work at socially important and potentially more rewarding tasks including teaching, healthcare, social work, ecological restoration, and sustainable agriculture.

Finally, addressing the challenge of population decline and aging may require recognition of the worth of irreplaceable service jobs as well as new economic models, such as universal basic income. We shall describe a systems approach to addressing these as well as related challenges. In conclusion, one should not regard climate change, population decline, and automation and AI as separate challenges, but rather exploit synergies among their solutions in order to address the existential threat of climate change.

Aiding Organizational Resilience with Complexity-informed Work Function Policies | Monday 14:30-16:10

David Newborn

Typical organizations are subject to bureaucratic decisions that impact effectiveness throughout and outside the organization. This paper views some examples of bureaucratic processes through the lens of complexity science principles. Examples are drawn from procurement and inventory management. In each case, the bureaucratic decisions demanded increasingly ordered systems, which started to exhibit robust-yet-fragile properties. In one case, a system was temporarily modified to cope with increased demand, exemplifying graceful extensibility: the capacity of an organizational network to adapt and flex to unexpected and unknowable challenges.

An agent-based simulation of corporate gender biases | Monday 10:20-12:00

Paolo Gaudiano and Chibin Zhang

Diversity & Inclusion (D&I) is a topic of increasing relevance across virtually all sectors of our society, with the potential for significant impact on corporations and more broadly on our economy and our society. In spite of the fact that human capital is typically the most valuable asset of every organization, Human Resources (HR) in general and D&I, in particular, are dominated by qualitative approaches. We introduce an agent-based simulation that can quantify the impact of certain aspects of D&I on corporate performance. We show that the simulation provides a parsimonious and compelling explanation of the impact of hiring and promotion biases on the resulting corporate gender balance. We show that varying just two parameters enables us to replicate real-world data about gender imbalances across multiple industry sectors. In addition, we show that the simulation can be used to predict the likely impact of different D&I interventions. Specifically, we show that once a company has become imbalanced, even removing all promotion biases is not sufficient to rectify the situation, and that it can take decades to undo the imbalances initially created by these biases. These and other results demonstrate that agent-based simulation is a powerful approach for managing D&I in corporate settings, and suggest that it has the potential to become an invaluable tool for both strategic and tactical management of human capital.
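
As a minimal sketch of the mechanism described (assumed parameter values and turnover dynamics, not the authors' calibrated model), two parameters below set the hiring bias at entry level and the promotion odds for women, and the resulting imbalance propagates up the hierarchy over simulated years.

```python
import random

random.seed(1)
LEVELS, WIDTH = 5, 100          # 5 ranks, 100 positions per rank (assumed)
p_hire_female = 0.45            # hiring bias: < 0.5 favors men at entry level
promo_odds_female = 0.8         # promotion odds multiplier for women (< 1 = bias)

# True marks a position held by a woman; start from a balanced company.
company = [[random.random() < 0.5 for _ in range(WIDTH)] for _ in range(LEVELS)]

def simulate_year():
    # 10% annual turnover; entry-level vacancies are filled by biased hiring,
    # upper-level vacancies by promotion sampled with gender-biased odds.
    for level in range(LEVELS):
        for i in range(WIDTH):
            if random.random() < 0.10:
                if level == 0:
                    company[level][i] = random.random() < p_hire_female
                else:
                    pool = company[level - 1]
                    weights = [promo_odds_female if woman else 1.0 for woman in pool]
                    company[level][i] = random.choices(pool, weights)[0]

for _ in range(50):
    simulate_year()
for level, rank in enumerate(company):
    print(f"level {level}: {sum(rank) / WIDTH:.0%} women")
```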

Anthro-technical Decision Making | Thursday 10:30-12:00

Justin Shoger

Better understanding of real-world complexity by combining technology and human capabilities.

In an increasingly complicated world, different ways of grappling with that complexity need to be explored. Knowing that both humans and the available technologies have limitations on how, and to what capacity, they can aid complexity exploration is a step towards better capabilities. Specifically, ways to blend the strengths of each, man and machine, should be sought as a combined solution set to help address the ill-defined problems people encounter in life.

Technology has been developed to support human endeavours throughout history. We are always looking for the next tool to do something better, or to reach something previously inaccessible. These tools have been developed successfully through the application of formal mathematical and logical constructs, either reducing problems to their lowest level with understood interdependence, or extrapolating to extreme space or size with independence. Often, the tools are built to replace the human in the activity. The formal rules at both ends of this problem-solving spectrum have been productive, helping us understand our world and universe. Between those extremes of simple sets and vast quantities lies the vast majority of real-life experiences, ripe for investigation and deeper understanding.

Including how the person operates in real life offers additional perspective and resources for approaching complex experiences. Those events often do not exhibit clean, understandable causes and effects. Several factors precipitate this disorganization or messiness: non-linear relationships and dynamics; systems that have multiple points of stability, if any; an ever-increasing amount of data and knowledge without the time and computational resources to mine it for deeper understanding; and more. Despite this lack of clear, concise organization, life continues. Because people are able to operate within ill-defined circumstances, examining their capabilities in making connections and seeing relationships between options, which can differ from formal, technological approaches, presents an opportunity. Prospectively, we seek to comprehend more deeply how we see and interact with complexity within our environments. The ability of humans to operate beyond formal logical systems continues to be an area worth understanding better. Formal systems can categorize, prioritize, and compute processes based on human structure and inputs. It is a human input that at some point set the rules, preferences, and priorities within which those formal systems function. How the human comes to set those variables and goals includes rational thought and extends to meaning and value(s). The human can combine the benefits of formal systems with perspective to navigate successfully through complex circumstances where the probability of success is unknown or incalculable. By synthesizing their perceptions and cognition, humans are able to make educated and instinctive extrapolations under uncertainty that lack formal analysis and are difficult to predict. They sense the context and, through self-reference, establish a means to achieve their desired goal. People who have the opportunity to make choices in real-time situations use the information presented and apply it to their understanding of what needs to be done, establishing meaning and valence for the information and thus prioritizing the next steps toward the preferred end-state.

The human quality of self-reference enables decision making in uncertain, low information, high pressure, and time constrained conditions. This human capability, balanced appropriately with technological capabilities is an area open for exploration. Investigating avenues towards socio-technical capabilities to enable greater scientific and social progress is a next step.

Applications of Complexity Science to the Test and Evaluation of Autonomous Systems | Tuesday 10:20-12:00

David Newborn

The test and evaluation of autonomous systems has become critically important to the U.S. Department of Defense. Autonomous systems present difficult challenges due to the expectation of nondeterministic, unpredictable behaviors. Fortunately, the principles of complexity science provide insightful, reasoned, qualitative, and quantitative approaches to coping with and leveraging uncertainty. This paper focuses on qualitative justifications for why complexity science is an essential component of the test and evaluation of autonomous systems, leading to overall success in operationally relevant situations.

Assessing effect of sleep deprivation in crayfish using information theory | Tuesday 10:20-12:00

Mireya Osorio-Palacios, Laura Montiel-Trejo, Iván Oliver-Domínguez, Rodrigo Aguayo-Solis, Jesús Hernández-Falcón and Karina Mendoza-Ángeles

It has been proposed that Fisher information and permutation entropy offer a robust method to assess the stability of a system and its uncertainty over time. Systems in stable dynamic states have constant Fisher information. Systems losing organization migrate toward higher variability and lose Fisher information. In this work we calculated these measures in time series of physiological variables. We all know that sleep is essential for the maintenance of life itself. More specifically, in vertebrate and invertebrate animals, sleep is necessary for survival. For example, in rats, after two to three weeks of sleep deprivation, animals lose weight despite a great increase in food intake; if deprivation continues, they finally die. In crayfish, 24 hours of sleep deprivation are enough to cause death. Sleep deprivation in mammals elicits changes in the structure of the electroencephalogram and dysregulation of cardiorespiratory activity. Nonetheless, we do not know whether this also occurs in invertebrates, particularly in crayfish. The purpose of this work was to analyze time series such as the electroencephalogram (EEG) and electrocardiogram (EKG) of adult crayfish Procambarus clarkii in order to compare the dynamics of these variables in control conditions and after sleep deprivation, using an information theory approach and the wavelet transform. We used male animals in intermolt, synchronized to 12:12 light-dark cycles. In cold-anesthetized animals we implanted electrodes on the deutocerebrum and cardiac sinus. After two days of recovery, we recorded behavioral and electrical activity simultaneously during 8 continuous hours in two different conditions: a) control and b) after one hour of sleep deprivation. For behavioral records, we defined two body positions of the animal, walking and lying on one side, and associated each one with the time of recording. To analyze brain electrical activity, we used the wavelet transform. To analyze EKG time series we quantified Fisher information and permutation entropy in the data in both conditions. We found that brain electrical activity from control sleeping animals showed a decrease in power at 30 Hz, as compared to walking animals that had high power in the entire frequency band analyzed, 0-60 Hz. Sleep-deprived animals presented lower power in all EEG frequencies, even when they were allowed to sleep. On the other hand, we found that Fisher information and permutation entropy from control animals remain constant; in sleep-deprived animals, however, permutation entropy increases while Fisher information is lost and variability increases. Summarizing, sleep deprivation modifies the dynamic states of brain and cardiac electrical activity in crayfish, which means that under sleep deprivation crayfish evolve to a non-stable dynamic state.

This project is partially funded by UNAM-DGAPA-PAPIIT IN231620.
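
For readers unfamiliar with the measure, a minimal sketch of standard Bandt-Pompe permutation entropy (a generic implementation, not the authors' analysis pipeline):

```python
from itertools import permutations
from math import factorial

import numpy as np

def permutation_entropy(x, m=3, tau=1):
    # Count ordinal patterns of order m over delay-embedded windows of x.
    counts = dict.fromkeys(permutations(range(m)), 0)
    for i in range(len(x) - (m - 1) * tau):
        window = x[i : i + m * tau : tau]
        counts[tuple(np.argsort(window))] += 1
    p = np.array([c for c in counts.values() if c > 0], dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(factorial(m)))  # normalized to [0, 1]

# White noise should be near the maximum of 1; a regular signal is near 0.
print(permutation_entropy(np.random.default_rng(0).normal(size=2000)))
```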

Bio-inspired understanding and engineering of business innovations | Wednesday 15:00-16:40

Nelson Alfonso Gómez-Cruz and David Anzola

Business innovation can be understood as a systemic process of value creation that involves the various dimensions of a business system (Sawhney, Wolcott & Arroniz, 2006). Hence, an organization can innovate in the products and services it creates, the client segments it targets, the processes it employs, and the strategies devised to put its products on the market. Although several models of the innovation process have been proposed (Marinova & Phillimore, 2003), it has been shown that those based on biological evolution logic are better able to capture in a robust and precise manner some of the most distinctive features of business innovation (Kell & Lurie-Luke, 2015).

To date, most of the literature exploring the link between biological evolution and business innovation discusses how the latter could be conceptually understood using the theoretical framework developed for the former. We argue that the biological analogy or metaphor is useful not only conceptually, but also methodologically. Over the last decades, engineering and computer science have both used biological processes as inspiration to develop artificial systems that are capable of evolution, information processing and problem solving, in an analogous way to living organisms in their natural environment (Maldonado & Gómez-Cruz, 2012). Due to their focus on populations of agents that interact with each other and with the environment, these artificial systems engage in bottom-up processes that result in behavioral, functional, or structural patterns that can be thought of as producing innovations.

Biologically motivated methods developed in engineering and computing, such as artificial life (Kim & Cho, 2006), evolutionary computation (Goldberg, 2002), organic computing (Würtz, 2008), morphogenetic engineering (Doursat, Sayama & Michel, 2013), guided self-organization (Prokopenko, 2014), agent-based simulation (Ma & Nakamori, 2005) and bio-inspired computation (Forbes, 2004) can be tapped into, not only to understand innovation, but to engineer it. These methods, we argue, when transferred to the organizational context, can act as innovation accelerators.

In this contribution, we, first, outline a conceptual framework, developed after a systematic literature review, that describes how different bio-inspired methods could be repurposed for the study, development and evaluation of innovations produced within the business innovation context. Later, we discuss three potential applications of the framework outlined: as mechanisms that promote the constant generation of novelty (Banzhaf et al., 2016), as computational processes in which the space of possibilities for an innovation is explored (Wagner & Rosen, 2014), and as simulated scenarios (artificial societies or markets) in which the impact of specific innovations can be tested (Ma & Nakamori, 2005).

References

Banzhaf, W., Baumgaertner, B., Beslon, G., Doursat, R., Foster, J. A., McMullin, B., … White, R. (2016). Defining and simulating open-ended novelty: requirements, guidelines, and challenges. Theory in Biosciences, 135(3), 131-161.
Doursat, R., Sayama, H. & Michel, O. (2013). A review of morphogenetic engineering. Natural Computing, 12(4), 517-535.
Forbes, N. (2004). Imitation of Life: How Biology Is Inspiring Computing. Cambridge, MA: MIT Press.
Goldberg, D. E. (2002). The Design of Innovation: Lessons from and for Competent Genetic Algorithms. Dordretch: Springer.
Kell, D. B. & Lurie-Luke, E. (2015). The virtue of innovation: innovation through the lenses of biological evolution. Journal of the Royal Society Interface, 12(103), 20141183.
Kim, K. J. & Cho, S. B. (2006). A comprehensive overview of the applications of artificial life. Artificial Life, 12(1), 153-182.
Ma, T. & Nakamori, Y. (2005). Agent-based modeling on technological innovation as an evolutionary process. European Journal of Operational Research, 166(3), 741-755.
Maldonado, C. E. & Gómez-Cruz, N. A. (2012). The complexification of engineering. Complexity, 17(4), 8-15.
Marinova, D. & Phillimore, J. (2003). Models of innovation. In L. V. Shavinina (Ed.), The International Handbook on Innovation (pp. 44-53). Oxford: Elsevier.
Prokopenko, M. (Ed.) (2014). Guided Self-Organization: Inception. Heidelberg: Springer.
Sawhney, M., Wolcott, R. & Arroniz, I. (2006). The 12 different ways for companies to innovate. MIT Sloan Management Review, 47(3), 75-81.
Wagner, A. & Rosen, W. (2014). Spaces of the possible: universal Darwinism and the wall between technological and biological innovation. Journal of the Royal Society Interface, 11(97), 20131190.
Würtz, R. (Ed.). (2008). Organic Computing. Berlin: Springer.

Bohmian Frameworks and Post Quantum Mechanics | Thursday 10:30-12:00

Maurice Passman, Philip Fellman and Jack Sarfatti

In this paper, we examine the foundational linkage between David Bohm’s ontological interpretation of quantum mechanics and how this interpretation may be extended so that a particle is not just guided by the quantum potential but, in turn, through back-activity, modifies the quantum potential field. Back-activity introduces nonlinearity into the evolution of the wave function, much like the bidirectional nonlinear interaction of space-time and matter-energy in general relativity. This generalisation has been called Post Quantum Mechanics. The mathematical exposition presented herein is subsequently developed in a companion paper, linking Dimensional Analysis, Fractal Tessellation and Quantum Information Theory.
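
For orientation, the standard Bohmian relations that the paper builds on (background only, our summary; the post-quantum extension adds back-action of the trajectory on the quantum potential):

```latex
% Polar form of the wave function, the quantum potential, and the
% guidance equation of Bohm's ontological interpretation.
\[
  \psi = R\,e^{iS/\hbar}, \qquad
  Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}R}{R}, \qquad
  \frac{d\mathbf{x}}{dt} = \frac{\nabla S}{m}.
\]
```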

Bridging the gap: Concept and results of Idiographic System Modelling | Monday 14:30-16:10

Benjamin Aas and Systelios Thinktank

Most psychotherapy research rests on aggregated group means and their subsequent statistical inference. Group-based aggregates, however, do not inform clinicians how to adapt therapeutic decisions to individual patients, a limitation described mathematically by the ergodicity theorem (Molenaar, 2007). Human change processes are non-ergodic, meaning that “the structure of inter-individual variation at the population level is not equivalent to the structure of intra-individual variation at the single-subject level”. Psychotherapy concerns single subjects; hence there needs to be a shift from group-based science to idiographic science.

This session presents idiographic systems modelling: client and therapist collaboratively draw a personal network model based on a semi-structured interview. From this network, individual questionnaires are created and administered daily via an online monitoring tool. Visualizations of the accumulating time series are used by therapists in feedback sessions to inform individualized therapeutic hypotheses. In addition, time series analysis tools suggest critical timepoints (i.e., critical fluctuations, critical slowing down, critical synchronization) to client and therapist in real time, allowing treatment decisions to be adapted.

Data will be presented suggesting that this complex-systems-based, idiographic treatment approach outperforms classical, non-individualized psychotherapy.
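
As an illustration of the kind of real-time indicator mentioned above, a minimal sketch (a generic early-warning statistic, not the authors' monitoring tool): critical slowing down appears as rising lag-1 autocorrelation of the daily time series in a rolling window.

```python
import numpy as np

def rolling_lag1_autocorr(x, window=14):
    # Lag-1 autocorrelation in each sliding window of the series.
    out = []
    for i in range(len(x) - window + 1):
        w = x[i : i + window]
        out.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(out)

# Hypothetical stand-in for daily self-report data; flag days where the
# indicator exceeds its mean + 2 SD as candidate critical timepoints.
rng = np.random.default_rng(0)
daily_scores = rng.normal(size=120)
ac = rolling_lag1_autocorr(daily_scores)
print(np.where(ac > ac.mean() + 2 * ac.std())[0])
```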

Building Bridges: Analyzing Venezuelan integration in Colombia with mobile phone metadata | Wednesday 10:50-12:30

Isabella Loaiza, Germán Sánchez, Lina Ramos, Felipe Montes and Alex Pentland

The political and economic crisis in Venezuela has triggered an unprecedented mass migration across Latin America. This migration, estimated to top 4 million by mid 2019[1], shows no signs of slowing down, and could potentially surpass the Syrian refugee crisis in numbers[2]. With most Venezuelans migrating within the region, neighboring countries are faced with many challenges including addressing the humanitarian crisis in the short-term, and providing jobs, education, health and opportunities for migrants in the long-term. One of the most important long-term challenges will be the integration of migrants and locals to build new, prosperous communities that are able to cohabit peacefully. However, little is known about how the process of integration unfolds, especially, in situations in which migration happens at such an accelerated rate and scale.

Colombia is the country with the largest inflow of migrants in the region, followed by Peru, Chile and Argentina[4]. By the end of 2019, it was estimated that close to 2 million migrants had crossed over the Colombian border in as little as two years. The number of migrants in Colombia is now roughly the size of Colombia’s third largest city, making it one of the most relevant places to study migrant integration under mass migration conditions.

We study the process of migrant-local integration by analyzing the structure and evolution of their social networks as captured by mobile phone metadata. We tackle questions like: Who do migrants talk to the most, locals or other migrants? How do the social networks and calling patterns of migrants and locals differ? Do the networks of migrants and locals become more intertwined over time, or do they remain rather disconnected? We also examine how social integration varies across different areas in the country. This is especially relevant in Colombia, a country with stark regional differences in culture and economic development.

We use Call Detail Records (CDRs) as our primary data source because sources like the census and surveys are hard to scale to keep up with such a fast-paced phenomenon. Another common issue with these sources is that an important fraction of migrants are afraid to, unwilling to, or do not know how to officially register with the relevant authorities in their host country [3]. The CDRs were provided by one of the largest telecommunications companies in Colombia. They span 4 years between 2015 and 2018 and encompass over 15 million unique users across 27 of the 33 Colombian departments, making this one of the largest datasets of its kind to be used for humanitarian purposes. Users were previously tagged as locals or migrants. How the tags were generated is out of the scope of this paper, but it is important to highlight that the dataset is very unbalanced, with less than 5% migrant users. To gain insight into demographic information not found in CDRs, we use data collected by the Colombian statistics department, DANE.

Using the tagged CDRs, we recover the underlying monthly social network structure of calls and texts for both locals and migrants and describe it at the ego, regional, and national levels. First, we compare these networks using measures such as average degree, density, response rate, and clustering. We then analyze network evolution and measure changes in migrants’ networks, with special attention to new social groups spanning locals and migrants. Our approach is to understand not only how migrants’ networks change, but how locals’ networks change as well; after all, integration is a two-way street. Since we can partially observe the network ties that migrants keep in their home country, we can also study whether these ties change as migrants become more integrated into their new communities.
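
As a concrete illustration of these comparisons, the sketch below builds one month's directed call network with networkx and computes group-level degree and clustering measures. The record layout, tags, and metric choices are assumptions made for illustration, not the authors' pipeline.

```python
# Minimal sketch (assumed data layout, not the authors' pipeline): build a
# monthly call network from tagged CDRs and compare simple ego-level
# measures for migrants vs. locals.
import networkx as nx

# Hypothetical records: (caller, callee, month), plus a user -> tag map.
calls = [("a", "b", "2017-01"), ("b", "c", "2017-01"), ("a", "c", "2017-02")]
tag = {"a": "migrant", "b": "local", "c": "local"}

def monthly_metrics(calls, tag, month):
    G = nx.DiGraph((u, v) for u, v, m in calls if m == month)
    out = {"density": nx.density(G)}
    for group in ("migrant", "local"):
        nodes = [n for n in G if tag.get(n) == group]
        degs = [G.degree(n) for n in nodes]
        out[group] = {
            "avg_degree": sum(degs) / len(degs) if degs else 0.0,
            "clustering": nx.average_clustering(G.to_undirected(), nodes)
            if nodes else 0.0,
        }
    return out

print(monthly_metrics(calls, tag, "2017-01"))
```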

We find that who migrants communicate with is strongly related to city size and to how migrant-dense a particular area is. In larger cities like Bogotá and Medellín, migrants have a higher average degree and tend to have a more balanced migrant-to-local call ratio, while in more migrant-dense areas the local-to-migrant call ratio is less balanced. We also find that densification of migrant networks begins a few months after migrants first appear in our dataset, but slows down around a year after their first appearance. These results follow the intuition that integration is a gradual process of building social capital and finding centrality in one’s network. The DANE data show a complementary trend: migrant household size has been on the rise. In early 2016, the average household size for migrants was roughly 2; by late 2019 it was almost 4, surpassing the local average. This suggests that the densification we observe is due not only to migrants becoming integrated in Colombia but also to the arrival of migrant relatives and friends.

Our results have real-world implications, as they can be readily used by policy makers to foster the formation of more cohesive communities of migrants and locals. Moreover, to our knowledge, this is the first time migrants’ social networks can be observed at such resolution as they unfold in time, providing unique insight into the process of integration.

References
[1] UNHCR and IOM, “Refugees and migrants from Venezuela top 4 million,” 7 June 2019. Available: https://www.unhcr.org/news/press/2019/6/5cfa2a4a4/refugees-migrants-venezuela-top-4-million-unhcr-iom.html.
[2] D. Bahar and M. Dooley, “Venezuela refugee crisis to become the largest and most underfunded in modern history,” 9 December 2019. Available: https://www.brookings.edu/blog/up-front/2019/12/09/venezuela-refugee-crisis-to-become-the-largest-and-most-underfunded-in-modern-history/.
[3] J. Palotti, N. Adler, A. Morales, J. Villaveces, V. Sekara, M. Garcia Herranz, I. Weber, and A. Asad, “Real-Time Monitoring of the Venezuelan Exodus through Facebook's Advertising Platform,” 2019.
[4] Plataforma Regional de Coordinación Interagencia, UN, 5 February 2020. Available: https://r4v.info/es/situations/platform.

Canonical Knowledge Structures and Complexity in the Design of Artificial Intelligence | Wednesday 10:50-12:30

Charles Pezeshki and Kshitij Jerath

Artificial intelligence (AI) algorithms are increasingly called upon to execute a variety of tasks that vary significantly in complexity. While there are ongoing research efforts of varying levels of sophistication with regard to AI design, there remains little work on how to characterize the complexity of the knowledge these algorithms purport to encapsulate. To understand the complexity and structure of knowledge generated by stand-alone or interacting AIs, we must first understand the emergence of knowledge in a group of cognitive entities, and then apply that understanding to the design of AI algorithms. For the purposes of this research, the canonical social structures are taken from Graves’ and Beck’s theory of Spiral Dynamics, though there are other concomitant examples in the historical progression literature. When coupled with Conway’s Law, one obtains an ever-increasing pattern of complexity in how different social systems drive relational behavior and how knowledge may emerge from it. By understanding knowledge complexity, AI performance in the face of that complexity can be assessed and a systemic understanding of capacity can be developed. This can lead to a deeper understanding of what types of emergent social and knowledge structures are required to perform certain tasks, and to specific tailoring of AI designs to the level of uncertainty in the environment and the desired outcome.

The Capacitated Spanning Forest Problem is NP-complete | Wednesday 10:50-12:30

George Davidescu

The smart grid is envisioned as an automated energy exchange network that can be reconfigured through the selective activation of links in the network. An expected feature of smart grids is automated power distribution from energy producers to consumers, subject to the physical topology of the grid. We model this problem as the Capacitated Spanning Forest (CSF) problem, namely the graph optimization problem of finding a spanning forest with a capacity constraint on each tree limiting its total weight. In this work we prove that CSF is NP-complete by reduction from the general Boolean satisfiability problem (SAT). We show that CSF is NP-complete for cases with more than two sources and positive capacities. This NP-completeness result justifies the heuristic approaches taken by the authors in other works.
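
To make the decision problem concrete, here is a brute-force feasibility checker under one assumed formalization (node weights, one source per tree, a uniform capacity); the exponential enumeration is consistent with the NP-completeness result. It is an illustrative sketch, not the reduction used in the paper.

```python
# Minimal sketch (assumed formalization): decide whether the nodes can be
# partitioned among the sources so that each part is connected, contains its
# source, and keeps its total weight within the capacity. A spanning tree of
# each feasible part then yields a capacitated spanning forest.
from itertools import product
import networkx as nx

def csf_feasible(G, weight, sources, capacity):
    others = [n for n in G if n not in sources]
    for assign in product(range(len(sources)), repeat=len(others)):
        parts = [{s} for s in sources]
        for node, i in zip(others, assign):
            parts[i].add(node)
        if all(
            nx.is_connected(G.subgraph(p))
            and sum(weight[n] for n in p) <= capacity
            for p in parts
        ):
            return True
    return False

G = nx.path_graph(4)                       # 0-1-2-3
weight = {n: 1 for n in G}
print(csf_feasible(G, weight, sources=[0, 3], capacity=2))  # True: {0,1},{2,3}
```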

Cardiorespiratory activity in the auto-organization of the hierarchical order in crayfish | Monday 14:30-16:10

Iván Oliver-Domínguez, Aidee Lashmi García-Kroepfly, Mireya Osorio-Palacios, Jesús Hernández-Falcón and Karina Mendoza-Ángeles

Under laboratory conditions, social interactions between triads of crayfish result in the establishment of a hierarchical order with one dominant animal and two submissive animals (submissive 1 and 2, respectively). Some authors have reported changes in cardiorespiratory electrical activity associated with social interaction. This seems to indicate that crayfish present autonomic-like responses during agonistic encounters, as happens in vertebrates, even though no autonomic nervous system has been reported in crayfish. A tool that allows the study of the vertebrate autonomic nervous system and the adjustments it makes under physiological conditions is the analysis of heart rate time series, in particular heart rate variability (HRV), which is based on the intervals between beats and whose fractal and nonlinear characteristics contribute to a detailed description of cardiac dynamics.

The aim of this work was to analyze the cardiorespiratory electrical activity of adult crayfish Procambarus clarkii before and during social interactions, in order to compare the dynamics of these variables between dominant and submissive animals.

We used triads of adult crayfish in intermolt, with weight and size differences of less than 5%, implanted with Pt-Ir electrodes in the cardiac sinus and one branchial chamber (animals were cold-anesthetized). Behavioral and electrophysiological recordings were made simultaneously.

We recorded the electrical activity of the heart of each member of a triad for 15 minutes before any experiment, in isolation (Condition 1). Each triad was then placed in an arena and videotaped for one hour. For the first 15 minutes the animals shared the same aquarium, divided by a plastic separator (Condition 2). For the next 45 minutes we removed the separator and the animals fought with each other (Condition 3). Afterwards, the behavior was analyzed and the hierarchical order of each member was determined. The signals were processed by standard methods, stored on a computer, and analyzed offline.

We analyzed HRV in the time and frequency domains, together with several nonlinear parameters and Fourier spectral analysis.
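
For readers unfamiliar with these measures, the sketch below computes two standard time-domain HRV statistics (SDNN and RMSSD) and a Welch power spectrum from a beat-to-beat interval series. This is a generic textbook pipeline, assumed for illustration rather than taken from the authors' analysis.

```python
# Minimal sketch (standard HRV measures, not the authors' code): time-domain
# SDNN/RMSSD plus a Welch spectrum of the evenly resampled RR series.
import numpy as np
from scipy.signal import welch

def hrv_metrics(rr_ms, fs_resample=4.0):
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                        # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # beat-to-beat variability
    t = np.cumsum(rr) / 1000.0                   # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_even = np.interp(grid, t, rr)             # evenly sampled RR series
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs_resample,
                   nperseg=min(256, len(rr_even)))
    return sdnn, rmssd, f, pxx

rng = np.random.default_rng(1)
rr = 800 + 50 * rng.standard_normal(300)         # simulated RR intervals (ms)
sdnn, rmssd, f, pxx = hrv_metrics(rr)
print(f"SDNN={sdnn:.1f} ms, RMSSD={rmssd:.1f} ms")
```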

Results indicate that the dynamics of cardiorespiratory electrical activity in adult crayfish differ before (Condition 1) and during (Conditions 2 and 3) social interaction, and between dominant and submissive animals.

The highest HRV was observed in the dominant animal during Conditions 1 and 3; in Condition 2, submissive 1 had the highest HRV. The dominant animal had the lowest HR and RR during all conditions, while submissive 2 had the lowest HRV but the highest HR and RR throughout the experiment.

In conclusion, crayfish present autonomic-like responses during agonistic encounters that seem to depend on their hierarchical order.

All the experiments involving animals have been approved by the Commission of Investigation and Ethics of Facultad de Medicina, UNAM.
This project is partially funded by UNAM-DGAPA-PAPIIT IN231620.

Chaos, Complexity and Complex Systems: to help understand and stop COVID-19 and its global disruption | Thursday 10:30-12:00

Vivian Rambihar

The COVID-19 pandemic emerged from the complex, dynamic, nonlinear interactions of the novel coronavirus, disrupting global health, society, and the economy in expected, unexpected, and unprecedented ways. Chaos, complexity, and complex systems should help us understand and stop this complex and dynamic 21st-century problem.

An online search for chaos, complexity, complex systems, and COVID-19 found a wide variety of examples, ranging from reports, science, policy, and politics to practical advice and methods to stop the virus.

Websites and webpages:
Santa Fe Institute: Science for a Complex World - “What does complexity science tell us about COVID-19?”
New England Complex Systems Institute: analysis, reports, policy, practical advice, Stop the Coronavirus.
Endcoronavirus.org: all you need to CRUSH COVID-19; NECSI, Harvard, MIT, UCLA, etc.
Center for Complex Network Research, Northeastern University: network science and the coronavirus.
Complexity Digest: papers, discussions, conferences, books, virtual conference hackathon, etc. on COVID-19.

Medical journals and websites:
The Lancet EClinicalMedicine: Bradley, “Systems Approach to COVID-19”; CMAJ letter: Rambihar, “Chaos, complexity and complex systems to contain and manage COVID-19”; BMJ letter: Rambihar, “Who dropped the ball with COVID-19?” Many sites describe novel and highly unexpected clinical presentations: multiorgan inflammation/thrombosis and failure, pediatric inflammatory syndromes and death, etc.

Science journals:
Scientific American: Clifford Brangwynne, “How a Landmark Physics Paper from the 1970s Uncannily Describes the COVID-19 Pandemic”. Phil Anderson’s article “More Is Different” describes how “different levels of complexity require new ways of thinking”, with COVID-19’s exponential replication leading to global transformation and disruption from healthcare and society to the economy.

Newspapers and newsletters:
The Atlantic, March 24, 2020: Tufekci, America’s coronavirus response failed because we did not understand the complexity of the problem, compounded by failed messaging and leadership.
The Guardian, March 26, 2020: Taleb and Bar-Yam, the UK coronavirus policy may sound scientific, but it isn’t; the UK theorising about complexity and getting it wrong in modelling and policymaking.
Forbes, April 20: Bedzow, Wake Up Call for Industry Leaders: The Time to Think About COVID-19 as a Complex Adaptive Challenge Is Now; creating complex adaptive challenges in health care, economics, and industry.

Blogs:
Martin: “How Complexity Theory Can Help Decision-Making in Chaotic Times”; the Cynefin model.
Blignault: Cognitive Edge webinar, April 2020, Reflections on Complexity, Chaos and COVID-19.
Strazewski: AMA, April 4, 2020, Public Health: What’s ahead on COVID-19? N. Christakis: the U.S. should have been better prepared.
Rickards: The Daily Reckoning, Complex Systems Collide, Markets Crash; applying complexity theory to the coronavirus crisis, where systems flip from complicated to complex.

Conclusion: Chaos, complexity, and complex systems are widely reported to help understand and stop COVID-19 and its global disruption. This included: lack of complexity thinking leading to the emergence and global spread of the virus; reports and science advocating early and sustained intensive actions to flatten and crush the curve; examples of mitigation of the disruption to health, society, and business/the economy; novel, unexpected critical clinical features and death; and applications to manage and stop the virus.

CHAOS COMPLEXITY COMPLEX SYSTEMS COVID-19: 30 Years Teaching Health Professionals chaos and complexity | Wednesday 15:00-16:40

Vivian Rambihar, Sherryn Rambihar and Vanessa Rambihar

Background and purpose:

Chaos and complexity, considered the science for the 21st century by Stephen Hawking and the science for a complex world by the Santa Fe Institute, should apply to medicine, health, education, and global challenges like COVID-19. They describe the complex, dynamic social, economic, biologic, behavioral, and other interactions leading to health and disease, and between health professionals, patients, and society. These interactions exhibit nonlinearity, sensitive dependence, feedback, adaptation, uncertainty, self-organization, and emergence, and can be used as a tool for change that is adaptive, dynamic, co-evolving, and co-learning.

Results:
A 30-year experience of teaching, using, and advocating complexity thinking, through education, scholarship, and leadership, included:

Lectures at the University of the West Indies (1991); Trinity College and the Newton Institute for Mathematical and Physical Sciences, Cambridge University, UK (2000); the University of Toronto (various); and various other universities, 1990-present.
Editorials/commentary: Canadian Journal of Cardiology 1993, “Jurassic Heart: From the Heart to the Edge of Chaos”.
Presentations/abstracts/posters at various conferences on health, medicine, and education in the UK, Canada, the US, and the West Indies, e.g. Chaos in Medicine and Medicine out of this World (1993), Myocardial Infarction at Age 2, 25 and 30: the Role of Chaos and Chance, Creating a Pandemic of Health Using Chaos and Complexity (Society for Chaos Theory in Psychology and Life Sciences, 2020), and Chaos in Medical Education.
Books: CHAOS From Cos to Cosmos: a new art, science and philosophy of medicine, health…and everything else;
CHAOS Based Medicine: the response to evidence; Tsunami Chaos Global Heart: using complexity science to rethink and make a better world.
Global networking and advocacy for complexity thinking in medicine, health and society.
Health promotion and prevention of premature heart disease, and the book South Asian Heart: preventing heart disease.
Book chapter on using complexity science for community health promotion.
Project on Complexity in Health and Diversity at The Scarborough Hospital and community.
Complexity to rethink/transform medical education for the 21st century, with McMaster medical education as complexity.
Proposal for Thinking Complexity for Educating Health Professionals for the 21st Century (after the Lancet 2010 report).
Letters to the editor on using chaos, complexity, and a systems approach to prevent, contain, and manage COVID-19 (CMAJ 2020) and “Who dropped the ball with COVID-19?” (BMJ 2020).

Conclusion:
Health professionals were taught chaos and complexity over a 30-year period, 1990-2020, to understand and manage the complex, dynamic 21st-century interactions of medicine, health, society, and disease, starting in 1990 with its use in preventing heart disease, through to 2020 with understanding, containing, managing, and stopping COVID-19.

Notes: The poster is highly visual, with imagery from chaos and complexity, adapted from posters presented at the University of Toronto international conference “Creating a Pandemic of Health”, the Medical Education Conference at Sunnybrook Hospital (2015), and the 30th Annual Society for Chaos Theory in Psychology and Life Sciences Conference, Fields Institute of Mathematics, University of Toronto (2020).

Characterizing, visualizing and quantifying space weather and its risks | Monday 8:40-9:20

Sandra Chapman

Space weather is driven by the interaction of the sun’s expanding atmosphere with earth’s own magnetic and plasma environment. Its impacts include power loss, aviation disruption, communication loss, and disturbance to satellite systems, and they are becoming ever more important as our society relies increasingly on being globally interconnected. We are now moving into a data-rich era of solar-terrestrial physics, with multiple, inhomogeneous, satellite- and ground-based observations, and there are approaches to characterization and visualization in common with other fields such as earth climate. We present the first use of directed dynamical networks to parameterize the spatial pattern of space weather impacts at earth. We apply new methodology to historical datasets spanning the last 150 years to quantify the risk of extreme events that would cause continent-wide power blackouts today. This work highlights the integration of ‘machine driven’ data analytics approaches with ‘human driven’ physical insight.
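
As a hedged illustration of one generic way such a directed network can be built (not necessarily the methodology used in this work), the sketch below links station i to station j when the lagged cross-correlation of their time series exceeds a threshold, with the best lag giving the edge direction; all data and parameters are synthetic.

```python
# Minimal sketch (generic construction, assumed for illustration): a directed
# edge i -> j when station i's signal leads station j's at some positive lag.
import numpy as np

def directed_network(X, max_lag=10, thresh=0.5):
    """X: (stations, time) array. Returns directed edges (i, j, lag)."""
    n, T = X.shape
    X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    edges = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # correlation of i at time t with j at time t + lag
            best = max((np.dot(X[i, :T - lag], X[j, lag:]) / (T - lag), lag)
                       for lag in range(1, max_lag + 1))
            if best[0] > thresh:
                edges.append((i, j, best[1]))
    return edges

rng = np.random.default_rng(3)
s = rng.standard_normal(500)
X = np.vstack([s, np.roll(s, 3) + 0.1 * rng.standard_normal(500)])
print(directed_network(X))   # expect station 0 -> station 1 at lag ~3
```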

Chinese Medicine Clinical Reasoning is Complex | Thursday 10:30-12:00

Lisa Conboy, Lisa Taylor Swanson and Tanuja Prasad

Introduction: Traditional Chinese Medicine (TCM) and other Asian medicines use diagnostic and treatment procedures that are individualized to the patient. The complexity of TCM is a core concept and strength of traditional acupuncture, and it can be maintained successfully in an RCT format, often with better results than standardized protocols. This project aims to describe clinicians’ reasoning during the provision of individualized treatment.

Methods: We used several qualitative techniques, including (1) Diagnostic Interviewing to identify and describe variations in diagnostic reasoning and heuristics as described in retrospective accounts given by acupuncturists in response to their review of clinical records of a small sample of patients they treated in a clinical trial of acupuncture, (2) double coding the Diagnostic Interviewing data for themes of complexity, considering the TCM diagnostic framework as a complex system, and (3) frame analysis to identify trends of changes in codes across practitioners. Purposive sampling was used to create a sample of practitioners from the parent study (n=4) with variation in training, offered diagnoses, and years of experience. Each clinician completed 2 interviews covering 2-3 patient cases.

Results: We found support for the TCM diagnostic system acting as a complex adaptive system. Identified codes include aspects of complex systems such as emergence, adaptation, connectivity, self-reflection, self-organization, non-linearity, and a critical phase change in a clinician’s thinking. Frame analysis revealed patterns in codes across clinicians.

Conclusions: Individualized diagnoses require a complex process, more than recitation of memorized “facts”. Considering the diagnostic process as a complex system may offer insight into the operation of other complex systems of clinical reasoning, such as biomedicine, in addition to adding to the medical education literature.

The interviews were funded by:
DOD W81XWH-15-1-0695 (CONBOY, LISA A) Funder: Army. Designing a Successful Acupuncture Treatment Program for Gulf War Illness

Circle Consciousness | Thursday 10:30-12:00

Jiyun Park

jiyun park – architect, artist, shaman, visionary
The Cooper Union, Architecture 1997
University of Michigan, 1991 [transfer 1993]
Papers and poster presentations at Consciousness Conference, Nexus Maths, Bridges Interdisciplinary.

Around 500-700 BC, Chinese morality, cosmology, and consciousness remained unified, like figure as ground or particle as wave or fields. Fields in reciprocity with formed patterns/structures were known as li, the trace of vital life force, or ki, subtle energies also known as ch’i, prana, mana, or flow. Thermal and water dynamics sit at the edge of transmuting into solid states; biomorphing appears as the growth of cells, tissues, organs, and systems as plants, animals, and humans of ki in increasing complexities of li patterns. Li patterns are shape-shifting isometric and fractal expressions of field potential along quantum s(lines) of potential force/energies of flow, being radial, rotational, or spiral. This circle, hidden in plain/plane sight, is also hidden in other planes or dimensions. Fifth-dimensional light/sound sonoluminescence transmutes into fourth-dimensional spacetime through knotted circular geometries. These are visible in the Cymatics of Hans Jenny and at Mereon.org.

Former MIT linguist Noam Chomsky, now at the Consciousness Center, Tucson, hints at Occult Chemistry and Mysterium, while Edward Witten of the Princeton Institute for Advanced Study states that M-theory is miraculous; still, Descartes’ three dreams, recorded by his biographer Adrien Baillet, lend themselves to the spaces of Stephen LaBerge’s lucid dreaming, where imagination, linked to memories, comprises collective, unified, subconscious, or unconscious patterns transmuting circular shape-shifting as Carl Jung’s metamorphic archetypes. Classical mechanical systems give way to dynamic biomorphic systems and the phenomenological growth of forms. Fields of reciprocal forces and energies, hinging and eclipsing from higher dimensions, push, pull, and rotate beyond x, y, z coordinate systems – through circle coordinates of 360^360. A circle having 360 degrees of potential reimagines consciousness.

Cognitive connotations of large-amplitude bursts in human brain dynamics | Monday 14:30-16:10

Kanika Bansal and Javier Garcia

Cascading large-amplitude bursts in neural activity, also known as avalanches, provide insight into the complex, spatially distributed interactions in neural systems. In large-scale human neuroimaging, avalanches show scale-invariant dynamics, supporting the hypothesis that the brain operates near a critical point facilitating functional diversity and, consequently, efficient inter-region communication. Such scale-invariant dynamics, characterized by power-law probability distributions of these avalanches, are widely observed in a variety of in vivo and in vitro systems; however, no single mechanism of their origin has been agreed upon in the literature. Even though the analysis of avalanches and other methods from complex systems theory have proven successful in supporting the “criticality” hypothesis, it is still unclear how the framework would account for the omnipresent cognitive variability, whether across individuals and/or tasks. To address these issues, we analyzed avalanches in the electroencephalography (EEG) activity of healthy humans during different task conditions, including a resting state. Our first set of results indicated that the features describing the dynamics of avalanches varied between individuals and tasks such that the global (whole-brain-level) distributions remain scale-invariant, while regional probabilities change in a scale-specific manner [Bansal et al. (2020), arXiv:2003.06463]. Here, we present how the regional, scale-specific changes relate to changing cognitive demands. We defined a region-specific metric, ‘normalized engagement’, that captures the relative probability of a brain region engaging in observed avalanches. Under varying cognitive task demands, we found that the normalized engagement of different brain regions changed systematically and even correlated with task performance. Contrary to a popular theoretical perspective which suggests that the features describing the dynamics of avalanches are universal, we found both the global and regional avalanche features to vary between individuals and tasks. Our results suggest that the study of avalanches in human brain activity provides a tool for understanding cognitive variability.
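
For concreteness, the sketch below shows one common way avalanches are extracted from multichannel recordings and how a per-channel engagement probability, in the spirit of the ‘normalized engagement’ metric named above, could be tabulated. Thresholds, array shapes, and data are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (assumed pipeline): an avalanche is a maximal run of time
# bins with at least one supra-threshold channel; engagement is the relative
# frequency with which each channel participates in avalanches.
import numpy as np

def avalanches(x, thresh=2.0):
    """x: (channels, time) z-scored array -> list of (size, channel_ids)."""
    active = np.abs(x) > thresh
    any_active = active.any(axis=0)
    events, start = [], None
    for t, a in enumerate(any_active):
        if a and start is None:
            start = t
        elif not a and start is not None:
            seg = active[:, start:t]
            events.append((int(seg.sum()), np.where(seg.any(axis=1))[0]))
            start = None
    if start is not None:                       # run extends to the end
        seg = active[:, start:]
        events.append((int(seg.sum()), np.where(seg.any(axis=1))[0]))
    return events

rng = np.random.default_rng(2)
x = rng.standard_normal((32, 5000))             # stand-in for z-scored EEG
events = avalanches(x)
counts = np.zeros(x.shape[0])
for _, chans in events:
    counts[chans] += 1
engagement = counts / counts.sum()              # per-channel engagement
print(len(events), engagement.round(3).max())
```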

Collective dynamics and criticality in neuromorphic nanowire networks | Tuesday 10:20-12:00

Joel Hochstetter, Ruomin Zhu, Alon Loeffler, Adrian Diaz-Alvarez, Tomonobu Nakayama and Zdenka Kuncic

Scale-invariant avalanches produce rich spatio-temporal dynamics close to phase transitions in a wide variety of real-world non-linear dynamical systems, including the human brain. In this state, known as criticality, information transfer is believed to be optimised. We find signatures of criticality in a novel brain-inspired nano-system, the ‘neuromorphic nanowire network’, in both simulation and experiment.

Neuromorphic nanowire networks are complex adaptive systems formed by polymer-coated inorganic nanowires that self-assemble into a densely connected, neural network-like topology. In response to electrical stimuli, intersections between nanowires (known as junctions) exhibit history-dependent, non-linear resistive switching behaviour. The dynamics of these electrical networks, as studied through the conductance time series, are a collective response of the interactions between individual junctions. Experimental studies suggest these neuromorphic networks may be poised at criticality, but this requires further investigation. We addressed it by analysing the role of individual junction dynamics in collective network behaviour and quantifying critical dynamics.

1) We found that in response to a constant voltage stimulus the network converges to attractor states in which the conductance states of junctions are finely balanced. Junction conductance states increase until they reach meta-stable states. This in turn triggers previously inactive adjacent junctions to increase their own conductance states. This dynamic mechanism causes groups of junctions to switch collectively in response to local changes in the network's voltage distribution. Above a critical network voltage, this results in the formation of low-resistance transport pathways spanning the network. The qualitative features of the network conductance time series are in good agreement between this model and experiment, with the predicted meta-stable states observed in experimental data.

2) In the vicinity of this critical voltage, transitions between meta-stable states are found to occur through discrete avalanches. In both experimental and simulated conductance time series, we found that the size and lifetime of avalanches follow power laws. In this regime the power-law exponents obey universal scaling relations, equivalent to those observed in many other systems, such as the brain. Mean temporal profiles of avalanches of varying durations are found to approximately collapse onto a single scaling function. At significantly lower and higher voltages this scale-invariant behaviour is lost, consistent with sub-critical and super-critical behaviour, respectively. This suggests that neuromorphic nanowire networks can exhibit dynamical criticality.

3) To further understand these suggested critical dynamics, we calculated the maximal Lyapunov exponent for simulated networks driven by periodic stimuli. For a constant-polarity stimulus we found the network always had a negative Lyapunov exponent (it converged to an attractor). However, for networks under an alternating-polarity stimulus, altering the amplitude and frequency of the pulse allowed networks to be tuned from attractor to chaotic dynamics. To test the computational capacity of these states we performed the simple learning task of non-linear waveform transformation. States at the ‘edge of chaos’, a state of infinite memory of perturbations (Lyapunov exponent close to zero), were found to perform considerably better for a range of target waveforms than chaotic or attractor states.
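
As a rough illustration of point 3, the sketch below estimates a maximal Lyapunov exponent by renormalizing the separation of two nearby trajectories of a driven map; this is a generic two-trajectory estimator applied to a toy map, not the authors' network model.

```python
# Minimal sketch (generic estimator on a toy system): a positive exponent
# indicates chaos, a negative one convergence to an attractor.
import numpy as np

def max_lyapunov(step, x0, n=5000, d0=1e-8):
    x = np.asarray(x0, dtype=float)
    y = x + d0                        # perturbed twin trajectory
    acc = 0.0
    for k in range(n):
        x, y = step(x, k), step(y, k)
        d = np.linalg.norm(y - x)
        acc += np.log(d / d0)
        y = x + (y - x) * (d0 / d)    # renormalize the separation
    return acc / n

# Toy example: a logistic map driven by a weak periodic signal.
step = lambda x, k: 3.9 * x * (1 - x) + 1e-3 * np.sin(0.1 * k)
print(max_lyapunov(step, np.array([0.4])))   # > 0 here: chaotic regime
```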

The observed potentially critical dynamics support the proposed application of neuromorphic nanowire networks as a next-generation information processing system.

Collective Ecophysiology and Physics of Social Insects | Tuesday 12:40-13:20

Orit Peleg

Collective behavior of organisms creates environmental micro-niches that buffer them from environmental fluctuations (e.g., temperature, humidity, mechanical perturbations), thus coupling organismal physiology, environmental physics, and population ecology. This talk will focus on a combination of biological experiments, theory, and computation to understand how a collective of bees can integrate physical and behavioral cues to attain a non-equilibrium steady state that allows them to resist and respond to environmental fluctuations of forces and flows. We analyze how bee clusters change their shape and connectivity and gain stability by spread-eagling themselves in response to mechanical perturbations. Similarly, we study how bees in a colony respond to environmental thermal perturbations by deploying a fanning strategy at the entrance, creating a forced ventilation stream that allows the bees to collectively maintain a constant hive temperature. Combining quantitative analysis and computation in both systems, we show how the bees sense environmental cues (acceleration, temperature, flow) and convert them into behavioral outputs that allow the swarms to achieve a dynamic homeostasis.

The Collision of Healthcare & Complexity During Covid-19 | Thursday 10:30-12:00

Keira Lum and Richard Nason

This paper draws attention to complex systems in healthcare from the perspective of different stakeholders in order to understand the level of complexity appreciation that exists among healthcare professionals. Structured interviews with a variety of healthcare professionals highlight the importance of acknowledging and understanding healthcare complexity when identifying problems and solutions.

The results of this paper underscore that understanding the health care system within the context of systems theory is paramount to navigating an environment of significant complexity and interconnectivity. It is argued that it is imperative that healthcare decision makers appreciate the differences between complicated and complex systems and how we can best operate on and address each. When addressing complicated problems, we often end up with reductionist thinking, where one focuses on static properties of elementary parts. In looking at complex problems, we need to use holistic thinking, which assesses a situation as a whole rather than as a collection of parts. This requires that we understand the environment and context in order to understand the system or problem as a whole.

The interview results from this study are a step forward in framing suggestions for building understanding, awareness and adaptability among healthcare networks for the future.

Communicative Vibration: A Graph-Theoretic Approach to Group Stability in an Online Social Network | Tuesday 14:30-16:10

Matthew Sweitzer, Robert Kittinger, Casey Doyle, Asmeret Naugle, Kiran Lakkaraju and Fred Rothganger

Social groups are complex dynamic systems – some may grow while others stagnate and die out, often interdependent with the state of other groups in the system. Recent research has attempted to predict the stability or vitality of meso-level groups using a wide variety of sociometric data, often with inconsistent results. The present study advances a new technique for characterizing group stability which we call “communicative vibration”. This technique leverages graph theory to assess the energy in networked communication. Specifically, we adapt the Fruchterman-Reingold layout algorithm to measure changes in a node’s position over time, representative of changes to the structure and frequency of communication within the group and situated in the broader social context. We then assess the role of this dynamism in predicting relevant group-level outcomes, such as group growth and death, as well as merging of groups, and compare communicative vibration to network and group characteristics which have been used in prior studies of group vitality.

The Fruchterman-Reingold algorithm is a force-directed graph layout algorithm designed to maximize certain aesthetic qualities of a graphic representation of a network, such as minimizing vertex and edge overlap. In achieving those goals, however, its authors created an algorithm that mimics several material properties and processes: spring or attractive forces bring connected vertices together, repulsive forces spread non-connected vertices apart, and movements of vertices in space are iterated via simulated annealing to produce a (local) maximum for vertex placement. By manipulating the strength of attractive and repulsive forces, as well as the cooling schedule for the iterative placement of nodes, we can fine-tune our measurement of vibration. In this way, our measure of communicative energy in social groups reflects the use of vibration to study material strength in the physical sciences.

We utilize this procedure to study data from a Massively Multiplayer Online Role-Playing Game (MMORPG), which we refer to as “Game X”. Game X allows users to join explicit groups, called guilds, which allow players to coordinate their in-game actions, including combat with other players or guilds, and to pool resources. We construct weighted and directed networks using 730 days of in-game communication between players. We then calculate communicative vibration in three ways: as the movement of networked players, as the movement of the centroid of the small group, and as changes in the distance of a group’s members from its centroid between each time step in the data. These measures are then included as guild-level features in a variety of machine learning frameworks (e.g., random forests, naïve Bayes, etc.) to predict group events. These events include the dissolution of the group, the merging of one group with another, and the formation of new groups. Communicative vibration is compared against other group features, such as age and diversity of in-game skills [1]. We conclude with a discussion of the computational feasibility in other dynamic networked contexts, the sensitivity of the layout parameters, and the implications of this method for studies of small group lifecycles.
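
A minimal sketch of the core measurement, under assumptions of our own (synthetic snapshots, default layout parameters, each layout seeded with the previous step's positions so that embeddings stay comparable): communicative vibration as the mean node displacement between consecutive Fruchterman-Reingold layouts.

```python
# Minimal sketch (illustrative, not the study's implementation): vibration of
# a sequence of communication networks via networkx's spring_layout, which
# implements the Fruchterman-Reingold algorithm.
import numpy as np
import networkx as nx

def vibration(snapshots, seed=0):
    pos, disp = None, []
    for G in snapshots:
        new_pos = nx.spring_layout(G, pos=pos, seed=seed)
        if pos is not None:
            common = [n for n in new_pos if n in pos]
            disp.append(np.mean([np.linalg.norm(np.asarray(new_pos[n]) -
                                                np.asarray(pos[n]))
                                 for n in common]))
        pos = new_pos
    return disp            # one displacement value per consecutive pair

snaps = [nx.erdos_renyi_graph(30, 0.1, seed=s) for s in range(3)]
print(vibration(snaps))
```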

Complex Systems and Classical Military Theory | Tuesday 10:20-12:00

Dave Lyle

War has always been an emergent phenomenon, composed of countless, constantly interacting physical and cognitive elements of individuals, groups, societies, and nation states competing violently for power, influence, and access to resources. If war is the sum total of these multiple levels of competition, can war be any less complex than any of the phenomena that play a part in it?

Schools of military theory have largely mirrored the tensions between schools of science: the “positivist” schools of thought born during the Enlightenment, which believed that the world is inherently ordered and controllable by formal scientific description and method, and the “romantics”, who suspected that randomness played a much greater role in the universe than our scientific tools and reason could handle on their own. Various military writers throughout the centuries have sought to offer prescriptive principles that can make warfare more predictable and manageable. But modern understandings of complex systems can help us better understand the true degrees of efficacy we can hope to achieve through our calculations of war, and help us avoid false conclusions about what we can hope to achieve through the force of arms alone.

A Complex Systems Approach to Cultural Sense Making Within Language Acquisition | Tuesday 10:20-12:00

Ian Edgerly

Culture has been clearly identified as critical to the learning or sustainment of a language because of the context it provides, essentially acting as the carrier of implicit meaning within language. The conceptual difficulties of defining culture are complex in their own right, but the drive to address them via a distinct methodology outside of language curricula deepens that intricacy. The integration of a deliberate teaching methodology for culture inside of, but distinct from, language is anything but simple, as described in Badger and MacDonald’s (2007) work on culture and language pedagogy. Although complex, this bifurcation of culture-module development from the formalized aspects of language learning has proven successful in initial second-language acquisition within United States Army Special Operations language courses. This measurement of success is not without its own empirical complexities, as measuring the abstract concepts inherent to cultural sense making within language learning requires a similarly abstract approach to observation and codification. Conducting this analysis at a programmatic level, where these indicators of cultural knowledge attainment via indirect methods are required for continued refinement and progress, also requires a bifurcation into domains specific to knowledge attainment at the student level and knowledge codification, in the form of development, at the program level.

With this in mind, I have found that using complexity theory at the onset of program development helps identify critical aspects of andragogy, in our program’s case, allowing both a micro and a macro view of the dynamical systems that affect both student learning and program development. Specifically, the Cynefin framework allows for granularity in planning and understanding. The framework’s focal point, the paradoxical notion of knowledge as both a thing and a flow, supports an understanding of an integrated culture and language program within the four domains of known, knowable, chaos, and complexity. At a programmatic level, these frames provide built-in flexibility as well as a deep understanding of what is needed to make the program succeed within its specific contexts. At a micro or meso level, the framework allows a better understanding of which knowledge elements within cultural constructs will help with contextual language development via the notion of shifting to different perspectives, or, simply put, perspective taking. Although complex in nature, once understood, this process allows for a “de-cluttered” and less complicated approach than other qualitative or quantitative methodologies. This entire process has been labeled the operationalization of culture, as our team, via a complexity approach, has been able to identify how and when cultural concepts assist with language attainment and when they provide rich contextual information that assists with a general understanding of another’s worldview.

A Complex Systems Approach to Develop a Multilayer Network Model for High Consequence Facility Security | Wednesday 15:00-16:40

Adam Williams, Gabriel Birch, Susan Caskey and Thushara Gunda

Protecting high consequence facilities (HCF) from malicious attacks is challenged by the increasing complexity observed in today’s operational environments and threat domains. This complexity is driven by a multi-faceted and interdependent set of trends that include the analytical challenge of modeling the intentionality of adversaries, (r)evolutionary changes in adversary capabilities, and less control over HCF operating environments. Current HCF security approaches provide a strong legacy on which to explore next-generation approaches, including recent calls to more explicitly incorporate the multidomain interactions observed in HCF security operations. Insights from complex systems theory and advances in network science suggest that such interactions, which can include relationships between adversary mitigation mechanisms (e.g., physical security systems and cybersecurity architectures) and facility personnel and security procedures, can be modeled as interactions between layers of activities. Based on observation and qualitative data elicited from diverse HCF security-related professionals, applying such a “layer-based” approach is a promising solution for capturing the interdependencies, dynamism, and non-linearity that challenge current approaches. Invoking a multilayer modeling approach for HCF security leverages network-based performance measures that (1) help shift underlying design, implementation, evaluation, and inspection from a “reactive” to a “proactive” ethos; (2) incorporate the multidomain interactions observed in HCF security; and (3) build a better foundation for exploring HCF security dynamics and resilience.

After exploring these interactions between cyber, physical, and human elements, this paper introduces major modifications to the conceptualization of HCF security, grounded in data elicited from a range of related subject matter experts. Next, it leverages insights from the systems theory and network science literatures to describe a method of constructing complex, interdependent architectures as multilayer directed networks that better describe HCF security. The utility of such a multilayer network-based approach for HCF security is then demonstrated with a hypothetical example. Lastly, key insights are summarized and the implications of incorporating network analytical performance measures into HCF security are discussed.
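
One way to encode such a model, shown purely as an illustrative toy rather than the paper's architecture: tag nodes with a layer (cyber, physical, human) and add directed inter-layer edges, after which standard network performance measures apply.

```python
# Minimal sketch (hypothetical facility elements): a multilayer directed
# network encoded by tagging nodes with their layer.
import networkx as nx

G = nx.DiGraph()
layers = {"cyber": ["ids", "badge_db"],
          "physical": ["camera", "gate"],
          "human": ["guard"]}
for layer, names in layers.items():
    G.add_nodes_from(((layer, n) for n in names), layer=layer)

G.add_edges_from([
    (("physical", "camera"), ("human", "guard")),   # alarm assessment
    (("cyber", "ids"), ("human", "guard")),         # cyber alert
    (("human", "guard"), ("physical", "gate")),     # response action
    (("cyber", "badge_db"), ("physical", "gate")),  # access control
])

# A network-based performance measure: which elements sit on the most
# detection-to-response pathways?
print(nx.betweenness_centrality(G))
```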

Complex Systems Modeling for Reliability, Latency, Resilience, and Capacity of Information Supply Chains | Monday 10:20-12:00

Kevin Stamber, Daniel Pless, Michael Livesay, Stephen Verzi and Anneliese Lilje

Complex systems modeling is performed for a variety of purposes: understanding the reliability, latency, resilience, and capacity of systems. A range of techniques is used, including Bayesian networks, linear programming, and stochastic simulation. These techniques are applied across a range of domains, from the representation of integrated chemical production at hundreds of facilities nationally and thousands globally, to the modeling of information systems. Combining purpose, technique, and domain requires a common set of elements to describe the systems and their interplay. These common elements – entities representing systems and/or subsystems that receive products and produce other products from them; products representing these entities’ inputs and outputs; and the “stoichiometry,” or formulaic methods applied at the entity level for processing an entity's inputs into its outputs – work together with additional data for purpose-driven models to enable the modeling of a range of phenomena. Capturing these phenomena is an essential reason why we perform complex systems modeling.

This research explores the development of a novel and parsimonious common structure driving multiple, disparate complex systems models for this range of purposes, motivated by interest in the domain space of information supply chains. Reliability modeling is a process involving Bayesian logic networks; here the research has addressed the issue of cyclic graphs while still permitting a unique solution. Latency modeling follows a similar construct but requires an identified initiation product for determining the latency of system products relative to that initiator. System capacity modeling relies on knowledge of throughput rates and input-output ratios for individual products in an optimization setting, allowing for the determination of maximal throughput as well as the identification of bottlenecks; this work leads to a unique means of optimizing system performance built on the nature of the common structure.
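
As a toy illustration of the capacity-modeling idea (a formulation assumed for exposition, not the authors' model), the linear program below maximizes the throughput of a two-entity chain with fixed input-output ratios; the binding constraint identifies the bottleneck.

```python
# Minimal sketch: entity A converts raw input to intermediates (rate <= 10);
# entity B converts 2 intermediates into 1 final product (rate <= 3).
# Variables: x = A's run rate, y = B's run rate. Maximize final output y.
from scipy.optimize import linprog

res = linprog(
    c=[0, -1],           # maximize y (linprog minimizes, hence the sign)
    A_ub=[[-1, 2]],      # intermediates consumed (2y) <= produced (x)
    b_ub=[0],
    bounds=[(0, 10), (0, 3)],
)
print("max throughput:", -res.fun)   # 3.0 -- entity B is the bottleneck
```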

Modeling of systemic resilience is a more complex problem. The current model leverages time to recovery from a long-tail (low-probability, high-consequence) catastrophic failure of multiple system components as the basis for a resilience metric. Other ongoing research in this space leverages previous work defining a resilience metric as a function of three elements of systemic performance: first, the systemic impact (SI) of an event (or series of events over time), usually one in the probabilistic tail, which pushes the mean restoration event far from the median restoration event; second, the total recovery effort (TRE) involved in restoring the system to nominal function; and third, nominal system performance (NSP), which factors in normal recovery staffing, typically designed for the more frequent median disruptive event. The forward-looking element of this research discusses both the current and potential future state of capturing resilience, including the possible incorporation of TRE and NSP, leveraging some of the earlier-described techniques in addition to Monte Carlo simulation.
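
Since this summary names the three elements but does not fix how they combine, the snippet below shows one hypothetical functional form, purely for illustration, of a resilience score built from SI, TRE, and NSP.

```python
# Hypothetical functional form (not from the paper): resilience shrinks as
# systemic impact and recovery effort grow relative to nominal performance.
def resilience(si, tre, nsp):
    """si: systemic impact; tre: total recovery effort; nsp: nominal system
    performance -- all in consistent performance-time units."""
    return nsp / (nsp + si + tre)

# Example: impact 40, recovery effort 20, nominal performance 100 -> 0.625.
print(resilience(si=40, tre=20, nsp=100))
```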

Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.

Complexity-inspired Innovation Trajectories for Product Development Teams | Wednesday 15:00-16:40

David Newborn

The path of innovation can sometimes appear as a chaotic tangle of inefficient and counterproductive trajectories, in which most paths lead to little or no impact and value, and some paths might lead to genuine breakthroughs. Qualitatively, impact and value can be significantly improved by incorporating principles, techniques, and methods from toolsets and philosophies like Human-Centered Design, Design Thinking, the Agile philosophy, and the Lean Startup approach. Unique, context-specific trajectories through components of these toolsets and philosophies have yielded high innovation returns, given the complex, adaptive nature of research, development, science, and technology endeavors within the U.S. Department of the Navy. This paper captures examples of trajectories by teams working with existing and novel technologies and with documented and emerging techniques, in support of known and discovered value propositions. To counter the Hawthorne Effect and survivorship bias, both successful and unsuccessful trajectories are presented. For each trajectory, an assessment is made of how consistent it is with relevant complexity science principles.

Complexity and Corruption Risk in Human Systems | Thursday 10:30-12:00

Jim Hazy

Typical theories of corruption focus on individual ethics and agency problems during human organizing. In contrast, this paper applies concepts from complexity science to describe the emergence of corruption risk during conditions that reflect different types of uncertainty intrinsic to social interaction systems.

In this paper, I argue that disruptive disequilibrium conditions, like those that emerge as opportunities or threats are observed by agents in a system, create opportunity potential. This potential perturbs the previous alignment among agents with regard to their expectations about how they realize value from participation in the system. Expectations about this new value potential heighten their sense of urgency to act, but at the same time they must do so in an increasingly complex decision space. This reduces their level of participation in other system activities, weakening the system’s internal process momentum, which agents see as instrumental in distributing system value to its members. This further increases uncertainty and reinforces a positive feedback loop. As a consequence, an increasing number of agents begin to make choices and act in ways that realize new value for themselves locally while perhaps harming others.

Corruption risk is present if these decisions involve contingent pathways that advantage the decision maker while potentially harming others, even if only by denying them access to a benefit to which they have a claim. Decisions along these potentially harmful pathways are said to support a ‘value sink’ in the ‘reference system’ because, like a heat sink in thermodynamics, the value that flows to them is lost to the system. To clarify, the risk of corruption is present whether or not any particular agent realizes that its action may bring harm to others, even if only by denying the benefit in the value sink to others who are entitled to it. Systemic corruption risk is present when a subset of agents has the opportunity to form a distinct social identity within the organization or ‘reference system’, which draws members to make choices that advantage those inside their loop of social identity while disadvantaging, that is, causing harm to, those in the reference system but outside the loop. Furthermore, stabilizing feedback loops may act to embed the value sink structure into the reference system in ways that would be considered corrupt.

I argue that this approach to studying corruption is valid across multiple levels of structural scale in social systems (Hazy, 2004, 2012, 2019). This suggests the following definition of corruption in the complexity sense: corruption involves actions that exploit local resource opportunities in the ecosystem for an agent’s own use in ways that deprive others of those same benefits even though they would otherwise be entitled to them. Further, I define a ‘value sink’ to be any structure that extracts, or ‘drains’, resources from the system for the exclusive use of some agents while depriving deserving or otherwise entitled others of those benefits. Based on this theory, I identify four distinct types of corruption risk and suggest policy interventions to manage them.

References
Hazy, J.K. (2004). Leadership in complex systems: Meta-level information processing capabilities that bias exploration and exploitation. Proceedings of the 2004 NAACSOS conference, Carnegie Mellon University, Pittsburgh, PA.

Hazy, J. K. (2012). Leading large: emergent learning and adaptation in complex social networks. Int’l Journal of Complexity in Leadership and Management, 14(1/2), 52-73.

Hazy, J.K. (2019). How and Why Effective Leaders Construct and Evolve Structural Attractors to Overcome Spatial, Temporal, and Social Complexity. In J. P. Sturmberg, (ed.) Embracing Complexity in Health, https://doi.org/10.1007/978-3-030-10940-0_13. Basel, Switzerland: Springer, pp. 223-235.

Hazy, J.K. & Boyatzis, R.E. (2015). Emotional contagion and proto-organizing in human interaction dynamics. Frontiers in Psychology, 6(806), 1-16.

Complexity and Ignorance in Social Sciences | Wednesday 10:50-12:30

Czeslaw Mesjasz

In purely logical terms, the utterance “this is complex” is equivalent to an acknowledgment of ignorance on the part of the observer/participant. This observation is relevant to all situations, from daily life to the application of complexity science in all areas of study. The challenge arising from the roles of ignorance and complexity has a special place in modern social sciences. From a philosophical point of view, the challenges of knowledge and ignorance are eternal. Their symbol is the declaration supposedly made by Socrates: “I know that I know nothing”.

So what new can be said today about complexity, knowledge, and ignorance in the study of society? What are the new factors driving the increase in complexity and ignorance? And how can the above questions be answered without becoming trapped in another level of complexity of explanations, and another level of ignorance, depicted by the famous sentence: I do not know that I do not know?

The paper includes the results of my book-size project “Complexity and Ignorance in Modern Management”. The proposed conceptual framework is also applicable in other normative, action-oriented disciplines, e.g. economics or security studies. One source of inspiration for the project is the popularity of the terms “chaos theory” and “edge of chaos”, showing how “catchy” names assigned to classes of equations – chaos (Li and Yorke 1975) and “edge of chaos” (Langton 1992) – may have a strong impact on theory and on policy making.

The project results from my long-time studies of applications of the ideas of complexity in social sciences with special stress put on normative, action-oriented disciplines – management and security studies. This approach is also applicable in the studies of complexity of environmental challenges. The project is developed with awareness of limitations of the meaning of all terms – complexity, knowledge and ignorance.

The main determinant of the increasing impact of ignorance on management, and on social life, is not the rapidly growing amount of produced and received information and knowledge, called the information explosion, information abundance, etc. An equal, if not bigger, challenge is to comprehend or, in a more general sense, to assign meaning to those over-flooding streams of bits. In a broader interpretation, making sense of this overwhelming flood of impulses is the main challenge for modern management. The development of information technology, and particularly artificial intelligence and, as part of it, cognitive science, creates another challenge.

Another determinant of the studies of complexity and ignorance is the development of social theory: advanced mathematical modelling as well as post-modernism, post-structuralism, and interpretative approaches also contribute to a higher level of ignorance.

The first results of the project, to be presented at ICCS 2020, can be depicted as follows. In management, the ideas taken from complexity science include mathematical models, analogies, and metaphors (of course, reference to the latter is a simplification). Thus, as a first step, a typology of applications of the ideas of variously defined complexity in social studies is prepared. It embodies most cases of the use of the term complexity, beginning with Kolmogorov-Solomonoff-Chaitin ideas, through thermodynamics (the concept of entropy also means an acknowledgment of ignorance), and ending with some purely metaphorical applications of the term complexity and associated notions in social sciences. Subsequently, a methodological framework is presented which allows us to disclose what could be unknown in all the above cases of application of the terms complex and complexity.

The proposed methodological framework, which can be called CAISA (Complexity-and-Ignorance-Sensitive Approach) or CAISSA (Complexity-and-Ignorance-Sensitive Systems Approach), will include two elements: identification of the patterns of ignorance associated with applications of the concept (utterance?) complexity in all the above-mentioned circumstances, and the ways in which this ignorance can be dealt with. Of course, no “naive”, far-reaching expectations are presented; there is just the hope that the level of “negative” ignorance will be diminished and the level of “positive” ignorance (the more I know, the better I know what I do not know yet) will increase.

Complexity and Scrum: A Reappraisal | Wednesday 15:00-16:40

Czeslaw Mesjasz, Katarzyna Bartusik, Tomasz Malkus and Mariusz Soltysik

Complexity, emergence, chaos, edge of chaos, non-linearity, self-organization, and other concepts associated with broadly defined complexity science have been part of project management theory and practice since the 1990s. They are also treated as distinctive facets of Agile Project Management (APM), including Scrum. Due to multiple uses and misuses, sometimes even as “buzzwords”, their applications are challenged by the dilemmas “fad or promising concept” and “fad or radical challenge”. Deeper study of the multitude, variety, and scope of applications of complexity-related concepts in APM shows that they are used as mathematical models, analogies, and metaphors. The following research questions can be formulated:

1. What was the impact of applying ideas drawn from complexity science on the development of the methodological frameworks collectively known as Agile Project Management frameworks?

2. What is the current status of those applications?

3. Are they a “fad”, or are their applications valuable for the practice and theory of APM?

4. What should be changed in the applications of complexity-related ideas in APM?

The aim of the paper is to identify the applications of complexity-related ideas – complex, complexity, chaos, the edge of chaos, emergence, and self-organization – in Agile methodological project management frameworks. The Scrum methodological framework is used as an example.

The paper includes the preliminary results of our research project, in which we try to assess the role of complexity science in increasing the effectiveness of project management, both in software development and in other types of projects. We have already finished an introductory study of the impact of complexity science on Scrum. The primary results show that broadly defined complexity science is used in Scrum as a source of interpretations of the complexity of projects, of project teams, and of the environment, sometimes with deep reflection and sometimes only superficially.

We propose a conceptual framework for further theoretical studies and several ways of improving and refining the Agile approach that are necessary for dealing with broadly defined complexity in project management. It is not possible to develop classically verifiable/falsifiable hypotheses in a broad survey-like study. The following conjectures are scrutinized instead:

1. The models, analogies, and metaphors drawn from complexity science, including such concepts as complexity, chaos, the edge of chaos, and related terms, were initially applied in all types of Agile project management without sufficiently deep reflection on their origins, meaning, and usefulness.

2. In spite of declared links between Agile methods and the mathematical models purportedly capturing complexity, chaos, and related notions, only weak links between the models and the reality of project management can be identified.

3. Applications of complexity-related concepts in project management have to be reassessed using state-of-the-art knowledge from complexity studies, linguistics, psychology, cognitive science, and deeper knowledge about the specificity of advanced projects. This reappraisal should help in elaborating more effective interdisciplinary approaches to managing various types of projects, not only in software development. It concerns the complexity of projects, the complexity of their environment, and the activities of project teams (adaptation, learning, self-organization).

An important factor shaping all discussions of the practice and theory of Agile Project Management and related concepts must be taken into account. Proponents of making project management adaptable, e.g. the authors of the Agile Manifesto, face a basic paradox: how to define precisely something that is by definition “agile”, i.e. imprecise, owing to inherent uncertainty, unpredictability, adaptation, and learning.

Complexity Economics: A Different Way to Think about the Economy | Wednesday 13:20-14:00

W. Brian Arthur

In the last few years a new framework for economic thinking has emerged: complexity economics. It was pioneered in the early 1990s by a small team at the Santa Fe Institute led by Brian Arthur. The standard economic framework sees behavior in the economy as being in an equilibrium steady state; people making decisions face well-defined problems and use perfect deductive reasoning as the basis for their actions. The complexity framework, by contrast, sees the economy as always in process, always changing; people try to make sense of the situations they face using whatever reasoning they have at hand, and together create outcomes they must individually react to anew. The resulting economy is not a well-ordered machine, but a complex evolving system that is imperfect and perpetually constructing itself anew. Arthur will explain how complexity economics came to be and how it works.

Conflicting Information and Compliance With COVID-19 Stay-At-Home Orders and Recommendations | Monday 14:30-16:10

Asmeret Naugle and Fred Rothganger

The ultimate effects of the COVID-19 pandemic will be determined largely by human response to its spread. Governments around the world have implemented different measures to encourage social distancing and other behaviors that reduce the spread of the virus (BBC News, 2020). Some of these governments, including many U.S. states, have implemented stay-at-home orders (Mervosh et al., 2020), which ask or require that citizens limit exposure to people and locations outside of their homes.

Even with stay-at-home orders in place, individuals make daily decisions about whether to leave their homes, for what reasons they should venture out, and what protective measures, such as using face masks, to take when they do. Early implementation of stay-at-home orders in the United States has produced mixed results; the orders have been somewhat effective at reducing movement and social interaction (Andersen, 2020), but have also met with resistance, both from citizens (Siegler, 2020) and from some state governments (Blake, 2020). While most Americans support restrictions related to COVID-19 (Van Green and Tyson, 2020), the timing and character of responses to stay-at-home orders and other COVID-19-related restrictions have been shaped by polarization (Heath, 2020; Allcott et al., 2020). This polarized response has likely been exacerbated by divergent reporting from news sources (Bursztyn et al., 2020) that tend to be followed by different portions of the population (Jurkowitz et al., 2020).

We developed an agent-based model of societal compliance with stay-at-home orders, in which individuals make those decisions based on reporting from news sources, social influence, and economic motivations. The model shows that polarization can emerge from divergent media reporting, even when those reports are based on similar information. Specifically, we incorporate social comparison theory (Festinger, 1954), which states that individuals tend to adjust their opinions to assimilate with others’, as long as those others’ opinions are similar enough to their own. If, however, the others’ opinions are substantially different, the individual will tend to adjust their opinions to contrast with them. We show that if media reports on the COVID-19 situation differ in the timing and magnitude of their coverage of the danger of the disease, substantial polarization in compliance with stay-at-home orders can emerge.
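
A minimal Python sketch of the social-comparison rule as we read the abstract; the threshold and rate values are illustrative assumptions, not parameters taken from the authors' model:

    import numpy as np

    # Illustrative social-comparison update (Festinger, 1954): assimilate
    # toward nearby opinions, contrast away from distant ones.
    def update_opinion(own, others, threshold=0.3, rate=0.05):
        diff = np.asarray(others, dtype=float) - own
        close = np.abs(diff) <= threshold
        pull = rate * diff[close].sum()     # assimilate toward similar opinions
        push = -rate * diff[~close].sum()   # contrast away from distant opinions
        return float(np.clip(own + pull + push, -1.0, 1.0))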

Understanding how psychology and information interact to affect individuals’ decisions about complying with stay-at-home orders is vital to managing the response to the COVID-19 pandemic. In order to determine the best ways to ease restrictions, and to re-implement them should that become necessary, we must know how people are likely to react to these orders. The specific implementation of any such restrictions will affect compliance, as will the social and psychological factors that affect decision making in general. The interaction between these factors and restrictions will determine how restrictions should be implemented (and lifted) in order to create the social distancing necessary to reach the desired reductions in COVID-19 transmission, without imposing unnecessarily rigid restrictions.

References

Allcott, H., Boxell, L., Conway, J., Gentzkow, M., Thaler, M., & Yang, D. Y. (2020). Polarization and public health: Partisan differences in social distancing during the Coronavirus pandemic. NBER Working Paper, w26946.

Andersen, M. (2020). Early evidence on social distancing in response to COVID-19 in the United States. Available at SSRN 3569368.

BBC News. (2020, April 1). Coronavirus: What measures are countries taking to stop it? BBC News. https://www.bbc.com/news/world-51737226

Blake, A. (2020, April 2). Which states are resisting tougher coronavirus measures? The Washington Post. https://www.washingtonpost.com/politics/2020/04/01/states-like-florida-lag-coronavirus-response-white-house-declines-push-harder/

Bursztyn, L., Rao, A., Roth, C., & Yanagizawa-Drott, D. (2020). Misinformation during a pandemic. University of Chicago, Becker Friedman Institute for Economics Working Paper, 2020–44.

Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7(2), 117–140.

Heath, B. (2020, March 6). Americans divided on party lines over risk from coronavirus: Reuters/Ipsos poll. Reuters. https://www.reuters.com/article/us-health-coronavirus-usa-polarization/americans-divided-on-party-lines-over-risk-from-coronavirus-reuters-ipsos-poll-idUSKBN20T2O3

Jurkowitz, M., Mitchell, A., Shearer, E., & Walker, M. (2020). U.S. Media Polarization and the 2020 Election: A Nation Divided. Pew Research Center.

Mervosh, S., Lu, D., & Swales, V. (2020, April 20). See Which States and Cities Have Told Residents to Stay at Home. The New York Times. https://www.nytimes.com/interactive/2020/us/coronavirus-stay-at-home-order.html

Siegler, K. (2020, April 18). Across America, Frustrated Protesters Rally To Reopen The Economy. NPR. https://www.npr.org/2020/04/18/837776218/across-america-frustrated-protesters-rally-to-reopen-the-economy

Van Green, T., & Tyson, A. (2020, April 2). 5 facts about partisan reactions to COVID-19 in the U.S. Pew Research Center. https://www.pewresearch.org/fact-tank/2020/04/02/5-facts-about-partisan-reactions-to-covid-19-in-the-u-s/

Contingent tunes of neurochemical ensembles in the norm and pathology: can we see the patterns? | Tuesday 14:30-16:10

Irina Trofimova

Multi-disciplinary analysis of multiple neurochemical systems shows their transient and constructive nature. This nature challenges technological approaches that assume stable functionality of brain structures and pay less attention to the body's chemical systems. The FET (Functional Ensemble of Temperament) framework suggests basing the classification of temperament traits, and of their extreme deviations in psychopathology, on the universal functional structure of mammalian behaviour. Such an approach highlights functional relationships between multiple neurochemical systems formally and systematically. This allows psychiatric disorders (and temperament traits in healthy people) to be presented in a compact taxonomy that is verifiable in neurophysiological experiments on humans and animals. The presentation highlights the benefits of the functional constructivism approach, in contrast to behaviourism, graph theory, and network approaches.

The Continuum from Temperament to Mental Illness: Dynamical Perspectives | Monday 14:30-16:10

William Sulis

Temperament in healthy individuals and mental illness have been conjectured to lie along a continuum of neurobehavioural regulation. This continuum is frequently regarded in dimensional terms, with temperament and mental illness lying at opposite poles along various dimensional descriptors. However, temperament and mental illness are quintessentially dynamical phenomena, and as such there is value in examining what insights can be gained through the lens of our current understanding of dynamical systems. The formal study of dynamical systems has led to the development of a host of markers which serve to characterize and classify dynamical systems and which could be used to study temperament and mental illness. The most useful markers for temperament and mental illness apply to time series data and include geometrical markers, such as (strange) attractors and repellors, and analytical markers, such as fluctuation spectroscopy, scaling, entropy, and recurrence times. Temperament and mental illness, however, possess fundamental characteristics that present considerable challenges for current dynamical systems approaches: transience, contextuality, and emergence. This review discusses the need for time series data and the implications of these three characteristics for the formal study of the continuum, and presents a dynamical systems model based upon Whitehead’s Process Theory and the neurochemical Functional Ensemble of Temperament model. The continuum can be understood as second- or higher-order dynamical phases in a multiscale landscape of superposed dynamical systems. Markers are sought to distinguish the order parameters associated with these phases and the control parameters which describe transitions among these dynamics.

Cores and Peripheries in Complex Cities | Thursday 10:30-12:00

Olga Buchel

Business cores and peripheries in economics are often identified by the location of company clusters. However, cities are complex systems with regard to scaling laws for population ranks and for the complexity of their transportation patterns (Masucci et al., 2009). A natural approach to studying complex systems is street network analysis (Boeing, 2017). Street networks are spatial signatures of cities that capture their historical structures and allow inferences about the dynamics of transportation in the city. Network analysis of streets enables researchers to compare cities in terms of their network structures, connectedness, density, centrality, and resilience (Boeing, 2017).

Empirical studies in a number of European, US, and Asian cities have shown that certain types of businesses (especially retail and entertainment businesses, and hospitals) favor streets with certain centrality values. Porta et al. (2012) show that street centralities in Barcelona, Spain, are correlated with the location of economic activities and that the correlations are higher for secondary than for primary activities. Wang, Antipova, and Porta (2011) demonstrate that population and employment densities in Baton Rouge, LA, are highly correlated with street centrality values. Wang, Chen, Xiu, and Zhang (2014) found that specialty stores in Changchun, China, value various centralities most, followed by department stores, supermarkets, consumer product stores, furniture stores, and construction material stores. Rui and Ban (2014) found that the density of each street centrality in Stockholm is highly correlated with one type of land use; their results suggest that various centralities can capture land development patterns and reflect human activities. Rui et al. (2016) use network centrality measures to study retail service hot-spot areas and the spatial clustering patterns of a local retail supermarket, Suguo, and of foreign-brand retail chains in Nanjing; their findings can be used to optimize the locational choice of new stores. Ni et al. (2016) examined the relationship between the road network and the distribution of healthcare facilities; they showed that the distribution of hospitals correlates highly with street centralities, and that the correlations are higher for private and small hospitals than for public and large hospitals. Lin, Chen, and Liang’s (2018) results show that street centrality in Guangzhou has a large impact on the location of retail stores and that different store types have different centrality orientations.

In this project, we examine the relationships between street centrality and types of businesses in Shanghai, China, and Boston, MA. For analyzing streets we use street and subway networks from OpenStreetMap, datasets of businesses in these two cities, and traffic data from two locations. For Shanghai, we use the Toyo Keizai Inc. dataset (2014 edition), which contains the locations of more than 20,000 Japanese subsidiaries in China. For Boston, we use a large dataset of private companies from the Orbit database (over 80,000 companies). We use multivariate regression to analyze the relationship.
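
A hedged Python sketch of such a pipeline (not the project's actual code): osmnx downloads the OpenStreetMap street network, networkx computes node centralities, and scikit-learn fits the regression; business_density_near is a hypothetical helper that counts businesses near each intersection.

    import networkx as nx
    import numpy as np
    import osmnx as ox
    from sklearn.linear_model import LinearRegression

    G = ox.graph_from_place("Boston, Massachusetts, USA", network_type="drive")
    Gu = nx.Graph(G)  # collapse the MultiDiGraph to a simple undirected graph

    closeness = nx.closeness_centrality(Gu, distance="length")
    betweenness = nx.betweenness_centrality(Gu, k=500, weight="length")  # sampled

    nodes = list(Gu)
    X = np.column_stack([[closeness[n] for n in nodes],
                         [betweenness[n] for n in nodes]])
    y = np.array([business_density_near(n) for n in nodes])  # hypothetical helper
    model = LinearRegression().fit(X, y)
    print(model.coef_, model.score(X, y))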

Coronavirus Theory: The Theoretical and Practical Realities of COVID-19 Pandemic Management | Friday 9:15-9:50

Aistis Šimaitis

COVID-19 has been a difficult virus to understand in theory and an even harder one to manage in practice. In this talk I will discuss the empirically observed dynamics of the virus's spread and the theoretical underpinnings of this behaviour, which we used to develop the COVID-19 management strategy in Lithuania. I will also cover a set of practices for testing, monitoring, and quarantine relaxation that we have been using to identify and control outbreaks across the country, and discuss several difficulties in applying theoretical best practices in real life.

A coupled variational autoencoder for improving the robustness of generated images | Monday 10:20-12:00

Kenric Nelson, Shichen Cao, Jingjing Li and Mark Kon

We present a coupled Variational Auto-Encoder (VAE) method that improves the accuracy and robustness of the probabilistic inferences on represented data. The new method models the dependency between input feature vectors (images) and weighs the outliers with a higher penalty by generalizing the original loss function to the coupled entropy function, using the principles of nonlinear statistical coupling. We analyze the histogram of the likelihoods of the input images using the generalized mean, which measures the model’s accuracy as a function of the relative risk. The neutral accuracy, which is the geometric mean and is consistent with a measure of the Shannon cross-entropy, is improved. The robust accuracy, measured by the -2/3 generalized mean, is also improved. Compared with the traditional VAE algorithm, the output images generated by the coupled VAE method are clearer and less blurry.

We improve the original VAE algorithm by modifying the loss function. The original loss function consists of two terms: the first is the KL-divergence between the prior density and the posterior density of the latent variables, and the second is the cross-entropy between the posterior densities of latent and generated data points, which can also be viewed as the expected reconstruction error. In our work, we modify the loss function with “nonlinear coupling,” which measures the long-range correlation between the states. The modified loss function improves the results in two respects. First, by assuming that the states in the system are no longer independent, we discount the amount of available information. This forces the trained model to have more certainty and thereby be more robust and accurate when reconstructing data. Second, the coupled entropy weights low-probability events with a higher cost and imposes a larger penalty on the divergence between the approximate posterior and the prior of the latent variables. The model therefore has to be more sensitive to outliers, so that it can increase the probability density of generative data points under the decoding model. This ensures that the model does not become over-confident about outliers during training.

The performance of the coupled VAE is assessed using the MNIST dataset of handwritten numerals. Sixty thousand images are used for training and 10,000 for testing, each a 28x28 grey-scale image. The images are encoded onto a multivariate Gaussian latent space. A two-dimensional latent distribution is used to visualize the space, and a 20-dimensional distribution is used to evaluate performance. Without the coupling, the {robustness, accuracy, decisiveness} likelihood metrics were measured to be {10^{-79}, 10^{-39}, 10^{-15}}. These likelihoods compare favorably with the likelihood of a uniformly random image, 10^{-1886}. A coupling value of 0.1 for the training loss function penalizes images with a very low likelihood. All three metrics show improvement, with substantial increases in robustness and accuracy: {10^{-71}, 10^{-29}, 10^{-12}}.
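
These metrics can be computed directly from the per-image likelihoods; a minimal sketch, assuming the standard power-mean definition with the geometric mean as the r -> 0 limit:

    import numpy as np

    def generalized_mean(likelihoods, r):
        # r = 1: arithmetic mean (decisiveness); r -> 0: geometric mean
        # (accuracy, consistent with the Shannon cross-entropy); r = -2/3:
        # robustness, dominated by the worst-reconstructed images.
        p = np.asarray(likelihoods, dtype=float)
        if r == 0:
            return float(np.exp(np.mean(np.log(p))))
        return float(np.mean(p ** r) ** (1.0 / r))

    # e.g., robustness = generalized_mean(test_likelihoods, -2/3)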

COVID-19 -- A Complex Multi-dimensional Tangled Web | Thursday 10:30-12:00

Krishnan Raman

In late 2019, micro-organisms in East Asia jumped from an animal [maybe a bat] to a human. The Black Swan thus rapidly released is having world-wide repercussions, jolting our lives and society. Its effects are biological, epidemiological, economic, socio-psychological, and political, and have the potential to bring today’s human society to near-collapse.

Human society, especially modern urban society, together with its natural environment, is an intricate network of many entities, interactions and processes. To comprehend what is happening, we need to model these components and their interconnections.

The familiar daily-life entities include Individuals/families, homes, suppliers of basic needs and services, health-care providers, workplaces, schools, businesses & supply chains, the financial world, social collectives, and the governance and public-health infrastructures. To these we add essential Meta-level entities – including Values and Norms, Nature of Society, Attitudes & Outlooks, and Governance rules – which can all play a decisive role.

The daily-life processes include scripts for habitual activities – Home-to-Workplace commuting; Procuring essential supplies; Going to Schools, Conferences and Social/Religious Collectives; Small-business Supply Chains; and Daily Wage-earners/Migrant-workers Earning their living. We now add some Public-health-related processes – Habitual Use of PPE, Drug-discovery Research Programs, Vaccination R&D schemes, and new Governance processes such as Lockdown and ReOpening.

Processes we earlier took for granted now require scrutiny and modification, as they can affect health, safety and financial solvency.

The Problem: To analyze this complex human system and try to find acceptable scenarios/outcomes.

Note: The entities above have multiple dimensions, and are time-varying.

There are multiple objectives -- competing, possibly conflicting.

– Avoiding Infection vs Earning one’s livelihood. Keeping people safe, while preserving the economy, minimizing unemployment. Also preserving Social Relationships and Mental Stability.

With Multiple Objectives, there is generally no optimal solution.

Instead --

We can look at multiple solutions, a Preferred solution being selected by Advocacy.

OR Deliberately decide on a weighted balance of health safety and financial well-being – allowing it to change with time.

OR An alternative approach – use heuristics: satisficing (Herbert Simon) instead of optimizing.

We will discuss the following:

How good data analytics and data availability are required for estimating the number of infections and the recovery rate – needed for satisfactory policy-making – and the possible role of ML and AI.

What are suitable Influence Points?

The role of Time Constants – COVID-19 proceeds very rapidly, leaving little time to adjust.

Will the timing and staging of Lockdown & Reopening lead to better health and financial outcomes?

Moral hazard situations of different types.

How meta-level factors can affect the outcome – now and in the long term:

Attitude toward Authority;

Attitude toward “Science” and Misunderstanding of Science in this context;

Individual freedom & Privacy vs Collective safety & welfare;

Nature of Society – Democracy vs Authoritarian;

The role of political values, attitudes & events in COVID scenario outcomes;

Global vs Isolationistic outlook.

What we can expect and/or plan for a post-COVID society --

Developments in technology which enable new lifestyles;

New relationships between Workplace and Home -- evolution of new alternatives;

Possible Integrated alternatives for Workplace, Transportation and Residence;

New Developments in Online Interactions, Online Education & Online Communities;

New Political Governance models;

New SocioPolitical Give-and-Take involving Individual Freedom, Privacy, Collective Welfare and Risk-taking.

It is clear that changes are needed.

When Vesuvius was erupting, and volcanic ash falling on Pompeii, many people were still going about enmeshed in petty daily squabbles. Today appears similar.

Will the COVID pandemic help persuade us of the need for change?

COVID-19 lessons for the future of governance | Wednesday 15:00-16:40

Anne-Marie Grisogono

The history of life is replete with instances of the emergence of collectives which develop agency as a new type of individual entity at a higher scale than the component entities of which they are composed. Loosely speaking, each such transition can only occur if the collective benefits outweigh the costs at both scales. A key factor is how conflicting values at the two scales are traded off and enforced to maintain the coherence of the collective entity. Evolution has demonstrated an astonishing variety of such mechanisms permitting the evolution of complex life forms, from the eukaryotic cell to multi-cellular organisms, the evolution of intelligence, and arguably the wide range of complex social, political, and governance structures which define today's collective human enterprises. We focus here on one aspect of these structures: evaluating their potential competence in managing very complex problems so as to maintain their own coherence. We draw on extensive research into the factors determining individual complex decision-making competence to propose an evaluation framework for existing and hypothetical governance systems. The COVID-19 pandemic has provided a unique opportunity to test such a framework by examining the outcomes produced by each existing form of governance in attempting to deal with a similar challenge. We seek to draw insights, both challenges and opportunities, for the future of more effective governance.

COVID-19 Pandemic Signatures: Estimating the Onset Time with Kolmogorov-Zurbenko Adaptive Filter | Monday 14:30-16:10

Elena N. Naumova, Ryan B. Simpson, Meghan A. Hartwick, Aishwarya Venkat, Irene Bosch, Sanjib Bhattacharya, Abani Patra

The rapid spread of COVID-19 infection calls for robust estimates of disease progression at various spatial levels, such as community, county, city, region, and nation. Any outbreak exhibits a unique signature based on verifiable cases distributed over time and can be characterized numerically. At any level of spatial aggregation these characteristics include critical points, such as the onset time, the peak time, and the time of return to a pre-outbreak baseline. For the ongoing pandemic, the ability to reliably detect these critical points is essential for developing and implementing public health strategies.

Outbreaks are triggered by, and emerge from, the complex interaction of hosts and pathogens, and such interactions are altered by the shared environment. These interactions are at the core of interest for researchers and practitioners and are likely to be captured by models capable of describing complex adaptive systems. In real-life conditions, the “verifiable cases” are typically represented by a very noisy process in which the error structure can fluctuate intermittently. In the present study we explored approaches to identifying critical points and periods in the presence of structural missingness, reporting lags, and high-frequency periodicity.

We focused on the features of the Kolmogorov-Zurbenko (KZ) filter, which is essentially a repetitive moving-average filter. It relies on selecting a window in which a simple average of the available information is estimated within an interval of m points, disregarding the missing observations within the interval. It has been shown that the KZ filter is robust and nearly optimal, and its parameters have a straightforward interpretation. The adaptive version of the KZ filter (KZA) specifically targets the identification of breaks in nonparametric signals covered by heavy noise by a) detecting potential time intervals when a break occurs and b) examining these time intervals more carefully by reducing the window size so that the resolution of the smoothed outcome increases.
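
A minimal Python sketch of the basic (non-adaptive) KZ filter, iterating a centered moving average that skips missing values; the adaptive KZA variant additionally shrinks the window near suspected breaks:

    import numpy as np

    def kz_filter(x, m, k):
        # k iterations of a centered m-point moving average (m odd),
        # using nanmean so missing observations are disregarded.
        x = np.asarray(x, dtype=float)
        half = m // 2
        for _ in range(k):
            out = np.full_like(x, np.nan)
            for i in range(len(x)):
                window = x[max(0, i - half):i + half + 1]
                if not np.all(np.isnan(window)):
                    out[i] = np.nanmean(window)
            x = out
        return x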

To achieve our goal, we modified the KZA filter by tailoring the moving-average windows to reduce the high-frequency periodicity stemming from weekly effects. Based on an a priori assumption about an outbreak signature, we utilized higher-order derivatives in a sequential manner. We applied the modified KZA filter to daily time series of confirmed COVID-19 cases in 14 counties of Massachusetts over the full available range of the time series. We illustrate the method, its relation to nonlinear spectral analysis, the sequence of onsets across MA, and the potential of the method to scale to a broad range of real-life scenarios.

Crisis contagion in the world trade network | Wednesday 10:50-12:30

Célestin Coquidé, José Lages and Dima L Shepelyansky

The Google matrix analysis of the world trade network (WTN) makes it possible to probe the direct and indirect trade dependencies between countries. Unlike the simple accounting view obtained from the usual import-export balance, which relies on the total volumes of commodities exchanged between countries, the PageRank-CheiRank trade balance (PCTB) takes into account the long-range inter-dependencies between world economies. We present a WTN crisis contagion model built upon an iterative measure of the PCTB for each country. Once a country has a PCTB below a threshold k, it is declared to be in a bankruptcy state in which it can no longer import commodities, except those vital for industry, e.g., petroleum and gas. This state corresponds either to the fact that a country with a very negative trade balance does not have enough liquidity to import non-essential commodities, or to the decision of a supranational economic authority trying to contain a crisis by placing an unhealthy national economy in bankruptcy. The bankruptcies of economies with PCTB less than k induce a rewiring of the world trade network which may weaken other economies. In the phase corresponding to a bankruptcy threshold k<kc, the crisis contagion is rapid and contained, since it affects fewer than 10% of the world's countries and induces a total cost of less than 5% of the total USD volume exchanged in the WTN. This total cost of the crisis drops exponentially as k decreases. In the phase corresponding to a bankruptcy threshold k>kc, the cascade of bankruptcies cannot be contained and the crisis is global, affecting about 90% of the world's countries. In the global crisis phase k>kc, at the first stage of the contagion, myriad countries with low exchanged volume (i.e., low import and export volumes) go bankrupt. These countries belong mainly to Sub-Saharan Africa, Central and South America, the Middle East, and Eastern Europe. In the next stage of the contagion, the combined effect of these countries' bankruptcies contributes to the fall of big exporters, such as the US or Western European countries. As an example, for the 2004, 2012, and 2016 WTNs, the bankruptcy of France is due solely to the failure of many low-exchanged-volume countries which individually import from France a volume of commodities of less than 10 billion USD; in other words, France's failure is caused by the failure of many small importers. Great Britain is a similar case for the 2004, 2008, and 2016 WTNs. Among the big exporters (i.e., those with an exchanged volume greater than 10 billion USD), European and American countries are the sources of the crisis contagion. The gates through which the crisis enters Asia are Japan, Korea, and Singapore. Generally, Asian countries go bankrupt at the end of the contagion, with China, India, Indonesia, Malaysia, and Thailand being, together with Australia, usually the last economies to fall. We also observe that the failures of the four BRIC countries occur during the last stages of the crisis contagion.
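
A minimal Python sketch of the contagion loop as we read it, with a simple normalized trade balance standing in for the PCTB (the actual model uses PageRank-CheiRank probabilities computed from the Google matrix of the WTN):

    import numpy as np

    def crisis_cascade(trade, kappa):
        # trade[i, j]: export volume from country i to country j.
        # A country whose balance drops below kappa stops importing
        # (its column is zeroed), which rewires the network and may
        # push further countries below the threshold.
        n = trade.shape[0]
        importing = np.ones(n, dtype=bool)
        while True:
            t = trade * importing[np.newaxis, :]
            exports, imports = t.sum(axis=1), t.sum(axis=0)
            total = exports + imports
            balance = np.where(total > 0, (exports - imports) / total, -1.0)
            newly = importing & (balance < kappa)
            if not newly.any():
                return importing  # countries that survived the cascade
            importing &= ~newly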

Critical cluster tomography to test the universality class and conformal invariance | Tuesday 10:20-12:00

Sean Dougherty and Istvan Kovacs

Scale-invariant geometric clusters form naturally in a large variety of critical systems. Studying their fractal dimension and surface properties is a standard tool for estimating the critical exponents and determining the universality class. As a more precise alternative, here we propose to perform cluster tomography, asking the following simple question: how many clusters do we cross following a simple trajectory comprised of line segments (Figure 1)? In two-dimensional critical systems this question reveals deep structural properties of the underlying conformal field theory, providing a versatile and extremely sensitive tool to determine the universality class [1]. In this way we can address several open questions in statistical physics on the universality class and symmetries of critical systems. For example, we can test the validity of the conformal conjectures for two-dimensional, zero-temperature spin glasses [2].

Critical dynamics of active and quiescent phases in brain activity across the sleep-wake cycle | Tuesday 10:20-12:00

Fabrizio Lombardi, Manuel Gomez-Extremera, Jilin Wang, Plamen Ivanov and Pedro Bernaola-Galvan

Sleep periods exhibit numerous intermittent transitions among sleep stages and short awakenings, with fluctuations within sleep stages that may trigger micro-states and arousals. Despite the established association between dominant brain rhythms and emergent sleep stages, the origin and functions of sleep arousals and sleep-stage transitions remain poorly understood. Empirical observations of intrinsic fluctuations in rhythmic cortical activity, and the corresponding temporal structure of intermittent transitions in sleep micro-architecture, raise the hypothesis that non-equilibrium critical dynamics may underlie sleep regulation at short time scales, in co-existence with the well-established homeostatic behavior at larger time scales. In this talk, I will discuss recent results on the dynamics of dominant cortical rhythms across the sleep-wake cycle that support this hypothesis (Lombardi et al., J. Neurosci., 2020; Wang et al., PLoS Comput. Biol., 2020). I will focus on cortical theta and delta rhythms in rats, which are associated with arousals/wakefulness and sleep, respectively. I will show that intermittent bursts in theta and delta rhythms exhibit a complex temporal organization: theta-burst durations follow a power-law distribution, whereas delta-burst durations follow an exponential-like behavior. Such features are typical of non-equilibrium systems self-tuning at criticality, where the active phase is characterized by bursts with power-law distributed sizes and durations, while quiescent periods (the inactive phase) are exponentially distributed. By interpreting theta-bursts as active phases and delta-bursts as inactive phases of cortical activity in the sleep-wake cycle, I will then draw a parallel with other non-equilibrium phenomena at criticality, and demonstrate that theta-bursts exhibit a peculiar organization in time described by a single scaling function (a Gamma distribution), closely reminiscent of earthquake dynamics. Overall, such results constitute a fingerprint of critical dynamics underlying the complex temporal structure of intermittent sleep-stage transitions at the behavioral level, and ideally complement previous observations of critical behavior at the neuronal level (we find similar scaling exponents). Importantly, our analysis consistently links those observations to the collective behavior of neuronal populations leading to emerging cortical rhythms in relation to the physiological alternation of sleep and wake bouts.
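
A sketch of one way to run such a power-law vs. exponential comparison in Python with the powerlaw package (our illustrative choice; the authors' fitting procedure may differ):

    import powerlaw  # Alstott, Bullmore & Plenz's package

    # theta_durations, delta_durations: arrays of burst durations extracted
    # from the band-limited EEG envelopes (hypothetical inputs).
    fit = powerlaw.Fit(theta_durations)
    R, p = fit.distribution_compare('power_law', 'exponential')
    # R > 0 with small p favors the power law for theta bursts; the same
    # comparison for delta bursts should instead favor the exponential.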

Cyborgization of Modern Social-Economic Systems: Accounting for Changes in Metabolic Identity | Thursday 10:30-12:00

Ansel Renner, A. H. Louie and Mario Giampietro

In Part 1 of this paper, the metabolic nature of social-economic systems is explored. A general understanding relating the various constituent components of social-economic systems in a relational network is presented and used to posit that social-economic systems are metabolic-repair (M,R) systems of the type explored in relational biology. It is argued that, through modernization and globalization, social-economic systems are losing certain functional entailment relations and their ability to control replication. It is further argued that modern social-economic systems are losing control over their identity. In Part 2, the implications of those realizations are explored in terms of effective accounting methodology and a practical set of methods capable of harnessing the deep complexity of social-economic systems. In terms of methods, a practical set of metrics defined through the lenses of a macroscope, a mesoscope, and a microscope is presented. Intended to be used simultaneously, the various descriptive domains suggested by our three scopes may be useful for decision-makers who wish to make responsible decisions concerning the control of system identity change or to combat processes of societal cyborgization.

The darkweb: a social network anomaly | Tuesday 14:30-16:10

Kevin O'Keeffe, Virgil Griffith, Yang Xu, Paolo Santi and Carlo Ratti

We study the darkweb hyperlink graph and find its structure is unusual. For example, ~87% of darkweb sites never link to another site. To call the darkweb a “web” is thus a misnomer; it is better described as a set of largely isolated dark silos. As we show through a detailed comparison to the World Wide Web (www), this siloed structure is highly dissimilar to other social networks and indicates that the social behavior of darkweb users differs from that of www users. We show that a generalized preferential attachment model can partially explain the strange topology of the darkweb, but an understanding of the anomalous behavior of its users remains out of reach. Our results are relevant to network scientists, social scientists, and other researchers interested in the social interactions of large numbers of agents.

Decomposing Bibliographic Networks into Multiple Complete Layers | Tuesday 14:30-16:10

Robin Wooyeong Na, Bryan Daniels and Kenneth Aiello

When analyzing bibliographic networks, nodes are often treated as the component of primary interest. In co-authorship networks, for example, we are often interested in the centrality of each node and in communities formed by groups of nodes. However, analyzing the underlying source that creates the edges is also important; in the case of co-authorship networks, that source is the publication data. A co-authorship network can be thought of as a composition of multiple layers, where each layer corresponds to a single article and consists of a complete network of the article’s authors. From this idea, we suggest a new framework for decomposing a weighted network into a multilayer network in which each layer is a complete graph of unit edge weight.

Two extremes exist for this multilayer decomposition. At one extreme, every layer has only one edge; the number of layers is then maximized, equal to the number of edges in the network. At the other extreme, the network is decomposed so that the number of layers is minimized, maximizing the size of the network in each layer. For a co-authorship network, the former extreme is the hypothetical case where every publication has one or two authors, while the latter is the case where the number of authors per publication is maximized. The distribution of the number of authors (nodes) per publication (layer) falls somewhere between these two extremes, and we are interested in where real publication data falls. If co-authorship data falls closer to the former extreme, the co-authorship network is created collectively by publications with few authors; if it falls closer to the latter, there exist many publications co-authored by many people. Within this new framework for analyzing bibliographic network data, we propose a new metric that measures whether the network is composed of numerous small layers or of relatively few large ones. We call this metric the rate of complete clusters (RCC), scaled from 0 to 1, where 0 and 1 correspond to the former and latter extremes of the decomposition, respectively.
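
A minimal Python sketch of the decomposition itself, built from publication author lists (the exact RCC normalization is not spelled out in the abstract, so only the layer construction is shown):

    import networkx as nx

    def decompose(publications):
        # Each publication (a list of authors) becomes one layer: a complete
        # graph with unit edge weights; summing the layers recovers the
        # weighted co-authorship network.
        layers = [nx.complete_graph(authors) for authors in publications]
        total = nx.Graph()
        for layer in layers:
            for u, v in layer.edges():
                w = total[u][v]["weight"] if total.has_edge(u, v) else 0
                total.add_edge(u, v, weight=w + 1)
        return layers, total

    # e.g., layers, G = decompose([["A", "B", "C"], ["A", "B"]])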

We apply this decomposition method to the co-publication journal network of researchers studying the microbiome. A co-publication journal network takes journals as nodes, connected by weighted edges indicating the number of authors who published in both journals. We analyze the RCC of our data, where an RCC of 0 indicates that every author published in one or two journals and an RCC of 1 indicates the extreme case where authors publish in numerous journals. The RCC of our microbiome data increases linearly from 2010 to 2017 (R-squared: 0.9935). The RCC also correlates very highly with the well-known Gini coefficient inequality index (R-squared: 0.9797). We conjecture that the reason for this high correlation is the scale-free property of our co-publication journal network.

Describing the complexity of Chinese Medicine Diagnostic Reasoning: the example of suspected COVID-19 | Wednesday 16:40-18:00

Lisa Conboy, Lisa Taylor-Swanson and Tanuja Prasad

Introduction: We are currently executing a prospective, longitudinal, descriptive cohort study in a pragmatic clinical practice for adults with symptoms that may be related to COVID-19 infection who participate in Chinese herbal medicine (CHM) telehealth visits and take CHM (ClinicalTrials.gov Identifier: NCT04380870). CHM includes over 400 medicinal substances, and CHM formulas are individualized at each visit according to the patient’s presentation. CHM has been used to treat cough, shortness of breath, and fatigue, and its mechanisms of action, including anti-inflammatory effects and antiviral activity, have been investigated for SARS and H1N1 influenza prevention and treatment. Yet there is a gap in our understanding of the clinical application of CHM in a community sample of individuals experiencing symptoms that may be related to COVID-19: we have no pragmatic clinic data about the use of CHM for coronaviruses. This project aims to address this lack.

In addition, one of the data fields that we are collecting is clinicians’ diagnostic reasoning. This project follows earlier work by us and by other scientists that considers the diagnostic process as a complex system.

Methods: Chart notes reflecting clinicians’ clinical reasoning are being collected; they will be content-analyzed and double-coded for themes of complexity, considering the TCM diagnostic framework as a complex system.

Results: We will present a descriptive summary of the clinical outcomes found to date, with a deeper analysis of the diagnostic reasoning process used by clinicians. We focus on the question of whether the TCM diagnostic system acts as a complex adaptive system, with codes covering aspects of complex systems such as emergence, adaptation, connectivity, self-reflection, self-organization, non-linearity, and critical phase changes in a clinician’s thinking.

Conclusions: Individualized diagnoses require a complex process, more than the recitation of memorized “facts”. Considering the diagnostic process as a complex system may offer insight into the operation of other complex systems of clinical reasoning, such as biomedicine, in addition to adding to the medical education literature. We are also circulating this information to the Chinese Medicine community and the larger scientific community. This will provide timely information for the CHM clinical community from highly experienced clinicians.

Design of controllable networks via memetic algorithms | Wednesday 16:40-18:00

Shaoping Xiao, Baike She and Zhen Kan

In many engineered and natural multi-agent networked systems, there has been great interest in leader selection and/or edge assignment during the optimal design of controllable networks. In this paper, we present our pioneering work in leader-follower network design via memetic algorithms, which focuses on minimizing the number of leaders or the amount of control energy while ensuring network controllability. We consider two problems in this paper: (1) selecting the minimum number of leaders in a pre-defined network with guaranteed network controllability; and (2) assigning edges, i.e., interactions, between leader and follower nodes to form a controllable network with the minimum control energy requirement. The proposed framework can be applied in designing signed or unsigned and directed or undirected networks. It should be noted that this work is the first to apply memetic algorithms in the design of controllable networks. We chose to use memetic algorithms because they are both more efficient and more effective than the standard genetic algorithms and other heuristic search methods in some optimization problem domains.
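
As a sketch of the feasibility test at the heart of such a search (our illustration, assuming the classical Kalman rank condition; the paper's memetic algorithm would evolve candidate leader sets by crossover and mutation and refine them with local search, e.g. single-leader swaps, subject to this constraint):

    import numpy as np

    def controllable(A, leaders):
        # Kalman rank test: x' = A x + B u is controllable iff the
        # controllability matrix [B, AB, ..., A^(n-1)B] has full rank,
        # where B selects the leader (input) nodes.
        n = A.shape[0]
        B = np.zeros((n, len(leaders)))
        for k, v in enumerate(leaders):
            B[v, k] = 1.0
        C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
        return np.linalg.matrix_rank(C) == n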

Detectability theory and distributed machine learning for networks with heterogeneous communities | Thursday 10:30-12:00

Bao Huynh and Dane Taylor

Networks are widely used to model diverse biological, social, and physical systems and can be represented by a network-encoding matrix such as the Laplacian or adjacency matrix. Their spectral properties (i.e., eigenvalues and eigenvectors) are often used to study these complex systems, yet it remains unclear how network complexities, including community structure, affect these linear-algebraic properties. Focusing on a popular generative process for creating random networks with communities, the stochastic block model (SBM), we study the effect of "community heterogeneity" on these spectral properties. Specifically, we employ random matrix theory to characterize the varying effects of different sizes and numbers of communities, possibly having different densities. We apply our theory to two applications. First, we analyze detectability phase transitions for spectral community detection; communities can be detected using eigenvectors if and only if the eigenvectors correlate with the communities. One can study phase transitions for these correlations by modifying one or more parameters of an SBM (e.g., edge probability or community size), and our theory accurately predicts this behavior. Second, we apply our results to study the convergence rate of distributed machine learning algorithms on high-performance computing clouds, which can be represented as networks of "compute nodes" (e.g., CPUs). Community structure naturally arises in these networks due to the physical organization of nodes into vertical stacks and the prioritization of local communication. Focusing on a gossip-based classification algorithm called GADGET SVM, we predict the effects of this structure on convergence time.
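
A minimal Python sketch of the detectability experiment: plant two communities in an SBM and check whether the leading non-trivial adjacency eigenvector recovers them (sizes and probabilities are illustrative choices, not the paper's):

    import networkx as nx
    import numpy as np

    sizes = [250, 250]
    probs = [[0.08, 0.02], [0.02, 0.08]]  # within- vs. between-block density
    G = nx.stochastic_block_model(sizes, probs, seed=1)
    A = nx.to_numpy_array(G)
    vals, vecs = np.linalg.eigh(A)
    v2 = vecs[:, -2]  # eigenvector of the second-largest eigenvalue
    labels = np.array([0] * sizes[0] + [1] * sizes[1])
    guess = (v2 > 0).astype(int)
    overlap = max(np.mean(guess == labels), np.mean(guess != labels))
    print(overlap)  # near 1 above the detectability threshold, ~0.5 below it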

DETERMINISTIC CHAOS CONSTRAINTS FOR CONTROL OF MASSIVE SWARMS | Tuesday 10:20-12:00

Josef Schaff

Problems such as the control of massive swarms or MANET radios in a Battlespace, on the order of tens of thousands of entities, are challenging for conventional approaches. Entities that enter or leave this space need either some means of coordination or known constraints. Moreover, each entity should have some limited awareness of the relative positioning of all the others, without the massive computational expense incurred by a combinatorial explosion. This may rule out the quantum state-space type of approach, in which each entity is a computationally unique unit vector.

To control such behaviors, and to allow self-organizing MANET relay nodes as well as collective swarm control, the author uses a topological structure based upon deterministic chaos: the fractal. Constraining the entities to “believe” that they exist only within the boundaries of an adaptive fractal forces their topological layout to map to topological clusters. The invariant features of a fractal topology allow each node to compute the IFS (Iterated Function System) and thus effectively “know” the relative positions of all the other entities, forming adaptive, self-healing MANET or swarm clusters. One of many possible equations for the adaptive fractal is given, which is computationally O(n). Since it uses a single floating-point number, the computation required for each entity/node to determine the relative positions of all Battlespace entities is trivial, so each entity can achieve situational awareness in near-realtime. A simulation of the dynamics for different adaptive fractal topologies, written in Mathematica, can be demonstrated during the paper presentation.
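
An illustrative IFS sketch in Python (a Sierpinski-triangle system as a stand-in for the author's adaptive fractal equation, which is not given in the abstract): because every entity iterates the same maps from the same seed, each can regenerate all assigned positions locally.

    import numpy as np

    MAPS = [(np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.0, 0.0])),
            (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.5, 0.0])),
            (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.25, 0.5]))]

    def positions(n_entities, seed=42):
        # Shared seed -> every node computes the identical sequence, so
        # relative positions are known without pairwise communication,
        # in O(n) per node as the abstract claims for its own equation.
        rng = np.random.default_rng(seed)
        x, pts = np.zeros(2), []
        for _ in range(n_entities):
            M, b = MAPS[rng.integers(len(MAPS))]
            x = M @ x + b
            pts.append(x)
        return np.array(pts)  # entity i is constrained near pts[i]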

Developing an Ontology for Emergence using Formal Concept Analysis | Tuesday 10:20-12:00

Shweta Singh and Mieczyslaw M. Kokar

The phenomenon of emergence has been studied in a variety of fields. Each of these fields has its own understanding and interpretation of what emergence means. Thus far, there is no agreed-upon definition of emergence. In fact, the debate about what emergence is and its validity in scientific understanding is still continuing. Having a clear understanding of the concept of emergence is critical for analysis of such behaviors. In this work-in-progress paper, we discuss our approach for developing an ontology of emergence, which can be viewed as a descriptive theory of emergence. The ontology will create a foundation which will support the understanding of the meaning and implications of emergence. A side benefit of this ontology will be a taxonomy of both emergent behaviors and emergence research. Researchers in this field will thus be able to identify which aspects of emergence are covered by the various research works and which aspects are not covered and thus open for investigation. Conceptually, the ontology will be a meta-theory of emergence. By raising the level of abstraction, the meta-theory of emergence will allow various theories of emergence to be expressed within one framework, thus allowing multiple theories that may contradict each other to live in the same place. This will allow one to query the ontology about both the commonalities and the differences between the theories. We discuss the development of such an ontology and our evaluation approach in this paper.

The devil is in the interactions: SDGs networks to enhance coherent policy | Thursday 10:30-12:00

A. Urrutia, O. Rojo-Nava, C. Casas Saavedra, C. Zarate, G. Magallanes-Guijón, B Hernández-Cruz and Oliver López Corona

It is widely understood that in order to achieve the 2030 Agenda we need to implement evidence-based policymaking, but recently it has started to become clear that it is also key to design coherent policies, and for this we need to incorporate a complexity approach that takes SDG interactions into account. We construct an interaction (mutual information) network using data on SDG progress in the most important metropolitan areas of México. We perform a node-ranking analysis and compare the results with a theoretical network. We show that the empirical and theoretical networks have different foci, indicating that the SDGs need different policies for urban and rural areas, at the least. We also analyze the effect of individual SDG achievement (ignoring interactions) using a Bayesian network, and show that, in general, monolithic SDG policies translate into a higher probability of low progress. Finally, we comment on the Mexican government's current monolithic focus on fighting poverty, in contrast with what the available data suggest would be better: Decent Work and Economic Growth combined with Responsible Consumption and Production, followed by investment in education and scientific research to advance Industry, Innovation, and Infrastructure, which in turn requires a strong law-and-justice system that enables peace, justice, and strong institutions.
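
A hedged Python sketch of the network construction as we read it; the progress array, bin count, and threshold are hypothetical stand-ins for the authors' data and choices:

    import networkx as nx
    import numpy as np
    from sklearn.metrics import mutual_info_score

    def mi_network(progress, n_bins=4, threshold=0.2):
        # progress: (metropolitan areas x SDGs) array of progress scores.
        # Bin each SDG series into quantiles, then link SDG pairs whose
        # mutual information exceeds the threshold.
        edges_q = np.linspace(0, 1, n_bins + 1)[1:-1]
        binned = np.column_stack([np.digitize(col, np.quantile(col, edges_q))
                                  for col in progress.T])
        n_sdgs = progress.shape[1]
        G = nx.Graph()
        G.add_nodes_from(range(n_sdgs))
        for i in range(n_sdgs):
            for j in range(i + 1, n_sdgs):
                mi = mutual_info_score(binned[:, i], binned[:, j])
                if mi > threshold:
                    G.add_edge(i, j, weight=mi)
        return G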

Diffusion on Multiplex Networks with Asymmetric Coupling | Wednesday 15:00-16:40

Zhao Song and Dane Taylor

A leading approach for studying biological, technological, and social networks is to represent them by multiplex networks in which different layers encode different types of connections and/or interacting systems. Examples include interconnected critical infrastructures, such as transportation systems, power grids, and water lines, as well as multimodal social networks containing different categories of social ties. Here, we study diffusion processes on multiplex networks that are allowed to contain directed edges within and/or between layers. We develop perturbation theory to rigorously characterize several asymptotic limits, including when the interlayer coupling points in a single direction, as well as the limits of strong and weak coupling between layers. Our main finding is that asymmetric coupling between layers has interesting and unexpected effects on diffusion, including on the optimization of the diffusion rate. For example, we often observe that a small amount of asymmetry speeds up diffusion, but too much asymmetry slows it down.
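
A minimal Python sketch of a two-layer supra-Laplacian with asymmetric interlayer coupling (our illustrative construction, not necessarily the authors' exact formulation):

    import numpy as np

    def supra_laplacian(L1, L2, w12, w21):
        # L1, L2: layer Laplacians; w12 couples layer 1 to layer 2 and
        # w21 the reverse, so w12 != w21 gives the asymmetric case.
        n = L1.shape[0]
        I = np.eye(n)
        return np.block([[L1 + w12 * I, -w12 * I],
                         [-w21 * I, L2 + w21 * I]])

    def diffusion_rate(L):
        # The relaxation rate of diffusion is the smallest nonzero
        # eigenvalue; eigenvalues can be complex under asymmetric
        # coupling, so we sort by real part.
        vals = np.sort(np.linalg.eigvals(L).real)
        return vals[1]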

Distributed and Centralized Regimes in Controllability of Temporal Networks | Monday 10:20-12:00

Babak Ravandi

One of the approaches to understanding and managing complex adaptive systems is to infer the functional outcomes of these systems (e.g., their capacity to self-organize) from their structure. Controllability of a temporal network can be achieved by identifying a set of driver nodes that can steer the state of the network from an initial state to a desired state in finite time. Many control applications are interested in identifying a Minimum Driver node Set (MDS) of size N_c that can fully control a network. However, most complex networks have multiple MDSs, and identifying an MDS for temporal networks is computationally prohibitive, since it involves testing 2^N driver node sets. Hence, the author developed a heuristic approach to find Suboptimal Minimum Driver node Sets (SMDSs) of size N_s ≥ N_c.
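
A Python sketch of the brute-force search that the heuristic avoids; `controllable(S)` stands for any controllability test appropriate to the network (for temporal networks, a time-ordered variant; this framing is our illustration, not the author's code):

    from itertools import combinations

    def minimum_driver_sets(n, controllable):
        # Test driver-node subsets in increasing size and return all
        # minimum sets; exponential in n, which is why a heuristic
        # (SMDS) approach is needed for larger networks.
        for size in range(1, n + 1):
            found = [set(S) for S in combinations(range(n), size)
                     if controllable(set(S))]
            if found:
                return found  # all MDSs, of minimum size N_c = size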

In this work, we seek to understand the overall behavior of temporal networks from the controllability perspective; for instance, what structural characteristics correspond to capacities such as self-organization? Generally, complex systems operate under one of two control regimes: distributed or centralized. This work proposes that a centralized control regime represents systems that have a small number of MDSs with large N_c relative to the system’s size, whereas a distributed control regime represents systems that have a large number of MDSs with small N_c. The heuristic approach mentioned above was applied to networks of ant interactions from six colonies and to the email communications of a manufacturing company.

Moreover, we were able to find MDSs for the ant networks by brute force, since these networks have a small N_c. Figure 1 summarizes our findings for MDSs and SMDSs, where the SMDS count gives the number of SMDSs of size N_s and N_s+1. Overall, the results are in line with key behaviors of the complex systems modeled by the datasets. The email network is in a centralized control regime; that is, a small number of unique groups of employees, each with many members, can fully control the network. These employees (SMDSs) have a large intersection that could represent the company's managers. In the ant networks, queen ants tend to avoid becoming driver nodes within both MDSs and SMDSs, and most colonies are in a distributed control regime. For instance, considering MDSs, in Fig. 1b there are 21 unique groups of ants with only 2 members each that can fully control their colony (and the intersection between these 21 groups is empty). These results showcase the inference of the functional behavior of a complex system from its network structure.

Does the gap-filling phenomenon exist in stock markets? | Wednesday 10:50-12:30

Xiaohang Liu, Yan Wang, Ziyan Jiang, Qinghua Chen and Honggang Li

Among chartists, there is an axiom about stock gaps: “gaps always get filled soon”. However, sufficient academic literature discussing this issue carefully is lacking. This paper exhibits some characteristics of gaps using empirical data from the Chinese and American stock markets. By applying a detrending and random-exchange process to the original data, the results reveal that the real data series has some inherent structure behind the price variation and that the overall trend hinders the generation of gaps and, to a certain extent, slows down their refilling. The difference between the original data and the randomly exchanged data after detrending suggests that the gap-filling phenomenon may exist in the stock markets of China and the United States.
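
A minimal Python sketch of the detrend-and-shuffle surrogate as we read the abstract (a linear log-price trend is assumed here purely for illustration):

    import numpy as np

    def surrogate_prices(prices, seed=0):
        # Detrend log-prices, randomly permute the detrended daily returns
        # to destroy temporal structure while keeping the return
        # distribution, then rebuild a surrogate price series whose gap
        # statistics can be compared with the original's.
        t = np.arange(len(prices))
        log_p = np.log(prices)
        trend = np.polyval(np.polyfit(t, log_p, 1), t)
        residual = log_p - trend
        r = np.diff(residual)
        shuffled = np.random.default_rng(seed).permutation(r)
        new_residual = residual[0] + np.concatenate(([0.0], np.cumsum(shuffled)))
        return np.exp(trend + new_residual)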

Drawing on the CAS Metaphor to Promote Educational Transformation | Tuesday 10:20-12:00

Patrick McQuillan

Efforts at educational reform are too often ineffective. To a substantial degree, this occurs because the change process is assumed to be a predictable, linear matter of technical precision. It isn't; systems change is more complex. To create a more robust, holistic conception of educational change, I draw on complexity theory, in particular the analytic construct known as a complex adaptive system (CAS; Stacey, 1996; Lewin, 1999; Waldrop, 1992; Wheatley, 1999). In contrast to the assumptions that have informed many reform endeavors, the CAS heuristic offers a means to conceptualize educational transformation that is attentive to its iterative, recursive, interdependent, and unpredictable nature.

Specifically, this analytic frame offers a means to conceptualize the workings of non-linear systems in which “a diversity of agents . . . interact with each other, mutually affect each other, and in so doing generate novel, emergent, behavior for the system as a whole” (Lewin, 1999, p. 198). Such actions allow systems to adapt and survive, processing information from the environment and modifying behavior in response to an ever-changing context, thereby enacting the on-going process of learning and transformation inherent to the complex adaptive system, which ultimately makes “the behaviour of a system . . . difficult and sometimes impossible to predict” (Hoban, 2002, p. 23). This systemic view seems compelling, as nothing stands alone; everything interconnects. As Albert-László Barabási (2002) aptly stated, “Everything is connected to everything else.”

The practical value of complexity theory becomes apparent when one considers the history of educational reform. As previously noted, many reforms have ignored the multiple, interrelated and interacting elements of schools, conceptualizing the educational system as relatively isolated and discrete structures, and therefore assuming complex phenomena can be understood by analyzing constituent parts, when the sum of the whole may be greater, and more complex, than that of the individual parts. Consequently, reforms often modify one or two elements in a system (e.g., testing or block scheduling), apart from related elements, and assume these “reforms” will produce the intended outcome through a linear, cause-and-effect relationship. In contrast, complexity theory posits that, rather than assuming predictable interactions among discrete system elements, one should identify interrelationships among a rich cross-section of system features and analyze emerging patterns to gain a broad understanding of the dynamics of change. Such an understanding should illuminate various issues surrounding educational transformation, including whether any reform is likely to be effective.

To conceptualize this complex process of educational change, I propose blending the qualitative with the quantitative. That is, starting with qualitative conceptions of four aspects of the emergence process—disequilibrium, distributing authority, creating a shared institutional culture, and balancing on the “edge of chaos”—I developed a series of related essential questions that allow those enacting educational change (or most any form of institutional change) to assess the nature of the work being undertaken. In essence, asking themselves on a regular basis, “Are we creating a complex adaptive system? Are we generating the features of a complex adaptive system that will allow us to effectively address the challenges that are likely to arise in the course of our efforts at institutional transformation?” Though the probes that would be utilized in a formal study would be more extensive, the following queries provide a sense for the general nature of this experience, as to how four dimensions of complex emergence might be conceptualized so as to inform the ongoing work of a school community:

Disequilibrium
 Is there a sense of unease or dissonance around some issue that is pushing people to change?
 Has a context been created that disrupted normal routines?
 Are new opportunities available for people?

Distributing Control
 Are there opportunities for people working in similar areas to collaborate on their collective work in depth?
 Are participants creating new networks for themselves or others?
 Are people generating relevant connections beyond their immediate network(s)?

Creating Shared Cultural Values
 Are people united in their beliefs about key aspects of this change, both in terms of how they undertake that work and why it is important?
 Is there a collective vision?
 Are there mechanisms or opportunities that promote shared values and beliefs?

Balancing on the Edge of Chaos
 Are key points of tension in the overall endeavor kept in balance—not too much, not too little?
 Challenge & support: Are people challenged to innovate in ways that align with the change endeavor but also supported to undertake this work?
 Authority & autonomy: Is some power centralized and some power shared more broadly?

These questions could be posed in an ongoing fashion so that those enacting change can monitor the nature of their work and see if they are generating a complex adaptive system that can evolve and grow to meet their needs. Further, this can be seen as a democratic process that seeks to draw upon the experiences and insights derived by those closely connected with any reform. And as Sirkin and his colleagues (2005) note, “A long project reviewed frequently stands a far better chance of succeeding than a short project reviewed infrequently” (p. 3).

References
Barabási, A. L. (2002) Linked: The new science of networks. Cambridge, MA: Perseus.
Hoban, G. (2002). Teacher learning for educational change. Philadelphia: Open University Press.
Lewin, R. (1999). Complexity: Life at the edge of chaos. Chicago: University of Chicago Press.
Sirkin, H. L., Keenan, P., & Jackson, A. (2005, October). The hard side of change management. Harvard Business Review, 1-12.
Stacey, R. D. (1996). Complexity and creativity in organizations. San Francisco: Berrett-Koehler Publishers.
Waldrop, M. (1992). Complexity: The emerging science at the edge of order and chaos. New York: Simon & Schuster.
Wheatley, M. J. (1999). Leadership and the new science: Discovering order in a chaotic world. San Francisco: Berrett-Koehler Publishers.

Dynamic resilience of complex networks | Thursday 10:30-12:00

Baruch Barzel

Resilience, a system’s ability to retain functionality under errors, failures, and environmental stress, is a defining property of many complex systems. Still, despite its widespread and universal relevance, events leading to loss of resilience are rarely predictable and are often irreversible. This lacuna is rooted in a deep theoretical gap: we have strong theoretical tools to treat low-dimensional systems, which comprise only a few interacting components, but lack the tools to understand the resilience of complex multidimensional systems, such as power grids, living cells or interconnected ecosystems. How, then, do we predict and influence the resilience of a complex networked system? To achieve this we seek the natural control parameters of network resilience, providing us with a universal framework to understand, predict and ultimately influence the resilience of complex networks. Along the way, we will also learn why your friends are more popular than you are...

For more details see: Universal resilience patterns in complex networks, Nature 530, 307–312 (2016)

Dynamical, directed networks from physical data: auroral current systems from 100+ ground based SuperMAG magnetometers | Wednesday 10:50-12:30

Sandra Chapman, Lauren Orr and Jesper Gjerloev

Networks provide a generic methodology to characterize spatio-temporal patterns in large datasets of multi-point observations. Networks are now a common analysis tool for societal data, where it is clear whether two nodes are connected to each other or not. In observations from real physical systems, a ‘connection’, meaning significant cross-correlation or coherence between the timeseries seen at two observation points, is more subtle to establish. We have developed methodology to construct dynamical directed networks of the 100+ SuperMAG magnetometers for the first time. If the canonical cross-correlation (CCC) between vector magnetic field perturbations observed at two magnetometer stations exceeds a threshold, they form a network connection. The time lag at which the CCC is maximal determines the direction of propagation or expansion of the structure captured by the network connection. If spatial correlation reflects ionospheric current patterns, network properties can test different models for the evolving current system during space weather events such as geomagnetic substorms. Importantly, once the network is constructed, one can quantify it with a few parameters. Parameters that we have found characterize this data are the network degree and modularity. Such parameters make possible quantitative statistical comparisons of hundreds of substorms and storms that capture the full spatial distribution of activity, without relying on gridding or infilling of data. In the first applications of this network methodology we have obtained the timings of the propagation of the ground magnetic signal of changes in the interplanetary magnetic field [1] and have established the characteristic evolution of the ground disturbance pattern seen in 86 isolated substorm events, which captures the emergence of global-scale coherent current systems [2].

[1] J. Dods, S. C. Chapman, J. W. Gjerloev, Characterising the Ionospheric Current Pattern Response to Southward and Northward IMF Turnings with Dynamical SuperMAG Correlation Networks, JGR, 122, doi:10.1002/2016JA023686. (2017)
[2] L. Orr, S. C. Chapman, J. Gjerloev, Directed network of substorms using SuperMAG ground-based magnetometer data, GRL doi:10.1029/2019GL082824 (2019)
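
To make the construction concrete, here is a minimal Python sketch of a thresholded, lagged correlation network. It uses scalar Pearson correlation on synthetic series in place of the canonical cross-correlation between vector perturbations used above, and the lag range and threshold values are illustrative assumptions.

import numpy as np

def directed_correlation_network(X, max_lag=30, threshold=0.7):
    # X has shape (n_stations, n_samples). Stations i, j are connected if
    # their maximal lagged correlation exceeds `threshold`; the sign of the
    # optimal lag sets the edge direction (i -> j if i leads j).
    n, T = X.shape
    X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            lags = list(range(-max_lag, max_lag + 1))
            cc = [np.corrcoef(X[i, max(0, -l):T - max(0, l)],
                              X[j, max(0, l):T - max(0, -l)])[0, 1]
                  for l in lags]
            k = int(np.argmax(np.abs(cc)))
            if abs(cc[k]) > threshold:
                edges.append((i, j) if lags[k] >= 0 else (j, i))
    return edges

# Synthetic demo: station 1 sees station 0's signal delayed by 5 samples,
# so the recovered edge should point from station 0 to station 1.
rng = np.random.default_rng(0)
s = rng.standard_normal(1000)
X = np.vstack([s, np.roll(s, 5) + 0.1 * rng.standard_normal(1000)])
print(directed_correlation_network(X))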

The Dynamics of Intelligent Cyber Adversaries | Thursday 9:40-10:15

Una-May O'Reilly

Artificial Adversarial Intelligence (AAI) assumes that adversaries can learn from their iterating engagements with one another. I study AAI in the context of adversarial dynamics that evolve in cyber space. I will describe how bio-inspired competitive coevolutionary algorithms can model the arms races of cyber attacks and defenses.

The Economy as a Constraint Satisfaction Problem | Monday 10:20-12:00

Dhruv Sharma, Jean-Philippe Bouchaud, Marco Tarzia and Francesco Zamponi

Constraint satisfaction problems (CSPs) are well-studied problems at the interface between computer science, optimization, and machine learning. Typically, one has a set of N variables which have to satisfy a number M of constraints. For example, the variables could be the weights of a neural network, and each constraint imposes that the network satisfies the correct input-output relation on one of M training examples, for instance distinguishing images of cats from dogs.
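
For readers unfamiliar with this framing, the following minimal Python sketch sets up the generic perceptron CSP (not the authors' economic model): N weights must satisfy M random sign constraints, and the classic perceptron rule searches for a satisfying assignment. All sizes are illustrative; M/N is kept below the perceptron's storage capacity so a solution typically exists.

import numpy as np

rng = np.random.default_rng(0)
N, M = 50, 80                      # N variables (weights), M constraints

# Each constraint asks that the perceptron output sign(w . x_mu) match y_mu.
X = rng.standard_normal((M, N))
y = rng.choice([-1, 1], size=M)

w = rng.standard_normal(N)
for _ in range(10_000):            # perceptron learning rule
    unsat = np.where(y * (X @ w) <= 0)[0]
    if unsat.size == 0:
        break
    mu = rng.choice(unsat)
    w += y[mu] * X[mu]             # nudge w toward satisfying constraint mu

print(f"satisfied {np.mean(y * (X @ w) > 0):.0%} of {M} constraints")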

The idea behind our work is that the economy can also be thought of as a constraint satisfaction problem. Agents in the economy have to satisfy budget and/or production constraints, and they thus have to adjust their strategies to maximize their profit while satisfying these constraints. Our objective is to incorporate a budget constraint in the simplest possible way and explore whether this model undergoes a phase transition. Phase transitions appear in the macroeconomy as “dark corners” where the economy becomes strongly non-linear and the response to small fluctuations can become catastrophically large. Our present model confirms the central role that debt levels play in the stability of the economy: too high a debt level and we have periodic crises; too low a debt level and the agents cannot sustain themselves long enough for long-lived structures to appear and survive. Our work presents a break from previous studies on the leverage cycle by coupling the production output and trading within the economy with agents' budgetary constraints.

In this work, we formulate a simple macroeconomic agent-based model based on a modification of the well-known “perceptron” model of machine learning. The variables generally used in the study of the perceptron are given a precise economic interpretation. All agents in our model are subject to a budgetary constraint and are allowed to incur debt. They are boundedly rational, and we use heuristic rules to model agent behavior.

We find that the model displays an interesting phase transition between a "good" phase characterised by a low bankruptcy rate and a "bad" phase in which the economy collapses leading to frequent bankruptcies. The transition from one phase to the other is controlled by the average level of debt in the economy. The "bad" phase is found for low levels of debt. On the other hand, a higher level of debt allows agents to be flexible in their behavior and bankruptcy rates are low. Moreover, this transition is robust to a change in other parameters of the model such as the average production cost of goods.

The high-debt phase is characterized by long-lived agents with very rich dynamics at the aggregate and the micro level. In this phase, there are dynamical switches between goods: spontaneously, a high-demand good can experience a fall and a low-demand good will take its place. Interestingly, at the aggregate level, the distributions of demand (supply) also vary dynamically and are found to be bimodal.

Finally, the “good” phase also has two distinct regimes: for intermediate levels of debt, the economy is stable with a low bankruptcy rate and with low volatility. However, for extremely high levels of debt, the economy enters a regime where the bankruptcy rate is still low but periodically undergoes crashes. These endogenously created cycles make the economy highly volatile. We conclude that there exists a “Goldilocks” zone: a sustainable range of debt can be found without the economy undergoing boom-bust cycles.

Effect of voluntary distancing measures on the spread of SARS-CoV2 in Mexico City | Wednesday 16:40-18:00

Guillermo de Anda-Jáuregui, Concepción Gamboa-Sánchez, Diana García-Cortés, Ollin D. Langle-Chimal, José Darío Martínez-Ezquerro, Rodrigo Migueles-Ramirez, Sandra Murillo-Sandoval, José Nicolás-Carlock and Martin Zumaya

Background: Non-pharmacological interventions (NPIs), such as physical distancing and mobility restrictions, were implemented worldwide to mitigate the spread of the infection and disease caused by the SARS-CoV2 virus. In Mexico, several reasons, including economic factors, have prevented total compliance with these regulations, and a total lockdown was not implemented. Instead, physical distancing was driven by the closure of non-essential economic entities, including schools and universities. The Mexican government promoted physical distancing and stay-at-home measures (compliant people are here called cooperators), particularly auto-isolation for people with mild symptoms.

Aim of the study: To develop and implement an agent-based SIR model on a modular network based on mobility data to estimate the effect of cooperators on the spread of SARS-CoV2.

Methods: We extracted and processed mobility and epidemiologic data from public sources to parametrize a modular network built using an adaptation of the Lancichinetti–Fortunato–Radicchi (LFR) scheme. To estimate the fractions of cooperators and non-cooperators (>25% and <25% mobility reduction between pre- and post-COVID-19 mobility, respectively) for NPIs, we compared pre-COVID-19 (Origin-Destination mobility survey, 2017) and post-COVID-19 mobility data (Google community mobility reports and mobile data from the UNDP/Grandata lab, 2020) from 10 regions of Mexico City. Our agent-based model considers demographic characteristics and heterogeneous agent characteristics such as recovery time, number of encounters, and total connections (degree) in the interaction network. Recovery times and numbers of encounters were drawn from Gaussian and exponential distributions, respectively, and degrees were determined by the interaction network. We simulated 25 fixed networks with random starting conditions for different cooperation probability values (10-100%).
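
A minimal Python sketch of this kind of simulation is given below, with a planted-partition graph standing in for the LFR construction and all parameter values (encounter counts, transmission probability, contact avoidance by cooperators) chosen purely for illustration.

import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

def simulate(p_coop=0.5, n_comm=10, comm_size=100, steps=200):
    # Modular contact network (a stand-in for the paper's LFR construction)
    G = nx.planted_partition_graph(n_comm, comm_size, 0.05, 0.001, seed=1)
    n = G.number_of_nodes()
    state = np.zeros(n, dtype=int)               # 0=S, 1=I, 2=R
    state[rng.choice(n, 5, replace=False)] = 1
    coop = rng.random(n) < p_coop                # cooperators reduce contacts
    t_rec = np.maximum(1, rng.normal(14, 3, n)).astype(int)  # recovery times
    clock = np.zeros(n, dtype=int)
    for _ in range(steps):
        for i in np.where(state == 1)[0]:
            k = 1 if coop[i] else 4              # encounters per step
            nbrs = list(G.neighbors(i))
            if not nbrs:
                continue
            for j in rng.choice(nbrs, min(k, len(nbrs)), replace=False):
                # cooperators avoid most encounters; 0.1 = transmission prob.
                if state[j] == 0 and not (coop[j] and rng.random() < 0.75):
                    if rng.random() < 0.1:
                        state[j] = 1
        clock[state == 1] += 1
        state[(state == 1) & (clock >= t_rec)] = 2
    return np.mean(state != 0)                   # final attack rate

for p in (0.1, 0.5, 0.9):
    print(f"cooperation {p:.0%}: infected fraction {simulate(p):.2f}")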

Results: The infected fraction of the population, the number of simultaneous infections, and the duration of the epidemic decreased as the mean cooperation probability in the population increased, reaching a lower and earlier epidemic peak when a total lockdown was simulated (100% cooperators). On one hand, the infected fraction of the population decreases linearly at a rate of 0.4 (a 10% reduction for every 25% increase in the cooperator fraction) between 0 and 60% cooperator fraction, after which it barely decreases and depends only on the date of NPI implementation onset. On the other hand, the number of simultaneous infections decreased in a nonlinear manner, with a 21% cooperator fraction reducing the total number of simultaneous infections by half.

Conclusions: More than 30% cooperators are required to decrease the spread of the disease by half, with the maximum decrease occurring when the fraction of cooperators exceeds 60%. The minimal fraction of infections depends on the moment the lockdown is announced. Even though some features of the model are kept fixed, we developed a scalable and modifiable framework that can be tuned with real-world data to represent the mobility and demographic dynamics of populations coping with epidemics.

Abbreviations: UNDP, United Nations Development Programme; SIR, susceptible-infected-recovered.

The effects of inherited and uninherited traits on senescence | Thursday 10:30-12:00

André Martins

Spatial and temporal components are fundamental to explain why animals grow old. The viscous spread of characteristics allows those who age to compete among themselves. As a consequence, mortal lineages can show a higher evolvability rate than immortal ones. That advantage can sometimes offset the disadvantage of dying of old age, allowing senescence to win the evolutionary competition as an actual adaptation. Here, we will use simulated models to discuss the consequences of the existence of non-inheritable characteristics, for example, learned skills, in the evolution of senescence. Results suggest that, as the ability of agents to learn increases, senescence seems to be favored. But only to a certain point. Too strong learning can lead to a state where non-senescent species have the advantage, even if they grow genetically weak.

Efficient Self-Organized Feature Maps through Developmental Learning | Wednesday 10:50-12:30

Adit Chawdhary and Ali Minai

Learning and adaptation occur at several temporal and spatial scales in animals with central nervous systems. In the broadest sense, these include evolution, development, synaptic learning, and real-time modulation. Of these, artificial neural networks have focused mainly on learning at the synaptic level with fixed network sizes and architectures, thus excluding some very important dimensions of adaptation available to biological systems. With the increasing scale of problems to which neural networks are being applied, and the quest to achieve more robust and flexible general intelligence in artificial systems, it is important to explore learning at other scales as well, of which developmental learning is the least explored. In this paper, we study a simple model of developmental learning in self-organized feature maps with the narrow goal of demonstrating that adding a developmental aspect to synaptic learning can enhance the performance of neural networks while reducing computational load. Though simplistic, the developmental approach described represents a general paradigm for neural learning that mimics the gradual complexification that biological neural networks undergo in animals. We also argue that this is a crucial step for neural learning to move beyond narrow applications towards a more general intelligence framework with continuous learning.
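
As one hedged illustration of the general idea (not the authors' specific model), the Python sketch below trains a one-dimensional self-organized feature map that starts with only four units and periodically inserts a new unit beside the unit with the largest accumulated quantization error, in the spirit of growing-map algorithms; all constants are illustrative.

import numpy as np

rng = np.random.default_rng(0)
data = rng.random((5000, 2))                       # toy 2-D input distribution

W = rng.random((4, 2))                             # start with only 4 units (a 1-D chain)
err = np.zeros(len(W))

for t, x in enumerate(data):
    bmu = int(np.argmin(((W - x) ** 2).sum(axis=1)))   # best-matching unit
    err[bmu] += ((W[bmu] - x) ** 2).sum()
    for i in range(len(W)):                        # neighborhood update on the chain
        h = np.exp(-((i - bmu) ** 2) / 2.0)
        W[i] += 0.1 * h * (x - W[i])
    # development: every 500 samples, insert a unit beside the worst one
    if (t + 1) % 500 == 0 and len(W) < 32:
        g = int(np.argmax(err))
        nb = g + 1 if g + 1 < len(W) else g - 1
        W = np.insert(W, max(g, nb), (W[g] + W[nb]) / 2, axis=0)
        err = np.zeros(len(W))

print(f"grew to {len(W)} units; mean quantization error "
      f"{np.mean([((W - x) ** 2).sum(1).min() for x in data[:500]]):.4f}")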

Eigenvalues of Random Graphs with Cycles | Tuesday 14:30-16:10

Pau Vilimelis Aceituno

Introduction
Networks are often studied using the eigenvalues of their adjacency matrix, a powerful mathematical tool with a wide range of applications. Since in real systems the exact graph structure is not known, researchers resort to random graphs to obtain eigenvalue properties from known structural features. However, this theory is far from intuitive and often requires training in free probability or cavity methods, or a strong familiarity with probability theory. Here we offer a different perspective on this field by focusing on the cycles in a graph.

Results
First, we use undergraduate-level notions of probability and linear algebra to show that the cycles of a graph give the moments of its eigenvalue distribution. We then use this result to study the eigenvalues of random graphs with circular motifs, where we show that the eigenvalues take almost polygonal shapes depending on the length of the cycles: if the cycles are of length three, the spectrum looks like a triangle; for length four, like a square; and so on. More specifically, we use the so-called method of moments to show that the eigenvalues have a rotational symmetry, and then account for the stability of those systems by showing how the number of cycles behaves as the cycle length goes to infinity.
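
The cycle-moment connection is easy to verify numerically. The following Python sketch builds a random directed graph out of 3-cycles and checks that the spectral moments tr(A^k)/n are negligible unless k is a multiple of the cycle length, which forces the L-fold rotational symmetry of the spectrum described above; the sizes chosen are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n, m, L = 500, 400, 3                  # n nodes, m random directed L-cycles

A = np.zeros((n, n))
for _ in range(m):
    c = rng.choice(n, L, replace=False)
    for a, b in zip(c, np.roll(c, -1)):
        A[a, b] = 1                    # each cycle contributes one closed loop

# kth spectral moment = (closed walks of length k)/n = tr(A^k)/n;
# only k divisible by L survives, so the spectrum is invariant under
# rotation by 2*pi/L.
for k in range(2, 7):
    print(f"k={k}: tr(A^k)/n = {np.trace(np.linalg.matrix_power(A, k)) / n:+.3f}")

ev = np.linalg.eigvals(A)
top = ev[np.argsort(-np.abs(ev))[:3]]
print("phases of the three leading eigenvalues (degrees):",
      np.round(np.degrees(np.angle(top))))   # expect roughly 0, +120, -120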

Discussion
Our results have a simple interpretation in terms of systems theory. As cycles are feedback loops that enhance or dampen certain frequencies in the system's dynamics, adding cycles of length L will generate dominant eigenvalues with phases corresponding to the Lth roots of unity, explaining some of the geometric shapes that appear here.

The systems theory interpretation can also be used in the opposite direction. As matrices and graphs represent interactions between elements, our results allow us to study the stability and resonances of systems with multi-element interactions. In this sense, our results simply extend the elliptic law of random matrices, which has been critical in fields as diverse as nuclear physics, wireless communications, and theoretical ecology, to higher-order interactions.

References:
- Aceituno, P. V., Rogers, T., and Schomerus, H., “Universal hypotrochoidic law for random matrices with cyclic correlations”, Physical Review E (2019)
- Aceituno, P. V., “Eigenvalues of random graphs with cycles”

Eliminating COVID-19: The Impact of Travel and Timing | Wednesday 15:00-16:40

Alexander Siegenfeld and Yaneer Bar-Yam

We analyze the spread of COVID-19 by considering the transmission of the disease among individuals both within and between regions. A set of regions can be defined as any partition of a population such that travel/social contact within each region far exceeds that between them. COVID-19 can be eliminated if the region-to-region reproductive number—i.e. the average number of other regions to which a single infected region will transmit the virus—is reduced to less than one. We find that this region-to-region reproductive number is proportional to the travel rate between regions and exponential in the length of the time-delay before region-level control measures are imposed. Thus, reductions in travel and the speed with which regions take action play decisive roles in whether COVID-19 is eliminated from a collection of regions. If, on average, infected regions (including those that become re-infected in the future) impose social distancing measures shortly after active spreading begins within them, the number of infected regions, and thus the number of regions in which such measures are required, will exponentially decrease over time. Elimination will in this case be a stable fixed point even after the social distancing measures have been lifted from most of the regions.
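
As a hedged back-of-envelope illustration of this scaling (in our own notation, not necessarily the paper's): if infection within a region grows as $e^{gt}$ and each active case seeds other regions at a rate proportional to the travel rate $\tau$, then the expected number of regions infected before control measures arrive after a delay $T$ behaves as

    R_* \;\sim\; \tau \int_0^{T} e^{g t}\, dt \;=\; \frac{\tau}{g}\left(e^{g T} - 1\right),

which is proportional to the travel rate and exponential in the delay, consistent with the result stated above.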

Emergence of Hierarchy in Networked Endorsement Dynamics | Tuesday 14:30-16:10

Philip Chodrow, Nicole Eikmeier, Mari Kawakatsu and Daniel Larremore

We introduce an adaptive network model of prestige-driven hierarchy. This model is distinctive in being amenable both to analytical study of its steady states and to principled statistical inference of its parameters from data. The system state at time $t$ is encoded by a directed matrix $A_t$ of endorsements. At each time step, all $n$ agents compute the same ranking function $\phi : \mathbb{R}^{n \times n} \to \mathbb{R}^n$. Then a uniformly selected agent $i$ endorses agent $j$, with $j$ selected proportionally to a choice function of $j$'s rank $\phi_j$. The system updates according to $A_{t+1} = \lambda A_t + (1-\lambda) E_{ij}$, where $\lambda \in [0,1]$ is a memory parameter governing the relative importance of new and old endorsements in the system state, and $E_{ij}$ contains a single nonzero entry in the $i$th row and $j$th column. We first consider the long-memory limit and show mathematically that for several choices of the function $\phi$, there exist distinct regimes of egalitarianism and hierarchy separated by a phase transition. We then use our model to analyze data. Studying the exchange of mathematics PhDs between universities, we find maximum-likelihood estimates of the model parameters consistent with an information half-life of 5-7 years (the range of PhD completion times), strong prestige preferences, and a “ladder” effect in which graduates of prestigious institutions tend to place at only somewhat less prestigious ones. We compare and contrast these findings to those in other social and biological networks.
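
A minimal Python sketch of these dynamics appears below. The particular ranking function (total endorsements received) and the softmax-style choice function are assumptions made here for illustration; the abstract considers several choices of $\phi$.

import numpy as np

rng = np.random.default_rng(0)
n, lam, beta, steps = 50, 0.99, 2.0, 50_000   # memory lam, pickiness beta

A = np.ones((n, n)) / n                        # endorsement matrix A_t
for _ in range(steps):
    phi = A.sum(axis=0)                        # rank = endorsements received
    i = rng.integers(n)                        # uniformly chosen endorser
    p = np.exp(beta * phi / phi.max())         # softmax-like choice function
    p[i] = 0.0                                 # no self-endorsement (assumption)
    p /= p.sum()
    j = rng.choice(n, p=p)
    E = np.zeros((n, n)); E[i, j] = 1.0
    A = lam * A + (1 - lam) * E                # A_{t+1} = lam*A_t + (1-lam)*E_ij

print("top 5 prestige scores:", np.sort(A.sum(axis=0))[-5:].round(3))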

Emergent collective behaviors in coupled Boolean networks | Wednesday 10:50-12:30

Chris Kang and Nikolaos Voulgarakis

In a multicellular organism, an individual cell does not, by itself, function as the organism does. When a multicellular organism is formed, however, it demonstrates qualitatively different functional capabilities. Thus, the transition from the chemical state at the single-cell level to the life state at the multicellular level is emergent.

Exploring the spatio-temporal dynamics of coupled gene regulatory networks as a mathematical model brings us one step closer to understanding the origin of life. In the paper “Emergence of diversity in homogeneous coupled Boolean networks” [1], we showed through model simulations that complex behaviors, such as phenotypic diversity in the gene regulatory networks of isogenic cells, can arise in simplistic, stochastic non-linear dynamical systems, specifically coupled Boolean networks with perturbation. The work corroborated numerous past findings that the complexity of information processing is maximized in systems operating near a phase transition between an ordered and a chaotic regime of dynamical stability. One such observation in the model was checkerboard pattern formation in the long-term behaviors (characterized by the steady-state distributions of Boolean states) of a tissue of interacting cells at “near-critical” Lyapunov stability. Recently, we have improved and redefined the interactions of cells as an Ising model, where the phase transition of ferromagnetism is clearly understood [2]. Direct comparison with experimental results on collective cell-state transitions in mouse embryonic stem cells will be discussed.
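
For readers who want a concrete starting point, here is a minimal Python sketch of coupled random Boolean networks with perturbation; the ring coupling through a single "signal" gene and all sizes and rates are illustrative assumptions, not the exact model of [1].

import numpy as np

rng = np.random.default_rng(2)
C, G, K, p = 64, 20, 2, 0.01     # C cells in a ring, G genes, K inputs, noise p

# One shared ("isogenic") random Boolean network: wiring and truth tables
inputs = rng.integers(0, G, size=(G, K))
tables = rng.integers(0, 2, size=(G, 2 ** K))
state = rng.integers(0, 2, size=(C, G))
signal_gene = 0                   # gene 0 listens to the left neighbor's gene 0

for _ in range(500):
    idx = np.zeros((C, G), dtype=int)
    for k in range(K):            # encode each gene's K inputs as a table index
        idx = 2 * idx + state[:, inputs[:, k]]
    new = tables[np.arange(G), idx]                           # synchronous update
    new[:, signal_gene] = np.roll(state[:, signal_gene], 1)   # cell-cell coupling
    flip = rng.random((C, G)) < p                             # random perturbation
    state = np.where(flip, 1 - new, new)

print("per-gene activity across the tissue:", state.mean(axis=0).round(2))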

[1] Kang, C., Aguilar, B., & Shmulevich, I. (2018). Emergence of diversity in homogeneous coupled Boolean networks. Physical Review E, 97(5), 052415.
[2] Kang, C. & N. Voulgarakis (2020) Emergent collective behavior in coupled Boolean networks (in preparation).

Emergent regularities and scaling in armed conflict data | Wednesday 10:50-12:40

Edward D. Lee, Bryan C. Daniels, Chris Myers, David Krakauer and Jessica C. Flack

Large-scale armed conflict is a characteristic feature of modern societies. The statistics of conflict show remarkable regularities like power law distributions of fatalities and duration, but lack a unifying framework. We explore a large, detailed data set of $10^5$ armed conflict reports spanning 20 years across nearly $10^4$ kilometers. By systematically clustering spatiotemporally proximate events into conflict avalanches, we show that the duration, diameter, extent, fatalities, and number of conflict reports satisfy consistent power law scaling relations. The temporal evolution of conflicts measured by these scaling variables displays emergent symmetry, collapsing onto a universal dynamical profile over a range of scales. We propose a model building on a random fractal growth process that captures many of the observed scaling relations, recapitulating how geography and regional variation constrain conflict. Our findings suggest that armed conflicts are dominated by a low-dimensional process that scales with physical dimensions in a surprisingly unified and predictable way.

Empirical scaling and dynamical regimes for GDP: challenges and opportunities | Thursday 10:30-12:00

Harold Hastings and Tai Young-Taft

We explore the distribution of gross domestic product (GDP) and per capita GDP among different countries in order to elucidate differences in the dynamics of their economies. An initial analysis of GDP and per capita GDP data from 1980 and 2016 (and many years in between) typically finds three scaling regions, signatures of likely differences in dynamics. The GDP of the largest ~25 economies (nations, EU) follows a power law GDP ~ 1/rank (cf. Garlaschelli et al. 2007), followed by a second scaling region in which GDP falls off exponentially with rank, and finally a third scaling region in which GDP falls off exponentially with the square of rank. This broad pattern holds despite significant changes in technology (enormous growth in computing power, “intelligent” automation, the Internet), the size of the world economy, the emergence of new economic powers such as China, and world trade (almost free communication, containerized shipping yielding sharp declines in shipping costs, trade partnerships, growth of the EU, multinationals displacing the traditional economic role of nation-states).
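
The three regimes can be told apart by asking in which coordinates log GDP is linear: log rank (power law), rank (exponential), or rank squared. The Python sketch below demonstrates the diagnostic on synthetic data stitched together to mimic the three regions described above; the numbers are invented purely for illustration.

import numpy as np

# Synthetic rank-size data stitched from the three regimes described above
r1, r2, r3 = np.arange(1, 26), np.arange(26, 101), np.arange(101, 181)
gdp = np.concatenate([20e12 / r1,                                  # ~ 1/rank
                      (20e12 / 25) * np.exp(-0.04 * (r2 - 25)),    # ~ exp(-a r)
                      (20e12 / 25) * np.exp(-3)
                      * np.exp(-4e-4 * (r3 ** 2 - 100 ** 2))])     # ~ exp(-b r^2)
rank = np.arange(1, len(gdp) + 1)

def fit_error(x, y):              # goodness of a straight-line fit of y on x
    c = np.polyfit(x, y, 1)
    return np.sqrt(np.mean((np.polyval(c, x) - y) ** 2))

for lo, hi, label in [(0, 25, "top"), (25, 100, "middle"), (100, 180, "tail")]:
    seg, g = rank[lo:hi], np.log(gdp[lo:hi])
    best = min([("power law", fit_error(np.log(seg), g)),
                ("exp(rank)", fit_error(seg, g)),
                ("exp(rank^2)", fit_error(seg ** 2, g))], key=lambda t: t[1])
    print(f"{label:>6}: best linear fit in {best[0]} coordinates")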

Thus, empirically, these patterns may be universal, in which case one approach to the growth of less developed economies (in the second and third scaling regions of per capita GDP) may be to identify and target causative differences between these economies and those in the first (power law) scaling region. For example, Montroll and Shlesinger (1982) suggest a basic lognormal distribution as a consequence of multiplying many independent random variables, with a power law high-end tail because “the very wealthy generally achieve their superwealth through amplification processes that are not available to most.” On the other hand, Reed and Hughes (2002) show how power law behavior can arise if “stochastic processes with exponential growth in expectation are killed (or observed) randomly.”

References
Garlaschelli, D., Di Matteo, T., Aste, T., Caldarelli, G., Loffredo, M.L. (2007). Interplay between topology and dynamics in the world trade web. European Physical Journal B, 57, 159-164.
Montroll, E.W. and Shlesinger, M.F. (1982). On 1/f noise and other distributions with long tails. Proceedings of the National Academy of Sciences, 79(10), 3380-3383.
Reed, W.J. and Hughes, B.D. (2002). From gene families and genera to incomes and internet file sizes: Why power laws are so common in nature. Physical Review E, 66(6), 067103.

Enhanced Information Gathering May Intensify Disagreement Among Groups | Tuesday 14:30-16:10

Hiroki Sayama

We explore potential linkages between the advancement of information communication technology and the widening disagreement among groups in society by constructing and analyzing a mathematical model of population dynamics in a continuous opinion space. The model is constructed based on diffusion and migration models commonly used in mathematical biology. To model the enhancement of the information-gathering ability of populations, we adopted the interaction kernel approach used in physics, applied mathematics, and mathematical biology, and introduced a generalized non-local version of the spatial gradient as a local perception kernel of individuals. We first analytically obtained the condition under which non-homogeneous patterns (i.e., groups) form in the opinion space, and then confirmed the analytical prediction with numerical simulations. We found that the characteristic distance between population peaks becomes greater (i.e., disagreement becomes more intensified among groups) as a wider range of opinions becomes available to individuals or as greater attention is attracted to extreme opinions distant from their own. These findings may provide a possible explanation and implications for why disagreement is growing in various social, political, and cultural scenes in today's increasingly interconnected world, without attributing its cause only to specific individuals or events. Our model also allows for testing hypothetical scenarios that can provide insight into potential intervention strategies and their effectiveness. Limitations and future directions are also discussed.
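
One way to see the kernel mechanism is through a linear stability calculation on a minimal aggregation-diffusion equation of the form $\partial u/\partial t = D\,\partial^2 u/\partial x^2 - \partial_x [u\,\partial_x (K * u)]$, which is an assumption made here for illustration and not necessarily the paper's exact model. A perturbation $e^{iqx + \sigma t}$ of the uniform state $\bar{u}$ grows at rate $\sigma(q) = q^2[\bar{u}\hat{K}(q) - D]$, and the fastest-growing wavelength sets the spacing between opinion groups. The short Python sketch below evaluates this for a Gaussian perception kernel of width R and shows that wider perception ranges produce more widely separated peaks:

import numpy as np

D, ubar = 0.02, 1.0
q = np.linspace(0.01, 60, 4000)

for R in (0.1, 0.2, 0.4):
    Khat = np.exp(-0.5 * (q * R) ** 2)        # Fourier transform of the kernel
    sigma = q ** 2 * (ubar * Khat - D)        # linear growth rate of mode q
    qstar = q[np.argmax(sigma)]
    print(f"R={R}: fastest-growing mode q*={qstar:.1f}, "
          f"peak spacing ~ {2 * np.pi / qstar:.2f}")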

Environmental pollution due to reduced mobility in times of COVID-19 in Mexico City | Thursday 10:30-12:00

Ana Leonor Rivera, Rafael Silva-Quiroz, Juan Antonio Lopez-Rivera and Alejandro Frank

Environmental pollution in cities is due to several human factors, for instance the number of cars in circulation, fuel efficiency, and industrial waste, as well as orographic and meteorological conditions that determine air circulation. In a previous publication we proposed that during the dry-hot season (from March to May) in Mexico City, ozone contingencies are triggered by an atmospheric blockage with meteorological conditions of almost no wind, low relative humidity, and small temperature fluctuations. Other authors have proposed that mobility restrictions reduce ozone levels in the environment, preventing environmental contingencies and supporting the circulation-restriction programs implemented in many cities. Due to the COVID-19 pandemic, the population of Mexico City has been in confinement since March 21st, producing a notable decrease in the number of cars circulating. This provides a suitable setting to test the hypothesis of atmospheric triggers of environmental contingencies in a city surrounded by mountains like the Valley of Mexico. Here, we analyze atmospheric composition (O3 and NOx) data from 25 atmospheric monitoring stations in Mexico City (available from aire.cdmx.gob.mx) during the 41 days starting from March 21, 2020, comparing against the same period of each year from 2004 to 2019. We found, as expected, that NOx atmospheric content was significantly reduced in 2020, as it is mostly produced by cars. Moreover, a clear difference usually exists between weekends and work days regarding NOx content, and this weekly periodicity disappeared during 2020. However, there is no statistically significant difference in the moments or the correlations of the ozone atmospheric content between 2020 and the years 2004 to 2019. There were 5 days with ozone levels above 100 ppb in 2020 (the level considered by the WHO as dangerous for human health), while the average in previous years is 7±4 days (between 2 and 16 days). Thus, the trigger of environmental ozone contingencies in Mexico City during the hot dry season is an atmospheric blockage, even under severe restrictions on car circulation (which do affect NOx levels).

Exploration of the Space of Agent-Based Network SIR Models for COVID-19 | Monday 14:30-16:10

Christopher Wolfram

Network SIR models have become a popular method for studying the spread of disease, especially in the context of COVID-19. Network SIR models generally work by constructing a graph in which each vertex represents an agent, and agents are connected by an edge if they come into contact. Each agent is marked as susceptible (S), infected (I), or recovered (R), and simple update rules are applied iteratively to 1) stochastically spread the disease from infected agents to adjacent susceptible agents, and 2) stochastically mark infected agents as recovered after some number of steps.

Network SIR models depend on several parameters, from the choice of network, to probability of disease spread (transmissibility), to the distribution of recovery times. For this reason, it is often hard to separate fundamental properties of network SIR models from the properties of a particular implementation.

We develop a flexible network SIR model which allows us to explicitly study the effect of these modeling choices. We start with the contact network, and mark a few agents as infected. At each step, each agent samples from a distribution which determines how many of its neighbors it will “contact”. If a susceptible agent contacts an infected one, the susceptible one will be marked as infected at the end of the step. Because these contact distributions are independent of the structure of the contact network, an agent might have high degree in the contact network, but will only contact a few of its neighbors at each step, allowing us to model agents like cashiers, among others. Finally, when an agent becomes infected it samples from a distribution of recovery times to determine how many steps it will remain infected before being marked as recovered. Thus our model takes three inputs: the contact network, the distribution of recovery times, and a map from agents to distributions of the numbers of contact events.
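
A minimal Python version of a model with these three inputs might look like the sketch below, where the Watts-Strogatz network, Poisson contact counts, geometric recovery times, and all parameter values are illustrative assumptions rather than the author's implementation.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def run(G, mean_contacts=2.0, p_transmit=0.15, steps=150, seeds=5):
    # Network SIR with per-agent contact counts and recovery-time draws
    n = G.number_of_nodes()
    state = np.zeros(n, dtype=int)          # 0=S, 1=I, 2=R
    state[rng.choice(n, seeds, replace=False)] = 1
    t_left = np.where(state == 1, rng.geometric(1 / 10, n), 0)
    for _ in range(steps):
        new_inf = []
        for i in np.where(state == 1)[0]:
            k = rng.poisson(mean_contacts)   # contact distribution, per step
            nbrs = list(G.neighbors(i))
            for j in rng.choice(nbrs, size=min(k, len(nbrs)), replace=False):
                if state[j] == 0 and rng.random() < p_transmit:
                    new_inf.append(j)
        t_left[state == 1] -= 1
        state[(state == 1) & (t_left <= 0)] = 2
        for j in new_inf:
            if state[j] == 0:
                state[j] = 1
                t_left[j] = rng.geometric(1 / 10)   # recovery-time distribution
    return np.mean(state == 2)

G = nx.watts_strogatz_graph(2000, k=8, p=0.05, seed=0)
for m in (0.5, 1.0, 2.0, 4.0):
    print(f"mean contacts {m}: final infected fraction {run(G, m):.2f}")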

We start by using a network generated by the Watts-Strogatz model. We find a critical point in the mean number of contact events per step which determines whether the disease dies out quickly, or spreads to infect the majority of the population. This critical point corresponds with the critical point at R_0 = 1 in simpler SIR models.

However, we find that by changing the distribution of the number of contact events, the behavior around the critical point varies considerably even if the mean and variance are held constant (particularly when using distributions with fat tails). We also find that the model behaves differently when different models of the contact network are used. For example, the final number of infected agents is highly dependent on whether the contact network was generated by the Watts-Strogatz model, the Barabasi-Albert model, or a model for generating planar graphs, as well as on the parameters of the Watts-Strogatz and Barabasi-Albert models.

In summary, we find that network SIR models are highly sensitive to the details of their implementation, and researchers should be careful when drawing conclusions from a single network SIR model. However, we also find some trends which are robust to many modeling choices, such as that a few superspreader agents can dramatically increase the total infection rate, and that heterogeneous populations behave differently from homogeneous ones.

Exploring Complex Systems Through Computation | Thursday 10:30-12:00

Patrik Christen

Complex systems are abundant in our universe, especially in the form of biological and social systems. By complex system I refer to systems that are composed of local elements interacting with each other in an inhomogeneous, time-varying, specific, and often nonlinear manner [1], ultimately leading to emergent behaviour. Exploring complex systems is essential to increase our knowledge about the universe. Since Newton and Galileo, we mostly approach this by looking at the system as a whole, thus from a macroscopic or global point of view, and by formulating mathematical equations describing the system's behaviour from this viewpoint. The current mathematics of this approach becomes hard if not impossible to use for describing the kinds of interactions we find in complex systems, and although we know that, we still investigate complex systems as Newton and Galileo did, neglecting the way complex systems compute. In this abstract, based on an example from biology, I illustrate what structure complex systems have and how they operate or compute on their structure. I then propose a way of exploring complex systems that exploits computation in complex systems.

Biological systems develop and grow building almost exclusively network structures [1]. Local elements such as cells interact with each other in networks and form higher-level structures such as tissues. It might start small and simple but the interaction in such multilayer networks leads to the emergence of large and highly complex systems such as organisms and societies. Biological research mainly follows different pathways through the multilayer networks of biological systems. But such an isolated view of cellular and molecular interactions is misleading as beautifully illustrated by the omnigenic hypothesis [2], which suggests that all genes affect every complex trait. Additionally, knowing which elements interact with each other leads to a good approximation of the emergent behaviour despite the lack of detailed kinetics [3], highlighting the importance of interaction.

If we now look at a biological system in terms of computation, we can make use of the concepts of structure and operation. Complex systems, and systems in general, consist of at least one structure and one operation computing on this structure [4, 5]. For example, a computable function is an operation, and its input and output are structures. Accordingly, biological systems develop and grow starting from a few interconnected elements and simple rules repeatedly applied over time. The structure that forms is a network or, if including multiple length scales, a multilayer network. The operation is a simple rule or a set of rules determining whether a particular element is on or off at a particular discrete time step. This description is identical to Stuart Kauffman's random Boolean networks [6]. He was able to show that representing gene regulatory networks with simple Boolean rules iteratively applied to network structures leads to regulatory behaviour and network topologies also found in biological gene networks. The system's behaviour thus emerges from the interacting elements based on rules applied repeatedly over time. It is also similar to Stephen Wolfram's extensive work with cellular automata, where he found that simple rules can lead to complex behaviour [7] and that cellular automata can be seen as models of complexity [8]. This allows us to generalise the described computation in complex biological systems to any other type of complex system. Most recently, Wolfram presented a new model [9] based on hypergraphs that is used in the same way but has a network structure, which makes it identical to the computation described above.
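
As a self-contained illustration of a simple rule iteratively applied to a structure, the following Python sketch runs an elementary cellular automaton (rule 30, as in Wolfram's work cited above); the width, step count, and periodic boundary are arbitrary choices.

import numpy as np

def run_ca(rule=30, width=81, steps=40):
    # The structure is a row of cells; the operation is a 3-cell lookup
    # table derived from the bits of `rule` (periodic boundary assumed).
    table = [(rule >> i) & 1 for i in range(8)]
    row = np.zeros(width, dtype=int)
    row[width // 2] = 1                       # start from a single on-cell
    for _ in range(steps):
        print("".join(".#"[c] for c in row))
        idx = 4 * np.roll(row, 1) + 2 * row + np.roll(row, -1)
        row = np.array([table[i] for i in idx])
    return row

run_ca()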

These observations suggest that computation in complex systems is based on network structures that start with few local elements and emerge as a result of iteratively applied simple rules over time. It is therefore a rule-based bottom-up approach and requires a device capable of performing an extremely large number of computations, which makes it algorithmic [1]. Instead of trying to come up with a mathematical equation describing the global or macroscopic behaviour of a complex system, complex system computation suggests starting with simple rules and small structures at the local or microscopic level. It also suggests that the system's structure needs to develop and grow, and thus many computational experiments with different rules are required to get an impression of the system's behaviour. Instead of analytically solving mathematical equations, computational experiments are conducted. The computed data can of course be analysed, and where computational reducibility permits it, equations and analytical solutions might be possible. Starting almost without structure and performing countless computational experiments is uncommon and seems tedious at first, but it seems that through computation, and by making as few assumptions as possible, the complexity of a system can unfold.

References
[1] Stefan Thurner, Rudolf Hanel, and Peter Klimek. Introduction to the Theory of Complex Systems. Oxford University Press, New York, 2018.
[2] Evan A. Boyle, Yang I. Li, and Jonathan K. Pritchard. An expanded view of complex traits: From polygenic to omnigenic. Cell, 169(7):1177–1186, 2017.
[3] Marc Santolini and Albert-László Barabási. Predicting perturbation patterns from the topology of biological networks. Proceedings of the National Academy of Sciences, 115(27):E6375–E6383, 2018.
[4] Patrik Christen and Olivier Del Fabbro. Cybernetical concepts for cellular automaton and artificial neural network modelling and implementation. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), pages 4124–4130, 2019.
[5] Patrik Christen and Olivier Del Fabbro. Cybernetical concepts for cellular automaton and artificial neural network modelling and implementation. arXiv:2001.02037, 2019.
[6] Stuart A. Kauffman. The Origins of Order: Self-Organization and Selection in Evolution. Oxford University Press, New York, 1993.
[7] Stephen Wolfram. A New Kind of Science. Wolfram Media, Champaign, 2002.
[8] Stephen Wolfram. Cellular automata as models of complexity. Nature, 311(5985):419– 424, 1984.

Exploring Order-Theoretic Measurement for Complex System Operations | Wednesday 15:00-16:40

Christopher Klesges

This presentation explores the possibility of, and available means for, describing and measuring complex processes. As is familiar in complex systems theory, behavior creates distinct, independent phenomena, which affects constructive methodologies. As a working example, describing and representing process interoperability creates difficulty in engineering systems. Yet, given the several available examples of complex models, how do these approaches present mathematical intuition? Where there is representation, what constructs can be prescribed, and do they match a sufficient description? These questions are explored through order-theoretic and operator-system principles, showing the makings of geometric sense. This would in turn support how complex representations can be realized and, if so, which constructs should be investigated.

An Extended Variational Principle of Stationary Action for Self-organization in Complex Systems | Wednesday 15:00-16:40

Georgi Georgiev

First principles determine the equations of motion and the conservation laws in physics. Those same principles should determine the evolution of complex systems towards more organized states [1]. The variational action principles of physics state that all motions occur with an amount of action that is extremized (critical, stationary, minimum). Gray and Taylor prove that it can never be a maximum [2]. In Hamilton's principle, the action is sometimes a minimum and other times a saddle point, in both cases stationary. The Leibniz, Maupertuis, and Jacobi least-action principles are actually true minimum principles because they are time-independent and their solutions are geodesics, which are by definition, and proven by theorems, paths of least length, as suggested by the ancient Greeks. Fermat's principle of least time, Hertz's principle of least curvature, and Gauss's principle of least constraint are also true minimum principles, which can be derived from the principles of least action and can be used as alternatives for deriving the same results. Euler insisted that action should always be a minimum, and never merely stationary. Hamilton's principle has been extended to dissipative systems [3].

In complex systems, due to obstructive constraints to motion, elements do not move along their geodesics but always along longer trajectories. In addition, organized systems are dissipative, so there is always friction, which further increases action. Whether the solutions of the stationary-action principle are minima or saddle points, the action of the elements is far larger than those solutions, and to obey the principle they always tend toward that extremum (stationary, critical) point, thereby reducing their action as organization increases. This can be restated as follows: elements move with the maximum action efficiency possible in the system, given all of its constraints, which is an optimum at each stage of self-organization, because action efficiency is proportional to the rest of the characteristics of complex systems, growing as a power law of them and exponentially in time [4-7]. In the language of flow networks, in systems out of equilibrium, motion on the network occurs with minimum action cost. We therefore investigate the action efficiency of organized systems as a measure of their level of organization [4-7], and find that they evolve toward more action-efficient states. The principle of stationary action thus determines not only all motions and all conservation laws in physics, but also the evolutionary states of complex systems.

In order to measure action efficiency in complex systems, the principle of least action needs to be modified, from the stationarity of action along a single trajectory to the minimum of the average action per event in a complex system within an interval of time. We measure that the increase of action efficiency in the evolution of complex systems happens in positive feedback with the rest of the characteristics of the complex system, such as the total amount of action for all events in it, the total number of elements in the system, the total number of events, the free energy rate density, and others. This positive feedback leads to exponential growth in time of all of those characteristics, and a power-law dependence between each pair of them, which is supported by data [4-7].
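
One way to write the modified quantity (in our notation, offered as a sketch of the idea rather than the author's exact definition): over an interval $\Delta t$ containing $n$ events, with $I_k$ the action of the $k$th event, the average action per event and the corresponding action efficiency are

    \bar{I} = \frac{1}{n} \sum_{k=1}^{n} I_k, \qquad \alpha = \frac{1}{\bar{I}} = \frac{n}{\sum_k I_k},

so that self-organization corresponds to $\bar{I}$ decreasing, i.e. $\alpha$ increasing, over successive intervals.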

References
[1] Georgiev G., Georgiev I., “The least action and the metric of an organized system” Open Systems and Information Dynamics, 9(4), p. 371-380 (2002).
[2] Gray, C. G., & Taylor, E. F. (2007). “When action is not least.” American Journal of Physics, 75(5), 434-458.
[3] Gay-Balmaz, F., & Yoshimura, H. (2019). “From Lagrangian mechanics to nonequilibrium thermodynamics: A variational perspective.” Entropy, 21(1), 8.
[4] Georgiev G.Y., Chatterjee A., Iannacchione G.S. “Exponential Self-Organization and Moore's Law: Measures and Mechanisms” Complexity, (2017). Article ID 8170632
[5] Georgi Yordanov Georgiev, Atanu Chatterjee “The road to a measurable quantitative understanding of self-organization and evolution” Ch. 15. In Evolution and Transitions in Complexity, Eds. Gerard Jagers op Akkerhuis, Springer International Publishing, (2016). p. 223-230.
[6] Georgiev G., Henry K., Bates T., Gombos E., Casey A., Lee H., Daly M., and Vinod A., “Mechanism of organization increase in complex systems”, Complexity, 21(2), 18-28, DOI: 10.1002/cplx.21574 (2015).
[7] Georgi Yordanov Georgiev “A quantitative measure, mechanism and attractor for self-organization in networked complex systems”, in Lecture Notes in Computer Science (LNCS 7166), F.A. Kuipers and P.E. Heegaard (Eds.): IFIP International Federation for Information Processing, Proceedings of the Sixth International Workshop on Self-Organizing Systems (IWSOS 2012), pp. 90–95, Springer-Verlag (2012).

Extremism definitions in opinion dynamics models | Monday 10:20-12:00

André Martins

There are several opinion dynamics models where extremism is defined as part of their characteristics. However, the way extremism is implemented in each model does not correspond to equivalent definitions. While some models focus on one aspect of the problem, others focus on different characteristics. This paper shows how each model only captures part of the problem, and how Bayesian inspired opinion models can help put those differences in perspective. That discussion suggests new ways to introduce variables that can represent the problem of extremism better than we do today.

Final states of Threshold-based Complex-contagion model and Independent-cascade model on directed Scale-free Networks Under Homogeneous Conditions | Wednesday 16:40-18:00

Chathura Jayalath, Chathika Gunaratne, Bill Rand, Chathurani Senevirathna and Ivan Garibay

There are a variety of information diffusion models, all of which simulate the adoption and spread of information over time. However, there is a lack of understanding of whether, despite their conceptual differences, these models represent the same underlying generative structures.

Comparing the possible causal trajectories that simulations of these models may take allows us to look beyond conceptual discrepancies and identify mechanistic similarities.

In this study, we analyzed the diffusion of information through social networks via agent-based simulations of a Linear-threshold based complex-contagion model and an Independent-cascade model on directed Scale-free networks.

The Linear-threshold model postulates that adoption occurs once the fraction of an individual's adopted neighbors exceeds an internal threshold. Adoption in the Independent-cascade model is governed through a Bayesian probability.
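
For concreteness, a minimal Python sketch of the two mechanisms on a directed scale-free network follows; the seed count, parameter grid, and the treatment of the cascade probability as a fixed per-edge value are assumptions made here for illustration.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.DiGraph(nx.scale_free_graph(2000, seed=0))   # directed scale-free network

def final_adopters(G, mode, param, seeds=20):
    adopted = set(rng.choice(G.number_of_nodes(), seeds, replace=False))
    tried = set()                                    # cascade: one shot per edge
    changed = True
    while changed:
        changed = False
        for v in G.nodes:
            if v in adopted:
                continue
            preds = list(G.predecessors(v))
            if not preds:
                continue
            active = [u for u in preds if u in adopted]
            if mode == "threshold":                  # fraction of adopted in-neighbors
                if len(active) / len(preds) >= param:
                    adopted.add(v); changed = True
            else:                                    # independent cascade
                for u in active:
                    if (u, v) not in tried:
                        tried.add((u, v))
                        if rng.random() < param:
                            adopted.add(v); changed = True
                            break
    return len(adopted) / G.number_of_nodes()

for x in (0.1, 0.3, 0.5):
    print(f"param {x}: LT {final_adopters(G, 'threshold', x):.2f}  "
          f"IC {final_adopters(G, 'cascade', x):.2f}")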

We discover, empirically, that the final fraction of adopted nodes follows similar dynamics with both the threshold of the Linear-threshold model and the probability of adoption of the Independent-cascade model.

In addition, we examine the fraction of infected-to-susceptible edges that drive the spread of the information in both models and discover that the fraction of these transmissible edges also follows similar dynamics towards the end states of both models.

We thereby show that, despite differences in their conceptual motivations, both the Linear-threshold model and the Independent-cascade model function equivalently and can describe the same state space under homogeneous conditions on Scale-free networks.

Through this study we attempt to highlight the importance of understanding the underlying causal mechanisms of models that might at first misleadingly seem to be conceptually different.

Foundations of Cryptoeconomic Systems | Monday 10:20-12:00

Michael Zargham and Shermin Voshmgir

Blockchain networks and similar cryptoeconomic networks are complex systems. They are adaptive networks with multi-scale spatiotemporal dynamics. Individual actions towards a collective goal are incentivized with “purpose-driven” tokens. These tokens are equipped with cryptoeconomic mechanisms allowing a decentralized network to simultaneously maintain a shared state, support peer-to-peer settlement, and incentivize collective action. These networks therefore provide a mission-critical and safety-critical regulatory infrastructure for autonomous agents in untrusted economic networks. They also provide a rich, real-time data set reflecting all economic activities in their systems. Advances in network science and data science can thus be leveraged to design and analyze these economic systems in a manner consistent with the best practices of modern systems engineering. Research that reflects all aspects of these socioeconomic networks needs (i) a complex systems approach, (ii) interdisciplinary research, and (iii) a combination of economic and engineering methods, here referred to as “economic systems engineering,” for the regulation and control of these socio-economic systems.

Furthermore, the design and analysis of cryptoeconomic systems is a subset of complex systems engineering which requires expertise in software engineering, economics, and organizations, as well as law and ethics. While it is difficult to assemble suitably interdisciplinary teams, it is even more difficult to build a shared language, especially identifying and resolving critical semantic collisions. These challenges have been addressed by leveraging the existing interdisciplinary field of complex systems science, which is already populated with experts from the full range of relevant fields. Nonetheless, new experiences have led to new learnings. In particular, research has been conducted using formal mathematical methods derived from the fields of dynamical systems and game theory, with a particular focus on multi-agent state-based potential games. Through this lens one can construct formal arguments for the validity of some cryptoeconomic patterns. Analytic techniques identify the configuration spaces of these games and characterize the games' properties for all realizations as the properties of the reachable spaces created by considering all sequences of admissible actions. The cryptographic enforcement of the admissible action set makes these guarantees far stronger than they would be for other social systems.

Additionally, these mathematical models also help to identify insidious failure modes of proposed cryptoeconomic patterns. Attack surfaces are most commonly caused by composing mechanisms without properly analyzing the coupled dynamics. Common economic mechanism design techniques require strong assumptions on utility functions and fail to account for game warping caused by composing games, or, more generally, to address the challenges of open games. In our work, composition is addressed by borrowing methods from model-based systems engineering. Most frequently, insights are derived from computational methods applied by leveraging the state-based games formalism. Once a computational model of a particular cryptoeconomic system is built, it may be used to execute a wide range of controlled counterfactual experiments to explore the properties of the economic mechanisms, including latent instabilities or sensitivities to behaviors outside the control of the mechanism designers. When combined with empirical data, computational models built from mathematical models also serve as tools to estimate unobservable states and forecast trajectories, both of which improve transparency for complex systems and inform decision making for participants and maintainers alike. New open source software tools have been developed to streamline computational experiments, and best practices for design and analysis are emerging in the user community.

Reference Materials
https://epub.wu.ac.at/7309/8/Foundations%20of%20Cryptoeconomic%20Systems.pdf (under review)
https://epub.wu.ac.at/7385/1/zargham_shorish_paruch.pdf (short version to appear IEEE ICBC 2020)
https://epub.wu.ac.at/7433/1/zargham_paruch_shorish.pdf (full version to appear MARBLE 2020)
https://youtu.be/HldQF_MJN_Y (Presentation on this topic at MIT March 8, 2020)
https://github.com/BlockScience/cadCAD (open source library)

From the Global Financial Crisis of 2008 to Covid-19: Differences and Similarities | Tuesday 8:40-9:20

Irena Vodenska

The occurrence of extreme events in complex financial and economic systems is not a new phenomenon. We study the 2008 US real estate market collapse as an extreme event, its relationship with financial markets, and the aftermath that it left on the global banking system. We simulate crashes of specific real estate assets identified by bank balance sheet analysis and study the cascading failure process throughout the entire banking system after imposing initial shocks on selected assets. We show that not all assets are created equal and that some bear greater responsibility for the 2008 global financial crisis than others. Our risk propagation model is based on a bipartite network structure with banks on one side and assets on the other. The damage spreads through the system bi-directionally between the banks and the assets. We find that as the cascading failure process propagates through the network, certain banks start to collapse, and the banking system approaches a critical point at which it experiences an abrupt phase transition from a stable to an unstable state. We test the predictive power of our model for cascading failures in the banking system by using the 2007 US bank balance sheet dataset to compare the list of failed banks identified by our model with the official Federal Deposit Insurance Corporation Failed Bank List (FDIC-FBL). We find that our model identifies a significant portion of the banks that actually failed in the period between 2008 and 2011, when over 350 banks in the US failed. We expand this bipartite model to the European sovereign debt crisis, where we include response parameters for the financial institutions and the financial markets, and observe that the results closely match real-world events (e.g., the high risk of Greek sovereign bonds and the distress of Greek banks). We suggest that our model could become complementary to existing stress tests, incorporating the systemic risk contribution of banks and assets in time-dependent networks. The model also provides a simple way of assessing the stability of a system by using the ratio of the log-returns of sovereign bonds and the stocks of major sovereign debt holders as a stability indicator. We further propose a systemic importance ranking, BankRank, to assess the contribution of individual banks to overall systemic risk.

We also explore the effect of the COVID-19 disruption on the U.S. economy by using the industry-by-industry total requirements table provided by the Bureau of Economic Analysis (BEA) for the year 2018, with a disaggregation level of n = 15. An entry Mij of the matrix represents the output required from industry i to satisfy one unit of industry j’s output. This network is described by a weighted digraph, with industries as nodes and production relationships as links. For the disaggregation n = 15 the graph is complete, though some weights are very close to zero. The in-degree of each node corresponds to a column sum of M, which defines each industry’s total input requirements; a node’s out-degree, corresponding to a row sum of M, describes its total production. The structure of this network is characterized by degree distributions that follow a power law, namely a Pareto distribution. To study the disruption caused by COVID-19, we model independent shocks to each sector by the fractional loss of employment in the U.S. between March and April 2020, as reported by the Bureau of Labor Statistics (BLS).
The propagation of unemployment shocks is determined by the fractional loss in inputs, and the resiliency of all nodes is a free parameter in [0, 1]. For resiliencies above a critical value we observe a smooth reduction in total production, while below the critical value we observe an abrupt industrial collapse.
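The cascade mechanism can be illustrated with a toy version of the bipartite bank-asset model; all balance-sheet numbers below are hypothetical, and the fire-sale rule is a deliberately crude stand-in for the paper's calibrated dynamics.

```python
# Toy bipartite bank-asset cascade: an asset shock erodes bank equity; a
# failed bank's fire sale further devalues its assets, damaging other banks.
holdings = {                                  # bank -> {asset: exposure}
    "bank_A": {"real_estate": 60, "bonds": 40},
    "bank_B": {"real_estate": 20, "bonds": 80},
}
equity = {"bank_A": 30.0, "bank_B": 25.0}
devaluation = {"real_estate": 0.6, "bonds": 0.0}   # initial shock
FIRE_SALE = 0.2          # extra devaluation per failed bank holding the asset

failed = set()
changed = True
while changed:                                # iterate until no new failures
    changed = False
    for bank, book in holdings.items():
        if bank in failed:
            continue
        loss = sum(exposure * devaluation[a] for a, exposure in book.items())
        if loss >= equity[bank]:              # equity wiped out -> failure
            failed.add(bank)
            changed = True
            for a in book:                    # fire sale propagates the damage
                devaluation[a] = min(1.0, devaluation[a] + FIRE_SALE)
print("failed banks:", failed)                # bank_A fails, then bank_B
```

Here bank_A fails from the initial real estate shock, and its fire sale is what tips bank_B over: damage travels from bank to bank only via shared assets, which is the bipartite structure doing the work.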

Generalized Langevin Equations and the Climate Response Problem | Tuesday 10:20-12:00

Nicholas Watkins, Sandra Chapman, Aleksei Chechkin, Ian Ford, Rainer Klages and David Stainforth

There can be few greater scientific challenges than predicting the response of the global system to anthropogenic disruption, even with the array of sensing tools available in the “digital Anthropocene”. Rather than depend on one approach, climate science thus employs a hierarchy of models, trading off the tractability of simplified energy balance models (EBMs) [1] against the detail of general circulation models. Since the pioneering work of Hasselmann in the 1970s, stochastic EBMs have allowed the treatment of climate fluctuations and noise. They remain topical, e.g. in their use by Cox et al. to propose an emergent constraint on climate sensitivity [2]. Insight comes from exploiting a mapping between Hasselmann’s EBM and the original mean-reverting stochastic model in physics, Langevin’s equation of 1908.

However, it has recently been claimed [3,4] that the wide range of time scales in the coupled global atmospheric-ocean system may necessitate a heavy-tailed model of the response of global mean temperature (GMT) to perturbations, instead of the familiar exponential seen in the Langevin and Hasselmann pictures. Evidence for this includes long range memory (LRM) in GMT, and the success of a fractional Gaussian model in predicting GMT [5,6].

Our line of enquiry is complementary to [3-6]: we propose mapping a model well known in statistical mechanics, the Mori-Kubo “Generalised Langevin Equation” (GLE), to generalise the Hasselmann EBM [7]. If present, LRM simplifies the GLE to a fractional Langevin equation (FLE). As well as a noise term, the FLE has a dissipation term not present in [3,4], generalising Hasselmann’s damping constant. We describe the corresponding EBMs [8] that map to the GLE and FLE, discuss their solutions, and relate them to existing models, in particular Lovejoy’s Fractional Energy Balance Model (FEBE) [6,9].
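For orientation, these are the textbook forms of the models being connected (our notation, not necessarily the authors'):

```latex
% Hasselmann's stochastic EBM for a temperature anomaly T(t) is formally a
% Langevin equation, with heat capacity C, feedback strength lambda, and
% white-noise forcing xi(t):
\[ C\,\frac{dT}{dt} = -\lambda\,T(t) + \xi(t) \]
% The Mori-Kubo generalised Langevin equation replaces the constant damping
% with a memory kernel kappa:
\[ C\,\frac{dT}{dt} = -\int_0^t \kappa(t-s)\,T(s)\,ds + \xi(t) \]
% A power-law kernel (the long-range-memory case) reduces the GLE to a
% fractional Langevin equation, whose dissipation term generalises
% Hasselmann's damping constant.
```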

[1] Ghil M (2019) Earth and Space Science 6: 1007–1042
[2] Cox P et al. (2018) Nature 553: 319-322
[3] Rypdal K. (2012) JGR 117: D06115
[4] Rypdal M and Rypdal K (2014) J Climate 27: 5240-5258.
[5] Lovejoy S., et al (2015) ESDD 6:1–22
[6] Lovejoy S. (2019) Weather, Macroweather, and the Climate: Our Random Yet Predictable Atmosphere, Oxford University Press.
[7] Watkins N W (2013) GRL 40:1-9
[8] Watkins N. W., et al (2019) https://www.essoar.org/doi/abs/10.1002/essoar.10501367.1
[9] Lovejoy S. (2019) submitted, Nonlinear Processes in Geophysics.

The Generation of Meaning and Preservation of Identity in Complex Adaptive Systems: The LIPHE4 Criteria | Tuesday 10:20-12:00

Mario Giampietro and Ansel Renner

This paper combines an interdisciplinary set of concepts and ideas to explore the interface between complexity science and biosemiotics and to describe the processes of generation of meaning and preservation of identity in complex adaptive systems. The concepts and ideas used include: (i) holons (from hierarchy theory); (ii) the state-pressure relation (from non-equilibrium thermodynamics); (iii) the four Aristotelean causes (as used in relational biology); and (iv) upward and downward causation. Further insights from other disciplines, including biosemiotics, cybernetics, codepoiesis, theoretical ecology, energetics, and bioeconomics, are also borrowed to explore the mechanisms underlying the organizational unity of the various processes leading to a resonance between the tangible and notional definitions of identity. An original set of criteria, to be used for the characterization of this organizational unity, is then put forward: Learning Instances Producing Holarchic Essences: Expected, Established, and Experienced (LIPHE4). The LIPHE4 criteria help explain how complex adaptive systems can remain the same (preserve their identity) while becoming something else (evolve) and succeed even while implementing imperfect models to guide action.

Greater than the sum of its parts? | Tuesday 14:30-16:10

Julia Grübler and Oliver Reiter

Political debates and economic analyses often focus on single free trade agreements and their potential economic effects on participating trading partners. This study contributes to the literature by shedding light on the significance of trade agreements in the context of countries’ positions in worldwide trade agreement networks by combining network theory with gravity trade modelling. We illustrate, both numerically and graphically, the evolution of the global web of trade agreements in general, and the network of the European Union specifically, accounting for the geographical and temporal change in the depth of implemented agreements. Gravity estimations for the period 1995-2017 distinguish the direct bilateral effects of trade agreements from indirect effects attributable to the scope of trade networks and countries’ positions therein.
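For readers unfamiliar with the setup, a gravity specification augmented with agreement-depth and network-position terms looks roughly as follows; this is a schematic form, and the paper's exact covariates may differ:

```latex
% Trade flow X from exporter i to importer j in year t, with GDPs Y,
% bilateral distance D, agreement depth, and network-position covariates:
\[ X_{ij,t} = \exp\!\big(\beta_1 \ln Y_{i,t} + \beta_2 \ln Y_{j,t}
   + \beta_3 \ln D_{ij} + \gamma\,\mathrm{Depth}_{ij,t}
   + \delta\,\mathrm{NetPos}_{ij,t}\big)\,\varepsilon_{ij,t} \]
% The network terms (e.g., a country's position in the global agreement
% network) separate indirect network effects from direct bilateral ones.
```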

The Hard Problem of Life | Monday 12:50-13:30

Sara Imari Walker

Identifying general principles governing life, if they exist, remains a deeply stubborn problem, with important implications for solving the origin of life and for guiding our search for life elsewhere. While it is widely argued that ‘information’ is the key distinguishing property separating the living and non-living realms, what exactly is meant by ‘information’ in a biological sense is subject to intense debate. Many of the most significant challenges stem from the apparent relationship between information and causal structure in living things: information seems to, somehow, be calling the shots. Here I discuss new approaches to understanding the causal structure of living systems, how this may differ from traditional physics, the implications for our understanding of how such systems emerge in the first place, and how the problems of life and consciousness are similar and different.

Hey, What Happened? The Impact Sector as a Complex Adaptive System - A Conceptual Understanding | Wednesday 10:50-12:30

Tanuja Prasad

Two years ago at this conference, I presented the need for the application of Complexity Science to the Impact Sector.

Briefly, the Impact Sector is the ‘third sector’ of business where businesses have both a for-profit and a pro-social/pro-environment mission. The sector includes activities such as social entrepreneurship, impact investing, social stock exchanges, etc.

In many ways, the impact sector is a leader in the application of complex systems theory. In fact, the impact sector was born from the realization that traditional models of analysis, business and implementation had not solved the larger problems, and that a more holistic and comprehensive approach was therefore needed.

This time, I will present some examples of how using a complexity lens changes the data and the processes of Impact Management. Additionally, I will show how even a conceptual understanding of social and environmental systems as complex adaptive systems allows us to view system dynamics in different, more realistic ways. Even without the analytical data, there is much to be gained by switching to a complexity lens.

The standard approach to model-building usually involves assuming a normal distribution and then building towards ‘maximum’ coverage using a probability measure.

However, in times of change, the normal distribution tends towards a long-tail distribution; the greater the rate of change, the greater this tendency. Consider the case of a company introducing a new product into the market: for example, the iPhone. Every such introduction initiates a change in the market; when the business endeavors to make its product ‘successful’, which means it attempts to achieve broad uptake quickly, it is seeking to create a non-normal-curve progression.

We desire and design for non-normal curves, yet we model with normal curves.

In impact businesses, this discrepancy becomes especially pronounced because the nature of the change that is aimed for is deliberately non-linear and complex, as is the norm for social and environmental systems.

This presentation is about the effects of that discrepancy and the risks that arise from it.

How colorectal cancer arises and treatment strategies, based on complexity theory | Tuesday 14:30-16:10

Nat Pernick

Introduction: Colorectal cancer is the second leading cause of US cancer death after lung cancer, with 53,200 projected deaths in 2020.

Design: We initially review colorectal cancer risk factors, their population attributable fraction (PAF) and their mechanism of action. We then categorize them within the context of nine chronic stressors previously identified as causing most adult cancer: chronic inflammation, carcinogen exposure, reproductive hormones, Western diet, aging, radiation, immune system dysfunction, germ line changes and random chronic stress or bad luck. We then theorize how colorectal cancer arises and propose treatment strategies based on a complexity theory perspective.

Results: The PAFs for US colorectal cancer risk factors are: nonuse of screening 22%, physical inactivity 16%, excess weight 10-20%, tobacco 10%, alcohol 10%, Western (proinflammatory) diet 5% and germ line / family history 2-4%. The PAF is unknown or lacks consensus regarding aging, asbestos, diabetes, inflammatory bowel disease and the protective effects of menopausal hormones and aspirin. The PAF is estimated at <5% for random chronic stress or bad luck. These risk factors operate through chronic inflammation (excess weight, physical inactivity, tobacco use and diet, antagonized by aspirin), carcinogen exposure (alcohol, tobacco, diet, asbestos), aging, immune system dysfunction and germ line changes. We theorize that in the correct cellular context and in the presence of other chronic stressors, these risk factors promote network changes that reinforce each other within and between colonic epithelial cells, leading to intermediate (premalignant) and malignant states which ultimately propagate systemically.

Conclusions: No single treatment modality for colorectal cancer is likely to be curative due to its diverse origins and because aggressive tumors and widespread disease are accompanied by systemic changes different in character from those present in tumor cells. To attain high cure rates, we propose combining treatment strategies that: (1) kill tumor cells via multiple, distinct methods; (2) move tumor cells from "cancer attractor" network states towards more differentiated or less hazardous states; (3) target different aspects of the microenvironment nurturing the tumor; (4) counter tumor associated immune system dysfunction; (5) identify, reduce and mitigate patient-related chronic stressors; (6) eliminate premalignant lesions through more effective screening; (7) identify and target germ line changes associated with tumor promotion and (8) promote overall patient health.

See full paper at http://www.NatPernick.com.

How Developmentally-Based Stacked-Neural Networks will Exceed the Capacity of Current Neural Networks | Wednesday 10:50-12:30

Michael Commons and Sofia Leitte

To more precisely emulate how a human acts upon the environment, a computer must learn from the environment in a way that is closer to the way that humans do. Moreover, the way humans learn from the environment is an evolutionary extension of how nonhuman animals learn. Hence, to more precisely emulate a human, both human and nonhuman animal learning should be taken into account. This paper describes a mathematically-based model of cognition and species evolution, the Model of Hierarchical Complexity (MHC). The MHC proposes an analytic, a priori measurement of the difficulty of task-actions called the Order of Hierarchical Complexity (Commons & Pekker, 2008). Task-actions are actions directed toward problem-solving. According to the MHC, task-actions grow in complexity throughout development and evolution. The definitions of what makes an action more hierarchically complex will be presented. An example of an initial application of the model, to completely solving the Balance Beam problem, will be shown. It is suggested that if this model is followed, then, based on the developmental and evolutionary principles contained within it, we should ultimately be able to create androids that will be at least as smart as humans. They will not only pass the Turing test, as will be explained, but they will also be able to complete other tests specifically designed for and appropriate to humans.

Hypernetwork Science: From Multidimensional Networks to Computational Topology | Monday 10:20-12:00

Cliff Joslyn, Sinan Aksoy, Tiffany Callahan, Lawrence Hunter, Brett Jefferson, Brenda Praggastis, Emilie Purvine and Ignacio Tripodi

As data structures and mathematical objects used for complex systems modeling, hypergraphs sit nicely poised between the world of network models on the one hand and, on the other, that of higher-order mathematical abstractions from algebra, lattice theory, and topology. They are able to represent complex systems interactions more faithfully than graphs and networks, while also being some of the simplest classes of systems representing topological structures as collections of multidimensional objects connected in a particular pattern. In this paper we discuss the role of (undirected) hypergraphs in the science of complex networks, and provide a mathematical overview of the core concepts needed for hypernetwork modeling, including duality and the relationship to bicolored graphs, quantitative adjacency and incidence, the nature of walks in hypergraphs, and available topological relationships and properties. We close with a brief discussion of two example applications: biomedical databases for disease analysis, and domain-name system (DNS) analysis of cyber data.
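A minimal sketch of two of these core concepts, the incidence structure and its dual, assuming a plain dict-of-sets representation (for serious work, libraries such as HyperNetX implement these operations):

```python
# Toy hypergraph: hyperedges are sets of vertices.
H = {
    "e1": {"a", "b", "c"},
    "e2": {"b", "c", "d"},
    "e3": {"d"},
}

def dual(h):
    # Dual hypergraph: vertices become hyperedges and vice versa.
    d = {}
    for edge, verts in h.items():
        for v in verts:
            d.setdefault(v, set()).add(edge)
    return d

def s_adjacent(h, e1, e2, s=1):
    # Hyperedges are s-adjacent when they share at least s vertices; walks
    # built from s-adjacency generalize ordinary graph walks.
    return len(h[e1] & h[e2]) >= s

print(dual(H))                       # e.g. 'b' maps to {'e1', 'e2'}
print(s_adjacent(H, "e1", "e2", 2))  # True: e1 and e2 share {b, c}
```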

Identifying Shared Regulators & Pathways Between Idiopathic Diseases | Wednesday 10:50-12:30

Tuck Onn Liew and Chandrajit Lahiri

Idiopathic diseases serve as great examples of biological systems being affected by heterogeneous, interacting entities. While many factors drive their complex pathogenesis, their multifactorial nature may cause different diseases to share driving factors, which in some cases may be due to a disruption of gene function(s). For the latter, network biology can be leveraged to explain the differences and similarities between these diseases. Here we present a case study utilizing a dataset of susceptibility proteins of three diseases, namely autism spectrum disorder (ASD), rheumatoid arthritis (RA) and type 2 diabetes (T2D), to find the best indicators for identifying shared regulators among diseases that are most likely to lead to comorbidities between them. With established protein interaction data, we have built three individual “disease” interactomes and four subsets of interactomes combining these diseases, based on their degree of implication and confidence of interactions. Six centrality measures are used to infer shared or unique hubs: degree, betweenness, closeness, eigenvector, stress and local area connectivity. Consensus pathway analyses of the sub-networks, built from genes implicated by three different methods, are then used to infer important pathways. As a side objective, we made observations on how the aforementioned network filtering criteria affect the credibility of our results. Our analyses reveal the top-ranking crucial genes to be HRAS for ASD, SRC & TP53 for RA and MAPK1 for T2D, when the respective individual disease interactomes are analysed using centrality parameter algorithms. Combining the gene lists from these three diseases reveals MAPK1 & TCF7L2 as the only genes common among all three. A deeper probing of the combined interactomes of the three diseases revealed that genes belonging to the RA dataset are highly overrepresented within the top 100 genes ranked by the centrality measures. Among the four subsets of interactomes, SRC, TP53, KAT2B, PTEN and IL-6 are found to be most common across the above six centrality measures within the top ten, whereas AGT and APP are found alongside SRC, PTEN and IL-6 when the universal consensus is considered. Upon comparison of the pathway analyses, the repercussions of dysregulated thyroid hormones, microRNAs and the adherens junction pathways on these diseases are projected, with possible comorbidities of glioblastoma, as well as prostate and pancreatic cancer. The genes we analysed across the six centrality measures, within the top ten from our combined interactomes, were validated as those most frequently targeted in actual experimental research. Furthermore, the network filtering criteria significantly affect these results.
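As an illustration of the centrality-consensus step, the sketch below computes four of the six measures with networkx on a stand-in graph; stress and local-area-connectivity centrality have no off-the-shelf networkx implementation and would need custom code.

```python
import networkx as nx

G = nx.karate_club_graph()   # stand-in for a disease interactome

measures = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}

def top_k(scores, k=10):
    # Nodes with the k highest scores for one measure.
    return {n for n, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]}

# Consensus hubs: nodes ranked in the top 10 by every measure.
consensus = set.intersection(*(top_k(m) for m in measures.values()))
print("consensus hub candidates:", consensus)
```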

Identifying the Coupling Structure in Complex Systems through the Optimal Causation Entropy Principle, Information Flow and Information Fragility | Tuesday 10:20-12:00

Erik Bollt

Inferring the coupling structure of complex systems from time series data by means of statistical and information-theoretic techniques is a challenging problem in applied science. The reliability of statistical inferences requires the construction of suitable information-theoretic measures that take into account both direct and indirect influences, manifest in the form of information flows, between the components of the system. In this work, we present an application of the optimal causation entropy (oCSE) principle to identify the coupling structure, jointly applying the aggregative discovery and progressive removal algorithms based on the oCSE principle to infer the direct versus indirect coupling structure of the system from measured data. A geometric alternative will also be discussed. We will include discussion of examples such as the functional brain network as inferred from fMRI (functional magnetic resonance imaging).

Identifying connections in a complex process, manifest as causal direct information flow, suggests a new way of detecting and understanding fundamental changes in the dynamical process of a complex system. The question of fragility and robustness concerns how the macroscopic behavior of a system will change in response to local perturbations. We interpret the terms “robust” and “fragile” as global descriptors of the system, in terms of the change in the information-carrying capacity of paths between states of a complex system due to the loss of a state or connection, with a corresponding descriptor in terms of information betweenness. Stated more broadly in terms of the interdependencies of complex systems: consider a large-scale process in which minor changes frequently occur; can we define, and hence detect, those changes that would render the system effectively different and significantly alter its performance, before the system fails? Thus, we suggest a fragility-robustness duality to detect a “tipping point” at which even a minimal change of detail can cause a catastrophic systemic outcome.
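A simplified sketch of the two-stage oCSE procedure named above, using a Gaussian estimator of conditional mutual information (CMI) and a fixed threshold in place of the shuffle-test significance criterion used in practice:

```python
import numpy as np

def logdet_cov(m):
    # Log-determinant of the sample covariance of the columns of m.
    if m.shape[1] == 0:
        return 0.0
    return np.linalg.slogdet(np.atleast_2d(np.cov(m, rowvar=False)))[1]

def cmi(x, y, z):
    # I(X;Y|Z) for jointly Gaussian variables, via covariance determinants.
    x, y = x.reshape(-1, 1), y.reshape(-1, 1)
    return 0.5 * (logdet_cov(np.hstack([x, z])) + logdet_cov(np.hstack([y, z]))
                  - logdet_cov(z) - logdet_cov(np.hstack([x, y, z])))

def ocse_parents(candidates, target, threshold=0.02):
    # Aggregative discovery: greedily add the candidate with the largest CMI
    # conditioned on everything already selected.
    selected = []
    while True:
        z = (np.column_stack([candidates[s] for s in selected])
             if selected else np.empty((len(target), 0)))
        gains = {c: cmi(v, target, z)
                 for c, v in candidates.items() if c not in selected}
        if not gains or max(gains.values()) < threshold:
            break
        selected.append(max(gains, key=gains.get))
    # Progressive removal: drop parents that are redundant given the rest,
    # which is what separates direct from indirect couplings.
    for s in list(selected):
        rest = [r for r in selected if r != s]
        z = (np.column_stack([candidates[r] for r in rest])
             if rest else np.empty((len(target), 0)))
        if cmi(candidates[s], target, z) < threshold:
            selected.remove(s)
    return selected

rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=(2, 2000))
x3 = x1 + 0.5 * rng.normal(size=2000)                  # indirect influence only
y = 0.8 * x1 + 0.4 * x2 + 0.3 * rng.normal(size=2000)
print(ocse_parents({"x1": x1, "x2": x2, "x3": x3}, y))  # expect ['x1', 'x2']
```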

Immunodietica: Interrogating interactions between autoimmune disorders and diet | Wednesday 10:50-12:30

Iosif Gershteyn and Leonardo Ferreira

Autoimmunity is on the rise around the globe. Diet has been proposed as a risk factor for autoimmunity and shown to modulate the severity of several autoimmune disorders. Yet, the interaction between diet and autoimmunity in humans remains largely unstudied. Here, we systematically interrogated commonly consumed animals and plants for peptide epitopes previously implicated in human autoimmune disease. A total of twenty-four species investigated could be divided into three broad categories regarding their content in human autoimmune epitopes, which we represented using a new metric, the Gershteyn-Ferreira index (GF index). Strikingly, pig contains a disproportionately high number of unique autoimmune epitopes compared to other commonly consumed species analyzed. Additionally, we analyzed the impact of immunogenetics, one’s human leukocyte antigen (HLA) alleles, in this complex interplay. Interestingly, diet-derived epitopes implicated in disease were more likely to bind to HLA alleles associated with the disease than to protective alleles, with visible differences between organisms with similar GF index. We then analyzed an individual’s HLA haplotype, generating a personalized heatmap of potential dietary autoimmune triggers, the Gershteyn-Ferreira Sensitivity Passport. Our work uncovered differences in autoimmunogenic potential across food sources and revealed differential binding of diet-derived epitopes to autoimmune disease-associated HLA alleles, shedding light on the impact of diet on autoimmunity.

The impact of composition on the dynamics of autocatalytic sets | Wednesday 16:40-18:00

Alessandro Ravoni

Autocatalytic sets are sets of entities that mutually catalyse each other's production through chemical reactions from a basic food source. Recently, the reflexively autocatalytic and food-generated (RAF) theory has introduced a formal definition of autocatalytic sets which has provided encouraging results in the context of the origin of life. However, the link between the structure of autocatalytic sets and the possibility of different long-term behaviours is still unclear. In this work, we study how different interactions among autocatalytic sets affect the emergent dynamics. To this aim, we develop a model in which interactions are represented by composition operations among networks, and the dynamics of the networks is performed via stochastic simulations. We find that the dynamical emergence of the autocatalytic sets depends on the adopted composition operations. In particular, operations involving entities that are sources for autocatalytic sets can promote the formation of different autocatalytic subsets, opening the door to various long-term behaviours.
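The stochastic simulations referred to above are typically of the Gillespie type; here is a minimal sketch for a hypothetical two-member autocatalytic set (all species, reactions and rates are invented for illustration):

```python
import random

random.seed(0)
# (reactants, products, rate constant); F is the food source.
reactions = [
    ({}, {"F": 1}, 5.0),                          # food inflow
    ({"F": 1, "A": 1}, {"A": 1, "B": 1}, 0.001),  # F -> B, catalysed by A
    ({"F": 1, "B": 1}, {"B": 1, "A": 1}, 0.001),  # F -> A, catalysed by B
    ({"A": 1}, {}, 0.05),                         # decay of A
    ({"B": 1}, {}, 0.05),                         # decay of B
]
state = {"F": 1000, "A": 2, "B": 2}
t = 0.0
for _ in range(50_000):                           # step cap keeps the demo short
    # Propensity of each reaction: rate constant times reactant counts.
    props = []
    for reac, _, k in reactions:
        a = k
        for s in reac:
            a *= state[s]
        props.append(a)
    total = sum(props)
    if total == 0 or t > 50.0:
        break
    t += random.expovariate(total)                # time to the next event
    r = random.uniform(0, total)                  # pick a reaction
    for (reac, prod, _), a in zip(reactions, props):
        r -= a
        if r <= 0:
            for s, n in reac.items():
                state[s] -= n
            for s, n in prod.items():
                state[s] = state.get(s, 0) + n
            break
print(round(t, 1), state)   # A and B sustain each other while food lasts
```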

References
[1] W. Hordijk, S.A. Kauffman, M. Steel, Required levels of catalysis for emergence of autocatalytic sets in models of chemical reaction systems, Int. J. Mol. Sci., 12(5), (2011), 3085–3101
[2] V. Vasas, C. Fernando, M. Santos, S.A. Kauffman, E. Szathmáry, Evolution before genes, Biol. Direct., 7, 1, (2012)
[3] W. Hordijk, J. Naylor, N. Krasnogor, H. Fellermann, Population dynamics of autocatalytic sets in a compartmentalized spatial world, Life, 8, (2018), 33

Impact of individual actions on the collective response of social systems | Tuesday 10:20-12:00

Samuel Martin-Gutierrez

In this work [1] we study collective human behaviour through three mathematical models that explain how the actions performed by an actor, individual or agent (her activity) influence the collective reaction (or response) of the social system she is embedded in. The models are inspired by the physical sciences, for example by the perfect gas model, where intermolecular forces are neglected, and by the concept of distinguishability of particles. The developed models consider different levels of dependence between response and activity. In the first model, called the Independent Variables (InV) model, we consider activity and response to be completely independent, while in the second and third models the individual activity influences the response. The main difference between these last two models is the distinguishability of the actors. In the Identical Actors (IdA) model, the system is agnostic with respect to the individual that performs the actions, while in the Distinguishable Actors (DiA) model, the dependence between activity and response is determined by the features of the actor that performs the action.

We use the models to obtain the distribution of the efficiency metric, defined as the ratio between the collective reactions of the system and the individual actions. Notice that this metric is a generalization of the user efficiency, first introduced in the context of Twitter by some of the authors (Morales et al., Social Networks 39, 1-11, 2014). The models are tested with 29 datasets from three systems of different nature: Twitter conversations, the scientific citation network and the Wikipedia collaboration environment. In all the systems the efficiency distribution presents a universal shape, with small but relevant differences between systems.

The Independent Variables model is able to explain two fundamental characteristics of the efficiency distribution for which there was previously only empirical evidence: its universal shape and its independence with respect to changes in the activity distribution. Additionally, it reproduces the efficiency distribution for the scientific citation network appropriately. However, there are small discrepancies between the InV model and the data for Twitter and Wikipedia. We find the cause of the discrepancies and take it into account to develop the Identical Actors model, which improves the results for both systems. In this model, the correlations between individual actions and collective response emerge naturally from the hypotheses of the model. The theoretical correlations are comparable to those found in the empirical data. When it comes to the efficiency distribution, the model adequately reproduces the right tail (efficiency > 1) for Twitter and Wikipedia. We again study the small discrepancies between the IdA model and the data, and from this analysis we develop the Distinguishable Actors model, which fits the Twitter data remarkably well over the whole range of efficiencies. Moreover, the DiA model improves the concordance between theoretical and empirical activity-response correlations. To summarize, in this work we analyze collective human behaviour in three social systems of different nature, showing the universality of the shape of the efficiency distribution. We explain how this universal shape emerges with a parsimonious model and develop two more sophisticated models to get a thorough understanding of the particularities of the efficiency distribution in each system considered. The three models have clear and intuitive interpretations, pave the way for more elaborate and domain-specific theories, and can be used as null models or baselines for them.

[1] Martin-Gutierrez, S., Losada, J.C. & Benito, R.M. Impact of individual actions on the collective response of social systems. Sci Rep 10, 12126 (2020). https://doi.org/10.1038/s41598-020-69005-y

Impact of plate size on food waste: Agent-based simulation of food consumption | Tuesday 10:20-12:00

Babak Ravandi and Nina Jovanovic

Food waste is a substantial contributor to environmental change and poses a threat to global sustainability. Plate waste and food surplus from food-service operations such as restaurants, workplace canteens and cafeterias account for a significant portion of this waste. In this work, we seek to identify potential strategies to optimize food consumption in all-you-can-eat food-service operations, in terms of minimizing food waste while ensuring quality of service (i.e., maintaining low wait-time, unsatisfied-hunger, and walk-out percentages). We treat these facilities as complex systems and propose an agent-based model to capture the dynamics between plate waste, food surplus, and the facility organization setup. Moreover, we measure the impact of plate size on food waste. The simulation results show that reducing plate size from large to small decreases plate waste by up to 30% while ensuring quality of service. However, total waste, as the sum of food surplus and plate waste, is lower with large plates. Our results indicate the need for optimizing food preparation along with designing choice environments that encourage guests to avoid taking more food than they need.
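A schematic sketch of the plate-size experiment: guests make repeated trips, tend to fill their plate regardless of remaining hunger, and the leftover on the final trip becomes plate waste. All parameters (hunger distribution, serving behavior) are hypothetical.

```python
import random

def simulate(plate_size, guests=1000, seed=1):
    random.seed(seed)
    plate_waste = 0.0
    for _ in range(guests):
        hunger = max(0.0, random.gauss(400, 120))    # grams this guest will eat
        eaten = 0.0
        while eaten < hunger:
            # Guests tend to fill the plate regardless of remaining hunger.
            served = plate_size * random.uniform(0.7, 1.0)
            eaten_now = min(served, hunger - eaten)
            plate_waste += served - eaten_now        # leftover is wasted
            eaten += eaten_now
    return plate_waste / guests

for size in (250, 350, 450):                         # plate capacity in grams
    print(size, "g plate ->", round(simulate(size), 1), "g wasted per guest")
```

Smaller plates reduce the over-serving on the final trip at the cost of more refill trips, which is the quality-of-service side of the trade-off the abstract describes.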

Inferring the phase space of a perturbed limit cycle oscillator from data using nearest neighbor prediction | Monday 14:30-16:10

Rok Cestnik and Michael Rosenblum

We consider a nearest neighbor prediction method for inferring the phase space of a perturbed limit cycle oscillator from observations. At its core is the one-step prediction, which serves as a means to estimate the long-term time evolution of arbitrary states. This allows the inference of the oscillator's phase space with traditional techniques relying on asymptotic time evolution. The one-step prediction is a straightforward generalization of the nearest neighbor prediction algorithm given that one has observations of the complete state and all perturbing forces. However, we show that in some cases an accurate prediction can be made even without any knowledge of the perturbing force. This has many practical implications, since in experimental setups not all perturbations are controlled and measured. We showcase the potential of this method with a statistical analysis of simulations of a stochastically perturbed oscillator, from which we obtain the unperturbed orbit, the phase response curve and the isochronal structure, both when the perturbation is known and when it is not.
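The one-step prediction at the core of the method is easy to sketch: estimate a state's successor by averaging the observed successors of its k nearest neighbors, then iterate to obtain asymptotic behavior. The toy system below is a noisy limit-cycle oscillator; everything about it is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x):
    # Noisy rotation with radial relaxation toward r = 1 (a limit cycle).
    r = np.linalg.norm(x)
    theta = np.arctan2(x[1], x[0]) + 0.1
    r = r + 0.05 * (1.0 - r) + 0.01 * rng.normal()
    return np.array([r * np.cos(theta), r * np.sin(theta)])

traj = np.empty((2000, 2))                 # the "observed" trajectory
traj[0] = (1.5, 0.0)
for i in range(1999):
    traj[i + 1] = step(traj[i])

def predict_one_step(x, k=10):
    # Average the observed successors of the k nearest observed states.
    d = np.linalg.norm(traj[:-1] - x, axis=1)
    idx = np.argsort(d)[:k]
    return traj[idx + 1].mean(axis=0)

# Iterating the one-step prediction estimates the long-term evolution of an
# arbitrary state, here relaxing onto the unperturbed orbit (radius ~ 1).
x = np.array([0.2, 0.2])
for _ in range(200):
    x = predict_one_step(x)
print("inferred orbit radius:", round(float(np.linalg.norm(x)), 2))
```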

Influential Spreaders in Networks with Community Structure | Wednesday 16:40-18:00

Zakariya Ghalmane, Chantal Cherifi, Hocine Cherifi and Mohammed El Hassouni

Centrality is a fundamental issue in research on complex networks. It is linked to network topological properties such as the degree distribution, clustering and the community structure. While the topology of real-world networks is known to exhibit a community structure, this property has been largely ignored in the literature on centrality. To address this problem, in this work we propose to tailor the various centrality measures defined for non-modular networks so that they integrate the influence of the community structure. In a modular network, we can distinguish two types of influence for a node: a local influence on the nodes belonging to its own community through the intra-community links, and a global influence on the nodes of the other communities through the inter-community links. Therefore, centrality should not be represented by a simple scalar value but rather by a two-dimensional vector, the so-called “Modular centrality”. Its first component measures the local influence of the node, while the second component measures its global influence. The Modular centrality is computed in two steps. The global component is defined in the same way for both types of community structure (overlapping and non-overlapping): it is computed on the global network obtained by removing all the intra-community links from the original network, with remaining isolated nodes also removed. The local component computation depends on the type of the network. In networks with a non-overlapping community structure, it is computed on the local graph obtained by removing all the inter-community links from the original network. In networks with overlapping communities, the local component of a node is computed according to its nature: for a non-overlapping node, as previously, only members of its community are considered; for an overlapping node, all the communities that the node belongs to are merged into a single community. Based on this approach, we propose to extend all the classical centrality measures to modular networks.
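A minimal sketch of the two-step computation for the non-overlapping case, using degree centrality as the base measure (any classical measure could be substituted):

```python
import networkx as nx

def modular_centrality(G, community):
    # community: dict mapping node -> community label.
    local_g, global_g = G.copy(), G.copy()
    for u, v in G.edges():
        if community[u] == community[v]:
            global_g.remove_edge(u, v)  # global graph: inter-community links only
        else:
            local_g.remove_edge(u, v)   # local graph: intra-community links only
    global_g.remove_nodes_from(list(nx.isolates(global_g)))
    local = nx.degree_centrality(local_g)
    glob = nx.degree_centrality(global_g) if len(global_g) > 0 else {}
    # Two-dimensional Modular centrality vector (local, global) per node.
    return {n: (local.get(n, 0.0), glob.get(n, 0.0)) for n in G}

G = nx.karate_club_graph()
community = {n: G.nodes[n]["club"] for n in G}   # two factions as communities
print(modular_centrality(G, community)[0])
```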

Information Dynamics in Neuromorphic Nanowire Networks | Wednesday 15:00-16:40

Ruomin Zhu, Joel Hochstetter, Alon Loeffler, Mike Li, Joseph Lizier and Zdenka Kuncic

The rise of atomic switch nanotechnology brings artificial intelligence into a new regime: not only do atomic switches respond to electrical stimuli in the same way as biological synapses, they also exhibit brain-like properties such as non-linear power-law dynamics and brain-like memories that cannot be readily implemented in software. In physical realizations of self-organized atomic switch networks (ASNs) with neuromorphic structure, spatially distributed memory and activation of feed-back and feed-forward sub-networks are observed. Recent studies have shown that such networks demonstrate cognitive memory and learning ability.

One effective approach to analyzing these networks is through their intrinsic electrical signal transduction and information dynamics, specifically at the activation stage. Internal signal transduction is investigated and its effect on the rest of the network is observed. Activation of atomic switch networks depends on the external voltage bias as well as the internal topology. With a voltage bias close to the activation threshold, the first signal pathway forms along the topologically shortest path from the source to the drain. Electric signals propagate along this first pathway and also from this pathway to the rest of the network. A graph-theoretic measure of current-flow centrality is employed to identify the nanowires and junctions of higher importance. The propagation of electrical signals can thus be traced based on the centrality measure: the nodes with higher centralities exhibit stronger current traffic. The dynamic redistribution of electric potential can be interpreted through centrality as well. Modularity of the network is also measured as a function of time, and the result strongly indicates that the network is more interconnected during activation.

Meanwhile, the study of information dynamics in ASNs shows how information is stored locally and how it is transferred as signals propagate. An information-theoretic measure, transfer entropy, is employed, and the results suggest that nanowires with higher centralities are more likely to have a richer capacity both for sending out and for receiving information, while junctions at more favorable positions, which essentially means higher centrality, tend to have stronger information transfer than the others. Time series analysis is used to study the information dynamics during the activation of these networks. The time at which maximal information transfer occurs lines up with the network's activation period.
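The current-flow centrality step has an off-the-shelf networkx implementation; the sketch below applies it to a toy grid standing in for an actual nanowire mesh, with uniform junction conductances (both assumptions are ours).

```python
import networkx as nx

G = nx.grid_2d_graph(8, 8)                     # stand-in nanowire network
nx.set_edge_attributes(G, 1.0, "conductance")  # uniform junction conductance

# Current-flow betweenness treats the graph as a resistor network and ranks
# nodes by the current traffic passing through them.
cfc = nx.current_flow_betweenness_centrality(G, weight="conductance")
hot = max(cfc, key=cfc.get)
print("highest current traffic at node", hot, "->", round(cfc[hot], 3))
```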

An Information Ontology for the Process Algebra Model of Non-Relativistic Quantum Mechanics | Wednesday 15:00-16:40

William Sulis

The Process Algebra model has been suggested as an alternative mathematical framework for non-relativistic quantum mechanics (NRQM). It appears to reproduce the wave functions of non-relativistic quantum mechanics to a high degree of accuracy. It posits a fundamental level of finite, discrete events upon which the usual entities of NRQM supervene. It has been suggested that the Process Algebra model provides a true completion of NRQM, free of divergences and paradoxes, with causally local information propagation, contextuality and realism. Arguments in support of these claims have been mathematical. Missing has been an ontology of this fundamental level from which the formalism naturally emerges. In this paper it is argued that information and information flow provides this ontology. Higher level constructs such as energy, momentum, mass, spacetime, are all emergent from this fundamental level.

Innovation Ecosystem as a Complex Adaptive System: Implications for Analysis, Design and Policy Making | Monday 10:20-12:00

Alireza Valyan, Jafar Taheri Kalani and Mehrdad Mohammadi

With the increasing attention to innovation as one of the most significant drivers of economic and social growth and development in societies, more and more scholars are interested in analyzing and designing settings which can better foster innovation. One such setting, which has raised much debate in the past decade, is the innovation ecosystem (IE), characterized by autonomous players, nonlinear micro-level interactions, emergent macro-level patterns, temporary optimal states and diluted internal and external boundaries. These features are the same as those of complex adaptive systems (CAS), and thus some scholars have used this approach to describe and analyze the innovation ecosystem as a CAS. However, the existing body of literature is mostly directed towards descriptive and general statements, and when it comes to applicable implications that can be used by practitioners and policy makers, there is limited evidence in the field. With the aim of filling this theoretical and practical gap, in this paper we have used the complex adaptive systems (CAS) approach to analyze, design and implement the innovation ecosystem in the Iranian water and power industry. We used a soft modeling approach in strategizing workshops with the stakeholders and key players of the ecosystem, and used complex adaptive systems theory to understand the existing dynamism and interactions. We then provide the strategies/practices required by the policy maker as practical implications for intervention in and control of the ecosystem as a CAS.

Interactive network graphs to analyze public opinion polls | Tuesday 10:20-12:00

Modesto Escobar

Graphs have been used not only to solve topographic problems and to represent social structures, but also to study relationships between variables. Path analysis and structural equation models are well-known examples. Both, however, were initially restricted to quantitative variables. In this presentation, we’ll discuss how relationships between qualitative variables can also be represented, as correspondence analysis already does, but using the technical resources of network analysis and other well-known multivariate techniques such as linear and logistic regression.

The proposed analysis is based on fitting simultaneous regression equations and selecting those relationships that are positive and statistically significant. On these premises, graphs are obtained where dependent variables are linked with those categories that have an adjusted mean or percentage above the overall sample in a series of selected variables.

To improve their analytical potential, these graphs are endowed with interactive features. The graphic interface includes the selection of various attributes for the recognition of the elements analyzed, and the modification of parameters to focus on stronger relationships. In the first part of the presentation, I’ll address the mathematical and statistical foundations of these representations; in the second part, I’ll propose programs to make them possible; and, finally, I’ll show an example of the analysis of electoral and public opinion data.
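A minimal sketch of the graph-building step, fitting one logistic regression per dependent variable and keeping the positive, significant coefficients as links (variable names and data are hypothetical):

```python
import numpy as np
import statsmodels.api as sm
import networkx as nx

rng = np.random.default_rng(0)
n = 500
young = rng.integers(0, 2, n)
urban = rng.integers(0, 2, n)
# Hypothetical survey response, correlated with being young and urban.
logit = -1.0 + 1.2 * young + 0.8 * urban
votes_x = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

G = nx.DiGraph()
X = sm.add_constant(np.column_stack([young, urban]))
fit = sm.Logit(votes_x, X).fit(disp=0)
for name, coef, p in zip(["young", "urban"], fit.params[1:], fit.pvalues[1:]):
    if coef > 0 and p < 0.05:          # keep positive, significant links only
        G.add_edge(name, "votes_x", weight=float(coef))
print(G.edges(data=True))
```

An interactive front end would then let the user raise the significance or coefficient thresholds to focus on stronger relationships, as described above.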

An Introduction to Complex Systems Science and its Applications | Wednesday 16:40-18:00

Alexander Siegenfeld and Yaneer Bar-Yam

The standard assumptions that underlie many conceptual and quantitative frameworks do not hold for many complex physical, biological, and social systems. Complex systems science clarifies when and why such assumptions fail and provides alternative frameworks for understanding the properties of complex systems. This review introduces some of the basic principles of complex systems science, including complexity profiles, the tradeoff between efficiency and adaptability, the necessity of matching the complexity of systems to that of their environments, multi-scale analysis, and evolutionary processes. Our focus is on the general properties of systems as opposed to the modeling of specific dynamics; rather than provide a comprehensive review, we pedagogically describe a conceptual and analytic approach for understanding and interacting with the complex systems of our world.

Jails as Complex Adaptive Systems | Thursday 10:30-12:00

Reena Chakraborty

Jails are safety-focused organizations. They are of interest because nearly 10.6 million intakes and releases occur each year, involving over 7.5 million individuals who return to their communities after staying in community jails a median of fewer than 25 days. Jail safety-related policies and practices are traditionally inflexible, intended to provide uniform responses to the behavioral threats that present. Traditional models of jails consider safety only at the level of individual agents and their behavior. Jail practitioners face a persistent set of safety and public safety challenges, arising from information flows and their interactions, that are ineffectively explained by these models. Viewing jails as CAS provides more effective models, e.g., the Cynefin framework, for categorizing and responding to such challenges. These could inform more effective practices and result in improved outcomes. The process of arriving at a mental model of jails as CAS is discussed in this presentation. This is the first step of work in progress.

Safety in jails can be mentally modeled as provided by a human sensor network that is challenged by information gaps, frequent changes in frequency-response characteristics due to shift operations, and information overload. The diverse constituents of the jail ecosystem and high levels of movement result in an enormous number of information vectors and information exchange transactions that impact safety in both positive and negative ways and go mostly unmonitored. Jails house a population that frequently suffers from cognitive challenges resulting from long histories of trauma, high incidence of substance use, and high prevalence of mental illness. Inbuilt information flows that operate counter to the jail's safety objectives arise from the diverse population housed and their motives.

A brief historical review of important insights about CAS is translated into the jail context. A key insight is that interactions between perception, cognition, and action are critical in understanding human CAS like jails; yet jails are not modeled in this way, so these interactions have yet to be characterized and understood. In the context of existing models for understanding failure in CAS, examples of how the cognitive challenges experienced in jails can lead to adverse safety outcomes are presented. Resilience Engineering is suggested as a potential source of new tools and techniques to improve safety outcomes, reliability and resilience in jails.

Recalling that CAS and Organizational Science were identified as related fields, jails are viewed as high reliability organizations with challenges to achieving full reliability. Bogue's suggestions for applying Weick and Sutcliffe's “Five Guiding Principles for Achieving High Reliability,” as translated to corrections agencies like jails, are presented and connected to NIJ-funded initiatives. The role of the transformation processes required in jails to achieve high reliability, and the importance of characterizing, promoting and sustaining these to achieve improved safety and pro-social outcomes, are put forth. Challenges to understanding and assessing transformation processes are discussed within a process analysis framework. These ideas suggest that methodical development and application of a CAS framework to understand and promote safety in jails could lead to improved safety and public safety outcomes.

Joint Lattice of Reconstructability Analysis and Bayesian Network General Graphs | Wednesday 15:00-16:40

Marcus Harris and Martin Zwick

This paper integrates the structures considered in Reconstructability Analysis (RA) and those considered in Bayesian Networks (BN) into a joint lattice of general graphs. This integration and associated lattice visualizations are done in this paper for four variables, but the approach can easily be expanded to more variables. The work builds on the RA work of Klir (1986), Krippendorff (1986), and Zwick (2001), and the BN work of Pearl (1985, 1987, 1988, 2000), Verma (1990), Heckerman (1994), Chickering (1995), Andersson (1997), and others. The RA four variable lattice and the BN four variable lattice partially overlap: there are ten unique RA general graphs, ten unique BN general graphs, and ten general graphs common to both RA and BN. For example, the specific graph having probability distribution p(A)p(C)p(B|AC) is unique to BN, the RA specific graph AB:AC:BC, which contains a loop, is unique to RA, and the specific graph ACD:BCD with probability distribution p(A|CD)p(B|CD)p(D|C)p(C) is common to both RA and BN. Integration of the RA and BN lattices of general graphs yields a richer probabilistic graphical modeling framework than offered by RA or BN alone.

Just Machine Learning | Thursday 9:05-9:40

Tina Eliassi-Rad

Tom Mitchell in his 1997 Machine Learning textbook defined the well-posed learning problem as follows: “A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.” In this talk, I will discuss current tasks, experiences, and performance measures as they pertain to fairness in machine learning. The most popular task thus far has been risk assessment. For example, Jack’s risk of defaulting on a loan is 8, Jill’s is 2; Ed’s risk of recidivism is 9, Peter’s is 1. We know this task definition comes with impossibility results (e.g., see Kleinberg et al. 2016, Chouldechova 2016). I will highlight new findings in terms of these impossibility results. In addition, most human decision-makers seem to use risk estimates for efficiency purposes and not to make fairer decisions. The task of risk assessment seems to enable efficiency instead of fairness. I will present an alternative task definition whose goal is to provide more context to the human decision-maker. The problems surrounding experience have received the most attention. Joy Buolamwini (MIT Media Lab) refers to these as the “under-sampled majority” problem. The majority of the population is non-white, non-male; however, white males are overrepresented in the training data. Not being properly represented in the training data comes at a cost to the under-sampled majority when machine learning algorithms are used to aid human decision-makers. There are many well-documented incidents here; for example, facial recognition systems have poor performance on dark-skinned people. In terms of performance measures, there are a variety of definitions here from group- to individual-fairness, from anti-classification, to classification parity, to calibration. I will discuss a null model for fairness and demonstrate how to use deviations from this null model to measure favoritism and prejudice in the data.

Just stop believing: need and consequences of probabilistic induction | Tuesday 10:20-12:00

André Martins

Recent experiments in cognition suggest we reason to defend our ideas. Finding the truth might be incidental and not the actual cause of our argumentation and reasoning skills. When we hold a belief, our minds might stop working to find the best answer and go on the defensive. That is especially true when we identify ourselves with that belief. This suggests we need to be wary of our natural reasoning. Therefore, we might have no choice but to use formal methods when debating ideas about the real world. However, deductive methods only tell us what is true if we assume a set of initial axioms to be true. They offer no support to the concept of believing. Inductive methods can make claims about the world. However, using induction, except in approximate ways, might often be beyond our capacities. Moreover, induction does not rate claims as true. It does not provide certainty, except for very trivial statements. Induction can give us plausibilities and, at best, probabilities. That is, we have no support for believing in any idea the way we do, as if we knew the idea was right. Knowledge is possible only inside the artificial worlds of logic and mathematics. For ideas about the real world, the best we can achieve is to measure uncertainty. While sometimes those measurements can get incredibly close to certainty, in most cases that does not happen. Putting it all together leaves us with the conclusion that beliefs about the real world should be avoided if one seeks to get close to the best possible answers. We must accept that we cannot know. At best, we can rank ideas and theories as more or less probable.

Kinds of unfinished description with implication for natural science | Tuesday 10:20-12:00

J Rowan Scott

The Reductive Sciences and Complexity Sciences employ micro-scale-first, rigorous ‘bottom-up’ reductive Logic as well as the mathematics of Symmetry and Group Theory when modeling systemic change, causal relationships and mechanisms. Four kinds of ‘unfinished description’ result from this approach. The reductive micro-scale-first assumption fails to replicate the complexity of natural evolutionary processes. The reductive ‘bottom-up’ metaphor obscures significant facets of a more complicated natural evolutionary Logic. Abstract, rigorous, ‘bottom-up’ reductive Logic is susceptible to undecidable reductive propositions revealing formal reductive incompleteness and its implications, which include necessary meta-consideration in determining reductive logical consistency. The powerful mathematics of Symmetry and Group Theory is not sufficient for mathematically modeling causal relationships in Nature. Consequently, the Reductive Scientific Narrative creates a ‘comprehensive’ description of causal relationships in Nature and evolution that is fundamentally ‘unfinished’. Explaining and then ‘correcting’ each of the four kinds of ‘unfinished description’ illuminates a novel path that can more closely approximate the natural system, move Reductive Science and Complexity Science toward a deeper consilience, and potentially resolve paradoxes associated with the study of Consciousness, Mind and Nature.

Localization of Hubs in Modular Networks | Wednesday 10:50-12:30

Zakariya Ghalmane, Chantal Cherifi, Hocine Cherifi and Mohammed El Hassouni

In complex networks, the degree distribution of the nodes is known to be non-homogeneous with a heavy tail. Consequently, a small set of nodes (called hubs) are highly connected while the vast majority share few connections with their neighbors. The community structure is another main topological feature of many real-world networks. In these networks, the nodes shared by more than one community are called overlapping nodes. They play an important role in the network dynamics due to their ability to reach multiple communities. In this work, our goal is to characterize the relationship between the overlapping nodes and the hubs. Indeed, we suspect that hubs are in the neighborhood of the overlapping nodes. In order to investigate the ubiquity of this property, we perform a series of experiments on multiple real-world networks of various origins. The aim of these experiments is to compare the set of neighbors of the overlapping nodes with the set of hubs. In order to define both sets, the overlapping community structure of the real-world networks is uncovered using an overlapping community detection algorithm. The overlapping nodes and the set of their neighbors are then extracted. Note that if n is the number of neighbors of the overlapping nodes, the set of hubs is formed from the top n nodes of the network. This choice is motivated by the fact that some similarity measures need to be computed on sets of the same size. We compute classical measures such as the proportion of common nodes, the Jaccard index, the rank-biased overlap, and correlation measures between the two sets (Pearson and Spearman) in order to investigate their similarities. We also study their degree distributions and compare the subgraph made of the overlapping nodes and the hubs with the original network.
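A small sketch of the set comparison, with a toy community structure standing in for the output of an overlapping community detection algorithm:

```python
import networkx as nx

G = nx.connected_caveman_graph(4, 8)      # four dense communities
overlapping = {0, 8, 16, 24}              # pretend these straddle communities
neighbors = set().union(*(set(G[v]) for v in overlapping)) - overlapping

n = len(neighbors)                        # hub set matched in size
hubs = {v for v, _ in sorted(G.degree, key=lambda x: -x[1])[:n]}

common = neighbors & hubs
print("proportion of common nodes:", len(common) / n)
print("Jaccard index:", len(common) / len(neighbors | hubs))
```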

Longcuts in the Global Migration Network | Tuesday 10:20-12:00

Siyu Huang, Qinghua Chen and Xiaomeng Li

Unlike regular migrants, some migrants want the opportunity to move on to a new country rather than settle down. This interesting phenomenon, called ‘transit migration’, has attracted the attention of organizations and scholars because it significantly influences all three countries involved. More importantly, it conceals people’s real mobility intentions and undermines the effectiveness of traditional studies of immigration. Among the relevant issues, a systematic analysis of transit migration at the global level based on available data is meaningful and necessary, and revealing possible transit routes is the fundamental first step. Based on barriers quantified with the LC model, this paper marks possible transit countries and routes through irregular triangle relationships, including the popular springboards mentioned in prior research, such as Canada and Australia acting as global hubs, and some typical transit countries directing flows to Europe, such as Russia, Turkey, France and Germany. It also reveals and quantifies several hidden possible transit countries that were seldom noticed before, like South Africa, Israel and some hubs of local refugee flows in Africa. Exploring these irregular transit phenomena might shed light on policy development. Currently, as internationalization proceeds, transit migration is no longer concentrated in and around Europe, and more high-income entities make up the main part of transit routes. These results provide an objective view of transit migrants and countries that is free of prejudice and political attitudes.

Macroscale Network Feedback Structure of Transcription During Mouse Organogenesis | Tuesday 14:30-16:10

Lingyun Xiong, William Schoenberg and Jeremy Swartz

Mouse organogenesis is a biological process of cell fate transition. Underlying this process are dynamic and complex changes in the transcription network. Although these changes have been described extensively in terms of selected genes or individual pathways/sub-networks, we aim to provide a formal description of network dynamics at the macroscale level, leveraging data from single-cell transcriptome profiling of developing mouse embryos over a time course of 48 hours. We identified 982 genes in the mouse genome that are highly variable during gut formation and aggregated them parsimoniously into 14 broad-category biological processes according to Gene Ontology annotations. Based on aggregate gene expression levels, we constructed an Ordinary Differential Equation model of the macroscale network capturing its feedback structure, using a data-driven network learning method. We determined the polarity and magnitude of pairwise impact relations between network modules and inferred dominant higher-order interactions. Despite being ubiquitously irregular and heterogeneous at the gene level, the transcription network at the macroscale level has a feedback structure that is intrinsically regular and robust. We show the pivotal role of the signaling process in driving systemic changes and uncover the significance of the homeostatic process and the process of establishment of localization in regulating network dynamics. Localized regulatory structures also exist, and they represent domain-specific regulations that are plausibly essential to cell fate transition, such as lipid metabolism. Our model suggests the presence of rhythmic impact in the feedback structure, which appears to be an emergent property of the network architecture. Altogether, this study not only provides a holistic picture of the macroscale network feedback structure of transcription during mouse organogenesis, but also offers insight into key aspects of information flow within the network that control cell fate transition.
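
A minimal, generic sketch of this kind of modeling step, assuming module-level time series are available: fit a linear ODE dx/dt = Ax by least squares and read the polarity and magnitude of pairwise impacts off the fitted matrix. The authors' actual network-learning method is not specified here, so this stand-in uses synthetic data.

```python
# Hedged sketch: inferring pairwise impact relations between aggregated modules
# by fitting a linear ODE dx/dt = A x to module-level time series. A generic
# data-driven stand-in, not the authors' specific learning method.
import numpy as np

rng = np.random.default_rng(0)
T, m = 20, 5                        # 20 timepoints, 5 modules (toy dimensions)
t = np.linspace(0, 48, T)           # a 48-hour time course, as in the study
X = rng.random((T, m)).cumsum(0)    # placeholder for aggregate expression levels

dXdt = np.gradient(X, t, axis=0)    # finite-difference estimate of derivatives
# Least-squares fit of A in dx/dt = A x; sign(A[i, j]) gives the polarity and
# |A[i, j]| the magnitude of module j's impact on module i.
M, *_ = np.linalg.lstsq(X, dXdt, rcond=None)
A = M.T
print(np.sign(A))                   # polarity of pairwise impact relations
```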

Mahalanobis Distance as proxy of Homeostasis Loss in Type 2 Diabetes Pathogenesis | Wednesday 10:50-12:30

Jose L. Flores-Guerrero, Margery A. Connelly, Marco A. Grzegorczyk, Peter R. van Dijk, Gerjan Navis, Robin P.F. Dullaart and Stephan J.L. Bakker

The potential role of individual plasma biomarkers in the pathogenesis of Type 2 Diabetes (T2D) has been broadly studied, while the role of the interactions of biomarkers among themselves, as a proxy of physiological dysregulation before the onset of T2D, remains underexplored. Leung and colleagues (2018) have shown that the Mahalanobis Distance (MD) of reduced-dimensionality features (i.e. Principal Components) containing the information of several biomarkers can be used as a measure of the homeostasis loss that occurs with ageing.

The aim of the present study was to explore, from a complex-systems approach, the interactions of plasma biomarkers (glucose and lipid metabolism, one-carbon metabolism, and microbiome-derived metabolites) as a proxy of homeostasis loss associated with the risk of T2D in a prospective population-based cohort study. We calculated the MD of the Principal Components (PCs), which integrate the information of 27 circulating biomarkers measured in 4446 participants free from T2D at baseline, from the PREVEND (Prevention of Renal and Vascular End-stage Disease) study.
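
A minimal sketch of this computation, following Leung and colleagues' general recipe (standardize the biomarker matrix, project onto principal components, take each subject's Mahalanobis distance from the centroid); the data below are synthetic placeholders with the study's dimensions, assuming NumPy and scikit-learn.

```python
# Hedged sketch of MD-of-PCs as a homeostasis-loss proxy (synthetic data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(4446, 27))     # stand-in for the 27 PREVEND biomarkers

pcs = PCA(n_components=27).fit_transform((X - X.mean(0)) / X.std(0))

def mahalanobis_distance(Z):
    """MD of each subject from the centroid of the PC scores Z."""
    cov_inv = np.linalg.pinv(np.cov(Z, rowvar=False))
    d = Z - Z.mean(0)
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

md_all = mahalanobis_distance(pcs)           # MD from all 27 PCs
md_13 = mahalanobis_distance(pcs[:, :13])    # MD from the first 13 PCs
```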

After a median follow-up of 8.6 years, incident T2D was ascertained in 227 subjects. Six PCs were associated with a reduced risk of T2D, and seven PCs were associated with an increased risk of T2D. The MDs calculated from all the cumulative subsets of PCs were higher in subjects with higher risk of T2D, revealing a robust signal of homeostasis loss in those prone to develop T2D. Cox regression analyses revealed a significant association between MD and incident T2D. The hazard ratio of the MD calculated from the 27 PCs was 1.87 (95% CI, 1.53–2.29; P<0.001). The highest hazard ratio was obtained using the MD calculated from the first 13 PCs (2.14 (95% CI, 1.77–2.59; P<0.001)). These associations remained after adjustment for age, being 1.91 (95% CI, 1.58–2.30; P<0.001) and 2.10 (95% CI, 1.75–2.53; P<0.001), respectively. Interestingly, the associations of the MDs calculated from all the different subsets of PCs were stronger in women than in men (P<0.01), which could be interpreted as a further level of interaction among the circulating biomarkers, presumably driven by hormone profiles. Our results are in line with the premise that MD represents an estimate of homeostasis loss.

This prospective study suggests that MD is able to provide information about the emergent property of dysregulation not only in the ageing process, as previously reported, but also in the pathogenesis of T2D.

Mandelbrot's road less taken to 1/f noise: Some little-known history and its enduring relevance | Tuesday 10:20-12:00

Nicholas Watkins

Members of the complexity science community are very familiar with Mandelbrot’s work with Wallis and van Ness in the 1960s on self-similar fractional Brownian motion (fBm) and stationary fractional Gaussian noise (fGn) [1]. fGn has provided an appealing and widely used stochastic solution to modeling the 1/f noise spectra so pervasive in the physical and human worlds. More recently, however, the interest in non-ergodic stochastic models has greatly increased, driven by the experimental accessibility of single trajectories in physics and chemistry [2] and by new insights into some fundamental paradoxes in economics [3]. In physics and chemistry there has been a new interest in fractional renewal models, based on switching between levels at power-law distributed intervals in time, and with application areas including blinking quantum dots [4].

I have thus been greatly surprised to find that Mandelbrot worked on, and published, a series of such non-ergodic fractional renewal models as early as 1963-67. He explicitly drew attention to their non-ergodic properties, in contrast to the fBm and fGn he was working on at the same time. This presentation will summarise my recent historical research [5] in this area and make the case for the enduring interest of aspects of his approach. I will speculate about how the evident lack of awareness of this work across the physics, hydrology, economics and statistics communities may have affected the development of complexity science and its applications, for example in Per Bak’s framing of 1/f noise as a problem requiring a single mechanism (his self-organised criticality) when Mandelbrot was already positing at least two [6]. I will also discuss whether Mandelbrot’s “long lost” work on this model may even have pedagogical importance, because of its role as a waypoint on his route to the multifractal cascade models of the 1970s, which many people find non-intuitive.

[1] Watkins, Mandelbrot’s stochastic time series models, Earth and Space Science, 2019.
[2] Sokolov, Statistics and the single molecule, Physics, 2008.
[3] Peters, The ergodicity problem in economics, Nature Physics, 2019.
[4] Stefani, Hoogenboom and Barkai, Beyond quantum jumps: blinking nanoscale light emitters, Physics Today, 2009.
[5] Watkins, On the continuing relevance of Mandelbrot's non-ergodic fractional renewal models of 1963 to 1967, The European Physical Journal B, 2017.
[6] Watkins, Pruessner, Chapman, Crosby and Jensen, 25 Years of Self-organized Criticality: Concepts and Controversies, Space Science Reviews, 2015.

Mapping human aging with longitudinal multi-omic and bioenergetic measures in a cellular lifespan system | Monday 10:20-12:00

Gabriel Sturm, Jeremy Michelson, Meeraj Kothari, Kalpita Karan, Andres Cardenas, Marlon McGill, Michio Hirano and Martin Picard

Human aging is a complex, multi-level process driven by the interactive forces of genetic, metabolic, and environmental factors. Longitudinal studies of human aging are severely limited by high costs, numerous confounding variables, and the lengthy intervals required to follow people across decades of lifespan. It is possible to longitudinally track human aging in cultured cells across the replicative lifespan, until they reach replicative senescence (i.e. the Hayflick limit). If the molecular features of aging were conserved between in vivo and in vitro aging, this system would enable us to map human aging in a highly controlled experimental environment and accelerated timeframe. Here we longitudinally tracked cellular aging trajectories in primary healthy fibroblasts (n=5 donors) from early passages until replicative senescence (up to 250 days), measuring transcriptomics, proteomics, DNA methylation, secreted factors, and bioenergetic parameters for up to 20 timepoints across the cellular lifespan (i.e. multi-omic kinetics). To evaluate whether physiological properties of human aging were conserved in our culture system, we applied a panel of algorithmic aging predictors trained on cross-sectional human populations, such as DNA methylation age clocks (DNAmAge). DNAmAge clocks could track up to 40 years of biological aging in less than 200 days of culture time, representing a 73-fold accelerated rate of aging relative to physiological aging. This reveals a conserved and accelerated epigenetic aging process in cultured human cells relative to physiological aging. We further aged cells in the presence of several genetic, metabolic, and environmental perturbations to assess how these perturbations affected cellular lifespan trajectories. Genetic perturbations involved culturing fibroblasts derived from patients with mutations in SURF1 (n=3 donors), which encodes an assembly factor of mitochondrial respiratory chain complex IV. SURF1 patients often do not survive past early childhood. Compared to controls, SURF1 cells completed 47% fewer population doublings (lower Hayflick limit), showed a 50% accelerated rate of telomere attrition, and the rate of epigenetic aging was also accelerated by 50%. Metabolic perturbations such as mitochondrial nutrient uptake inhibitors (UK5099, BPTES, and etomoxir), an ATP synthase inhibitor (oligomycin), and glycolytic inhibitors (beta-hydroxybutyrate, 2-deoxyglucose, and galactose) were used to shift energetic processes between oxidative phosphorylation and glycolysis, and produced substantial alterations in cellular lifespan trajectories. For example, based on DNAmAge clocks, shifting bioenergetic demand away from mitochondria decelerated cellular aging by up to 25 biological years. Environmental perturbations included acute and chronic stress exposure using the glucocorticoid receptor agonist dexamethasone (DEX), which mimics cortisol (a psychological stress hormone). Acutely, DEX increased the Hayflick limit of cells by 5%, while chronic DEX reduced the Hayflick limit by 23% and accelerated the induction of several aging biomarkers. Together these findings confirm that mapping longitudinal trajectories of replicative senescence in human-derived fibroblasts recapitulates several key hallmarks of human aging and reveals integrated cell-autonomous responses to genetic, metabolic, and environmental perturbations on cellular lifespan. Future work will focus on integrating these multi-omic kinetic datasets, with the goal of mapping the dynamic network structure of human cellular aging.
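
For concreteness, the quoted ~73-fold acceleration follows from simple unit arithmetic on the figures above:

```python
# Unit arithmetic behind the ~73-fold figure (values taken from the text).
years_of_dnam_aging = 40        # years of DNAmAge change observed in culture
days_in_culture = 200
in_vitro_rate = years_of_dnam_aging / days_in_culture   # years aged per day
in_vivo_rate = 1 / 365.25                               # years aged per day
print(in_vitro_rate / in_vivo_rate)   # -> ~73.05, i.e. ~73-fold acceleration
```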

Methodologies of Building Synergetic Learning Systems | Thursday 10:30-12:00

Ping Guo, Jiaxu Hou and Bo Zhao

In this paper, methodologies for building a synergetic learning system (SLS), a kind of artificial intelligence (AI) system, are presented. In our view, an AI system should be constructed by combining knowledge from multiple disciplines, not merely deep learning. To build the SLS, we need to develop methodologies that draw on the mathematical methods of neurocognitive mechanisms and machine learning, and on knowledge across a wide range of disciplines, including cognitive neuroscience, physics, psychology, medicine, automation, computer science, life science, systems science and social science. The approach is integrated with, and dependent on, statistical physics and the systemic view of complexity science. Our methodologies for building an SLS are investigated systematically and analyzed at multiple scales, levels and perspectives. With our proposed methodologies, an AI system can be constructed with the properties of interpretability, extensibility and evolvability.

A model of cells recruited by a spatiotemporal signalling pattern inducing cell cycle reduction during axolotl spinal cord regeneration | Monday 10:20-12:00

Emanuel Cura Costa, Aida Rodrigo Albors, Elly M. Tanaka and Osvaldo Chara

Axolotls possess outstanding regeneration capabilities after injury: within 8 days post-amputation (dpa), this animal is able to regrow more than 2 mm of spinal cord. Little is known about the mechanisms underlying the process. In previous studies, we found that tail amputation leads to a cell cycle reduction in cells close to the amputation plane (Rodrigo Albors et al., 2015). Subsequently, we identified a high-proliferation zone arising 4 dpa and demonstrated that cell cycle acceleration is the major driver of regenerative growth (Rost et al., 2016). What triggers this spatiotemporal pattern of cell proliferation has not yet been elucidated. Here, using a data-driven modelling approach, we recapitulate the emergence of the tissue outgrowth in terms of cell-based regulation of cell cycle control in both space and time.

References:

Rost F, Rodrigo Albors A, Mazurov V, Brusch L, Deutsch A, Tanaka EM & Chara O. 2016. Accelerated cell divisions drive the outgrowth of the regenerating spinal cord in axolotls. eLife. 5. pii: e20357.

Rodrigo Albors A, Tazaki A, Rost F, Nowoshilow S, Chara O & Tanaka EM. 2015. Planar cell polarity-mediated induction of neural stem cell expansion during axolotl spinal cord regeneration. eLife. 4: e10230.

A Model of the Economy Based on System of National Account and Stock Flow Consistent Theory and its Application | Tuesday 14:30-16:10

Thomas Wang, Tai Young-Taft and Harold Hastings

We propose a model of a closed economy with four interacting components, based upon the System of National Accounts (United Nations, 2008) and Stock Flow Consistent theory (Godley and Lavoie, 2007), to capture the complex dynamics of the economy. This model is similar to the Leontief model; however, it examines the economy from a different perspective. We have added banks as an additional component to the usual three, namely households, firms, and the government. Their interactions are modeled with a four-by-four matrix. As Minsky (1994) states on page 2, “A concrete real-world capitalist economy is characterized by an interrelated set of balance sheets, income statements and statements about sources and uses of funds.” Here we attempt to realize this description of the economy using the above framework, and apply mathematical stability analysis to explain Minsky's financial instability hypothesis.

After analyzing the reasons for changes in asset prices that lead to business cycles, we also propose an asset pricing model that captures macroeconomic dynamics in asset markets. We analyze the structure of the economy and its stability by running simulations under various conditions. Furthermore, we discuss potential applications of the model in identifying crises by observing the effects of changes to variables in the model. We consider preventing or mitigating crises using control theory, for example, when the Federal Reserve lowers interest rates. This presentation consists of three parts: (1) the matrix model of the economy, (2) the macroeconomic asset pricing model and its validation, and (3) stability analysis and potential applications. We find that our model displays cyclic behavior while approaching an equilibrium, and simulates the effects of various shocks to the economy fairly realistically. In particular, we find that a more egalitarian economy appears both to yield a higher rate of return and to be more stable (the Marx-Piketty (2014) thesis). The model in its current state is far from complete or realistic; however, it shows its capabilities by responding to various situations similarly to the real economy. This might provide some policy insights.
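
As a toy illustration of the bookkeeping idea only (not the authors' model), the sketch below iterates a four-sector flow matrix in which every payment by one sector is a receipt of another, so total stocks are conserved by construction; all coefficients are invented.

```python
# Toy illustration: four interacting sectors with invented flow propensities.
# F[i, j] is the flow from sector j to sector i per period, as a fraction of
# sector j's current stock.
import numpy as np

sectors = ["households", "firms", "banks", "government"]
F = np.array([
    [0.00, 0.60, 0.02, 0.10],   # wages, interest, transfers -> households
    [0.70, 0.00, 0.05, 0.15],   # consumption, lending, spending -> firms
    [0.10, 0.10, 0.00, 0.00],   # deposits, loan repayments -> banks
    [0.15, 0.20, 0.03, 0.00],   # taxes -> government
])

stocks = np.array([100.0, 100.0, 50.0, 80.0])
for _ in range(100):
    inflows = F @ stocks                  # each sector's receipts
    outflows = F.sum(axis=0) * stocks     # each sector's payments
    stocks += inflows - outflows          # stock-flow consistent update:
                                          # every payment is someone's receipt,
                                          # so the total stock is conserved
print(dict(zip(sectors, stocks.round(1))))
```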

References

Godley, W., and M. Lavoie. Monetary Economics. Palgrave Macmillan, New York, NY, 2007.

Hastings, Harold M., Tai Young-Taft, and Thomas Wang. “When to Ease Off the Brakes (and Hopefully Prevent Recessions).” Levy Economics Institute Working Paper No. 929, 2019. http://www.levyinstitute.org/publications/when-to-ease-off-the-brakes-and-hopefully-prevent-recessions. Last Accessed May 13, 2020.

Minsky, Hyman P. "Marxian Economics: A Centenary Appraisal" Hyman P. Minsky Archive. 170, 1994. https://digitalcommons.bard.edu/hm_archive/170. Last Accessed May 13, 2020.

Piketty, Thomas. Capital in the 21st Century. Harvard University Press, Cambridge, MA, USA (2014).

United Nations. System of National Accounts 2008. United Nations, New York, NY USA. 2009. https://unstats.un.org/unsd/nationalaccount/sna.asp. Last Accessed May 13, 2020.

Modeling and Simulation Analysis of a Complex Adaptive Systems of Systems Approach to Naval Tactical Warfare | Monday 10:20-12:00

Bonnie Johnson

A complex adaptive systems of systems approach is an engineered solution to a highly complex problem domain. Highly complex problems comprise many diverse objects and events that are interrelated, rapidly changing, and give rise to unpredictable, severe, and often destructive consequences. A recent study produced a grounded theory that defines complex adaptive systems of systems as a new class of engineered systems that can address highly complex problem domains. Naval tactical warfare was identified as a highly complex problem domain and served as a use case for studying the complex adaptive systems of systems theory. A modeling and simulation analysis of a complex naval tactical scenario produced a comparative analysis of the existing traditional approach to naval warfare against the new theoretical complex adaptive systems of systems approach. This paper describes the naval tactical modeling and simulation approach and presents the results of the comparative analysis, providing insight into engineered complex adaptive systems of systems solutions and demonstrating the validity of the theory.

Modelling any complex system using Hilbert’s World Formula approach | Wednesday 16:40-18:00

Troy Vom Braucke and Norbert Schwarzer

Especially when it comes to simulating complex systems, humans tend to linearize their understanding of a system at far too early a stage in the process of abstraction.

Hilbert’s almost forgotten world formula approach [1] shows a method that retains full ‘complexity and non-linearity’ for any complex system while remaining mathematically manageable [2]. This leads to a fundamental quantum-gravity description of the system and thus allows for an extremely general consideration of any problem in mathematical form [3, 4].

Complexity here corresponds to the number of degrees of freedom and thus to the number of dimensions: the dimensions form space, entangle, and subsequently give rise to masses, spin, inertia, potentials, and so on. In this sense, the world formula offers a maximally general description of any complex system.

[1] D. Hilbert, Die Grundlagen der Physik, Teil 1, Göttinger Nachrichten, 395-407 (1915)
[2] N. Schwarzer, "The World Formula: A Late Recognition of David Hilbert's Stroke of Genius", ISBN: 9789814877206 (hardcopy available in print in September 2020 from T&F and Jenny Stanford Publishing)
[3] N. Schwarzer, “Societons and Ecotons - The Photons of the Human Society - Control them and Rule the World”, Part 1 of “Medical Socio-Economic Quantum Gravity”, Self-published, Amazon Digital Services, 2020, Kindle
[4] N. Schwarzer, “Mastering Human Crises with Quantum-Gravity-based but still Practicable Models - First Measure: SEEING and UNDERSTANDING the WHOLE: Part 3 of Medical Socio-Economic Quantum Gravity”, Self-published, Amazon Digital Services, 2020, Kindle

Mosaic: Using AI to rightly shape complexity in emergent behavioral systems for DoD | Monday 10:20-12:00

Harrison Schramm, Bryan Clark and Daniel Patt

Recent advances in Artificial Intelligence have allowed for an explosion both in the complexity of the dilemmas that human decision-makers face and in the complexity of the ‘digital partners’ built to conquer them. This paper focuses on the study of emergent complexity in decision making as part of the Mosaic warfare construct proposed by the Defense Advanced Research Projects Agency (DARPA). In order to prepare for the next generation of warfare, these questions need to be addressed now.

The specific focus of our talk is man-machine teaming in the command cell, by which we mean the interaction between expert systems, including AI used to make fast decisions on the allocation of resources, measured against combat effectiveness, deception and risk.

Our work focuses on understanding complexity in the sense that we seek to create systems that, through optionality, impose decision dilemmas on an adversary yet lead to straightforward decisions for ourselves and our allies. A key tool in artfully crafting the right sort of complexity is purpose-built narrow AI. A novel feature of this work is using AI both to drive the kinetic performance of a combat force and simultaneously to craft the apparent complexity; both measures are applied to the same fundamental decision space.

The technical aspects of this topic lead to the heart of forefront topics in both AI and modern optimization, specifically the tradeoff between fragility and complexity as a reflection of the fundamental conflict between efficiency and resilience. Our talk focuses on building prototype interactive tools in the R programming environment, using Shiny, Keras and VisNetwork as enablers.

Motion and Emotion: Tracking Sentiment and Spread of COVID-19 in New York City | Wednesday 15:00-16:40

Elizabeth Marsh, Dawit Gebregziabher, Nghi Chau and Shan Jiang

Physical distancing, often in the form of ‘stay-at-home’ orders, has been the primary measure for reducing the spread of COVID-19 in the United States in 2020. As the U.S. attempts to return to normal mobility patterns while also mitigating the spread of the virus, it is important to understand the effect of distancing measures on the population. Here we investigate the impacts of distancing measures through two analyses: a sentiment analysis and a mobility analysis. First, the sentiment analysis uses geo-tagged Tweets to understand changing emotions through the course of the pandemic. Second, the mobility analysis examines how the spatiotemporal spread of COVID-19 correlated with human mobility and how this pattern changed over time. Our sentiment analysis showed that the most frequent words used in tweets in the first few weeks of the pandemic were related to job loss. We also found that Tweets from Democratic-voting “blue” states had relatively lower levels of happiness and trust, while showing more anger. Our mobility analysis showed that while overall mobility was greatly reduced during the pandemic, a “new normal” mobility pattern emerged in which most travel was conducted by essential workers. We then use this “new normal” as a framework to understand how stay-at-home orders affect the spread of the disease. Our results can be used to inform future policy, to help best curb the pandemic while also preserving the emotional wellbeing of affected populations.

Multi-Agent Simulations of Intra-colony Violence in Ants | Thursday 10:30-12:00

Kit Martin and Pratim Sengupta

This paper seeks to elucidate key aspects of a rarely-studied interaction in ant colonies --- intra-colony violence --- using multi-agent-based computational simulations. A central finding is that intra-colony violence is heritable, though not prevalent. Results from our simulations reveal the specific conditions in which such infrequent forms of violence occur and can be inherited, which in turn helps us understand why Atta cephalotes may persist in killing colony members even though doing so dampens colony carrying capacity. We also discuss the concerns and implications of our work for modeling conflict and violence more broadly, which in turn raises questions about the ontological nature of computational and evolutionary models.

Multi-level Co-authorship Network Analysis on Interdisciplinarity: A Case Study on the Complexity Science Community | Wednesday 10:50-12:30

Robin Wooyeong Na and Bongwon Suh

Determining interdisciplinarity has become crucial as the importance of such collaboration is increasingly appreciated. We aim to demonstrate the interdisciplinarity of the complexity science community through a multi-level co-authorship network analysis. We conduct a multi-level analysis to avoid conflating multidisciplinarity with interdisciplinarity. We build a weighted co-authorship network and label each author’s disciplines based on the self-declared interests available in their Google Scholar accounts. Our work uses an applied form of Shannon entropy to measure the diversity of disciplines. While demonstrating multidisciplinarity by measuring diversity at the global level, we utilize a community detection algorithm and show interdisciplinarity via a group-level analysis. We also conduct an individual-level analysis by measuring the neighbor diversity for each node and comparing it with the author’s degree centrality. Our research shows that the diversity of disciplines in the complexity science community stems from interdisciplinary characteristics at the group and individual levels. The model can be applied to other heterogeneous communities to determine how truly interdisciplinary they are.
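
A minimal sketch of the two diversity measures, with an invented toy network and discipline labels: Shannon entropy of the global discipline distribution, and per-author neighbor diversity set against degree.

```python
# Hedged sketch of the diversity measures described above (invented data).
import math
from collections import Counter
import networkx as nx

G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")])
discipline = {"a": "physics", "b": "biology", "c": "physics", "d": "sociology"}

def shannon_diversity(labels):
    """Shannon entropy of the discipline distribution (natural log)."""
    counts = Counter(labels)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Global-level diversity of the whole community.
print(shannon_diversity(discipline.values()))

# Individual-level: diversity of each author's co-author neighborhood,
# to be compared against the author's degree centrality.
for v in G:
    nbr_disciplines = [discipline[u] for u in G[v]]
    print(v, G.degree(v), shannon_diversity(nbr_disciplines))
```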

Multifractal-based Analysis for Early Detection of Central Line Infections | Wednesday 10:50-12:30

David Slater, James Thompson, Haven Liu and Leigh Nicholl

Many signals are scale invariant, meaning they exhibit self-similarity across scales and generally do not simplify under magnification. They are instead characterized by a power-law relationship that equates the properties of the signal across different resolutions. These properties are the hallmarks of multifractal geometry and they lead to unruly signals that fail to meet the assumption of stationarity required for traditional signal processing. The objective of multifractal analysis is to quantify how statistical moments of a signal change as different resolutions are analyzed. The non-stationary aspect of multifractal signals suggests the moments will change across scales but do so according to a quantifiable power-law.
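
The idea can be illustrated with a direct structure-function estimate on a synthetic signal (real analyses typically use wavelet leaders or MFDFA; this simpler variant is only a sketch):

```python
# Minimal sketch: estimate how the q-th moments of a signal's increments
# scale with resolution; a power-law fit per moment order q.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(4096).cumsum()      # placeholder signal

scales = [2, 4, 8, 16, 32, 64]
qs = [1, 2, 3, 4]
zeta = {}
for q in qs:
    # S_q(s) = mean |x(t+s) - x(t)|^q, expected to follow S_q(s) ~ s^zeta(q)
    Sq = [np.mean(np.abs(x[s:] - x[:-s]) ** q) for s in scales]
    zeta[q] = np.polyfit(np.log(scales), np.log(Sq), 1)[0]

# A nonlinear (concave) zeta(q) signals multifractality; a linear
# zeta(q) = q*H indicates a monofractal signal.
print(zeta)
```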

Given that the human body is a complex system consisting of over a trillion cells it is not surprising that non-stationary signals and datasets are ubiquitous in health care. A prime example is the series of time intervals between spikes in the R-wave of the heart as measured by an EKG known as the RR-interval. This series of inter-arrival times has been shown to be multifractal and it is an important factor in determining the health state of patients. The objective of this research is to determine if signatures related to changes in health state can be detected using the multifractal properties of the RR-interval of a given patient.

A central line-associated bloodstream infection (CLABSI) is defined as an infection that occurs within 48 hours of the insertion of a central venous catheter that is not related to other causes and is laboratory-confirmed. Studies have shown that more than half of ICU patients and close to one-fourth of non-ICU patients have a central line. CLABSI is both preventable and expensive to treat, so early detection of the infection is critical to minimizing harm to the patient and managing the overall cost of health care.

In partnership with Boston Children’s Hospital (BCH), we researched the utility of analyzing multifractal features of RR-intervals as a tool for early detection of the onset of CLABSI. Using data provided by BCH, we constructed a cohort of 450 patients broken down into 4-hour intervals of continuous heart wave monitoring. For each interval we computed the multifractal spectrum and extracted the first two log-cumulants as the defining features. We then attempted to reconstruct patient timelines from the database in order to isolate the most likely time the central line was inserted and the time of CLABSI onset. Using the multifractal log-cumulants, average temperature, heart rate, and age, we applied various machine learning techniques to classify CLABSI versus non-CLABSI patients. We concluded that RR-intervals are indeed multifractal and that evidence exists that CLABSI patients exhibit a less complex signal (i.e., a narrower multifractal spectrum) than patients without CLABSI. Regarding an early detection procedure, there were a number of confounding factors. The most salient was that BCH generally treats extremely sick children whose heart rates are likely already altered by their underlying condition. More research is required to overcome these factors before an early detection procedure can be implemented.

Multiplex Markov Chains | Wednesday 10:50-12:30

Dane Taylor

Multiplex networks are a common modeling framework for interconnected systems and multimodal data, yet we still lack fundamental insights into how multiplexity affects stochastic processes. We introduce a "Markov chains of Markov chains" model such that with probability 1−ω, where ω∈[0,1], random walkers remain in the same layer and follow (layer-specific) "intralayer Markov chains", whereas with probability ω they move to different layers following (node-specific) "interlayer Markov chains". By coupling "Markov-chain layers" versus "network layers", we identify novel multiplexity-induced phenomena including "multiplex imbalance" (whereby the multiplex coupling of reversible Markov chains yields an irreversible one) and "multiplex convection" (whereby a stationary distribution exhibits circulating flows that involve multiple layers). These phenomena (as well as the convergence rate λ2) are found to exhibit optima for intermediate ω, and we explore their relation to imbalances in the intralayer degrees of nodes. To provide analytical insight, we characterize stationary distributions when there is timescale separation between transitions within and between layers (i.e., ω→0 and ω→1). [See https://arxiv.org/abs/2004.12820 for more info.]
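
A minimal sketch of this construction with toy dimensions (not the paper's code): build the supra-transition matrix over (node, layer) states and read off the stationary distribution.

```python
# Hedged sketch of the "Markov chains of Markov chains" construction: states
# are (layer, node) pairs; with probability 1-w a walker follows its layer's
# intralayer chain, with probability w it switches layers via the node's
# interlayer chain.
import numpy as np

rng = np.random.default_rng(3)
N, L, w = 4, 2, 0.3

def random_stochastic(n):
    M = rng.random((n, n))
    return M / M.sum(axis=1, keepdims=True)

P = [random_stochastic(N) for _ in range(L)]  # intralayer chains, one per layer
Q = [random_stochastic(L) for _ in range(N)]  # interlayer chains, one per node

# Supra-transition matrix over the N*L states, indexed as (layer, node).
S = np.zeros((N * L, N * L))
for l in range(L):
    for i in range(N):
        s = l * N + i
        for j in range(N):                    # stay in layer l, move i -> j
            S[s, l * N + j] += (1 - w) * P[l][i, j]
        for m in range(L):                    # stay at node i, move l -> m
            S[s, m * N + i] += w * Q[i][l, m]

# Stationary distribution: left eigenvector of S with eigenvalue 1.
vals, vecs = np.linalg.eig(S.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(pi.reshape(L, N))   # stationary occupation of each (layer, node) state
```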

Naruto Complex Network: A mathematical approach to a fictional universe | Wednesday 10:50-12:30

Aidee Lashmi García-Kroepfly, Iván Oliver-Domínguez, Jesús Hernández-Falcón and Karina Mendoza-Ángeles

The study of complex systems is gaining importance in different fields of science. Even though there is no precise definition of what complex systems are, they share general characteristics: for example, they comprise different elements that interact with each other, each component has a specific function inside the system, and the interactions between components give rise to emergent properties. There are different methods to study complex systems, and one of them is network theory. A network is a set of nodes that interact with each other and may be connected by edges. With network theory we can study different types of phenomena, from microscopic events, like DNA interactions through a genetic regulation network, to macroscopic ones, like social interactions with the help of social networks. Networks also have shared properties that give information about the system under study; in social networks, certain properties often appear, such as the small-world effect, scale-free topology, communities, hubs and other measurable variables. Japanese animation, also known as anime, is a manifestation of Japan’s popular culture that has won popularity across the world. “Naruto” is one anime that has extended its popularity worldwide, as it has been translated into 23 different languages and broadcast in more than 60 countries. Over the fifteen years that the series lasted, it presented its audience with a large, evolving universe, introducing different nations and characters into the story.

In this work, we constructed three different networks based on the Naruto animated series, each built under different criteria, in order to evaluate whether a fictional universe may present small-world network properties and topology, and how these criteria change each network even when the data used for their construction have the same origin, giving students the possibility to understand the world of complex networks through a different approach.

For the construction of the networks, all characters who appeared in the series were considered nodes, and in each network the criterion for connecting nodes changed. In the first network (N1) the condition was appearing in the same episode; in the second (N2), talking to each other during an episode; and in the third (N3), fighting alongside each other against a common enemy. We watched the episodes of the series and recorded the data for each episode in an Excel worksheet; all the information collected was used to construct the networks in Python with the library NetworkX, and the networks were then visualized with Cytoscape.
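
A sketch of the N1 construction with NetworkX, the library named above; the per-episode character lists here are hypothetical stand-ins for the collected worksheet data.

```python
# Sketch of the N1 network: connect characters co-appearing in an episode.
from itertools import combinations
import networkx as nx

# Hypothetical per-episode character lists, as collected from the worksheet.
episodes = [
    {"Naruto", "Sasuke", "Sakura", "Kakashi"},
    {"Naruto", "Iruka"},
    {"Naruto", "Sasuke", "Zabuza", "Haku", "Kakashi"},
]

G = nx.Graph()
for cast in episodes:
    G.add_edges_from(combinations(cast, 2))   # connect all co-appearing pairs

# Centrality rankings used to identify the highly connected nodes.
for name, scores in [
    ("degree", nx.degree_centrality(G)),
    ("betweenness", nx.betweenness_centrality(G)),
    ("closeness", nx.closeness_centrality(G)),
    ("eigenvector", nx.eigenvector_centrality(G)),
]:
    top = max(scores, key=scores.get)
    print(f"{name}: {top} ({scores[top]:.2f})")
```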

In the resulting networks, different communities were identified, and inside each community highly connected nodes appeared, each ranking high in degree, betweenness, closeness and eigenvector centrality. Only N1 fulfills all the characteristics required to be considered a small-world network, like some real-world ones (Facebook, scientific collaborations). All three networks shared the same character as their highest-ranked node in degree, betweenness and closeness centrality: the anime's protagonist, Naruto.

Negative Representation and Instability in Democratic Elections | Wednesday 15:00-16:40

Alexander Siegenfeld and Yaneer Bar-Yam

The challenge of understanding the collective behaviors of social systems can benefit from methods and concepts from physics, not because humans are similar to electrons, but because certain large-scale behaviors can be understood without an understanding of the small-scale details, in much the same way that sound waves can be understood without an understanding of atoms. Democratic elections are one such behavior. Here, we define the concepts of negative representation, in which a shift in electorate opinions produces a shift in the election outcome in the opposite direction, and electoral instability, in which an arbitrarily small change in electorate opinions can dramatically swing the election outcome. Under general conditions, we prove that unstable elections necessarily contain negatively represented opinions. Furthermore, in the presence of low voter turnout, increasing polarization of the electorate can drive elections through a transition from a stable to an unstable regime, analogous to the phase transition by which some materials become ferromagnetic below their critical temperatures. Empirical data suggest that the United States’ presidential elections underwent such a phase transition in the 1970s and have since become increasingly unstable.

Network Subgraphs in Real Problem-Solving Networks | Monday 10:20-12:00

Dan Braha

Understanding the functions carried out by network subgraphs is important to revealing the organizing principles of diverse complex networks. Here, we study this question in the context of collaborative problem-solving, which is central to a variety of domains from engineering and medicine to economics and social planning. Our empirical results show that unrelated problem-solving networks display very similar local network structure, implying that network subgraphs could represent organizational routines that enable better coordination and control of problem-solving activities.
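
One standard way to compare local subgraph structure across networks, offered only as an illustration (the study's exact motif analysis may differ), is a triad census on directed graphs:

```python
# Hedged sketch: compare local (3-node) subgraph structure of two networks
# via NetworkX's triad census; random graphs stand in for real data.
import networkx as nx

G1 = nx.gnp_random_graph(50, 0.1, seed=1, directed=True)
G2 = nx.gnp_random_graph(50, 0.1, seed=2, directed=True)

c1, c2 = nx.triadic_census(G1), nx.triadic_census(G2)
for triad in sorted(c1):
    print(triad, c1[triad], c2[triad])   # counts of each 3-node pattern
```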

Neuromorphic Nanowire Networks: Topology and Function | Tuesday 14:30-16:10

Alon Loeffler, Ruomin Zhu, Joel Hochstetter, Mike Li, Adrian Diaz-Alvarez, Tomonobu Nakayama, James M Shine and Zdenka Kuncic

Graph theory has been extensively applied to the topological mapping of complex networks, ranging from social networks to biological systems. It has also increasingly been applied to neuroscience as a method to explore the fundamental structural and functional properties of human neural networks. Here, we apply graph theory to a model of a novel neuromorphic system constructed from self-assembled nanowires, which we call neuromorphic nanowire networks (NNNs), whose structure and function may mimic that of human neural networks.

Simulations of NNNs allow us to directly examine their topology at the individual nanowire–node scale. This type of investigation is currently practically impossible to perform experimentally. We apply network cartographic approaches to compare NNNs with: random networks (including an untrained artificial neural network); grid-like networks; and the structural network of C. elegans. Finally, we run a non-linear transformation simulation task using each network as a reservoir, to determine how network topology might influence performance.
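
For reference, a common small-world check (a sketch, not the authors' pipeline) compares a network's clustering and average path length with those of a size-matched random graph; the small-world propensity reported below is a related but distinct measure that this sketch does not implement.

```python
# Sketch of a standard small-world comparison on a placeholder graph.
import networkx as nx

G = nx.connected_watts_strogatz_graph(100, 6, 0.1, seed=0)  # stand-in for a NNN

C = nx.average_clustering(G)
Lp = nx.average_shortest_path_length(G)

# Random reference with the same number of nodes and edges.
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)
if not nx.is_connected(R):
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()
Cr = nx.average_clustering(R)
Lr = nx.average_shortest_path_length(R)

# Small-world signature: clustering much higher than random, similar path length.
print(f"C/C_rand = {C / Cr:.2f}, L/L_rand = {Lp / Lr:.2f}")
```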

Our results demonstrate that NNNs exhibit a small-world architecture similar to the biological network of C. elegans, and significantly different from random and grid-like networks. Furthermore, NNNs appear more segregated and modular than random, grid-like and simple biological networks, and more clustered than artificial neural networks. We also show that NNNs with a small-world propensity between 0.6 and 0.65 tend to require lower input voltage to achieve higher accuracy on a non-linear transformation task than other networks.

Given the inextricable link between structure and function in neural networks, these results may have important implications for mimicking cognitive functions in neuromorphic nanowire networks.

New Frontiers in Quantum Information Theory | Monday 14:30-16:10

Maurice Passman, Philip Fellman and Jack Sarfatti

In our companion paper, "Bohmian Frameworks and Post Quantum Mechanics", we examined the foundational linkage between David Bohm’s ontological interpretation of quantum mechanics and how this interpretation could be extended so that a particle is not just guided by the quantum potential but, in turn, through back-activity, modifies the quantum potential field. Back-activity introduces nonlinearity into the evolution of the wave function, much like the bidirectional nonlinear interaction of space-time and matter-energy in general relativity. This generalisation has been called Post Quantum Mechanics. The mathematical exposition presented therein is developed in the present paper, linking Dimensional Analysis, Fractal Tessellation and Self-Organized Criticality to describe new mechanisms of superluminal information transfer and mechanisms for exploiting closed timelike curves.

Nonlinearity, time directionality and evolution in Western classical music | Monday 14:30-16:10

Alfredo González-Espinoza, Gustavo Martínez-Mekler, Lucas Lacasa and Joshua Plotkin

Traditional quantitative analyses of the temporal structure underlying musical pieces have mainly addressed linear correlations. It is widely accepted that music presents 1/f noise, a fingerprint of “appealing sound”. Here, we present a statistical analysis inspired by statistical physics and nonlinear dynamics concepts, in which we suggest that the pleasantness of music is related not only to linear structure but also to nonlinearities present in musical compositions, and we relate nonlinearity to other concepts such as time irreversibility and interval asymmetry. Finally, going beyond musical structure and inspired by evolutionary theory, we present an exploratory analysis of the changing frequencies of n-grams of intervals, dissonance of chords, rhythmical patterns, and measure-to-measure variation in keys over the course of several centuries of Western musical scores.
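
As one concrete (and deliberately simple) example of the kind of nonlinear statistic involved, a time-irreversibility check can compare the third moment of a melodic interval series against zero, its value for a time-reversible process; the pitch data here are random placeholders.

```python
# Hedged sketch of a simple time-irreversibility statistic on a melodic
# interval series; illustrative only, not the authors' exact measure.
import numpy as np

rng = np.random.default_rng(4)
pitches = rng.integers(60, 72, size=500)   # placeholder MIDI pitch sequence
intervals = np.diff(pitches)               # melodic intervals (increments)

# For a stationary time-reversible series, odd moments of increments vanish;
# a normalized third moment far from zero signals irreversibility/asymmetry.
asymmetry = np.mean(intervals ** 3) / np.mean(intervals ** 2) ** 1.5
print(asymmetry)
```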

Not Everything is black or white: A New Polarization Measurement Approach | Tuesday 10:20-12:00

Juan A. Guevara-Gil

The measurement of polarization has been studied over the last thirty years. Despite the different approaches applied, and since the concept of polarization is complex, there is a lack of consensus about how it should be measured. Here, we propose a new approach to the measurement of the polarization phenomenon based on fuzzy set theory. The fuzzy approach provides a new perspective in which elements admit degrees of membership. Since reality is not black and white, a polarization measure should include this key characteristic. For this purpose, we analyze the properties of polarization metrics and develop a new risk-of-polarization measure using aggregation operators and overlap functions. We simulated a sample of N = 391,315 cases on a 5-point Likert scale with different distributions to test our measure. Other polarization measures were applied for comparison in situations where the fuzzy set approach offers different results, and membership functions proved to play an essential role in the measurement. Finally, we want to highlight the new and potential contribution of the fuzzy set approach to polarization measurement, which opens a new field of research.
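
A loose illustration of the fuzzy idea (the authors' aggregation operators and overlap functions may differ substantially): grade each Likert response's membership in two opposing poles and aggregate the pole memberships into a risk score.

```python
# Illustrative sketch only: graded pole memberships on a 5-point Likert scale,
# aggregated (here by product) into a polarization risk score.
import numpy as np

rng = np.random.default_rng(5)
responses = rng.integers(1, 6, size=10000)   # stand-in for survey answers

# Graded membership of each Likert value in the low and high poles.
mu_low = {1: 1.0, 2: 0.75, 3: 0.5, 4: 0.25, 5: 0.0}
mu_high = {k: 1.0 - v for k, v in mu_low.items()}

low = np.array([mu_low[r] for r in responses])
high = np.array([mu_high[r] for r in responses])

# Product aggregation of mean pole memberships; scaled so that a population
# split evenly between the two extremes scores 1.
risk = np.mean(low) * np.mean(high) * 4
print(risk)
```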

A Novel Viewpoint on Social Complexity and the Evolution Model of Social Systems based on Internal Mechanism Analysis | Monday 10:20-12:00

Wei Wang

Social systems are composed of people and the environments they live in. These systems reflect the relationships among people, and between human beings and their environments. Because individuals have different beliefs, wishes and demands, and environments carry a series of uncertainties, the complexity of social systems is inevitable. Analyzing and understanding this complexity is very important: it helps us model and predict the system and build a better society. In this paper, some relevant definitions and methods are summarized, such as definitions of complexity, the components of social systems, hierarchical and multiscale analysis of social systems, network methods for social factors, statistical methods, and index methods. The complexity of social systems is analyzed from the viewpoint of the original motivations of social development. For modelling the evolution of the main social indexes, a novel spatial-temporal model is proposed, in which the spatial part is a fractal structure model iterated by a self-interactive process with external factors, and the temporal part is a diffusion process determined by the temporal natural, social, and political conditions.

On Crashing the Barrier of Meaning in Artificial Intelligence | Tuesday 16:20-17:00

Melanie Mitchell

In 1986, the mathematician and philosopher Gian-Carlo Rota wrote, “I wonder whether or when artificial intelligence will ever crash the barrier of meaning.” Here, the phrase “barrier of meaning” refers to a belief about humans versus machines: humans are able to “actually understand” the situations they encounter, whereas AI systems (at least current ones) do not possess such understanding. The internal representations learned by (or programmed into) AI systems do not capture the rich “meanings” that humans bring to bear in perception, language, and reasoning.

In this talk I will assess the state of the art of artificial intelligence in several domains, and describe some of their current limitations and vulnerabilities, which can be accounted for by a lack of true understanding of the domains they work in. I will explore the following questions: (1) To be reliable in human domains, what do AI systems actually need to “understand”? (2) Which domains require human-like understanding? And (3) What does such understanding entail?

On Efficiency and Predictability of the Dynamics of Discrete Boolean Networks | Monday 10:20-12:00

Predrag Tosic

We discuss possible interpretations of "(in)efficient" and "(un)predictable" systems when those concepts are applied to discrete networks and their (deterministic) dynamics. We are interested in computational notions of the predictability and efficiency of network dynamics. A network's dynamics is efficient if it settles quickly into an appropriate stationary pattern, such as a fixed-point (stable) configuration or a temporal cycle; that is, efficiency here is viewed as essentially synonymous with a short transient chain. An inefficient dynamical system, then, is one that takes a long time to converge to a stationary pattern.
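
A small sketch of measuring efficiency in this sense on a random Boolean network: iterate the deterministic update from one start configuration and record the transient length and attractor cycle length.

```python
# Sketch: transient length and attractor of a random Boolean network (K=2).
import numpy as np

rng = np.random.default_rng(6)
n, k = 12, 2
inputs = [rng.choice(n, size=k, replace=False) for _ in range(n)]
tables = [rng.integers(0, 2, size=2 ** k) for _ in range(n)]   # random rules

def step(state):
    """One synchronous deterministic update of all n nodes."""
    return tuple(
        tables[i][int("".join(str(state[j]) for j in inputs[i]), 2)]
        for i in range(n)
    )

state = tuple(rng.integers(0, 2, size=n))
seen = {state: 0}
t = 0
while True:
    state = step(state)
    t += 1
    if state in seen:                       # entered the attractor
        transient, cycle = seen[state], t - seen[state]
        break
    seen[state] = t
print(f"transient length {transient}, attractor cycle length {cycle}")
```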

The issue of (in)efficiency is orthogonal to another important concept, that of the network dynamics' predictability. We call dynamics predictable if its "ultimate destiny" (such as convergence to a stationary behavior, which in deterministic systems means either a "fixed point" configuration or a fixed temporal cycle) can be effectively determined or predicted without performing a step-by-step simulation of the network's dynamics. This notion of predictability is intrinsically computational; for systems with finite configuration spaces, all properties are decidable, but particular properties may or may not be predictable within reasonable (that is, polynomial in the number of network nodes) computational resources. We therefore overview the computational complexity of reachability-flavored problems for several classes of cellular and network automata, arguing that the predictable systems modeled by those formal automata models are those for which reachability problems are computationally tractable.

Note that this concept of predictability implies that a system, some aspects of whose dynamics may well be complex and in particular computationally intractable, would still qualify as predictable if it converges relatively fast, regardless of what its dynamics converge to. For example, consider networks for which the number of possible dynamical evolutions, across all possible starting configurations, is in general intractable to determine or even approximately estimate (the problem of counting "fixed points", denoted #FP, is one important instance of this more general problem of enumerating all possible dynamics of a dynamical system): their predictability is determined by how hard it is to determine how fast, and to what stationary behavior, such a system converges, starting from an arbitrary initial configuration. The predictability question is therefore different both from the efficiency question and from the "diversity of possible dynamics" question, the latter being closely related to the enumeration of all possible asymptotic dynamical evolutions of the underlying dynamical system.

Last but not least, we outline several classes of (finite) cellular, graph and network automata that fall into different "quadrants" with respect to these two orthogonal complexity parameters, namely efficiency and predictability. The "nicest" systems with respect to these two parameters are those that are both efficient and predictable; classical (finite) cellular automata with the Majority update rule fall into this category. The most challenging dynamics to analyze and predict are, obviously, those of discrete networks that are both inefficient and unpredictable; several popular and widely studied classes of network and graph automata turn out to fall into that category. We review several interesting classes of CA and Boolean networks that are both efficient and predictable, as well as those that are in general unpredictable yet efficient, and lastly those that, in spite of simple description and structure, are provably both unpredictable and inefficient.

On modeling bat coronavirus ecological interaction from different biodiversity dimensions through machine learning | Monday 14:30-16:10

Angel Luis Robles-Fernandez, Andrés Lira-Noriega and Diego Santiago-Alarcon

Based on freely available data sources, this study aims to foster innovative research and discovery in biodiversity computing. The main objective is to model pathogen-host interactions from a database designed to relate different dimensions of biodiversity. Through the analysis of these data with the latest machine learning technologies, a general study framework is proposed in which susceptibility to a pathogen is related, for a pair of species, to their geographical, environmental and phylogenetic distances. Finally, this research aims to release both the methods and the databases in a software package accessible to the entire interested community. We demonstrate the applicability of this framework in a case study predicting the susceptibility of bat species worldwide to infection by different strains of coronavirus. The implementation of this methodological approach suggests the utility of machine learning routines for predicting pathogen-host interaction risk; although predictions are sensitive to the data used for calibration, the biological pattern is consistent across species and geographical patterns, allowing potential interaction hotspots to be found with high accuracy. We suggest this methodological approach will be useful for looking at past, present, and future biodiversity patterns, and for identifying the potential evolutionary and environmental processes underlying these trends. This research will provide both an analytical method to quantify the probability of host-pathogen interactions at coarse spatial resolutions and over large geographical extents, and results that can be incorporated into decision making by different agents of the scientific community or used as hypotheses for future theoretical and in-field research.
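
A hedged sketch of the framework's core step, with synthetic data standing in for the real distances and susceptibility labels: train a generic classifier on pairwise geographic, environmental, and phylogenetic distances.

```python
# Hedged sketch: predict pairwise host susceptibility from three distances
# between species pairs, using a generic classifier on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n_pairs = 500
# Columns: geographic, environmental, phylogenetic distance per species pair.
X = rng.random((n_pairs, 3))
# Toy label rule: closer pairs are more likely to share pathogen susceptibility.
y = (X.sum(axis=1) + rng.normal(0, 0.3, n_pairs) < 1.2).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(model.predict_proba([[0.1, 0.2, 0.1]]))   # interaction risk for one pair
```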

On Opportunities and Challenges of AI and Robotics Applications in the Next-Generation Health Care | Thursday 10:30-12:00

Predrag Tosic

We discuss several emerging applications of AI, Data Science and Social Robotics in the context of next-generation health care. In most of the post-industrialized world, human populations are rapidly aging, health care costs are rising, and the supply of a young (human) workforce to provide care for the growing elderly populations is insufficient. While historically many of these societies (e.g., the United States, Canada and most of western and northern Europe) have met the growing demand through increased immigration of health care practitioners (in particular, "importing" medical doctors and especially nurses from the developing world), the rising costs of personalized health care will very likely render this traditional socio-economic solution unsustainable in the longer term. It is our view that technology-driven solutions to health care challenges in general, and to the care of the elderly in particular, will not only become economically more feasible than hiring a relatively less expensive human workforce, but will actually become unavoidable in the longer run.

With the rise of AI, robotics, "big data" and data/knowledge-based technologies, many new avenues are opening up for addressing these challenges in health care. We critically overview several of these new opportunities. Some of the key opportunities for AI- and data-driven solutions for major health care challenges in the first half of the 21st century include the following:

i) improved remote patient monitoring via smart wireless technologies and wearable devices;

ii) smart alert and recommendation systems based on such technologies combined with mining a wealth of personalized health- and lifestyle-related data;

iii) cost-effective next-generation geriatric care with the help of social, empathic robots that provide necessary care as well as much-needed companionship to the elderly and the sick;

iv) advanced diagnostics assisted by the AI "domain experts" and intelligent agent technologies integrating feedback from a variety of human and AI experts for customized care and treatment of individual patients;

and

v) progress towards mostly or fully autonomous robotic surgery.

Among the outlined opportunities, our talk will focus in more detail on smart alert & recommendation systems on the one hand, and on the design and deployment of social, empathic robotic nurses for geriatric care on the other. We briefly outline some recent research progress in these two exciting sub-domains of next-generation health care, and make some predictions about near- to medium-term future developments.

These and other AI-based, technology-driven areas of NG health care will open many exciting opportunities and make advanced health care accessible to broader elderly and in-need-of-care populations than ever before. However, as is usually the case, with great opportunities also come a number of challenges, not only in terms of researching and developing reliable AI technologies for health care, but also from the standpoints of public health policy, of training medical personnel, social workers and other stakeholders, and of adoption of the rapidly changing (and potentially intimidating!) AI and robotics technologies that will be increasingly deployed in a broad variety of health care and elderly care contexts in the not-too-distant future. Therefore, in addition to discussing those opportunities, we also reflect on some of the major challenges that we predict will face a broader adoption of AI-, Robotics- and Data-driven technologies to enable and enhance the next-generation health care.

On the formal incompleteness of reductive logic | Wednesday 15:00-16:40

J Rowan Scott

A proof of formal incompleteness associated with ‘bottom-up’ reductive logic has extensive implications for the future of the Reductive Sciences and the relationship the Reductive Sciences have with the developing Complexity Sciences. A proof is explored and some of the implications for the Reductive and Complexity Sciences are spelled out.

On the Middleware Design, Cyber-Security, Self-Monitoring and Self-Healing for the Next-Generation IoT | Wednesday 15:00-16:40

Predrag Tosic

Internet-of-Things (IoT) is one of the most significant new paradigms and technological advances in the realm of Internet-enabled cyber-physical systems of the past decade. IoT will play an increasing role across many industries, as well as in our everyday lives, for years, probably decades, to come. IoT poses many research, technology-development and policy-making challenges to those designing, deploying and using its various components. We are interested in the distributed-intelligence, cyber-security and multi-agent-system aspects of what it will, in our prediction, most likely take to enable a reliable, secure, scalable, inter-operable and human-friendly next-generation (NG) Internet-of-Things. More specifically, in this work we focus on the aspects pertaining to distributed software design, "smart" middleware and "smart" self-monitoring, cyber-protection and self-healing of the NG IoT.

Insofar as identifying and applying the most suitable programming abstraction for an intrinsically open and heterogeneous, large-scale distributed system such as IoT, we argue that the classical Actor model of distributed computing is a suitable yet under-explored programming paradigm for IoT protocols and infrastructure. The Actor model is based on simple object-like named and addressable entities, called actors, that communicate with each other via asynchronous message passing. The Actor model originally goes back to the late 1970s; however, until recently it had not found a large-scale "killer application" in the real world, and had therefore been prototyped and studied, for the most part, by academic researchers. The recent rise of IoT as an open, heterogeneous, large-scale distributed system, made of a broad variety of devices and software protocols that need to interact, communicate and share resources in a fully decentralized manner, we argue, finally provides such a "killer app"; indeed, in our view the Actor model, almost four decades older than IoT, is almost ideally suited as a programming paradigm for it.
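
A minimal sketch of the actor abstraction itself (illustrative of the paradigm, not an IoT middleware): named, addressable entities with mailboxes that interact only through asynchronous message passing.

```python
# Minimal actor sketch in Python's asyncio: each actor owns a mailbox and
# communicates exclusively via asynchronous messages.
import asyncio

class Actor:
    def __init__(self, name):
        self.name = name
        self.mailbox = asyncio.Queue()

    async def send(self, msg):
        await self.mailbox.put(msg)

    async def run(self):
        while True:
            sender, msg = await self.mailbox.get()
            if msg == "stop":
                break
            print(f"{self.name} received {msg!r} from {sender.name}")
            if msg != "ack":
                await sender.send((self, "ack"))   # acknowledge asynchronously

async def main():
    a, b = Actor("device"), Actor("gateway")
    tasks = [asyncio.create_task(a.run()), asyncio.create_task(b.run())]
    await b.send((a, "reading: 21.5C"))   # device reports to gateway
    await asyncio.sleep(0.1)              # let the message exchange play out
    for actor in (a, b):
        await actor.send((None, "stop"))
    await asyncio.gather(*tasks)

asyncio.run(main())
```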

We next address some of the key desiderata for the NG IoT’s middleware, which would integrate and enable inter-operability of a broad variety of Internet-enabled devices and protocols running on different platforms and operating systems. We focus on the desired flexibility and adaptability properties of such middleware, including but not limited to the ability to learn, both online (in real time, or nearly real time) and offline (from stored and subsequently mined interaction patterns), about various malicious behaviors and types of attacks on the IoT infrastructure and devices. The goal is to design middleware that can learn, adapt, self-heal, and both predict and prevent future malicious behavior and cyber-attacks -- thereby protecting IoT devices and infrastructure over extended periods of time, with little or ideally no direct human intervention.

This discussion of adaptable, self-healing NG middleware for IoT brings us to the final sub-topic: how to use AI and machine-learning techniques to enable such middleware to learn over time, so it can more effectively predict and prevent future cyber-security threats and attacks. Our key idea here is to apply the AI techniques used for "getting better at" well-defined games of strategy -- as in AlphaGo (the program that convincingly defeated one of the best human Go players in 2016) and its Go-playing successors -- namely, a "self-play" of sorts: an Actor-based middleware system generates "fake bad actors" and then, via iterated interaction between the "good actors" and the "bad actors", learns how to identify and prevent future malicious behaviors and cyber-attacks by the "real" bad guys who may try to disrupt or even take over the proper operation of IoT. Such self-play-based learning by the NG IoT "smart middleware" could enable not only a more autonomous and scalable IoT, but, importantly, also a fundamentally more cyber-secure next-generation Internet-of-Things capable of self-healing and recovery.
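
To make the self-play idea concrete, here is a toy sketch under assumed specifics (a single learned detection threshold over message rates; the traffic distributions and learning rule are invented for illustration and stand in for the far richer middleware described above):

```python
import random

random.seed(0)

def fake_bad_actor():
    """Generated adversary: emits messages at an anomalously high rate."""
    return random.gauss(12.0, 3.0)

def normal_actor():
    """Benign device traffic."""
    return random.gauss(5.0, 2.0)

# Self-play loop: the defending middleware tunes an anomaly threshold by
# playing against its own generated "fake bad actors".
threshold, lr = 8.0, 0.05
for _ in range(5000):
    if random.random() < 0.5:
        rate, malicious = fake_bad_actor(), True
    else:
        rate, malicious = normal_actor(), False
    flagged = rate > threshold
    if malicious and not flagged:
        threshold -= lr      # missed an attack: become more sensitive
    elif not malicious and flagged:
        threshold += lr      # false alarm: become more permissive

print(round(threshold, 2))   # drifts toward the crossover of the two
                             # traffic distributions (about 7.8 here)
```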

Organizational Reflexive Modeling | Monday 10:20-12:00

John Bicknell and Werner Krebs

The information environment is an endless ocean which buffets competing nations and organizations. As the British Royal Navy commanded the seas during the age of sail, the nation which anticipates and commands the information environment will lead the world.

Many people think of processes as intentional groups of activities that serve profit-maximizing businesses or mission-driven government agencies. While this is true, it is an incomplete picture. Organizational process ecosystems exist that are emergent and self-organizing, with no human intention. Dynamic human organizations reside within complex ecosystems where they both affect and are affected by their exogenous environments. Corporations, military units, educational institutions, and other organizations consume information constantly in order to compete and survive. Exogenous information stimulates organizations. Can organizational reactions to exogenous stimuli be understood empirically and probabilistically? Are organizational reactions generalizable across organizations and cultures? In this discussion, we explore these questions and relate them to national security risks unique to the Information Age. Results from a recent study which modeled an emergent corporate communication ecosystem from semi-structured email data using explainable AI are presented, along with a future work agenda.

Clever, well-resourced peer adversaries and non-state actors are expending resources to understand every portion of the American way of life in order to wage war below the threshold of armed conflict. Less understood is the concept of reflexive control, which has roots in cybernetics and game theory. Since at least the 1960s, Russia has enhanced information warfare with systematic psychological or cognitive understandings of adversarial decision-making processes, and it continues honing the technique. Reflexive control is a means of conveying curated information to an opponent to incline him to voluntarily (or reflexively) make a predetermined decision desired by the initiator of the action.

Given that human bodies exhibit generalized reflexivity to stimuli (for example: knee tap, eye blink, coughing/sneezing) and the human mind generates drawings from which generalizable patterns emerge, it is a reasonable hypothesis that human organizations exhibit generalizable reflexive patterns as well. A clever adversary that steals organizational data or intercepts signals intelligence may derive temporal process models and develop combined-arms information maneuver campaigns against the target organization. If the same enemy monitors the target organization continually, reflexive responses may be understood empirically and even more sophisticated information maneuvers deployed. Reflexive responses may be discovered which generalize across entire critical infrastructure sectors or among critical infrastructure firms with like characteristics. If true, probabilistic information weaponry may be deployed at scale, without attribution, and with steady, devastating effect against vast segments of the worldwide socio-economic ecosystem.

National security and critical infrastructure defense use cases are vast. Fully developed, an advanced influence capability with profound global economic implications is possible. Governments may discover the technique to be a valuable threat-emulation, red-teaming, or social-engineering training tool, for example. The technique is also a computationally efficient, petabyte-scalable methodology for modeling information flows within "systems of systems," providing inputs into black-box AIs, increasing situational awareness, and detecting anomalies.

Pandemic Preparation, Consciousness and Complex Adaptive Systems through the Lens of a Meta-Reductive Scientific Paradigm | Thursday 10:30-12:00

J. Rowan Scott

The novel coronavirus SARS-CoV-2, the Covid-19 pandemic and the associated economic crisis provide an opportunity to explore an incompleteness-driven, novelty-generating meta-construction of a Meta-Reductive Scientific Paradigm, in search of possible adaptations that might enhance the resilience of individual- and community-level responses to the Covid-19 pandemic and economic crisis. Unrecognized formal incompleteness of reductive scientific Logic is related to the unresolved integration of Reductive Science and Complexity Science. The failure to integrate these two broad areas of scientific interest can indirectly be related to diminished public trust in scientific and medical information, with consequent political failure to effectively integrate scientific and medical advice in responses to the Covid-19 situation. Careful meta-consideration reveals a subtle way to address formal reductive incompleteness, leading to the meta-construction of an adjacent-possible meta-reductive Complex Adaptive System (CAS) model and an adapted Meta-Reductive Scientific Paradigm. The proposed Meta-Paradigm can integrate willful, intentional brain, mind and consciousness in Nature, as well as spell out a number of scientific and cultural adaptations that could improve the integration of medical and scientific information in the political management of the Covid-19 crisis and increase the chance that humanity will be better prepared for the next pandemic or the next urgent global crisis.

Polarization during dichotomous Twitter conversations | Monday 10:20-12:00

Juan Carlos Losada, Gastón Olivares Fernández, Julia Atienza-Barthelemy, Samuel Martin-Gutierrez, Juan Pablo Cárdenas, Javier Borondo and Rosa M. Benito

Political polarization is a social phenomenon that has several consequences in people's lives and whose nature is not completely understood. We say that a population is perfectly polarized when it is divided into two groups of the same size holding opposite opinions. In this work, we have studied polarization phenomena in Twitter conversations concerning different topics with clearly opposed opinions: an electoral process with two candidates, and social unrest. In each of the conversations, we found a bipolar opinion distribution and a high value of the polarization index.

Proxymix: Influence of Spatial Configuration on Human Collaboration through Agent-based Visualization | Thursday 10:30-12:00

Arnaud Grignard, Nicolas Ayoub, Cristian Jara Figueroa and Kent Larson

This study proposes the use of agent-based simulation as an alternative to space syntax, a common technique in architecture, to reveal how architectural design can influence scientific collaboration. Using the MIT Media Lab building as a case study, we use the GAMA platform to implement a parsimonious agent-based model of researchers' daily routines as they move inside the space. We find that the simulated collaboration network predicts the ground-truth collaboration inferred from the Media Lab project database, even after controlling for institutional barriers such as the research lab researchers belong to. Our results highlight that agent-based simulation can be used to construct flexible indicators from architectural blueprints that reveal important characteristics of people's interactions inside a space.
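
The pipeline can be caricatured in a few lines of Python (the actual model is implemented on the GAMA platform in GAML and is far richer; the grid size, walk rule and lab labels below are invented). Random-walking researcher agents accumulate a weighted collaboration network from co-presence in a grid cell:

```python
import itertools
import random

random.seed(3)

GRID, STEPS, AGENTS = 10, 500, 12
pos = {a: (random.randrange(GRID), random.randrange(GRID)) for a in range(AGENTS)}
lab = {a: a % 3 for a in range(AGENTS)}            # hypothetical lab membership
weights = {}                                       # collaboration edge weights

for _ in range(STEPS):
    for a, (x, y) in pos.items():                  # lazy random walk on a torus
        dx, dy = random.choice([(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)])
        pos[a] = ((x + dx) % GRID, (y + dy) % GRID)
    cells = {}
    for a, p in pos.items():
        cells.setdefault(p, []).append(a)
    for group in cells.values():                   # co-presence -> interaction
        for a, b in itertools.combinations(sorted(group), 2):
            weights[(a, b)] = weights.get((a, b), 0) + 1

strongest = max(weights, key=weights.get)
print(strongest, weights[strongest], lab[strongest[0]] == lab[strongest[1]])
```

Aggregating such co-presence counts yields a simulated network that can then be compared against a ground-truth collaboration network, as the study does with the Media Lab project database.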

A Process Algebra Model of Collective Intelligence Systems and Neural Networks | Tuesday 10:20-12:00

William Sulis

The Process Algebra is a mathematical framework for representing and studying interactions between processes as conceived in Whitehead's process theory. It was inspired by combinatorial game theory and is particularly valuable for modelling complex adaptive systems characterised by transience, emergence and contextuality. These features are typical of both collective intelligence systems and biological neural networks. The Process Algebra comes with a particular model in which processes generate a fundamental discrete dynamics involving the local flow of information via various propagators. This discrete dynamics gives rise, in an emergent manner, to a continuous dynamics by means of non-uniform interpolation theory. The model has been successfully applied in the context of non-relativistic quantum mechanics. Here it is applied to study information flow in collective intelligence systems and biological neural networks.

A radical rethinking of San Antonio’s economy: A network view of the Inner West Side Neighborhood | Thursday 10:30-12:00

Belinda Roman

It is time for a radical rethinking of the City of San Antonio's (COSA) approaches to achieving economic resilience and security. We propose that COSA take the view that the city operates as a complex system. The challenge for San Antonio is to better understand the complexity of the economy as it stands now and to prepare for future disruptions.

This research presents a sub-network of the larger local economy — San Antonio's Inner West Side, an area comprising six ZIP codes that include about 82 industries and more than 137,000 employees. This is also an area in which 6% of businesses are sole proprietorships, which means as many as 8,200 mostly Hispanic people are self-employed.

The Inner West Side is highly Hispanic in character and is home to some of the city's historic cultural icons, such as the Guadalupe Cultural Arts Center and the Archdiocese of San Antonio. Furthermore, two of the nation's top Hispanic-Serving Institutions (HSIs) — St. Mary's University and Our Lady of the Lake University — plus the world-class research facilities of Southwest Research Institute and Texas Biomedical Research Institute are longtime educational and R&D anchors. In other words, the Inner West Side of San Antonio is an ecosystem of significant importance to the bigger regional picture.

The economic contribution of this area to the greater economy is well over 10% of the city's total economic output, or about $12 billion, through industries such as construction, food processing, health care and social assistance, and education.

The economic relationships of the Inner West Side can be represented through their linkages. For example, if we focus our attention on the industry hub of health care and social assistance, we see that other industries and their associated businesses will feel the pinch of decreased health care and social spending. Some 276 individual relationships would be lost under a complete contraction of economic capacity in this one sector of the Inner West Side alone.

The consequences would be even more devastating if we add the social and cultural impacts of such a decline. For example, a 10% decline in any of the major industries in this part of the city, say health care and social services, could produce an even greater decline in the overall city's quality of life because of the dependencies and relationships within these ZIP codes and other neighborhoods. Additionally, the high concentration of Hispanics in the area means that this segment of the population, already hardest hit by COVID-19, will take an additional blow as businesses and institutions adjust to the economic and financial fallout from the pandemic.

Complex systems analysis provides a different and more meaningful way to formulate policies that take into account the connections and dependencies at lower levels of aggregation.

Re-examining Political Risk Analysis in Volatile Regions | Tuesday 10:20-12:00

Ghaidaa Hetou

In volatile regions like the Middle East and North Africa (MENA), political risk analysis is an operational imperative for corporations, NGOs and foreign governments. Yet standardized, one-dimensional approaches to political risk assessment are not only insufficient but downright harmful to the objective of sustaining and expanding business operations and foreign policy objectives. In an attempt to formalize the process, conventional country-risk and political-risk indexes have tried to standardize and generalize assessment models for factors that are highly context-specific. Hence, the value derived from traditional political risk indexes lacks precision and operational relevance, and is therefore less reliable. This paper re-examines political risk analysis and explains how understanding the topology and nature of political risk in emerging and developing markets is a crucial advancement in developing political risk analysis for the private sector and government agencies. Particular focus is given to developing political risk characterization as a risk-analysis category. To bridge the conceptual gap between risk assessment and risk management, this paper proposes the concept of complex adaptive systems as the backdrop for emerging political risk scenarios. Understanding regions and states as complex systems necessitates grounding data in indigenous knowledge and contextual intelligence. The main objective of improving political risk analysis is to equip corporate leadership and government policy makers with political risk management and risk-mitigation strategies. Understanding and anticipating disruption by proactively monitoring warning signs is an essential operational mode in volatile regions. Developing professional corporate adaptive and proactive capabilities, tailored to the political environment of a host MENA country, is a competitive advantage in a region ripe with opportunities.

A Receptor-Centric Model for Emergence Phenomenon | Wednesday 16:40-18:00

Michael Ji and Hua Ji

In the past few decades, there has been no breakthrough in the study of the emergence phenomenon. Most existing research considers that an emergence appears when a system has many self-organized parts, and believes that an emergent characteristic belongs to the system "as a whole", not to any individual part. To date, we still have no clear idea of why and when an emergence will appear in a system. One of the main reasons is the lack of a theoretical computing model with which we can apply qualitative and quantitative reasoning. To address this issue, we propose a distributed Receptor-Centric Emergence (RCE) model, which consists of Receptor Agents, Actor Agents, Messages and an Interconnect. (1) A receptor receives messages from an environment and applies an intrinsic control dynamics to update its internal states. (2) An actor periodically sends messages to the environment and exchanges data, imposing influences on other agents. (3) The interconnect is an asynchronous communication channel for all agents. (4) Any signal secreted on the interconnect is encapsulated as a structured message. RCE's main idea is that, instead of belonging to the system as a whole, an emergent characteristic is actually an intrinsic state/feature of a receptor; an emergence appears only when a receptor senses/receives "enough" messages from the parts of the system/environment. In this paper, we first define "Receptor-Centric Emergence" as "an emergence appears iff a receptor's state becomes expressed". Then, we propose both the Receptor State-Upgrading and the Emergence-Decider algorithms, with which we can answer the question of why an emergence appears. We also give a novel description of the semantics of "as a whole" from RCE's perspective, and we prove that, if the effect of the messages sent by each actor to a receptor is $\textit{influence}$, and the threshold for the receptor to flip this state's status is $\textit{threshold}$, an emergent characteristic appears when the total number of actor agents reaches $\lceil \log_{(1-\textit{influence})}(1-\textit{threshold}) \rceil$.
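
A quick numerical check of this bound, under the reading that each actor's message closes a fraction `influence` of the receptor's remaining gap to its expression threshold:

```python
import math

def actors_needed(influence: float, threshold: float) -> int:
    """If each actor's message multiplies the receptor's remaining gap by
    (1 - influence), cumulative influence after n actors is
    1 - (1 - influence)**n, which first reaches `threshold` at
    n = ceil(log_{1-influence}(1 - threshold))."""
    return math.ceil(math.log(1 - threshold, 1 - influence))

# With influence = 0.1 per actor, a receptor with threshold 0.9 flips its
# state once 22 actors are present:
print(actors_needed(0.1, 0.9))  # 22
```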

Reconstructability Analysis and its Occam Implementation | Wednesday 15:00-16:40

Martin Zwick

This talk will describe Reconstructability Analysis (RA), a probabilistic graphical modeling methodology deriving from the 1960s work of Ross Ashby and developed in the systems community in the 1980s and afterwards. RA, based on information theory and graph theory, resembles and partially overlaps with Bayesian networks (BN) and log-linear techniques, but also has some unique capabilities. (A paper explaining the relationship between RA and BN will be given in this special session.) RA is designed for exploratory modeling, although it can also be used for confirmatory hypothesis testing. In RA modeling, one either predicts some dependent variable (DV) from a set of independent variables (IVs), a directed system, or one discovers relations among a set of variables without making any IV-DV distinction, a neutral system. RA can be applied to time-series analysis as well as to spatial patterns. It can detect high-ordinality and nonlinear interactions that are not hypothesized in advance. Unlike neural networks, it is not a black box, but is readily interpretable and explainable. Its graph-theoretic conceptual framework allows it to model networks as hypergraphs, and also illuminates in a fundamental way the relationships between wholes and parts, a subject that is central to systems/complexity science.
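
For flavor, the information-theoretic core that RA-style directed-system modeling builds on can be illustrated with a generic mutual-information calculation (a sketch of the underlying mathematics, not OCCAM's API):

```python
import numpy as np

def mutual_information(joint: np.ndarray) -> float:
    """Mutual information I(X;Y) in bits from a joint count table
    (rows: states of the IV composite X, columns: states of the DV Y)."""
    joint = joint / joint.sum()                    # counts -> probabilities
    px = joint.sum(axis=1, keepdims=True)          # marginal of X
    py = joint.sum(axis=0, keepdims=True)          # marginal of Y
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Two binary variables with a moderate association:
counts = np.array([[40.0, 10.0],
                   [15.0, 35.0]])
print(round(mutual_information(counts), 3))        # ~0.191 bits
```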

The talk will give an overview of the theory and applications of RA, and will introduce OCCAM, an RA software package developed at PSU that is now open source; see https://www.occam-ra.io/. The web page http://www.pdx.edu/sysc/research-discrete-multivariate-modeling documents this methodology, and includes tutorials, published papers, access to the software and some utility programs, and a user's manual.

Relationship between language and ideology inference through social network interactions in a political context | Tuesday 10:20-12:00

Julia Atienza-Barthelemy 

In many political scenarios, public opinion is divided into two extreme and opposite positions. In sociological terms, this process is called polarization, and it has important social consequences. Political polarization generates strong effects on society, driving controversial debates and influencing institutions. Territorial disputes are among the most important polarized scenarios and have been consistently related to the use of language. In this work [1], we analyzed the opinion and language distributions of a particular territorial dispute, around the independence of Catalonia, through Twitter data. We infer a continuous opinion distribution by applying a model based on retweet interactions, first detecting elite users with fixed and antagonistic opinions. The resulting distribution is mainly bimodal, with an intermediate third pole that reveals a less polarized society in which not only antagonistic opinions are present. We find that the more active, engaged and influential users hold more extreme positions. We also show that there is a clear relationship between political positions and the use of language: anti-independence users speak mainly Spanish, while pro-independence users speak Catalan and Spanish almost indistinctly. However, the third pole, closer in political opinion to the pro-independence pole, behaves similarly to the anti-independence one with respect to the use of language.

[1] Atienza-Barthelemy, J., Martin-Gutierrez, S., Losada, J.C. et al. Relationship between ideology and language in the Catalan independence context. Sci Rep 9, 17148 (2019).

Renormalization Group Approach to Cellular Automata-based Multi-scale Modeling of Traffic Flow | Tuesday 10:20-12:00

Zhaohui Yang, Hossein Haeri and Kshitij Jerath

A key problem in modeling and analyzing emergent behaviors in complex systems such as traffic flows, swarms, and ecosystems is to identify the appropriate scale at which such emergent behaviors should be modeled. Current traffic flow modeling techniques can be neatly categorized as operating at one of three spatial scales: microscopic, mesoscopic, or macroscopic. While significant research effort has been expended on developing models in each of these three approaches, a fundamental question remains unanswered: what is the 'appropriate' scale at which to model traffic flow so as to enable optimal observation and prediction of emergent traffic dynamics? Recent techniques that attempt to merge models at different scales have yielded some success, but have not yet answered this fundamental question. In the presented work, we use statistical mechanics-inspired traffic flow models, coupled with renormalization-group-theoretic techniques, to model traffic flow dynamics at spatial scales that are arbitrary multiples of an individual car length. While we do not yet answer the question of the optimal spatial scale for observing emergent behaviors, this method provides a stepping stone towards that goal. In addition, it provides a single mechanism linking microscopic and macroscopic models of traffic flow. Numerical simulations indicate that traffic phenomena such as backward-moving waves and emergent traffic jams are retained under this modeling technique.
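
The abstract does not specify its CA rules, so the sketch below pairs the standard Nagel-Schreckenberg cellular automaton with a naive block-averaging step, to illustrate the kind of change of scale that a renormalization-group treatment formalizes:

```python
import numpy as np

rng = np.random.default_rng(0)

def nagel_schreckenberg(road, vmax=5, p_slow=0.3):
    """One update of the classic Nagel-Schreckenberg CA (an illustrative
    stand-in; the authors' statistical-mechanics model may differ).
    road[i] = -1 for an empty cell, else the velocity of the car in cell i."""
    n = len(road)
    cars = np.where(road >= 0)[0]
    new = -np.ones(n, dtype=int)
    for idx, i in enumerate(cars):
        gap = (cars[(idx + 1) % len(cars)] - i - 1) % n
        v = min(road[i] + 1, vmax, gap)              # accelerate, then brake
        if v > 0 and rng.random() < p_slow:          # random slowdown
            v -= 1
        new[(i + v) % n] = v
    return new

def coarse_grain(road, b=4):
    """Block step: replace each block of b cells by its car density,
    a naive analogue of the renormalization-group change of scale."""
    occupancy = (road >= 0).astype(float)
    return occupancy.reshape(-1, b).mean(axis=1)

road = -np.ones(400, dtype=int)
road[rng.choice(400, size=100, replace=False)] = 0   # 100 stopped cars
for _ in range(200):
    road = nagel_schreckenberg(road)
print(coarse_grain(road)[:10])   # density field at 4-cell resolution
```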

A Resilience Approach to dealing with COVID-19 and future systemic shocks | Thursday 10:30-12:00

Igor Linkov

Disruptions to physical and cyber networks are inevitable. When networks are not resilient, i.e., when they do not recover rapidly from disruptions, these unpredictable events can cause damage out of proportion to the extent of the initial disturbance. We argue that the emphasis on efficiency in the operation, management and outcomes of various economic and social systems has brought much of the world to rely upon complex, nested, and interconnected systems to deliver goods and services around the globe. While this approach has many benefits, the Covid-19 crisis shows how it has also reduced the resilience of key systems to shocks and allowed failures to cascade from one system to others. This presentation discusses the notion of resilience and provides specific recommendations both on integrating resilience analytics into recovery from the current crisis and on building resilient infrastructure to address future systemic challenges. It will also illustrate the economic implications of unmitigated random disruptions in urban road systems as an example of disturbance in infrastructure networks.

Ribosome disruption in Luminal A breast cancer revealed by gene co-expression networks | Monday 10:20-12:00

Diana García-Cortés, Jesús Espinal-Enriquez and Enrique Hernandez-Lemus

A cellular context is determined by the coordinated and highly regulated expression of specific sets of genes. In this sense, gene expression profiles and co-expression patterns might provide insight into shared transcriptional regulatory mechanisms. Elements at nearly all levels of transcriptional control are disrupted in cancer, leading to altered gene expression and the promotion of tumor progression. Furthermore, these alterations are highly heterogeneous, generating a diverse set of cancer manifestations at both the molecular and clinical levels. Breast cancer and its molecular subtypes, Luminal A, Luminal B, HER2+ and Basal-like, are an emblematic example of cancer heterogeneity. We have previously studied the transcriptional profiles of breast cancer molecular subtypes by using co-expression networks to capture global and local connectivity properties. In breast cancer, as opposed to the healthy phenotype, the co-expression networks display an imbalance in the proportion of cis-/trans- interactions (cis- interactions link genes in the same chromosome, while trans- interactions connect genes in different chromosomes), meaning that the majority of high co-expression interactions connect gene pairs in the same chromosome. Moreover, the strength of gene-pair co-expression depends on physical distance, a feature that promotes the emergence of high-density co-expression hotspots associated with localized regions in the chromosomes. Although we have shown structural characteristics related to the transcriptional profile in breast cancer molecular subtypes, we had not analyzed the functional implications of these features. To this end, we selected the Luminal A co-expression network, given that its cis-/trans- proportion imbalance is the lowest and its structure most resembles that of the healthy network. Communities in both networks were extracted and their homophily scores were computed according to the chromosome tag of each gene. The distribution of the score confirmed the cis-/trans- imbalance in the Luminal A co-expression network. An over-representation analysis was performed using the biological processes in Gene Ontology and KEGG pathways. Communities with low chromosome homophily scores were associated with biological processes more frequently than those with high scores. Among the significant processes we found telomere maintenance, well-known cancer signaling pathways such as Wnt and NFkB, and angiogenesis. A differential gene expression analysis was performed for the genes in the Luminal A breast cancer subtype network, and community homophily scores were computed given their "up" or "down" differential expression. The majority of communities have a high homophily score. We decided to focus on two of them to further investigate the associated molecular mechanisms: one highly up-regulated, with genes such as CENPA and FOXM1, and another down-regulated, composed of several ribosomal proteins. These results suggest an interplay between the structural and functional alterations of the co-expression program in breast cancer, particularly in the Luminal A subtype.
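
The abstract does not spell out the homophily score; one plausible operationalization, sketched below, is the fraction of a community's internal co-expression edges that link genes sharing a chromosome tag (cis-edges):

```python
import networkx as nx

def community_homophily(graph: nx.Graph, community, chromosome: dict) -> float:
    """Fraction of a community's internal edges whose endpoints carry the
    same chromosome tag. One plausible reading of the score described above,
    not necessarily the authors' exact formula."""
    sub = graph.subgraph(community)
    edges = list(sub.edges())
    if not edges:
        return 0.0
    same = sum(chromosome[u] == chromosome[v] for u, v in edges)
    return same / len(edges)

# Toy co-expression community: three chr1 genes and one chr7 gene.
g = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])
chrom = {"A": "chr1", "B": "chr1", "C": "chr1", "D": "chr7"}
print(community_homophily(g, {"A", "B", "C", "D"}, chrom))  # 0.75
```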

Scientific Logic, Natural Science, Polarized Politics and SARS-CoV-2 | Wednesday 15:00-16:40

J. Rowan Scott

Community and individual survival in the face of a global pandemic depends upon whether guidance from public health experts and natural scientists is heard and adaptively implemented by politicians and individual citizens. The SARS-CoV-2 pandemic provides a humbling demonstration of the consequences of mixed or failed preparation and response to early warning signals, prior recommendations, and ongoing advice from medical and scientific experts. Late and inconsistent implementation of decreased social connectivity made infectious transmission more likely, cost lives and threatened the medical system's sustainability. Patchy implementation of economic reopening agendas, sometimes driven by capricious political motives, economic concerns or unreliable medical and scientific advice, serves to prolong the pandemic, precipitate further outbreaks and increase the number of casualties. In the evolving context of the pandemic, the scientific and medical community may have inadvertently played a significant role in weakening public appreciation of rational logic, truth, proof and consistency, making scientific information less valued in political debate. Unrecognized formal reductive incompleteness is an important property of reductive scientific Logic that may indirectly be involved in diminishing the salience of scientific and medical voices dealing with rapidly developing scientific and medical information and evidence. Formal reductive incompleteness is related to the unresolved integration of, and a potential future consilience between, the Reductive Sciences and the Complexity Sciences. Recognizing the influence and implications of formal reductive incompleteness may improve the integration and implementation of scientific knowledge and increase the acceptance and perceived importance of scientific and medical opinion in the face of the SARS-CoV-2 pandemic.

Searching for Influential Nodes in Modular Networks | Wednesday 15:00-16:40

Zakariya Ghalmane, Chantal Cherifi, Hocine Cherifi and Mohammed El Hassouni

Ranking nodes according to their importance is a fundamental issue in research on complex networks. While many centrality measures have been proposed over the years based on local or global topological properties of networks, few studies have considered the influence of the community structure. Despite the fact that many real-world networks exhibit a modular organization, this property is almost always ignored in the design of ranking strategies. In a modular network, we can distinguish two types of influence for a node: a local influence on the nodes belonging to its own community through the intra-community links, and a global influence on the nodes of the other communities through the inter-community links. Therefore, centrality should not be represented by a simple scalar value but rather by a two-dimensional vector, whose first component measures the local influence of the node and whose second component measures its global influence. Based on this assumption, we propose to extend all the classical centrality measures to modular networks. Two cases must be considered, corresponding to the nature of the communities (with or without overlapping nodes). Thus, in the following, "Modular Centrality" stands for centrality in modular networks with non-overlapping communities, while "Overlapping Modular Centrality" refers to centrality in modular networks with overlapping communities. We conducted a series of experiments in order to test the relevance of the Modular Centrality and the Overlapping Modular Centrality as compared to their classical counterparts. Considering the most prominent centrality measures (degree, betweenness, eigenvector, closeness), the local and global components were evaluated separately. Additionally, a straightforward combination of both components (the modulus of the two-dimensional vector) was tested. Experiments were conducted on synthetic networks with controlled community structure, and on real-world networks, in an epidemic spreading scenario using the SIR model. Results show that the spreaders identified by the proposed approach are more influential than those targeted by the standard centrality measures.
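
As a minimal illustration of the two-component view (our sketch, not the authors' code), degree centrality splits naturally into intra- and inter-community parts:

```python
import networkx as nx

def modular_degree(graph: nx.Graph, community: dict):
    """For each node, return (intra, inter): the number of neighbors inside
    its own community (local influence) and in other communities (global
    influence), per the two-dimensional vector described above."""
    scores = {}
    for node in graph:
        intra = sum(community[n] == community[node] for n in graph[node])
        inter = graph.degree(node) - intra
        scores[node] = (intra, inter)
    return scores

g = nx.barbell_graph(5, 0)                   # two 5-cliques joined by one edge
comm = {n: 0 if n < 5 else 1 for n in g}     # community labels
print(modular_degree(g, comm)[4])            # (4, 1): four local links, one bridge
```

The bridge node scores modestly on the local component but uniquely on the global one, which is exactly the distinction a scalar degree would erase.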

Selected Political Issues and Problems, Viewed from a Systems Perspective | Monday 14:30-16:10

Krishnan Raman

Political systems/processes provide many examples for which the systems perspective can provide insights.

In recent years, unexpected political changes [e.g., Brexit, CAB] and radical developments in technology, communications, AI and ML have highlighted issues that require fresh thinking.

This paper will discuss selected issues.

a. Transnational and Intranational Issues
Transnational issues relating to a right to place – Human Rights and Immigration. Objective: a United World, or a collection of independent "Islands"?
Intra-national issues relating to Place – Citizenship issues and attempted political exploitation and discrimination.
These issues will be examined from a systems viewpoint. Relevant factors are Socio-political Inertia against attempts to move toward a fair/just system, and the role of “Precedent” in justifying such Inertia.

b. Political Polarization and Conformity/Alignment.
Effect of Group affiliation.
A simple network analogy will be outlined for polarization.

c. Mass Communication Mechanisms
“News” propagated by Mass Media, often regarded as representing “Truth”, strongly influences public decision-making.
Printed Newspapers had limited distribution and slow response. The advent of Radio Broadcasting initiated a quantum change, later amplified greatly by TV.
Mass Media Broadcasting played a major role in defining "Public Opinion", because of rapid communication with millions of people.
In the 1990s, the coming of the Internet created the next radical step change.
More recently, Social Media such as Facebook have become an important source of News. They are also creating new Virtual Communities. Social Media and their data have been important in political systems/processes, as evidenced in National elections.
Relevant features of Mass Broadcasting mechanisms are Rapid Communication, wide Data-dissemination, and "Instant" responses.
These radically change system parameters – delay times and time constants relevant to opinion formation and response; this changes the system's dynamics. Instant responses imply zero relaxation time. The system now has essentially non-equilibrium processes, where previously there was a possibility of pseudo-equilibrium and homeostasis.
Mass Broadcasting significantly affects the working of psycho-socio-political systems.
The qualitative system changes also create system hazards. Examples are:
- Fake Data and Manipulated/Slanted Data. Even though manifestly suspect, they are still effective in influencing System outcomes/Decisions.
- Cybersecurity issues in Political Systems.
Remote Data Manipulation and Distortion are now real threats in Elections. Advances in technology make these easy to implement and difficult to counter.
The Processes/Mechanisms underlying Mass Opinion include different techniques of Persuasion and the Use of Framing to create the desired effects.
Another question -- which data sources to accept: the "Expert" Elite, or the propagated "Public Opinion"? Democratic society sometimes favors the latter.

d. Big Data technology [ with AI/ ML ], Data Analytics for Political Systems; Data-driven political strategy
Examples presented:
Political Pattern Recognition, and Predictive Analytics for estimating political outcomes.
Use of large-scale Data Analytics and ML in anticipating/forecasting voting responses, and planning political strategies.
Extension of Devices such as Simple Gerrymandering [manipulating district boundaries using existing political frameworks] -- to sophisticated data tampering for distorting base Data.

e. Finally, how to incorporate values such as Ethics/Fairness while modeling Political Systems?

Simulation of SARS-CoV-2 envelope formation as a platform for screening therapeutics which may interfere with viral protein-protein interactions | Monday 14:30-16:10

Logan Thrasher Collins, Ricky Williams and Pranav Kairon

Protein-protein interactions (PPIs) are crucial for the formation of coronavirus envelopes. PPIs among the E and M proteins in the membrane of the ER-Golgi intermediate compartment (ERGIC) facilitate budding of viral envelopes into the compartment's lumen. Although these PPIs are potential drug targets, the PPIs involved in SARS-CoV-2 budding are poorly understood and so are largely underexploited by drug screening and repurposing efforts. We are working on coarse-grained (CG) integrative molecular dynamics (MD) simulations of SARS-CoV-2 envelope formation via NAMD as a strategy for helping to identify compounds which may interfere with PPIs among the E and M proteins. To convert atomistic structures into their CG forms and to describe the interactions of the resulting CG particles, we are using the MARTINI model. For the structures of the E and M proteins, we are using models from the Feig laboratory. Once our NAMD implementation is complete, the simulation platform will feature a patch of ERGIC membrane carrying numerous copies of transmembrane E protein pentamers and M protein dimers. Because the N and S proteins are not required for coronaviral envelope assembly, they will be excluded to decrease the need for computational resources and thus accelerate the simulation. Using our integrative MD simulation, we intend to identify key PPI sites which may serve as targets for drug repurposing. By characterizing the sites at which the PPIs occur, we will pave the way for subsequent computational methods of screening compounds for activity against these sites. Our work is an active project within the COVID-19 HPC Consortium, and we have been given an allocation on the supercomputer Frontera to implement the model. Our simulations may help uncover potentially overlooked treatments for COVID-19.

Spiral defect chaos in Rayleigh-Bénard convection: asymptotic and numerical study of flows induced by rotating spirals | Monday 14:30-16:10

Eduardo Vitral, Saikat Mukherjee, Perry H. Leo, Jorge Viñals, Mark R. Paul and Zhi-Feng Huang

Rotating spiral patterns in Rayleigh-Bénard convection are known to induce azimuthal flows, which raises the question of how neighboring spirals interact with each other in spiral chaos, and of the role of hydrodynamic coupling in the existence of this regime. Far from the core, we show that rotations lead to an azimuthal body force that is irrotational, with magnitude proportional to the topological index of the spiral and its rotation frequency. Despite being irrotational, this force contributes to the azimuthal flow, as otherwise it would lead to a nonphysical, multivalued pressure. By calculating the asymptotic dependence of the resulting flow away from the spiral's core, we find a logarithmic dependence for the velocity. When accounting for damping effects due to no-slip conditions on the convection cell's plates, we show that the azimuthal velocity decays approximately as 1/r, with a dependence on cutoffs related to the size of a spiral. This flow component can offer insight into the appearance of spiral defect chaos, and provides additional hydrodynamic interactions among spirals. We show that the derived azimuthal velocity agrees with numerical results for spiral chaos from both generalized Swift-Hohenberg (2D) and Boussinesq (3D) models, and find that the velocity field can be affected by the size and charges of neighboring spirals. For the Boussinesq model, we obtain a large isolated spiral in a cylindrical cell and show that its velocity profile is remarkably similar to the one obtained for a spiral in the chaotic regime of the generalized Swift-Hohenberg model. Finally, by comparing pattern advection to the unwinding of spirals (due to local curvatures and wave-vector frustration), we find a strong correlation between their balance and the appearance of spiral defect chaos, which helps us better understand the emergence of this regime.

Studies of COVID-19 outbreak controls via agent-based modeling | Monday 14:30-16:10

Shaoping Xiao

We are in the midst of a global COVID-19 pandemic. Most ongoing research focuses on questions concerning virus transmission, asymptomatic and presymptomatic virus shedding, vaccine development, etc. In contrast to purely mathematical and statistical models, we developed an agent-based model in this paper to study COVID-19 outbreaks in urban communities. Rules for people's interactions and virus infectiousness were derived. Control measures, including social distancing, self-quarantine, and community quarantine, were first assessed individually. Both community quarantine and self-quarantine were scaled so that their impacts on outbreak control could be quantitatively assessed. The dataset collected from intensive simulations was then used to train an artificial neural network to predict outbreak control under combinations of control measures. We also conducted a case study predicting the number of daily new COVID-19 cases in New York City after March 2, 2020, and compared the simulation results to the reported data.
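
A minimal agent-based outbreak sketch of this general kind is shown below; all parameters (population size, contact counts, infection probability) are invented for illustration and are not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_abm(n=2000, contacts=8, p_infect=0.04, days_infectious=10,
            distancing=0.0, steps=120):
    """Toy outbreak: each day, every infectious agent meets
    contacts * (1 - distancing) random others and infects susceptibles
    with probability p_infect. Returns the daily new-case curve."""
    state = np.zeros(n, dtype=int)   # 0 = susceptible, 1 = infectious, 2 = removed
    clock = np.zeros(n, dtype=int)
    state[rng.choice(n, 5, replace=False)] = 1
    daily_new = []
    for _ in range(steps):
        new_cases = 0
        meetings = int(contacts * (1 - distancing))
        for agent in np.where(state == 1)[0]:
            for other in rng.integers(0, n, meetings):
                if state[other] == 0 and rng.random() < p_infect:
                    state[other] = 1
                    new_cases += 1
        clock[state == 1] += 1
        state[(state == 1) & (clock > days_infectious)] = 2
        daily_new.append(new_cases)
    return daily_new

# Peak daily cases without vs. with strong social distancing
# (the second peak is typically much lower):
print(max(run_abm()), max(run_abm(distancing=0.6)))
```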

A Study of the Network of Military Treaties | Tuesday 10:20-12:00

Elie Alhajjar

Since the conclusion of World War II in 1945, international military conflicts have largely been between nations of great power and "underdog" nations. This paper studies the network of military treaties between states in the international system. Examining this network leads to important conclusions about how state actors are prone to respond if an international or regional conflict were to arise. One influential such scenario was the outbreak of World War I, when great-power nations were dragged into conflict by their binding military treaties with lesser-power states. The resulting network is highly connected, with all small countries linked to "hubs" representing global powers. Main centrality measures and network analytics are applied to the data, and differences between binding and non-binding treaties are highlighted. All data are taken from [1].

[1] Gibler, Douglas M. 2009. International military alliances, 1648-2008. CQ Press.

Percy Venegas and Tomas Krabec

Schneider and Trojani's work on Divergence and the Price of Uncertainty (2019) sees divergence as tradeable in the context of an options portfolio, where realized divergence measures the distinct realized moments associated with time-varying uncertainty. At a macro level, we can identify the special instances where asset prices and a given indicator move in opposite directions during events where survival is at stake (e.g. the transition from epidemic to pandemic). The true information content of the indicator is not always apparent, since the information is seemingly accessible to market participants on a level playing field -- nonetheless, divergent anomalies are present. How, then, to explain that strange behavior? We propose a nonlinear approach using evolutionary algorithms to capture the dynamics, given the nature of the process, as Norman, Bar-Yam, and Taleb describe in their recent research note Systemic risk of pandemic via novel pathogens – Coronavirus (2020): "We are dealing with an extreme fat-tailed process owing to an increased connectivity, which increases the spreading in a nonlinear way." The biology-inspired heuristic is based on Kotanchek's implementation of genetic programming. We discuss the following metrics:

Divergence (Surface): a function which returns the response consensus behavior (as measured by the Ensemble Divergence Function) of the supplied model ensemble. A model ensemble is a special form of model in that it is trustable; the measure of that trustability is the Ensemble Divergence Function, which measures the spread in constituent model predictions.

Neighborhood Asymmetry: an information metric is needed to measure the information content of each data sample. One valid choice is the Average Neighborhood Distance, which simply returns the average distance between a data record and the other data records implicitly specified by the data matrix. The Neighborhood Asymmetry function sums the vectors from the data record to the neighbors implicitly defined by the supplied data matrix and returns the length of the resulting neighborhood-directionality vector, normalized by the number of neighbors. Thus, this metric is primarily concerned with the symmetry of the neighbor distribution, but also contains a contribution from the distance to each of the neighbors.
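
A sketch of that computation as described (our reading, not Kotanchek's reference implementation; Euclidean distance and k nearest neighbors are assumed):

```python
import numpy as np

def neighborhood_asymmetry(data: np.ndarray, index: int, k: int = 5) -> float:
    """Sum the vectors from record `index` to its k nearest neighbors and
    return the length of the resultant, normalized by k: near zero when the
    record is symmetrically surrounded, large when its neighborhood is
    one-sided (e.g. for records at the edge of the data cloud)."""
    point = data[index]
    dists = np.linalg.norm(data - point, axis=1)
    neighbors = np.argsort(dists)[1:k + 1]          # skip the record itself
    resultant = (data[neighbors] - point).sum(axis=0)
    return float(np.linalg.norm(resultant) / k)

cloud = np.random.default_rng(2).normal(size=(200, 3))
print(neighborhood_asymmetry(cloud, 0))   # smaller for well-surrounded records
```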

Data Strangeness: comparing the strangeness of each data record during outlier analysis allows calculation of the outlier distance of each record, which provides a ranking and an assessment of the difficulty of modeling that particular record.

We evaluate these metrics using empirical datasets available from Statista, Morningstar, and the Wolfram Data Repository (Epidemic Data for Novel Coronavirus COVID-19). The end goal is to detect speculative behavior by economic agents that may encode hidden information not released by governments or institutions.

Technical Health: Understanding, Assessing, And (Ultimately) Intervening In The System: A Complexity Aware, Design Driven, Empirical Approach | Wednesday 15:00-16:40

Judy Conley and Garth Jensen

Today's advanced information systems and data analytics have dramatically shifted the landscape for government research laboratory investigations. Federal Government laboratories face multiple novel and dangerous challenges to their relevance and, ultimately, their existence, and are persistently asked to demonstrate value and deliver products more quickly to consumers, customers, sponsors, clients, or stakeholders: "What have you transitioned to the Warfighter, lately?"

These demands have spurred a surge in the demand for metrics for measuring that value, which in turn has stimulated laboratory knowledge stewards to address the capability and health of their respective organizations. The capabilities of a Federal Government laboratory include not just its organic structures (i.e. facilities, processes, and personnel) but extend to its partners, stakeholders, funding agents, organizational supervision, contractors, and collaborators. As such, the health of such an organization is not just complicated but is the archetype of a complex system. The question remains: how is one to understand the dynamics of the Federal Government laboratory so as to devise the proper metrics of technical health for S&T investment planning, personnel recruitment, leveraging partnerships, and forecasting future technical states and actions?

An initial study proposes Five Metaphorical Frameworks as "Complexity Aware" ways of understanding and making sense of Technical Health. In light of the George Box sentiment, "All models are wrong, but some are useful", we propose that these five models are potentially useful:
1. Collective Intelligence/Collective Knowledge as an Emergent Property, 2. Knowledge as Flow, 3. Adapting or Extending the "Atlas of Economic Complexity" to the laboratory, 4. A System of "Organized Complexity", and 5. A Portfolio Based Approach.

This study builds on those metaphors and frameworks to point the way toward possible new ways of assessing Technical Health, but it does not actually finish the job, i.e. it does not do the work of building those new assessment methods/tools. Using one or a combination of these frameworks, the study proposes a design-driven, empirically based, qualitative, systematic inquiry to be conducted in parallel with, and intertwined with (i.e. feeding and fed by), the activities associated with fleshing out the methods of assessment. These activities include: ethnographic research (the lived experience of those working within the system, at various levels); qualitative research/design-driven discovery (eliciting insights into the system, seeing that which we cannot currently see); mapping the system; and fleshing out the metaphorical frameworks into actual assessment prototypes (iterative prototyping).

Design is ideally suited to intervening in a complex system. There is no "permanent" end state: policy is not fixed; it is a co-evolutionary game. This approach will uniquely root out the lived experience of those who operate in the system. It will also afford an opportunity to discover what we cannot currently see, making the invisible visible: the presence of tacit knowledge, the presence of snafu catchers, and the presence of non-linear regulatory mechanisms.

Toward Conceptualizing Race and Racial Identity Development Within an Attractor Landscape | Wednesday 16:40-18:00

Sean Hill

Concepts, theories, and findings of race and racial identity development were reviewed and conceptualized into a single model based on principles of complexity and chaos. This paper proposes race can be understood as a complex adaptive system and conceptualized as an attractor landscape. In this model, trajectories represent racial identity development or progression through an attractor landscape comprised of racial categories. Although this works well as a conceptual model, the modeling of racial identity development within an attractor landscape is affected by practical constraints related to data collection and many of the same limitations of existing racial identity development theories. The proposed model also creates additional challenges because of its interdisciplinary nature.

Towards a complex systems perspective on the temporal patterns of dialogic collaborative problem solving | Tuesday 10:20-12:00

Liru Hu

Tensions between the various theories and frameworks of collaborative learning have aroused considerable reflection in the community of learning sciences and drawn renewed attention to systems perspectives. This study explicitly introduces a complex systems perspective to investigate the temporality of dialogic collaborative problem solving (CPS), whereby students solve a problem collaboratively, mainly or wholly through productive talk.

Random and aperiodic complex systems are limited by certain constraints, called attractors, and thus show regular changes or patterns in behavior. The present study sought to validate and interpret the existence of two types of attractors in dialogic CPS. Participation inequity was operationalized as the standard deviation of the individual participation rates; it has been claimed as one possible fixed-point attractor in online discussions by Kapur and his colleagues (Kapur et al., 2008; Kapur & Kinzer, 2007). Emergent new ideas in dialogic CPS might serve as strange attractors, as suggested to some extent by Umaschi (2001).
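
Operationally, the fixed-point measure is simple to compute; a minimal sketch with invented turn-taking data:

```python
import numpy as np

def participation_inequity(turns: list) -> float:
    """Standard deviation of individual participation rates, the
    operationalization quoted above. `turns` is the sequence of speaker IDs
    in order of speaking (hypothetical example data; speakers with zero
    turns would need to be added explicitly)."""
    speakers, counts = np.unique(turns, return_counts=True)
    rates = counts / counts.sum()
    return float(rates.std())

# Four students, one of whom dominates the discussion:
print(round(participation_inequity(["a"] * 12 + ["b"] * 4 + ["c"] * 2 + ["d"] * 2), 3))
# ~0.206; a perfectly equal group of four would score 0.0
```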

Participants were 168 fourth graders from five classes in a primary school in a third-tier city in mainland China. The teachers helped organize the students into groups of four, balanced as far as possible in terms of gender and prior maths grades. Each group was given 30 minutes to collaboratively solve three structured open-response mathematical problems in a normal classroom setting.

In line with the findings of Kapur et al. (2008), participation inequity was also found as a unique fixed-point attractor in face-to-face dialogic CPS. It made collaborative discussion distinguishable from an artificially constructed random turn-taking process. The results further revealed that individuals tended to take turns at a stable percentage across different types of tasks in a specific group. The plateaus of individual participation rates were significantly correlated with individual self-concept and enjoyment in learning mathematics, individual social anxiety, and prior mathematics grades.

The study also found that evaluative talk moves, in particular "agree" and "disagree", helped control the bifurcations of knowledge building. This is consistent with Umaschi (2001)'s early finding that arguments and counterarguments formed feedback loops that drove the argumentative talk forward and into complexity. Inferential talk moves such as "elaborate", "justify" and "add on" elicited new ideas, while invitational talk moves like "why", "say more" and "agree or disagree" helped induce both individual and collective reasoning. These different types of talk moves related to various regulative processes (Järvelä & Hadwin, 2013) and created regulative loops that promoted the unfolding of inquiry processes over time.

This study suggests that the concept of attractors in complexity theories provides a fresh perspective for understanding and interpreting the divergence and convergence of collaborative discussions. Individuals' participation in dialogic CPS appears to be a process of building new identities and social roles in a specific group, one also significantly shaped by the participants' historical identities. Furthermore, emergent new ideas during dialogic CPS tend to have a bifurcative structure and serve as a type of strange attractor that drives the discussion into complexity.

Towards Novel, Practical Reasoning based Models of Individual Rationality in Complex Strategic Encounters | Wednesday 10:50-12:30

Predrag Tosic

We are interested in computational game theory, focusing more specifically on complex iterated 2-player games that are "far from zero-sum", that is, whose structure provides considerable (implicit) incentives for the two rational agents to cooperate with each other. Arguably, strategic encounters that are not strictly competitive but rather, depending on the circumstances, may elicit nontrivial combinations of competitive and cooperative behaviors are pervasive in real-world scenarios, be it in economics, politics, the biological world or other domains. Among the plethora of such games, we are particularly interested in those where both agents would clearly be better off if they cooperated, yet where classical game theory has different "prescriptions" as to how to act rationally.

The particular game we have studied extensively in recent years is the (Iterated) Traveler's Dilemma (ITD). This game is notoriously counter-intuitive to many economists, social scientists and others, given that its only Nash equilibrium not only goes against what most lay people would consider a good (or even merely sensible) strategy, but is also well known to lead to very low payoffs for both players. In particular, in ITD it suffices that only one of the players "sticks" to the game's unique Nash equilibrium for both players to be guaranteed to fare poorly, regardless of what the other player does. Applying alternative solution concepts from classical game theory (e.g., replacing the Nash equilibrium with evolutionary equilibria, subgame-perfect equilibria or other "old" game-theoretic prescriptions on "acting rationally") does not resolve the problem either, as virtually all classical solution concepts recommend "bidding low" (the lowest possible value), resulting in poor outcomes for both players.
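
For readers unfamiliar with the game, the stage-game payoff rule (in its standard formulation, with the usual bonus/penalty of 2) is easy to state in code; ITD simply iterates it:

```python
def travelers_dilemma_payoffs(claim_a: int, claim_b: int, bonus: int = 2,
                              low: int = 2, high: int = 100):
    """Standard Traveler's Dilemma payoffs: both players receive the lower
    claim; the lower claimant additionally gets `bonus` and the higher one
    pays the same amount as a penalty. Claims lie in [low, high]."""
    assert low <= claim_a <= high and low <= claim_b <= high
    if claim_a == claim_b:
        return claim_a, claim_b
    base = min(claim_a, claim_b)
    if claim_a < claim_b:
        return base + bonus, base - bonus
    return base - bonus, base + bonus

print(travelers_dilemma_payoffs(100, 100))  # (100, 100): mutual high bids pay well
print(travelers_dilemma_payoffs(2, 100))    # (4, 0): the undercutting race's endpoint
```

The unique Nash equilibrium is (2, 2), paying each player only 2, which is exactly the tension between equilibrium "rationality" and sensible play that the abstract describes.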

Over the past decade, several attempts have been made to address this "paradox" in ITD by looking for alternative, novel concepts of individual rationality. One promising such concept is Regret Minimization (proposed originally by Halpern and Pass). While this concept is certainly far more suitable for games such as ITD than the classical concepts based on Nash equilibria, our recent simulation-based investigations show that Regret Minimization is not necessarily the most adequate approach, either: against a broad range of opponents, both a simplistic "always bid high" strategy and more sophisticated, adaptable "cajoling the opponent towards higher bids" strategies have consistently outperformed strategies based on the regret-minimization idea.

Therefore, we pose a question: perhaps the quest for bounded rationality in strategic games such as the Iterated Traveler's Dilemma should not be based on *any* concept of equilibrium, be it Nash, evolutionary, or regret-minimization based. We argue that the Iterated Traveler's Dilemma is an excellent example of the "tragedy of the experts": experimental studies with human subjects have repeatedly shown that lay people with no knowledge of game theory generally do better than "experts" (human players or automated strategies based on classical game theory). Perhaps, therefore, a different approach to defining rational behavior in such games is indeed warranted.

In that context, we revisit the models of practical reasoning familiar from the AI and intelligent-agents literature; in particular, we focus on the well-known BDI (Beliefs, Desires, Intentions) model of practical reasoning by intelligent artificial agents. We adapt the BDI model of (bounded-rational) agency to our game-theoretic context, in particular to ITD. Our early results indicate that such BDI agents in general do very well against a broad range of opponents, and in particular seem to adapt better to both adversarial and cooperative opponents than regret-minimization strategies do. While these results are preliminary, we argue that such practical-reasoning approaches to finding the right model of rational behavior in complex strategic interactions may well be the right way to go, rather than insisting on equilibrium-based models of rationality that, in the context of the Traveler's Dilemma and a few other well-known 2-player games (e.g., the Centipede Game), have repeatedly been shown, both theoretically and experimentally, to fall short of providing an adequate model of individually rational behavior.

Percy Venegas

In the next 10 years, we will witness important transformations in the way human affairs operate. Let us imagine a world where there is no guarantee of privacy at the personal or societal level, and where systemically important financial decisions are made by the consensus of algorithms that cannot be turned off -- a reality where trust in people and institutions (as we conceive it today) becomes effectively irrelevant. This is not a thought experiment; it is a plausible future humanity is heading toward -- in 2019 Google demonstrated for the first time a form of quantum supremacy of the sort that could one day compromise most current implementations of cryptography (the technology that keeps secrets secret) and perform instantly many computations that are currently impossible to complete in a lifetime. Such computational power will give a disproportionate advantage to whichever actor wields it.

On the crypto-economic front, decentralized finance has shown 66.1% compound annual growth from 2014 to 2019 in terms of private-market investment, according to Pitchbook. Blockchain is a transactional technology, and transactions are expressions of trust. But decentralized financial instruments, including derivatives, operate using smart contracts (a form of autonomous program) and are informed about the state of the "real" world by oracles – another technology that may be prone to manipulation. We are effectively automating the finance function without central points of failure or control. The prospect of self-adapting intelligent agents – which cannot be killed – being manipulated by and manipulating markets is very real.

Furthermore, research is already ongoing to combine the two: smart contracts powered by quantum money (Coladangelo). For all this, humanity is both woefully unprepared and blissfully unaware. The transition will occur whether we are ready or not, so we had better understand the ethical implications. From an epistemological view, the belief consensus that decentralization relies on consists of mere representations (mathematical models) of the world; the problem thus naturally lends itself to philosophical exploration framed by the axioms and boundaries of technological development. There is a notable precedent for this: Spinoza's use of the geometrical (Euclidean) method in metaphysics shows what is possible – only that approach is now hundreds of years old, and was based on insights thousands of years old. We are now dealing with interconnected (collective) intelligence increasingly sustained by ever-evolving genetic algorithms – this calls for an approach that is cognizant of evolutionary networks and socio-technical complex systems.

Percy Venegas

We propose a method to assess the counterparty risk of non-banking financial institutions that operate as fiat-crypto gateways in blockchain and distributed ledger technology financial infrastructures. The risk scores are suitable for evaluating both traditional money services businesses and fintech companies (including cryptocurrency payment and blockchain system operators). The main users are banks, investors, and businesses that need to assess counterparty risk across jurisdictions and under uncertainty, as non-banks are often less regulated than other financial institutions. We follow an automation-focused, multidisciplinary approach rooted in established techniques from network science and genetic programming, as well as in the emerging fields of machine behavior and trustworthy artificial intelligence. The method and findings pertain to any decentralized financial infrastructure with centralized components such as fiat on/off ramps, where counterparty risk assessment becomes a necessity for regulatory, investment, and operational purposes. The method is demonstrated on a network of Ripple payment gateways.
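As a toy illustration of the network-science ingredient, the sketch below scores gateways by volume-weighted PageRank on a hypothetical payment graph. The gateway names, volumes, and the use of PageRank as the scoring rule are illustrative assumptions; the paper's actual scoring model also involves genetic programming and is not reproduced here.

```python
# Toy counterparty-exposure sketch on a payment-gateway network.
# All names and volumes are hypothetical; PageRank is a stand-in scorer.
import networkx as nx

flows = [("GatewayA", "GatewayB", 120.0),
         ("GatewayB", "GatewayC", 80.0),
         ("GatewayC", "GatewayA", 40.0),
         ("GatewayD", "GatewayB", 60.0)]

G = nx.DiGraph()
G.add_weighted_edges_from(flows)

# Volume-weighted PageRank as a crude "systemic exposure" score: a gateway
# that concentrates inbound flow from many counterparties scores higher.
exposure = nx.pagerank(G, alpha=0.85, weight="weight")

for gw, score in sorted(exposure.items(), key=lambda kv: -kv[1]):
    print(f"{gw}: exposure {score:.3f}")
```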

Understanding, Assessing, and Communicating on a Complex Operational Environment: Syria | Monday 10:20-12:00

Bryon Mushrush

The role of intelligence analysis is to support leadership's decision making by providing context on enduring issues and insight into future activity. As an environment becomes more interconnected and changes occur more rapidly, traditional analytic methods and techniques begin to fail to provide key insights. Applying complexity science principles offers a way forward, and raises new challenges for analysts in understanding, assessing, and communicating about these challenging security problems.

The ongoing civil war in Syria is a primary example of this type of challenging problem. Over the past nine years, Syria has devolved into one of the most complex security environments facing the international community. As an open system, multiple state, non-state, and sub-state groups simultaneously cooperate and compete within Syria to achieve their objectives at the local, state, and regional levels. These interdependencies hinder traditional frameworks and the ability to clearly determine cause and effect from observed activity. New methods and frameworks need to be applied.

Improving understanding of Syria as a complex environment requires breaking down traditional structures and expanding expertise across a broad network. This sacrifices analysts' ability to quickly develop in-depth expertise in their portfolios by requiring them to take the additional steps to understand how their part fits into the whole. Furthermore, analysts need to assess environmental feedback mechanisms and work closely with other analysts who also have an interest in the feedback pathways. To accomplish this understanding, analysts need to regularly communicate and synchronize their work and incorporate non-traditional viewpoints. Moreover, it is important to incorporate iterative learning by regularly setting aside time to readdress assumptions, review assessments, and balance looking at focus areas with maintaining a larger systemic view of Syria as a whole. Operating in this manner also provides greater flexibility and faster team performance. However, it comes at the risk of information overload and the challenge of sustaining participation.

Assessing the complex environment in Syria requires a combination of looking at Syria as a whole and at the links between actors across different scales. This allows analysts to identify phase transitions, tipping points, and zones of stability within the chaotic operational environment. Furthermore, it becomes easier to identify a framework of simple rules to describe the system and provide context and forward-looking analysis. The key challenge to assessing Syria and other complex environments in this manner is a lack of common training and language among analysts for describing complex systems. The majority of structured analytic techniques are rooted in reductionist methods focused on individual components of the system.

One of the greatest challenges in analyzing operational environments such as Syria is communicating the complexity to decision makers. With limited time, decision makers seek direct answers that will help them decide between Course of Action A and Course of Action B. Saying "it's complex" does not suffice. Concepts related to non-linearity, emergence, and adaptation require clever ways to convey nuances and implications. Focusing on relationships, objectives, and constraints provides a simple method to describe a complex operational environment.

Understanding misunderstanding between Alice and Bob: universal biases in meaning attribution | Monday 14:30-16:10

Irina Trofimova

The content of information transfers is strongly affected by the nature of the observer, i.e., an observer's bias, which works as a selective factor, partially trimming and partially amplifying messages. Observer bias in information processing is a topic of discussion not only in psychology but also in quantum foundations. Based on our own experimental studies in psychosemantics, this presentation shows how the nature of the observer, as well as the nature of the context, contributes to the reading and transfer of information. Several implications for formal languages of quantum foundations are discussed, including a need for multi-set presentations of observers (or information-transferring parties).

Understanding Social Segregation with Reinforcement Learning and Agent Based Modeling | Tuesday 10:20-12:00

Egemen Sert and Alfredo J. Morales

Properties of social systems can be explained by the interplay and weaving of individual actions. Rewards are key to understanding people's choices and decisions. For instance, individual preferences of where to live may lead to the emergence of social segregation. We combine Reinforcement Learning (RL) with Agent Based Modeling (ABM) in order to address the self-organizing dynamics of social segregation and explore the space of possibilities that emerge from considering different types of rewards. The model promotes the creation of interdependencies and interactions among multiple agents of two different kinds that segregate from each other. For this purpose, agents use Deep Q-Networks to make decisions inspired by the rules of the Schelling Segregation model and rewards for interactions. Despite the segregation reward, our experiments show that spatial integration can be achieved by establishing interdependencies among agents of different kinds. They also reveal that segregated areas are more likely to host older people than diverse areas, which attract younger ones. Through this work, we show that the combination of RL and ABM can create an artificial environment for policy makers to observe potential and existing behaviors associated with rules of interactions and rewards.

Sert, E., Bar-Yam, Y. & Morales, A.J. Segregation dynamics with reinforcement learning and agent based modeling. Sci Rep 10, 11771 (2020). https://doi.org/10.1038/s41598-020-68447-8
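The published model couples the ABM with Deep Q-Networks; as a lighter-weight illustration of its starting point, the sketch below keeps only Schelling-style dynamics with an explicit like-neighbor reward term. Grid size, thresholds, and the move rule are illustrative assumptions.

```python
# Minimal Schelling-style grid sketch with an explicit reward term.
# The published model uses Deep Q-Networks; here agents simply relocate
# when their (toy) reward for the current cell falls below a threshold.
import numpy as np

rng = np.random.default_rng(0)
N, EMPTY = 30, 0.2
grid = rng.choice([0, 1, 2], size=(N, N), p=[EMPTY, (1 - EMPTY) / 2, (1 - EMPTY) / 2])

def reward(grid, i, j):
    """Segregation reward: fraction of like-type occupied neighbors."""
    kind = grid[i, j]
    nb = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    same = np.sum(nb == kind) - 1          # exclude the agent itself
    occupied = np.sum(nb != 0) - 1
    return same / occupied if occupied else 0.0

for step in range(50):
    unhappy = [(i, j) for i in range(N) for j in range(N)
               if grid[i, j] != 0 and reward(grid, i, j) < 0.5]
    empties = list(zip(*np.where(grid == 0)))
    rng.shuffle(unhappy)
    for (i, j) in unhappy:
        if not empties:
            break
        k = rng.integers(len(empties))
        ei, ej = empties.pop(k)            # relocate to a random empty cell
        grid[ei, ej], grid[i, j] = grid[i, j], 0
        empties.append((i, j))

print("mean like-neighbor fraction:",
      np.mean([reward(grid, i, j) for i in range(N) for j in range(N)
               if grid[i, j] != 0]))
```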

Understanding the dynamics of African society in relation to women's status | Monday 14:30-16:10

Mary Luz Mouronte-López

African women face several important legal, economic, and social constraints [1]. We are interested in how their situation evolves over time and how it can be improved. Several studies apply machine learning to the analysis of dynamic physical systems [2][3]. According to recent investigations, there are several ways to model a physical system using machine learning methods, among them: generating simulation data for machine learning models, inferring physical processes from data, classifying or predicting complicated processes, and supervising non-linear dynamical systems. Our research studies various socioeconomic indicators related to women's situation in several African countries in order to characterize the dynamic behaviour of society with respect to them (as a dynamical system) and to discover unknown relationships between variables. Since machine learning provides data-modelling techniques complementary to those of classical statistics, we also apply different machine learning classification methods (hierarchical clustering, etc.) together with other statistical calculations (hypothesis tests, correlations, etc.). We compare the results obtained with the different classification algorithms and evaluate their sensitivity to parameter configuration (hyperparameters). The most critical variables related to the position of women within society (such as empowerment, leadership, and discrimination) are identified. We conclude by describing the situation in the different detected groups of African countries. Data provided by the World Bank Group are used: https://www.worldbank.org/

[1] Sheldon, K. (2017). African Women: Early History to the 21st Century. Bloomington, IN, USA: Indiana University Press.
[2] Chai, Z. and Zhao, C. (2020). Enhanced Random Forest With Concurrent Analysis of Static and Dynamic Nodes for Industrial Fault Classification. IEEE Transactions on Industrial Informatics, 16(1), 54-66. doi: 10.1109/TII.2019.2915559.
[3] Barbosa, A.N., Travé-Massuyès, L., Grisales, V.H. (2015). Trend-Based Dynamic Classification for On-line Diagnosis of Time-Varying Dynamic Systems. IFAC-PapersOnLine, 48(21), 1224-1231.
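As an illustration of the clustering step described above, the sketch below runs Ward hierarchical clustering over a small country-by-indicator matrix. The country names and indicator values are placeholders, not World Bank data.

```python
# Sketch: hierarchical clustering of countries by women's-status indicators.
# The indicator values below are placeholders, not World Bank data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

countries = ["CountryA", "CountryB", "CountryC", "CountryD", "CountryE"]
# Rows: countries; columns: hypothetical standardized indicators
# (e.g., labor-force participation, seats in parliament, literacy gap).
X = np.array([[0.61, 0.22, 0.10],
              [0.58, 0.25, 0.12],
              [0.35, 0.08, 0.40],
              [0.33, 0.10, 0.38],
              [0.70, 0.41, 0.05]])

Z = linkage(pdist(X), method="ward")      # Ward hierarchical clustering
labels = fcluster(Z, t=2, criterion="maxclust")

for c, k in zip(countries, labels):
    print(c, "-> group", k)
```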

Understanding the hierarchical organization of the cell through cancer mutational rewiring | Monday 10:20-12:00

Eiru Kim and Traver Hart

Genetic interactions govern the translation of genotype to phenotype at every level, from the function of subcellular molecular machines to the emergence of complex organismal traits. The systematic survey of genetic interactions – where the knockout of two genes yields a phenotype different from what would be expected from the single knockouts independently – in the model organism Saccharomyces cerevisiae showed that genes that operate in the same biological process have highly correlated genetic interaction profiles across a diverse set of experiments, and this trend has been exploited as a powerful predictor of gene function.

Issues of technology and scale render such systematic surveys in human cells impossible. Humans have more than three times as many genes (and ten times as many gene pairs) as yeast, and the technology for gene perturbation in mammalian cells lags that for the facile yeast model, even with state-of-the-art CRISPR tools. Nevertheless, indirect methods can be used to infer functional interactions. A major international effort to identify tumor-specific genetic vulnerabilities by whole-genome CRISPR knockout screening in cancer cell lines has yielded estimates of gene knockout fitness effects in over 500 cell lines from more than 30 tissue lineages. We and others have shown that genes with correlated knockout fitness profiles tend to operate in the same biological processes, enabling not just the inference of gene function but also the identification of functional modules in the cell.

This approach offers a powerful toolkit and reference framework for mammalian functional genomics, but inferring causality from correlation networks is problematic. To bridge the gap from correlation to causation, we examine how the presence of mutations – well characterized in these cancer cell lines – rewires the correlation network. From this rewiring we can learn the biological role of numerous mutations, which can impart either gain or loss of function to individual genes and disrupt or enhance the relationships between functionally adjacent genes. In doing so, we are effectively using cancer as a model for cell proliferation (instead of vice versa), and we learn not just modularity but also the hierarchy of organization in the human cell.
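Schematically, the correlation-network step works like the sketch below: correlate knockout-fitness profiles across cell lines, then threshold into a co-functionality network. The fitness matrix here is synthetic stand-in data, not real screen scores, and the threshold is arbitrary.

```python
# Sketch: inferring a co-functionality network from knockout-fitness profiles.
# Matrix F is synthetic stand-in data (genes x cell lines), not screen scores.
import numpy as np

rng = np.random.default_rng(1)
genes = [f"gene{i}" for i in range(20)]
F = rng.normal(size=(20, 500))              # fitness effects across 500 lines
F[1] = F[0] + 0.1 * rng.normal(size=500)    # plant one correlated module

C = np.corrcoef(F)                          # gene-by-gene Pearson correlation
edges = [(genes[i], genes[j], round(C[i, j], 2))
         for i in range(len(genes)) for j in range(i + 1, len(genes))
         if C[i, j] > 0.4]                  # threshold into a network

print(edges)  # expected: the planted (gene0, gene1) pair
```

Mutation-dependent rewiring can then be probed by recomputing C separately for mutant and wild-type cell-line subsets and comparing the two networks.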

Urban complexity science | Thursday 8:30-9:05

Elsa Arcaute

In this talk we will discuss how to think about urban systems in terms of complex systems, and how we can advance this field. In particular, we will look at the tension between bottom-up processes and top-down interventions, which have been shaping different aspects of urban systems, and which can be used to address pressing issues such as inequality.

Using Lexical Link Analysis (LLA) as a Tool to Analyze A Complex System and Improve Sustainment | Monday 10:20-12:00

Ying Zhao and Edwin Stevens

A major challenge in the complex enterprise of US Navy global materiel distribution is that when a new operating condition occurs, the probability-of-failure or demand model for a Naval ship part or item needs to be modified to adapt to the new condition so that adequate spare parts can be stocked. Meanwhile, historical supply databases contain demand patterns and associations that become critical when the new condition enters the system as a perturbation or disruption that can propagate through the item-association network. In this paper, we first show how the two types of item demand change interact and can be integrated to calculate the total demand change (TDC). We then present a use case showing how to apply lexical link analysis (LLA) to address the challenges of a complex system and discover the item-association network through which the TDC propagates.

Utilizing complex networks with error bars | Monday 10:20-12:00

Istvan Kovacs

Network theory is a powerful tool to describe and study complex systems, and there has been tremendous progress in mapping large networks in all areas of science, leading to a growing library of complex network datasets. It is, however, unrealistic to assume that the obtained graphs are exact. Inherent limitations of the measurement processes lead to errors, biases, and missing data. Therefore, as in any other quantitative field, it is of paramount importance to characterize the uncertainty of our maps. Yet, unlike a simple error bar for a single-valued quantity, the uncertainty of a network structure is itself expected to have a complex, network structure, requiring novel methodologies. Currently, there are only a few cases where we have charted not only the expected graph structure but also a full quantification of the error and incompleteness patterns. Using the yeast genetic interaction network as an example, we will review how such detailed information can help us to solve key problems, such as link prediction, noise reduction, community detection, stability analyses, or functional annotation. To conclude, putting error bars on our network maps is not a nuisance but an essential ingredient in addressing long-standing problems in the field.

Virtual Teaching of Linear Algebra with Complex Systems and Artificial Intelligence Case Studies | Thursday 10:30-12:00

Patrik Christen and Terry Inglese

Due to COVID-19, universities, higher education institutions, and schools all around the world had to shut down their on-site activities during the first semester of the academic year 2020, and teaching had to continue in an entirely virtual setting, either asynchronously or synchronously, or in a blended learning style. Many instructors were not fully prepared to teach virtually, and fast adaptation to the new and unusual circumstances was needed. While this unfamiliar situation has brought technical and especially educational hurdles, instructors are currently preparing for the second semester of 2020 with a new awareness, taking into account lessons learnt from the previous semester. In this abstract, we outline our instructional concept for virtually teaching a one-semester linear algebra course in the BSc program Business Information Technology at FHNW in Switzerland. Students of this study program are likely to hold management positions working at the intersection between business and IT, where it is essential to conceptually understand mathematical methods and to apply them in dynamic business projects, often involving complex systems and artificial intelligence.

We propose an instructional design based on an overarching educational framework with two streams: understanding by design [1] and virtual teaching best practices [2, 3]. Understanding by design is an instructional planning approach focused on achieving understanding through backward design of the curriculum. There are three stages in backward design: identify desired results, determine acceptable evidence, and plan learning experiences and instruction. Accordingly, we formulated two learning objectives to identify the desired results: (1) students understand and are able to explain basic linear algebra concepts and to solve exercises, and (2) students are able to relate these basic concepts to complex systems and artificial intelligence case studies. Acceptable evidence with respect to the first learning objective is assessed through individual exercises that have to be handed in. Learning outputs are graded according to the criteria of understanding, correctness, creativity, and context, in combination with academic effort and collaboration between students. Acceptable evidence with respect to the second learning objective is assessed through case study discussions, with the quality and quantity of discussion inputs as the grading criteria. Learning experience and instruction consist of five steps: motivation, foundation and application, exercise, question and answer, and case study. In our course we chose five concepts: vectors and matrices, systems of linear equations, linear transformations, determinants, and eigenvalues and eigenvectors. For each concept, all five steps are covered: (1) Students watch a ten-minute video created by the lecturer that introduces the topic, motivating students to consider the ingenuity and relevance of the basic concepts and connecting them to important and especially intriguing applications and case studies. (2) Students read and study lecture notes created by the lecturer that describe definitions and notations of mathematical concepts and their applications; the lecture notes are structured in the same way for each of the basic concepts. (3) Students solve exercises provided in the lecture notes, applying their knowledge of the basic concepts. Exercises are designed so that students have to come up with their own concrete examples, guided by abstract descriptions; scaffolding is provided through examples in the lecture notes, and students help each other, if necessary, in a dedicated video conference. (4) Students ask the lecturer questions in a one-hour question-and-answer video conference. (5) The lecturer asks the students questions about a complex systems or artificial intelligence case study, such as random matrices of various complex systems or Google's PageRank. Articles and book chapters serve as case study material and are provided in the appendix of the lecture notes.
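The PageRank case study, for instance, connects directly to the eigenvalues-and-eigenvectors topic: the ranking vector is the dominant eigenvector of a stochastic link matrix, computable by power iteration. A minimal sketch on a hypothetical four-page web (illustrative only; not taken from the course materials):

```python
# PageRank as an eigenvector problem: power iteration on a toy 4-page web.
# Illustrates the eigenvalue/eigenvector case study; not the course's notes.
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # page -> outgoing links
n, d = 4, 0.85                                    # pages, damping factor

M = np.zeros((n, n))
for page, outs in links.items():
    for o in outs:
        M[o, page] = 1.0 / len(outs)              # column-stochastic link matrix

G = d * M + (1 - d) / n * np.ones((n, n))         # "Google matrix"
r = np.ones(n) / n
for _ in range(100):                              # power iteration converges to
    r = G @ r                                     # the dominant eigenvector
print(np.round(r, 3))                             # PageRank scores
```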

The second stream of our educational framework is based on best practices shared by experienced practitioners involved in virtual teaching and evaluation. Following Purdy's [2] suggestions, we designed our virtual course according to the three C's of course design: consistency, creativity, and community. Consistency in this context means designing every virtual lecture according to the same structure, so that students do not lose time navigating virtual contents. Creativity refers to the multiple possibilities technology offers students to share their understanding and learning of mathematical concepts; using this multimodality as an assessment option, students can demonstrate cognitive and also affective ownership of the instructional content [4]. Creating a learning community in a virtual environment is not an easy task; designing learning units through which students can engage with the learning content, the instructor, and fellow students is a way to scaffold the learning experience. Our instructional design also agrees with the six principles for planning virtual courses suggested by Schiefelbein [3]: communication, in terms of frequency and regular check-ins; consistency, as timely grading feedback; organisation, as easy access to material, clear directions, and simple navigation; personalisation, since students want to connect with the instructor by hearing from and exchanging with him or her; connection with the material; and involvement, because students like to be an active part of the entire learning process.

The proposed instructional design will be used for the first time during the upcoming second semester of 2020; we hope it also proves useful for other lecturers currently preparing their own virtual or blended courses.

References
[1] Grant Wiggins and Jay McTighe. Understanding By Design. Association for Supervision and Curriculum Development, Alexandria, 2005.
[2] Jill Purdy. How can the three C’s of course design enhance students’ online performance? Magna Publications, 2018.
[3] Jill Schiefelbein. What do students expect from online courses? Magna Publications, 2018.
[4] Mary Susan Love. Multimodality of learning through anchored instruction. Journal of Adolescent & Adult Literacy, 48(4):300–310, 2004.

A Visual Exploratory Tool for Migration Analysis | Tuesday 14:30-16:10

Mert Gürkan, Hasan Alp Boz, Alfredo Morales and Selim Balcisoy

Migration is a complex social phenomenon that emerges as the result of a series of incidents in distinct domains such as geography, demography, sociology, and economy. In order to analyze migration dynamics, conventional methods rely heavily on structured data, i.e., labor-intensive surveys and annual reports published by international organizations such as the OECD and the UN. Although such conventional methods yield reliable data, they often address only specific research questions. In order to obtain further insights into migration dynamics, unstructured data extracted from social media can be deployed. However, a combination of structured and unstructured datasets places a significant cognitive load on users. To this end, a visual analytical tool is required to alleviate the cognitive load and better convey the information regarding the underlying dynamics of migration.

The Global Flow of People [1] gives users a way to analyze immigration and emigration estimates for countries and their trends by allowing users to select year intervals. Flow Monitoring Europe [2] is another tool, dedicated to monitoring migration movements whose destination is a European country. For locations with high migration activity, the tool uses heat maps to display those areas on the world map, along with additional interactive visual aids. Lastly, the Migration Data Portal [3] lets users select from a variety of migration-related social phenomena to visualize; this is achieved by utilizing and combining different public datasets shared by international organizations. In contrast to the studies mentioned above, which solely utilize structured datasets, this study proposes an exploratory visual analytics tool that combines multiple structured and unstructured sources, so that additional questions and contexts can be raised that might have remained undiscovered in a single-source analysis. The Facebook Marketing API [4] is used as a medium for collecting unstructured data, in conjunction with the Twitter API [5].

The proposed tool creates a visual environment that presents the relationships between migration and other social and economic phenomena. Because of the complexity of the subjects under investigation, the research objectives of the proposed tool are guided by domain experts in migration studies. The main research objective is the ability to quickly integrate new datasets gathered from both structured and unstructured sources to enrich the analysis of migration dynamics. This is attained by using data obtained from the Facebook Marketing API as migration estimates: the API reports the estimated advertisement reach among the platform's users based on geographic, demographic, and behavioral categories. As shown in Figure 1, component A of the proposed tool visualizes the migration estimates gathered from the Facebook Marketing API as a choropleth map, which eases the comparison of countries' migration statistics. This alternative data source also complements the migration estimates published by international organizations, whose publications tend to be annual or appear at long intervals.

In addition to the research objectives discussed above, another capability of the proposed tool is the joint analysis of datasets from various sources and subjects, especially the ability to analyze structured and unstructured data together. Components B and C, as illustrated in Figure 1, serve this purpose. Figure 1 displays a use-case scenario focused on the migration dynamics of Portugal. To introduce an additional perspective to the analysis, the trade ego-network of Portugal is displayed in component B. Tabular data, as displayed in component C, allows users to inspect findings from multiple sources. With all its components, the tool allows users to incorporate structured and unstructured datasets obtained from multiple sources and conduct a holistic evaluation from the extended perspective gained by adding unstructured datasets to the migration analysis.

References

[1] G. J. Abel, N. Sander, Quantifying global international migration flows, Science 343 (6178) (2014) 1520–1522.
[2] International Organization for Migration. Flow Monitoring Europe. https://migration.iom.int (accessed January 10, 2020).
[3] International Organization for Migration. Migration Data Portal. https://migrationdataportal.org (accessed January 10, 2020).
[4] J. Palotti, N. Adler, A. J. Morales, J. Villaveces, V. Sekara, M. Garcia Herranz, M. Al-Asad, I. Weber. Real-Time Monitoring of the Venezuelan Exodus through Facebook's Advertising Platform. UNHCR, 2019.
[5] E. Zagheni, V. R. K. Garimella, I. Weber, B. State. Inferring international and internal migration patterns from Twitter data, in: Proceedings of the 23rd International Conference on World Wide Web - WWW '14 Companion, ACM Press, 2014.
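A minimal sketch of a choropleth view in the spirit of component A follows. The country codes and estimate values are hypothetical, and the use of plotly is an illustrative assumption; the paper does not specify its plotting stack.

```python
# Toy choropleth in the spirit of component A; estimates and the choice of
# plotly are illustrative assumptions, not the authors' implementation.
import pandas as pd
import plotly.express as px

df = pd.DataFrame({
    "iso_alpha": ["PRT", "ESP", "FRA", "DEU"],           # hypothetical values
    "migrant_estimate": [120_000, 310_000, 450_000, 510_000],
})

fig = px.choropleth(df, locations="iso_alpha",
                    color="migrant_estimate",
                    title="Hypothetical migrant-stock estimates")
fig.show()
```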

Visualizing Community Structured Complex Networks | Monday 10:20-12:00

Zhenhua Huang, Zhenyu Wang, Wentao Zhu, Junxian Wu and Sharad Mehrotra

Visualizing complex networks is one of the fundamental tasks in complex network analysis. Layout algorithms provide an intuitive way of visualizing and understanding complex networks. Complex networks such as social networks, co-authorship networks, and protein interaction networks are composed of community structures. Existing network visualization methods, mostly based on force-directed algorithms, do not fully exploit community structures, leading to poor layouts, especially as the size and complexity of networks increase. We propose a novel method, GRA (Generalized Repulsive and Attractive algorithm), that leverages community information when visualizing complex networks. The algorithm weights repulsive and attractive forces between and within community nodes. GRA simulates the nodes in a network as particles and moves them according to repulsive and attractive forces until convergence. The method is also extended to visualize larger-scale graphs by compressing the original graph using detected communities. An area estimation method based on a multivariate Gaussian distribution with noise tolerance is introduced to quantify the effectiveness of network visualization. The metric estimates the core areas of communities to prevent the visualization from becoming entangled while making as full use of the canvas space as possible. Our method can support network visualization with tens of thousands of nodes at high quality, producing much better layouts than classical spring layouts. We compare our method with classical visualization methods on several networks and show its advantages.
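To convey the core idea, the sketch below gives intra-community edges heavier weights before running a standard spring (force-directed) layout, so that attraction is stronger within communities. This illustrates the weighting idea only; it is not the GRA implementation, and the 5.0/0.5 weights are arbitrary.

```python
# Community-weighted force-directed layout sketch (not the GRA algorithm):
# heavier intra-community edge weights => stronger attraction in the layout.
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()
communities = community.greedy_modularity_communities(G)
member = {v: k for k, com in enumerate(communities) for v in com}

for u, v in G.edges():
    G[u][v]["weight"] = 5.0 if member[u] == member[v] else 0.5

pos = nx.spring_layout(G, weight="weight", seed=42)  # node -> 2-D coordinates
print("example coordinates:", pos[0], pos[33])
```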

Water Crises: when water security meets hydrocomplexity | Thursday 10:30-12:00

Osmar Coelho Filho and Oscar Cordeiro Netto

Water occurs in three physical states (solid, gaseous, and liquid) but in different forms of storage and patterns of consumption, whether for the production of food or of energy. If, on the one hand, there is a nexus on the supply side articulating generation and storage, there is also a nexus on the demand side joining water use and governance. Water crises have disclosed complex processes that manufacture a hybrid of dysfunctional economic, fiscal, land, and environmental policies. Water crises are complex processes marked by uncertainties, ambiguities, and ambivalences (contradictions), as well as retroactions and recursions in the feedback loops between causes and effects. They can, however, be described through the emergent properties of hydrosocial territories. Similar to financial crises, a water crisis can be seen as the emergence of multiple crises, which add up and alter the trajectories of both the perceived impacts and the government responses. In this sense, it is strategic to characterize water systems as hydrosocial territories where human and non-human agents (environmental variables, technologies, governance systems), institutions, ecological and climatic systems, norms, and cultural practices are in constant interaction, producing contexts and meanings. These social and natural territories are transformed by power relations, which configure and reconfigure water governance. Interactions in hydrosocial territories are part of a complex process in which water occupies a central position in a hub of connections. The word complexity comes from the Latin complexus, meaning "woven together". The concept of hydrocomplexity can contribute to the understanding of water crises and to water security strategies through the identification of emergent key factors, synthesizing the network of interconnected processes in which the integration of science and engineering, observational and information systems, computer and communication systems, as well as social systems and institutional approaches, can offer agile and adapted solutions. Emergent factors are those that appear as new systemic properties, which intensify, diverge, or cancel out variables. To curb water crises, water security seeks to develop anticipatory capacities to foresee impact probabilities by integrating hard (infrastructure) and soft (governance) technologies, as well as effective communication among water users. The Process Network methodology was used in this article to identify emergences, presenting flows and their feedbacks at the associated time scales, the dependencies and strengths of variables, and the emergent properties resulting from the grouping of forces, feedback loops, and processes of concealment, co-variation, and opposition among variables. The main results in terms of emergent factors in the analyzed water crises (New York, São Paulo, Cape Town, and Brasília) showed that in periods of economic stimulus, land policies and urban and rural expansion policies triggered a dynamic process of decreasing water availability. At first, this decrease was not noticed, given the relative and total volumes distributed. In the medium term, subsequent periods of recession can cut resources for new water infrastructure while water grabbing continues. However, as climate change intensifies hydroclimatic variables such as temperature, less water is available for the next period of economic expansion.

What does theoretical physics tell us about Mexico's December Error crisis? | Thursday 10:30-12:00

Oliver López Corona and Giovanni Hernández

A perfect economic storm emerged in México in what was called (mistakenly, under our analysis) The December Error (1994), in which Mexico's economy collapsed. In this paper, we show how theoretical physics may help us understand the underlying processes of this kind of economic crisis and, eventually, perhaps develop an early warning. We specifically analyze monthly historical time series for inflation from January 1969 to November 2018. We found that Fisher information is insensitive to inflation growth in the 1980s but captures The December Error (TDE) quite well. Our results show that under the Salinas administration the Mexican economy was characterized by unstable stability, most probably due to hidden-risk policies in the form of macroeconomic controls that artificially suppressed randomness out of the system, making it fragile. We therefore conclude that it was not a December error at all, but a sustained, sexenio-long error of fragilization.
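For illustration, a sliding-window Fisher information index can be computed with a standard discrete estimator over binned values, as in the sketch below. The time series here is synthetic, and the exact estimator used in the paper may differ.

```python
# Sliding-window Fisher information of a time series, using the discrete
# estimator I ~ 4 * sum_i (sqrt(p[i+1]) - sqrt(p[i]))^2 over binned values.
# Synthetic data; the paper's exact estimator may differ.
import numpy as np

def fisher_information(window, bins=8):
    p, _ = np.histogram(window, bins=bins)
    p = p / p.sum()                        # empirical probabilities per bin
    sq = np.sqrt(p)
    return 4.0 * np.sum(np.diff(sq) ** 2)

rng = np.random.default_rng(0)
calm = rng.normal(0.5, 0.05, 300)          # stable inflation regime
crisis = rng.normal(3.0, 1.0, 100)         # crisis-like burst (e.g., Dec 1994)
series = np.concatenate([calm, crisis])

w = 60
fi = [fisher_information(series[t:t + w]) for t in range(len(series) - w)]
print("min/max FI:", round(min(fi), 3), round(max(fi), 3))
# Sharp changes in FI across windows flag the regime change.
```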

What Are Group Level Traits? | Thursday 10:30-12:00

Burton Voorhees

Cultural group selection plays a major role in theories of cultural evolution. Group-level cultural traits give some groups an advantage in competition with other groups, or in surviving in a hostile environment. Questions arise, however, with respect to how group-level traits are defined. The intent of this paper is to explore the way these traits are conceived in three different approaches to cultural evolution and to make distinctions that will help to clarify the nature of group-level traits and illustrate some of the problems that need to be addressed. Group-level traits are often conceptualized as weighted averages over individual traits, manifesting at the group level. In studies of the evolution of cooperation, for example, one group will be considered more cooperative than another if it contains a higher fraction of cooperative individuals. The individual trait of cooperativeness is attributed to the existence of group norms and institutions that alter within-group selective pressures so that genetic selection favors psychological tendencies whose manifestations support prosocial behavior. Group-level selection acting on this sort of group trait, however, reduces to individual selection and can be dealt with through inclusive fitness. Are there group-level traits that cannot be reduced to individual-level selection? This is a matter of dispute. Some models of cultural evolution posit group-level traits that are irreducible; others do not. Three theoretical approaches to cultural evolution are culture-gene coevolution (CGC), cultural attractor theory (CAT), and culture-worldview coevolution (CWC). In CGC the prime emphasis is on analysis of the means for accurate transmission of cultural elements. CAT, on the other hand, is based on the assumption that cultural elements are reconstructed in the minds of each succeeding generation; transmission is not accurate, but cultural elements remain relatively stable because of cultural attractors, which bias the reconstruction process into limited channels. In both of these cases, the driver for cultural evolution is Darwinian variation and selection. CWC takes a different view, emphasizing that the biological analogy is faulty because with culture there is no phenotype/genotype distinction. In CWC, individual worldviews and cultural elements (cultural idea systems) coevolve through a non-Darwinian process of communal exchange. Further, there are group-level traits that are intrinsic to the group. Thus, two types of group-level traits are distinguished: synthetic traits, which exist at the group level as a result of synthesis over individual traits in the group population, and intrinsic traits, which exist at the group level but cannot be reduced to averages or summations over individual traits. In the latter case, the group trait often prescribes how individuals are expected to behave if they are to remain group members. Thus, one aspect of this distinction is the direction of benefit. With synthetic traits, the group benefits by virtue of traits of individual group members. With intrinsic traits, it is the group members who benefit by adherence to the prescribed behavioral constraints. Finally, the issue of trait distribution is raised, indicating that realistic models of cultural group selection necessarily involve multiple aspects of a group culture.

What Are Societies, and What Keeps Them Together and Tears Them Apart? | Monday 13:30-14:10

Mark W. Moffett

If a chimpanzee ventures into the territory of a different group, it will almost certainly be killed. But a New Yorker can fly to Los Angeles–or Borneo–with little fear. Psychologists have done little to explain this: For years, they have held that our biology puts a hard upper limit of about 150 people on the size of our social groups. But human societies are in fact vastly larger. How do we manage—by and large—to get along with each other?

An essential feature of any society is the capacity of its members to distinguish one another from outsiders and reject outsiders on that basis. In the majority of vertebrates that live in societies, the members must be able to recall each other as individuals for a society to remain intact and stay clearly separated from other such groups. Such “individual recognition” species generally have societies of a few dozen—a population ceiling likely determined at least in part by the cognitive challenges on each animal to keep track of others (and even of those society members it never cooperates with). These societies are contrasted with those of most social insects and humans, in which membership is “marked” by shared traits such as a scent in insects and a plethora of characteristics like rituals, clothing, and language in humans, allowing for the existence of strangers within a society. In humans and some social insects, consistent employment of these markers of identity permits societies to grow enormous—in a tiny minority of those species, including our own, without bounds when conditions permit. The unusual human capacity to be comfortable around strangers could be of critical importance in a wide range of research on social evolution, including the emergence of complexity both within and between societies.

When Econometrics Met Information Theory: A Real-time Approach to Track Yen Exchange Rate | Tuesday 14:30-16:10

Nan Wang

Tracking currency exchange rates is crucial in many respects. While the financial press closely tracks the rates of major currency pairs, the scientific literature has been mostly silent on their patterns. This paper proposes an innovative approach to track the exchange rate of the Japanese yen against the US dollar (USDJPY), the second most traded currency pair. The tracking index generated by this approach achieves significant linear correlation with USDJPY.

The proposed approach applies econometrics and information theory in a real-time manner, leveraging the former for modeling high-dimensional sparse data and the latter for measuring nonlinear dependency.

The approach has three phases: "vintage" generation, "vintage" storage, and transfer-entropy calculation. It is based on a deployed system called the DeepMacro Factor System (DFS), whose construction began in 2007. Since then, the system has evolved continuously to adapt to ever-changing global macroeconomic environments.

The proposed approach is innovative in four aspects:

Use a real-time database

Cover a broad range of economic information

Update information in a timely manner

Consider dependencies between countries
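The third phase computes transfer entropy, the directed, nonlinear dependency measure at the heart of the approach. The sketch below is a minimal binned estimator of TE(X -> Y) on synthetic series; it is far cruder than what a production system like DFS would use.

```python
# Minimal binned transfer-entropy estimator TE(X -> Y):
# TE = sum p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ].
# Synthetic series and coarse binning, for illustration only.
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    x = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    y = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    singles = Counter(y[:-1])                       # y_t
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.roll(x, 1) + 0.3 * rng.normal(size=2000)     # y lags x by one step
print("TE(x->y):", transfer_entropy(x, y))          # should be clearly larger
print("TE(y->x):", transfer_entropy(y, x))          # than the reverse direction
```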

When will a large complex system be stable (revisited)? | Tuesday 14:30-16:10

Harold Hastings and Tai Young-Taft

In a seminal 1972 paper, Robert M. May asked "Will a large complex system be stable?" [1] and argued that stability (of a broad class of random linear systems) decreases with increasing complexity, sparking a revolution in our understanding of ecosystem dynamics [2]. Thirty-six years later, May, Levin, and Sugihara translated our understanding of the dynamics of ecological networks to the financial world in a second seminal paper, "Complex systems: Ecology for bankers" [3]. Just a year later, the US subprime crisis led to a near-worldwide "great recession," spread by the world financial network.

May’s 1972 paper stood in sharp contrast to arguments of MacArthur [4] and Hutchinson [5] that the presence of multiple energy pathways in complex food webs increased stability – essentially stability increases with increasing complexity in the presence of appropriate conservation laws.

This apparent conflict can be reconciled as follows. May considered random linear systems defined by random n × n matrices whose entries are chosen independently, with probability C (the connectance), from a distribution of mean 0 and variance a^2, and are 0 otherwise. This makes the average vertex degree k = nC. Under suitable additional hypotheses [6-9], such a system is asymptotically almost surely stable if and only if (a^2)nC < 1, the May-Wigner stability theorem. It is then clear that for a family of systems whose interaction strengths decrease sufficiently rapidly with increasing vertex degree k, namely lim (a^2)k = 0 (a type of conservation law), stability increases with increasing complexity. This simple argument (c.f. Hastings [10]) formalizes the role of averaging in Lyapunov stability.

Moreover, as we shall show, this analysis can be used to formalize and quantify concepts such as “too big” or “too connected” in the context of financial economics.

In addition, extending the May-Wigner stability theorem to correlated interactions provides a way to formalize and quantify how correlated behavior, frequently seen in times of crisis, can destabilize a system.

Finally, recent work [11-13] shows that the May-Wigner transition from stability to instability can be extended to non-linear systems.
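The May-Wigner criterion is easy to check numerically. The sketch below draws random community matrices J = -I + A, with A_ij nonzero with probability C and drawn from N(0, a^2), and measures the fraction that are stable on either side of (a^2)nC = 1; the parameter values are illustrative.

```python
# Numerical check of the May-Wigner criterion: J = -I + A is (asymptotically)
# stable iff a^2 * n * C < 1. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def stable_fraction(n, C, a, trials=50):
    ok = 0
    for _ in range(trials):
        A = rng.normal(0.0, a, (n, n)) * (rng.random((n, n)) < C)
        np.fill_diagonal(A, 0.0)
        J = -np.eye(n) + A                 # self-regulation plus interactions
        if np.max(np.linalg.eigvals(J).real) < 0:
            ok += 1
    return ok / trials

n, C = 200, 0.1
for a in (0.15, 0.22, 0.30):               # a^2 nC = 0.45, 0.97, 1.8
    print(f"a={a}: a^2*n*C={a*a*n*C:.2f}, "
          f"stable fraction={stable_fraction(n, C, a):.2f}")
```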

References
1. May, Robert M. 1972. “Will a large complex system be stable?” Nature 238(5364): 413-414.
2. Allesina, Stefano, and Si Tang. 2015. “The stability–complexity relationship at age 40: a random matrix perspective.” Population Ecology 57(1): 63-65.
3. May, Robert M., Simon A. Levin, and George Sugihara. 2008. “Complex systems: Ecology for bankers.” Nature 451(7181): 893-895.
4. MacArthur, Robert. 1955. “Fluctuations of animal populations and a measure of community stability.” Ecology 36(3): 533-536.
5. Hutchinson, G. Evelyn. 1959. “Homage to Santa Rosalia or why are there so many kinds of animals?” The American Naturalist 93(870): 145-159.
6. Cohen, Joel E., and Charles M. Newman. 1985. “When will a large complex system be stable?” Journal of Theoretical Biology 113: 153-156.
7. Girko, Vyacheslav L. 1985. “Circular law.” Theory of Probability and Its Applications 29(4): 694-706.
8. Bai, Z. D., and Y. Q. Yin. 1986. “Limiting behavior of the norm of products of random matrices and two problems of Geman-Hwang.” Probability Theory and Related Fields 73(4): 555-569.
9. Geman, Stuart. 1986. “The spectral radius of large random matrices.” The Annals of Probability: 1318-1328.
10. Hastings, Harold M. 1984. “Stability of large systems.” BioSystems 17: 171-177.
11. Sinha, Sitabhra, and Sudeshna Sinha. 2005. “Evidence of universality for the May-Wigner stability theorem for random networks with local dynamics.” Physical Review E 71(2): 020902.
12. Fyodorov, Yan V., and Boris A. Khoruzhenko. 2016. “Nonlinear analogue of the May-Wigner instability transition.” Proceedings of the National Academy of Sciences 113(25): 6827-6832.
13. Ipsen, J. R. 2017. “May–Wigner transition in large random dynamical systems.” Journal of Statistical Mechanics: Theory and Experiment 2017(9): 093209.

Is collective evasion in prey anti-predator or anti-peer? | Thursday 10:30-12:00

Wen-Chi Yang

The coordinated movement of gregarious animals, especially fish and birds, has long been explained in terms of anti-predator functions such as the emergence of confusion effects or information-transfer effects. The implicit assumption behind this viewpoint is that harming predators improves the fitness of prey individuals. However, criticism arises on two fronts. First, for the representative species of collective evasion, such as sardines, herrings, starlings, and sparrows, predators are almost undefeatable during hunts, so anti-predator struggles yield no survival benefit to the prey population. Second, while making predators fail is trivially the only beneficial effort for any solitary prey, the same tactic is not necessarily adaptive in evolution when prey live in groups and undergo intraspecific competition. Focusing on the collective motion of abundant prey individuals facing undefeatable predators, we present several evolutionary simulations of our proposed crowded selfish herd scenario, by means of cellular automata, agent-based models, and game-theoretic models. The consistent outputs demonstrate that the coordinated movements of animal swarms can emerge from anti-peer adaptations driven by short-term selfishness, without additional mechanisms such as group selection or kin selection. The presentation will introduce these theoretical studies and discuss the concerns and perspectives that follow from these findings.
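A toy one-dimensional illustration of the selfish-herd intuition underlying this scenario follows: each prey steps toward its nearest neighbor, shrinking its own domain of danger at a peer's expense, and clustering emerges without any anti-predator coordination. This is a classic Hamilton-style sketch, far simpler than the talk's cellular-automata, agent-based, and game-theoretic models.

```python
# Toy 1-D "selfish herd": each prey hops toward its nearest neighbor,
# reducing its own domain of danger at a peer's expense. Illustrative only;
# the talk's models are richer than this sketch.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, 30)              # prey positions on a line

def mean_nn_dist(p):
    d = np.abs(p[:, None] - p[None, :])
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

print("mean nearest-neighbour distance before:", round(mean_nn_dist(pos), 2))
for step in range(200):
    for i in range(len(pos)):
        others = np.delete(pos, i)
        nearest = others[np.argmin(np.abs(others - pos[i]))]
        pos[i] += 0.1 * np.sign(nearest - pos[i])   # step toward nearest peer
print("mean nearest-neighbour distance after:", round(mean_nn_dist(pos), 2))
```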