Reciprocal Conceptual Frameworks for Neural System Autonomy and Behavior

Conceptual frameworks for understanding how neural systems mediate autonomy and behavior rely on network analyses to derive general principles of brain operation. Networks are a standard representation of data throughout the sciences, and higher-order connectivity patterns (Benson et al., 2016) are essential to understanding the fundamental structures that control and mediate the behavior of complex systems. Network neuroscience, notably, enables the examination of neural interactions measured at varying spatio-temporal scales, and multilayer network models are used to gauge brain dynamics over time (Muldoon and Bassett, 2016). Stability in brain dynamics is related to the establishment of physical attractors, domain configurations that reset brain dynamical variance in far-from-equilibrium states; pacemaker neural circuits, for example, are dynamically configured to reestablish default baseline states at regularly spaced temporal intervals. Determining the range allocation of far-from-equilibrium states is a significant issue for gauging attractor stability in neural networks. Such assessment is of particular interest for characterizing the topological boundaries of neural network profiles, which clarify the perimeters of system closure and hence the constraints on systemic persistence. Philosophically, attractors thus constitute performance motifs of neural network systems, and their topological boundaries act as constraints dictating system closure. Attractor behavior, however, is not determined by constraining influences alone (Winning and Bechtel, 2016). Recurrent attractor models, notably, are subject to externally introduced input variations, like those that ground memory models built on fixed-point attractors such as Hopfield attractor networks.
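The fixed-point attractor memories mentioned above can be sketched concretely. The following minimal Hopfield-style example (illustrative only; the pattern sizes and function names are not from the poster) stores a pattern via Hebbian outer-product learning and then recovers it from a corrupted cue, showing how a stored state acts as a fixed-point attractor:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product learning: each stored pattern becomes a fixed-point attractor."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=20):
    """Synchronous updates; the state settles into the nearest attractor basin."""
    s = state.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):  # fixed point reached
            break
        s = s_new
    return s

# Store one pattern and recover it from a corrupted cue.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
cue = pattern.copy()
cue[:2] *= -1  # flip two bits; the cue still lies in the pattern's basin
print(np.array_equal(recall(W, cue), pattern))  # True
```

The corrupted cue converges back to the stored pattern because it starts within that attractor's basin; cues corrupted beyond the basin boundary would not, which is one way of reading the poster's point about topological boundaries constraining system closure.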
Modulating input, via repetitive Hebbian-like conditioning or genetic perturbations that alter underlying attractor parameters, enhances rather than constrains attractor landscapes to yield novel network profiles, and is, necessarily, environmentally open. Engineered neural perturbations, for example, can be expected to yield new attractor landscapes or, alternatively, to generate spurious attractors with poorly conditioned basins of attraction that fail to achieve system closure or that modify attractor consolidation and tuning. Higher-order network connectivity patterns, moreover, work in tandem with input variation to broaden performance ranges, creating expansion opportunities for neural system behaviors, a circumstance ubiquitous in the natural world. Novel ordering principles governing network patterns, furthermore, emerge at progressively higher hierarchical levels to propagate sets of new system behaviors. This poster will argue that neural system performance is thus governed by reciprocal influences mediated at hierarchically distinct organizational levels; conceptual frameworks, in consequence, require the inclusion of 1) constraints operative at the level of the performance motif that ensure systemic closure, and 2) an emergent behavioral fecundity operative at supramotif levels, due to the concerted intersection of external stimuli with an ascending hierarchy of network ordering principles.
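As a toy illustration of how a weight-level perturbation can reshape an attractor landscape and defeat system closure, the sketch below (all names and sizes are illustrative assumptions, not from the poster) stores one pattern in a Hopfield-style network and then applies an engineered perturbation that inverts every synaptic weight:

```python
import numpy as np

def is_fixed_point(W, p):
    """One synchronous update; p is an attractor state only if it maps to itself."""
    s = np.sign(W @ p)
    s[s == 0] = 1
    return np.array_equal(s, p)

# Hebbian weights storing one pattern.
rng = np.random.default_rng(0)
p = rng.choice([-1, 1], size=32)
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0.0)

print(is_fixed_point(W, p))   # True: the stored pattern is a fixed-point attractor

# An engineered perturbation that inverts every weight destroys the
# attractor: the stored state no longer maps to itself, so the network
# fails to achieve closure on that pattern.
print(is_fixed_point(-W, p))  # False
```

Intermediate perturbations, such as adding weight noise of increasing magnitude, would trace out the transition between these two regimes, which is the parameter range the abstract flags as decisive for attractor consolidation and tuning.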
Benson AR, Gleich DF, and J Leskovec (2016). Higher-order organization of complex networks. Science 353(6295):163-166.
Muldoon SF and DS Bassett (2016). Network and multilayer network approaches to understanding human brain dynamics. Phil Sci Online.
Winning J and W Bechtel (2016). Review of ‘Biological Autonomy’. Phil Sci 83(3):446-452.
Loyola University Chicago