993 results for State Complexity
Abstract:
Global complexity of spontaneous brain electric activity was studied before and after chewing gum without flavor and with 2 different flavors. One-minute, 19-channel, eyes-closed electroencephalograms (EEG) were recorded from 20 healthy males before and after using 3 types of chewing gum: regular gum containing sugar and aromatic additives, gum containing 200 mg theanine (a constituent of Japanese green tea), and gum base (no sugar, no aromatic additives); each was chewed for 5 min in randomized sequence. Brain electric activity was assessed through Global Omega (Ω)-Complexity and Global Dimensional Complexity (GDC), quantitative measures of the complexity of the trajectory of EEG map series in state space; their differences from pre-chewing data were compared across gum-chewing conditions. A Friedman ANOVA (p < 0.043) showed that effects on Ω-Complexity differed significantly between conditions, with the largest differences between gum base and theanine gum. No differences were found using GDC. Global Omega-Complexity appears to be a sensitive measure for subtle, central effects of chewing gum with and without flavor.
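For orientation, the Ω-complexity referred to above is commonly computed as the exponential of the entropy of the normalized eigenvalue spectrum of the spatial covariance matrix of the multichannel recording (the formulation associated with Wackermann, 1996). The sketch below follows that recipe; the study's exact preprocessing (referencing, epoching, normalization) is not specified in the abstract, so treat this as a minimal illustration only.

```python
import numpy as np

def omega_complexity(eeg):
    """Global Omega-complexity of a multichannel EEG epoch.

    eeg : array of shape (n_channels, n_samples).
    Returns a value between 1 (one spatial mode carries all variance)
    and n_channels (all spatial modes contribute equally).
    """
    # Remove each channel's mean, then form the spatial covariance matrix.
    x = eeg - eeg.mean(axis=1, keepdims=True)
    cov = x @ x.T / x.shape[1]

    # Eigenvalues of the covariance, normalized to sum to 1.
    lam = np.linalg.eigvalsh(cov)
    lam = lam[lam > 1e-12]          # drop numerically zero modes
    lam = lam / lam.sum()

    # Omega = exp of the Shannon entropy of the eigenvalue spectrum.
    return float(np.exp(-np.sum(lam * np.log(lam))))

# Example: 19 channels, 60 s at an assumed 256 Hz, on synthetic data.
rng = np.random.default_rng(0)
print(omega_complexity(rng.standard_normal((19, 60 * 256))))
```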
Abstract:
Introduction: Nocturnal dreams can be considered a kind of simulation of the real world on a higher cognitive level (Erlacher & Schredl, 2008). Within lucid dreams, the dreamer is aware of the dream state and thus able to control the ongoing dream content. Previous studies demonstrated that it is possible to practice motor tasks during lucid dreams and that doing so improves performance while awake (Erlacher & Schredl, 2010). Even though lucid dream practice might be a promising kind of cognitive rehearsal in sports, little is known about the characteristics of actions in lucid dreams. The purpose of the present study was to explore the relationship between time in dreams and wakefulness, because in an earlier study (Erlacher & Schredl, 2004) we found that performing squats took lucid dreamers 44.5% more time than in the waking state, while for counting the same participants showed no differences between dreaming and wakefulness. To find out whether task modality, task length, or task complexity requires longer times in lucid dreams than in wakefulness, three experiments were conducted. Methods: In the first experiment, five proficient lucid dreamers spent two to three non-consecutive nights in the sleep laboratory with polysomnographic recording to verify REM sleep and detect eye signals. Participants counted from 1-10, 1-20 and 1-30 in wakefulness and in their lucid dreams. While dreaming, they marked the onset of lucidity as well as the beginning and end of the counting task with a left-right-left-right eye movement and reported their dreams after being awakened. The same procedure was used for the second experiment with seven lucid dreamers, except that they had to walk 10, 20 or 30 steps. In the third experiment, nine participants performed an exercise involving gymnastics elements such as various jumps and a roll. To control for task length, the gymnastic exercise in the waking state lasted about the same time as walking 10 steps. Results: As a general result we found, as in the earlier study, that performing a task in a lucid dream requires more time than in wakefulness. This tendency was found for all three tasks. However, there was no difference for task modality (counting vs. motor task). The relative times for the different task lengths also showed no difference. Finally, the more complex motor task (the gymnastics routine) did not require more time in lucid dreams than the simple motor task. Discussion/Conclusion: The results show a robust effect of time in lucid dreams compared to wakefulness. The three experiments could not attribute those differences to task modality, task length or task complexity. Therefore, further possible candidates need to be investigated, e.g., experience in lucid dreaming or psychological variables. References: Erlacher, D. & Schredl, M. (2010). Practicing a motor task in a lucid dream enhances subsequent performance: A pilot study. The Sport Psychologist, 24(2), 157-167. Erlacher, D. & Schredl, M. (2008). Do REM (lucid) dreamed and executed actions share the same neural substrate? International Journal of Dream Research, 1(1), 7-13. Erlacher, D. & Schredl, M. (2004). Time required for motor activity in lucid dreams. Perceptual and Motor Skills, 99, 1239-1242.
Abstract:
The Wetland and Wetland CH4 Intercomparison of Models Project (WETCHIMP) was created to evaluate our present ability to simulate large-scale wetland characteristics and corresponding methane (CH4) emissions. A multi-model comparison is essential to evaluate the key uncertainties in the mechanisms and parameters leading to methane emissions. Ten modelling groups joined WETCHIMP to run eight global and two regional models with a common experimental protocol using the same climate and atmospheric carbon dioxide (CO2) forcing datasets. We reported the main conclusions from the intercomparison effort in a companion paper (Melton et al., 2013). Here we provide technical details for the six experiments, which included an equilibrium, a transient, and an optimized run plus three sensitivity experiments (temperature, precipitation, and atmospheric CO2 concentration). The diversity of approaches used by the models is summarized through a series of conceptual figures, and is used to evaluate the wide range of wetland extent and CH4 fluxes predicted by the models in the equilibrium run. We discuss relationships among the various approaches and patterns of consistency in these model predictions. Within this group of models, there are three broad classes of methods used to estimate wetland extent: prescribed based on wetland distribution maps, prognostic relationships between hydrological states based on satellite observations, and explicit hydrological mass balances. A larger variety of approaches was used to estimate the net CH4 fluxes from wetland systems. Even though modelling of wetland extent and CH4 emissions has progressed significantly over recent decades, large uncertainties still exist when estimating CH4 emissions: there is little consensus on model structure or complexity due to knowledge gaps, different aims of the models, and the range of temporal and spatial resolutions of the models.
Abstract:
The relationship between time in dreams and real time has intrigued scientists for centuries. The question of whether actions in dreams take the same time as in wakefulness can be tested using lucid dreams, where the dreamer is able to mark time intervals with prearranged eye movements that can be objectively identified in EOG recordings. Previous research showed an equivalence of time for counting in lucid dreams and in wakefulness (LaBerge, 1985; Erlacher and Schredl, 2004), but Erlacher and Schredl (2004) found that performing squats required about 40% more time in lucid dreams than in the waking state. To find out whether task modality, task length, or task complexity results in prolonged times in lucid dreams, an experiment with three different conditions was conducted. In the first condition, five proficient lucid dreamers spent one to three non-consecutive nights in the sleep laboratory. Participants counted to 10, 20, and 30 in wakefulness and in their lucid dreams. Lucidity and task intervals were time-stamped with left-right-left-right eye movements. The same procedure was used for the second condition, where eight lucid dreamers had to walk 10, 20, or 30 steps. In the third condition, eight lucid dreamers performed a gymnastics routine, which in the waking state lasted the same time as walking 10 steps. Again, we found that performing a motor task in a lucid dream requires more time than in wakefulness. Longer durations in the dream state were present for all three tasks, but significant differences were found only for the tasks with motor activity (walking and gymnastics). However, no difference was found for relative times (no disproportional time effects), and a more complex motor task did not result in more prolonged times. Longer durations in lucid dreams might be related to the lack of muscular feedback or slower neural processing during REM sleep. Future studies should explore factors that might be associated with prolonged durations.
Abstract:
Neutropenia is probably the strongest known predisposition to infection with otherwise harmless environmental or microbiota-derived species. Because initial swarming of neutrophils at the site of infection occurs within minutes, rather than the hours required to induce "emergency granulopoiesis," the relevance of having high numbers of these cells available at any one time is obvious. We observed that germ-free (GF) animals show delayed clearance of an apathogenic bacterium after systemic challenge. In this article, we show that the size of the bone marrow myeloid cell pool correlates strongly with the complexity of the intestinal microbiota. The effect of colonization can be recapitulated by transferring sterile heat-treated serum from colonized mice into GF wild-type mice. TLR signaling was essential for microbiota-driven myelopoiesis, as microbiota colonization or transferring serum from colonized animals had no effect in GF MyD88(-/-)TICAM1(-/-) mice. Amplification of myelopoiesis occurred in the absence of microbiota-specific IgG production. Thus, very low concentrations of microbial Ags and TLR ligands, well below the threshold required for induction of adaptive immunity, set the size of the bone marrow myeloid cell pool. Coevolution of mammals with their microbiota has probably led to a reliance on microbiota-derived signals to provide tonic stimulation to the systemic innate immune system and to maintain vigilance to infection. This suggests that microbiota changes observed in dysbiosis, obesity, or antibiotic therapy may affect the cross talk between hematopoiesis and the microbiota, potentially exacerbating inflammatory or infectious states in the host.
Abstract:
In this paper, we present the evaluation design for a complex multilevel program recently introduced in Switzerland. The evaluation embraces the federal level, the cantonal program level, and the project level where target groups are directly addressed. We employ Pawson and Tilley's realist evaluation approach in order to do justice to the varying context factors that impact the cantonal programs, leading to varying effectiveness of the implemented activities. The application of the model to the canton of Uri shows that the numerous vertical and horizontal relations play a crucial role in the program's effectiveness. As a general lesson for the evaluation of complex programs, we state that there is a need to consider all affected levels of a program and that no monocausal effects can be singled out in programs where multiple interventions address the same problem. Moreover, considering all affected levels of a program can mean going beyond the borders of the actual program organization and including factors that do not directly interfere with the policy delivery as such. In particular, we found that the relationship between the cantonal and the federal level was a crucial organizational factor influencing the effectiveness of the cantonal program.
Abstract:
Although the computational complexity of the logic underlying the OWL 2 standard for the Web Ontology Language (OWL) appears discouraging for real applications, several contributions have shown that reasoning with OWL ontologies is feasible in practice. It turns out that reasoning in practice is often far less complex than is suggested by the established theoretical complexity bound, which reflects the worst-case scenario. State-of-the-art reasoners like FACT++, HERMIT, PELLET and RACER have demonstrated that, even with fairly expressive fragments of OWL 2, acceptable performance can be achieved. However, it is still not well understood why reasoning is feasible in practice, and it is rather unclear how to study this problem. In this paper, we suggest first steps that in our opinion could lead to a better understanding of practical complexity. We also provide and discuss some initial empirical results with HERMIT on prominent ontologies.
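Empirical results of the kind mentioned above are typically obtained by timing a reasoner on real ontologies. As a rough illustration only: the sketch below assumes the Python package owlready2 (whose sync_reasoner() call runs the bundled HermiT reasoner and requires a Java runtime) and a locally available ontology file; the authors' actual benchmarking harness is not described in the abstract.

```python
# Hypothetical timing harness; "pizza.owl" is a placeholder file name,
# not one of the ontologies used in the paper.
import time
from owlready2 import get_ontology, sync_reasoner

onto = get_ontology("file://pizza.owl").load()

t0 = time.perf_counter()
with onto:
    sync_reasoner()        # classify the ontology with HermiT (needs Java)
elapsed = time.perf_counter() - t0

print(f"Classification took {elapsed:.2f} s")
```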
Abstract:
Magnetoencephalography (MEG) allows the real-time recording of neural activity and oscillatory activity in distributed neural networks. We applied a non-linear complexity analysis to resting-state neural activity as measured using whole-head MEG. Recordings were obtained from 20 unmedicated patients with major depressive disorder and 19 matched healthy controls. Subsequently, after 6 months of pharmacological treatment with the antidepressant mirtazapine 30 mg/day, patients received a second MEG scan. A measure of the complexity of neural signals, the Lempel–Ziv Complexity (LZC), was derived from the MEG time series. We found that depressed patients showed higher pre-treatment complexity values compared with controls, and that complexity values decreased after 6 months of effective pharmacological treatment, although this effect was statistically significant only in younger patients. The main treatment effect was to recover the tendency observed in controls of a positive correlation between age and complexity values. Importantly, the reduction of complexity with treatment correlated with the degree of clinical symptom remission. We suggest that LZC, a formal measure of neural activity complexity, is sensitive to the dynamic physiological changes observed in depression and may potentially offer an objective marker of depression and its remission after treatment.
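For reference, the Lempel–Ziv complexity used above is usually computed by binarizing a time series and counting distinct phrases in its LZ76 parsing. Below is a minimal single-channel sketch under common conventions (median binarization, Kaspar–Schuster parsing, random-sequence normalization); the study's exact preprocessing and per-sensor handling are not specified in the abstract.

```python
import numpy as np

def lempel_ziv_complexity(signal):
    """Normalized Lempel-Ziv complexity of a 1-D time series.

    The series is binarized around its median and parsed with the
    Kaspar-Schuster variant of the LZ76 algorithm; the phrase count
    c(n) is normalized by n / log2(n), its asymptotic value for a
    random binary sequence, so white noise scores near 1.
    """
    s = (np.asarray(signal) > np.median(signal)).astype(np.uint8)
    n = len(s)
    c, l, i, k, k_max = 1, 1, 0, 1, 1   # phrase count and scan pointers
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:               # reached the end mid-phrase
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:                  # no earlier match: new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c * np.log2(n) / n

# White noise should score near 1; a pure tone much lower.
rng = np.random.default_rng(0)
print(lempel_ziv_complexity(rng.standard_normal(5000)))
print(lempel_ziv_complexity(np.sin(np.linspace(0, 40 * np.pi, 5000))))
```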
Abstract:
Objective: The neurodevelopmental–neurodegenerative debate is a basic issue in the field of the neuropathological basis of schizophrenia (SCH). Neurophysiological techniques have scarcely been involved in this debate, but nonlinear analysis methods may contribute to it. Methods: Fifteen patients (age range 23–42 years) matching DSM-IV-TR criteria for SCH, and 15 sex- and age-matched control subjects (age range 23–42 years), underwent a resting-state magnetoencephalographic evaluation, and Lempel–Ziv complexity (LZC) scores were calculated. Results: Regression analyses indicated that LZC values were strongly dependent on age. Complexity scores increased as a function of age in controls, while SCH patients exhibited a progressive reduction of LZC values. A logistic model including LZC scores, age, and the interaction of both variables allowed the classification of patients and controls with high sensitivity and specificity. Conclusions: Results demonstrated that SCH patients failed to follow the "normal" process of complexity increase as a function of age. In addition, SCH patients exhibited a significant reduction of complexity scores as a function of age, thus paralleling the pattern observed in neurodegenerative diseases. Significance: Our results support the notion of a progressive defect in SCH, which does not contradict the existence of a basic neurodevelopmental alteration. Highlights: ► Schizophrenic patients show higher complexity values as compared to controls. ► Schizophrenic patients showed a tendency to reduced complexity values as a function of age, while controls showed the opposite tendency. ► The tendency observed in schizophrenic patients parallels the tendency observed in Alzheimer's disease patients.
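The classification model described above (group membership predicted from LZC, age, and their interaction) corresponds to a standard logistic regression with an interaction term. A sketch on synthetic data follows; the variable names, simulated effect sizes, and the use of statsmodels are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 30
age = rng.uniform(23, 42, n)
group = rng.integers(0, 2, n)          # 0 = control, 1 = patient
# Synthetic pattern mimicking the abstract: complexity rises with age
# in controls and falls with age in patients (invented magnitudes).
lzc = 0.5 + np.where(group == 0, 1, -1) * 0.005 * (age - 32) \
      + rng.normal(0, 0.03, n)

df = pd.DataFrame({"group": group, "age": age, "lzc": lzc})
# "lzc * age" expands to main effects plus the lzc:age interaction.
model = smf.logit("group ~ lzc * age", data=df).fit(disp=False)
print(model.summary())
```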
Abstract:
New actuation technology in functional or "smart" materials has opened new horizons in robotics actuation systems. Materials such as piezoelectric fiber composites, electro-active polymers and shape memory alloys (SMA) are being investigated as promising alternatives to standard servomotor technology [52]. This paper focuses on the use of SMAs for building muscle-like actuators. SMAs are extremely cheap, easily available commercially and have the advantage of working at low voltages. The use of SMA provides a very interesting alternative to the mechanisms used by conventional actuators. SMAs make it possible to drastically reduce the size, weight and complexity of robotic systems. In fact, their large force-weight ratio, long life cycles, negligible volume, sensing capability and noise-free operation make this technology suitable for building a new class of actuation devices. Nonetheless, high power consumption and low bandwidth limit this technology to certain kinds of applications. This presents a challenge that must be addressed from both materials and control perspectives in order to overcome these drawbacks. Here, the latter is tackled. It has been demonstrated that suitable control strategies and proper mechanical arrangements can dramatically improve SMA performance, mostly in terms of actuation speed and limit cycles.
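To make the control-oriented point concrete, here is a toy closed-loop sketch: a first-order thermal model of an SMA wire with a saturated temperature-to-strain map, driven by a PI controller on heating power. Every constant is invented for illustration; real SMA hysteresis and the paper's specific controllers and mechanical arrangements are not modeled.

```python
# Toy closed-loop SMA position control (all parameters invented).
import numpy as np

dt, T_amb, tau = 0.01, 20.0, 1.5          # step (s), ambient (C), thermal tau (s)
T, strain_ref, integ = T_amb, 0.03, 0.0   # wire temp, 3 % contraction target
kp, ki = 400.0, 120.0                     # illustrative PI gains

for step in range(int(10 / dt)):
    # Saturated linear map from temperature to recovered strain: a toy
    # stand-in for the hysteretic martensite-austenite transformation.
    strain = 0.04 * np.clip((T - 40.0) / 30.0, 0.0, 1.0)
    err = strain_ref - strain
    integ += err * dt
    power = np.clip(kp * err + ki * integ, 0.0, 15.0)   # heating power (W)
    # First-order heating/cooling dynamics of the wire.
    T += dt * (power * 4.0 - (T - T_amb) / tau)

print(f"final strain: {strain:.4f} (target {strain_ref})")
```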
Abstract:
Office automation is one of the fields where the complexity associated with technologies and working environments can be best shown. This is the starting point we have chosen to build a theoretical model that shows a scene quite different from the one traditionally considered. Through the development of the model, the levels of complexity associated with office automation and office environments have been identified, establishing a relationship between them. Thus, the model allows us to state a general principle for the sociotechnical design of office automation systems, comprising the ontological distinctions needed to properly evaluate each particular technology and its virtual contribution to office automation. From this comes the model's taxonomic ability to draw a global perspective of the state of the art in office automation technologies.
Abstract:
Nonlinear analysis tools for studying and characterizing the dynamics of physiological signals have gained popularity, mainly because tracking sudden alterations of the inherent complexity of biological processes might be an indicator of altered physiological states. Typically, in order to perform an analysis with such tools, the physiological variables that describe the biological process under study are used to reconstruct its underlying dynamics. For that goal, a procedure called time-delay or uniform embedding is usually employed. Nonetheless, there is evidence of its inability to deal with non-stationary signals, such as those recorded from many physiological processes. To handle this drawback, this paper evaluates the utility of non-conventional time series reconstruction procedures based on non-uniform embedding, applying them to automatic pattern recognition tasks. The paper compares a state-of-the-art non-uniform approach with a novel scheme that fuses embedding and feature selection at once, searching for better reconstructions of the dynamics of the system. Results are also compared with two classic uniform embedding techniques. Thus, the goal is to compare uniform and non-uniform reconstruction techniques, including the one proposed in this work, for pattern recognition in biomedical signal processing tasks. Once the state space is reconstructed, the scheme characterizes it with three classic nonlinear dynamic features (Largest Lyapunov Exponent, Correlation Dimension and Recurrence Period Density Entropy), while classification is carried out by means of a simple k-NN classifier. In order to test its generalization capabilities, the approach was tested on three different physiological databases (Speech Pathologies, Epilepsy and Heart Murmurs). In terms of the accuracy obtained in automatically detecting the presence of pathologies, and for the three types of biosignals analyzed, the non-uniform techniques used in this work slightly outperformed the uniform methods, suggesting their usefulness for characterizing non-stationary biomedical signals in pattern recognition applications. Moreover, in view of the results obtained and its low computational load, the proposed technique appears applicable to the applications under study.
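For context, the uniform (time-delay) embedding that serves as the baseline above maps a scalar series into delay vectors with a fixed lag. A minimal sketch follows; the lag and dimension values are illustrative, and the paper's non-uniform and fused embedding-plus-feature-selection schemes are not reproduced here.

```python
import numpy as np

def uniform_embedding(x, m=3, tau=10):
    """Classical uniform time-delay (Takens) embedding.

    Maps a scalar series x into rows [x(t), x(t+tau), ..., x(t+(m-1)*tau)],
    reconstructing a state-space trajectory from a single observable.
    """
    x = np.asarray(x)
    n = len(x) - (m - 1) * tau          # number of complete delay vectors
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# Example: embed a noisy sine; each row is one reconstructed state.
t = np.linspace(0, 20 * np.pi, 4000)
x = np.sin(t) + 0.05 * np.random.default_rng(3).standard_normal(t.size)
states = uniform_embedding(x, m=3, tau=25)
print(states.shape)   # (3950, 3)
```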
Abstract:
This research gathers a cluster of interests around a very specific way of generating architecture: the production of objects without an a priori underlying form. The knowledge presented rests on currents of recent thought that promote the ambition to feed the creative source of architecture with other fields of knowledge. Sensitive animist knowledge and objective scientific knowledge are correlative in history but have rarely been synchronous. This research is also an attempt to combine the two types of knowledge, regaining an inertia already sensed at the beginning of the twentieth century. It is therefore an essay on the possible annulment of the opposition between these two worlds in favor of a complementarity between them within a single shared vision. The final goal of this research is the development of a critical system of analysis for architectural objects that allows a differentiation between those that respond to their problems completely and sincerely and those that hide, beneath an agreed-upon surface, the lack of a method for resolving the complexity of the creative present. The research comprises three distinct groups of knowledge, addressed in the corresponding chapters. The first chapter deals with the Creative Impulse. It defines the need for a framework for the creative individual who, regardless of the social forces of the moment, senses that something beyond remains unresolved. We call "rebel creator" a type of figure recognizable throughout history as one able to recognize the changes operating in the present and to use them to discover the new and draw closer to the creative origin. At the present moment, this type of figure is the one who intuits, or intuited long ago, the existence of a growing complexity in contemporary thought that cannot be ignored. The second chapter develops some properties of systems of creative action. It presents a framework of scientific knowledge very specific to our time that architecture, so far, has neither absorbed nor reflected directly in its way of creating. These are issues whose presence in society is almost mundane, yet they resist being included in creative processes as part of our consciousness. Most of them speak of precision, invisible orders, and properties of matter or energy, treated objectively and apolitically. The final goal is the approximation and incorporation of these concepts and properties into our sensible world, unifying them inseparably under a single point of view. The last chapter deals with complexity and its capacity for reduction to the essential. Here, by way of conclusions, several concepts are introduced for the development of a critical system for the architecture of our time. Among them is Essential Complexity, defined as the complexity that inevitably arises when architecture responds to the growing problems and demands it faces in the present. The thesis maintains the importance of reporting that, in the present state of things, it is impossible to respond sincerely with simplistic solutions, and hence the need for solutions of a necessarily complex character. In this sense, the concept of Underlying Form is also defined, as a critical tool for evaluating the response of each architecture and for possessing a system and critical vision of what constitutes a consistent object in the face of the situation it confronts. The underlying form is defined as a way of understanding jointly and synchronously what we perceive sensibly, inseparable from the hidden creative, technological, material and energetic forces that sustain the definition and understanding of any constructed object.
Abstract:
We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using a numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small-event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small-event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale, or for which the numerical procedure imposes an abrupt strength drop at the onset of slip, have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size.
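As a point of reference (an addition, not part of the abstract): for rate- and state-dependent friction, the nucleation size h* is commonly estimated, up to a geometric factor of order one, from the rigidity, the characteristic slip distance, and the effective normal stress:

```latex
% Commonly quoted order-of-magnitude estimate of the nucleation size
% h* for rate- and state-dependent friction (geometric factors of
% order one omitted):
h^{*} \sim \frac{G \, D_c}{(b - a)\, \sigma}
```

Here G is the shear modulus, D_c the characteristic slip (state-evolution) distance, (b - a) the rate-weakening combination of friction parameters, and σ the effective normal stress. Cells smaller than h* resolve the continuum limit, while cells larger than h* make the model "inherently discrete" in the sense used above.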
Abstract:
We study a simple antiplane fault of finite length embedded in a homogeneous isotropic elastic solid to understand the origin of seismic source heterogeneity in the presence of nonlinear rate- and state-dependent friction. All the mechanical properties of the medium and friction are assumed homogeneous. Friction includes a characteristic length that is longer than the grid size, so that our models have a well-defined continuum limit. Starting from a heterogeneous initial stress distribution, we apply a slowly increasing uniform stress load far from the fault and we simulate the seismicity for a few thousand events. The style of seismicity produced by this model is determined by a control parameter associated with the degree of rate dependence of friction. For classical friction models with rate-independent friction, no complexity appears and seismicity is perfectly periodic. For weakly rate-dependent friction, large ruptures are still periodic, but small seismicity becomes increasingly nonstationary. When friction is highly rate-dependent, seismicity becomes nonperiodic and ruptures of all sizes occur inside the fault. Highly rate-dependent friction destabilizes the healing process, producing premature healing of slip and partial stress drop. Partial stress drop produces large variations in the state of stress that in turn produce earthquakes of different sizes. Similar results have been found by other authors using the Burridge-Knopoff model. We conjecture that all models in which static stress drop is only a fraction of the dynamic stress drop produce stress heterogeneity.
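To make the friction law concrete: a minimal quasi-static spring-slider analogue of rate- and state-dependent friction with the Dieterich "aging" law can be integrated as below. All parameter values are illustrative, the single-block analogue ignores the spatial structure of the finite fault studied above, and the adaptive time step is just a convenience for the explicit scheme.

```python
import numpy as np

# Dieterich "aging" rate-and-state friction on a spring-slider
# (illustrative parameters; b > a gives the rate-weakening regime).
mu0, a, b = 0.6, 0.010, 0.015
V0, Dc, sigma = 1e-6, 1e-5, 50e6   # ref. slip rate (m/s), slip scale (m), Pa
k, Vload = 1e9, 1e-9               # spring stiffness (Pa/m), load rate (m/s)

t, theta, tau = 0.0, Dc / V0, mu0 * sigma
for step in range(200_000):
    # Slip rate from mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc) = tau/sigma.
    V = V0 * np.exp((tau / sigma - mu0 - b * np.log(V0 * theta / Dc)) / a)
    V = min(V, 1.0)                     # cap at a seismic slip rate
    dt = min(1.0e3, 0.05 * Dc / V)      # adapt the step to the slip rate
    # Aging law d(theta)/dt = 1 - V*theta/Dc, integrated exactly for fixed V.
    theta = (theta - Dc / V) * np.exp(-V * dt / Dc) + Dc / V
    tau += k * (Vload - V) * dt         # elastic loading minus slip relief
    t += dt
    if step % 20_000 == 0:
        print(f"t = {t:11.3e} s  V = {V:9.2e} m/s  tau = {tau/1e6:6.2f} MPa")
```

Because the chosen stiffness k is far below the critical value (b - a) * sigma / Dc, this toy system exhibits stick-slip cycles rather than steady sliding, loosely mirroring the instability discussed in the abstract.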