Abstract:
The classical theory of intermittency developed for return maps assumes a uniform density of points reinjected from the chaotic to the laminar region. While it works well in some model systems, there exist a number of so-called pathological cases characterized by significant deviations of the main characteristics from the values predicted under the uniform-distribution assumption. Recently, we reported on how the reinjection probability density (RPD) can be generalized. Here, we extend this methodology and apply it to different dynamical systems exhibiting anomalous type-II and type-III intermittencies. Estimation of the universal RPD is based on fitting a linear function to experimental data and requires no a priori knowledge of the underlying dynamical model. We provide a special fitting procedure that enables robust estimation of the RPD from relatively short data sets (dozens of points). Thus, the method is applicable to a wide variety of data sets, including numerical simulations and real-life experiments. The estimated RPD enables analytic evaluation of the length of the laminar phase of intermittent behaviors. We show that the method copes well with dynamical systems exhibiting significantly different statistics reported in the literature. We also derive and classify characteristic relations between the mean laminar length and the main controlling parameter, in perfect agreement with data provided by numerical simulations.
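The fitting procedure is not reproduced in the abstract; below is a minimal sketch of the M-function approach used in this line of work, assuming a power-law RPD φ(x) ∝ (x − x̂)^α, for which the empirical function M(x) is linear in x. The synthetic data and variable names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic reinjection points drawn from phi(x) ~ (x - xhat)^alpha on [xhat, c],
# standing in for data from a simulation or experiment; dozens of points suffice.
alpha_true, xhat_true, c = 1.0, 0.0, 0.1
u = rng.random(60)
x = np.sort(xhat_true + (c - xhat_true) * u ** (1.0 / (alpha_true + 1.0)))

# Empirical M(x_j): mean of all reinjection points up to x_j.
M = np.cumsum(x) / np.arange(1, x.size + 1)

# For phi(x) ~ (x - xhat)^alpha, M(x) is linear with slope
# m = (alpha + 1)/(alpha + 2); a least-squares line recovers the RPD.
m, b = np.polyfit(x, M, 1)
xhat = b / (1.0 - m)                  # estimated lower end of reinjection region
alpha = (2.0 * m - 1.0) / (1.0 - m)   # estimated RPD exponent; m = 1/2 means
print(xhat, alpha)                    # uniform reinjection (alpha = 0)
```

From the fitted RPD, laminar-length statistics can then be evaluated analytically as the abstract describes.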
Abstract:
Advanced control techniques like V2, Vout hysteresis or V2Ic can strongly reduce the required output capacitance in PowerSoC converters. Techniques to analyze power converters based on the frequency response are not suitable for ripple-based controllers, which use fast-scale dynamics to control the power stage. This paper shows that discrete modeling together with Floquet theory is a very powerful tool to model the system and derive stable-region diagrams for sensitivity analysis. It is applied to V2Ic control, experimentally validating that Floquet theory accurately predicts subharmonic oscillations. The method is applied to several ripple-based controllers, providing higher accuracy than other techniques based on the frequency response. The paper experimentally validates the usefulness of discrete modeling and Floquet theory on a 5 MHz buck converter with V2Ic control.
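As a rough illustration of the Floquet criterion (not the paper's full discrete model, which would also account for the controller dynamics and the saltation matrices at the switching instants), the stability of a periodic switching orbit can be checked from the eigenvalues of the monodromy matrix; the state-space values below are placeholders.

```python
import numpy as np
from scipy.linalg import expm

# Placeholder buck-converter state matrix (state: inductor current,
# capacitor voltage) with L = 10 uH, C = 1 uF, R = 100 ohm.
A = np.array([[0.0, -1e5],
              [1e6, -1e4]])
T, d = 1.0 / 5e6, 0.4        # 5 MHz switching period, duty cycle

# Monodromy matrix over one switching period (on-interval, then off-interval;
# for a buck converter the A matrix is the same in both configurations).
Phi = expm(A * (1.0 - d) * T) @ expm(A * d * T)

multipliers = np.linalg.eigvals(Phi)
stable = np.all(np.abs(multipliers) < 1.0)
# A real Floquet multiplier crossing -1 signals the period-doubling
# (subharmonic) oscillation discussed in the paper.
print(multipliers, stable)
```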
Abstract:
Surfactant monolayers are of interest in a variety of phenomena, including thin-film dynamics and the formation and dynamics of foams. Measurement of surface properties has received continuous attention and requires good theoretical models to extract the relevant physico-chemical information from experimental data. A common experimental setup consists of a shallow liquid layer whose free surface is slowly compressed/expanded in a periodic fashion by moving two slightly immersed solid barriers, which varies the free-surface area and thus the surfactant concentration. The simplest theory ignores the fluid dynamics in the bulk, assuming a spatially uniform surfactant concentration; this requires quite small forcing frequencies and yields reversible dynamics in the compression/expansion cycles. Sometimes it is not clear whether departure from reversibility is due to non-equilibrium effects or to the ignored fluid dynamics. Here we present a long-wave theory that takes the fluid dynamics and the symmetries of the problem into account. In particular, the validity of the spatially-uniform-surfactant-concentration assumption is established and a nonlinear diffusion equation is derived. This allows for calculating spatially nonuniform monolayer dynamics and uncovering the physical mechanisms involved in the surfactant behavior. This analysis can also be considered a good means of extracting more relevant information from each experimental run.
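The abstract does not state the derived equation explicitly; long-wave analyses of this kind typically lead to a nonlinear diffusion equation for the surfactant concentration Γ(x, t) of the generic form

\[ \partial_t \Gamma = \partial_x \big( D(\Gamma)\, \partial_x \Gamma \big), \]

where the effective diffusivity D(Γ) encodes the bulk fluid dynamics and the surface equation of state. This is a generic illustration of the equation type, not the paper's specific result; the spatially uniform description is recovered when the forcing is slow enough for diffusion to homogenize Γ over each cycle.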
Abstract:
This paper introduces a regret theory-based scenario-building approach, combined with a modified Delphi method, that uses an interactive process to design and assess four different TDM measures (i.e., cordon toll, parking charge, increased bus frequency and decreased bus fare). A case study of Madrid is used to present the analysis and provide policy recommendations. The new scenario-building approach incorporates expert judgement and transport models in an interactive process. It consists of a two-round modified Delphi survey, answered by a group of Spanish transport experts who were participants of the Transport Engineering Congress (CIT 2012), and an integrated land-use and transport (LUTI) model for Madrid called MARS (Metropolitan Activity Relocation Simulator).
Abstract:
This study suggests a theoretical framework for improving the teaching/learning process of the English employed in aeronautical discourse that brings together cognitive learning strategies, Genre Analysis and the Contemporary Theory of Metaphor (Lakoff and Johnson 1980; Lakoff 1993). It maintains that cognitive strategies such as imagery, deduction, inference and grouping can be enhanced by means of metaphor and genre awareness in the context of a content-based approach to language learning. A list of image metaphors and conceptual metaphors drawn from the terminological database METACITEC is provided. The metaphorical terms from the area of Aeronautics have been taken from specialised dictionaries and categorised according to the conceptual metaphors they respond to, by establishing the source domains and the target domains, as well as the semantic networks found. This information refers to the internal mappings underlying the discourse of aeronautics reflected in five aviation accident case studies, which are related to accident reports from the National Transportation Safety Board (NTSB), and provides an important source for designing language teaching tasks.
Abstract:
In this thesis I apply concepts from mathematics, physics and statistics to the neurosciences. This field benefits from the collaborative work of multidisciplinary teams where physicians, psychologists, engineers and other specialists work toward a common goal: the understanding of the brain. Research in this field is still in its early years, its birth attributed to the neuronal theory of Santiago Ramón y Cajal in 1888. In more than one hundred years only a very small fraction of brain function has been discovered, and much more remains to be explored. Isolated techniques aim at unraveling the system that supports our cognition; in order to provide solid evidence in such a field, however, multimodal techniques have arisen, with which we will be able to improve current knowledge about human cognition. Here we focus on the multimodal integration of magnetoencephalography (MEG) and diffusion-weighted magnetic resonance imaging (dMRI). These techniques are sensitive to the magnetic fields emitted by neuronal currents and to the white-matter microstructure, respectively. The combination of these techniques can provide evidence about structural-functional synergies in brain information processing and about which part of this synergy fails in specific neurological pathologies. In particular, we are interested in the relationship between functional and structural connectivity, and in how to integrate this information. We quantify functional connectivity by studying the phase synchronization or the amplitude correlation between time series obtained by MEG, obtaining an index of the similarity between neuronal entities, i.e. brain regions. In addition, we quantify structural connectivity by performing diffusion tensor estimation from the diffusion-weighted images, thus obtaining an indicator of the integrity of the white matter or, if preferred, the strength of the structural connections between regions. These quantifications are then combined following three different approaches, from the lowest to the highest level of integration, in chapters 3, 4 and 5. We finally apply the fused information to the characterization or prediction of mild cognitive impairment, a clinical entity considered an early step in the continuous pathological process of dementia. The dissertation is divided into six chapters. In chapter 1 I introduce connectomics within the fields of neuroimaging and neuroscience, and describe the objectives of this thesis together with the specific objectives of each of the scientific publications that resulted from this work. In chapter 2 I describe the methods for each of the techniques that were employed, namely structural connectivity, resting-state functional connectivity, complex brain networks and graph theory, and finally describe the clinical condition of mild cognitive impairment and the current state of the art in the search for early biomarkers. In chapters 3, 4 and 5 I have included the scientific publications that were generated along this work. They have been included in their original journal format and are divided into introduction, materials and methods, results and discussion. All methods that were employed in these papers are described in chapter 2.
Finally, in chapter 6 I summarize all the results from this thesis, both locally for each of the scientific publications and globally for the whole work.
Abstract:
This article presents a new and computationally efficient method of analysis of a railway track modelled as a continuous beam of 2N spans supported by elastic vertical springs. The main feature of this method is a substantial reduction in computational effort with respect to standard matrix methods of structural analysis. In this article, the whole structure is considered to be a repetition of a single unit. The analysis presented is applied to a simple railway track model, i.e. a repetitive beam supported on vertical springs (sleepers). The proposed method of analysis is based on the general theory of spatially periodic structures. The main feature of this theory is the possibility of applying the Discrete Fourier Transform (DFT) in order to reduce a large system of q(2N + 1) linear stiffness equilibrium equations to a set of 2N + 1 uncoupled systems of q equations each. In this way, a dramatic reduction of the computational effort of solving the large system of equations is achieved. This fact is particularly important in the analysis of railway track structures, in which N is very large (around several thousand) and q = 2 (the vertical displacement and rotation) is very small. The proposed method allows us to easily obtain the exact solution given by Samartín [1], i.e. the continuous-beam railway track response. The comparison between the proposed method and other methods of analysis of railway tracks, such as those of Lorente de Nó and Zimmermann-Timoshenko, clearly shows the accuracy of the results obtained with the proposed method, even for low values of N. In addition, identical results have been found between the proposed and the Lorente methods, although the proposed method seems simpler to apply and computationally more efficient than the Lorente one. Small but significant differences occur between these two methods and the one developed by Zimmermann-Timoshenko. This article also presents a detailed sensitivity analysis of the vertical displacement of the sleepers. Although standard matrix methods of structural analysis can handle this railway model, one of the objectives of this article is to show the efficiency of the DFT method with respect to standard matrix structural analysis. A comparative analysis between standard matrix structural analysis and the proposed method (DFT), in terms of computational time, input, output and software programming, is carried out. Finally, a URL link to a MATLAB computer program listing, based on the proposed method, is given.
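A minimal sketch of the DFT reduction described above, assuming the periodic track yields a block-circulant stiffness matrix with n = 2N + 1 blocks of size q; the function and the placeholder stiffness values are illustrative and not taken from the paper's MATLAB listing.

```python
import numpy as np

def solve_block_circulant(blocks, f):
    """Solve K u = f where K is block-circulant.

    blocks: (n, q, q) array holding the first block-column of K.
    f: (n, q) right-hand side, one q-vector per repeated unit.
    The DFT decouples the q*n equations into n independent q x q systems.
    """
    Lam = np.fft.fft(blocks, axis=0)   # q x q eigen-blocks of K
    F = np.fft.fft(f, axis=0)
    U = np.stack([np.linalg.solve(Lam[k], F[k]) for k in range(len(F))])
    return np.fft.ifft(U, axis=0).real

# Illustrative use: n sleeper units with q = 2 dof (deflection, rotation).
n, q = 101, 2
blocks = np.zeros((n, q, q))
blocks[0] = np.array([[4.0, 1.0], [1.0, 4.0]])  # placeholder self-stiffness
blocks[1] = blocks[-1] = -np.eye(q)             # placeholder neighbour coupling
loads = np.zeros((n, q))
loads[n // 2, 0] = -1.0                         # unit wheel load at midspan
u = solve_block_circulant(blocks, loads)
```

Each frequency-domain system is only q x q, which is the source of the dramatic reduction in computational effort noted above.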
Abstract:
This paper discusses a model based on agency theory to analyze the optimal transfer of construction risk in public works contracts. The base assumption is that of a contract between a principal (public authority) and an agent (firm), where the payment mechanism is linear and contains an incentive mechanism to enhance the agent's effort to reduce construction costs. A theoretical model is proposed, starting from a cost function with a random component and assuming that both the public authority and the firm are risk-averse. The main outcome of the paper is that the optimal transfer of construction risk is lower when the variance of cost-forecast errors, the risk aversion of the firm and the marginal cost of public funds are larger, while it grows when the variance of cost-monitoring errors and the risk aversion of the public authority are larger.
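The abstract gives the comparative statics only. As a point of reference (not the paper's exact result, which further involves the forecast- and monitoring-error variances and the marginal cost of public funds), the classic linear-payment benchmark for two risk-averse (CARA) parties reads

\[ t = \alpha + \beta\,(c_{0} - c), \qquad \beta^{*} = \frac{r_{P}}{r_{P} + r_{A}}, \]

where t is the payment to the firm, c_0 the forecast cost, c the realized cost, and r_P, r_A the risk aversion coefficients of the public authority and the firm. The share of risk optimally borne by the firm thus rises with the authority's risk aversion and falls with the firm's own, consistent with the outcome summarized above.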
Abstract:
The last few years have witnessed a significant decrease in the gap between the Shannon channel capacity limit and what is practically achievable. Progress has resulted from novel extensions of previously known coding techniques involving interleaved concatenated codes. A considerable body of simulation results is now available, supported by an important but limited theoretical basis. This paper presents a computational technique which further ties simulation results to the known theory and reveals a considerable reduction in the complexity required to approach the Shannon limit.
Abstract:
Most demographic data indicate a roughly exponential increase in adult mortality with age, a phenomenon that has been explained in terms of a decline in the force of natural selection acting on age-specific mortality. Scattered demographic findings suggest the existence of a late-life mortality plateau in both humans and dipteran insects, seemingly at odds with both prior data and evolutionary theory. Extensions to the evolutionary theory of aging are developed which indicate that such late-life mortality plateaus are to be expected when enough late-life data are collected. This expanded theory predicts late-life mortality plateaus, with both antagonistic pleiotropy and mutation accumulation as driving population genetic mechanisms.
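As a small numerical illustration of the contrast described here, one can compare a Gompertz hazard (the roughly exponential rise) with a logistic variant that levels off; this is a descriptive device, not the paper's evolutionary model, and the parameter values are illustrative.

```python
import numpy as np

ages = np.arange(20, 111)
A, B = 1e-4, 0.085                  # illustrative Gompertz parameters
gompertz = A * np.exp(B * ages)     # adult mortality rises exponentially

s = 2.0                             # illustrative deceleration parameter
plateau = gompertz / (1.0 + s * gompertz)  # hazard saturates near 1/s = 0.5

# At the oldest ages the Gompertz hazard keeps climbing while the
# logistic variant flattens, mimicking a late-life mortality plateau.
print(gompertz[-1], plateau[-1])
```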
Abstract:
A “most probable state” equilibrium statistical theory for random distributions of hetons in a closed basin is developed here in the context of two-layer quasigeostrophic models for the spreading phase of open-ocean convection. The theory depends only on bulk conserved quantities such as energy, circulation, and the range of values of potential vorticity in each layer. The simplest theory is formulated for a uniform cooling event over the entire basin that triggers a homogeneous random distribution of convective towers. For a small Rossby deformation radius typical for open-ocean convection sites, the most probable states that arise from this theory strongly resemble the saturated baroclinic states of the spreading phase of convection, with a stabilizing barotropic rim current and localized temperature anomaly.
Abstract:
By using perfusions and bolus administration, coupled with postembedding immunocytochemical procedures, we have identified the structures involved in the transport of derivatized orosomucoid (α1-acidic glycoprotein) across the continuous microvascular endothelium of the murine myocardium. Our findings indicate that: (i) monomeric orosomucoid binds to the luminal surface of the endothelium; (ii) it is restricted to caveolae during its transport across the endothelium; (iii) it is detected in the perivascular spaces at early time points (by 1 min) and in larger quantities at later time points (>5 min) from the beginning of its perfusion or intravascular administration; (iv) no orosomucoid molecules are found in the intercellular junctions or at the abluminal exits of interendothelial spaces; and (v) the vesicular transport of orosomucoid is strongly inhibited by N-ethylmaleimide (>80%). Because, by size and shape, orosomucoid qualifies as a preferential probe for the postulated small-pore system, our results are discussed in relation to the pore theory of capillary permeability.
Abstract:
We model experience-dependent plasticity in the cortical representation of whiskers (the barrel cortex) in normal adult rats and in adult rats that were prenatally exposed to alcohol. Prenatal alcohol exposure (PAE) caused marked deficits in experience-dependent plasticity in a cortical barrel-column. Cortical plasticity was induced by trimming all whiskers on one side of the face except two. This manipulation produces high activity from the intact whiskers that contrasts with low activity from the cut whiskers, while avoiding any nerve damage. Using a computational model, we show that the evolution of neuronal responses in a single barrel-column after this sensory bias is consistent with synaptic modifications that follow the rules of the Bienenstock, Cooper, and Munro (BCM) theory. The BCM theory postulates that a neuron possesses a moving synaptic modification threshold, θM, that dictates whether the neuron's activity at any given instant will lead to strengthening or weakening of its input synapses. The current value of θM changes proportionally to the square of the neuron's activity averaged over some recent past. In the model of the alcohol-impaired cortex, the effective θM has been set to a level unattainable by the depressed levels of cortical activity, leading to "impaired" synaptic plasticity consistent with experimental findings. Based on experimental and computational results, we discuss how an elevated θM may be related to (i) reduced levels of neurotransmitters modulating plasticity, (ii) abnormally low expression of N-methyl-d-aspartate receptors (NMDARs), and (iii) the membrane translocation of Ca2+/calmodulin-dependent protein kinase II (CaMKII) in adult rat cortex subjected to prenatal alcohol exposure.
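A minimal sketch of the BCM rule as stated here, with the sliding threshold θM tracking the recent average of the squared activity; the rates and constants are illustrative and not those of the authors' barrel-column model.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.3, 0.3])     # synapses from an intact and a trimmed whisker
theta = 1.0                  # sliding modification threshold theta_M
eta, tau = 1e-3, 50.0        # learning rate, threshold averaging constant

for _ in range(50_000):
    if rng.random() < 0.5:   # intact whisker deflected: high afferent activity
        x = np.array([float(rng.poisson(5.0)), 0.0])
    else:                    # trimmed whisker: low afferent activity
        x = np.array([0.0, float(rng.poisson(0.5))])
    y = max(w @ x, 0.0)                 # postsynaptic activity
    w += eta * y * (y - theta) * x      # BCM: strengthen above theta_M,
    w = np.clip(w, 0.0, None)           #      weaken below it
    theta += (y * y - theta) / tau      # theta_M tracks <y^2> of recent past

# The intact-whisker synapse potentiates while the trimmed one depresses;
# clamping theta_M at a level the depressed activity cannot reach would
# instead freeze plasticity, as in the alcohol-exposed model.
print(w)
```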
Abstract:
RNA viruses are excellent experimental models for studying evolution under the theoretical framework of population genetics. To justify this thesis properly, we introduce some properties of RNA viruses that are relevant for studying evolution. On the other hand, population genetics is a reductionist theory of evolution. It does not consider, or makes simplistic assumptions about, the transformation laws within and between genotypic and phenotypic spaces. However, such laws are minimized in the case of RNA viruses because the phenotypic space maps onto the genotypic space in a much more linear way than in higher DNA-based organisms. Under experimental conditions, we have tested the role of deleterious and beneficial mutations in the degree of adaptation of vesicular stomatitis virus (VSV), a nonsegmented negative-strand virus. We have also studied how effective population size, initial genetic variability in populations, and environmental heterogeneity shape the impact of mutations on the evolution of VSV. Finally, in an integrative attempt, we discuss the pros and cons of quasispecies theory compared with classic population genetics models for haploid organisms to explain the evolution of RNA viruses.
Abstract:
To some extent, the genetic theory of adaptive evolution in bacteria is a simple extension of that developed for sexually reproducing eukaryotes. In other, fundamental ways, the process of adaptive evolution in bacteria is quantitatively and qualitatively different from that of organisms for which recombination is an integral part of the reproduction process. In this speculative and opinionated discussion, we explore these differences. In particular, we consider (i) how, as a consequence of the low rates of recombination, “ordinary” chromosomal gene evolution in bacteria is different from that in organisms where recombination is frequent and (ii) the fundamental role of the horizontal transmission of genes and accessory genetic elements as sources of variation in bacteria. We conclude with speculations about the evolution of accessory elements and their role in the adaptive evolution of bacteria.