963 results for Non-commutative particles dynamics
Abstract:
This thesis focuses on the application of statistical mechanics for the study of static and jammed packings of soft granular media.
Such an approach lies between micro- and macromechanics: it tries to establish what the expected macroscopic properties of a granular system are, starting from a micromechanical analysis of the features of the particles and the interactions between them, and considering the macroscopic constraints of the system. To do that, statistics is used together with some principles, concepts and definitions of continuum mechanics (e.g. stress and strain fields, elastic potential energy, etc.), as well as some homogenization techniques. The interaction between the particles of a granular system is also examined, and theories of contact and capillary forces (when the media are wet) are revisited. The basic idea of statistical mechanics is that among the solutions of a physical problem (e.g. the static arrangement of particles in mechanical equilibrium) there is a class that is compatible with our macroscopic knowledge of the system (volume, stress, elastic potential energy, ...). This class still contains an enormous number of solutions. In the absence of further information there is no a priori reason to favor any one of these over the others. Hence we naturally construct the equilibrium function by assigning equal statistical weights to all the solutions compatible with our requirements. This procedure leads to the most probable statistical distribution of some quantities, but it is necessary to guarantee that all the solutions are equally accessible. This approach was originally set up for the study of ideal gases, but it can be extended to non-thermal systems too. In this connection, the first attempt for granular systems was the volume ensemble, developed about 20 years ago. Since then, this model has been followed and improved upon by many researchers around the world, while two other approaches have also been set up: the energy and force-moment (i.e. stress multiplied by volume) ensembles.
Each ensemble is described by different macroscopic constraints, but all of them result in a Maxwell-Boltzmann statistical distribution that is precisely controlled by the respective constraints. Building on this previous work, in this thesis the classical statistical mechanics approach is introduced and adapted to the case of soft granular media. A general framework, which includes these three ensembles and uses a force-moment phase space and a density-of-states function, is proposed. This theoretical development is complemented by molecular dynamics (or DEM) simulations of the cyclic compression of 2D granular systems. Simulations were carried out considering spring-dashpot mechanical interactions and, in some cases, attractive capillary forces. They were run on single and parallel processors. The results not only show that the statistical distributions of the force-moment components obtained with a specific protocol appear to be universal, but also that there are many computational issues that can determine which packings or solutions are attained.
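The equal-a-priori-weights argument can be illustrated numerically: if a fixed total (a scalar stand-in for the force-moment) is split among many particles with a flat measure over all admissible partitions, the single-particle marginal comes out Boltzmann-like (exponential). A toy sketch, not the thesis's actual ensemble calculation; totals and sample sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_partition(total, n, rng):
    """Split `total` uniformly at random among n particles
    (spacings of sorted uniform points: flat measure on the simplex)."""
    cuts = np.sort(rng.uniform(0.0, total, size=n - 1))
    return np.diff(np.concatenate(([0.0], cuts, [total])))

N, total, samples = 200, 200.0, 2000
parts = np.concatenate([random_partition(total, N, rng) for _ in range(samples)])

# For an exponential (Boltzmann-like) marginal, mean and std coincide:
mean = parts.mean()   # constrained to total/N = 1.0 exactly
std = parts.std()     # ~1.0 for an exponential distribution
```

With no bias among partitions, the per-particle share concentrates on an exponential law whose "temperature" is fixed by the macroscopic constraint, mirroring how the ensemble constraints control the Maxwell-Boltzmann distributions above.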
Abstract:
The time evolution of an ensemble of dynamical systems coupled through an irregular interaction scheme gives rise to dynamics of great complexity and emergent phenomena that cannot be predicted from the properties of the individual systems.
The main objective of this thesis is precisely to increase our understanding of the interplay between the interaction topology and the collective dynamics that a complex network can support. This is a very broad subject, so in this thesis we will limit ourselves to the study of three relevant problems that have strong connections among them. First, it is a well-known fact that in many natural and manmade systems that can be represented as complex networks the topology is not static; rather, it depends on the dynamics taking place on the network (as it happens, for instance, in the neuronal networks in the brain). In these adaptive networks the topology itself emerges from the self-organization in the system. To better understand how the properties that are commonly observed in real networks spontaneously emerge, we have studied the behavior of systems that evolve according to local adaptive rules that are empirically motivated. Our numerical and analytical results show that self-organization brings about two of the most universally found properties in complex networks: at the mesoscopic scale, the appearance of a community structure, and, at the macroscopic scale, the existence of a power law in the weight distribution of the network interactions. The fact that these properties show up in two models with quantitatively different mechanisms that follow the same general adaptive principles suggests that our results may be generalized to other systems as well, and they may be behind the origin of these properties in some real systems. We also propose a new measure that provides a ranking of the elements in a network in terms of their relevance for the maintenance of collective dynamics. Specifically, we study the vulnerability of the elements under perturbations or large fluctuations, interpreted as a measure of the impact these external events have on the disruption of collective motion. 
Our results suggest that the dynamic vulnerability measure depends largely on local properties (our conclusions thus being valid for different topologies) and they show a non-trivial dependence of the vulnerability on the connectivity of the network elements. Finally, we propose a strategy for the imposition of generic goal dynamics on a given network, and we explore its performance in networks with different topologies that support turbulent dynamical regimes. It turns out that heterogeneous networks (and most real networks that have been studied belong in this category) are the most suitable for our strategy for the targeting of desired dynamics, the strategy being very effective even when the knowledge on the network topology is far from accurate. Aside from their theoretical relevance for the understanding of collective phenomena in complex systems, the methods and results here discussed might lead to applications in experimental and technological systems, such as in vitro neuronal systems, the central nervous system (where pathological synchronous activity sometimes occurs), communication systems or power grids.
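The idea of ranking nodes by how much their loss disrupts a collective dynamical state can be sketched with a generic model: coupled Kuramoto phase oscillators on a small wheel graph, where the "vulnerability" of a node is the drop in phase coherence when it is removed. This is a hypothetical illustration, not the thesis's actual dynamics or measure:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 12
omega = rng.normal(0.0, 0.2, N)           # natural frequencies
A = np.zeros((N, N))                       # wheel graph: ring + hub (node 0)
for i in range(1, N):
    j = 1 + i % (N - 1)
    A[i, j] = A[j, i] = 1.0                # ring over nodes 1..N-1
    A[0, i] = A[i, 0] = 1.0                # hub connections

def order_parameter(A, omega, K=2.0, dt=0.05, steps=2000):
    """Integrate the Kuramoto model (Euler) and return the final
    phase coherence r = |<exp(i*theta)>|."""
    n = len(omega)
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + K / n * coupling)
    return abs(np.exp(1j * theta).mean())

r_full = order_parameter(A, omega)
keep = lambda i: [j for j in range(N) if j != i]
# vulnerability of node i: loss of coherence when i is removed
vuln = [r_full - order_parameter(A[np.ix_(keep(i), keep(i))], omega[keep(i)])
        for i in range(N)]
ranking = np.argsort(vuln)[::-1]           # most disruptive removals first
```

The same removal-and-remeasure pattern applies to any collective observable; only the dynamical model and the coherence measure need to be swapped.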
Abstract:
The interaction of high-intensity X-ray lasers with matter is modeled. A time-dependent collisional-radiative module is implemented to study radiation transport in matter from ultrashort and ultraintense X-ray bursts. Inverse bremsstrahlung absorption by free electrons, electron conduction and hydrodynamic effects are not considered. The collisional-radiative system is coupled with the evolution of the electron distribution, treated with a Fokker-Planck approach with additional inelastic terms. The model includes spontaneous emission, resonant photoabsorption, collisional excitation and de-excitation, radiative recombination, photoionization, collisional ionization, three-body recombination, autoionization and dielectronic capture. It is found that at high densities, though still below solid density, collisions play an important role and thermalization times are not short enough to ensure a thermal electron distribution. At these densities, Maxwellian and non-Maxwellian electron distribution models yield substantial differences in collisional rates, modifying the atomic population dynamics.
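The population dynamics of a collisional-radiative system reduces to coupled rate equations; a minimal two-level sketch with illustrative rates (far simpler than the full atomic model described above) shows how collisional excitation balances de-excitation plus spontaneous emission:

```python
import numpy as np

# illustrative rates (per unit time), NOT from the paper's atomic model
C_exc, C_dex, A_rad = 0.8, 1.0, 0.5

n = np.array([1.0, 0.0])          # ground / excited populations
dt, steps = 1e-3, 20000
for _ in range(steps):
    up = C_exc * n[0]             # collisional excitation
    down = (C_dex + A_rad) * n[1] # de-excitation + spontaneous emission
    n += dt * np.array([down - up, up - down])

# steady state satisfies C_exc * n0 = (C_dex + A_rad) * n1
ratio = n[1] / n[0]
```

In the full model the rates themselves are integrals over the (possibly non-Maxwellian) electron distribution, which is why the distribution shape feeds back into the level populations.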
Abstract:
The objective of this study is to analyze common pool resource appropriation and public good provision decisions in a dynamic setting, testing the differences in behavior and performance between lab and field subjects. We performed a total of 45 games in Nicaragua, including 88 villagers in rural communities and 92 undergraduate students. In order to analyze sequential decision making, we introduce a dynamic and asymmetric irrigation game that combines the typical social dilemmas associated with irrigation systems management. In addition, in 9 out of 22 villagers' groups, we implemented a treatment that included the disclosure of subjects' appropriation of the common pool resource. The results reveal that disclosing individuals' appropriation levels results in higher appropriation in subsequent rounds. In addition, the results show that non-treated villagers provide more public good than treated villagers, but the differences are not significant when compared with students. The results also suggest that appropriation levels are below the Nash prediction of full appropriation, but above the socially efficient level. This results in an efficiency loss in the game that can be explained to a large extent by individual decisions on appropriation and public good contribution and by group appropriation behavior.
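The gap between Nash and socially efficient appropriation can be illustrated with a generic common-pool-resource payoff (a textbook form, not the thesis's irrigation game; all parameter values are illustrative, and unlike the game above this toy version has an interior Nash equilibrium):

```python
import numpy as np

# player i invests x_i in the resource; F(X) = a*X - b*X**2 is the group return
n, e, w = 5, 10.0, 1.0        # players, endowment, outside wage
a, b = 3.0, 0.14

def payoff(x_i, X_others):
    X = x_i + X_others
    share = 0.0 if X == 0 else x_i / X * (a * X - b * X ** 2)
    return w * (e - x_i) + share

grid = np.linspace(0.0, e, 401)

# symmetric Nash appropriation via damped best-response iteration
x = e / 2
for _ in range(300):
    br = grid[int(np.argmax([payoff(g, (n - 1) * x) for g in grid]))]
    x = 0.5 * x + 0.5 * br
nash_x = x

# socially efficient symmetric appropriation (maximizes total payoff)
opt_x = grid[int(np.argmax([n * payoff(g, (n - 1) * g) for g in grid]))]
```

Because each player ignores the externality imposed on the others, `nash_x` exceeds `opt_x`, which is the efficiency loss the experiment quantifies.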
Abstract:
The mechanical behavior of granular materials has been traditionally approached through two theoretical and computational frameworks: macromechanics and micromechanics. Macromechanics focuses on continuum-based models. It is consequently assumed that the matter in the granular material is homogeneous and continuously distributed over its volume, so that the smallest element cut from the body possesses the same physical properties as the body. In particular, it has some equivalent mechanical properties, represented by complex and non-linear constitutive relationships. Engineering problems are usually solved using computational methods such as FEM or FDM. On the other hand, micromechanics is the analysis of heterogeneous materials at the level of their individual constituents. In granular materials, if the properties of the particles are known, a micromechanical approach can lead to a predictive response of the whole heterogeneous material. Two classes of numerical techniques can be distinguished: computational micromechanics, which consists of applying continuum mechanics to each of the phases of a representative volume element and then solving the equations numerically, and atomistic methods (DEM), which consist of applying rigid body dynamics together with interaction potentials to the particles. Statistical mechanics approaches arise between micro- and macromechanics. They try to establish what the expected macroscopic properties of a granular system are, starting from a micromechanical analysis of the features of the particles and their interactions. The main objective of this paper is to introduce this approach.
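The particle-level interaction in a DEM calculation of this kind is often a linear spring-dashpot contact law; a minimal sketch for two equal disks, with illustrative stiffness and damping values:

```python
import numpy as np

def contact_force(pos_i, pos_j, vel_i, vel_j, radius, k=1e4, c=5.0):
    """Linear spring-dashpot normal contact between two equal disks.
    Returns the force on particle i (zero when the disks do not touch)."""
    rij = pos_i - pos_j
    dist = np.linalg.norm(rij)
    overlap = 2 * radius - dist
    if overlap <= 0:
        return np.zeros(2)
    normal = rij / dist                       # unit vector from j to i
    rel_vn = np.dot(vel_i - vel_j, normal)    # normal relative velocity
    fn = k * overlap - c * rel_vn             # elastic + viscous damping
    return max(fn, 0.0) * normal              # no tensile force on release

# two disks of radius 1 overlapping by 0.2, at rest
f = contact_force(np.array([0.0, 0.0]), np.array([1.8, 0.0]),
                  np.zeros(2), np.zeros(2), radius=1.0)
```

Summing such pairwise forces and integrating Newton's equations for every particle is what the atomistic (DEM) route amounts to, in contrast with the homogenized constitutive laws of the macromechanical route.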
Abstract:
The cyclic compression of several granular systems has been simulated with a molecular dynamics code. All the samples consisted of bidimensional, soft, frictionless and equal-sized particles that were initially arranged on a square lattice and were compressed by randomly generated irregular walls. The compression protocols can be described by some control variables (volume or external force acting on the walls) and by some dimensionless factors that relate stiffness, density, diameter, damping ratio and water surface tension to the external forces, displacements and periods. Each protocol, which is associated with a dynamic process, results in an arrangement with its own macroscopic features: volume (or packing ratio), coordination number, and stress; the differences between packings can be highly significant. The statistical distribution of the force-moment state of the particles (i.e. the equivalent average stress multiplied by the volume) is analyzed. In spite of the lack of a theoretical framework based on statistical mechanics specific to these protocols, the distributions obtained for the mean and relative deviatoric force-moment are presented. The nature of these distributions and their relation to specific protocols are then discussed.
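For the initial square-lattice arrangement of equal disks, the packing ratio and coordination number can be checked directly; a small sketch with illustrative sizes (the boundary lowers the mean coordination number below the bulk value of 4):

```python
import numpy as np

# equal disks of radius r on a square lattice with spacing 2r (just touching)
r = 0.5
packing_fraction = (np.pi * r ** 2) / (2 * r) ** 2   # pi/4 ~ 0.785
coordination_number = 4                               # bulk contacts per disk

# count contacts on a finite n x n patch to see the boundary effect
n = 10
centers = np.array([[2 * r * i, 2 * r * j] for i in range(n) for j in range(n)])
d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
contacts = (np.abs(d - 2 * r) < 1e-9).sum(axis=1)     # touching neighbours
mean_z = contacts.mean()                              # < 4: free boundary
```

Compression by irregular walls then drives the system away from these crystalline values, which is what the protocol-dependent volume, coordination number and stress measurements track.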
Abstract:
The first steps towards developing a coupled continuum-molecular simulation technique are presented, for the purpose of computing macroscopic systems of confined fluids. The idea is to compute the wall-fluid interface by Molecular Dynamics simulations, where Lennard-Jones potentials (among others) are employed for the molecular interactions, so the usual no-slip boundary condition is not specified. Instead, a shear rate can be imposed at the wall, which allows the properties of the wall material to be obtained by means of an iterative method. The remaining fluid region is computed by a spectral hp method. We present MD simulations of a Couette flow, and the results of the boundary conditions developed from the wall-fluid interaction.
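The 12-6 Lennard-Jones interaction used for the molecular region can be sketched in reduced units (a quick self-check, with the standard form of the potential; parameter values are illustrative):

```python
import numpy as np

def lj_potential(r, eps=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential U(r) = 4*eps*((s/r)^12 - (s/r)^6)."""
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 ** 2 - sr6)

def lj_force(r, eps=1.0, sigma=1.0):
    """Radial force magnitude -dU/dr (positive = repulsive)."""
    sr6 = (sigma / r) ** 6
    return 24 * eps * (2 * sr6 ** 2 - sr6) / r

r_min = 2 ** (1 / 6)   # equilibrium separation: zero force, depth -eps
```

Pairwise sums of this force drive the MD region; the spectral hp continuum solver then takes over away from the wall, with the iterative matching supplying the effective boundary condition.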
Abstract:
Electrodynamic tethers emit waves in structures denominated Alfvén wings. The Derivative Nonlinear Schrödinger equation (DNLS) possesses the capacity to describe the propagation of circularly polarized Alfvén waves of finite amplitude in cold plasmas. The DNLS equation is truncated to explore the coherent, weakly nonlinear, cubic coupling of three waves near resonance, with one wave being linearly unstable and the other waves damped. This article presents a theoretical and numerical analysis for the case in which the growth rate of the unstable wave is close to zero, considering two damping models: Landau and resistive. The DNLS equation exhibits chaotic dynamics when only the three-wave truncation is considered. The evolution to chaos follows three routes: hard transition, period-doubling and type-I intermittency.
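The structure of such a truncation can be sketched with a generic resonant triad: one linearly unstable wave (growth rate close to zero, as in the regime studied) coupled quadratically to two damped waves. The coupling coefficients below are illustrative placeholders, not the DNLS-derived ones, so this short integration only shows the qualitative growth/damping competition:

```python
import numpy as np

gamma, nu = 0.05, 1.0                 # weak growth vs. strong damping (assumed)
a1, a2, a3 = 0.1 + 0j, 0.1 + 0j, 0.1 + 0j   # complex wave amplitudes
dt, steps = 1e-3, 20000
for _ in range(steps):
    d1 = gamma * a1 - a2 * a3                 # unstable wave, drained by the pair
    d2 = -nu * a2 + a1 * np.conj(a3)          # damped daughter waves,
    d3 = -nu * a3 + a1 * np.conj(a2)          # pumped by the unstable one
    a1, a2, a3 = a1 + dt * d1, a2 + dt * d2, a3 + dt * d3
```

With the paper's actual coefficients and damping models (Landau or resistive), varying the growth rate through zero is what produces the hard-transition, period-doubling and type-I intermittency routes to chaos.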
Abstract:
Conditions are identified under which analyses of laminar mixing layers can shed light on aspects of turbulent spray combustion. With this in mind, laminar spray-combustion models are formulated for both non-premixed and partially premixed systems. The laminar mixing layer separating a hot-air stream from a monodisperse spray carried by either an inert gas or air is investigated numerically and analytically in an effort to increase understanding of the ignition process leading to stabilization of high-speed spray combustion. The problem is formulated in an Eulerian framework, with the conservation equations written in the boundary-layer approximation and with a one-step Arrhenius model adopted for the chemistry description. The numerical integrations unveil two different types of ignition behaviour depending on the fuel availability in the reaction kernel, which in turn depends on the rates of droplet vaporization and fuel-vapour diffusion. When sufficient fuel is available near the hot boundary, as occurs when the thermochemical properties of heptane are employed for the fuel in the integrations, combustion is established through a precipitous temperature increase at a well-defined thermal-runaway location, a phenomenon that is amenable to a theoretical analysis based on activation-energy asymptotics, presented here, following earlier ideas developed in describing unsteady gaseous ignition in mixing layers. By way of contrast, when the amount of fuel vapour reaching the hot boundary is small, as is observed in the computations employing the thermochemical properties of methanol, the incipient chemical reaction gives rise to a slowly developing lean deflagration that consumes the available fuel as it propagates across the mixing layer towards the spray. 
The flame structure that develops downstream from the ignition point depends on the fuel considered and also on the spray carrier gas, with fuel sprays carried by air displaying either a lean deflagration bounding a region of distributed reaction or a distinct double-flame structure with a rich premixed flame on the spray side and a diffusion flame on the air side. Results are calculated for the distributions of mixture fraction and scalar dissipation rate across the mixing layer that reveal complexities that serve to identify differences between spray-flamelet and gaseous-flamelet problems.
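The one-step Arrhenius chemistry adopted above reduces to a rate constant k = A·exp(-Ea/(R·T)); a small sketch with illustrative pre-exponential factor and activation energy shows the strong temperature sensitivity that underlies the thermal-runaway (activation-energy asymptotics) behaviour:

```python
import numpy as np

def arrhenius(T, A=1.0e9, Ea=1.5e5, R=8.314):
    """One-step Arrhenius rate constant k = A * exp(-Ea / (R*T)).
    A and Ea are illustrative values, not fitted to heptane or methanol."""
    return A * np.exp(-Ea / (R * T))

# a modest temperature rise multiplies the rate severalfold near ignition
ratio = arrhenius(1100.0) / arrhenius(1000.0)
```

It is this exponential sensitivity that concentrates the incipient reaction in a thin kernel and makes ignition appear as a precipitous temperature increase at a well-defined location.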
Abstract:
A mathematical model for the group combustion of pulverized coal particles was developed in a previous work. It includes the Lagrangian description of the dehumidification, devolatilization and char gasification reactions of the coal particles in the homogenized gaseous environment resulting from the three fuels, CO, H2 and volatiles, supplied by the gasification of the particles, and their simultaneous group combustion by the gas phase oxidation reactions, which are considered to be very fast. This model is complemented here with an analysis of the particle dynamics, determined principally by the effects of aerodynamic drag and gravity, and of particle dispersion based on a stochastic model. It is also extended to include two other, simpler models for the gasification of the particles: the first for particles small enough to extinguish the surrounding diffusion flames, and the second for particles with small ash content, in which the porous shell of ashes remaining after gasification of the char, being not structurally stable, is disrupted. As an example of the applicability of the models, they are used in the numerical simulation of an experiment on a non-swirling pulverized coal jet in nearly stagnant air at ambient temperature, with an initial region of interaction with a small annular methane flame. Computational algorithms for solving the different stages undergone by a coal particle during its combustion are proposed. For the partial differential equations modeling the gas phase, a second-order finite element method combined with a semi-Lagrangian characteristics method is used. The results obtained with the three versions of the model are compared with one another and show that the first of the simpler models fits the experimental results better.
Abstract:
Nonlinear analysis tools for studying and characterizing the dynamics of physiological signals have gained popularity, mainly because tracking sudden alterations of the inherent complexity of biological processes might be an indicator of altered physiological states. Typically, in order to perform an analysis with such tools, the physiological variables that describe the biological process under study are used to reconstruct the underlying dynamics of the biological processes. For that goal, a procedure called time-delay or uniform embedding is usually employed. Nonetheless, there is evidence of its inability to deal with non-stationary signals, such as those recorded from many physiological processes. To handle such a drawback, this paper evaluates the utility of non-conventional time series reconstruction procedures based on non-uniform embedding, applying them to automatic pattern recognition tasks. The paper compares a state-of-the-art non-uniform approach with a novel scheme that fuses embedding and feature selection at once, searching for better reconstructions of the dynamics of the system. Moreover, results are also compared with two classic uniform embedding techniques. Thus, the goal is to compare uniform and non-uniform reconstruction techniques, including the one proposed in this work, for pattern recognition in biomedical signal processing tasks. Once the state space is reconstructed, the scheme characterizes it with three classic nonlinear dynamic features (Largest Lyapunov Exponent, Correlation Dimension and Recurrence Period Density Entropy), while classification is carried out by means of a simple k-NN classifier. In order to test its generalization capabilities, the approach was tested with three different physiological databases (Speech Pathologies, Epilepsy and Heart Murmurs).
In terms of the accuracy obtained in automatically detecting the presence of pathologies, and for the three types of biosignals analyzed, the non-uniform techniques used in this work slightly outperformed the uniform methods, suggesting their usefulness for characterizing non-stationary biomedical signals in pattern recognition applications. Moreover, in view of the results obtained and its low computational load, the proposed technique appears applicable to the problems under study.
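The uniform time-delay embedding that the paper benchmarks against can be sketched in a few lines; dimension, delay and the test signal are illustrative choices:

```python
import numpy as np

def uniform_embedding(x, dim, tau):
    """Uniform time-delay embedding: row t is [x[t], x[t+tau], ..., x[t+(dim-1)*tau]]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

t = np.linspace(0, 8 * np.pi, 800)
x = np.sin(t)                              # noiseless sinusoid for illustration
X = uniform_embedding(x, dim=3, tau=25)    # reconstructed state-space points
```

Non-uniform schemes replace the fixed multiples of `tau` with individually selected lags, which is where the fusion with feature selection discussed above comes in; the nonlinear features (Lyapunov exponent, correlation dimension, RPDE) are then computed on the rows of `X`.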
Abstract:
Background: In recent years, Spain has implemented a number of air quality control measures that are expected to lead to a future reduction in fine particle concentrations and an ensuing positive impact on public health. Objectives: We aimed to assess the impact on mortality attributable to a reduction in fine particle levels in Spain in 2014 in relation to the estimated level for 2007. Methods: To estimate exposure, we constructed fine particle distribution models for Spain for 2007 (reference scenario) and 2014 (projected scenario) with a spatial resolution of 16×16 km². In a second step, we used the concentration-response functions proposed by cohort studies carried out in Europe (European Study of Cohorts for Air Pollution Effects and the Rome longitudinal cohort) and North America (American Cancer Society cohort, Harvard Six Cities study and Canadian national cohort) to calculate the number of attributable annual deaths corresponding to all causes, all non-accidental causes, ischemic heart disease and lung cancer among persons aged over 25 years (2005-2007 mortality rate data). We examined the effect of the Spanish demographic shift in our analysis using 2007 and 2012 population figures. Results: Our model suggested that there would be a mean overall reduction in fine particle levels of 1 µg/m³ by 2014. Taking into account 2007 population data, between 8 and 15 all-cause deaths per 100,000 population could be postponed annually by the expected reduction in fine particle levels. For specific subgroups, estimates varied from 10 to 30 deaths for all non-accidental causes, from 1 to 5 for lung cancer, and from 2 to 6 for ischemic heart disease. The expected burden of preventable mortality would be even higher in the future due to Spanish population growth. Taking into account the population older than 30 years in 2012, the absolute mortality impact estimate would increase by approximately 18%.
Conclusions: Effective implementation of air quality measures in Spain, in a scenario with a short-term projection, would amount to an appreciable decline in fine particle concentrations, and this, in turn, would lead to notable health-related benefits. Recent European cohort studies strengthen the evidence of an association between long-term exposure to fine particles and health effects, and could enhance the health impact quantification in Europe. Air quality models can contribute to improved assessment of air pollution health impact estimates, particularly in study areas without air pollution monitoring data.
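The attributable-mortality arithmetic behind estimates of this kind follows the standard health impact assessment formula (attributable fraction from a log-linear concentration-response function). A sketch with illustrative relative-risk and baseline values, not the paper's figures:

```python
import numpy as np

def attributable_deaths(baseline_deaths, rr_per_10, delta_conc):
    """Deaths postponed by a `delta_conc` (ug/m3) drop in fine particles,
    assuming a log-linear concentration-response with relative risk
    `rr_per_10` per 10 ug/m3 (standard HIA formula; inputs illustrative)."""
    beta = np.log(rr_per_10) / 10.0
    af = 1.0 - np.exp(-beta * delta_conc)   # attributable fraction
    return af * baseline_deaths

# e.g. RR = 1.07 per 10 ug/m3 and a 1 ug/m3 reduction
d = attributable_deaths(baseline_deaths=100000, rr_per_10=1.07, delta_conc=1.0)
```

Repeating this per cause of death and per cohort-specific relative risk, over the modeled exposure grid, yields the ranges of postponed deaths reported above.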
Abstract:
The theoretical study of forced bubble oscillations is motivated by the importance of cavitation bubbles and oscillating encapsulated microbubbles (i.e. contrast agents) in the medical sciences. In more detail, theoretical studies on bubble dynamics addressing the sound-bubble interaction phenomenon provide the basis for understanding the dynamics of contrast agent microbubbles used in medical diagnosis and of nonlinearly oscillating cavitation bubbles in high-intensity ultrasound therapy. Moreover, the inclusion of viscoelasticity is of vital importance for an accurate theoretical analysis, since most biological tissues and fluids exhibit non-Newtonian behavior.
Abstract:
We derive a semi-analytic formulation that enables the study of the long-term dynamics of fast-rotating inert tethers around planetary satellites. These equations take into account the coupling between the translational and rotational motion, which has a non-negligible impact on the dynamics, as the orbital motion of the tether center of mass strongly depends on the tether plane of rotation and its spin rate, and vice-versa. We use these governing equations to explore the effects of this coupling on the dynamics, the lifetime of frozen orbits and the precession of the plane of rotation of the tether.
Abstract:
The determination of the local Lagrangian evolution of the flow topology in wall-bounded turbulence, and of the Lagrangian evolution associated with entrainment across the turbulent/non-turbulent interface into a turbulent boundary layer, requires accurate tracking of a fluid particle and its local velocity gradients. This paper addresses the implementation of fluid-particle tracking in both a turbulent boundary layer direct numerical simulation and a fully developed channel flow simulation. Determination of the sub-grid particle velocity is performed using cubic B-spline, four-point Hermite spline and higher-order Hermite spline interpolation. Both wall-bounded flows show similar oscillations in the Lagrangian tracers of both velocity and velocity gradients, corresponding to the movement of particles across the boundaries of computational cells. While these oscillations in the particle velocity are relatively small and have a negligible effect on the particle trajectories for time-steps of the order of CFL = 0.1, they appear to be the cause of significant oscillations in the evolution of the invariants of the velocity gradient tensor.
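A four-point cubic Hermite interpolant of the kind used for the sub-grid particle velocity can be sketched as follows (Catmull-Rom finite-difference tangents; an illustration of the interpolation family, not the paper's exact scheme):

```python
def hermite4(y0, y1, y2, y3, s):
    """Four-point cubic Hermite (Catmull-Rom) interpolation between the
    middle samples y1 and y2; s in [0, 1] is the sub-grid coordinate."""
    m1 = 0.5 * (y2 - y0)                # finite-difference tangent at y1
    m2 = 0.5 * (y3 - y1)                # finite-difference tangent at y2
    h00 = 2 * s**3 - 3 * s**2 + 1       # cubic Hermite basis functions
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * y1 + h10 * m1 + h01 * y2 + h11 * m2

# reproduces grid values exactly and is C1 across cell boundaries;
# for the linear samples 0, 1, 2, 3 the midpoint interpolant is linear too
mid = hermite4(0.0, 1.0, 2.0, 3.0, 0.5)
```

Because the interpolant (not its higher derivatives) is continuous at cell faces, a tracer crossing a cell boundary still sees small jumps in interpolated velocity gradients, which is consistent with the oscillations reported above.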