962 results for Interacting particle systems
Abstract:
One of the goals of EU BASIN is to understand variability in production across the Atlantic and the impact of this variability on higher trophic levels. One aspect of these investigations is to examine the biomes defined by Longhurst (2007), which are based largely on productivity measured with remote sensing. During MSM 26, mesopelagic fish and size-spectrum data were collected to test the biome classifications of the North Atlantic. In most marine systems, the size spectrum is a decay function, with many small organisms and few large ones. The intercept of the size spectrum has been linked to overall productivity, while the slope represents the "rate of decay" of this productivity (Zhou 2006, doi:10.1093/plankt/fbi119). A Laser In-Situ Scattering and Transmissometry (LISST) instrument was used to collect size-spectrum data, and net collections were made to capture mesopelagic fish. The mesopelagic fish size and abundance distributions will be compared with the estimates of production from the size-spectrum data to evaluate the biomes of the stations occupied during MSM 26.
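The slope and intercept are typically obtained by ordinary linear regression in log-log space. A minimal sketch of that fit, assuming NumPy and using synthetic spectrum values (not MSM 26 data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic size-spectrum data: abundance per size class, illustrative only.
size = np.logspace(0, 3, 20)                  # size classes (e.g. micrometres)
abundance = 1e6 * size**-1.8 * rng.lognormal(0.0, 0.1, size.size)

# Fit log10(abundance) = slope * log10(size) + intercept.
slope, intercept = np.polyfit(np.log10(size), np.log10(abundance), 1)

# Following Zhou (2006): the intercept indexes overall productivity,
# while the (negative) slope is its "rate of decay" toward larger sizes.
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```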
Abstract:
Aggregation of algae, mainly diatoms, is an important process in marine systems, leading to the settling of particulate organic carbon, predominantly in the form of marine snow. Exudation products of phytoplankton form transparent exopolymer particles (TEP), which act as the glue for particle aggregation. Heterotrophic bacteria interacting with phytoplankton may influence TEP formation and phytoplankton aggregation, but this bacterial impact has not been explored in detail. We hypothesized that bacteria attaching to Thalassiosira weissflogii might interact with the diatom in a yet-to-be-determined manner that could affect TEP formation and aggregate abundance. The roles of newly isolated individual T. weissflogii-attaching and free-living bacteria in TEP production and diatom aggregation were investigated in vitro. T. weissflogii did not aggregate in axenic culture, and striking differences in aggregation dynamics and TEP abundance were observed when diatom cultures were inoculated with either diatom-attaching or free-living bacteria. The data indicated that free-living bacteria might not influence aggregation, whereas bacteria attaching to diatom cells may increase aggregate formation. Interestingly, photosynthetically inactivated T. weissflogii cells did not aggregate regardless of the presence of bacteria. Comparison of aggregate formation, TEP production, aggregate sinking velocity and solid hydrated density revealed remarkable differences between treatments. Both photosynthetically active T. weissflogii and specific diatom-attaching bacteria were required for aggregation. It was concluded that interactions between heterotrophic bacteria and diatoms increase aggregate formation and particle sinking, and thus may enhance the efficiency of the biological pump.
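As a point of reference for the sinking-velocity comparison, sinking speed is commonly related to aggregate size and excess density through Stokes' law (valid at low Reynolds number); the notation below is the standard one, not values from this study:

$$ w_s = \frac{2}{9}\,\frac{(\rho_p - \rho_f)\,g\,r^2}{\mu} $$

where $w_s$ is the sinking velocity, $\rho_p$ the solid hydrated density of the aggregate, $\rho_f$ the density of seawater, $r$ the particle radius, $g$ the gravitational acceleration and $\mu$ the dynamic viscosity.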
Abstract:
Coastal upwelling systems account for approximately half of global ocean primary production and contribute disproportionately to biologically driven carbon sequestration. Diatoms, silica-precipitating microalgae, constitute the dominant phytoplankton in these productive regions, and their abundance and assemblage composition in the sedimentary record are considered among the best proxies for primary production. The study of sedimentary diatom abundance (SDA) and total organic carbon content (TOC) in the five most important coastal upwelling systems of the modern ocean (Iberia-Canary, Benguela, Peru-Humboldt, California and Somalia-Oman) reveals a global-scale positive relationship between diatom production and organic carbon burial. Analysis of SDA in conjunction with environmental variables of coastal upwelling systems, such as upwelling strength, satellite-derived net primary production and surface-water nutrient concentrations, shows differing relationships between SDA and primary production at the regional scale. At the global scale, SDA appears to be modulated by the capacity of diatoms to take up silicic acid, which ultimately sets an upper limit to global export production in these ocean regions.
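Relationships of this kind are usually quantified by regressing TOC on log-transformed SDA, since diatom abundances span orders of magnitude. A minimal sketch with entirely synthetic values standing in for the five systems (SciPy assumed; not the study's data):

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for sedimentary diatom abundance (valves per gram)
# and total organic carbon (wt%) across five upwelling systems.
sda = np.array([12.0, 45.0, 150.0, 80.0, 30.0])
toc = np.array([0.8, 1.9, 4.2, 2.7, 1.4])

res = stats.linregress(np.log10(sda), toc)
print(f"r = {res.rvalue:.2f}, p = {res.pvalue:.3f}")
```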
Abstract:
The aim of this work is the theoretical study of the band alignment between the two components of a hybrid organic-inorganic solar cell. The organic molecules studied are metal tetra-sulphonated phthalocyanines (M-Pc), and the inorganic material is nanoporous ZnO grown in the (001) direction. The calculations are performed with density functional theory (DFT), using a GGA functional in the SIESTA code, which projects electron wave functions and density onto a real-space grid and uses as its basis set a linear combination of numerical, finite-range, localized atomic orbitals. We also use the DFT+U method included in the code, which allows a semi-empirical inclusion of electronic correlations in the description of the electronic spectra of systems such as zinc oxide.
Abstract:
This paper deals with the detection and tracking of an unknown number of targets using a Bayesian hierarchical model with target labels. To approximate the posterior probability density function, we develop a two-layer particle filter: one layer deals with track initiation, the other with track maintenance. In addition, a parallel partition method is proposed to sample the states of the surviving targets.
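For orientation, the track-maintenance layer reduces, for a single surviving target, to a standard bootstrap particle filter. A minimal sketch under simplifying assumptions (one target, constant-velocity motion, Gaussian noise); the paper's scheme adds the initiation layer and the parallel partition sampler on top of this:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000                                     # number of particles
particles = rng.normal(0.0, 1.0, (N, 2))     # state: [position, velocity]
weights = np.full(N, 1.0 / N)
F = np.array([[1.0, 1.0], [0.0, 1.0]])       # constant-velocity transition, dt = 1

def step(particles, weights, z, q=0.1, r=0.5):
    # Predict: propagate particles through the motion model plus process noise.
    particles = particles @ F.T + rng.normal(0.0, q, particles.shape)
    # Update: reweight by the likelihood of the position measurement z.
    w = weights * np.exp(-0.5 * ((z - particles[:, 0]) / r) ** 2)
    w /= w.sum()
    # Resample when the effective sample size degenerates.
    if 1.0 / np.sum(w**2) < len(w) / 2:
        idx = rng.choice(len(w), len(w), p=w)
        particles, w = particles[idx], np.full(len(w), 1.0 / len(w))
    return particles, w

for z in [0.1, 1.2, 2.1, 3.3]:               # toy measurement sequence
    particles, weights = step(particles, weights, z)
print("posterior mean state:", weights @ particles)
```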
Abstract:
We establish a refined version of the Second Law of Thermodynamics for Langevin stochastic processes describing mesoscopic systems driven by conservative or non-conservative forces and interacting with thermal noise. The refinement is based on Monge-Kantorovich optimal mass transport and becomes relevant for processes far from the quasi-stationary regime. The general discussion is illustrated by a numerical analysis of the optimal memory-erasure protocol for a model of a micron-sized particle manipulated by optical tweezers.
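For orientation, the refinement takes the following form for overdamped Langevin dynamics (generic notation, not necessarily the paper's: $\mu$ the mobility, $\tau$ the duration of the process, and $W_2$ the Monge-Kantorovich/Wasserstein-2 distance between the initial and final probability densities):

$$ T\,\Delta S_{\mathrm{tot}} \;\ge\; \frac{W_2^2(\rho_0,\rho_\tau)}{\mu\,\tau} $$

Unlike the usual Second Law bound $\Delta S_{\mathrm{tot}} \ge 0$, the right-hand side is strictly positive for any finite-time transformation and grows as the protocol is performed faster.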
Abstract:
We study the particle current in a recently proposed model of coherent quantum transport. In this model, a system connected to mesoscopic Fermi reservoirs (meso-reservoirs) is driven out of equilibrium by the action of super-reservoirs, which are thermalized to prescribed temperatures and chemical potentials by a simple dissipative mechanism described by the Lindblad equation. We compare exact (numerical) results with theoretical expectations based on the Landauer formula.
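The theoretical expectation referred to is the two-terminal Landauer expression for the steady-state particle current (written in units with $\hbar = 1$; $T(\omega)$ is the transmission coefficient and $f_{L,R}$ the Fermi distributions of the left and right super-reservoirs at their prescribed temperatures and chemical potentials):

$$ I = \frac{1}{2\pi}\int d\omega\; T(\omega)\,\bigl[f_L(\omega) - f_R(\omega)\bigr] $$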
Abstract:
Computing is becoming the fifth utility (alongside gas, water, electricity and telephony), partly due to the impact of cloud computing on most organizations. This form of computing is used by an ever-growing range of systems, including critical systems. This has an impact on the internal complexity and the dependability of an organization's systems, both those used internally and those offered to clients. This work investigates the use of cloud computing by critical systems, focusing on their dependencies and especially on their dependability. Some examples of such use are presented, and although adoption in critical systems is not yet widespread, its potential impact is outlined. The aim of this work is, first, to define a model that can quantitatively represent the dependability interdependencies of organizations that use these systems, and then to apply this model to a critical system in the healthcare domain and present the results. The concepts of "macro-dependability" and "micro-dependability" are introduced into the model to define interdependence and to analyse the dependability of systems that depend on other systems. ABSTRACT With the increasing utilization of Internet services and cloud computing by most organizations (both private and public), it is clear that computing is becoming the fifth utility (along with water, electricity, telephony and gas). These technologies are used by almost all types of systems, increasingly including Critical Infrastructure (CI) systems. Even if CI systems appear not to rely directly on cloud services, there may be hidden inter-dependencies. This is true even for private cloud computing, which seems more secure and reliable. Critical systems may in some cases begin with a clear and simple design, but evolve, as described by Egan, into "rafted" networks. Because they are usually controlled by one or a few organizations, their dependencies can be understood even when they are complex systems, and the organization oversees and manages changes. These CI systems have been affected by the introduction of new ICT models such as global communications, PCs and the Internet. Virtualization took longer to be adopted by critical systems because of their strategic nature, but once such technologies had been proven in other areas, they were eventually adopted as well, for reasons such as cost. A new technology model, cloud computing, is now emerging from earlier technologies (virtualization, distributed and utility computing, web and software services) offered in new ways. Organizations are migrating more services to the cloud; this will affect their internal complexity and the reliability of the systems they offer, both to the organization itself and to its clients. This added complexity, and the associated risks to reliability, are not always recognized. Moreover, when two or more CI systems interact, the risks of one can affect the others. This work investigates the use of cloud computing by critical systems, focusing on the dependencies and reliability of these systems. Some examples are presented together with the associated risks. A framework is introduced for analysing the dependability and resilience of a system that relies on cloud services, and for improving them.
As part of the framework, the concepts of micro- and macro-dependability are introduced to capture, respectively, internal dependability and dependability on services supplied by an external cloud. A pharmacovigilance system model has been used to validate the framework.
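A minimal sketch of how such interdependence can be quantified. The series composition and all figures are illustrative assumptions, not the thesis's actual model:

```python
# Hypothetical illustration of micro- vs macro-dependability as availabilities:
# "micro" covers the organization's own components, "macro" the external cloud.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability from mean time between failures / to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

micro = availability(mtbf_hours=5000.0, mttr_hours=2.0)   # internal systems
macro = availability(mtbf_hours=2000.0, mttr_hours=4.0)   # cloud provider

# If the critical service needs both to be up (series dependency),
# end-to-end availability is the product, and the weaker dependency dominates.
print(f"micro={micro:.5f} macro={macro:.5f} end-to-end={micro * macro:.5f}")
```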
Abstract:
Autonomous systems require, in most cases, reasoning and decision-making capabilities, and the decision process has to occur in real time. Real-time computing means that every situation or event must be answered before a temporal deadline. In complex applications, these deadlines are usually on the order of milliseconds, or even microseconds if the application is very demanding. To comply with these timing requirements, computing tasks have to be performed as fast as possible. The problem arises when the computations are no longer simple but very time-consuming operations. A good example can be found in autonomous navigation systems with visual-tracking submodules, where Kalman filtering is the most widespread solution. In recent years, however, some interesting new approaches have been developed; particle filtering, given its more general problem-solving features, has reached an important position in the field. The aim of this thesis is to design, implement and validate a hardware platform that constitutes, in itself, an embedded intelligent system. The proposed system combines particle filtering and evolutionary computation algorithms to generate intelligent behavior. Traditional approaches to particle filtering and evolutionary computation have been developed on software platforms, including parallel capabilities to some extent. In this work, an additional goal is to fully exploit the advantages of hardware implementation. By using the computational resources available in an FPGA device, better performance in terms of computation time is expected; these hardware resources are placed in charge of the extensive repetitive computations. With this hardware-based implementation, real-time behavior is also expected.
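One common way to combine the two techniques, shown here purely as an illustration and not as the thesis's FPGA design, is to treat the particle set as an evolutionary population and, after resampling, replace the worst-weighted particles with mutated copies of the best ones:

```python
import numpy as np

rng = np.random.default_rng(1)

def evolve(particles, fitness, mutation_scale=0.05, frac=0.2):
    """Replace the worst frac of particles with mutated copies of the best."""
    n = len(particles)
    order = np.argsort(fitness)               # ascending: worst first
    k = int(frac * n)
    parents = particles[order[-k:]]           # best k particles as parents
    children = parents + rng.normal(0.0, mutation_scale, parents.shape)
    particles[order[:k]] = children           # overwrite the worst k
    return particles

pop = rng.normal(0.0, 1.0, (100, 2))
fit = -np.sum(pop**2, axis=1)                 # toy fitness: closeness to origin
pop = evolve(pop, fit)
```

Each particle is processed independently, which is what makes this kind of loop map naturally onto parallel FPGA pipelines.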
Abstract:
Hybrid stepper motors are widely used in open-loop positioning applications. They are the actuators of choice for the collimators of the Large Hadron Collider, the largest particle accelerator, at CERN. In this case the positioning requirements and the highly radioactive operating environment are unique. The latter both forces the use of long cables, which act as transmission lines, to connect the motors to the drives, and prevents the use of standard position sensors. However, reliable and precise operation of the collimators is critical for the machine, requiring step loss in the motors to be prevented and maintenance to be planned in case of mechanical degradation. To make this possible, an approach is proposed for applying an Extended Kalman Filter to a sensorless stepper motor drive when the motor is separated from its drive by long cables. When long cables and high-frequency pulse-width-modulated control voltages are used together, the electrical signals differ greatly between the motor side and the drive side of the cable. Since in the considered case only drive-side data are available, it is necessary to estimate the motor-side signals. Modelling the entire cable and motor system in an Extended Kalman Filter is too computationally intensive for standard embedded real-time platforms. It is therefore proposed to divide the problem into an Extended Kalman Filter based only on the motor model, plus separate motor-side signal estimators, a combination which is less demanding computationally. The effectiveness of this approach is shown in simulation, and its validity is then demonstrated experimentally via implementation in a DSP-based drive. A testbench for evaluating its performance when driving an axis of a Large Hadron Collider collimator is presented along with the results achieved. It is shown that the proposed method achieves position and load-torque estimates that allow step loss to be detected and mechanical degradation to be evaluated without the need for physical sensors. Such estimation algorithms require a precise model of the motor, but the standard electrical model used for hybrid stepper motors is limited when currents high enough to saturate the magnetic circuit are present. New model extensions are proposed in order to obtain a more precise model of the motor independently of the current level, whilst maintaining a low computational cost. It is shown that a significant improvement in model fit is achieved with these extensions, and their computational performance is compared in order to weigh model improvement against computational cost. The applicability of the proposed model extensions is demonstrated via their use in an Extended Kalman Filter running in real time for closed-loop current control and mechanical state estimation. An additional problem arises from the use of stepper motors: the mechanics of the collimators can wear due to the abrupt motion and torque profiles applied when the motors are used in the standard way, i.e. stepping in open loop. Closed-loop position control, more specifically Field Oriented Control, would allow smoother profiles, gentler on the mechanics, to be applied, but requires position feedback. As mentioned already, the use of sensors in radioactive environments is very limited for reliability reasons.
Sensorless control is a known option, but when the speed is very low or zero, as is the case most of the time for the motors used in the LHC collimators, the loss of observability prevents its use. To allow the use of position sensors without reducing the long-term reliability of the whole system, the possibility of switching between closed and open loop is proposed and validated, allowing closed-loop control when the position sensors function correctly and open-loop control when there is a sensor failure. A different approach to dealing with the switched drive working over long cables is also presented. Switched-mode stepper motor drives tend to perform poorly, or even fail completely, when the motor is fed through a long cable, due to the high oscillations in the drive-side current. The design of a stepper motor output filter which solves this problem is thus proposed. A two-stage filter, one stage devoted to the differential mode and the other to the common mode, is designed and validated experimentally. With this filter the drive performance is greatly improved, achieving a positioning repeatability even better than that of the drive working without a long cable; the radiated emissions are reduced and the overvoltages at the motor terminals are eliminated.
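As background, the per-control-period skeleton of an Extended Kalman Filter is short. The sketch below uses a generic two-state toy model (angle and speed, a nonlinear scalar measurement), with all matrices and values as placeholders rather than the thesis's motor model:

```python
import numpy as np

dt = 1e-3                                        # control period (placeholder)

def f(x):
    return np.array([x[0] + dt * x[1], x[1]])    # toy motion model

def F_jac(x):
    return np.array([[1.0, dt], [0.0, 1.0]])     # Jacobian of f

def h(x):
    return np.array([np.sin(x[0])])              # toy nonlinear measurement

def H_jac(x):
    return np.array([[np.cos(x[0]), 0.0]])       # Jacobian of h

def ekf_step(x, P, z, Q, R):
    # Predict.
    x_pred = f(x)
    Fk = F_jac(x)
    P_pred = Fk @ P @ Fk.T + Q
    # Update with measurement z.
    Hk = H_jac(x_pred)
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
Q, R = 1e-6 * np.eye(2), np.array([[1e-3]])
x, P = ekf_step(x, P, z=np.array([0.05]), Q=Q, R=R)
```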
Abstract:
The study of granular systems is of great interest to many fields of science and technology. The packing of particles affects the physical properties of the granular system. In particular, the crucial influence of the particle size distribution (PSD) on the random packing structure increases the interest in relating the two, either theoretically or by computational methods. A computational packing method is developed in order to estimate the void fraction corresponding to a fractal-like particle size distribution.
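A minimal sketch of one ingredient of such a method: drawing particle diameters from a fractal-like PSD, for which the cumulative number of particles larger than d scales as N(>d) ∝ d^(-D). Inverse-transform sampling of the truncated power law (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_fractal_psd(n, d_min=1.0, d_max=100.0, D=2.5):
    """Draw n diameters from a power-law PSD truncated to [d_min, d_max]."""
    u = rng.uniform(0.0, 1.0, n)
    a, b = d_min**-D, d_max**-D
    return (a - u * (a - b)) ** (-1.0 / D)

d = sample_fractal_psd(10_000)
print(f"median diameter: {np.median(d):.2f}")
```

Diameters drawn this way can then be fed to a packing simulation whose void fraction is measured directly.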
Abstract:
The study of particulate systems is of great interest in many fields of science and technology. Soils, sediments, powders, granular materials, and colloidal and particulate suspensions are examples of systems involving particles of many sizes. For such systems, the statistical description of the particle size distribution (PSD), that is, the mathematical distribution defining the relative amounts of particles present sorted according to size, is a crucial issue. The PSD can be important in understanding soil hydraulic properties, the geological origin of sediments, and the physical and chemical properties of granular materials and ceramics, among others.