982 results for Interacting particle systems
Abstract:
During the last decade, large and costly instruments have been progressively replaced by systems based on microfluidic devices. Microfluidic devices hold the promise of combining a small analytical laboratory onto a chip-sized substrate to identify, immobilize, separate, and purify cells, bio-molecules, toxins, and other chemical and biological materials. Compared to conventional instruments, microfluidic devices would perform these tasks faster, with higher sensitivity and efficiency, and at greater affordability. Dielectrophoresis is one of the enabling technologies for these devices. It exploits differences in particle dielectric properties to allow manipulation and characterization of particles suspended in a fluidic medium. Particles can be trapped or moved between regions of high or low electric field owing to polarization effects in non-uniform electric fields. By varying the frequency of the applied electric field, the magnitude and direction of the dielectrophoretic force on the particle can be controlled. Dielectrophoresis has been successfully demonstrated in the separation, transportation, trapping, and sorting of various biological particles.
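For a spherical particle, this frequency dependence is captured by the standard textbook expression for the time-averaged dielectrophoretic force (general background, not a formula specific to this abstract):

$$\langle \mathbf{F}_{\mathrm{DEP}} \rangle = 2\pi \varepsilon_m r^3 \,\mathrm{Re}\!\left[K(\omega)\right]\nabla |\mathbf{E}_{\mathrm{rms}}|^2, \qquad K(\omega) = \frac{\varepsilon_p^* - \varepsilon_m^*}{\varepsilon_p^* + 2\varepsilon_m^*},$$

where $r$ is the particle radius, $\varepsilon_m$ the medium permittivity, and $\varepsilon^* = \varepsilon - i\sigma/\omega$ the complex permittivity of particle ($p$) or medium ($m$). The sign of $\mathrm{Re}[K(\omega)]$, which changes with frequency, determines whether the particle is drawn toward (positive DEP) or repelled from (negative DEP) regions of high field, which is how frequency controls both magnitude and direction of the force.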
Abstract:
This paper proposes to promote autonomy in digital ecosystems so that agents are provided with information that improves the behavior of the digital ecosystem in terms of stability. This work proposes that, in digital ecosystems, autonomous agents can provide fundamental services and information. The final goal is to run the ecosystem, generate novel conditions, and let agents exploit them. A set of evaluation measures must be defined as well. We want to provide an outline of some global indicators, such as heterogeneity and diversity, and to establish relationships between agent behavior and these global indicators, in order to fully understand the interactions between agents and the dependence and autonomy relations that emerge between interacting agents. Individual variations, interaction dependencies, and environmental factors are determinants of autonomy that will be considered. The paper concludes with a discussion of situations in which autonomy is a milestone.
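As a concrete, purely hypothetical example of such a global indicator, diversity could be measured as the Shannon entropy of the distribution of agent types; the abstract does not fix a definition, so this is only one plausible choice:

```python
# Hedged sketch: Shannon entropy as a "diversity" indicator over agent types.
# The agent type names are illustrative, not taken from the paper.
from collections import Counter
from math import log

def shannon_diversity(agent_types):
    counts = Counter(agent_types)
    n = len(agent_types)
    # Higher entropy means a more even mix of agent types.
    return -sum((c / n) * log(c / n) for c in counts.values())

print(shannon_diversity(["forager", "forager", "trader", "broker"]))  # ~1.04
```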
Abstract:
The tagging of proteins with ubiquitin, known as ubiquitination, serves different functions, including the regulation of several cellular processes such as protein degradation by the proteasome, DNA repair, membrane receptor-mediated signalling, and endocytosis, among others (1). Ubiquitin molecules can be removed from their substrates through the action of a large group of proteases called deubiquitinating enzymes (DUBs) (2). DUBs are essential for maintaining ubiquitin homeostasis and for regulating the ubiquitination state of different substrates. The large number and diversity of the DUBs described reflect both their specificity and their use in regulating a broad spectrum of substrates and cellular pathways. Although many DUBs have been studied in depth, the substrates and biological functions of most of them remain unknown. In this work, the functions of the DUBs USP19, USP4, and UCH-L1 were investigated. Using several molecular and cell biology techniques, it was found that: i) USP19 is regulated by the ubiquitin ligases SIAH1 and SIAH2; ii) USP19 is important for the regulation of HIF-1α, a key transcription factor in the cellular response to hypoxia; iii) USP4 interacts with the proteasome; iv) the mCherry-UCH-L1 chimera partially reproduces the phenotypes that our group has previously described using other constructs of the same enzyme; and v) UCH-L1 promotes the internalization of the bacterium Yersinia pseudotuberculosis.
Abstract:
A branching random motion on a line, with abrupt changes of direction, is studied. The branching mechanism, which is independent of the random motion, and the intensities of reversal are defined by a particle's current direction. A solution of a certain hyperbolic system of coupled non-linear equations (a Kolmogorov-type backward equation) has a so-called McKean representation via such processes. Commonly this system possesses travelling-wave solutions. The convergence of solutions with Heaviside terminal data to the travelling waves is discussed. This paper realizes the McKean programme for the Kolmogorov-Petrovskii-Piskunov equation in this case. The Feynman-Kac formula plays a key role.
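For orientation, recall the classical McKean representation in the Brownian setting (background only; the paper treats a finite-velocity analogue): for the KPP equation

$$\partial_t u = \tfrac{1}{2}\,\partial_{xx} u + u^2 - u, \qquad u(0,x) = f(x),$$

the solution can be written as

$$u(t,x) = \mathbb{E}\left[\prod_{k=1}^{N(t)} f\big(x + X_k(t)\big)\right],$$

where $X_1(t),\dots,X_{N(t)}(t)$ are the positions at time $t$ of the particles of a binary branching Brownian motion. The paper carries this programme over to branching random motions with finite speed and direction-dependent branching and reversal intensities.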
Abstract:
Farming systems research is a multi-disciplinary holistic approach to solving the problems of small farms. Small and marginal farmers are the core of the Indian rural economy, constituting 80% of the total farming community but possessing only 36% of the total operational land. The declining trend of per capita land availability poses a serious challenge to the sustainability and profitability of farming. Under such conditions, it is appropriate to integrate land-based enterprises such as dairy, fishery, poultry, duckery, apiary, and field and horticultural cropping within the farm, with the objective of generating adequate income and employment for these small and marginal farmers under a set of farm constraints and varying levels of resource availability and opportunity. The integration of different farm enterprises can be achieved with the help of a linear programming model. For the current review, integrated farming systems models were developed, by way of illustration, for the marginal, small, medium and large farms of eastern India using linear programming. Risk analyses were carried out for different levels of income and enterprise combinations. The fishery enterprise was shown to be less risk-prone, whereas the crop enterprise involved greater risk. In general, the degree of risk increased with increasing level of income. With increases in farm income and risk level, resource use efficiency increased. Medium and large farms proved to be more profitable than small and marginal farms, with a higher level of resource use efficiency and return per Indian rupee (Rs) invested. Among the different enterprises of integrated farming systems, a chain of interaction and resource flow was observed. In order to make farming profitable and improve resource use efficiency at the farm level, the synergy among interacting components of farming systems should be exploited. In the process of technology generation, transfer and other developmental efforts at the farm level (contrary to the discipline- and commodity-based approaches, which tend to be piecemeal and in isolation), it is desirable to place a whole-farm scenario before the farmers to enhance their farm income, thereby motivating them towards more efficient and sustainable farming.
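A toy version of such a linear programme can make the enterprise-allocation idea concrete. The sketch below is purely illustrative: the enterprises, net returns, and labour coefficients are assumptions, not figures from the review.

```python
# Illustrative sketch only: allocate land among farm enterprises to maximise
# net income subject to land and labour constraints, in the spirit of the
# integrated-farming-systems LP models described above.
from scipy.optimize import linprog

# Decision variables: hectares of crop, dairy, fishery, poultry (hypothetical).
net_return = [30_000, 45_000, 40_000, 35_000]   # Rs per hectare (assumed)
c = [-r for r in net_return]                    # linprog minimises, so negate

A_ub = [
    [1, 1, 1, 1],          # land used per hectare of each enterprise
    [120, 200, 150, 180],  # labour (person-days) per hectare (assumed)
]
b_ub = [1.0, 250]          # 1 ha marginal farm, 250 person-days available

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
print("Allocation (ha):", res.x, "Max income (Rs):", -res.fun)
```

Risk could then be explored, as in the review, by re-solving at different required income levels and examining how the enterprise mix shifts.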
Abstract:
Virtual Reality (VR) is widely used in visualizing medical datasets. This interest has emerged due to the usefulness of its techniques and features. Such features include immersion, collaboration, and interactivity. In a medical visualization context, immersion is important because it allows users to interact directly and closely with detailed structures in medical datasets. Collaboration, on the other hand, is beneficial because it gives medical practitioners the chance to share their expertise and offer feedback and advice in a more effective and intuitive way. Interactivity is crucial in medical visualization and simulation systems, because responsive and instantaneous actions are key attributes of applications such as surgical simulations. In this paper we present a case study that investigates the use of VR in a collaborative networked CAVE environment from a medical volumetric visualization perspective. The study presents a networked CAVE application built to visualize and interact with volumetric datasets. We summarize the advantages of such an application and the potential benefits of our system. We also describe the aspects related to this application area and the relevant issues of such implementations.
Nonlinear system identification using particle swarm optimisation tuned radial basis function models
Abstract:
A novel particle swarm optimisation (PSO) tuned radial basis function (RBF) network model is proposed for the identification of non-linear systems. At each stage of the orthogonal forward regression (OFR) model construction process, PSO is adopted to tune one RBF unit's centre vector and diagonal covariance matrix by minimising the leave-one-out (LOO) mean square error (MSE). This PSO-aided OFR automatically determines how many tunable RBF nodes are sufficient for modelling. Compared with the state-of-the-art local regularisation assisted orthogonal least squares algorithm based on the LOO MSE criterion for constructing fixed-node RBF network models, the PSO-tuned RBF model construction produces more parsimonious RBF models with better generalisation performance and is often more efficient in model construction. The effectiveness of the proposed PSO-aided OFR algorithm for constructing tunable-node RBF models is demonstrated using three real data sets.
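To illustrate the PSO ingredient, here is a generic global-best PSO minimiser, not the paper's exact tuning procedure: in the paper's setting the loss would be the LOO MSE of a candidate RBF unit, and all parameter values below are assumptions.

```python
# Generic global-best PSO sketch: minimise a user-supplied loss over R^dim.
import numpy as np

def pso_minimise(loss, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()                                # per-particle best positions
    pbest_val = np.array([loss(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([loss(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Example: tune a 2-D "centre" toward a target point.
centre, err = pso_minimise(lambda c: ((c - np.array([0.3, -0.2]))**2).sum(), dim=2)
```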
Abstract:
Although extensively studied within the lidar community, the multiple scattering phenomenon has always been considered a rare curiosity by radar meteorologists. Until a few years ago its appearance had only been associated with two- or three-body scattering features (e.g. hail flares and mirror images) involving highly reflective surfaces. Recent atmospheric research aimed at a better understanding of the water cycle and the role played by clouds and precipitation in affecting the Earth's climate has driven the deployment of high-frequency radars in space. Examples are the TRMM 13.5 GHz, the CloudSat 94 GHz, the upcoming EarthCARE 94 GHz, and the GPM dual 13-35 GHz radars. These systems are able to detect the vertical distribution of hydrometeors and thus provide crucial feedback for radiation and climate studies. The shift towards higher frequencies increases the sensitivity to hydrometeors, improves the spatial resolution and reduces the size and weight of the radar systems. On the other hand, higher-frequency radars are affected by stronger extinction, especially in the presence of large precipitating particles (e.g. raindrops or hail particles), which may eventually drive the signal below the minimum detection threshold. In such circumstances the interpretation of the radar equation via the single scattering approximation may be problematic. Errors will be large when the radiation emitted from the radar, after interacting more than once with the medium, still contributes substantially to the received power. This is the case if the transport mean free path becomes comparable with the instrument footprint (determined by the antenna beam-width and the platform altitude). This situation resembles what has already been experienced in lidar observations, but with a predominance of wide- versus small-angle scattering events. At millimeter wavelengths, hydrometeors diffuse radiation rather isotropically, compared to the visible or near-infrared region where scattering is predominantly in the forward direction. A complete understanding of radiation transport modeling and data analysis methods under wide-angle multiple scattering conditions is mandatory for a correct interpretation of echoes observed by space-borne millimeter radars. This paper reviews the status of research in this field. Different numerical techniques currently implemented to account for higher-order scattering are reviewed and their weaknesses and strengths highlighted. Examples of simulated radar backscattering profiles are provided, with particular emphasis given to situations in which the multiple scattering contributions become comparable to or overwhelm the single scattering signal. We show evidence of multiple scattering effects from air-borne and from CloudSat observations, i.e. unique signatures which cannot be explained by single scattering theory. Ideas on how to identify and tackle multiple scattering effects are discussed. Finally, perspectives and suggestions for future work are outlined. This work represents a reference guide for studies focused on modeling the radiation transport and on interpreting data from high-frequency space-borne radar systems that probe highly opaque scattering media such as thick ice clouds or precipitating clouds.
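The footprint criterion mentioned above can be made explicit (the symbols here are ours, not the paper's): for platform altitude $h$ and antenna 3 dB beam-width $\theta_{3\mathrm{dB}}$ (in radians), the footprint diameter is roughly

$$D \;\approx\; h\,\theta_{3\mathrm{dB}},$$

and wide-angle multiple scattering is expected to become significant once the photon transport mean free path $\ell_T$ in the medium satisfies $\ell_T \lesssim D$. For CloudSat-like parameters ($h \approx 700$ km, $\theta_{3\mathrm{dB}} \approx 0.1^{\circ}$), $D$ is on the order of a kilometre.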
Abstract:
Deep Brain Stimulation (DBS) has been successfully used throughout the world for the treatment of Parkinson's disease symptoms. To control abnormal spontaneous electrical activity in target brain areas, DBS utilizes a continuous stimulation signal. This continuous power draw means that the implanted battery power source needs to be replaced every 18–24 months. To prolong the life span of the battery, a technique to accurately recognize and predict the onset of Parkinson's disease tremors in human subjects, and thus implement an on-demand stimulator, is discussed here. The approach is to use a radial basis function neural network (RBFNN) based on particle swarm optimization (PSO) and principal component analysis (PCA), with Local Field Potential (LFP) data recorded via the stimulation electrodes, to predict activity related to tremor onset. To test this approach, LFPs from the subthalamic nucleus (STN), obtained through deep brain electrodes implanted in a Parkinson's patient, are used to train the network. To validate the network's performance, electromyographic (EMG) signals from the patient's forearm are recorded in parallel with the LFPs to accurately determine occurrences of tremor, and these are compared to the output of the network. It has been found that detection accuracies of up to 89% are possible. Performance comparisons have also been made between a conventional RBFNN and an RBFNN based on PSO, which show a marginal decrease in performance but a notable reduction in computational overhead.
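A hedged sketch of the general shape of such a pipeline follows; every detail (feature dimensions, number of centres, random placeholder data) is an assumption, and where the paper uses PSO to tune the RBF centres, random selection stands in below.

```python
# Sketch: PCA compresses LFP-derived features, then a Gaussian RBF network
# with least-squares output weights scores each window for tremor activity.
import numpy as np
from sklearn.decomposition import PCA

def rbf_design(X, centres, width):
    # Gaussian activations for every (sample, centre) pair.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width**2))

rng = np.random.default_rng(0)
X_lfp = rng.standard_normal((200, 32))        # stand-in LFP spectral features
y = (rng.random(200) > 0.5).astype(float)     # stand-in tremor labels from EMG

Z = PCA(n_components=5).fit_transform(X_lfp)  # dimensionality reduction
centres = Z[rng.choice(len(Z), 10, replace=False)]  # paper: PSO-tuned centres
Phi = rbf_design(Z, centres, width=1.0)
w = np.linalg.lstsq(Phi, y, rcond=None)[0]    # output weights by least squares
pred = Phi @ w > 0.5                          # thresholded tremor prediction
print("training accuracy:", (pred == y.astype(bool)).mean())
```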
Abstract:
Clusters of computers can be used together to provide a powerful computing resource. Large Monte Carlo simulations, such as those used to model particle growth, are computationally intensive and take considerable time to execute on conventional workstations. By spreading the work of the simulation across a cluster of computers, the elapsed execution time can be greatly reduced. A user thus effectively gains the performance of a supercomputer by using the spare cycles of other workstations.
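The work-splitting idea is easy to demonstrate on a single machine with worker processes standing in for cluster nodes; this is an illustrative sketch, not the simulation code the abstract refers to.

```python
# Split a Monte Carlo estimate of pi across worker processes, mimicking
# spreading independent simulation work over a cluster of workstations.
import random
from multiprocessing import Pool

def mc_samples(n):
    # Count points falling inside the unit quarter-circle.
    hits = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        hits += (x * x + y * y) <= 1.0
    return hits

if __name__ == "__main__":
    n_workers, n_per_worker = 4, 1_000_000
    with Pool(n_workers) as pool:
        hits = sum(pool.map(mc_samples, [n_per_worker] * n_workers))
    print("pi ~", 4 * hits / (n_workers * n_per_worker))
```

Because Monte Carlo trials are independent, the speed-up is close to linear in the number of workers, which is what makes scavenging spare cycles attractive.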
Abstract:
It is known that terraces at the air-polymer interface of lamella-forming diblock copolymers do not make discontinuous jumps in height. Despite the underlying discretized structure, the height profiles are smoothly varying. The width of the transition region of a terrace edge in isolation is typically several hundred nanometres, resulting from a balance between surface tension, chain stretching penalties, and the enthalpy of mixing. What is less well known in these systems is what happens when two transition regions interact with one another. In this study, we investigate the dynamics of the interactions between copolymer lamellar edges. We find that the data can be well described by a model that assumes a repulsion between adjacent edges. While the model is simplistic and does not include molecular-level details, its agreement with the data suggests that some of the underlying assumptions provide insight into the complex interplay between defects.
Abstract:
In this paper a new system identification algorithm is introduced for Hammerstein systems, based on observational input/output data. The nonlinear static function in the Hammerstein system is modelled using a non-uniform rational B-spline (NURB) neural network. The proposed system identification algorithm for this NURB-network-based Hammerstein system consists of two successive stages. First, the shaping parameters of the NURB network are estimated using a particle swarm optimization (PSO) procedure. Then the remaining parameters are estimated by the singular value decomposition (SVD) method. Numerical examples, including a model-based controller, are utilized to demonstrate the efficacy of the proposed approach. The controller consists of computing the inverse of the nonlinear static function approximated by the NURB network, followed by a linear pole assignment controller.
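The two-stage structure can be illustrated on a toy Hammerstein system, with a polynomial basis standing in for the paper's NURB network and all numbers assumed; this is the classic overparameterisation idea rather than the paper's specific algorithm.

```python
# Toy Hammerstein identification: static nonlinearity feeding linear dynamics.
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(-1.0, 1.0, 500)
x = np.tanh(2.0 * u)                     # unknown static nonlinearity
y = np.zeros_like(u)
for t in range(1, len(u)):               # linear block: y_t = 0.8 y_{t-1} + x_{t-1}
    y[t] = 0.8 * y[t - 1] + x[t - 1]

basis = np.stack([u, u**3, u**5], axis=1)       # odd-polynomial stand-in for NURB
Phi = np.column_stack([y[:-1], basis[:-1]])     # regressors at time t-1
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, bc = theta[0], theta[1:]                 # bc = linear gain x basis coeffs
# For higher-order or multi-channel models, the coupled coefficients form a
# rank-1 matrix whose SVD separates the linear gain from the nonlinearity
# coefficients, which is the role the SVD stage plays.
print("a_hat:", round(a_hat, 3), "scaled nonlinearity coeffs:", bc.round(3))
```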
Abstract:
In addition to the Hamiltonian functional itself, non-canonical Hamiltonian dynamical systems generally possess integral invariants known as ‘Casimir functionals’. In the case of the Euler equations for a perfect fluid, the Casimir functionals correspond to the vortex topology, whose invariance derives from the particle-relabelling symmetry of the underlying Lagrangian equations of motion. In a recent paper, Vallis, Carnevale & Young (1989) presented algorithms for finding steady states of the Euler equations that represent extrema of energy subject to given vortex topology, and are therefore stable. The purpose of this note is to point out a very general method for modifying any Hamiltonian dynamical system into an algorithm that is analogous to those of Vallis et al. in that it will systematically increase or decrease the energy of the system while preserving all of the Casimir invariants. By incorporating momentum into the extremization procedure, the algorithm is able to find steadily translating as well as steady stable states. The method is applied to a variety of perfect-fluid systems, including Euler flow as well as compressible and incompressible stratified flow.
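A minimal sketch of the kind of modification described, in assumed notation: write the non-canonical system as $\dot z = J(z)\,\delta H/\delta z$ with antisymmetric Poisson operator $J$, and consider the modified dynamics

$$\dot z \;=\; J\frac{\delta H}{\delta z} \;+\; \alpha\, J\,\mathcal{M}\,J^{*}\frac{\delta H}{\delta z},$$

with $\mathcal{M}$ positive-definite and $\alpha$ a real parameter. The energy then evolves monotonically, since antisymmetry kills the first term and

$$\frac{dH}{dt} \;=\; \alpha\left\langle J^{*}\frac{\delta H}{\delta z},\; \mathcal{M}\,J^{*}\frac{\delta H}{\delta z}\right\rangle,$$

whose sign is set by $\alpha$, while every Casimir $C$ (characterised by $J\,\delta C/\delta z = 0$) is exactly preserved because $\dot z$ lies in the range of $J$: $\dot C = \langle \delta C/\delta z,\, J w\rangle = -\langle J\,\delta C/\delta z,\, w\rangle = 0$.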
Abstract:
We consider the problem of discrete-time filtering (intermittent data assimilation) for differential equation models and discuss methods for its numerical approximation. The focus is on methods based on ensemble/particle techniques and on the ensemble Kalman filter technique in particular. We summarize as well as extend recent work on continuous ensemble Kalman filter formulations, which provide a concise dynamical-systems formulation of the combined dynamics-assimilation problem. Possible extensions to fully nonlinear ensemble/particle-based filters are also outlined using the framework of optimal transportation theory.
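For reference, a single analysis step of the stochastic ensemble Kalman filter in its textbook form (not the paper's continuous formulation) can be written compactly; the linear observation operator and perturbed-observation variant are assumptions of this sketch.

```python
# Stochastic EnKF analysis step: ensemble X of shape (dim, N), observation y,
# linear observation operator H, observation error covariance R.
import numpy as np

def enkf_analysis(X, y, H, R, rng=np.random.default_rng(0)):
    dim, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
    P = A @ A.T / (N - 1)                         # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    # Perturbed observations give the analysis ensemble the correct spread.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)
```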
Abstract:
Particle filters are fully non-linear data assimilation techniques that aim to represent the probability distribution of the model state given the observations (the posterior) by a number of particles. In high-dimensional geophysical applications the number of particles required by the sequential importance resampling (SIR) particle filter to capture the high-probability region of the posterior is too large to make them usable. However, particle filters can be formulated using proposal densities, which give greater freedom in how particles are sampled and allow for a much smaller number of particles. Here a particle filter is presented which uses the proposal density to ensure that all particles end up in the high-probability region of the posterior probability density function. This opens the possibility of non-linear data assimilation in high-dimensional systems. The particle filter formulation is compared to the optimal proposal density particle filter and the implicit particle filter, both of which also utilise a proposal density. We show that when observations are available every time step, both schemes will be degenerate when the number of independent observations is large, unlike the new scheme. The sensitivity of the new scheme to its parameter values is explored theoretically and demonstrated using the Lorenz (1963) model.
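For contrast with the proposal-density approach, a minimal SIR step (the generic textbook scheme that is argued to degenerate in high dimensions, not the new filter itself) looks like this:

```python
# Generic SIR particle filter step: propagate, weight by the observation, resample.
import numpy as np

def sir_step(particles, propagate, likelihood, rng):
    particles = propagate(particles)   # sample from the model transition density
    w = likelihood(particles)          # importance weights from the observation
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]              # equal-weight resampled ensemble

# Toy usage: scalar random walk observed with unit-variance Gaussian noise.
rng = np.random.default_rng(0)
ens = rng.standard_normal(500)
obs = 1.2
ens = sir_step(ens,
               lambda p: p + 0.1 * rng.standard_normal(p.shape),
               lambda p: np.exp(-0.5 * (obs - p) ** 2),
               rng)
```

The degeneracy problem appears when the likelihood concentrates on a region that contains almost no particles, so that one weight dominates; proposal densities steer the particles into that region before weighting.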