975 results for Mean-field model
Abstract:
The properties of a proposed model of N point particles in direct interaction are considered in the limit of small velocities. It is shown that, in this limit, time correlations cancel out and that Newtonian dynamics is recovered for the system in a natural way.
Abstract:
The Gross-Neveu model in an S^1 space is analyzed by means of a variational technique: the Gaussian effective potential. By making the proper connection with previous exact results at finite temperature, we show that this technique is able to describe the phase transition occurring in this model. We also make some remarks about the appropriate treatment of Grassmann variables in variational approaches.
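For context, a minimal statement of the variational principle behind the Gaussian effective potential (the standard textbook definition, not this paper's specific finite-temperature calculation): the potential at a classical field value φ₀ is the minimum, over Gaussian trial states |φ₀, Ω⟩ with mean field φ₀ and variational mass Ω, of the expectation of the Hamiltonian density,

```latex
V_G(\phi_0) \;=\; \min_{\Omega}\,
  \langle \phi_0,\Omega \,|\, \mathcal{H} \,|\, \phi_0,\Omega \rangle .
```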
Abstract:
A new arena for the dynamics of spacetime is proposed, in which the basic quantum variable is the two-point distance on a metric space. The scaling dimension (that is, the Kolmogorov capacity) in the neighborhood of each point then defines in a natural way a local concept of dimension. We study our model in the region of parameter space in which the resulting spacetime is not too different from a smooth manifold.
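For reference, the Kolmogorov capacity (box-counting) definition of the local scaling dimension invoked here is standard: if N_x(ε) is the number of ε-balls needed to cover a neighborhood of the point x, then

```latex
d(x) \;=\; \lim_{\varepsilon \to 0}
  \frac{\ln N_x(\varepsilon)}{\ln(1/\varepsilon)} .
```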
Abstract:
A laboratory study has been conducted with two aims in mind. The first goal was to develop a description of how a cutting edge scrapes ice from the road surface. The second goal was to investigate the extent, if any, to which serrated blades were better than un-serrated or "classical" blades at ice removal. The tests were conducted in the Ice Research Laboratory at the Iowa Institute of Hydraulic Research of the University of Iowa. A specialized testing machine, with a hydraulic ram capable of attaining scraping velocities of up to 30 m.p.h., was used in the testing. In order to characterize the ice scraping process, the effects of scraping velocity, ice thickness, and blade geometry on the ice scraping forces were determined. Greater ice thickness led to more ice chipping (as opposed to pulverization at lower thicknesses) and thus lower loads. Behavior was observed at higher velocities. The study of blade geometry included the effects of rake angle, clearance angle, and flat width. The latter two were found to be particularly important in developing a clear picture of the scraping process. As clearance angle decreases and flat width increases, the scraping loads show a marked increase, due to the need to re-compress pulverized ice fragments. The effect of serrations was to decrease the scraping forces. However, for the coarsest serrated blades (with the widest teeth and gaps), the quantity of ice removed was significantly less than for a classical blade. Finer serrations appear to be able to match the ice removal of classical blades at lower scraping loads. Thus, one of the recommendations of this study is to examine the use of serrated blades in the field. Preliminary work (by Nixon and Potter, 1996) suggests such work will be fruitful. A second and perhaps more challenging result of the study is that chipping of the ice is preferable to its pulverization. How such chipping can be forced to occur is at present an open question.
Abstract:
A Monte Carlo procedure to simulate the penetration and energy loss of low-energy electron beams through solids is presented. Elastic collisions are described by using the method of partial waves for the screened Coulomb field of the nucleus. The atomic charge density is approximated by an analytical expression with parameters determined from the Dirac-Hartree-Fock-Slater self-consistent density obtained under Wigner-Seitz boundary conditions in order to account for solid-state effects; exchange effects are also accounted for by an energy-dependent local correction. Elastic differential cross sections are then easily computed by combining the WKB and Born approximations to evaluate the phase shifts. Inelastic collisions are treated on the basis of a generalized oscillator strength model which gives inelastic mean free paths and stopping powers in good agreement with experimental data. This scattering model is accurate in the energy range from a few hundred eV up to about 50 keV. The reliability of the simulation method is analyzed by comparing simulation results and experimental data from backscattering and transmission measurements.
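A minimal sketch of the simulation loop described above. The physics inputs (`elastic_mfp`, `inelastic_mfp`, the angular and energy-loss distributions) are hypothetical placeholders standing in for the partial-wave and generalized-oscillator-strength models; only the flight/collision structure is meant to be illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def elastic_mfp(E):
    # Hypothetical elastic mean free path (nm) as a function of energy (eV).
    return 0.5 + 0.01 * E ** 0.5

def inelastic_mfp(E):
    # Hypothetical inelastic mean free path (nm).
    return 1.0 + 0.02 * E ** 0.5

def sample_track(E0=1000.0, thickness=50.0, E_cut=50.0):
    """Follow one electron until absorption, transmission or backscattering."""
    E, z, mu = E0, 0.0, 1.0           # energy (eV), depth (nm), direction cosine
    while E > E_cut:
        lam_el, lam_in = elastic_mfp(E), inelastic_mfp(E)
        lam_tot = 1.0 / (1.0 / lam_el + 1.0 / lam_in)
        z += mu * rng.exponential(lam_tot)       # free flight between collisions
        if z < 0.0:
            return "backscattered"
        if z > thickness:
            return "transmitted"
        if rng.random() < lam_tot / lam_el:      # elastic event: deflect only
            dtheta = rng.exponential(0.3) * rng.choice([-1.0, 1.0])  # placeholder
            mu = float(np.cos(np.arccos(mu) + dtheta))
        else:                                    # inelastic event: lose energy
            E -= rng.exponential(20.0)           # placeholder energy-loss model
    return "absorbed"

fates = [sample_track() for _ in range(1000)]
print({f: fates.count(f) for f in set(fates)})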
Abstract:
With the advancement of high-throughput sequencing and the dramatic increase in available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting due to their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. Current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. The mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied to current mechanistic codon models are (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition-transversion rates at the nucleotide level.
In this thesis, I pursue two main objectives. The first is to develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the assumptions above. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one, which holds all of them, to the most general one, which relaxes all of them. The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the model best aligned with the underlying characteristics of each data set. Our experiments show that in none of the real data sets is holding all three assumptions realistic; using simple models that hold these assumptions can therefore be misleading and result in inaccurate parameter estimates. The second objective is to develop a generalized mechanistic codon model that relaxes all three simplifying assumptions while remaining computationally efficient, using a matrix operation called the Kronecker product. Our experiments show that, on randomly selected data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. Furthermore, I show through several experiments that the proposed general model is biologically plausible.
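As an illustration of the Kronecker machinery mentioned for the second objective, the sketch below assembles a 64×64 codon generator from per-position nucleotide matrices via a Kronecker sum, which covers single-position changes only. It is a generic construction with assumed parameter values, not the KCM parameterization itself, whose generalized model also uses such products to admit double and triple changes:

```python
import numpy as np

def hky(kappa=2.0, pi=(0.25, 0.25, 0.25, 0.25)):
    """HKY85 nucleotide rate matrix over the ordering (T, C, A, G)."""
    pi = np.asarray(pi)
    transitions = {(0, 1), (1, 0), (2, 3), (3, 2)}   # T<->C, A<->G
    Q = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            if i != j:
                Q[i, j] = (kappa if (i, j) in transitions else 1.0) * pi[j]
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

# Per-position nucleotide generators (relaxing assumption (b): they may differ).
Q1, Q2, Q3 = hky(2.0), hky(3.0), hky(1.5)
I = np.eye(4)

# Kronecker sum: single-nucleotide changes at any of the three codon positions.
Q_codon = (np.kron(np.kron(Q1, I), I)
           + np.kron(np.kron(I, Q2), I)
           + np.kron(np.kron(I, I), Q3))
assert np.allclose(Q_codon.sum(axis=1), 0.0)   # valid generator: rows sum to 0
print(Q_codon.shape)                           # (64, 64)
```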
Abstract:
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. In this study we focus on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light-scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies, such as cross-correlation or Granger causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures, where we suggest that excitatory connections are characterized by an elevated level of clustering compared to a random graph (although not extreme) and can be markedly non-local.
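A minimal sketch of a plain (unconditioned, history-length-one) transfer entropy estimator on discretized traces; the method described above additionally conditions on the global mean activity and handles fluorescence-specific effects:

```python
import numpy as np

def transfer_entropy(x, y, bins=3):
    """TE(x -> y) with one step of history: I(y_{t+1}; x_t | y_t), in bits."""
    edges_x = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    edges_y = np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1])
    xd, yd = np.digitize(x, edges_x), np.digitize(y, edges_y)
    yt1, yt, xt = yd[1:], yd[:-1], xd[:-1]
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                p_abc = np.mean((yt1 == a) & (yt == b) & (xt == c))
                if p_abc == 0.0:
                    continue
                p_bc = np.mean((yt == b) & (xt == c))
                p_ab = np.mean((yt1 == a) & (yt == b))
                p_b = np.mean(yt == b)
                te += p_abc * np.log2(p_abc * p_b / (p_ab * p_bc))
    return te

# Toy check: y follows x with a one-step lag, so TE(x -> y) >> TE(y -> x).
rng = np.random.default_rng(1)
x = rng.normal(size=5000)
y = 0.8 * np.roll(x, 1) + 0.2 * rng.normal(size=5000)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```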
Abstract:
The transport and magnetotransport properties of metallic, ferromagnetic SrRuO3 (SRO) and metallic, paramagnetic LaNiO3 (LNO) epitaxial thin films have been investigated in fields up to 55 T at temperatures down to 1.8 K. At low temperatures both samples display a well-defined resistivity minimum. We argue that this behavior is due to the increasing relevance of quantum corrections to the conductivity (QCC) as temperature is lowered, an effect that is particularly relevant in these oxides due to their short mean free path. However, it is not straightforward to discriminate between the contributions of weak localization and of the renormalization of electron-electron interactions to the QCC through the temperature dependence alone. We have taken advantage of the distinct effect of a magnetic field on the two mechanisms to demonstrate that in ferromagnetic SRO the weak-localization contribution is suppressed by the large internal field, leaving only renormalized electron-electron interactions, whereas in the nonmagnetic LNO thin films the weak-localization term is relevant.
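For reference, the textbook three-dimensional forms of the two corrections (not fits from this work) differ in their temperature and field dependences, which is what makes them separable:

```latex
\Delta\sigma_{\mathrm{WL}}(T) \propto T^{p/2}, \qquad
\Delta\sigma_{\mathrm{EEI}}(T) \propto \sqrt{T},
```

with only the weak-localization term suppressed by a magnetic field, whether applied or, as argued here for SRO, internal.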
Abstract:
BACKGROUND: Walk-in centres may improve access to healthcare for some patients, due to their convenient location and extensive opening hours, with no need for an appointment. Herein, we describe and assess a new model of walk-in centre, characterised by care provided by residents under the supervision of experienced family doctors. The main aim of the study was to assess patients' satisfaction with the care they received from residents and with their supervision by family doctors. The secondary aim was to describe walk-in patients' demographic characteristics and to identify potential associations with satisfaction. METHODS: The study was conducted in the walk-in centre of Lausanne. Patients who consulted between 11th and 31st April were automatically included and received a questionnaire in French. We used a five-point Likert scale, ranging from "not at all satisfied" to "very satisfied", converted to values of 1 to 5. We focused on satisfaction regarding residents' care and supervision by a family doctor. The former was divided into three categories: "Skills", "Treatment" and "Behaviour". A mean satisfaction score was calculated for each category, and a multivariable logistic model was applied in order to identify associations with patients' demographics. RESULTS: The overall response rate was 47% (184/395). Walk-in patients were more likely to be women (62%), young (median age 31), and highly educated (40% with a University degree or equivalent). Patients were "very satisfied" with residents' care, with a median satisfaction score between 4.5 and 5 for each category. Over 90% of patients were "satisfied" or "very satisfied" that a family doctor was involved in the consultation. Age showed the greatest association with satisfaction. CONCLUSION: Patients were highly satisfied with the care provided by residents and with the involvement of a family doctor in the consultation. Older age showed the greatest positive association with satisfaction. The high level of satisfaction reported by walk-in patients supports this new model of walk-in centre.
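A minimal sketch of the kind of analysis described in METHODS, with hypothetical column names and simulated data (the paper's exact model specification is not given):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per respondent; Likert items (1-5) already
# averaged into a per-category score; "satisfied" = mean score >= 4.
rng = np.random.default_rng(2)
n = 184
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "female": rng.integers(0, 2, n),
    "university": rng.integers(0, 2, n),
    "skills_score": rng.uniform(1, 5, n),
})
df["satisfied"] = (df["skills_score"] >= 4).astype(int)

# Multivariable logistic model: association of demographics with satisfaction.
model = smf.logit("satisfied ~ age + female + university", data=df).fit(disp=0)
print(model.params)   # log-odds; exponentiate to obtain odds ratios
```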
Abstract:
We investigate chaotic, memory, and cooling-rate effects in the three-dimensional Edwards-Anderson model by performing thermoremanent magnetization (TRM) and ac susceptibility numerical experiments and making a detailed comparison with laboratory experiments on spin glasses. In contrast to the experiments, the Edwards-Anderson model does not show any trace of reinitialization processes in temperature-change experiments (TRM or ac). A detailed comparison with ac relaxation experiments in the presence of a dc magnetic field or coupling-distribution perturbations reveals that the absence of chaotic effects in the Edwards-Anderson model is a consequence of the presence of strong cooling-rate effects. We discuss possible solutions to this discrepancy, in particular the smallness of the time scales reached in numerical experiments, but we also question the validity of the Edwards-Anderson model to reproduce the experimental results.
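A minimal Metropolis sketch of the three-dimensional Edwards-Anderson model that such numerical experiments build on (toy size, fixed temperature; TRM and cooling-rate protocols would add field steps and a temperature schedule on top of this loop):

```python
import numpy as np

rng = np.random.default_rng(3)
L = 8
spins = rng.choice([-1, 1], size=(L, L, L))
# Quenched Gaussian couplings; J[a][x] couples site x to its neighbour x + e_a.
J = [rng.normal(size=(L, L, L)) for _ in range(3)]

def local_field(s, i, j, k):
    """Sum of J * s over the six neighbours of site (i, j, k), periodic."""
    h = 0.0
    for a in range(3):
        fwd = [i, j, k]; fwd[a] = (fwd[a] + 1) % L
        bwd = [i, j, k]; bwd[a] = (bwd[a] - 1) % L
        h += J[a][i, j, k] * s[tuple(fwd)] + J[a][tuple(bwd)] * s[tuple(bwd)]
    return h

def metropolis_sweep(s, T):
    for _ in range(s.size):
        i, j, k = rng.integers(0, L, 3)
        dE = 2.0 * s[i, j, k] * local_field(s, i, j, k)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            s[i, j, k] *= -1

def energy_per_spin(s):
    return -sum(np.sum(J[a] * s * np.roll(s, -1, axis=a)) for a in range(3)) / s.size

for sweep in range(100):             # fixed-T run; a cooling-rate experiment
    metropolis_sweep(spins, T=1.0)   # would make T a decreasing function of sweep
print("energy per spin:", energy_per_spin(spins))
```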
Abstract:
Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are therefore of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man-oriented), FMEA (system-oriented), or HAZOP (process-oriented), is not satisfactory. The use of a dynamic modeling approach that allows multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized on an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an occupational health and safety (OH&S) perspective. The industrial process is modeled as a set of interconnected subnets (state spaces), which describe its constitutive machines. Process-related factors are introduced, in an explicit way, through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the state spaces of the machines, and the CREAM cognitive behavior model is used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and, consequently, the interconnection of the measure constraints. This is reflected in the construction of constraint-enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework. The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process. Most of all, however, it opens perspectives in the fields of risk comparison and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
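A minimal token-game sketch of the place/transition mechanics underlying such models (an ordinary Petri net, not the CO-OPN object formalism; all names are illustrative):

```python
# Minimal place/transition Petri net: machines as places, operations as
# transitions; firing moves tokens (workpieces) through the process.
marking = {"raw": 3, "machine_A": 0, "machine_B": 0, "done": 0}

# transition name -> (consumed tokens, produced tokens)
transitions = {
    "load_A":   ({"raw": 1},       {"machine_A": 1}),
    "A_to_B":   ({"machine_A": 1}, {"machine_B": 1}),
    "unload_B": ({"machine_B": 1}, {"done": 1}),
}

def enabled(name):
    return all(marking[p] >= n for p, n in transitions[name][0].items())

def fire(name):
    pre, post = transitions[name]
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] += n

# Run the token game until no transition is enabled (a state-space analysis
# would instead enumerate all reachable markings).
while any(enabled(t) for t in transitions):
    fire(next(t for t in transitions if enabled(t)))
print(marking)   # {'raw': 0, 'machine_A': 0, 'machine_B': 0, 'done': 3}
```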