965 results for Mean Field Analysis
Abstract:
At the jamming transition, amorphous packings are known to display anomalous vibrational modes with a density of states (DOS) that remains constant at low frequency. The scaling of the DOS at higher packing fractions remains, however, unclear. One might expect to find a simple Debye scaling, but recent results from effective medium theory and the exact solution of mean-field models both predict an anomalous, non-Debye scaling. Being mean-field in nature, however, these solutions are only strictly valid in the limit of infinite spatial dimension, and it is unclear what value they have for finite-dimensional systems. Here, we study packings of soft spheres in dimensions 3 through 7 and find, away from jamming, a universal non-Debye scaling of the DOS that is consistent with the mean-field predictions. We also consider how the soft mode participation ratio evolves as dimension increases.
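For orientation, the contrast at issue can be summarized with the standard forms quoted in the literature (not taken from the abstract itself): in d spatial dimensions, Debye theory predicts a dimension-dependent low-frequency scaling, whereas the mean-field prediction away from jamming is a dimension-independent quadratic law below a characteristic frequency ω*,

\[
D_{\text{Debye}}(\omega) \sim \omega^{d-1},
\qquad
D_{\text{MF}}(\omega) \sim \frac{\omega^{2}}{\omega_{*}^{2}} \quad (\omega \ll \omega_{*}),
\]

with ω* vanishing on approach to jamming, where the flat plateau D(ω) ∼ const mentioned above takes over.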
Abstract:
The conventional mechanism of fermion mass generation in the Standard Model involves Spontaneous Symmetry Breaking (SSB). In this thesis, we study an alternate mechanism for the generation of fermion masses that does not require SSB, in the context of lattice field theories. Because this mechanism is inherently strongly coupled, it requires a non-perturbative approach such as the lattice.
In order to explore this mechanism, we study a simple lattice model with a four-fermion interaction that has massless fermions at weak couplings and massive fermions at strong couplings, but without any spontaneous symmetry breaking. Prior work on this type of mass generation mechanism in 4D was done long ago using either mean-field theory or Monte Carlo calculations on small lattices. In this thesis, we have developed a new computational approach that enables us to perform large-scale quantum Monte Carlo calculations to study the phase structure of this theory. In 4D, our results confirm prior results, but differ in some quantitative details of the phase diagram. In contrast, in 3D, we discover a new second-order critical point using calculations on lattices up to size $60^3$. Such large-scale calculations are unprecedented. The presence of the critical point implies the existence of an alternate mechanism of fermion mass generation without any SSB, which could be of interest in continuum quantum field theory.
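The abstract does not specify the action; purely as a schematic illustration (our notation), a lattice four-fermion model of the kind described, with two massless staggered fermion flavors ψ and χ and an on-site coupling U, can be written as

\[
S = \sum_{x,y} \bar{\psi}_x D_{xy}\, \psi_y + \sum_{x,y} \bar{\chi}_x D_{xy}\, \chi_y
\;-\; U \sum_x \big(\bar{\psi}_x \psi_x\big)\big(\bar{\chi}_x \chi_x\big),
\]

where D is the massless lattice Dirac operator. At weak coupling U the fermions remain massless, while at strong coupling they acquire a mass through the four-fermion term without any symmetry-breaking bilinear condensate.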
Abstract:
This thesis proves certain results concerning an important question in non-equilibrium quantum statistical mechanics: the derivation of effective evolution equations approximating the dynamics of a system of a large number of bosons initially at equilibrium (the ground state at very low temperatures). The dynamics of such systems are governed by the time-dependent linear many-body Schrödinger equation, from which it is typically difficult to extract useful information because the number of particles is large. We study quantitatively (i.e., with explicit bounds on the error) how a suitable one-particle non-linear Schrödinger equation arises in the mean-field limit as the number of particles N → ∞, and how appropriate corrections to the mean field provide better approximations of the exact dynamics. In the first part of this thesis we consider the evolution of N bosons, where N is large, with two-body interactions of the form N³ᵝv(Nᵝ⋅), 0 ≤ β ≤ 1. The parameter β measures the strength and the range of the interactions. We compare the exact evolution with an approximation which considers the evolution of a mean field coupled with an appropriate description of pair excitations; see [18,19] by Grillakis-Machedon-Margetis. We extend the results for 0 ≤ β < 1/3 in [19, 20] to the case of β < 1/2 and obtain an error bound of the form p(t)/Nᵅ, where α > 0 and p(t) is a polynomial, which implies a specific rate of convergence as N → ∞. In the second part, utilizing estimates of the type discussed in the first part, we compare the exact evolution with the mean-field approximation in the sense of marginals. We prove that the exact evolution is close to the approximate evolution in trace norm for times of order o(1)√N, compared with log(o(1)N) as obtained by Chen-Lee-Schlein [6] for the Hartree evolution. Estimates of a similar type are obtained for stronger interactions as well.
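In standard notation (not quoted verbatim from the thesis), the setting is the N-body dynamics

\[
i\,\partial_t \Psi_N = \Big( \sum_{j=1}^{N} -\Delta_{x_j} \;+\; \frac{1}{N}\sum_{i<j} v_N(x_i - x_j) \Big)\Psi_N,
\qquad v_N(x) = N^{3\beta} v\big(N^{\beta}x\big),
\]

compared with the mean-field (Hartree/NLS-type) equation for the one-particle wave function,

\[
i\,\partial_t \phi = -\Delta \phi + \big(v_N * |\phi|^2\big)\phi,
\]

which for 0 < β < 1 approaches a cubic NLS with coupling constant ∫v; the pair-excitation terms then correct this leading-order description.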
Abstract:
Doctoral thesis, Universidade de Brasília, Instituto de Física, Programa de Pós-Graduação em Física, 2015.
Abstract:
We develop a deterministic mathematical model to describe the way in which polymers bind to DNA by considering the dynamics of the gap distribution that forms when polymers bind to a DNA plasmid. In so doing, we generalise existing theory to account for overlaps and binding cooperativity, whereby the polymer binding rate depends on the size of the overlap. The proposed mean-field models are then solved using a combination of numerical and asymptotic methods. We find that overlaps lead to higher coverage and hence higher charge neutralisation, results which are more in line with recent experimental observations. Our work has applications to gene therapy, where polymers are used to neutralise the negative charges of the DNA phosphate backbone, allowing condensation prior to delivery into the nucleus of an abnormal cell.
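As a purely illustrative toy sketch, not the gap-distribution model developed in the work, cooperative binding can be caricatured by a single mean-field ODE for the fractional coverage θ, with a binding rate that increases with existing coverage; every name and parameter value below is hypothetical.

```python
# Toy mean-field model of cooperative polymer binding (illustrative only; the actual
# model tracks the full distribution of gap sizes along the plasmid, not one coverage ODE).
import numpy as np
from scipy.integrate import solve_ivp

k_on, k_off, alpha = 1.0, 0.05, 2.0   # hypothetical base rates and cooperativity strength
c_poly = 0.5                          # hypothetical free-polymer concentration

def dtheta_dt(t, theta):
    # Binding accelerates with coverage (cooperativity); unbinding is first order.
    binding = k_on * (1.0 + alpha * theta) * c_poly * (1.0 - theta)
    unbinding = k_off * theta
    return binding - unbinding

sol = solve_ivp(dtheta_dt, (0.0, 50.0), [0.0])
print("steady-state coverage ~", float(sol.y[0, -1]))
```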
Abstract:
Nowadays, one of the most important areas of interest in archaeology is the characterization of submerged cultural heritage. The Mediterranean Sea is rich in archaeological findings due to storms, accidents and naval battles since prehistoric times. Chemical analysis of submerged materials is an extremely valuable source of information on the origin and provenance of the wrecks, and also on the raw materials employed during the manufacture of the objects found at these sites. Nevertheless, it is sometimes not possible to extract the archaeological material from the marine environment because of the size of the sample, the legislation or preservation purposes. In these cases, in-situ analysis becomes the only alternative for obtaining information. In spite of this demand, no analytical techniques are available for the in-situ chemical characterization of underwater materials. The versatility of laser-induced breakdown spectroscopy (LIBS) has been successfully tested in oceanography [1]. Advantages such as rapid, in-situ analysis with no sample preparation make LIBS a suitable alternative for field measurements. To further exploit the inherent advantages of the technology, a mobile fiber-based LIBS platform capable of performing remote measurements at ranges of up to 50 meters has been designed for the recognition and identification of artworks in underwater archaeological shipwrecks. The LIBS prototype featured both single-pulse (SP-LIBS) and multi-pulse excitation (MP-LIBS) [2]. The use of multi-pulse excitation allowed an increased laser beam energy (up to 95 mJ) to be transmitted through the optical fiber. This excitation mode results in improved performance of the equipment in terms of an extended range of analysis (to a depth of 50 m) and a broader variety of samples that can be analyzed (e.g., rocks, marble, ceramics and concrete). In the present work, the design and construction considerations of the instrument are reported and its performance is discussed on the basis of the spectral response, the remote irradiance achieved over the range of analysis and its influence on plasma properties, as well as the effect of the laser pulse duration and purge gas on the LIBS signal. Also, to check the reliability and reproducibility of the instrument for field analysis, several robustness tests were performed outside the lab. Finally, the capability of this instrument was successfully demonstrated in an underwater archaeological shipwreck (San Pedro de Alcántara, Malaga).
Abstract:
In this thesis we present a mathematical formulation of the interaction between microorganisms, such as bacteria or amoebae, and chemicals, often produced by the organisms themselves. This interaction is called chemotaxis and leads to cellular aggregation. We derive several models to describe chemotaxis. The first is the pioneering Keller-Segel parabolic-parabolic model, which is derived within two different frameworks: a macroscopic perspective and a microscopic perspective, in which we start from a stochastic differential equation and perform a mean-field approximation. This parabolic model may be generalized by the introduction of a degenerate diffusion coefficient, which depends on the density itself via a power law. Then we derive a model for chemotaxis based on Cattaneo's law of heat propagation with finite speed, which is a hyperbolic model. The last model proposed here is a hydrodynamic model, which takes into account the inertia of the system through a friction force. In the limit of strong friction, the model reduces to the parabolic model, whereas in the limit of weak friction we recover a hyperbolic model. Finally, we analyze the instability condition, which is the condition that leads to aggregation, and we describe the different kinds of aggregates we may obtain: the parabolic models lead to clusters or peaks, whereas the hyperbolic models lead to the formation of network patterns or filaments. Moreover, we discuss the analogy between bacterial colonies and self-gravitating systems by comparing the chemotactic collapse with the gravitational collapse (Jeans instability).
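A standard statement of the parabolic-parabolic Keller-Segel system (our notation; coefficients and scalings may differ from those used in the thesis) is

\[
\partial_t \rho = \nabla\!\cdot\!\big(D\,\nabla\rho - \chi\,\rho\,\nabla c\big),
\qquad
\partial_t c = D_c\,\Delta c + a\,\rho - b\,c,
\]

where ρ is the cell density, c the chemoattractant concentration, χ the chemotactic sensitivity, and a, b the production and degradation rates of the chemical; the degenerate generalization mentioned above replaces the constant D with a power-law, density-dependent diffusivity D(ρ) ∝ ρⁿ.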
Abstract:
This work aims to develop a neurogeometric model of stereo vision, based on the cortical architectures involved in 3D perception and on the neural mechanisms generated by retinal disparities. First, we provide a sub-Riemannian geometry for stereo vision, inspired by the work on the stereo problem by Zucker (2006) and using the sub-Riemannian tools introduced by Citti-Sarti (2006) for monocular vision. We present a mathematical interpretation of the neural mechanisms underlying the behavior of binocular cells, which integrate monocular inputs. The natural compatibility between stereo geometry and neurophysiological models shows that these binocular cells are sensitive to position and orientation. We therefore model their action in the space ℝ³×S² equipped with a sub-Riemannian metric. Integral curves of the sub-Riemannian structure model neural connectivity and can be related to the 3D analog of the psychophysical association fields underlying regular contour formation. Then, we identify 3D perceptual units in the visual scene: they emerge as a consequence of the random cortico-cortical connectivity of binocular cells. Considering a suitable stochastic version of the integral curves, we generate a family of kernels. These kernels represent the probability of interaction between binocular cells, and they are implemented as facilitation patterns to define the evolution in time of the neural population activity at a point. This activity is usually modeled through a mean-field equation: stable stationary solutions lead us to consider the associated eigenvalue problem. We show that three-dimensional perceptual units naturally arise from the discrete version of the eigenvalue problem associated with the integro-differential equation for the population activity.
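The population-activity equation alluded to here is typically of Wilson-Cowan mean-field type; a generic form (our notation, not necessarily the exact equation used in the thesis) is

\[
\tau\,\partial_t a(\xi,t) = -a(\xi,t) + \sigma\!\left(\int \omega(\xi,\xi')\,a(\xi',t)\,d\xi' + h(\xi,t)\right),
\]

where ω(ξ, ξ') is the connectivity kernel (here, the stochastic facilitation pattern between binocular cells in ℝ³×S²), σ is a sigmoidal nonlinearity and h an external input; stationary solutions lead, after linearization and discretization of the kernel, to an eigenvalue problem whose leading eigenvectors single out the perceptual units.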
Abstract:
Nowadays, technological advancements have pushed industry and research towards the automation of various processes. Automation brings a reduction in costs and an improvement in product quality. For this reason, companies are pushing research to investigate new technologies. The agriculture industry has always looked towards automating various processes, from product processing to storage. In recent years, the automation of the harvest and cultivation phases has also become attractive, driven by advances in autonomous driving. Nevertheless, advanced driver-assistance systems (ADAS) are not enough on their own; merging different technologies will be the way to achieve full automation of agricultural processes. For example, sensors that estimate products' physical and chemical properties can be used to evaluate the maturation level of fruit. Therefore, the fusion of these technologies plays a key role in industrial process automation. In this dissertation, both ADAS and sensors for precision agriculture are treated. Several measurement procedures for characterizing commercial 3D LiDARs are proposed and tested to cope with the growing need for comparison tools; axial and transversal errors are investigated. Moreover, a measurement method and setup for evaluating the effect of fog on 3D LiDARs is proposed. Each presented measurement procedure has been tested, and the obtained results highlight the versatility and effectiveness of the proposed approaches. Regarding precision-agriculture sensors, a measurement approach for estimating crop moisture content and density directly in the field is presented. The approach employs a near-infrared (NIR) spectrometer together with Partial Least Squares (PLS) statistical analysis. The approach and the model are described together with a first laboratory prototype used to evaluate the NIRS approach. Finally, a prototype for in-field analysis is realized and tested. The test results are promising, showing that the proposed approach is suitable for moisture content and density estimation.
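A minimal sketch of the NIR-plus-PLS idea described above, assuming a matrix of absorbance spectra X and laboratory reference moisture values y; all data and parameter choices here are synthetic placeholders, not values from the dissertation.

```python
# Minimal PLS-regression sketch for estimating moisture content from NIR spectra.
# Synthetic data stand in for real spectrometer measurements and lab reference values.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 120, 256
X = rng.normal(size=(n_samples, n_wavelengths))            # absorbance spectra (synthetic)
true_coef = np.zeros(n_wavelengths)
true_coef[40:60] = 0.3                                      # a few informative bands
y = X @ true_coef + rng.normal(scale=0.1, size=n_samples)   # "moisture content" reference

pls = PLSRegression(n_components=8)                         # latent variables to be tuned
scores = cross_val_score(pls, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(3))

pls.fit(X, y)
moisture_pred = pls.predict(X[:5])                          # predictions for new spectra
```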
Abstract:
In this work we analyze how patchy distributions of CO2 and brine within sand reservoirs may lead to significant attenuation and velocity dispersion effects, which in turn may have a profound impact on surface seismic data. The ultimate goal of this paper is to contribute to the understanding of these processes within the framework of the seismic monitoring of CO2 sequestration, a key strategy to mitigate global warming. We first carry out a Monte Carlo analysis to study the statistical behavior of the attenuation and velocity dispersion of compressional waves traveling through rocks with properties similar to those of the Utsira Sand, Sleipner field, containing quasi-fractal patchy distributions of CO2 and brine. These results show that the mean patch size and the CO2 saturation play key roles in the observed wave-induced fluid flow effects, which can be remarkably important when CO2 concentrations are low and mean patch sizes are relatively large. To analyze these effects on the corresponding surface seismic data, we perform numerical simulations of wave propagation considering reservoir models and CO2 accumulation patterns similar to those at the CO2 injection site in the Sleipner field. These numerical experiments suggest that wave-induced fluid flow effects may produce changes in the reservoir's seismic response, significantly modifying the main seismic attributes usually employed in the characterization of these environments. Consequently, determining the nature of the fluid distributions, as well as properly modeling the seismic data, are important aspects that should not be ignored in the seismic monitoring of CO2 sequestration.
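A schematic of the Monte Carlo workflow described above, with the rock-physics evaluation left as a hypothetical placeholder (`attenuation_model` below is not the model used in the paper; a White-type patchy-saturation calculation would go in its place).

```python
# Schematic Monte Carlo over patchy CO2/brine distributions: sample mean patch size and
# CO2 saturation, evaluate a (placeholder) attenuation model, and collect statistics.
import numpy as np

rng = np.random.default_rng(42)

def attenuation_model(patch_size_m, co2_saturation, freq_hz=30.0):
    """Hypothetical stand-in for a mesoscopic wave-induced fluid-flow calculation."""
    relax = 1.0 / (1.0 + (patch_size_m * 50.0) ** 2)          # toy dependence only
    return co2_saturation * (1.0 - co2_saturation) * freq_hz * relax

n_realizations = 10_000
patch_sizes = rng.lognormal(mean=np.log(0.2), sigma=0.5, size=n_realizations)  # meters
saturations = rng.uniform(0.05, 0.95, size=n_realizations)

inv_q = np.array([attenuation_model(L, s) for L, s in zip(patch_sizes, saturations)])
print("mean 1/Q proxy:", inv_q.mean(), " 95th percentile:", np.percentile(inv_q, 95))
```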
Abstract:
This report presents the analysis of work zone field data collected on interstate highways in Missouri to determine the mean breakdown and queue-discharge flow rates as measures of capacity. Several days of traffic data collected at a work zone near Pacific, Missouri, with a speed limit of 50 mph were analyzed in both the eastbound and westbound directions. In total, eleven breakdown events were identified using average speed profiles. The traffic flows prior to and after the onset of congestion were studied. Breakdown flow rates ranged from 1194 to 1404 vphpl, with an average of 1295 vphpl, and a mean queue-discharge rate of 1072 vphpl was determined. The mean queue discharge, expressed in passenger cars per hour per lane (pcphpl) as used by the Highway Capacity Manual 2000 (HCM), was found to be 1199, well below the HCM's average capacity of 1600 pcphpl. The reduced capacity found at the site is attributable mainly to the narrower lane width and the high percentage of heavy vehicles, around 25%, in the traffic stream. The difference found between the mean breakdown flow (1295 vphpl) and the queue-discharge flow (1072 vphpl) has been observed widely, and is due to the reduced traffic flow once traffic breaks down and queues start to form. The Missouri DOT currently uses a spreadsheet for work zone planning applications that assumes the same values for the breakdown and mean queue-discharge flow rates. This study proposes that breakdown flow rates should be used to forecast the onset of congestion, whereas mean queue-discharge flow rates should be used to estimate delays under congested conditions. Hence, it is recommended that the spreadsheet be refined accordingly.
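For context, the vphpl-to-pcphpl conversion behind these figures follows the standard HCM heavy-vehicle adjustment; taking the reported 25% heavy vehicles and a typical passenger-car equivalent of E_T ≈ 1.5 (our assumption, not a value stated in the report),

\[
f_{HV} = \frac{1}{1 + P_T\,(E_T - 1)} = \frac{1}{1 + 0.25\,(1.5 - 1)} \approx 0.89,
\qquad
\frac{1072\ \text{vphpl}}{0.89} \approx 1200\ \text{pcphpl},
\]

which approximately reproduces the reported 1199 pcphpl queue-discharge value.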
Abstract:
The large pine weevil, Hylobius abietis, is a serious pest of reforestation in northern Europe. However, weevils developing in the stumps of felled trees can be killed by entomopathogenic nematodes applied to the soil around the stumps, and this method of control has been used at an operational level in the UK and Ireland. We investigated the factors affecting the efficacy of entomopathogenic nematodes in the control of the large pine weevil across 10 years of field experiments, by means of a meta-analysis of published studies and previously unpublished data. We investigated two species with different foraging strategies: the ‘ambusher’ Steinernema carpocapsae, the species most often used at an operational level, and the ‘cruiser’ Heterorhabditis downesi. Efficacy was measured both by the percentage reduction in the number of adults emerging relative to untreated controls and by the percentage parasitism of developing weevils in the stump. Both measures were significantly higher with H. downesi than with S. carpocapsae. General linear models were constructed for each nematode species separately, using substrate type (peat versus mineral soil) and tree species (pine versus spruce) as fixed factors, weevil abundance (from the mean of untreated stumps) as a covariate, and percentage reduction or percentage parasitism as the response variable. For both nematode species, the most significant and parsimonious models showed that substrate type was usually, though not always, the most significant variable, whether replicates were at the site or stump level, and that peaty soils significantly promote the efficacy of both species. Efficacy, in terms of percentage parasitism, was not density dependent.
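A minimal sketch of the kind of general linear model described above, using a formula interface; the column names and data values are hypothetical stand-ins for the site- or stump-level field data.

```python
# General linear model: percentage reduction ~ substrate type + tree species + weevil abundance.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "reduction": [72, 55, 81, 40, 66, 90, 35, 60],        # % reduction vs. untreated controls
    "substrate": ["peat", "mineral", "peat", "mineral",
                  "peat", "peat", "mineral", "mineral"],
    "tree":      ["pine", "spruce", "spruce", "pine",
                  "pine", "spruce", "spruce", "pine"],
    "abundance": [14, 22, 9, 30, 18, 11, 27, 20],          # mean weevils per untreated stump
})

model = smf.ols("reduction ~ C(substrate) + C(tree) + abundance", data=data).fit()
print(model.summary())                                     # factor effects and covariate slope
```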
Phylogenetic and virulence analysis of tick-borne encephalitis virus field isolates from Switzerland
Abstract:
Tick-borne encephalitis (TBE) is an endemic disease in Switzerland, with about 110-120 reported human cases each year. Endemic areas are found throughout the country. However, the viruses circulating in Switzerland have not been characterized so far. In this study, the complete envelope (E) protein sequences and phylogenetic classification of 72 TBE viruses found in Ixodes ricinus ticks sampled at 39 foci throughout Switzerland were analyzed. All isolates belonged to the European subtype and were highly related (mean pairwise sequence identity of 97.8% at the nucleotide and 99.6% at the amino acid level of the E protein). Sixty-four isolates were characterized in vitro with respect to their plaque phenotype. More than half (57.8%) of the isolates produced a mixture of plaques of different sizes, reflecting a heterogeneous population of virus variants. Isolates consistently forming plaques of small size were associated with recently detected endemic foci with no or only sporadic reports of clinical cases. All six virus isolates investigated in an in vivo mouse model were highly neurovirulent (100% mortality) but exhibited a relatively low level of neuroinvasiveness, with mouse survival rates ranging from 50% to 100%. Therefore, TBE viruses circulating in Switzerland belong to the European subtype and are closely related. In vitro and in vivo surrogates suggest a high proportion of isolates with a relatively low level of virulence, which is in agreement with a hypothesized high proportion of subclinical or mild TBE infections.
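A small sketch of how a mean pairwise sequence identity such as the 97.8% quoted above can be computed from aligned sequences; the toy sequences below stand in for the 72 aligned E-gene sequences.

```python
# Mean pairwise identity over a set of aligned (equal-length) sequences.
from itertools import combinations

aligned = [
    "ATGGCTACCA",   # toy aligned nucleotide sequences
    "ATGGCGACCA",
    "ATGACTACCA",
]

def pairwise_identity(a: str, b: str) -> float:
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

identities = [pairwise_identity(a, b) for a, b in combinations(aligned, 2)]
print(f"mean pairwise identity: {100 * sum(identities) / len(identities):.1f}%")
```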
Abstract:
Exposimeters are increasingly applied in bioelectromagnetic research to determine personal radiofrequency electromagnetic field (RF-EMF) exposure. The main advantages of exposimeter measurements are their convenient handling for study participants and the large amount of personal exposure data, which can be obtained for several RF-EMF sources. However, the large proportion of measurements below the detection limit is a challenge for data analysis. With the robust ROS (regression on order statistics) method, summary statistics can be calculated by fitting an assumed distribution to the observed data. We used a preliminary sample of 109 weekly exposimeter measurements from the QUALIFEX study to compare summary statistics computed by robust ROS with a naïve approach, where values below the detection limit were replaced by the value of the detection limit. For the total RF-EMF exposure, differences between the naïve approach and the robust ROS were moderate for the 90th percentile and the arithmetic mean. However, exposure contributions from minor RF-EMF sources were considerably overestimated with the naïve approach. This results in an underestimation of the exposure range in the population, which may bias the evaluation of potential exposure-response associations. We conclude from our analyses that summary statistics of exposimeter data calculated by robust ROS are more reliable and more informative than estimates based on a naïve approach. Nevertheless, estimates of source-specific medians or even lower percentiles depend on the assumed data distribution and should be considered with caution.
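A simplified sketch of the robust ROS idea for left-censored exposimeter data, assuming a single detection limit and a lognormal distribution; this is a simplification of the published procedure, and all data values are toy examples.

```python
# Simplified robust ROS for left-censored data with a single detection limit (DL):
# fit a lognormal to the detected values via a normal-quantile regression, impute the
# censored observations from the fit, then compute summary statistics on the combined data.
import numpy as np
from scipy import stats

def robust_ros(values, detection_limit):
    values = np.asarray(values, dtype=float)
    detected = np.sort(values[values >= detection_limit])
    n_cens = int(np.sum(values < detection_limit))
    n = len(values)

    # Weibull-type plotting positions for the full ordered sample; with one DL the
    # censored observations occupy the lowest ranks.
    pp = np.arange(1, n + 1) / (n + 1.0)
    z_detected = stats.norm.ppf(pp[n_cens:])

    slope, intercept, *_ = stats.linregress(z_detected, np.log(detected))
    imputed = np.exp(intercept + slope * stats.norm.ppf(pp[:n_cens]))

    combined = np.concatenate([imputed, detected])
    return combined.mean(), np.percentile(combined, 90)

exposure = [0.02, 0.02, 0.02, 0.05, 0.08, 0.12, 0.30, 0.45]   # toy field values (e.g., V/m)
mean_est, p90_est = robust_ros(exposure, detection_limit=0.05)
print(f"arithmetic mean ~ {mean_est:.3f}, 90th percentile ~ {p90_est:.3f}")
```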
Abstract:
Visual field assessment is a core component of glaucoma diagnosis and monitoring, and the Standard Automated Perimetry (SAP) test is still considered the gold standard of visual field assessment. Although SAP is a subjective assessment with many pitfalls, it is widely used in the diagnosis of visual field loss in glaucoma. The multifocal visual evoked potential (mfVEP) is a more recently introduced method for objective visual field assessment. Several analysis protocols have been tested to identify early visual field loss in glaucoma patients using the mfVEP technique; some were successful in detecting field defects comparable to standard SAP assessment, while others were less informative and required further adjustment and research. In this study, we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. OBJECTIVES: The purpose of this study is to examine the effectiveness of a new analysis method for the multifocal visual evoked potential (mfVEP) when used for objective assessment of the visual field in glaucoma patients, compared with the gold-standard technique. METHODS: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes) and glaucoma-suspect patients (38 eyes). All subjects underwent two standard Humphrey visual field (HFA) 24-2 tests and a single mfVEP test in one session. Analysis of the mfVEP results was done using the new analysis protocol, the Hemifield Sector Analysis (HSA) protocol. Analysis of the HFA was done using the standard grading system. RESULTS: Analysis of the mfVEP results showed a statistically significant difference between the three groups in the mean signal-to-noise ratio (SNR) (ANOVA, p<0.001, 95% CI). The differences between the superior and inferior hemifields were statistically significant in the glaucoma patient group for 11/11 sectors (t-test, p<0.001), partially significant in the glaucoma-suspect group (5/11 sectors; t-test, p<0.01), and showed no statistical difference for most sectors in the normal group (only 1/11 sectors significant; t-test, p<0.9). The sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86%, respectively, while for glaucoma suspects they were 89% and 79%. DISCUSSION: The results showed that the new analysis protocol was able to confirm existing field defects detected by standard HFA and to differentiate between the three study groups, with a clear distinction between normal subjects and patients with suspected glaucoma; the distinction between normal subjects and glaucoma patients was especially clear and significant. CONCLUSION: The new HSA protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma-suspect patients. Using this protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. The sensitivity and specificity of the mfVEP test were very promising and correlated with other anatomical changes associated with glaucomatous field loss.
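A minimal sketch of the kind of sector-wise hemifield comparison described above: for each of the 11 sectors, SNR values from the superior hemifield are compared with the mirrored inferior sector by a t-test. The SNR data are synthetic, and the use of a paired test within eyes is our assumption rather than a detail stated in the abstract.

```python
# Sector-wise superior-vs-inferior hemifield comparison of mfVEP signal-to-noise ratios;
# synthetic SNR values stand in for recorded traces.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_eyes, n_sectors = 36, 11

superior = rng.normal(loc=2.0, scale=0.5, size=(n_eyes, n_sectors))   # SNR per sector
inferior = rng.normal(loc=1.6, scale=0.5, size=(n_eyes, n_sectors))   # mirrored sectors

for s in range(n_sectors):
    t, p = stats.ttest_rel(superior[:, s], inferior[:, s])            # paired across eyes
    print(f"sector {s + 1:2d}: t = {t:5.2f}, p = {p:.4f}")
```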