939 results for Statistical Language Model
Abstract:
We investigate the phase transition in a strongly disordered short-range three-spin interaction model characterized by the absence of time-reversal symmetry in the Hamiltonian. In the mean-field limit the model is well described by the Adam-Gibbs-DiMarzio scenario for the glass transition; however, in the short-range case this picture turns out to be modified. The model presents a finite temperature continuous phase transition characterized by a divergent spin-glass susceptibility and a negative specific-heat exponent. We expect the nature of the transition in this three-spin model to be the same as the transition in the Edwards-Anderson model in a magnetic field, with the advantage that the strong crossover effects present in the latter case are absent.
Abstract:
A general formulation of boundary conditions for semiconductor-metal contacts follows from a phenomenological procedure sketched here. The resulting boundary conditions, which incorporate only physically well-defined parameters, are used to study the classical unipolar drift-diffusion model for the Gunn effect. The analysis of its stationary solutions reveals the presence of bistability and hysteresis for a certain range of contact parameters. Several types of Gunn effect are predicted to occur in the model, when no stable stationary solution exists, depending on the value of the parameters of the injecting contact appearing in the boundary condition. In this way, the critical role played by contacts in the Gunn effect is clearly established.
Abstract:
Critical exponents of the infinitely slowly driven Zhang model of self-organized criticality are computed for d=2 and 3, with particular emphasis devoted to the various roughening exponents. Besides confirming recent estimates of some exponents, new quantities are monitored, and their critical exponents computed. Among other results, it is shown that the three-dimensional exponents do not coincide with the Bak-Tang-Wiesenfeld [Phys. Rev. Lett. 59, 381 (1987); Phys. Rev. A 38, 364 (1988)] (Abelian) model, and that the dynamical exponent as computed from the correlation length and from the roughness of the energy profile do not necessarily coincide, as is usually implicitly assumed. An explanation for this is provided. The possibility of comparing these results with those obtained from renormalization group arguments is also briefly addressed.
Abstract:
We study the static properties of the Little model with asymmetric couplings. We show that the thermodynamics of this model coincides with that of the Sherrington-Kirkpatrick model, and we compute the main finite-size corrections to the difference of the free energy between these two models and to some clarifying order parameters. Our results agree with numerical simulations. Numerical results are presented for the symmetric Little model, which show that the same conclusions are also valid in this case.
Abstract:
Scroll waves in excitable media, described by the Barkley model, are studied. In the parameter region of weak excitability, negative tension of wave filaments is found. It leads to expansion of scroll rings and instability of wave filaments. A circular filament tends to stretch, bend, loop, and produce an expanding tangle that fills up the volume. The filament does not undergo fragmentation before it touches the boundaries. Statistical properties of such Winfree turbulence of scroll waves are numerically investigated.
Abstract:
Background: MLPA is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results: Through simulation studies we have shown that our proposed method outperforms two existing methods based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions vary in copy number in individuals suffering from disorders such as Prader-Willi, DiGeorge or autism, showing the best performance. Conclusion: Using the proposed mixed model, we are able to determine thresholds for deciding whether a region is altered. These thresholds are specific to each individual and incorporate experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed.
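The per-sample thresholding idea in the abstract above can be illustrated with a minimal sketch. This is not the paper's mixed-model estimator: the robust spread estimate (MAD), the log2 transform, and the tolerance factor `k` are illustrative assumptions standing in for the sample-specific tolerance intervals it describes.

```python
import numpy as np

def flag_altered_probes(test_ratios, ref_ratios, k=3.0):
    """Sketch of sample-specific thresholding for MLPA-like data.

    test_ratios : probe peak ratios for one test sample
    ref_ratios  : matrix (n_reference_samples x n_probes) of reference ratios
    k           : tolerance factor controlling interval width (illustrative)

    A probe is called altered when its log-ratio against the reference mean
    falls outside an interval scaled by this sample's own residual spread,
    mimicking the per-sample tolerance intervals described above.
    """
    test = np.log2(np.asarray(test_ratios, dtype=float))
    ref_mean = np.log2(np.asarray(ref_ratios, dtype=float)).mean(axis=0)
    residuals = test - ref_mean
    # Robust per-sample spread: median absolute deviation scaled to sigma.
    center = np.median(residuals)
    sigma = 1.4826 * np.median(np.abs(residuals - center))
    centered = residuals - center
    # Probes whose centered log-ratio exceeds this sample's tolerance band.
    return np.where((centered < -k * sigma) | (centered > k * sigma))[0]
```

A heterozygous deletion (ratio near 0.5) in an otherwise normal sample stands out against the sample's own noise level rather than a fixed global cutoff.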
Abstract:
PURPOSE: In the radiopharmaceutical therapy approach to the fight against cancer, in particular when it comes to translating laboratory results to the clinical setting, modeling has served as an invaluable tool for guidance and for understanding the processes operating at the cellular level and how these relate to macroscopic observables. Tumor control probability (TCP) is the dosimetric end point quantity of choice which relates to experimental and clinical data: it requires knowledge of individual cellular absorbed doses since it depends on the assessment of the treatment's ability to kill each and every cell. Macroscopic tumors, seen in both clinical and experimental studies, contain too many cells to be modeled individually in Monte Carlo simulation; yet, in particular for low ratios of decays to cells, a cell-based model that does not smooth away statistical considerations associated with low activity is a necessity. The authors present here an adaptation of the simple sphere-based model from which cellular level dosimetry for macroscopic tumors and their end point quantities, such as TCP, may be extrapolated more reliably. METHODS: Ten homogeneous spheres representing tumors of different sizes were constructed in GEANT4. The radionuclide 131I was randomly allowed to decay for each model size and for seven different ratios of number of decays to number of cells, N_r: 1000, 500, 200, 100, 50, 20, and 10 decays per cell. The deposited energy was collected in radial bins and divided by the bin mass to obtain the average bin absorbed dose. To simulate a cellular model, the number of cells present in each bin was calculated and an absorbed dose attributed to each cell equal to the bin average absorbed dose with a randomly determined adjustment based on a Gaussian probability distribution with a width equal to the statistical uncertainty consistent with the ratio of decays to cells, i.e., equal to N_r^(-1/2).
From dose volume histograms the surviving fraction of cells, equivalent uniform dose (EUD), and TCP for the different scenarios were calculated. Comparably sized spherical models containing individual spherical cells (15 μm diameter) in hexagonal lattices were constructed, and Monte Carlo simulations were executed for all the same previous scenarios. The dosimetric quantities were calculated and compared to the adjusted simple sphere model results. The model was then applied to the Bortezomib-induced enzyme-targeted radiotherapy (BETR) strategy of targeting Epstein-Barr virus (EBV)-expressing cancers. RESULTS: The TCP values were comparable to within 2% between the adjusted simple sphere and full cellular models. Additionally, models were generated for a nonuniform distribution of activity, and results were compared between the adjusted spherical and cellular models with similar comparability. The TCP values from the experimental macroscopic tumor results were consistent with the experimental observations for BETR-treated 1 g EBV-expressing lymphoma tumors in mice. CONCLUSIONS: The adjusted spherical model presented here provides more accurate TCP values than simple spheres, on par with full cellular Monte Carlo simulations while maintaining the simplicity of the simple sphere model. This model provides a basis for complementing and understanding laboratory and clinical results pertaining to radiopharmaceutical therapy.
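The adjustment step in the abstract above (per-cell dose = bin average plus a Gaussian spread of relative width N_r^(-1/2)) can be sketched as follows. The exponential single-hit survival model and the mean lethal dose `d0` are hypothetical simplifications for illustration; the Poisson form of TCP is a standard choice, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def tcp_adjusted_sphere(bin_doses, cells_per_bin, n_r, d0=1.0):
    """Sketch of the adjusted simple-sphere TCP calculation.

    bin_doses     : average absorbed dose per radial bin (Gy)
    cells_per_bin : number of cells attributed to each radial bin
    n_r           : ratio of decays to cells; relative dose spread is n_r**-0.5
    d0            : mean lethal dose of a single-hit survival model
                    (hypothetical value, not from the paper)
    """
    expected_survivors = 0.0
    for d_bin, n_cells in zip(bin_doses, cells_per_bin):
        # Per-cell dose: bin average plus Gaussian adjustment of
        # relative width N_r^(-1/2), clipped at zero.
        cell_doses = d_bin * (1.0 + rng.normal(0.0, n_r**-0.5, size=n_cells))
        cell_doses = np.clip(cell_doses, 0.0, None)
        # Expected surviving cells under exponential (single-hit) survival.
        expected_survivors += np.exp(-cell_doses / d0).sum()
    # Poisson TCP: probability that no clonogenic cell survives.
    return float(np.exp(-expected_survivors))
```

With a high mean dose the TCP approaches 1; with a sublethal mean dose it collapses toward 0, which is the statistical behavior a plain bin-averaged model would smooth away at low N_r.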
Abstract:
This article introduces the Dyadic Coping Inventory (DCI; Bodenmann, 2008) and aims (1) to investigate the reliability and aspects of the validity of the Italian and French versions of the DCI, and (2) to replicate its factor structure and reliabilities using a new Swiss German sample. Based on 216 German-, 378 Italian-, and 198 French-speaking participants, the factor structure of the original German inventory could be replicated using principal components analysis in all three groups after excluding two items in the Italian and French versions. The latter were shown to be as reliable as the German version, with the exception of the low reliabilities of negative dyadic coping in the French group. Confirmatory factor analyses provided additional support for delegated dyadic coping and evaluation of dyadic coping. Intercorrelations among scales were similar across all three language groups, with a few exceptions. Previous findings could be replicated in all three groups, showing that aspects of dyadic coping were more strongly related to marital quality than to dyadic communication. The use of the dyadic coping scales in the actor-partner interdependence model, the common fate model, and the mutual influence model is discussed.
Abstract:
In this work, a previously developed, statistics-based damage-detection approach was validated for its ability to autonomously detect damage in bridges. The approach uses statistical differences between the actual and predicted behavior of the bridge under a subset of ambient truck loads. The predicted behavior is derived from a statistics-based model trained with field data from the undamaged bridge (not a finite element model). The differences between actual and predicted responses, called residuals, are then used to construct control charts, which compare undamaged and damaged structure data. Validation of the damage-detection approach was achieved by using sacrificial specimens that were mounted to the bridge, exposed to ambient traffic loads, and designed to simulate actual damage-sensitive locations. Different damage types and levels were introduced to the sacrificial specimens to study the sensitivity and applicability of the approach. The damage-detection algorithm was able to identify damage, but it also had a high false-positive rate. An evaluation of the sub-components of the damage-detection methodology was completed for the purpose of improving the approach. Several of the underlying assumptions within the algorithm were being violated, which was the source of the false positives. Furthermore, the lack of an automatic evaluation process was thought to be a potential impediment to widespread use. Recommendations for the improvement of the methodology were developed and preliminarily evaluated. These recommendations are believed to improve the efficacy of the damage-detection approach.
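The residual-to-control-chart step described above can be sketched with a conventional Shewhart-style chart. The 3-sigma limit and the two-function split are standard control-charting conventions, not details taken from this study.

```python
import numpy as np

def control_chart_limits(train_residuals, k=3.0):
    """Sketch of the residual-based control-chart step described above.

    train_residuals : residuals (measured minus predicted response) from
                      the undamaged, trained state of the structure
    k               : control-limit width in standard deviations
                      (3-sigma is the conventional Shewhart choice)
    """
    mu = float(np.mean(train_residuals))
    sigma = float(np.std(train_residuals, ddof=1))
    return mu - k * sigma, mu + k * sigma

def out_of_control(residuals, limits):
    """Indices of monitoring residuals falling outside the control limits."""
    lo, hi = limits
    r = np.asarray(residuals, dtype=float)
    return np.where((r < lo) | (r > hi))[0]
```

Residuals from the undamaged state set the limits; later monitoring residuals that escape the band are flagged as potential damage, and any violated distributional assumption (e.g. non-stationary traffic loading) shows up directly as the false-positive rate the abstract discusses.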
Abstract:
Surveys of economic climate collect opinions of managers about the short-term future evolution of their business. Interviews are carried out on a regular basis, and responses measure optimistic, neutral or pessimistic views about the economic perspectives. We propose a method to evaluate the sampling error of the average opinion derived from a particular type of survey data. Our variance estimate is useful to interpret historical trends and to decide whether changes in the index from one period to another are due to a structural change or whether ups and downs can be attributed to sampling randomness. An illustration using real data from a survey of business managers' opinions is discussed.
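The "average opinion" in such surveys is typically a balance indicator: the share of optimistic answers minus the share of pessimistic ones. A minimal sketch of a sampling-error estimate, treating the three answer categories as a multinomial sample, is below; this is one textbook estimator, and the paper's own variance estimate may differ.

```python
import math

def balance_and_se(n_up, n_neutral, n_down):
    """Sketch of a sampling-error estimate for a survey balance indicator.

    The balance is the share of optimistic answers minus the share of
    pessimistic answers. Under multinomial sampling,
        Var(B) = (p_up + p_down - (p_up - p_down)**2) / n,
    giving a standard error against which period-to-period changes in the
    index can be judged (illustrative estimator, not the paper's).
    """
    n = n_up + n_neutral + n_down
    p_up, p_down = n_up / n, n_down / n
    balance = p_up - p_down
    var = (p_up + p_down - balance**2) / n
    return balance, math.sqrt(var)
```

A month-to-month change in the balance smaller than roughly two standard errors would be consistent with sampling randomness rather than a structural shift, which is exactly the distinction the abstract's variance estimate is meant to support.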