934 results for static computer simulation
Abstract:
Self-categorization theory is a social psychology theory dealing with the relation between the individual and the group. It explains group behaviour through the conception of self and others as members of social categories, and through the attribution of the categories' prototypical characteristics to individuals. Hence, it is a theory of the individual that is intended to explain collective phenomena. Situations involving a large number of non-trivially interacting individuals typically generate complex collective behaviours, which are difficult to anticipate on the basis of individual behaviour. Computer simulation of such systems is a reliable way of systematically exploring the dynamics of the collective behaviour as a function of individual specifications.
In this thesis, we present a formal model of the part of self-categorization theory called the metacontrast principle. Given the distribution of a set of individuals on one or several comparison dimensions, the model generates categories and their associated prototypes. We show that the model behaves coherently with respect to the theory and is able to replicate experimental data concerning various group phenomena, for example polarization. Moreover, it allows the predictions of the underlying theory to be described systematically, especially in novel situations. At the collective level, several dynamics can be observed, among them convergence towards consensus, towards fragmentation, or towards the emergence of extreme attitudes. We also study the effect of the social network on the dynamics and show that, except for the convergence speed, which increases as the mean distances on the network decrease, the observed convergence types depend little on the chosen network. We further note that individuals located at the border of groups (whether in the social network or spatially) have a decisive influence on the outcome of the dynamics. In addition, the model can be used as an automatic classification algorithm. It identifies prototypes around which groups are built. Prototypes are positioned so as to accentuate the groups' typical characteristics and are not necessarily central. Finally, if we consider the set of pixels of an image as individuals in a three-dimensional color space, the model provides a filter that attenuates noise, helps detect objects, and simulates perception biases such as chromatic induction.
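To make the principle concrete, here is a minimal sketch in Python of the one-dimensional metacontrast ratio as it is usually stated in the self-categorization literature: an individual is prototypical of a category to the extent that their mean distance to non-members exceeds their mean distance to fellow members. The thesis's formal model is richer, so this is illustrative only.

```python
import numpy as np

def metacontrast_ratio(positions, members, i):
    """Metacontrast ratio of individual i for a candidate category:
    mean distance to non-members divided by mean distance to fellow
    members.  Larger values mean i is more prototypical."""
    inside = [j for j in members if j != i]
    outside = [j for j in range(len(positions)) if j not in members]
    d_out = np.mean(np.abs(positions[i] - positions[outside]))
    d_in = np.mean(np.abs(positions[i] - positions[inside]))
    return d_out / d_in

# Seven individuals on one comparison dimension (e.g. an attitude scale).
pos = np.array([0.10, 0.20, 0.25, 0.80, 0.85, 0.90, 0.95])
group = [3, 4, 5, 6]
ratios = {i: metacontrast_ratio(pos, group, i) for i in group}
print(max(ratios, key=ratios.get))  # prototype: highest-ratio member
```

On this toy data the highest-ratio member sits away from the group's centre, on the side opposite the outgroup, which is exactly the accentuation of typical characteristics mentioned above.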
Abstract:
Disturbances affect metapopulations directly through reductions in population size and indirectly through habitat modification. We consider how metapopulation persistence is affected by different disturbance regimes and by the way in which disturbances spread, when metapopulations are compact or elongated, using a stochastic, spatially explicit model that includes both metapopulation and habitat dynamics. We find that the risk of population extinction is larger for spatially aggregated disturbances than for spatially random disturbances. By changing the spatial configuration of the patches in the system, and thereby the proportions of edge and interior patches, we demonstrate that the probability of metapopulation extinction is smaller when the metapopulation is more compact. Both of these results become more pronounced when colonization connectivity decreases. Our results have important management implications, as edge patches, which are invariably considered less important, may play an important role as disturbance refugia.
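The model itself is not specified in the abstract, so the following is only a toy Python illustration of the contrast it describes: a line of habitat patches with neighbour-driven colonization, background extinction, and disturbances that strike either a random patch or a neighbour of the previously disturbed patch (spatially aggregated). All rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def goes_extinct(n=10, steps=200, c=0.2, e=0.05, d=0.1, aggregated=True):
    """Toy patch-occupancy dynamics: colonization from occupied
    neighbours, background extinction, and one disturbance process."""
    occupied = np.ones(n, dtype=bool)
    last = int(rng.integers(n))
    for _ in range(steps):
        for i in range(n):
            neighbours = occupied[max(0, i - 1):i + 2].sum() - occupied[i]
            if not occupied[i] and rng.random() < c * neighbours:
                occupied[i] = True
            elif occupied[i] and rng.random() < e:
                occupied[i] = False
        if rng.random() < d:  # a disturbance wipes out one patch
            last = ((last + rng.choice((-1, 1))) % n if aggregated
                    else int(rng.integers(n)))
            occupied[last] = False
        if not occupied.any():
            return True
    return False

for mode in (True, False):
    rate = np.mean([goes_extinct(aggregated=mode) for _ in range(200)])
    print("aggregated" if mode else "random", "extinction rate:", rate)
```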
Abstract:
PURPOSE: To develop a breath-hold method for black-blood viability imaging of the heart that may facilitate identifying the endocardial border. MATERIALS AND METHODS: Three stimulated-echo acquisition mode (STEAM) images were obtained almost simultaneously during the same acquisition using three different demodulation values. Two of the three images were used to construct a black-blood image of the heart. The third image was a T1-weighted viability image that enabled detection of hyperintense infarcted myocardium after contrast agent administration. The three STEAM images were combined into one composite black-blood viability image of the heart. The composite STEAM images were compared to conventional inversion-recovery (IR) delayed hyperenhanced (DHE) images in nine human subjects studied on a 3T MRI scanner. RESULTS: STEAM images showed black-blood characteristics and a significant improvement in the blood-infarct signal-difference-to-noise ratio (SDNR) compared to the IR-DHE images (34 ± 4.1 vs. 10 ± 2.9, mean ± standard deviation (SD), P < 0.002). There was sufficient myocardium-infarct SDNR in the STEAM images to accurately delineate infarcted regions. The extracted infarcts demonstrated good agreement with the IR-DHE images. CONCLUSION: The STEAM black-blood property allows better delineation of the blood-infarct border, which would enhance fast and accurate measurement of infarct size.
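The SDNR figure of merit quoted in the results has a standard definition, sketched below; the region-of-interest estimation shown is a common convention, not necessarily the paper's exact procedure.

```python
import numpy as np

def sdnr(image, roi_blood, roi_infarct, roi_noise):
    """Signal-difference-to-noise ratio between blood and infarct:
    |mean(blood) - mean(infarct)| / SD(background noise).
    The ROIs are boolean masks over the image."""
    diff = abs(image[roi_blood].mean() - image[roi_infarct].mean())
    return diff / image[roi_noise].std()
```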
Abstract:
Attempts to use a stimulated echo acquisition mode (STEAM) in cardiac imaging are impeded by imaging artifacts that result in signal attenuation and nulling of the cardiac tissue. In this work, we present a method to reduce this artifact by acquiring two sets of stimulated echo images with two different demodulations. The resulting two images are combined to recover the signal loss and weighted to compensate for possible deformation-dependent intensity variation. Numerical simulations were used to validate the theory. Also, the proposed correction method was applied to in vivo imaging of normal volunteers (n = 6) and animal models with induced infarction (n = 3). The results show the ability of the method to recover the lost myocardial signal and generate artifact-free black-blood cardiac images.
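The abstract does not give the combination formula. One plausible reading, shown here purely as a sketch, fuses the two differently demodulated magnitude images by root-sum-of-squares, so that regions nulled in one acquisition are recovered from the other; the deformation-dependent intensity compensation is left as a placeholder.

```python
import numpy as np

def combine_steam(img_a, img_b, weight=1.0):
    """Fuse two STEAM images acquired with different demodulations.
    Root-sum-of-squares fills regions nulled in either single image;
    `weight` stands in for the paper's deformation-dependent intensity
    compensation, whose exact form the abstract does not state."""
    return weight * np.sqrt(np.abs(img_a) ** 2 + np.abs(img_b) ** 2)
```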
Abstract:
As a result of sex chromosome differentiation from ancestral autosomes, male mammalian cells only contain one X chromosome. It has long been hypothesized that X-linked gene expression levels have become doubled in males to restore the original transcriptional output, and that the resulting X overexpression in females then drove the evolution of X inactivation (XCI). However, this model has never been directly tested and patterns and mechanisms of dosage compensation across different mammals and birds generally remain little understood. Here we trace the evolution of dosage compensation using extensive transcriptome data from males and females representing all major mammalian lineages and birds. Our analyses suggest that the X has become globally upregulated in marsupials, whereas we do not detect a global upregulation of this chromosome in placental mammals. However, we find that a subset of autosomal genes interacting with X-linked genes have become downregulated in placentals upon the emergence of sex chromosomes. Thus, different driving forces may underlie the evolution of XCI and the highly efficient equilibration of X expression levels between the sexes observed for both of these lineages. In the egg-laying monotremes and birds, which have partially homologous sex chromosome systems, partial upregulation of the X (Z in birds) evolved but is largely restricted to the heterogametic sex, which provides an explanation for the partially sex-biased X (Z) expression and lack of global inactivation mechanisms in these lineages. Our findings suggest that dosage reductions imposed by sex chromosome differentiation events in amniotes were resolved in strikingly different ways.
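Analyses of this kind typically reduce, at their core, to comparing X-linked with autosomal expression levels per sample. A minimal sketch of that ratio follows; the paper's actual pipeline (expression filtering, cross-species normalization, lineage-specific chromosome assignments) is far more involved.

```python
import numpy as np

def x_to_autosome_ratio(expression, chromosome):
    """Median expression of X-linked genes over median expression of
    autosomal genes in one sample.  A ratio near 1 is consistent with
    global X upregulation; near 0.5, with no compensation (one active
    X against two copies of each autosome)."""
    expression = np.asarray(expression, dtype=float)
    chromosome = np.asarray(chromosome)
    return (np.median(expression[chromosome == "X"])
            / np.median(expression[chromosome != "X"]))
```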
Abstract:
Experimental observations of self-organized behavior arising out of noise are also described, and details on the numerical algorithms needed in the computer simulation of these problems are given.
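The abstract does not name the algorithms. A standard workhorse for simulating noise-driven dynamics of this kind is Euler-Maruyama integration of a Langevin equation, sketched here on a hypothetical bistable system in which noise induces transitions between wells.

```python
import numpy as np

def euler_maruyama(drift, sigma, x0, dt, steps, rng):
    """Euler-Maruyama integration of dx = drift(x) dt + sigma dW,
    a standard scheme for simulating noise-driven dynamics."""
    x = np.empty(steps + 1)
    x[0] = x0
    dw = rng.normal(0.0, np.sqrt(dt), size=steps)  # Wiener increments
    for k in range(steps):
        x[k + 1] = x[k] + drift(x[k]) * dt + sigma * dw[k]
    return x

rng = np.random.default_rng(1)
# Hypothetical double-well drift: noise drives hops between the wells.
path = euler_maruyama(lambda x: x - x**3, 0.5, 0.0, 1e-2, 10_000, rng)
```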
Abstract:
An analytical model of an amorphous silicon p-i-n solar cell is presented to describe its photovoltaic behavior under short-circuit conditions. It has been developed from the analysis of numerical simulation results. These results reproduce the experimental illumination dependence of the short-circuit resistance, which is the reciprocal slope of the I(V) curve at the short-circuit point. The recombination rate profiles show that recombination in the regions of charged defects near the p-i and i-n interfaces should not be overlooked. Based on the interpretation of the numerical solutions, we deduce analytical expressions for the recombination current and short-circuit resistance. These expressions are given as a function of an effective μτ product, which depends on the intensity of illumination. We also study the effect of surface recombination with simple expressions that describe its influence on current loss and short-circuit resistance.
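The short-circuit resistance defined above can be estimated directly from sampled I(V) data; a small sketch with toy numbers:

```python
import numpy as np

def short_circuit_resistance(volts, amps):
    """Reciprocal slope of the I(V) curve at the short-circuit point
    (V = 0), the definition used in the abstract."""
    slope = np.gradient(amps, volts)       # dI/dV at each sample
    k = np.argmin(np.abs(volts))           # sample closest to V = 0
    return 1.0 / slope[k]

# Toy I(V) samples around short circuit: slope 0.08 S -> Rsc = 12.5 ohm.
v = np.linspace(-0.05, 0.05, 11)
print(short_circuit_resistance(v, -0.012 + 0.08 * v))
```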
Abstract:
Computer simulations of a colloidal particle suspended in a fluid confined by rigid walls show that, at long times, the velocity correlation function decays with a negative algebraic tail. The exponent depends on the confining geometry, rather than the spatial dimensionality. We can account for the tail by using a simple mode-coupling theory which exploits the fact that the sound wave generated by a moving particle becomes diffusive.
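The quantity at issue is the velocity autocorrelation function; a direct time-origin-averaged estimator from a simulated trajectory might look like this (illustrative, not the paper's code):

```python
import numpy as np

def velocity_autocorrelation(v, max_lag):
    """C(t) = <v(t0) . v(t0 + t)>, averaged over time origins t0.
    v has shape (n_steps, n_dims); the long-time tail discussed above
    is the sign and power law of C at large lags."""
    n = len(v)
    return np.array([(v[:n - lag] * v[lag:]).sum(axis=1).mean()
                     for lag in range(max_lag)])
```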
Abstract:
By generalizing effective-medium theory to the case of orientationally ordered but positionally disordered two-component mixtures, it is shown that the anisotropic dielectric tensor of oxide superconductors can be extracted from microwave measurements on oriented crystallites of YBa2Cu3O7−x embedded in epoxy. Surprisingly, this technique appears to be the only one that can access the resistivity perpendicular to the copper-oxide planes in crystallites that are too small for depositing electrodes. This possibility arises in part because the real part of the dielectric constant of oxide superconductors has a large magnitude. The validity of the effective-medium approach for orientationally ordered mixtures is corroborated by simulations on two-dimensional anisotropic random resistor networks. Analysis of the experimental data suggests that the zero-temperature limit of the finite-frequency resistivity does not vanish along the c axis, a result which would imply the existence of states at the Fermi surface, even in the superconducting state.
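For orientation, the classic isotropic, scalar form of the self-consistent effective-medium (Bruggeman) condition in two dimensions can be solved in a few lines; the paper's generalization replaces the scalar conductivities with an anisotropic dielectric tensor and orientational order.

```python
from scipy.optimize import brentq

def bruggeman_2d(sig1, sig2, f1):
    """Symmetric two-dimensional Bruggeman effective-medium conductivity
    for an isotropic two-component mixture (volume fraction f1 of
    component 1).  Solves
        f1 (s1 - s)/(s1 + s) + (1 - f1)(s2 - s)/(s2 + s) = 0
    for the effective conductivity s, which lies between s1 and s2."""
    eqn = lambda s: (f1 * (sig1 - s) / (sig1 + s)
                     + (1 - f1) * (sig2 - s) / (sig2 + s))
    lo, hi = sorted((sig1, sig2))
    return brentq(eqn, lo, hi)

print(bruggeman_2d(1.0, 100.0, 0.5))  # -> 10.0, the geometric mean in 2-D
```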
Abstract:
The liquid-liquid critical point scenario of water hypothesizes the existence of two metastable liq- uid phases low-density liquid (LDL) and high-density liquid (HDL) deep within the supercooled region. The hypothesis originates from computer simulations of the ST2 water model, but the stabil- ity of the LDL phase with respect to the crystal is still being debated. We simulate supercooled ST2 water at constant pressure, constant temperature, and constant number of molecules N for N ≤ 729 and times up to 1 μs. We observe clear differences between the two liquids, both structural and dynamical. Using several methods, including finite-size scaling, we confirm the presence of a liquid-liquid phase transition ending in a critical point. We find that the LDL is stable with respect to the crystal in 98% of our runs (we perform 372 runs for LDL or LDL-like states), and in 100% of our runs for the two largest system sizes (N = 512 and 729, for which we perform 136 runs for LDL or LDL-like states). In all these runs, tiny crystallites grow and then melt within 1 μs. Only for N ≤ 343 we observe six events (over 236 runs for LDL or LDL-like states) of spontaneous crystal- lization after crystallites reach an estimated critical size of about 70 ± 10 molecules.
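As a small illustration of how such runs might be labelled, the sketch below separates LDL-like from HDL-like trajectories by mean density; the cut value would be read off the bimodal density histogram of all runs, and the paper's actual structural and dynamical criteria are more careful than this.

```python
import numpy as np

def label_runs(density_traces, rho_cut):
    """Label each constant-pressure run LDL- or HDL-like from its mean
    density.  rho_cut is a cut between the two liquids' density bands;
    a crude stand-in for the paper's structural/dynamical criteria."""
    means = np.array([np.mean(trace) for trace in density_traces])
    return np.where(means < rho_cut, "LDL-like", "HDL-like")
```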
Abstract:
PURPOSE: To objectively characterize different heart tissues from functional and viability images provided by composite-strain-encoding (C-SENC) MRI. MATERIALS AND METHODS: C-SENC is a new MRI technique for simultaneously acquiring cardiac functional and viability images. In this work, an unsupervised multi-stage fuzzy clustering method is proposed to identify different heart tissues in the C-SENC images. The method is based on the sequential application of the fuzzy c-means (FCM) and iterative self-organizing data (ISODATA) clustering algorithms. The proposed method is tested on simulated heart images and on images from nine patients with and without myocardial infarction (MI). The resulting clustered images are compared with MRI delayed-enhancement (DE) viability images for determining MI. Bland-Altman analysis is also conducted between the two methods. RESULTS: Normal myocardium, infarcted myocardium, and blood are correctly identified using the proposed method. The clustered images correctly identified 90 ± 4% of the pixels defined as infarct in the DE images. In addition, 89 ± 5% of the pixels defined as infarct in the clustered images were also defined as infarct in the DE images. The Bland-Altman results show no bias between the two methods in identifying MI. CONCLUSION: The proposed technique allows different heart tissues to be identified objectively, which would be potentially important for clinical decision-making in patients with MI.
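The FCM stage of the pipeline is standard and compact enough to sketch; the ISODATA stage, which subsequently splits and merges clusters, is omitted here. Applied to C-SENC pixel feature vectors, the memberships would separate blood, normal, and infarcted myocardium.

```python
import numpy as np

def fuzzy_c_means(x, c, m=2.0, iters=100, tol=1e-6, seed=0):
    """Plain fuzzy c-means on x of shape (n_samples, n_features).
    Returns cluster centers and the (c, n_samples) membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                        # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(x[None] - centers[:, None], axis=2)
        d = np.fmax(d, 1e-12)                 # avoid division by zero
        u_new = d ** (-2 / (m - 1))
        u_new /= u_new.sum(axis=0)
        if np.abs(u_new - u).max() < tol:
            return centers, u_new
        u = u_new
    return centers, u

# Example: cluster toy 3-feature pixel vectors into three tissue classes.
x = np.random.default_rng(1).random((500, 3))
centers, u = fuzzy_c_means(x, c=3)
labels = u.argmax(axis=0)                     # hard assignment per pixel
```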
Abstract:
Two methods of differential isotopic coding of carboxylic groups have been developed to date. The first approach uses d0- or d3-methanol to convert carboxyl groups into the corresponding methyl esters. The second relies on the incorporation of two 18O atoms into the C-terminal carboxylic group during tryptic digestion of proteins in H2(18)O. However, both methods have limitations, such as chromatographic separation of the 1H and 2H derivatives or overlap of the isotopic distributions of the light and heavy forms due to small mass shifts. Here we present a new tagging approach based on the specific incorporation of sulfanilic acid into carboxylic groups. The reagent was synthesized in a heavy form (13C phenyl ring), showing no chromatographic shift and an optimal isotopic separation with a 6 Da mass shift. Moreover, sulfanilic acid allows for simplified fragmentation in matrix-assisted laser desorption/ionization (MALDI) due to the charge fixation of the sulfonate group at the C-terminus of the peptide. The derivatization is simple, specific, and minimizes the number of sample treatment steps that can strongly alter the sample composition. The quantification is reproducible within an order of magnitude and can be analyzed either by electrospray ionization (ESI) or MALDI. Finally, the method is able to specifically identify the C-terminal peptide of a protein by using GluC as the proteolytic enzyme.
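At the data-analysis end, quantification with such a tag amounts to pairing light and heavy signals of the same peptide separated by the 6 Da shift. A sketch for singly charged peaks, with a nominal shift and an illustrative mass tolerance:

```python
def find_isotope_pairs(peaks, shift=6.0, tol=0.02):
    """Match light/heavy signals separated by the tag's nominal 6 Da
    shift (six 13C atoms in the phenyl ring), for singly charged ions.
    `peaks` is a list of (m/z, intensity) with nonzero intensities;
    returns (light m/z, heavy m/z, heavy/light ratio) tuples."""
    pairs = []
    for mz_light, i_light in peaks:
        for mz_heavy, i_heavy in peaks:
            if abs((mz_heavy - mz_light) - shift) <= tol:
                pairs.append((mz_light, mz_heavy, i_heavy / i_light))
    return pairs
```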
Abstract:
The ability to determine the location and relative strength of all transcription-factor binding sites in a genome is important both for a comprehensive understanding of gene regulation and for effective promoter engineering in biotechnological applications. Here we present a bioinformatically driven experimental method to accurately define the DNA-binding sequence specificity of transcription factors. A generalized profile was used as a predictive quantitative model for binding sites, and its parameters were estimated from in vitro-selected ligands using standard hidden Markov model training algorithms. Computer simulations showed that several thousand low- to medium-affinity sequences are required to generate a profile of desired accuracy. To produce data on this scale, we applied high-throughput genomics methods to the biochemical problem addressed here. A method combining systematic evolution of ligands by exponential enrichment (SELEX) and serial analysis of gene expression (SAGE) protocols was coupled to an automated quality-controlled sequence extraction procedure based on Phred quality scores. This allowed the sequencing of a database of more than 10,000 potential DNA ligands for the CTF/NFI transcription factor. The resulting binding-site model defines the sequence specificity of this protein with a high degree of accuracy not achieved earlier and thereby makes it possible to identify previously unknown regulatory sequences in genomic DNA. A covariance analysis of the selected sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism.
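The gap-free core of such a binding-site model is a position weight matrix estimated from the selected ligands; a generalized profile adds gap states and HMM training on top of this idea. A minimal sketch with toy sites:

```python
import numpy as np

BASES = "ACGT"

def build_pwm(sites, pseudocount=1.0):
    """Log-odds position weight matrix from aligned selected ligands,
    scored against a uniform background."""
    counts = np.full((len(sites[0]), 4), pseudocount)
    for site in sites:
        for pos, base in enumerate(site):
            counts[pos, BASES.index(base)] += 1
    freqs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(freqs / 0.25)

def score(pwm, seq):
    """Additive log-odds score of one candidate binding site."""
    return sum(pwm[pos, BASES.index(base)] for pos, base in enumerate(seq))

pwm = build_pwm(["TTGGCA", "TTGGCT", "TAGGCA"])  # toy selected sites
print(score(pwm, "TTGGCA"))
```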
Abstract:
Functional connectivity affects demography and gene dynamics in fragmented populations. Besides species-specific dispersal ability, the connectivity between local populations is affected by the landscape elements encountered during dispersal. Documenting these effects is thus a central issue for the conservation and management of fragmented populations. In this study, we compare the power and accuracy of three methods (partial correlations, regressions, and Approximate Bayesian Computation) that use genetic distances to infer the effect of the landscape upon dispersal. We use stochastic individual-based simulations of fragmented populations surrounded by landscape elements that differ in their permeability to dispersal. The power and accuracy of all three methods are good when there is a strong contrast between the permeabilities of different landscape elements. Power and accuracy can be further improved by restricting analyses to adjacent pairs of populations. Landscape elements that strongly impede dispersal are the easiest to identify. However, power and accuracy decrease drastically when landscape complexity increases and the contrast between the permeabilities of landscape elements decreases. We provide guidelines for future studies and underline the need to evaluate or develop more powerful approaches.
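Of the three methods compared, the regression approach is the simplest to sketch: pairwise genetic distances are regressed on the amounts of each landscape element type separating population pairs, and the fitted coefficients estimate permeability effects. A minimal version, with an assumed data layout rather than the study's exact design:

```python
import numpy as np

def landscape_regression(genetic_dist, element_amounts):
    """Least-squares fit of pairwise genetic distances on the amount of
    each landscape element type separating each pair of populations.
    genetic_dist: shape (n_pairs,); element_amounts: (n_pairs, n_types).
    Returns the intercept followed by one coefficient per element type;
    larger coefficients suggest elements that impede dispersal more."""
    X = np.column_stack([np.ones(len(genetic_dist)), element_amounts])
    beta, *_ = np.linalg.lstsq(X, genetic_dist, rcond=None)
    return beta
```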
Abstract:
BACKGROUND: Physician training in smoking cessation counseling has been shown to be effective as a means to increase quit success. We assessed the cost-effectiveness ratio of a smoking cessation counseling training programme. Its effectiveness was previously demonstrated in a cluster-randomized controlled trial performed in two Swiss university outpatient clinics, in which residents were randomized to receive training in smoking interventions or a control educational intervention. DESIGN AND METHODS: We used a Markov simulation model for the effectiveness analysis. This model incorporates the intervention efficacy, the natural quit rate, and the lifetime probability of relapse after 1 year of abstinence. We used previously published results in addition to hospital service and outpatient clinic cost data. The time horizon was 1 year, and we opted for a third-party payer perspective. RESULTS: The incremental cost of the intervention amounted to US$2.58 per consultation by a smoker, translating into a cost per life-year saved of US$25.4 for men and US$35.2 for women. One-way sensitivity analyses yielded a range of US$4.0-107.1 in men and US$9.7-148.6 in women. Variations in the quit rate of the control intervention, the length of training effectiveness, and the discount rate had moderately large effects on the outcome. Variations in the natural cessation rate, the lifetime probability of relapse, the cost of physician training, the counseling time, the cost per hour of physician time, and the cost of the booklets had little effect on the cost-effectiveness ratio. CONCLUSIONS: Training residents in smoking cessation counseling is a very cost-effective intervention and may be more efficient than currently accepted tobacco control interventions.
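A back-of-envelope version of the cost-per-life-year calculation can be sketched as follows. Apart from the US$2.58 incremental cost taken from the abstract, all numbers below are hypothetical placeholders, and the paper's Markov model is considerably more detailed.

```python
def cost_per_life_year(quit_int, quit_ctrl, relapse, cost_per_smoker,
                       life_years_gained):
    """Extra permanent quitters produced by the training, divided into
    the incremental cost per smoker counseled."""
    extra_quitters = (quit_int - quit_ctrl) * (1 - relapse)
    return cost_per_smoker / (extra_quitters * life_years_gained)

# Hypothetical inputs: 10% vs 5% one-year quit rates, 30% lifetime
# relapse after one year abstinent, US$2.58 incremental cost per
# consultation (from the abstract), 2 life-years gained per quitter.
print(cost_per_life_year(0.10, 0.05, 0.30, 2.58, 2.0))  # ~US$36.9
```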