930 results for BIASES


Relevance: 10.00%

Publisher:

Abstract:

Background: Cryptococcus neoformans causes meningitis and disseminated infection in healthy individuals, but more commonly in hosts with defective immune responses. Cell-mediated immunity is an important component of the immune response to a great variety of infections, including yeast infections. We aimed to evaluate a specific lymphocyte transformation assay to Cryptococcus neoformans in order to identify immunodeficiency associated with neurocryptococcosis (NCC) as the primary cause of the mycosis. Methods: Healthy volunteers, poultry growers, and HIV-seronegative patients with neurocryptococcosis were tested for cellular immune response. Cryptococcal meningitis was diagnosed by India ink staining of cerebrospinal fluid and cryptococcal antigen test (Immunomycol-Inc, SP, Brazil). Isolated peripheral blood mononuclear cells were stimulated with C. neoformans antigen, C. albicans antigen, and pokeweed mitogen. The amount of 3H-thymidine incorporated was assessed, and the results were expressed as a stimulation index (SI) and log SI; sensitivity, specificity, and cut-off values were derived from receiver operating characteristic (ROC) curves. We applied unpaired Student's t tests to compare data and considered differences significant at p < 0.05. Results: The lymphocyte transformation assay showed a low capacity, with all stimuli, to classify patients as responders and non-responders. Lymphoproliferative responses stimulated by heat-killed antigen in patients with neurocryptococcosis were not affected by CD4+ T-cell count, and the intensity of the response did not correlate with the clinical evolution of neurocryptococcosis. Conclusion: Responses to the lymphocyte transformation assay should be analyzed based on a normal range and using more than one stimulus. The use of a cut-off value to classify patients with neurocryptococcosis is inadequate. Statistical analysis should be based on the log transformation of the SI. A more purified antigen for evaluating the specific response to C. neoformans is needed.
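
As a rough illustration of the kind of analysis described above, the sketch below computes stimulation indices, their log transform, and a ROC-based cut-off. The counts, the responder labels, and the use of Youden's J to pick the threshold are assumptions made for the example, not the study's data or exact procedure.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical counts per minute (cpm) of 3H-thymidine incorporation in
# stimulated vs. unstimulated cultures for six subjects (illustrative only).
cpm_stimulated = np.array([5200.0, 800.0, 12000.0, 950.0, 7600.0, 640.0])
cpm_unstimulated = np.array([400.0, 520.0, 610.0, 700.0, 380.0, 590.0])
is_responder = np.array([1, 0, 1, 0, 1, 0])   # reference classification (assumed)

# Stimulation index and its log transform, as recommended in the abstract.
si = cpm_stimulated / cpm_unstimulated
log_si = np.log10(si)

# ROC curve on log SI; Youden's J is one possible way to pick a cut-off value.
fpr, tpr, thresholds = roc_curve(is_responder, log_si)
youden_j = tpr - fpr
cutoff = thresholds[np.argmax(youden_j)]
print(f"log SI cut-off (Youden): {cutoff:.2f}")
```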

Relevance: 10.00%

Publisher:

Abstract:

The frequency distribution of SNPs and haplotypes in the ABCB1, SLCO1B1 and SLCO1B3 genes varies widely among continental populations. This variation can lead to biases in pharmacogenetic studies conducted in admixed populations such as those from Brazil and other Latin American countries. The aim of this study was to evaluate the influence of self-reported colour, geographical origin and genomic ancestry on the distributions of ABCB1, SLCO1B1 and SLCO1B3 polymorphisms and derived haplotypes in admixed Brazilian populations. A total of 1039 healthy adults from the north, north-east, south-east and south of Brazil were recruited for this investigation. The c.388A>G (rs2306283), c.463C>A (rs11045819) and c.521T>C (rs4149056) SNPs in the SLCO1B1 gene and the c.334T>G (rs4149117) and c.699G>A (rs7311358) SNPs in the SLCO1B3 gene were determined by TaqMan 5'-nuclease assays. The ABCB1 c.1236C>T (rs1128503), c.2677G>T/A (rs2032582) and c.3435C>T (rs1045642) polymorphisms were genotyped using a previously described single-base extension/termination method. The results showed that genotype and haplotype distributions are highly variable among populations of the same self-reported colour and geographical region. However, analysis of genomic ancestry showed that these distributions are better explained by ancestry treated as a continuous variable. The influence of ancestry on the distribution of allele and haplotype frequencies was most evident for variants with large differences in allele frequency between European and African populations. The design and interpretation of pharmacogenetic studies using these transporter genes should include genomic controls to avoid spurious conclusions based on improper matching of study cohorts from Brazilian and other highly admixed populations.
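
A minimal sketch of how an allele frequency is computed from genotype counts for one of the SNPs listed above; the subgroup names and counts are hypothetical and are not taken from the study.

```python
import numpy as np

# Hypothetical genotype counts (AA, AG, GG) for SLCO1B1 c.388A>G (rs2306283)
# in two illustrative subgroups; values are made up for the example.
counts = {
    "group_1": np.array([120, 210, 95]),
    "group_2": np.array([60, 140, 130]),
}

def allele_frequency(genotype_counts):
    """Frequency of the derived (G) allele from (AA, AG, GG) counts."""
    aa, ag, gg = genotype_counts
    n_chromosomes = 2 * (aa + ag + gg)
    return (ag + 2 * gg) / n_chromosomes

for group, c in counts.items():
    print(group, f"G allele frequency: {allele_frequency(c):.3f}")
```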

Relevance: 10.00%

Publisher:

Abstract:

This study analyzes important aspects of the tropical Atlantic Ocean in simulations from the fourth version of the Community Climate System Model (CCSM4): the mean sea surface temperature (SST) and wind stress, the Atlantic warm pools, the principal modes of SST variability, and the heat budget in the Benguela region. The main goal was to assess the similarities and differences between the CCSM4 simulations and observations. The results indicate that the tropical Atlantic is, overall, realistic in CCSM4. However, there are still significant biases in the CCSM4 Atlantic SSTs, with a colder tropical North Atlantic and a warmer tropical South Atlantic, that are related to biases in the wind stress. These biases are also reflected in the Atlantic warm pools in April and September, whose volume is greater than observed in April and smaller than observed in September. The variability of SSTs in the tropical Atlantic is well represented in CCSM4. However, in the equatorial and tropical South Atlantic regions, CCSM4 has two distinct modes of variability, in contrast to observed behavior. A model heat budget analysis of the Benguela region indicates that the variability of the upper-ocean temperature is dominated by vertical advection, followed by meridional advection.
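
A minimal sketch of the kind of mean-state comparison referred to above: model-minus-observation SST bias on a shared grid, summarized with an area-weighted mean. The arrays are random placeholders, not CCSM4 output or an observational product.

```python
import numpy as np

# Placeholder climatologies on a shared latitude/longitude grid; in practice
# these would be the CCSM4 SST climatology and an observational climatology
# regridded to the same tropical Atlantic grid.
lat = np.linspace(-30, 30, 61)
lon = np.linspace(-60, 20, 81)
rng = np.random.default_rng(0)
sst_model = 300.0 + rng.normal(size=(lat.size, lon.size))   # K, placeholder
sst_obs = 300.0 + rng.normal(size=(lat.size, lon.size))     # K, placeholder

bias = sst_model - sst_obs                 # positive = model too warm

# Area-weighted mean bias (weights proportional to cos(latitude)).
weights = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(bias)
mean_bias = np.average(bias, weights=weights)
print(f"Area-weighted mean SST bias: {mean_bias:+.2f} K")
```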

Relevance: 10.00%

Publisher:

Abstract:

Objectives Clinical significance and management of prenatal hydronephrosis (PNH) are sources of debate. Existing studies are limited by biased cohorts or inconsistent follow-up. We aimed to evaluate the incidence of pathology in a large cohort of PNH and to assess the biases and outcomes of this population. Methods We reviewed 1034 charts of fetuses with PNH. Records of delivered offspring were reviewed at a pediatric center and analyzed with respect to prenatal and postnatal pathology and management. Results Prenatal resolution of hydronephrosis occurred in 24.7% of pregnancies. On the first postnatal ultrasound, some degree of dilatation was present in 80%, 88% and 95% of mild, moderate and severe PNH cases, respectively. At the end of follow-up, hydronephrosis persisted in 10%, 25% and 72% of children, respectively. The incidence of vesicoureteral reflux did not correlate with the severity of PNH. Children with a postnatal workup had more severe PNH than those without. Conclusions Although prenatal resolution occurred in about 25% of cases, pelvic dilatation persisted on the first postnatal imaging in most cases, justifying postnatal ultrasound evaluation. Whereas most mild cases resolved spontaneously, a quarter of moderate and more than half of severe cases required surgery. Patients with postnatal imaging and referral had more severe PNH, which could result in an overestimation of pathology. (c) 2012 John Wiley & Sons, Ltd.

Relevance: 10.00%

Publisher:

Abstract:

A set of predictor variables is said to be intrinsically multivariate predictive (IMP) for a target variable if all properly contained subsets of the predictor set are poor predictors of the target but the full set predicts the target with great accuracy. In a previous article, the main properties of IMP Boolean variables were analytically described, including the introduction of the IMP score, a metric based on the coefficient of determination (CoD) as a measure of predictiveness with respect to the target variable. It was shown that the IMP score depends on four main properties: logic of connection, predictive power, covariance between predictors, and marginal predictor probabilities (biases). This paper extends that work to a broader context, in an attempt to characterize properties of discrete Bayesian networks that contribute to the presence of variables (network nodes) with high IMP scores. We have found that there is a relationship between the IMP score of a node and its territory size, i.e., its position along a pathway with one source: nodes far from the source display larger IMP scores than those closer to the source, and longer pathways display larger maximum IMP scores. This appears to be a consequence of the fact that nodes with small territory have a larger probability of having highly covariant predictors, which leads to smaller IMP scores. In addition, a larger number of XOR and NXOR predictive logic relationships has a positive influence on the maximum IMP score found in the pathway. This work presents analytical results based on a simple-structure network and an analysis involving random networks constructed by computational simulations. Finally, results from a real Bayesian network application are provided. (C) 2012 Elsevier Inc. All rights reserved.
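
A minimal sketch of the CoD for binary variables and of one common way to quantify "IMP-ness" (full-set CoD minus the best proper-subset CoD); the exact IMP score definition in the article may differ, and the XOR example and sample counts are illustrative assumptions.

```python
from collections import defaultdict
import numpy as np

def cod(samples, predictor_idx, target_idx):
    """Coefficient of determination (CoD): (e0 - e) / e0, where e0 is the error
    of the best constant predictor of the target and e is the error of the
    optimal predictor given the observed predictor patterns."""
    y = samples[:, target_idx]
    p1 = y.mean()
    e0 = min(p1, 1.0 - p1)                  # best constant predictor error
    if e0 == 0.0:
        return 1.0
    groups = defaultdict(list)
    for row in samples:
        groups[tuple(row[predictor_idx])].append(row[target_idx])
    e = 0.0
    for targets in groups.values():
        pk = np.mean(targets)
        e += (len(targets) / len(y)) * min(pk, 1.0 - pk)
    return (e0 - e) / e0

# Illustration with an intrinsically multivariate pair: Y = XOR(X1, X2).
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(5000, 2))
y = x[:, 0] ^ x[:, 1]
data = np.column_stack([x, y])

cod_full = cod(data, [0, 1], 2)
cod_best_subset = max(cod(data, [0], 2), cod(data, [1], 2))
imp_score = cod_full - cod_best_subset      # one common way to score "IMP-ness"
print(f"CoD(full)={cod_full:.2f}  max CoD(subset)={cod_best_subset:.2f}  IMP={imp_score:.2f}")
```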

Relevance: 10.00%

Publisher:

Abstract:

Background: With the development of DNA hybridization microarray technologies, it is now possible to simultaneously assess the expression levels of thousands to tens of thousands of genes. Quantitative comparison of microarrays uncovers distinct patterns of gene expression, which define different cellular phenotypes or cellular responses to drugs. Because of technical biases, normalization of the intensity levels is a prerequisite to further statistical analysis. Choosing a suitable normalization approach can therefore be critical and deserves judicious consideration. Results: Here, we considered three commonly used normalization approaches, namely Loess, Splines and Wavelets, and two non-parametric regression methods that had not yet been used for normalization, namely kernel smoothing and Support Vector Regression. The results obtained were compared using artificial microarray data and benchmark studies. The results indicate that Support Vector Regression is the most robust to outliers and that kernel smoothing is the worst normalization technique, while no practical differences were observed between Loess, Splines and Wavelets. Conclusion: In light of our results, Support Vector Regression is favoured for microarray normalization because of its robustness in estimating the normalization curve.
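
A minimal sketch of intensity-dependent normalization with Support Vector Regression on simulated MA-plot data: fit the log-ratio M as a function of the mean log-intensity A and subtract the fitted curve. The data, kernel choice and hyperparameters are illustrative assumptions, not those of the benchmark study.

```python
import numpy as np
from sklearn.svm import SVR

# Simulated MA data with an intensity-dependent bias plus noise (placeholders).
rng = np.random.default_rng(1)
A = rng.uniform(6, 14, 2000)                       # mean log2 intensity
M = 0.4 * np.sin(A) + rng.normal(0, 0.3, A.size)   # log2 ratio with a trend

svr = SVR(kernel="rbf", C=1.0, epsilon=0.1)
svr.fit(A.reshape(-1, 1), M)
M_normalized = M - svr.predict(A.reshape(-1, 1))   # remove the estimated trend

# The spread shrinks once the intensity-dependent trend has been removed.
print(f"std(M) before: {np.std(M):.3f}  after: {np.std(M_normalized):.3f}")
```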

Relevance: 10.00%

Publisher:

Abstract:

Background The evolutionary advantages of selective attention are unclear. Since the study of selective attention began, it has been suggested that the nervous system only processes the most relevant stimuli because of its limited capacity [1]. An alternative proposal is that action planning requires the inhibition of irrelevant stimuli, which forces the nervous system to limit its processing [2]. An evolutionary approach might provide additional clues to clarify the role of selective attention. Methods We developed Artificial Life simulations in which animals were repeatedly presented with two objects, "left" and "right", each of which could be "food" or "non-food." The animals' neural networks (multilayer perceptrons) had two input nodes, one for each object, and two output nodes that determined whether the animal ate each of the objects. The neural networks also had a variable number of hidden nodes, which determined whether or not they had enough capacity to process both stimuli (Table 1). The evolutionary relevance of the left and right food objects could also vary, depending on how much the animal's fitness increased when ingesting them (Table 1). We compared sensory processing in animals with or without limited capacity, which evolved in simulations in which the objects had the same or different relevances. Table 1. Nine sets of simulations were performed, varying the values of food objects and the number of hidden nodes in the neural networks. The values of left and right food were swapped during the second half of the simulations. Non-food objects were always worth -3. The evolution of the neural networks was simulated by a simple genetic algorithm. Fitness was a function of the number of food and non-food objects each animal ate, and the chromosomes determined the node biases and synaptic weights. During each simulation, 10 populations of 20 individuals each evolved in parallel for 20,000 generations; then the relevance of the food objects was swapped and the simulation was run for another 20,000 generations. The neural networks were evaluated by their ability to identify the two objects correctly. The detectability (d') for the left and right objects was calculated using Signal Detection Theory [3]. Results and conclusion When both stimuli were equally relevant, networks with two hidden nodes processed only one stimulus and ignored the other. With four or eight hidden nodes, they could correctly identify both stimuli. When the stimuli had different relevances, the d' for the most relevant stimulus was higher than the d' for the least relevant stimulus, even when the networks had four or eight hidden nodes. We conclude that selection mechanisms arose in our simulations depending not only on the size of the neural networks but also on the stimuli's relevance for action.
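
A minimal sketch of the d' computation from Signal Detection Theory mentioned above, using hypothetical hit and false-alarm counts for one object; the correction applied to extreme rates is a common convention, not necessarily the one used in the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Detectability index d' = Z(hit rate) - Z(false-alarm rate).
    A small log-linear correction keeps rates away from 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for the "left" object of one simulated animal:
# ate food when present (hits), ate non-food (false alarms), and so on.
print(f"d' = {d_prime(hits=90, misses=10, false_alarms=20, correct_rejections=80):.2f}")
```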

Relevance: 10.00%

Publisher:

Abstract:

It has consistently been shown that agents judge the intervals between their actions and outcomes as compressed in time, an effect named intentional binding. In the present work, we investigated whether this effect results from a prior bias that volunteers have about the timing of the consequences of their actions, or whether it is due to learning that occurs during the experimental session. Volunteers made temporal estimates of the interval between their action and target onset (Action conditions), or between two events (No-Action conditions). Our results show that temporal estimates become shorter throughout each experimental block in both conditions. Moreover, we found that observers judged intervals between action and outcome as shorter even in the very early trials of each block. To quantify the decrease of temporal judgments across experimental blocks, exponential functions were fitted to participants' temporal judgments. The fitted parameters suggest that observers had different prior biases regarding intervals between events in which an action was involved. These findings suggest that prior bias might play a more important role in this effect than calibration-type learning processes.
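
A minimal sketch of fitting an exponential function to per-trial temporal judgments within a block; the functional form, parameter names and simulated data are assumptions made for the example, not the paper's exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(trial, asymptote, amplitude, tau):
    """estimate(trial) = asymptote + amplitude * exp(-trial / tau)"""
    return asymptote + amplitude * np.exp(-trial / tau)

# Simulated per-trial interval judgments (ms) that shorten across a block.
rng = np.random.default_rng(4)
trials = np.arange(1, 61)
judgments_ms = exp_decay(trials, 450.0, 120.0, 15.0) + rng.normal(0, 20, trials.size)

params, _ = curve_fit(exp_decay, trials, judgments_ms, p0=(400.0, 100.0, 10.0))
asymptote, amplitude, tau = params
print(f"asymptote={asymptote:.0f} ms  amplitude={amplitude:.0f} ms  tau={tau:.1f} trials")
```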

Relevance: 10.00%

Publisher:

Abstract:

Visual signals, used for communication both within and between species, vary immensely in the forms that they take. How is it that all this splendour has evolved in nature? Since it is the receiver's preferences that cause selective pressures on signals, elucidating the mechanism behind the response of the signal receiver is vital to gain a closer understanding of the evolutionary process. In my thesis I have therefore investigated how receivers, represented by chickens, Gallus gallus domesticus, respond to different stimuli displayed on a peck-sensitive computer screen. According to the receiver bias hypothesis, animals and humans often express biases when responding to certain stimuli. These biases develop as by-products of how the recognition mechanism categorises and discriminates between stimuli. Since biases are generated from general stimulus processing mechanisms, they occur irrespective of species and type of signal, and it is often possible to predict the direction and intensity of the biases. One of the results from the experiments in my thesis demonstrates that similar experience in different species may generate similar biases. By giving chickens at least some of the experience of human faces that humans presumably have, the chickens subsequently expressed preferences for the same faces as a group of human subjects. Another kind of experience generated a bias for symmetry. This bias developed in the context of training chickens to recognise two mirror images of an asymmetrical stimulus. Untrained chickens and chickens trained on only one of the mirror images expressed no symmetry preferences. The bias produced by the training regime was for a specific symmetrical stimulus which had a strong resemblance to the familiar asymmetrical exemplar, rather than a general preference for symmetry. A further kind of experience, training chickens to respond to some stimuli but not to others, generated a receiver bias for exaggerated stimuli, whereas chickens trained on reversed stimuli developed a bias for less exaggerated stimuli. To investigate the potential of this bias to drive the evolution of signals towards exaggerated forms, a simplified evolutionary process was mimicked. The stimulus variants rejected by the chickens were eliminated, whereas the selected forms were kept and evolved prior to the subsequent display. As a result, signals evolved into exaggerated forms in all tested stimulus dimensions: length, intensity and area, despite the inclusion of a cost to the sender for using increasingly exaggerated signals. The bias was especially strong and persistent for stimuli varying along the intensity dimension, where it remained despite extensive training. All the results in my thesis may be predicted by the receiver bias hypothesis. This implies that biases, developed due to stimulus experience, may be significant mechanisms driving the evolution of signal form.

Relevance: 10.00%

Publisher:

Abstract:

Degree in Marine Sciences. Faculty of Marine Sciences, University of Las Palmas de Gran Canaria. Institut de Ciències del Mar, Consejo Superior de Investigaciones Científicas

Relevance: 10.00%

Publisher:

Abstract:

The quality of temperature and humidity retrievals from the infrared SEVIRI sensors on the geostationary Meteosat Second Generation (MSG) satellites is assessed by means of a one-dimensional variational (1D-Var) algorithm. The study is performed with the aim of improving the spatial and temporal resolution of available observations to feed analysis systems designed for high-resolution regional-scale numerical weather prediction (NWP) models. The non-hydrostatic forecast model COSMO (COnsortium for Small scale MOdelling) in the ARPA-SIM operational configuration is used to provide background fields. Only clear-sky observations over sea are processed. An optimised 1D-Var set-up comprising the two water vapour and the three window channels is selected. It maximises the reduction of errors in the model backgrounds while ensuring ease of operational implementation through accurate bias correction procedures and correct radiative transfer simulations. The 1D-Var retrieval quality is first quantified in relative terms, employing statistics to estimate the reduction in the background model errors. Additionally, the absolute retrieval accuracy is assessed by comparing the analysis with independent radiosonde and satellite observations. The inclusion of satellite data brings a substantial reduction in the warm and dry biases present in the forecast model. Moreover, it is shown that the retrieval profiles generated by the 1D-Var are well correlated with the radiosonde measurements. Subsequently, the 1D-Var technique is applied to two three-dimensional case studies: a false-alarm case study that occurred in Friuli-Venezia-Giulia on 8 July 2004 and a heavy-precipitation case that occurred in the Emilia-Romagna region between 9 and 12 April 2005. The impact of satellite data for these two events is evaluated in terms of increments in the column-integrated water vapour and saturation water vapour, in the 2-metre temperature and specific humidity, and in the surface temperature. To improve the 1D-Var technique, a method to calculate flow-dependent model error covariance matrices is also assessed. The approach employs members from an ensemble forecast system generated by perturbing physical parameterisation schemes inside the model. The improved set-up applied to the case of 8 July 2004 shows a substantially neutral impact.
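
A minimal sketch of the linear analysis update at the core of a 1D-Var scheme; in practice the SEVIRI observation operator is a non-linear radiative transfer model and the cost function is minimised iteratively, so the tiny matrices below are placeholders only.

```python
import numpy as np

# Linear 1D-Var / optimal-estimation step:
#   x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b)
# where x holds the temperature/humidity profile, H the (linearised)
# observation operator, B the background-error and R the observation-error
# covariance. Sizes and values are illustrative.
n_state, n_obs = 4, 2
x_b = np.array([285.0, 280.0, 270.0, 255.0])          # background profile (K)
B = 1.5 * np.eye(n_state)                              # background-error covariance
H = np.random.default_rng(2).normal(size=(n_obs, n_state)) * 0.1  # toy Jacobian
R = 0.5 * np.eye(n_obs)                                # observation-error covariance
y = H @ x_b + np.array([0.8, -0.4])                    # synthetic observations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)           # gain matrix
x_a = x_b + K @ (y - H @ x_b)                          # analysis
A = B - K @ H @ B                                      # analysis-error covariance
print("analysis increment:", x_a - x_b)
print("background vs analysis error variance:", np.diag(B), np.diag(A))
```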

Relevance: 10.00%

Publisher:

Abstract:

The continuous increase in genome sequencing projects has produced a huge amount of data over the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with the issue of assigning biological information to each sequence. The annotation process takes place at each level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished only by in vitro analysis procedures, which are extremely expensive and time-consuming when applied at such a large scale. Thus, in silico methods need to be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow a fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine learning based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method is called BaCelLo, and was developed in 2006. The main peculiarity of the method is that it is independent of biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification, made by myself, to the standard Support Vector Machine (SVM) algorithm, with the creation of the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was reported to outperform all the currently available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine learning based method was implemented for the prediction of GPI-anchored proteins. The method is able to efficiently predict, from the raw amino acid sequence, both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model (HMM)). The method is called GPIPE and was reported to greatly improve prediction performance for GPI-anchored proteins over all the previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted to be GPI-anchored. A statistical analysis of the composition of the regions surrounding the ω-site allowed the definition of specific amino acid abundances in the different regions considered.
Furthermore, the hypothesis, proposed in the literature, that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello eSLDB http://gpcr.biocomp.unibo.it/esldb GPIPE http://gpcr.biocomp.unibo.it/gpipe
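
The Balanced SVM mentioned above is the author's own modification of SVM training. As a loose, hedged analogue of the same idea (counteracting class imbalance in the training set so that the most represented localization is not over-predicted), the sketch below uses scikit-learn's per-class penalty reweighting on made-up data; it is not the thesis' algorithm.

```python
import numpy as np
from sklearn.svm import SVC

# Random placeholder features (e.g. sequence-derived descriptors) and a
# deliberately imbalanced set of localization labels.
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 20))
y = rng.choice(["nucleus", "cytoplasm", "mitochondrion"], size=500,
               p=[0.7, 0.2, 0.1])

# class_weight="balanced" scales the penalty C inversely to class frequency,
# which counters over-prediction of the majority class.
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X, y)
predictions = clf.predict(X)
print({c: int((predictions == c).sum()) for c in np.unique(y)})
```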

Relevance: 10.00%

Publisher:

Abstract:

Humans process numbers in a similar way to animals. There are countless studies in which similar performance between animals and humans (adults and/or children) is reported. Three models have been developed to explain the cognitive mechanisms underlying number processing. The triple-code model (Dehaene, 1992) posits a mental number line as the preferred way to represent magnitude. The mental number line has three characteristic effects: the distance, the magnitude and the SNARC effects. The SNARC effect shows a spatial association between number and space representations. In other words, small numbers are associated with left space while large numbers are associated with right space. Recently a vertical SNARC effect has been found (Ito & Hatta, 2004; Schwarz & Keus, 2004), reflecting a space-related bottom-to-top representation of numbers. Horizontal and vertical magnitude representations could influence subjects' performance in explicit and implicit digit tasks. This research project aimed to investigate the spatial components of number representation using different experimental designs and tasks. Experiment 1 focused on horizontal and vertical number representations, in within- and between-subjects designs, in parity and magnitude comparison tasks, presenting positive or negative Arabic digits (1-9 without 5). Experiment 1A replicated the SNARC and distance effects in both spatial arrangements. Experiment 1B showed a horizontal reversed SNARC effect in both tasks, while a vertical reversed SNARC effect was found only in the comparison task. In Experiment 1C two groups of subjects performed both tasks under two different instruction-responding hand assignments with positive numbers. The results did not show any significant differences between the two assignments, even if the vertical number line seemed to be more flexible than the horizontal one. On the whole, Experiment 1 seemed to demonstrate a contextual (i.e. task-set) influence on the nature of the SNARC effect. Experiment 2 focused on the effect of horizontal and vertical number representations on spatial biases in paper-and-pencil bisection tasks. In Experiment 2A the participants were requested to bisect physical lines and digit strings (2 or 9) horizontally and vertically. The findings demonstrated that digit 9 strings tended to generate a more rightward bias compared with digit 2 strings horizontally. However, in the vertical condition the digit 2 strings generated a more upward bias than digit 9 strings, suggesting a top-to-bottom number line. In Experiment 2B the participants were asked to bisect lines flanked by numbers (i.e. 1 or 7) in four spatial arrangements: horizontal, vertical, right-diagonal and left-diagonal lines. Four number conditions were created according to congruent or incongruent number line representation: 1-1, 1-7, 7-1 and 7-7. The main results showed a more reliable rightward bias in the horizontal congruent condition (1-7) than in the incongruent condition (7-1). Vertically, the incongruent condition (1-7) determined a significant bias towards the bottom of the line compared with the congruent condition (7-1). Experiment 2 suggested a more rigid horizontal number line, while in the vertical condition the number representation could be more flexible. In Experiment 3 we adopted the materials of Experiment 2B in order to look for a number line effect on temporal (motor) performance.
The participants were presented with horizontal, vertical, right-diagonal and left-diagonal lines flanked by the same digits (i.e. 1-1 or 7-7) or by different digits (i.e. 1-7 or 7-1). The digits were spatially congruent or incongruent with their respective hypothesized mental representations. Participants were instructed to touch the lines either close to the large digit, or close to the small digit, or to bisect the lines. Number processing influenced movement execution more than movement planning. Number congruency influenced spatial biases mostly along the horizontal but also along the vertical dimension. These results support a two-dimensional magnitude representation. Finally, Experiment 4 addressed the visuo-spatial manipulation of number representations for accessing and retrieving arithmetic facts. The participants were requested to perform a number-matching task and an addition verification task. The findings showed an interference effect between sum nodes and neutral nodes only with a horizontal presentation of digit cues in the number-matching task. In the addition verification task, performance was similar for horizontal and vertical presentations of arithmetic problems. In conclusion, the data seemed to show an automatic activation of the horizontal number line, which is also used to retrieve arithmetic facts. The horizontal number line seemed to be more rigid and the preferred way to order numbers, from left to right. A possible explanation could be the left-to-right direction of reading and writing. The vertical number line seemed to be more flexible and more task-dependent, perhaps reflecting the many examples in the environment that represent numbers either from bottom to top or from top to bottom. However, the bottom-to-top number line seemed to be activated by explicit task demands.
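
A minimal sketch of the regression analysis commonly used to quantify a SNARC effect: for each digit, the right-hand minus left-hand response-time difference (dRT) is regressed on magnitude, and a negative slope indicates the effect. The per-digit response times below are made up for illustration and are not the project's data.

```python
import numpy as np

# Hypothetical mean response times (ms) per digit for left- and right-hand responses.
digits = np.array([1, 2, 3, 4, 6, 7, 8, 9])
rt_left = np.array([520, 525, 530, 535, 545, 550, 556, 560], dtype=float)
rt_right = np.array([545, 540, 538, 536, 528, 524, 520, 515], dtype=float)

drt = rt_right - rt_left                      # positive = left hand faster
slope, intercept = np.polyfit(digits, drt, 1) # linear regression of dRT on magnitude
print(f"SNARC slope: {slope:.1f} ms per unit magnitude (negative slope = SNARC effect)")
```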

Relevance: 10.00%

Publisher:

Abstract:

This thesis focuses on the limits that may prevent an entrepreneur from maximizing her value, and on the benefits of diversification in reducing her cost of capital. After reviewing the relevant literature on the differences between traditional corporate finance and entrepreneurial finance, we focus on the biases that occur when traditional finance techniques are applied in the entrepreneurial context. In particular, using the portfolio theory framework, we determine the degree of under-diversification of entrepreneurs. Borrowing the methodology developed by Kerins et al. (2004), we test a model for the cost of capital according to the firm's industry and the entrepreneur's wealth commitment to the firm. This model takes three market inputs (standard deviation of market returns, expected return of the market, and the risk-free rate) and two firm-specific inputs (standard deviation of the firm's returns and the correlation between firm and market returns) as parameters, and returns an appropriate cost of capital as output. We determine the expected market return and the risk-free rate according to the extensive literature on the market risk premium. Market return volatility is estimated using a GARCH specification for the market index returns. Furthermore, we assume that the firm-specific inputs can be obtained from newly listed firms similar in risk to the firm being evaluated. After building a database including all the data needed for our analysis, we perform an empirical investigation to understand how much of a firm's total risk depends on market risk, and which explanatory variables can explain it. Our results show that the cost of capital declines as the level of the entrepreneur's commitment decreases. Therefore, maximizing the value for the entrepreneur depends on the fraction of the entrepreneur's wealth invested in the firm and the fraction she sells to outside investors. These results are interesting both for entrepreneurs and for policy makers: the former can benefit from an unbiased model for their valuation; the latter can obtain some guidelines to overcome the recent financial market crisis.
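
A minimal sketch of the contrast at the heart of such a model, in the spirit of Kerins et al. (2004): a well-diversified investor prices only market (beta) risk, whereas a fully committed, under-diversified entrepreneur effectively prices total risk. The numbers are illustrative, and the interpolation for partial commitment and the GARCH volatility estimation are omitted.

```python
# Illustrative inputs (made up, not estimated from data).
risk_free = 0.03
market_return = 0.09          # expected market return
sigma_market = 0.18           # market return volatility (e.g. from a GARCH fit)
sigma_firm = 0.60             # total volatility of the venture's returns
rho = 0.25                    # correlation between firm and market returns

market_premium = market_return - risk_free
beta = rho * sigma_firm / sigma_market

# Well-diversified investor: only systematic (beta) risk is priced.
cost_diversified = risk_free + beta * market_premium

# Fully committed entrepreneur: the premium scales with total relative risk.
cost_entrepreneur = risk_free + (sigma_firm / sigma_market) * market_premium

print(f"beta = {beta:.2f}")
print(f"cost of capital, diversified investor:   {cost_diversified:.1%}")
print(f"cost of capital, committed entrepreneur: {cost_entrepreneur:.1%}")
```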

Relevance: 10.00%

Publisher:

Abstract:

The quest for universal memory is driving the rapid development of memories with superior all-round capabilities in non-volatility, high speed, high endurance and low power. The memory subsystem accounts for a significant share of the cost and power budget of a computer system. Current DRAM-based main memory systems are starting to hit the power and cost limits. To resolve this issue the industry is improving existing technologies such as Flash and exploring new ones. Among those new technologies is Phase Change Memory (PCM), which overcomes some of the shortcomings of Flash, such as durability and scalability. This alternative non-volatile memory technology, which uses the resistance contrast in phase-change materials, offers higher density than DRAM and can help increase the main memory capacity of future systems while remaining within the cost and power constraints. Chalcogenide materials can suitably be exploited for manufacturing phase-change memory devices. Charge transport in the amorphous chalcogenide GST used for memory devices is modeled using two contributions: hopping of trapped electrons and motion of band electrons in extended states. Crystalline GST exhibits an almost Ohmic I(V) curve. In contrast, amorphous GST shows a high resistance at low biases while, above a threshold voltage, a transition takes place from a highly resistive to a conductive state, characterized by negative differential-resistance behavior. A clear and complete understanding of the threshold behavior of the amorphous phase is fundamental for exploiting such materials in the fabrication of innovative non-volatile memories. The type of feedback that produces the snapback phenomenon is described as a filamentation in energy that is controlled by electron-electron interactions between trapped electrons and band electrons. The model thus derived is implemented within a state-of-the-art simulator. An analytical version of the model is also derived and is useful for discussing the snapback behavior and the scaling properties of the device.
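
As a rough, purely illustrative contrast (not the thesis' hopping-plus-band-transport model), the sketch below compares an ohmic crystalline branch with a strongly non-linear, sinh-like subthreshold current often used as a toy description of trap-limited conduction in amorphous chalcogenides. Every parameter value is an arbitrary placeholder, and the bias range is kept below the threshold voltage, so the snapback itself is not modeled.

```python
import numpy as np

# Toy resistance contrast between the two GST phases (illustrative only):
# crystalline GST ~ ohmic, amorphous GST ~ sinh-like subthreshold current.
V = np.linspace(0.0, 0.3, 7)            # applied bias (V), below threshold
R_crystalline = 5e3                      # ohm, placeholder
I0_amorphous, V0 = 5e-9, 0.05            # A and V, placeholders

I_crystalline = V / R_crystalline
I_amorphous = I0_amorphous * np.sinh(V / V0)

for v, ic, ia in zip(V, I_crystalline, I_amorphous):
    print(f"V={v:.2f} V  crystalline={ic:.2e} A  amorphous={ia:.2e} A")
```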