925 results for Key feature


Relevance: 60.00%

Abstract:

In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology, which allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization of a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes which are strongly correlated with the groups of individuals. Even though methods to analyse these data are now well developed and close to reaching a standard organization (through the efforts of international projects such as the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to come across a clinician's question for which no compelling statistical method exists. The contribution of this dissertation to deciphering disease is the development of new approaches to open problems posed by clinicians in specific experimental designs. Chapter 1, starting from a necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the production of the array through quality control to the preprocessing steps used in the data analysis in the rest of the dissertation. Chapter 2 provides a critical review of standard analysis methods, stressing their main open problems. Chapter 3 introduces a method to address the issue of unbalanced design in microarray experiments. In microarray experiments, experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for each of the two classes. However, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists of a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each probe is given a "score" ranging from 0 to 1,000 based on its recurrence in the 1,000 lists as differentially expressed. The performance of MultiSAM was compared to that of SAM and LIMMA [3] over two data sets simulated via beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe, SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score >300 and separates the data into two clusters by hierarchical clustering.
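A minimal sketch of the resampling scheme described above, with a Welch t-test from scipy standing in for the SAM statistic; the class sizes, significance threshold and toy data are illustrative assumptions, not the dissertation's settings:

```python
import numpy as np
from scipy import stats

def multisam_scores(expr, lpc_idx, mpc_idx, n_iter=1000, alpha=0.05, rng=None):
    """Score each probe by its recurrence as differentially expressed across
    n_iter balanced comparisons of the LPC against random MPC subsamples.
    A Welch t-test stands in here for the SAM statistic."""
    rng = np.random.default_rng(rng)
    scores = np.zeros(expr.shape[0], dtype=int)
    for _ in range(n_iter):
        sub = rng.choice(mpc_idx, size=len(lpc_idx), replace=False)
        t, p = stats.ttest_ind(expr[:, lpc_idx], expr[:, sub],
                               axis=1, equal_var=False)
        scores += (p < alpha).astype(int)  # probe enters this iteration's list
    return scores  # 0..n_iter, analogous to the 0-1000 MultiSAM score

# toy data: 500 probes, 8 LPC samples vs. 40 MPC samples
rng = np.random.default_rng(0)
expr = rng.normal(size=(500, 48))
expr[:20, :8] += 1.5  # 20 truly changed probes in the LPC
scores = multisam_scores(expr, lpc_idx=np.arange(8),
                         mpc_idx=np.arange(8, 48), rng=1)
print((scores > 300).sum(), "probes with score > 300")
```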
We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. Chapter 4 describes a method to address similarity evaluation in a three-class problem by means of the Relevance Vector Machine [4]. In fact, looking at microarray data in a prognostic and diagnostic clinical framework, not only differences can play a crucial role. In some cases similarities can give useful, and sometimes even more important, information. The goal, given three classes, could be to establish, with a certain level of confidence, whether the third one is similar to the first or the second. In this work we show that the Relevance Vector Machine (RVM) [4] could be a possible solution to the limitations of standard supervised classification. In fact, RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM) [3]. Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, option of any practical pattern recognition system. We focused on a tumor-grade three-class problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then evaluate the third class G2 as a test set to obtain, for each G2 sample, the probability of being a member of class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to breast cancer samples of grade 1. This result had been conjectured in the literature, but no measure of significance had been given before.
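The similarity question reduces to training a probabilistic classifier on G1 vs. G3 and reading off posterior class probabilities for the G2 samples. A sketch of that workflow on purely synthetic data, with scikit-learn's logistic regression standing in for the RVM (which is not part of scikit-learn):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_genes = 200
G1 = rng.normal(0.0, 1, size=(67, n_genes))   # grade 1
G3 = rng.normal(1.0, 1, size=(54, n_genes))   # grade 3
G2 = rng.normal(0.3, 1, size=(100, n_genes))  # grade 2, the "third class"

X = np.vstack([G1, G3])
y = np.array([0] * len(G1) + [1] * len(G3))   # 0 = G1, 1 = G3

clf = LogisticRegression(max_iter=1000).fit(X, y)
p_G3 = clf.predict_proba(G2)[:, 1]            # posterior P(G3-like | sample)
print("mean P(G2 sample is G3-like):", p_G3.mean())
# a mean well below 0.5 indicates G2 profiles are more G1-like
```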

Relevance: 60.00%

Abstract:

The Peer-to-Peer network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization where there is no central control and all the nodes should be able not only to request services, but to provide them to other peers as well. While on one hand such a high level of decentralization might lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase their own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given the nature of P2P systems, based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties on a system scale might consist in obtaining them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to face the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash equilibria in the game and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both the methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, consisting of each peer periodically trying to copy another peer which is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind them is to have low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both from the topology and from the nodes' strategy point of view. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in fact, in some cases selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the cooperation formation point of view. The final step is to apply our results to more realistic scenarios. We put our efforts into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but also because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (tit-for-tat-like mechanism) to the swarm topology.
We found fairness, understood as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew inspiration from the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair that has been evaluated through simulation and has shown its ability to enforce fairness and tackle free-riding and cheating nodes.
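A minimal sketch of the SLAC-style copy-and-mutate rule described above: a worse-off node imitates a random better performer (strategy and neighborhood) with small mutation. The node count, mutation rates and toy utility function are illustrative assumptions, not the dissertation's parameters:

```python
import random

class Node:
    def __init__(self, i):
        self.id = i
        self.cooperate = random.random() < 0.5  # strategy
        self.links = set()                      # neighbor ids
        self.utility = 0.0

def payoffs(nodes):
    """Toy utility: a node earns 1 per cooperating neighbor."""
    byid = {n.id: n for n in nodes}
    for n in nodes:
        n.utility = sum(byid[j].cooperate for j in n.links)

def slac_step(nodes, mut_strategy=0.01, mut_link=0.01):
    """One update: a worse-off node copies a better performer, then mutates."""
    a, b = random.sample(nodes, 2)
    if a.utility < b.utility:
        a.cooperate = b.cooperate
        a.links = (set(b.links) | {b.id}) - {a.id}  # move next to b
    if random.random() < mut_strategy:              # mutate strategy
        a.cooperate = not a.cooperate
    if random.random() < mut_link:                  # mutate links: rewire
        a.links = {random.choice(nodes).id}

nodes = [Node(i) for i in range(100)]
for _ in range(20000):
    payoffs(nodes)
    slac_step(nodes)
print("fraction cooperating:", sum(n.cooperate for n in nodes) / len(nodes))
```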

Relevance: 60.00%

Abstract:

Negotiating boundaries: from state of affairs to matter of transit. The research deals with the everyday management of spatial uncertainty, starting with the wider historical question of terrains vagues (a French term for wastelands, dismantled areas and peripheral city voids, or interstitial spaces) and focusing later on a particular case study. The choice was intended to privilege a small place (the mouth of a lagoon which crosses a beach), with ordinary features, instead of the aestheticized "vague terrains" often witnessed through artistic media or architectural reflections. This place offered the chance to explore a particular dimension of indeterminacy, mostly related to a certain phenomenal instability of its limits, the hybrid character of its cultural status (neither natural nor artificial) and its crossover position as a transitional space between different tendencies and activities. The first, theoretical part of the research develops a semiotics of vagueness by examining the structuralist idea of relation, in order to approach an interpretive notion of continuity and indeterminacy. This exploration highlights the key feature of actantial network distribution, which provides a bridge to the second, methodological part, dedicated to a "tuning" of the tools for the analysis. This section establishes a dialogue with current social sciences (such as Actor-Network Theory, situated action and distributed cognition), in order to define observational methods for the documentation of social practices which can be comprised within a semiotic ethnography framework. The last part, finally, focuses on the mediation and negotiation by which human actors interact with the varying conditions of the chosen environment, looking at people's movements through space, their embodied dealings with the boundaries and the use of spatial artefacts as the framing infrastructure of the site.

Relevance: 60.00%

Abstract:

The optical resonances of metallic nanoparticles placed at nanometer distances from a metal plane were investigated. At certain wavelengths, these "sphere-on-plane" systems become resonant with the incident electromagnetic field, and huge field enhancements are predicted, localized in the small gaps created between the nanoparticle and the plane. An experimental architecture to fabricate sphere-on-plane systems was successfully achieved in which, in addition to the commonly used alkanethiols, polyphenylene dendrimers were used as molecular spacers to separate the metallic nanoparticles from the metal planes. They allow for a defined nanoparticle-plane separation, and some are functionalized with a chromophore core which is therefore positioned exactly in the gap. The metal planes used in the system architecture consisted of evaporated thin films of gold or silver. Evaporated gold or silver films have a smooth interface with their substrate and a rougher top surface. To investigate the influence of surface roughness on the optical response of such a film, two gold films were prepared, each with a smooth and a rough side, that were as similar as possible. Surface plasmons were excited in the Kretschmann configuration both on the rough and on the smooth side. For each individual measurement, the reflectivity could be well modeled by a single gold film; to describe both sides consistently, however, the film has to be modeled as two layers with significantly different optical constants. The smooth side, although polycrystalline, had an optical response very similar to that of a monocrystalline surface, while for the rough side the standard response of evaporated gold is retrieved. For investigations of thin non-absorbing dielectric films, though, this heterogeneity introduces only a negligible error. To determine the resonant wavelength of the sphere-on-plane systems, a strategy was developed which is based on multi-wavelength surface plasmon spectroscopy experiments in the Kretschmann configuration. The resonant behavior of the system led to characteristic changes in the surface plasmon dispersion. A quantitative analysis was performed by calculating the polarisability per unit area α/A, treating the sphere-on-plane systems as an effective layer. This approach completely avoids the ambiguity in the determination of thickness and optical response of thin films in surface plasmon spectroscopy. Equal area densities of polarisable units yielded identical responses irrespective of the thickness of the layer they are distributed in. The parameter range in which the evaluation of surface plasmon data in terms of α/A is applicable was determined for a typical experimental situation. It was shown that this analysis yields reasonable quantitative agreement with a simple theoretical model of the sphere-on-plane resonators and reproduces the results of standard extinction experiments, while offering a higher information content and a significantly increased signal-to-noise ratio. With the objective of acquiring a better quantitative understanding of the dependence of the resonance wavelength on the geometry of the sphere-on-plane systems, different systems were fabricated in which the gold nanoparticle size, the type of spacer and the ambient medium were varied, and the resonance wavelength of each system was determined. The gold nanoparticle radius was varied in the range from 10 nm to 80 nm. It could be shown that the polyphenylene dendrimers can be used as molecular spacers to fabricate systems which support gap resonances.
The resonance wavelength of the systems could be tuned in the optical region between 550 nm and 800 nm. Based on a simple analytical model, a quantitative analysis was developed to relate the systems' geometry to the resonant wavelength, and surprisingly good agreement of this simple model with the experiment was found, without any adjustable parameters. The key feature ascribed to sphere-on-plane systems is a very large electromagnetic field localized in volumes in the nanometer range. Experiments towards a quantitative understanding of the field enhancements taking place in the gap of the sphere-on-plane systems were done by monitoring the increase in fluorescence of a metal-supported monolayer of a dye-loaded dendrimer upon decoration of the surface with nanoparticles. The metal used (gold or silver), the mean colloid size and the surface roughness were varied. Large silver crystallites on evaporated silver surfaces led to the most pronounced fluorescence enhancements, on the order of 10^4. They constitute a very promising sample architecture for the study of field enhancements.
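A sketch of the quasi-static starting point for such a polarisability analysis: the normalized dipole polarisability of a gold sphere in a dielectric ambient, with the resonance where the denominator is minimal. The Drude-type parameters are rough assumed values, and the image coupling to the metal plane, which red-shifts the gap mode, is deliberately omitted:

```python
import numpy as np

# crude Drude-type fit for gold (assumed values): an effective interband
# constant eps_inf, plasma energy and damping in eV
eps_inf, hbar_wp, hbar_g = 9.0, 9.0, 0.07

def eps_gold(ev):
    """Dielectric function of gold at photon energy ev (eV)."""
    return eps_inf - hbar_wp**2 / (ev * (ev + 1j * hbar_g))

def alpha_norm(ev, eps_m):
    """Quasi-static sphere polarisability / (4 pi eps0 r^3); the resonance
    sits where Re(eps) = -2 eps_m."""
    eps = eps_gold(ev)
    return (eps - eps_m) / (eps + 2 * eps_m)

ev = np.linspace(1.0, 4.0, 4000)
for eps_m in (1.0, 2.25):  # vacuum vs. a dielectric ambient (n = 1.5)
    e_res = ev[np.argmax(np.abs(alpha_norm(ev, eps_m)))]
    print(f"eps_m = {eps_m}: dipole resonance near {1239.84 / e_res:.0f} nm")
```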

Relevance: 60.00%

Abstract:

Enhancing the sensitivity of nuclear magnetic resonance measurements via hyperpolarization techniques like parahydrogen induced polarization (PHIP) is of high interest for spectroscopic investigations. Parahydrogen induced polarization is a chemical method which makes use of the correlation between nuclear spins in parahydrogen to create hyperpolarized molecules. The key feature of this technique is the pairwise and simultaneous transfer of the two hydrogen atoms of parahydrogen to a double or triple bond, resulting in a population of the Zeeman energy levels that differs from the Boltzmann distribution. The obtained hyperpolarization results in antiphase peaks of high intensity in the NMR spectrum. Due to these strong NMR signals, this method finds a lot of applications in chemistry, e.g. the characterization of short-lived reaction intermediates. Also in medicine it opens up the possibility to boost the sensitivity of medical diagnostics via magnetic labeling of active contrast agents. Thus, further examination and optimization of the PHIP technique is of significant importance in order to achieve the highest possible sensitivity gain.

In this work, different aspects of PHIP were studied with respect to its chemical and spectroscopic background. The first part of this work mainly focused on optimizing the PHIP technique by investigating different catalyst systems and developing new setups for the parahydrogenation. Further examinations facilitated the transfer of the generated polarization from the protons to heteronuclei like 13C. The second part of this thesis examined the possibility of transferring these results to different biologically active compounds to enable their later application in medical diagnostics. One group of interesting substances is represented by metabolites or neurotransmitters in mammalian cells. Other interesting substances are clinically relevant drugs like a barbituric acid derivative or antidepressant drugs like citalopram, which were investigated with regard to their applicability to the PHIP technique and the possibility of achieving polarization transfer to 13C nuclei. The last investigated substrate is a polymerizable monomer whose polymer was used as a blood plasma expander for trauma victims after the first half of the 20th century. In this case, the utility of the monomer for the PHIP technique was examined as a basis for later investigations of a polymerization reaction using hyperpolarized monomers.

Hence, this thesis covers the optimization of the PHIP technology, combining different fields of research such as chemical and spectroscopic aspects, and transfers the results to applications with real biologically active compounds.
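For scale, the thermal polarization that PHIP competes against is tiny. A short sketch computing the equilibrium proton polarization P = tanh(γħB0 / 2kBT); the field and temperature are arbitrary example values:

```python
import math

hbar = 1.054571817e-34       # J s
kB = 1.380649e-23            # J/K
gamma_1H = 2.6752218744e8    # proton gyromagnetic ratio, rad s^-1 T^-1

def thermal_polarization(B0, T):
    """Equilibrium polarization of spin-1/2 nuclei: tanh(gamma*hbar*B0 / (2 kB T))."""
    return math.tanh(gamma_1H * hbar * B0 / (2 * kB * T))

# e.g. a 7 T magnet at room temperature (illustrative values)
print(f"P_thermal = {thermal_polarization(7.0, 298):.2e}")
# ~2e-5, whereas PHIP-derived spin order can approach order unity,
# which is the source of the strong antiphase signals mentioned above
```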

Relevance: 60.00%

Abstract:

Calcium fluoride (CaF2) is one of the key lens materials in deep-ultraviolet microlithography because of its transparency at 193 nm and its nearly perfect optical isotropy. Its physical and chemical properties make it applicable for lens fabrication. The key feature of CaF2 is its extreme laser stability.

After exposing CaF2 to 193 nm laser irradiation at high fluences, a loss in optical performance is observed, which is related to radiation-induced defect structures in the material. The initial rapid damage process is well understood as the formation of radiation-induced point defects; however, after long irradiation times of up to two months, permanent damage of the crystals is observed. Based on experimental results, these permanent radiation-induced defect structures are identified as metallic Ca colloids.

The properties of point defects in CaF2 and their stabilization in the crystal bulk are calculated with density functional theory (DFT). Because the stabilization of the point defects and the formation of metallic Ca colloids are diffusion-driven processes, the diffusion coefficients for the vacancy (F center) and the interstitial (H center) in CaF2 are determined with the nudged elastic band method. The optical properties of Ca colloids in CaF2 are obtained from Mie theory, and their formation energy is determined.

Based on experimental observations and the theoretical description of radiation-induced point defects and defect structures, a diffusion-based model for laser-induced material damage in CaF2 is proposed, which also includes a mechanism for the annealing of laser damage.
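The migration barriers extracted from nudged-elastic-band calculations enter the diffusion coefficients through an Arrhenius expression. A minimal sketch with placeholder barriers and prefactors (hypothetical values for illustration, not the computed DFT results):

```python
import math

kB_eV = 8.617333262e-5  # Boltzmann constant, eV/K

def arrhenius_D(D0, Ea, T):
    """Arrhenius diffusion coefficient D = D0 * exp(-Ea / kB T)."""
    return D0 * math.exp(-Ea / (kB_eV * T))

# hypothetical values for an F center (vacancy) and an H center (interstitial)
for name, D0, Ea in [("F center", 1e-3, 0.7), ("H center", 1e-3, 0.3)]:
    D = arrhenius_D(D0, Ea, 300.0)  # cm^2/s at room temperature
    print(f"{name}: D(300 K) = {D:.2e} cm^2/s")
```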

Relevance: 60.00%

Abstract:

Stretch film is a widespread packaging application of polyethylene (PE) films, used to protect various products of different sizes and weights. A fundamental characteristic of the film is its cling property, by virtue of which the film can easily be sealed onto itself. Linear low-density grades (LLDPE) with relatively low density values are typically chosen because of their good performance. The market bases the choice of the cling material on trial and error rather than on knowledge of the structural characteristics that are optimal for the application. As with pressure sensitive adhesives, the adhesive properties of PE stretch films can be measured by peel testing. Many international standard methods exist, but the results of such tests depend strongly on the test geometry, on the possible plastic deformation occurring in the peel arm(s), and on rate and temperature. The aim of the present work is to measure the adhesion energy Gc of PE stretch films, onto themselves and onto different substrates, exploiting a fracture mechanics interpretation to account for the high flexibility and deformability of such films. The rate/temperature dependence of Gc will then be studied with direct reference to the linear viscoelastic behaviour of the materials used in the cling layers, in order to explore structure-property relationships that may shed light on the molecular mechanisms involved in adhesion and debonding processes. In the present case, the adhesive is not directly available as a separate material that can be placed between two test surfaces and measured to determine its properties. The main assumption is that a part, or phase, of the complex semi-crystalline structure of PE can act as the adhesive, and an important result of this study may be a better identification and characterization of this "adhesive phase".
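A sketch of the fracture mechanics bookkeeping behind peel testing: for an inextensible, elastic peel arm the adhesion energy is G = (P/b)(1 − cos θ); the corrections for arm stretching and plastic bending that highly deformable PE films require are exactly what makes the analysis above non-trivial. Force and width values are illustrative:

```python
import math

def peel_energy(P, b, theta_deg):
    """Ideal peel energy G = (P/b) * (1 - cos(theta)) for an
    inextensible, elastic peel arm (no plastic correction)."""
    theta = math.radians(theta_deg)
    return (P / b) * (1.0 - math.cos(theta))

# e.g. 0.5 N peel force on a 25 mm wide strip, peeled at 90 and 180 degrees
for angle in (90, 180):
    G = peel_energy(P=0.5, b=0.025, theta_deg=angle)
    print(f"theta = {angle} deg: G = {G:.1f} J/m^2")
```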

Relevance: 60.00%

Abstract:

Partner notification (PN, or contact tracing) is an important aspect of treating bacterial sexually transmitted infections (STIs) such as Chlamydia trachomatis. It facilitates the identification of newly infected cases that can be treated through individual case management. PN also acts indirectly by limiting onward transmission in the general population. However, the impact of PN, both at the level of individuals and of the population, remains unclear. Since it is difficult to study the effects of PN empirically, mathematical and computational models are useful tools for investigating its potential as a public health intervention. To this end, we developed an individual-based modeling framework called Rstisim. It allows the implementation of different models of STI transmission with various levels of complexity and the reconstruction of the complete dynamic sexual partnership network over any time period. A key feature of this framework is that we can trace an individual's partnership history in detail and investigate the outcome of different PN strategies for C. trachomatis. For individual case management, the results suggest that notifying three or more partners from the preceding 18 months yields substantial numbers of new cases. In contrast, the successful treatment of current partners is most important for preventing re-infection of index cases and reducing further transmission of C. trachomatis at the population level. The findings of this study demonstrate the difference between individual-level and population-level outcomes of public health interventions for STIs.
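A toy sketch of the bookkeeping such a framework needs: a partnership history per individual, from which a notification strategy selects partners in a look-back window. The data layout, window and cap are illustrative assumptions, not the Rstisim implementation:

```python
from dataclasses import dataclass

@dataclass
class Partnership:
    partner_id: int
    start_month: float
    end_month: float  # use float('inf') for an ongoing partnership

def notifiable_partners(history, now, lookback_months=18, max_notified=3):
    """Partners whose partnership overlaps the look-back window,
    most recent first, capped at max_notified (illustrative strategy)."""
    window_start = now - lookback_months
    recent = [p for p in history if min(p.end_month, now) >= window_start]
    recent.sort(key=lambda p: min(p.end_month, now), reverse=True)
    return recent[:max_notified]

history = [Partnership(1, 0, 4), Partnership(2, 10, 20),
           Partnership(3, 22, float("inf"))]
for p in notifiable_partners(history, now=24):
    print("notify partner", p.partner_id)
# partner 1 (ended 20 months ago) falls outside the 18-month window
```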

Relevance: 60.00%

Abstract:

Mr. Pechersky set out to examine a specific feature of the employer-employee relationship in Russian business organisations. He wanted to study to what extent the so-called "moral hazard" problem is being solved (if it is being solved at all), whether there is a relationship between pay and performance, and whether there is a correlation between economic theory and Russian reality. Finally, he set out to construct a model of the Russian economy that better reflects the way it actually functions than do certain other well-known models (for example models of incentive compensation, the Shapiro-Stiglitz model etc.). His report was presented to the RSS in the form of a series of manuscripts in English and Russian, and on disc, with many tables and graphs. He begins by pointing out the different examples of randomness that exist in the relationship between employee and employer. Firstly, results are frequently affected by circumstances outside the employee's control that have nothing to do with how intelligently, honestly, and diligently the employee has worked. When rewards are based on results, uncontrollable randomness in the employee's output induces randomness in their incomes. A second source of randomness involves outside events beyond the employee's control that may affect his or her ability to perform as contracted. A third source of randomness arises when the performance itself (rather than the result) is measured, and the performance evaluation procedures include random or subjective elements. Mr. Pechersky's study shows that in Russia the third source of randomness plays an important role. Moreover, he points out that employer-employee relationships in Russia are sometimes opposite to those in the West. Drawing on game theory, he characterises the Western system as follows. The two players are the principal and the agent, who are usually representative individuals. The principal hires an agent to perform a task, and the agent acquires an information advantage concerning his actions or the outside world at some point in the game, i.e. it is assumed that the employee is better informed. In Russia, on the other hand, incentive contracts are typically negotiated in situations in which the employer has the information advantage concerning the outcome. Mr. Pechersky schematises it thus. Compensation (the wage) is W and consists of a base amount plus a portion that varies with the outcome, x. So W = a + bx, where b is used to measure the intensity of the incentives provided to the employee. This means that one contract will be said to provide stronger incentives than another if it specifies a higher value for b. This is the incentive contract as it operates in the West. The key feature distinguishing the Russian example is that x is observed by the employer but not by the employee. The employer promises to pay in accordance with an incentive scheme, but since the outcome is not observable by the employee, the contract cannot be enforced, and the question arises: is there any incentive for the employer to fulfil his or her promises? Mr. Pechersky considers two simple models of employer-employee relationships displaying the above type of information asymmetry. In a static framework the result obtained is somewhat surprising: at the Nash equilibrium the employer pays nothing, even though his objective function contains a quadratic term reflecting negative consequences for the employer if the actual level of compensation deviates from the expectations of the employee.
This can lead, for example, to labour turnover, or to the expenses resulting from a bad reputation. In a dynamic framework, the conclusion can be formulated as follows: the higher the discount factor, the higher the incentive for the employer to be honest in his or her relationships with the employee. If the discount factor is taken to be a parameter reflecting the degree of (un)certainty (the higher the degree of uncertainty, the lower the discount factor), we can conclude that the answer to the question posed above depends on the stability of the political, social and economic situation in a country. Mr. Pechersky believes that the strength of a market system with private property lies not just in its providing the information needed to compute an efficient allocation of resources. At least equally important is the manner in which it accepts individually self-interested behaviour, but then channels this behaviour in desired directions. People do not have to be cajoled, artificially induced, or forced to do their parts in a well-functioning market system. Instead, they are simply left to pursue their own objectives as they see fit. Under the right circumstances, people are led by Adam Smith's "invisible hand" of impersonal market forces to take the actions needed to achieve an efficient, co-ordinated pattern of choices. The problem, as Mr. Pechersky sees it, is that there is no reason to believe that the circumstances in Russia are right and that the invisible hand is doing its work properly. Political instability, social tension and other circumstances prevent it from doing so. Mr. Pechersky believes that the discount factor plays a crucial role in employer-employee relationships. Such relationships can be considered satisfactory from a normative point of view only in those cases where the discount factor is sufficiently large. Unfortunately, in modern Russia the evidence points to the typical discount factor being relatively small. This fact can be explained as a manifestation of economic agents' risk aversion. Mr. Pechersky hopes that when political stabilisation occurs, the discount factors of economic agents will increase, and their behaviour will be explicable in terms of more traditional models.
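A sketch of the repeated-game logic behind that conclusion: the employer honors the promised W = a + bx whenever the one-shot gain from reneging is outweighed by the discounted stream of reputation losses. The payoff parameters are hypothetical, chosen only to show the threshold behaviour:

```python
def employer_is_honest(delta, one_shot_gain, per_period_loss):
    """Honesty is sustainable when the discounted future losses from a
    bad reputation, per_period_loss * delta / (1 - delta), exceed the
    immediate gain from withholding the promised pay."""
    return per_period_loss * delta / (1 - delta) >= one_shot_gain

# hypothetical payoffs: reneging saves 10 now, costs 2 in each later period
for delta in (0.5, 0.8, 0.9):
    print(f"delta = {delta}: honest = {employer_is_honest(delta, 10.0, 2.0)}")
# delta = 0.5, 0.8 -> False; delta = 0.9 -> True (threshold is delta = 5/6),
# i.e. more stable environments support honest employer behaviour
```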

Relevance: 60.00%

Abstract:

Phyllotaxis, the regular arrangement of leaves and flowers around the stem, is a key feature of plant architecture. Current models propose that the spatiotemporal regulation of organ initiation is controlled by a positive feedback loop between the plant hormone auxin and its efflux carrier PIN-FORMED1 (PIN1). Consequently, pin1 mutants give rise to naked inflorescence stalks with few or no flowers, indicating that PIN1 plays a crucial role in organ initiation. However, pin1 mutants do produce leaves. In order to understand the regulatory mechanisms controlling leaf initiation in Arabidopsis (Arabidopsis thaliana) rosettes, we have characterized the vegetative pin1 phenotype in detail. We show that although the timing of leaf initiation in vegetative pin1 mutants is variable and divergence angles clearly deviate from the canonical 137° value, leaves are not positioned at random during early developmental stages. Our data further indicate that other PIN proteins are unlikely to explain the persistence of leaf initiation and positioning during pin1 vegetative development. Thus, phyllotaxis appears to be more complex than suggested by current mechanistic models.
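For reference, the canonical divergence angle derives from the golden ratio; a two-line computation (standard mathematics, not the paper's data):

```python
import math

phi = (1 + math.sqrt(5)) / 2           # golden ratio
golden_angle = 360 * (1 - 1 / phi)     # canonical divergence angle
print(f"golden angle = {golden_angle:.1f} degrees")  # ~137.5, the value
# from which the measured pin1 divergence angles are reported to deviate
```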

Relevance: 60.00%

Abstract:

There is an emerging interest in modeling spatially correlated survival data in biomedical and epidemiological studies. In this paper, we propose a new class of semiparametric normal transformation models for right-censored spatially correlated survival data. This class of models assumes that survival outcomes marginally follow a Cox proportional hazards model with an unspecified baseline hazard, and that their joint distribution is obtained by transforming survival outcomes to normal random variables whose joint distribution is assumed to be multivariate normal with a spatial correlation structure. A key feature of the class of semiparametric normal transformation models is that it provides a rich class of spatial survival models in which regression coefficients have a population-average interpretation and the spatial dependence of survival times is conveniently modeled, via the transformed variables, by flexible normal random fields. We study the relationship between the spatial correlation structure of the transformed normal variables and the dependence measures of the original survival times. Direct nonparametric maximum likelihood estimation in such models is practically prohibited by the high-dimensional, intractable integration of the likelihood function and the infinite-dimensional nuisance baseline hazard parameter. We hence develop a class of spatial semiparametric estimating equations, which conveniently estimate the population-level regression coefficients and the dependence parameters simultaneously. We study the asymptotic properties of the proposed estimators and show that they are consistent and asymptotically normal. The proposed method is illustrated with an analysis of data from the East Boston Asthma Study, and its performance is evaluated using simulations.
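A sketch of the generative side of such a model: simulate a latent multivariate normal field with an exponential spatial correlation, then push it through the normal CDF and a marginal survival distribution. The kernel, range parameter and exponential marginal are illustrative stand-ins for the paper's semiparametric Cox marginal:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))          # spatial locations

# exponential spatial correlation (illustrative range parameter)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
Sigma = np.exp(-d / 2.0)

z = rng.multivariate_normal(np.zeros(n), Sigma)   # latent normal field
u = norm.cdf(z)                                   # uniform marginals
t = -np.log(1 - u) / 0.1                          # exponential(0.1) survival times

c = rng.exponential(15.0, size=n)                 # right-censoring times
time, event = np.minimum(t, c), (t <= c).astype(int)
print(f"censoring rate: {1 - event.mean():.0%}")
```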

Relevance: 60.00%

Abstract:

OBJECTIVE: Chromosomal instability is a key feature of hepatocellular carcinoma (HCC). Array comparative genomic hybridization (aCGH) revealed recurring structural aberrations, whereas fluorescence in situ hybridization (FISH) indicated an increasing number of numerical aberrations in dedifferentiating HCC. We therefore examined whether there was a correlation between the structural and numerical aberrations of chromosomal instability in HCC. METHODS AND RESULTS: 27 HCC (5 well, 10 moderately, 12 poorly differentiated) already cytogenetically characterized by aCGH were analyzed. FISH analysis using probes for chromosomes 1, 3, 7, 8 and 17 revealed 1.46-4.24 signals/nucleus, which correlated with the histological grade (well vs. moderately, p < 0.0003; moderately vs. poorly, p < 0.004). The numbers of these chromosomes relative to each other were stable, with exceptions seen only for chromosome 8. Losses of 4q and 13q, respectively, were correlated with the number of aberrations detected by aCGH (p < 0.001, p < 0.005; Mann-Whitney test). Loss of 4q and gain of 8q were correlated with an increasing number of numerical aberrations detected by FISH (p < 0.020, p < 0.031). Loss of 8p was correlated with the number of structural imbalances seen in aCGH (p < 0.048), but not with the number of numerical changes seen in FISH. CONCLUSION: We found that losses of 4q, 8p and 13q were closely correlated with an increasing number of aberrations detected by aCGH, whereas loss of 4q and gain of 8q were also observed in the context of polyploidization, the cytogenetic correlate of morphological dedifferentiation.

Relevance: 60.00%

Abstract:

Writing center scholarship and practice have approached how issues of identity influence communication, but have not fully considered ways of making identity a key feature of writing center research or practice. This dissertation suggests a new way to view identity: through an experience of "multimembership", the consideration that each identity is constructed from the numerous community memberships that make it up. Etienne Wenger (1998) proposes that a fully formed identity is ultimately impossible, but it is through the work of reconciling memberships that important individual and community transformations can occur. Since Wenger also argues that reconciliation "is the most significant challenge" for those moving into new communities of practice (groups that "engage in a process of collective learning in a shared domain of human endeavor" (4)), yet this challenge often remains tacit, this dissertation examines and makes explicit how this important work is done at two different research sites: a university writing center (the Michigan Tech Multiliteracies Center) and a multinational corporation (Kimberly-Clark Corporation). Drawing extensively on qualitative ethnographic methods, including interview transcriptions, observations, and case studies, as well as work from scholars in writing center studies (Grimm, Denney, Severino), literacy studies (New London Group, Street, Gee), composition (Horner and Trimbur, Canagarajah, Lu), rhetoric (Crowley), and identity studies (Anzaldua, Pratt), I argue that, based on evidence from the two sites, writing centers need to educate tutors not only to take identity into consideration, but also to make individuals' reconciliation work more visible, as it will continue once students and tutors leave the university. Further, as my research at the Michigan Tech Multiliteracies Center and Kimberly-Clark shows, communities can (and should) change their practices in ways that account for reconciliation work, as identity, communication, and learning are inextricably bound up with one another.

Relevance: 60.00%

Abstract:

This dissertation presents experimental and numerical investigations of combustion initiation triggered by electrical-discharge-induced plasma within lean and dilute methane-air mixtures. This research topic is of interest due to its potential to further the understanding and prediction of spark ignition quality in high-efficiency gasoline engines, which operate with lean and dilute fuel-air mixtures. It is argued in this dissertation that the plasma-to-flame transition is the key process during the spark ignition event, yet it is also the most complicated and least understood one. The investigation therefore focuses on the overlapping period when plasma and flame both exist in the system. The experimental study is divided into two parts. Part I focuses on the flame kernel resulting from the electrical discharge. A number of external factors are found to affect the growth of the flame kernel, resulting in complex correlations between discharge and flame kernel. Heat loss from the flame kernel to the cold ambient is found to be a dominant factor that quenches the flame kernel. The other experimental focus is on the plasma channel. Electrical discharges into gases induce intense and highly transient plasma. Detailed observation of the size and contents of the discharge-induced plasma channel is performed. Given the complex correlations and the multi-disciplinary physical/chemical processes involved in the plasma-flame transition, the modeling principle adopted is to reproduce the detailed transition numerically with minimal analytical assumptions. The detailed measurements obtained from the experimental work facilitate a more accurate description of the initial reaction conditions. The novel and unique spark source, considering both energy and species deposition, is defined in a justified manner and is the key feature of this Ignition by Plasma (IBP) model. The results of the numerical simulation are intuitive, and the potential of numerical simulation to better resolve the complex spark ignition mechanism is presented. Meanwhile, imperfections of the IBP model and the numerical simulation have been identified and will be addressed in future work.
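A minimal sketch of the competition described above between discharge energy deposition and heat loss to the cold ambient, written as a zero-dimensional kernel energy balance. All coefficients are illustrative placeholders, not the IBP model's values:

```python
import numpy as np

def kernel_temperature(P_dep, t_dep=1e-4, h=5e-3, T_amb=300.0,
                       C=1e-6, dt=1e-6, t_end=5e-4):
    """0-D energy balance C dT/dt = P(t) - h (T - T_amb): deposition
    power P while the discharge lasts, then pure heat loss (explicit Euler)."""
    T = T_amb
    for t in np.arange(0.0, t_end, dt):
        P = P_dep if t < t_dep else 0.0
        T += dt * (P - h * (T - T_amb)) / C
    return T

for P in (5.0, 20.0):  # weak vs. strong discharge, W (illustrative)
    print(f"P = {P:>4} W -> T(0.5 ms) = {kernel_temperature(P):6.0f} K")
# too little deposition leaves the kernel near ambient: it quenches
# before a self-sustained flame can take over
```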

Relevance: 60.00%

Abstract:

Recent studies using diffusion tensor imaging (DTI) have advanced our knowledge of the organization of white matter subserving language function. It remains unclear, however, how DTI may be used to predict accurately a key feature of language organization: its asymmetric representation in one cerebral hemisphere. In this study of epilepsy patients with unambiguous lateralization on Wada testing (19 left- and 4 right-lateralized subjects; no bilateral subjects), the predictive value of DTI for classifying the dominant hemisphere for language was assessed relative to the existing standard, the intra-carotid Amytal (Wada) procedure. Our specific hypothesis is that language laterality in both unilateral left- and right-hemisphere language-dominant subjects may be predicted by hemispheric asymmetry in the relative density of three white matter pathways terminating in the temporal lobe and implicated in different aspects of language function: the arcuate (AF), uncinate (UF), and inferior longitudinal fasciculi (ILF). Laterality indices computed from the asymmetry of high-anisotropy AF pathways, but not of the other pathways, classified the majority (19 of 23) of patients using the Wada results as the standard. A logistic regression model incorporating information from DTI of the AF, fMRI activity in Broca's area, and handedness was able to classify 22 of 23 (95.6%) patients correctly according to their Wada score. We conclude that evaluation of the highly anisotropic components of the AF alone has significant predictive power for determining language laterality, and that this markedly asymmetric distribution in the dominant hemisphere may reflect enhanced connectivity between frontal and temporal sites supporting fluent language processes. Given the small sample reported in this preliminary study, future research should assess this method on a larger group of patients, including subjects with bi-hemispheric dominance.
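A sketch of the two quantitative steps named above: a laterality index from left/right streamline counts, and a logistic regression combining it with an fMRI measure and handedness. The data are synthetic, and scikit-learn's LogisticRegression stands in for whatever model the authors fit:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def laterality_index(left_count, right_count):
    """LI = (L - R) / (L + R): +1 fully left-, -1 fully right-lateralized."""
    return (left_count - right_count) / (left_count + right_count)

rng = np.random.default_rng(0)
n = 23
wada = np.array([0] * 19 + [1] * 4)             # 0 = left, 1 = right dominant
left = rng.poisson(np.where(wada == 0, 800, 300))   # AF streamline counts
right = rng.poisson(np.where(wada == 0, 300, 800))
af_li = laterality_index(left, right)
fmri_li = af_li + rng.normal(0, 0.3, n)         # fMRI laterality, Broca's area
handedness = rng.choice([0, 1], size=n)         # 0 = right-, 1 = left-handed

X = np.column_stack([af_li, fmri_li, handedness])
clf = LogisticRegression(max_iter=1000).fit(X, wada)
print("training accuracy:", clf.score(X, wada))
```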