7 results for Measurement based model identification

in Helda - Digital Repository of the University of Helsinki


Relevance:

100.00%

Abstract:

In recent reports, adolescents and young adults (AYA) with acute lymphoblastic leukemia (ALL) have had a better outcome with pediatric treatment than with adult protocols. ALL can be classified into biologic subgroups according to immunophenotype and cytogenetics, with different clinical characteristics and outcomes. The proportions of the subgroups differ between children and adults, and ALL subtypes in AYA patients are less well characterized. In this study, the treatment and outcome of ALL in AYA patients aged 10-25 years in Finland on pediatric and adult protocols were retrospectively analyzed. In total, 245 patients were included. The proportions of biologic subgroups in different age groups were determined. Patients with an initially normal or failed karyotype were examined with oligonucleotide microarray-based comparative genomic hybridization (aCGH). Deletions and instability of chromosome 9p were also screened for in ALL patients, and patients with other hematologic malignancies were screened for 9p instability. The aCGH data were also used to determine a gene set that classifies AYA patients at diagnosis according to their risk of relapse; receiver operating characteristic analysis was used to assess the value of this gene set as a prognostic classifier. The 5-year event-free survival of AYA patients treated with pediatric or adult protocols was 67% and 60% (p=0.30), respectively. A white blood cell count above 100×10⁹/l was associated with poor prognosis. Patients treated with pediatric protocols and assigned to an intermediate-risk group fared significantly better than those in the pediatric high-risk or adult treatment groups. Deletions of 9p were detected in 46% of AYA ALL patients. The chromosomal region 9p21.3 was always affected, and the CDKN2A gene was always deleted. In about 15% of AYA patients, the 9p21.3 deletion was smaller than 200 kb and therefore probably undetectable with conventional methods.
Deletion of 9p was the most common aberration in AYA ALL patients with an initially normal karyotype. Instability of 9p, defined as multiple separate areas of copy number loss or homozygous loss within a larger heterozygous area in 9p, was detected in 19% (n=27) of ALL patients. This abnormality was restricted to ALL; none of the patients with other hematologic malignancies had the aberration. The prognostic model identification procedure resulted in a model of four genes: BAK1, CDKN2B, GSTM1, and MT1F. The copy number profile combinations of these genes differentiated between AYA ALL patients at diagnosis according to their risk of relapse: deletions of CDKN2B and BAK1 in combination with amplification of GSTM1 and MT1F were associated with a higher probability of relapse. In contrast to previous studies, we found that the outcomes of AYA patients with ALL treated using pediatric or adult therapeutic protocols were comparable. The success of adult ALL therapy emphasizes the benefit of referring patients to academic centers and of adherence to research protocols. 9p deletions and instability are common features of ALL and may act together with oncogene-activating translocations in leukemogenesis. New and more sensitive molecular cytogenetic methods can reveal previously cryptic genetic aberrations that play an important role in leukemic development and prognosis and that may be potential targets of therapy. aCGH also provides a viable approach for designing models that evaluate the risk of relapse in ALL.
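The receiver operating characteristic analysis used to assess the four-gene classifier can be sketched with a rank-based AUC estimate. The copy-number profiles, risk weights, and relapse labels below are hypothetical illustrations, not data or fitted parameters from the study:

```python
import numpy as np

def roc_auc(scores, labels):
    """Rank-based (Mann-Whitney) estimate of the area under the ROC curve."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Fraction of (positive, negative) pairs ranked correctly; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical copy-number calls (-1 = deletion, 0 = normal, +1 = amplification)
# for BAK1, CDKN2B, GSTM1, MT1F; the weighting is illustrative only.
cn = np.array([
    [-1, -1, +1, +1],   # profile the text associates with higher relapse risk
    [ 0, -1,  0, +1],
    [ 0,  0,  0,  0],
    [-1,  0, +1,  0],
    [ 0,  0, -1,  0],
])
relapse = np.array([1, 1, 0, 0, 0])

# Risk score: deletions of BAK1/CDKN2B and gains of GSTM1/MT1F raise the score.
weights = np.array([-1, -1, +1, +1])
risk = cn @ weights
print(roc_auc(risk, relapse))
```

A perfect classifier yields an AUC of 1.0, a random one 0.5; the thesis's classifier would be evaluated the same way on real copy-number profiles.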

Relevance:

100.00%

Abstract:

Microbes in natural and artificial environments, as well as in the human body, are a key part of the functional properties of these complex systems. The presence or absence of certain microbial taxa is a correlate of functional status, such as the risk of disease or the course of the metabolic processes of a microbial community. As microbes are highly diverse and mostly not cultivable, molecular markers such as gene sequences are a potential basis for detecting and identifying key types. The goal of this thesis was to study molecular methods for identifying microbial DNA in order to develop a tool for analyzing environmental and clinical DNA samples. Particular emphasis was placed on specificity of detection, which is a major challenge when analyzing complex microbial communities. The approach taken in this study was the application and optimization of enzymatic ligation of DNA probes coupled with microarray read-out for high-throughput microbial profiling. The results show that fungal phylotypes and human papillomavirus genotypes could be accurately identified from pools of PCR amplicons generated from purified sample DNA. Approximately 1 ng/μl of sample DNA was needed for representative PCR amplification, as measured by comparisons between clone sequencing and microarray. A minimum of 0.25 amol/μl of PCR amplicons was detectable amongst 5 ng/μl of background DNA, suggesting that the detection limit of the assay, comprising a ligation reaction followed by microarray read-out, was approximately 0.04%. Direct detection from sample DNA was shown to be feasible with probes that form a circular molecule upon ligation, followed by PCR amplification of the probe. In this approach, the minimum detectable relative amount of target genome was found to be 1% of all genomes in the sample, as estimated from 454 deep sequencing results.
The signal-to-noise ratio of contact-printed microarrays could be improved by using an internal microarray hybridization control oligonucleotide probe together with a computational algorithm. The algorithm was based on identifying a bias in the microarray data and correcting for it, as shown with simulated and real data. The results further suggest that semiquantitative detection is possible with ligation detection, allowing estimation of target abundance in a sample. In practice, however, comprehensive sequence information on full-length rRNA genes is needed to support probe design for complex samples. This study shows that the DNA microarray has the potential to serve as an accurate microbial diagnostic platform that takes advantage of increasing sequence data and replaces traditional, less efficient methods that still dominate routine testing in laboratories. The data suggest that a ligation-reaction-based microarray assay can be optimized to a degree that allows good signal-to-noise and semiquantitative detection.
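The internal-control idea can be illustrated with a minimal sketch: a control probe spiked onto every array at a known target intensity is used to estimate a per-array bias, which is then removed from all probe signals. The probe names, values, and the simple multiplicative bias model are illustrative assumptions, not the thesis's exact algorithm:

```python
# Minimal bias-correction sketch, assuming a purely multiplicative per-array
# bias estimated from a single internal hybridization control probe.

CONTROL_EXPECTED = 1000.0  # known nominal intensity of the control probe

def correct_bias(signals, control_observed):
    """Rescale probe signals by the bias estimated from the control probe."""
    bias = control_observed / CONTROL_EXPECTED
    return {probe: value / bias for probe, value in signals.items()}

# Hypothetical raw intensities from one array that reads twice too "hot".
raw = {"fungal_probe_1": 420.0, "fungal_probe_2": 60.0, "hpv16_probe": 1300.0}
corrected = correct_bias(raw, control_observed=2000.0)
print(corrected["fungal_probe_1"])  # 210.0
```

A real implementation would model the bias from simulated and observed data rather than assume it is a single multiplicative factor, but the control-probe anchoring is the same.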

Relevance:

100.00%

Abstract:

The thesis concentrates on two questions: the translation of metaphors in literary texts, and the use of semiotic models and tools in translation studies. The aim of the thesis is to present a semiotic, text-based model designed to ease the translation of metaphors and to analyze translated metaphors. In the translation of metaphors I will concentrate on the central problem of metaphor translation: in addition to its denotation and connotation, a single metaphor may contain numerous culture- or genre-specific meanings. How can a translator ensure the translation of all meanings relevant to the text as a whole? I will approach the question from two directions. Umberto Eco's holistic text analysis model provides an opportunity to examine the problematic nature of metaphor translation at the level of the text as a specific entity, while George Lakoff's and Mark Johnson's metaphor research makes it possible to approach the question at the level of individual metaphors. On the semiotic side, the model utilizes Eero Tarasti's existential semiotics, supported by Algirdas Greimas' actant model and Yuri Lotman's theory of cultural semiotics. In the model introduced in the thesis, individual texts are deconstructed through Eco's model into elements. The textual roles and features of these elements are distilled further through Tarasti's model into their coexistent meaning levels. The prioritization and analysis of these meaning levels provide an opportunity to consider the contents and significance of specific metaphors in relation to the needs of the text as a whole. As example texts, I will use Motörhead's hard rock classic Iron Horse/Born to Lose and its translation into Rauta-airot by Viikate. I will use the introduced model to analyze the metaphors in the source and target texts, and to consider the transfer of culture-specific elements across languages and cultural borders.
In addition, I will use the analysis process to examine the validity of the model introduced in the thesis.

Relevance:

100.00%

Abstract:

Frictions are factors that hinder the trading of securities in financial markets. Typical frictions include limited market depth, transaction costs, lack of infinite divisibility of securities, and taxes. Conventional models used in mathematical finance often gloss over these issues, which affect almost all financial markets, by arguing that the impact of frictions is negligible and that, consequently, frictionless models are valid approximations. This dissertation consists of three research papers related to the study of the validity of such approximations in two distinct modeling problems. Models of price dynamics based on diffusion processes, i.e., continuous strong Markov processes, are widely used in the frictionless scenario. The first paper establishes that diffusion models can indeed be understood as approximations of price dynamics in markets with frictions. This is achieved by introducing an agent-based model of a financial market in which finitely many agents trade a financial security whose price evolves according to the price impacts generated by trades. It is shown that, if the number of agents is large, then under certain assumptions the price process of the security, which is a pure-jump process, can be approximated by a one-dimensional diffusion process. In a slightly extended model, in which agents may exhibit herd behavior, the approximating diffusion model turns out to be a stochastic volatility model. Finally, it is shown that when the agents' tendency to herd is strong, logarithmic returns in the approximating stochastic volatility model are heavy-tailed. The remaining papers are related to no-arbitrage criteria and superhedging in continuous-time option pricing models under small-transaction-cost asymptotics.
Guasoni, Rásonyi, and Schachermayer have recently shown that, in such a setting, any financial security admits no arbitrage opportunities and there exist no feasible superhedging strategies for European call and put options written on it, as long as its price process is continuous and has the so-called conditional full support (CFS) property. Motivated by this result, CFS is established for certain stochastic integrals and a subclass of Brownian semistationary processes in the two papers. As a consequence, a wide range of possibly non-Markovian local and stochastic volatility models have the CFS property.
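The diffusion-approximation idea from the first paper can be illustrated with a toy pure-jump process: with n agents, each trade moves the price by ±1/√n and trades arrive n times per unit of time, so for large n the path approaches a Brownian motion (Donsker's invariance principle). The agent-based model in the dissertation is richer (state-dependent jump rates, herding), but the scaling intuition is the same; all numbers below are illustrative:

```python
import random

def jump_price_path(n_agents, horizon=1.0, seed=0):
    """Toy pure-jump price path whose diffusion limit is Brownian motion."""
    rng = random.Random(seed)
    step = n_agents ** -0.5            # jump size shrinks with market size
    n_jumps = int(n_agents * horizon)  # jump frequency grows with market size
    price = 0.0
    path = [price]
    for _ in range(n_jumps):
        price += step if rng.random() < 0.5 else -step
        path.append(price)
    return path

# For a standard Brownian motion, the variance of the terminal value equals
# the time horizon (here 1.0); the scaled jump process should be close.
terminal = [jump_price_path(10_000, seed=s)[-1] for s in range(200)]
var = sum(x * x for x in terminal) / len(terminal)
print(var)
```

Adding state-dependent jump rates (herding) to such a model is what produces a stochastic volatility diffusion limit instead of plain Brownian motion.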

Relevance:

100.00%

Abstract:

Molecular motors are proteins that convert chemical energy into mechanical work. The viral packaging ATPase P4 is a hexameric molecular motor that translocates RNA into preformed viral capsids. P4 belongs to the ubiquitous class of hexameric helicases. Although its structure is known, the mechanism of RNA translocation remains elusive. Here we present a detailed kinetic study of nucleotide binding, hydrolysis, and product release by P4. We propose a stochastic-sequential cooperative model to describe the coordination of ATP hydrolysis within the hexamer. In this model the apparent cooperativity is a result of hydrolysis stimulation by ATP and RNA binding to neighboring subunits rather than cooperative nucleotide binding. Simultaneous interaction of neighboring subunits with RNA makes the otherwise random hydrolysis sequential and processive. Further, we use hydrogen/deuterium exchange detected by high resolution mass spectrometry to visualize P4 conformational dynamics during the catalytic cycle. Concerted changes of exchange kinetics reveal a cooperative unit that dynamically links ATP binding sites and the central RNA binding channel. The cooperative unit is compatible with the structure-based model in which translocation is effected by conformational changes of a limited protein region. Deuterium labeling also discloses the transition state associated with RNA loading which proceeds via opening of the hexameric ring. Hydrogen/deuterium exchange is further used to delineate the interactions of the P4 hexamer with the viral procapsid. P4 associates with the procapsid via its C-terminal face. The interactions stabilize subunit interfaces within the hexamer. The conformation of the virus-bound hexamer is more stable than the hexamer in solution, which is prone to spontaneous ring openings. We propose that the stabilization within the viral capsid increases the packaging processivity and confers selectivity during RNA loading. 
Finally, we use single molecule techniques to characterize P4 translocation along RNA. While the P4 hexamer encloses RNA topologically within the central channel, it diffuses randomly along the RNA. In the presence of ATP, unidirectional net movement is discernible in addition to the stochastic motion. The diffusion is hindered by activation energy barriers that depend on the nucleotide binding state. The results suggest that P4 employs an electrostatic clutch instead of cycling through stable, discrete, RNA binding states during translocation. Conformational changes coupled to ATP hydrolysis modify the electrostatic potential inside the central channel, which in turn biases RNA motion in one direction. Implications of the P4 model for other hexameric molecular motors are discussed.
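The stochastic-sequential cooperative model described above can be sketched as a Gillespie-style Monte Carlo over the six subunits: hydrolysis in each subunit is intrinsically random, but an ATP-loaded neighbouring subunit stimulates it, making the otherwise random firing order largely sequential. The rate constants and the choice of which neighbour stimulates are illustrative assumptions, not fitted values from the kinetic study:

```python
import random

BASE_RATE = 1.0      # intrinsic hydrolysis rate (arbitrary units)
STIMULATION = 20.0   # extra rate when the next subunit around the ring is ATP-bound

def hydrolysis_order(seed=0):
    """Simulate one round of hydrolysis over a hexamer; return firing order."""
    rng = random.Random(seed)
    atp = [True] * 6                 # all six subunits start ATP-loaded
    order = []
    while any(atp):
        # A loaded subunit hydrolyses faster if its ring neighbour is loaded.
        rates = [
            (BASE_RATE + STIMULATION * atp[(i + 1) % 6]) if atp[i] else 0.0
            for i in range(6)
        ]
        total = sum(rates)
        # Gillespie-style choice of the next subunit to hydrolyse.
        r, acc, chosen = rng.random() * total, 0.0, 0
        for i, rate in enumerate(rates):
            acc += rate
            if r <= acc:
                chosen = i
                break
        atp[chosen] = False
        order.append(chosen)
    return order

print(hydrolysis_order())
```

With strong stimulation the firing sequence tends to run around the ring; setting STIMULATION to 0 recovers a fully random order, which is the sense in which the apparent cooperativity arises from neighbour stimulation rather than cooperative binding.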

Relevance:

100.00%

Abstract:

This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based, model-assisted estimators. All GREG estimators utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example. When estimating class frequencies, however, the study variable is binary or polytomous, so logistic-type assisting models (e.g. logistic or probit models) should be preferred over the linear one. Nevertheless, GREG estimators other than GREG-lin are rarely used, and knowledge of their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, which cover both equal- and unequal-probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. In the case of a strong assisting model, however, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors rather than of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG, especially if the domain sample size is small or the assisting model is strong.
Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample fit model and the census fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error and coverage rate. Thus the new estimator provides a good alternative to the standard estimator.
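The GREG construction the abstract builds on can be sketched for a class frequency under simple random sampling: fit the assisting model on the sample, sum its predictions over the whole population, and add the design-weighted sum of sample residuals. The sketch below is GREG-lin; an L-GREG estimator would replace the least-squares fit with a logistic or probit fit, everything else staying the same. The population, auxiliary variable, and sample design are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 10_000, 500
x = rng.normal(size=N)                        # auxiliary variable, known for all N units
y = (x + rng.normal(size=N)) > 0              # binary study variable (class membership)

sample = rng.choice(N, size=n, replace=False)
w = N / n                                     # design weight under simple random sampling

# Assisting model fitted from the sample: y ~ a + b*x. Weights are equal
# under SRS, so plain least squares suffices here (GREG-lin).
X_s = np.column_stack([np.ones(n), x[sample]])
beta, *_ = np.linalg.lstsq(X_s, y[sample].astype(float), rcond=None)

# GREG estimator of the class frequency: population sum of predictions
# plus the design-weighted sum of sample residuals.
X_U = np.column_stack([np.ones(N), x])
t_greg = (X_U @ beta).sum() + (w * (y[sample] - X_s @ beta)).sum()
t_ht = w * y[sample].sum()                    # Horvitz-Thompson, for comparison

print(round(float(t_greg)), int(t_ht), int(y.sum()))
```

The residual term makes the estimator design-consistent even when the assisting model is wrong, which is the defining property of the model-assisted approach.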

Relevance:

100.00%

Abstract:

Detecting Earnings Management Using Neural Networks. In trying to balance relevant and reliable accounting data, generally accepted accounting principles (GAAP) allow the company management, to some extent, to use their judgment and make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods have been suggested for detecting accrual-based earnings management, the majority of them based on linear regression. The problem with linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed; however, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. An alternative to linear regression that can handle non-linear relationships is the neural network. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables, and the discretionary accruals in the highest and lowest quartiles for these six variables are compared. Third, a data set containing simulated earnings management is used; both expense and revenue manipulation ranging between -5% and 5% of lagged total assets are simulated. Furthermore, two neural network-based models and two linear regression-based models are applied to a data set containing financial statement data from 110 failed companies.
Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
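The Jones (1991) model that all seven compared models build on can be sketched as follows: total accruals, deflated by lagged total assets, are regressed on the change in revenue and on gross property, plant and equipment, and the residual is the estimate of discretionary accruals. The neural-network variants in the thesis replace this linear fit with a feed-forward back-propagation network (one model also adds ROA as an input); the financial data below are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200
lag_assets = rng.uniform(50, 500, size=m)    # lagged total assets (deflator)
d_rev = rng.normal(scale=20, size=m)         # change in revenue
ppe = rng.uniform(10, 300, size=m)           # gross property, plant & equipment

# Simulated "normal" accruals plus noise; no earnings management injected.
total_accruals = 0.05 * d_rev - 0.03 * ppe + rng.normal(scale=2, size=m)

# Jones model regressors, all deflated by lagged total assets (no intercept).
X = np.column_stack([1 / lag_assets, d_rev / lag_assets, ppe / lag_assets])
y = total_accruals / lag_assets
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Residuals are the estimated discretionary accruals; by construction of
# least squares they are orthogonal to the regressors.
discretionary = y - X @ coef
print(np.abs(X.T @ discretionary).max() < 1e-8)  # True
```

A non-linear version would keep the same inputs and target but fit the mapping with a small feed-forward network, so that performance-dependent curvature in accruals is not forced into the residual.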