7 results for finite-time tracking

in Helda - Digital Repository of University of Helsinki


Relevance: 80.00%

Publisher:

Abstract:

In cancer diagnostics and treatment, nanoparticles can act as carriers for drugs, diagnostic agents, or nucleic acid sequences. Targeting molecules can be attached to the carrier for passive or active targeting of the particles, or a radiolabel can be attached for imaging or radiotherapy. Carriers can improve a drug's physicochemical properties and bioavailability, reduce systemic side effects, prolong the drug's half-life and thereby allow less frequent dosing, and improve drug delivery to the target tissue. In this way, the efficacy of chemo- and radiotherapy and the likelihood of treatment success can be improved. The literature review examines the role of nanocarriers in cancer treatment. Despite decades of research, only two (Europe) or three (United States) nanoparticle formulations have been approved for the market for cancer treatment. The problems are insufficient accumulation in the target tissue, immunogenicity, and the lability of the nanoparticles. The experimental part studies, in vitro and in mice in vivo, the two-step targeting of 99mTc-labeled, PEG-coated biotin liposomes to human ovarian adenocarcinoma cells. Targeting is performed with a biotinylated cetuximab (Erbitux®) antibody, which binds to the EGF receptors overexpressed by the cells. Two-step targeting is compared with direct and/or passive targeting. The development of more effective imaging methods has accelerated research on targeted nanoparticles. With isotope imaging, the distribution of the radiolabel in the body can be followed and phenomena at the cellular level can be imaged. The literature review examines SPECT and PET imaging in cancer treatment as well as their use in drug development for imaging nanoparticles. These imaging methods stand out from other methods in their high resolution, sensitivity, and ease of use. In the experimental part, the distribution of the 99mTc-labeled liposomes in mice was studied with a SPECT-CT scanner. Activity in the tumor, spleen, and liver was quantified with the InVivoScope software and a gamma counter, and the results were compared with each other. In the in vitro experiment, two-step targeting achieved 2.7- to 3.5-fold uptake into the cells (depending on the cell line) compared with control liposomes. However, direct targeting performed better than two-step targeting in vitro. In the in vivo experiments, the liposomes distributed to the tumor more effectively when administered i.p. than i.v. Two-step targeting achieved 1.24-fold distribution to the tumor (% ID/g of tissue) compared with passively targeted liposomes. The %ID/organ was 5.9% for the targeted liposomes and 5.4% for the passively targeted ones, so the actual difference was small. The InVivoScope and gamma counter results did not correlate with each other. Further studies and optimization of the method are required for targeting the liposomes to the tumor.

Relevance: 30.00%

Publisher:

Abstract:

This dissertation is a theoretical study of finite-state-based grammars used in natural language processing. The study is concerned with certain varieties of finite-state intersection grammars (FSIG) whose parsers define regular relations between surface strings and annotated surface strings. The study focuses on the following three aspects of FSIGs: (i) Computational complexity of grammars under limiting parameters. In the study, the computational complexity in practical natural language processing is approached through performance-motivated parameters on structural complexity. Each parameter splits some grammars in the Chomsky hierarchy into an infinite set of subset approximations. When the approximations are regular, they seem to fall into the logarithmic-time hierarchy and the dot-depth hierarchy of star-free regular languages. This theoretical result is important and possibly relevant to grammar induction. (ii) Linguistically applicable structural representations. Related to the linguistically applicable representations of syntactic entities, the study contains new bracketing schemes that cope with dependency links, left- and right-branching, crossing dependencies, and spurious ambiguity. New grammar representations that resemble the Chomsky-Schützenberger representation of context-free languages are presented in the study, and they include, in particular, representations for mildly context-sensitive non-projective dependency grammars whose performance-motivated approximations are linear-time parseable. (iii) Compilation and simplification of linguistic constraints. Efficient compilation methods for certain regular operations such as generalized restriction are presented. These include an elegant algorithm that has already been adopted as the approach in a proprietary finite-state tool. In addition to the compilation methods, an approach to on-the-fly simplification of finite-state representations for parse forests is sketched. These findings are tightly coupled with each other under the theme of locality. I argue that the findings help us to develop better, linguistically oriented formalisms for finite-state parsing and to develop more efficient parsers for natural language processing. Keywords: syntactic parsing, finite-state automata, dependency grammar, first-order logic, linguistic performance, star-free regular approximations, mildly context-sensitive grammars
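As an illustration of the core operation behind finite-state intersection grammars, the sketch below shows a plain product construction of two deterministic finite automata in Python. It is a minimal, hypothetical example, not the dissertation's formalism or tooling: the toy constraints c1 and c2 over a two-symbol bracket alphabet are invented for illustration, and a real FSIG would intersect many rule automata over richly annotated surface strings.

```python
# A minimal sketch (not the thesis' formalism): finite-state intersection of two
# regular constraints, in the spirit of FSIG, where a parse is a string accepted
# by every constraint automaton simultaneously.

from itertools import product

class DFA:
    def __init__(self, states, alphabet, delta, start, accept):
        self.states, self.alphabet = states, alphabet
        self.delta, self.start, self.accept = delta, start, accept

    def accepts(self, word):
        state = self.start
        for symbol in word:
            state = self.delta.get((state, symbol))
            if state is None:
                return False
        return state in self.accept

def intersect(a, b):
    """Product construction: accepts exactly the strings accepted by both DFAs."""
    states = set(product(a.states, b.states))
    delta = {}
    for (p, q), s in product(states, a.alphabet):
        pn, qn = a.delta.get((p, s)), b.delta.get((q, s))
        if pn is not None and qn is not None:
            delta[((p, q), s)] = (pn, qn)
    accept = {(p, q) for p, q in states if p in a.accept and q in b.accept}
    return DFA(states, a.alphabet, delta, (a.start, b.start), accept)

# Toy constraints over bracket symbols: c1 tracks nesting up to depth 2 and
# requires it to return to zero; c2 forbids two closing brackets in a row.
alphabet = {"[", "]"}
c1 = DFA({0, 1, 2}, alphabet,
         {(0, "["): 1, (1, "["): 2, (1, "]"): 0, (2, "]"): 1}, 0, {0})
c2 = DFA({"ok", "closed"}, alphabet,
         {("ok", "["): "ok", ("ok", "]"): "closed", ("closed", "["): "ok"},
         "ok", {"ok", "closed"})

grammar = intersect(c1, c2)
print(grammar.accepts(list("[][]")))   # True: both constraints hold
print(grammar.accepts(list("[[]]")))   # False: c2 rejects the "]]" sequence
```

The point of the construction is that applying several regular constraints to a sentence reduces to membership in the intersection automaton, which is where the locality and approximation questions discussed above arise.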

Relevance: 30.00%

Publisher:

Abstract:

Topic detection and tracking (TDT) is an area of information retrieval research whose focus is news events. The problems TDT deals with relate to segmenting news text into cohesive stories, detecting something new and previously unreported, tracking the development of a previously reported event, and grouping together news stories that discuss the same event. The performance of traditional information retrieval techniques based on full-text similarity has remained inadequate for online production systems; it has been difficult to make the distinction between the same and merely similar events. In this work, we explore ways of representing and comparing news documents in order to detect new events and track their development. First, however, we put forward a conceptual analysis of the notions of topic and event. The purpose is to clarify the terminology and align it with the process of news-making and the tradition of story-telling. Second, we present a framework for document similarity that is based on semantic classes, i.e., groups of words with similar meaning. We adopt people, organizations, and locations as semantic classes in addition to general terms. As each semantic class can be assigned its own similarity measure, document similarity can make use of ontologies, e.g., geographical taxonomies. The documents are compared class-wise, and the outcome is a weighted combination of class-wise similarities. Third, we incorporate temporal information into document similarity. We formalize the natural language temporal expressions occurring in the text and use them to anchor the rest of the terms onto the time-line. When comparing documents for event-based similarity, we look not only at matching terms but also at how near their anchors are on the time-line. Fourth, we experiment with an adaptive variant of the semantic class similarity system. The news reflects changes in the real world, and in order to keep up, the system has to change its behavior based on the contents of the news stream. We put forward two strategies for rebuilding the topic representations and report experiment results. We run experiments with three annotated TDT corpora. The use of semantic classes increased the effectiveness of topic tracking by 10-30%, depending on the experimental setup. The gain in spotting new events remained lower, around 3-4%. Anchoring the text to the time-line based on the temporal expressions gave a further 10% increase in the effectiveness of topic tracking. The gains in detecting new events, again, remained smaller. The adaptive systems did not improve the tracking results.
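A minimal sketch of the class-wise similarity idea described above, with assumed class names and weights (terms, persons, organizations, locations; the weights and toy documents are illustrative, not those used in the thesis): each semantic class is compared with its own measure, here plain cosine similarity, and the per-class scores are combined with a weighted sum.

```python
# Illustrative class-wise document similarity, not the thesis' system.
from collections import Counter
from math import sqrt

CLASSES = ("terms", "persons", "organizations", "locations")
WEIGHTS = {"terms": 0.4, "persons": 0.2, "organizations": 0.2, "locations": 0.2}  # assumed

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def class_wise_similarity(doc1: dict, doc2: dict) -> float:
    """Weighted combination of per-class similarities; each document is a dict
    mapping class name -> Counter of the terms assigned to that class."""
    return sum(WEIGHTS[c] * cosine(doc1.get(c, Counter()), doc2.get(c, Counter()))
               for c in CLASSES)

d1 = {"terms": Counter(["election", "vote"]), "persons": Counter(["Smith"]),
      "locations": Counter(["Helsinki"])}
d2 = {"terms": Counter(["election", "result"]), "persons": Counter(["Smith"]),
      "locations": Counter(["Helsinki"])}
print(round(class_wise_similarity(d1, d2), 3))
```

Because each class has its own comparison function, the per-class cosine could be replaced by, for example, a geographical-taxonomy distance for the locations class without touching the rest of the combination.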

Relevance: 30.00%

Publisher:

Abstract:

This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. In Chapter 2, a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange Multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4, the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered so that critical bounds for histogram-type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite-sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
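The following sketch illustrates what a quantile residual is in the simplest possible setting, an AR(1) model with Gaussian errors fitted by least squares; the model, the simulated data, and the informal checks are assumptions for illustration, not the thesis' mixture models or formal tests. The quantile residual of y_t is the standard normal quantile of the fitted conditional distribution function evaluated at y_t, so under a correctly specified model the residuals are approximately independent N(0,1).

```python
# Quantile residuals for a fitted Gaussian AR(1): an assumed, illustrative setup.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate an AR(1) series and fit it by least squares (stand-in for ML).
phi_true, sigma_true, n = 0.5, 1.0, 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + sigma_true * rng.standard_normal()

phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
sigma_hat = (y[1:] - phi_hat * y[:-1]).std(ddof=1)

# Conditional CDF F(y_t | y_{t-1}) under the fitted model, then the probit transform.
u = stats.norm.cdf(y[1:], loc=phi_hat * y[:-1], scale=sigma_hat)
quantile_residuals = stats.norm.ppf(u)

# Informal checks in the spirit of the non-normality and autocorrelation tests above.
print("mean, std:", quantile_residuals.mean(), quantile_residuals.std(ddof=1))
print("lag-1 autocorrelation:",
      np.corrcoef(quantile_residuals[1:], quantile_residuals[:-1])[0, 1])
```

For a Gaussian model the quantile residuals coincide with the usual standardized residuals; the advantage of the definition is that the same recipe remains valid for mixture models, where Pearson's residuals are not appropriate.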

Relevance: 30.00%

Publisher:

Abstract:

This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts, and it also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
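A minimal sketch of an autoregressive probit of the general kind referred to above, with a simulated predictor standing in for the financial variables; the parameter values, the simulated data, and the estimation details are assumptions for illustration, not the exact specifications of the thesis. The response probability is pi_t = Phi(a_t) with an autoregressive linear index a_t, and the parameters are estimated by maximizing the Bernoulli log-likelihood.

```python
# An illustrative autoregressive probit: pi_t = Phi(a_t),
# a_t = omega + alpha * a_{t-1} + beta * x_{t-1}, fitted by maximum likelihood.
import numpy as np
from scipy import optimize, stats

def neg_log_likelihood(params, y, x):
    omega, alpha, beta = params
    a, ll = 0.0, 0.0
    for t in range(1, len(y)):
        a = omega + alpha * a + beta * x[t - 1]          # autoregressive index
        p = np.clip(stats.norm.cdf(a), 1e-10, 1 - 1e-10)
        ll += y[t] * np.log(p) + (1 - y[t]) * np.log(1 - p)
    return -ll

# Simulated stand-in data: x_t plays the role of a financial predictor
# (e.g., an interest-rate spread), y_t the binary recession indicator.
rng = np.random.default_rng(1)
n = 400
x = rng.standard_normal(n)
a = 0.0
y = np.zeros(n, dtype=int)
for t in range(1, n):
    a = -0.5 + 0.6 * a + 0.8 * x[t - 1]
    y[t] = rng.random() < stats.norm.cdf(a)

fit = optimize.minimize(neg_log_likelihood, x0=[0.0, 0.2, 0.2],
                        args=(y, x), method="BFGS")
print("estimated (omega, alpha, beta):", np.round(fit.x, 2))
```

Setting alpha = 0 collapses the index to the static probit, which is what makes a test for the usefulness of the autoregressive structure, such as the LM test of Chapter 3, natural.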

Relevance: 30.00%

Publisher:

Abstract:

The likelihood ratio test of cointegration rank is the most widely used test for cointegration. Many studies have shown that its finite-sample distribution is not well approximated by the limiting distribution. The article introduces bootstrap and fast double bootstrap (FDB) algorithms for the likelihood ratio test and evaluates them by Monte Carlo simulation experiments. It finds that the performance of the bootstrap test is very good. The more sophisticated FDB produces a further improvement in cases where the performance of the asymptotic test is very unsatisfactory and the ordinary bootstrap does not work as well as it might. Furthermore, the Monte Carlo simulations provide a number of guidelines on when the bootstrap and FDB tests can be expected to work well. Finally, the tests are applied to US interest rate and international stock price series. It is found that the asymptotic test tends to overestimate the cointegration rank, while the bootstrap and FDB tests choose the correct cointegration rank.
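The following is a generic sketch of the bootstrap and fast double bootstrap p-value recipes mentioned above. A toy one-sample test stands in for the cointegration-rank likelihood ratio test (which would require resampling from a fitted VECM under the null); the statistic, the resampling scheme, and the number of replications are assumptions for illustration.

```python
# Generic bootstrap and fast double bootstrap (FDB) p-values for a toy test.
import numpy as np

rng = np.random.default_rng(2)

def statistic(data):
    # Stand-in test statistic for the null "mean = 0" (proxy for the LR statistic).
    return abs(data.mean()) * np.sqrt(len(data)) / data.std(ddof=1)

def simulate_null(data):
    # Resample centered data so the null holds in the bootstrap world.
    centered = data - data.mean()
    return rng.choice(centered, size=len(data), replace=True)

def bootstrap_pvalues(data, n_boot=499):
    tau = statistic(data)
    tau_star = np.empty(n_boot)
    tau_starstar = np.empty(n_boot)
    for b in range(n_boot):
        boot = simulate_null(data)
        tau_star[b] = statistic(boot)
        # FDB: one second-level statistic per first-level bootstrap sample.
        tau_starstar[b] = statistic(simulate_null(boot))
    p_boot = np.mean(tau_star >= tau)
    # FDB correction: compare the first-level statistics with the second-level
    # quantile at level 1 - p_boot.
    q = np.quantile(tau_starstar, 1.0 - p_boot)
    p_fdb = np.mean(tau_star >= q)
    return p_boot, p_fdb

data = rng.standard_normal(200) + 0.1
print(bootstrap_pvalues(data))
```

Because the FDB uses only one second-level statistic per first-level sample, it costs roughly twice as much as the ordinary bootstrap rather than B times more, which is what makes it attractive when the ordinary bootstrap is not accurate enough.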

Relevance: 30.00%

Publisher:

Abstract:

Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census had developed a sampling design for the Current Population Survey (CPS) in the 1940s. Another significant factor was that digital computers became available to statisticians. In the beginning of the 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem. They were published in a memoir in 1774, which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which was represented by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. In 1894, at the International Statistical Institute meeting, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples. Its idea, still prevailing, was that the sample should be a miniature of the population. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations; it was based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science. He revolutionized the theory of statistics and, in addition, introduced a new statistical inference model which is still the prevailing paradigm. Its essential ideas are to draw repeated samples from the same population and to assume that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave the statisticians at the U.S. Census Bureau the central idea for developing the complex survey design for the CPS. An important criterion was to have a method for which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, as well as sufficient accuracy in estimation.