874 results for Time-varying variable selection
Abstract:
BACKGROUND Acute myeloid leukaemia mainly affects elderly people, with a median age at diagnosis of around 70 years. Although about 50-60% of patients enter first complete remission upon intensive induction chemotherapy, relapse rates remain high and overall outcomes are disappointing. Therefore, effective post-remission therapy is urgently needed. Although elderly patients often receive no post-remission therapy, options include chemotherapy or allogeneic haemopoietic stem cell transplantation (HSCT) following reduced-intensity conditioning. We aimed to assess the comparative value of allogeneic HSCT against other approaches, including no post-remission therapy, in patients with acute myeloid leukaemia aged 60 years and older. METHODS For this time-dependent analysis, we used the results from four successive prospective HOVON-SAKK acute myeloid leukaemia trials. Between May 3, 2001, and Feb 5, 2010, a total of 1155 patients aged 60 years and older were entered into these trials, of whom 640 obtained a first complete remission after induction chemotherapy and were included in the analysis. Post-remission therapy consisted of allogeneic HSCT following reduced-intensity conditioning (n=97), gemtuzumab ozogamicin (n=110), chemotherapy (n=44), autologous HSCT (n=23), or no further treatment (n=366). Reduced-intensity conditioning regimens consisted of fludarabine combined with 2 Gy of total body irradiation (n=71), fludarabine with busulfan (n=10), or other regimens (n=16). A time-dependent analysis was done, in which allogeneic HSCT was compared with other types of post-remission therapy. The primary endpoint of the study was 5-year overall survival for all treatment groups, analysed by a time-dependent analysis.
FINDINGS 5-year overall survival was 35% (95% CI 25-44) for patients who received an allogeneic HSCT, 21% (17-26) for those who received no additional post-remission therapy, and 26% (19-33) for patients who received either additional chemotherapy or autologous HSCT. Overall survival at 5 years was strongly affected by the European LeukemiaNET acute myeloid leukaemia risk score, with patients in the favourable risk group (n=65) having better 5-year overall survival (56% [95% CI 43-67]) than those with intermediate-risk (n=131; 23% [19-27]) or adverse-risk (n=444; 13% [8-20]) acute myeloid leukaemia. Multivariable analysis with allogeneic HSCT as a time-dependent variable showed that allogeneic HSCT was associated with better 5-year overall survival (HR 0·71 [95% CI 0·53-0·95], p=0·017) compared with non-allogeneic HSCT post-remission therapies or no post-remission therapy, especially in patients with intermediate-risk (0·82 [0·58-1·15]) or adverse-risk (0·39 [0·21-0·73]) acute myeloid leukaemia. INTERPRETATION Collectively, the results from these four trials suggest that allogeneic HSCT might be the preferred treatment approach in patients 60 years of age and older with intermediate-risk and adverse-risk acute myeloid leukaemia in first complete remission, but the comparative value should ideally be shown in a prospective randomised study. FUNDING None.
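The time-dependent analysis above hinges on how the allogeneic-HSCT indicator is coded: a patient contributes untreated follow-up time until the transplant and treated time afterwards, which avoids immortal-time bias. A minimal sketch of this episode splitting, with hypothetical field names and toy data (nothing here comes from the HOVON-SAKK trials):

```python
# Sketch (not the trial's actual code): episode splitting for a
# time-dependent treatment indicator in a survival analysis.

def split_episodes(patients):
    """Split each patient's follow-up at the transplant date.

    patients: list of dicts with keys
      'id', 'followup' (months), 'event' (1 = died),
      'hsct_time' (months from remission, or None if never transplanted).
    Returns (id, start, stop, hsct, event) rows suitable for a
    Cox model with a time-dependent covariate.
    """
    rows = []
    for p in patients:
        t_tx = p['hsct_time']
        if t_tx is None or t_tx >= p['followup']:
            # never transplanted during follow-up: one untreated interval
            rows.append((p['id'], 0.0, p['followup'], 0, p['event']))
        else:
            # untreated up to the transplant, treated afterwards
            rows.append((p['id'], 0.0, t_tx, 0, 0))
            rows.append((p['id'], t_tx, p['followup'], 1, p['event']))
    return rows

patients = [
    {'id': 1, 'followup': 60.0, 'event': 0, 'hsct_time': 4.0},
    {'id': 2, 'followup': 18.0, 'event': 1, 'hsct_time': None},
]
rows = split_episodes(patients)
```

Rows in this (start, stop] format are what survival libraries generally expect when fitting a Cox regression with a time-dependent treatment variable.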
Abstract:
Geographic distance is a standard proxy for transport costs under the simple assumption that freight fees increase monotonically over space. Using the Japanese Census of Logistics, this paper examines the extent to which transport distance and time affect freight costs across shipping modes, commodity groups, and prefecture pairs. The results show substantial heterogeneity in transport costs and time across shipping modes. Consistent with an iceberg formulation of transport costs, distance has a significantly positive effect on freight costs for air transportation. However, I find the puzzling result that business enterprises are likely to pay more for short-distance shipments by truck, ship, and railroad transportation. As a plausible explanation, I discuss aggregation bias arising from freight-specific premiums for timely, frequent, and small-batch shipments.
Abstract:
Various environmental factors may influence the foraging behaviour of seed dispersers, which could ultimately affect the seed dispersal process. We examined whether moonlight levels and the presence or absence of rodent shelter affect rodent seed removal (rate, handling time and time of removal) and seed selection (size and species) among seven oak species. The presence or absence of safe microhabitats was found to be more important than moonlight levels in the removal of seeds. Bright moonlight caused a different temporal distribution of seed removal throughout the night but only affected the overall removal rates in open microhabitats. Seeds were removed more rapidly in open microhabitats (regardless of the moon phase), decreasing the time allocated to seed discrimination and translocation. Only in open microhabitats did increasing levels of moonlight decrease the time allocated to selection and removal of seeds. As a result, a more precise seed selection was made under shelter, owing to lower levels of predation risk. Rodent ranking preference for species was identical between full/new moon in shelter but not in open microhabitats. For all treatments, species selection by rodents was much stronger than size selection. Nevertheless, heavy seeds, which require more energy and time to be transported, were preferentially removed under shelter, where there is no time restriction on moving the seeds. Our findings reveal that seed selection is safety dependent and, therefore, the microhabitats in which seeds are located (sheltered versus exposed) and moonlight levels in open areas should be taken into account in rodent food selection studies.
Abstract:
The main purpose of a gene interaction network is to map relationships among genes that would otherwise remain hidden when a genomic study is tackled. DNA microarrays allow the expression of thousands of genes to be measured at the same time. These data constitute the numeric seed for the induction of the gene networks. In this paper, we propose a new approach to building gene networks by means of Bayesian classifiers, variable selection and bootstrap resampling. The interactions induced by the Bayesian classifiers are based both on the expression levels and on the phenotype information of the supervised variable. Feature selection and bootstrap resampling add reliability and robustness to the overall process by removing false positive findings. The consensus among all the induced models produces a hierarchy of dependences and, thus, of variables. Biologists can define the depth level of the model hierarchy, so the set of interactions and genes involved can vary from a sparse to a dense set. Experimental results show that these networks perform well on classification tasks. The biological validation matches previous biological findings and opens new hypotheses for future studies.
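The combination of variable selection and bootstrap resampling described above can be sketched as follows. The scoring rule (absolute correlation with the phenotype), the selection threshold, and the toy data are illustrative assumptions of this sketch, not the authors' implementation:

```python
import numpy as np

# Illustrative sketch: features that survive a simple univariate filter in
# most bootstrap resamples form the consensus set, suppressing false
# positives that appear in only a few resamples.

rng = np.random.default_rng(0)

def select_top_k(X, y, k):
    """Rank features by |correlation| with the class label, keep the top k."""
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return set(np.argsort(scores)[-k:])

def consensus_features(X, y, k=2, n_boot=50, threshold=0.8):
    """Keep features selected in at least `threshold` of bootstrap resamples."""
    counts = np.zeros(X.shape[1])
    n = X.shape[0]
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap resample
        for j in select_top_k(X[idx], y[idx], k):
            counts[j] += 1
    return {j for j in range(X.shape[1]) if counts[j] / n_boot >= threshold}

# toy expression matrix: genes 0 and 1 track the phenotype, 2-4 are noise
n = 200
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5))
X[:, 0] += 2.0 * y
X[:, 1] -= 2.0 * y
stable = consensus_features(X, y)
```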
Abstract:
This paper discusses the use of sound waves to illustrate multipath radio propagation concepts. Specifically, a procedure is presented to measure the time-varying frequency response of the channel. This helps demonstrate how a propagation channel can be characterized in time and frequency, and provides visualizations of the concepts of coherence time and coherence bandwidth. The measurements are very simple to carry out, and the required equipment is easily available. The proposed method can be useful for wireless or mobile communication courses.
Abstract:
In this work, a mathematical unifying framework for designing new fault detection schemes in nonlinear stochastic continuous-time dynamical systems is developed. These schemes are based on a stochastic process, called the residual, which reflects the system behavior and whose changes are to be detected. A quickest detection scheme for the residual is proposed, based on the computed likelihood ratios for time-varying statistical changes in the Ornstein–Uhlenbeck process. Several expressions are provided, depending on a priori knowledge of the fault, which can be employed in a proposed CUSUM-type approximate scheme. This general setting gathers different existing fault detection schemes within a unifying framework and allows for the definition of new ones. A comparative simulation example illustrates the behavior of the proposed schemes.
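A one-sided CUSUM recursion of the kind the approximate scheme builds on can be sketched as follows. The residual here is a noise-free sequence with an abrupt mean shift (to keep the example deterministic, unlike the paper's Ornstein–Uhlenbeck residual), and the drift and threshold values are illustrative assumptions:

```python
# Minimal sketch of a one-sided CUSUM change detector on a residual
# sequence; drift and threshold are illustrative tuning parameters.

def cusum_alarm(residual, drift=0.5, threshold=5.0):
    """Return the first index at which the CUSUM statistic exceeds
    `threshold`, or None if no alarm is raised."""
    s = 0.0
    for t, r in enumerate(residual):
        s = max(0.0, s + r - drift)   # accumulate evidence of an upward shift
        if s > threshold:
            return t
    return None

no_fault = [0.0] * 200
fault = [0.0] * 100 + [2.0] * 100     # abrupt change at t = 100
alarm = cusum_alarm(fault)
```

The drift term subtracts a small allowance at each step so that in-control noise decays back to zero, while a sustained shift accumulates until the threshold is crossed a few samples after the fault.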
Abstract:
Using the Bayesian approach as the model selection criterion, the main purpose of this study is to establish a practical road accident model that provides better interpretation and prediction performance. For this purpose we use a structural explanatory model with an autoregressive error term. Model estimation is carried out through Bayesian inference, and the best model is selected on the basis of goodness-of-fit measures. To cross-validate the model estimation, further prediction analyses were done. As the road safety measure, the number of fatal accidents in Spain during 2000-2011 was employed. The results of the variable selection process show that the factors explaining fatal road accidents are mainly exposure, economic factors, and surveillance and legislative measures. The model selection shows that the impact of economic factors on fatal accidents during the period under study was higher than that of surveillance and legislative measures.
Abstract:
Road accidents are a very relevant social phenomenon and one of the main causes of death in industrialized countries. Sophisticated econometric models are applied in academic work and by public administrations for a better understanding of this very complex phenomenon. This thesis is thus devoted to the analysis of macro models for road accidents, with application to the Spanish case. The objectives of the thesis may be divided into two blocks: a. To achieve a better understanding of the road accident phenomenon by means of the application and comparison of two of the most frequently used macro models: DRAG (demand for road use, accidents and their gravity) and UCM (unobserved components model); the application was made to van-involved accident data in Spain in the period 2000-2009. The analysis has been carried out within the frequentist framework using state-of-the-art software: TRIO, SAS and TRAMO/SEATS. b. 
Concern about the application of the models and about the relevant input variables to be included in the model has driven the research to try to improve, by theoretical and practical means, the understanding of methodological choice and model selection procedures. The theoretical developments have been applied to fatal accidents during the period 2000-2011 and van-involved road accidents in 2000-2009. This has resulted in the following contributions: a. Insight on the models has been gained through interpretation of the effect of the input variables on the response and the prediction accuracy of both models. The behavior of van-involved road accidents has been explained during this process. b1. Development of an input variable selection procedure, which is crucial for an efficient choice of the inputs. Following the results of a), the procedure uses a DRAG-like model. The estimation is carried out within the Bayesian framework. The procedure has been applied to the total road accident data in Spain in the period 2000-2011. The results of the model selection procedure are compared and validated through a dynamic regression model, given that the original data have a stochastic trend. b2. A methodology for theoretical comparison between the two models through Monte Carlo simulation, computer experiment design and ANOVA. The models have different structures, and this affects the estimation of the effects of the input variables. The comparison is thus carried out in terms of the effect of the input variables on the response, which is in general different and should be related. Considering the results of the study carried out in b1), this study tries to find out how a stochastic time trend will be captured in the DRAG model, since there is no specific trend component in DRAG. 
Given the results of b1), the findings of this study are crucial in order to see whether the estimation of data with a stochastic component through DRAG will be valid, or whether the data need a certain adjustment (typically differencing) prior to the estimation. The model comparison methodology was applied to the UCM and DRAG models, considering that, as mentioned above, the UCM has a specific trend term while DRAG does not. b3. New algorithms were developed for carrying out the methodological exercises. For this purpose different software packages were used: R, WinBUGS and MATLAB. These objectives and contributions have resulted in the following findings: 1. The road accident phenomenon has been analyzed by means of two macro models. The effects of the influential input variables may be estimated through the models, but it has been observed that the estimates vary from one model to the other, although prediction accuracy is similar, with a slight superiority of the DRAG methodology. 2. The variable selection methodology provides very practical results as far as the explanation of road accidents is concerned. Prediction accuracy and interpretability have been improved by means of a more efficient input variable and model selection procedure. 3. Insight has been gained on the relationship between the estimates of the effects using the two models. A very relevant issue here is the role of the trend in both models, from which relevant recommendations for the analyst have resulted. The results have provided a very satisfactory insight into both modeling aspects and the understanding of both van-involved and total fatal accident behavior in Spain.
Abstract:
The concepts of temperature and equilibrium are not well defined in systems of particles with time-varying external forces. An example is a radio frequency ion trap, with the ions laser cooled into an ordered solid, characteristic of sub-mK temperatures, whereas the kinetic energies associated with the fast coherent motion in the trap are up to 7 orders of magnitude higher. Simulations with 1,000 ions reach equilibrium between the degrees of freedom when only aperiodic displacements (secular motion) are considered. The coupling of the periodic driven motion associated with the confinement to the nonperiodic random motion of the ions is very small at low temperatures and increases quadratically with temperature.
Abstract:
Time-variable gravity data from the Gravity Recovery And Climate Experiment (GRACE) mission are used to study total water content over Australia for the period 2002–2010. A time-varying annual signal explains 61% of the variance of the data, in good agreement with two independent estimates of the same quantity from hydrological models. Water mass content variations across Australia are linked to Pacific and Indian Ocean variability, associated with El Niño-Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD), respectively. From 1989, positive (negative) IOD phases were related to anomalously low (high) precipitation in southeastern Australia, associated with a reduced (enhanced) tropical moisture flux. In particular, the sustained water mass content reduction over central and southern regions of Australia during the period 2006–2008 is associated with three consecutive positive IOD events.
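The variance-explained figure can be reproduced in miniature by least-squares fitting an annual harmonic and computing R². The synthetic monthly series below stands in for the GRACE water-storage data, and a fixed-amplitude harmonic is used rather than the time-varying annual signal of the study:

```python
import numpy as np

# Illustrative sketch: how much of a monthly series' variance an annual
# harmonic explains, via ordinary least squares.

def annual_variance_explained(t_years, y):
    """Fit y ~ a + b*cos(2*pi*t) + c*sin(2*pi*t); return R^2."""
    A = np.column_stack([np.ones_like(t_years),
                         np.cos(2 * np.pi * t_years),
                         np.sin(2 * np.pi * t_years)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(2)
t = np.arange(0, 8, 1 / 12)                 # monthly samples over 8 years
# synthetic "water storage": annual cycle of amplitude 3 plus noise
y = 3.0 * np.cos(2 * np.pi * t - 0.4) + rng.normal(0, 1.5, size=t.size)
r2 = annual_variance_explained(t, y)
```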
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
In various signal-channel-estimation problems, the channel being estimated may be well approximated by a discrete finite impulse response (FIR) model with sparsely separated active or nonzero taps. A common approach to estimating such channels involves a discrete normalized least-mean-square (NLMS) adaptive FIR filter, every tap of which is adapted at each sample interval. Such an approach suffers from slow convergence rates and poor tracking when the required FIR filter is "long." Recently, NLMS-based algorithms have been proposed that employ least-squares-based structural detection techniques to exploit possible sparse channel structure and subsequently provide improved estimation performance. However, these algorithms perform poorly when there is a large dynamic range amongst the active taps. In this paper, we propose two modifications to the previous algorithms, which essentially remove this limitation. The modifications also significantly improve the applicability of the detection technique to structurally time varying channels. Importantly, for sparse channels, the computational cost of the newly proposed detection-guided NLMS estimator is only marginally greater than that of the standard NLMS estimator. Simulations demonstrate the favourable performance of the newly proposed algorithm. © 2006 IEEE.
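For reference, the standard NLMS recursion that the detection-guided variants extend looks as follows when identifying a sparse FIR channel; the step size, channel taps, and signal length are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Minimal sketch of NLMS channel identification: every tap is adapted at
# each sample (the baseline the detection-guided algorithms improve on).

def nlms_identify(x, d, n_taps, mu=0.5, eps=1e-8):
    """Identify an FIR channel from input x and desired output d."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)                       # most recent inputs, buf[0] = x[n]
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        e = d[n] - w @ buf                       # a-priori estimation error
        w += mu * e * buf / (eps + buf @ buf)    # normalized gradient step
    return w

rng = np.random.default_rng(3)
h = np.zeros(16)
h[[2, 9]] = [1.0, -0.5]                          # sparse channel: two active taps
x = rng.normal(size=4000)
d = np.convolve(x, h)[:len(x)]                   # noiseless channel output
w = nlms_identify(x, d, n_taps=16)
```

Detection-guided variants would adapt only the taps flagged as active by a structural detector, which is where the computational and convergence gains for sparse channels come from.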
Abstract:
Traditional vegetation mapping methods use high-cost, labour-intensive aerial photography interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation, and the differing scale and quality of aerial photography over time. An alternative approach is proposed which integrates a data model, a statistical model and an ecological model using sophisticated Geographic Information Systems (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. This approach is based on a more realistic representation of vegetation patterns with transitional gradients from one vegetation community to another. Arbitrary, though often unrealistic, sharp boundaries can be imposed on the model by the application of statistical methods. This GIS-integrated multivariate approach is applied to the problem of vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of Northeastern Australia. The paper presents the full cycle of this vegetation modelling approach, including sampling sites, variable selection, model selection, model implementation, internal model assessment, model prediction assessments, integration of discrete vegetation community models to generate a composite pre-clearing vegetation map, independent data set model validation and assessment of model prediction scale. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r² = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including provision of vital information for conservation planning and management; a scientific basis for rehabilitation of disturbed and cleared areas; and a viable method for the production of adequate vegetation maps for conservation and forestry planning of poorly-studied areas. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
In this paper, a new method for characterizing newborn heart rate variability (HRV) is proposed. Central to the method is a newly proposed technique for instantaneous frequency (IF) estimation specifically designed for nonstationary multicomponent signals such as HRV. The new method attempts to characterize newborn HRV using features extracted from the time–frequency (TF) domain of the signal. These features comprise the IF, the instantaneous bandwidth (IB) and the instantaneous energy (IE) of the different TF components of the HRV. Applied to the HRV of both normal and seizure-suffering newborns, the method clearly reveals the locations of the spectral peaks and their time-varying nature. The total energy of the HRV components, ET, and the ratio of energy concentrated in the low-frequency (LF) components to that in the high-frequency (HF) components have been shown to be significant features for identifying the HRV of newborns with seizures.
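The paper's IF estimator is specifically designed for multicomponent signals; for a single component, the textbook analytic-signal route that such methods refine (an FFT-based Hilbert transform, then the derivative of the unwrapped phase) can be sketched as follows, on an illustrative chirp rather than real HRV data:

```python
import numpy as np

# Sketch of single-component IF estimation via the analytic signal.

def instantaneous_frequency(x, fs):
    """IF (Hz) of a real signal via an FFT-based Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    hfilt = np.zeros(n)            # frequency-domain analytic-signal filter
    hfilt[0] = 1.0
    hfilt[1:n // 2] = 2.0
    if n % 2 == 0:
        hfilt[n // 2] = 1.0
    z = np.fft.ifft(X * hfilt)     # analytic signal
    phase = np.unwrap(np.angle(z))
    return np.diff(phase) * fs / (2 * np.pi)   # phase derivative -> Hz

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
# linear chirp: instantaneous frequency f(t) = 50 + 50 t Hz
x = np.sin(2 * np.pi * (50 * t + 25 * t ** 2))
f_est = instantaneous_frequency(x, fs)
```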
Abstract:
This article examines whether UK portfolio returns are time varying, so that expected returns follow an AR(1) process as proposed by Conrad and Kaul for the USA. It explores this hypothesis for four portfolios formed on the basis of market capitalization. The portfolio returns are modelled using a Kalman filter signal extraction model in which the unobservable expected return is the state variable and is allowed to evolve as a stationary first-order autoregressive process. It finds that this model is a good representation of returns and can account for most of the autocorrelation present in observed portfolio returns. This study concludes that UK portfolio returns are time varying and that the nature of the time variation introduces a substantial amount of autocorrelation to portfolio returns. Like Conrad and Kaul, it finds a link between the extent to which portfolio returns are time varying and the size of firms within a portfolio, but not the monotonic one found for the USA.
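The signal extraction model can be sketched as a scalar Kalman filter: the observed return is r_t = mu_t + e_t, with the unobserved expected return following mu_t = phi * mu_{t-1} + w_t. All parameter values below are illustrative assumptions, not estimates from UK data:

```python
import numpy as np

# Sketch of Kalman-filter signal extraction for a latent AR(1)
# expected return underlying noisy observed returns.

def kalman_expected_return(r, phi, q, h):
    """Filtered estimates of the AR(1) state mu_t.
    q: state-noise variance, h: observation-noise variance."""
    mu, p = 0.0, q / (1 - phi ** 2)       # start from the stationary prior
    out = []
    for obs in r:
        mu_pred = phi * mu                # predict state
        p_pred = phi ** 2 * p + q         # predict variance
        k = p_pred / (p_pred + h)         # Kalman gain
        mu = mu_pred + k * (obs - mu_pred)
        p = (1 - k) * p_pred
        out.append(mu)
    return np.array(out)

rng = np.random.default_rng(4)
phi, q, h = 0.9, 0.05, 1.0
mu_true = np.zeros(500)
for i in range(1, 500):
    mu_true[i] = phi * mu_true[i - 1] + rng.normal(0, np.sqrt(q))
r = mu_true + rng.normal(0, np.sqrt(h), size=500)
mu_hat = kalman_expected_return(r, phi, q, h)
```

Because the persistent state filters through to the observations, fitted returns inherit the autocorrelation of mu_t, which is the mechanism the article attributes the observed portfolio autocorrelation to.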