852 results for "Random effect model"
Abstract:
INTRODUCTION: Endoscopic biliary stenting is accepted worldwide as the first-choice palliative treatment for malignant biliary obstruction. Two types of material are still in use for stent manufacture: plastic and metal. Consequently, many questions remain as to which of them offers greater benefit to the patient. This review gathers the highest-quality available information on these two types of stent, providing data on dysfunction, complications, reintervention rates, costs, survival and patency time, and is intended to help guide current clinical practice. OBJECTIVE: To analyse, by meta-analysis, the benefits of the two types of stent in inoperable malignant biliary obstruction. METHODS: A systematic review of randomized controlled trials (RCT) was conducted, last updated in March 2015, using EMBASE, CINAHL (EBSCO), Medline, Lilacs/Central (BVS), Scopus, CAPES (Brazil) and the grey literature. Information from the selected studies was extracted for six outcomes: primarily dysfunction, reintervention rates and complications; and, secondarily, costs, survival and patency time. Data on the characteristics of the RCT participants, inclusion and exclusion criteria and types of stent were also extracted. Bias was assessed mainly with the Jadad scale. This meta-analysis was registered in the PROSPERO database under number CRD42014015078. The analysis of the absolute risk of the outcomes was performed in the RevMan 5 software, calculating risk differences (RD) for dichotomous variables and mean differences (MD) for continuous variables. The RD and MD for each primary outcome were calculated with the Mantel-Haenszel test, and inconsistency was assessed with the Chi-squared test (Chi2) and the Higgins method (I2). Sensitivity analysis was performed by removing outlying studies and using the random effect model. Student's t test was used to compare weighted arithmetic means for the secondary outcomes. RESULTS: Initially, 3660 studies were identified; 3539 were excluded by title or abstract, while 121 studies were fully assessed and were excluded mainly for not comparing self-expandable metal stents (SEMS) and plastic stents (PS), leading to thirteen selected RCT and 1133 subjects in the meta-analysis. Mean age was 69.5 years, and the most common cancers were bile duct (proximal) and pancreatic (distal). The most frequently used SEMS diameter was 10 mm (30 Fr) and the most frequently used PS diameter was 10 Fr. In the meta-analysis, SEMS had lower overall dysfunction than PS (21.6% versus 46.8%, p < 0.00001) and fewer reinterventions (21.6% versus 56.6%, p < 0.00001), with no difference in complications (13.7% versus 15.9%, p = 0.16). In the secondary analysis, mean survival was higher in the SEMS group (182 versus 150 days, p < 0.0001), with a longer patency period (250 versus 124 days, p < 0.0001) and a similar cost per patient, although lower in the SEMS group (4,193.98 versus 4,728.65 Euros, p < 0.0985). CONCLUSION: SEMS are associated with lower dysfunction, lower reintervention rates, better survival and longer patency time. Complications and costs showed no difference.
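As a rough companion to the pooling described above, the sketch below implements an inverse-variance random-effects meta-analysis of risk differences with the DerSimonian-Laird estimator and Higgins' I2. It uses made-up 2x2 counts rather than the trial data from the review, and inverse-variance rather than the Mantel-Haenszel weighting used in RevMan 5; it only shows how RD, Q, I2 and the random-effects pooled estimate fit together.

    import numpy as np

    # Hypothetical 2x2 counts per trial (events/total in the SEMS and PS arms); illustrative only.
    events_sems = np.array([10, 8, 12]); n_sems = np.array([50, 40, 60])
    events_ps = np.array([22, 18, 30]); n_ps = np.array([48, 42, 61])

    p1, p2 = events_sems / n_sems, events_ps / n_ps
    rd = p1 - p2                                          # per-study risk difference
    var = p1 * (1 - p1) / n_sems + p2 * (1 - p2) / n_ps   # per-study variance of the RD

    # Fixed-effect (inverse-variance) pooling and Cochran's Q
    w = 1 / var
    rd_fixed = np.sum(w * rd) / np.sum(w)
    Q = np.sum(w * (rd - rd_fixed) ** 2)
    df = len(rd) - 1
    I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0   # Higgins I2 in percent

    # DerSimonian-Laird between-study variance and random-effects pooling
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (var + tau2)
    rd_random = np.sum(w_re * rd) / np.sum(w_re)
    se_random = np.sqrt(1 / np.sum(w_re))
    print(f"pooled RD = {rd_random:.3f} +/- {1.96 * se_random:.3f}, I2 = {I2:.0f}%")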
Abstract:
Final dissertation of the Integrated Master's degree in Medicine (Mestrado Integrado em Medicina), Faculdade de Medicina, Universidade de Lisboa, 2014.
Abstract:
Paleoceanographic archives derived from 17 marine sediment cores reconstruct the response of the Southwest Pacific Ocean to the peak interglacial, Marine Isotope Stage (MIS) 5e (ca. 125 ka). Paleo-sea surface temperature (SST) estimates were obtained from the Random Forest model, an ensemble decision tree tool, applied to core-top planktonic foraminiferal faunas calibrated to modern SSTs. The reconstructed geographic pattern of the SST anomaly (maximum SST between 120 and 132 ka minus mean modern SST) suggests that MIS 5e conditions were generally warmer in the Southwest Pacific, especially in the western Tasman Sea, where a strengthened East Australian Current (EAC) likely extended subtropical influence to ca. 45°S off Tasmania. In contrast, the eastern Tasman Sea may have experienced modest cooling except around 45°S. The observed pattern resembles that developing under the present warming trend in the region. An increase in wind stress curl over the modern South Pacific is hypothesized to have spun up the South Pacific Subtropical Gyre, with a concurrent increase in subtropical flow in the western boundary currents that include the EAC. However, warmer temperatures along the Subtropical Front and Campbell Plateau to the south suggest that the relative influence of the boundary inflows to eastern New Zealand may have differed in MIS 5e, and these currents may have followed different paths compared to today.
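For readers unfamiliar with the transfer-function step, the sketch below shows how a Random Forest regression calibrated on core-top faunal assemblages against modern SSTs can be cross-validated and then applied down-core. All data are randomly generated stand-ins for the real foraminiferal census counts and SST calibration set.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    # Stand-in calibration set: rows = core-top samples, columns = relative abundances of
    # planktonic foraminiferal species; target = modern SST (deg C). All values are simulated.
    rng = np.random.default_rng(0)
    faunal_counts = rng.dirichlet(np.ones(30), size=200)
    modern_sst = 5 + 20 * faunal_counts[:, 0] + rng.normal(0, 1, 200)

    # Cross-validate the transfer function on the core-top calibration data.
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    print(cross_val_score(rf, faunal_counts, modern_sst, cv=5,
                          scoring="neg_root_mean_squared_error"))

    # Fit on the full calibration set, then predict paleo-SST for down-core assemblages.
    rf.fit(faunal_counts, modern_sst)
    downcore_faunas = rng.dirichlet(np.ones(30), size=17)   # stand-in for MIS 5e samples
    paleo_sst = rf.predict(downcore_faunas)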
Abstract:
The estimated parameters of output distance functions frequently violate the monotonicity, quasi-convexity and convexity constraints implied by economic theory, leading to estimated elasticities and shadow prices that are incorrectly signed, and ultimately to perverse conclusions concerning the effects of input and output changes on productivity growth and relative efficiency levels. We show how a Bayesian approach can be used to impose these constraints on the parameters of a translog output distance function. Implementing the approach involves the use of a Gibbs sampler with data augmentation. A Metropolis-Hastings algorithm is also used within the Gibbs sampler to simulate observations from truncated pdfs. Our methods are developed for the case where panel data are available and technical inefficiency effects are assumed to be time-invariant. Two models, a fixed effects model and a random effects model, are developed and applied to panel data on 17 European railways. We observe significant changes in estimated elasticities and shadow price ratios when regularity restrictions are imposed. (c) 2004 Elsevier B.V. All rights reserved.
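The toy sketch below illustrates the general mechanism of imposing an inequality constraint inside a Gibbs sampler by drawing from a truncated full conditional. It uses a two-coefficient linear regression with a sign constraint, not the translog output distance function or the data-augmentation and Metropolis-Hastings steps of the paper.

    import numpy as np
    from scipy.stats import truncnorm, invgamma

    # Toy model (not the translog distance function): linear regression y = X b + e where
    # "theory" requires b[1] >= 0. The constraint is imposed by drawing that coefficient
    # from a truncated-normal full conditional inside a single-site Gibbs sweep.
    rng = np.random.default_rng(1)
    n = 200
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([1.0, 0.5]) + rng.normal(scale=0.5, size=n)

    b = np.zeros(2)
    sigma2 = 1.0
    draws = []
    for it in range(3000):
        for j in range(2):
            r = y - X @ b + X[:, j] * b[j]                # partial residual excluding x_j
            v = sigma2 / (X[:, j] @ X[:, j])              # conditional variance of b_j
            m = (X[:, j] @ r) / (X[:, j] @ X[:, j])       # conditional mean of b_j
            if j == 1:                                    # constrained coefficient: truncate at 0
                lo = (0.0 - m) / np.sqrt(v)
                b[j] = truncnorm.rvs(lo, np.inf, loc=m, scale=np.sqrt(v))
            else:
                b[j] = rng.normal(m, np.sqrt(v))
        resid = y - X @ b
        sigma2 = invgamma.rvs(a=n / 2, scale=resid @ resid / 2)
        if it >= 500:                                     # discard burn-in draws
            draws.append(b.copy())
    print(np.mean(draws, axis=0))                         # posterior means respect b[1] >= 0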
Abstract:
The Professions in Australia Study is the first longitudinal investigation of the professions in Australia; it spans 33 years. Self-administered questionnaires were distributed on at least eight occasions between 1965 and 1998 to cohorts of students and later practitioners from the professions of engineering, law and medicine. The longitudinal design of this study has allowed for an investigation of individual change over time in three archetypal characteristics of the professions (service, knowledge and autonomy) and two of the benefits of professional work (financial rewards and prestige). A cumulative logit random effects model was used to statistically assess changes in the ordinal response scores measuring the importance of the characteristics and benefits through stages of the career path. Individuals were also classified by average trends in response scores over time, and hence professions are described through their members' tendency to follow a particular path in attitudes, either of change or of constancy, in relation to the importance of the five elements (characteristics and benefits). Comparisons in trends are also made between the three professions.
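As a point of reference for the model class, the sketch below simulates ordinal responses from a cumulative logit (proportional odds) model with a respondent-level random intercept across repeated waves. The thresholds, stage effect and random-intercept variance are illustrative values, not estimates from the Professions in Australia Study.

    import numpy as np

    # Forward simulation of a cumulative logit model with a respondent-level random intercept.
    rng = np.random.default_rng(0)
    n_people, n_waves = 300, 8
    thresholds = np.array([-1.0, 0.5, 2.0])            # cut-points for a 4-category response
    beta_stage = 0.15                                  # drift in importance per career stage
    u = rng.normal(0.0, 1.0, size=n_people)            # random intercept per respondent

    def cumulative_probs(eta):
        """P(Y <= k) for each threshold under the cumulative logit link."""
        return 1.0 / (1.0 + np.exp(-(thresholds - eta[..., None])))

    stage = np.arange(n_waves)
    eta = u[:, None] + beta_stage * stage[None, :]     # linear predictor, people x waves
    cum = cumulative_probs(eta)
    probs = np.diff(np.concatenate([np.zeros(cum.shape[:-1] + (1,)), cum,
                                    np.ones(cum.shape[:-1] + (1,))], axis=-1), axis=-1)
    y = np.array([[rng.choice(4, p=p) for p in row] for row in probs])
    print(y.shape)                                     # (300, 8) ordinal responses, categories 0-3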
Abstract:
Many variables that are of interest in social science research are nominal variables with two or more categories, such as employment status, occupation, political preference, or self-reported health status. With longitudinal survey data it is possible to analyse the transitions of individuals between different employment states or occupations (for example). In the statistical literature, models for analysing categorical dependent variables with repeated observations belong to the family of models known as generalized linear mixed models (GLMMs). The specific GLMM for a dependent variable with three or more categories is the multinomial logit random effects model. For these models, the marginal distribution of the response does not have a closed-form solution, and hence numerical integration must be used to obtain maximum likelihood estimates of the model parameters. Techniques for implementing the numerical integration are available, but they are computationally intensive, requiring an amount of computer processing time that increases with the number of clusters (or individuals) in the data, and they are not always readily accessible to the practitioner in standard software. For the purposes of analysing categorical response data from a longitudinal social survey, there is clearly a need to evaluate the existing procedures for estimating multinomial logit random effects models in terms of accuracy, efficiency and computing time. The computational time has significant implications for the approach preferred by researchers. In this paper we evaluate statistical software procedures that utilise adaptive Gaussian quadrature and MCMC methods, with specific application to modelling the employment status of women over three waves of the HILDA survey using a GLMM.
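The sketch below makes the computational burden concrete: a marginal log-likelihood for a random-intercept multinomial logit evaluated by (non-adaptive) Gauss-Hermite quadrature and maximised with a generic optimiser. Every likelihood evaluation loops over individuals and quadrature nodes, which is why run time grows with the number of clusters. The data, categories and random-effect design are simulated placeholders, not the HILDA employment data.

    import numpy as np
    from numpy.polynomial.hermite import hermgauss
    from scipy.optimize import minimize

    # Random-intercept multinomial logit: marginal likelihood by Gauss-Hermite quadrature.
    rng = np.random.default_rng(0)
    n_id, n_t, n_cat = 200, 3, 3                       # individuals, waves, categories
    x = rng.normal(size=(n_id, n_t))
    u_true = rng.normal(0.0, 1.0, size=n_id)
    alpha_true = np.array([0.0, 0.3, -0.2])            # intercepts (category 0 is the baseline)
    beta_true = np.array([0.0, 0.8, -0.5])             # covariate effects by category
    mask = np.array([0.0, 1.0, 1.0])                   # random intercept enters non-baseline categories
    eta = alpha_true + x[..., None] * beta_true + u_true[:, None, None] * mask
    p = np.exp(eta) / np.exp(eta).sum(-1, keepdims=True)
    y = np.array([[rng.choice(n_cat, p=p[i, t]) for t in range(n_t)] for i in range(n_id)])

    nodes, weights = hermgauss(15)                     # 15-point Gauss-Hermite rule

    def neg_loglik(params):
        alpha = np.array([0.0, params[0], params[1]])
        beta = np.array([0.0, params[2], params[3]])
        sigma = np.exp(params[4])                      # log-parameterised random-intercept SD
        ll = 0.0
        for i in range(n_id):                          # cost grows with the number of clusters
            lik_i = 0.0
            for z, w in zip(nodes, weights):
                u = np.sqrt(2.0) * sigma * z
                eta_i = alpha + x[i][:, None] * beta + u * mask
                p_i = np.exp(eta_i) / np.exp(eta_i).sum(-1, keepdims=True)
                lik_i += w / np.sqrt(np.pi) * np.prod(p_i[np.arange(n_t), y[i]])
            ll += np.log(lik_i)
        return -ll

    res = minimize(neg_loglik, np.zeros(5), method="BFGS")
    print(res.x[:4], np.exp(res.x[4]))                 # fixed effects and random-intercept SD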
Abstract:
Numerous studies find that monetary models of exchange rates cannot beat a random walk model. Such a finding, however, is not surprising given that such models are built upon money demand functions and traditional money demand functions appear to have broken down in many developed countries. In this article, we investigate whether using a more stable underlying money demand function results in improvements in forecasts of monetary models of exchange rates. More specifically, we use a sweep-adjusted measure of US monetary aggregate M1 which has been shown to have a more stable money demand function than the official M1 measure. The results suggest that the monetary models of exchange rates contain information about future movements of exchange rates, but the success of such models depends on the stability of money demand functions and the specifications of the models.
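A minimal version of the implied out-of-sample horse race is sketched below: expanding-window forecasts of the exchange-rate change from a simple fundamentals regression are compared against a driftless random walk by their RMSE ratio (Theil's U). The series are simulated placeholders, not the sweep-adjusted M1 data used in the article.

    import numpy as np

    # Expanding-window one-step forecasts: an ECM-style fundamentals regression for the change
    # in the (log) exchange rate versus a driftless random walk. Series are simulated placeholders.
    rng = np.random.default_rng(0)
    T = 200
    fundamentals = np.cumsum(rng.normal(size=T))                        # stand-in for monetary fundamentals
    s = 0.3 * fundamentals + np.cumsum(rng.normal(scale=0.5, size=T))   # log exchange rate

    errors_model, errors_rw = [], []
    for t in range(120, T - 1):
        y = np.diff(s[: t + 1])                                         # past one-period changes
        X = np.column_stack([np.ones(t), (fundamentals - s)[:t]])       # lagged deviation from fundamentals
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        forecast = s[t] + b[0] + b[1] * (fundamentals[t] - s[t])
        errors_model.append(s[t + 1] - forecast)
        errors_rw.append(s[t + 1] - s[t])                               # random walk: no-change forecast

    rmse_model = np.sqrt(np.mean(np.square(errors_model)))
    rmse_rw = np.sqrt(np.mean(np.square(errors_rw)))
    print(f"Theil U = {rmse_model / rmse_rw:.3f} (<1 means the model beats the random walk)")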
Abstract:
The key to the correct application of ANOVA is careful experimental design and matching the correct analysis to that design. The following points should therefore be considered before designing any experiment:
1. In a single factor design, ensure that the factor is identified as a 'fixed' or 'random effect' factor.
2. In more complex designs, with more than one factor, there may be a mixture of fixed and random effect factors present, so ensure that each factor is clearly identified.
3. Where replicates can be grouped or blocked, the advantages of a randomised blocks design should be considered. There should be evidence, however, that blocking can sufficiently reduce the error variation to counter the loss of DF compared with a completely randomised design.
4. Where different treatments are applied sequentially to a patient, the advantages of a three-way design in which the different orders of the treatments are included as an 'effect' should be considered.
5. Combining different factors to make a more efficient experiment and to measure possible factor interactions should always be considered.
6. The effect of 'internal replication' should be taken into account in a factorial design in deciding the number of replications to be used. Where possible, each error term of the ANOVA should have at least 15 DF.
7. Consider carefully whether a particular factorial design can be considered to be a split-plot or a repeated measures design. If such a design is appropriate, consider how to continue the analysis bearing in mind the problem of using post hoc tests in this situation.
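As a small illustration of point 3, the sketch below fits the same simulated response with and without a blocking factor using statsmodels, so the trade-off between lost error DF and reduced error variation can be read off the two ANOVA tables. The factor names and data are invented.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Simulated one-observation-per-cell experiment: 4 treatments applied across 6 blocks,
    # with a genuine block effect. Factor names and effect sizes are invented.
    rng = np.random.default_rng(0)
    treatments, blocks = 4, 6
    df = pd.DataFrame({
        "treatment": np.repeat(np.arange(treatments), blocks),
        "block": np.tile(np.arange(blocks), treatments),
    })
    block_effect = rng.normal(0, 1, size=blocks)
    df["y"] = 0.5 * df["treatment"] + block_effect[df["block"]] + rng.normal(0, 0.5, len(df))

    completely_randomised = smf.ols("y ~ C(treatment)", data=df).fit()
    randomised_blocks = smf.ols("y ~ C(treatment) + C(block)", data=df).fit()
    print(sm.stats.anova_lm(completely_randomised))
    print(sm.stats.anova_lm(randomised_blocks))   # fewer error DF, but a smaller error MS if blocking works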
Abstract:
The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
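A rough sketch of the "retain a subset of basis vectors" idea is given below: a Gaussian process (kriging-style) prediction that conditions only on a greedily chosen subset of a large set of observations, rather than inverting the full covariance matrix. The squared-exponential covariance and the maximum-predictive-variance selection rule are illustrative choices, not the relative-entropy criterion of the paper.

    import numpy as np

    # Subset-of-data Gaussian process prediction with a greedily chosen set of "basis vectors".
    rng = np.random.default_rng(0)

    def sq_exp_cov(a, b, length=0.3, var=1.0):
        return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

    x = rng.uniform(0, 1, 2000)                        # "large" set of sample locations
    y = np.sin(6 * x) + rng.normal(0, 0.1, x.size)     # noisy realisation of a latent field
    x_test = np.linspace(0, 1, 200)
    noise = 0.1 ** 2

    # Greedily add the point with the largest predictive variance under the current subset.
    active = [0]
    for _ in range(29):                                # retain 30 basis vectors in total
        K_aa = sq_exp_cov(x[active], x[active]) + noise * np.eye(len(active))
        K_ax = sq_exp_cov(x[active], x)
        v = np.linalg.solve(K_aa, K_ax)
        pred_var = 1.0 - np.sum(K_ax * v, axis=0)      # prior variance (1.0) minus explained part
        pred_var[active] = -np.inf                     # never reselect an existing basis vector
        active.append(int(np.argmax(pred_var)))

    # Predict at unsampled locations using only the retained subset.
    K_aa = sq_exp_cov(x[active], x[active]) + noise * np.eye(len(active))
    K_at = sq_exp_cov(x[active], x_test)
    posterior_mean = K_at.T @ np.linalg.solve(K_aa, y[active])
    print(np.max(np.abs(posterior_mean - np.sin(6 * x_test))))   # error against the true field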
Abstract:
This paper provides the most fully comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely, recurrent neural networks and kernel recursive least squares regression - techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation.
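To make the comparison concrete, the sketch below pits a kernel regression on lagged inflation against a naive random-walk (no-change) forecast in an expanding window. It uses scikit-learn's kernel ridge regression as a stand-in for the kernel recursive least squares algorithm of the paper, and a simulated inflation series rather than US data.

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    # Kernel regression on lagged inflation versus a random-walk (no-change) benchmark.
    rng = np.random.default_rng(0)
    T, p = 300, 4                                      # sample length and number of lags
    infl = np.zeros(T)
    for t in range(1, T):                              # a persistent AR(1)-type process
        infl[t] = 0.02 + 0.8 * infl[t - 1] + rng.normal(0, 0.3)

    X = np.column_stack([infl[p - k - 1 : T - k - 1] for k in range(p)])  # lags 1..p
    y = infl[p:]

    err_kernel, err_rw = [], []
    for t in range(200, len(y) - 1):                   # expanding-window one-step forecasts
        model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5).fit(X[:t], y[:t])
        err_kernel.append(y[t] - model.predict(X[t : t + 1])[0])
        err_rw.append(y[t] - y[t - 1])                 # random walk: forecast = last observation
    print(np.sqrt(np.mean(np.square(err_kernel))), np.sqrt(np.mean(np.square(err_rw))))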
Abstract:
In order to generate sales promotion response predictions, marketing analysts estimate demand models using either disaggregated (consumer-level) or aggregated (store-level) scanner data. Comparison of predictions from these demand models is complicated by the fact that models may accommodate different forms of consumer heterogeneity depending on the level of data aggregation. This study shows via simulation that demand models with various heterogeneity specifications do not produce more accurate sales response predictions than a homogeneous demand model applied to store-level data, with one major exception: a random coefficients model designed to capture within-store heterogeneity using store-level data produced significantly more accurate sales response predictions (as well as better fit) compared to other model specifications. An empirical application to the paper towel product category adds additional insights. This article has supplementary material online.
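For orientation, the sketch below estimates a random-coefficients (random-slope) response to a promotion variable from simulated store-level data with statsmodels MixedLM. It is not the demand specification or simulation design of the study; it only shows how a store-varying promotion effect is declared and estimated.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated store-level panel: log sales respond to a promotion dummy with a slope that
    # varies across stores; a random-slope (random coefficients) model recovers the heterogeneity.
    rng = np.random.default_rng(0)
    stores, weeks = 30, 52
    df = pd.DataFrame({
        "store": np.repeat(np.arange(stores), weeks),
        "promo": rng.binomial(1, 0.3, stores * weeks).astype(float),
    })
    store_slope = rng.normal(1.0, 0.4, stores)              # heterogeneous promotion lift per store
    df["log_sales"] = 3.0 + store_slope[df["store"]] * df["promo"] + rng.normal(0, 0.2, len(df))

    result = smf.mixedlm("log_sales ~ promo", df, groups=df["store"], re_formula="~promo").fit()
    print(result.summary())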
Abstract:
Numerous studies find that monetary models of exchange rates cannot beat a random walk model. Such a finding, however, is not surprising given that such models are built upon money demand functions and traditional money demand functions appear to have broken down in many developed countries. In this paper we investigate whether using a more stable underlying money demand function results in improvements in forecasts of monetary models of exchange rates. More specifically, we use a sweep-adjusted measure of US monetary aggregate M1 which has been shown to have a more stable money demand function than the official M1 measure. The results suggest that the monetary models of exchange rates contain information about future movements of exchange rates, but the success of such models depends on the stability of money demand functions and the specifications of the models.
Abstract:
We use non-parametric procedures to identify breaks in the underlying series of UK household sector money demand functions. Money demand functions are estimated using cointegration techniques and by employing both the Simple Sum and Divisia measures of money. P-star models are also estimated for out-of-sample inflation forecasting. Our findings suggest that the presence of breaks affects both the estimation of cointegrated money demand functions and the inflation forecasts. P-star forecast models based on Divisia measures appear more accurate at longer horizons and the majority of models with fundamentals perform better than a random walk model.
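A minimal sketch of the cointegration step is shown below: an Engle-Granger style test of a long-run money demand relation between (log) real balances, income and an interest rate, using statsmodels. The series are simulated and cointegrated by construction; the Simple Sum versus Divisia distinction and the break detection are not reproduced.

    import numpy as np
    from statsmodels.tsa.stattools import coint

    # Engle-Granger style cointegration test of a long-run money demand relation.
    rng = np.random.default_rng(0)
    T = 300
    income = np.cumsum(rng.normal(0.01, 0.1, T))                      # I(1) scale variable
    rate = np.cumsum(rng.normal(0, 0.05, T))                          # I(1) opportunity cost
    real_money = 1.2 * income - 0.5 * rate + rng.normal(0, 0.1, T)    # long-run money demand

    t_stat, p_value, _ = coint(real_money, np.column_stack([income, rate]))
    print(f"Engle-Granger t = {t_stat:.2f}, p = {p_value:.3f}")        # small p suggests cointegration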
Abstract:
This paper compares the experience of forecasting the UK government bond yield curve before and after the dramatic lowering of short-term interest rates from October 2008. Out-of-sample forecasts for 1, 6 and 12 months are generated from each of a dynamic Nelson-Siegel model, autoregressive models for both yields and the principal components extracted from those yields, a slope regression and a random walk model. At short forecasting horizons, there is little difference in the performance of the models both prior to and after 2008. However, for medium- to longer-term horizons, the slope regression provided the best forecasts prior to 2008, while the recent experience of near-zero short interest rates coincides with a period of forecasting superiority for the autoregressive and dynamic Nelson-Siegel models. © 2014 John Wiley & Sons, Ltd.
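The sketch below illustrates the dynamic Nelson-Siegel mechanics: with the decay parameter held fixed (0.0609, the common Diebold-Li choice, used here purely for illustration), the level, slope and curvature factors are recovered by cross-sectional OLS each period and forecast with simple AR(1) fits. Yields are simulated, and none of the competing models of the paper are reproduced.

    import numpy as np

    # Nelson-Siegel loadings for a fixed decay parameter; factors by cross-sectional OLS.
    rng = np.random.default_rng(0)
    maturities = np.array([3, 6, 12, 24, 36, 60, 84, 120], dtype=float)  # months
    lam = 0.0609
    loadings = np.column_stack([
        np.ones_like(maturities),
        (1 - np.exp(-lam * maturities)) / (lam * maturities),
        (1 - np.exp(-lam * maturities)) / (lam * maturities) - np.exp(-lam * maturities),
    ])

    # Simulate persistent factors and build a yield panel (T months x 8 maturities).
    T = 240
    factors = np.zeros((T, 3))
    for t in range(1, T):
        factors[t] = 0.97 * factors[t - 1] + rng.normal(0, [0.2, 0.2, 0.3])
    yields = factors @ loadings.T + 4.0 + rng.normal(0, 0.05, size=(T, len(maturities)))

    # Cross-sectional OLS each period recovers the factors; AR(1) fits give h-step forecasts.
    beta_hat = np.linalg.lstsq(loadings, yields.T, rcond=None)[0].T      # T x 3 factor estimates
    h = 12
    forecast = np.empty(3)
    for j in range(3):
        f = beta_hat[:, j]
        phi = np.polyfit(f[:-1], f[1:], 1)                               # AR(1) via simple regression
        fc = f[-1]
        for _ in range(h):
            fc = phi[1] + phi[0] * fc
        forecast[j] = fc
    print("12-month-ahead yield curve forecast:", (loadings @ forecast).round(2))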
Abstract:
The research presented in this thesis was developed as part of DIBANET, an EC-funded project aiming to develop an energetically self-sustainable process for the production of diesel-miscible biofuels (i.e. ethyl levulinate) via acid hydrolysis of selected biomass feedstocks. Three thermal conversion technologies, pyrolysis, gasification and combustion, were evaluated in the present work with the aim of recovering the energy stored in the acid hydrolysis solid residue (AHR). Mainly consisting of lignin and humins, the AHR can contain up to 80% of the energy in the original feedstock. Pyrolysis of AHR proved unsatisfactory, so attention focussed on gasification and combustion with the aim of producing heat and/or power to supply the energy demanded by the ethyl levulinate production process. A thermal processing rig consisting of a Laminar Entrained Flow Reactor (LEFR) equipped with solid and liquid collection and online gas analysis systems was designed and built to explore pyrolysis, gasification and air-blown combustion of AHR. Maximum liquid yield for pyrolysis of AHR was 30 wt% with volatile conversion of 80%. Gas yield for AHR gasification was 78 wt%, with 8 wt% tar yields and conversion of volatiles close to 100%. 90 wt% of the AHR was transformed into gas by combustion, with volatile conversions above 90%. Gasification in 5 vol% O2 / 95 vol% N2 resulted in a nitrogen-diluted, low heating value gas (2 MJ/m3). Steam and oxygen-blown gasification of AHR were additionally investigated in a batch gasifier at KTH in Sweden. Steam promoted the formation of hydrogen (25 vol%) and methane (14 vol%), improving the gas heating value to 10 MJ/m3, below the typical value for steam gasification due to equipment limitations. Arrhenius kinetic parameters were calculated using data collected with the LEFR to provide reaction rate information for process design and optimisation. Activation energy (EA) and pre-exponential factor (k0, in s-1) for pyrolysis (EA = 80 kJ/mol, ln k0 = 14), gasification (EA = 69 kJ/mol, ln k0 = 13) and combustion (EA = 42 kJ/mol, ln k0 = 8) were calculated after linearly fitting the data using the random pore model. Kinetic parameters for pyrolysis and combustion were also determined by dynamic thermogravimetric analysis (TGA), including studies of the original biomass feedstocks for comparison. Results obtained by differential and integral isoconversional methods for activation energy determination were compared. Activation energy calculated by the Vyazovkin method was 103-204 kJ/mol for pyrolysis of untreated feedstocks and 185-387 kJ/mol for AHRs. Combustion activation energy was 138-163 kJ/mol for biomass and 119-158 kJ/mol for AHRs. The non-linear least squares method was used to determine the reaction model and pre-exponential factor. Pyrolysis and combustion of biomass were best modelled by a combination of third-order reaction and three-dimensional diffusion models, while the AHR decomposed following the third-order reaction model for pyrolysis and the three-dimensional diffusion model for combustion.
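As a pointer to the Arrhenius step, the sketch below fits the activation energy and pre-exponential factor from a straight line of ln k against 1/T. The temperatures and rate constants are invented values, not the LEFR measurements, and the random pore model conversion-to-rate step is not reproduced.

    import numpy as np

    # Arrhenius fit: ln k = ln k0 - EA / (R T), so a linear fit of ln k against 1/T gives
    # slope = -EA/R and intercept = ln k0. Values below are illustrative, not thesis data.
    R = 8.314                                        # J/(mol K)
    T = np.array([1073.0, 1173.0, 1273.0, 1373.0])   # reactor temperatures, K (illustrative)
    k = np.array([0.8, 2.1, 4.9, 10.5])              # apparent rate constants, 1/s (illustrative)

    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
    EA = -slope * R / 1000.0                         # activation energy, kJ/mol
    ln_k0 = intercept
    print(f"EA = {EA:.0f} kJ/mol, ln k0 = {ln_k0:.1f}")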