933 results for Random Pore Model


Relevance: 80.00%

Abstract:

Paleoceanographic archives derived from 17 marine sediment cores reconstruct the response of the Southwest Pacific Ocean to the peak interglacial, Marine Isotope Stage (MIS) 5e (ca. 125 ka). Paleo-sea surface temperature (SST) estimates were obtained from the Random Forest model (an ensemble decision-tree tool) applied to core-top planktonic foraminiferal faunas calibrated to modern SSTs. The reconstructed geographic pattern of the SST anomaly (maximum SST between 120 and 132 ka minus mean modern SST) indicates that MIS 5e conditions were generally warmer in the Southwest Pacific, especially in the western Tasman Sea, where a strengthened East Australian Current (EAC) likely extended subtropical influence to ca. 45°S off Tasmania. In contrast, the eastern Tasman Sea may have experienced modest cooling except around 45°S. The observed pattern resembles that developing under the present warming trend in the region. An increase in wind stress curl over the modern South Pacific is hypothesized to have spun up the South Pacific Subtropical Gyre, with a concurrent increase in subtropical flow in the western boundary currents, which include the EAC. However, warmer temperatures along the Subtropical Front and Campbell Plateau to the south suggest that the relative influence of the boundary inflows to eastern New Zealand may have differed in MIS 5e, and these currents may have followed different paths compared with today.
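The calibration idea (train an ensemble of trees on modern faunal assemblages with known SSTs, then apply it to fossil assemblages) can be sketched in miniature. The code below is not the study's implementation: it uses a bagged ensemble of single-split regression stumps as a stripped-down stand-in for a Random Forest, and the two "taxon abundance" predictors and SST values are entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    # exhaustive search for the best single-split regression stump
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            yl, yr = y[left].mean(), y[~left].mean()
            err = ((y[left] - yl) ** 2).sum() + ((y[~left] - yr) ** 2).sum()
            if err < best_err:
                best_err, best = err, (j, t, yl, yr)
    return best

def predict_stump(stump, X):
    j, t, yl, yr = stump
    return np.where(X[:, j] <= t, yl, yr)

def fit_forest(X, y, n_trees=50):
    # bagging: each stump is trained on a bootstrap resample of the data
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y), len(y))
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def predict_forest(stumps, X):
    # ensemble prediction = average over the bagged trees
    return np.mean([predict_stump(s, X) for s in stumps], axis=0)

# hypothetical calibration set: two "taxon abundance" predictors and a
# synthetic SST target -- invented numbers, purely for illustration
X = rng.uniform(0, 1, (200, 2))
sst = 10 + 15 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 0.5, 200)

forest = fit_forest(X, sst)
pred = predict_forest(forest, X)   # "reconstructed" SSTs for the same faunas
```

A real Random Forest grows deep trees with random feature subsets at each split; the bagging-and-average structure shown here is the essential mechanism.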

Relevance: 80.00%

Abstract:

In this paper, we present results on the internal structure (pore size and pore wall thickness distributions) of a series of activated carbon fibers with different degrees of burn-off, determined by interpreting argon adsorption data at 87 K using infinite and finite wall thickness models. The latter approach has recently been developed in our laboratory. The results show that while the low burn-off samples have nearly uniform pore size (

Relevance: 80.00%

Abstract:

The estimated parameters of output distance functions frequently violate the monotonicity, quasi-convexity and convexity constraints implied by economic theory, leading to estimated elasticities and shadow prices that are incorrectly signed, and ultimately to perverse conclusions concerning the effects of input and output changes on productivity growth and relative efficiency levels. We show how a Bayesian approach can be used to impose these constraints on the parameters of a translog output distance function. Implementing the approach involves the use of a Gibbs sampler with data augmentation. A Metropolis-Hastings algorithm is also used within the Gibbs sampler to simulate observations from truncated pdfs. Our methods are developed for the case where panel data are available and technical inefficiency effects are assumed to be time-invariant. Two models, a fixed effects model and a random effects model, are developed and applied to panel data on 17 European railways. We observe significant changes in estimated elasticities and shadow price ratios when regularity restrictions are imposed. (c) 2004 Elsevier B.V. All rights reserved.
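The Metropolis-Hastings-within-Gibbs idea (use an MH update for one conditional whose truncated density cannot be sampled directly) can be illustrated on its own. The sketch below targets a normal density truncated to the positive half-line, a common shape for a constrained conditional; the target parameters and step size are arbitrary choices for illustration, not from the paper.

```python
import math
import random

random.seed(1)

def log_trunc_normal(x, mu=1.0, sigma=0.5):
    # log-density (up to a constant) of N(mu, sigma^2) truncated to x > 0
    if x <= 0:
        return -math.inf
    return -0.5 * ((x - mu) / sigma) ** 2

def mh_step(x, step=0.3):
    # one random-walk Metropolis-Hastings update, as used inside a Gibbs sweep
    prop = x + random.gauss(0.0, step)
    log_ratio = log_trunc_normal(prop) - log_trunc_normal(x)
    if random.random() < math.exp(min(0.0, log_ratio)):
        return prop        # accept the proposal
    return x               # reject: keep the current state

x, draws = 1.0, []
for i in range(20000):
    x = mh_step(x)
    if i >= 5000:          # discard burn-in
        draws.append(x)

post_mean = sum(draws) / len(draws)   # near 1.0 for these settings
```

Proposals that land outside the truncation region get log-density minus infinity and are always rejected, which is how the constraint is enforced without changing the proposal mechanism.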

Relevance: 80.00%

Abstract:

The Professions in Australia Study is the first longitudinal investigation of the professions in Australia; it spans 33 years. Self-administered questionnaires were distributed on at least eight occasions between 1965 and 1998 to cohorts of students and later practitioners from the professions of engineering, law and medicine. The longitudinal design of this study has allowed for an investigation of individual change over time in three archetypal characteristics of the professions (service, knowledge and autonomy) and two benefits of professional work (financial rewards and prestige). A cumulative logit random effects model was used to statistically assess changes in the ordinal response scores measuring importance of the characteristics and benefits through stages of the career path. Individuals were also classified by average trends in response scores over time, and hence professions are described through their members' tendency to follow a particular path in attitudes, either of change or constancy, in relation to the importance of the five elements (characteristics and benefits). Comparisons of trends are also made between the three professions.
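The cumulative logit link at the heart of such a model maps a linear predictor to probabilities over ordered response categories. The sketch below shows only that link; the cutpoints and linear-predictor values are invented, and in the full random effects model the linear predictor would also include a person-level random intercept.

```python
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def ordinal_probs(eta, cutpoints):
    # cumulative logit model: P(Y <= k) = expit(theta_k - eta); individual
    # category probabilities are successive differences of the cumulative curve
    cum = expit(np.asarray(cutpoints, dtype=float) - eta)
    cum = np.concatenate([[0.0], cum, [1.0]])
    return np.diff(cum)

# hypothetical 4-category "importance" item with three increasing cutpoints;
# eta would combine fixed effects (career stage, profession) and, in the
# random effects version, a subject-level random intercept
theta = [-1.0, 0.5, 2.0]
p_low = ordinal_probs(0.0, theta)    # e.g. a low linear predictor
p_high = ordinal_probs(1.5, theta)   # a higher linear predictor
```

Because the same cutpoints apply at every value of eta, raising eta shifts probability mass toward the higher importance categories, which is what the model's "change over the career path" estimates describe.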

Relevance: 80.00%

Abstract:

We present results of the reconstruction of a saccharose-based activated carbon (CS1000a) using hybrid reverse Monte Carlo (HRMC) simulation, recently proposed by Opletal et al. [1]. Interaction between carbon atoms in the simulation is modeled by an environment dependent interaction potential (EDIP) [2,3]. The reconstructed structure shows predominance of sp(2) over sp bonding, while a significant proportion of sp(3) hybrid bonding is also observed. We also calculated a ring distribution and geometrical pore size distribution of the model developed. The latter is compared with that obtained from argon adsorption at 87 K using our recently proposed characterization procedure [4], the finite wall thickness (FWT) model. Further, we determine self-diffusivities of argon and nitrogen in the constructed carbon as functions of loading. It is found that while there is a maximum in the diffusivity with respect to loading, as previously observed by Pikunic et al. [5], diffusivities in the present work are 10 times larger than those obtained in the prior work, consistent with the larger pore size as well as higher porosity of the activated saccharose carbon studied here.

Relevance: 80.00%

Abstract:

Many variables that are of interest in social science research are nominal variables with two or more categories, such as employment status, occupation, political preference, or self-reported health status. With longitudinal survey data it is possible to analyse the transitions of individuals between different employment states or occupations (for example). In the statistical literature, models for analysing categorical dependent variables with repeated observations belong to the family of models known as generalized linear mixed models (GLMMs). The specific GLMM for a dependent variable with three or more categories is the multinomial logit random effects model. For these models, the marginal distribution of the response does not have a closed form solution, and hence numerical integration must be used to obtain maximum likelihood estimates for the model parameters. Techniques for implementing the numerical integration are available, but they are computationally intensive, requiring a large amount of computer processing time that increases with the number of clusters (or individuals) in the data, and they are not always readily accessible to the practitioner in standard software. For the purposes of analysing categorical response data from a longitudinal social survey, there is clearly a need to evaluate the existing procedures for estimating multinomial logit random effects models in terms of accuracy, efficiency and computing time. The computational time will have significant implications as to the preferred approach by researchers. In this paper we evaluate statistical software procedures that utilise adaptive Gaussian quadrature and MCMC methods, with specific application to modeling the employment status of women using a GLMM, over three waves of the HILDA survey.
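The numerical integration being evaluated can be shown in its simplest form: Gauss-Hermite quadrature over a single normal random intercept. For brevity the sketch below uses a binary (rather than multinomial) logit cluster likelihood, non-adaptive quadrature, and invented response/linear-predictor values; the principle of replacing the integral with a weighted sum over nodes is the same.

```python
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(20)

def cluster_marginal_loglik(y, eta, sigma):
    # marginal log-likelihood of one cluster in a random-intercept binary
    # logit model: integrate the N(0, sigma^2) intercept out with 20-point
    # Gauss-Hermite quadrature (change of variables b = sqrt(2) * sigma * u)
    b = np.sqrt(2.0) * sigma * nodes
    lik = np.ones_like(nodes)
    for yt, et in zip(y, eta):
        p = 1.0 / (1.0 + np.exp(-(et + b)))   # success probability at each node
        lik = lik * (p if yt == 1 else 1.0 - p)
    return np.log(weights @ lik / np.sqrt(np.pi))

# one hypothetical cluster: three repeated binary outcomes and their
# fixed-effect linear predictors (invented values)
ll = cluster_marginal_loglik([1, 0, 1], [0.2, -0.1, 0.5], sigma=1.0)
```

The full-data log-likelihood sums this quantity over clusters, which is why computing time grows with the number of individuals; adaptive quadrature recentres the nodes per cluster to reduce the number of nodes needed.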

Relevance: 80.00%

Abstract:

Numerous studies find that monetary models of exchange rates cannot beat a random walk model. Such a finding, however, is not surprising given that such models are built upon money demand functions and traditional money demand functions appear to have broken down in many developed countries. In this article, we investigate whether using a more stable underlying money demand function results in improvements in forecasts of monetary models of exchange rates. More specifically, we use a sweep-adjusted measure of US monetary aggregate M1 which has been shown to have a more stable money demand function than the official M1 measure. The results suggest that the monetary models of exchange rates contain information about future movements of exchange rates, but the success of such models depends on the stability of money demand functions and the specifications of the models.
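The benchmark comparison described here (does a fundamentals-based forecast beat a random walk?) follows a standard pattern that can be sketched with synthetic data. Everything below is invented for illustration: a real evaluation would use actual exchange rates and the sweep-adjusted M1 series, and the "monetary model" here is a deliberately stylized stand-in.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic log exchange rate: a random-walk "fundamental" plus noise
T = 300
fundamental = np.cumsum(rng.normal(0.0, 0.01, T))
rate = fundamental + rng.normal(0.0, 0.01, T)

# one-step-ahead forecasts over the last 100 periods
rw_fc = rate[199:299]          # random walk: tomorrow's rate = today's rate
mon_fc = fundamental[199:299]  # stylized "monetary model": rate -> fundamental
actual = rate[200:300]

def rmse(err):
    return float(np.sqrt(np.mean(err ** 2)))

rw_rmse, mon_rmse = rmse(actual - rw_fc), rmse(actual - mon_fc)
```

The model "contains information" in the paper's sense when its out-of-sample RMSE (or a formal test on the forecast errors) improves on the random-walk benchmark.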

Relevance: 80.00%

Abstract:

The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
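The scaling bottleneck the abstract describes is concrete in the standard kriging equations: prediction requires solving against the full n x n covariance matrix of the samples. The sketch below shows that baseline computation (not the paper's sparse sequential algorithm) on a small synthetic 1-D field; the sine-shaped latent field, length scale and noise level are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def sq_exp(a, b, length_scale=0.5):
    # squared-exponential covariance between two sets of 1-D locations
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

# n noisy samples of a latent field (synthetic sine field for illustration)
n = 50
x = np.sort(rng.uniform(0.0, 5.0, n))
y = np.sin(x) + rng.normal(0.0, 0.1, n)

K = sq_exp(x, x) + 0.1 ** 2 * np.eye(n)   # n x n matrix: the O(n^3) bottleneck
xs = np.linspace(0.0, 5.0, 11)            # unsampled prediction locations
Ks = sq_exp(xs, x)

post_mean = Ks @ np.linalg.solve(K, y)                      # kriging mean
post_cov = sq_exp(xs, xs) - Ks @ np.linalg.solve(K, Ks.T)   # kriging covariance
```

The sparse Bayesian method replaces the full solve with a sequentially maintained subset of basis vectors, trading a controlled (relative-entropy) approximation error for tractability on large n.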

Relevance: 80.00%

Abstract:

This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation.
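The "non-linear autoregressive models based on kernel methods" can be illustrated with batch kernel ridge regression, the closest simple analogue of the kernel recursive least squares predictor (which updates the same solution online). The inflation series, RBF bandwidth and ridge penalty below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(a, b, gamma=5.0):
    # radial basis function kernel between two sets of scalar inputs
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# synthetic "inflation" series with nonlinear AR(1) dynamics (invented data)
T = 200
infl = np.zeros(T)
for t in range(1, T):
    infl[t] = 0.8 * np.tanh(infl[t - 1]) + rng.normal(0.0, 0.1)

X, y = infl[:-1], infl[1:]                # predict next period from current
K = rbf(X, X)
alpha = np.linalg.solve(K + 0.1 * np.eye(len(X)), y)   # regularized weights

def forecast(x_new):
    # kernel prediction at a new point; a random walk would just return x_new
    return float((rbf(np.array([x_new], dtype=float), X) @ alpha)[0])

next_fc = forecast(infl[-1])
in_sample_rmse = float(np.sqrt(np.mean((K @ alpha - y) ** 2)))
```

KRLS proper maintains `alpha` recursively as observations arrive (typically with a dictionary budget, hence "finite memory"), rather than re-solving the full system each period as this batch sketch does.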

Relevance: 80.00%

Abstract:

BACKGROUND: The behavioral and psychological symptoms related to dementia (BPSD) are difficult to manage and are associated with adverse patient outcomes. OBJECTIVE: To systematically analyze the data on memantine in the treatment of BPSD. METHODS: We searched MEDLINE, EMBASE, Pharm-line, the Cochrane Centre Collaboration, www.clinicaltrials.gov, www.controlled-trials.com, and PsycINFO (1966-July 2007). We contacted manufacturers and scrutinized the reference sections of articles identified in our search for further references, including conference proceedings. Two researchers (IM and CF) independently reviewed all studies identified by the search strategy. We included 6 randomized, parallel-group, double-blind studies that rated BPSD with the Neuropsychiatric Inventory (NPI) in our meta-analysis. Patients had probable Alzheimer's disease and received treatment with memantine for at least one month. Overall efficacy of memantine on the NPI was established with a t-test for the average difference between means across studies, using a random effects model. RESULTS: Five of the 6 studies identified had NPI outcome data. In these 5 studies, 868 patients were treated with memantine and 882 patients were treated with placebo. Patients on memantine improved by 1.99 points on the NPI scale (95% CI -0.08 to -3.91; p = 0.041) compared with the placebo group. CONCLUSIONS: Initial data appear to indicate that memantine decreases NPI scores and may have a role in managing BPSD. However, there are a number of limitations with the current data; the effect size was relatively small, and whether memantine produces significant clinical benefit is not clear.
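A random effects meta-analysis of mean differences, as used here, pools study estimates with weights that incorporate a between-study variance component. The sketch below uses the standard DerSimonian-Laird estimator; the abstract does not state which tau-squared estimator the authors used, and the effect sizes and standard errors are invented, not the values from the reviewed trials.

```python
import math

# illustrative per-study mean differences on the NPI and standard errors
effects = [-2.4, -1.1, -3.0, -0.5, -2.2]
ses = [0.9, 0.7, 1.2, 0.8, 1.0]

# fixed-effect weights, pooled mean and Cochran's Q heterogeneity statistic
w = [1.0 / se ** 2 for se in ses]
fe = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
q = sum(wi * (ei - fe) ** 2 for wi, ei in zip(w, effects))

# DerSimonian-Laird estimate of the between-study variance tau^2
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# random effects pooling: each weight now includes tau^2
w_re = [1.0 / (se ** 2 + tau2) for se in ses]
pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
se_pooled = math.sqrt(1.0 / sum(w_re))
```

Adding tau-squared to every study's variance flattens the weights, so heterogeneous studies pull the pooled estimate less toward the most precise single trial and widen its confidence interval.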

Relevance: 80.00%

Abstract:

In order to generate sales promotion response predictions, marketing analysts estimate demand models using either disaggregated (consumer-level) or aggregated (store-level) scanner data. Comparison of predictions from these demand models is complicated by the fact that models may accommodate different forms of consumer heterogeneity depending on the level of data aggregation. This study shows via simulation that demand models with various heterogeneity specifications do not produce more accurate sales response predictions than a homogeneous demand model applied to store-level data, with one major exception: a random coefficients model designed to capture within-store heterogeneity using store-level data produced significantly more accurate sales response predictions (as well as better fit) compared to other model specifications. An empirical application to the paper towel product category adds additional insights. This article has supplementary material online.

Relevance: 80.00%

Abstract:

Numerous studies find that monetary models of exchange rates cannot beat a random walk model. Such a finding, however, is not surprising given that such models are built upon money demand functions and traditional money demand functions appear to have broken down in many developed countries. In this paper we investigate whether using a more stable underlying money demand function results in improvements in forecasts of monetary models of exchange rates. More specifically, we use a sweep-adjusted measure of US monetary aggregate M1 which has been shown to have a more stable money demand function than the official M1 measure. The results suggest that the monetary models of exchange rates contain information about future movements of exchange rates, but the success of such models depends on the stability of money demand functions and the specifications of the models.

Relevance: 80.00%

Abstract:

Assessing factors that predict new product success (NPS) holds critical importance for companies, as research shows that despite considerable new product investment, success rates are generally below 25%. Over the decades, meta-analytical attempts have been made to summarize empirical findings on NPS factors. However, market environment changes such as increased global competition, as well as methodological advancements in meta-analytical research, present a timely opportunity to augment their results. Hence, a key objective of this research is to provide an updated and extended meta-analytic investigation of the factors affecting NPS. Using Henard and Szymanski's meta-analysis as the most comprehensive recent summary of empirical findings, this study updates their findings by analyzing articles published from 1999 through 2011, the period following the original meta-analysis. Based on 233 empirical studies (from 204 manuscripts) on NPS, with a total of 2618 effect sizes, this study also takes advantage of more recent methodological developments by re-calculating effects of the meta-analysis employing a random effects model. The study's scope broadens by including overlooked but important additional variables, notably “country culture,” and discusses substantive differences between the updated meta-analysis and its predecessor. Results reveal generally weaker effect sizes than those reported by Henard and Szymanski in 2001, and provide evolutionary evidence of decreased effects of common success factors over time. Moreover, culture emerges as an important moderating factor, weakening effect sizes for individualistic countries and strengthening effects for risk-averse countries, highlighting the importance of further investigating culture's role in product innovation studies, and of tracking changes of success factors of product innovations.
Finally, a sharp increase since 1999 in studies investigating product and process characteristics identifies a significant shift in research interest in new product development success factors. The finding that the importance of success factors generally declines over time calls for new theoretical approaches to better capture the nature of new product development (NPD) success factors. One might speculate that the potential to create competitive advantages through an understanding of NPD success factors is reduced as knowledge of these factors becomes more widespread among managers. Results also imply that managers attempting to improve success rates of NPDs need to consider national culture as this factor exhibits a strong moderating effect: Working in varied cultural contexts will result in differing antecedents of successful new product ventures.

Relevance: 80.00%

Abstract:

We use non-parametric procedures to identify breaks in the underlying series of UK household sector money demand functions. Money demand functions are estimated using cointegration techniques and by employing both the Simple Sum and Divisia measures of money. P-star models are also estimated for out-of-sample inflation forecasting. Our findings suggest that the presence of breaks affects both the estimation of cointegrated money demand functions and the inflation forecasts. P-star forecast models based on Divisia measures appear more accurate at longer horizons and the majority of models with fundamentals perform better than a random walk model.

Relevance: 80.00%

Abstract:

This paper compares the experience of forecasting the UK government bond yield curve before and after the dramatic lowering of short-term interest rates from October 2008. Out-of-sample forecasts for 1, 6 and 12 months are generated from each of a dynamic Nelson-Siegel model, autoregressive models for both yields and the principal components extracted from those yields, a slope regression and a random walk model. At short forecasting horizons, there is little difference in the performance of the models both prior to and after 2008. However, for medium- to longer-term horizons, the slope regression provided the best forecasts prior to 2008, while the recent experience of near-zero short interest rates coincides with a period of forecasting superiority for the autoregressive and dynamic Nelson-Siegel models. © 2014 John Wiley & Sons, Ltd.
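The Nelson-Siegel component of the forecasting comparison starts from fitting three factors (level, slope, curvature) to the observed yield curve at each date. The sketch below shows that cross-sectional fit with a fixed decay parameter; the maturities, yields and decay value are invented, and the dynamic model would then forecast the fitted factors (e.g. with autoregressions) rather than the yields directly.

```python
import numpy as np

def ns_loadings(maturities, lam=0.7):
    # Nelson-Siegel level, slope and curvature loadings at each maturity
    m = np.asarray(maturities, dtype=float)
    x = lam * m
    slope = (1.0 - np.exp(-x)) / x
    curvature = slope - np.exp(-x)
    return np.column_stack([np.ones_like(m), slope, curvature])

# illustrative yields (%) at maturities in years (invented numbers)
mats = [0.25, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0]
yields = np.array([0.5, 0.8, 1.2, 1.6, 2.1, 2.4, 2.6])

X = ns_loadings(mats)
beta, *_ = np.linalg.lstsq(X, yields, rcond=None)   # level, slope, curvature
fitted = X @ beta
```

Repeating this regression date by date yields a three-factor time series, and a yield-curve forecast at horizon h is obtained by forecasting those factors and mapping them back through the loadings; the random walk benchmark instead carries today's yields forward unchanged.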