903 results for Ordered probit regression


Relevance: 20.00%

Abstract:

A compact and planar donor–acceptor molecule 1 comprising tetrathiafulvalene (TTF) and benzothiadiazole (BTD) units has been synthesised and experimentally characterised by structural, optical, and electrochemical methods. Solution-processed and thermally evaporated thin films of 1 have also been explored as active materials in organic field-effect transistors (OFETs). For these devices, hole field-effect mobilities of μFE = (1.3±0.5)×10⁻³ and (2.7±0.4)×10⁻³ cm² V⁻¹ s⁻¹ were determined for the solution-processed and thermally evaporated thin films, respectively. An intense intramolecular charge-transfer (ICT) transition at around 495 nm dominates the optical absorption spectrum of the neutral dyad, which also shows a weak emission from its ICT state. The iodine-induced oxidation of 1 leads to a partially oxidised crystalline charge-transfer (CT) salt {(1)₂I₃}, and eventually also to a fully oxidised compound {1I₃}·½I₂. Single crystals of the former CT compound, exhibiting a highly symmetrical crystal structure, reveal a fairly good room-temperature electrical conductivity of the order of 2 S cm⁻¹. The one-dimensional spin system bears compactly bonded BTD acceptors (spatial localisation of the LUMO) along its ridge.

Relevance: 20.00%

Abstract:

The counterfactual decomposition technique popularized by Blinder (1973, Journal of Human Resources, 436–455) and Oaxaca (1973, International Economic Review, 693–709) is widely used to study mean outcome differences between groups. For example, the technique is often used to analyze wage gaps by sex or race. This article summarizes the technique and addresses several complications, such as the identification of effects of categorical predictors in the detailed decomposition or the estimation of standard errors. A new command called oaxaca is introduced, and examples illustrating its usage are given.
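
As a rough sketch of the technique itself (not of the Stata oaxaca command), the twofold decomposition can be computed from two group-specific OLS fits; the simulated data and variable names below are purely illustrative.

import numpy as np

def ols(X, y):
    """OLS coefficients via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def twofold_oaxaca(X_a, y_a, X_b, y_b):
    """Twofold Blinder-Oaxaca decomposition of mean(y_a) - mean(y_b),
    using group B's coefficients as the reference (one common choice)."""
    beta_a, beta_b = ols(X_a, y_a), ols(X_b, y_b)
    xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)
    explained = (xbar_a - xbar_b) @ beta_b      # endowments ("explained") part
    unexplained = xbar_a @ (beta_a - beta_b)    # coefficients ("unexplained") part
    return explained, unexplained

# Illustrative simulated data: an intercept plus one predictor per group.
rng = np.random.default_rng(0)
n = 500
x_a = rng.normal(1.0, 1.0, n); y_a = 1.0 + 0.8 * x_a + rng.normal(0, 1, n)
x_b = rng.normal(0.5, 1.0, n); y_b = 0.7 + 0.6 * x_b + rng.normal(0, 1, n)
X_a = np.column_stack([np.ones(n), x_a])
X_b = np.column_stack([np.ones(n), x_b])

explained, unexplained = twofold_oaxaca(X_a, y_a, X_b, y_b)
print("gap:", y_a.mean() - y_b.mean())
print("explained:", explained, "unexplained:", unexplained)

The two components sum to the raw mean gap exactly, because OLS with an intercept reproduces the group means; the choice of reference coefficients (here the coefficients of group B) is one of the modeling choices typically discussed alongside this decomposition.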

Relevance: 20.00%

Abstract:

estout, introduced by Jann (Stata Journal 5: 288–308), is a useful tool for producing regression tables from stored estimates. However, its syntax is relatively complex, and commands can become long even for simple tables. Furthermore, having to store the estimates beforehand can be cumbersome. To facilitate the production of regression tables, I therefore present here two new commands called eststo and esttab. eststo is a wrapper for official Stata's estimates store and simplifies the storing of estimation results for tabulation. esttab, on the other hand, is a wrapper for estout and simplifies compiling nice-looking tables from the stored estimates without much typing. I also provide updates to estout and estadd.

Relevance: 20.00%

Abstract:

Organizing and archiving statistical results and processing a subset of those results for publication are important and often underestimated issues in conducting statistical analyses. Because automation of these tasks is often poor, processing results produced by statistical packages is quite laborious and vulnerable to error. I will therefore present a new package called estout that facilitates and automates some of these tasks. This new command can be used to produce regression tables for use with spreadsheets, LaTeX, HTML, or word processors. For example, the results for multiple models can be organized in spreadsheets and can thus be archived in an orderly manner. Alternatively, the results can be directly saved as a publication-ready table for inclusion in, for example, a LaTeX document. estout is implemented as a wrapper for estimates table but has many additional features, such as support for mfx. However, despite its flexibility, estout is—I believe—still very straightforward and easy to use. Furthermore, estout can be customized via so-called defaults files. A tool to make available supplementary statistics called estadd is also provided.

Relevance: 20.00%

Abstract:

In this paper we propose a new fully automatic method for localizing and segmenting 3D intervertebral discs from MR images, where the two problems are solved in a unified data-driven regression and classification framework. We estimate the output (image displacements for localization, or foreground/background labels for segmentation) of image points by exploiting both training data and geometric constraints simultaneously. The problem is formulated as a unified objective function that is then solved globally and efficiently. We validate our method on MR images of 25 patients. Taking manually labeled data as the ground truth, our method achieves a mean localization error of 1.3 mm, a mean Dice metric of 87%, and a mean surface distance of 1.3 mm. Our method can be applied to other localization and segmentation tasks.

Relevance: 20.00%

Abstract:

In clinical practice, traditional X-ray radiography is widely used, and knowledge of landmarks and contours in anteroposterior (AP) pelvis X-rays is invaluable for computer-aided diagnosis, hip surgery planning, and image-guided interventions. This paper presents a fully automatic approach for landmark detection and shape segmentation of both the pelvis and the femur in conventional AP X-ray images. Our approach is based on the framework of landmark detection via Random Forest (RF) regression and shape regularization via hierarchical sparse shape composition. We propose a visual feature, FL-HoG (Flexible-Level Histogram of Oriented Gradients), and a feature selection algorithm based on trace ratio optimization to improve the robustness and efficacy of RF-based landmark detection. The landmark detection result is then used in a hierarchical sparse shape composition framework for shape regularization. Finally, the extracted shape contour is fine-tuned by a post-processing step based on low-level image features. The experimental results demonstrate that our feature selection algorithm reduces the feature dimension by a factor of 40 and improves both training and test efficiency. Further experiments conducted on 436 clinical AP pelvis X-rays show that our approach achieves an average point-to-curve error of around 1.2 mm for the femur and 1.9 mm for the pelvis.
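
A heavily simplified Python sketch of the regression-voting idea behind RF-based landmark detection: HoG descriptors of patches around sampled points are regressed onto their displacements to a single landmark, and test-time points vote via their predicted displacements. The patch size, sampling scheme, and median aggregation are illustrative assumptions; the FL-HoG feature, the trace ratio feature selection, and the sparse shape regularization described above are not reproduced.

import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestRegressor

PATCH = 32  # illustrative patch size (pixels)

def patch_hog(image, y, x):
    """HoG descriptor of the PATCH x PATCH window centred on (y, x)."""
    half = PATCH // 2
    win = image[y - half:y + half, x - half:x + half]
    return hog(win, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def sample_points(image, n, rng):
    """Random points whose patches lie fully inside the image."""
    h, w = image.shape
    ys = rng.integers(PATCH, h - PATCH, n)
    xs = rng.integers(PATCH, w - PATCH, n)
    return np.column_stack([ys, xs])

def train_detector(images, landmarks, n_points=200, seed=0):
    """Learn a displacement regressor: HoG(patch at p) -> landmark - p."""
    rng = np.random.default_rng(seed)
    feats, disps = [], []
    for img, lm in zip(images, landmarks):
        for p in sample_points(img, n_points, rng):
            feats.append(patch_hog(img, *p))
            disps.append(lm - p)
    rf = RandomForestRegressor(n_estimators=100, random_state=seed)
    rf.fit(np.asarray(feats), np.asarray(disps))
    return rf

def detect(rf, image, n_points=500, seed=0):
    """Each sampled point votes for the landmark via its predicted displacement."""
    rng = np.random.default_rng(seed)
    pts = sample_points(image, n_points, rng)
    votes = pts + rf.predict(np.array([patch_hog(image, *p) for p in pts]))
    return np.median(votes, axis=0)  # crude robust aggregation of the votes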

Relevance: 20.00%

Abstract:

robreg provides a number of robust estimators for linear regression models. Among them are the high breakdown-point and high-efficiency MM-estimator, the Huber and bisquare M-estimators, and the S-estimator, each supporting classic or robust standard errors. Furthermore, basic versions of the LMS/LQS (least median of squares) and LTS (least trimmed squares) estimators are provided. Note that the moremata package, also available from SSC, is required.
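
robreg is a Stata package, but the Huber and bisquare M-estimators it covers have close counterparts in Python's statsmodels, which can serve as a quick cross-check; the simulated data and the default tuning constants below are illustrative.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)
y[:10] += 15.0                      # a few gross outliers
X = sm.add_constant(x)

# Huber M-estimator (statsmodels default tuning constant t = 1.345)
huber = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()

# Bisquare (Tukey biweight) M-estimator (default c = 4.685)
bisquare = sm.RLM(y, X, M=sm.robust.norms.TukeyBiweight()).fit()

print("OLS:     ", sm.OLS(y, X).fit().params)
print("Huber:   ", huber.params)
print("Bisquare:", bisquare.params)

Only the two M-estimators are sketched here; the high breakdown-point estimators (MM, S, LMS/LQS, LTS) are not shown.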

Relevance: 20.00%

Abstract:

The paper considers panel data methods for estimating ordered logit models with individual-specific correlated unobserved heterogeneity. We show that a popular approach is inconsistent, whereas some consistent and efficient estimators are available, including minimum distance and generalized method-of-moments estimators. A Monte Carlo study reveals the good properties of an alternative estimator that has not previously been considered in econometric applications, is simple to implement, and is almost as efficient. An illustrative application based on data from the German Socio-Economic Panel confirms the large negative effect of unemployment on life satisfaction that has been found in the previous literature.
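
For orientation, the generic model in question is an ordered logit with an individual-specific effect α_i and thresholds τ_k,

P(y_it ≤ k | x_it, α_i) = Λ(τ_k − x_it'β − α_i),   k = 1, …, K−1,   with Λ(z) = e^z / (1 + e^z),

where "correlated" heterogeneity means that α_i is allowed to depend on the covariates x_i1, …, x_iT. This is only the standard specification, not a description of any particular estimator compared in the paper.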

Relevance: 20.00%

Abstract:

This paper proposes a new estimator for the fixed effects ordered logit model. In contrast to existing methods, the new procedure allows estimating the thresholds. The empirical relevance and simplicity of implementation are illustrated in an application on the effect of unemployment on life satisfaction.

Relevance: 20.00%

Abstract:

Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences but are not appropriate for modeling life expectancy, because in many situations time to death has a negatively skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models for modeling life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use skew-normal regression so that censored and left-truncated observations are accounted for. With this we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates, and we compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest.
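
A minimal sketch of how such a likelihood can be written down, assuming a skew-normal outcome with a linear predictor for the location parameter, right-censoring indicators, and left-truncation (delayed-entry) times; the simulated data, starting values, and optimizer are illustrative, and this is not the authors' implementation.

import numpy as np
from scipy import stats, optimize

def negloglik(params, X, t, event, entry):
    """Censored, left-truncated skew-normal regression.
    t     : observed time (death or censoring)
    event : 1 if death observed, 0 if right-censored
    entry : left-truncation (study entry) time
    """
    k = X.shape[1]
    beta, log_scale, shape = params[:k], params[k], params[k + 1]
    loc, scale = X @ beta, np.exp(log_scale)
    # observed deaths contribute the density, censored times the survival function
    ll = np.where(event == 1,
                  stats.skewnorm.logpdf(t, shape, loc, scale),
                  stats.skewnorm.logsf(t, shape, loc, scale))
    # left truncation: condition on survival up to study entry
    ll -= stats.skewnorm.logsf(entry, shape, loc, scale)
    return -np.sum(ll)

# illustrative simulated data
rng = np.random.default_rng(1)
n = 1000
x = rng.integers(0, 2, n)                 # e.g. a binary covariate
X = np.column_stack([np.ones(n), x])
age = stats.skewnorm.rvs(-4, loc=85 - 3 * x, scale=12, random_state=rng)
entry = np.full(n, 60.0)                  # everyone enters follow-up at age 60
cens = rng.uniform(80, 110, n)            # administrative censoring times
keep = age > entry                        # left truncation: only those alive at entry
t = np.minimum(age, cens)[keep]
event = (age <= cens).astype(int)[keep]
X, entry = X[keep], entry[keep]

start = np.array([85.0, 0.0, np.log(12.0), -1.0])
fit = optimize.minimize(negloglik, start, args=(X, t, event, entry), method="Nelder-Mead")
print(fit.x[:2])   # intercept and covariate effect on the location parameter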

Relevance: 20.00%

Abstract:

We consider the problem of nonparametric estimation of a concave regression function F. We show that the supremum distance between the least squares estimator and F on a compact interval is typically of order (log(n)/n)^{2/5}. This entails rates of convergence for the estimator's derivative. Moreover, we discuss the impact of additional constraints on F, such as monotonicity and pointwise bounds. Then we apply these results to the analysis of current status data, where the distribution function of the event times is assumed to be concave.
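
The least squares estimator under a concavity constraint is a quadratic program; below is a small sketch using cvxpy, a convex-optimization library, where the simulated data and the optional monotonicity constraint are illustrative.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 100
x = np.sort(rng.uniform(0, 1, n))
y = np.sqrt(x) + rng.normal(scale=0.1, size=n)   # sqrt is concave

f = cp.Variable(n)                               # fitted values at the design points
slopes = cp.multiply(1.0 / np.diff(x), f[1:] - f[:-1])
constraints = [slopes[1:] <= slopes[:-1]]        # nonincreasing slopes = concavity
# constraints.append(slopes >= 0)                # optional: add monotonicity as well

prob = cp.Problem(cp.Minimize(cp.sum_squares(y - f)), constraints)
prob.solve()
fhat = f.value                                   # concave least squares fit at x_1,...,x_n

Concavity is encoded as nonincreasing slopes between consecutive design points, which also makes it straightforward to add the monotonicity or pointwise-bound constraints discussed above.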

Relevance: 20.00%

Abstract:

Let Y_i = f(x_i) + E_i (1 ≤ i ≤ n) with given covariates x_1 < x_2 < ⋯ < x_n, an unknown regression function f, and independent random errors E_i with median zero. It is shown how to apply several linear rank test statistics simultaneously in order to test monotonicity of f in various regions and to identify its local extrema.
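
As a simplified illustration of the idea, and not the specific multiscale rank statistics of the paper, one can compute a rank correlation between x and Y over many subintervals and flag those where the trend is significantly positive or negative; the window size, step, and Bonferroni correction below are illustrative assumptions.

import numpy as np
from scipy.stats import kendalltau

def local_monotonicity(x, y, window=30, step=10, alpha=0.05):
    """Kendall rank test of monotone trend on sliding windows of the design points."""
    flags = []
    n_windows = len(range(0, len(x) - window + 1, step))
    for start in range(0, len(x) - window + 1, step):
        sl = slice(start, start + window)
        tau, p = kendalltau(x[sl], y[sl])
        if p * n_windows < alpha:                 # crude Bonferroni correction
            flags.append((x[start], x[start + window - 1],
                          "increasing" if tau > 0 else "decreasing"))
    return flags

# illustrative data with a local maximum around x = 0.6
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 300))
y = -(x - 0.6) ** 2 + rng.normal(scale=0.05, size=300)
for lo, hi, direction in local_monotonicity(x, y):
    print(f"f appears {direction} on [{lo:.2f}, {hi:.2f}]")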

Relevance: 20.00%

Abstract:

When considering data from many trials, it is likely that some of them present a markedly different intervention effect or exert an undue influence on the summary results. We develop a forward search algorithm for identifying outlying and influential studies in meta-analysis models. The forward search algorithm starts by fitting the hypothesized model to a small subset of likely outlier-free studies and proceeds by adding, one by one, the studies determined to be closest to the model fitted to the existing set. As each study is added to the set, plots of estimated parameters and measures of fit are monitored, and outliers are identified by sharp changes in these forward plots. We apply the proposed outlier detection method to two real data sets: a meta-analysis of 26 studies that examines the effect of writing-to-learn interventions on academic achievement, adjusting for three possible effect modifiers, and a meta-analysis of 70 studies that compares a fluoride toothpaste treatment with placebo for preventing dental caries in children. A simple simulated example is used to illustrate the steps of the proposed methodology, and a small-scale simulation study is conducted to evaluate the performance of the proposed method.
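
A stripped-down sketch of the forward search loop for a plain fixed-effect meta-analysis of study estimates y_i with variances v_i; the procedure described above is applied to meta-regression and random-effects models and monitors several diagnostics, so this only illustrates the ordering mechanism. Function names, the starting-set rule, and the simulated data are assumptions.

import numpy as np

def fixed_effect(y, v):
    """Inverse-variance weighted pooled estimate and its variance."""
    w = 1.0 / v
    return np.sum(w * y) / np.sum(w), 1.0 / np.sum(w)

def forward_search(y, v, m0=5):
    """Start from the m0 studies closest to the median and add the closest
    remaining study at each step, recording the pooled estimate as it evolves."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    order = np.argsort(np.abs(y - np.median(y)))
    inset = list(order[:m0])                       # initial, likely outlier-free set
    path = []
    while len(inset) < len(y):
        mu, var_mu = fixed_effect(y[inset], v[inset])
        outside = [i for i in range(len(y)) if i not in inset]
        # standardized distance of each remaining study from the current fit
        d = np.abs(y[outside] - mu) / np.sqrt(v[outside] + var_mu)
        nxt = outside[int(np.argmin(d))]
        path.append((nxt, float(d.min()), mu))
        inset.append(nxt)
    return path   # sharp jumps in the recorded distances flag potential outliers

# illustrative data: 12 homogeneous studies plus 2 outlying ones
rng = np.random.default_rng(7)
v = rng.uniform(0.02, 0.1, 14)
y = rng.normal(0.3, np.sqrt(v))
y[-2:] += 1.5
for study, dist, pooled in forward_search(y, v):
    print(f"added study {study:2d}  distance {dist:5.2f}  pooled {pooled:5.2f}")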

Relevance: 20.00%

Abstract:

Coronary atherosclerosis has been considered a chronic disease characterized by ongoing progression in response to systemic risk factors and local pro-atherogenic stimuli. As our understanding of the pathobiological mechanisms implicated in atherogenesis and plaque progression is evolving, effective treatment strategies have been developed that have led to a substantial reduction in the clinical manifestations and acute complications of coronary atherosclerotic disease. More recently, intracoronary imaging modalities have enabled detailed in vivo quantification and characterization of coronary atherosclerotic plaque, serial evaluation of atherosclerotic changes over time, and assessment of vascular responses to effective anti-atherosclerotic medications. The use of intracoronary imaging modalities has demonstrated that intensive lipid lowering can halt plaque progression and may even result in regression of coronary atheroma when the highest doses of the most potent statins are used. While current evidence indicates the feasibility of atheroma regression and of reversal of presumed high-risk plaque characteristics in response to intensive anti-atherosclerotic therapies, these changes in plaque size and composition are modest, and their clinical implications remain largely elusive. Growing interest has focused on achieving more pronounced regression of coronary plaque using novel anti-atherosclerotic medications and, more importantly, on elucidating how favorable changes in plaque anatomy can be translated into more favorable clinical outcomes for patients.