921 results for Hyper-Random
Abstract:
Stochastic models for three-dimensional particles have many applications in the applied sciences. Lévy-based particle models are a flexible approach to particle modelling. The structure of the random particles is given by a kernel smoothing of a Lévy basis. The models are easy to simulate, but statistical inference procedures have not yet received much attention in the literature. The kernel is not always identifiable, and we suggest one approach to remedy this problem. We propose a method to draw inference about the kernel from data often used in local stereology and study the performance of our approach in a simulation study.
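A minimal sketch, not from the paper, of what a Lévy-based particle model can look like in the simplest planar case: the particle's radial function is a kernel smoothing of a compound Poisson (Lévy) basis on the circle. The kernel shape, bandwidth, and intensity below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def simulate_particle(n_theta=360, intensity=20.0, bandwidth=0.3, base_radius=1.0):
    # Compound Poisson (Lévy) basis on the circle: Poisson-many atoms with Gamma marks.
    n_atoms = rng.poisson(intensity)
    atom_angles = rng.uniform(0.0, 2.0 * np.pi, n_atoms)
    atom_marks = rng.gamma(shape=2.0, scale=0.05, size=n_atoms)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    # Kernel smoothing of the basis (wrapped-Gaussian kernel) gives the radial function.
    d = np.angle(np.exp(1j * (theta[:, None] - atom_angles[None, :])))  # circular distances
    radius = base_radius + np.exp(-0.5 * (d / bandwidth) ** 2) @ atom_marks
    return theta, radius

theta, r = simulate_particle()
x, y = r * np.cos(theta), r * np.sin(theta)  # boundary of one simulated random particle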
Abstract:
We prove large deviation results for sums of heavy-tailed random elements in rather general convex cones, i.e., semigroups equipped with a rescaling operation by positive real numbers. In contrast to previous results for the cone of convex sets, our technique does not use the embedding of cones in linear spaces. Examples include the cone of convex sets with the Minkowski addition, the positive half-line with the maximum operation, and the family of square integrable functions with arithmetic addition and argument rescaling.
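To make the setting concrete, here is a hedged LaTeX sketch of the three example cones mentioned above; the exact form of the argument rescaling in the third example is one natural reading, not a quotation from the paper.

\[
\begin{aligned}
&\text{(i)}\ \ K=\{\text{convex compact } A\subset\mathbb{R}^d\}, && A+B=\{a+b : a\in A,\ b\in B\}, && cA=\{ca : a\in A\};\\
&\text{(ii)}\ \ K=[0,\infty), && x\oplus y=\max(x,y), && c\cdot x=cx;\\
&\text{(iii)}\ \ K=L^2, && (f+g)(t)=f(t)+g(t), && (c\circ f)(t)=f(t/c).
\end{aligned}
\]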
Abstract:
In this paper, we propose a fully automatic, robust approach for segmenting the proximal femur in conventional X-ray images. Our method is based on hierarchical landmark detection by random forest regression, where the detection results of 22 global landmarks are used to perform spatial normalization, and the detection results of 59 local landmarks serve as the image cue for instantiating a statistical shape model of the proximal femur. To detect landmarks at both levels, we use multi-resolution HoG (Histogram of Oriented Gradients) features, which improve accuracy and robustness. The efficacy of the present method is demonstrated by experiments conducted on 150 clinical X-ray images. It was found that the present method could achieve an average point-to-curve error of 2.0 mm and that it was robust to low image contrast, noise, and occlusions caused by implants.
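A hedged sketch, not the authors' code, of the core building block: regressing a landmark's displacement from multi-resolution HoG features of an image patch, using scikit-image and scikit-learn. Patch size, scales, and forest size are illustrative, and the random arrays stand in for real X-ray patches and annotations.

import numpy as np
from skimage.feature import hog
from skimage.transform import rescale
from sklearn.ensemble import RandomForestRegressor

def multires_hog(patch, scales=(1.0, 0.5, 0.25)):
    # Concatenate HoG descriptors computed at several resolutions of the patch.
    feats = []
    for s in scales:
        img = rescale(patch, s, anti_aliasing=True)
        feats.append(hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2), feature_vector=True))
    return np.concatenate(feats)

# Hypothetical training data: patches sampled around a landmark, with the
# true (dx, dy) offset from patch centre to landmark as regression target.
rng = np.random.default_rng(0)
patches = rng.random((200, 64, 64))        # stand-in for real X-ray patches
offsets = rng.normal(0.0, 5.0, (200, 2))   # stand-in for true displacements

X = np.array([multires_hog(p) for p in patches])
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, offsets)
pred_dx_dy = forest.predict(X[:1])         # one vote for the landmark position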
Abstract:
Knowledge of landmarks and contours in anteroposterior (AP) pelvis X-rays is invaluable for computer-aided diagnosis, hip surgery planning, and image-guided interventions. This paper presents a fully automatic and robust approach for landmarking and segmentation of both the pelvis and the femur in a conventional AP X-ray. Our approach is based on random forest regression and hierarchical sparse shape composition. Experiments conducted on 436 clinical AP pelvis X-rays show that our approach achieves an average point-to-curve error of around 1.3 mm for the femur and 2.2 mm for the pelvis, both with success rates around 98%. Compared with existing methods, our approach exhibits better performance in both robustness and accuracy.
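Sparse shape composition is commonly formulated as approximating an input shape by a sparse combination of training shapes plus a sparse error term; below is a simplified sketch under that reading (dropping the error term and the alignment step, with made-up data) using an L1-penalized fit.

import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical shape repository: each row is a training shape, i.e. the
# concatenated (x, y) coordinates of its 30 contour landmarks.
rng = np.random.default_rng(1)
D = rng.random((50, 2 * 30))

# Detected landmarks for a new image (possibly noisy or partially wrong).
y = D[:5].mean(axis=0) + rng.normal(0.0, 0.01, D.shape[1])

# Sparse shape composition, simplified: approximate the input shape by a
# sparse nonnegative combination of training shapes.
lasso = Lasso(alpha=1e-3, positive=True, max_iter=10000).fit(D.T, y)
refined_shape = D.T @ lasso.coef_   # regularized, anatomically plausible estimate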
Abstract:
Perceptual learning is a training-induced improvement in performance. Mechanisms underlying the perceptual learning of depth discrimination in dynamic random-dot stereograms were examined by assessing stereothresholds as a function of decorrelation. The inflection point of the decorrelation function was defined as the level of decorrelation corresponding to 1.4 times the threshold at 0% decorrelation. In general, stereothresholds increased with increasing decorrelation. Following training, stereothresholds and standard errors of measurement decreased systematically for all tested decorrelation values. Post-training decorrelation functions were reduced by a multiplicative constant (approximately 5), exhibiting changes in stereothresholds without changes in the inflection points. Disparity energy model simulations indicate that a post-training reduction in neuronal noise is sufficient to account for the perceptual learning effects. In two subjects, learning effects were retained over a period of six months, which may have application for training stereo-deficient subjects.
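A small sketch of the inflection-point definition used above, on made-up threshold data. Note that dividing the whole curve by a multiplicative constant rescales the 1.4x target by the same factor, so the inflection point is unchanged, consistent with what the abstract reports.

import numpy as np

# Hypothetical stereothresholds (arcsec) as a function of decorrelation (%).
decorrelation = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
threshold = np.array([20.0, 21.0, 24.0, 30.0, 42.0, 65.0, 110.0])

def inflection_point(decorr, thresh):
    # Decorrelation level where the threshold reaches 1.4x its value at 0%.
    return np.interp(1.4 * thresh[0], thresh, decorr)  # thresh must be increasing

print(inflection_point(decorrelation, threshold))        # pre-training
print(inflection_point(decorrelation, threshold / 5.0))  # post-training: same point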
Abstract:
Pancreatic cancer is one of the most lethal types of cancer due to its high metastasis rate and resistance to chemotherapy. Pancreatic fibrosis is a constant pathological feature of chronic pancreatitis and of the hyperactive stroma associated with pancreatic cancer. Strong evidence supports an important role of cyclooxygenase-2 (COX-2) and COX-2-generated prostaglandin E2 (PGE2) during pancreatic fibrosis. Pancreatic stellate cells (PSC) are the predominant source of extracellular matrix (ECM) production, thus being the key players in both diseases. Given this background, the primary objective is to delineate the role of PGE2 in the hyperactivation of human pancreatic stellate cells (PSC) associated with pancreatic cancer. This study showed that human PSC express COX-2 and synthesize high levels of PGE2. PGE2 stimulated PSC migration and invasion, as well as the expression of ECM genes and tissue-degrading matrix metalloproteinase (MMP) genes. I further identified the PGE2 EP receptor responsible for mediating these effects on PSC. Using genetic and pharmacological approaches, I identified the receptor required for PGE2-mediated PSC hyperactivation. Treating PSC with specific antagonists against EP1, EP2, and EP4 demonstrated that blocking the EP4 receptor alone resulted in a complete reduction of PGE2-mediated PSC activation. Furthermore, siRNA-mediated silencing of EP4, but not of the other EP receptors, blocked the effects of PGE2 on PSC fibrogenic activity. Further examination of the downstream pathway modulators revealed that PGE2 stimulation of PSC involved the CREB pathway and not the AKT pathway. The regulation of PSC by PGE2 was further investigated at the molecular level, with a focus on COL1A1. Collagen I deposition by PSC is one of the most important events in pancreatic cancer. I found that PGE2 regulates PSC through activation of COL1A1 expression and transcriptional activity. Downstream of PGE2, silencing of the EP4 receptor caused a complete reduction of COL1A1 expression and activity, supporting the role of EP4-mediated stimulation of PSC. Taken together, these data indicate that PGE2 regulates PSC via EP4 and suggest that EP4 may be a promising therapeutic target for reducing the extensive stromal reaction in pancreatic cancer, possibly in combination with chemotherapeutic drugs to further kill pancreatic cancer cells.
Abstract:
The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs often labeled nested, hierarchical, or multilevel, and are characterized by the randomization of intact social units or groups, rather than individuals. The application of random effects models to group-randomized trials requires the specification of fixed and random components of the model. The underlying assumption is usually that these random components are normally distributed. This research is intended to determine whether the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated. In this study, simulated data are used to examine the Type I error rate, power, bias, and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect possesses distributions with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with various characteristics (e.g. number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with random effects specified in a random statement and restricted maximum likelihood (REML) estimation specified. The results from the non-normally distributed data are compared to the results obtained from the analysis of data with similar design characteristics but normally distributed random effects. The results suggest that violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, Type I error, or power. Negative biases were detected when estimating the sample ICC and dramatically increased in magnitude as the true ICC increased. These biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e. 0.00 to 0.05). The normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
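The study used SAS PROC MIXED with REML; a rough Python analogue of one simulation replicate, using statsmodels' MixedLM, is sketched below. The sample sizes, ICC, and the centred chi-square group effect are illustrative choices, not the study's exact design.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def simulate_trial(n_groups=20, n_per_group=50, icc=0.05, skewed=True):
    # Variance components chosen so that sigma_b^2 / (sigma_b^2 + sigma_e^2) = icc.
    sigma_b2 = icc / (1.0 - icc)
    if skewed:
        # Skewed group effect: centred, rescaled chi-square(1) with variance sigma_b2.
        b = (rng.chisquare(1, n_groups) - 1.0) * np.sqrt(sigma_b2 / 2.0)
    else:
        b = rng.normal(0.0, np.sqrt(sigma_b2), n_groups)
    groups = np.repeat(np.arange(n_groups), n_per_group)
    condition = (groups % 2).astype(float)  # half the groups in each arm
    # Null treatment effect, so any rejection of the condition effect is a Type I error.
    y = 0.0 * condition + b[groups] + rng.normal(0.0, 1.0, groups.size)
    return y, condition, groups

y, condition, groups = simulate_trial()
X = sm.add_constant(condition)
fit = sm.MixedLM(y, X, groups=groups).fit(reml=True)  # REML, as in the study
print(fit.pvalues)  # p-value for the (null) condition effect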
Abstract:
OBJECTIVE To determine the incidence of hypo- and hypercapnia in a European cohort of ventilated newborn infants. DESIGN AND SETTING Two-point cross-sectional prospective study in 173 European neonatal intensive care units. PATIENTS AND METHODS Patient characteristics, ventilator settings and measurements, and blood gas analyses were collected for endotracheally ventilated newborn infants on two separate dates. RESULTS A total of 1569 blood gas analyses were performed in 508 included patients, with a mean±SD Pco2 of 48±12 mm Hg or 6.4±1.6 kPa (range 17-104 mm Hg or 2.3-13.9 kPa). Hypocapnia (Pco2 <30 mm Hg or 4 kPa) and hypercapnia (Pco2 >52 mm Hg or 7 kPa) were present in 69 (4%) and 492 (31%) of the blood gases, respectively. Hypocapnia was most common in the first 3 days of life (7.3%) and hypercapnia after the first week of life (42.6%). Pco2 was significantly higher in preterm infants (49 mm Hg or 6.5 kPa) than in term infants (43 mm Hg or 5.7 kPa), and significantly lower during pressure-limited ventilation (47 mm Hg or 6.3±1.6 kPa) compared with volume-targeted ventilation (51 mm Hg or 6.8±1.7 kPa) and high-frequency ventilation (50 mm Hg or 6.7±1.7 kPa). CONCLUSIONS This study shows that hypocapnia is a relatively uncommon finding during neonatal ventilation. The higher incidence of hypercapnia may suggest that permissive hypercapnia has found its way into daily clinical practice.
Abstract:
In recent years, the econometrics literature has shown a growing interest in the study of partially identified models, in which the object of economic and statistical interest is a set rather than a point. The characterization of this set and the development of consistent estimators and inference procedures for it with desirable properties are the main goals of partial identification analysis. This review introduces the fundamental tools of the theory of random sets, which brings together elements of topology, convex geometry, and probability theory to develop a coherent mathematical framework to analyze random elements whose realizations are sets. It then elucidates how these tools have been fruitfully applied in econometrics to reach the goals of partial identification analysis.
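A canonical textbook illustration of the kind of object this theory handles (a standard example, not necessarily the review's own): with interval-valued outcome data, the mean is partially identified, and the identified set is the selection (Aumann) expectation of a random interval,

\[
Y \in [Y_L, Y_U] \ \text{a.s.}
\quad\Longrightarrow\quad
\mathbb{E}[Y] \in \mathbb{E}\big[[Y_L, Y_U]\big] = \big[\mathbb{E}[Y_L],\ \mathbb{E}[Y_U]\big],
\]

and this interval is sharp: every point in it is attained by some random variable consistent with the observed intervals.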
Abstract:
This article proposes computing sensitivities of upper tail probabilities of random sums by the saddlepoint approximation. The considered sensitivity is the derivative of the upper tail probability with respect to the parameter of the summation index distribution. Random sums with Poisson- or Geometric-distributed summation indices and Gamma- or Weibull-distributed summands are considered. The score method with importance sampling is considered as an alternative approximation. Numerical studies show that both the saddlepoint approximation and the score method with importance sampling are very accurate, but the saddlepoint approximation is substantially faster. Thus, the suggested saddlepoint approximation can be conveniently used in various scientific problems.
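A hedged sketch of the quantity being approximated, for the Poisson-index, Gamma-summand case: the Lugannani-Rice saddlepoint approximation to P(S > x) for a compound Poisson sum, with the sensitivity in the Poisson rate taken here by finite differences for illustration (the article derives it analytically within the saddlepoint framework). All parameter values are made up.

import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Compound Poisson(lam) sum of Gamma(shape a, scale th) summands.
def K(s, lam, a=2.0, th=1.0):   # cumulant generating function, valid for s < 1/th
    return lam * ((1.0 - th * s) ** (-a) - 1.0)

def K1(s, lam, a=2.0, th=1.0):  # K'(s)
    return lam * a * th * (1.0 - th * s) ** (-a - 1.0)

def K2(s, lam, a=2.0, th=1.0):  # K''(s)
    return lam * a * (a + 1.0) * th ** 2 * (1.0 - th * s) ** (-a - 2.0)

def tail_prob(x, lam, a=2.0, th=1.0):
    # Lugannani-Rice saddlepoint approximation to P(S > x), x above the mean.
    s_hat = brentq(lambda s: K1(s, lam, a, th) - x, -50.0, 1.0 / th - 1e-10)
    w = np.sign(s_hat) * np.sqrt(2.0 * (s_hat * x - K(s_hat, lam, a, th)))
    u = s_hat * np.sqrt(K2(s_hat, lam, a, th))
    return 1.0 - norm.cdf(w) + norm.pdf(w) * (1.0 / u - 1.0 / w)

# Sensitivity d/d(lam) of P(S > x), here by central finite differences.
lam, x, h = 5.0, 20.0, 1e-5
sens = (tail_prob(x, lam + h) - tail_prob(x, lam - h)) / (2.0 * h)
print(tail_prob(x, lam), sens)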
Abstract:
Over the last decade, a plethora of computer-aided diagnosis (CAD) systems have been proposed with the aim of improving the accuracy of physicians in the diagnosis of interstitial lung diseases (ILD). In this study, we propose a scheme for the classification of HRCT image patches with ILD abnormalities as a basic component towards the quantification of the various ILD patterns in the lung. The feature extraction method relies on local spectral analysis using a DCT-based filter bank. After convolving the image with the filter bank, q-quantiles are computed to describe the distribution of the local frequencies that characterize image texture. Then, the gray-level histogram values of the original image are added, forming the final feature vector. The classification of the described patches is performed by a random forest (RF) classifier. The experimental results demonstrate the superior performance and efficiency of the proposed approach compared with the state of the art.
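A hedged sketch of such a pipeline, not the authors' implementation: a bank of 2D DCT-II basis kernels serves as local frequency filters, q-quantiles of the absolute filter responses plus a gray-level histogram form the feature vector, and a random forest classifies the patch. Kernel size, quantile set, histogram bins, and the random stand-in data are illustrative assumptions.

import numpy as np
from scipy.signal import fftconvolve
from sklearn.ensemble import RandomForestClassifier

def dct_kernels(n=4):
    # 2D DCT-II basis functions used as a bank of n*n local frequency filters.
    i = np.arange(n)
    P = np.cos(np.pi * (i[None, :] + 0.5) * i[:, None] / n)  # P[u, i]
    return [np.outer(P[u], P[v]) for u in range(n) for v in range(n)]

def patch_features(patch, n=4, quantiles=(0.1, 0.25, 0.5, 0.75, 0.9)):
    feats = []
    for kern in dct_kernels(n):
        resp = fftconvolve(patch, kern, mode='valid')
        # q-quantiles summarize the distribution of local frequency responses.
        feats.extend(np.quantile(np.abs(resp), quantiles))
    # Gray-level histogram of the original patch completes the feature vector.
    hist, _ = np.histogram(patch, bins=16, range=(0.0, 1.0))
    feats.extend(hist / patch.size)
    return np.asarray(feats)

# Hypothetical labelled patches (0 = healthy, 1 = ILD pattern); real data would
# be annotated HRCT patches.
rng = np.random.default_rng(0)
patches = rng.random((100, 32, 32))
labels = rng.integers(0, 2, 100)

X = np.array([patch_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)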