932 results for Random-set theory
Abstract:
In many applications the observed data can be viewed as a censored high-dimensional full-data random variable X. By the curse of dimensionality it is typically not possible to construct estimators that are asymptotically efficient at every probability distribution in a semiparametric censored data model of such a high-dimensional censored data structure. We provide a general method for constructing one-step estimators that are efficient at a chosen submodel of the full-data model, are still well behaved off this submodel, and can be chosen to always improve on a given initial estimator. These one-step estimators rely on good estimators of the censoring mechanism and thus require a parametric or semiparametric model for the censoring mechanism. We present a general theorem that provides a template for proving the desired asymptotic results. We illustrate the general one-step estimation methods by constructing locally efficient one-step estimators of marginal distributions and regression parameters with right-censored data, current status data, and bivariate right-censored data, in all models allowing the presence of time-dependent covariates. The conditions of the asymptotic theorem are rigorously verified in one of the examples, and the key condition of the general theorem is verified for all examples.
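A minimal sketch of the generic one-step update described above, assuming a user-supplied initial estimator and an estimate of the (efficient) influence curve; the names `initial_estimate` and `influence_curve` are illustrative placeholders, not objects from the paper:

```python
import numpy as np

def one_step_estimate(data, initial_estimate, influence_curve):
    """Generic one-step update: the empirical mean of an estimated (efficient)
    influence curve, evaluated at the initial estimate, is added to that
    initial estimate as a first-order bias correction.  Illustrative sketch only."""
    theta0 = initial_estimate(data)                               # possibly inefficient plug-in estimator
    correction = np.mean([influence_curve(x, theta0) for x in data])
    return theta0 + correction

# Toy usage: for the mean with influence curve IC(x, theta) = x - theta,
# the one-step update of any initial estimator recovers the sample mean.
data = np.random.exponential(scale=2.0, size=500)
theta1 = one_step_estimate(data,
                           initial_estimate=np.median,            # crude initial estimator
                           influence_curve=lambda x, t: x - t)
```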
Abstract:
DNA sequence copy number has been shown to be associated with cancer development and progression. Array-based Comparative Genomic Hybridization (aCGH) is a recent development that seeks to identify the copy number ratio at large numbers of markers across the genome. Due to experimental and biological variations across chromosomes and across hybridizations, current methods are limited to analyses of single chromosomes. We propose a more powerful approach that borrows strength across chromosomes and across hybridizations. We assume a Gaussian mixture model, with a hidden Markov dependence structure, and with random effects to allow for intertumoral variation, as well as intratumoral clonal variation. For ease of computation, we base estimation on a pseudolikelihood function. The method produces quantitative assessments of the likelihood of genetic alterations at each clone, along with a graphical display for simple visual interpretation. We assess the characteristics of the method through simulation studies and through analysis of a brain tumor aCGH data set. We show that the pseudolikelihood approach is superior to existing methods both in detecting small regions of copy number alteration and in accurately classifying regions of change when intratumoral clonal variation is present.
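As a rough illustration of the hidden-Markov machinery underlying such segmentation (three states standing in for loss, neutral, and gain), here is a generic Gaussian-emission Viterbi decoder; this is a textbook sketch, not the authors' pseudolikelihood or random-effects model, and all parameter values are illustrative:

```python
import numpy as np
from scipy.stats import norm

def viterbi_gaussian_hmm(y, means, sds, trans, init):
    """Most likely state path for a Gaussian-emission HMM (log-space Viterbi).
    Generic segmentation of log2 copy-number ratios; illustrative only."""
    n, k = len(y), len(means)
    log_emit = norm.logpdf(y[:, None], loc=means, scale=sds)     # (n, k) emission log-densities
    log_trans, log_init = np.log(trans), np.log(init)
    delta = np.zeros((n, k)); psi = np.zeros((n, k), dtype=int)
    delta[0] = log_init + log_emit[0]
    for t in range(1, n):
        scores = delta[t - 1][:, None] + log_trans               # score of each previous->current transition
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[t]
    path = np.empty(n, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(n - 2, -1, -1):                               # backtrack the best path
        path[t] = psi[t + 1, path[t + 1]]
    return path                                                  # 0 = loss, 1 = neutral, 2 = gain

# Toy usage on simulated log-ratios: 50 neutral clones followed by 20 gained clones
y = np.r_[np.random.normal(0.0, 0.2, 50), np.random.normal(0.6, 0.2, 20)]
states = viterbi_gaussian_hmm(y,
                              means=np.array([-0.5, 0.0, 0.6]),
                              sds=np.array([0.2, 0.2, 0.2]),
                              trans=np.full((3, 3), 0.01) + np.eye(3) * 0.97,
                              init=np.array([0.1, 0.8, 0.1]))
```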
Abstract:
This book will serve as a foundation for a variety of useful applications of graph theory to computer vision, pattern recognition, and related areas. It covers a representative set of novel graph-theoretic methods for complex computer vision and pattern recognition tasks. The first part of the book presents applications of graph theory to low-level processing of digital images, such as a new method for partitioning a given image into a hierarchy of homogeneous areas using graph pyramids, and a study of the relationship between graph theory and digital topology. Part II presents graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, including a survey of graph-based methodologies for pattern recognition and computer vision, a series of computationally efficient algorithms for testing graph isomorphism and related graph matching tasks in pattern recognition, and a new graph distance measure for solving graph matching problems. Finally, Part III provides detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks, including a critical review of the main graph-based and structural methods for fingerprint classification, a new method for visualizing time series of graphs, and potential applications in computer network monitoring and abnormal event detection.
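For readers unfamiliar with the graph-matching primitives the book surveys, a minimal sketch using networkx (illustrative only; the book presents its own, more efficient algorithms):

```python
import networkx as nx

# Two small graphs standing in for structural pattern representations
g1 = nx.cycle_graph(5)
g2 = nx.relabel_nodes(nx.cycle_graph(5), {i: chr(97 + i) for i in range(5)})
g3 = nx.path_graph(5)

print(nx.is_isomorphic(g1, g2))          # True: same structure, different node labels
print(nx.is_isomorphic(g1, g3))          # False: a 5-cycle is not a 5-path

# A graph distance usable for inexact matching: minimum number of edit operations
print(nx.graph_edit_distance(g1, g3))    # small positive value (one edge deletion)
```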
Abstract:
This doctoral thesis presents experimental results, together with a synthesis of computational/theoretical results, toward the development of a reliable heat transfer correlation for a specific annular condensation flow regime inside a vertical tube. For fully condensing flows of pure vapor (FC-72) inside a vertical cylindrical tube of 6.6 mm diameter and 0.7 m length, the experimental measurements are shown to yield values of the average heat transfer coefficient and the approximate length of full condensation. The experimental conditions cover: mass flux G over a range of 2.9 kg/m²·s ≤ G ≤ 87.7 kg/m²·s, temperature difference ∆T (saturation temperature at the inlet pressure minus the mean condensing surface temperature) of 5 °C to 45 °C, and cases for which the length of full condensation xFC is in the range 0 < xFC < 0.7 m. The range of flow conditions over which there is good agreement (within 15%) with the theory and its modeling assumptions has been identified. Additionally, the ranges of flow conditions for which there are significant discrepancies (between 15% and 30%, and greater than 30%) with the theory have also been identified. The thesis also reports a brief set of key experimental results on the sensitivity of the flow to time-varying or quasi-steady (i.e., steady in the mean) impositions of pressure at both the inlet and the outlet. The experimental results support the updated theoretical/computational finding that gravity-dominated condensing flows do not allow such elliptic impositions.
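For orientation, the average heat transfer coefficient quoted above is conventionally defined from the total heat removed, the condensing surface area, and the stated temperature difference (a standard definition, not a formula quoted from the thesis):

$$\bar{h} \equiv \frac{\dot{Q}}{A\,\Delta T}, \qquad \Delta T = T_{\mathrm{sat}}(p_{\mathrm{in}}) - \bar{T}_{\mathrm{wall}},$$

where $\dot{Q}$ is the heat rejected over the condensing length and $A$ is the inner surface area of the tube.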
Abstract:
Amorphous carbon has been investigated for a long time. Because its carbon atoms are randomly oriented, its density depends on the position of each carbon atom. Knowing the density of amorphous carbon is important for modeling advanced carbon materials in the future. Two methods were used to create the initial structures of amorphous carbon. One is the random placement method, in which 100 carbon atoms are randomly placed in a cubic lattice. The other is the liquid-quench method, in which a reactive force field (ReaxFF) is used to rapidly cool a system of 100 carbon atoms from the melting temperature. Density functional theory (DFT) was then used to refine the position of each carbon atom and the dimensions of the boundaries so as to minimize the ground-state energy of the structure. The average densities of the amorphous carbon structures created by the random placement method and the liquid-quench method are 2.59 and 2.44 g/cm³, respectively. Both densities are in good agreement with previous work. In addition, the final structure of amorphous carbon generated by the liquid-quench method has the lower energy.
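A minimal sketch of the random-placement step and the resulting mass density; the box size and minimum-distance check below are illustrative assumptions, not values from the study:

```python
import numpy as np

M_C = 12.011 * 1.66054e-24          # mass of one carbon atom in grams
N_ATOMS = 100

def random_placement(n_atoms, box_angstrom, min_dist=1.2):
    """Place atoms uniformly at random in a cubic box, rejecting overlaps
    closer than min_dist (in angstroms).  Illustrative initial structure only."""
    coords = []
    while len(coords) < n_atoms:
        trial = np.random.uniform(0.0, box_angstrom, size=3)
        if all(np.linalg.norm(trial - c) > min_dist for c in coords):
            coords.append(trial)
    return np.array(coords)

def density_g_cm3(n_atoms, box_angstrom):
    volume_cm3 = (box_angstrom * 1e-8) ** 3      # angstroms -> cm
    return n_atoms * M_C / volume_cm3

box = 9.3                                        # angstroms, chosen so the density is ~2.5 g/cm3 (illustrative)
atoms = random_placement(N_ATOMS, box)
print(f"{density_g_cm3(N_ATOMS, box):.2f} g/cm^3")
```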
Abstract:
Many methodologies dealing with prediction or simulation of soft tissue deformations on medical image data require preprocessing of the data in order to produce a different shape representation that complies with standard methodologies such as mass–spring networks or the finite element method (FEM). On the other hand, methodologies working directly in the image space normally do not take into account the mechanical behavior of tissues and tend to lack the physical foundations that drive soft tissue deformations. This chapter presents a method to simulate soft tissue deformations based on coupled concepts from image analysis and mechanics theory. The proposed methodology is based on a robust stochastic approach that takes into account material properties retrieved directly from the image, concepts from continuum mechanics, and FEM. The optimization framework is solved within a hierarchical Markov random field (HMRF), which is implemented on the graphics processing unit (GPU).
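To make the MRF idea concrete, here is a generic (non-hierarchical, CPU-only) sketch of iterated conditional modes over a pixel label field, with a data term plus a smoothness term; this is a textbook MRF optimizer on the image space, not the chapter's HMRF/GPU implementation:

```python
import numpy as np

def icm_mrf(image, means, beta=1.0, n_iter=5):
    """Iterated conditional modes for a simple MRF labelling:
    energy = (pixel - class mean)^2 + beta * (# of disagreeing 4-neighbours)."""
    labels = np.abs(image[..., None] - means).argmin(axis=-1)    # initial labelling by nearest class mean
    h, w = image.shape
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                best, best_e = labels[i, j], np.inf
                for k in range(len(means)):
                    data = (image[i, j] - means[k]) ** 2
                    smooth = sum(labels[a, b] != k
                                 for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                                 if 0 <= a < h and 0 <= b < w)
                    e = data + beta * smooth
                    if e < best_e:
                        best, best_e = k, e
                labels[i, j] = best
    return labels

# Toy usage: segment a noisy two-region image into two labels
img = np.r_[np.zeros((8, 16)), np.ones((8, 16))] + np.random.normal(0, 0.3, (16, 16))
seg = icm_mrf(img, means=np.array([0.0, 1.0]), beta=0.8)
```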
Abstract:
A model of theoretical science is set forth to guide the formulation of general theories around abstract concepts and processes. Such theories permit explanatory application to many phenomena that are not ostensibly alike, and in so doing encompass socially disapproved violence, making special theories of violence unnecessary. Though none is completely adequate for the explanatory job, at least seven examples of general theories that help account for deviance make up the contemporary theoretical repertoire. From them, we can identify abstractions built around features of offenses, aspects of individuals, the nature of social relationships, and different social processes. Although further development of general theories may be hampered by potential indeterminacy of the subject matter and by the possibility of human agency, maneuvers to deal with such obstacles are available.
Abstract:
In this note, we show that an extension of a test for perfect ranking in a balanced ranked set sample given by Li and Balakrishnan (2008) to the multi-cycle case turns out to be equivalent to the test statistic proposed by Frey et al. (2007). This provides an alternative interpretation and motivation for their test statistic.
Abstract:
Stochastic models for three-dimensional particles have many applications in the applied sciences. Lévy-based particle models are a flexible approach to particle modelling: the structure of the random particles is given by a kernel smoothing of a Lévy basis. The models are easy to simulate, but statistical inference procedures have not yet received much attention in the literature. The kernel is not always identifiable, and we suggest one approach to remedy this problem. We propose a method to draw inference about the kernel from data of the kind often used in local stereology, and we study the performance of our approach in a simulation study.
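As a crude illustration of "kernel smoothing of a Lévy basis" in its simplest compound-Poisson form, the sketch below builds a 2D star-shaped particle whose radial function is a kernel-smoothed jump field; this is a simplified stand-in, not the paper's 3D model, and all parameter values are illustrative:

```python
import numpy as np

def simulate_particle_radius(n_angles=360, base_radius=1.0, jump_rate=20,
                             jump_scale=0.1, bandwidth=0.3, seed=0):
    """Radial function r(theta) = base + sum_j k(theta - theta_j) * Z_j, where
    the (theta_j, Z_j) are Poisson points on the circle with exponential jump
    sizes and k is a wrapped Gaussian kernel (compound-Poisson special case)."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    n_jumps = rng.poisson(jump_rate)
    jump_angles = rng.uniform(0.0, 2 * np.pi, n_jumps)
    jump_sizes = rng.exponential(jump_scale, n_jumps)
    d = np.abs(theta[:, None] - jump_angles[None, :])   # circular distance to each jump
    d = np.minimum(d, 2 * np.pi - d)
    kernel = np.exp(-0.5 * (d / bandwidth) ** 2)
    return theta, base_radius + kernel @ jump_sizes

theta, r = simulate_particle_radius()   # boundary points: (r*cos(theta), r*sin(theta))
```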
Abstract:
We consider a flux formulation of Double Field Theory in which fluxes are dynamical and field-dependent. Gauge consistency imposes a set of quadratic constraints on the dynamical fluxes, which can be solved by truly double configurations. The constraints are related to generalized Bianchi identities for (non-)geometric fluxes in the double space, sourced by (exotic) branes. Following previous constructions, we then obtain generalized connections, torsion, and curvatures compatible with the consistency conditions. The strong-constraint-violating terms needed to make contact with gauged supergravities containing duality orbits of non-geometric fluxes arise systematically in this formulation.
Abstract:
Although prior research on new venture creation has identified several antecedents that differentiate entrepreneurs from non-entrepreneurs, scholars still have an incomplete understanding of the factors and decision processes that lead an individual to become an entrepreneur. Applying prospect theory, we introduce the reference point as an important antecedent of new venture creation. Testing our research model and hypotheses with samples of entrepreneurs and employees, we find that entrepreneurs set more aspiring reference points and therefore more often find themselves in a perceived loss situation. The results are also robust when we test for the entrepreneurial intention of business graduate students. According to prospect theory, the perceived loss triggers more risk-seeking behavior. In sum, the reference point has a positive effect on new venture creation and differentiates entrepreneurs from non-entrepreneurs. We discuss theoretical and managerial implications of the findings and develop avenues for future research.
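The risk-seeking prediction invoked above follows from the shape of the standard Kahneman–Tversky value function, which is steeper and convex in the loss domain relative to the reference point (the standard formulation, not a formula taken from this study):

$$v(x) = \begin{cases} x^{\alpha}, & x \ge 0 \\ -\lambda\,(-x)^{\beta}, & x < 0 \end{cases} \qquad 0 < \alpha, \beta < 1, \quad \lambda > 1,$$

where $x$ is the outcome measured relative to the reference point; the convexity of $v$ for $x < 0$ is what makes decision makers in a perceived loss position more risk seeking.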
Abstract:
Many techniques based on data drawn by the Ranked Set Sampling (RSS) scheme assume that the ranking of observations is perfect. It is therefore essential to develop methods for testing this assumption. In this article, we propose a parametric, location–scale-free test for assessing the assumption of perfect ranking. The results of a simulation study in the two special cases of the normal and exponential distributions indicate that the proposed test performs well in comparison with its leading competitors.
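A minimal sketch of how a balanced ranked set sample arises, under either perfect ranking or noisy (concomitant-based) ranking, which is the kind of data such tests operate on; the set size and cycle count are illustrative choices:

```python
import numpy as np

def ranked_set_sample(rng, set_size=3, n_cycles=10, ranking_noise=0.0):
    """Balanced RSS: in each cycle, for rank r draw `set_size` units, rank them
    (perfectly if ranking_noise == 0, otherwise by a noisy judgment variable),
    and measure only the unit judged to have rank r.  Illustrative sketch only."""
    sample = []
    for _ in range(n_cycles):
        for r in range(set_size):
            units = rng.normal(size=set_size)                         # a set of unmeasured units
            judged = units + rng.normal(0.0, ranking_noise, set_size) # judgment ranking variable
            chosen = units[np.argsort(judged)[r]]                     # unit with judged rank r+1
            sample.append((r + 1, chosen))
    return sample                                                     # (judged rank, measured value) pairs

rng = np.random.default_rng(1)
perfect = ranked_set_sample(rng)                          # perfect ranking
imperfect = ranked_set_sample(rng, ranking_noise=1.0)     # ranking errors present
```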
Abstract:
Using explicitly-correlated coupled-cluster theory with single and double excitations, the intermolecular distances and interaction energies of the T-shaped imidazole⋯benzene and pyrrole⋯benzene complexes have been computed in a large augmented correlation-consistent quadruple-zeta basis set, adding also corrections for connected triple excitations and remaining basis-set-superposition errors. The results of these computations are used to assess other methods such as Møller–Plesset perturbation theory (MP2), spin-component-scaled MP2 theory, dispersion-weighted MP2 theory, interference-corrected explicitly-correlated MP2 theory, dispersion-corrected double-hybrid density-functional theory (DFT), DFT-based symmetry-adapted perturbation theory, the random-phase approximation, explicitly-correlated ring-coupled-cluster-doubles theory, and double-hybrid DFT with a correlation energy computed in the random-phase approximation.
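The basis-set-superposition-error correction mentioned above is typically made with the Boys–Bernardi counterpoise scheme, in which each monomer energy is recomputed in the full dimer basis (a standard definition, not a formula quoted from the paper):

$$E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}\{AB\} \;-\; E_{A}\{AB\} \;-\; E_{B}\{AB\},$$

where the braces indicate the basis set in which each energy is evaluated, so that the monomer and dimer energies share the same (dimer) basis.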