966 results for "Geometric mixture"
Abstract:
This thesis, entitled "Reliability Modelling and Analysis in Discrete Time: Some Concepts and Models Useful in the Analysis of Discrete Lifetime Data", consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life, and second moment of residual life of the mixture distributions, in terms of the corresponding quantities in the component distributions, are investigated. Some applications of these results are also pointed out. The role of the geometric, Waring, and negative hypergeometric distributions as models of life lengths in the discrete time domain has already been discussed. While describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models in single populations naturally extends to the case of populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics, and characterizations of these models are discussed in Chapter III. Inference of parameters in mixture distributions is usually a difficult problem, because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, least-squares function, etc., and the resulting computations very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without computational hazards. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially in the case of discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments.
Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well-known families. It is further demonstrated that partial moments are as efficient and convenient as many of the conventional tools in resolving practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation and could be the subject of future work in this area.
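The kind of two-component mixture computation treated in Chapter II can be illustrated for geometric components. A minimal sketch, in which the mixing weight and component parameters are purely illustrative and the geometric law is taken on {0, 1, 2, ...}:

```python
# Failure rate of a two-component geometric mixture, h(x) = P(X = x) / P(X >= x).
# A minimal sketch; the mixing weight pi_ and parameters q1, q2 are illustrative.

def geom_pmf(x, q):
    """P(X = x) for a geometric law on {0, 1, 2, ...} with survival q**x."""
    return (1.0 - q) * q ** x

def geom_sf(x, q):
    """P(X >= x)."""
    return q ** x

def mixture_failure_rate(x, pi_, q1, q2):
    pmf = pi_ * geom_pmf(x, q1) + (1.0 - pi_) * geom_pmf(x, q2)
    sf = pi_ * geom_sf(x, q1) + (1.0 - pi_) * geom_sf(x, q2)
    return pmf / sf

# Each component has a constant failure rate (1 - q); the mixture's rate
# decreases in x toward the smaller component rate, min(1 - q1, 1 - q2).
rates = [mixture_failure_rate(x, 0.5, 0.9, 0.5) for x in range(20)]
```

This illustrates the general phenomenon discussed in the chapter: even though each geometric component has a constant failure rate, the mixture's failure rate is decreasing.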
Abstract:
The Montreal Process indicators are intended to provide a common framework for assessing and reviewing progress toward sustainable forest management. The potential of a combined geometrical-optical/spectral mixture analysis model was assessed for mapping the Montreal Process age class and successional stage indicators at a regional scale using Landsat Thematic Mapper data. The project location is an area of eucalyptus forest in Emu Creek State Forest, Southeast Queensland, Australia. A quantitative model was developed relating the spectral reflectance of a forest to the illumination geometry, the slope and aspect of the terrain surface, and the size, shape, and density of the tree crowns. Inversion of this model necessitated the use of spectral mixture analysis to recover subpixel information on the fractional extent of ground scene elements (such as sunlit canopy, shaded canopy, sunlit background, and shaded background). Results obtained from a sensitivity analysis allowed improved allocation of resources to maximize the predictive accuracy of the model. It was found that modeled estimates of crown cover projection, canopy size, and tree density were in significant agreement with field and air-photo-interpreted estimates. However, the accuracy of the successional stage classification was limited. The results obtained highlight the potential for future integration of high and moderate spatial resolution imaging sensors for monitoring forest structure and condition. (C) Elsevier Science Inc., 2000.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then to decompose a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It assumes that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
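For the known-endmember case, the constrained least-squares route can be sketched as follows. This is an illustrative sketch with synthetic spectra, enforcing only the sum-to-one constraint (not nonnegativity) via the standard KKT system of the equality-constrained least-squares problem:

```python
import numpy as np

# Given known endmember signatures (columns of E), recover the abundance
# fractions of a mixed pixel y under the linear mixing model:
#     min ||E a - y||^2  subject to  1'a = 1.
# Illustrative sketch with synthetic data, not a specific library API.

def sum_to_one_ls(E, y):
    p = E.shape[1]
    # Bordered (KKT) system for the equality-constrained least squares.
    A = np.zeros((p + 1, p + 1))
    A[:p, :p] = E.T @ E
    A[:p, p] = 1.0
    A[p, :p] = 1.0
    b = np.append(E.T @ y, 1.0)
    sol = np.linalg.solve(A, b)
    return sol[:p]  # abundance fractions (nonnegativity not enforced here)

rng = np.random.default_rng(0)
E = rng.uniform(0.0, 1.0, size=(50, 3))   # 3 endmembers, 50 spectral bands
a_true = np.array([0.6, 0.3, 0.1])
y = E @ a_true                            # noise-free mixed pixel
a_hat = sum_to_one_ls(E, y)
```

With noise-free data the recovered fractions match the true ones exactly; in practice a nonnegativity constraint would also be imposed.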
The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ denotes the greatest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
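The skewer-counting idea behind PPI can be sketched as follows. This is a simplified illustration with synthetic data: the MNF preprocessing step is omitted, and the number of skewers and endmembers are arbitrary choices:

```python
import numpy as np

# Minimal sketch of the pixel purity index (PPI) idea: project every
# spectral vector onto many random "skewers" and count how often each
# pixel is an extreme. High counts indicate the purest pixels.
# Synthetic scene: mixtures of 3 endmembers plus the pure pixels themselves.

rng = np.random.default_rng(1)
endmembers = rng.uniform(0.0, 1.0, size=(3, 30))       # 3 pure spectra, 30 bands
weights = rng.dirichlet(np.ones(3), size=200)          # random abundance fractions
pixels = np.vstack([weights @ endmembers, endmembers]) # last 3 rows are pure

counts = np.zeros(len(pixels), dtype=int)
for _ in range(500):                                   # 500 random skewers
    skewer = rng.normal(size=pixels.shape[1])
    proj = pixels @ skewer
    counts[np.argmax(proj)] += 1                       # extreme in each direction
    counts[np.argmin(proj)] += 1

purest = np.flatnonzero(counts)                        # pixels ever found extreme
```

Because every mixed pixel is a strictly interior convex combination of the endmembers, the extremes of each projection land on the pure pixels, which accumulate all the counts.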
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
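A minimal numerical sketch of this project-and-extract loop, with synthetic data and random orthogonalized directions rather than the SNR-dependent choices of the actual VCA algorithm (all parameters illustrative):

```python
import numpy as np

# Sketch of the projection step used by VCA-like endmember extraction:
# repeatedly project the data onto a direction orthogonal to the span of
# the endmembers found so far and take the extreme pixel as the next one.

def extract_endmembers(pixels, p, seed=0):
    rng = np.random.default_rng(seed)
    chosen = []
    basis = np.zeros((pixels.shape[1], 0))   # orthonormal basis of found span
    for _ in range(p):
        w = rng.normal(size=pixels.shape[1])
        if basis.shape[1] > 0:
            # Remove the component of w lying in the span of found endmembers.
            w = w - basis @ (basis.T @ w)
        proj = pixels @ w
        idx = int(np.argmax(np.abs(proj)))   # extreme of the projection
        chosen.append(idx)
        # Gram-Schmidt update of the basis with the new endmember.
        v = pixels[idx] - basis @ (basis.T @ pixels[idx])
        basis = np.hstack([basis, (v / np.linalg.norm(v))[:, None]])
    return chosen

# Synthetic scene: 200 mixtures of 3 endmembers; pure pixels are rows 200-202.
rng = np.random.default_rng(2)
E = rng.uniform(size=(3, 40))
pixels = np.vstack([rng.dirichlet(np.ones(3), 200) @ E, E])
found = extract_endmembers(pixels, 3)
```

Since already-found endmembers project to zero on each new direction while the remaining vertices do not, each iteration picks a new pure pixel.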
Abstract:
Based on the potential benefits to human health, there is interest in increasing 18:3n-3, 20:5n-3, 22:6n-3, and cis-9,trans-11 conjugated linoleic acid (CLA) in ruminant foods. Four Aberdeen Angus steers (406 ± 8.2 kg BW) fitted with rumen and duodenal cannulae were used in a 4 x 4 Latin square experiment with 21-d periods to examine the potential of fish oil (FO) and linseed oil (LO) in the diet to increase ruminal outflow of trans-11 18:1 and total n-3 polyunsaturated fatty acids (PUFA) in growing cattle. Treatments consisted of a control diet (60:40 forage:concentrate ratio, on a DM basis) based on maize silage, or the same basal ration containing 30 g/kg DM of FO, LO, or a mixture (1:1, w/w) of FO and LO (LFO). Diets were offered as total mixed rations and fed at a rate of 85 g DM/kg BW0.75/d. Oils had no effect (P = 0.52) on DM intake. Linseed oil had no effect (P > 0.05) on ruminal pH or VFA concentrations, while FO shifted rumen fermentation towards propionate at the expense of acetate. Compared with the control, LO increased (P < 0.05) 18:0, cis 18:1 (Δ9, 12-15), trans 18:1 (Δ4-9, 11-16), trans 18:2, geometric isomers of ∆9,11, ∆11,13, and ∆13,15 CLA, trans-8,cis-10 CLA, trans-10,trans-12 CLA, trans-12,trans-14 CLA, and 18:3n-3 flow at the duodenum. Inclusion of FO in the diet resulted in higher (P < 0.05) flows of cis-9 16:1, trans 16:1 (Δ6-13), cis 18:1 (Δ9, 11, and 13), trans 18:1 (Δ6-15), trans 18:2, 20:5n-3, 22:5n-3, and 22:6n-3, and lowered (P < 0.001) 18:0 at the duodenum relative to the control. For most fatty acids at the duodenum, responses to LFO were intermediate between those to FO and LO. However, LFO resulted in higher (P = 0.04) flows of total trans 18:1 than LO and increased (P < 0.01) trans-6 16:1 and trans-12 18:1 at the duodenum compared with FO or LO.
Biohydrogenation of cis-9 18:1 and 18:2n-6 in the rumen was independent of treatment, but both FO and LO increased (P < 0.001) the extent of 18:3n-3 biohydrogenation compared with the control. Ruminal 18:3n-3 biohydrogenation was higher (P < 0.001) for LO and LFO than FO, while biohydrogenation of 20:5n-3 and 22:6n-3 in the rumen was marginally lower (P = 0.05) for LFO than FO. In conclusion, LO and FO at 30 g/kg DM altered the biohydrogenation of unsaturated fatty acids in the rumen causing an increase in the flow of specific intermediates at the duodenum, but the potential of these oils fed alone or as a mixture to increase n-3 PUFA at the duodenum in cattle appears limited.
Abstract:
Mebendazole (MBZ) is a common benzimidazole anthelmintic that exists in three different polymorphic forms, A, B, and C. Polymorph C is the pharmaceutically preferred form due to its adequate aqueous solubility. Until now, no single-crystal structure determination depicting the nature of the crystal packing, molecular conformation, and geometry had been performed on this compound. Here, the crystal structure of mebendazole form C is resolved for the first time. Mebendazole form C crystallizes in a triclinic centrosymmetric space group, and the molecule is practically planar, since the least-squares plane through the methyl benzimidazolylcarbamate fragment fits its constituent atoms closely. However, the benzoyl group is twisted by 31(1) degrees from the benzimidazole ring, and the torsion angle between the benzene and carbonyl moieties is 27(1) degrees. These bends and other interesting intramolecular geometric features are viewed as a consequence of the intermolecular contacts occurring within the mebendazole C structure. Among these features, a decrease in conjugation through the imine nitrogen atom of the benzimidazole core and a further resonance path crossing the carbamate one are described. Finally, the X-ray powder diffractogram of a form-C-rich mebendazole mixture was overlaid with the one calculated from the mebendazole crystal structure. (C) 2008 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 98:2336-2344, 2009
Abstract:
Unsteady flow of oil and refrigerant gas through the radial clearance in rolling piston compressors has been modeled as a heterogeneous mixture, where the properties are determined from the species conservation transport equation coupled with the momentum and energy equations. Time variations of pressure, tangential velocity of the rolling piston, and radial clearance due to pump setting have been included in the mixture flow model. Those variables have been obtained by modeling the compression process and rolling piston dynamics, and by using the geometric characteristics of the pump, respectively. An important conclusion of this work is the large variation of refrigerant concentration in the oil-filled radial clearance during the compression cycle. That is particularly true for large mass flow rates, for which the flow mixture cannot be considered as having uniform concentration. In the presence of low mass flow rates, homogeneous flow prevails and the mixture tends to have a uniform concentration. In general, it was observed that when calculating the refrigerant mass flow rate using the difference in refrigerant concentration between the compression and suction chambers, a time-averaged value of the gas concentration should be used at the clearance inlet.
Abstract:
In this article we introduce a three-parameter extension of the bivariate exponential-geometric (BEG) law (Kozubowski and Panorska, 2005) [4]. We refer to this new distribution as the bivariate gamma-geometric (BGG) law. A bivariate random vector (X, N) follows the BGG law if N has a geometric distribution and X may be represented (in law) as a sum of N independent and identically distributed gamma variables, where these variables are independent of N. Statistical properties such as the moment generating and characteristic functions, moments, and the variance-covariance matrix are provided. The marginal and conditional laws are also studied. We show that the BGG distribution is infinitely divisible, just as the BEG model is. Further, we provide alternative representations for the BGG distribution and show that it enjoys a geometric stability property. Maximum likelihood estimation and inference are discussed, and a reparametrization is proposed in order to obtain orthogonality of the parameters. We present an application to a real data set where our model provides a better fit than the BEG model. Our bivariate distribution induces a bivariate Lévy process with correlated gamma and negative binomial processes, which extends the bivariate Lévy motion proposed by Kozubowski et al. (2008) [6]. The marginals of our Lévy motion are a mixture of gamma and negative binomial processes, and we call it the BMixGNB motion. Basic properties such as stochastic self-similarity and the covariance matrix of the process are presented. The bivariate distribution at a fixed time of our BMixGNB process is also studied and some results are derived, including a discussion about maximum likelihood estimation and inference. (C) 2012 Elsevier Inc. All rights reserved.
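The BGG construction described above can be simulated directly. A minimal sketch, with illustrative parameter values and the geometric law taken on {1, 2, ...} as in the BEG model:

```python
import random
import statistics

# Simulation sketch of the bivariate gamma-geometric (BGG) construction:
# N is geometric, and X | N is a sum of N i.i.d. gamma variables
# independent of N. Parameter names here are illustrative.

def bgg_sample(p, shape, scale, rng):
    """One draw of (X, N), with N ~ Geometric(p) on {1, 2, ...}."""
    n = 1
    while rng.random() > p:       # count Bernoulli trials until first success
        n += 1
    x = sum(rng.gammavariate(shape, scale) for _ in range(n))
    return x, n

rng = random.Random(42)
draws = [bgg_sample(p=0.5, shape=2.0, scale=1.0, rng=rng) for _ in range(20000)]
mean_x = statistics.fmean(x for x, _ in draws)
mean_n = statistics.fmean(n for _, n in draws)
# For these parameters, E[N] = 1/p = 2 and E[X] = E[N] * shape * scale = 4.
```

The sample means agree with the theoretical moments by Wald's identity, E[X] = E[N]E[gamma variable].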
Abstract:
This study sought to analyse the behaviour of the average spinal posture using a novel investigative procedure during a maximal incremental effort test performed on a treadmill. Spine motion was collected via stereo-photogrammetric analysis in thirteen amateur athletes. At each time percentage of the gait cycle, the reconstructed spine points were projected onto the sagittal and frontal planes of the trunk. On each plane, a polynomial was fitted to the data, and the two-dimensional geometric curvature along the longitudinal axis of the trunk was calculated to quantify the geometric shape of the spine. The average posture over the gait cycle defined the spine's Neutral Curve. This method enabled the lateral deviations, lordosis, and kyphosis of the spine to be quantified noninvasively and in detail. The similarity between any two volunteers was at most 19% on the sagittal plane and 13% on the frontal plane (p<0.01). The data collected in this study can be considered preliminary evidence that there are subject-specific characteristics in spinal curvature during running. Changes induced by increases in speed were not sufficient for the Neutral Curve to lose its individual characteristics; instead it behaved like a postural signature. The data showed the descriptive capability of a new method for analysing spinal posture during locomotion; however, additional studies with larger sample sizes are necessary for extracting more general information from this novel methodology.
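The fit-and-curvature step described above can be sketched as follows. The planar curvature formula k = |y''| / (1 + y'^2)^(3/2) is standard; the synthetic points, polynomial degree, and coordinate names are illustrative, not those of the study:

```python
import numpy as np

# Sketch of the curve-fitting step: fit a polynomial to projected spine
# points and evaluate the two-dimensional geometric curvature
#     k(z) = |y''| / (1 + y'^2)^(3/2)
# along the longitudinal axis. Synthetic points on a known parabola.

z = np.linspace(0.0, 1.0, 50)          # longitudinal coordinate
y = 0.5 * z ** 2                       # synthetic "spine" projection

coeffs = np.polyfit(z, y, deg=4)       # polynomial fit to the points
d1 = np.polyder(coeffs)                # first derivative
d2 = np.polyder(coeffs, 2)             # second derivative

yp = np.polyval(d1, z)
ypp = np.polyval(d2, z)
curvature = np.abs(ypp) / (1.0 + yp ** 2) ** 1.5
```

For the parabola y = z^2/2 the curvature is 1/(1 + z^2)^(3/2), so it equals 1 at z = 0 and decays along the axis, which the fitted values reproduce.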
Abstract:
The stingless bee Melipona beecheii presents great variability and is considered a complex of species. In order to better understand this species complex, we need to evaluate its diversity and develop methods that allow geographic traceability of the populations. Here we present a fast, efficient, and inexpensive means to accomplish this using geometric morphometrics of wings. We collected samples from Mexico, Guatemala, El Salvador, Nicaragua, and Costa Rica, and we were able to correctly assign 87.1% of the colonies to their sampling sites and 92.4% to their haplotype. We propose that geometric morphometrics of the wing could be used as a first-step analysis, leaving the more expensive molecular analysis only for doubtful cases.
Abstract:
Gene clustering is a useful exploratory technique for grouping together genes with similar expression levels under distinct cell cycle phases or distinct conditions. It helps the biologist to identify potentially meaningful relationships between genes. In this study, we propose a clustering method based on multivariate normal mixture models, where the number of clusters is predicted via sequential hypothesis tests: at each step, the method considers a mixture model of m components (m = 2 in the first step) and tests whether it should in fact be m - 1. If the hypothesis is rejected, m is increased and a new test is carried out. The method continues (increasing m) until the hypothesis is accepted. The theoretical core of the method is the full Bayesian significance test, an intuitive Bayesian approach that requires neither model-complexity penalization nor positive probabilities for sharp hypotheses. Numerical experiments were based on a cDNA microarray dataset consisting of expression levels of 205 genes belonging to four functional categories, for 10 distinct strains of Saccharomyces cerevisiae. To analyze the method's sensitivity to data dimension, we performed principal components analysis on the original dataset and predicted the number of classes using 2 to 10 principal components. Compared to Mclust (model-based clustering), our method shows more consistent results.
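As a building block, fitting a fixed-m normal mixture by EM looks as follows. This is a minimal univariate two-component sketch on synthetic data; the multivariate models and the sequential full Bayesian significance test used in the study are not reproduced here:

```python
import numpy as np

# Minimal EM sketch for a two-component univariate normal mixture, the
# building block behind model-based clustering. Initialization and
# iteration count are crude, illustrative choices.

def em_two_normals(x, iters=200):
    x = np.sort(x)
    # Initialize means from the two halves of the sorted data.
    mu = np.array([x[: len(x) // 2].mean(), x[len(x) // 2 :].mean()])
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        dens = (w / np.sqrt(2 * np.pi * var)) * np.exp(
            -0.5 * (x[:, None] - mu) ** 2 / var
        )
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(6.0, 1.0, 500)])
w, mu, var = em_two_normals(data)
```

On well-separated synthetic data the recovered weights and means are close to the generating values (0.5 each, means near 0 and 6).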
Abstract:
Thermodiffusion in a lyotropic mixture of water and potassium laurate is investigated by means of an optical technique (Z-scan) that distinguishes the index variations due to the temperature gradient from those due to the mass gradients. A phenomenological framework allowing for coupled diffusion is developed in order to analyze thermodiffusion in multicomponent systems. An observable parameter related to the mass gradients is found to exhibit a sharp change around the critical micellar concentration, and thus may be used to detect it. The change in slope is due to the markedly different values of the Soret coefficients of the surfactant and the micelles. The difference in the Soret coefficients arises because the micellization process reduces the interaction energy of the ball of amphiphilic molecules with the solvent.
Abstract:
We investigate the phase diagram of a discrete version of the Maier-Saupe model with the inclusion of additional degrees of freedom to mimic a distribution of rodlike and disklike molecules. Solutions of this problem on a Bethe lattice come from the analysis of the fixed points of a set of nonlinear recursion relations. Besides the fixed points associated with isotropic and uniaxial nematic structures, there is also a fixed point associated with a biaxial nematic structure. Due to the existence of large overlaps of the stability regions, we resorted to a scheme to calculate the free energy of these structures deep in the interior of a large Cayley tree. Both thermodynamic and dynamic-stability analyses rule out the presence of a biaxial phase, in qualitative agreement with previous mean-field results.
Abstract:
We present a temperature-dependent Hartree-Fock-Bogoliubov-Popov theory to analyze the properties of the equilibrium states of a homogeneous mixture of bosonic atoms in two different hyperfine states in the presence of an internal Josephson coupling. Our calculations show that the bistable structure of the equilibrium states at zero temperature changes as the temperature of the system is increased. We investigate two mechanisms for the disappearance of bistability. In one, near the collapse of one of the equilibrium states, the acoustical branch becomes unstable and the gap of the optical branch goes to zero. In the other, there is no divergent behavior of the system, and bistability disappears at a temperature at which the two equilibrium states merge at zero population-fraction imbalance. When the temperature is further increased, this state remains as the unique equilibrium configuration.
Abstract:
Today several different unsupervised classification algorithms are commonly used to cluster similar patterns in a data set based only on its statistical properties. Especially in image data applications, self-organizing methods for unsupervised classification have been successfully applied to cluster pixels or groups of pixels in order to perform segmentation tasks. The first important contribution of this paper is the development of a self-organizing method for data classification, named the Enhanced Independent Component Analysis Mixture Model (EICAMM), built by modifying the Independent Component Analysis Mixture Model (ICAMM). The modifications address some of the model's limitations and aim to make it more efficient. Moreover, a pre-processing methodology is also proposed, based on combining Sparse Code Shrinkage (SCS) for image denoising with the Sobel edge detector. In the experiments of this work, EICAMM and other self-organizing models were applied to segmenting images in their original and pre-processed versions. A comparative analysis showed satisfactory and competitive image segmentation results obtained by the proposed methods. (C) 2008 Published by Elsevier B.V.
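The Sobel step of the proposed pre-processing can be sketched with the standard 3x3 kernels. A minimal illustration on a tiny synthetic image with a vertical step edge (the SCS denoising stage is omitted, and the direct double loop is for clarity, not speed):

```python
import numpy as np

# Sobel edge detection sketch: convolve the image with the standard 3x3
# Sobel kernels and take the gradient magnitude.

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def convolve3(img, k):
    """Valid 3x3 cross-correlation (sign-symmetric for Sobel magnitudes)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (img[i : i + 3, j : j + 3] * k).sum()
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0                       # vertical step edge at column 4
gx = convolve3(img, KX)
gy = convolve3(img, KY)
magnitude = np.sqrt(gx ** 2 + gy ** 2)
```

The horizontal gradient responds only in the two output columns whose windows straddle the step, and the vertical gradient is zero everywhere, as expected for a purely vertical edge.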
Abstract:
High-angle grain boundary migration during geometric dynamic recrystallization (GDRX) is predicted by two types of mathematical models. Both models consider the driving pressure due to curvature and a sinusoidal driving pressure owing to subgrain walls connected to the grain boundary. One model is based on a finite-difference solution of a kinetic equation, and the other on a numerical technique in which the boundary is subdivided into linear segments. The models show that an initially flat boundary becomes serrated, with the peaks and valleys migrating into both adjacent grains, as observed during GDRX. When the amplitude of the sinusoidal driving pressure is smaller than 2π, the boundary stops migrating, reaching an equilibrium shape. When the amplitude is larger than 2π, equilibrium is never reached and the boundary migrates indefinitely, which would cause the protrusions of two serrated parallel boundaries to impinge on each other, creating smaller equiaxed grains.