932 results for A-PRIORI
Abstract:
The application of computer-aided inspection, integrated with coordinate measuring machines and laser scanners, to inspect manufactured aircraft parts using robust registration of two point datasets is a subject of active research in computational metrology. This paper presents a novel approach to automated inspection that matches shapes using a modified iterative closest point (ICP) method to define a criterion for the acceptance or rejection of a part. The procedure improves upon existing methods by eliminating the need to construct either a tessellated or smooth representation of the inspected part, as well as the requirement for a priori knowledge of the approximate registration and correspondence between the points representing the computer-aided design dataset and the part to be inspected. In addition, the procedure establishes a better measure of the error between the two matched datasets, and localized region-based triangulation is proposed for tracking this error. The approach improves the convergence of the ICP technique with a dramatic decrease in computational effort. Experimental results obtained by implementing the proposed approach on both synthetic and practical data show that the method is efficient and robust; these results validate the algorithm, and the examples demonstrate its potential for use in engineering applications.
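For orientation, the following is a minimal sketch of plain point-to-point ICP (not the paper's modified variant or its triangulation-based error tracking), assuming NumPy/SciPy; the function names and tolerances are illustrative only.

```python
# Minimal ICP sketch: align measured points to CAD reference points using
# nearest-neighbour correspondences and an SVD-based rigid transform.
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(measured, cad, max_iter=50, tol=1e-6):
    """Register 'measured' points to 'cad' points; returns aligned points and
    the final mean point-to-point error, usable as an accept/reject statistic."""
    tree = cKDTree(cad)
    pts = measured.copy()
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(pts)                 # closest-point correspondences
        R, t = best_fit_transform(pts, cad[idx])
        pts = pts @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return pts, err
```

The final mean error can then be compared against a tolerance to accept or reject the part.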
Multi-GNSS precise point positioning with raw single-frequency and dual-frequency measurement models
Abstract:
The emergence of multiple satellite navigation systems, including BDS, Galileo, modernized GPS, and GLONASS, brings great opportunities and challenges for precise point positioning (PPP). We study the contributions of various GNSS combinations to PPP performance based on undifferenced or raw observations, in which the signal delays and ionospheric delays must be considered. A priori ionospheric knowledge, such as regional or global corrections, strengthens the estimation of the ionospheric delay parameters. The undifferenced models are generally more suitable for single-, dual-, or multi-frequency data processing for single or combined GNSS constellations. Another advantage over ionospheric-free PPP models is that undifferenced models avoid the noise amplification caused by linear combinations. Extensive performance evaluations are conducted with multi-GNSS data sets collected from 105 MGEX stations in July 2014. Dual-frequency PPP results from each single constellation show that the convergence time of the undifferenced PPP solutions is usually shorter than that of the ionospheric-free PPP solutions, while the positioning accuracy of undifferenced PPP shows more improvement for the GLONASS system. In addition, the GLONASS undifferenced PPP results demonstrate performance advantages in high-latitude areas, while this effect is less obvious in the GPS/GLONASS combined configuration. The results also indicate that the BDS GEO satellites have a negative impact on undifferenced PPP performance, given the current “poor” orbit and clock knowledge of the GEO satellites. More generally, the multi-GNSS undifferenced PPP results show improvements in convergence time of more than 60% for both the single- and dual-frequency PPP results, while the positioning accuracy after convergence shows no significant improvement for the dual-frequency PPP solutions but an improvement of about 25% on average for the single-frequency PPP solutions.
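As background, the undifferenced (raw) code and carrier-phase observation equations that such models typically start from can be written as follows; the notation here is generic and not necessarily the paper's.

```latex
% Standard undifferenced (raw) code and carrier-phase observation equations on
% frequency i (generic notation, not necessarily the paper's):
\begin{align}
P_{r,i}^{s} &= \rho_r^{s} + c\,(dt_r - dt^{s}) + T_r^{s}
              + \gamma_i\, I_{r,1}^{s} + d_{r,i} - d_i^{s} + \varepsilon_{P}, \\
L_{r,i}^{s} &= \rho_r^{s} + c\,(dt_r - dt^{s}) + T_r^{s}
              - \gamma_i\, I_{r,1}^{s} + \lambda_i N_{r,i}^{s} + b_{r,i} - b_i^{s} + \varepsilon_{L},
\end{align}
% where \gamma_i = f_1^2 / f_i^2. The slant ionospheric delay I_{r,1}^{s} is kept
% as an estimated parameter, optionally constrained by a priori regional or
% global ionospheric corrections, instead of being eliminated by the
% ionospheric-free combination.
```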
Abstract:
Carbon fiber reinforced polymer (CFRP) composite specimens with different thicknesses, geometries, and stacking sequences were subjected to fatigue spectrum loading in stages. Another set of specimens was subjected to static compression load. On-line acoustic emission (AE) monitoring was carried out during these tests. Two artificial neural networks, a Kohonen self-organizing feature map (KSOM) and a multi-layer perceptron (MLP), were developed for AE signal analysis. AE signals from the specimens were clustered using the unsupervised-learning KSOM. These clusters were correlated to the failure modes using available a priori information such as AE signal amplitude distributions, times of occurrence of signals, ultrasonic imaging, the design of the laminates (stacking sequences, orientation of fibers), and AE parametric plots. Thereafter, AE signals generated from the rest of the specimens were classified by the supervised-learning MLP. The network developed is suitable for on-line monitoring of AE signals in the presence of noise and can be used for the detection and identification of failure modes and their growth. The results indicate that the characteristics of AE signals from different failure modes in CFRP remain largely unaffected by the type of load, fiber orientation, and stacking sequence, since they are representative of the underlying failure phenomena. The type of loading affects only the extent of damage sustained before the specimens fail and hence the number of AE signals recorded during the test. The artificial neural networks (ANN) developed, together with the methods and procedures adopted, show significant success in AE signal characterization in a noisy environment (detection and identification of failure modes and their growth).
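A minimal sketch of the two-stage idea, assuming NumPy and scikit-learn, is shown below: an unsupervised Kohonen map clusters AE feature vectors, and a supervised MLP then classifies new signals once the clusters have been labelled with failure modes. The feature set, map size, and hyperparameters are placeholders, not the values used in the paper.

```python
# Two-stage sketch: unsupervised SOM clustering of AE features, then supervised
# MLP classification after the clusters are labelled with failure modes.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_som(X, rows=4, cols=4, iters=5000, lr0=0.5, sigma0=2.0, seed=0):
    """Plain Kohonen SOM: returns a (rows*cols, n_features) codebook."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(rows * cols, X.shape[1]))
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))        # best-matching unit
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                     # neighbourhood update
    return W

def som_cluster(X, W):
    """Assign each AE signal to its best-matching SOM node (cluster id)."""
    return np.argmin(((X[:, None, :] - W[None]) ** 2).sum(axis=2), axis=1)

# After the clusters are labelled with failure modes using a priori information,
# a supervised MLP classifies AE signals from the remaining specimens:
# mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X_train, mode_labels)
# predicted_modes = mlp.predict(X_new)
```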
Abstract:
In this paper we analyze a deploy and search strategy for multi-agent systems. Mobile agents equipped with sensors carry out a search operation in the search space. The lack of information about the search space is modeled as an uncertainty density distribution over the space, which is assumed to be known to the agents a priori. In each step, the agents deploy themselves optimally so as to maximize the per-step reduction in the uncertainty density. We analyze the proposed strategy for convergence and spatial distributedness. The control law moving the agents is analyzed for stability and convergence using LaSalle's invariance principle, and for spatial distributedness under a few realistic constraints on the control input such as constant speed, a limit on maximum speed, and sensor range limits. Simulation experiments show that the strategy successfully reduces the average uncertainty density below the required level.
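The following is an illustrative sketch, under assumed details, of one deploy-and-search step on a discretized uncertainty density: a Voronoi-style assignment, a speed-limited move toward the density-weighted centroid, and a sensing update that reduces uncertainty within sensor range. It is not the paper's exact control law.

```python
# One assumed deploy-and-search step over a discretized uncertainty density.
import numpy as np

def deploy_and_search_step(agents, grid, density, v_max=0.05, sensor_r=0.1, beta=0.5):
    """agents: (m,2) positions; grid: (n,2) cell centres; density: (n,) uncertainty."""
    d2 = ((grid[:, None, :] - agents[None, :, :]) ** 2).sum(axis=2)   # (n, m)
    owner = d2.argmin(axis=1)                                         # Voronoi cells
    new_agents = agents.copy()
    for k in range(len(agents)):
        cell = owner == k
        w = density[cell]
        if w.sum() > 0:
            centroid = (grid[cell] * w[:, None]).sum(axis=0) / w.sum()
            step = centroid - agents[k]
            dist = np.linalg.norm(step)
            if dist > v_max:                                          # speed limit
                step *= v_max / dist
            new_agents[k] = agents[k] + step
    # sensing: reduce uncertainty within sensor range of each deployed agent
    for k in range(len(agents)):
        near = ((grid - new_agents[k]) ** 2).sum(axis=1) < sensor_r ** 2
        density[near] *= (1.0 - beta)
    return new_agents, density
```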
Abstract:
Knowledge of the physical properties of asteroids is crucial in many branches of solar-system research. Knowledge of spin states and shapes is needed, e.g., for accurate orbit determination and to study the history and evolution of the asteroids. In my thesis, I present new methods for using photometric lightcurves of asteroids to determine their spin states and shapes. The convex inversion method makes use of a general polyhedron shape model and provides, at best, an unambiguous spin solution and a convex shape solution that reproduces the main features of the original shape. Deriving information about non-convex shape features is, in principle, also possible, but usually requires a priori information about the object. Alternatively, a distribution of non-convex solutions, describing the scale of the non-convexities, can also be obtained. Due to an insufficient number of absolute observations and inaccurately defined asteroid phase curves, the $c/b$-ratio, i.e., the flatness of the shape model, is often somewhat ill-defined. However, especially in the case of elongated objects, the flatness seems to be quite well constrained, even when only relative lightcurves are available. The results prove that, contrary to earlier belief, it is possible to derive shape information from lightcurve data if a sufficiently wide range of observing geometries is covered by the observations. Along with the more accurate shape models, the rotational states, i.e., spin vectors and rotation periods, are also determined with improved accuracy. The shape solutions obtained so far reveal a population of irregular objects whose most descriptive shape characteristics can nevertheless be expressed with only a few parameters. Preliminary statistical analyses of the shapes suggest that there are correlations between shape and other physical properties, such as the size, rotation period, and taxonomic type of the asteroids. More shape data, especially for the smallest and largest asteroids as well as for the fast and slow rotators, are needed in order to study these statistics more thoroughly.
Abstract:
The motivation behind the fusion of Intrusion Detection Systems was the realization that, with increasing traffic and increasing complexity of attacks, none of the present-day stand-alone Intrusion Detection Systems can meet the demand for a very high detection rate together with an extremely low false positive rate. Multi-sensor fusion can be used to meet these requirements by refining the combined response of different Intrusion Detection Systems. In this paper, we show how sensor fusion can be designed to best utilize the useful responses from multiple sensors by an appropriate adjustment of the fusion threshold. The threshold is generally chosen according to past experience or by an expert system. Here we show that choosing the threshold bounds according to the Chebyshev inequality performs better. This approach also helps to solve the problem of scalability and has the advantage of fail-safe capability. The paper theoretically models the fusion of Intrusion Detection Systems in order to prove the improvement in performance, supplemented with an empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components. Since the individual sensors chosen detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways: (i) taking all the alarms from each system and avoiding duplications, (ii) taking alarms from each system by fixing threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensor performance. A number of evaluation metrics are used, and the results indicate an overall enhancement in the performance of the combined detector using sensor fusion with the threshold bounds, and significantly better performance using simple rule-based fusion.
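A small sketch of the threshold-bounded fusion idea is given below, with the upper threshold taken from the Chebyshev bound $P(|X-\mu| \ge k\sigma) \le 1/k^2$; the weighting scheme and the false-alarm budget are assumptions for illustration, not the paper's design.

```python
# Threshold-bounded fusion sketch: combine per-sensor anomaly scores and raise an
# alarm when the fused score exceeds a Chebyshev-derived upper threshold.
import numpy as np

def chebyshev_threshold(scores, false_alarm_budget=0.05):
    """Upper threshold mu + k*sigma such that the Chebyshev bound 1/k^2 equals the budget."""
    mu, sigma = scores.mean(), scores.std()
    k = 1.0 / np.sqrt(false_alarm_budget)
    return mu + k * sigma

def fuse_and_decide(sensor_scores, weights=None, false_alarm_budget=0.05):
    """sensor_scores: (n_events, n_sensors) anomaly scores from the individual IDSs."""
    w = np.ones(sensor_scores.shape[1]) if weights is None else np.asarray(weights)
    fused = sensor_scores @ (w / w.sum())          # weighted combination of sensor responses
    thr = chebyshev_threshold(fused, false_alarm_budget)
    return fused > thr                             # True = raise an alarm
```

Because the Chebyshev bound is distribution-free, the same threshold rule applies regardless of how the individual IDS scores are distributed, which is what makes it attractive for fusing dissimilar sensors.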
Abstract:
We show that information sharing among banks may serve as a collusive device. An information sharing agreement is an a priori commitment to reduce informational asymmetries between banks in future lending. Hence, information sharing tends to increase the intensity of competition in future periods and thus reduces the value of informational rents in current competition. We contribute to the existing literature by emphasizing that a reduction in informational rents will also reduce the intensity of competition in the current period, thereby reducing competitive pressure in current credit markets. We provide a large class of economic environments in which a ban on information sharing would be strictly welfare-enhancing.
Abstract:
Neural networks find application in many image denoising problems because of inherent characteristics such as nonlinear mapping and self-adaptiveness. The design of filters largely depends on a priori knowledge of the type of noise; as a result, standard filters are application- and image-specific. Widely used filtering algorithms reduce noisy artifacts by smoothing, but this operation normally smooths the edges as well. On the other hand, sharpening filters enhance high-frequency details, making the image non-smooth. This study proposes an integrated, general approach to designing a finite impulse response (FIR) filter for image filtering based on a principal component neural network (PCNN), optimized in terms of both visual inspection and error metrics. The algorithm exploits the inter-pixel correlation by iteratively updating the filter coefficients using the PCNN, and performs optimal smoothing of the noisy image while preserving both high- and low-frequency features. Evaluation results show that the proposed filter is robust under various noise distributions. Furthermore, the number of unknown parameters is small, and most of them are obtained adaptively from the processed image.
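A rough sketch of the general idea, under assumed details rather than the authors' exact algorithm, is given below: Oja's rule (a single-neuron principal component learner) estimates the leading principal component of noisy image patches, and the resulting weight vector is used as a data-adaptive FIR kernel. Patch size and learning rates are illustrative.

```python
# Assumed sketch: learn an FIR kernel as the leading principal component of
# image patches via Oja's rule, then filter the image with that kernel.
import numpy as np
from scipy.signal import convolve2d

def learn_pc_kernel(image, ksize=5, iters=20000, lr=1e-3, seed=0):
    """image: float array scaled to [0, 1]. Oja's rule drives w toward the leading
    principal component of the patch distribution, used here as an FIR kernel."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=ksize * ksize)
    w /= np.linalg.norm(w)
    H, W = image.shape
    for _ in range(iters):
        r = rng.integers(0, H - ksize + 1)
        c = rng.integers(0, W - ksize + 1)
        x = image[r:r + ksize, c:c + ksize].ravel()
        y = w @ x
        w += lr * y * (x - y * w)             # Oja's update keeps ||w|| near 1
    kernel = w.reshape(ksize, ksize)
    s = kernel.sum()
    return kernel / s if abs(s) > 1e-8 else kernel   # unit DC gain for filtering

def pc_filter(noisy, ksize=5):
    """Filter the noisy image with the learned principal-component kernel."""
    k = learn_pc_kernel(noisy, ksize)
    return convolve2d(noisy, k, mode='same', boundary='symm')
```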
Abstract:
A fuzzy system is developed using a linearized performance model of the gas turbine engine to perform gas turbine fault isolation from noisy measurements. By using a priori information about measurement uncertainties and through design-variable linking, the design of the fuzzy system is posed as an optimization problem with a small number of design variables, which can be solved by a genetic algorithm in a comparatively small amount of computer time. The faults modeled are module faults in five modules: fan, low-pressure compressor, high-pressure compressor, high-pressure turbine, and low-pressure turbine. The measurements used are deviations in exhaust gas temperature, low rotor speed, high rotor speed, and fuel flow from a baseline 'good engine'. The genetic fuzzy system (GFS) allows rapid development of the rule base when the fault signatures and measurement uncertainties change, as happens for different engines and airlines. In addition, the genetic fuzzy system reduces the human effort needed in the trial-and-error process used to design the fuzzy system and makes the development of such a system easier and faster. A radial basis function neural network (RBFNN) is also used to preprocess the measurements before fault isolation. The RBFNN provides significant noise reduction and, when combined with the GFS, leads to a diagnostic system that is highly robust to the presence of noise in the data, showing the advantage of a soft computing approach for gas turbine diagnostics.
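The following compact sketch illustrates the general genetic-fuzzy pattern rather than the paper's system: Gaussian membership functions compare measured deviations with each module's fault signature, and a simple real-coded genetic algorithm tunes the (linked) membership widths for classification accuracy. Signatures, rates, and population settings are placeholders.

```python
# Genetic-fuzzy sketch: GA tunes Gaussian membership widths used to match
# measurement-deviation vectors against per-module fault signatures.
import numpy as np

rng = np.random.default_rng(1)

def classify(deltas, signatures, widths):
    """Fault index whose signature gets the highest product of Gaussian memberships."""
    mu = np.exp(-((deltas[None, :] - signatures) ** 2) / (2 * widths[None, :] ** 2))
    return np.argmax(mu.prod(axis=1))

def fitness(widths, X, y, signatures):
    """Fraction of training cases assigned to the correct fault module."""
    return np.mean([classify(x, signatures, widths) == t for x, t in zip(X, y)])

def ga_tune_widths(X, y, signatures, pop=30, gens=40, sigma=0.05):
    """Simple real-coded GA: tournament selection, Gaussian mutation, elitism."""
    n = signatures.shape[1]
    P = rng.uniform(0.05, 1.0, size=(pop, n))        # one linked width per measurement
    for _ in range(gens):
        scores = np.array([fitness(w, X, y, signatures) for w in P])
        idx = [max(rng.integers(pop, size=2), key=lambda i: scores[i]) for _ in range(pop)]
        children = np.clip(P[idx] + rng.normal(0, sigma, (pop, n)), 1e-3, None)
        children[0] = P[scores.argmax()]             # keep the best individual (elitism)
        P = children
    scores = np.array([fitness(w, X, y, signatures) for w in P])
    return P[scores.argmax()]
```

Sharing one width per measurement channel across all fault classes mirrors the design-variable linking mentioned in the abstract and keeps the search space small.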
Abstract:
This paper challenges the Kripkean interpretation of a posteriori necessities. It will be demonstrated, by an analysis of classic examples, that the modal content of supposed a posteriori necessities is more complicated than the Kripkean line suggests. We will see that further research is needed concerning the a priori principles underlying all a posteriori necessities. In the course of this analysis it will emerge that the modal content of a posteriori necessities can be best described in terms of a Finean conception of modality – by giving essences priority over modality. The upshot of this is that we might be able to establish the necessity of certain supposed a posteriori necessities by a priori means.
Abstract:
Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census in the 1940s had developed a sampling design for the Current Population Survey (CPS). Another significant factor was that digital computers became available to statisticians. In the beginning of the 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem, both published in a memoir in 1774 which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which was expressed by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. In 1894, at the International Statistical Institute meeting, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples. Its idea, still prevailing, was that the sample should be a miniature of the population. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations, based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics and introduced a new statistical inference model which is still the prevailing paradigm. Its essential ideas are to draw repeated samples from the same population and to assume that the population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling, which gave statisticians at the U.S. Census Bureau the central idea for developing the complex survey design for the CPS. An important criterion was to have a method in which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, besides sufficient accuracy in estimation.
Abstract:
The effective use of image guidance, by incorporating the refractive index (RI) variation in the computational modeling of light propagation in tissue, is investigated to assess its impact on optical-property estimation. With the aid of realistic patient breast three-dimensional models, the variation in RI across different regions of the tissue under investigation is shown, through numerical simulations, to influence the estimation of optical properties in image-guided diffuse optical tomography (IG-DOT). It is also shown that assuming an identical RI for all regions of tissue leads to erroneous estimation of the optical properties. A priori knowledge of the RI of the segmented tissue regions in IG-DOT, which is difficult to obtain in vivo, leads to more accurate estimates of the optical properties. Even the inclusion of approximate RI values obtained from the literature for the tissue regions results in better estimates of the optical properties, with values comparable to those obtained with correct knowledge of the RI of the different tissue regions.
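For reference, the frequency-domain diffusion model commonly used in DOT is shown below; the refractive index enters through the medium speed of light $c_n = c_0/n$ (and through the boundary reflection coefficient). The notation is generic, not necessarily that of the paper.

```latex
% Frequency-domain diffusion approximation commonly used in DOT (generic notation);
% the refractive index n(\mathbf{r}) enters through the medium light speed
% c_n(\mathbf{r}) = c_0 / n(\mathbf{r}) and through the boundary reflection term.
\begin{equation}
-\nabla \cdot \bigl( D(\mathbf{r})\, \nabla \Phi(\mathbf{r},\omega) \bigr)
+ \Bigl( \mu_a(\mathbf{r}) + \frac{i\omega}{c_n(\mathbf{r})} \Bigr)\, \Phi(\mathbf{r},\omega)
= S(\mathbf{r},\omega),
\qquad D = \frac{1}{3\,(\mu_a + \mu_s')} .
\end{equation}
```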
Abstract:
An a priori error analysis of discontinuous Galerkin methods for a general elliptic problem is derived under a mild elliptic regularity assumption on the solution. This is accomplished by using some techniques from a posteriori error analysis. The model problem is assumed to satisfy a Gårding-type inequality. Optimal-order $L^2$-norm a priori error estimates are derived for an adjoint-consistent interior penalty method.
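For orientation, an optimal-order $L^2$ estimate of this kind typically takes the following form under full elliptic regularity (the paper derives its estimates under the milder Gårding-type assumption, via a posteriori techniques):

```latex
% Typical optimal-order L^2 a priori estimate for a degree-k DG approximation u_h,
% stated here under full elliptic regularity:
\begin{equation}
\| u - u_h \|_{L^2(\Omega)} \;\le\; C\, h^{k+1}\, \| u \|_{H^{k+1}(\Omega)},
\end{equation}
% with C independent of the mesh size h, for an adjoint-consistent interior
% penalty method.
```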
Abstract:
A new conformal creation-field cosmology is considered, and it is found that a negative-energy scalar field nonminimally coupled to the gravitational field gives rise to creation; in contrast to the Hoyle-Narlikar theory, no a priori assumption about the rate of creation is required to solve the field equations.
Abstract:
Whether HIV-1 evolution in infected individuals is dominated by deterministic or stochastic effects remains unclear because current estimates of the effective population size of HIV-1 in vivo, $N_e$, vary widely. Models assuming HIV-1 evolution to be neutral estimate $N_e \sim 10^2$-$10^4$, smaller than the inverse mutation rate of HIV-1 ($\sim 10^5$), implying the predominance of stochastic forces. In contrast, a model that includes selection estimates $N_e > 10^5$, suggesting that deterministic forces would hold sway. The consequent uncertainty in the nature of HIV-1 evolution compromises our ability to describe disease progression and outcomes of therapy. We perform detailed bit-string simulations of viral evolution that consider large genome lengths and incorporate the key evolutionary processes underlying the genomic diversification of HIV-1 in infected individuals, namely mutation, multiple infections of cells, recombination, selection, and epistatic interactions between multiple loci. Our simulations describe quantitatively the evolution of HIV-1 diversity and divergence in patients. From comparisons of our simulations with patient data, we estimate $N_e \sim 10^3$-$10^4$, implying predominantly stochastic evolution. Interestingly, we find that $N_e$ and the viral generation time are correlated with the disease progression time, presenting a route to a priori prediction of disease progression in patients. Further, we show that the previous estimate of $N_e > 10^5$ decreases as the assumed frequencies of multiple infections of cells and of recombination increase. Our simulations with $N_e \sim 10^3$-$10^4$ may be employed to estimate markers of disease progression and outcomes of therapy that depend on the evolution of viral diversity and divergence.
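A minimal sketch of one generation of such a bit-string model is given below, with Wright-Fisher-style selection, crossover-based recombination, and per-locus mutation; the genome length, rates, and fitness model are illustrative only, not the paper's parameters.

```python
# One generation of an assumed bit-string viral evolution model:
# selection (fitness-weighted parent sampling), recombination, and mutation.
import numpy as np

rng = np.random.default_rng(0)

def next_generation(pop, n_e, mu=1e-5, rho=1e-4, s=0.01):
    """pop: (N, L) array of 0/1 loci (1 = mutated). Returns n_e offspring genomes."""
    N, L = pop.shape
    fitness = (1.0 - s) ** pop.sum(axis=1)                   # multiplicative cost per mutation
    parents = rng.choice(N, size=(n_e, 2), p=fitness / fitness.sum())
    # recombination: walk along the genome, switching parents at crossover points
    start = rng.random((n_e, 1)) < 0.5                       # which parent contributes first
    switch = rng.random((n_e, L)) < rho                      # crossover events
    use_second = (np.cumsum(switch, axis=1) % 2).astype(bool) ^ start
    offspring = np.where(use_second, pop[parents[:, 1]], pop[parents[:, 0]])
    # mutation: flip each locus independently with probability mu
    offspring = offspring ^ (rng.random((n_e, L)) < mu)
    return offspring.astype(pop.dtype)
```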