730 results for fuzzy sample entropy
Abstract:
We give an a priori analysis of a semi-discrete discontinuous Galerkin scheme approximating solutions to a model of multiphase elastodynamics which involves an energy density depending not only on the strain but also the strain gradient. A key component in the analysis is the reduced relative entropy stability framework developed in Giesselmann (SIAM J Math Anal 46(5):3518–3539, 2014). The estimate we derive is optimal in the L∞(0,T;dG) norm for the strain and the L2(0,T;dG) norm for the velocity, where dG is an appropriate mesh dependent H1-like space.
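For background, the relative entropy underpinning such estimates is the classical quadratic residual of the energy density; the LaTeX sketch below uses our own notation and is not quoted from the paper:

    % Relative entropy of a state u with respect to a state v:
    \[
      \eta(u \mid v) \;:=\; \eta(u) - \eta(v) - \mathrm{D}\eta(v)\,(u - v).
    \]
    % For strictly convex \eta this quantity is comparable to |u - v|^2,
    % which is what turns a relative-entropy bound between the dG
    % approximation and the exact solution into norm estimates of the kind
    % stated above; the "reduced" framework adapts the device to the
    % non-convex energies arising in multiphase elastodynamics.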
Abstract:
Human observers exhibit large systematic distance-dependent biases when estimating the three-dimensional (3D) shape of objects defined by binocular image disparities. This has led some to question the utility of disparity as a cue to 3D shape and whether accurate estimation of 3D shape is at all possible. Others have argued that accurate perception is possible, but only with large continuous perspective transformations of an object. Using a stimulus that is known to elicit large distance-dependent perceptual bias (random-dot stereograms of elliptical cylinders), we show that, contrary to these findings, simply adopting a more naturalistic viewing angle completely eliminates this bias. Using behavioural psychophysics, coupled with a novel surface-based reverse correlation methodology, we show that it is binocular edge and contour information that allows for accurate and precise perception, and that observers actively exploit and sample this information when it is available.
Abstract:
Understanding complex social-ecological systems, and anticipating how they may respond to rapid change, requires an approach that incorporates environmental, social, economic, and policy factors, usually in a context of fragmented data availability. We employed fuzzy cognitive mapping (FCM) to integrate these factors in the assessment of future wildfire risk in the Chiquitania region, Bolivia. In this region, dealing with wildfires is becoming increasingly challenging due to reinforcing feedbacks between multiple drivers. We conducted semi-structured interviews and constructed different FCMs in focus groups to understand the regional dynamics of wildfire from diverse perspectives. We used FCM modelling to evaluate possible adaptation scenarios in the context of future drier climatic conditions. Scenarios also considered a possible failure to respond in time to the emergent risk. This approach proved to have great potential to support decision-making for risk management. It helped identify key forcing variables and generated insights into potential risks and trade-offs of different strategies. All scenarios showed increased wildfire risk in the event of more droughts. The ‘Hands-off’ scenario resulted in amplified impacts driven by intensifying trends, particularly affecting agricultural production. The ‘Fire management’ scenario, which adopted a bottom-up approach to improve controlled burning, showed fewer trade-offs between wildfire risk reduction and production than the ‘Fire suppression’ scenario. The findings highlight the importance of considering strategies that involve all actors who use fire, and the need to nest these strategies for a more systemic approach to managing wildfire risk. The FCM model could be used as a decision-support tool and serve as a ‘boundary object’ to facilitate collaboration and the integration of different forms of knowledge and perceptions of fire in the region. This approach also has the potential to support decisions in other dynamic frontier landscapes around the world that face an increased risk of large wildfires.
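As a rough illustration of how FCM scenario analysis of this kind works, here is a minimal Python sketch; the concepts, weights, and update rule below are our own illustrative choices, not the paper's model:

    import numpy as np

    def fcm_step(state, weights, lam=1.0):
        # Kosko-style update with self-memory, squashed by a sigmoid:
        #   a_i(t+1) = f( a_i(t) + sum_j a_j(t) * w[j, i] ),
        # where w[j, i] is the causal influence of concept j on concept i.
        raw = state + state @ weights
        return 1.0 / (1.0 + np.exp(-lam * raw))

    def run_scenario(state, weights, clamped=None, steps=100, tol=1e-6):
        # Iterate to a fixed point; `clamped` pins scenario drivers.
        state = np.asarray(state, dtype=float)
        for _ in range(steps):
            new = fcm_step(state, weights)
            for idx, val in (clamped or {}).items():
                new[idx] = val
            if np.max(np.abs(new - state)) < tol:
                break
            state = new
        return state

    # Toy map: 0 = drought, 1 = uncontrolled burning, 2 = wildfire risk.
    W = np.array([[0.0, 0.3, 0.6],
                  [0.0, 0.0, 0.7],
                  [0.0, 0.0, 0.0]])
    base  = run_scenario([0.2, 0.4, 0.3], W)
    drier = run_scenario([0.2, 0.4, 0.3], W, clamped={0: 0.9})
    print(base, drier)   # the wildfire-risk concept ends higher in the drier run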
Abstract:
Determining the number of sample units that make up a composite sample optimizes the workforce and reduces the errors inherent in soil fertility evaluation and recommendation reports. This study aimed to determine, in three soil use and management systems, the number of sample units needed to form the composite sample for the evaluation of soil fertility. It was concluded that the number of sample units needed to form the composite sample for determining organic matter, pH, P, K, Ca, Mg, Al, H+Al, and base saturation varies with soil use and management and with the error deemed acceptable in estimating the mean. For the same sampling depth, increasing the number of sample units reduced the percentage error in estimating the mean, allowing the recommendation of 14, 14, and 11 sample units under native vegetation, pasture, and corn cultivation, respectively, for an error of 20% in the estimate of the mean.
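The abstract does not spell out its formula, but studies of this kind typically rely on the classical sample-size relation n = (t · CV / E)^2, with the coefficient of variation CV and the acceptable error E both expressed as percentages of the mean. A hedged sketch follows; the CV value and degrees of freedom are invented for illustration:

    from scipy import stats

    def n_sample_units(cv_percent, error_percent, alpha=0.05, df=30):
        # Classical relation n = (t * CV / E)^2; CV and E in % of the mean.
        t = stats.t.ppf(1 - alpha / 2, df)
        return (t * cv_percent / error_percent) ** 2

    # e.g., a hypothetical attribute with CV = 40% and a 20% acceptable error:
    print(round(n_sample_units(40, 20)))   # -> about 17 sample units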
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depend on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may hamper the tracking of the components over time and consequently contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a very powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession-model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, exhaustively varying the quantities associated with the method. Our results show that even in the most challenging tests, the cross-entropy method was able to find the correct parameters to within the 1 per cent level. Even for a non-precessing jet, our optimization method successfully pointed out the lack of precession.
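For readers unfamiliar with the optimizer, a minimal generic sketch of the cross-entropy method for continuous problems follows; the toy loss stands in for the precession-model residuals, and the sample sizes and elite fraction are illustrative, not the authors' settings:

    import numpy as np

    def cross_entropy_min(loss, mu, sigma, n_samples=200, n_elite=20,
                          iters=100, sigma_floor=1e-6):
        # Sample candidates from N(mu, sigma^2), keep the best (elite) ones,
        # refit mu and sigma to the elites, repeat until sigma collapses.
        mu = np.asarray(mu, dtype=float)
        sigma = np.asarray(sigma, dtype=float)
        for _ in range(iters):
            pop = np.random.normal(mu, sigma, size=(n_samples, mu.size))
            scores = np.array([loss(p) for p in pop])
            elite = pop[np.argsort(scores)[:n_elite]]
            mu, sigma = elite.mean(axis=0), elite.std(axis=0)
            if np.all(sigma < sigma_floor):   # distribution has collapsed
                break
        return mu

    # Toy problem: recover two hidden "precession parameters" from a
    # squared-residual loss (a stand-in for the real jet model).
    true_params = np.array([1.3, -0.7])
    def sq_loss(p):
        return float(np.sum((p - true_params) ** 2))

    print(cross_entropy_min(sq_loss, mu=[0.0, 0.0], sigma=[2.0, 2.0]))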
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources, each characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. These images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained with the traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to indicate quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions with more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As with any model fitting performed in the image plane, caution is required when analyzing images constructed from a poorly sampled (u, v) plane.
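A hedged sketch of the model and performance function described above; the parameter names and the exact ellipse parametrization (semi-minor axis derived from eccentricity) are our guesses, not the paper's definitions:

    import numpy as np

    def elliptical_gaussian(x, y, x0, y0, peak, major, ecc, theta):
        # One component; the six parameters follow the list in the abstract,
        # under our assumed parametrization.
        minor = major * np.sqrt(1.0 - ecc ** 2)
        ct, st = np.cos(theta), np.sin(theta)
        u = (x - x0) * ct + (y - y0) * st     # coordinates in the ellipse frame
        v = -(x - x0) * st + (y - y0) * ct
        return peak * np.exp(-0.5 * ((u / major) ** 2 + (v / minor) ** 2))

    def model_image(params, x, y):
        # Sum of N_s components; params is a flat array with 6 values each.
        img = np.zeros_like(x, dtype=float)
        for p in np.asarray(params).reshape(-1, 6):
            img += elliptical_gaussian(x, y, *p)
        return img

    def performance(params, x, y, observed):
        # Squared-difference objective a cross-entropy optimizer would minimize.
        return float(np.sum((model_image(params, x, y) - observed) ** 2))

    # Tiny usage: a synthetic two-component "jet" and a deliberately bad guess.
    y, x = np.mgrid[0:64, 0:64].astype(float)
    truth = np.array([20, 30, 1.0, 4.0, 0.6, 0.5,  40, 32, 0.5, 3.0, 0.3, 1.2])
    guess = np.array([10, 10, 0.5, 2.0, 0.2, 0.0,  50, 50, 0.5, 2.0, 0.2, 0.0])
    obs = model_image(truth, x, y)
    print(performance(guess, x, y, obs) > performance(truth, x, y, obs))  # True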
Abstract:
Technical actions performed by two groups of judokas who won medals at World Championships and Olympic Games during the period 1995-2001 were analyzed. The Super Elite group (n = 17) comprised the best athletes in each weight category. The Elite group (n = 16) comprised medal winners who were not champions and did not win more than three medals. Super Elite judokas used a greater number of throwing techniques that resulted in scores, even when expressed relative to the total number of matches performed, and applied these techniques in more directions than Elite judokas. Further, the number of different throwing techniques and the variability of the directions in which techniques were applied were significantly correlated with the number of wins and the number of points and ippon scored. Thus, a greater number of throwing techniques and a greater use of attack directions seem to be important in increasing unpredictability during judo matches.
Abstract:
This paper is concerned with the computational efficiency of fuzzy clustering algorithms when the data set to be clustered is described by a proximity matrix only (relational data) and the number of clusters must be automatically estimated from such data. A fuzzy variant of an evolutionary algorithm for relational clustering is derived and compared against two systematic (pseudo-exhaustive) approaches that can also be used to automatically estimate the number of fuzzy clusters in relational data. An extensive collection of experiments involving 18 artificial and two real data sets is reported and analyzed.
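One concrete example of a fuzzy clusterer that works directly on relational data is fuzzy c-medoids (Krishnapuram et al.); the sketch below is a stand-in for the kind of algorithm compared above, not the paper's own method:

    import numpy as np
    from scipy.spatial.distance import cdist

    def fuzzy_c_medoids(D, k, m=2.0, iters=100, seed=0):
        # D is an n x n dissimilarity matrix; clusters are represented by
        # medoid objects rather than centroids, so no features are needed.
        rng = np.random.default_rng(seed)
        medoids = rng.choice(D.shape[0], size=k, replace=False)
        for _ in range(iters):
            d = D[medoids] + 1e-12                   # k x n dissimilarities
            u = 1.0 / ((d[:, None, :] / d[None, :, :]) ** (1.0 / (m - 1))).sum(axis=1)
            # each medoid moves to the object minimizing its weighted cost
            new = np.array([np.argmin(D @ (u[i] ** m)) for i in range(k)])
            if np.array_equal(np.sort(new), np.sort(medoids)):
                break
            medoids = new
        return u, medoids

    # Toy relational data: pairwise distances between two 2-D blobs.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
    u, medoids = fuzzy_c_medoids(cdist(X, X), k=2)
    print(u.argmax(axis=0)[:20], u.argmax(axis=0)[20:])  # two crisp groups emerge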
Abstract:
This paper tackles the problem of showing that evolutionary algorithms for fuzzy clustering can be more efficient than systematic (i.e., repetitive) approaches when the number of clusters in a data set is unknown. To do so, a fuzzy version of an Evolutionary Algorithm for Clustering (EAC) is introduced. A fuzzy cluster validity criterion and a fuzzy local search algorithm are used instead of their hard counterparts employed by EAC. Theoretical complexity analyses of both the systematic and evolutionary algorithms of interest are provided. Examples with computational experiments and statistical analyses are also presented.
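The systematic baseline can be sketched as one full clustering run per candidate number of clusters, scored by a validity index; below, plain fuzzy c-means with the Xie-Beni index stands in for the paper's fuzzy criterion (all concrete choices are ours):

    import numpy as np

    def fcm(X, k, m=2.0, iters=150, seed=0):
        # Plain fuzzy c-means, not the EAC variant from the paper.
        rng = np.random.default_rng(seed)
        u = rng.dirichlet(np.ones(k), size=len(X)).T          # k x n memberships
        for _ in range(iters):
            w = u ** m
            c = (w @ X) / w.sum(axis=1, keepdims=True)        # cluster centers
            d2 = ((X[None, :, :] - c[:, None, :]) ** 2).sum(-1) + 1e-12
            u = 1.0 / ((d2[:, None, :] / d2[None, :, :]) ** (1.0 / (m - 1))).sum(axis=1)
        return u, c

    def xie_beni(X, u, c, m=2.0):
        # Compactness over separation; lower is better.
        d2 = ((X[None, :, :] - c[:, None, :]) ** 2).sum(-1)
        sep = min(((c[i] - c[j]) ** 2).sum()
                  for i in range(len(c)) for j in range(i + 1, len(c)))
        return (u ** m * d2).sum() / (len(X) * sep)

    # Systematic (pseudo-exhaustive) estimation: one full run per candidate k.
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(i * 6, 1, (30, 2)) for i in range(3)])
    scores = {k: xie_beni(X, *fcm(X, k)) for k in range(2, 7)}
    print(min(scores, key=scores.get))   # expected to recover k = 3

The evolutionary route avoids this outer loop by evolving partitions with variable numbers of clusters inside a single population, which is where the claimed efficiency gain comes from.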
Abstract:
This paper presents an automatic method to detect and classify weathered aggregates by assessing changes in color and texture. The method allows the extraction of aggregate features from images and their automatic classification based on surface characteristics. The concept of entropy is used to extract features from digital images. An analysis of the use of this concept is presented, and two classification approaches based on neural network architectures are proposed. The classification performance of the proposed approaches is compared with the results obtained by other algorithms commonly considered for classification purposes. The obtained results confirm that the presented method strongly supports the detection of weathered aggregates.
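A minimal sketch of the kind of entropy feature referenced above, under our reading (Shannon entropy of the intensity histogram); the synthetic patches and the direction of the effect are illustrative only, since the real separation is learned by the neural classifiers:

    import numpy as np

    def shannon_entropy(channel, bins=256):
        # Shannon entropy (in bits) of an 8-bit intensity histogram.
        hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
        p = hist[hist > 0] / hist.sum()
        return -np.sum(p * np.log2(p))

    # Two synthetic patches: a near-uniform one and a very noisy one.
    rng = np.random.default_rng(0)
    smooth = rng.normal(128, 4, (64, 64)).clip(0, 255)
    rough  = rng.uniform(0, 255, (64, 64))
    print(shannon_entropy(smooth), shannon_entropy(rough))  # low vs high

Per-channel entropies of an aggregate region would then form the feature vector fed to the neural network classifiers.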
Abstract:
We study and compare the information loss of a large class of Gaussian bipartite systems. It includes the usual Caldeira-Leggett-type model as well as Anosov models (parametric oscillators, the inverted oscillator environment, etc.), which exhibit instability, one of the most important characteristics of chaotic systems. We establish a rigorous connection between the quantum Lyapunov exponents and coherence loss, and show that in the case of unstable environments coherence loss is completely determined by the upper quantum Lyapunov exponent, a behavior which is more universal than that of the Caldeira-Leggett-type model.
Abstract:
Measurements of X-ray diffraction, electrical resistivity, and magnetization are reported across the Jahn-Teller phase transition in LaMnO3. Using a thermodynamic equation, we obtained the pressure derivative of the critical temperature (T_JT), dT_JT/dP = -28.3 K GPa^-1. This approach also reveals that 5.7(3) J mol^-1 K^-1 comes from the volume change and 0.8(2) J mol^-1 K^-1 from the magnetic exchange interaction change across the phase transition. Around T_JT, a robust increase in the electrical conductivity takes place, and the electronic entropy change, which is assumed to be negligible for the majority of electronic systems, was found to be 1.8(3) J mol^-1 K^-1.
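The "thermodynamic equation" is presumably a Clausius-Clapeyron-type relation for a first-order transition; the LaTeX below is our reading, not a quote from the paper:

    % Clausius-Clapeyron form, with the entropy change decomposed into
    % volume, magnetic-exchange, and electronic contributions (the 5.7(3),
    % 0.8(2), and 1.8(3) J mol^-1 K^-1 figures quoted in the abstract):
    \[
      \frac{\mathrm{d}T_{\mathrm{JT}}}{\mathrm{d}P} = \frac{\Delta V}{\Delta S},
      \qquad
      \Delta S = \Delta S_{\mathrm{vol}} + \Delta S_{\mathrm{mag}} + \Delta S_{\mathrm{el}}.
    \]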
Abstract:
Ancient pottery is usually made from local clay, which contains a relatively high concentration of iron. Powdered samples are usually quite black due to magnetite and, although they can be used for thermoluminescence (TL) dating, better TL readings are obtained when a clearer natural or pre-treated sample is used. In electron paramagnetic resonance (EPR) measurements, the huge signal due to iron spin-spin interaction produces intense interference that overlaps any other signal in this range. The age is obtained by dividing the accumulated radiation dose, determined from the concentration of paramagnetic species generated by irradiation, by the annual dose rate from natural radiation; as a consequence, the iron signal cannot be used for EPR dating, since it does not depend on radiation dose. In some cases, density separation using a hydrated solution of sodium polytungstate [Na6(H2W12O40)·H2O] is useful. However, sodium polytungstate is very expensive in Brazil; hence, an alternative method for eliminating this interference is proposed. A chemical process to eliminate about 90% of the magnetite was developed. A sample of powdered ancient pottery was treated in a mixture (3:1:1) of HCl, HNO3, and H2O2 for 4 h. After that, it was washed several times in distilled water to remove all of the acid matrix. The originally black sample becomes somewhat clearer. The resulting material was analyzed by inductively coupled plasma mass spectrometry (ICP-MS), showing that the iron content is reduced by a factor of about 9. In EPR measurements, a non-treated natural ceramic sample shows a broad spin-spin interaction signal, whereas the chemically treated sample presents a narrow signal in the g = 2.00 region, possibly due to a (SiO3)^3- radical, mixed with the signal of the remaining iron [M. Ikeya, New Applications of Electron Spin Resonance, World Scientific, Singapore, 1993, p. 285]. This signal increases in intensity under gamma irradiation. However, still owing to the influence of iron, the additive method yielded an age value that was too old. Since, according to Toyoda and Ikeya [S. Toyoda, M. Ikeya, Geochem. J. 25 (1991) 427-445], annealing at 300 °C yields the E'(1) signal with maximum intensity while annealing at 400 °C eliminates the E'(1) signal completely, subtracting the 400 °C spectrum from that of the 300 °C heat-treated sample isolates an E'(1)-like signal. Since this signal is radiation-dose dependent, EPR dating now becomes possible.
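For reference, the dating relation underlying both the TL and EPR approaches is the standard trapped-charge formula (a background fact; notation ours, not quoted from the paper):

    \[
      \text{Age} = \frac{D_E}{\dot{D}},
    \]
    % where D_E is the accumulated (equivalent) dose recorded by a
    % radiation-sensitive signal such as E'(1), and \dot{D} is the annual
    % dose rate from natural radioactivity; the iron signal is unusable
    % here precisely because it carries no dose information.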
Abstract:
An entropy-based image segmentation approach is introduced and applied to color images obtained from Google Earth. Segmentation refers to the process of partitioning a digital image in order to locate different objects and regions of interest. The application to satellite images paves the way for automated monitoring of ecological catastrophes, urban growth, agricultural activity, maritime pollution, climate change, and general surveillance. Regions representing aquatic, rural, and urban areas are identified, and the accuracy of the proposed segmentation methodology is evaluated. A comparison with gray-level images revealed that color information is fundamental to obtaining an accurate segmentation.
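A minimal sketch of entropy-based segmentation in the spirit described: sliding-window Shannon entropy on a single channel, then a threshold. The window size, bin count, and threshold are illustrative; for color one would stack per-channel entropy maps, in line with the paper's finding that color information is fundamental:

    import numpy as np

    def local_entropy_map(gray, win=15, bins=64):
        # Entropy of the intensity histogram in a sliding window; textures
        # such as water, rural and urban areas separate by local entropy.
        h, w = gray.shape
        out = np.zeros((h - win, w - win))
        for i in range(h - win):
            for j in range(w - win):
                hist, _ = np.histogram(gray[i:i + win, j:j + win],
                                       bins=bins, range=(0, 256))
                p = hist[hist > 0] / hist.sum()
                out[i, j] = -np.sum(p * np.log2(p))
        return out

    # Synthetic scene: a flat "water" half next to a noisy "urban" half.
    rng = np.random.default_rng(0)
    scene = np.hstack([np.full((64, 64), 90.0), rng.uniform(0, 255, (64, 64))])
    ent = local_entropy_map(scene)
    labels = (ent > 3.0).astype(int)                     # illustrative threshold
    print(labels[:, :20].mean(), labels[:, -20:].mean()) # ~0 vs ~1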