718 results for Intuitionistic Fuzzy sets
Abstract:
Knowledge of soils is increasingly important so that they can be used correctly in agriculture and livestock farming, urban growth, conservation of natural resources, and other areas. However, there is a shortage of qualified professionals for soil characterization and pedological mapping, particularly at more detailed scales. This shortage, together with advances in computational tools and remote sensing, led to the emergence of Digital Soil Mapping (DSM), which aims to support and speed up soil survey activities. This work therefore aimed to develop a methodology for delimiting soil units along toposequences based on the spectral behavior of soils in the Visible-Near Infrared (Vis-NIR) wavelength range. The spectral methodology consisted of acquiring the soil spectral curves with a FieldSpecPro spectroradiometer, reducing the number of spectral variables through Principal Component Analysis, and then grouping the samples with the fuzzy k-means method. Five toposequences with points spaced 30 m apart were selected for soil class characterization and sampling. Eight distinct soil classes were described, with detailed characterization and classification in pedological profiles. At the remaining points, soil classes were characterized on the basis of the classification carried out in the pedological profiles, with samples collected by auger at depths of 0.00-0.20 and 0.80-1.00 m, for a total of 162 samples along the five toposequences. The samples were analyzed by both the conventional and the spectral methodologies so that the results could be compared and evaluated. Morphological, physical (texture) and chemical analyses were therefore performed on the soil samples. For the five toposequences studied, the results were satisfactorily similar; some soils were not perfectly individualized by the spectral methodology because of the strong similarity of their spectral behavior, as shown by the Latossolo Vermelho Férrico and the Nitossolo Vermelho Férrico. The spectral methodology was able to differentiate soils with distinct spectral responses and to establish boundaries along the toposequences, showing great potential for implementation in soil surveys.
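The pipeline described above (PCA on Vis-NIR spectra followed by fuzzy k-means grouping) can be sketched in a few lines. The snippet below is only an illustration of that idea, not the thesis code: the file name `visnir_spectra.csv`, the number of principal components and the number of clusters are hypothetical, and the scikit-fuzzy c-means routine stands in for the fuzzy k-means implementation used in the work.

```python
# Illustrative sketch (not the thesis code): reduce Vis-NIR soil spectra with PCA
# and group the samples with fuzzy c-means. File name, component count and
# cluster count are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
import skfuzzy as fuzz

# spectra: one row per soil sample, one column per Vis-NIR wavelength
spectra = np.loadtxt("visnir_spectra.csv", delimiter=",")    # hypothetical input file

# 1) Reduce the spectral information to a few principal components
scores = PCA(n_components=3).fit_transform(spectra)          # shape (n_samples, 3)

# 2) Group the samples with fuzzy c-means (skfuzzy expects features x samples)
n_clusters = 8                                               # e.g. one per expected soil class
cntr, u, _, _, _, _, fpc = fuzz.cluster.cmeans(
    scores.T, c=n_clusters, m=2.0, error=1e-4, maxiter=1000)

hard_labels = np.argmax(u, axis=0)    # most likely soil unit for each sample
print("fuzzy partition coefficient:", fpc)
```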
Abstract:
PURPOSE: To objectively characterize different heart tissues from functional and viability images provided by composite-strain-encoding (C-SENC) MRI. MATERIALS AND METHODS: C-SENC is a new MRI technique for simultaneously acquiring cardiac functional and viability images. In this work, an unsupervised multi-stage fuzzy clustering method is proposed to identify different heart tissues in the C-SENC images. The method is based on sequential application of the fuzzy c-means (FCM) and iterative self-organizing data (ISODATA) clustering algorithms. The proposed method is tested on simulated heart images and on images from nine patients with and without myocardial infarction (MI). The resulting clustered images are compared with MRI delayed-enhancement (DE) viability images for determining MI. Also, Bland-Altman analysis is conducted between the two methods. RESULTS: Normal myocardium, infarcted myocardium, and blood are correctly identified using the proposed method. The clustered images correctly identified 90 ± 4% of the pixels defined as infarct in the DE images. In addition, 89 ± 5% of the pixels defined as infarct in the clustered images were also defined as infarct in the DE images. The Bland-Altman results show no bias between the two methods in identifying MI. CONCLUSION: The proposed technique allows for objectively identifying different heart tissues, which would be potentially important for clinical decision-making in patients with MI.
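A loose sketch of the multi-stage fuzzy clustering idea is given below; it is not the C-SENC processing chain, and the ISODATA refinement stage is omitted. A first fuzzy c-means pass separates blood pool from myocardium and a second pass re-clusters the myocardial pixels; the pixel features, cluster counts and the assumption that cluster 0 corresponds to myocardium are all placeholders.

```python
# Loose sketch of multi-stage fuzzy clustering (not the C-SENC pipeline): a first
# fuzzy c-means pass separates blood pool from myocardium, a second pass splits the
# myocardial pixels into two tissue groups. Features and cluster counts are placeholders.
import numpy as np
import skfuzzy as fuzz

def fcm_labels(features, n_clusters):
    """Run fuzzy c-means and return hard labels plus the membership matrix."""
    cntr, u, *_ = fuzz.cluster.cmeans(
        features.T, c=n_clusters, m=2.0, error=1e-4, maxiter=500)
    return np.argmax(u, axis=0), u

# pixels: rows = image pixels, columns = (anatomy intensity, strain value)
pixels = np.random.rand(5000, 2)          # placeholder for real C-SENC pixel features

# Stage 1: split blood pool from myocardium (2 clusters)
stage1, _ = fcm_labels(pixels, n_clusters=2)
myo = pixels[stage1 == 0]                 # assume cluster 0 corresponds to myocardium

# Stage 2: split the myocardium into two tissue groups (e.g. normal vs. infarcted)
stage2, memberships = fcm_labels(myo, n_clusters=2)
print("stage-2 cluster sizes:", np.bincount(stage2))
```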
Abstract:
We give a sufficient condition for a set of block subspaces in an infinite-dimensional Banach space to be weakly Ramsey. Using this condition we prove that in the Levy-collapse of a Mahlo cardinal, every projective set is weakly Ramsey. This, together with a construction of W. H. Woodin, is used to show that the Axiom of Projective Determinacy implies that every projective set is weakly Ramsey. In the case of $c_0$ we prove similar results for a stronger Ramsey property. For hereditarily indecomposable spaces we show that the Axiom of Determinacy plus the Axiom of Dependent Choices imply that every set is weakly Ramsey. These results are generalizations to the class of projective sets of some theorems of W. T. Gowers and of our paper "Weakly Ramsey sets in Banach spaces."
Abstract:
Given a compact Riemannian manifold $M$ of dimension $m \geq 2$, we study the space of functions of $L^2(M)$ generated by the eigenfunctions of the Laplace-Beltrami operator on $M$ with eigenvalue less than $L \geq 1$. On these spaces we give a characterization of the Carleson measures and the Logvinenko-Sereda sets.
Abstract:
In this article a necessary condition is given for the bargaining sets defined by Shimomura (1997) and the core of a cooperative game with transferable utility to coincide. To this end, the concept of maximum payoff vectors is introduced. The necessary condition consists in verifying that these vectors belong to the core of the game.
Abstract:
For the last two decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of the supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the widely used supermatrix approach). In this study, seven of the most commonly used supertree methods are investigated by using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees with the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree, and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test if the performance levels were affected by the heuristic searches rather than the algorithms themselves. Based on our results, two main groups of supertree methods were identified: on the one hand, the matrix representation with parsimony (MRP), MinFlip, and MinCut methods performed well according to our criteria, whereas the average consensus, split fit, and most similar supertree methods showed a poorer performance or at least did not behave the same way as the total evidence tree. Results for the super distance matrix, that is, the most recent approach tested here, were promising, with at least one derived method performing as well as MRP, MinFlip, and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting a correct behavior of the heuristic searches and a relatively low sensitivity of the algorithms to data set sizes and missing data. Results also showed that the MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and the increase in computing power to handle large data sets. The latter would prove particularly useful for promising approaches such as the maximum quartet fit method, which still requires substantial computing power.
Abstract:
Galton (1907) first demonstrated the "wisdom of crowds" phenomenon by averaging independent estimates of unknown quantities given by many individuals. Herzog and Hertwig (2009, Psychological Science; hereafter H&H) showed that individuals' own estimates can be improved by asking them to make two estimates at separate times and averaging them. H&H claimed to observe far greater improvement in accuracy when participants received "dialectical" instructions to consider why their first estimate might be wrong before making their second estimates than when they received standard instructions. We reanalyzed H&H's data using measures of accuracy that are unrelated to the frequency of identical first and second responses and found that participants in both conditions improved their accuracy to an equal degree.
Abstract:
In this article, the objective is to demonstrate the effects of different decision-making styles on strategic decisions and, in turn, on the organization. The technique presented in the study is based on the transformation of linguistic variables into numerical value intervals. The model draws on fuzzy logic methodology and fuzzy numbers. This fuzzy approach allows us to examine the relations between decision-making styles and strategic management processes under uncertainty. The purpose is to provide results that may help companies exercise the most appropriate decision-making style for their different strategic management processes. The study leaves open further research topics that may be applied to other decision-making areas within the strategic management process.
Abstract:
Aim: To evaluate the effects of using distinct alternative sets of climatic predictor variables on the performance, spatial predictions and future projections of species distribution models (SDMs) for rare plants in an arid environment. Location: Atacama and Peruvian Deserts, South America (18°30'S-31°30'S, 0-3,000 m). Methods: We modelled the present and future potential distributions of 13 species of Heliotropium sect. Cochranea, a plant group with a centre of diversity in the Atacama Desert. We developed and applied a sequential procedure, starting from monthly climate variables, to derive six alternative sets of climatic predictor variables. We used them to fit models with eight modelling techniques within an ensemble forecasting framework, and derived climate change projections for each of them. We evaluated the effects of using these alternative sets of predictor variables on the performance, spatial predictions and projections of SDMs using Generalised Linear Mixed Models (GLMM). Results: The use of distinct sets of climatic predictor variables did not have a significant effect on overall metrics of model performance, but had significant effects on present and future spatial predictions. Main conclusion: Using different sets of climatic predictors can yield the same model fits but different spatial predictions of current and future species distributions. This represents a new form of uncertainty in model-based estimates of extinction risk that may need to be better acknowledged and quantified in future SDM studies.
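A minimal sketch of the ensemble-forecasting step is shown below for illustration only; it is not the authors' modelling setup (which used eight techniques and GLMM-based evaluation). Three off-the-shelf classifiers stand in for the SDM techniques, and the presence/absence data and climatic predictors are synthetic.

```python
# Minimal sketch of ensemble forecasting for an SDM (not the authors' setup):
# several presence/absence models are fitted on climatic predictors and their
# predicted suitabilities averaged. Data and model choices are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                     # synthetic climatic predictors per site
y = (X[:, 0] + 0.5 * X[:, 1]                      # synthetic presence/absence response
     + rng.normal(scale=0.5, size=300) > 0).astype(int)

models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(n_estimators=200, random_state=0),
          GradientBoostingClassifier(random_state=0)]

# Ensemble suitability = mean of the per-model predicted probabilities of presence
suitability = np.mean([m.fit(X, y).predict_proba(X)[:, 1] for m in models], axis=0)
print("mean predicted suitability:", round(float(suitability.mean()), 3))
```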
Abstract:
The atomic force microscope is not only a very convenient tool for studying the topography of different samples, but it can also be used to measure specific binding forces between molecules. For this purpose, one type of molecule is attached to the tip and the other one to the substrate. Bringing the tip close to the substrate allows the molecules to bind together; retracting the tip breaks the newly formed bond. The rupture of a specific bond appears in the force-distance curves as a spike from which the binding force can be deduced. In this article we present an algorithm to automatically process force-distance curves in order to obtain bond strength histograms. The algorithm is based on a fuzzy logic approach that assigns a "quality" score to every candidate event and makes the detection procedure much faster than manual selection. The software has been applied to measure the binding strength between tubulin and microtubule-associated proteins.
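A toy version of fuzzy event scoring on a retract curve is sketched below; it is not the authors' software. The two criteria (jump height and sharpness relative to the local slope), the ramp-shaped memberships and all thresholds are hypothetical choices used only to illustrate how a fuzzy "quality" can rank candidate rupture events.

```python
# Toy fuzzy scoring of rupture events in a retract force-distance curve (not the
# authors' software). Two hypothetical criteria, jump height and sharpness relative
# to the local slope, are combined with a fuzzy AND (minimum).
import numpy as np

def ramp(x, a, b):
    """Membership rising linearly from 0 at a to 1 at b."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def rupture_events(force, min_quality=0.5):
    """Return (index, jump, quality) for candidate rupture events."""
    jumps = np.diff(force)                        # abrupt positive jump = bond rupture
    events = []
    for i, jump in enumerate(jumps):
        if jump <= 0:
            continue
        height_q = ramp(jump, 0.02, 0.10)         # how large the jump is (arbitrary units)
        sharp_q = ramp(jump / (abs(jumps[i - 1]) + 1e-9), 2.0, 10.0)  # vs. local slope
        quality = min(height_q, sharp_q)          # fuzzy AND of the two criteria
        if quality >= min_quality:
            events.append((i, float(jump), float(quality)))
    return events

# toy retract curve: flat baseline, adhesive stretch going negative, one rupture jump
force = np.concatenate([np.zeros(100), np.linspace(0.0, -0.15, 200), np.zeros(100)])
print(rupture_events(force))
```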
Abstract:
Aspergillus fumigatus grows well at neutral and acidic pH in a medium containing protein as the sole nitrogen source by secreting two different sets of proteases. Neutral pH favors the secretion of neutral and alkaline endoproteases, leucine aminopeptidases (Laps), which are nonspecific monoaminopeptidases, and an X-prolyl dipeptidase (DppIV). An acidic pH environment promotes the secretion of an aspartic endoprotease of the pepsin family (Pep1) and tripeptidyl-peptidases of the sedolisin family (SedB and SedD). A novel prolyl peptidase, AfuS28, was found to be secreted in both alkaline and acidic conditions. In previous studies, Laps were shown to degrade peptides from their N-terminus until an X-Pro sequence acts as a stop signal. X-Pro sequences can then be removed by DppIV, which allows Laps access to the following residues. We have shown that at acidic pH Seds degrade large peptides from their N-terminus into tripeptides until a Pro in the P1 or P'1 position acts as a stop for these exopeptidases. However, X-X-Pro and X-X-X-Pro sequences can be removed by AfuS28, thus allowing the Seds to continue sequential proteolysis. In conclusion, both the alkaline and acidic sets of proteases contain exoprotease activity capable of cleaving after proline residues that cannot be removed during sequential digestion by nonspecific exopeptidases.
Abstract:
We investigate under which dynamical conditions the Julia set of a quadratic rational map is a Sierpiński curve.
Abstract:
Due to the large number of characteristics, there is a need to extract the most relevant characteristics from the input data so that the amount of information lost is minimal and the classification performed on the projected data set remains relevant with respect to the original data. To achieve this feature extraction, different statistical techniques, as well as principal component analysis (PCA), may be used. This thesis describes an extension of principal component analysis (PCA) allowing the extraction of a finite number of relevant features from high-dimensional fuzzy data and noisy data. PCA finds linear combinations of the original measurement variables that describe the significant variation in the data. The comparison of the two proposed methods was carried out using postoperative patient data. Experimental results demonstrate the applicability of the two proposed methods to complex data. Fuzzy PCA was used in the classification problem. Classification was performed with the similarity classifier algorithm, in which the weights of the total similarity measure are optimized with a differential evolution algorithm. This thesis presents a comparison of the classification results based on the data obtained from the fuzzy PCA.
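The classification stage described above can be illustrated with the rough sketch below, under stated assumptions: ordinary PCA stands in for the fuzzy PCA of the thesis, the similarity measure is a simple weighted one, and the data are synthetic; only the idea of tuning the similarity weights with differential evolution is retained.

```python
# Rough sketch of a similarity classifier with weights tuned by differential
# evolution (not the thesis implementation): ordinary PCA stands in for fuzzy PCA,
# the similarity measure is a simple weighted one, and the data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 6)), rng.normal(2, 1, (50, 6))])  # two synthetic classes
y = np.repeat([0, 1], 50)

scores = PCA(n_components=3).fit_transform(X)
scores = (scores - scores.min(axis=0)) / (np.ptp(scores, axis=0) + 1e-12)  # scale to [0, 1]
ideals = np.array([scores[y == c].mean(axis=0) for c in (0, 1)])  # one "ideal vector" per class

def predict(weights):
    # total similarity of each sample to each class ideal: sum_j w_j * (1 - |x_j - v_j|)
    sims = np.stack([(weights * (1 - np.abs(scores - v))).sum(axis=1) for v in ideals], axis=1)
    return np.argmax(sims, axis=1)

def loss(weights):
    return np.mean(predict(weights) != y)         # misclassification rate to minimise

result = differential_evolution(loss, bounds=[(0.0, 1.0)] * scores.shape[1], seed=0)
print("optimised weights:", result.x, "training accuracy:", 1 - result.fun)
```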
Abstract:
Dissolved organic matter (DOM) is a complex mixture of organic compounds, ubiquitous in marine and freshwater systems. Fluorescence spectroscopy, by means of Excitation-Emission Matrices (EEMs), has become an indispensable tool to study DOM sources, transport and fate in aquatic ecosystems. However, the statistical treatment of large and heterogeneous EEM data sets still represents an important challenge for biogeochemists. Recently, the Self-Organising Map (SOM) has been proposed as a tool to explore patterns in large EEM data sets. SOM is a pattern recognition method that clusters input EEMs and reduces their dimensionality without relying on any assumption about the data structure. In this paper, we show how SOM, coupled with a correlation analysis of the component planes, can be used both to explore patterns among samples and to identify individual fluorescence components. We analysed a large and heterogeneous EEM data set, including samples from a river catchment collected under a range of hydrological conditions, along a 60-km downstream gradient, and under the influence of different degrees of anthropogenic impact. According to our results, chemical industry effluents appeared to have unique and distinctive spectral characteristics. On the other hand, river samples collected under flash flood conditions showed homogeneous EEM shapes. The correlation analysis of the component planes suggested the presence of four fluorescence components, consistent with DOM components previously described in the literature. A remarkable strength of this methodology was that outlier samples appeared naturally integrated in the analysis. We conclude that SOM coupled with a correlation analysis procedure is a promising tool for studying large and heterogeneous EEM data sets.
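A sketch of the SOM step is given below for illustration only; it is not the authors' analysis and omits the component-plane correlation stage. It assumes the MiniSom package, random placeholder data in place of real excitation-emission matrices, and an arbitrary 6x6 map size.

```python
# Sketch of SOM-based exploration of EEMs (not the authors' analysis; the
# component-plane correlation step is omitted). Assumes the MiniSom package and
# uses random placeholder data in place of real excitation-emission matrices.
import numpy as np
from minisom import MiniSom

n_samples, n_ex, n_em = 200, 25, 60
eems = np.random.rand(n_samples, n_ex, n_em)          # placeholder EEM data set
X = eems.reshape(n_samples, -1)                       # unfold each EEM into a vector
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)    # standardise fluorescence intensities

som = MiniSom(6, 6, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(X)
som.train_random(X, num_iteration=2000)

# Map each sample to its best-matching unit; samples sharing a unit have similar EEM shapes
bmus = np.array([som.winner(x) for x in X])
units, counts = np.unique(bmus, axis=0, return_counts=True)
print("samples per SOM unit:",
      {(int(u[0]), int(u[1])): int(c) for u, c in zip(units, counts)})
```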