718 results for Intuitionistic Fuzzy sets


Relevance:

20.00%

Publisher:

Abstract:

Fuzzy logic accepts infinitely many intermediate logical values between false and true. Based on this principle, a system of fuzzy rules was established to provide the best management of Catasetum fimbriatum. The developed fuzzy system uses temperature and shading as input variables and orchid vitality as output, with "Low" (L), "Medium" (M) and "High" (H) as linguistic values. The objective of the study was to develop a rule-based fuzzy system to improve the management of the Catasetum fimbriatum species, whose production presents some difficulties and which offers high added value. The system may help orchid experts and amateurs to manage this species.
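
As an illustration of how such a rule base can be wired up, here is a minimal Mamdani-style sketch using the scikit-fuzzy library; the universes, membership functions and rules below are illustrative assumptions, not the paper's values.

```python
# Minimal Mamdani-style fuzzy system sketch with scikit-fuzzy.
# All numeric ranges and rules are illustrative, not the paper's.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

temperature = ctrl.Antecedent(np.arange(0, 41, 1), 'temperature')  # deg C
shading = ctrl.Antecedent(np.arange(0, 101, 1), 'shading')         # percent
vitality = ctrl.Consequent(np.arange(0, 11, 1), 'vitality')        # 0-10 score

# Triangular "Low" / "Medium" / "High" membership functions on each universe.
for var, (lo, hi) in [(temperature, (0, 40)), (shading, (0, 100)), (vitality, (0, 10))]:
    mid = (lo + hi) / 2
    var['L'] = fuzz.trimf(var.universe, [lo, lo, mid])
    var['M'] = fuzz.trimf(var.universe, [lo, mid, hi])
    var['H'] = fuzz.trimf(var.universe, [mid, hi, hi])

rules = [
    ctrl.Rule(temperature['M'] & shading['M'], vitality['H']),
    ctrl.Rule(temperature['H'] & shading['L'], vitality['L']),
    ctrl.Rule(temperature['L'] | shading['H'], vitality['M']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['temperature'] = 28
sim.input['shading'] = 60
sim.compute()
print(sim.output['vitality'])  # defuzzified vitality score
```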

Relevance:

20.00%

Publisher:

Abstract:

Graduate Program in Agribusiness and Development - Tupã

Relevance:

20.00%

Publisher:

Abstract:

The pharmaceutical industry was consolidated in Brazil in the 1930s and has become increasingly competitive since then. As a result, the implementation of the Toyota Production System, which aims at lean production, has become common among companies in the segment. The main efficiency indicator currently used is Overall Equipment Effectiveness (OEE). Using the fuzzy DEA-BCC model, this paper analyzes the efficiency of the production lines of a pharmaceutical company in the Paraíba Valley, compares the values obtained by the model with those calculated by the OEE, identifies the machines most sensitive to variation in the input data, and develops an effectiveness ranking of the machines considered. The results show that the agreement between the two methods is approximately 57%, and that the line considered most effective by the Toyota Production System is not the same as the one found by this paper.
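
For reference, the OEE indicator mentioned above is conventionally the product of availability, performance and quality; below is a minimal sketch of that conventional formula (the function and variable names are mine, and the paper's DEA-BCC side is not shown).

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness = Availability x Performance x Quality."""
    availability = run_time / planned_time                    # uptime share
    performance = ideal_cycle_time * total_count / run_time   # speed vs. ideal
    quality = good_count / total_count                        # share of good units
    return availability * performance * quality

# Example (times in minutes): 420 min of run time in a 480 min shift,
# 1500 units at an ideal cycle time of 0.25 min, 1425 of them good.
print(oee(480, 420, 0.25, 1500, 1425))  # ~0.74
```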

Relevance:

20.00%

Publisher:

Abstract:

Graduate Program in Mechanical Engineering - FEG

Relevance:

20.00%

Publisher:

Abstract:

Background: Large gene expression studies, such as those conducted using DNA arrays, often provide millions of different pieces of data. To address the problem of analyzing such data, we describe a statistical method, which we have called ‘gene shaving’. The method identifies subsets of genes with coherent expression patterns and large variation across conditions. Gene shaving differs from hierarchical clustering and other widely used methods for analyzing gene expression studies in that genes may belong to more than one cluster, and the clustering may be supervised by an outcome measure. The technique can be ‘unsupervised’, that is, the genes and samples are treated as unlabeled, or partially or fully supervised by using known properties of the genes or samples to assist in finding meaningful groupings. Results: We illustrate the use of the gene shaving method to analyze gene expression measurements made on samples from patients with diffuse large B-cell lymphoma. The method identifies a small cluster of genes whose expression is highly predictive of survival. Conclusions: The gene shaving method is a potentially useful tool for exploration of gene expression data and identification of interesting clusters of genes worth further investigation.
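
The core shaving loop is straightforward to sketch in NumPy; the sketch below simplifies the published method, which additionally selects cluster sizes with a gap statistic and orthogonalizes the data to find multiple clusters.

```python
import numpy as np

def shave(X, frac=0.10, min_genes=10):
    """Produce one nested sequence of candidate gene clusters ('shaving').

    X: genes x samples expression matrix. At each step, the fraction `frac`
    of genes least correlated with the leading principal component of the
    current sub-matrix is removed.
    """
    X = X - X.mean(axis=1, keepdims=True)          # center each gene
    idx = np.arange(X.shape[0])
    nested = [idx]
    while len(idx) > min_genes:
        sub = X[idx]
        _, _, vt = np.linalg.svd(sub, full_matrices=False)
        pc = vt[0]                                  # leading PC over samples
        corr = np.abs(sub @ pc) / (np.linalg.norm(sub, axis=1) + 1e-12)
        keep = np.argsort(corr)[int(np.ceil(len(idx) * frac)):]
        idx = idx[keep]                             # shave off the lowest frac
        nested.append(idx)
    return nested
```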

Relevance:

20.00%

Publisher:

Abstract:

Hundreds of terabytes of CMS (Compact Muon Solenoid) data are accumulated for storage day by day at the University of Nebraska-Lincoln, one of the eight US CMS Tier-2 sites. Managing this data includes retaining useful CMS data sets and clearing storage space for newly arriving data by deleting less useful data sets. This important task is currently done manually and requires a large amount of time. The overall objective of this study was to develop a methodology to help identify the data sets to be deleted when storage space is required. CMS data is stored using HDFS (Hadoop Distributed File System), whose logs record file access operations. Hadoop MapReduce was used to feed the information in these logs to Support Vector Machines (SVMs), a machine learning algorithm applicable to classification and regression, which is used in this thesis to develop a classifier. The time this method takes to classify data sets depends on the size of the input HDFS log file, since the MapReduce algorithms used here have O(n) complexity. The SVM methodology produces a list of data sets for deletion along with their respective sizes. It was also compared with a heuristic called Retention Cost, which is calculated from the size of a data set and the time since its last access to help decide how useful that data set is. The accuracies of both were compared by calculating the percentage of data sets predicted for deletion that were accessed at a later time. Our SVM methodology proved to be more accurate than the Retention Cost heuristic, and could be applied to similar problems involving other large data sets.
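
A hedged sketch of the classification step with scikit-learn follows; the features (size, recency, access count) and the toy labels are my guesses at what the HDFS logs would yield, not the thesis's actual feature set.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-data-set features parsed from HDFS access logs:
# [size_gb, days_since_last_access, accesses_in_last_90_days]
X = np.array([
    [120.0, 400, 0],
    [15.5, 12, 37],
    [980.0, 250, 2],
    [3.2, 5, 61],
])
y = np.array([1, 0, 1, 0])  # 1 = deletion candidate, 0 = retain

# Scale features, then fit an RBF-kernel SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
clf.fit(X, y)
print(clf.predict([[500.0, 300, 1]]))  # large, stale, rarely accessed -> [1]
```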

Relevance:

20.00%

Publisher:

Abstract:

There are some variants of the widely used Fuzzy C-Means (FCM) algorithm that support clustering data distributed across different sites. Those methods have been studied under different names, such as collaborative and parallel fuzzy clustering. In this study, we augment two FCM-based algorithms used to cluster distributed data by providing constructive ways of determining their essential parameters (including the number of clusters) and by forming a set of systematically structured guidelines, such as how to select the specific algorithm depending on the nature of the data environment and the assumptions made about the number of clusters. A thorough complexity analysis, covering space, time, and communication aspects, is reported. A series of detailed numeric experiments is used to illustrate the main ideas discussed in the study.
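
For context, here is a minimal NumPy sketch of the standard single-site FCM loop that such collaborative/parallel variants build on; the fuzzifier m, tolerance, and initialization are generic defaults, not values from the study.

```python
import numpy as np

def fcm(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard (single-site) Fuzzy C-Means. Distributed variants exchange
    prototypes or partition matrices between sites on top of this loop."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships: rows sum to 1
    for _ in range(max_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]   # prototype update
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)  # membership update
        if np.abs(U_new - U).max() < tol:
            return U_new, V
        U = U_new
    return U, V
```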

Relevance:

20.00%

Publisher:

Abstract:

The attributes describing a data set may often be arranged in meaningful subsets, each of which corresponds to a different aspect of the data. An unsupervised algorithm (SCAD) that simultaneously performs fuzzy clustering and aspect weighting was proposed in the literature. However, SCAD may fail and halt under certain conditions. To fix this problem, its steps are modified and then reordered to reduce the number of parameters the user must set. In this paper we prove that each step of the resulting algorithm, named ASCAD, globally minimizes its cost function with respect to the argument being optimized. The asymptotic analysis of ASCAD shows that its time complexity is the same as that of fuzzy c-means. A hard version of the algorithm and a novel validity criterion that uses the aspect weights to estimate the number of clusters are also described. The proposed method is assessed over several artificial and real data sets.
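
For orientation, a typical cost function for simultaneous fuzzy clustering and aspect weighting has the following form; this is a generic SCAD-style objective, assumed here for illustration rather than taken from ASCAD itself.

```latex
% Memberships u_{ik}, aspect weights w_{il}, and d_{ikl} the distance
% between datum k and prototype i restricted to aspect l:
J = \sum_{i=1}^{c} \sum_{k=1}^{n} u_{ik}^{m} \sum_{l=1}^{L} w_{il}^{q}\, d_{ikl}^{2},
\qquad \sum_{i=1}^{c} u_{ik} = 1, \quad \sum_{l=1}^{L} w_{il} = 1 .
```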

Relevance:

20.00%

Publisher:

Abstract:

The present study compared the changes in markers of muscle damage after bouts of resistance exercise employing the Multiple-sets (MS) and Half-pyramid (HP) training systems. Ten healthy men (26.1 ± 6.3 years), who had been involved in regular resistance training, performed MS and HP bouts, 14 days apart, in a randomised, counter-balanced manner. For the MS bout, participants performed three sets of maximum repetitions at 75%-1RM (i.e. 75% of a One Repetition Maximum) for the three exercises, starting with the bench press, followed by pec deck and decline bench press. For the HP bout, the participants performed three sets of maximum repetitions with 67%-1RM, 74%-1RM and 80%-1RM for the first, second and third sets, respectively, for the same three exercise sequences as the MS bout. The total volume of load lifted was equated between both bouts. Muscle soreness, plasma creatine kinase (CK) activity, myoglobin (Mb) and C-reactive protein (CRP) concentrations were assessed before and for three days after each exercise bout, and the changes over time were compared between MS and HP using two-way repeated measures ANOVA. Muscle soreness developed significantly (P<0.01) after both bouts, but no significant difference was observed between MS and HP. Plasma CK activity and Mb concentration increased significantly (P<0.01) without significant differences between bouts, and CRP concentration did not change significantly after either bout. These results suggest that the muscle damage profile is similar for MS and HP, probably due to the similar total volume of load lifted.

Relevance:

20.00%

Publisher:

Abstract:

We show that if $f$ is a homeomorphism of the 2-torus isotopic to the identity and its lift $\tilde{f}$ is transitive, or even if it is transitive outside the lift of the elliptic islands, then $(0,0)$ is in the interior of the rotation set of $\tilde{f}$. This proves a particular case of Boyland's conjecture.
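
For reference, a minimal statement of the rotation set in question, assuming the usual Misiurewicz-Ziemian definition:

```latex
% Rotation set of a lift \tilde{f} : \mathbb{R}^2 \to \mathbb{R}^2 of f
\rho(\tilde{f}) \;=\; \bigcap_{m \ge 1} \overline{\bigcup_{n \ge m}
  \left\{ \frac{\tilde{f}^{\,n}(x) - x}{n} \;:\; x \in \mathbb{R}^2 \right\}}
```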

Relevance:

20.00%

Publisher:

Abstract:

A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC(max) algorithm runs in linear time with respect to the variable $M = |C| + |Z|$, where $|C|$ is the image scene size and $|Z|$ is the size of the allowable range, $Z$, of the associated weight/affinity function. For most implementations, $Z$ is identical to the set of allowable image intensity values, and its size can be treated as small with respect to $|C|$, meaning that $O(M) = O(|C|)$. In such a situation, GC(max) runs in linear time with respect to the image size $|C|$. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the $\ell_\infty$ norm $\|F_P\|_\infty$ of the map $F_P$ that associates, with every element $e$ from the boundary of an object $P$, its weight $w(e)$. This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions $\|F_P\|_q$ for $q \in [1,\infty]$. Of these, the best known minimization problem is for the energy $\|F_P\|_1$, which is solved by the classic min-cut/max-flow algorithm, referred to often as the Graph Cut algorithm. We notice that a minimization problem for $\|F_P\|_q$, $q \in [1,\infty)$, is identical to that for $\|F_P\|_1$ when the original weight function $w$ is replaced by $w^q$. Thus, any algorithm GC(sum) solving the $\|F_P\|_1$ minimization problem also solves the one for $\|F_P\|_q$ with $q \in [1,\infty)$, so just two algorithms, GC(sum) and GC(max), are enough to solve all $\|F_P\|_q$-minimization problems. We also show that, for any fixed weight assignment, the solutions of the $\|F_P\|_q$-minimization problems converge to a solution of the $\|F_P\|_\infty$-minimization problem ($\|F_P\|_\infty = \lim_{q \to \infty} \|F_P\|_q$ alone is not enough to deduce that). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included; it concentrates on the algorithms' actual (as opposed to provable worst-case) running times, as well as on the influence of the choice of seeds on the output.
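
Spelled out, with $bd(P)$ denoting the set of boundary edges of object $P$, the energies and the $w \mapsto w^q$ reduction read:

```latex
% Energies above:
\|F_P\|_q = \Big( \sum_{e \in bd(P)} w(e)^{q} \Big)^{1/q}, \qquad
\|F_P\|_\infty = \max_{e \in bd(P)} w(e).
% Since t -> t^{1/q} is increasing, minimizing \|F_P\|_q under weights w
% equals minimizing \sum_{e \in bd(P)} w(e)^q, i.e. the \|\cdot\|_1 energy
% under the modified weights w^q; this is the reduction used in the text.
```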

Relevance:

20.00%

Publisher:

Abstract:

Facial reconstruction is a method that seeks to recreate a person's facial appearance from his/her skull. This technique can be the last resource used in a forensic investigation, when identification techniques such as DNA analysis, dental records, fingerprints and radiographic comparison cannot be used to identify a body or skeletal remains. To perform facial reconstruction, data on facial soft tissue thickness are necessary. The scientific literature describes differences in facial soft tissue thickness between ethnic groups, and different databases of soft tissue thickness have been published. There are no literature records of facial reconstruction carried out with soft tissue data obtained from samples of Brazilian subjects, nor any reports of digital forensic facial reconstruction performed in Brazil. Two databases of soft tissue thickness have been published for the Brazilian population: one obtained from measurements performed on fresh cadavers (fresh cadavers pattern), and another from measurements using magnetic resonance imaging (magnetic resonance pattern). This study aims to perform three different characterized digital forensic facial reconstructions (with hair, eyelashes and eyebrows) of a Brazilian subject, based on an international pattern and the two Brazilian patterns for facial soft tissue thickness, and to evaluate the reconstructions by comparing them to photographs of the individual and of nine other subjects. The DICOM data of a Computed Tomography (CT) scan donated by a volunteer were converted into stereolithography (STL) files and used to create the digital facial reconstructions. Once the three reconstructions were performed, they were compared to photographs of the subject whose face was reconstructed and of nine other subjects. Thirty examiners participated in this recognition process. The target subject was recognized by 26.67% of the examiners in the reconstruction performed with the Brazilian magnetic resonance pattern, by 23.33% with the Brazilian fresh cadavers pattern and by 20.00% with the international pattern, and was the most recognized subject under the first two patterns. The rate of correct recognitions of the target subject indicates that digital forensic facial reconstruction, conducted with the parameters used in this study, may be a useful tool.

Relevance:

20.00%

Publisher:

Abstract:

This paper analyzes concepts of independence and assumptions of convexity in the theory of sets of probability distributions. The starting point is Kyburg and Pittarelli's discussion of "convex Bayesianism" (in particular their proposals concerning E-admissibility, independence, and convexity). The paper offers an organized review of the literature on independence for sets of probability distributions; new results on graphoid properties and on the justification of "strong independence" (using exchangeability) are presented. Finally, the connection between Kyburg and Pittarelli's results and recent developments on the axiomatization of non-binary preferences, and its impact on "complete" independence, are described.
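
For readers outside this literature, a standard formulation of strong independence for credal sets $K(X)$ and $K(Y)$ is given below; this is assumed background, not necessarily the paper's exact definition.

```latex
% Strong independence: the joint credal set is the convex hull (CH) of all
% products of marginals drawn independently from each credal set.
K(X, Y) \;=\; \mathrm{CH}\,\bigl\{\, P_X \otimes P_Y \;:\;
  P_X \in K(X),\ P_Y \in K(Y) \,\bigr\}
```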