Abstract:
This work presents a scalable and efficient parallel implementation of the standard Simplex algorithm on a multicore architecture to solve large-scale linear programming problems. We present a general scheme explaining how each step of the standard Simplex algorithm was parallelized, highlighting important points of the parallel implementation. Performance analyses were conducted by comparing against the sequential time of the Simplex tableau implementation and of IBM's CPLEX® Simplex. The experiments were executed on a shared-memory machine with 24 cores. The scalability analysis was performed with problems of different dimensions, providing evidence that our parallel standard Simplex algorithm has better parallel efficiency for problems with more variables than constraints. In comparison with CPLEX®, the proposed parallel algorithm achieved an efficiency up to 16 times higher.
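As a minimal sketch of why the tableau method parallelizes well, assuming a dense tableau stored as a NumPy float array (names are illustrative, not taken from the thesis): in the pivot step every non-pivot row is updated independently, so the row loop below can be split across cores.

```python
# Illustrative sketch of one parallelizable Simplex pivot step (not the thesis code).
# Each non-pivot row update is independent, so rows can be distributed across cores.
import numpy as np

def pivot(tableau: np.ndarray, row: int, col: int) -> None:
    """Perform an in-place pivot on tableau[row, col] (tableau must be float)."""
    tableau[row] /= tableau[row, col]            # normalize the pivot row
    for r in range(tableau.shape[0]):            # independent row eliminations
        if r != row:
            tableau[r] -= tableau[r, col] * tableau[row]
```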
Intelligent system for detecting oil slicks on the sea surface using SAR images
Abstract:
Oil spills at sea, accidental or not, have enormous negative consequences for the affected area. The damage is environmental and economic, especially when the slicks are close to preservation areas and/or coastal zones. The development of automatic techniques for identifying oil slicks on the sea surface, captured in radar images, supports comprehensive monitoring of the oceans and seas. However, slicks of different origins can appear in this type of imagery, which makes identification a very difficult task. The system proposed in this work, based on digital image processing techniques and artificial neural networks, aims to identify the analyzed slick and to discriminate between oil and other slick-generating phenomena. Tests on the functional blocks that compose the proposed system allow different algorithms to be implemented, as well as their detailed and prompt analysis. The digital image processing algorithms (speckle filtering and gradient), as well as the classification algorithms (Multilayer Perceptron, Radial Basis Function, Support Vector Machine and Committee Machine), are presented and discussed. The final performance of the system, with different kinds of classifiers, is presented through ROC curves. The true positive rates obtained agree with the literature on oil slick detection in SAR images.
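To illustrate the evaluation pipeline, a minimal sketch that trains one of the cited classifiers (an SVM) on synthetic stand-in features and summarizes it with a ROC curve using scikit-learn; this is an assumption-laden illustration, not the thesis implementation.

```python
# Sketch: SVM classifier scored with a ROC curve, on synthetic features that
# stand in for texture/gradient features extracted from SAR image patches.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, roc_auc_score

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(probability=True).fit(X_tr, y_tr)   # oil vs. look-alike classifier
scores = clf.predict_proba(X_te)[:, 1]
fpr, tpr, _ = roc_curve(y_te, scores)         # points of the ROC curve
print("AUC:", roc_auc_score(y_te, scores))
```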
Abstract:
The present work maps and characterizes the wear mechanisms of structural engineering polymers arising from sliding contact with a metallic cylindrical spindle subjected to eccentricity due to an offset between its mass and geometric centers. For this purpose, an experimental apparatus was designed and built from a balancing machine, in which the cylindrical counterbody was supported on two bearings and the polymeric specimen was held in a holder free to move along the counterbody. The experimental tests were standardized using two bearing conditions (fixed or free) and seven different positions along the counterbody, which impose different stiffness conditions on the system. Other parameters, such as applied normal load, sliding velocity and sliding distance, were kept fixed. Two structural polymers in wide everyday use, PTFE (polytetrafluoroethylene) and PEEK (poly-ether-ether-ketone), were used as specimens, with AISI 4140 alloy steel as the counterbody. The polymeric materials were characterized by thermal analysis (thermogravimetry, differential scanning calorimetry and dynamic-mechanical analysis), hardness and X-ray diffractometry, while the metallic material was subjected to hardness and mechanical strength tests and metallographic analysis. During the tribological tests, the heating response was recorded with thermometers, along with the overall vibration velocity (VGV) and the acceleration, measured with accelerometers. After the tests, the worn surfaces of the specimens were analyzed by Scanning Electron Microscopy (SEM) for morphology and by EDS spectroscopy for microanalysis. Moreover, the roughness of the counterbody was characterized before and after the tribological tests. It was observed that the tribological responses of the two polymers differed as a function of their distinct molecular structures, and the predominant wear mechanisms in each polymer were identified. The VGV of PTFE was smaller than that of PEEK in the minimum-stiffness condition, owing to the higher loss coefficient of that polymer. The wear rate of PTFE was more than an order of magnitude higher than that of PEEK. From the results it was possible to develop a correlation between the wear rate and the parameter (E/ρ)^(1/2) (Young's modulus E, density ρ), which is proportional to the longitudinal elastic wave velocity in the material.
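For a quick numerical feel of this correlation parameter, a minimal sketch computing (E/ρ)^(1/2) for the two polymers; the Young's modulus and density figures below are typical handbook values used as illustrative assumptions, not data reported in the thesis.

```python
# Illustrative computation of the elastic-wave-velocity parameter (E/rho)**0.5.
# E (Pa) and rho (kg/m^3) are approximate handbook values, not thesis data.
import math

materials = {
    "PTFE": {"E": 0.5e9, "rho": 2200.0},
    "PEEK": {"E": 3.6e9, "rho": 1300.0},
}
for name, p in materials.items():
    v = math.sqrt(p["E"] / p["rho"])   # proportional to longitudinal wave speed
    print(f"{name}: (E/rho)^0.5 ≈ {v:.0f} m/s")
```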
Abstract:
In the present research work, composites were prepared using pineapple leaf fibres (PALF) as reinforcement and unsaturated polyester resin as matrix, incorporating a fire retardant at different compositions. The PALF was obtained from the decortication of pineapple leaves from Ramada 4, Ielmo Marinho, in the State of Rio Grande do Norte. The unsaturated polyester resin and the catalyst were bought locally. The fire retardant, aluminium trihydroxide, Al(OH)3, was donated by Alcoa Alumínio S.A. and was used in proportions of 20%, 40% and 60% w/w. Initially the fibres were treated with 2% NaOH for 1 hour to remove impurities present on the fibre surface, such as wax, fat, pectin and pectate, in order to obtain better adhesion of the fibres to the matrix as well as to the flame retardant. The fibre mat was prepared by immersion in a mat preparator developed in the Textile Engineering Laboratory at UFRN. The composites (300 x 300 x 3 mm) were prepared by compression molding, and the samples (150 x 25 x 3 mm) for property analysis were cut randomly using a laser cutter. Some of the cut samples were used to measure smoke emission and fire resistance following the UL94 standard. Mechanical tensile and flexural tests were carried out at CTGás-RN and at the Laboratório de Metais e Ensaios Mecânicos, Engenharia de Materiais, UFRN, and SEM studies were carried out at the Núcleo de Estudos em Petróleo e Gás Natural, UFRN. From the observed results, it was noted that there was no marked influence of the fire retardant on the mechanical properties. Also, in the water absorption test, the quantity of water absorbed was lower in the sample with the highest concentration of fire retardant. It was also observed that increasing the proportion of fire retardant increased the burning time, possibly because of the greater compactness conferred on the composite by the fire retardant acting as a filler, even though its purpose was to reduce the flammability of the composite.
Abstract:
In 1998 the first decorticator was developed in the Textile Engineering Laboratory, with financial support from CNPq and BNB, and patented for the purpose of extracting fibres from pineapple leaves. The objective of the present work was to develop a decorticator different from the first one: a semi-automatic decortication system with automatic feeding of the leaves and collection of the extracted fibres. The system is started through a command system that drives two motors, one for the beater cylinder and the other for feeding the leaves and automatically removing the decorticated fibres. This in turn introduces the leaves between a knife and a beater cylinder with twenty blades (the previous machine had only eight). The blades are supported by equidistant flanges on a central transmission shaft, which helps increase the number of beatings of the leaves. In the present system the operator has only to place the leaves on the rotating endless feeding belt and collect the extracted fibres carried out on another endless belt. The pulp resulting from the extraction is collected in a tray through a collector. The feeding of the leaves as well as the extraction of the fibres is controlled automatically by varying the velocity of the cylinders. The semi-automatic decorticator basically consists of a chassis made of iron bars (L profile), 200 cm long, 91 cm high and 68 cm wide, and weighs around 300 kg. It was observed that the increase in the number of blades from eight to twenty in the beater cylinder reduced the turbulence inside the decorticator, which helped improve the removal of the fibres without damage as well as the quality of the fibres. From the studies carried out, 2.8 to 4.5% of fibre can be extracted from each leaf, which gives around 4 to 5 tons of fibres per hectare, more than the cotton yield per hectare. This quantity could no doubt generate jobs, not only in the production of the fibres but also in their application in different areas.
Abstract:
The objective of this research is the fabrication of a composite reinforced with dyed sisal fibre in a polyester matrix for application in fields such as fashion, clothing, interior textiles and fashion accessories. For the fabrication of the composite, the sisal fibres were subjected to the following processes: chemical treatment with sodium hydroxide (NaOH) to remove impurities; bleaching to remove the yellowish color of the natural fibre; and dyeing with direct dyes to confer the colors blue, green and orange. The search for ecologically sound new technologies has become a major concern in recent decades, and studies show that polymer composites reinforced by natural fibres are suitable for a large number of applications, with economic and ecological advantages. The dyed fibres were cut to a length of 30 mm and used to produce webs in a web preparator by immersion, developed in the Textile Chemistry Laboratory of UFRN. The composite sheets, measuring 300 x 300 x 3 mm, were compression molded with unsaturated orthophthalic polyester as matrix, and samples of 150 x 25 x 3 mm were cut with the aid of a laser machine to be subjected to tensile and flexural testing. The tensile and three-point flexural tests were performed in the Laboratory of Metals and Mechanical Tests of Materials Engineering of UFRN, and the tested samples were examined in a scanning electron microscope (SEM) at CTGas-RN. From the analysis of the mechanical test results, it was observed that the composite had good mechanical behavior in both tension and flexure. Furthermore, in the water absorption test the samples showed different percentages among themselves, owing to the variation in density found in the fibre webs. The SEM images showed the flaws from the manufacturing process and the fibre/matrix adhesion. When samples prepared with the dyed fibres were applied to fashion items, the results were positive, and it can be concluded that the main objective of this work was achieved.
Abstract:
The aim of this work was to study the resistance of a series of 11 different compositions of Ti-Zr binary alloys to aggressive environments, i.e., their ability to keep their surface properties and mass when exposed to such environments, as a way of evaluating their performance as biomaterials. The first stage was devoted to the fabrication of tablets from these alloys by the Plasma-Skull casting method, using a Discovery Plasma machine from EDG Equipamentos, Brazil. In a second stage, the chemical composition of each produced tablet was verified. In a third stage, the specimens were submitted to: as-cast microstructure analysis via optical and scanning electron microscopy (OM and SEM); energy-dispersive X-ray (EDS) chemical analysis via SEM; Vickers hardness tests for mechanical evaluation; and corrosion resistance tests in a 0.9% NaCl solution, to simulate exposure to human saliva, monitored by open circuit potential and polarization curves. From the obtained results, it was possible to infer that specimens A1 (94.07 wt% Ti and 5.93 wt% Zr), A4 (77.81 wt% Ti and 22.19 wt% Zr) and A8 (27.83 wt% Ti and 72.17 wt% Zr) presented the best performance regarding corrosion resistance, homogeneity and hardness, which are necessary attributes for biomaterials to be applied in orthopedic and odontological prostheses.
Abstract:
The aim of this study was to comparatively evaluate the mechanical strength of squared and rectangular 2.0 mm system miniplates, comparing them to the standard configuration of two straight miniplates, in stabilizing fractures of the anterior mandible. Ninety synthetic polyurethane mandible replicas were used in the mechanical tests. The samples were divided into six groups covering three different fixation methods. Groups 1, 2 and 3 had complete fractures in the symphysis, characterized by a linear separation between the central incisors, and groups 4, 5 and 6 had complete oblique fractures in the parasymphysis. Groups 1 and 4 represented the standard technique with two straight miniplates parallel to each other. Groups 2 and 5 were stabilized with squared miniplates, and groups 3 and 6 were fixed with the rectangular design. Each group was subjected to a mechanical test at a displacement speed of 10 mm/min on a universal testing machine, receiving a linear vertical load on the region of the left first molar. The maximum load and the load at 5 mm displacement were obtained and statistically analyzed by calculating 95% confidence intervals. The fixation systems using squared (G2) and rectangular (G3) miniplates obtained similar results: no statistically significant differences in maximum load or load at 5 mm displacement were found when compared to the standard method in symphyseal fractures (G1). In the parasymphysis, the fixation method using squared miniplates (G5) showed no significant differences in maximum load or load at 5 mm when compared to the standard configuration (G4), whereas the method using rectangular miniplates (G6) showed statistically significantly inferior results compared to the standard configuration (G4). The mechanical behavior of the fixation methods was thus similar, except when rectangular miniplates were used, and the fixation methods showed statistically significantly better results in symphyseal fractures.
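A minimal sketch of the 95% confidence interval computation used in the statistical analysis, applied to made-up maximum-load values (the numbers below are illustrative, not measurements from the study):

```python
# Sketch: 95% confidence interval for the mean of hypothetical max-load data.
import numpy as np
from scipy import stats

loads = np.array([412.0, 398.5, 430.2, 405.7, 421.3])  # hypothetical loads (N)
mean = loads.mean()
sem = stats.sem(loads)                                  # standard error of the mean
low, high = stats.t.interval(0.95, len(loads) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.1f} N, 95% CI = ({low:.1f}, {high:.1f}) N")
```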
Abstract:
Traditional applications of feature selection in areas such as data mining, machine learning and pattern recognition aim to improve the accuracy and reduce the computational cost of the model. This is done through the removal of redundant, irrelevant or noisy data, finding a representative subset of the data that reduces its dimensionality without loss of performance. With the development of research on ensembles of classifiers, and the verification that this type of model performs better than individual models when the base classifiers are diverse, a new field of application for feature selection research has emerged: finding diverse subsets of features for the construction of the base classifiers of ensemble systems. This work proposes an approach that maximizes the diversity of ensembles by selecting subsets of features using a model that is independent of the learning algorithm and has low computational cost. This is done using bio-inspired metaheuristics with filter-based evaluation criteria.
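As an illustration of the kind of search the abstract describes, a minimal sketch of a toy genetic algorithm (a bio-inspired metaheuristic) that evolves feature-subset bit masks scored by a cheap filter criterion (mutual information), independent of any learning algorithm; all names and settings are illustrative assumptions, not the proposed method.

```python
# Sketch: genetic-algorithm feature-subset search with a filter fitness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
mi = mutual_info_classif(X, y, random_state=0)   # filter relevance per feature
rng = np.random.default_rng(0)

def fitness(mask: np.ndarray) -> float:
    return mi[mask].sum() / max(mask.sum(), 1)   # mean relevance of the subset

pop = rng.random((30, X.shape[1])) < 0.5         # random bit-mask population
for _ in range(50):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]      # keep the fittest masks
    children = parents[rng.integers(0, 10, 30)]  # resample parents
    pop = children ^ (rng.random(children.shape) < 0.05)  # bit-flip mutation
best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```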
Abstract:
Although some individual supervised Machine Learning (ML) techniques, also known as classifiers or classification algorithms, supply solutions that are most of the time considered efficient, experimental results obtained with large pattern sets, and/or sets with an expressive amount of irrelevant or incomplete data, show a decrease in the precision of these techniques. In other words, such techniques cannot recognize patterns efficiently in complex problems. Aiming at better performance and efficiency of these ML techniques, the idea arose of making several ML algorithms work jointly, giving rise to the term Multi-Classifier System (MCS). An MCS has as components different ML algorithms, called base classifiers, and combines the results obtained by these algorithms to reach the final result. For an MCS to perform better than its base classifiers, the results obtained by each base classifier must present a certain diversity, in other words, a difference between the results obtained by each classifier that composes the system; it makes no sense to have an MCS whose base classifiers give identical answers to the same patterns. Although MCSs present better results than individual systems, there is an ongoing search to improve the results obtained by this type of system. Aiming at this improvement, at more consistent results, and at greater diversity among the classifiers of an MCS, methodologies characterized by the use of weights, or confidence values, have recently been researched. These weights can describe the importance a certain classifier had when associating each pattern with a determined class, and they are used, combined with the outputs of the classifiers, during the recognition (use) phase of the MCS. There are different ways of calculating these weights, which can be divided into two categories: static weights and dynamic weights. The first category is characterized by values that do not change during the classification process, unlike the second category, whose values are modified during the classification process. In this work, an analysis is made to verify whether the use of weights, both static and dynamic, can increase the performance of MCSs in comparison with individual systems. Moreover, an analysis is made of the diversity obtained by the MCSs, in order to verify whether there is some relation between the use of weights in MCSs and different levels of diversity.
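A minimal sketch of the static-weight combination idea: each base classifier's vote is scaled by a fixed confidence weight before the votes are tallied. The weights and predictions below are illustrative assumptions, not results from this work.

```python
# Sketch: weighted majority voting with static per-classifier weights.
import numpy as np

def weighted_vote(predictions: np.ndarray, weights: np.ndarray, n_classes: int) -> int:
    """predictions: one class label per base classifier; weights: one per classifier."""
    scores = np.zeros(n_classes)
    for label, w in zip(predictions, weights):
        scores[label] += w                    # accumulate weighted votes per class
    return int(scores.argmax())

preds = np.array([0, 1, 1])                   # three base classifiers' outputs
weights = np.array([0.9, 0.4, 0.3])           # static weights (e.g., validation accuracy)
print(weighted_vote(preds, weights, n_classes=2))   # -> 0: the strong classifier wins
```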
Abstract:
This work presents JFloat, a software implementation of the IEEE-754 standard for binary floating point arithmetic. JFloat was built to provide some features not implemented in Java, specifically support for directed rounding. That feature is important for Java-XSC, a project developed in this Department. Moreover, Java programs should be portable when using floating point operations, mainly because IEEE-754 specifies that programs should behave exactly the same on every configuration. However, it was noted that programs using Java's native floating point types may be machine- and operating-system-dependent. JFloat is a possible solution to that problem.
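To make directed rounding concrete, a minimal sketch in Python (for illustration only; this is neither JFloat's API nor Java code) that emulates addition rounded toward +∞ by comparing the exact rational sum against the default round-to-nearest result:

```python
# Sketch: emulated round-toward-+infinity addition for binary floats.
# Requires Python 3.9+ for math.nextafter; the function name is illustrative.
import math
from fractions import Fraction

def add_round_up(a: float, b: float) -> float:
    """Add two floats, rounding the exact result toward +infinity."""
    exact = Fraction(a) + Fraction(b)      # exact rational sum (floats are exact rationals)
    nearest = a + b                        # hardware round-to-nearest result
    if Fraction(nearest) >= exact:
        return nearest                     # already an upper bound on the exact sum
    return math.nextafter(nearest, math.inf)  # step up to the next representable float
```

Since round-to-nearest lands within one unit in the last place of the exact sum, adjusting by at most one representable float in the target direction yields the directed-rounding result.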
Abstract:
The use of clustering methods for the discovery of cancer subtypes has drawn a great deal of attention in the scientific community. While bioinformaticians have proposed new clustering methods that take advantage of characteristics of the gene expression data, the medical community has a preference for using classic clustering methods. There have been no studies thus far performing a large-scale evaluation of different clustering methods in this context. This work presents the first large-scale analysis of seven different clustering methods and four proximity measures for the analysis of 35 cancer gene expression data sets. Results reveal that the finite mixture of Gaussians, followed closely by k-means, exhibited the best performance in terms of recovering the true structure of the data sets. These methods also exhibited, on average, the smallest difference between the actual number of classes in the data sets and the best number of clusters as indicated by our validation criteria. Furthermore, hierarchical methods, which have been widely used by the medical community, exhibited a poorer recovery performance than that of the other methods evaluated. Moreover, as a stable basis for the assessment and comparison of different clustering methods for cancer gene expression data, this study provides a common group of data sets (benchmark data sets) to be shared among researchers and used for comparisons with new methods
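A minimal sketch of the kind of comparison reported, using scikit-learn on synthetic data rather than the benchmark gene expression sets: cluster with a finite mixture of Gaussians and with k-means, then score recovery of the true structure with the adjusted Rand index.

```python
# Sketch: mixture-of-Gaussians vs. k-means recovery of known structure.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

X, y_true = make_blobs(n_samples=300, centers=4, random_state=0)

km_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
gm_labels = GaussianMixture(n_components=4, random_state=0).fit(X).predict(X)

print("k-means ARI:", adjusted_rand_score(y_true, km_labels))
print("mixture ARI:", adjusted_rand_score(y_true, gm_labels))
```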
Abstract:
The objective of research in artificial intelligence is to enable the computer to execute functions that are performed by humans using knowledge and reasoning. This work was developed in the area of machine learning, the branch of artificial intelligence concerned with the design and development of algorithms and techniques that allow computational learning. The objective of this work is to analyze a feature selection method for ensemble systems. The proposed method falls within the filter approach to feature selection: it uses variance and Spearman correlation to rank the features, and reward and punishment strategies to measure each feature's importance for the identification of the classes. For each ensemble, several different configurations were used, varying from non-hybrid (homogeneous) to hybrid (heterogeneous) ensemble structures. They were submitted to five combination methods (voting, sum, weighted sum, multilayer perceptron and naïve Bayes), which were applied to six distinct databases (real and artificial). The classifiers applied during the experiments were k-nearest neighbors, multilayer perceptron, naïve Bayes and decision tree. Finally, the performance of the ensembles was analyzed comparatively, using no feature selection method, using a filter-approach (original) feature selection method, and using the proposed method. For this comparison, a statistical test was applied, which demonstrated that there was a significant improvement in the precision of the ensembles.
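A minimal sketch of the filter-style ranking stage: score each feature by its variance and by the absolute Spearman correlation with the class label, then combine the two rankings. The rank-averaging rule used to merge them is an assumption for illustration, not the thesis's reward-and-punishment strategy.

```python
# Sketch: ranking features by variance and Spearman correlation with the class.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

variance = X.var(axis=0)
spearman = np.array([abs(spearmanr(X[:, j], y)[0]) for j in range(X.shape[1])])

# Rank features by each criterion (0 = best) and average the two ranks.
rank_var = np.argsort(np.argsort(-variance))
rank_spe = np.argsort(np.argsort(-spearman))
order = np.argsort((rank_var + rank_spe) / 2)
print("features from best to worst:", order)
```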
Abstract:
Formal methods and software testing are tools to obtain and control software quality. When used together, they provide mechanisms for software specification, verification and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the verification and validation process of a system. Model-Based Testing techniques allow tests to be generated from other software artifacts such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study that applied the method to an example of a B specification from industry. Based on this case study, we obtained insights to improve the method. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process, and to use new coverage criteria. Besides, we implemented a tool to automate the method and submitted it to more complex case studies.
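As an illustration of the boundary-value side of such a method, a minimal sketch that, for a hypothetical precondition of the form low <= x <= high on a B operation, derives positive and negative test inputs; the particular choice of values is an assumption for illustration, not the method's actual rules.

```python
# Sketch: boundary value analysis for an integer-range precondition.
def boundary_values(low: int, high: int) -> dict:
    """Positive tests hit the partition boundaries; negative tests step outside."""
    return {
        "positive": [low, low + 1, (low + high) // 2, high - 1, high],
        "negative": [low - 1, high + 1],
    }

# e.g. an operation with precondition 0 <= x <= 100:
print(boundary_values(0, 100))
```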
Abstract:
In everyday life we are constantly performing two frequent and important actions: classifying (sorting by classes) and making decisions. When we encounter problems with a relatively high degree of complexity, we tend to seek other opinions, usually from people who have some knowledge of, or, as far as possible, are experts in, the problem domain in question, in order to help us in the decision-making process. In both the classification process and the decision-making process, we are guided by the characteristics involved in the specific problem, and the characterization of a set of objects is part of decision making in general. In Machine Learning this classification happens through a learning algorithm, and the characterization is applied to databases. Classification algorithms can be employed individually or in machine committees, and choosing the best methods to use in the construction of a committee is a very arduous task. In this work, meta-learning techniques are investigated for selecting the best configuration parameters of homogeneous committees for various classification problems. These parameters are: the base classifier, the architecture, and the size of this architecture. We investigated nine types of inducers as candidate base classifiers, two methods of architecture generation, and nine architecture sizes. Dimensionality reduction techniques were applied to the meta-bases seeking improvement. Five classification methods are investigated as meta-learners in the process of choosing the best parameters of a homogeneous committee.
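A minimal sketch of the meta-learning idea, under toy assumptions: each data set is reduced to a few meta-features, and a meta-learner (here a decision tree, one of many possible choices) is trained to recommend a committee configuration for an unseen problem. The meta-features and labels below are illustrative, not the thesis's meta-bases.

```python
# Sketch: a meta-learner mapping data set meta-features to a best configuration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# One row of meta-features per data set: (n_samples, n_features, n_classes).
meta_X = np.array([[100, 5, 2], [5000, 40, 3], [800, 12, 2], [300, 60, 5]])
# Best configuration observed for each data set (e.g., index of base classifier).
meta_y = np.array([0, 2, 0, 1])

meta_learner = DecisionTreeClassifier(random_state=0).fit(meta_X, meta_y)
new_dataset = np.array([[900, 10, 2]])   # meta-features of an unseen problem
print("recommended configuration:", meta_learner.predict(new_dataset)[0])
```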