999 results for Conjuntos densificables


Relevance:

10.00%

Publisher:

Abstract:

Aluminum alloys are widely used in aeronautical parts owing to their good mechanical properties and low density. These parts must be joined to form larger assemblies. A structural joint is defined as a segment of structure that provides a means of transferring load from one structural element to another. Most aeronautical joints are mechanically fastened with multiple fasteners (bolts or rivets). Such joints exhibit high stress concentration around the fastener, because load transfer between the joint elements takes place over only a fraction of the available area. Adhesively bonded joints, by contrast, distribute the applied loads over the entire bonded area, reducing stress-concentration points. Joints are the most common source of structural failure in aircraft, and almost all repairs involve joints; it is therefore important to understand every aspect of joint design and analysis. The objective of this work is to compare, under static loading, structural joints of Al 2024-T3 alloy in three configurations: mechanically riveted joints, bonded joints, and a hybrid riveted-and-bonded configuration. The NASM 1312-4 standard was used to manufacture the specimens. In addition, fatigue tests were conducted under constant load amplitude and a stress ratio of 0.1 to evaluate the efficiency of the structural elements over their service life. The results showed that the hybrid configuration has higher static strength and a longer fatigue life than the bonded configuration.

Relevance:

10.00%

Publisher:

Abstract:

This work investigates the use of Process FMEA (Failure Mode and Effect Analysis), exposing irregularities in its application. The AHP (Analytic Hierarchy Process) method and Fuzzy Sets are applied to the study of current FMEA practices: AHP is used to prioritize the irregularities according to the severity of their occurrence, and Fuzzy Sets are used to assess the performance of FMEA use in several companies in the automotive sector. As a result, eight of the eleven FMEA forms examined were accepted and three were rejected.
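The AHP prioritization step mentioned above derives weights from a pairwise comparison matrix via its principal eigenvector. A minimal sketch, assuming an illustrative 3x3 comparison matrix on Saaty's 1-9 scale (values invented, not from the paper), using power iteration:

```python
# Hypothetical sketch of AHP priority weights: approximate the principal
# eigenvector of a pairwise comparison matrix by power iteration.

def ahp_weights(matrix, iterations=100):
    """Return normalized priority weights for a positive pairwise matrix."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        # Multiply the matrix by the current weight vector.
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_new)
        w = [x / total for x in w_new]  # normalize so the weights sum to 1
    return w

# Three irregularities compared pairwise (illustrative judgments).
pairwise = [
    [1.0,     3.0, 5.0],
    [1 / 3.0, 1.0, 2.0],
    [1 / 5.0, 0.5, 1.0],
]
weights = ahp_weights(pairwise)
```

The resulting weights rank the irregularities by severity; the first row, which dominates every comparison, receives the largest weight.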

Relevance:

10.00%

Publisher:

Abstract:

Criticism of the undergraduate training process of psychologists in Brazil has raised debates known as the "dilemmas of training". In recent years the classic training model, based on the Minimum Curriculum, has undergone a series of changes following the National Curriculum Guidelines (DCN), modifying the context of the courses. Thus, this paper aimed to investigate how, in a post-DCN context, undergraduate courses in Psychology in Brazil have been dealing with the dilemmas of training. To do so, we analyzed the Course Pedagogical Projects (CPPs) of Psychology courses in the country: forty CPPs, selected by region, academic organization, and legal status, were collected. The data were grouped into three blocks of discussion: theoretical, philosophical, and pedagogical foundations; curriculum emphases and disciplines; and professional practices. The results were grouped into four sets of dilemmas: a) ethical and political; b) theoretical-epistemological; c) the psychologist's professional practice; and d) academic-scientific. Courses claim a socially committed, generalist, pluralistic training, focusing on research, the non-dissociation of teaching-research-extension, and interdisciplinary training, and defending a vision of man and of a critical, reflective, and non-individualistic psychology. The curriculum nevertheless keeps the almost exclusive teaching of the classical areas of the traditional fields of applied Psychology. Training is content-based. The clinic is hegemonic, both in theory and in the fields of application. The historical debate is scarce, and themes linked to the Brazilian reality are missing, despite social policies being present in the curricula. Currently, the DCNs have a much greater impact on courses due to the influence of the control agencies, a fruit of current educational policy, and the result is felt in the homogenization of curriculum discourses.

Relevance:

10.00%

Publisher:

Abstract:

This work combines the potential of near-infrared (NIR) spectroscopy with chemometrics in order to determine the content of diclofenac tablets without destroying the sample; ultraviolet (UV) spectroscopy, one of the official methods, was used as the reference method. In the construction of the multivariate calibration models, several types of pre-processing of the NIR spectral data were studied, such as scatter correction and first derivative. The regression method used to build the calibration models was PLS (partial least squares), applied to the NIR spectra of a set of 90 tablets divided into two subsets: 54 samples were used for calibration and 36 for prediction, since the calibration procedure adopted was full cross-validation, which eliminates the need for a separate validation set. The models were evaluated by observing the correlation coefficient R² and the root mean square errors of calibration (RMSEC) and prediction (RMSEP). The contents predicted for the remaining 36 samples were consistent with the values obtained by UV spectroscopy.
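The RMSEC and RMSEP figures of merit above both reduce to a root-mean-square error between predicted and reference contents. A minimal sketch (the tablet contents below are invented, not the thesis data):

```python
# Illustrative RMSE computation of the kind behind RMSEC/RMSEP.
import math

def rmse(predicted, reference):
    """Root mean square error between predicted and reference values."""
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

# Hypothetical diclofenac contents (mg): NIR/PLS predictions vs. UV reference.
nir_pred = [49.8, 50.3, 50.1, 49.6]
uv_ref = [50.0, 50.0, 50.0, 50.0]
rmsep = rmse(nir_pred, uv_ref)
```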

Relevance:

10.00%

Publisher:

Abstract:

This text analyzes Archival Science and its relationship with the mediation of information. It argues that the theoretical basis of Archival Science provides fundamental conditions for developing, in practice, methodological operations that result in the proper treatment of documents. Thus, emphasizing archival praxis, the use of instruments and techniques is understood as a mediation of systems, in which the stages of archival methodology serve the primary goal of organizing document masses, enabling their treatment so that the information in the respective document sets can be retrieved and made available. The technical work of information professionals, specifically of archivists, already constitutes a mediation in this context, but one that deals above all with proto-information. It is therefore necessary to understand how this proto-information becomes information. In this sense, the mediation of information presents itself as the object through which such an understanding can be reached, taking as its parameter the appropriation of information by the archive's user-researchers and the production and/or alteration of knowledge resulting from their relationship with that environment, so as to guarantee, in fact, a mediation of archival information. The text advocates that this innovative perspective on the mediation of information in archives characterizes an approach that still calls for further reflection in the field.

Relevance:

10.00%

Publisher:

Abstract:

The Car Rental Salesman Problem (CaRS) is a variant of the classical Traveling Salesman Problem, not previously described in the literature, in which a tour of visits can be decomposed into contiguous paths that may be driven in different rental cars. The aim is to determine the Hamiltonian cycle with minimum final cost, considering the cost of the route plus a penalty paid for each exchange of vehicles on the route; this penalty is due to returning the dropped car to its base. This paper introduces the general problem, illustrates it with examples, and presents some of its associated variants. An overview of the complexity of this combinatorial problem is also outlined, to justify its classification in the NP-hard class. A database of instances for the problem is presented, together with the methodology used to build it. The problem is also the subject of an experimental algorithmic study of six metaheuristic solutions, representing adaptations of the state of the art in heuristic programming. New neighborhoods, construction procedures, search operators, evolutionary agents, and multi-pheromone cooperation schemes are created for this problem. Furthermore, computational experiments and comparative performance tests are conducted on a sample of 60 instances of the created database, aiming to offer an efficient algorithm for this problem. The results show that the transgenetic algorithm achieved the best performance on all instances of the dataset.
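The CaRS objective described above (route cost plus a return penalty per dropped car) can be sketched as follows; the penalty model assumed here (every car except the last is returned to its base) and all values are illustrative, not taken from the paper:

```python
# Hedged sketch of the CaRS tour cost: travel cost of the contiguous paths
# plus a return penalty for each car dropped before the end of the tour.

def cars_tour_cost(path_costs, return_penalties):
    """Total cost of a tour split into contiguous paths driven in different cars.

    path_costs[i]       -- travel cost of the i-th contiguous path
    return_penalties[i] -- penalty for returning the i-th car to its base
    """
    travel = sum(path_costs)
    # One vehicle exchange occurs between consecutive paths; each dropped
    # car (all but the last, under this assumed model) incurs its penalty.
    exchanges = sum(return_penalties[:-1]) if len(path_costs) > 1 else 0
    return travel + exchanges

cost = cars_tour_cost(path_costs=[10, 7, 12], return_penalties=[3, 2, 4])
```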

Relevance:

10.00%

Publisher:

Abstract:

The Quadratic Minimum Spanning Tree Problem (QMST) is a version of the Minimum Spanning Tree Problem in which, besides the traditional linear costs, there is a quadratic cost structure. This quadratic structure models interaction effects between pairs of edges. Linear and quadratic costs are added up to constitute the total cost of the spanning tree, which must be minimized. When these interactions are restricted to adjacent edges, the problem is named the Adjacent Only Quadratic Minimum Spanning Tree Problem (AQMST). AQMST and QMST are NP-hard problems that model several problems in the design of transport and distribution networks; in general, AQMST is the more suitable model for real problems. Although in the literature linear and quadratic costs are simply added, in real applications they may be conflicting, and in that case it may be interesting to consider these costs separately. In this sense, Multiobjective Optimization provides a more realistic model for QMST and AQMST. A review of the state of the art found no papers addressing these problems from a biobjective point of view. Thus, the objective of this thesis is the development of exact and heuristic algorithms for the Biobjective Adjacent Only Quadratic Spanning Tree Problem (bi-AQST). As theoretical foundation, other NP-hard problems directly related to bi-AQST are discussed: QMST and AQMST. Backtracking and branch-and-bound exact algorithms are proposed for the target problem of this investigation. The heuristic algorithms developed are: Pareto Local Search, Tabu Search with ejection chains, a Transgenetic Algorithm, NSGA-II, and a hybridization of the last two, called NSTA. The proposed algorithms are compared to one another through performance analyses based on computational experiments with instances adapted from the QMST literature. For the exact algorithms, the analysis considers, in particular, execution time.
For the heuristic algorithms, besides execution time, the quality of the generated approximation sets is evaluated; quality indicators are used to assess this information, and appropriate statistical tools are used to measure the performance of the exact and heuristic algorithms. Considering the set of instances adopted and the criteria of execution time and approximation-set quality, the experiments showed that the Tabu Search with ejection chains obtained the best results and the transgenetic algorithm ranked second. The PLS algorithm obtained good-quality solutions, but at a very high computational cost compared to the other (meta)heuristics, placing third. The NSTA and NSGA-II algorithms took the last positions.
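The QMST objective (linear edge costs plus pairwise interaction costs over the chosen tree) can be sketched as below; the tiny instance is invented for illustration:

```python
# Minimal sketch of the QMST cost function: sum the linear costs of the
# tree's edges, then add the quadratic interaction cost of each edge pair.

def qmst_cost(tree_edges, linear_cost, quad_cost):
    """Total (linear + quadratic) cost of a spanning tree given as an edge list."""
    edges = list(tree_edges)
    total = sum(linear_cost[e] for e in edges)
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            # Interaction may be stored under either edge ordering.
            total += quad_cost.get((edges[i], edges[j]), 0)
            total += quad_cost.get((edges[j], edges[i]), 0)
    return total

# Illustrative 3-node tree with edges "a" and "b" (adjacent, so their
# interaction also counts in the AQMST restriction).
linear = {"a": 4, "b": 6}
quad = {("a", "b"): 2}
cost = qmst_cost(["a", "b"], linear, quad)
```

In the biobjective variant studied in the thesis, the two sums would be kept as separate objectives instead of being added.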

Relevance:

10.00%

Publisher:

Abstract:

Although some individual supervised Machine Learning (ML) techniques, also known as classifiers or classification algorithms, supply solutions that are usually considered efficient, experimental results obtained with large pattern sets, or with sets containing an expressive amount of irrelevant or incomplete data, show a decrease in the precision of these techniques. In other words, such techniques cannot recognize patterns efficiently in complex problems. With the intention of improving the performance and efficiency of these ML techniques, the idea arose of making several ML algorithms work jointly, giving origin to the term Multi-Classifier System (MCS). An MCS contains different ML algorithms as components, called base classifiers, and combines the results obtained by these algorithms to reach its final result. For an MCS to perform better than its base classifiers, the results obtained by the base classifiers must present a certain diversity, that is, a difference between the results obtained by each classifier that composes the system; there is no point in building an MCS whose base classifiers give identical answers to the same patterns. Although MCSs present better results than individual systems, there is a continuing search to improve the results obtained by this type of system. Aiming at this improvement, at greater consistency in the results, and at larger diversity among the classifiers of an MCS, methodologies have recently been investigated that are characterized by the use of weights, or confidence values. These weights can describe the importance that a certain classifier had when associating each pattern with a determined class, and they are used, together with the outputs of the classifiers, during the recognition (use) phase of the MCS.
These weights can be calculated in different ways and fall into two categories: static weights and dynamic weights. The first category is characterized by values that do not change during the classification process; in the second category, the values are modified during classification. In this work, an analysis is carried out to verify whether the use of weights, both static and dynamic, can increase the performance of MCSs in comparison with individual systems. Moreover, the diversity obtained by the MCSs is analyzed, in order to verify whether there is some relation between the use of weights in MCSs and different levels of diversity.
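A weighted combination of base-classifier outputs, of the static kind described above, can be sketched as follows (labels and weights are invented for illustration):

```python
# Illustrative sketch of a weighted-vote combiner for a multi-classifier
# system: each base classifier's predicted label is scaled by a static
# confidence weight, and the label with the highest total score wins.

def weighted_vote(predictions, weights):
    """Combine class labels from base classifiers using static weights."""
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Three base classifiers: the two weaker ones agree on "B", but the
# high-confidence classifier's vote for "A" outweighs them.
label = weighted_vote(predictions=["A", "B", "B"], weights=[0.7, 0.2, 0.2])
```

With dynamic weights, the `weights` vector would be recomputed per input pattern instead of being fixed in advance.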

Relevance:

10.00%

Publisher:

Abstract:

This work performs an algorithmic study of the optimization of a conformal radiotherapy treatment plan. Initially, we give an overview of cancer, radiotherapy, and the physics of the interaction of ionizing radiation with matter. A proposal for the optimization of a radiotherapy treatment plan is then developed in a systematic way. We present the multicriteria problem paradigm and the concepts of Pareto optimum and Pareto dominance, and propose a generic optimization model for radiotherapy treatment. We construct the input of the model, estimate the dose delivered by the radiation using the dose matrix, and present the objective function of the model. The complexity of optimization models in radiotherapy treatment is typically NP, which justifies the use of heuristic methods. We propose three distinct methods: MOGA, MOSA, and MOTS. The design of these three metaheuristic procedures is presented; for each one we give a brief motivation, the algorithm itself, and the method for tuning its parameters. The three methods are applied to a concrete case and their performances are compared. Finally, for each method we analyze the quality of the Pareto sets, some of the solutions, and the respective Pareto curves.
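The Pareto dominance concept used above to compare candidate plans can be sketched in a few lines (minimization is assumed for both objectives; the objective vectors are invented):

```python
# Sketch of Pareto dominance and non-dominated filtering for
# minimization objectives, as used when comparing treatment plans.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical plans scored on two conflicting criteria to minimize.
plans = [(1, 5), (2, 2), (3, 1), (4, 4)]
front = pareto_front(plans)
```

Here (4, 4) is dominated by (2, 2) and drops out; the remaining three plans form the Pareto set.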

Relevance:

10.00%

Publisher:

Abstract:

This work presents an extension of the haRVey prover intended for the verification of proof obligations generated according to the B method. The B method of software development covers the specification, design, and implementation phases of the software life cycle. In the verification context, the proof tools Prioni, Z/EVES, and Atelier-B/Click'n'Prove stand out; they implement formalisms that support satisfiability checking of formulas of axiomatic set theory, and can therefore be applied to the B method. SMT checking consists of checking the satisfiability of quantifier-free first-order formulas with respect to a decidable theory. The SMT approach implemented by the automatic theorem prover haRVey is presented; it adopts a theory of arrays, which cannot express all the constructs required by set-based specifications. To extend SMT checking to set theories, the Zermelo-Fraenkel (ZFC) and von Neumann-Bernays-Gödel (NBG) set theories stand out. Since the SMT approach implemented in haRVey requires a finitely presented theory and can be extended to undecidable theories, NBG is a suitable option for extending haRVey's deductive capability to set theory. Thus, by mapping the set operators provided by the B language to classes of NBG theory, an alternative SMT-checking approach applicable to the B method is obtained.

Relevance:

10.00%

Publisher:

Abstract:

The use of clustering methods for the discovery of cancer subtypes has drawn a great deal of attention in the scientific community. While bioinformaticians have proposed new clustering methods that take advantage of characteristics of gene expression data, the medical community has a preference for classic clustering methods, and no study thus far has performed a large-scale evaluation of different clustering methods in this context. This work presents the first large-scale analysis of seven clustering methods and four proximity measures for the analysis of 35 cancer gene expression data sets. The results reveal that the finite mixture of Gaussians, followed closely by k-means, exhibited the best performance in terms of recovering the true structure of the data sets. These methods also exhibited, on average, the smallest difference between the actual number of classes in the data sets and the best number of clusters as indicated by our validation criteria. Furthermore, hierarchical methods, which have been widely used by the medical community, exhibited poorer recovery performance than the other methods evaluated. Finally, as a stable basis for the assessment and comparison of different clustering methods for cancer gene expression data, this study provides a common group of data sets (benchmark data sets) to be shared among researchers and used for comparisons with new methods.
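For readers unfamiliar with it, k-means (one of the classic methods the study evaluates) can be sketched in pure Python as below; a real analysis would of course use a vetted library implementation on actual expression profiles, and the four points here are invented:

```python
# Minimal k-means sketch: alternate between assigning points to the
# nearest center and recomputing each center as its cluster's mean.
import random

def kmeans(points, k, iterations=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assign each point to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[idx].append(p)
        # Recompute centers as cluster means (skip empty clusters).
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return centers, clusters

data = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centers, clusters = kmeans(data, k=2)
```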

Relevance:

10.00%

Publisher:

Abstract:

Interval arithmetic, well known as Moore arithmetic, does not possess the same properties as the real numbers, and for this reason it faces a problem of an operational nature when we want to solve interval equations as extensions of real equations using the usual equality and interval arithmetic: intervals have no additive inverse, and the distributivity of multiplication over addition does not hold for arbitrary triples of intervals. The lack of these properties prevents the use of equational logic, both for solving an interval equation and for representing a real equation, as well as for the algebraic verification of properties of a computational system whose data are real numbers represented by intervals. However, with the notions of information order and approximation on intervals, introduced by Acióly [6] in 1991, the idea appears that an interval equation can satisfactorily represent a real equation, since the terms of the interval equation carry information about the solution of the real equation. In 1999, Santiago proposed the notion of simple equality and, later, of local equality for intervals [8], [33]. Based on that idea, this dissertation extends Santiago's local groups to local algebras, following the idea of Σ-algebras according to Hennessy [31] (1988) and Santiago [7] (1995). One of the contributions of this dissertation is Theorem 5.1.3.2, which guarantees that, when a local Σ-equation t ≡ t′ is deduced in the proposed system SDedLoc(E), the interpretations of t and t′ will be locally the same in any local Σ-algebra that satisfies the fixed set of local equations E, whenever t and t′ have meaning in A. This ensures a kind of soundness between the local equational logic and the local algebras.
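The two algebraic failures mentioned above are easy to exhibit concretely. A small sketch of Moore interval arithmetic (illustrative code, not from the dissertation) showing that x + (-x) is not [0, 0] and that distributivity can fail:

```python
# Sketch of Moore interval arithmetic over closed intervals [lo, hi].

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __neg__(self):
        return Interval(-self.hi, -self.lo)

    def __mul__(self, other):
        # The product interval spans the extremes of the endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __eq__(self, other):
        return (self.lo, self.hi) == (other.lo, other.hi)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1, 2)
no_inverse = x + (-x)  # [-1, 1], not the degenerate interval [0, 0]

a, b, c = Interval(1, 2), Interval(1, 1), Interval(-1, -1)
lhs = a * (b + c)      # a * [0, 0] = [0, 0]
rhs = a * b + a * c    # [1, 2] + [-2, -1] = [-1, 1]
```

Since lhs and rhs differ, a * (b + c) = a * b + a * c does not hold for this triple, which is exactly the obstacle to equational reasoning the text describes.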

Relevance:

10.00%

Publisher:

Abstract:

Despite the emergence of other forms of artificial lift, sucker-rod pumping systems remain hegemonic because of their operational flexibility and lower investment cost compared with other lifting techniques. A successful rod-pumping design necessarily involves delivering the estimated flow rate while controlling the wear of the pumping equipment used in the mounted configuration. Balancing these requirements is particularly challenging, especially for the many designers who still lack the experience needed to produce good pumping designs in time. Even with the various computer applications on the market intended to ease this task, they must face a grueling process of trial and error until they find the most appropriate combination of equipment for installation in the well. This thesis proposes an expert system for the design of sucker-rod pumping systems. Its mission is to guide a petroleum engineer in selecting a set of equipment appropriate to the context given by the characteristics of the oil to be lifted to the surface. Features such as the level of gas separation, the presence of corrosive elements, and the possibility of sand production and wax deposition are taken into account when selecting the pumping unit, the sucker-rod string, and the subsurface pump, as well as their operating mode. The system approximates the inference process to human reasoning, which leads to results closer to those obtained by a specialist; to this end, its production rules are based on the theory of fuzzy sets, able to model the vague concepts typically present in human reasoning. The operating parameters of the pumping system are calculated by the API RP 11L method. Based on the input information, the system returns to the user a set of pumping configurations that meet a given design flow rate without subjecting the selected equipment to efforts beyond what it can bear.
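The fuzzy-set machinery behind such production rules typically starts from membership functions over vague quantities. A minimal sketch of a triangular membership function; the "high gas interference" set and its parameters are hypothetical, not taken from the thesis:

```python
# Illustrative triangular fuzzy membership function: zero outside [a, c],
# rising linearly to 1 at the peak b, then falling linearly back to zero.

def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy set "high gas interference" over a gas fraction in [0, 1].
degree = triangular(0.55, a=0.3, b=0.6, c=0.9)
```

A production rule would then combine such degrees (e.g. with min/max operators) to grade how suitable each candidate pump configuration is.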

Relevance:

10.00%

Publisher:

Abstract:

Classifier ensembles are systems composed of a set of individual classifiers and a combination module, which is responsible for providing the final output of the system. In the design of these systems, diversity is considered one of the main aspects to be taken into account, since there is no gain in combining identical classification methods; the ideal situation is a set of individual classifiers with uncorrelated errors. In other words, the individual classifiers should be diverse among themselves. One way of increasing diversity is to provide different datasets (patterns and/or attributes) to the individual classifiers: diversity is increased because the individual classifiers perform the same task (classification of the same input patterns) but are built using different subsets of patterns and/or attributes. The majority of papers using feature selection for ensembles address homogeneous ensemble structures, i.e., ensembles composed of only one type of classifier. In this investigation, two genetic algorithm approaches (single- and multi-objective) are used to guide the distribution of the features among the classifiers in the context of homogeneous and heterogeneous ensembles. The experiments are divided into two phases that use a filter approach to feature selection guided by the genetic algorithm.
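The simplest baseline for distributing attributes among base classifiers, before any genetic-algorithm guidance, is random subspacing. A sketch under that assumption (this is not the thesis' GA-based method, just the naive starting point it improves on):

```python
# Illustrative random-subspace sketch: each base classifier is assigned
# its own random subset of the available attributes, which increases
# diversity because the classifiers see different views of the data.
import random

def feature_subsets(n_features, n_classifiers, subset_size, seed=0):
    """Draw one attribute-index subset per base classifier."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), subset_size))
            for _ in range(n_classifiers)]

# Hypothetical setup: 10 attributes shared among 3 base classifiers.
subsets = feature_subsets(n_features=10, n_classifiers=3, subset_size=4)
```

A GA-guided approach, as in the investigation above, instead searches over such assignments to optimize accuracy (single-objective) or accuracy and diversity jointly (multi-objective).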

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this paper is to identify how real estate companies from the city of Natal measure their organizational performance. Traditionally these companies measure performance using financial measures; however, technological improvement, the internationalization of the economy, and changes in consumer behavior all demand better products and services and other measurement models. Those changes motivate organizations to continually improve the quality of their products and services, and these companies therefore need to relate their financial results to their global performance. It thus becomes necessary to have organizational performance models that link financial and non-financial measures to the companies' strategies. The research also tries to identify which performance indicators these companies use, and tests a model that asks: a) whether there is any relationship between managers' characteristics and the characteristics of the performance measurement system; b) whether there is any relationship between the company's characteristics and the characteristics of the system used to evaluate its organizational performance; and c) whether there is a relationship between the characteristics of the measurement system and the company's performance. The information on which the study is based was obtained through an empirical survey, with questionnaires answered by 66 (sixty-six) companies from the city of Natal, capital of the state of Rio Grande do Norte. The results show that none of the companies investigated uses any of the performance measurement models proposed in the modern literature; however, they use, in an isolated way, some of the measures from those models, including measures adopted in the Balanced Scorecard, as well as benchmarking, comparing their performance with that of their competitors.
The research also reveals that bigger companies, companies with more experienced managers, and companies with better performance have better performance measurement systems.