972 results for Evolutionary structural optimization
Abstract:
The scheduling function plays an important role in production systems. Scheduling systems aim to generate a schedule that efficiently manages a set of tasks that must be executed in the same time period by the same resources. However, dynamic adaptation and optimization are a critical need in scheduling systems, since production organizations are dynamic in nature. In these organizations, disturbances in working conditions and requirements occur regularly and unexpectedly. Examples of such disturbances include the arrival of a new task, the cancellation of a task, and changes in due dates. These dynamic events must be taken into account, since they can affect the existing schedule and render it inefficient. Production environments therefore require an immediate response to these events, using a real-time rescheduling method to minimize their effect on the production system. Scheduling systems must thus be able to adapt the schedule the organization is following to unexpected events automatically, intelligently, and in real time. This dissertation addresses the problem of incorporating new tasks into an existing schedule. To this end, an optimization approach is proposed (a Constructive Selection Hyper-heuristic for Dynamic Scheduling) to deal with dynamic events that may occur in a production environment, in order to keep the schedule as robust as possible. The approach is inspired by evolutionary computation and hyper-heuristics. From the computational study carried out, it was possible to conclude that the use of the constructive selection hyper-heuristic can be advantageous in solving dynamic adaptation optimization problems.
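As an illustration of the selection hyper-heuristic idea described above, the sketch below shows a minimal dynamic-rescheduling step in Python. The dispatching rules (SPT, EDD), the single-machine model, and the tardiness criterion are illustrative assumptions, not the dissertation's actual implementation.

```python
# Hypothetical sketch of a selection hyper-heuristic for dynamic
# rescheduling: when a new task arrives, each low-level heuristic
# proposes an updated plan and the best one is kept. The dispatching
# rules and the tardiness criterion are illustrative assumptions.

def insert_spt(schedule, job):
    """Shortest-processing-time rule: order jobs by duration."""
    return sorted(schedule + [job], key=lambda j: j[0])

def insert_edd(schedule, job):
    """Earliest-due-date rule: order jobs by due date."""
    return sorted(schedule + [job], key=lambda j: j[1])

def total_tardiness(schedule):
    """Sum of lateness past each due date on a single machine."""
    t, total = 0, 0
    for duration, due in schedule:
        t += duration
        total += max(0, t - due)
    return total

def reschedule(schedule, job, heuristics=(insert_spt, insert_edd)):
    """Constructive selection: apply every heuristic and keep the
    plan with the lowest total tardiness."""
    return min((h(schedule, job) for h in heuristics), key=total_tardiness)
```

Jobs are (processing_time, due_date) pairs; on each dynamic event the hyper-heuristic selects whichever low-level rule best repairs the current plan.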
Abstract:
For the design of any structure (buildings, bridges, vehicles, machines, etc.) it is necessary to know the loading conditions, the geometry, and the behaviour of all its parts, as well as to comply with the standards in force in the countries where the structure will be used. The first part of any project in this area is the structural analysis phase, in which all the interactions and effects of loads on the physical structures and their components are computed in order to verify the structure's fitness for its intended use. One usually starts from a structure with simplified geometry, setting aside physically irrelevant elements (fasteners, coatings, etc.) so as to simplify the calculation of complex structures and, depending on the results of the structural analysis, improve the structure if necessary. Finite element analysis is the main tool in this first phase of the design, and nowadays, given market demands, computational support is indispensable to speed this phase up. A wide range of programs exists for this purpose, covering tasks such as structural drafting, static load analysis, dynamic and vibration analysis, and real-time visualization of the physical behaviour (deformations), all of which enable the optimization of the structure under analysis. However, these programs exhibit a certain complexity in the input of parameters, often leading to wrong results. It is therefore essential for the designer to have a reliable, easy-to-use tool for structural design and optimization. On this basis this thesis project was born: a program with a graphical interface was developed in the Matlab® environment for finite element analysis of structures with bar (truss) and beam elements, in both 2D and 3D.
The program allows the structure to be defined quickly and clearly by means of coordinates, the mechanical properties of the elements, the boundary conditions, and the loads to be applied. As results, it returns to the user the reactions, deformations, and stress distribution in the elements, both in tabular form and as a graphical representation over the structure under analysis. Data can also be imported, and results exported, as XLS and XLSX files to ease information management. Several tests and analyses of structures were carried out to validate the program's results and its integrity. All results were satisfactory and converge to the results of other programs, to results published in books, and to hand calculations made by the author.
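The bar (truss) element analysis such a program performs can be illustrated with a minimal direct-stiffness sketch. This is not the thesis tool itself (which is a Matlab® GUI); the material data and load below are hypothetical.

```python
import numpy as np

# Minimal direct-stiffness sketch for a 2D bar (truss) element, one of
# the element types the program supports. Not the thesis program; the
# material data and load are hypothetical.

def bar_stiffness(E, A, xi, yi, xj, yj):
    """4x4 global stiffness matrix of a 2D axial bar element."""
    L = np.hypot(xj - xi, yj - yi)
    c, s = (xj - xi) / L, (yj - yi) / L
    return (E * A / L) * np.array([
        [ c * c,  c * s, -c * c, -c * s],
        [ c * s,  s * s, -c * s, -s * s],
        [-c * c, -c * s,  c * c,  c * s],
        [-c * s, -s * s,  c * s,  s * s],
    ])

# One horizontal bar, node 0 fully fixed, axial force F at node 1:
E, A, L, F = 210e9, 1e-4, 2.0, 1000.0   # steel, 1 cm^2 cross-section
K = bar_stiffness(E, A, 0.0, 0.0, L, 0.0)
u = F / K[2, 2]   # free horizontal DOF of node 1; reduces to F*L/(E*A)
```

For a full mesh, the element matrices are assembled into a global stiffness matrix and the boundary-condition rows and columns are eliminated before solving.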
Abstract:
10th Conference on Telecommunications (Conftele 2015), Aveiro, Portugal.
Abstract:
8th International Workshop on Multiple Access Communications (MACOM2015), Helsinki, Finland.
Abstract:
This paper analyses the performance of a genetic algorithm (GA) in the synthesis of digital circuits using two novel approaches. The first concept consists in improving the static fitness function by including a discontinuity evaluation. The measure of variability in the error of the Boolean table has similarities with the function continuity issue in classical calculus. The second concept extends the static fitness by introducing a fractional-order dynamical evaluation.
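An illustrative sketch (not the paper's exact formula) of the first concept: a static truth-table fitness extended with a discontinuity evaluation, measured here as the variability of the per-row error across consecutive rows of the Boolean table. The weight `alpha` is an assumed parameter.

```python
# Hedged sketch: static fitness over a Boolean truth table, penalized
# by a discontinuity measure of the row-error sequence. The formula and
# the weight `alpha` are illustrative assumptions, not the paper's.

def static_fitness(outputs, targets):
    """Fraction of truth-table rows the candidate circuit gets right."""
    return sum(o == t for o, t in zip(outputs, targets)) / len(targets)

def discontinuity(outputs, targets):
    """Mean absolute jump of the row error between consecutive rows."""
    errors = [abs(o - t) for o, t in zip(outputs, targets)]
    jumps = [abs(errors[i + 1] - errors[i]) for i in range(len(errors) - 1)]
    return sum(jumps) / len(jumps) if jumps else 0.0

def fitness(outputs, targets, alpha=0.5):
    """Static fitness minus a penalty for discontinuous error patterns."""
    return static_fitness(outputs, targets) - alpha * discontinuity(outputs, targets)
```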
Abstract:
This work proposes a real-time algorithm to generate a trajectory for a 2-link planar robotic manipulator. The objective is to minimize the space/time ripple and either the energy requirements or the time duration of the robot trajectories. The proposed method uses an off-line genetic algorithm to calculate every possible trajectory between all cells of the workspace grid. The resulting trajectories are saved in several trees. Any requested trajectory is then constructed in real time from these trees. The article presents the results of several experiments.
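The precompute-then-lookup scheme can be sketched as follows. Breadth-first search stands in here for the off-line genetic algorithm, and the grid and tree layout are illustrative assumptions; the point is that the on-line stage is a cheap tree walk.

```python
from collections import deque

# Illustrative sketch of the precompute-then-lookup scheme: an offline
# stage stores, for a given goal cell, a predecessor tree over the
# workspace grid; any requested trajectory is then rebuilt in real time
# by walking the tree. BFS stands in for the offline GA search.

def build_tree(grid_w, grid_h, goal, blocked=frozenset()):
    """Offline stage: predecessor tree rooted at `goal`."""
    parent = {goal: None}
    queue = deque([goal])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h
                    and nxt not in parent and nxt not in blocked):
                parent[nxt] = (x, y)
                queue.append(nxt)
    return parent

def trajectory(tree, start):
    """Real-time stage: walk the stored tree from `start` to the goal."""
    path, cell = [], start
    while cell is not None:
        path.append(cell)
        cell = tree[cell]
    return path
```

One such tree per goal cell makes any start-goal request answerable in time proportional to the path length, which is what enables the real-time construction.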
Abstract:
Random amplified polymorphic DNA (RAPD) is a simple and reliable technique to detect DNA polymorphism. Several factors can affect the amplification profiles, causing false bands and non-reproducibility of the assay. In this study, we analyzed the effect of changing the concentrations of primer, magnesium chloride, template DNA and Taq DNA polymerase, with the objective of determining their optimum concentrations for the standardization of the RAPD technique for genetic studies of Cuban Triatominae. Reproducible amplification patterns were obtained using 5 pmol of primer, 2.5 mM of MgCl2, 25 ng of template DNA and 2 U of Taq DNA polymerase in a 25 µL reaction. A panel of five random primers was used to evaluate the genetic variability of T. flavida. Three of these (OPA-1, OPA-2 and OPA-4) generated reproducible and distinguishable fingerprinting patterns of Triatominae. Numerical analysis of the 52 RAPD bands amplified by all five primers was carried out with the unweighted pair group method with arithmetic mean (UPGMA). Jaccard's similarity coefficient data were used to construct a dendrogram. Two groups could be distinguished by the RAPD data, and these groups coincided with geographic origin, i.e. the populations captured in areas east and west of Guanahacabibes, Pinar del Río. T. flavida presents low interpopulation variability, which could result in greater susceptibility to pesticides in control programs. The RAPD protocol and the selected primers are useful for the molecular characterization of Cuban Triatominae.
Abstract:
Redundant manipulators have some advantages over classical arms because they allow trajectory optimization, both in free space and in the presence of obstacles, and the resolution of singularities. For this type of manipulator, several kinematic algorithms adopt generalized inverse matrices. In this line of thought, the generalized inverse control scheme is tested through several experiments that reveal the difficulties that often arise. Motivated by these problems, this paper presents a new method that optimizes the manipulability through a least-squares polynomial approximation to determine the joint positions. Moreover, the article studies the influence on the dynamics when controlling redundant and hyper-redundant manipulators. The experiments confirm the superior performance of the proposed algorithm for redundant and hyper-redundant manipulators, revealing several fundamental properties of the chaotic phenomena, and give a deeper insight towards the future development of superior trajectory control algorithms.
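The generalized-inverse baseline tested in such experiments can be sketched as a closed-loop pseudoinverse update, dq = J⁺ · e; the 3-link planar arm, link lengths, and gain below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Sketch of generalized-inverse (pseudoinverse) kinematic control for a
# redundant planar arm. The arm geometry and gain are assumptions.

def fk(q, lengths):
    """End-effector position of a planar serial arm."""
    angles = np.cumsum(q)
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def jacobian(q, lengths):
    """2xN geometric Jacobian of the planar arm."""
    angles = np.cumsum(q)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        J[0, i] = -np.sum(lengths[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(angles[i:]))
    return J

def ik_step(q, lengths, target, gain=0.5):
    """One closed-loop pseudoinverse update toward `target`."""
    err = target - fk(q, lengths)
    dq = np.linalg.pinv(jacobian(q, lengths)) @ (gain * err)
    return q + dq
```

Because the 2xN Jacobian of a redundant arm has a null space, the pseudoinverse picks the minimum-norm joint update, which is precisely where the difficulties (and the room for manipulability optimization) arise.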
Abstract:
Evolutionary Computation falls within the field of Artificial Intelligence and is a branch of computer science that has been applied to problem solving in several areas of Engineering. This work presents the state of the art of Evolutionary Computation, as well as some of its applications in electronics, known as Evolutionary Electronics (or Evolvable Hardware), with emphasis on the synthesis of combinational digital circuits. Artificial Intelligence is presented first, followed by Evolutionary Computation in its main strands: Evolutionary Algorithms, based on Charles Darwin's process of the evolution of species, and Swarm Intelligence, based on the collective behaviour of some animals. Regarding Evolutionary Algorithms, evolution strategies, genetic programming, evolutionary programming and, with greater emphasis, Genetic Algorithms are described. Regarding Swarm Intelligence, ant colony optimization and particle swarm optimization are described. In parallel, a study of Evolutionary Electronics was also carried out, briefly covering some of its application areas, among them robotics, FPGAs, printed circuit board routing, synthesis of digital and analog circuits, telecommunications, and controllers. To put the study into practice, a case study of the application of genetic algorithms to the synthesis of combinational digital circuits is presented, based on the analysis and comparison of three references by different authors. This study made it possible to compare not only the results obtained by each author, but also the way the genetic algorithms were implemented, namely the parameters, the genetic operators used, the fitness function, the hardware implementation, and the circuit encoding.
Abstract:
HHV-6 is the etiological agent of exanthem subitum, considered the sixth most frequent disease in infancy. In immunocompromised hosts, reactivation of latent HHV-6 infection may cause severe acute disease. We developed a Sybr Green real-time PCR for HHV-6 and compared the results with nested conventional PCR. A 214 bp PCR-derived fragment was cloned using the pGEM-T Easy system from Promega. Subsequently, serial dilutions were made in a pool of negative leucocytes, from 10⁻⁶ ng/µL (equivalent to 2465.8 molecules/µL) to 10⁻⁹ ng/µL (equivalent to 2.46 molecules/µL). Dilutions of the plasmid were amplified by Sybr Green real-time PCR, using primers HHV3 (5' TTG TGC GGG TCC GTT CCC ATC ATA 3') and HHV4 (5' TCG GGA TAG AAA AAC CTA ATC CCT 3'), and by conventional nested PCR, using primers HHV1 (outer): 5' CAA TGC TTT TCT AGC CGC CTC TTC 3'; HHV2 (outer): 5' ACA TCT ATA ATT TTA GAC GAT CCC 3'; and HHV3 and HHV4 (inner). The detection threshold was determined by plasmid serial dilutions. The threshold for Sybr Green real-time PCR was 24.6 molecules/µL and for the nested PCR was 2.46 molecules/µL. We chose real-time PCR with the new Sybr Green chemistry for diagnosing and quantifying HHV-6 DNA from samples, due to its sensitivity and lower risk of contamination.
Abstract:
In the traditional paradigm, large power plants supply the reactive power required at the transmission level, while capacitors and transformer tap changers are used at the distribution level. However, in the near future it will be necessary to schedule both active and reactive power at the distribution level, due to the high number of resources connected there. This paper proposes a new multi-objective methodology to deal with optimal resource scheduling considering distributed generation, electric vehicles, and capacitor banks for joint active and reactive power scheduling. The proposed methodology considers the minimization of the cost of all distributed resources (economic perspective) and the minimization of the voltage magnitude difference across all buses (technical perspective). The Pareto front is determined and a fuzzy-based mechanism is applied to select the best compromise solution. The methodology has been tested on the 33-bus distribution network. The case study shows the results of three different scenarios for the economic, technical, and multi-objective perspectives, and the results demonstrate the importance of incorporating reactive power scheduling in the distribution network, using the multi-objective perspective to obtain the best compromise between the economic and technical perspectives.
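Fuzzy-based best-compromise selection over a Pareto front is commonly implemented with linear membership functions; a minimal sketch, assuming all objectives are minimized (a generic illustration, not the paper's code), is:

```python
# Sketch of fuzzy best-compromise selection over a Pareto front using
# linear membership functions: each objective value is mapped to [0, 1]
# between its worst and best values over the front, and the solution
# with the highest normalized membership sum is chosen. Assumes all
# objectives are minimized; this is a generic illustration.

def best_compromise(front):
    """front: list of objective vectors. Returns the winning index."""
    n_obj = len(front[0])
    f_min = [min(sol[k] for sol in front) for k in range(n_obj)]
    f_max = [max(sol[k] for sol in front) for k in range(n_obj)]

    def membership(sol):
        total = 0.0
        for k in range(n_obj):
            if f_max[k] == f_min[k]:
                total += 1.0          # objective is constant on the front
            else:
                total += (f_max[k] - sol[k]) / (f_max[k] - f_min[k])
        return total

    scores = [membership(sol) for sol in front]
    norm = [s / sum(scores) for s in scores]   # normalize across the front
    return max(range(len(front)), key=lambda i: norm[i])
```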
Abstract:
In this paper, we formulate the electricity retailers' short-term decision-making problem in a liberalized retail market as a multi-objective optimization model. Retailers with light physical assets, such as generation and storage units in the distribution network, are considered. Following advances in smart grid technologies, electricity retailers are becoming able to employ incentive-based demand response (DR) programs in addition to their physical assets to effectively manage the risks of market price and load variations. In this model, DR scheduling is performed simultaneously with the dispatch of generation and storage units. The ultimate goal is to find the optimal values of the hourly financial incentives offered to the end-users. The proposed model considers the capacity obligations imposed on retailers by the grid operator. The profit-seeking retailer also has the objective of minimizing the peak demand, to avoid high capacity charges in the form of grid tariffs or penalties. The non-dominated sorting genetic algorithm II (NSGA-II), a fast and elitist multi-objective evolutionary algorithm, is used to solve the multi-objective problem. A case study is solved to illustrate the efficient performance of the proposed methodology. Simulation results show the effectiveness of the model for designing incentive-based DR programs and indicate the efficiency of NSGA-II in solving the retailers' multi-objective problem.
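The core of NSGA-II, extraction of the non-dominated (first) front, can be sketched as follows, assuming both retailer objectives are minimized; this is a generic illustration of the algorithm's building block, not the paper's code.

```python
# Sketch of Pareto dominance and first-front extraction, the building
# block of NSGA-II's non-dominated sorting (minimization assumed).

def dominates(a, b):
    """a dominates b: no worse in every objective, better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def first_front(population):
    """Indices of the non-dominated solutions in `population`."""
    return [i for i, p in enumerate(population)
            if not any(dominates(q, p)
                       for j, q in enumerate(population) if j != i)]
```

NSGA-II repeats this ranking on the remaining solutions to build successive fronts, then breaks ties within a front by crowding distance to preserve diversity.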
Abstract:
One parameter that influences the performance of adhesively bonded joints is the adhesive layer thickness. Hence, its effect has to be investigated experimentally and should be taken into consideration in the design of adhesive joints. Most of the results in the literature are for typical structural epoxy adhesives, which are generally formulated to perform in thin sections. However, polyurethane adhesives are designed to perform in thicker sections and might behave differently as a function of adhesive thickness. In this study, the effect of adhesive thickness on the mechanical behavior of a structural polyurethane adhesive was investigated. The mode I fracture toughness of the adhesive was measured using double-cantilever beam (DCB) tests with adhesive layer thicknesses ranging from 0.2 to 2 mm. In addition, single lap joints (SLJs) were fabricated and tested to assess the influence of adhesive thickness on the lap-shear strength of the adhesive. Fracture toughness was found to increase with adhesive thickness. The lap-shear strength decreases as the adhesive layer gets thicker, but in contrast to joints with brittle adhesives, the decreasing trend was less pronounced.
Abstract:
In this study, the behaviour of two structural adhesives modified with thermally expandable particles (TEPs) was investigated as a preliminary study for further work on the potential of TEPs in adhesive joints. Tensile bulk tests were performed to obtain the tensile properties of the adhesives and the TEPs-modified adhesives. In order to determine the expansion temperature of the particles while encapsulated in these particular adhesive systems, the variation with temperature of the volume of adhesive samples modified with different TEPs concentrations was measured. Further, the possibility of chemical interactions between the TEPs and the adhesive matrix in the TEPs-modified specimens was checked by Fourier transform infrared spectroscopy. Finally, the fracture surfaces of the unmodified and TEPs-modified specimens, as well as the dispersion and morphology of the particles, were examined by scanning electron microscopy. It was found that the stiffness of the TEPs-modified adhesives is not affected by the incorporation of TEPs in the adhesive matrix, while the tensile yield strength decreased with increasing wt% TEPs content. In applications of such materials (TEPs-modified adhesives), the temperature should be kept between 90°C and 120°C in order to obtain the highest expansion ratio: at lower temperatures not all the particles will expand, while above this range the TEPs, and as a result the TEPs-modified adhesives, will deteriorate.
Abstract:
According to the new KDIGO (Kidney Disease: Improving Global Outcomes) guidelines, the term renal osteodystrophy should be used exclusively in reference to the invasive diagnosis of bone abnormalities. Due to the low sensitivity and specificity of biochemical serum markers of bone remodelling, the performance of bone biopsies is strongly encouraged in dialysis patients and after kidney transplantation. Tartrate-resistant acid phosphatase (TRACP) is an isoenzyme of the group of acid phosphatases which is highly expressed by activated osteoclasts and macrophages. In osteoclasts, TRACP is located in intracytoplasmic vesicles that transport the products of bone matrix degradation. As it is present in activated osteoclasts, the identification of this enzyme by histochemistry in undecalcified bone biopsies is an excellent method to quantify bone resorption. Since this is an enzymatic histochemical method for a thermolabile enzyme, the temperature at which it is performed is particularly relevant. This study aimed to determine the optimal temperature for the identification of TRACP in activated osteoclasts in undecalcified bone biopsies embedded in methylmethacrylate. We selected 10 cases of undecalcified bone biopsies from hemodialysis patients with a diagnosis of secondary hyperparathyroidism. Sections of 5 μm were stained to identify TRACP at different incubation temperatures (37 °C, 45 °C, 60 °C, 70 °C and 80 °C) for 30 minutes. Activated osteoclasts stained red, and trabecular (mineralized) bone was counterstained with toluidine blue. This approach also increased the visibility of the trabecular bone resorption areas (Howship lacunae). Unlike what is suggested in the literature and in several international protocols, we found that the best results were obtained at temperatures between 60 °C and 70 °C. For technical reasons and according to the results of the present study, we recommend that, for an incubation time of 30 minutes, the reaction be carried out at 60 °C.
As active osteoclasts are usually scarce in a bone section, standardization of the histochemical method is of great relevance to optimize the identification of these cells and increase the accuracy of the histomorphometric results. Our results, by increasing osteoclast contrast, also support the use of semi-automatic histomorphometric measurements.