91 results for artificial linear structures
Abstract:
This dissertation aimed at developing porous hydroxyapatite (HA) foams based on inverted colloidal crystal (ICC) replicas for bone substitution. An ICC is a highly porous three-dimensional structure with an interconnected network of pores of highly uniform size. This type of architecture enables homogeneous cell proliferation and superior mechanical properties compared with foams of non-uniform geometry. The colloidal crystal (CC), the template for the foam, was created by packing polystyrene microspheres (270 μm) produced by microfluidics, followed by heat treatment. The template was impregnated with a hydroxyapatite gel produced via sol-gel, using phosphorus pentoxide and calcium nitrate tetrahydrate as the phosphorus and calcium precursors, respectively. The ceramic foam was obtained in a single step after a heat treatment at 1100 °C, which allowed the solidification of the gel and the removal of the CC. Fourier-transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD) analysis revealed an A-type carbonated hydroxyapatite with the presence of tricalcium phosphates. The mechanical properties were evaluated by compression tests. In vitro biocompatibility was demonstrated through osteoblast adhesion and proliferation tests.
Abstract:
A potentially renewable and sustainable source of energy is the chemical energy associated with the solvation of salts. Mixing two aqueous streams with different saline concentrations is spontaneous and releases energy. The theoretically obtainable power from the salinity gradient energy of the world's river discharge into the oceans has been estimated to be in the range of 1.4-2.6 TW. Reverse electrodialysis (RED) is one of the emerging membrane-based technologies for harvesting salinity gradient energy. A common RED stack is composed of alternately arranged cation- and anion-exchange membranes stacked between two electrodes. The compartments between the membranes are alternately fed with concentrated (e.g., sea water) and dilute (e.g., river water) saline solutions. Migration of the respective counter-ions through the membranes produces an ionic current between the electrodes, where an appropriate redox pair converts the chemical salinity gradient energy into electrical energy. Given the need for new sources of energy for power generation, the present study aims at better understanding and solving current challenges associated with RED stack design, fluid dynamics, ionic mass transfer and long-term RED stack performance with natural saline solutions as feedwaters. Chronopotentiometry was used to determine the diffusion boundary layer (DBL) thickness from diffusion relaxation data, and flow entrance effects on mass transfer were found to favour an increase in power generation in RED stacks. Increasing the linear flow velocity also decreases the DBL thickness, but at the cost of a higher pressure drop. The pressure drop inside RED stacks was successfully simulated by the mathematical model developed, which includes the contribution of several pressure drops that had not previously been considered. The effect of each pressure drop on RED stack performance was identified and rationalized, and guidelines for the planning and/or optimization of RED stacks were derived. The design of new profiled membranes, with a chevron corrugation structure, was proposed using computational fluid dynamics (CFD) modeling. The performance of the suggested corrugation geometry was compared with existing ones, as well as with the use of conductive and non-conductive spacers. According to the estimations, the use of chevron structures yields the highest net power density values, offering the best compromise between the mass transfer coefficient and the pressure drop. Finally, long-term experiments with natural waters were performed, during which fouling occurred. For the first time, 2D fluorescence spectroscopy was used to monitor RED stack performance, with a dedicated focus on following fouling on the ion-exchange membrane surfaces. To extract relevant information from the fluorescence spectra, parallel factor analysis (PARAFAC) was performed. The information obtained was then used to predict net power density, stack electric resistance and pressure drop with multivariate statistical models based on projection to latent structures (PLS) modeling. Using in these models the 2D fluorescence data, which contain hidden information about fouling on the membrane surfaces that PARAFAC can extract, considerably improved the models' fit to the experimental data.
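As a hedged illustration of the last step described above, the sketch below decomposes a synthetic excitation-emission-sample fluorescence tensor with PARAFAC and feeds the sample-mode scores into a PLS model that predicts net power density. The array shapes, the rank of three components and all variable names are illustrative assumptions, not values or code from the thesis.

```python
# Hedged sketch: PARAFAC on an excitation-emission-sample tensor, then PLS
# regression of stack performance on the sample-mode scores. Shapes, rank and
# variable names are illustrative assumptions, not data from the thesis.
import numpy as np
from tensorly.decomposition import parafac
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Synthetic EEM data: 40 samples x 30 excitation x 50 emission wavelengths
eem = rng.random((40, 30, 50))
net_power_density = rng.random(40)          # hypothetical response (W/m2)

# PARAFAC with an assumed number of fluorophore-like components
cp = parafac(eem, rank=3)
sample_scores = cp.factors[0]               # 40 x 3 sample-mode loadings

# PLS model relating the PARAFAC scores to stack performance
pls = PLSRegression(n_components=2)
pls.fit(sample_scores, net_power_density)
print(pls.predict(sample_scores[:5]))
```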
Abstract:
With the evolution of computational resources and the development of the constitutive models available for assessing the structural behaviour of reinforced concrete elements, it is increasingly common to resort to numerical models that account for physical and geometric nonlinearity. Numerical simulations obtained with this type of computational model provide a complete history of the structural behaviour, from the beginning of load application up to the total collapse of the structure. However, in zones of geometric discontinuity in reinforced concrete structures, the evolution of the cracking pattern is a relatively complex phenomenon whose numerical simulation represents a considerable challenge. The objective of this work is to verify the applicability of the Applied Element Method to the study of the development of the cracking pattern in reinforced concrete walls subjected to monotonic loading. A set of ten walls was analysed, all with an opening that creates a zone of geometric discontinuity and, consequently, a more complex cracking pattern. Each wall has a different reinforcement detailing, allowing the reliability of the computational model to be verified. The numerical results were compared with experimental tests carried out by Bounassar Filho [8], allowing conclusions to be drawn about the advantages and limitations of this method when applied to the study of reinforced concrete structures subjected to monotonic loads.
Abstract:
The main objective of this dissertation is the implementation of an architecture based on evolutionary algorithms for tuning the parameters of a fuzzy PID (Proportional-Integral-Derivative) controller, with the concept of closed-loop performance explicitly taken into account. The tuning of the fuzzy controller parameters is formulated as a constrained optimization problem, in which the cost function to be minimized is described in terms of closed-loop performance, with the system dynamics approximated by a nonlinear model. Since existing optimization methodologies do not commonly incorporate adaptation mechanisms for the membership functions, this dissertation considers, in addition to the usual tuning of the scaling factors, the simultaneous tuning of scaling factors and membership functions. Experimental results obtained on a benchmark system aim to demonstrate the benefits of incorporating the membership functions in the offline optimization process. A second-order analytical method is also used as a reference, in order to compare the performance of a global optimization approach against a local one. Finally, an online approach using the second-order analytical method is implemented for the optimization of the scaling factors and membership functions.
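As a hedged sketch of the general idea (not the dissertation's method), the code below runs a simple evolutionary search over controller parameters, minimizing a closed-loop tracking cost on a simulated nonlinear plant; plain PID gains stand in for the fuzzy controller's scaling factors, and the plant, cost function and population settings are illustrative assumptions.

```python
# Hedged sketch: evolutionary tuning of controller parameters against a
# closed-loop cost. Plain PID gains stand in for fuzzy scaling factors;
# plant, cost and evolution settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
DT, STEPS, SETPOINT = 0.01, 500, 1.0

def closed_loop_cost(gains):
    kp, ki, kd = gains
    x, integ, cost = 0.0, 0.0, 0.0
    prev_err = SETPOINT - x
    for _ in range(STEPS):
        err = SETPOINT - x
        integ += err * DT
        deriv = (err - prev_err) / DT
        u = kp * err + ki * integ + kd * deriv
        # simple nonlinear first-order plant: x' = -x - 0.5*x^3 + u
        x += DT * (-x - 0.5 * x**3 + u)
        cost += DT * (err**2 + 0.001 * u**2)   # tracking error plus control effort
        prev_err = err
    return cost

# (mu + lambda)-style evolution over the three gains
pop = rng.uniform(0.0, 5.0, size=(20, 3))
for _ in range(50):
    scores = np.array([closed_loop_cost(g) for g in pop])
    parents = pop[np.argsort(scores)[:5]]                  # keep the 5 best
    children = parents[rng.integers(0, 5, size=15)] \
               + rng.normal(0.0, 0.2, size=(15, 3))        # mutate copies of parents
    pop = np.vstack([parents, np.clip(children, 0.0, None)])

best = pop[np.argmin([closed_loop_cost(g) for g in pop])]
print("tuned gains (kp, ki, kd):", best)
```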
Abstract:
The theme of this dissertation is the finite element method applied to mechanical structures. A new finite element program is developed that, besides performing different types of structural analysis, also allows the calculation of derivatives of structural performance measures using the continuum method of design sensitivity analysis, so that, in combination with the mathematical programming algorithms available in the commercial software MATLAB, structural optimization problems can be solved. The program is called EFFECT (Efficient Finite Element Code). The object-oriented programming paradigm, and specifically the C++ programming language, is used for program development. The main objective of this dissertation is to design EFFECT so that it can constitute, at this stage of development, the foundation for a program with analysis capabilities similar to other open-source finite element programs. In this first stage, six elements are implemented for linear analysis: 2-dimensional truss (Truss2D), 3-dimensional truss (Truss3D), 2-dimensional beam (Beam2D), 3-dimensional beam (Beam3D), a triangular shell element (Shell3Node) and a quadrilateral shell element (Shell4Node). The shell elements combine two distinct elements, one simulating the membrane behavior and the other the plate bending behavior. A nonlinear analysis capability is also developed, combining the corotational formulation with the Newton-Raphson iterative method, but at this stage it is only available for problems modeled with Beam2D elements subject to large displacements and rotations, known as geometrically nonlinear problems. The design sensitivity analysis capability is implemented for two elements, Truss2D and Beam2D, including the procedures and analytic expressions for calculating derivatives of displacement, stress and volume performance measures with respect to five different types of design variables. Finally, a set of test examples was created to validate the accuracy and consistency of the results obtained with EFFECT, by comparing them with results published in the literature or obtained with the ANSYS commercial finite element code.
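For illustration only, the sketch below shows the direct stiffness assembly behind a linear 2D truss analysis of the kind a Truss2D element performs; the geometry, section properties and load case are assumptions and not taken from the dissertation.

```python
# Hedged sketch of linear 2D truss analysis by the direct stiffness method,
# the kind of analysis a Truss2D element performs. Geometry, E, A and the
# load case are illustrative assumptions.
import numpy as np

E, A = 210e9, 1e-4                      # steel-like modulus (Pa), area (m^2)
nodes = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5]])   # x, y (m)
elements = [(0, 2), (1, 2), (0, 1)]     # node index pairs

ndof = 2 * len(nodes)
K = np.zeros((ndof, ndof))
for i, j in elements:
    dx, dy = nodes[j] - nodes[i]
    L = np.hypot(dx, dy)
    c, s = dx / L, dy / L
    k_elem = (E * A / L) * np.array([[ c*c,  c*s, -c*c, -c*s],
                                     [ c*s,  s*s, -c*s, -s*s],
                                     [-c*c, -c*s,  c*c,  c*s],
                                     [-c*s, -s*s,  c*s,  s*s]])
    dofs = [2*i, 2*i + 1, 2*j, 2*j + 1]
    K[np.ix_(dofs, dofs)] += k_elem      # scatter element stiffness into K

F = np.zeros(ndof)
F[5] = -10e3                            # 10 kN downward at node 2

fixed = [0, 1, 2, 3]                    # nodes 0 and 1 fully pinned
free = [d for d in range(ndof) if d not in fixed]
u = np.zeros(ndof)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
print("nodal displacements (m):\n", u.reshape(-1, 2))
```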
Abstract:
The main purpose of the present dissertation is the simulation of the response of fibre-grout-strengthened RC panels subjected to blast effects using the Applied Element Method, in order to verify and validate its applicability. To this end, four experimental models, three of which were strengthened with a cement-based grout, each reinforced with one type of steel reinforcement, were tested against blast effects. After the calibration of the experimental set-up, it was possible to obtain and compare the response to blast effects of the model without strengthening (reference model) and of a fibre-grout-strengthened RC panel (strengthened model). Afterwards, a numerical model of the reference model was created in the commercial software Extreme Loading for Structures, which is based on the Applied Element Method, and calibrated against the experimental results, namely the residual displacement obtained by the experimental monitoring system. With the calibration verified, it is possible to assume that the numerical model correctly predicts the response of fibre-grout-strengthened RC panels subjected to blast effects. To verify this assumption, the strengthened model was modelled and subjected to the blast effects of the corresponding experimental set-up. The comparison between the residual and maximum displacements and the bottom-surface cracking obtained in the experimental and numerical tests yields a difference of 4% for the maximum displacements of the reference model, and differences of 4% and 10% for the residual and maximum displacements of the strengthened model, respectively. Additionally, the cracking on the bottom surface of the models was similar in both methods. Therefore, one can conclude that the Applied Element Method can correctly predict and simulate the response of fibre-grout-strengthened RC panels subjected to blast effects.
Abstract:
Following the impact of a vessel against the pier foundation block of the Ponte 25 de Abril in Lisbon, and considering the importance of this structure, some concerns were raised regarding the safety of the bridge with respect to this type of action. In this context, the main objective of the present work is to evaluate the response of the Ponte 25 de Abril subjected to vessel impacts. It is known that the Ponte 25 de Abril was designed based on the 1940 El Centro earthquake, and no detailed study of vessel impact was carried out at the time, since the moments at the base of the foundation block resulting from the design earthquake were considerably larger than the moments produced by a force at the top of the block intended to simulate a vessel impact. However, the expressions used to simulate the effect of a vessel impact on bridge piers have evolved since the date of the original design. Since the pier foundation blocks are the structural elements directly subjected to the impact event, subsequently transmitting displacements and forces to the respective piers, the present study focused on these elements. To study the behaviour of a bridge pier foundation block subjected to these dynamic actions, data were needed to define the structure in question and the corresponding loading, as well as a nonlinear structural analysis program suited to the present case. The design documents of the Ponte 25 de Abril were obtained, as well as data on all the vessels that crossed under the bridge in 2014. The modelling of the structure and corresponding loads was carried out using the nonlinear structural analysis program Extreme Loading for Structures, which uses the Applied Element Method. From the model of the bridge pier foundation block, the dynamic and static capacity curves of the structure were drawn, and both the 1940 El Centro earthquake and the most demanding vessel impact that could have occurred in 2014 were simulated. Based on the results of the analyses performed in the present work, it was found that, although a vessel impact may be more demanding than the earthquake for which the structure was designed, the foundation block resists both actions without significant damage.
Abstract:
With the growth of the Internet and the Semantic Web, together with improvements in communication speed and the rapid growth of storage capacity, the volume of data and information rises considerably every day. Because of this, in the last few years there has been a growing interest in structures for formal knowledge representation with suitable characteristics, such as the ability to organize data and information and to reuse their contents for the generation of new knowledge. Controlled vocabularies, and ontologies in particular, stand out as one such representation structure with high potential. They not only allow data to be represented, but also allow that data to be reused for knowledge extraction, coupled with its subsequent storage through relatively simple formalisms. However, to ensure that the knowledge in an ontology is always up to date, ontologies need maintenance. Ontology Learning is the area that studies the details of updating and maintaining ontologies. It is worth noting that the relevant literature already presents first results on the automatic maintenance of ontologies, but these are still at a very early stage. Human-based processes are still the current way to update and maintain an ontology, which makes this a cumbersome task. The generation of new knowledge for ontology growth can be based on Data Mining techniques, an area that studies techniques for data processing, pattern discovery and knowledge extraction in IT systems. This work proposes a novel semi-automatic method for knowledge extraction from unstructured data sources using Data Mining techniques, namely pattern discovery, focused on improving the precision of the concepts and semantic relations present in an ontology. In order to verify the applicability of the proposed method, a proof of concept was developed, and its results, obtained in the building and construction sector, are presented.
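As a hedged sketch of the kind of pattern-discovery step such a method could rely on, the code below mines term co-occurrence within sentences of unstructured text to propose candidate relations between ontology concepts; the corpus, concept list and support threshold are illustrative assumptions.

```python
# Hedged sketch: sentence-level term co-occurrence mined from unstructured
# text to suggest candidate relations between ontology concepts. The corpus,
# concept list and threshold are illustrative assumptions.
from itertools import combinations
from collections import Counter
import re

concepts = {"wall", "concrete", "insulation", "facade", "mortar"}   # assumed ontology terms

corpus = [
    "The facade wall uses a concrete layer with external insulation.",
    "Mortar joints in the concrete wall showed early cracking.",
    "Insulation panels were fixed to the facade with mortar.",
]

cooccurrence = Counter()
for sentence in corpus:
    tokens = set(re.findall(r"[a-z]+", sentence.lower()))
    found = sorted(tokens & concepts)          # concepts mentioned in this sentence
    for pair in combinations(found, 2):
        cooccurrence[pair] += 1

MIN_SUPPORT = 2   # assumed minimum support for a candidate relation
candidates = [pair for pair, count in cooccurrence.items() if count >= MIN_SUPPORT]
print("candidate concept relations to review:", candidates)
```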
Abstract:
In the early nineties, Mark Weiser wrote a series of seminal papers that introduced the concept of Ubiquitous Computing. According to Weiser, computers require too much attention from the user, drawing their focus from the tasks at hand. Instead of being the centre of attention, computers should be so natural that they would vanish into the human environment, becoming not only truly pervasive but also effectively invisible and unobtrusive to the user. This calls not only for smaller, cheaper and lower-power computers, but also for equally convenient display solutions that can be harmoniously integrated into our surroundings. With the advent of Printed Electronics, new ways to link the physical and the digital worlds became available. By combining common printing techniques such as inkjet printing with electro-optical functional inks, it is becoming possible not only to mass-produce extremely thin, flexible and cost-effective electronic circuits but also to introduce electronic functionalities into products where they were previously unavailable. Indeed, Printed Electronics is enabling the creation of novel sensing and display elements for interactive devices, free of form-factor constraints. At the same time, the rising availability and affordability of digital fabrication technologies, namely 3D printers, to the average consumer is fostering a new industrial (digital) revolution and the democratisation of innovation. Nowadays, end-users are already able to custom-design and manufacture their own physical products on demand, according to their own needs. In the future, they will be able to fabricate interactive digital devices with user-specific form and functionality from the comfort of their homes. This thesis explores how task-specific, low-computation interactive devices capable of presenting dynamic visual information can be created using Printed Electronics technologies, following an approach based on the ideals behind Personal Fabrication. Focus is given to the use of printed electrochromic displays as a medium for delivering dynamic digital information. According to the architecture of the displays, several approaches are highlighted and categorised. Furthermore, a pictorial computation model based on extended cellular automata principles is used to programme dynamic simulation models into matrix-based electrochromic displays. Envisaged applications include the modelling of physical, chemical, biological and environmental phenomena.
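As a hedged illustration of the pictorial computation idea, the sketch below runs a small two-dimensional cellular automaton whose binary state is pushed frame by frame to a matrix display; the spreading rule, grid size and the stand-in display driver are assumptions, not the thesis's actual model or hardware interface.

```python
# Hedged sketch: a small 2D cellular automaton driving a matrix display frame
# by frame. The spreading rule, resolution and the stand-in "driver" are
# illustrative assumptions.
import numpy as np

ROWS, COLS = 8, 16                       # assumed display resolution

def step(grid):
    """One update: a cell switches on if any of its 4-neighbours is on."""
    padded = np.pad(grid, 1)
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1]
                  + padded[1:-1, :-2] + padded[1:-1, 2:])
    return ((neighbours >= 1) | (grid == 1)).astype(np.uint8)

def push_frame(grid):
    """Stand-in for a display driver: render the frame as text pixels."""
    for row in grid:
        print("".join("#" if px else "." for px in row))
    print()

grid = np.zeros((ROWS, COLS), dtype=np.uint8)
grid[ROWS // 2, COLS // 2] = 1           # seed a single active pixel

for _ in range(5):                       # a few animation frames
    push_frame(grid)
    grid = step(grid)
```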
Abstract:
Nowadays, many manufacturing and industrial systems have a diagnosis system on top of them, responsible for ensuring the lifetime of the system itself. This is achieved by performing both diagnosis and error-recovery procedures in real production time, on each of the individual parts of the system. There are many paradigms currently being used for diagnosis. However, they still fail to meet all the requirements imposed by enterprises, making a different approach necessary. This is mostly the case for the error-recovery paradigms, since the great diversity present nowadays in the industrial environment makes it highly unlikely that every single error can be fixed in real time, without stopping production. This work proposes a paradigm that is still relatively unknown in manufacturing: Artificial Immune Systems (AIS), which rely on bio-inspired algorithms and come as a valid alternative to the paradigms currently in use. The proposed work is a multi-agent architecture that implements an Artificial Immune System based on bio-inspired algorithms. The main goal of this architecture is to find a resolution to the error currently detected by the system. The proposed architecture was tested using two different simulation environments, each meant to prove different points of view, using different tests. These tests will determine whether, as the research suggests, this paradigm is a promising alternative for the industrial environment. They will also define what should be done to improve the current architecture and whether it should be applied in a decentralised system.
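As a hedged sketch of how a bio-inspired immune algorithm can select a recovery action, the code below runs a clonal-selection-style search: candidate recovery encodings ("antibodies") are scored against the detected error signature ("antigen"), the best are cloned and mutated, and the highest-affinity candidate is returned; the feature encoding and all names are illustrative assumptions.

```python
# Hedged sketch: clonal-selection-style choice of an error-recovery encoding.
# The feature encoding, affinity measure and parameters are illustrative
# assumptions, not the architecture proposed in the work.
import numpy as np

rng = np.random.default_rng(2)

def affinity(antibody, antigen):
    """Higher is better: inverse Euclidean distance in feature space."""
    return 1.0 / (1.0 + np.linalg.norm(antibody - antigen))

def clonal_selection(antigen, repertoire, generations=20, n_best=3, n_clones=5):
    pool = repertoire.copy()
    for _ in range(generations):
        scores = np.array([affinity(ab, antigen) for ab in pool])
        best = pool[np.argsort(scores)[-n_best:]]          # keep the fittest
        clones = np.repeat(best, n_clones, axis=0)         # clone the best candidates
        clones += rng.normal(0.0, 0.1, size=clones.shape)  # random (hyper)mutation
        pool = np.vstack([best, clones])
    scores = np.array([affinity(ab, antigen) for ab in pool])
    return pool[np.argmax(scores)]

# 4-dimensional error signature and an initial repertoire of recovery encodings
error_signature = np.array([0.8, 0.1, 0.4, 0.9])
repertoire = rng.random((10, 4))
print("selected recovery encoding:", clonal_selection(error_signature, repertoire))
```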
Abstract:
Mutable state can be useful in certain algorithms, to structure programs, or for efficiency purposes. However, when shared mutable state is used in non-local or non-obvious ways, the interactions that can occur via aliases to that shared memory can be a source of program errors. Undisciplined uses of shared state may unsafely interfere with local reasoning, as other aliases may interleave their changes to the shared state in unexpected ways. We propose a novel technique, rely-guarantee protocols, that structures the interactions between aliases and ensures that only safe interference is possible. We present a linear type system outfitted with our novel sharing mechanism that enables controlled interference over shared mutable resources. Each alias is assigned separate, local roles encoded in a protocol abstraction that constrains how that alias can legally use the shared state. By following the spirit of rely-guarantee reasoning, our rely-guarantee protocols ensure that only safe interference can occur, while still allowing many interesting uses of shared state, such as going beyond invariant and monotonic usages. This thesis describes the three core mechanisms that enable our type-based technique to work: 1) we show how a protocol models an alias's perspective on how the shared state evolves and constrains that alias's interactions with the shared state; 2) we show how protocols can be used while enforcing the agreed interference contract; and finally, 3) we show how to check that all local protocols for some shared state can be safely composed to ensure globally safe interference over that shared memory. The interference caused by shared state is rooted in how the uses of different aliases to that state may be interleaved (perhaps even in non-deterministic ways) at run-time. Therefore, our technique is mostly agnostic as to whether this interference is the result of alias interleaving caused by sequential or concurrent semantics. We show implementations of our technique in both settings and highlight their differences. Because sharing is “first-class” (and not tied to a module), we show a polymorphic procedure that enables abstract compositions of protocols. Thus, protocols can be specialized or extended without requiring specific knowledge of the interference produced by other protocols on that state. We show that protocol composition can ensure safety even when considering abstracted protocols. We show that this core composition mechanism is sound and decidable (without the need for manual intervention), and we provide an implementation of the algorithm.
Abstract:
This paper presents an application of an Artificial Neural Network (ANN) to the prediction of stock market direction in the US. Using a multilayer perceptron neural network and a backpropagation algorithm for the training process, the model aims at learning the hidden patterns in the daily movement of the S&P500 to correctly identify whether the market will exhibit trend-following or mean-reversion behavior. The ANN is able to produce a successful investment strategy that outperforms the buy-and-hold strategy, but presents instability in its overall results, which compromises its practical application to real-life investment decisions.
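As a hedged sketch of the general setup (not the paper's actual model or data), the code below trains a multilayer perceptron with backpropagation to classify next-day direction from lagged daily returns of a synthetic series; the lag count, network size and training options are assumptions.

```python
# Hedged sketch: a multilayer perceptron trained by backpropagation to classify
# next-day direction from lagged daily returns. The synthetic series, lag count
# and network size are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
returns = rng.normal(0.0005, 0.01, size=2500)        # stand-in daily index returns

N_LAGS = 5
X = np.column_stack([returns[i:len(returns) - N_LAGS + i] for i in range(N_LAGS)])
y = (returns[N_LAGS:] > 0).astype(int)               # 1 = up day, 0 = down day

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)              # keep chronological order

mlp = MLPClassifier(hidden_layer_sizes=(16,), solver="sgd",
                    learning_rate_init=0.01, max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print("out-of-sample hit rate:", mlp.score(X_test, y_test))
```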
Abstract:
The considerable amount of energy consumed on Earth is a major obstacle to achieving sustainable development. Buildings are responsible for the highest share of worldwide energy consumption, nearly 40%. Strong efforts have been made to reduce the operational energy of buildings (heating, hot water, ventilation, electricity), since operational energy is so far the largest energy component in a building's life cycle. However, as operational energy is reduced, embodied energy becomes increasingly significant. One of the building elements responsible for a large share of embodied energy is the building's structural system. Therefore, the present work studies part of the embodied energy (the initial embodied energy) of building structures using a life cycle assessment methodology, in order to contribute to a greater understanding of the embodied energy of building structural systems. The initial embodied energy is estimated for a building structure by varying the span and the structural material type. The results are analysed and compared for different stages, and some conclusions are drawn. At the end of this work it was possible to conclude that the building span does not have a considerable influence on the embodied energy of building structures. However, the structural material type does influence the overall energy performance. In fact, this research showed that the building structure requiring the most initial embodied energy is the steel structure, followed by the glued laminated timber structure and finally the concrete structure.
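As a hedged illustration of the underlying calculation, the sketch below multiplies material quantities by embodied-energy coefficients and sums them per structural alternative; all quantities and coefficients are placeholders, not the inventory data used in the work.

```python
# Hedged sketch: initial embodied energy as material quantity times
# embodied-energy coefficient, summed per structural alternative.
# All quantities and coefficients are placeholder values.

# embodied-energy coefficients, MJ per kg (placeholders)
coefficients = {"concrete": 0.9, "steel": 20.0, "glulam": 10.0}

# material take-off per structural alternative, kg (placeholders)
alternatives = {
    "concrete structure": {"concrete": 250_000, "steel": 12_000},
    "steel structure":    {"steel": 45_000, "concrete": 60_000},
    "glulam structure":   {"glulam": 50_000, "steel": 8_000},
}

for name, quantities in alternatives.items():
    total_mj = sum(mass * coefficients[material]
                   for material, mass in quantities.items())
    print(f"{name}: {total_mj / 1000:.0f} GJ initial embodied energy")
```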
Abstract:
In this thesis, a feed-forward, back-propagating Artificial Neural Network using the gradient descent algorithm is developed to forecast the directional movement of daily returns for WTI, gold and copper futures. Out-of-sample back-test results vary, with some predictive ability for copper futures but none for either WTI or gold. The best statistically significant hit rate achieved was 57% for copper, with an absolute-return Sharpe Ratio of 1.25 and a benchmarked Information Ratio of 2.11.
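As a hedged sketch of how the quoted metrics are commonly computed from daily returns, the code below evaluates the hit rate, an annualized Sharpe ratio and an information ratio against a buy-and-hold benchmark; the synthetic return series and the 252-day annualization factor are assumptions.

```python
# Hedged sketch: hit rate, annualized Sharpe ratio and information ratio from
# daily returns. The synthetic series, the 252-day annualization and the
# buy-and-hold benchmark are assumptions.
import numpy as np

rng = np.random.default_rng(4)
asset = rng.normal(0.0003, 0.01, size=750)              # stand-in daily futures returns
predicted_up = rng.integers(0, 2, size=750)             # stand-in directional forecasts
strategy = np.where(predicted_up == 1, asset, -asset)   # long or short each day
benchmark = asset                                       # buy-and-hold benchmark

hit_rate = np.mean(predicted_up == (asset > 0))

sharpe = np.sqrt(252) * strategy.mean() / strategy.std(ddof=1)

active = strategy - benchmark
information_ratio = np.sqrt(252) * active.mean() / active.std(ddof=1)

print(f"hit rate: {hit_rate:.2%}")
print(f"annualized Sharpe ratio: {sharpe:.2f}")
print(f"annualized information ratio: {information_ratio:.2f}")
```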