965 results for linear machine modeling
Abstract:
Building fires are a phenomenon that can have devastating consequences when left uncontrolled, both in terms of loss of human life and in economic terms. In the past, the occurrence of large fires demonstrated the effects of uncontrolled fire on buildings, as well as the inefficiency of active fire protection measures. Over the last two decades, these issues have motivated the study and understanding of the action of fire on building structures. This work studies the modeling of fire action on steel and composite structures, with the aim of contributing to their better characterization. This thesis focuses on the validation and understanding of the implementation of thermo-mechanical analyses of composite structures in the finite element software OpenSees (Open System for Earthquake Engineering Simulation), thus contributing to future studies, not only of composite structures subjected to fire, but also of composite structures subjected to consecutive events, such as an earthquake followed by fire. This work provides a brief description of the fire phenomenon, as well as of the processes inherent to fire dynamics that constitute a source of uncertainty when modeling fire scenarios in a building. Several fire models from the Eurocodes are then addressed, as well as the recent travelling fires model. Application examples in the software and two case studies are carried out. The first consists of modeling two fire tests performed in Germany in 1986 on 1/4-scale steel structures. The second consists of modeling a fire test on a simply supported reinforced concrete beam, performed at Instituto Superior Técnico in 2010. The numerical models developed in OpenSees account for material and geometric nonlinearities, using distributed-plasticity finite elements with a displacement-based formulation. The numerical results are then compared with the experimental ones in order to validate the thermo-mechanical analyses in OpenSees.
Abstract:
Friction stir welding (FSW) is a solid-state joining process capable of welding dissimilar materials and metal alloys of low weldability. However, in joints welded in a butt configuration, defects may appear at the weld root, namely lack of penetration and oxide alignments. These defects reduce the mechanical strength of the joints, especially under fatigue loading. Process variants exist to eliminate this type of defect; however, they present some technological difficulties and affect the competitiveness of the process. Within the scope of this work, a process variant designated electrically assisted friction stir welding (SFLAE) was developed. This variant consists of introducing a high-intensity electric current in the weld root region, so that the heat generated by the Joule effect warms the material and increases the visco-plastic flow in that zone, with the aim of mitigating or eliminating root defects. Dedicated FSW tools were designed and produced, an analytical model of the process was improved, numerical simulations were performed to help understand some of the physical phenomena involved in the process, and experimental validation tests were carried out. The analysis of the welds showed a reduction in the thickness of the root defects in three tests performed on the AA1100 and AA6084-T6 alloys using current intensities of 300 A. From the microhardness and electrical conductivity measurements, it can be concluded that the microstructure of the material is not significantly altered by the passage of electric current.
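For reference, the heat input this variant relies on is governed by Joule's law; a minimal statement, assuming the current passes through an equivalent resistance R of the root region (an illustrative simplification, not the thesis's improved analytical model):

$$Q = \int_0^{t} R\,I^2\,d\tau = R\,I^2\,t \quad \text{for constant } I \text{ (here } I = 300\ \mathrm{A}\text{)}$$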
Abstract:
Diffusion Kurtosis Imaging (DKI) is a fairly new magnetic resonance imaging (MRI) technique that tackles the non-Gaussian motion of water in biological tissues by taking into account the restrictions imposed by tissue microstructure, which are not considered in Diffusion Tensor Imaging (DTI), where water diffusion is considered purely Gaussian. As a result, DKI provides more accurate information on biological structures and is able to detect important abnormalities that are not visible in standard DTI analysis. This work concerns the development of a tool for DKI computation to be implemented as an OsiriX plugin. Since OsiriX runs under Mac OS X, the program is written in Objective-C and also makes use of Apple's Cocoa framework. The whole program is developed in the Xcode integrated development environment (IDE). The plugin implements a fast heuristic constrained linear least squares algorithm (CLLS-H) for estimating the diffusion and kurtosis tensors, and offers the user the possibility of choosing which maps are to be generated, not only for standard DTI quantities such as Mean Diffusion (MD), Radial Diffusion (RD), Axial Diffusion (AD) and Fractional Anisotropy (FA), but also for DKI metrics: Mean Kurtosis (MK), Radial Kurtosis (RK) and Axial Kurtosis (AK). The plugin was subjected to both a qualitative and a semi-quantitative analysis, which yielded convincing results. A more accurate validation process is still being developed, after which, and with a few minor adjustments, the plugin shall become a valid option for DKI computation.
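For context, constrained-least-squares estimators of this kind fit the standard DKI signal model, in which the log-signal along a gradient direction is quadratic in the b-value; this is the published DKI model, restated here for reference rather than taken from the plugin's code:

$$\ln S(b) = \ln S_0 - b\,D_{app} + \tfrac{1}{6}\,b^2\,D_{app}^2\,K_{app}$$

where $D_{app}$ and $K_{app}$ are the apparent diffusivity and apparent kurtosis along the chosen direction, obtained from the diffusion and kurtosis tensors.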
Abstract:
The rapid growth of big cities has been evident since the 1950s, when the majority of the world population turned to living in urban areas rather than villages, seeking better job opportunities, higher-quality services and better lifestyle circumstances. This demographic transition from rural to urban is expected to keep increasing. Governments, especially in less developed countries, will face more challenges in different sectors, raising the importance of understanding the spatial pattern of growth for effective urban planning. This study aimed to detect, analyse and model urban growth in the Greater Cairo Region (GCR), one of the fastest-growing megacities in the world, using remote sensing data. Knowing the current and estimated urbanization situation in GCR will help decision makers in Egypt adjust their plans and develop new ones. These plans should focus on resource reallocation to overcome the problems arising in the future and to achieve sustainable development of urban areas, especially given the high percentage of illegal settlements built in recent decades. The study covered a period of 30 years, from 1984 to 2014, and the major transitions to urban land were modelled to predict future scenarios for 2025. Three satellite images with different time stamps (1984, 2003 and 2014) were classified using a Support Vector Machines (SVM) classifier, and land cover changes were then detected by applying a high-level mapping technique. The results were subsequently analyzed to obtain more accurate estimates of future urban growth in 2025 using the Land Change Modeler (LCM) embedded in the IDRISI software. Moreover, the spatial and temporal urban growth patterns were analyzed using statistical metrics computed in the FRAGSTATS software. The study achieved an overall classification accuracy of 96%, 97.3% and 96.3% for the 1984, 2003 and 2014 maps, respectively. Between 1984 and 2003, 19 179 hectares of vegetation and 21 417 hectares of desert changed to urban, while from 2003 to 2014 the transitions to urban from the two land cover classes were 16 486 and 31 045 hectares, respectively. The model results indicated that 14% of the vegetation and 4% of the desert in 2014 will turn into urban by 2025, representing 16 512 and 24 687 hectares, respectively.
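A minimal sketch of the pixel-wise SVM classification step, assuming `bands` is an (n_pixels, n_bands) array of spectral values and `labels` holds training classes; the study itself used dedicated remote-sensing software, and scikit-learn with placeholder data is substituted here purely for illustration:

```python
# Pixel-wise SVM land-cover classification sketch (illustrative data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
bands = rng.random((3000, 6))        # placeholder spectral values per pixel
labels = rng.integers(0, 3, 3000)    # placeholder classes: 0=urban, 1=vegetation, 2=desert

X_train, X_test, y_train, y_test = train_test_split(
    bands, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # common choices for imagery
clf.fit(X_train, y_train)

# Overall accuracy, analogous to the per-map accuracies reported above
print(f"overall accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```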
Abstract:
This paper offers a new approach to estimating time-varying covariance matrices in the framework of the diagonal-vech version of the multivariate GARCH(1,1) model. Our method is numerically feasible for large-scale problems, produces positive semidefinite conditional covariance matrices, and does not impose unrealistic a priori restrictions. We provide an empirical application in the context of international stock markets, comparing the new estimator with a number of existing ones.
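For reference, in the diagonal-vech specification each element of the conditional covariance matrix follows its own GARCH(1,1)-type recursion; a standard textbook statement of the model (notation assumed here, not taken from the paper):

$$h_{ij,t} = \omega_{ij} + \alpha_{ij}\,\varepsilon_{i,t-1}\,\varepsilon_{j,t-1} + \beta_{ij}\,h_{ij,t-1}, \qquad i,j = 1,\dots,N$$

Positive semidefiniteness of the resulting matrix $H_t$ is not automatic under this recursion, which is precisely the property the paper's estimator is designed to guarantee.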
Abstract:
The aim of this work project is to find a model able to accurately forecast the daily Value-at-Risk for the PSI-20 Index, independently of market conditions, in order to expand the empirical literature on the Portuguese stock market. Hence, two subsamples, representing more and less volatile periods, were modeled through unconditional and conditional volatility models (since volatility is what drives returns). All models were evaluated through Kupiec's and Christoffersen's tests, by comparing forecasts with actual results. Using an out-of-sample set of 204 observations, it was found that a GARCH(1,1) is an accurate model for our purposes.
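A minimal sketch of Kupiec's unconditional-coverage (POF) test mentioned above, which compares the number of VaR exceedances against the nominal coverage level via a likelihood-ratio statistic; the exceedance count in the example is illustrative, not a result from the study:

```python
# Kupiec proportion-of-failures (POF) backtest for VaR forecasts.
import numpy as np
from scipy.stats import chi2

def kupiec_pof(exceedances: int, observations: int, p: float) -> tuple[float, float]:
    """Return the LR statistic and its p-value (requires 0 < exceedances < observations)."""
    x, T = exceedances, observations
    pi_hat = x / T  # observed exceedance frequency
    # Log-likelihoods under the null coverage p and under pi_hat
    ll_null = (T - x) * np.log(1 - p) + x * np.log(p)
    ll_alt = (T - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)
    lr = -2.0 * (ll_null - ll_alt)
    return lr, 1.0 - chi2.cdf(lr, df=1)

# e.g. 12 exceedances over the 204 out-of-sample days at the 5% level
lr, pval = kupiec_pof(12, 204, 0.05)
print(f"LR = {lr:.3f}, p-value = {pval:.3f}")
```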
Abstract:
With the evolution of computational resources and the development of the constitutive models available for assessing the structural behavior of reinforced concrete elements, it has become increasingly common to resort to numerical models that account for material and geometric nonlinearity. Numerical simulations obtained with this type of computational model provide a complete history of the structural behavior, from the beginning of load application up to the total collapse of the structure. However, in zones of geometric discontinuity in reinforced concrete structures, the evolution of the cracking pattern is a relatively complex phenomenon whose numerical simulation represents a considerable challenge. The objective of this work is to verify the applicability of the Applied Element Method to the study of the development of cracking patterns in reinforced concrete walls under monotonic loading. A set of ten walls was analyzed, all with an opening that creates a zone of geometric discontinuity and, consequently, a more complex cracking pattern. Each wall has a different reinforcement detailing, making it possible to verify the reliability of the computational model. The numerical results were compared with experimental tests performed by Bounassar Filho [8], allowing conclusions to be drawn about the advantages and limitations of this method when applied to the study of reinforced concrete structures under monotonic loads.
Abstract:
Organizations are facing serious difficulties in retaining talent. Authors argue that Talent Management (TM) practices create beneficial outcomes for individuals and organizations. However, there is no research on the leaders' role in the functioning of these practices. This study examines how LMX and role modeling influence the impact that TM practices have on employees' trust in their organizations and on retention. The analysis of two questionnaires (Nt1=175; Nt2=107) indicated that TM reduced turnover intentions, via an increase in trust in the organization, only when role modeling was high, not when it was low. Therefore, leaders are crucial in the TM context and in sustaining a competitive advantage for organizations.
Abstract:
The main objective of this dissertation is the implementation of an architecture based on evolutionary algorithms for tuning the parameters of a fuzzy PID (Proportional-Integral-Derivative) controller, with the concept of closed-loop performance explicitly taken into account. The tuning of the fuzzy controller parameters is formulated as a constrained optimization problem, in which the cost function to be minimized is described in terms of closed-loop performance, with the system dynamics approximated by a nonlinear model. Since the incorporation of adaptation mechanisms for the membership functions is not common in existing optimization methodologies, this dissertation considers, beyond the usual tuning of the scaling factors, the simultaneous tuning of the scaling factors and membership functions. Experimental results on a benchmark system demonstrate the benefits of incorporating the membership functions in the offline optimization process. A second-order analytical method is also used as a reference, in order to compare the performance of a global optimization approach against a local one. Finally, an online approach is implemented, using the second-order analytical method, for the optimization of the scaling factors and membership functions.
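A highly simplified sketch of the evolutionary-tuning idea: a (1+lambda) evolution strategy minimizing an ITAE-style closed-loop cost for controller gains on a toy first-order plant. The dissertation tunes a fuzzy PID including membership functions; the plant, cost and mutation operator below are illustrative assumptions only:

```python
# (1+lambda) evolution strategy for closed-loop controller tuning (toy example).
import numpy as np

def closed_loop_cost(gains, dt=0.01, t_end=5.0):
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for k in range(int(t_end / dt)):
        err = 1.0 - y                      # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)                 # first-order plant: dy/dt = -y + u
        cost += (k * dt) * abs(err) * dt   # ITAE criterion
        prev_err = err
    return cost

rng = np.random.default_rng(1)
parent = np.array([1.0, 0.5, 0.05])        # initial (kp, ki, kd)
for gen in range(50):
    offspring = [np.abs(parent + rng.normal(0, 0.1, 3)) for _ in range(10)]
    parent = min(offspring + [parent], key=closed_loop_cost)

print("tuned gains:", parent, "cost:", closed_loop_cost(parent))
```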
Abstract:
Nowadays, many manufacturing and industrial systems have a diagnosis system on top of them, responsible for ensuring the lifetime of the system itself. This is achieved by performing both diagnosis and error recovery procedures in real production time on each of the individual parts of the system. Many paradigms are currently used for diagnosis. However, they still fail to meet all the requirements imposed by enterprises, making a different approach necessary. This is mostly the case for error recovery paradigms, since the great diversity present in today's industrial environment makes it highly unlikely that every single error can be fixed in real time, without stopping production. This work proposes a paradigm that is still relatively unknown in manufacturing: Artificial Immune Systems (AIS), which rely on bio-inspired algorithms and constitute a valid alternative to the paradigms currently in use. The proposed work is a multi-agent architecture that implements an Artificial Immune System based on bio-inspired algorithms. The main goal of this architecture is to find a resolution to the error currently detected by the system. The proposed architecture was tested using two different simulation environments, each meant to prove a different point of view, using different tests. These tests determine whether, as the research suggests, this paradigm is a promising alternative for the industrial environment. They also define what should be done to improve the current architecture and whether it should be applied in a decentralised system.
Abstract:
Polysaccharides are gaining increasing attention as potential environmentally friendly and sustainable building blocks in many fields of the (bio)chemical industry. The microbial production of polysaccharides is envisioned as a promising path, since higher biomass growth rates are possible and therefore higher productivities may be achieved compared to vegetable or animal polysaccharide sources. This Ph.D. thesis focuses on the modeling and optimization of the production of a particular microbial polysaccharide, namely extracellular polysaccharides (EPS) produced by the bacterial strain Enterobacter A47. Enterobacter A47 was found to be a metabolically versatile organism in terms of its adaptability to complex media, notably capable of achieving high growth rates in media containing glycerol byproduct from the biodiesel industry. However, the industrial implementation of this production process is still hampered by a largely unoptimized process. Kinetic rates in the bioreactor operation are heavily dependent on operational parameters such as temperature, pH, stirring and aeration rate. The increase of culture broth viscosity is a common feature of this culture and has a major impact on the overall performance. This fact complicates the mathematical modeling of the process, limiting the possibility to understand, control and optimize productivity. In order to tackle this difficulty, data-driven mathematical methodologies such as Artificial Neural Networks can be employed to incorporate additional process data to complement the known mathematical description of the fermentation kinetics. In this Ph.D. thesis, we have adopted such a hybrid modeling framework, which enabled the incorporation of temperature, pH and viscosity effects on the fermentation kinetics in order to improve the dynamical modeling and optimization of the process. A model-based optimization method was implemented that made it possible to design bioreactor optimal control strategies in the sense of EPS productivity maximization. It is also critical to understand EPS synthesis at the level of the bacterial metabolism, since the production of EPS is a tightly regulated process. Methods of pathway analysis provide a means to unravel the fundamental pathways and their controls in bioprocesses. In the present Ph.D. thesis, a novel methodology called Principal Elementary Mode Analysis (PEMA) was developed and implemented, making it possible to identify which cellular fluxes are activated under different conditions of temperature and pH. It is shown that differences in these two parameters affect the chemical composition of EPS, hence they are critical for the regulation of product synthesis. In future studies, the knowledge provided by PEMA could foster the development of metabolically meaningful control strategies that target the EPS sugar content and other product quality parameters.
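A minimal sketch of the hybrid (mechanistic + data-driven) idea: mass balances for biomass, substrate and product are integrated while the specific rates come from a data-driven function standing in for a trained neural network. The rate expressions and parameter values below are illustrative assumptions, not the thesis model of Enterobacter A47:

```python
# Hybrid bioprocess model sketch: mechanistic balances + data-driven rates.
import numpy as np
from scipy.integrate import solve_ivp

def specific_rates(T, pH, viscosity):
    # Placeholder for an ANN mapping operating conditions to kinetic rates
    # (mu: growth, qs: substrate uptake, qp: EPS production).
    mu = 0.25 * np.exp(-((T - 30.0) / 5.0) ** 2) / (1.0 + 0.1 * viscosity)
    return mu, 2.0 * mu, 0.8 * mu

def balances(t, y, T=30.0, pH=7.0):
    X, S, P = y  # biomass, substrate, EPS product
    mu, qs, qp = specific_rates(T, pH, viscosity=0.05 * P)
    if S <= 0:   # substrate exhausted: all rates stop
        mu = qs = qp = 0.0
    return [mu * X, -qs * X, qp * X]

sol = solve_ivp(balances, (0.0, 48.0), [0.1, 20.0, 0.0], max_step=0.1)
print("final biomass, substrate, EPS:", sol.y[:, -1].round(3))
```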
Abstract:
Doctoral Program in Computer Science
Abstract:
This work presents a model and a heuristic to solve the non-emergency patient transport (NEPT) service issues raised by the new rules recently established in Portugal. The model follows the same principle as the Team Orienteering Problem, selecting the patients to be included in the routes that achieve the maximum reduction in costs when compared with individual transportation. This model establishes the best sets of patients to be transported together. The model was implemented in AMPL and a compact formulation was solved using the NEOS Server. A heuristic procedure based on iteratively solving single-vehicle problems is also presented; this heuristic provides good results in terms of accuracy and computation time.
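A minimal sketch of the patient-grouping idea, recast as a set-partitioning model in PuLP: candidate groups are pre-enumerated with the cost saving of transporting each group together versus individually, and the solver picks compatible groups. The groups, savings and solver choice are illustrative assumptions; the actual study used an AMPL formulation solved on NEOS:

```python
# Set-partitioning sketch of grouping patients to maximize cost savings.
import pulp

patients = ["p1", "p2", "p3", "p4"]
groups = {  # candidate group -> saving vs. individual trips (illustrative)
    ("p1",): 0.0, ("p2",): 0.0, ("p3",): 0.0, ("p4",): 0.0,
    ("p1", "p2"): 12.0, ("p3", "p4"): 9.0, ("p1", "p2", "p3"): 15.0,
}

prob = pulp.LpProblem("nept_grouping", pulp.LpMaximize)
y = {g: pulp.LpVariable(f"y_{'_'.join(g)}", cat="Binary") for g in groups}

prob += pulp.lpSum(groups[g] * y[g] for g in groups)  # total saving
for p in patients:                                    # cover each patient exactly once
    prob += pulp.lpSum(y[g] for g in groups if p in g) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([g for g in groups if y[g].value() == 1])       # selected groups
```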
Abstract:
Hospitals nowadays collect vast amounts of data related to patient records. All these data hold valuable knowledge that can be used to improve hospital decision making. Data mining techniques aim precisely at the extraction of useful knowledge from raw data. This work describes the implementation of a medical data mining project based on the CRISP-DM methodology. Recent real-world data, from 2000 to 2013, related to inpatient hospitalization were collected from a Portuguese hospital. The goal was to predict generic hospital Length Of Stay based on indicators that are commonly available at the hospitalization process (e.g., gender, age, episode type, medical specialty). At the data preparation stage, the data were cleaned and variables were selected and transformed, leading to 14 inputs. Next, at the modeling stage, a regression approach was adopted in which six learning methods were compared: Average Prediction, Multiple Regression, Decision Tree, Artificial Neural Network ensemble, Support Vector Machine and Random Forest. The best learning model was obtained by the Random Forest method, which presents a high coefficient of determination (0.81). This model was then opened using a sensitivity analysis procedure, which revealed three influential input attributes: the hospital episode type, the physical service where the patient is hospitalized and the associated medical specialty. Such extracted knowledge confirms that the obtained predictive model is credible and has potential value for supporting the decisions of hospital managers.
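A minimal sketch of the winning regression step: a Random Forest fitted on tabular admission indicators to predict Length Of Stay and scored with the coefficient of determination. The synthetic data below stands in for the hospital dataset, which is not public:

```python
# Random Forest regression sketch for Length Of Stay prediction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.random((2000, 14))                  # 14 inputs, as in the study
y = 5.0 * X[:, 0] + 3.0 * X[:, 1] ** 2 + rng.normal(0, 0.3, 2000)  # synthetic LOS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print(f"R^2 = {r2_score(y_te, model.predict(X_te)):.2f}")

# Feature importances give a rough analogue of the sensitivity analysis
# used to identify the most influential input attributes.
print(np.argsort(model.feature_importances_)[::-1][:3])
```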