953 results for Linear Codes over Finite Fields
Abstract:
Adhesive bonding of the joints in multi-component structures is gaining momentum over welding, riveting and fastening. The availability of accurate damage models is vital for the design of bonded structures, to minimize design costs and time to market. Cohesive Zone Models (CZMs) have been used for fracture prediction in structures. The eXtended Finite Element Method (XFEM) is a recent improvement of the Finite Element Method (FEM) that relies on traction-separation laws similar to those of CZMs, but allows the growth of discontinuities within bulk solids along an arbitrary path, by enriching degrees of freedom. This work proposes and validates a damage law to model crack propagation in a thin layer of a structural epoxy adhesive using the XFEM. The fracture toughness in pure mode I (GIc) and the tensile cohesive strength (σn0) were determined by Double-Cantilever Beam (DCB) and bulk tensile tests, respectively, which made it possible to build the damage law. The XFEM simulations of the DCB tests accurately matched the experimental load-displacement (P-δ) curves, which validated the analysis procedure.
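As an illustration of how such a triangular law can be parameterized from the two measured properties, the sketch below recovers the separation at complete failure from the condition that the area under the traction-separation curve equals GIc. It is a minimal Python sketch with assumed property values and an assumed initial stiffness, not the values measured in this work.

```python
# Minimal sketch: triangular (bilinear) traction-separation law built from the
# mode I fracture toughness GIc and the tensile cohesive strength sigma_n0.
# Property values below are illustrative placeholders, not the measured ones.

GIc = 500.0          # mode I fracture toughness [J/m^2] (assumed)
sigma_n0 = 25.0e6    # tensile cohesive strength [Pa] (assumed)
K0 = 1.0e13          # initial (penalty) stiffness of the law [Pa/m] (assumed)

delta_0 = sigma_n0 / K0            # separation at damage onset
delta_f = 2.0 * GIc / sigma_n0     # failure separation: triangle area = GIc

def traction(delta):
    """Normal traction for a given opening, following the triangular law."""
    if delta <= delta_0:
        return K0 * delta                                          # elastic branch
    if delta < delta_f:
        return sigma_n0 * (delta_f - delta) / (delta_f - delta_0)  # linear softening
    return 0.0                                                     # complete failure

print(f"damage onset at {delta_0:.2e} m, complete failure at {delta_f:.2e} m")
```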
Abstract:
The structural integrity of multi-component structures is usually determined by the strength and durability of their joints. Adhesive bonding is often chosen over welding, riveting and bolting due to the reduction of stress concentrations, reduced weight penalty and easy manufacturing, amongst other advantages. In the past decades, the Finite Element Method (FEM) has been used for the simulation and strength prediction of bonded structures, by strength-of-materials or fracture-mechanics-based criteria. Cohesive-zone models (CZMs) have already proved to be an effective tool in modelling damage growth, overcoming some limitations of the aforementioned techniques. Despite this, they are still restricted to damage growth along predefined paths. The eXtended Finite Element Method (XFEM) is a recent improvement of the FEM, developed to allow the growth of discontinuities within bulk solids along an arbitrary path, by enriching degrees of freedom with special displacement functions, thus overcoming the main restriction of CZMs. These two techniques were tested to simulate adhesively bonded single- and double-lap joints. The comparative evaluation of the two methods showed their capabilities and/or limitations for this specific purpose.
Abstract:
Dual-phase functionally graded materials are a particular type of composite material whose properties are tailored to vary continuously, depending on the composition distribution of their two constituents, and whose use is increasing in the most diverse application fields. These materials are known to provide superior thermal and mechanical performance when compared to traditional laminated composites, precisely because of this continuous property variation, which enables, among other advantages, a smoother stress distribution profile. In this paper we study the influence of different homogenization schemes, namely the schemes due to Voigt, Hashin-Shtrikman and Mori-Tanaka, which can be used to obtain bound estimates for the material properties of particulate composite structures. To this end we also use a set of finite element models based on higher-order shear deformation theories and on the first-order theory. From the studies carried out, on linear static analyses and on free vibration analyses, it is shown that the bound estimates are as important as the deformation kinematics basis assumed to analyse these types of multifunctional structures. Concerning the homogenization schemes studied, it is shown that the Mori-Tanaka and Hashin-Shtrikman estimates lead to less conservative results when compared to the Voigt rule of mixtures.
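For reference, a minimal Python sketch of the Voigt and Mori-Tanaka estimates for a two-phase particulate composite is given below. Spherical inclusions and illustrative constituent properties are assumed; these are not the materials or values used in the paper, and the Hashin-Shtrikman bounds are omitted for brevity.

```python
# Minimal sketch of two homogenization estimates for a two-phase particulate
# composite: the Voigt rule of mixtures and the Mori-Tanaka scheme (spherical
# inclusions). Constituent properties are illustrative placeholders.

def kg_from_E_nu(E, nu):
    """Bulk and shear moduli from Young's modulus and Poisson's ratio."""
    return E / (3 * (1 - 2 * nu)), E / (2 * (1 + nu))

def E_from_kg(K, G):
    return 9 * K * G / (3 * K + G)

def voigt(E_m, E_i, f_i):
    """Voigt (rule of mixtures) estimate of Young's modulus."""
    return (1 - f_i) * E_m + f_i * E_i

def mori_tanaka(E_m, nu_m, E_i, nu_i, f_i):
    """Mori-Tanaka estimate of Young's modulus for spherical inclusions."""
    K_m, G_m = kg_from_E_nu(E_m, nu_m)
    K_i, G_i = kg_from_E_nu(E_i, nu_i)
    f_m = 1 - f_i
    K = K_m + f_i * (K_i - K_m) / (1 + f_m * (K_i - K_m) / (K_m + 4 * G_m / 3))
    zeta = G_m * (9 * K_m + 8 * G_m) / (6 * (K_m + 2 * G_m))
    G = G_m + f_i * (G_i - G_m) / (1 + f_m * (G_i - G_m) / (G_m + zeta))
    return E_from_kg(K, G)

# Example: metal matrix (assumed E = 70 GPa) with ceramic particles (assumed E = 380 GPa)
for f in (0.0, 0.25, 0.5):
    print(f"f_i={f:.2f}: Voigt {voigt(70e9, 380e9, f)/1e9:6.1f} GPa, "
          f"Mori-Tanaka {mori_tanaka(70e9, 0.3, 380e9, 0.25, f)/1e9:6.1f} GPa")
```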
Abstract:
This dissertation focuses on the fatigue study of a railway bridge with a composite girder deck belonging to a freight transport line. The case study concerns the railway bridge over the Sonho river, located on the Estrada de Ferro de Carajás in northeastern Brazil. Some of the largest freight trains in the world run on this line, with a length of about 3.7 km and axle loads above 300 kN. In a first stage, several methodologies for the fatigue analysis of steel railway bridges are presented. The computational tool FADBridge, developed in the MATLAB environment, is also described; it enables the systematic and efficient calculation of fatigue damage in structural details according to the Eurocode provisions. Next, the numerical methodologies used to perform the dynamic analyses of the bridge-train system are addressed, as well as the regulatory aspects to be taken into account in the design of railway bridges. The finite element model of the bridge was built using the ANSYS program. Based on this model, the modal parameters were obtained, namely the natural frequencies and mode shapes, and the importance of the track-deck composite effect and the influence of the nonlinear behaviour of the ballast were also analysed. The dynamic behaviour of the bridge was studied by means of a moving-loads methodology using the Train-Bridge Interaction (TBI) computational tool. The dynamic analyses were carried out for the passage of real freight and passenger trains and for the standard fatigue trains. In these analyses, the influence of global and local vibration modes, of the train load configurations and of increased running speed on the dynamic response of the bridge was studied. Finally, the fatigue behaviour of several structural details was assessed for the standard and real traffic scenarios. The influence of increased speed, train load configuration and structural degradation on the fatigue damage values and on the corresponding residual life was also analysed.
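To give a flavour of the Eurocode-style fatigue damage calculation mentioned above, the sketch below applies a Palmgren-Miner summation with a simplified single-slope S-N curve in the spirit of EN 1993-1-9. The detail category, stress-range spectrum and traffic volume are hypothetical, and the constant-amplitude fatigue limit and cut-off limit are deliberately ignored, so this is only an illustrative sketch, not the FADBridge implementation.

```python
# Minimal sketch of a Palmgren-Miner fatigue damage summation for a welded
# detail, using a simplified single-slope S-N curve (slope m = 3) anchored at
# the detail category delta_sigma_C at 2e6 cycles, in the spirit of EN 1993-1-9.
# Stress ranges and cycle counts below are hypothetical; the constant-amplitude
# fatigue limit and the cut-off limit of the code are deliberately ignored here.

detail_category = 71.0          # delta_sigma_C [MPa] at 2e6 cycles (assumed)
m = 3.0                         # S-N curve slope (single slope for simplicity)

def cycles_to_failure(stress_range):
    """Endurance N for a given stress range on the simplified S-N curve."""
    return 2e6 * (detail_category / stress_range) ** m

# Hypothetical stress-range spectrum from a rainflow count of one train passage
spectrum = [(60.0, 40), (45.0, 120), (30.0, 300)]   # (range [MPa], cycles)

damage_per_passage = sum(n / cycles_to_failure(s) for s, n in spectrum)
passages_per_year = 365 * 20                         # assumed traffic volume
annual_damage = damage_per_passage * passages_per_year
print(f"annual damage = {annual_damage:.4f}, "
      f"residual life = {1.0 / annual_damage:.1f} years")
```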
Abstract:
15th IEEE International Conference on Electronics, Circuits and Systems, Malta
Abstract:
The ecotoxicological response of the living organisms in an aquatic system depends on the physical, chemical and bacteriological variables, as well as the interactions between them. An important challenge to scientists is to understand the interaction and behaviour of the factors involved in a multidimensional process such as the ecotoxicological response. With this aim, multiple linear regression (MLR) and principal component regression were applied to the ecotoxicity bioassay response of Chlorella vulgaris and Vibrio fischeri in water collected at seven sites of the Leça river during five monitoring campaigns (February, May, June, August and September of 2006). The river water characterization included the analysis of 22 physicochemical and 3 microbiological parameters. The model that best fitted the data was MLR, which shows: (i) a negative correlation with dissolved organic carbon, zinc and manganese, and a positive one with turbidity and arsenic, regarding the C. vulgaris toxic response; (ii) a negative correlation with conductivity and turbidity and a positive one with phosphorus, hardness, iron, mercury, arsenic and faecal coliforms, concerning the V. fischeri toxic response. This integrated assessment may allow the evaluation of the effect of future pollution abatement measures on the water quality of the Leça River.
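A minimal sketch of the multiple linear regression step is shown below; the predictors and values are hypothetical stand-ins for the measured physicochemical and microbiological parameters of the Leça river campaigns.

```python
# Minimal sketch: fitting a multiple linear regression (MLR) of a toxicity
# response on a few water-quality predictors with ordinary least squares.
# The data below are hypothetical placeholders, not the Leça river measurements.
import numpy as np

# columns: dissolved organic carbon [mg/L], turbidity [NTU], arsenic [mg/L]
X = np.array([
    [2.1, 10.0, 0.001],
    [3.5, 25.0, 0.003],
    [1.8,  8.0, 0.001],
    [4.2, 30.0, 0.004],
    [2.9, 18.0, 0.002],
    [3.1, 22.0, 0.002],
])
y = np.array([35.0, 55.0, 28.0, 63.0, 47.0, 50.0])   # toxic response, e.g. % inhibition

A = np.column_stack([np.ones(len(y)), X])            # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)         # ordinary least squares
y_hat = A @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("intercept + coefficients:", np.round(coef, 3))
print("R^2 =", round(r2, 3))
```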
Abstract:
The associated production of a Higgs boson and a top-quark pair, $t\bar{t}H$, in proton-proton collisions is addressed in this paper for a center-of-mass energy of 13 TeV at the LHC. Dileptonic final states of $t\bar{t}H$ events with two oppositely charged leptons and four jets from the decays $t \to bW^+ \to b\ell^+\nu_\ell$, $\bar{t} \to \bar{b}W^- \to \bar{b}\ell^-\bar{\nu}_\ell$ and $h \to b\bar{b}$ are used. Signal events, generated with MadGraph5_aMC@NLO, are fully reconstructed by applying a kinematic fit. New angular distributions of the decay products as well as angular asymmetries are explored in order to improve discrimination of $t\bar{t}H$ signal events over the dominant irreducible background contribution, $t\bar{t}b\bar{b}$. Even after the full kinematic fit reconstruction of the events, the proposed angular distributions and asymmetries are still quite different in the $t\bar{t}H$ signal and the dominant background ($t\bar{t}b\bar{b}$).
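As a simple illustration of what an angular asymmetry means in this context, the sketch below computes A = (N(cos θ > 0) − N(cos θ < 0)) / N for two toy cos θ samples. Both the asymmetry definition and the distributions are generic assumptions for illustration, not the observables or simulated samples of the paper.

```python
# Minimal sketch: computing an angular asymmetry from a sample of cos(theta)
# values, A = (N(cos > 0) - N(cos < 0)) / (N(cos > 0) + N(cos < 0)).
# The toy distributions below stand in for reconstructed signal and background
# events; they are not the distributions of the paper.
import numpy as np

rng = np.random.default_rng(0)
cos_signal = np.clip(rng.normal(loc=0.15, scale=0.5, size=10000), -1, 1)
cos_background = np.clip(rng.normal(loc=0.0, scale=0.5, size=10000), -1, 1)

def asymmetry(cos_theta):
    n_plus = np.count_nonzero(cos_theta > 0)
    n_minus = np.count_nonzero(cos_theta < 0)
    return (n_plus - n_minus) / (n_plus + n_minus)

print("A(signal)     =", round(asymmetry(cos_signal), 3))
print("A(background) =", round(asymmetry(cos_background), 3))
```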
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The method recently introduced in reference 49 exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief review of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
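To make the linear setting concrete, here is a minimal Python sketch of constrained least-squares unmixing of a single pixel (nonnegativity plus sum-to-one), assuming the endmember signatures are known. The data are synthetic and the sum-to-one constraint is imposed with the usual augmented-row trick; this is an illustration of the generic technique, not the chapter's algorithms.

```python
# Minimal sketch: linear unmixing of one pixel y = M a + n under the abundance
# nonnegativity and sum-to-one constraints, with known endmember signatures M.
# Synthetic data; the sum-to-one constraint is enforced softly by appending a
# heavily weighted row of ones to the system (a common FCLS-style trick).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
L, p = 50, 3                           # number of bands, number of endmembers
M = rng.uniform(0.1, 1.0, size=(L, p)) # synthetic endmember signatures
a_true = np.array([0.6, 0.3, 0.1])     # true abundances (sum to one)
y = M @ a_true + 0.005 * rng.standard_normal(L)

delta = 1e3                            # weight of the sum-to-one row
M_aug = np.vstack([M, delta * np.ones((1, p))])
y_aug = np.append(y, delta)

a_hat, _ = nnls(M_aug, y_aug)          # nonnegative least squares
print("estimated abundances:", np.round(a_hat, 3), "sum =", round(a_hat.sum(), 3))
```

The dependence induced by the constant-sum constraint, which is the core difficulty for ICA and IFA and the motivation for the Dirichlet model, can also be seen directly: sampling abundance vectors from a Dirichlet distribution with arbitrary parameters and inspecting their correlations shows negative off-diagonal entries, so the fractions cannot be independent.

```python
# Minimal sketch: abundance fractions that are nonnegative and sum to one are
# necessarily dependent. Sampling them from a Dirichlet distribution (arbitrary
# parameters) and inspecting the correlations makes the dependence visible.
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([2.0, 3.0, 5.0])            # arbitrary Dirichlet parameters
A = rng.dirichlet(alpha, size=100000)        # rows: abundance vectors, sum to 1

print("row sums (min, max):", A.sum(axis=1).min(), A.sum(axis=1).max())
print("correlation matrix:\n", np.round(np.corrcoef(A, rowvar=False), 3))
# The off-diagonal correlations are negative: increasing one fraction must
# decrease the others, so the independence assumption behind ICA/IFA fails.
```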
Abstract:
Publicationes Mathematicae Debrecen
Abstract:
The aim is to examine the temporal trends of hip fracture incidence in Portugal by sex and age group, and to explore the relation with anti-osteoporotic medication. From the National Hospital Discharge Database, we selected, from 1st January 2000 to 31st December 2008, 77,083 hospital admissions (77.4% women) caused by osteoporotic hip fractures (low energy, patients over 49 years of age), with diagnosis codes 820.x of ICD-9-CM. The 2001 Portuguese population was used as the standard to calculate direct age-standardized incidence rates (ASIR) (per 100,000 inhabitants). Generalized additive and linear models were used to evaluate and quantify temporal trends of age-specific rates (AR), by sex. We identified 2003 as a turning point in the trend of the ASIR of hip fractures in women. After 2003, the ASIR in women decreased on average by 10.3 cases/100,000 inhabitants, 95% CI (−15.7 to −4.8), per 100,000 anti-osteoporotic medication packages sold. For women aged 65–69 and 75–79 we identified the same turning point. However, for women aged over 80, the year 2004 marked a change in the trend, from an increase to a decrease. Among the population aged 70–74, a linear decrease of the incidence rate (95% CI) was observed in both sexes, higher for women: −28.0% (−36.2 to −19.5) change vs −18.8% (−32.6 to −2.3). The abrupt turning point in the trend of the ASIR of hip fractures in women is compatible with an intervention, such as a medication. The trends were different according to sex and age group, but compatible with the pattern of bisphosphonate sales.
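For clarity, the direct age standardization behind the ASIR is sketched below with hypothetical counts: age-specific rates are weighted by the standard population shares (here standing in for the 2001 Portuguese population).

```python
# Minimal sketch of direct age standardization: the age-standardized incidence
# rate (ASIR) per 100,000 is the weighted mean of age-specific rates, with
# weights given by the standard (here: 2001 Portuguese) population structure.
# All counts below are hypothetical placeholders.

# (age group, fracture cases, person-years at risk, standard population)
data = [
    ("50-64",  300,  900_000, 1_800_000),
    ("65-79",  900,  700_000, 1_300_000),
    ("80+",   1200,  250_000,   450_000),
]

std_total = sum(std for *_, std in data)
asir = sum((cases / py) * 100_000 * (std / std_total)
           for _, cases, py, std in data)
print(f"ASIR = {asir:.1f} per 100,000")
```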
Abstract:
Recent developments in Hidden Markov Model (HMM) based speech synthesis have shown that this is a promising technology, fully capable of competing with other established techniques. However, some issues still lack a solution. Several authors report an over-smoothing phenomenon in both time and frequency, which decreases naturalness and sometimes intelligibility. In this work we present a new vowel intelligibility enhancement algorithm that uses a discrete Kalman filter (DKF) for tracking frame-based parameters. The inter-frame correlations are modelled by an autoregressive structure, which provides an underlying time-frame dependency and can improve time-frequency resolution. The system's performance has been evaluated using objective and subjective tests, and the proposed methodology has led to improved results.
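A minimal sketch of the kind of discrete Kalman filter described, a scalar frame parameter tracked with a first-order autoregressive state model, is given below. The AR coefficient and noise variances are arbitrary illustrative values, not those estimated in the work, and the observation model is deliberately the simplest possible.

```python
# Minimal sketch: a scalar discrete Kalman filter tracking a frame-based
# parameter whose inter-frame correlation is modelled by an AR(1) process:
#   x[k] = a * x[k-1] + w[k],   y[k] = x[k] + v[k].
# The AR coefficient and the noise variances are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.95, 0.01, 0.1            # AR coefficient, process and measurement noise

# Simulate a smooth parameter track and noisy frame-wise observations
n = 200
x_true = np.zeros(n)
for k in range(1, n):
    x_true[k] = a * x_true[k - 1] + rng.normal(0, np.sqrt(q))
y = x_true + rng.normal(0, np.sqrt(r), size=n)

# Discrete Kalman filter (predict / update)
x_hat, P = 0.0, 1.0
estimates = []
for yk in y:
    # predict
    x_pred = a * x_hat
    P_pred = a * P * a + q
    # update
    K = P_pred / (P_pred + r)        # Kalman gain
    x_hat = x_pred + K * (yk - x_pred)
    P = (1 - K) * P_pred
    estimates.append(x_hat)

print("measurement MSE:", round(np.mean((y - x_true) ** 2), 4))
print("filtered MSE:   ", round(np.mean((np.array(estimates) - x_true) ** 2), 4))
```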
Abstract:
Adhesive joints have been used in several fields and have countless practical applications. Owing to their easy and fast manufacture, single-lap joints (SLJ) are a very common configuration. Increased strength, weight reduction and corrosion resistance are some of the advantages this type of joint offers over traditional joining processes. However, the stress concentration at the ends of the overlap is one of the main drawbacks. Few accurate design techniques exist for the variety of joints found in real situations, which hinders the use of adhesive joints in structural applications. The present work compares different analytical and numerical methods for the strength prediction of SLJ with different overlap lengths (LO). The fundamental objective is to assess which method best predicts the strength of SLJ. Adhesive joints were produced between aluminium substrates using a brittle epoxy adhesive (Araldite® AV138), a moderately ductile epoxy adhesive (Araldite® 2015) and a ductile polyurethane adhesive (SikaForce® 7888). Different analytical methods and two numerical methods were considered: Cohesive Zone Models (CZM) and the eXtended Finite Element Method (XFEM), allowing a comparative analysis. The study provided a critical insight into the capabilities of each method depending on the characteristics of the adhesive used. The analytical methods work reasonably well only under very specific conditions. The CZM analysis with a triangular law proved to be a rather accurate method, except for highly ductile adhesives. On the other hand, the XFEM analysis proved to be a poorly suited technique, especially for mixed-mode damage growth.
Abstract:
Dissertation submitted to obtain the Master's degree in Civil Engineering – Structures profile
Abstract:
SUMMARY - The current challenge of Public Health is to ensure the financial sustainability of the health system. In a setting of scarce resources, economic analyses applied to health care delivery contribute to decision-making aimed at maximizing social welfare subject to a budget constraint. Portugal is a country with 10.6 million inhabitants (2011) and a high incidence and prevalence of stage 5 chronic kidney disease (CKD5): 234 patients per million population (pmp) and 1,600 patients/pmp, respectively. The growth of diseases associated with the causes of CKD, namely diabetes mellitus and arterial hypertension, anticipates a trend towards an increasing number of patients. In 2011, of the 17,553 patients on renal replacement therapy, 59% were on haemodialysis (HD) programmes in out-of-hospital dialysis centres, 37% were living with a functioning kidney graft and 4% were on peritoneal dialysis (SPN, 2011). The active waiting list for kidney transplantation (Tx) comprised 2,500 patients (SPN 2009). Kidney Tx is the best therapeutic modality in terms of improved survival, quality of life and cost-effectiveness, but eligibility for Tx and the supply of organs constrain this option. This research had two strands: i) to determine the incremental cost-utility ratio of kidney Tx compared with HD; ii) to assess the maximum deceased-donor capacity in Portugal, the characteristics and causes of death of potential donors at the national level, by hospital and by Gabinete Coordenador de Colheita e Transplantação (GCCT), and to analyse the performance of the organ procurement network for Tx. An observational/non-interventional, prospective, analytical study was carried out on a cohort of HD patients who underwent kidney Tx. The minimum follow-up was one year and the maximum three years. At the start of the study, sociodemographic and clinical data were collected from 386 HD patients eligible for kidney Tx. Health-related quality of life (HRQoL) was assessed in HD patients (time 0) and in transplanted patients at three, six and 12 months, and annually thereafter. Patients who returned to HD due to kidney graft failure were included. For its measurement, a population preference-based instrument, the EuroQol-5D, was used, which allows the subsequent calculation of QALYs. In a group of 82 patients, HRQoL on HD was assessed at two response times, allowing its evolution to be analysed. A cost-utility analysis of kidney Tx compared with HD was carried out from the societal perspective. Direct medical and non-medical costs and productivity changes were identified for HD and kidney Tx. The costs of organ procurement, selection of kidney Tx candidates and follow-up of living donors were included. Each transplanted patient was used as his or her own control on dialysis. The average annual cost on the chronic HD programme was assessed for the year preceding kidney Tx. Tx costs were assessed prospectively. The time horizon considered was the life cycle in both modalities. Discount rates of 0%, 3% and 5% were used to discount costs and QALYs, and one-way sensitivity analyses were performed. Between 2008 and 2010, 65 patients underwent kidney Tx. Health outcomes, including hospital admissions and the adverse effects of immunosuppression, as well as health resource consumption, were recorded prospectively.
Repeated-measures models were used to assess the evolution of HRQoL, and multiple regression models to analyse the association of HRQoL and transplant costs with the patients' baseline characteristics and clinical events. Compared with HD, utility improved by the 3rd month after Tx, and quality of life measured by the EQ-VAS scale improved at all observation times after kidney Tx. The average cost of HD was €32,567.57, considered uniform over time. The average cost of kidney Tx was €60,210.09 in the 1st year and €12,956.77 in subsequent years. The cost-utility ratio of kidney Tx vs chronic HD was €2,004.75/QALY. From a graft survival of two years and five months onwards, Tx was associated with cost savings. National Diagnosis-Related Groups (GDH) data were used in a retrospective study covering the deaths that occurred in 34 hospitals with organ procurement in 2006. A potential donor was defined as an individual aged 1-70 years whose death occurred in hospital and who met the suitability criteria for kidney donation. The association of potential donors with population and hospital characteristics was analysed. The performance of the organ procurement organizations was assessed by the conversion rate (ratio between potential and actual donors) and by the number of potential donors per million population at the national and regional levels and per GCCT. A total of 3,838 potential donors were identified, of whom 608 had International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes that most frequently progress to brain death. A logit model for grouped data identified age, the ratio of Intensive Care Unit beds to acute-care beds, the existence of a GCCT and of a Transplant Unit, and mortality from workplace accidents as predictive factors of the conversion of a potential donor into an actual donor, and the probability of that conversion was quantified from the logit model estimates. Organ donation should be assumed as a priority, and health authorities should ensure funding for hospitals with donation programmes, avoiding the waste of organs for transplantation as a scarce public good. Organ procurement should be considered a strategic option of hospital activity, oriented towards the organization and planning of services that maximize the conversion of potential donors into actual donors, including this criterion as a measure of the quality and effectiveness of hospital performance. The results of this study show that: 1) kidney Tx provides health gains, increased survival and quality of life, and cost savings; 2) in Portugal, the maximum achievable rate of conversion of potential cadaveric donors into actual donors is far from being reached. Investment in the organ procurement network for Tx is essential to ensure financial sustainability and to promote the quality, efficiency and equity of the health care provided in CKD5.
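Using the cost figures reported above, the break-even reasoning (cumulative transplant cost versus cumulative haemodialysis cost) can be sketched in a few lines. The linear accumulation of undiscounted annual costs is a simplifying assumption made only for illustration; the study itself discounts costs and QALYs.

```python
# Minimal sketch: break-even point between chronic haemodialysis (HD) and
# kidney transplantation (Tx) using the average costs reported in the study,
# assuming costs simply accumulate linearly over time (an illustrative
# simplification; the study itself discounts costs and QALYs).

hd_per_year = 32_567.57       # average annual HD cost (EUR)
tx_first_year = 60_210.09     # Tx cost in the 1st year (EUR)
tx_per_year = 12_956.77       # Tx cost in subsequent years (EUR)

def cumulative_hd(t_years):
    return hd_per_year * t_years

def cumulative_tx(t_years):
    if t_years <= 1.0:
        return tx_first_year * t_years
    return tx_first_year + tx_per_year * (t_years - 1.0)

# Solve cumulative_hd(t) == cumulative_tx(t) for t > 1
t_break_even = (tx_first_year - tx_per_year) / (hd_per_year - tx_per_year)
years = int(t_break_even)
months = round((t_break_even - years) * 12)
print(f"break-even after about {years} years and {months} months")
# About 2 years and 5 months, consistent with the graft survival threshold reported above.
```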
Abstract:
The theme of this dissertation is the finite element method applied to mechanical structures. A new finite element program is developed that, besides executing different types of structural analysis, also allows the calculation of the derivatives of structural performances using the continuum method of design sensitivity analysis, so that, in combination with the mathematical programming algorithms found in the commercial software MATLAB, structural optimization problems can be solved. The program is called EFFECT – Efficient Finite Element Code. The object-oriented programming paradigm, and specifically the C++ programming language, is used for program development. The main objective of this dissertation is to design EFFECT so that it can constitute, at this stage of development, the foundation for a program with analysis capabilities similar to those of other open-source finite element programs. In this first stage, 6 elements are implemented for linear analysis: 2-dimensional truss (Truss2D), 3-dimensional truss (Truss3D), 2-dimensional beam (Beam2D), 3-dimensional beam (Beam3D), triangular shell element (Shell3Node) and quadrilateral shell element (Shell4Node). The shell elements combine two distinct elements, one simulating the membrane behavior and the other simulating the plate bending behavior. The nonlinear analysis capability is also developed, combining the corotational formulation with the Newton-Raphson iterative method, but at this stage it is only available for problems modeled with Beam2D elements subject to large displacements and rotations, known as geometrically nonlinear problems. The design sensitivity analysis capability is implemented in two elements, Truss2D and Beam2D, which include the procedures and the analytic expressions for calculating derivatives of displacement, stress and volume performances with respect to 5 different types of design variables. Finally, a set of test examples was created to validate the accuracy and consistency of the results obtained from EFFECT, by comparing them with results published in the literature or obtained with the ANSYS commercial finite element code.
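As a flavour of the linear elements listed, here is a minimal sketch, in Python rather than EFFECT's C++, of the global stiffness matrix of a 2-dimensional truss element like Truss2D and of its assembly and solution for a two-bar structure. Geometry, section and load are arbitrary illustrative values.

```python
# Minimal sketch of a linear 2-dimensional truss element (in the spirit of the
# Truss2D element described, but written in Python rather than EFFECT's C++):
# element stiffness in global coordinates and assembly of a two-bar structure.
# Geometry and material data are arbitrary illustrative values.
import numpy as np

def truss2d_stiffness(xi, xj, E, A):
    """4x4 global stiffness matrix of a 2D truss element between nodes xi, xj."""
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    L = np.hypot(dx, dy)
    c, s = dx / L, dy / L
    return (E * A / L) * np.array([
        [ c * c,  c * s, -c * c, -c * s],
        [ c * s,  s * s, -c * s, -s * s],
        [-c * c, -c * s,  c * c,  c * s],
        [-c * s, -s * s,  c * s,  s * s],
    ])

nodes = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5]])   # node coordinates [m]
elements = [(0, 2), (1, 2)]                              # two bars meeting at node 2
E, A = 210e9, 1e-4                                       # steel, 1 cm^2 cross-section

K = np.zeros((2 * len(nodes), 2 * len(nodes)))
for ni, nj in elements:
    dofs = [2 * ni, 2 * ni + 1, 2 * nj, 2 * nj + 1]
    K[np.ix_(dofs, dofs)] += truss2d_stiffness(nodes[ni], nodes[nj], E, A)

# Fix nodes 0 and 1, apply a 10 kN downward load at node 2 and solve K u = f
free = [4, 5]
f = np.zeros(6)
f[5] = -10e3
u = np.linalg.solve(K[np.ix_(free, free)], f[free])
print("displacement of node 2 [m]:", u)
```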