14 results for Institute for Numerical Analysis (U.S.)
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
A previously developed model is used to numerically simulate real clinical cases of the surgical correction of scoliosis. This model consists of one-dimensional finite elements with spatial deformation in which (i) the column is represented by its axis; (ii) the vertebrae are assumed to be rigid; and (iii) the deformability of the column is concentrated in springs that connect the successive rigid elements. The metallic rods used for the surgical correction are modeled by beam elements with linear elastic behavior. To obtain the forces at the connections between the metallic rods and the vertebrae, geometrically non-linear finite element analyses are performed. The tightening sequence determines the magnitude of the forces applied to the patient's column, and it is desirable to keep those forces as small as possible. In this study, a Genetic Algorithm optimization is applied to this model in order to determine the sequence that minimizes the corrective forces applied during the surgery. This amounts to finding the optimal permutation of the integers 1, ..., n, n being the number of vertebrae involved. As such, we are faced with a combinatorial optimization problem isomorphic to the Traveling Salesman Problem. The fitness evaluation requires one computationally intensive Finite Element Analysis per candidate solution; thus, a parallel implementation of the Genetic Algorithm is developed.
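For concreteness, a minimal sketch of such a permutation-encoded Genetic Algorithm with parallel fitness evaluation follows; the surrogate objective max_corrective_force stands in for the computationally intensive finite element analysis, which is not reproduced here, and all parameter values are illustrative assumptions.

```python
# Minimal sketch of a permutation GA with parallel fitness evaluation.
# max_corrective_force is a hypothetical surrogate: in the study, each
# evaluation is one geometrically non-linear finite element analysis.
import random
from multiprocessing import Pool

def max_corrective_force(perm):
    # Placeholder objective; NOT the paper's FEA-based force computation.
    return sum(abs(a - b) for a, b in zip(perm, perm[1:]))

def order_crossover(p1, p2):
    # OX crossover: copy a slice from p1, fill remaining genes in p2's order.
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in p1[i:j]]
    for idx in list(range(i)) + list(range(j, n)):
        child[idx] = fill.pop(0)
    return child

def mutate(perm, rate=0.2):
    # Swap mutation preserves the permutation property.
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def ga(n=12, pop_size=40, generations=100):
    pop = [random.sample(range(1, n + 1), n) for _ in range(pop_size)]
    with Pool() as pool:  # one FEA per candidate -> evaluate in parallel
        for _ in range(generations):
            fitness = pool.map(max_corrective_force, pop)
            elite = [p for _, p in sorted(zip(fitness, pop))][:pop_size // 2]
            pop = elite + [mutate(order_crossover(random.choice(elite),
                                                  random.choice(elite)))
                           for _ in range(pop_size - len(elite))]
        fitness = pool.map(max_corrective_force, pop)
    return min(zip(fitness, pop))

if __name__ == "__main__":
    best_force, best_sequence = ga()
    print(best_force, best_sequence)
```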
Abstract:
An improved class of Boussinesq systems of arbitrary order, using a wave surface elevation and velocity potential formulation, is derived. Dissipative effects and wave generation due to a time-dependent varying seabed are included; thus, high-order source functions are considered. To reduce the order of the system while maintaining some dispersive characteristics of the higher-order models, an extra O(μ^(2n+2)) term (n ∈ ℕ) is included in the velocity potential expansion. We introduce a nonlocal continuous/discontinuous Galerkin FEM with inner penalty terms to calculate the numerical solutions of the improved fourth-order models. The discretization of the spatial variables is made using continuous P2 Lagrange elements. A predictor-corrector scheme with an initialization given by an explicit Runge-Kutta method is also used for the time-variable integration. Moreover, a CFL-type condition is deduced for the linear problem with a constant bathymetry. To demonstrate the applicability of the model, we considered several test cases. Improved stability is achieved.
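The abstract does not name the particular predictor-corrector pair, so the following is a generic sketch only: a fourth-order Adams-Bashforth-Moulton pair, with the start-up steps supplied by classical Runge-Kutta, for a semi-discrete system u' = f(t, u). The choice of ABM4 is an assumption of this sketch, and the Boussinesq spatial discretization is not reproduced.

```python
# Generic predictor-corrector time integration with explicit Runge-Kutta
# initialization (Adams-Bashforth-Moulton of order 4); a sketch, not the
# paper's solver.
import numpy as np

def rk4_step(f, t, u, h):
    k1 = f(t, u)
    k2 = f(t + h / 2, u + h * k1 / 2)
    k3 = f(t + h / 2, u + h * k2 / 2)
    k4 = f(t + h, u + h * k3)
    return u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def abm4(f, t0, u0, h, n_steps):
    t = t0 + h * np.arange(n_steps + 1)
    u = [np.asarray(u0, dtype=float)]
    for k in range(3):                       # RK4 supplies the start-up values
        u.append(rk4_step(f, t[k], u[k], h))
    fs = [f(t[k], u[k]) for k in range(4)]
    for k in range(3, n_steps):
        # Adams-Bashforth predictor (explicit)
        up = u[k] + h / 24 * (55 * fs[k] - 59 * fs[k - 1]
                              + 37 * fs[k - 2] - 9 * fs[k - 3])
        # Adams-Moulton corrector (one fixed-point sweep)
        fp = f(t[k + 1], up)
        u.append(u[k] + h / 24 * (9 * fp + 19 * fs[k]
                                  - 5 * fs[k - 1] + fs[k - 2]))
        fs.append(f(t[k + 1], u[-1]))
    return t, np.array(u)
```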
Abstract:
We present a new dynamical approach to Blumberg's equation, a family of unimodal maps. These maps are proportional to Beta(p, q) probability density functions. Using the symmetry of the Beta(p, q) distribution and symbolic dynamics techniques, a new concept of mirror symmetry is defined for this family of maps. Kneading theory is used to analyze the effect of such symmetry in the presented models. The main result proves that two mirror-symmetric unimodal maps have the same topological entropy. Different population dynamics regimes are identified as the intrinsic growth rate is modified: extinctions, stabilities, bifurcations, chaos and the Allee effect. To illustrate our results, we present a numerical analysis demonstrating the monotonicity of the topological entropy with the variation of the intrinsic growth rate, the existence of isentropic sets in the parameter space, and mirror symmetry.
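As a numerical sketch of one standard way to estimate the topological entropy of such a map, the growth rate of the lap number of iterates (Milnor-Thurston) can be counted on a grid; the Beta(2, 2)-proportional map r·x·(1−x) used below is an assumed example of the family, not a case taken from the paper.

```python
# Estimate topological entropy as the growth rate of the lap number:
# h_top = lim (1/n) log(laps of the n-th iterate).
import numpy as np

def beta_map(x, r, p=2.0, q=2.0):
    # Map proportional to a Beta(p, q) density; p = q = 2 gives r*x*(1-x).
    return r * x**(p - 1) * (1 - x)**(q - 1)

def lap_entropy(r, n=12, grid=400_000):
    x = np.linspace(0.0, 1.0, grid)
    for _ in range(n):                  # evaluate the n-th iterate on the grid
        x = beta_map(x, r)
    d = np.diff(x)
    laps = 1 + np.count_nonzero(np.sign(d[1:]) != np.sign(d[:-1]))
    return np.log(laps) / n             # entropy estimate

print(lap_entropy(4.0))                 # full logistic map: expect ~ log 2
```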
Abstract:
In stair nested designs with u factors we have u steps and a_1, ..., a_u "active" levels. We would have a_1 observations with different levels for the first factor, each of them nesting a single level of each of the remaining factors; next, a_2 observations with level a_1 + 1 for the first factor and distinct levels for the second factor, each nesting a fixed level of each of the remaining factors, and so on. The number of level combinations is therefore a_1 + ... + a_u. In meta-analysis, the joint treatment of different experiments is considered, and joining the corresponding models may be useful to carry out that analysis. In this work we join L models with stair nesting.
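A small sketch of this layout follows, with an assumed level coding (earlier factors held at one extra level per step, later factors fixed at a single level); it confirms the total of a_1 + ... + a_u level combinations.

```python
# Enumerate the level combinations of a stair nested design; the concrete
# level coding is an illustrative assumption, not the paper's notation.
def stair_nested(a):
    u = len(a)
    runs = []
    for i in range(u):
        for level in range(1, a[i] + 1):
            combo = []
            for j in range(u):
                if j < i:
                    combo.append(a[j] + 1)  # earlier factor at one extra level
                elif j == i:
                    combo.append(level)     # active factor runs over a_i levels
                else:
                    combo.append(1)         # later factors fixed at one level
            runs.append(tuple(combo))
    return runs

design = stair_nested([3, 2, 2])
print(len(design), design)   # 3 + 2 + 2 = 7 level combinations
```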
Abstract:
This study aimed to determine and evaluate the diagnostic accuracy of visual screening tests for detecting vision loss in the elderly. It is defined as a study of diagnostic performance. The diagnostic accuracy of five visual tests (near convergence point, near accommodation point, stereopsis, contrast sensitivity and Amsler grid) was evaluated by means of receiver operating characteristic (ROC) curves, sensitivity, specificity, and positive and negative likelihood ratios (LR+/LR−). Visual acuity was used as the reference standard. A sample of 44 institutionalized elderly people, with a mean age of 76.7 years (±9.32), was collected. The curves of contrast sensitivity and stereopsis are the most accurate (areas under the curve of 0.814, p = 0.001, 95% CI [0.653; 0.975], and 0.713, p = 0.027, 95% CI [0.540; 0.887], respectively). The scores with the best diagnostic validity for the stereopsis test were 0.605 (sensitivity 0.87, specificity 0.54; LR+ 1.89, LR− 0.24) and 0.610 (sensitivity 0.81, specificity 0.54; LR+ 1.75, LR− 0.36). The score with the highest diagnostic validity for the contrast sensitivity test was 0.530 (sensitivity 0.94, specificity 0.69; LR+ 3.04, LR− 0.09). The contrast sensitivity and stereopsis tests proved to be clinically useful in detecting vision loss in the elderly.
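A sketch of the accuracy computations named above (ROC AUC, sensitivity, specificity, LR+/LR−) follows, on synthetic data; the scores, their distributions and the 0.6 cut-off are illustrative assumptions, not the study's clinical data.

```python
# Diagnostic-accuracy metrics on synthetic data (illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
vision_loss = rng.integers(0, 2, size=44)                   # reference standard
score = 0.5 + 0.25 * vision_loss + rng.normal(0, 0.25, 44)  # e.g. a test score

print("AUC =", roc_auc_score(vision_loss, score))

cut = 0.6                                   # illustrative cut-off
positive = score >= cut
se = positive[vision_loss == 1].mean()      # sensitivity: TP / (TP + FN)
sp = (~positive)[vision_loss == 0].mean()   # specificity: TN / (TN + FP)
print(f"Se={se:.2f}  Sp={sp:.2f}  LR+={se/(1-sp):.2f}  LR-={(1-se)/sp:.2f}")
```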
Abstract:
The aging of the Portuguese population is characterized by an increase in the number of individuals older than 65 years. Preventable visual loss in older persons is an important public health problem. Tests used for vision screening should have a high degree of diagnostic validity, confirmed by means of clinical trials. The primary aim of a screening program is the early detection of visual diseases. Between 20% and 50% of older people in the UK have undetected reduced vision, and in most cases it is correctable. Elderly patients do not receive a systematic eye examination unless a problem arises with their glasses or vision loss is suspected. This study aimed to determine and evaluate the diagnostic accuracy of visual screening tests for detecting vision loss in the elderly. Furthermore, it intends to define the ability to classify subjects affected by vision loss as positive and subjects not affected as negative. The ideal vision screening method should have high sensitivity and specificity for the early detection of risk factors. It should also be low-cost and easy to implement in all geographic and socioeconomic regions. Sensitivity is the ability of an examination to identify the presence of a given disease, and specificity is the ability of the examination to identify the absence of a given disease. It was not an aim of this study to detect abnormalities that affect visual acuity; the aim was to find out which test is best for identifying any vision loss.
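For reference, these notions correspond to the standard definitions in terms of true positives (TP), false negatives (FN), true negatives (TN) and false positives (FP):

```latex
\mathrm{Se} = \frac{TP}{TP + FN}, \qquad
\mathrm{Sp} = \frac{TN}{TN + FP}, \qquad
LR^{+} = \frac{\mathrm{Se}}{1 - \mathrm{Sp}}, \qquad
LR^{-} = \frac{1 - \mathrm{Se}}{\mathrm{Sp}}
```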
Abstract:
This work addresses the development of the body of the Ecological Electric Vehicle (Veículo Eléctrico Ecológico, VEECO) using computer-aided technologies. Since it is not possible to cover the whole subject of computer-aided technologies associated with the development of a car body, this work focuses on the process of obtaining a valid digital model and on the study of the aerodynamic performance of the body. A valid digital model is the basis of any development process based on computer-aided technologies. Accordingly, in a first stage, techniques and methodologies were applied and developed that allow a car body to be developed from its design phase up to a digital CAD model. These cover data conversion and import, reverse engineering, CAD construction/reconstruction in CATIA V5, and the preparation/correction of CAD models for numerical analysis. In a second stage, the external aerodynamics of the body was studied using the computational fluid dynamics (CFD) tool Flow Simulation from CosmosFloworks, integrated into SolidWorks 2010. In connection with the aerodynamic study, and given the great importance of validating the numerical results with experimental data, a dimensional analysis was carried out that makes scaled experimental tests possible, together with the analysis of the experimental results obtained.
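As a textbook illustration of the kind of relation such a dimensional analysis yields (an assumption of this note, not a result quoted from the thesis): testing a scale model in the same fluid requires matching the Reynolds number, which fixes the test speed of the model (subscript m) relative to the full-scale vehicle (subscript f):

```latex
Re = \frac{\rho V L}{\mu}, \qquad
Re_m = Re_f \;\Longrightarrow\; V_m = V_f \, \frac{L_f}{L_m}
\quad (\text{same fluid: } \rho, \mu \text{ fixed})
```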
Abstract:
The aim of this work is to use the MANCOVA model to study the influence of the phenotype of an enzyme (acid phosphatase) and a genetic factor (haptoglobin genotype) on two dependent variables, the activity of acid phosphatase (ACP1) and the body mass index (BMI). A general linear model is therefore used, namely a multivariate analysis of covariance (two-way MANCOVA). The covariate is the age of the subject; it works as a control variable for the independent factors, serving to reduce the error term in the model. The main results showed that only the ACP1 phenotype has a significant effect on the activity of ACP1, while the covariate has a significant effect on both dependent variables. The univariate analysis showed that the ACP1 phenotype accounts for about 12.5% of the variability in the activity of ACP1. The covariate accounts for about 4.6% of the variability in the activity of ACP1 and 37.3% of the variability in the BMI.
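A minimal sketch of such a two-way MANCOVA using statsmodels follows; the file name and column names (acp1_activity, bmi, acp1_phenotype, haptoglobin, age) are hypothetical stand-ins for the study's data.

```python
# Two-way MANCOVA: two dependent variables, two categorical factors,
# age as the covariate (column names are assumptions).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("acp1_study.csv")   # hypothetical data file

mancova = MANOVA.from_formula(
    "acp1_activity + bmi ~ C(acp1_phenotype) + C(haptoglobin) + age",
    data=df,
)
print(mancova.mv_test())   # Pillai's trace, Wilks' lambda, etc. per term
```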
Abstract:
Scientific dissertation submitted to obtain the degree of Master in Civil Engineering, specialization area of Structures.
Abstract:
Crossed classification models are applied in many investigations, either taking into consideration the interactions between all factors or, alternatively, excluding all interactions, in which case only the main effects and the error term are considered. In this work we use commutative Jordan algebras to study the algebraic structure of these designs and to obtain similar designs in which only some of the interactions are considered. We finish by presenting the expressions of the variance component estimators.
Abstract:
Beam-like structures are among the most common components in engineering practice, and single-side damage is often encountered. In this study, single-side damage in a free-free beam is analysed numerically with three different finite element models, namely solid, shell and beam models, to demonstrate their performance in simulating real structures. As in the experiment, damage is introduced into one side of the beam, and natural frequencies are extracted from the simulations and compared with experimental and analytical results. Mode shapes are also analysed with the modal assurance criterion. The simulations reveal a good performance of all three models in extracting natural frequencies; in the intact state, the solid model performs better than the shell model, which in turn performs better than the beam model. For damaged states, the natural frequencies captured from the solid model show more sensitivity to damage severity than those from the shell model, while the shell model performs similarly to the beam model in distinguishing damage. The main contribution of this paper is a comparison of three finite element models against experimental data as well as analytical solutions. The finite element results show relatively good performance.
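The modal assurance criterion itself is a standard computation; a short sketch follows, with phi_a and phi_b as (n_dof × n_modes) real mode-shape matrices from, for example, a model and the experiment (the inputs here are assumptions, not the paper's data).

```python
# Modal Assurance Criterion between two sets of real mode shapes:
# MAC(i, j) = |phi_a_i . phi_b_j|^2 / ((phi_a_i . phi_a_i)(phi_b_j . phi_b_j))
import numpy as np

def mac(phi_a, phi_b):
    num = np.abs(phi_a.T @ phi_b) ** 2
    den = np.outer(np.sum(phi_a * phi_a, axis=0),
                   np.sum(phi_b * phi_b, axis=0))
    return num / den

# Values near 1 on the diagonal indicate well-correlated mode pairs.
```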
Abstract:
Master's final project carried out at the Laboratório Nacional de Engenharia Civil (LNEC) to obtain the degree of Master in Civil Engineering from the Instituto Superior de Engenharia de Lisboa, within the protocol between ISEL and LNEC.
Abstract:
In the context of a renormalizable supersymmetric SO(10) Grand Unified Theory, we consider the fermion mass matrices generated by the Yukawa couplings to a 10 ⊕ 120 ⊕ 126-bar representation of scalars. We perform a complete investigation of the possibilities of imposing flavour symmetries in this scenario; the purpose is to reduce the number of Yukawa coupling constants in order to identify potentially predictive models. We have found that there are only 14 inequivalent cases of Yukawa coupling matrices, out of which 13 cases are generated by Z_n symmetries, with suitable n, and one case is generated by a Z_2 × Z_2 symmetry. A numerical analysis of the 14 cases reveals that only two of them, dubbed A and B in the present paper, allow good fits to the experimentally known fermion masses and mixings.
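For orientation, a schematic of the resulting mass matrices in the standard form used in the SO(10) literature (hedged here, not quoted from this paper): the couplings Y_10 and Y_126 are symmetric, Y_120 is antisymmetric, and, with v denoting the relevant vacuum expectation values,

```latex
M_d = v^{d}_{10} Y_{10} + v^{d}_{120} Y_{120} + v^{d}_{126} Y_{126}, \qquad
M_e = v^{d}_{10} Y_{10} + v^{\ell}_{120} Y_{120} - 3\, v^{d}_{126} Y_{126},
\qquad
Y_{10} = Y_{10}^{T}, \quad Y_{126} = Y_{126}^{T}, \quad Y_{120} = -Y_{120}^{T}
```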
Abstract:
This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
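In generic notation (the symbols are this note's, not necessarily the paper's), the model stated in the abstract reads: each pixel spectrum y_i is a linear mixture of the K endmember signatures in the mixing matrix M, with abundances α_i constrained to the probability simplex and modeled by a mixture of Dirichlet densities:

```latex
\mathbf{y}_i = \mathbf{M}\,\boldsymbol{\alpha}_i + \mathbf{n}_i, \qquad
\alpha_{ik} \ge 0, \quad \sum_{k=1}^{K} \alpha_{ik} = 1, \qquad
p(\boldsymbol{\alpha}_i) = \sum_{q=1}^{Q} \epsilon_q \,
\mathrm{Dir}(\boldsymbol{\alpha}_i \mid \boldsymbol{\theta}_q)
```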