932 results for Non-linear dynamic analysis
Abstract:
Assuming that nuclear matter can be treated as a perfect fluid, we study the propagation of perturbations in the baryon density. The equation of state is derived from a relativistic mean field model, which is a variant of the non-linear Walecka model. The expansion of the Euler and continuity equations of relativistic hydrodynamics around equilibrium configurations leads to differential equations for the density perturbation. We solve them numerically for linear and spherical perturbations and follow the propagation of the initial pulses. For linear perturbations we find single-soliton solutions and solutions with one or more solitons followed by "radiation". Depending on the equation of state, strong damping may occur. We also consider the evolution of perturbations in a medium without dispersive effects. In this case we observe the formation and breaking of shock waves. We study all these equations also for matter at finite temperature. Our results may be relevant for the analysis of RHIC data. They suggest that the shock waves formed in the quark-gluon plasma phase may survive and propagate in the hadronic phase. (C) 2009 Elsevier B.V. All rights reserved.
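Soliton solutions of this kind typically arise when the perturbative expansion reduces the hydrodynamic equations to a Korteweg-de Vries (KdV) equation; the specific KdV form below is an illustrative assumption, not the reduced equation of the paper. A minimal sketch verifying numerically that the single-soliton profile satisfies u_t + 6 u u_x + u_xxx = 0:

```python
import numpy as np

def soliton(x, t, c):
    """Single-soliton solution u(x, t) of u_t + 6 u u_x + u_xxx = 0,
    travelling to the right with speed c and amplitude c/2."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t)) ** 2

c, h = 1.0, 1e-3                      # soliton speed, finite-difference step
x = np.linspace(-10.0, 10.0, 2001)

# central finite differences of the analytic profile
u = soliton(x, 0.0, c)
u_t = (soliton(x, h, c) - soliton(x, -h, c)) / (2 * h)
u_x = (soliton(x + h, 0.0, c) - soliton(x - h, 0.0, c)) / (2 * h)
u_xxx = (soliton(x + 2 * h, 0.0, c) - 2 * soliton(x + h, 0.0, c)
         + 2 * soliton(x - h, 0.0, c) - soliton(x - 2 * h, 0.0, c)) / (2 * h ** 3)

# the PDE residual should vanish up to discretization error
residual = np.max(np.abs(u_t + 6 * u * u_x + u_xxx))
```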
Abstract:
This work presents a novel approach to increasing the recognition power of Multiscale Fractal Dimension (MFD) techniques applied to image classification. The proposal uses Functional Data Analysis (FDA) to enhance the precision of the MFD technique, yielding a more representative descriptor vector capable of recognizing and characterizing objects in an image more precisely. FDA is applied to signatures extracted with the Bouligand-Minkowski MFD technique to generate a descriptor vector from them. To evaluate the improvement obtained, an experiment using two object datasets was carried out: a dataset of character shapes (26 characters of the Latin alphabet) carrying different levels of controlled noise, and a dataset of fish image contours. A comparison with the well-known Fourier and wavelet descriptor methods was performed to verify the performance of the FDA method. The descriptor vectors were submitted to the Linear Discriminant Analysis (LDA) classification method, and the correct-classification rates of the descriptor methods were compared. The results demonstrate that FDA outperforms the literature methods (Fourier and wavelets) in processing the information extracted from the MFD signature. The proposed method can thus be considered an interesting choice for pattern recognition and image classification using fractal analysis.
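The Bouligand-Minkowski signature behind the method can be sketched in a few lines: dilate the shape's contour by growing radii, record the influence area at each radius, and derive the fractal descriptors from the log-log dilation curve. A minimal numpy sketch (the toy contour and the single-slope dimension estimate are illustrative simplifications of the multiscale technique):

```python
import numpy as np

def minkowski_signature(points, grid_shape, radii):
    """Dilation area A(r): number of grid pixels within distance r of the contour."""
    yy, xx = np.mgrid[:grid_shape[0], :grid_shape[1]]
    grid = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    # distance from every grid pixel to its nearest contour point
    d = np.min(np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2), axis=1)
    return np.array([np.count_nonzero(d <= r) for r in radii])

# toy contour: a straight segment, whose true fractal dimension is 1
seg = np.array([[32.0, 12.0 + i] for i in range(40)])
radii = np.arange(1, 9)
areas = minkowski_signature(seg, (64, 64), radii)

# fractal dimension estimate: D = 2 - slope of log A(r) vs log r
slope = np.polyfit(np.log(radii), np.log(areas), 1)[0]
D = 2.0 - slope
```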
Abstract:
2D electrophoresis is a well-known protein separation method that is extremely useful in the field of proteomics. Each spot in the image represents a protein accumulation, and the goal is to perform a differential analysis between pairs of images to study changes in protein content. It is thus necessary to register two images by finding spot correspondences. Although it may seem a simple task, the manual processing of this kind of image is generally very cumbersome, especially when strong variations between corresponding sets of spots are expected (e.g. strong non-linear deformations and outliers). To solve this problem, this paper proposes a new quadratic assignment formulation together with a correspondence estimation algorithm based on graph matching, which takes into account the structural information between the detected spots. Each image is represented by a graph, and the task is to find a maximum common subgraph. Successful experimental results using real data are presented, including an extensive comparative performance evaluation with ground-truth data. (C) 2010 Elsevier B.V. All rights reserved.
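The quadratic assignment idea can be illustrated on a toy scale: find the permutation of spots in one image that best preserves the pairwise distances measured in the other, so that structural information (not just spot positions) drives the matching. The brute-force search below stands in for the paper's graph-matching algorithm and only works for a handful of spots; all data are synthetic:

```python
import itertools
import numpy as np

def match_spots(a, b):
    """Brute-force quadratic assignment: permutation of spots in image B that
    best preserves the pairwise distance structure of image A."""
    da = np.linalg.norm(a[:, None] - a[None, :], axis=2)   # distances in A
    db = np.linalg.norm(b[:, None] - b[None, :], axis=2)   # distances in B
    best, best_cost = None, np.inf
    for perm in itertools.permutations(range(len(a))):
        p = list(perm)
        cost = np.sum((da - db[np.ix_(p, p)]) ** 2)        # structural mismatch
        if cost < best_cost:
            best, best_cost = p, cost
    return best

# image B = image A under a rotation plus noise, with spots shuffled
rng = np.random.default_rng(0)
a = rng.uniform(0.0, 100.0, size=(6, 2))
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
shuffle = rng.permutation(6)
b = (a @ R.T + rng.normal(0.0, 0.5, size=a.shape))[shuffle]

perm = match_spots(a, b)   # perm[i] is the spot in B matched to spot i in A
```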
Abstract:
Most studies involving statistical time-series analysis rely on assumptions of linearity, which, by their simplicity, facilitate parameter interpretation and estimation. However, the linearity assumption may be too restrictive for many practical applications. The implementation of nonlinear models in time-series analysis involves estimating a large set of parameters, frequently leading to overfitting problems. In this article, a predictability coefficient is estimated using a combination of nonlinear autoregressive models, and the use of support vector regression in this framework is explored. We illustrate the usefulness and interpretability of the results using electroencephalographic records of an epileptic patient.
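The gap between linear and nonlinear autoregression can be demonstrated on a deterministic but chaotic series. The sketch below uses RBF kernel ridge regression as a stand-in for support vector regression (an assumption for self-containedness; the paper uses SVR proper) and reports an out-of-sample R² as a simple predictability coefficient:

```python
import numpy as np

# deterministic but non-linear series: the logistic map x_{t+1} = 4 x_t (1 - x_t)
x = np.empty(401)
x[0] = 0.3
for t in range(400):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

X, y = x[:-1], x[1:]
Xtr, ytr, Xte, yte = X[:300], y[:300], X[300:], y[300:]

# linear AR(1) fit: nearly useless for this series
a, b = np.polyfit(Xtr, ytr, 1)
r2_lin = 1.0 - np.mean((yte - (a * Xte + b)) ** 2) / np.var(yte)

# nonlinear AR(1) via RBF kernel ridge regression (SVR stand-in)
gamma, lam = 20.0, 1e-4
K = np.exp(-gamma * (Xtr[:, None] - Xtr[None, :]) ** 2)
alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
Kte = np.exp(-gamma * (Xte[:, None] - Xtr[None, :]) ** 2)
r2_nl = 1.0 - np.mean((yte - Kte @ alpha) ** 2) / np.var(yte)
```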
Abstract:
Background: The relationship between CETP and postprandial hyperlipemia is still unclear. We verified the effects of varying plasma CETP activities on postprandial lipemia and precocious atherosclerosis in asymptomatic adult women. Methods: Twenty-eight women, selected from a healthy population sample (n = 148), were classified according to three CETP levels, all statistically different: CETP deficiency (CETPd <= 4.5%, n = 8), high activity (CETPi >= 23.8%, n = 6) and controls (CTL, CETP >= 4.6% and <= 23.7%, n = 14). After a 12 h fast they underwent an oral fat tolerance test (40 g of fat per m² of body surface area) for 8 hours. TG, TG-rich lipoprotein (TRL), cholesterol and TRL-TG measurements (AUC, AUIC, AR, RR and late peaks) and comparisons were performed at all time points. Lipases and phospholipid transfer protein (PLTP) were determined. The correlation between carotid atherosclerosis (c-IMT) and postprandial parameters was determined. CETP TaqIB and I405V and ApoE-epsilon 3/epsilon 2/epsilon 4 polymorphisms were examined. To elucidate the regulation of increased lipemia in CETPd, a multiple linear regression analysis was performed. Results: In the CETPi and CTL groups, CETP activity was respectively 9- and 5.3-fold higher than in the CETPd group. Concentrations of all HDL fractions and ApoA-I were higher in the CETPd group, and clearance was delayed, as demonstrated by modified lipemia parameters (AUC, AUIC, RR, AR, late peaks and meal response patterns). LPL or HL deficiencies were not observed. No genetic determinants of CETP deficiency or of postprandial lipemia were found. Correlations with c-IMT in the CETPd group indicated postprandial pro-atherogenic associations. In CETPd, the multivariate regression analysis (model A) showed that CETP was largely and negatively predicted by VLDL-C lipemia (R² = 92%) and much less by TG, LDL-C, ApoA-I, phospholipids and non-HDL-C.
CETP (model B) mainly influenced the increment in ApoB-100-containing lipoproteins (R² = 85%, negatively) and phospholipids (R² = 13%) at the 6th-hour point. Conclusion: The moderate CETP deficiency phenotype included a paradoxically high HDL-C and its subfractions (as described earlier), positive associations with c-IMT, a postprandial VLDL-C increment negatively predicting CETP activity, and CETP activity inversely regulating the increment in ApoB-100-containing lipoproteins. We hypothesize that the enrichment of the TG content in triglyceride-rich ApoB-containing lipoproteins and in TG-rich remnants increases the lipoproteins' competition for active lipolysis sites, reducing their catabolism and resulting in postprandial lipemia with atherogenic consequences.
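The AUC and incremental-AUC (AUIC) parameters used throughout the abstract are simple trapezoidal quantities. A sketch with made-up triglyceride values (not data from the study):

```python
import numpy as np

# postprandial triglyceride (TG) curve from an 8 h oral fat tolerance test;
# the numbers are illustrative, not data from the study
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])             # hours after the fat load
tg = np.array([100.0, 150.0, 180.0, 160.0, 120.0])  # mg/dL

# total area under the curve by the trapezoidal rule
auc = np.sum(0.5 * (tg[1:] + tg[:-1]) * np.diff(t))
# incremental area: response above the fasting (t = 0) baseline
auic = auc - tg[0] * (t[-1] - t[0])
```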
Abstract:
A new electrocatalytically active porphyrin nanocomposite material was obtained by electropolymerization of the meso-tetra(4-sulphonatophenyl)porphyrinate manganese(III) complex (MnTPPS) in alkaline solutions containing sub-micromolar concentrations of silver chloride. The modified glassy carbon electrodes efficiently oxidize hydrazine at 10 mV versus Ag/AgCl, dramatically decreasing the overpotential of conventional carbon electrodes. The analytical characteristics of this amperometric sensor coupled with the batch injection analysis (BIA) technique were explored. A wide linear dynamic range (2.5 × 10⁻⁷ to 2.5 × 10⁻⁴ mol L⁻¹), good repeatability (R.S.D. = 0.84%, n = 30), low detection (3.1 × 10⁻⁸ mol L⁻¹) and quantification (1.0 × 10⁻⁷ mol L⁻¹) limits, as well as a very fast sampling frequency (60 determinations per hour), were achieved. (c) 2007 Elsevier B.V. All rights reserved.
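The figures of merit quoted above come from a standard calibration workflow: fit current vs concentration over the linear range, then take LOD = 3·s_blank/slope and LOQ = 10·s_blank/slope. A sketch with synthetic numbers (the sensitivity and noise level are assumptions, not the sensor's actual data):

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic calibration data for an amperometric sensor (illustrative numbers):
# current (µA) vs analyte concentration (µmol/L), linear with small noise
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
current = 0.8 * conc + 0.05 + rng.normal(0.0, 0.02, conc.size)

slope, intercept = np.polyfit(conc, current, 1)

# detection and quantification limits from the blank standard deviation
sd_blank = 0.02
lod = 3.0 * sd_blank / slope    # limit of detection
loq = 10.0 * sd_blank / slope   # limit of quantification
```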
Abstract:
To identify chemical descriptors that distinguish Cuban from non-Cuban rums, analyses of 44 samples of rum from 15 different countries are described. To provide the chemical descriptors, analyses of the mineral fraction, phenolic compounds, caramel, alcohols, acetic acid, ethyl acetate, ketones, and aldehydes were carried out. The analytical data were treated with the following chemometric methods: principal component analysis (PCA), partial least squares-discriminant analysis (PLS-DA), and linear discriminant analysis (LDA). These analyses indicated 23 analytes as relevant chemical descriptors for the separation of the rums into two distinct groups. Clustering of the rum samples through PCA led to an accumulated variance of 70.4% in the first three principal components, with isoamyl alcohol, n-propyl alcohol, copper, iron, 2-furfuraldehyde (furfuraldehyde), phenylmethanal (benzaldehyde), epicatechin, and vanillin used as chemical descriptors. By applying the PLS-DA technique to the whole set of analytical data, the following analytes were selected as descriptors: acetone, sec-butyl alcohol, isobutyl alcohol, ethyl acetate, methanol, isoamyl alcohol, magnesium, sodium, lead, iron, manganese, copper, zinc, 4-hydroxy-3,5-dimethoxybenzaldehyde (syringaldehyde), methanal (formaldehyde), 5-hydroxymethyl-2-furfuraldehyde (5-HMF), acetaldehyde, 2-furfuraldehyde, 2-butenal (crotonaldehyde), n-pentanal (valeraldehyde), iso-pentanal (isovaleraldehyde), benzaldehyde, 2,3-butanedione monoxime, acetylacetone, epicatechin, and vanillin.
By applying the LDA technique, a model was developed, and the following analytes were selected as descriptors: ethyl acetate, sec-butyl alcohol, n-propyl alcohol, n-butyl alcohol, isoamyl alcohol, isobutyl alcohol, caramel, catechin, vanillin, epicatechin, manganese, acetaldehyde, 4-hydroxy-3-methoxybenzoic acid, 2-butenal, 4-hydroxy-3,5-dimethoxybenzoic acid, cyclopentanone, acetone, lead, zinc, calcium, barium, strontium, and sodium. This model allowed the discrimination of Cuban rums from the others with 88.2% accuracy.
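The PCA step reported above (cumulative variance captured by the first three components) reduces to an SVD of the autoscaled sample-by-analyte table. A sketch on a mock table with the same shape (44 samples × 23 descriptors; all numbers are synthetic, not the rum data):

```python
import numpy as np

rng = np.random.default_rng(3)

# mock chemometric table: 44 samples x 23 descriptors driven by 3 latent factors
latent = rng.normal(size=(44, 3))
loadings = rng.normal(size=(3, 23))
X = latent @ loadings + 0.1 * rng.normal(size=(44, 23))

# PCA on autoscaled (mean 0, unit variance) data via SVD
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)   # explained-variance ratio per component
cum3 = explained[:3].sum()            # variance captured by the first 3 PCs
scores = Z @ Vt.T                     # sample coordinates in PC space
```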
Abstract:
A new composite electrode based on multiwall carbon nanotubes (MWCNT) and silicone rubber (SR) was developed and applied to the determination of propranolol in pharmaceutical formulations. The effect of using MWCNT/graphite mixtures in different proportions was also investigated. Cyclic voltammetry and electrochemical impedance spectroscopy were used for the electrochemical characterization of the different electrode compositions. Propranolol was determined using MWCNT/SR 70% (m/m) electrodes, with linear dynamic ranges up to 7.0 µmol L⁻¹ by differential pulse voltammetry and up to 5.4 µmol L⁻¹ by square wave voltammetry, with LODs of 0.12 and 0.078 µmol L⁻¹, respectively. Analyses of commercial samples agreed with results obtained by the official spectrophotometric method. The electrode is mechanically robust and presented reproducible results and a long useful life.
Abstract:
A literature survey and a theoretical study were performed to characterize residential chimney conditions for flue gas flow measurements. The focus is on Pitot-static probes, to give a sufficient basis for the development and calibration of a velocity-pressure-averaging probe suitable for the continuous dynamic (i.e. non-steady-state) measurement of the low flow velocities present in residential chimneys. The flow conditions do not meet the requirements set in ISO 10780 and ISO 3966 for Pitot-static probe measurements, and those methods and their uncertainties are not valid. The flow velocities in residential chimneys from a heating boiler under normal operating conditions are shown to be so low that, under some conditions, they void the inviscid-flow assumption that justifies the use of the quadratic Bernoulli equation. A non-linear, Reynolds-number-dependent calibration coefficient correcting for the viscous effects is needed to avoid significant measurement errors. The wide range of flow velocity during normal boiler operation also causes the flow type to change from laminar, across the laminar-turbulent transition region, to fully turbulent, resulting in significant changes of the velocity profile during dynamic measurements. In addition, the short duct lengths (and changes of flow direction and duct shape) used in practice mean that the measurements are made in the hydrodynamic entrance region, where the flow velocity profiles are most likely neither symmetrical nor fully developed. A measurement method insensitive to velocity profile changes is thus needed if the flow velocity profile cannot otherwise be determined or predicted with reasonable accuracy over the whole measurement range.
Because of particulate matter and condensing fluids in the flue gas, it is beneficial if the probe can be constructed so that it can easily be taken out for cleaning, and equipped with a locking mechanism to always ensure the same alignment in the duct without affecting the calibration. The literature implies that there may be a significant time lag in the measurement of low flow rates due to viscous effects in the internal impact pressure passages of Pitot probes, and the significance of this in the discussed application should be studied experimentally. The differential pressures measured with Pitot-static probes in residential chimney flows are so low that the calibration and stated uncertainties of commercially available pressure transducers are not adequate. The pressure transducers should be calibrated specifically for the application, preferably in combination with the probe, and the significance of all the different error sources should be investigated carefully. Care should also be taken with the temperature measurement, e.g. by averaging several sensors, as significant temperature gradients may be present in flue gas ducts.
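A Reynolds-number-dependent calibration coefficient makes the velocity equation implicit: v = C(Re)·√(2Δp/ρ) with Re itself depending on v, so a fixed-point iteration is the natural solution. The correction form C(Re) = 1/√(1 + k/Re) and all numbers below are illustrative assumptions, not a calibrated probe model:

```python
import numpy as np

def pitot_velocity(dp, rho, mu, d_probe, k=6.6, tol=1e-10):
    """Solve v = C(Re) * sqrt(2*dp/rho) with Re = rho*v*d/mu by fixed-point
    iteration. C(Re) = 1/sqrt(1 + k/Re) is an assumed low-Re viscous
    correction, not a calibrated coefficient."""
    v = np.sqrt(2.0 * dp / rho)          # inviscid Bernoulli first guess
    for _ in range(100):
        re = rho * v * d_probe / mu
        v_new = np.sqrt(2.0 * dp / rho) / np.sqrt(1.0 + k / re)
        if abs(v_new - v) < tol:
            break
        v = v_new
    return v

# rough residential-chimney flue gas values (illustrative)
dp, rho, mu, d = 2.0, 0.85, 2.1e-5, 0.008   # Pa, kg/m3, Pa*s, m
v = pitot_velocity(dp, rho, mu, d)
```

The viscous correction always pulls the result below the inviscid Bernoulli velocity, which is why ignoring it overestimates low flows.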
Abstract:
The objective of this work is the analysis of concrete gravity dams from the construction phase until their full entry into service. First, the construction phase is analysed, where the fundamental problem is the thermal stresses arising from the heat of hydration. The finite element method is employed to solve the heat transfer and stress problems. The influence of layered construction is introduced by redefining the finite element mesh immediately after the placement of each concrete layer. Special attention is given to the problem of cracking in plain concrete structures. Some usual models are presented, and their efficiency is discussed. Smeared crack models have been preferred, owing to the several drawbacks of discrete formulations. These models, however, give results that depend on the finite element mesh, and some additional consideration must be made to correct these distortions. Usually, one attempts to correct this problem by adopting a reduced tensile strength defined as a function of the fracture energy of the material. In this work, it is shown that this procedure is not satisfactory, and a new formulation for the analysis of large concrete structures is proposed. The stress analysis during the construction stage of the dam is performed with an ageing viscoelastic constitutive model for the concrete. Because of ageing, the stiffness matrix of the structure varies in time and must be redefined and factorized at every instant. This leads to a large computational effort, above all when the dam is built in many layers. To avoid this drawback, an iterative procedure is adopted that allows the stiffness matrix to be redefined at only a few reference ages. In a second stage of the analysis, the dam is subjected to hydrostatic pressure and to seismic excitation.
The dynamic analysis is carried out considering the motion of the coupled dam-reservoir-foundation system. The earthquake is treated as a non-stationary stochastic process, and the safety of the structure is determined with respect to the main failure modes.
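The heat-of-hydration problem of the construction phase can be illustrated on a drastically reduced model: a 1D slab with a decaying volumetric heat source and faces held at ambient temperature, marched with explicit finite differences. The material data and hydration-heat curve below are rough textbook-scale assumptions, not the thesis's dam data, and the full work uses finite elements with layered mesh redefinition rather than this toy scheme:

```python
import numpy as np

# 1D explicit finite differences for a freshly cast concrete slab:
# dT/dt = alpha * d2T/dx2 + q(t) / (rho*c), faces held at ambient temperature
alpha = 1.0e-6          # thermal diffusivity, m^2/s
rho_c = 2.4e6           # volumetric heat capacity, J/(m^3 K)
q_tot = 9.0e7           # total hydration heat per unit volume, J/m^3
tau = 2.0 * 86400.0     # hydration-heat time constant, s
L, nx = 1.0, 21         # slab thickness (m) and grid size
dx = L / (nx - 1)
dt = 600.0              # satisfies the stability limit dt <= dx^2 / (2*alpha)

T = np.full(nx, 20.0)   # placement temperature, deg C
peak, t = T.max(), 0.0
while t < 5.0 * 86400.0:                               # simulate 5 days
    q = (q_tot / tau) * np.exp(-t / tau)               # decaying heat release
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2     # interior Laplacian
    T[1:-1] += dt * (alpha * lap + q / rho_c)
    T[0] = T[-1] = 20.0                                # ambient boundary faces
    t += dt
    peak = max(peak, T.max())                          # track the thermal peak
```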
Abstract:
Fundamentally, this work performs a linear elastic analysis of curved asymmetric bridges or beams of thin-walled open cross-section, with physical and geometric properties and radius of curvature constant along the centroidal axis. To this end, the differential equations of VLASOV are used, considering the coupling between the deformations in the vertical, transverse and axial directions and in torsion. An appropriate numerical integration method (central finite differences) was used to solve the system of four partial differential equations. The analysis is basically divided into two types: DYNAMIC and STATIC. Both are also used in the determination of the impact coefficient (C.M.D.). The former concerns both the determination of the basic dynamic characteristics (natural frequencies and corresponding vibration modes) and the determination of the dynamic response of the beam, in stresses and deformations, for arbitrary moving loads. Beams with any combination of boundary conditions, including pinned and clamped edges in the three bending directions and in torsion, are considered. The results of the theoretical analysis, obtained with computer programs implemented on a microcomputer (static analysis) and on the B-6700 computer (dynamic analysis), are compared both with the technical literature and with experimental results, showing good agreement.
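The central-finite-difference machinery used for the coupled VLASOV system can be shown on a simpler, uncoupled test problem: the static deflection of a simply supported beam, EI·w'''' = q with w = w'' = 0 at both ends, where the discrete solution can be checked against the classical closed form. All numerical values are illustrative:

```python
import numpy as np

# simply supported beam under uniform load, solved with central differences
E_I, q, L, n = 2.0e7, 1.0e4, 10.0, 40      # N*m^2, N/m, m, intervals
h = L / n
m = n - 1                                  # number of interior nodes

# pentadiagonal fourth-difference operator, stencil [1 -4 6 -4 1] / h^4
A = (np.diag(np.full(m, 6.0)) + np.diag(np.full(m - 1, -4.0), 1)
     + np.diag(np.full(m - 1, -4.0), -1) + np.diag(np.full(m - 2, 1.0), 2)
     + np.diag(np.full(m - 2, 1.0), -2))
A[0, 0] = A[-1, -1] = 5.0   # ghost-node elimination enforcing w'' = 0 at the ends

w = np.linalg.solve(A / h**4, np.full(m, q / E_I))
w_max = w.max()
w_exact = 5.0 * q * L**4 / (384.0 * E_I)   # classical maximum deflection
```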
Abstract:
This article makes two contributions. First, a method for evaluating non-linear regressions for forecasting intraday stock returns in the Brazilian market is discussed and applied, with the objective of maximizing the return of a simulated buy-and-sell portfolio. Second, regressions using kernel functions associated with nearest-neighbour partitioning of the sample are performed. Some of the independent variables used are technical indicators, whose parameters are optimized within the estimation sample. The results achieved are positive and, in a quartile-by-quartile analysis, surpass the results produced by a linear autoregression benchmark model.
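Kernel regression with nearest-neighbour partitioning can be sketched as Nadaraya-Watson smoothing with an adaptive bandwidth set by the distance to the k-th nearest neighbour. The synthetic "indicator vs return" data below are illustrative, not the article's market data:

```python
import numpy as np

rng = np.random.default_rng(5)

def knn_kernel_regression(x_train, y_train, x_eval, k=20):
    """Nadaraya-Watson regression with a Gaussian kernel whose bandwidth at
    each evaluation point is the distance to the k-th nearest neighbour."""
    preds = np.empty(len(x_eval))
    for i, x0 in enumerate(x_eval):
        d = np.abs(x_train - x0)
        h = np.sort(d)[k - 1] + 1e-12          # adaptive k-NN bandwidth
        w = np.exp(-0.5 * (d / h) ** 2)        # Gaussian kernel weights
        preds[i] = np.sum(w * y_train) / np.sum(w)
    return preds

# synthetic returns with a non-linear dependence on a technical indicator
x = rng.uniform(-2.0, 2.0, 400)
y = np.sin(2.0 * x) + rng.normal(0.0, 0.2, 400)
x_new = np.linspace(-1.5, 1.5, 50)
y_hat = knn_kernel_regression(x, y, x_new, k=25)
mse = np.mean((y_hat - np.sin(2.0 * x_new)) ** 2)   # error vs the true signal
```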
Abstract:
Several corporate finance studies treat the costs associated with adjusting firms' capital structure as irrelevant both in form and in magnitude. This study empirically analysed the influence of adjustment costs on the dynamics of capital structure adjustments of Brazilian listed companies from 1999 to 2007. Leverage was examined under three different scenarios, considering the presence of fixed costs, proportional costs, and a combination of fixed and proportional costs, through simulations using a reduced-form model of capital structure. A non-parametric analysis of the sample then revealed that firms display dynamic, but not continuous, behaviour in their financing decisions when adjusting their capital structure. A duration model proved adequate for measuring the time interval between firms' capital structure adjustments. The results are highly relevant and support the theory that firms dynamically rebalance their capital structure around an optimal range. However, the adjustments do not occur immediately, and the persistence of shocks to the capital structure is due mostly to adjustment costs rather than to a possible indifference to capital structure. This work is a pioneering study of capital structure adjustment costs in the Brazilian market and opens space for discussing the optimal behaviour around the capital structure of national companies.
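The mechanism by which fixed adjustment costs produce lumpy, infrequent rebalancing around an optimal range can be shown with a toy simulation: leverage drifts randomly away from a target, and the firm only rebalances when the deviation leaves a band. All parameters are illustrative, not estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(11)

# toy dynamics: leverage drifts away from a target; with a fixed adjustment
# cost the firm rebalances only when the deviation leaves a band
target, band, sigma, n = 0.5, 0.15, 0.03, 500
lev = target
n_adjust, gaps, last = 0, [], 0
for t in range(1, n + 1):
    lev += rng.normal(0.0, sigma)          # shock to the capital structure
    if abs(lev - target) > band:           # fixed cost -> lumpy adjustment
        lev = target                       # rebalance back to the target
        n_adjust += 1
        gaps.append(t - last)              # spell length between adjustments
        last = t

# mean spell between adjustments: the object a duration model would estimate
duration = float(np.mean(gaps)) if gaps else float("inf")
```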
Abstract:
This work aims to apply non-linear modelling to the Brazilian Gross Domestic Product. To this end, the existence of non-linearity in the data-generating process was tested with the methodology suggested by Castle and Hendry (2010). The test consists of verifying the persistence of the non-linear regressors in the unrestricted linear model. The series is then modelled with a threshold autoregressive model, using the general-to-specific approach for model selection. The Autometrics algorithm is used to choose the non-linear model. The results indicate that Brazil's Gross Domestic Product is better explained by a non-linear model with three regime changes, which occur in the early 1990s, which was in fact a very volatile period. Non-linear modelling has the potential for dating cycles; however, the results found were not sufficient for such an analysis.
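The threshold autoregression at the heart of the modelling can be sketched by simulating a two-regime SETAR process and recovering the threshold by grid search over candidate values, fitting one autoregression per regime and minimizing the total sum of squared residuals. The data are simulated, not the GDP series, and this is the textbook estimator rather than the Autometrics selection used in the work:

```python
import numpy as np

rng = np.random.default_rng(2)

# simulate a two-regime threshold autoregression (SETAR), true threshold at 0
n = 500
y = np.zeros(n)
for t in range(1, n):
    phi = 0.7 if y[t - 1] <= 0.0 else -0.5
    y[t] = phi * y[t - 1] + rng.normal(0.0, 0.5)

x, z = y[:-1], y[1:]   # lagged value and target

def ssr_at(thr):
    """Total SSR when one AR(1) (through the origin) is fitted per regime."""
    ssr = 0.0
    for mask in (x <= thr, x > thr):
        a = np.sum(x[mask] * z[mask]) / np.sum(x[mask] ** 2)
        ssr += np.sum((z[mask] - a * x[mask]) ** 2)
    return ssr

# grid search over interior sample quantiles of the threshold variable
cands = np.quantile(x, np.linspace(0.15, 0.85, 71))
thr_hat = cands[np.argmin([ssr_at(c) for c in cands])]
lo, hi = x <= thr_hat, x > thr_hat
phi_lo = np.sum(x[lo] * z[lo]) / np.sum(x[lo] ** 2)   # should be near 0.7
phi_hi = np.sum(x[hi] * z[hi]) / np.sum(x[hi] ** 2)   # should be near -0.5
```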
Abstract:
This paper presents new methodology for making Bayesian inference about dynamic models for exponential family observations. The approach is simulation-based and makes use of Markov chain Monte Carlo techniques. A Metropolis-Hastings algorithm is combined with the Gibbs sampler in repeated use of an adjusted version of normal dynamic linear models. Different alternative schemes are derived and compared. The approach is fully Bayesian in obtaining posterior samples for state parameters and unknown hyperparameters. Illustrations with real data sets with sparse counts and missing values are presented. Extensions to accommodate general distributions for observations and disturbances, intervention, non-linear models and multivariate time series are outlined.
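The Metropolis-Hastings building block can be shown on the smallest exponential-family example: random-walk Metropolis for the log-rate of Poisson counts under a normal prior. This scalar model is a stand-in for the state-space samplers developed in the paper, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Poisson counts with log-rate theta, prior theta ~ N(0, 10^2)
counts = rng.poisson(lam=np.exp(1.0), size=50)

def log_post(theta):
    loglik = np.sum(counts * theta - np.exp(theta))   # Poisson log-likelihood
    return loglik - 0.5 * theta ** 2 / 100.0          # plus normal log-prior

theta, samples = 0.0, []
lp = log_post(theta)
for _ in range(6000):
    prop = theta + rng.normal(0.0, 0.2)               # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:          # MH accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)

post_mean = np.mean(samples[1000:])                   # discard burn-in
```

With a weak prior the posterior mean should sit close to the maximum-likelihood value log(mean of the counts).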