963 results for Dynamic Models
Abstract:
This paper presents a dynamic stability analysis of symmetrically laminated FGM rectangular plates with general out-of-plane support conditions, subjected to a uniaxial periodic in-plane load and undergoing uniform temperature change. The theoretical formulation is based on Reddy's third-order shear deformation plate theory and accounts for the temperature dependence of material properties. A semi-analytical Galerkin-differential quadrature approach converts the governing equations into a linear system of Mathieu-Hill equations, from which the boundary points of the unstable regions are determined by Bolotin's method. Free vibration and bifurcation buckling are also discussed as special cases. Numerical results are presented in dimensionless tabular and graphical form for laminated plates with FGM layers made of silicon nitride and stainless steel. The influences of parameters such as material composition, layer thickness ratio, temperature change, static load level, and boundary constraints on dynamic stability, buckling, and vibration frequencies are examined in detail through parametric studies.
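The discretized problem described above takes the standard Mathieu-Hill form; the following sketch uses generic notation (mass matrix M, stiffness matrix K, geometric matrix S, static and dynamic load amplitudes P_s and P_d, excitation frequency θ), not necessarily the paper's own symbols:

```latex
\mathbf{M}\,\ddot{\mathbf{q}} + \left[\mathbf{K} - \left(P_s + P_d\cos\theta t\right)\mathbf{S}\right]\mathbf{q} = \mathbf{0}
```

On the boundaries of the principal instability region, Bolotin's method seeks periodic solutions of period \(4\pi/\theta\); retaining the first harmonic leads to the eigenvalue condition

```latex
\det\!\left[\mathbf{K} - \left(P_s \pm \tfrac{1}{2}P_d\right)\mathbf{S} - \tfrac{\theta^{2}}{4}\,\mathbf{M}\right] = 0
```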
Abstract:
Borderline hypertension (BH) has been associated with an exaggerated blood pressure (BP) response during laboratory stressors. However, the incidence of target organ damage in this condition and its relation to BP hyperreactivity remain unsettled. Thus, we assessed the Doppler echocardiographic profile of a group of BH men (N = 36), classified by office BP measurements, who showed an exaggerated BP response in the cycloergometric test. A group of normotensive men (NT, N = 36) with a normal BP response during the cycloergometric test served as control. To assess vascular function and reactivity, all subjects were submitted to the cold pressor test. Before Doppler echocardiography, the BP profile of all subjects was evaluated by 24-h ambulatory BP monitoring. All subjects from the NT group presented normal monitored BP levels. In contrast, 19 subjects from the original BH group presented normal monitored BP levels and 17 presented elevated monitored BP levels. In the NT group all Doppler echocardiographic indexes were normal. All subjects from the original BH group presented normal left ventricular mass and geometrical pattern. However, in the subjects with elevated monitored BP levels, fractional shortening was greater, isovolumetric relaxation time was longer, and the early-to-late flow velocity ratio was reduced in relation to subjects from the original BH group with normal monitored BP levels (P < 0.05). These subjects also presented an exaggerated BP response during the cold pressor test. These results support the notion of an integrated pattern of cardiac and vascular adaptation during the development of hypertension.
Abstract:
This work investigated the problem of modeling the dispersion of odorant compounds in the presence of obstacles (cubic and complex-shaped) under neutral atmospheric stability. Numerical modeling based on the transport equations (CFD) was employed, as well as algebraic models based on the Gaussian plume (AERMOD, CALPUFF and FPM). Wind-tunnel and field experimental data were used to validate the model results and evaluate their performance. To include the effects of atmospheric turbulence on dispersion, two different sub-grid models associated with Large Eddy Simulation (LES) were investigated (dynamic Smagorinsky and WALE) and, to include obstacle effects on dispersion in the Gaussian models, the PRIME model was employed. The use of PRIME was also proposed for the FPM as an innovation. Overall, the results indicate that CFD/LES is a useful tool for investigating the dispersion and impact of odorant compounds in the presence of obstacles, and also for the development of Gaussian models. The results also indicate that the proposed FPM, with obstacle effects included via PRIME, is a very useful tool for odor dispersion modeling owing to its simplicity and easy setup compared with more complex models such as CFD, and even with the regulatory models AERMOD and CALPUFF. The great advantage of the FPM is the possibility of estimating the intermittency factor and the peak-to-mean (P/M) ratio, parameters useful for assessing odor impact. The results obtained in this work indicate that the determination of the dispersion parameters for the plume segments, as well as the long-time parameters near the source and the obstacle in the FPM, can be improved, and CFD simulations can be used as a development tool for this purpose.
Keywords: odor control, dispersion, computational fluid dynamics, mathematical modeling, fluctuating Gaussian plume modeling, large eddy simulation (LES).
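The Gaussian models compared in this work (AERMOD, CALPUFF, FPM) all build on the classical steady-state point-source plume solution; the sketch below is a minimal, generic implementation with hypothetical linear dispersion coefficients, not the parameterization of any of the cited models:

```python
import math

def gaussian_plume(x, y, z, q, u, stack_height, a=0.08, b=0.06):
    """Steady-state Gaussian plume concentration at receptor (x, y, z).

    q: emission rate (g/s); u: wind speed (m/s) along x;
    a, b: illustrative coefficients for the lateral/vertical dispersion
    parameters sigma_y, sigma_z (hypothetical values, not a regulatory
    stability-class scheme).
    """
    sigma_y = a * x          # crude linear growth with downwind distance
    sigma_z = b * x
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    # Vertical term with ground reflection (image source at -H).
    vertical = (math.exp(-(z - stack_height)**2 / (2 * sigma_z**2))
                + math.exp(-(z + stack_height)**2 / (2 * sigma_z**2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

The obstacle treatments discussed in the abstract (PRIME) modify the plume rise and the dispersion parameters in the building wake; this sketch shows only the flat-terrain baseline they start from.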
Abstract:
The number of municipalities infested by Aedes aegypti in the State of Espírito Santo has been gradually increasing, leading to high dengue incidence rates over the years. Despite attempts to combat the disease, it has become one of the State's major public health concerns. This study describes the dynamics of the disease's expansion in the State based on the association between environmental and population variables, using data processed with geoprocessing techniques. The data sources were vector mosquito infestation and the disease incidence coefficient, the State's intermunicipal road distances, the altitude of the municipalities, and geoclimatic variables (temperature and water sufficiency), incorporated into an operational tool, the Natural Units of Espírito Santo (UNES), represented in a single map operated in a Geographic Information System (GIS) and obtained from the Integrated System of Georeferenced Databases of the State of Espírito Santo. For data analysis, Poisson regression was applied to the dengue incidence data and logistic regression to the vector infestation data. The mosquito infestation and dengue incidence data were then georeferenced using the GIS ArcGIS version 9.2. Rainfall was found to be a factor contributing to the appearance of the mosquito in non-infested areas. High temperatures contribute to a high dengue incidence coefficient in the State's municipalities. Distance from populous municipalities is a protective factor against disease incidence. The large variability found in the data, not explained by the variables used in the incidence model, reinforces the premise that dengue is conditioned by the dynamic interaction of many variables that the study did not address.
The spatialization of the mosquito infestation and dengue incidence data over the Natural Zones of Espírito Santo allowed visualizing the influence of the statistically significant variables of the models on the pattern of introduction and spread of the disease in the State.
Abstract:
The Dynamic Gait Index (DGI) is a test that assesses balance and gait. OBJECTIVES: The objectives of this study were to culturally adapt the DGI to Portuguese and to assess its reliability. MATERIALS AND METHODS: The method of Guillemin et al. (1993) was followed for the cultural adaptation of the instrument. In this prospective study, 46 patients were assessed in the cultural adaptation phase, and items with 20% or more incomprehension were reformulated and reapplied. The final Portuguese version of the DGI was applied to 35 elderly subjects to examine intra- and inter-rater reliability. Spearman's coefficient was used to correlate inter- and intra-rater scores, and the Wilcoxon test was used to compare scores. Internal consistency was analyzed with Cronbach's alpha coefficient. RESULTS: There were statistically significant correlations between the scores obtained in the inter- and intra-rater assessments for all items (p < 0.001), classified as good to very strong (ranging from r = 0.655 to r = 0.951). The DGI showed high internal consistency among its items in the inter- and intra-rater assessments (α = 0.820 to α = 0.894). CONCLUSION: The DGI was culturally adapted to Brazilian Portuguese and proved to be a reliable instrument.
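The internal-consistency statistic used above, Cronbach's alpha, is computed directly from the item scores as α = k/(k−1) · (1 − Σ item variances / variance of totals); a minimal sketch (the data layout is illustrative, not the study's dataset):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a rating instrument.

    scores: list of rows, one per subject, each a list of item scores.
    """
    k = len(scores[0])   # number of items

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

When all items move together perfectly, alpha reaches 1; weakly related items pull it down, which is why the 0.82-0.89 range reported above is read as high internal consistency.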
Abstract:
This paper examines the performance of Portuguese equity funds investing in the domestic and in the European Union market, using several unconditional and conditional multi-factor models. In terms of overall performance, we find that National funds are neutral performers, while European Union funds under-perform the market significantly. These results do not seem to be a consequence of management fees. Overall, our findings support the robustness of conditional multi-factor models. In fact, Portuguese equity funds seem to be relatively more exposed to small caps and more value-oriented. They also present strong evidence of time-varying betas and, in the case of the European Union funds, of time-varying alphas too. Finally, in terms of market timing, our tests suggest that the mutual fund managers in our sample do not exhibit any market timing abilities. Nevertheless, we find some evidence of time-varying conditional market timing abilities, but only at the individual fund level.
Abstract:
Interest in the design and development of graphical user interfaces (GUIs) has been growing in the last few years. However, the correctness of GUI code is essential to the correct execution of the overall software. Models can help in the evaluation of interactive applications by allowing designers to concentrate on their more important aspects. This paper describes our approach to reverse engineering abstract GUI models directly from Java/Swing code.
Abstract:
A color model representation allows any defined color spectrum of visible light (i.e. with a wavelength between 400 nm and 700 nm) to be characterized in a quantitative manner. To accomplish this, each model, or color space, is associated with a function that maps the spectral power distribution of the visible electromagnetic radiation into a space defined by a set of discrete values quantifying the color components composing the model. Some color spaces are sensitive to changes in lighting conditions; others preserve certain chromatic features and remain immune to these changes. It therefore becomes necessary to identify the strengths and weaknesses of each model in order to justify the adoption of particular color spaces in image processing and analysis techniques. This chapter addresses the topic of digital imaging and its main standards and formats. Next, we define the mathematical model of the image acquisition sensor response, which enables the assessment of the various color spaces with the aim of determining their invariance to illumination changes.
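As an example of the illumination (in)variance the chapter evaluates, normalized rgb chromaticity discards the intensity component, so a uniform scaling of the sensor response leaves it unchanged; a minimal sketch (not code from the chapter):

```python
def normalized_rgb(r, g, b):
    """Map an RGB triple to normalized chromaticity (r, g).

    Under a pure intensity change (R, G, B) -> (s*R, s*G, s*B),
    the chromaticity coordinates are unchanged, because the
    scale factor s cancels in the ratios.
    """
    total = r + g + b
    if total == 0:
        return (1 / 3, 1 / 3)   # convention for black
    return (r / total, g / total)
```

Color spaces that keep an explicit intensity axis (e.g. HSV's V) show the same behavior in their chromatic components, while raw RGB does not, which is the kind of property the chapter's sensor-response model is used to establish.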
Abstract:
Current software development relies increasingly on non-trivial coordination logic for combining autonomous services, often running on different platforms. As a rule, however, in typical non-trivial software systems, such a coordination layer is strongly woven into the application at the source-code level. Therefore, its precise identification becomes a major methodological (and technical) problem in any program understanding or refactoring process. Open access to source code, as granted in OSS certification, provides an opportunity for the development of methods and technologies to extract the relevant coordination information from source code. This paper is a step in this direction, combining a number of program analysis techniques to automatically recover coordination information from legacy code. Such information is then expressed as a model in Orc, a general-purpose orchestration language.
Abstract:
A growing number of corporate failure prediction models has emerged since the 1960s. The economic and social consequences of business failure can be dramatic, so it is no surprise that the issue has attracted growing interest in academic research as well as in a business context. The main purpose of this study is to compare the predictive ability of five models: three based on statistical techniques (discriminant analysis, logit and probit) and two based on artificial intelligence (neural networks and rough sets). The five models were applied to a dataset of 420 non-bankrupt and 125 bankrupt firms from the textile and clothing industry over the period 2003-09. Results show that all the models performed well, with an overall correct classification level higher than 90% and a type II error always below 2%. The type I error increases as we move away from the year prior to failure. Our models contribute to the discussion of the causes of corporate financial distress. Moreover, they can be used to assist the decisions of creditors, investors and auditors. Additionally, this research can be of great value to devisers of national economic policies that aim to reduce industrial unemployment.
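The type I and type II error rates reported above can be computed from a model's predictions as follows; a minimal sketch using the usual failure-prediction convention (type I = a bankrupt firm classified as healthy), with illustrative labels rather than the study's data:

```python
def classification_errors(actual, predicted):
    """Type I / type II error rates for a failure-prediction model.

    actual, predicted: sequences of labels, 1 = bankrupt, 0 = healthy.
    Convention assumed here: type I error = a bankrupt firm classified
    as healthy (the costly miss); type II = a healthy firm classified
    as bankrupt (a false alarm).
    """
    pairs = list(zip(actual, predicted))
    bankrupt_preds = [p for a, p in pairs if a == 1]
    healthy_preds = [p for a, p in pairs if a == 0]
    type_i = bankrupt_preds.count(0) / len(bankrupt_preds)
    type_ii = healthy_preds.count(1) / len(healthy_preds)
    return type_i, type_ii
```

Note that with 420 healthy versus 125 bankrupt firms the sample is imbalanced, which is why reporting the two error types separately, as the study does, is more informative than overall accuracy alone.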
Abstract:
Given the dynamic nature of cardiac function, correct temporal alignment of pre-operative models and intra-operative images is crucial for augmented reality in cardiac image-guided interventions. As such, the current study focuses on the development of an image-based strategy for the temporal alignment of multimodal cardiac imaging sequences, such as cine Magnetic Resonance Imaging (MRI) or 3D Ultrasound (US). First, we derive a robust, modality-independent signal from the image sequences, estimated by computing the normalized cross-correlation between each frame in the temporal sequence and the end-diastolic frame. This signal is a surrogate for the left-ventricle (LV) volume curve over time, whose variation indicates different temporal landmarks of the cardiac cycle. We then perform the temporal alignment of these surrogate signals derived from MRI and US sequences of the same patient through Dynamic Time Warping (DTW), allowing both sequences to be synchronized. The proposed framework was evaluated in 98 patients who had undergone both 3D+t MRI and US scans. The end-systolic frame could be accurately estimated as the minimum of the image-derived surrogate signal, presenting a relative error of 1.6 ± 1.9% and 4.0 ± 4.2% for the MRI and US sequences, respectively, thus supporting its association with key temporal instants of the cardiac cycle. The use of DTW reduces the desynchronization of cardiac events in the MRI and US sequences, allowing multimodal cardiac imaging sequences to be temporally aligned. Overall, a generic, fast and accurate method for the temporal synchronization of MRI and US sequences of the same patient was introduced. This approach could be straightforwardly used for the correct temporal alignment of pre-operative MRI information and intra-operative US images.
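The two steps of the framework, deriving a surrogate signal by normalized cross-correlation against the end-diastolic frame and aligning two such signals with DTW, can be sketched as follows (a generic implementation, not the authors' code; frames are NumPy arrays and the first frame is assumed to be end-diastolic):

```python
import numpy as np

def surrogate_signal(frames):
    """Normalized cross-correlation of each frame with the first
    (end-diastolic) frame -- a surrogate for the LV volume curve."""
    ref = frames[0].astype(float)
    ref = (ref - ref.mean()) / ref.std()
    out = []
    for f in frames:
        f = f.astype(float)
        f = (f - f.mean()) / f.std()
        out.append(float((ref * f).mean()))   # NCC in [-1, 1]
    return np.array(out)

def dtw_path(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping.

    Returns the accumulated alignment cost and the warping path
    as (index_in_a, index_in_b) pairs.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # Backtrack from the corner to recover the alignment.
    path, i, j = [], n, m
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        moves = {(i - 1, j): cost[i - 1, j],
                 (i, j - 1): cost[i, j - 1],
                 (i - 1, j - 1): cost[i - 1, j - 1]}
        i, j = min(moves, key=moves.get)
    path.append((0, 0))
    return cost[n, m], path[::-1]
```

Applying `dtw_path` to the surrogate signals of the MRI and US sequences yields the frame-to-frame correspondence used to synchronize them; the end-systolic frame is simply the argmin of each surrogate signal.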
Abstract:
The purpose of this study is to investigate the contribution of psychological variables and scales suggested by economic psychology to predicting individuals' default. To this end, a sample of 555 individuals completed a self-administered questionnaire composed of psychological variables and scales. Using logistic regression, the following psychological and behavioral characteristics were found to be associated with the group of individuals in default: a) negative dimensions related to money (suffering, inequality and conflict); b) high scores on the self-efficacy scale, probably indicating a greater degree of optimism and over-confidence; c) buyers classified as compulsive; d) individuals who consider it necessary to give gifts to children and friends on special dates, even though many people consider this a luxury; e) problems of self-control, identified in individuals who drink an average of more than four glasses of alcoholic beverage a day.