997 results for dynamic phenomena
Abstract:
Speckle is being used as a characterization tool for the analysis of the dynamics of slowly varying phenomena occurring in biological and industrial samples. The retrieved data take the form of a sequence of speckle images. The analysis of these images should reveal the inner dynamics of the biological or physical process taking place in the sample. Very recently, it has been shown that principal component analysis is able to split the original data set into a collection of classes. These classes can be related to the dynamics of the observed phenomena. At the same time, statistical descriptors of biospeckle images have been used to retrieve information on the characteristics of the sample. These statistical descriptors can be calculated in almost real time and provide fast monitoring of the sample. On the other hand, principal component analysis requires longer computation time, but the results contain more information related to spatio-temporal patterns that can be identified with physical processes. This contribution merges both descriptions and uses principal component analysis as a pre-processing tool to obtain a collection of filtered images on which a simpler statistical descriptor can be calculated. The method has been applied to slow-varying biological and industrial processes.
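As a rough illustration of the pre-processing idea described above (not the authors' implementation), the sketch below filters a speckle image stack with principal component analysis and then computes a simple per-pixel activity descriptor on the filtered frames; the array shape and the choice of retained components are assumptions.

```python
# Minimal sketch (not the authors' code): PCA-based filtering of a speckle
# image sequence followed by a simple temporal statistical descriptor.
# Assumes `stack` is a NumPy array of shape (T, H, W) with T speckle frames.
import numpy as np
from sklearn.decomposition import PCA

def pca_filtered_descriptor(stack, components=range(1, 4)):
    T, H, W = stack.shape
    X = stack.reshape(T, H * W).astype(float)

    pca = PCA(n_components=max(components) + 1)
    scores = pca.fit_transform(X)                      # (T, n_components)

    # Reconstruct the sequence keeping only the selected components,
    # i.e. one "class" of spatio-temporal activity.
    keep = np.zeros_like(scores)
    keep[:, list(components)] = scores[:, list(components)]
    filtered = (keep @ pca.components_ + pca.mean_).reshape(T, H, W)

    # Simple descriptor on the filtered images: temporal standard
    # deviation per pixel, a proxy for local activity.
    return filtered.std(axis=0)
```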
Abstract:
This thesis makes use of the unique reregulation of the pharmaceutical monopoly in Sweden to critically examine intraindustry firm heterogeneity. It contributes to existing divestiture research by studying the dynamism between reconfigurations of value constellations and their effects on the value creation of divested pharmacies. Because the findings showed that the predominant theory of intraindustry firm heterogeneity could not explain firm performance, the value constellation concept was applied, as it better captured the phenomena. A patterned finding showed how reconfigurations of value constellations in a reregulated market characterized by strict rules, regulations, and high competition did not generate additional value for firms in the short term. My study unveils that value creation is hampered in situations where rules and regulations significantly affect firms’ ability to reconfigure their value constellations. The key practical implication is an alternative perspective on fundamental aspects of the reregulation and on how policy-makers may impede firm performance and the intended creation of new value, not only for firms but for society as a whole.
Abstract:
A three-dimensional finite volume, unstructured mesh (FV-UM) method for dynamic fluid–structure interaction (DFSI) is described. Fluid–structure interaction, as applied to flexible structures, has wide application in diverse areas such as flutter in aircraft, the wind response of buildings, and flows in elastic pipes and blood vessels. It involves the coupling of fluid flow and structural mechanics, two fields that are conventionally modelled using two dissimilar methods, so a single comprehensive computational model of both phenomena is a considerable challenge. Until recently, work in this area focused on one phenomenon and represented the behaviour of the other more simply. More recently, strategies for solving the full coupling between the fluid and solid mechanics behaviour have been developed. A key contribution has been made by Farhat et al. [Int. J. Numer. Meth. Fluids 21 (1995) 807], employing FV-UM methods for solving the Euler flow equations, a conventional finite element method for the elastic solid mechanics, and the spring-based mesh procedure of Batina [AIAA paper 0115, 1989] for mesh movement. In this paper, we describe an approach which broadly exploits the three-field strategy described by Farhat for fluid flow, structural dynamics and mesh movement but, in the context of DFSI, contains a number of novel features: a single mesh covering the entire domain, a Navier–Stokes flow, a single FV-UM discretisation approach for both the flow and solid mechanics procedures, an implicit predictor–corrector version of the Newmark algorithm, and a single code embedding the whole strategy.
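For readers unfamiliar with the time integrator mentioned above, the following is a minimal sketch of one implicit Newmark predictor-corrector step for a linear system M a + C v + K u = f(t); the matrices and default parameters are illustrative assumptions, not the paper's discretisation.

```python
# Minimal sketch of an implicit Newmark predictor-corrector step for
# M a + C v + K u = f(t); matrices and parameters are illustrative
# assumptions, not the discretisation used in the paper.
import numpy as np

def newmark_step(M, C, K, u, v, a, f_next, dt, beta=0.25, gamma=0.5):
    # Predictor: advance displacement/velocity with the known acceleration.
    u_pred = u + dt * v + dt**2 * (0.5 - beta) * a
    v_pred = v + dt * (1.0 - gamma) * a

    # Implicit solve for the new acceleration.
    lhs = M + gamma * dt * C + beta * dt**2 * K
    rhs = f_next - C @ v_pred - K @ u_pred
    a_new = np.linalg.solve(lhs, rhs)

    # Corrector: update displacement and velocity with the new acceleration.
    u_new = u_pred + beta * dt**2 * a_new
    v_new = v_pred + gamma * dt * a_new
    return u_new, v_new, a_new
```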
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load-balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to a 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, these few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values. This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, in modern many-core systems cores have support for dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach is also proposed to calibrate thermal sensors over a range of voltage levels.
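A minimal sketch of the dynamic load-balancing idea, expressed here with a shared work queue in standard Python rather than the Single-Chip Cloud Computer runtime; `simulate_faults` and the chunking are hypothetical placeholders, not the thesis implementation.

```python
# Illustrative dynamic load balancing: workers pull fault-list chunks from a
# shared queue, so faster workers naturally process more chunks.
import multiprocessing as mp

def simulate_faults(fault_chunk):
    # Placeholder for the per-chunk fault simulation workload.
    return len(fault_chunk)

def worker(task_queue, result_queue):
    while True:
        chunk = task_queue.get()
        if chunk is None:            # poison pill: no more work
            break
        result_queue.put(simulate_faults(chunk))

def run(fault_chunks, n_workers=4):
    tasks, results = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(tasks, results))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    for chunk in fault_chunks:       # work is pulled on demand, not pre-assigned
        tasks.put(chunk)
    for _ in procs:
        tasks.put(None)
    out = [results.get() for _ in fault_chunks]
    for p in procs:
        p.join()
    return out

if __name__ == "__main__":
    print(run([list(range(i, i + 10)) for i in range(0, 100, 10)]))
```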
Abstract:
Power system engineers face a double challenge: to operate electric power systems within narrow stability and security margins, and to maintain high reliability. There is an acute need to better understand the dynamic nature of power systems in order to be prepared for critical situations as they arise. Innovative measurement tools, such as phasor measurement units, can capture not only the slow variation of voltages and currents but also the underlying oscillations in a power system. Such dynamic data accessibility provides strong motivation and a useful tool to explore dynamic-data-driven applications in power systems. To fulfill this goal, this dissertation focuses on the following three areas: developing accurate dynamic load models and updating variable parameters based on measurement data; applying advanced nonlinear filtering concepts and technologies to real-time identification of power system models; and addressing computational issues by implementing the balanced truncation method. By obtaining more realistic system models, together with timely updated parameters and consideration of stochastic influences, we can form an accurate portrait of the ongoing phenomena in an electrical power system. Hence we can further improve state estimation, stability analysis and real-time operation.
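As background for the model-reduction step named above, here is a minimal, self-contained sketch of balanced truncation for a stable linear time-invariant model (A, B, C); it illustrates the general technique only and is not the dissertation's implementation.

```python
# Minimal sketch of balanced truncation for a stable LTI model (A, B, C):
# compute the Gramians, balance them via a square-root transformation, and
# keep the r states with the largest Hankel singular values.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd, cholesky

def balanced_truncation(A, B, C, r):
    # Controllability and observability Gramians.
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

    # Square-root balancing: Hankel singular values and transformation.
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)
    T = Lc @ Vt.T @ np.diag(s ** -0.5)
    Tinv = np.diag(s ** -0.5) @ U.T @ Lo.T

    # Truncate the balanced realization to order r.
    Ar = (Tinv @ A @ T)[:r, :r]
    Br = (Tinv @ B)[:r, :]
    Cr = (C @ T)[:, :r]
    return Ar, Br, Cr, s
```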
Abstract:
The interaction of ocean waves, currents and seabed roughness is a complicated phenomenon in fluid dynamics. This paper describes the governing equations of motion of this phenomenon under viscous and inviscid conditions, and studies and analyses the experimental results of a set of physical models of waves, currents and artificial roughness. It consists of three parts. First, by establishing some typical patterns of roughness, the effects of seabed roughness on a uniform current are studied, and the Manning coefficient of each type is reviewed to find the critical situation for different arrangements. Second, the effect of roughness on changes in wave parameters, such as wave height, wave length and the wave dispersion equation, is studied. Third, the wave + current + roughness patterns are superimposed in a flume equipped with a wave and current generator; at this stage, different analyses are carried out to find the governing dimensionless numbers and to present the numbers defining the formulations of this phenomenon. The first step of the model is verified against the so-called Chinese method, the second against Kamphuis (1975), and the third against van Rijn (1990) and Brevik and Aas (1980); in all cases reasonable agreement is obtained. Finally, new dimensionless parameters are presented for this complicated phenomenon.
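As an illustration of how a uniform current modifies the dispersion relation (the textbook Doppler-shifted form, not necessarily the paper's formulation), the sketch below solves (omega - k*U)^2 = g*k*tanh(k*h) for the wavenumber; the example wave period, depth and current value are assumptions.

```python
# Minimal sketch: wavenumber of a linear wave of absolute angular frequency
# `omega` on depth `h` riding a uniform collinear current `U`, from the
# Doppler-shifted dispersion relation (omega - k*U)**2 = g*k*tanh(k*h).
import math

def wavenumber(omega, h, U=0.0, g=9.81, tol=1e-10, max_iter=200):
    k = omega**2 / g                      # deep-water, no-current first guess
    for _ in range(max_iter):
        sigma = omega - k * U             # frequency seen by the moving water
        k_new = sigma**2 / (g * math.tanh(k * h))
        if abs(k_new - k) < tol:
            return k_new
        k = 0.5 * (k + k_new)             # damped update for robustness
    return k

# Example: an 8 s wave on 10 m depth following a 0.5 m/s current.
print(wavenumber(2 * math.pi / 8.0, 10.0, U=0.5))
```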
Abstract:
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat; it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically weighted regression analysis produced the most accurate results, accommodating non-stationary coefficient behavior and demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism. This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality of life.
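For readers unfamiliar with the technique named above, here is a minimal sketch of geographically weighted regression with a Gaussian kernel; it illustrates the method in general and uses placeholder inputs, not the dissertation's variables or data.

```python
# Minimal sketch of geographically weighted regression (GWR): a weighted
# least-squares fit is computed at each location, with weights decaying
# with distance, so coefficients are allowed to vary over space.
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """coords: (n, 2) array of locations; X: (n, p) predictors; y: (n,) response.
    Returns one coefficient vector (including intercept) per observation."""
    X1 = np.column_stack([np.ones(len(y)), X])      # add intercept column
    betas = np.empty((len(y), X1.shape[1]))
    for i, ci in enumerate(coords):
        d = np.linalg.norm(coords - ci, axis=1)     # distances to point i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)     # Gaussian spatial weights
        W = np.diag(w)
        betas[i] = np.linalg.solve(X1.T @ W @ X1, X1.T @ W @ y)
    return betas                                    # coefficients vary in space
```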
Abstract:
In solid rocket motors, the absence of combustion controllability and the large amount of financial resources involved in full-scale firing tests increase the importance of numerical simulations in order to assess stringent mission thrust requirements and evaluate the influence of thrust chamber phenomena affecting grain combustion. Among those phenomena, grain local defects (propellant casting inclusions and debondings), combustion heat accumulation involving pressure peaks (Friedman Curl effect), and case-insulating thermal protection material ablation affect thrust prediction in terms of non-negligible deviations with respect to the nominal expected trace. Most recent models have proposed a simplified treatment of the problem using empirical corrective functions, with the disadvantages of not fully capturing the physical dynamics and thus of not obtaining predictive results for different solid rocket motor configurations under varied boundary conditions. This work aims to introduce different mathematical approaches to model, analyze, and predict the abovementioned phenomena, presenting a detailed physical interpretation based on existing SRM configurations. Internal ballistics predictions are obtained with in-house simulation software, in which the adoption of a dynamic three-dimensional triangular mesh together with advanced computer graphics methods allows this target to be reached. Numerical procedures are explained in detail. Simulation results are presented and discussed based on experimental data.
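As a simplified illustration of the internal-ballistics balance underlying such predictions (a textbook quasi-steady relation, not the in-house software), the sketch below computes the equilibrium chamber pressure from the burning-rate law and the nozzle discharge; all symbols and units are generic.

```python
# Quasi-steady chamber pressure of a solid rocket motor from the mass balance
# between propellant gas generation (burning rate r = a * P**n) and nozzle
# discharge (m_dot_out = P * At / c_star).
def chamber_pressure(rho_p, a, n, Ab, At, c_star):
    """Equilibrium pressure P = (rho_p * a * Ab * c_star / At) ** (1 / (1 - n)).

    rho_p  : propellant density [kg/m^3]
    a, n   : burning-rate law coefficients (r = a * P**n, consistent SI units)
    Ab     : burning surface area [m^2] (evolves as the grain regresses)
    At     : nozzle throat area [m^2]
    c_star : characteristic velocity [m/s]
    """
    return (rho_p * a * Ab * c_star / At) ** (1.0 / (1.0 - n))
```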
Abstract:
Current data indicate that the size of high-density lipoprotein (HDL) may be considered an important marker for cardiovascular disease risk. We established reference values of mean HDL size and volume in an asymptomatic, representative Brazilian population sample (n=590) and their associations with metabolic parameters by gender. Size and volume were determined in HDL isolated from plasma by polyethylene glycol precipitation of apoB-containing lipoproteins and measured using the dynamic light scattering (DLS) technique. Although the gender and age distributions agreed with other studies, the mean HDL size reference value was slightly lower than in some other populations. Both HDL size and volume were influenced by gender and varied according to age. HDL size was associated with age and HDL-C (total population); inversely with non-white ethnicity and CETP (females); and with HDL-C and PLTP mass (males). On the other hand, HDL volume was determined only by HDL-C (total population and both genders) and by PLTP mass (males). The reference values for mean HDL size and volume using the DLS technique were established in an asymptomatic and representative Brazilian population sample, as well as their related metabolic factors. HDL-C was a major determinant of HDL size and volume, which were differently modulated in females and males.
Abstract:
The objective of this study is to examine the dynamics between fiscal policy, measured by public debt, and monetary policy, measured by a central bank reaction function. Changes in monetary policy due to deviations from its targets always generate fiscal impacts. We examine two policy reaction functions: the first related to inflation targets and the second related to economic growth targets. We find that the condition for a stable equilibrium is more restrictive in the first case than in the second. We then apply our simulation model to Brazil and the United Kingdom and find that the equilibrium is unstable in the Brazilian case but stable in the UK case.
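A toy sketch of the kind of interaction studied here, assuming a simple debt-accumulation equation and an inflation-targeting interest-rate rule; the functional forms and all parameter values are my illustrative assumptions, not the paper's model.

```python
# Illustrative public-debt dynamics under a central-bank reaction function
# that raises the real interest rate when inflation deviates from target.
def simulate_debt(b0=0.6, periods=40, g=0.02, s=0.02,
                  r_star=0.03, phi=1.5, pi=0.06, pi_target=0.04):
    path = [b0]
    for _ in range(periods):
        r = r_star + phi * (pi - pi_target)      # inflation-targeting rule
        b = (1 + r - g) * path[-1] - s           # debt ratio grows with interest,
        path.append(b)                           # shrinks with growth and surplus
    return path

# In this toy model the debt ratio grows each period whenever the reaction
# function keeps r above g + s / b, illustrating how the stability condition
# depends on the form of the reaction function.
print(simulate_debt()[-1])
```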
Abstract:
Cancer is a multistep process that begins with the transformation of normal epithelial cells and continues with tumor growth, stromal invasion and metastasis. The remodeling of the peritumoral environment is decisive for the onset of tumor invasiveness. This event depends on epithelial-stromal interactions, degradation of extracellular matrix components and reorganization of fibrillar components. Our research group has studied, in a newly proposed rodent model, the participation of cellular and molecular components of the prostate microenvironment that contribute to cancer progression. Our group adopted the gerbil Meriones unguiculatus as an alternative experimental model for the study of prostate cancer. This model has shown significant responses to hormonal treatments and has developed spontaneous and induced neoplasias. The data obtained indicate reorganization of type I collagen fibers and reticular fibers, synthesis of new components such as tenascin and proteoglycans, degradation of basement membrane components and elastic fibers, and increased expression of metalloproteinases. Fibroblasts bordering the region apparently participate in the stromal reaction. The roles of each of these events, as well as some signaling molecules that participate in neoplastic progression and factors that promote genetic reprogramming during the epithelial-stromal transition, are also discussed.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Yellow passion fruit pulp is unstable, presenting phase separation that can be avoided by the addition of hydrocolloids. For this purpose, xanthan and guar gum [0.3, 0.7 and 1.0% (w/w)] were added to yellow passion fruit pulp and the changes in the dynamic and steady-shear rheological behavior were evaluated. Xanthan dispersions showed more pronounced pseudoplasticity and the presence of a yield stress, which was not observed in the guar gum dispersions. Fitting the Cross model to the flow curves showed that the xanthan suspensions also had a higher zero-shear viscosity than the guar suspensions and, for both gums, an increase in temperature led to lower values of this parameter. The gums showed different behavior as a function of temperature in the range of 5-35 °C. The activation energy of the apparent viscosity was dependent on the shear rate and gum concentration for guar, whereas for xanthan these values varied only with concentration. The mechanical spectra were well described by the generalized Maxwell model, and the xanthan dispersions showed a more elastic character than the guar dispersions, with higher values for the relaxation time. Xanthan was characterized as a weak gel, while guar presented concentrated-solution behavior. The simultaneous evaluation of temperature and concentration showed a stronger influence of the polysaccharide concentration on the apparent viscosity and the G' and G" moduli than the variation in temperature.
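As an illustration of the Cross-model fitting step (not the study's data or code), the sketch below fits the model eta = eta_inf + (eta0 - eta_inf) / (1 + (lambda * shear_rate)**m) to a synthetic flow curve with SciPy; the "measurements" are a made-up placeholder.

```python
# Minimal sketch of fitting the Cross model to a flow curve.
import numpy as np
from scipy.optimize import curve_fit

def cross_model(shear_rate, eta0, eta_inf, lam, m):
    # eta0: zero-shear viscosity, eta_inf: infinite-shear viscosity,
    # lam: characteristic time, m: dimensionless exponent.
    return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * shear_rate) ** m)

shear_rate = np.logspace(-2, 3, 30)                         # 1/s (placeholder)
eta_obs = cross_model(shear_rate, 50.0, 0.05, 2.0, 0.8)     # synthetic "data"
eta_obs *= 1 + 0.03 * np.random.default_rng(0).normal(size=eta_obs.size)

params, _ = curve_fit(cross_model, shear_rate, eta_obs,
                      p0=[10.0, 0.01, 1.0, 1.0], maxfev=10000)
print(dict(zip(["eta0", "eta_inf", "lam", "m"], params)))
```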
Abstract:
This work describes the infrared spectroscopy characterization and the charge compensation dynamics of the supramolecular film FeTPPZFeCN, derived from tetra-2-pyridyl-1,4-pyrazine (TPPZ) with hexacyanoferrate, as well as of the hybrid film formed by FeTPPZFeCN and polypyrrole (PPy). For the supramolecular film, it was found that the anion flux is greater in a K+ containing solution than in a Li+ solution, which seems to be due to the larger crystalline ionic radius of K+. The electroneutralization process is discussed in terms of electrostatic interactions between cations and metallic centers in the host matrix. The nature of the charge compensation process differs from that of other modified electrodes based on Prussian blue films, where only cations such as K+ participate in the electroneutralization process. In the case of the FeTPPZFeCN/PPy hybrid film, the magnitude of the anion flux is also dependent on the identity of the anion of the supporting electrolyte.
Abstract:
This paper presents a rational approach to the design of a catamaran's hydrofoil, applied within a modern context of multidisciplinary optimization. The approach includes the use of response surfaces represented by neural networks and a distributed programming environment that increases the optimization speed. A rational approach to the problem simplifies the complex optimization model; when combined with the distributed dynamic training used for the response surfaces, this model increases the efficiency of the process. The results achieved using this approach justify this publication.
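A minimal sketch of the general idea of a neural-network response surface searched by an optimizer; the objective function, design variables and bounds are placeholders, not the hydrofoil problem or the paper's distributed environment.

```python
# Illustrative surrogate-based optimization: train a neural-network response
# surface on sampled designs, then search the cheap surrogate instead of the
# expensive simulation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

def expensive_simulation(x):        # placeholder for the hydrofoil solver
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))          # sampled design variables
y = expensive_simulation(X)

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, y)

res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
               x0=np.full(3, 0.5), bounds=[(0.0, 1.0)] * 3)
print(res.x)      # candidate design to verify with the full simulation
```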