965 results for Literature, Experimental
Abstract:
A molecular dynamics (MD) investigation of LiCl in water, methanol, and ethylene glycol (EG) at 298 K is reported. Several structural and dynamical properties of the ions as well as of the solvent, such as self-diffusivity, radial distribution functions, void and neck distributions, velocity autocorrelation functions, and mean residence times of solvent in the first solvation shell, have been computed. The results show that the reciprocal relationship between the self-diffusivity of the ions and the viscosity is valid in almost all solvents, with the exception of water. From an analysis of radial distribution functions and coordination numbers, the nature of hydrogen bonding within the solvent and its influence on the void and neck distribution becomes evident. It is seen that solvent-solvent interactions are important in EG, while solute-solvent interactions dominate in water and methanol. From Voronoi tessellation, it is seen that the voids and necks within methanol are larger than those within water or EG. On the basis of the void and neck distributions obtained from MD simulations and experimental literature data on limiting ion conductivity for ions of different sizes, we show that there is a relation between the void and neck radius on the one hand and the dependence of conductivity on the ionic radius on the other. It is shown that the presence of large-diameter voids and necks in methanol is responsible for the maximum in limiting ion conductivity (lambda(0)) of TMA(+), while in water and EG the maximum is seen for Rb+. In the case of monovalent anions, the maximum in lambda(0) as a function of ionic radius is seen for Br- in water and EG but for the larger ClO4- ion in methanol. The relation between the void and neck distribution and the variation in lambda(0) with ionic radius arises via the levitation effect, which is discussed. These studies show the importance of the solvent structure and the associated void structure.
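The structural analysis described above rests on radial distribution functions. As a minimal sketch of how g(r) is obtained from particle coordinates in a periodic cubic box (random positions here purely for illustration; this is not the authors' code, and real MD data would supply the coordinates):

```python
import numpy as np

def radial_distribution(positions, box, n_bins=50, r_max=None):
    """Histogram minimum-image pair distances into g(r) for a cubic box."""
    n = len(positions)
    if r_max is None:
        r_max = box / 2                            # sphere must fit in the box
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box * np.round(diff / box)             # minimum-image convention
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    dist = dist[np.triu_indices(n, k=1)]           # unique pairs only
    counts, edges = np.histogram(dist, bins=n_bins, range=(0, r_max))
    rho = n / box ** 3                             # number density
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = 0.5 * n * rho * shell                  # pairs expected for an ideal gas
    r = 0.5 * (edges[:-1] + edges[1:])
    return r, counts / ideal

rng = np.random.default_rng(0)
r, g = radial_distribution(rng.uniform(0.0, 10.0, (500, 3)), box=10.0)
```

For uncorrelated random positions g(r) fluctuates around 1; structured peaks, as for the solvation shells discussed in the abstract, appear only with real interacting particles.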
Abstract:
Unmanned vehicle path following by pursuing a virtual target moving along the path is considered. Limitations of pure pursuit guidance while following the virtual target on curved paths are analyzed. Trajectory shaping guidance is proposed as an alternative guidance scheme for a general curvature path. It is proven that, under certain tenable assumptions, trajectory shaping guidance yields a path identical to that of the virtual target. Linear analysis shows that convergence to the path under trajectory shaping guidance is twice as fast as under pure pursuit. Simulations highlight a significant improvement in position errors when trajectory shaping guidance is used. Comparative simulation studies agree with the analytic findings and show better performance than pure pursuit and a nonlinear guidance methodology from the literature. Experimental validation supports the analytic and simulation studies, with the guidance laws implemented on a radio-controlled car in a laboratory environment.
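For reference, the baseline scheme being compared against can be sketched with the classic pure-pursuit law, which commands curvature kappa = 2 sin(alpha)/d toward a lookahead point on the path (a generic textbook sketch with made-up parameters, not the paper's implementation):

```python
import math

def simulate_pure_pursuit(y0=1.0, lookahead=2.0, v=1.0, dt=0.05, steps=400):
    """Unicycle vehicle tracking the x-axis via pure pursuit."""
    x, y, theta = 0.0, y0, 0.0
    for _ in range(steps):
        tx, ty = x + lookahead, 0.0                  # virtual target on the path
        alpha = math.atan2(ty - y, tx - x) - theta   # bearing error to target
        dist = math.hypot(tx - x, ty - y)            # distance to target
        kappa = 2.0 * math.sin(alpha) / dist         # pure-pursuit curvature command
        theta += v * kappa * dt                      # unicycle kinematics
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
    return y                                         # final cross-track error

final_y = simulate_pure_pursuit()
```

Linearizing this loop about the path gives a second-order error dynamic, which is the kind of analysis the abstract uses to show trajectory shaping converges twice as fast.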
Abstract:
This text is divided into two parts. First, I deal with the evolution of Portuguese theatre from the post-Second World War period until the present day, focusing on the experimentalism of the forties, characterized by an urge for renewal and modernization; the constitution of a highly politicized independent theatre movement in the seventies; and the plurality of contemporary Portuguese theatre. Second, I deal with the alleged inability of Portuguese writers to write for the stage, signalling the most significant names from the Carnation Revolution of 25 April 1974 to the present day.
Abstract:
This paper addresses the m-machine no-wait flow shop problem in which the set-up time of a job is separated from its processing time. The performance measure considered is the total flowtime. A new hybrid metaheuristic, Genetic Algorithm-Cluster Search, is proposed to solve the scheduling problem. The performance of the proposed method is evaluated and the results are compared with those of the best method reported in the literature. Experimental tests show the superiority of the new method on the test problem set with regard to solution quality. (c) 2012 Elsevier Ltd. All rights reserved.
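For readers unfamiliar with the objective, total flowtime under the no-wait constraint can be evaluated for a fixed job sequence as follows (an illustrative sketch that, for simplicity, omits the separated setup times the paper models; each job's start on machine 1 is delayed just enough that it never waits between machines):

```python
def total_flowtime(sequence, p):
    """p[j][m] = processing time of job j on machine m; no-wait flow shop."""
    m = len(p[0])
    prefix = {}                                    # cumulative processing times
    for j in sequence:
        prefix[j] = [0]
        for t in p[j]:
            prefix[j].append(prefix[j][-1] + t)
    flowtime, start_prev = 0, None
    for i, j in enumerate(sequence):
        if i == 0:
            start = 0
        else:
            jp = sequence[i - 1]
            # minimum start offset so job j never waits on any machine
            start = start_prev + max(prefix[jp][k] - prefix[j][k - 1]
                                     for k in range(1, m + 1))
        flowtime += start + prefix[j][m]           # completion on the last machine
        start_prev = start
    return flowtime

# two machines, two jobs with processing times [[2, 3], [1, 2]]
print(total_flowtime([0, 1], [[2, 3], [1, 2]]))   # → 12
```

A metaheuristic such as the paper's Genetic Algorithm-Cluster Search searches over permutations to minimize this value.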
Abstract:
Knowledge of how ligaments and articular surfaces guide passive motion at the human ankle joint complex is fundamental for the design of relevant surgical treatments. The dissertation presents a possible improvement of this knowledge through a new kinematic model of the tibiotalar articulation. Two one-DOF spatial equivalent mechanisms are presented for the simulation of the passive motion of the human ankle joint: the 5-5 fully parallel mechanism and the fully parallel spherical wrist mechanism. These mechanisms are based on the main anatomical structures of the ankle joint, namely the talus/calcaneus and tibia/fibula bones at their interface and the TiCaL and CaFiL ligaments. In order to show the accuracy of the models and the efficiency of the proposed procedure, the mechanisms are synthesized from experimental data and the results are compared both with those obtained during experimental sessions and with data published in the literature. Experimental results proved the efficiency of the proposed new mechanisms in simulating ankle passive motion and, at the same time, their potential to replicate the ankle's main anatomical structures quite well. The new mechanisms represent a powerful tool for both pre-operative planning and new prosthesis design.
Abstract:
The research is aimed at contributing to the identification of reliable, fully predictive Computational Fluid Dynamics (CFD) methods for the numerical simulation of equipment typically adopted in the chemical and process industries. The apparatuses selected for the investigation, specifically membrane modules, stirred vessels and fluidized beds, were characterized by different and often complex fluid dynamic behaviour, and in some cases the momentum transfer phenomena were coupled with mass transfer or multiphase interactions. First of all, a novel modelling approach based on CFD for the prediction of the gas separation process in membrane modules for hydrogen purification is developed. The reliability of the numerically calculated gas velocity field is assessed by comparing the predictions with experimental velocity data collected by Particle Image Velocimetry, while the ability of the model to properly predict the separation process under a wide range of operating conditions is assessed through a strict comparison with permeation experimental data. Then, the effect of numerical issues on the RANS-based predictions of single-phase stirred tanks is analysed. The homogenisation process of a scalar tracer is also investigated, and simulation results are compared with original passive tracer homogenisation curves determined with Planar Laser Induced Fluorescence. The capability of a CFD approach based on the solution of the RANS equations to describe the fluid dynamic characteristics of the dispersion of organics in water is also investigated. Finally, an Eulerian-Eulerian fluid-dynamic model is used to simulate mono-disperse suspensions of Geldart Group A particles fluidized by a Newtonian incompressible fluid, as well as binary segregating fluidized beds of particles differing in size and density.
The results obtained under a number of different operating conditions are compared with experimental data from the literature, and the effect of numerical uncertainties on axial segregation is also discussed.
Abstract:
Starting from the idea that literary works can establish, within their own structure, clear and profound reflections on concepts such as reality, fiction, and fictional truth, this article examines the poetics of fiction implicitly inscribed in the work of Antonio Di Benedetto, more precisely in El pentágono, a novel in the form of short stories and an experimental text first published in 1955, in which special attention is paid to the relations between reality and fiction, to fiction's capacity to institute diverse worlds, and to the limits and specular games between the two domains. Furthermore, the peculiar construction of this novel-in-stories reveals a structure that is itself a sign of the concept of fiction that dominates the work.
Abstract:
Grand canonical Monte Carlo simulations were applied to the adsorption of SPC/E model water in finite graphitic pores with different configurations of carbonyl functional groups on only one surface and with several pore sizes. It was found that almost all the finite pores studied exhibit capillary condensation preceded by adsorption around the functional groups. Desorption showed the reverse transition from a filled to a nearly empty pore, resulting in a clear hysteresis loop in all pores except for some configurations of the 1.0 nm pore. Carbonyl configurations had a strong effect on the filling pressure of all pores except, in some cases, the 1.0 nm pores. A decrease in carbonyl neighbour density results in a higher filling pressure. The emptying pressure was negligibly affected by the configuration of functional groups. Both the filling and emptying pressures increased with increasing pore size, but the effect on the emptying pressure was much smaller. At pressures lower than the pore filling pressure, the adsorption of water was shown to depend extremely strongly on the neighbour density, with the adsorption isotherm changing from Type IV to Type III to linear as the neighbour density decreased. The isosteric heat was also calculated for these configurations, revealing its strong dependence on the neighbour density. These results were compared with experimental literature results for water on carbon black and found to agree qualitatively.
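The grand canonical moves underlying such simulations accept particle insertions and deletions with the standard muVT probabilities. A toy version for an ideal gas (no interactions, reduced units with thermal wavelength 1; illustrative only, far simpler than the water-in-pore system above) reproduces the expected average occupancy <N> = V exp(beta*mu):

```python
import math
import random

def gcmc_ideal_gas(volume=50.0, beta_mu=0.0, steps=40000, seed=1):
    """Toy GCMC for an ideal gas: insertion/deletion moves only, U = 0."""
    rng = random.Random(seed)
    z = math.exp(beta_mu)              # activity (thermal wavelength = 1)
    n, total, samples = 0, 0, 0
    for step in range(steps):
        if rng.random() < 0.5:         # attempt insertion
            if rng.random() < min(1.0, z * volume / (n + 1)):
                n += 1
        elif n > 0:                    # attempt deletion
            if rng.random() < min(1.0, n / (z * volume)):
                n -= 1
        if step >= steps // 2:         # sample after equilibration
            total += n
            samples += 1
    return total / samples

avg_n = gcmc_ideal_gas()               # expected <N> close to volume = 50
```

In the real simulations the acceptance rules also include the interaction-energy change from the SPC/E and carbonyl potentials, which is what produces the filling and emptying transitions described above.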
Abstract:
The theory of vapour-liquid equilibria is reviewed, as is the present status of prediction methods in this field. After a discussion of the available experimental methods, the development of a recirculating equilibrium still based on a previously successful design (the modified Raal, Code and Best still of O'Donnell and Jenkins) is described. This novel still is designed to work at pressures up to 35 bar and to measure both isothermal and isobaric vapour-liquid equilibrium data. The still was first commissioned by measuring the saturated vapour pressures of pure ethanol and cyclohexane in the temperature ranges 77-124°C and 80-142°C respectively. The data obtained were compared with available experimental values from the literature and with values derived from an extended form of the Antoine equation for which parameters were given in the literature. Commissioning continued with a study of the phase behaviour of mixtures of the two pure components, as such mixtures are strongly non-ideal, showing azeotropic behaviour. No data existed above one atmosphere pressure. Isothermal measurements were made at 83.29°C and 106.54°C, whilst isobaric measurements were made at pressures of 1 bar, 3 bar and 5 bar. The experimental vapour-liquid equilibrium data obtained are assessed by a standard literature method incorporating a thermodynamic consistency test that minimises the errors in all the measured variables. This assessment showed that reasonable x-P-T data sets had been measured, from which y-values could be deduced, but the experimental y-values indicated the need for improvements in the design of the still. The final discussion sets out the improvements required and outlines how they might be attained.
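The Antoine comparison mentioned above follows the familiar form log10(P) = A - B/(C + T). A minimal sketch, using illustrative literature-style constants for ethanol (mmHg and °C) rather than the parameters fitted in this work:

```python
def antoine_pressure(T_celsius, A, B, C):
    """Saturated vapour pressure from the basic Antoine equation (mmHg)."""
    return 10 ** (A - B / (C + T_celsius))

# Illustrative Antoine constants for ethanol (P in mmHg, T in deg C);
# assumed generic values, not those determined in this thesis.
A, B, C = 8.20417, 1642.89, 230.300

p_78 = antoine_pressure(78.37, A, B, C)   # near ethanol's normal boiling point
```

With these constants the predicted pressure near 78.4°C is close to one atmosphere (760 mmHg), which is the kind of check used when commissioning the still against pure-component vapour pressures.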
Abstract:
The exponential growth of studies on the biological response to ocean acidification over the last few decades has generated a large amount of data. To facilitate data comparison, a data compilation hosted at the data publisher PANGAEA was initiated in 2008 and is updated on a regular basis (doi:10.1594/PANGAEA.149999). By January 2015, a total of 581 data sets (over 4 000 000 data points) from 539 papers had been archived. Here we present the developments of this data compilation in the five years since its first description by Nisumaa et al. (2010). Most of the study sites from which data have been archived are still in the Northern Hemisphere, and the number of archived data sets from studies in the Southern Hemisphere and polar oceans remains relatively low. Data from 60 studies that investigated the response of a mix of organisms or of natural communities were all added after 2010, indicating a welcome shift from the study of individual organisms to communities and ecosystems. The initial imbalance of considerably more data archived on calcification and primary production than on other processes has improved. There is also a clear tendency towards more data archived from multifactorial studies after 2010. For easier and more effective access to ocean acidification data, the ocean acidification community is strongly encouraged to contribute to the data archiving effort, and to help develop standard vocabularies describing the variables and define best practices for archiving ocean acidification data.
Abstract:
Osteoporosis is a disease characterized by low bone mass and micro-architectural deterioration of bone tissue, with a consequent increase in bone fragility and susceptibility to fracture. Osteoporosis affects over 200 million people worldwide, with an estimated 1.5 million fractures annually in the United States alone, and with attendant costs exceeding $10 billion per annum. Osteoporosis reduces bone density through a series of structural changes to the honeycomb-like trabecular bone structure (micro-structure). The reduced bone density, coupled with the microstructural changes, results in significant loss of bone strength and increased fracture risk. Vertebral compression fractures are the most common type of osteoporotic fracture and are associated with pain, increased thoracic curvature, reduced mobility, and difficulty with self-care. Surgical interventions, such as kyphoplasty or vertebroplasty, are used to treat osteoporotic vertebral fractures by restoring vertebral stability and alleviating pain. These minimally invasive procedures involve injecting bone cement into the fractured vertebrae. The techniques are still relatively new and, while initial results are promising, with the procedures relieving pain in 70-95% of cases, medium-term investigations are now indicating an increased risk of adjacent-level fracture following the procedure. With the aging population, understanding and treatment of osteoporosis is an increasingly important public health issue in developed Western countries. The aim of this study was to investigate the biomechanics of spinal osteoporosis and osteoporotic vertebral compression fractures by developing multi-scale computational Finite Element (FE) models of both healthy and osteoporotic vertebral bodies. The multi-scale approach included the overall vertebral body anatomy as well as a detailed representation of the internal trabecular microstructure.
This novel, multi-scale approach overcame limitations of previous investigations by allowing simultaneous investigation of the mechanics of the trabecular micro-structure as well as overall vertebral body mechanics. The models were used to simulate the progression of osteoporosis, the effect of different loading conditions on vertebral strength and stiffness, and the effects of vertebroplasty on vertebral and trabecular mechanics. The model development process began with the development of an individual trabecular strut model using 3D beam elements, which was used as the building block for lattice-type, structural trabecular bone models, which were in turn incorporated into the vertebral body models. At each stage of model development, model predictions were compared to analytical solutions and in-vitro data from the existing literature. The incremental process provided confidence in the predictions of each model before incorporation into the overall vertebral body model. The trabecular bone model, vertebral body model and vertebroplasty models were validated against in-vitro data from a series of compression tests performed using human cadaveric vertebral bodies. First, trabecular bone samples were acquired and morphological parameters for each sample were measured using high-resolution micro-computed tomography (micro-CT). Apparent mechanical properties for each sample were then determined using uni-axial compression tests. Bone tissue properties were inversely determined using voxel-based FE models based on the micro-CT data. Specimen-specific trabecular bone models were developed and the predicted apparent stiffness and strength were compared to the experimentally measured apparent stiffness and strength of the corresponding specimen. Following the trabecular specimen tests, a series of 12 whole cadaveric vertebrae were divided into treated and non-treated groups, and vertebroplasty was performed on the specimens of the treated group.
The vertebrae in both groups underwent clinical CT scanning and destructive uniaxial compression testing. Specimen-specific FE vertebral body models were developed and the predicted mechanical response compared to the experimentally measured responses. The validation process demonstrated that the multi-scale FE models comprising a lattice network of beam elements were able to accurately capture the failure mechanics of trabecular bone, and that a trabecular core represented with beam elements, enclosed in a layer of shell elements representing the cortical shell, was able to adequately represent the failure mechanics of intact vertebral bodies with varying degrees of osteoporosis. Following model development and validation, the models were used to investigate the effects of progressive osteoporosis on vertebral body mechanics and trabecular bone mechanics. These simulations showed that overall failure of the osteoporotic vertebral body is initiated by failure of the trabecular core, and that the failure mechanism of the trabeculae varies with the progression of osteoporosis: from tissue yield in healthy trabecular bone, to failure due to instability (buckling) in osteoporotic bone with its thinner trabecular struts. The mechanical response of the vertebral body under load is highly dependent on the ability of the endplates to deform to transmit the load to the underlying trabecular bone. The ability of the endplate to evenly transfer the load through the core diminishes with osteoporosis. Investigation into the effect of different loading conditions on the vertebral body found that, because the trabecular bone structural changes which occur in osteoporosis result in a structure that is highly aligned with the loading direction, the vertebral body is consequently less able to withstand non-uniform loading states such as occur in forward flexion.
Changes in vertebral body loading due to disc degeneration were simulated, but proved to have little effect on osteoporotic vertebral mechanics. Conversely, differences in vertebral body loading between simulated in-vivo (uniform endplate pressure) and in-vitro conditions (where the vertebral endplates are rigidly cemented) had a dramatic effect on the predicted vertebral mechanics. This investigation suggested that in-vitro loading using bone cement potting of both endplates has major limitations in its ability to represent vertebral body mechanics in-vivo. Lastly, an FE investigation of the biomechanical effect of vertebroplasty was performed. The results of this investigation demonstrated that the effect of vertebroplasty on overall vertebral mechanics is strongly governed by the cement distribution achieved within the trabecular core. In agreement with a recent study, the models predicted that vertebroplasty cement distributions which do not form one continuous mass contacting both endplates have little effect on vertebral body stiffness or strength. In summary, this work presents the development of a novel, multi-scale Finite Element model of the osteoporotic vertebral body, which provides a powerful new tool for investigating the mechanics of osteoporotic vertebral compression fractures at the trabecular bone micro-structural level, and at the vertebral body level.
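The reported shift from tissue yield to buckling failure as struts thin can be illustrated with a back-of-the-envelope Euler comparison for an idealized pin-ended cylindrical strut. The material properties and dimensions below are generic order-of-magnitude assumptions, not values from this study:

```python
import math

def strut_failure_loads(radius, length, E=10e9, sigma_y=100e6):
    """Euler buckling load vs yield load for a pin-ended cylindrical strut.

    radius and length in metres; E and sigma_y are assumed order-of-magnitude
    values for trabecular bone tissue, purely for illustration.
    """
    area = math.pi * radius ** 2
    inertia = math.pi * radius ** 4 / 4.0
    p_buckle = math.pi ** 2 * E * inertia / length ** 2   # Euler load, K = 1
    p_yield = sigma_y * area                              # axial yield load
    return p_buckle, p_yield

# thicker (healthy-like) strut vs thinner (osteoporotic-like) strut, same length
thick = strut_failure_loads(radius=100e-6, length=1e-3)
thin = strut_failure_loads(radius=40e-6, length=1e-3)
```

Because the buckling load scales with radius to the fourth power while the yield load scales with radius squared, thinning struts cross over from yield-governed to buckling-governed failure, consistent with the simulation findings above.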
Abstract:
An experimental investigation has been made of a round, non-buoyant plume of nitric oxide, NO, in a turbulent grid flow of ozone, O3, using the Turbulent Smog Chamber at the University of Sydney. The measurements have been made at a resolution not previously reported in the literature. The reaction is conducted at non-equilibrium, so there is significant interaction between turbulent mixing and chemical reaction. The plume has been characterized by a set of constant initial reactant concentration measurements consisting of radial profiles at various axial locations. Whole-plume behaviour can thus be characterized, and parameters are selected for a second set of fixed physical location measurements in which the effects of varying the initial reactant concentrations are investigated. Careful experiment design and specially developed chemiluminescent analysers, which measure fluctuating concentrations of reactive scalars, ensure that spatial and temporal resolutions are adequate to measure the quantities of interest. Conserved scalar theory is used to define a conserved scalar from the measured reactive scalars and to define frozen, equilibrium and reaction-dominated cases for the reactive scalars. Reactive scalar means and the mean reaction rate are bounded by the frozen and equilibrium limits, but this is not always the case for the reactant variances and covariances. The plume reactant statistics are closer to the equilibrium limit than those for the ambient reactant. The covariance term in the mean reaction rate is found to be negative and significant for all measurements made. The Toor closure was found to overestimate the mean reaction rate by 15 to 65%. Gradient model turbulent diffusivities had significant scatter and were not observed to be affected by reaction. The ratio of turbulent diffusivities for the conserved scalar mean and that for the r.m.s. was found to be approximately 1.
Estimates of the ratio of the dissipation timescales of around 2 were found downstream. Estimates of the correlation coefficient between the conserved scalar and its dissipation (parallel to the mean flow) were found to be between 0.25 and the significant value of 0.5. Scalar dissipations for non-reactive and reactive scalars were found to be significantly different. Conditional statistics are found to be a useful way of investigating the reactive behaviour of the plume, effectively decoupling the interaction of chemical reaction and turbulent mixing. It is found that conditional reactive scalar means lack significant transverse dependence as has previously been found theoretically by Klimenko (1995). It is also found that conditional variance around the conditional reactive scalar means is relatively small, simplifying the closure for the conditional reaction rate. These properties are important for the Conditional Moment Closure (CMC) model for turbulent reacting flows recently proposed by Klimenko (1990) and Bilger (1993). Preliminary CMC model calculations are carried out for this flow using a simple model for the conditional scalar dissipation. Model predictions and measured conditional reactive scalar means compare favorably. The reaction dominated limit is found to indicate the maximum reactedness of a reactive scalar and is a limiting case of the CMC model. Conventional (unconditional) reactive scalar means obtained from the preliminary CMC predictions using the conserved scalar p.d.f. compare favorably with those found from experiment except where measuring position is relatively far upstream of the stoichiometric distance. Recommendations include applying a full CMC model to the flow and investigations both of the less significant terms in the conditional mean species equation and the small variation of the conditional mean with radius. 
Forms for the p.d.f.s, in addition to those found from experiments, could be useful for extending the CMC model to reactive flows in the atmosphere.
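The significance of the covariance term noted above follows from the second-moment decomposition of the mean rate for a reaction A + B -> products: the true mean rate is k(mean(A)mean(B) + cov(A,B)), so a naive closure that uses only the product of the means overestimates the rate whenever the covariance is negative. A synthetic illustration (made-up anti-correlated fluctuation statistics, not the chamber data):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 1.0
# Synthetic anti-correlated reactant fluctuations, as in a mixing-limited
# plume where A-rich fluid is B-lean (illustrative statistics only):
s = rng.normal(0.0, 1.0, 100000)
A = np.clip(1.0 + 0.5 * s, 0.0, None)
B = np.clip(1.0 - 0.5 * s, 0.0, None)

true_rate = k * np.mean(A * B)                 # k(<A><B> + <a'b'>)
mean_field = k * np.mean(A) * np.mean(B)       # drops the (negative) covariance
covariance = np.mean(A * B) - np.mean(A) * np.mean(B)
```

The Toor closure discussed in the abstract estimates this covariance from conserved-scalar statistics rather than dropping it, yet was still found to overestimate the measured mean rate by 15 to 65% in this flow.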
Abstract:
Identifying crash “hotspots”, “blackspots”, “sites with promise”, or “high risk” locations is standard practice in departments of transportation throughout the US. The literature is replete with the development and discussion of statistical methods for hotspot identification (HSID). Theoretical derivations and empirical studies have been used to weigh the benefits of various HSID methods; however, only a small number of studies have used controlled experiments to systematically assess them. Using experimentally derived simulated data, which are argued to be superior to empirical data for this purpose, three hotspot identification methods observed in practice are evaluated: simple ranking, confidence interval, and Empirical Bayes. With simulated data, sites with promise are known a priori, in contrast to empirical data, where high risk sites are not known with certainty. To conduct the evaluation, properties of observed crash data are used to generate simulated crash frequency distributions at hypothetical sites. A variety of factors is manipulated to simulate a host of ‘real world’ conditions. Various levels of confidence are explored, and false positives (identifying a safe site as high risk) and false negatives (identifying a high risk site as safe) are compared across methods. Finally, the effects of crash history duration on the three HSID approaches are assessed. The results illustrate that the Empirical Bayes technique significantly outperforms the ranking and confidence interval techniques (with certain caveats). As found by others, false positives and false negatives are inversely related. Three years of crash history appears, in general, to provide an appropriate crash history duration.
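The Empirical Bayes technique evaluated above shrinks a site's observed crash count toward a safety-performance-function (SPF) estimate for similar sites. One common textbook parameterization, shown here with made-up numbers rather than the paper's simulation settings:

```python
def eb_estimate(observed, mu_per_year, years, phi):
    """Empirical Bayes expected crash count over the study period.

    mu_per_year : SPF-predicted mean crashes/year for similar sites
    phi         : negative binomial inverse-dispersion parameter (assumed)
    """
    prior = mu_per_year * years
    weight = 1.0 / (1.0 + prior / phi)       # more weight on SPF as phi grows
    return weight * prior + (1.0 - weight) * observed

# hypothetical site: SPF predicts 2 crashes/yr, 3-year history of 12 crashes
eb = eb_estimate(observed=12, mu_per_year=2.0, years=3, phi=4.0)
```

Because the estimate always lies between the SPF prediction and the observed count, EB discounts randomly elevated counts, which is why it produces fewer false positives than simple ranking in the comparison above.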