922 results for Parametric modeling of repair time
Abstract:
Parmodel is a web server for automated comparative modeling and evaluation of protein structures. The aim of this tool is to help inexperienced users perform modeling, assessment, visualization, and optimization of protein models, and to help crystallographers evaluate experimentally solved structures. It is subdivided into four modules: Parmodel Modeling, Parmodel Assessment, Parmodel Visualization, and Parmodel Optimization. The main module is Parmodel Modeling, which allows several models of the same protein to be built in a reduced time by distributing the modeling processes over a Beowulf cluster. Parmodel automates and integrates the main software used in comparative modeling, such as MODELLER, Whatcheck, Procheck, Raster3D, Molscript, and Gromacs. This web server is freely accessible at http://www.biocristalografia.df.ibilce.unesp.br/tools/parmodel. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
This paper presents a comparative analysis between the experimental characterization and the numerical simulation results for a three-dimensional FCC photonic crystal (PhC) based on the self-assembly synthesis of monodisperse latex spheres. Specifically, experimental optical characterization, by means of reflectance measurements at variable angles over the [1,1,1] lattice plane family, is compared with theoretical calculations based on the Finite Difference Time Domain (FDTD) method, in order to investigate the correlation between theoretical predictions and experimental data. The goal is to highlight the influence of crystal defects on the achieved performance.
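For reference, angle-resolved reflectance from the [1,1,1] planes of an artificial opal is commonly interpreted with the modified Bragg-Snell law; the abstract does not state the relation, so the following standard form is given only as context:

```latex
\lambda_{\mathrm{peak}}(\theta) = 2\,d_{111}\sqrt{n_{\mathrm{eff}}^{2} - \sin^{2}\theta},
\qquad
n_{\mathrm{eff}}^{2} = f\,n_{\mathrm{sphere}}^{2} + (1-f)\,n_{\mathrm{air}}^{2}
```

where d111 is the [1,1,1] interplanar spacing, θ the external incidence angle, and f the sphere filling fraction (f ≈ 0.74 for an ideal FCC lattice); deviations of the measured peaks from this relation are one way crystal defects manifest.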
Abstract:
The aim of this study was to establish the parameters of concentration, time, and mode of application of citric acid and sodium citrate for root conditioning. A total of 495 samples were obtained and equally distributed among 11 groups (5 for testing different concentrations of citric acid, 5 for testing different concentrations of sodium citrate, and 1 control group). After laboratory processing, the samples were analyzed under scanning electron microscopy. A previously calibrated, blinded examiner evaluated micrographs of the samples. Non-parametric statistical analysis was performed on the data obtained. Brushing with 25% citric acid for 3 min promoted greater exposure of collagen fibers than brushing with 1% citric acid for 1 min or its topical application at 1% for 3 min. Sodium citrate exposed collagen fibers in only a small number of samples. Despite the lack of statistical significance, better results for collagen exposure were obtained with brushing application of 25% citric acid for 3 min than with the other application parameters. Because sodium citrate produced collagen exposure in only a few samples, it is not indicated for root conditioning.
Abstract:
The objective of this work is to develop a non-stoichiometric equilibrium model to study parameter effects in the gasification of a feedstock in downdraft gasifiers. The non-stoichiometric equilibrium model is also known as the Gibbs free energy minimization method. Four models were developed and tested. First, a pure non-stoichiometric equilibrium model, M1, was developed; then the methane content was constrained by correlating experimental data, generating model M2. A kinetic constraint that determines the apparent gasification rate was added for model M3, and finally the two aforementioned constraints were implemented together in model M4. Models M2 and M4 proved to be the most accurate of the four, each with a mean RMS (root mean square) error of 1.25. The gasification of Brazilian Pinus elliottii in a downdraft gasifier with air as the gasification agent was also studied. The input parameters considered were: (a) equivalence ratio (0.28-0.35); (b) moisture content (5-20%); (c) gasification time (30-120 min); and (d) carbon conversion efficiency (80-100%). (C) 2014 Elsevier Ltd. All rights reserved.
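The Gibbs free energy minimization at the core of models M1-M4 can be illustrated compactly. Below is a minimal Python sketch for a toy C-H-O system: the species list, the Gibbs-energy values, the temperature, and the feed composition are illustrative assumptions, not the paper's inputs; the methane and kinetic constraints of M2-M4 would enter as additional constraints of the same form.

```python
# Non-stoichiometric equilibrium: minimize total Gibbs energy subject
# to element balances (toy C-H-O system; all numbers illustrative).
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 1073.15  # J/(mol K), assumed gasification temperature (K)

species = ["CO", "CO2", "CH4", "H2", "H2O"]
# Rough standard Gibbs energies of formation at T, J/mol (illustrative)
g0 = np.array([-209e3, -396e3, 19e3, 0.0, -190e3])
# Element balance matrix: rows C, H, O; columns follow `species`
A = np.array([[1, 1, 1, 0, 0],   # carbon
              [0, 0, 4, 2, 2],   # hydrogen
              [1, 2, 0, 0, 1]])  # oxygen
b = np.array([1.0, 2.2, 1.1])    # moles of C, H, O in the feed (assumed)

def total_gibbs(n):
    # Dimensionless Gibbs energy of an ideal-gas mixture at 1 atm
    return np.sum(n * (g0 / (R * T) + np.log(n / n.sum())))

res = minimize(total_gibbs, x0=np.full(5, 0.5),
               bounds=[(1e-8, None)] * 5,
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               method="SLSQP")
print(dict(zip(species, res.x.round(4))))  # equilibrium composition, mol
```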
Abstract:
The aim of this prospective, blind, randomized clinical trial was to assess the effectiveness of repairing localized clinical defects in amalgam restorations that were initially scheduled for replacement. A cohort of 20 patients with 40 (Class I and Class II) amalgam restorations that presented one or more clinical features deviating from the ideal (Bravo or Charlie) according to US Public Health Service criteria was randomly assigned to either the repair group (A, n = 19) or the replacement group (B, n = 21). Two calibrated examiners evaluated the restorations at baseline and 10 years later according to seven parameters: marginal occlusal adaptation, anatomic form, surface roughness, marginal staining, contact, secondary caries, and luster. After 10 years, 30 restorations (75%) were evaluated (Group A: n = 17; Group B: n = 13). Repaired and replaced amalgam restorations showed similar survival outcomes regarding marginal defects and secondary caries in patients with low and medium caries risk, and most of the restorations were considered clinically acceptable after 10 years. Repair treatment increased the potential for tooth longevity through a minimally interventional procedure. All restorations tended to downgrade over time.
Abstract:
We investigate theoretical and observational aspects of a time-dependent parameterization for the dark energy equation of state w(z), which is a well-behaved function of the redshift z over the entire cosmological evolution, i.e., z ∈ [-1, ∞). By using a theoretical algorithm for constructing the quintessence potential directly from the w(z) function, we derive and discuss the general features of the resulting potential for the cases in which dark energy is separately conserved and in which it is coupled to dark matter. Since the parameterization discussed here allows us to divide the parametric plane into well-defined regions associated with distinct classes of dark energy models, we use some of the most recent observations from type Ia supernovae, the baryon acoustic oscillation peak, and the Cosmic Microwave Background shift parameter to check which class is observationally preferred. We show that the largest portion of the confidence contours lies in the region corresponding to a possible crossing of the so-called phantom divide line at some point of the cosmic evolution.
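The abstract does not spell out the functional form of w(z); one example with the stated property, given here purely as an illustration, is

```latex
w(z) = w_0 + w_1\,\frac{z\,(1+z)}{1+z^{2}},
```

which equals w_0 at both z = 0 and z = -1 and tends to the finite value w_0 + w_1 as z → ∞, so it remains well behaved over the whole range z ∈ [-1, ∞).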
Abstract:
The classic conservative approach to thermal process design can lead to over-processing, especially in laminar flow, where a significant distribution of temperature and of residence time occurs. In order to optimize quality retention, a more comprehensive model is required. A model comprising differential equations for mass and heat transfer is proposed for simulating the continuous thermal processing of a non-Newtonian food in a tubular system. The model takes into account the contributions from the heating and cooling sections, the heat exchange with the ambient air, and the effective diffusion associated with non-ideal laminar flow. The case study of soursop juice processing was used to test the model. Various simulations were performed to evaluate the effect of the model assumptions. A substantial difference in the predicted lethality was observed between the classic approach and the proposed model. The main advantage of the model is its flexibility to represent different aspects with a small computational time, making it suitable for process evaluation and design. (C) 2012 Elsevier Ltd. All rights reserved.
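For context, the lethality compared between the two approaches is conventionally accumulated over the time-temperature history of each fluid element; a standard form, assumed here since the abstract does not state it, is

```latex
F = \int_{0}^{t} 10^{\left(T(t') - T_{\mathrm{ref}}\right)/z}\,\mathrm{d}t'
```

where T_ref is the reference temperature and z the thermal resistance constant of the target microorganism. In non-ideal laminar flow the residence time, and hence T(t'), differs between fluid elements, which is precisely what the proposed model resolves and the classic approach averages away.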
Abstract:
Transplantation brings hope to many patients. A multidisciplinary approach in this field aims at creating biologically functional tissues to be used as implants and prostheses. The freeze-drying process allows the fundamental properties of these materials to be preserved, making future manipulation and storage easier. Optimizing a freeze-drying cycle is of great importance, since it aims at reducing the costs of this time- and energy-consuming process while increasing product quality. Mathematical modeling is a tool that helps in understanding the behavior of the process variables and consequently supports optimization studies. Freeze-drying microscopy is a technique usually applied to determine critical temperatures of liquid formulations. In this work it was used to determine the sublimation rates during freeze-drying of a biological tissue. The sublimation rates were measured from the speed of the moving interface between the dried and the frozen layers under 21.33, 42.66, and 63.99 Pa. The measured values were then used in a theoretical model to simulate various temperature profiles of the freeze-drying process. Good agreement between the experimental and simulated results was found.
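The conversion from interface speed to sublimation rate is not given in the abstract; a common mass-balance form, stated here as an assumption, is

```latex
\dot{m}'' = \left(\rho_{\mathrm{frozen}} - \rho_{\mathrm{dried}}\right)\frac{\mathrm{d}X}{\mathrm{d}t}
```

where \dot{m}'' is the sublimation mass flux, X(t) the position of the moving dried-frozen interface, and the densities are the bulk densities of the frozen and dried layers.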
Abstract:
This paper deals with the numerical analysis of saturated porous media, taking into account damage phenomena in the solid skeleton. The porous medium is treated within a poro-elastic framework, under fully saturated conditions, based on Biot's theory. A scalar damage model is assumed for this analysis. An implicit boundary element method (BEM) formulation, based on time-independent fundamental solutions, is developed and implemented to couple the fluid flow and two-dimensional elastostatic problems. The integration over boundary elements is evaluated using a numerical Gauss procedure. A semi-analytical scheme for the case of triangular domain cells is followed to carry out the relevant domain integrals. The non-linear problem is solved by a Newton-Raphson procedure. Numerical examples are presented in order to validate the implemented formulation and to illustrate its efficacy. (C) 2011 Elsevier Ltd. All rights reserved.
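As an illustration of the solution strategy, the following is a minimal Python sketch of a Newton-Raphson loop; the two-equation residual is a toy stand-in for the discretized non-linear BEM system, not the paper's formulation.

```python
# Newton-Raphson for a non-linear system r(u) = 0 (toy stand-in for
# the damage-coupled BEM equations; all numbers illustrative).
import numpy as np

def residual(u):
    return np.array([u[0]**3 + u[1] - 3.0,
                     u[0] + 2.0 * u[1]**2 - 2.0])

def jacobian(u):
    return np.array([[3.0 * u[0]**2, 1.0],
                     [1.0, 4.0 * u[1]]])

u = np.array([1.0, 1.0])              # initial guess
for it in range(30):
    du = np.linalg.solve(jacobian(u), -residual(u))  # Newton step
    u += du
    if np.linalg.norm(du) < 1e-12:    # convergence on the increment
        break
print(f"converged in {it + 1} iterations: u = {u}")
```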
Abstract:
For many years, RF and analog integrated circuits were developed mainly in bipolar and compound semiconductor technologies because of their better performance. In recent years, advances in CMOS technology have allowed analog and RF circuits to be built in CMOS as well, but using CMOS instead of bipolar technology in RF applications raises more issues in terms of noise. Noise cannot be completely eliminated; it ultimately limits the accuracy of measurements and sets a lower limit on how small a signal can be detected and processed in an electronic circuit. One kind of noise that affects MOS transistors much more than bipolar ones is low-frequency noise. In MOSFETs, low-frequency noise is mainly of two kinds: flicker (1/f) noise and random telegraph signal (RTS) noise. The objective of this thesis is to characterize and model low-frequency noise by studying RTS and flicker noise under both constant and switched bias conditions. The effect of different biasing schemes on both RTS and flicker noise has been investigated in the time and frequency domains.
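A minimal sketch of what RTS characterization in the frequency domain involves is given below, assuming a symmetric two-level RTS with exponentially distributed dwell times (sampling rate, dwell time, and amplitude are illustrative): a single RTS has a Lorentzian power spectral density, and a superposition of many RTS processes with a broad spread of time constants builds up the 1/f flicker spectrum.

```python
# Synthesize a two-level random telegraph signal and estimate its PSD;
# the PSD should be flat below the Lorentzian corner at ~1/(pi*tau)
# and roll off as 1/f^2 above it. All parameters are illustrative.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs, n = 1e4, 2**18            # sampling rate (Hz), number of samples
tau = 1e-2                    # mean dwell time per state (s), assumed

dwell = rng.exponential(tau, size=int(3 * n / (fs * tau)))
edges = np.cumsum(dwell)      # switching instants
t = np.arange(n) / fs
state = (np.searchsorted(edges, t) % 2).astype(float)  # 0/1 telegraph

f, psd = welch(state - state.mean(), fs=fs, nperseg=4096)
print(f"Lorentzian corner ~ {1 / (np.pi * tau):.0f} Hz")
```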
Abstract:
The term "Brain Imaging" identifies a set of techniques for analyzing the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are widely used in the study of brain activity. In addition to clinical usage, analysis of brain activity is gaining popularity in other recent fields, e.g., Brain-Computer Interfaces (BCI) and the study of cognitive processes. In this context, classical solutions (e.g., fMRI, PET-CT) can be unfeasible, due to their low temporal resolution, high cost, and limited portability. For these reasons, alternative low-cost techniques are the object of research, typically based on simple recording hardware and an intensive data-processing stage. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where the electric potential at the patient's scalp is recorded by high-impedance electrodes. In EEG the potentials are generated directly by neuronal activity, while in EIT they arise from the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from the measurements, EIT and EEG rely on detailed knowledge of the underlying electrical properties of the body, obtained from numerical models of the electric field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a trade-off between physical accuracy and technical feasibility which currently severely limits the capabilities of these techniques. Moreover, processing the recorded data requires computationally intensive regularization techniques, which strains applications with tight timing constraints (such as BCI). This work focuses on the parallel implementation of a workflow for EEG and EIT data processing. The resulting software is accelerated using multi-core GPUs, in order to provide solutions in reasonable times and to address the requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
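As an illustration of the regularization step mentioned above, here is a minimal NumPy sketch of a Tikhonov-regularized inverse solve of the kind that dominates EEG/EIT reconstruction cost; the dimensions, the identity regularizer, and the random stand-in lead-field matrix are assumptions (the thesis offloads this type of dense linear algebra to GPUs, while plain NumPy stands in here).

```python
# Tikhonov-regularized source reconstruction:
#   x* = argmin ||A x - b||^2 + lam * ||x||^2
# With far fewer electrodes (m) than sources (n), the m x m dual form
# A^T (A A^T + lam I)^{-1} b is much cheaper than the n x n one.
import numpy as np

rng = np.random.default_rng(3)
m, n_src = 64, 5000
A = rng.normal(size=(m, n_src))   # stand-in lead-field matrix (assumed)
b = rng.normal(size=m)            # measured scalp potentials (assumed)
lam = 1e-2                        # regularization weight (assumed)

x = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), b)
print(x.shape)                    # one amplitude per source
```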
Abstract:
We use data from about 700 GPS stations in the Euro-Mediterranean region to investigate the present-day behavior of the Calabrian subduction zone within the Mediterranean-scale plate kinematics and to perform local-scale studies of strain accumulation on active structures. We focus attention on the Messina Straits and Crati Valley faults, where GPS data show extensional velocity gradients of ∼3 mm/yr and ∼2 mm/yr, respectively. We use a dislocation model and a non-linear constrained optimization algorithm to invert for the fault geometric parameters and slip-rates, and we evaluate the associated uncertainties adopting a bootstrap approach. Our analysis suggests the presence of two partially locked normal faults. To investigate the impact of elastic strain contributions from other nearby active faults on the observed velocity gradient, we use a block modeling approach. Our models show that the inferred slip-rates on the two analyzed structures are strongly affected by the assumed locking width of the Calabrian subduction thrust. In order to frame the observed local deformation features within the present-day central Mediterranean kinematics, we perform a statistical analysis testing the independent motion (with respect to the African and Eurasian plates) of the Adriatic, Calabrian, and Sicilian blocks. Our preferred model confirms a microplate-like behavior for all the investigated blocks. Within these kinematic boundary conditions we further investigate the Calabrian slab interface geometry using a combined approach of block modeling and the χ²ν statistic. Almost no information is obtained using only the horizontal GPS velocities, which prove to be an insufficient dataset for a multi-parametric inversion approach. To constrain the slab geometry more strongly, we estimate the predicted vertical velocities by performing suites of forward models of elastic dislocations with varying fault locking depth. Comparison with the observed field suggests a maximum resolved locking depth of 25 km.
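The bootstrap uncertainty estimation can be illustrated compactly. The sketch below refits a toy one-dimensional interseismic model, the standard Savage-Burford arctangent profile, to resampled synthetic velocities; the profile, noise level, and parameter values are textbook assumptions, not the actual dislocation model or data of this work.

```python
# Bootstrap confidence intervals for fault slip-rate and locking depth
# from a (synthetic) GPS velocity profile across a fault.
import numpy as np
from scipy.optimize import curve_fit

def screw_dislocation(x, slip_rate, locking_depth):
    # Savage & Burford (1973) interseismic velocity profile
    return (slip_rate / np.pi) * np.arctan(x / locking_depth)

rng = np.random.default_rng(1)
x = np.linspace(-80, 80, 40)                 # station distances (km)
v_obs = screw_dislocation(x, 3.0, 12.0) + rng.normal(0, 0.3, x.size)

boot = []
for _ in range(500):
    i = rng.integers(0, x.size, x.size)      # resample with replacement
    p, _ = curve_fit(screw_dislocation, x[i], v_obs[i], p0=[2.0, 10.0],
                     bounds=([0.0, 1.0], [10.0, 50.0]))
    boot.append(p)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"95% CI slip-rate: {lo[0]:.2f}-{hi[0]:.2f} mm/yr")
print(f"95% CI locking depth: {lo[1]:.1f}-{hi[1]:.1f} km")
```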
Abstract:
The objective of this thesis was to improve the commercial CFD software Ansys Fluent to obtain a tool able to perform accurate simulations of flow boiling in the slug flow regime. The achievement of a reliable numerical framework allows a better understanding of the bubble and flow dynamics induced by the evaporation and makes possible the prediction of wall heat transfer trends. In order to save computational time, the flow is modeled with an axisymmetric formulation. The vapor and liquid phases are treated as incompressible and in laminar flow. By means of a single-fluid approach, the flow equations are written as for a single-phase flow, but the discontinuities at the interface and the interfacial effects need to be accounted for and discretized properly. Ansys Fluent provides a Volume Of Fluid technique to advect the interface and to map the discontinuous fluid properties throughout the flow domain. The interfacial effects are dominant in boiling slug flow, and the accuracy of their estimation is fundamental for the reliability of the solver. Self-implemented functions, developed ad hoc, are introduced into the numerical code to compute the surface tension force and the rates of mass and energy exchange at the interface related to evaporation. Several validation benchmarks confirm the improved performance of the modified software. Various adiabatic configurations are simulated in order to test the capability of the numerical framework in modeling actual flows, and the comparison with experimental results is very positive. The simulation of a single evaporating bubble underlines the dominant effect on the global heat transfer rate of the local transient heat convection in the liquid after the bubble transit. The simulation of multiple evaporating bubbles flowing in sequence shows that their mutual influence can strongly enhance the heat transfer coefficient, up to twice the single-phase flow value.
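For the interphase mass transfer, a widely used closure in VOF boiling simulations, given here only as an illustration since the abstract does not say which model the self-implemented functions adopt, is the Lee evaporation model:

```latex
\dot{m}_{lv} = r\,\alpha_{l}\,\rho_{l}\,\frac{T - T_{\mathrm{sat}}}{T_{\mathrm{sat}}},
\qquad T \ge T_{\mathrm{sat}}
```

where α_l and ρ_l are the liquid volume fraction and density, T_sat the saturation temperature, and r an empirical relaxation parameter; the corresponding latent-heat sink -\dot{m}_{lv} h_{lv} enters the energy equation.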
Abstract:
In this thesis, we extend some ideas of statistical physics to describe the properties of human mobility. By using a database containing GPS measures of individual paths (position, velocity, and covered space at a spatial scale of 2 km or a time scale of 30 s), which covers 2% of the private vehicles in Italy, we succeed in determining statistical empirical laws pointing out "universal" characteristics of human mobility. By developing simple stochastic models suggesting possible explanations of the empirical observations, we are able to indicate the key quantities and cognitive features that rule individuals' mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We discuss the implications of Benford's law emerging from the distribution of times elapsed between successive trips. We observe how the daily travel-time budget is related to many aspects of the urban environment, and describe how the daily mobility budget is spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities performed, and those of the networks describing people's common use of space to the fractional dimension of the urban territory. We study entropy measures of individual mobility patterns, showing that they carry almost the same information as the related mobility networks, but are also influenced by a hierarchy among the activities performed. We find that Wardrop's principles are violated, as drivers have only incomplete information on the traffic state and therefore rely on knowledge of the average travel times. We propose an assimilation model to resolve the intrinsic scattering of GPS data on the street network, permitting the real-time reconstruction of the traffic state at an urban scale.
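The Benford's-law analysis of inter-trip times can be illustrated with a short check: compare the observed leading-digit frequencies with Benford's prediction P(d) = log10(1 + 1/d). The log-normal stand-in below for the real GPS inter-trip times is an assumption.

```python
# Leading-digit test against Benford's law for (synthetic) inter-trip
# times; a broad, positive, multi-decade distribution is expected to
# follow P(d) = log10(1 + 1/d) reasonably well.
import numpy as np

rng = np.random.default_rng(2)
times = rng.lognormal(mean=7.0, sigma=1.5, size=100_000)  # seconds

mag = np.floor(np.log10(times))
first_digit = (times // 10.0**mag).astype(int)            # digits 1..9
observed = np.bincount(first_digit, minlength=10)[1:] / times.size
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
for d in range(1, 10):
    print(f"d={d}: observed {observed[d-1]:.3f}  Benford {benford[d-1]:.3f}")
```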
Abstract:
Urban centers significantly contribute to anthropogenic air pollution, although they cover only a minor fraction of the Earth's land surface. Since the worldwide degree of urbanization is steadily increasing, the anthropogenic contribution to air pollution from urban centers is expected to become more substantial in future air quality assessments. The main objective of this thesis was to obtain a more profound insight into the dispersion and the deposition of aerosol particles from 46 individual major population centers (MPCs), as well as their regional and global influence on the atmospheric distribution of several aerosol types. For the first time, this was assessed in one model framework, for which the global model EMAC was applied with different representations of aerosol particles. First, in an approach with passive tracers and a setup in which the results depend only on the source location and the size and solubility of the tracers, several metrics and a regional climate classification were used to quantify the major outflow pathways, both vertically and horizontally, and to compare the balance between pollution export away from and pollution build-up around the source points. Then, in a more comprehensive approach, the anthropogenic emissions of key trace species were changed at the MPC locations to determine the cumulative impact of the MPC emissions on the atmospheric aerosol burdens of black carbon, particulate organic matter, sulfate, and nitrate. Ten different mono-modal passive aerosol tracers were continuously released at the same constant rate at each emission point. The results clearly showed that on average about five times more mass is advected quasi-horizontally at low levels than is exported into the upper troposphere. The strength of the low-level export is mainly determined by the location of the source, while the vertical transport is mainly governed by the lifting potential and the solubility of the tracers. Similar to insoluble gas phase tracers, the low-level export of aerosol tracers is strongest at middle and high latitudes, while the regions of strongest vertical export differ between aerosol (temperate winter dry) and gas phase (tropics) tracers. The emitted mass fraction that is kept around MPCs is largest in regions where aerosol tracers have short lifetimes; this mass is also critical for assessing the impact on humans. However, the number of people who live in a strongly polluted region around urban centers depends more on the population density than on the size of the area that is affected by strong air pollution. Another major result was that fine aerosol particles (diameters smaller than 2.5 micrometers) from MPCs undergo substantial long-range transport, with about half of the emitted mass being deposited beyond 1000 km from the source. In contrast to this diluted remote deposition, there are areas around the MPCs which experience high deposition rates, especially regions that are frequently affected by heavy precipitation or are situated in poorly ventilated locations. Moreover, most MPC aerosol emissions are removed over land surfaces. In particular, forests experience more deposition from MPC pollutants than other land ecosystems. In addition, it was found that the generic treatment of aerosols has no substantial influence on the major conclusions drawn in this thesis.
Moreover, in the more comprehensive approach, it was found that emissions of black carbon, particulate organic matter, sulfur dioxide, and nitrogen oxides from MPCs influence the atmospheric burden of various aerosol types very differently, with impacts generally being larger for the secondary species, sulfate and nitrate, than for the primary species, black carbon and particulate organic matter. While the changes in the burdens of sulfate, black carbon, and particulate organic matter show an almost linear response to changes in the emission strength, the formation of nitrate was found to be contingent on many more factors, e.g., the abundance of sulfuric acid, than the strength of the nitrogen oxide emissions alone. The generic tracer experiments were further extended to conduct the first risk assessment of the cumulative risk of contamination from multiple nuclear reactor accidents on the global scale. For this, many factors had to be taken into account: the probability of major accidents, the cumulative deposition field of the radionuclide cesium-137, and a threshold value that defines contamination. By collecting the necessary data and after accounting for uncertainties, it was found that the risk is highest in western Europe, the eastern US, and Japan, where on average contamination by major accidents is expected about every 50 years.
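The return-period arithmetic behind the "about every 50 years" figure can be made concrete with a toy calculation; both numbers below are illustrative assumptions, not the values derived in the thesis.

```python
# Expected return period of contamination in a region reached by the
# plumes of several reactors (illustrative numbers only).
p_accident = 2.5e-4   # assumed major accidents per reactor-year
n_reactors = 80       # assumed reactors whose fallout reaches the region

rate = p_accident * n_reactors    # expected accidents per year
print(f"return period ~ {1 / rate:.0f} years")  # -> 50 years
```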