902 results for Unstructured Grids
Abstract:
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation of heavily fluctuating data with respect to multiple boundary conditions requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. The behavior of the interpolation function, i.e. the amount of overshoot and undershoot, is controlled by a single user-adjustable parameter. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms based on linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
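The core of the described approach, a parametrized Hermite interpolation of the integrated data that is then differenced at the requested bin edges, can be sketched in a few lines. This is a minimal illustration only: the function name, the slope damping via a `tension` parameter, and the NumPy-based implementation are assumptions, not the authors' code.

```python
import numpy as np

def rebin_conservative(edges_src, values, edges_dst, tension=0.0):
    """Re-bin histogrammed data (per-bin integrals) by interpolating the
    cumulative integral with a parametrized cubic Hermite curve.

    tension in [0, 1]: 0 uses finite-difference slopes (smooth, may
    overshoot); 1 flattens slopes to zero (monotone, no overshoot).
    """
    # Cumulative integral at the source bin edges.
    F = np.concatenate(([0.0], np.cumsum(values)))
    # Finite-difference slopes, damped by the tension parameter.
    m = np.gradient(F, edges_src) * (1.0 - tension)
    # Locate each destination edge in the source grid.
    idx = np.clip(np.searchsorted(edges_src, edges_dst) - 1, 0, len(values) - 1)
    h = edges_src[idx + 1] - edges_src[idx]
    t = (edges_dst - edges_src[idx]) / h
    # Cubic Hermite basis functions.
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    F_dst = h00*F[idx] + h10*h*m[idx] + h01*F[idx + 1] + h11*h*m[idx + 1]
    # Differencing the interpolated integral yields the re-binned values.
    return np.diff(F_dst)

edges_src = np.linspace(0.0, 1.0, 11)
values = np.array([0, 1, 4, 9, 7, 5, 5, 3, 2, 1], dtype=float)
rebinned = rebin_conservative(edges_src, values, np.linspace(0.0, 1.0, 6),
                              tension=0.5)
print(rebinned, rebinned.sum())   # totals match: the integral is conserved
```

Because the interpolant passes exactly through the cumulative values at the source edges, the overall integral is conserved regardless of the tension setting; the parameter only trades smoothness against overshoot.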
Abstract:
KIVA is an open-source Computational Fluid Dynamics (CFD) code capable of computing transient two- and three-dimensional chemically reactive fluid flows with sprays. The latest version in the family of KIVA codes is KIVA-4, which can handle unstructured meshes. This project focuses on the implementation of the Conjugate Heat Transfer (CHT) code in KIVA-4. A previous version of KIVA with conjugate heat transfer, developed at Michigan Technological University by Egel Urip, is used in this project. During the first phase of the project, the differences in code structure between the previous version of KIVA and KIVA-4 were studied, which was the most challenging part of the project. The second phase involved reverse engineering, in which the CHT code in the previous version was extracted and implemented in KIVA-4 according to the new code structure. The implemented code was validated using a 4-valve pentroof engine case. A solid cylinder wall surrounding three-quarters of the engine cylinder was generated using GRIDGEN, and the heat transfer to the solid wall during one engine cycle (0-720 crank angle degrees) was compared with the reference result: the same engine case run in the previous version with the original code developed by Egel Urip. The results of the current code agree closely with the reference results, verifying successful implementation of the CHT code in KIVA-4.
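For orientation, the physics being ported can be reduced to a minimal 1D sketch: a gas-side convective flux heats a conducting solid wall that is advanced explicitly in time. The function, material values, and explicit coupling strategy below are illustrative assumptions and bear no relation to the actual KIVA-4 implementation.

```python
import numpy as np

def step_cht(T_gas, T_solid, h_conv, k_s, dx, rho_c, dt):
    """Advance solid-wall temperatures one explicit step.

    T_solid[0] is the wetted surface; heat enters by Newton cooling from
    the gas and diffuses into the wall by conduction. The far-side cell
    is left untouched, acting as a fixed-temperature boundary.
    """
    T_new = T_solid.copy()
    # Interior conduction (central differences); diffusivity = k_s / rho_c.
    lap = (T_solid[2:] - 2*T_solid[1:-1] + T_solid[:-2]) / dx**2
    T_new[1:-1] += dt * k_s / rho_c * lap
    # Surface cell: convective flux in, conductive flux out (W/m^2).
    q_conv = h_conv * (T_gas - T_solid[0])
    q_cond = k_s * (T_solid[0] - T_solid[1]) / dx
    T_new[0] += dt * (q_conv - q_cond) / (rho_c * dx)
    return T_new

# Illustrative values: hot combustion gas over an iron-like wall;
# rho_c is the volumetric heat capacity (J/m^3/K).
T_solid = np.full(50, 400.0)           # initial wall temperature, K
for _ in range(10000):                 # 1 s of physical time
    T_solid = step_cht(T_gas=1500.0, T_solid=T_solid, h_conv=500.0,
                       k_s=40.0, dx=1e-3, rho_c=3.6e6, dt=1e-4)
print(f"surface temperature after 1 s: {T_solid[0]:.1f} K")
```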
Abstract:
The numerical solution of the incompressible Navier-Stokes equations offers an effective alternative to the experimental analysis of fluid-structure interaction (FSI), i.e. the dynamical coupling between a fluid and a solid, which is otherwise very complex, time consuming and expensive. A method that can accurately model these types of mechanical systems numerically is therefore highly attractive, and these advantages are even more obvious when considering huge structures like bridges, high-rise buildings, or wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation, the KLE, to solve the incompressible Navier-Stokes equations for such FSI problems. This scheme allows for the implementation of robust adaptive ODE time-integration schemes and thus allows the various multiphysics problems to be tackled as separate modules. The current KLE algorithm employs a structured or unstructured mesh for spatial discretization and allows the use of a self-adaptive or fixed-time-step ODE solver for unsteady problems. This research analyzes the effects of the Courant-Friedrichs-Lewy (CFL) condition for the KLE when applied to the unsteady Stokes problem. The objective is to conduct a numerical analysis for stability and, hence, for convergence. Our results confirm that the time step Δt is constrained by the CFL-like condition Δt ≤ const · h^α, where h denotes the spatial discretization size.
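A constraint of the form Δt ≤ const · h^α can be estimated empirically by bisecting for the largest stable time step on a sequence of meshes and fitting log Δt_max against log h. The sketch below demonstrates the procedure on a stand-in explicit 1D diffusion step (for which α = 2); it is illustrative only and does not reproduce the KLE solver.

```python
import numpy as np

def advance(u, dt, h):
    """Stand-in explicit scheme (1D periodic diffusion); the KLE solver
    itself is not reproduced here."""
    return u + dt / h**2 * (np.roll(u, 1) - 2*u + np.roll(u, -1))

def max_stable_dt(u0, h, dt_hi=1.0, iters=40):
    """Bisect for the largest dt for which a short trial run stays bounded."""
    dt_lo = 0.0
    for _ in range(iters):
        dt = 0.5 * (dt_lo + dt_hi)
        u = u0.copy()
        with np.errstate(over='ignore', invalid='ignore'):
            for _ in range(200):
                u = advance(u, dt, h)
        if np.all(np.isfinite(u)) and np.max(np.abs(u)) < 1e3:
            dt_lo = dt                  # stable: push dt upward
        else:
            dt_hi = dt                  # unstable: pull dt downward
    return dt_lo

# Fit the observed constraint dt_max ~ const * h**alpha across meshes.
hs = np.array([0.04, 0.02, 0.01, 0.005])
dts = [max_stable_dt(np.sin(2*np.pi*np.arange(0.0, 1.0, h)), h) for h in hs]
alpha = np.polyfit(np.log(hs), np.log(dts), 1)[0]
print(f"estimated alpha ~ {alpha:.2f}")  # ~2 for this explicit scheme
```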
Abstract:
KIVA is a FORTRAN code developed by Los Alamos National Laboratory to simulate the complete engine cycle. KIVA is a flow solver used to calculate properties of a fluid flow field; it employs various numerical schemes and methods to solve the Navier-Stokes equations. This project involves improving the accuracy of one such scheme by upgrading it to a higher-order scheme. The numerical scheme to be modified is used in the critical final-stage calculation called the rezoning phase. The primary objective of this project is to implement a higher-order numerical scheme and to validate and verify that the new scheme is better than the existing one. The latest version of the KIVA family (KIVA-4) is used for implementing the higher-order scheme in order to support unstructured meshes. The code is validated using the traditional shock tube problem, and the results are verified to be more accurate than those of the existing schemes with reference to the analytical result. A convection test was performed to compare computational accuracy on convective transfer; the new scheme shows less numerical diffusion than the existing schemes. A four-valve pentroof engine, an example case from the KIVA package, is used as an application to ensure the stability of the scheme in a practical setting, and the results are compared for the temperature profile. Despite these positive results, the implemented numerical scheme has the downside of consuming more CPU time for the computational analysis; a detailed comparison is provided. Overall, the implementation of the higher-order scheme in the latest code, KIVA-4, is verified to be successful and gives better results than the existing scheme, which satisfies the objective of this project.
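The reduced numerical diffusion reported for the convection test can be reproduced qualitatively with a small 1D linear-advection experiment comparing first-order upwind against a slope-limited second-order (MUSCL/minmod) scheme. This is a generic stand-in for illustration, not the scheme implemented in KIVA-4's rezoning phase.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: smallest-magnitude slope, zero at extrema."""
    return np.where(a*b > 0, np.sign(a)*np.minimum(np.abs(a), np.abs(b)), 0.0)

def step_upwind(q, c):
    """First-order upwind step for u > 0 at Courant number c."""
    return q - c*(q - np.roll(q, 1))

def step_muscl(q, c):
    """Second-order TVD step: limited slopes give sharper profiles."""
    s = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)
    qL = q + 0.5*(1 - c)*s                 # reconstructed face value
    return q - c*(qL - np.roll(qL, 1))

n, c = 200, 0.5                            # cells, Courant number
q1 = np.where((np.arange(n) > 40) & (np.arange(n) < 80), 1.0, 0.0)
q2 = q1.copy()
for _ in range(200):                       # advect the pulse 100 cells
    q1, q2 = step_upwind(q1, c), step_muscl(q2, c)
print("upwind peak:", q1.max(), " MUSCL peak:", q2.max())
# The limited second-order scheme keeps the square pulse much sharper.
```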
Abstract:
This dissertation presents competitive control methodologies for small-scale power systems (SSPS). An SSPS is a collection of sources and loads sharing a common network that can be isolated during terrestrial disturbances. Micro-grids, naval ship electric power systems (NSEPS), aircraft power systems and telecommunication power systems are typical examples of SSPS. Unlike large-scale power systems, an SSPS lacks a defined slack bus, and a change in a load or source influences the system parameters in real time. The control system should therefore provide the required flexibility to ensure operation as a single aggregated system. In most SSPS cases the sources and loads must be equipped with power electronic interfaces, which can be modeled as dynamic controllable quantities. The mathematical formulation of the micro-grid is carried out with the help of game theory, optimal control and the fundamental theory of electrical power systems. The micro-grid can then be viewed as a dynamical multi-objective optimization problem with nonlinear objectives and variables. Detailed analysis of optimal solutions was carried out for startup transient modeling, bus selection modeling and the level of communication within the micro-grid. In each approach a detailed mathematical model is formed to observe the system response. A differential game-theoretic approach was also used for modeling and optimization of startup transients. The startup transient controller was implemented with open-loop, PI and feedback control methodologies, and a hardware implementation was carried out to validate the theoretical results. The proposed game-theoretic controller outperforms the traditional PI controller during startup. In addition, the optimal transient surface is necessary when implementing the feedback controller for the startup transient. The experimental results are in agreement with the theoretical simulation. Bus selection and team communication were modeled with discrete and continuous game theory models. Although players have multiple choices, this controller is capable of choosing the optimum bus, and the team communication structures are able to optimize the players' Nash equilibrium point. All mathematical models are based on the local information of the load or source. As a result, these models are the keys to developing accurate distributed controllers.
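As one concrete illustration of the startup-transient control methodologies compared above, the PI case can be sketched as a regulator driving a first-order converter model toward a voltage setpoint. The gains, plant time constant, and setpoint below are illustrative assumptions, not values from the dissertation.

```python
# Minimal PI startup-transient sketch: a first-order converter model
# (time constant tau) chases the PI command until the bus voltage
# settles at the setpoint. All parameter values are hypothetical.

def simulate_pi_startup(kp=2.0, ki=8.0, v_ref=1.0, tau=0.05,
                        dt=1e-3, t_end=1.0):
    v, integ, trace = 0.0, 0.0, []
    for _ in range(int(t_end / dt)):
        err = v_ref - v
        integ += err * dt                  # integral of the error
        u = kp * err + ki * integ          # PI control action
        v += dt * (u - v) / tau            # first-order plant response
        trace.append(v)
    return trace

trace = simulate_pi_startup()
print(f"final bus voltage: {trace[-1]:.3f} (setpoint 1.0)")
```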
Abstract:
BACKGROUND: Fine particulate matter originating from traffic correlates with increased morbidity and mortality. An important source of traffic particles is brake wear of cars, which contributes up to 20% of total traffic emissions. The aim of this study was to evaluate potential toxicological effects on human epithelial lung cells exposed to freshly generated brake wear particles. RESULTS: An exposure box was mounted around a car's braking system. Lung cells cultured at the air-liquid interface were then exposed to particles emitted from two typical braking behaviours ("full stop" and "normal deceleration"). The particle size distribution as well as brake emission components such as metals and carbons were measured online, and the particles deposited on grids for transmission electron microscopy were counted. The tight junction arrangement was observed by laser scanning microscopy. Cellular responses were assessed by measurement of lactate dehydrogenase (cytotoxicity), by investigating the production of reactive oxygen species, and by the release of the pro-inflammatory mediator interleukin-8. The density of the tight junction protein occludin decreased significantly (p < 0.05) with increasing concentrations of metals on the particles (iron, copper and manganese, which were all strongly correlated with each other). Occludin was also negatively correlated with the intensity of reactive oxygen species. Interleukin-8 concentrations were significantly correlated with increasing organic carbon concentrations. No correlation was observed between occludin and interleukin-8, nor between reactive oxygen species and interleukin-8. CONCLUSION: These findings suggest that the metals on brake wear particles damage tight junctions through a mechanism involving oxidative stress. Brake wear particles also increase pro-inflammatory responses, although this may occur through a mechanism other than oxidative stress.
Abstract:
Steers were sorted into four groups based on hip height and fat cover at the start of the finishing period. Each group of sorted steers was fed diets containing 0.59 or 0.64 Mcal NEg per lb of diet dry matter. Steers with less initial fat cover (0.08 in.) had less carcass fat cover 103 days later than those with more (0.17 in.). The steers with less fat cover accumulated fat at a faster rate, but this was not apparent before 80 days. Accretion of fat was best predicted by an exponential growth equation and was not affected by the two dietary energy concentrations fed in this study. Steers with greater initial height accumulated fat cover at a slower rate than shorter steers. This difference was interpreted to mean that large-frame steers accumulate subcutaneous fat at a slower rate than medium-frame steers. Increase in ribeye area was best described by a linear equation. Initial fat cover, hip height, and dietary energy concentration did not affect the growth rate of this muscle. Predicting carcass fat cover from the initial ultrasound measurement of fat thickness identified 46 of the 51 carcasses with less than 0.4 in. of fat cover. Twelve carcasses predicted to have less than 0.4 in. of fat cover had more than 0.4 in., and five carcasses predicted to have more than 0.4 in. actually had less. Accurate initial ultrasound measurements of fat thickness might therefore be useful for sorting cattle for specific marketing grids.
Abstract:
A close-to-native structure of bulk biological specimens can be imaged by cryo-electron microscopy of vitreous sections (CEMOVIS). In some cases structural information can be combined with X-ray data, leading to atomic resolution in situ. However, CEMOVIS is not routinely used. The two critical steps consist of producing a frozen section ribbon of a few millimeters in length and transferring the ribbon onto an electron microscopy grid. During these steps, the first sections of the ribbon are wrapped around an eyelash (unwrapping is frequent). When a ribbon is sufficiently attached to the eyelash, the operator must guide the nascent ribbon. Steady hands are required: shaking or overstretching may break the ribbon, which then immediately wraps around itself or flies away and thereby becomes unusable. Micromanipulators for eyelashes and grids, as well as ionizers to attach section ribbons to grids, have been proposed. The rate of successful ribbon collection, however, remained low for most operators. Here we present a setup composed of two micromanipulators. One micromanipulator guides an electrically conductive fiber to which the ribbon sticks with unprecedented efficiency in comparison to a non-conductive eyelash. The second micromanipulator positions the grid beneath the newly formed section ribbon, and with the help of an ionizer the ribbon is attached to the grid. Although manipulations are greatly facilitated, sectioning artifacts remain; nevertheless, the likelihood of obtaining high-quality sections is significantly increased due to the large number of sections that can be produced with the reported tool.
Abstract:
The discovery of grid cells in the medial entorhinal cortex (MEC) permits the characterization of hippocampal computation in much greater detail than previously possible. The present study addresses how an integrate-and-fire unit driven by grid-cell spike trains may transform the multipeaked, spatial firing pattern of grid cells into the single-peaked activity that is typical of hippocampal place cells. Previous studies have shown that in the absence of network interactions, this transformation can succeed only if the place cell receives inputs from grids with overlapping vertices at the location of the place cell's firing field. In our simulations, the selection of these inputs was accomplished by fast Hebbian plasticity alone. The resulting nonlinear process was acutely sensitive to small input variations. Simulations differing only in the exact spike timing of grid cells produced different field locations for the same place cells. Place fields became concentrated in areas that correlated with the initial trajectory of the animal; the introduction of feedback inhibitory cells reduced this bias. These results suggest distinct roles for plasticity of the perforant path synapses and for competition via feedback inhibition in the formation of place fields in a novel environment. Furthermore, they imply that variability in MEC spiking patterns or in the rat's trajectory is sufficient for generating a distinct population code in a novel environment and suggest that recalling this code in a familiar environment involves additional inputs and/or a different mode of operation of the network.
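The mechanism under study, a leaky integrate-and-fire unit whose grid-cell synapses are selected by fast Hebbian plasticity, can be sketched as follows. The 1D rate maps, Poisson inputs, learning rule details, and all parameter values are illustrative assumptions rather than the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, dt, tau_m, v_th = 100, 1e-3, 0.02, 1.0

def grid_rate(x, spacing, phase, peak=40.0):
    """Periodic 1D stand-in for a grid cell's spatial rate map (Hz)."""
    return peak * np.cos(2*np.pi*(x - phase)/spacing)**2

spacings = rng.uniform(0.3, 0.8, n_in)      # grid spacings on a 1 m track
phases = rng.uniform(0.0, 1.0, n_in)
w = np.full(n_in, 0.02)                     # initial synaptic weights
v = 0.0

for step in range(20000):                   # animal runs back and forth
    x = 0.5 + 0.5*np.sin(2*np.pi*step*dt/10)
    rates = grid_rate(x, spacings, phases)
    spikes = rng.random(n_in) < rates*dt    # Poisson input spikes
    v += dt*(-v/tau_m) + w @ spikes         # leaky integration
    if v >= v_th:                           # postsynaptic spike
        v = 0.0
        w += 0.05 * spikes                  # Hebbian: potentiate active inputs
        w *= 0.02 * n_in / w.sum()          # normalization fixes total drive

# Inputs whose grid peaks coincide at one location come to dominate,
# yielding a single-peaked, place-cell-like output field.
```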
Abstract:
The research literature on adolescent pregnancy indicates a relationship between early prenatal care and positive pregnancy outcomes, yet fewer than half of pregnant teenagers seek prenatal care in the first trimester of pregnancy. Although social support theory speculates that there should be a relationship between support and health outcomes, available studies do not reflect the processes by which pregnant adolescents use their social resources in making decisions about their pregnancies. This study describes the processes by which the adolescent comes to accept the reality of her pregnancy. Drawing from the social-psychological theories of illness behavior and symbolic interactionism, this study examines the symptom diagnosis and help-seeking behavior of the pregnant adolescent. This approach describes how the adolescent interprets events and draws conclusions based on her social reality. Interviews were conducted with ten young women, aged 15-17, who had recently delivered a first child. Onset of prenatal care ranged from the third month to the seventh month. None were married, and all but two lived with a parent. All but one were currently in school. Initial unstructured interviews explored the modes of expression of the young women regarding the event of pregnancy. Subsequent interviews elicited the processes of recognition and explanation of symptoms of pregnancy. Analysis revealed a consistent natural history in the subjects' experiences as they came to accept the reality of pregnancy. Symptom appraisal and definition involves noticing changes in themselves and evaluating and attempting to find suitable explanations for these symptoms. Lay consultation with friends and family helps in identifying the symptoms and yields suggestions for treatment. It is at this point that prenatal care is usually initiated. Finally, the young women describe the integration of pregnancy into their belief systems.
Abstract:
Purpose/objectives. A grounded theory design was used to identify, describe, and generate a theoretical analysis of the pain experience of elderly hospice patients with cancer. Sample. Eleven participants over the age of 65, receiving services from a for-profit hospice, were interviewed in their homes. Methods. Broad, unstructured, face-to-face, audio-taped interviews were transcribed verbatim and analyzed using the constant-comparative method of analysis. Findings. Pain was described as a hierarchy of chronic, acute, and psychological pain, with psychological pain the worst. Suffering was the basic social problem of pain. Participants dealt with suffering through the basic social process of enduring. Enduring had two sub-processes: maintaining hope and adjusting. Trusting in a higher being and finding meaning were mechanisms of maintaining hope. Mechanisms of adjusting were dealing with uncertainty, accepting, and minimizing pain. Implications for nursing practice. Nurses need to recognize and value the hard work of enduring to deal with suffering. Assisting elderly hospice patients with cancer to address the sub-processes of enduring and their mechanisms can foster enduring.
Abstract:
As an initial step in establishing mechanistic relationships between environmental variability and recruitment in Atlantic cod Gadus morhua along the coast of the western Gulf of Maine, we assessed transport success of larvae from major spawning grounds to nursery areas with particle tracking using the unstructured grid model FVCOM (finite volume coastal ocean model). In coastal areas, dispersal of early planktonic life stages of fish and invertebrate species is highly dependent on the regional dynamics and its variability, which has to be captured by our models. With state-of-the-art forcing for the year 1995, we evaluate the sensitivity of particle dispersal to the timing and location of spawning, the spatial and temporal resolution of the model, and the vertical mixing scheme. A 3 d frequency for the release of particles is necessary to capture the effect of circulation variability in an averaged dispersal pattern for the spawning season. The analysis of sensitivity to model setup showed that a higher-resolution mesh, tidal forcing, and current variability do not change the general pattern of connectivity, but do tend to increase within-site retention. Our results indicate strong downstream connectivity among spawning grounds and higher chances of successful transport from spawning areas closer to the coast. The model run for January egg release indicates 1 to 19% within-spawning-ground retention of initial particles, which may be sufficient to sustain local populations. A systematic sensitivity analysis still needs to be conducted to determine the minimum mesh and forcing resolution that adequately resolves the complex dynamics of the western Gulf of Maine. Other sources of variability, i.e. large-scale upstream forcing and the biological environment, also need to be considered in future studies of the interannual variability in transport and survival of the early life stages of cod.
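Offline particle tracking of this kind amounts to integrating particle positions through a velocity field. In the sketch below, a fourth-order Runge-Kutta step and an analytic gyre flow stand in for FVCOM velocity output, and the retention metric is likewise only illustrative.

```python
import numpy as np

def velocity(p, t):
    """Steady gyre-like stand-in flow on the unit square (t kept for
    generality; FVCOM output would be time-dependent)."""
    x, y = p[:, 0], p[:, 1]
    u = -np.pi * np.sin(np.pi*x) * np.cos(np.pi*y)
    v = np.pi * np.cos(np.pi*x) * np.sin(np.pi*y)
    return np.stack([u, v], axis=1)

def rk4_step(p, t, dt):
    """Classic 4th-order Runge-Kutta position update."""
    k1 = velocity(p, t)
    k2 = velocity(p + 0.5*dt*k1, t + 0.5*dt)
    k3 = velocity(p + 0.5*dt*k2, t + 0.5*dt)
    k4 = velocity(p + dt*k3, t + dt)
    return p + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# Advect one cohort of "eggs" released as a Gaussian patch; the study
# design would release a new cohort every 3 days.
rng = np.random.default_rng(1)
particles = 0.3 + 0.05*rng.standard_normal((500, 2))
t, dt = 0.0, 0.01
for _ in range(1000):
    particles = rk4_step(particles, t, dt)
    t += dt
retained = np.mean(np.linalg.norm(particles - 0.3, axis=1) < 0.1)
print(f"fraction retained near release site: {retained:.2%}")
```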
Abstract:
The stable oxygen isotope composition of atmospheric precipitation (δ18Op) was scrutinized from 39 stations distributed over Switzerland and its border zone. Monthly amount-weighted δ18Op values averaged over the 1995–2000 period showed the expected strong linear altitude dependence (−0.15 to −0.22‰ per 100 m) only during the summer season (May–September). Steeper gradients (~ −0.56 to −0.60‰ per 100 m) were observed for winter months over a low-elevation belt, while hardly any altitudinal difference was seen for high-elevation stations. This dichotomous pattern can be explained by the characteristically shallower vertical atmospheric mixing height during the winter season and provides empirical evidence for recently simulated effects of stratified atmospheric flow on orographic precipitation isotope ratios. It also helps explain the "anomalous" deflected altitudinal water isotope profiles reported from many other high-relief regions. Grids and isotope distribution maps of monthly δ18Op were calculated over the study region for 1995–1996. The adopted interpolation method took into account both the variable mixing heights and the seasonal difference in the isotopic lapse rate, and combined them with residual kriging. The presented data set allows point estimation of δ18Op with monthly resolution. According to test calculations executed on subsets, this two-year data set can be extended back to 1992 with maintained fidelity and, with a reduced station subset, even back to 1983 at the expense of reduced reliability of the derived δ18Op estimates, mainly in the eastern part of Switzerland. Before 1983, reliable results can only be expected for the Swiss Plateau, since important stations representing eastern and south-western Switzerland were not yet in operation.
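The interpolation strategy described, removing a season-dependent altitudinal lapse rate, interpolating the residuals spatially, and re-applying the trend at grid elevations, can be sketched as follows. Inverse-distance weighting stands in here for the residual kriging actually used, and the lapse rate and station values are hypothetical.

```python
import numpy as np

def interpolate_d18o(stations_xy, station_z, station_d18o, grid_xy, grid_z,
                     lapse_per_100m=-0.18):
    """Return δ18Op estimates (per mil) on grid points.

    lapse_per_100m would be chosen per month/season; -0.18 is a
    placeholder in the summer range quoted above.
    """
    # 1) Detrend observations with the altitudinal lapse rate.
    resid = station_d18o - lapse_per_100m * station_z / 100.0
    # 2) Inverse-distance-weighted interpolation of the residuals
    #    (a simple stand-in for residual kriging).
    d = np.linalg.norm(grid_xy[:, None, :] - stations_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6)**2
    resid_grid = (w * resid).sum(axis=1) / w.sum(axis=1)
    # 3) Re-apply the altitude trend at the grid elevations.
    return resid_grid + lapse_per_100m * grid_z / 100.0

# Hypothetical example: three stations, two grid points.
sxy = np.array([[0.0, 0.0], [50.0, 10.0], [20.0, 40.0]])   # km
sz = np.array([400.0, 1500.0, 800.0])                      # m a.s.l.
sd = np.array([-9.5, -11.6, -10.2])                        # per mil
gxy = np.array([[25.0, 20.0], [40.0, 5.0]])
gz = np.array([600.0, 2000.0])
print(interpolate_d18o(sxy, sz, sd, gxy, gz))
```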
Abstract:
Large amounts of animal health care data are present in veterinary electronic medical records (EMRs), and they present an opportunity for companion animal disease surveillance. Veterinary patient records are largely free-text without clinical coding or a fixed vocabulary. Text-mining, a computer and information technology application, is needed to identify cases of interest and to add structure to the otherwise unstructured data. In this study, EMRs were extracted from the veterinary management programs of 12 participating veterinary practices and stored in a data warehouse. Using commercially available text-mining software (WordStat™), we developed a categorization dictionary that could be used to automatically classify and extract enteric syndrome cases from the warehoused electronic medical records. The diagnostic accuracy of the text-miner for retrieving cases of enteric syndrome was measured against human reviewers who independently categorized a random sample of 2500 cases as enteric syndrome positive or negative. Compared to the reviewers, the text-miner retrieved cases with enteric signs with a sensitivity of 87.6% (95% CI, 80.4-92.9%) and a specificity of 99.3% (95% CI, 98.9-99.6%). Automatic and accurate detection of enteric syndrome cases provides an opportunity for community surveillance of enteric pathogens in companion animals.
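The pipeline, a categorization dictionary applied to free-text records and evaluated against human review via sensitivity and specificity, can be sketched with a simple keyword matcher. The term list and records below are illustrative and are not the WordStat dictionary or the study data.

```python
import re

# Toy categorization dictionary: records matching any term are flagged
# as enteric syndrome cases.
ENTERIC_TERMS = re.compile(r"\b(diarrho?ea|vomit\w*|enteritis|loose stool)\b",
                           re.IGNORECASE)

def is_enteric(record_text):
    return bool(ENTERIC_TERMS.search(record_text))

def sensitivity_specificity(predictions, truth):
    """Compare classifier output against human reviewer labels."""
    tp = sum(p and t for p, t in zip(predictions, truth))
    tn = sum(not p and not t for p, t in zip(predictions, truth))
    fp = sum(p and not t for p, t in zip(predictions, truth))
    fn = sum(not p and t for p, t in zip(predictions, truth))
    return tp / (tp + fn), tn / (tn + fp)

records = ["3d hx vomiting and diarrhea, dull",
           "annual vaccination, healthy",
           "loose stool after diet change",
           "laceration left forepaw, sutured"]
truth = [True, False, True, False]          # human reviewer labels
preds = [is_enteric(r) for r in records]
sens, spec = sensitivity_specificity(preds, truth)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```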
Abstract:
We demonstrate how redox control of intra-molecular quantum interference in phase-coherent molecular wires can be used to enhance the thermopower (Seebeck coefficient) S and thermoelectric figure of merit ZT of single molecules attached to nanogap electrodes. Using first-principles theory, we study the thermoelectric properties of a family of nine molecules, which consist of dithiol-terminated oligo(phenylene-ethynylenes) (OPEs) containing various central units. Uniquely, one molecule of this family possesses a conjugated acene-based central backbone attached via triple bonds to terminal sulfur atoms bound to gold electrodes and incorporates a fully conjugated hydroquinone central unit. We demonstrate that both S and the electronic contribution Z_el T to the figure of merit ZT can be dramatically enhanced by oxidizing the hydroquinone to yield a second molecule, which possesses a cross-conjugated anthraquinone central unit. This enhancement originates from the conversion of the π-conjugation in the former to cross-conjugation in the latter, which promotes the appearance of a sharp anti-resonance at the Fermi energy. Comparison with the thermoelectric properties of the remaining seven conjugated molecules demonstrates that such large values of S and Z_el T are unprecedented. We also evaluate the phonon contribution to the thermal conductance, which allows us to compute the full figure of merit ZT = Z_el T/(1 + κ_p/κ_el), where κ_p is the phonon contribution to the thermal conductance and κ_el is the electronic contribution. For unstructured gold electrodes, κ_p/κ_el ≫ 1 and therefore strategies to reduce κ_p are needed to realize the highest possible figure of merit.
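Assembling the full figure of merit from its electronic part follows directly from ZT = Z_el T/(1 + κ_p/κ_el). The sketch below shows the arithmetic with placeholder values chosen so that κ_p ≫ κ_el, the regime noted above for unstructured gold electrodes; none of the numbers are the computed values for the OPE family.

```python
# Full thermoelectric figure of merit from the quantities named above:
# Z_el*T = S^2 * G * T / kappa_el, then ZT = Z_el*T / (1 + kappa_p/kappa_el).
# All numerical values below are placeholders for illustration.

def full_ZT(S, G, kappa_el, kappa_p, T):
    """S: Seebeck (V/K), G: conductance (S), kappas: W/K, T: K."""
    Z_el_T = S**2 * G * T / kappa_el        # electronic figure of merit
    return Z_el_T / (1.0 + kappa_p / kappa_el)

# A sharp antiresonance can boost S, but when phonons dominate the
# thermal conductance (kappa_p >> kappa_el) the full ZT is degraded.
print(full_ZT(S=200e-6, G=1e-6, kappa_el=1e-10, kappa_p=2e-9, T=300))
```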