989 results for Triaxial projected shell model
Abstract:
Local mass transfer coefficients were determined using the electrochemical technique. In a simple model of a heat exchanger, segmental nickel tubes joined to PVC rods replaced the exchanger tubes. Measurements were made for no-leakage, semi-leakage and total leakage configurations. Baffle spacings of 47.6 mm, 66.6 mm, 97 mm and 149.2 mm were studied, as were the overall exchanger pressure drops for each configuration. Comparison of reported heat transfer data with this work showed good agreement at high flow rates for the no-leakage case, but the agreement became poor at lower flow rates and for the leakage configurations. This disagreement is explained by the non-analogous driving forces existing in the two systems. The no-leakage data showed lengthwise variation of transfer coefficients along the exchanger, the end compartments showing transfer coefficients lower by up to 26% than the internal compartments, depending on Reynolds number. With the introduction of leakage streams, however, this variation became smaller than the experimental accuracy. A model is outlined to show the characteristic behaviour of individual electrode segments within the compartment. It was able to discriminate between cross and window zones for the no-leakage case, but no such distinction could be made for the leakage case. A flow area was found which, when incorporated in the Reynolds number, enabled correlation of the baffle-cut and baffle-spacing parameters for the no-leakage case. This area is the free flow area determined at the baffle edge. Adding the leakage area to this flow area resulted in correlation of all commercial leakage geometrical parameters. The procedures used to correlate the pressure drop data from a total of eighteen different configurations on a single curve are also outlined.
Abstract:
A diffusion-controlled electrochemical mass transfer technique has been employed in making local measurements of shell-side coefficients in segmentally baffled shell and tube heat exchangers. Corresponding heat transfer data are predicted through the Chilton and Colburn heat and mass transfer analogy. Mass transfer coefficients were measured for baffle spacing lengths of individual tubes in an internal baffle compartment. Shell-side pressure measurements were also made. Baffle compartment average coefficients derived from individual tube coefficients are shown to be in good agreement with reported experimental bundle average heat transfer data for a heat exchanger model of similar geometry. Mass transfer coefficients of individual tubes compare favourably with those obtained previously by another mass transfer technique. Experimental data are reported for a variety of segmental baffle configurations over the shell-side Reynolds number range 100 to 42 000. Baffles with zero clearances were studied at three baffle cuts and two baffle spacings. Baffle geometry is shown to have a large effect on the distribution of tube coefficients within the baffle compartment. Fluid "jetting" is identified with some baffle configurations. No simple characteristic velocity is found to correlate zonal or baffle compartment average mass transfer data for the effect of both baffle cut and baffle spacing. Experiments with baffle clearances typical of commercial heat exchangers are also reported. The effect of leakage streams associated with these baffles is identified. Investigations were extended to double segmental baffles for which no data had previously been published. The similarity in the shell-side characteristics of this baffle arrangement and two parallel single segmental baffle arrangements is demonstrated. A general relationship between the shell-side mass transfer performance and pressure drop was indicated by the data for all the baffle configurations examined.
Abstract:
Local shell-side coefficient measurements in the end compartments of a model shell and tube heat exchanger have been made using an electrochemical technique. Limited data are also reported for the second compartment. The end compartment average coefficients have been found to be smaller than reported data for a corresponding internal compartment. The second compartment data have been shown to lie between those for the end compartments and the reported internal compartment data. Experimental data are reported for two port types and two baffle orientations, with data for the case of an inlet compartment impingement baffle also being given. Port type is shown to have only a small effect on compartment coefficients. Likewise, the outlet compartment average coefficients are slightly smaller than those for the inlet compartment, with the distribution of individual tube coefficients being similar. Baffle orientation has been shown to have no effect on average coefficients, but the distribution of the data is substantially affected. The use of an impingement baffle in the inlet compartment lessens the effect of baffle orientation on distribution. Recommendations are made for future work.
Abstract:
Recently within the machine learning and spatial statistics communities many papers have explored the potential of reduced rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced rank approximation which is chosen such that there is minimal information loss. In this paper a sequential framework for inference in such projected processes is presented, where the observations are considered one at a time. We introduce a C++ library for carrying out such projected, sequential estimation which adds several novel features. In particular we have incorporated the ability to use a generic observation operator, or sensor model, to permit data fusion. We can also cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the variogram parameters is based on maximum likelihood estimation. We illustrate the projected sequential method in application to synthetic and real data sets. We discuss the software implementation and suggest possible future extensions.
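The sequential projected-process estimation described above can be sketched as a Kalman-style rank-1 recursion on the weights of a reduced-rank GP, where each observation enters through a generic linear observation operator. This is a minimal illustration under assumed choices (RBF kernel, inducing-point layout, Gaussian noise), not the authors' C++ library:

```python
import numpy as np

def rbf(a, b, ell=0.25):
    """Squared-exponential covariance between two 1-D point sets (assumed kernel)."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def sequential_projected_gp(x_m, x_obs, y_obs, noise_var=0.1, jitter=1e-6):
    """One-at-a-time update of a reduced-rank (projected) GP.

    The latent function is approximated as f(x) = k_m(x)^T a with prior
    a ~ N(0, K_mm^{-1}); each observation triggers a rank-1 Gaussian
    update, so no batch matrix solve over all data is ever needed.
    """
    K_mm = rbf(x_m, x_m) + jitter * np.eye(len(x_m))
    mu = np.zeros(len(x_m))        # posterior mean of the weights a
    S = np.linalg.inv(K_mm)        # posterior covariance of a (prior: K_mm^{-1})
    for x, y in zip(x_obs, y_obs):
        h = rbf(np.array([x]), x_m)[0]   # observation-operator row: k_m(x)
        v = S @ h
        s2 = h @ v + noise_var           # predictive variance of this observation
        g = v / s2                       # gain
        mu = mu + g * (y - h @ mu)
        S = S - np.outer(g, v)
    return mu, S
```

Because the model is linear-Gaussian, the sequential recursion reproduces the batch posterior exactly; a non-Gaussian likelihood would replace the exact rank-1 update with an approximate (e.g. moment-matched) one.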
Abstract:
Large monitoring networks are becoming increasingly common and can generate large datasets, from thousands to millions of observations in size, often with high temporal resolution. Processing large datasets using traditional geostatistical methods is prohibitively slow, and in real-world applications different types of sensor can be found across a monitoring network. Heterogeneity in the error characteristics of different sensors, both in terms of distribution and magnitude, presents problems for generating coherent maps. An assumption in traditional geostatistics is that observations are made directly of the underlying process being studied and that the observations are contaminated with Gaussian errors. Under this assumption, sub-optimal predictions will be obtained if the error characteristics of the sensor are effectively non-Gaussian. One method, model-based geostatistics, assumes that a Gaussian process prior is imposed over the (latent) process being studied and that the sensor model forms part of the likelihood term. One problem with this type of approach is that the corresponding posterior distribution will be non-Gaussian and computationally demanding, as Monte Carlo methods have to be used. An extension of a sequential, approximate Bayesian inference method enables observations with arbitrary likelihoods to be treated in a projected process kriging framework, which is less computationally intensive. The approach is illustrated using a simulated dataset with a range of sensor models and error characteristics.
Abstract:
Accurate prediction of shellside pressure drop in a baffled shell-and-tube heat exchanger is very difficult because of the complicated shellside geometry. Ideally, all the shellside fluid should be alternately deflected across the tube bundle as it traverses from inlet to outlet. In practice, up to 60% of the shellside fluid may bypass the tube bundle or leak through the baffles. This short-circuiting of the main flow reduces the efficiency of the exchanger. Of the various shellside prediction methods, it is shown that only the multi-stream methods, which attempt to obtain the shellside flow distribution, predict the pressure drop with any degree of accuracy, the various predictions ranging from -30% to +70% and generally overpredicting. It is shown that the inaccuracies are mainly due to the manner in which baffle leakage is modelled. The present multi-stream methods do not allow for interactions of the various flowstreams, yet three main effects are identified: (a) there is a strong interaction between the main crossflow and the baffle leakage streams, enhancing the crossflow pressure drop; (b) there is a further short-circuit not considered previously, i.e. leakage in the window; and (c) the crossflow does not penetrate as far, on average, as previously supposed. Models are developed for each of these three effects, along with a new windowflow pressure drop model, and it is shown that the effect of baffle leakage in the window is the most significant. These models, developed to allow for the various interactions, lead to an improved multi-stream method, named the "STREAM-INTERACTION" method. The overall method is shown to be consistently more accurate than previous methods, with virtually all the available shellside data being predicted to within ±30% and over 60% to within ±20%. The method is thus strongly recommended for use as a design method.
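The flow-split idea underlying multi-stream methods can be illustrated with a minimal sketch: parallel streams obeying dp = K·q² must share a single pressure drop while their flows sum to the total. The resistance coefficients below are purely hypothetical, and the sketch deliberately omits the stream interactions that the STREAM-INTERACTION method adds:

```python
import math

def split_flow(total_flow, resistances):
    """Split a total volumetric flow between parallel shellside streams.

    Each stream i is modelled as dp = K_i * q_i**2; parallel streams share
    the same pressure drop, so q_i = sqrt(dp / K_i) and dp follows from
    the constraint sum(q_i) = total_flow.  K values are illustrative only.
    """
    inv_sqrt_k = [1.0 / math.sqrt(k) for k in resistances]
    dp = (total_flow / sum(inv_sqrt_k)) ** 2
    flows = [math.sqrt(dp / k) for k in resistances]
    return dp, flows

# Hypothetical streams: crossflow, bundle bypass, baffle-shell leakage,
# tube-baffle leakage (low K = low resistance = large flow fraction).
K = [4.0, 40.0, 100.0, 250.0]
dp, flows = split_flow(1.0, K)
fractions = [q / sum(flows) for q in flows]
```

The sketch already shows the qualitative point of the abstract: a substantial fraction of the total flow short-circuits through the low-resistance leakage and bypass paths rather than crossing the bundle.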
Abstract:
Heterogeneous datasets arise naturally in most applications due to the use of a variety of sensors and measuring platforms. Such datasets can be heterogeneous in terms of the error characteristics and sensor models. Treating such data is most naturally accomplished using a Bayesian or model-based geostatistical approach; however, such methods generally scale rather badly with the size of dataset, and require computationally expensive Monte Carlo based inference. Recently within the machine learning and spatial statistics communities many papers have explored the potential of reduced rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced rank approximation which is chosen such that there is minimal information loss. In this paper a sequential Bayesian framework for inference in such projected processes is presented. The observations are considered one at a time which avoids the need for high dimensional integrals typically required in a Bayesian approach. A C++ library, gptk, which is part of the INTAMAP web service, is introduced which implements projected, sequential estimation and adds several novel features. In particular the library includes the ability to use a generic observation operator, or sensor model, to permit data fusion. It is also possible to cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the covariance parameters is explored, including the impact of the projected process approximation on likelihood profiles. We illustrate the projected sequential method in application to synthetic and real datasets. Limitations and extensions are discussed.
Abstract:
Historic changes in water-use management in the Florida Everglades have caused the quantity of freshwater inflow to Florida Bay to decline by approximately 60% while altering its timing and spatial distribution. Two consequences have been (1) increased salinity throughout the bay, including occurrences of hypersalinity, coupled with a decrease in salinity variability, and (2) change in benthic habitat structure. Restoration goals have been proposed to return the salinity climates (salinity and its variability) of Florida Bay to more estuarine conditions through changes in upstream water management, thereby returning seagrass species cover to a more historic state. To assess the potential for meeting those goals, we used two modeling approaches and long-term monitoring data. First, we applied the hydrological mass balance model FATHOM to predict salinity climate changes in sub-basins throughout the bay in response to a broad range of freshwater inflow from the Everglades. Second, because seagrass species exhibit different sensitivities to salinity climates, we used the FATHOM-modeled salinity climates as input to a statistical discriminant function model that associates eight seagrass community types with water quality variables including salinity, salinity variability, total organic carbon, total phosphorus, nitrate, and ammonium, as well as sediment depth and light reaching the benthos. Salinity climates in the western sub-basins bordering the Gulf of Mexico were insensitive to even the largest (5-fold) modeled increases in freshwater inflow. However, the north, northeastern, and eastern sub-basins were highly sensitive to freshwater inflow and responded to comparatively small increases with decreased salinity and increased salinity variability. The discriminant function model predicted increased occurrences of Halodule wrightii communities and decreased occurrences of Thalassia testudinum communities in response to the more estuarine salinity climates. The shift in community composition represents a return to the historically observed state and suggests that restoration goals for Florida Bay can be achieved through restoration of freshwater inflow from the Everglades.
Abstract:
The exponential growth of studies on the biological response to ocean acidification over the last few decades has generated a large amount of data. To facilitate data comparison, a data compilation hosted at the data publisher PANGAEA was initiated in 2008 and is updated on a regular basis (doi:10.1594/PANGAEA.149999). By January 2015, a total of 581 data sets (over 4 000 000 data points) from 539 papers had been archived. Here we present the developments of this data compilation in the five years since its first description by Nisumaa et al. (2010). Most of the study sites from which data have been archived are still in the Northern Hemisphere, and the number of archived data sets from studies in the Southern Hemisphere and polar oceans remains relatively low. Data from 60 studies that investigated the response of a mix of organisms or natural communities were all added after 2010, indicating a welcome shift from the study of individual organisms to communities and ecosystems. The initial imbalance, with considerably more data archived on calcification and primary production than on other processes, has improved. There is also a clear tendency towards more data archived from multifactorial studies after 2010. For easier and more effective access to ocean acidification data, the ocean acidification community is strongly encouraged to contribute to the data archiving effort, to help develop standard vocabularies describing the variables, and to define best practices for archiving ocean acidification data.
Abstract:
Autism spectrum disorder (ASD) is a complex heterogeneous neurodevelopmental disorder characterized by alterations in social functioning, communicative abilities, and engagement in repetitive or restrictive behaviors. The process of aging in individuals with autism and related neurodevelopmental disorders is not well understood, despite the fact that the number of individuals with ASD aged 65 and older is projected to increase by over half a million individuals in the next 20 years. To elucidate the effects of aging in the context of a modified central nervous system, we investigated the effects of age on the BTBR T + tf/j mouse, a well-characterized and widely used mouse model that displays an ASD-like phenotype. We found that a reduction in social behavior persists into old age in male BTBR T + tf/j mice. We employed quantitative proteomics to discover potential alterations in signaling systems that could regulate aging in the BTBR mice. Unbiased proteomic analysis of hippocampal and cortical tissue of BTBR mice compared to age-matched wild-type controls revealed a significant decrease in brain derived neurotrophic factor and significant increases in multiple synaptic markers (spinophilin, Synapsin I, PSD 95, NeuN), as well as distinct changes in functional pathways related to these proteins, including "Neural synaptic plasticity regulation" and "Neurotransmitter secretion regulation." Taken together, these results contribute to our understanding of the effects of aging on an ASD-like mouse model with regard to both behavior and protein alterations, though additional studies are needed to fully understand the complex interplay underlying aging in mouse models displaying an ASD-like phenotype.
Abstract:
The primary objective is to investigate the main factors contributing to GMS expenditure on pharmaceutical prescribing and to project this expenditure to 2026. This study is located in the pharmacoeconomic cost containment and projections literature. The thesis has five main aims: 1. To determine the main factors contributing to GMS expenditure on pharmaceutical prescribing. 2. To develop a model to project GMS prescribing expenditure in five-year intervals to 2026, using 2006 Central Statistics Office (CSO) Census data and 2007 Health Service Executive-Primary Care Reimbursement Service (HSE-PCRS) sample data. 3. To develop a model to project GMS prescribing expenditure in five-year intervals to 2026, using 2012 HSE-PCRS population data, incorporating cost containment measures, and 2011 CSO Census data. 4. To investigate the impact of demographic factors and the pharmacology of drugs (Anatomical Therapeutic Chemical (ATC) classification) on GMS expenditure. 5. To explore the consequences of GMS policy changes on prescribing expenditure and behaviour between 2008 and 2014. The thesis is centred on three published articles and spans the end of a booming Irish economy in 2007, a recession from 2008 to 2013, and the beginning of a recovery in 2014. The literature identified a number of factors influencing pharmaceutical expenditure, including population growth, population ageing, changes in drug utilisation and drug therapies, age, gender and location. The literature also identified the methods previously used in predictive modelling, and consequently a Monte Carlo Simulation (MCS) model was used to simulate projected expenditure to 2026. Likewise, the literature guided the use of Ordinary Least Squares (OLS) regression in determining the demographic and pharmacology factors influencing prescribing expenditure. The study commences against a backdrop of growing GMS prescribing costs, which rose from €250 million in 1998 to over €1 billion by 2007.
Using sample 2007 HSE-PCRS prescribing data (n=192,000) and CSO population data from 2008, Conway et al. (2014) estimated that GMS prescribing expenditure could rise to €2 billion by 2026. The cogency of these findings was affected by the global economic crisis of 2008, which resulted in a sharp contraction of the Irish economy and mounting fiscal deficits, culminating in Ireland's entry into a bailout programme. The sustainability of funding community drug schemes, such as the GMS, came under the spotlight of the EU, IMF and ECB (Troika), who set stringent targets for reducing drug costs as conditions of the bailout programme. Cost containment measures included the introduction of income eligibility limits for GP visit cards and medical cards for those aged 70 and over, the introduction of co-payments for prescription items, and reductions in wholesale mark-up and pharmacy dispensing fees. Projections for GMS expenditure were re-evaluated using 2012 HSE-PCRS prescribing population data and CSO population data based on Census 2011. Taking into account both the cost containment measures and the revised population predictions, GMS expenditure is estimated to increase by 64%, from €1.1 billion in 2016 to €1.8 billion by 2026 (Conway-Lenihan and Woods, 2015). In the final paper, a cross-sectional study was carried out on the HSE-PCRS population prescribing database (n=1.63 million claimants) to investigate the impact of demographic factors, and the pharmacology of the drugs, on GMS prescribing expenditure. Those aged over 75 (β = 1.195) and cardiovascular prescribing (β = 1.193) were the greatest contributors to annual GMS prescribing costs. Respiratory drugs (Montelukast) recorded the highest proportion and expenditure for GMS claimants under the age of 15. Drugs prescribed for the nervous system (Escitalopram, Olanzapine and Pregabalin) were highest for those between 16 and 64 years, while cardiovascular drugs (statins) were highest for those aged over 65.
Female claimants are more expensive than males and are prescribed more items across the four ATC groups, except among children under 11 (Conway-Lenihan et al., 2016). This research indicates that growth in the proportion of elderly claimants, and the associated levels of cardiovascular prescribing, particularly for statins, will present difficulties for Ireland in terms of cost containment. Whilst policies aimed at cost containment (co-payment charges, generic substitution, reference pricing, adjustments to GMS eligibility) can be used to curtail expenditure, health promotion programmes and educational interventions should be given equal emphasis. Policies intended to affect physicians' prescribing behaviour, including guidelines, information (about prices and less expensive alternatives) and feedback, together with the use of budgetary restrictions, could also yield savings.
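The Monte Carlo projection approach can be sketched as follows; the age bands, claimant numbers and per-claimant cost distributions below are hypothetical stand-ins for the HSE-PCRS and CSO inputs used in the thesis, not its actual data:

```python
import random
import statistics

def project_expenditure(pop_by_age, cost_mean, cost_sd, n_sims=2000, seed=42):
    """Monte Carlo projection of total prescribing expenditure.

    For each simulation, a per-claimant annual cost is drawn for every
    age band (here from a normal distribution, an illustrative choice)
    and multiplied by the projected claimant numbers.  Returns the mean
    and an empirical 95% interval across simulations.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        total = sum(pop * max(rng.gauss(cost_mean[a], cost_sd[a]), 0.0)
                    for a, pop in pop_by_age.items())
        totals.append(total)
    totals.sort()
    lo, hi = totals[int(0.025 * n_sims)], totals[int(0.975 * n_sims)]
    return statistics.mean(totals), (lo, hi)

# Hypothetical 2026 claimant numbers and per-claimant annual costs (euro)
pop = {"0-15": 400_000, "16-64": 800_000, "65-74": 250_000, "75+": 200_000}
mean_cost = {"0-15": 150.0, "16-64": 450.0, "65-74": 1200.0, "75+": 1900.0}
sd_cost = {"0-15": 30.0, "16-64": 90.0, "65-74": 240.0, "75+": 380.0}
mean, (low, high) = project_expenditure(pop, mean_cost, sd_cost)
```

Scenario analysis (e.g. cost containment) amounts to rerunning the simulation with adjusted cost distributions or eligibility-driven claimant numbers.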
Abstract:
We introduce a hybrid method for dielectric-metal composites that describes the dynamics of the metallic system classically whilst retaining a quantum description of the dielectric. The time-dependent dipole moment of the classical system is mimicked by the introduction of projected equations of motion (PEOM) and the coupling between the two systems is achieved through an effective dipole-dipole interaction. To benchmark this method, we model a test system (semiconducting quantum dot-metal nanoparticle hybrid). We begin by examining the energy absorption rate, showing agreement between the PEOM method and the analytical rotating wave approximation (RWA) solution. We then investigate population inversion and show that the PEOM method provides an accurate model for the interaction under ultrashort pulse excitation where the traditional RWA breaks down.
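The RWA benchmark referred to above is the textbook resonant two-level system; a minimal numerical check (this is the standard analytic reference case, not the PEOM method itself) reproduces the Rabi inversion, which for resonant driving is -cos(Ωt):

```python
import numpy as np

def rwa_inversion(rabi_freq, t_final, n_steps=2000):
    """Resonant two-level system in the rotating-wave approximation.

    Integrates the RWA amplitude equations in the rotating frame,
        dc1/dt = 1j*(Omega/2)*c2,   dc2/dt = 1j*(Omega/2)*c1,
    with a fixed-step RK4 and returns the population inversion
    |c2|^2 - |c1|^2, which analytically equals -cos(Omega * t).
    """
    dt = t_final / n_steps
    c = np.array([1.0 + 0j, 0.0 + 0j])   # start in the ground state

    def f(c):
        return 1j * (rabi_freq / 2) * np.array([c[1], c[0]])

    for _ in range(n_steps):
        k1 = f(c)
        k2 = f(c + 0.5 * dt * k1)
        k3 = f(c + 0.5 * dt * k2)
        k4 = f(c + dt * k3)
        c = c + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return abs(c[1]) ** 2 - abs(c[0]) ** 2
```

Complete population inversion occurs at Ωt = π; ultrashort, intense pulses violate the assumptions behind this rotating-frame picture, which is the regime where the abstract reports the RWA breaking down.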
Abstract:
We present the first 3D simulation of the last minutes of oxygen shell burning in an 18 solar mass supernova progenitor up to the onset of core collapse. A moving inner boundary is used to accurately model the contraction of the silicon and iron core according to a 1D stellar evolution model with a self-consistent treatment of core deleptonization and nuclear quasi-equilibrium. The simulation covers the full solid angle to allow the emergence of large-scale convective modes. Due to core contraction and the concomitant acceleration of nuclear burning, the convective Mach number increases to ~0.1 at collapse, and an l=2 mode emerges shortly before the end of the simulation. Aside from a growth of the oxygen shell from 0.51 to 0.56 solar masses due to entrainment from the carbon shell, the convective flow is reasonably well described by mixing length theory, and the dominant scales are compatible with estimates from linear stability analysis. We deduce that artificial changes in the physics, such as accelerated core contraction, can have precarious consequences for the state of convection at collapse. We argue that scaling laws for the convective velocities and eddy sizes furnish good estimates for the state of shell convection at collapse and develop a simple analytic theory for the impact of convective seed perturbations on shock revival in the ensuing supernova. We predict a reduction of the critical luminosity for explosion by 12–24% due to seed asphericities for our 3D progenitor model relative to the case without large seed perturbations.
Abstract:
This document briefly summarizes the pavement management activities under the existing Iowa Department of Transportation (DOT) Pavement Management System. The second part of the document provides the projected increase in use due to the implementation of the Iowa DOT Pavement Management Optimization System. All estimates of existing time devoted to the Pavement Management System, and of projected increases in time requirements, were made by the appropriate Iowa DOT office director or function manager. Included is the new Pavement Management Optimization Structure for the three main offices which will work most closely with the Pavement Management Optimization System (Materials, Design, and Program Management).
Abstract:
Behavior of granular material subjected to repeated load triaxial compression tests is characterized by a model based on rate process theory. Starting with the Arrhenius equation from chemical kinetics, the relationship of temperature, shear stress, normal stress and volume change to deformation rate is developed. The proposed model equation includes these factors as a product of exponential terms. An empirical relationship between deformation and the cube root of the number of stress applications at constant temperature and normal stress is combined with the rate equation to yield an integrated relationship of temperature, deviator stress, confining pressure and number of deviator stress applications to axial strain. The experimental program consists of 64 repeated load triaxial compression tests, 52 on untreated crushed stone and 12 on the same crushed stone material treated with 4% asphalt cement. Results were analyzed with multiple linear regression techniques and show substantial agreement with the model equations. Experimental results fit the rate equation somewhat better than the integrated equation when all variable quantities are considered. The coefficient of shear temperature gives the activation enthalpy, which is about 4.7 kilocalories/mole for untreated material and 39.4 kilocalories/mole for asphalt-treated material. This indicates the activation enthalpy is about that of the pore fluid. The proportionality coefficient of deviator stress may be used to measure flow unit volume. The volumes thus determined for untreated and asphalt-treated material are not substantially different. This may be coincidental since comparison with flow unit volumes reported by others indicates flow unit volume is related to gradation of untreated material. The flow unit volume of asphalt-treated material may relate to asphalt cement content. 
The proposed model equations provide a more rational basis for further studies of factors affecting deformation of granular materials under stress similar to that in pavement subjected to transient traffic loads.
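The Arrhenius-plot construction used above to extract the activation enthalpy can be sketched as follows: the least-squares slope of ln(rate) against 1/T equals -ΔH/R. The deformation rates below are synthetic, generated from an assumed ΔH, not the measured triaxial data:

```python
import math

R = 1.98720e-3  # gas constant, kcal/(mol*K)

def fit_activation_enthalpy(temps_K, rates):
    """Recover the activation enthalpy from an Arrhenius plot.

    Fits ln(rate) = ln(A) - dH/(R*T) by least squares on 1/T and
    returns dH in kcal/mol from the slope (-dH/R).
    """
    x = [1.0 / T for T in temps_K]
    y = [math.log(r) for r in rates]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -slope * R

# Synthetic rates generated with dH = 4.7 kcal/mol (the untreated-material value)
dH_true, A = 4.7, 1.0e3
temps = [273.0, 283.0, 293.0, 303.0]
rates = [A * math.exp(-dH_true / (R * T)) for T in temps]
dH_fit = fit_activation_enthalpy(temps, rates)
```

With real repeated-load data the same regression would be applied at fixed deviator stress, confining pressure and number of load applications, so that temperature is the only varying factor.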