982 results for Dirichlet heat kernel estimates
Abstract:
The effects of ethanol fumigation on the inter-cycle variability of key in-cylinder pressure parameters in a modern common rail diesel engine have been investigated. Specifically, the maximum rate of pressure rise, peak pressure, peak pressure timing and ignition delay were examined. A new methodology for identifying the start of combustion was also proposed and demonstrated; it is particularly useful with noisy in-cylinder pressure data, since noise can significantly affect the calculation of an accurate net rate of heat release indicator diagram. Inter-cycle variability has traditionally been investigated using the coefficient of variation. However, deeper insight into engine operation is gained by presenting the results as kernel density estimates, thereby allowing investigation of otherwise unnoticed phenomena, including multi-modal and skewed behaviour. This study found that operation of a common rail diesel engine with high ethanol substitutions (>20% at full load, >30% at three-quarter load) results in a significant reduction in ignition delay. Further, this study concluded that if the engine is operated with absolute air-fuel ratios (mole basis) less than 80, the inter-cycle variability is substantially increased compared with normal operation.
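By way of illustration, the following minimal Python sketch (using hypothetical per-cycle pressure data, not the study's measurements) contrasts the single-number coefficient of variation with a kernel density estimate, which can expose multi-modal behaviour that the CoV hides:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical per-cycle peak pressures (bar) over a few hundred cycles.
rng = np.random.default_rng(0)
peak_pressure = np.concatenate([
    rng.normal(85.0, 1.0, 300),  # main mode
    rng.normal(89.0, 0.8, 60),   # secondary mode the CoV alone would hide
])

# Coefficient of variation: one summary number for the whole distribution.
cov = peak_pressure.std(ddof=1) / peak_pressure.mean()
print(f"CoV = {cov:.4f}")

# Kernel density estimate: reveals multi-modal or skewed behaviour.
kde = gaussian_kde(peak_pressure)
grid = np.linspace(peak_pressure.min(), peak_pressure.max(), 200)
density = kde(grid)
print("dominant mode near:", grid[np.argmax(density)], "bar")
```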
Abstract:
With the advent of alternative fuels, such as biodiesels and related blends, it is important to develop an understanding of their effects on inter-cycle variability which, in turn, influences engine performance as well as its emissions. Using four methanol trans-esterified biomass fuels of differing carbon chain length and degree of unsaturation, this paper provides insight into the effect that alternative fuels have on inter-cycle variability. The experiments were conducted with a heavy-duty, turbocharged, common-rail Cummins compression ignition engine. Combustion performance is reported in terms of the following key in-cylinder parameters: indicated mean effective pressure (IMEP), net heat release rate (NHRR), standard deviation (StDev), coefficient of variation (CoV), peak pressure, peak pressure timing and maximum rate of pressure rise. A link is also established between the cyclic variability and the oxygen ratio, which is a good indicator of stoichiometry. The results show that the fatty acid structures did not have a significant effect on injection timing, injection duration, injection pressure, StDev of IMEP, or the timing of peak motoring and combustion pressures. However, a significant effect was noted on the premixed and diffusion combustion proportions, combustion peak pressure and maximum rate of pressure rise. Additionally, the boost pressure, IMEP and combustion peak pressure were found to be directly correlated to the oxygen ratio. The emission of particles positively correlates with oxygen content in the fuel as well as in the air-fuel mixture, resulting in a higher total number of particles per unit mass.
The health effects of temperature: current estimates, future projections, and adaptation strategies
Abstract:
Climate change is expected to be one of the biggest global health threats in the 21st century. In response to changes in climate and associated extreme events, public health adaptation has become imperative. This thesis examined several key issues in this emerging research field. The thesis aimed to identify climate-health (particularly temperature-health) relationships, then develop quantitative models that can be used to project future health impacts of climate change, and thereby help formulate adaptation strategies for dealing with climate-related health risks and reducing vulnerability. The research questions addressed by this thesis were: (1) What are the barriers to public health adaptation to climate change? What are the research priorities in this emerging field? (2) What models and frameworks can be used to project future temperature-related mortality under different climate change scenarios? (3) What is the actual burden of temperature-related mortality? What are the impacts of climate change on the future burden of disease? and (4) Can we develop public health adaptation strategies to manage the health effects of temperature in response to climate change? Using a literature review, I discussed how public health organisations should implement and manage the process of planned adaptation. This review showed that public health adaptation can operate at two levels: building adaptive capacity and implementing adaptation actions. However, there are constraints and barriers to adaptation arising from uncertainty, cost, technological limits, institutional arrangements, deficits of social capital, and individual perception of risks. The opportunities for planning and implementing public health adaptation rely on effective strategies to overcome likely barriers. I proposed that high priority should be given to multidisciplinary research on the assessment of potential health effects of climate change, projections of future health impacts under different climate and socio-economic scenarios, identification of health co-benefits of climate change policies, and evaluation of cost-effective public health adaptation options. Heat-related mortality is the most direct and highly significant potential climate change impact on human health. I thus conducted a systematic review of research and methods for projecting future heat-related mortality under different climate change scenarios. The review showed that climate change is likely to result in a substantial increase in heat-related mortality. Projecting heat-related mortality requires understanding of historical temperature-mortality relationships, and consideration of future changes in climate, population and acclimatisation. Further research is needed to provide a stronger theoretical framework for mortality projections, including a better understanding of socioeconomic development, adaptation strategies, land-use patterns, air pollution and mortality displacement. Most previous studies were designed to examine temperature-related excess deaths or mortality risks. However, if most temperature-related deaths occur in the very elderly, who have only a short life expectancy, then the burden of temperature on mortality would have less public health importance. To guide policy decisions and resource allocation, it is desirable to know the actual burden of temperature-related mortality. To achieve this, I used years of life lost to provide a new measure of the health effects of temperature.
I conducted a time-series analysis to estimate years of life lost associated with changes in season and temperature in Brisbane, Australia. I also projected the future temperature-related years of life lost attributable to climate change. This study showed that the association between temperature and years of life lost was U-shaped, with increased years of life lost on cold and hot days. The temperature-related years of life lost will worsen greatly if future climate change goes beyond a 2 °C increase and there is no adaptation to higher temperatures. The excess mortality during prolonged extreme temperatures is often greater than that predicted using a smoothed temperature-mortality association, because sustained periods of extreme temperatures produce an extra effect beyond that predicted by daily temperatures. To better estimate the burden of extreme temperatures, I estimated their effects on years of life lost due to cardiovascular disease using data from Brisbane, Australia. The results showed that the association between daily mean temperature and years of life lost due to cardiovascular disease was U-shaped, with the lowest years of life lost at 24 °C (the 75th percentile of daily mean temperature in Brisbane), rising progressively as temperatures become hotter or colder. There were significant added effects of heat waves, but no added effects of cold spells. Finally, public health adaptation to hot weather is necessary and pressing. I discussed how to manage the health effects of temperature, especially in the context of climate change. Strategies to minimise the health effects of high temperatures and climate change fall into two categories: reducing heat exposure and managing the health effects of high temperatures. However, policy decisions need information on specific adaptations, together with their expected costs and benefits; therefore, more research is needed to evaluate cost-effective adaptation options. In summary, this thesis adds to the large body of literature on the impacts of temperature and climate change on human health. It improves our understanding of the temperature-health relationship, and of how this relationship will change as temperatures increase. Although the research is limited to one city, which restricts the generalisability of the findings, the methods and approaches developed in this thesis will be useful to other researchers studying temperature-health relationships and climate change impacts. The results may be helpful for decision-makers who develop public health adaptation strategies to minimise the health effects of extreme temperatures and climate change.
Abstract:
A critical requirement for safe autonomous navigation of a planetary rover is the ability to accurately estimate the traversability of the terrain. This work considers the problem of predicting the attitude and configuration angles of the platform from terrain representations that are often incomplete due to occlusions and sensor limitations. Using Gaussian Processes (GP) and exteroceptive data as training input, we can provide a continuous and complete representation of terrain traversability, with uncertainty in the output estimates. In this paper, we propose a novel method that focuses on exploiting the explicit correlation in vehicle attitude and configuration during operation by learning a kernel function from vehicle experience to perform GP regression. We provide an extensive experimental validation of the proposed method on a planetary rover. We show significant improvement in the accuracy of our estimation compared with results obtained using standard kernels (Squared Exponential and Neural Network), and compared to traversability estimation made over terrain models built using state-of-the-art GP techniques.
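As context for the comparison, here is a minimal GP-regression sketch in Python using scikit-learn's squared exponential (RBF) kernel, i.e. one of the standard baseline kernels named above; the terrain features and attitude targets are hypothetical stand-ins, and the paper's experience-learned kernel is not reproduced:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training set: local terrain descriptors -> platform pitch (rad).
rng = np.random.default_rng(1)
X_train = rng.uniform(-1.0, 1.0, size=(200, 3))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=200)

# Squared Exponential (RBF) baseline kernel plus observation noise.
kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# Predictions carry uncertainty, the property exploited for traversability
# estimation over incomplete terrain representations.
X_query = rng.uniform(-1.0, 1.0, size=(10, 3))
mean, std = gp.predict(X_query, return_std=True)
```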
Abstract:
Error estimates for the error reproducing kernel method (ERKM) are provided. The ERKM is a mesh-free functional approximation scheme [A. Shaw, D. Roy, A NURBS-based error reproducing kernel method with applications in solid mechanics, Computational Mechanics (2006), to appear (available online)], wherein a targeted function and its derivatives are first approximated via non-uniform rational B-spline (NURBS) basis functions. Errors in the NURBS approximation are then reproduced via a family of non-NURBS basis functions, constructed using a polynomial reproduction condition, and added to the NURBS approximation of the function obtained in the first step. In addition to the derivation of error estimates, convergence studies are undertaken for a couple of test boundary value problems with known exact solutions. The ERKM is next applied to a one-dimensional Burgers equation where time evolution leads to a breakdown of the continuous solution and the appearance of a shock. Many available mesh-free schemes appear to be unable to capture this shock without numerical instability. However, given that any desired order of continuity is achievable through NURBS approximations, the ERKM can accurately approximate even functions with discontinuous derivatives. Moreover, due to the variation diminishing property of NURBS, it has advantages in representing sharp changes in gradients. This paper focuses on demonstrating this ability of the ERKM via some numerical examples. Comparisons of some of the results with those obtained via the standard form of the reproducing kernel particle method (RKPM) demonstrate the relative numerical advantages and accuracy of the ERKM.
Abstract:
We study diagonal estimates for the Bergman kernels of certain model domains in C^2 near boundary points that are of infinite type. To do so, we impose a mild structural condition on the defining functions of interest that facilitates optimal upper and lower bounds; unlike earlier studies of this sort, this allows us to make estimates for non-convex pseudoconvex domains as well. The condition quantifies, in some sense, how flat a domain is at an infinite-type boundary point. In this scheme of quantification, the model domains considered below range, roughly speaking, from being "mildly infinite type" to being very flat at the infinite-type points.
Abstract:
Buoy and satellite data show pronounced subseasonal oscillations of sea surface temperature (SST) in the summertime Bay of Bengal. The SST oscillations are forced mainly by surface heat flux associated with the active-break cycle of the south Asian summer monsoon. The input of freshwater (FW) from summer rain and rivers to the bay is large, but not much is known about subseasonal salinity variability. We use 2002-2007 observations from three Argo floats with a 5-day repeat cycle to study the subseasonal response of temperature and salinity to surface heat and freshwater flux in the central Bay of Bengal. About 95% of the Argo profiles show a shallow halocline, with substantial variability of mixed layer salinity. Estimates of surface heat and freshwater flux are based on daily satellite data sampled along the float trajectories. We find that intraseasonal variability of mixed layer temperature is mainly a response to net surface heat flux minus penetrative radiation during the summer monsoon season. In winter and spring, however, temperature variability appears to be mainly due to lateral advection rather than local heat flux. Variability of mixed layer freshwater content is generally independent of local surface flux (precipitation minus evaporation) in all seasons. There are occasions when intense monsoon rainfall leads to local freshening, but these are rare. Large fluctuations in FW appear to be due to advection, suggesting that freshwater from rivers and rain moves in eddies or filaments.
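For reference, the statement that mixed layer temperature responds to "net surface heat flux minus penetrative radiation" corresponds to the standard mixed-layer heat budget (the textbook form, not a formula quoted from the paper):

```latex
% Mixed-layer temperature tendency, lateral advection neglected:
%   \rho    seawater density,   c_p      specific heat of seawater,
%   h       mixed-layer depth,  Q_{net}  net surface heat flux,
%   Q_{pen} shortwave radiation penetrating below depth h.
\frac{\partial T}{\partial t} \;\approx\; \frac{Q_{\mathrm{net}} - Q_{\mathrm{pen}}}{\rho\, c_p\, h}
```

The finding above is that this balance dominates during the summer monsoon, while in winter and spring the neglected advection term takes over.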
Abstract:
This thesis presents an experimental investigation of the axisymmetric heat transfer from a small-scale fire and the resulting buoyant plume to a horizontal, unobstructed ceiling during the initial stages of development. A propane-air burner yielding a heat source strength between 1.0 kW and 1.6 kW was used to simulate the fire, and measurements confirmed that this heat source satisfactorily represented a source of buoyancy only. The ceiling consisted of a 1/16 in. steel plate, 0.91 m in diameter, insulated on the upper side. The ceiling height was adjustable between 0.5 m and 0.91 m. Temperature measurements were carried out in the plume, in the ceiling jet, and on the ceiling.
Heat transfer data were obtained by using the transient method and applying corrections for the radial conduction along the ceiling and losses through the insulation material. The ceiling heat transfer coefficient was based on the adiabatic ceiling jet temperature (recovery temperature) reached after a long time. A parameter involving the source strength Q and ceiling height H was found to correlate measurements of this temperature and its radial variation. A similar parameter for estimating the ceiling heat transfer coefficient was confirmed by the experimental results.
This investigation therefore provides reasonable estimates for the heat transfer from a buoyant gas plume to a ceiling in the axisymmetric case, for the stagnation region where such heat transfer is a maximum and for the ceiling jet region (r/H ≤ 0.7). A comparison with data from experiments which involved larger heat sources indicates that the predicted scaling of temperatures and heat transfer rates for larger scale fires is adequate.
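The correlating parameter involving the source strength Q and ceiling height H is consistent with classical point-source plume theory; the standard dimensional scaling (stated here from plume theory in general, not as the thesis's specific correlation) is:

```latex
% Dimensionless source strength for a point heat source below a ceiling:
%   Q source strength, H ceiling height above the source,
%   \rho_\infty, c_p, T_\infty ambient density, specific heat, temperature.
Q^{*} = \frac{Q}{\rho_\infty\, c_p\, T_\infty\, \sqrt{g}\, H^{5/2}},
\qquad
\Delta T \;\propto\; T_\infty\, \bigl(Q^{*}\bigr)^{2/3} \;\propto\; Q^{2/3}\, H^{-5/3}
```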
Abstract:
The Supreme Court’s decision in Shelby County has severely limited the power of the Voting Rights Act. I argue that Congressional attempts to pass a new coverage formula are unlikely to gain the necessary Republican support. Instead, I propose a new strategy that takes a “carrot and stick” approach. As the stick, I suggest amending Section 3 to eliminate the need to prove that discrimination was intentional. For the carrot, I envision a competitive grant program similar to the highly successful Race to the Top education grants. I argue that this plan could pass the currently divided Congress.
Without Congressional action, Section 2 is more important than ever before. A successful Section 2 suit requires evidence that voting in the jurisdiction is racially polarized. Accurately and objectively assessing the level of polarization has been and continues to be a challenge for experts. Existing ecological inference methods require estimating polarization levels in individual elections. This is a problem because the Courts want to see a history of polarization across elections.
I propose a new 2-step method to estimate racially polarized voting in a multi-election context. The procedure builds upon the Rosen, Jiang, King, and Tanner (2001) multinomial-Dirichlet model. After obtaining election-specific estimates, I suggest regressing those results on election-specific variables, namely candidate quality, incumbency, and ethnicity of the minority candidate of choice. This allows researchers to estimate the baseline level of support for candidates of choice and test whether the ethnicity of the candidates affected how voters cast their ballots.
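A minimal sketch of the proposed second stage, assuming the first stage has already produced election-specific support estimates (all variable names and values here are hypothetical, and the first-stage multinomial-Dirichlet fit is not reproduced):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical first-stage output: estimated minority-voter support for the
# minority-preferred candidate in each of 20 elections.
rng = np.random.default_rng(2)
n_elections = 20
candidate_quality = rng.uniform(0, 1, n_elections)
incumbent = rng.integers(0, 2, n_elections)
coethnic = rng.integers(0, 2, n_elections)  # candidate of choice is co-ethnic
support = (0.55 + 0.10 * candidate_quality + 0.05 * incumbent
           + 0.15 * coethnic + rng.normal(0, 0.03, n_elections))

# Second stage: regress the election-specific estimates on covariates.
X = sm.add_constant(np.column_stack([candidate_quality, incumbent, coethnic]))
fit = sm.OLS(support, X).fit()
print(fit.summary())  # intercept ~ baseline support; the coethnic coefficient
                      # tests whether candidate ethnicity shifts vote choice
```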
Abstract:
The experimental portion of this thesis estimates the power spectral density of very low frequency semiconductor noise, from 10^-6.3 cps to 1 cps, with greater accuracy than that achieved in previous similar attempts. It is concluded that the spectrum is 1/f^α with α approximately 1.3 over most of the frequency range, but with α appearing to be about 1 in the lowest decade. The noise sources are, among others, the first-stage circuits of a grounded-input silicon epitaxial operational amplifier. This thesis also investigates a peculiar form of stationarity which seems to distinguish flicker noise from other semiconductor noise.
In order to decrease by an order of magnitude the pernicious effects of temperature drifts, semiconductor "aging", and possible mechanical failures associated with prolonged periods of data taking, 10 independent noise sources were time-multiplexed and their spectral estimates were subsequently averaged. If the sources have similar spectra, it is demonstrated that this reduces the necessary data-taking time by a factor of 10 for a given accuracy.
In view of the measured high temperature sensitivity of the noise sources, it was necessary to combine the passive attenuation of a special-material container with active control. The noise sources were placed in a copper-epoxy container of high heat capacity and medium heat conductivity, and that container was immersed in a temperature controlled circulating ethylene-glycol bath.
Other spectra of interest, estimated from data taken concurrently with the semiconductor noise data, were the spectra of the bath's controlled temperature, the semiconductor surface temperature, and the power supply voltage amplitude fluctuations. A brief description of the equipment constructed to obtain the aforementioned data is included.
The analytical portion of this work is concerned with the following questions: What is the best final spectral density estimate given 10 statistically independent ones of varying quality and magnitude? How can the Blackman-Tukey algorithm, which is used for spectral estimation in this work, be improved upon? How can non-equidistant sampling reduce data processing cost? Should one try to remove common trends shared by supposedly statistically independent noise sources and, if so, what are the mathematical difficulties involved? What is a physically plausible mathematical model that can account for flicker noise, and what are the mathematical implications for its statistical properties? Finally, the variance of the spectral estimate obtained through the Blackman-Tukey algorithm is analyzed in greater detail; the variance is shown to diverge for α ≥ 1 in an assumed power spectrum of k/|f|^α, unless the assumed spectrum is "truncated".
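As background on the estimator discussed throughout, here is a minimal generic Blackman-Tukey routine in Python (a textbook sketch under stated assumptions, not the thesis's implementation), together with the averaging over independent sources described above:

```python
import numpy as np

def blackman_tukey_psd(x, max_lag, fs=1.0):
    """Blackman-Tukey estimate: window the sample autocovariance, then
    Fourier transform it to obtain a smoothed power spectral density."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocovariance for lags 0..max_lag.
    acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
    # Hann lag window: tapering the autocovariance controls the variance.
    lag_window = 0.5 * (1 + np.cos(np.pi * np.arange(max_lag + 1) / max_lag))
    # Even symmetric extension makes the transform real-valued.
    wc = acov * lag_window
    sym = np.concatenate([wc, wc[-2:0:-1]])
    psd = np.real(np.fft.rfft(sym)) / fs
    freqs = np.fft.rfftfreq(len(sym), d=1.0 / fs)
    return freqs, psd

# Average the spectral estimates of several independent records, mirroring
# the 10 time-multiplexed noise sources described above.
rng = np.random.default_rng(3)
records = [np.cumsum(rng.normal(size=4096)) for _ in range(10)]  # toy 1/f^2-like data
psds = [blackman_tukey_psd(r, max_lag=256)[1] for r in records]
avg_psd = np.mean(psds, axis=0)
```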
Abstract:
The effectiveness of ventilation flows is considered from the perspective of buoyancy (or heat) removal from a space. This perspective is distinct from the standard approach, in which effectiveness is based on the concentration of a neutrally buoyant contaminant (passive tracer). Three new measures of effectiveness are proposed based on the ability of a flow to flush buoyancy from a ventilated space. These measures provide estimates of instantaneous and time-averaged effectiveness for the entire space, and of local effectiveness at any height of interest; from a generalisation of the latter, a vertical profile of effectiveness is defined. These measures enable quantitative comparisons between different flows, and they are applicable when there is a difference in density (as is typical due to temperature differences) between the interior environment and the replacement air. Applications therefore include natural ventilation, hybrid ventilation and a range of forced ventilation flows. Finally, we demonstrate how the ventilation effectiveness of a room may be assessed from simple traces of temperature versus time.
Abstract:
Semi-supervised clustering is the task of clustering data points into clusters where only a fraction of the points are labelled. The true number of clusters in the data is often unknown, and most models require this parameter as an input. Dirichlet process mixture models are appealing because they can infer the number of clusters from the data; however, these models do not handle high-dimensional data well and can encounter difficulties in inference. We present a novel nonparametric Bayesian kernel-based method to cluster data points without the need to prespecify the number of clusters or to model the complicated densities from which the data points are assumed to be generated. The key insight is to use determinants of submatrices of a kernel matrix as a measure of how close together a set of points are. We explore some theoretical properties of the model and derive a natural Gibbs-based algorithm with MCMC hyperparameter learning. The model is applied to a variety of synthetic and real-world data sets.
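To illustrate the key insight with a minimal sketch (assuming a squared exponential kernel and hypothetical data; this is not the paper's Gibbs sampler): the determinant of a kernel submatrix shrinks toward zero when the selected points are close together, because their kernel rows become nearly identical, and approaches one when they are far apart.

```python
import numpy as np
from scipy.spatial.distance import cdist

def rbf_kernel(X, Y, lengthscale=1.0):
    """Squared exponential kernel matrix between point sets X and Y."""
    return np.exp(-cdist(X, Y, "sqeuclidean") / (2 * lengthscale**2))

def cluster_closeness(points):
    """Log-determinant of the kernel submatrix for a candidate cluster:
    strongly negative for tightly packed points, near zero for spread-out
    points."""
    K = rbf_kernel(points, points) + 1e-10 * np.eye(len(points))  # jitter
    _, logdet = np.linalg.slogdet(K)
    return logdet

rng = np.random.default_rng(4)
tight = rng.normal(0.0, 0.05, size=(5, 2))   # hypothetical tight cluster
spread = rng.normal(0.0, 2.0, size=(5, 2))   # hypothetical dispersed points
print(cluster_closeness(tight))   # very negative: points are close together
print(cluster_closeness(spread))  # near zero: points are far apart
```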
Abstract:
The monthly and annual mean freshwater, heat and salt transports through the open boundaries of the South and East China Seas, derived from a variable-grid global ocean circulation model, are reported. The model has 1/6° resolution for the seas adjacent to China and 3° resolution for the global ocean. The model results are in fairly good agreement with existing estimates based on measurements. The computation shows that the flows passing through the South China Sea contribute volume, heat and salt transports of 5.3 Sv, 0.57 PW and 184 Gg s⁻¹, respectively (about one quarter of the total), to the Indonesian Throughflow, indicating that the South China Sea is an important pathway of the Pacific-to-Indian Ocean throughflow. The volume, heat and salt transports of the Kuroshio in the East China Sea are 25.6 Sv, 2.32 PW and 894 Gg s⁻¹, respectively; less than a quarter of this transport passes through the passage between Iriomote and Okinawa. The calculation of heat balance indicates that the South China Sea absorbs net heat flux from the sun and atmosphere at a rate of 0.08 PW, while the atmosphere gains net heat flux from the Bohai, Yellow and East China Seas at a rate of 0.05 PW.
Abstract:
M. Hieber and I. Wood, 'The Dirichlet problem in convex bounded domains for operators with L^\infty-coefficients', Differential and Integral Equations 20 (2007), no. 7, 721-734.