979 results for approximate calculation of sums
Abstract:
There are increasing indications that the contribution of holding costs, and their impact on housing affordability, is significant. Their importance and perceived impact can be gauged from the unprecedented level of attention policy makers have recently given them. This is evidenced by the embedding of specific strategies to address burgeoning holding costs (particularly the cost savings associated with streamlining regulatory assessment) within statutory instruments such as the Queensland Housing Affordability Strategy and the South East Queensland Regional Plan. However, several key issues require further investigation. Firstly, the computation and methodology behind the calculation of holding costs vary widely; in some instances holding costs are ignored altogether. Secondly, some ambiguity exists over which elements of holding costs should be included and how their relative contribution should be assessed. This may in part be explained by their nature: such costs are not always immediately apparent. They are not as visible as the more tangible cost items associated with greenfield development, such as regulatory fees, government taxes, acquisition costs, selling fees and commissions. Holding costs are also more difficult to evaluate since, for the most part, they must be assessed over time in an ever-changing environment, given their strong relationship with opportunity cost, which is in turn dependent, inter alia, upon prevailing inflation and/or interest rates. This paper provides a more detailed investigation of the elements related to holding costs and, in so doing, determines the size of their impact specifically on the end user. It extends research in this area by clarifying the extent to which holding costs affect housing affordability. The geographical diversity indicated by the considerable variation between planning instruments and the length of regulatory assessment periods suggests that further research should adopt a case study approach in order to test the relevance of the theoretical modelling conducted.
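As a rough illustration of the opportunity-cost mechanism described above, the sketch below compounds a single land outlay over an assessment delay. The monthly-compounding model and all figures are assumptions for illustration, not values from the paper.

```python
def holding_cost(capital, annual_rate, months_held):
    """Opportunity-cost holding charge on capital tied up during regulatory
    assessment, compounded monthly. A minimal sketch, assuming a single
    lump-sum outlay and a constant rate; all figures are illustrative."""
    monthly = annual_rate / 12.0
    return capital * ((1.0 + monthly) ** months_held - 1.0)

# e.g. a $500,000 land cost held for 18 months at a 7% p.a. opportunity
# cost adds roughly $55,000 before any other charges.
print(holding_cost(500_000, 0.07, 18))
```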
Abstract:
Bearing damage in modern inverter-fed AC drive systems is more common than in motors running directly from a 50 or 60 Hz supply. Fast switching transients and the common-mode voltage generated by a PWM inverter cause unwanted shaft voltage and resultant bearing currents. Parasitic capacitive coupling creates a path for discharge currents through rotors and bearings. In order to analyze bearing current discharges and their effect on bearing damage under different conditions, the capacitive coupling between the outer and inner races must be calculated. During motor operation, the changing distances between the balls and races alter the capacitance values. Because the thickness and spatial distribution of the lubricating grease change, this capacitance is not constant and is known to vary with speed and load. Thus, the resultant electric field between the races and balls varies with motor speed. The lubricating grease in the ball bearing cannot withstand high voltages, and a short circuit through the grease can occur. At low speeds, because of gravity, the balls and shaft may shift downwards, making the system (ball positions and shaft) asymmetric. In this study, two different asymmetric cases (asymmetric ball position and asymmetric shaft position) are analyzed and the results are compared with the symmetric case. The objective of this paper is to calculate the capacitive coupling and electric fields between the outer and inner races and the balls at different motor speeds for symmetric and asymmetric shaft and ball positions. The analysis is carried out using finite element simulations to determine the conditions that increase the probability of high rates of bearing failure due to current discharges through the balls and races.
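For intuition about why the film capacitance and field vary with speed, the following sketch uses a parallel-plate approximation of the grease film. The contact area, film thicknesses, grease permittivity and breakdown threshold are assumed illustrative values; the paper's actual analysis uses finite element simulation.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def film_capacitance(area_m2, film_thickness_m, eps_r_grease=2.2):
    """Parallel-plate estimate of the capacitance across the lubricating film
    between a ball and a race. First-order sketch only: a real bearing needs
    the Hertzian contact geometry and FEM; the grease permittivity is assumed."""
    return EPS0 * eps_r_grease * area_m2 / film_thickness_m

def film_field(shaft_voltage_v, film_thickness_m):
    """Mean electric field across the film; a discharge is expected once this
    exceeds the grease's dielectric strength (assumed ~15 MV/m)."""
    return shaft_voltage_v / film_thickness_m

# A thinner film (low speed) raises both the capacitance and the field:
for h in (0.1e-6, 0.5e-6, 2.0e-6):   # film thickness grows with speed
    print(h, film_capacitance(1e-6, h), film_field(10.0, h))
```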
Abstract:
Many of the costs associated with greenfield residential development are apparent and tangible. For example, regulatory fees, government taxes, acquisition costs, selling fees and commissions are all relatively easily identified, since they represent actual costs incurred at a given point in time. By contrast, holding costs are not always immediately evident, since they characteristically lack visibility. One reason is that, for the most part, they are assessed over time in an ever-changing environment. In addition, wide variations exist in development pipeline components: pipelines typically range anywhere from two to over sixteen years, even within the same geographical region. Determining the start and end points for holding cost computation can also prove problematic. Furthermore, the choice between applying prevailing inflation, interest rates, or a combination of both over time adds further complexity. Although research is emerging in these areas, a review of the literature reveals that attempts to identify holding cost components are limited. Their quantification (in terms of relative weight or proportionate cost to a development project) is even less apparent; in fact, the computation and methodology behind the calculation of holding costs vary widely and are in some instances ignored altogether. It may also be demonstrated that ambiguities exist over which elements of holding costs should be included and how their relative contribution should be assessed. Yet their impact on housing affordability is widely acknowledged to be profound, with their quantification potentially maximising the opportunities for delivering affordable housing. This paper builds on earlier investigations into the elements related to holding costs, providing theoretical modelling of the size of their impact, specifically on the end user. At this point the research relies on quantitative data sets; additional qualitative analysis (not included here) will be needed to account for variations between developers' expectations and the outcomes actually achieved. Although this research stops short of a regional or international comparison study, it yields an improved understanding of the relationship between holding costs, regulatory charges and housing affordability.
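The sensitivity of holding charges to pipeline duration and rate choice can be illustrated with a toy compounding model. The durations echo the two-to-sixteen-year spread mentioned above; the rates are assumed purely for illustration, and the choice between an inflation-like and a borrowing-like rate is exactly the modelling decision the abstract highlights.

```python
def holding_cost_multiplier(years, rate):
    """Factor by which holding charges inflate an initial outlay for a given
    pipeline duration, compounded annually. Illustrative sketch only."""
    return (1.0 + rate) ** years - 1.0

for years in (2, 8, 16):
    for rate in (0.03, 0.07):   # e.g. CPI-like vs borrowing-rate assumptions
        print(years, rate, round(holding_cost_multiplier(years, rate), 3))
# A 16-year pipeline at 7% roughly triples the outlay (multiplier ~1.95),
# while 2 years at 3% adds about 6% -- hence the affordability leverage.
```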
Abstract:
This technical report describes the methods used to obtain a list of acoustic indices that characterise the structure and distribution of acoustic energy in recordings of the natural environment. In particular, it describes methods for noise reduction in recordings of the environment and a fast clustering algorithm used to estimate the spectral richness of long recordings.
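A minimal sketch of the kind of per-frequency-bin noise reduction described, assuming a median-based background estimate as a stand-in for the report's actual method:

```python
import numpy as np
from scipy import signal

def noise_reduced_spectrogram(wave, fs):
    """Noise reduction sketch: estimate the background level of each frequency
    bin as its median over time, then subtract and clip at zero. A simplified
    stand-in for the report's method, not a reproduction of it."""
    f, t, sxx = signal.spectrogram(wave, fs=fs)
    db = 10.0 * np.log10(sxx + 1e-12)                      # power to decibels
    noise_profile = np.median(db, axis=1, keepdims=True)   # per-bin background
    return f, t, np.clip(db - noise_profile, 0.0, None)
```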
Abstract:
Fast calculation of quantities such as in-cylinder volume and indicated power is important in internal combustion engine research. Multiple channels of data, including crank angle and pressure, were collected for this purpose using a fully instrumented diesel engine research facility. Existing methods use software to post-process the data, first calculating volume from crank angle, then calculating indicated work and indicated power from the area enclosed by the pressure-volume indicator diagram. Instead, this work investigates the feasibility of real-time calculation of volume and power via hardware implementation on Field Programmable Gate Arrays (FPGAs). Alternative hardware implementations were investigated using lookup tables, Taylor series methods, or the CORDIC (COordinate Rotation DIgital Computer) algorithm to compute the trigonometric operations in the crank-angle-to-volume calculation; the CORDIC algorithm was found to use the fewest resources. Simulation of the hardware implementation showed that the error in the volume and indicated power is less than 0.1%.
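A software sketch of the CORDIC approach is shown below: rotation-mode CORDIC supplies the sine and cosine for a standard slider-crank crank-angle-to-volume relation. This is a floating-point reference model under assumed geometry, not the paper's fixed-point FPGA design.

```python
import math

# Rotation-mode CORDIC -- the trigonometric core the hardware comparison
# found cheapest. Floating point here; an FPGA would use fixed point.
N_ITER = 16
ANGLES = [math.atan(2.0 ** -i) for i in range(N_ITER)]
GAIN = math.prod(math.cos(a) for a in ANGLES)   # compensates CORDIC scaling

def cordic_sin_cos(theta):
    """sin/cos by shift-and-add micro-rotations; valid for |theta| <= pi/2."""
    x, y, z = 1.0, 0.0, theta
    for i, a in enumerate(ANGLES):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return y * GAIN, x * GAIN

def cylinder_volume(theta, bore, stroke, conrod, v_clearance):
    """Slider-crank crank-angle-to-volume relation (standard geometry, not the
    paper's exact parameterisation). theta in radians, SI units throughout."""
    theta = math.fmod(theta + math.pi, 2.0 * math.pi) - math.pi  # wrap to (-pi, pi]
    if theta > math.pi / 2.0:                 # quadrant folding for the CORDIC core
        s, c = cordic_sin_cos(math.pi - theta); c = -c
    elif theta < -math.pi / 2.0:
        s, c = cordic_sin_cos(-math.pi - theta); c = -c
    else:
        s, c = cordic_sin_cos(theta)
    r = stroke / 2.0                          # crank radius
    piston_travel = r + conrod - r * c - math.sqrt(conrod**2 - (r * s)**2)
    return v_clearance + math.pi * bore**2 / 4.0 * piston_travel
```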
Abstract:
The feasibility of real-time calculation of parameters for an internal combustion engine via reconfigurable hardware is investigated as an alternative to software computation. A detailed in-hardware field-programmable gate array (FPGA)-based design is developed and evaluated using input crank angle and in-cylinder pressure data from fully instrumented diesel engines in the QUT Biofuel Engine Research Facility (BERF). Results indicate the feasibility of a hardware-based implementation for real-time processing at speeds comparable to the data sampling rate currently used in the facility, with an acceptably low level of discrepancy between hardware- and software-based calculation of key engine parameters.
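As a point of reference for what such hardware computes, the following sketch evaluates indicated work as the closed pressure-volume loop integral and converts it to indicated power, assuming a four-stroke cycle. It is a software reference model, not the BERF implementation.

```python
import numpy as np

def indicated_work_and_power(pressure_pa, volume_m3, rpm):
    """Indicated work as the closed p-V loop integral (trapezoidal rule) and
    indicated power for a four-stroke engine (one power cycle per two
    revolutions). Samples are assumed to cover exactly one complete cycle."""
    p = np.asarray(pressure_pa, dtype=float)
    v = np.asarray(volume_m3, dtype=float)
    dv = np.roll(v, -1) - v                                   # closes the loop
    work = float(np.sum(0.5 * (p + np.roll(p, -1)) * dv))     # J per cycle
    power = work * rpm / 60.0 / 2.0                           # W
    return work, power
```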
Abstract:
Dose kernels may be used to calculate dose distributions in radiotherapy (as described by Ahnesjö et al., 1999). Their calculation requires Monte Carlo methods, usually by forcing interactions to occur at a point. The Geant4 Monte Carlo toolkit provides a capability to force interactions to occur in a particular volume. We have modified this capability and created a Geant4 application to calculate dose kernels in Cartesian, cylindrical, and spherical scoring systems. The simulation considers monoenergetic photons incident at the origin of a 3 m × 3 m × 3 m water volume. Photons interact via Compton, photoelectric, pair production, and Rayleigh scattering. By default, Geant4 models photon interactions by sampling a physical interaction length (PIL) for each process; the process returning the smallest PIL is then considered to occur. In order to force the interaction to occur within a given length, L_FIL, we scale each PIL according to the formula PIL_forced = L_FIL × (1 − exp(−PIL/PIL_0)), where PIL_0 is a constant. This ensures that the process occurs within L_FIL, whilst correctly modelling the relative probability of each process. Dose kernels were produced for incident photon energies of 0.1, 1.0, and 10.0 MeV. In order to benchmark the code, dose kernels were also calculated using the EGSnrc Edknrc user code. Identical scoring systems were used, namely the collapsed cone approach of the Edknrc code. Relative dose difference images were then produced. Preliminary results demonstrate the ability of the Geant4 application to reproduce the shape of the dose kernels; median relative dose differences of 12.6, 5.75, and 12.6% were found for incident photon energies of 0.1, 1.0, and 10.0 MeV respectively.
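The forced-interaction scaling can be sketched as follows. The process names, mean free paths and the sampling loop are illustrative stand-ins, not Geant4 API calls; only the scaling formula comes from the abstract.

```python
import math
import random

def forced_interaction(mean_free_paths, l_fil, pil_0):
    """Sample a physical interaction length (PIL) per process, then scale each
    by PIL_forced = L_FIL * (1 - exp(-PIL / PIL_0)) so the winning interaction
    is forced to occur within l_fil while preserving the relative probability
    of each process. mean_free_paths: dict of process name -> mean free path
    (same length units as l_fil)."""
    scaled = {}
    for proc, mfp in mean_free_paths.items():
        pil = -mfp * math.log(1.0 - random.random())   # exponential sampling
        scaled[proc] = l_fil * (1.0 - math.exp(-pil / pil_0))
    winner = min(scaled, key=scaled.get)               # smallest scaled PIL wins
    return winner, scaled[winner]

# Hypothetical mean free paths (metres) for illustration only:
print(forced_interaction({"compton": 0.2, "photoelectric": 1.5,
                          "rayleigh": 0.9}, l_fil=0.05, pil_0=0.3))
```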
Abstract:
Measurement of discrimination against 18O during dark respiration in plants is currently accepted as the only reliable method of estimating the partitioning of electrons between the cytochrome and alternative pathways. In this paper, we review the theory of the technique and its application to a gas-phase system. We extend it to include sampling effects and show that the isotope discrimination factor D is calculated as D = −d ln(1 + δ)/d ln O*, where δ is the isotopic composition of the substrate oxygen and O* = [O2]/[N2] in a closed chamber containing tissue respiring in the dark. It is not necessary to integrate the expression but, if the integrated form is used, the resultant regression should not be constrained through the origin. This is important, since any error in D will have significant effects on the estimation of the flux of electrons through the two pathways.
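A minimal sketch of estimating D from closed-chamber measurements via the integrated (regression) route, with the fit deliberately not forced through the origin as the abstract cautions; variable names and data handling are assumptions.

```python
import numpy as np

def discrimination_factor(delta, o_star):
    """Estimate D = -d ln(1 + delta) / d ln(O*) as the slope of an
    unconstrained linear regression of ln(1 + delta) on ln(O*).
    delta: isotopic composition of substrate oxygen per sample;
    o_star: corresponding [O2]/[N2] ratios from the closed chamber."""
    x = np.log(np.asarray(o_star, dtype=float))
    y = np.log(1.0 + np.asarray(delta, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)   # intercept kept free, per the text
    return -slope
```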
Abstract:
Purpose: The previous literature on Bland-Altman analysis describes only approximate methods for calculating confidence intervals for 95% limits of agreement (LoAs). This paper describes exact methods for calculating such confidence intervals, based on the assumption that differences between measurement pairs are normally distributed. Methods: Two basic situations are considered for calculating LoA confidence intervals: the first, where LoAs are considered individually (i.e. using one-sided tolerance factors for a normal distribution); and the second, where LoAs are considered as a pair (i.e. using two-sided tolerance factors for a normal distribution). The equations underlying the calculation of exact confidence limits are briefly outlined. Results: To assist in determining confidence intervals for LoAs (considered individually and as a pair), tables of coefficients are included for degrees of freedom between 1 and 1000. Numerical examples show the use of the tables for calculating confidence limits for Bland-Altman LoAs. Conclusions: Exact confidence intervals for LoAs can differ considerably from Bland and Altman's approximate method, especially when sample sizes are not large. There are more precise methods for calculating confidence intervals for LoAs than Bland and Altman's approximate method, although even an approximate calculation of confidence intervals for LoAs is likely to be better than none at all. Reporting confidence limits for LoAs considered as a pair is appropriate for most situations; however, there may be circumstances where it is appropriate to report confidence limits for LoAs considered individually.
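For comparison, here is a sketch of Bland and Altman's approximate confidence intervals, the method the paper improves upon, using the familiar Var(LoA) ≈ s²(1/n + z²/(2(n−1))) approximation. The exact tolerance-factor coefficients tabulated in the paper would replace the t-based interval used here.

```python
import numpy as np
from scipy import stats

def loa_with_approx_ci(diffs, level=0.95):
    """95% Bland-Altman limits of agreement with the classic approximate CI.
    diffs: paired measurement differences, assumed normally distributed.
    Returns (lower LoA, upper LoA) and a CI for each."""
    d = np.asarray(diffs, dtype=float)
    n = d.size
    mean, sd = d.mean(), d.std(ddof=1)
    z = stats.norm.ppf(0.975)                       # 1.96 for 95% LoAs
    loa = (mean - z * sd, mean + z * sd)
    se_loa = sd * np.sqrt(1.0 / n + z**2 / (2.0 * (n - 1)))
    t = stats.t.ppf(0.5 + level / 2.0, df=n - 1)
    ci = [(l - t * se_loa, l + t * se_loa) for l in loa]
    return loa, ci
```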
Abstract:
Magnetic resonance is a well-established tool for structural characterisation of porous media. Features of pore-space morphology can be inferred from NMR diffusion-diffraction plots or the time-dependence of the apparent diffusion coefficient. Diffusion NMR signal attenuation can be computed from the restricted diffusion propagator, which describes the distribution of diffusing particles for a given starting position and diffusion time. We present two techniques for efficient evaluation of restricted diffusion propagators for use in NMR porous-media characterisation. The first is the Lattice Path Count (LPC). Its physical essence is that the restricted diffusion propagator connecting points A and B in time t is proportional to the number of distinct length-t paths from A to B. By using a discrete lattice, the number of such paths can be counted exactly. The second technique is the Markov transition matrix (MTM). The matrix represents the probabilities of jumps between every pair of lattice nodes within a single timestep. The propagator for an arbitrary diffusion time can be calculated as the appropriate matrix power. For periodic geometries, the transition matrix needs to be defined only for a single unit cell. This makes MTM ideally suited for periodic systems. Both LPC and MTM are closely related to existing computational techniques: LPC, to combinatorial techniques; and MTM, to the Fokker-Planck master equation. The relationship between LPC, MTM and other computational techniques is briefly discussed in the paper. Both LPC and MTM perform favourably compared to Monte Carlo sampling, yielding highly accurate and almost noiseless restricted diffusion propagators. Initial tests indicate that their computational performance is comparable to that of finite element methods. Both LPC and MTM can be applied to complicated pore-space geometries with no analytic solution. We discuss the new methods in the context of diffusion propagator calculation in porous materials and model biological tissues.
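A toy illustration of the MTM idea on a one-dimensional lattice with reflecting walls: build the one-step transition matrix, then raise it to a matrix power to obtain the propagator at an arbitrary diffusion time. The geometry and jump probabilities are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def mtm_propagator(n_nodes, n_steps):
    """Markov transition matrix (MTM) sketch: one-step jump probabilities on a
    1-D lattice bounded by reflecting walls, with the restricted diffusion
    propagator obtained as a matrix power. P[a, b] is the probability of a
    walker starting at node a being at node b after n_steps timesteps."""
    T = np.zeros((n_nodes, n_nodes))
    for i in range(n_nodes):
        T[i, i] = 0.5                    # probability of staying put
        if i > 0:
            T[i, i - 1] = 0.25           # jump left
        else:
            T[i, i] += 0.25              # reflecting wall folds the jump back
        if i < n_nodes - 1:
            T[i, i + 1] = 0.25           # jump right
        else:
            T[i, i] += 0.25              # reflecting wall
    return np.linalg.matrix_power(T, n_steps)
```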
Abstract:
Typically, the walking ability of individuals with a transfemoral amputation (TFA) is represented by the speed of walking (SofW) obtained in experimental settings. Recent developments in portable kinetic systems allow the level of activity of TFA to be assessed during actual daily living, outside the confined space of a gait lab. Unfortunately, only minimal spatio-temporal characteristics can be extracted from the kinetic data, including the cadence and the duration of gait cycles. Therefore, a way is needed to use some of these characteristics to assess the instantaneous speed of walking during daily living. The purpose of this study was to compare several methods of determining SofW using minimal spatial gait characteristics.
Abstract:
A new theory of shock dynamics has been developed in the form of a finite number of compatibility conditions along shock rays. It has been used to study the growth or decay of shock strength for an accelerating or decelerating piston starting with a nonzero piston velocity. The results show good agreement with those obtained by Harten's high-resolution TVD scheme.
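As a stand-in for the kind of high-resolution TVD scheme used for comparison, the sketch below applies one MUSCL-minmod TVD step to the inviscid Burgers equation. This is an illustrative toy with simple forward-Euler time stepping, not Harten's scheme or the paper's gas-dynamics setting.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: picks the smaller slope, zero at extrema."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def godunov_flux(ul, ur):
    """Exact Riemann (Godunov) flux for Burgers' equation, f(u) = u^2/2."""
    fl, fr = 0.5 * ul**2, 0.5 * ur**2
    shock_flux = np.where(ul + ur > 0.0, fl, fr)   # shock speed is (ul+ur)/2
    raref_flux = np.where(ul > 0.0, fl, np.where(ur < 0.0, fr, 0.0))
    return np.where(ul > ur, shock_flux, raref_flux)

def tvd_step(u, dx, dt):
    """One MUSCL-minmod step for u_t + (u^2/2)_x = 0 on a periodic grid.
    Stability requires the CFL condition dt * max|u| / dx <= 1."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
    ul = u + 0.5 * slope                  # left state at interface i+1/2
    ur = np.roll(u - 0.5 * slope, -1)     # right state at interface i+1/2
    flux = godunov_flux(ul, ur)
    return u - dt / dx * (flux - np.roll(flux, 1))
```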