960 results for cost estimation
Abstract:
This article develops a method for the analysis of growth data with multiple recaptures when the initial ages of all individuals are unknown. Existing approaches either impute the initial ages or model them as random effects; assumptions about the initial age are not verifiable, because all the initial ages are unknown. We present an alternative approach that treats all the lengths, including the length at first capture, as correlated repeated measures on each individual. Optimal estimating equations are developed using the generalized estimating equations approach, which requires only assumptions on the first two moments. Explicit expressions for the estimation of both mean growth parameters and variance components are given to minimize the computational complexity. Simulation studies indicate that the proposed method works well. Two real data sets are analysed for illustration, one from whelks (Dicathais aegrota) and the other from southern rock lobster (Jasus edwardsii) in South Australia.
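In its generic form, the generalized estimating equations approach referred to above solves a moment-based equation of the following type (standard textbook notation, not necessarily the authors'):

```latex
\sum_{i=1}^{n} D_i^{\top} V_i^{-1}\,\bigl(y_i - \mu_i(\beta)\bigr) = 0,
\qquad D_i = \frac{\partial \mu_i(\beta)}{\partial \beta^{\top}},
```

where $y_i$ collects the repeated length measurements on individual $i$, $\mu_i(\beta)$ is its mean under the growth model, and $V_i$ is a working covariance matrix; only the first two moments of $y_i$ need to be specified.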
Abstract:
The method of generalised estimating equations for regression modelling of clustered outcomes allows specification of a working correlation matrix intended to approximate the true correlation matrix of the observations. We investigate the asymptotic relative efficiency of the generalised estimating equation estimate of the mean parameters when the correlation parameters are estimated by various methods. The asymptotic relative efficiency depends on three features of the analysis, namely (i) the discrepancy between the working correlation structure and the unobservable true correlation structure, (ii) the method by which the correlation parameters are estimated and (iii) the 'design', by which we refer both to the structures of the predictor matrices within clusters and to the distribution of cluster sizes. Analytical and numerical studies of realistic data-analysis scenarios show that the choice of working covariance model has a substantial impact on regression estimator efficiency. Protection against avoidable loss of efficiency associated with covariance misspecification is obtained when a 'Gaussian estimation' pseudolikelihood procedure is used with an AR(1) structure.
Abstract:
We consider estimation of mortality rates and growth parameters from length-frequency data of a fish stock when there is individual variability in the von Bertalanffy growth parameter L-infinity and investigate the possible bias in the estimates when the individual variability is ignored. Three methods are examined: (i) the regression method based on the Beverton and Holt's (1956, Rapp. P.V. Reun. Cons. Int. Explor. Mer, 140: 67-83) equation; (ii) the moment method of Powell (1979, Rapp. P.V. Reun. Cons. Int. Explor. Mer, 175: 167-169); and (iii) a generalization of Powell's method that estimates the individual variability to be incorporated into the estimation. It is found that the biases in the estimates from the existing methods are, in general, substantial, even when individual variability in growth is small and recruitment is uniform, and the generalized method performs better in terms of bias but is subject to a larger variation. There is a need to develop robust and flexible methods to deal with individual variability in the analysis of length-frequency data.
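The Beverton and Holt (1956) mean-length estimator referred to in (i) has a simple closed form: under deterministic von Bertalanffy growth and constant total mortality Z, Z = K(L∞ − L̄)/(L̄ − L′), where L̄ is the mean length of animals longer than the cutoff length L′. The sketch below (invented parameter values) illustrates its consistency in the idealised case without individual variability, which is the setting the abstract says the method relies on:

```python
import numpy as np

rng = np.random.default_rng(0)
Linf, K, Z, tc = 100.0, 0.3, 0.5, 1.0  # growth and mortality parameters (invented)

# Under constant mortality Z, ages beyond the age of full selection tc
# are exponentially distributed with rate Z.
ages = tc + rng.exponential(1.0 / Z, size=200_000)
lengths = Linf * (1.0 - np.exp(-K * ages))  # deterministic von Bertalanffy growth
Lc = Linf * (1.0 - np.exp(-K * tc))         # length at first full selection

# Beverton-Holt (1956) mean-length estimator of total mortality
Lbar = lengths.mean()
Z_hat = K * (Linf - Lbar) / (Lbar - Lc)
print(round(Z_hat, 3))  # close to the true Z = 0.5
```

Adding individual variability to `Linf` in the simulation and re-running is a quick way to see the bias the paper quantifies.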
Abstract:
In the analysis of tagging data, it has been found that the least-squares method, based on the increment function known as the Fabens method, produces biased estimates because individual variability in growth is not allowed for. This paper modifies the Fabens method to account for individual variability in the length asymptote. Significance tests using t-statistics or log-likelihood ratio statistics may be applied to show the level of individual variability. Simulation results indicate that the modified method reduces the biases in the estimates to negligible proportions. Tagging data from tiger prawns (Penaeus esculentus and Penaeus semisulcatus) and rock lobster (Panulirus ornatus) are analysed as an illustration.
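The unmodified Fabens fit that the paper takes as its starting point can be sketched as a nonlinear least-squares problem on simulated tag-recapture data. The sketch uses scipy for illustration; the parameter values are invented, and the paper's individual-variability correction is not included.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
Linf_true, K_true = 100.0, 0.3  # invented parameter values

# Simulated tag-recapture data: length at release L1, time at liberty dt,
# and observed growth increment dL with measurement noise.
L1 = rng.uniform(30, 80, size=500)
dt = rng.uniform(0.5, 3.0, size=500)
dL = (Linf_true - L1) * (1 - np.exp(-K_true * dt)) + rng.normal(0, 1.0, size=500)

# Fabens increment model: E[dL] = (Linf - L1) * (1 - exp(-K * dt))
def fabens(X, Linf, K):
    L1, dt = X
    return (Linf - L1) * (1 - np.exp(-K * dt))

(Linf_hat, K_hat), _ = curve_fit(fabens, (L1, dt), dL, p0=(90.0, 0.2))
```

Here the data were generated without individual variability in the asymptote, so the plain Fabens fit is unbiased; the paper's point is that when each animal has its own L∞, this same fit becomes biased.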
Abstract:
The von Bertalanffy growth model is extended to incorporate explanatory variables. The generalized model includes the switched growth model and the seasonal growth model as special cases, and can also be used to assess the tagging effect on growth. Distribution-free and consistent estimating functions are constructed for estimation of growth parameters from tag-recapture data in which age at release is unknown. This generalizes the work of James (1991, Biometrics 47: 1519-1530) who considered the classical model and allowed for individual variability in growth. A real dataset from barramundi (Lates calcarifer) is analysed to estimate the growth parameters and possible effect of tagging on growth.
Abstract:
Recent decreases in cost, and improvements in performance, of silicon array detectors open a range of potential applications of relevance to plant physiologists, associated with spectral analysis in the visible and short-wave near infra-red (far-red) spectrum. The performance characteristics of three commercially available ‘miniature’ spectrometers based on silicon array detectors operating in the 650–1050-nm spectral region (MMS1 from Zeiss, S2000 from Ocean Optics, and FICS from Oriel, operated with a Larry detector) were compared with respect to the application of non-invasive prediction of the sugar content of fruit using near infra-red spectroscopy (NIRS). The FICS–Larry gave the best wavelength resolution; however, the narrow slit and small pixel size of its charge-coupled device detector resulted in very low sensitivity, and this instrumentation was not considered further. Wavelength resolution was poor with the MMS1 relative to the S2000 (e.g. full width at half maximum of the 912 nm Hg peak, 13 and 2 nm for the MMS1 and S2000, respectively), but the large pixel height of the array used in the MMS1 gave it sensitivity comparable to that of the S2000. The signal-to-signal standard error ratio of spectra was greater by an order of magnitude with the MMS1 than with the S2000, at both near-saturation and low light levels. Calibrations were developed using reflectance spectra of filter paper soaked in a range of concentrations (0–20% w/v) of sucrose, using a modified partial least squares procedure. Calibrations developed with the MMS1 were superior to those developed using the S2000 (e.g. coefficient of correlation of 0.90 and 0.62, and standard error of cross-validation of 1.9 and 5.4%, respectively), indicating that a high signal-to-noise ratio matters more than wavelength resolution for calibration accuracy.
The design of a bench top assembly using the MMS1 for the non-invasive assessment of mesocarp sugar content of (intact) melon fruit is reported in terms of light source and angle between detector and light source, and optimisation of math treatment (derivative condition and smoothing function).
Abstract:
Demagnetization to zero remanence, or to a predetermined value, is of interest to magnet manufacturers and material users. Conventional demagnetization methods using a decaying alternating field, produced by a damped oscillatory circuit or a conveyor system, result in either high demagnetization cost or large power dissipation. A simple technique using thyristors is presented for demagnetizing the material. Power consumption occurs mainly in the first two half-cycles of the applied voltage, so power dissipation is greatly reduced. A calculation of the optimum thyristor triggering angle for demagnetizing high-coercivity materials is also presented.
Abstract:
There’s a polyester mullet skirt gracing a derrière near you. It’s short at the front, long at the back, and it’s also known as the hi-lo skirt. Like fads that preceded it, the mullet skirt has a short fashion life, and although it will remain potentially wearable for years, it’s likely to soon be heading to the charity shop or to landfill...
Abstract:
The emission from neutral hydrogen (HI) clouds in the post-reionization era (z ≤ 6), too faint to be detected individually, is present as a diffuse background in all low-frequency radio observations below 1420 MHz. The angular and frequency fluctuations of this radiation (~1 mK) are an important future probe of the large-scale structures in the Universe. We show that such observations are a very effective probe of the background cosmological model and the perturbed Universe. In our study we focus on the possibility of determining the redshift-space distortion parameter β, the coordinate distance r(ν), and its derivative with respect to redshift, r′(ν). Using reasonable estimates for the observational uncertainties and configurations representative of ongoing and upcoming radio interferometers, we predict parameter estimation at a precision comparable with supernova Ia observations and galaxy redshift surveys, across a wide range in redshift that is only partially accessed by other probes. Future HI observations of the post-reionization era thus offer a new technique, complementing several existing ones, to probe the expansion history and to elucidate the nature of dark energy.
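The redshift-space distortion parameter β enters the fluctuation power spectrum through the standard Kaiser formula (textbook notation, not necessarily the authors'):

```latex
P_s(k,\mu) = b^2\,P_m(k)\,\bigl(1 + \beta\mu^2\bigr)^2,
\qquad \beta \equiv \frac{f(z)}{b} \simeq \frac{\Omega_m(z)^{0.55}}{b},
```

where $\mu$ is the cosine of the angle between the wavevector and the line of sight, $b$ is the bias of the HI distribution, $P_m(k)$ is the matter power spectrum, and $f$ is the linear growth rate. Measuring the $\mu$-dependence of the 21-cm fluctuations thus constrains $\beta$ directly.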
Abstract:
The focus of this article is the cost-effectiveness of mitigation strategies to reduce pollution loads and improve water quality in South-East Queensland. Scenarios were developed for the types of catchment interventions that could be considered and the changes in water quality indicators that might result. Once these catchment scenarios were modelled, the range of expected outcomes was assessed and the costs of the mitigation interventions were estimated. The strategies considered include point-source and non-point-source interventions. Predicted reductions in pollution levels were calculated for each action based on the expected population growth. The cost of the interventions included the full investment and annual running costs, as well as planned public investment by the state agencies. The cost-effectiveness of strategies is likely to vary according to whether suspended sediment, nitrogen or phosphorus loads are being targeted.
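Cost-effectiveness comparisons of this kind typically annualise the capital cost with a capital recovery factor and divide total annual cost by the pollutant load reduction. The sketch below uses invented intervention names and numbers, not figures from the study:

```python
def crf(r, n):
    """Capital recovery factor: converts a capital cost into an
    equivalent annual cost over n years at discount rate r."""
    return r * (1 + r) ** n / ((1 + r) ** n - 1)

def cost_effectiveness(capital, annual_om, reduction_t, r=0.05, n=20):
    """Annualised cost per tonne of pollutant load reduced."""
    return (capital * crf(r, n) + annual_om) / reduction_t

# Hypothetical interventions: (capital $, O&M $/yr, tonnes N reduced per yr)
options = {"riparian buffers": (2.0e6, 5.0e4, 40.0),
           "STP upgrade": (1.0e7, 3.0e5, 300.0)}
for name, (cap, om, red) in options.items():
    print(f"{name}: ${cost_effectiveness(cap, om, red):,.0f} per tonne N")
```

Ranking interventions by this ratio, separately for sediment, nitrogen and phosphorus, is what makes the targeting question in the last sentence of the abstract concrete.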
Abstract:
Forested areas play a dominant role in the global hydrological cycle. Evapotranspiration is a dominant component, at times approaching the rainfall itself. Although sophisticated estimation methods are available, a simple and reliable tool is needed so that a sound water budget can be made. Studies have established that evapotranspiration in forested areas is much higher than in agricultural areas. Latitude, forest type, climate and geological characteristics add further complexity to its estimation. Few studies have compared different evapotranspiration methods on forested watersheds in semi-arid tropical forests. In this paper, a comparative study of different methods of estimating evapotranspiration is made with reference to actual measurements from an all-parameter climatological station in a small deciduous forested watershed at Mulehole (area 4.5 km²), South India. Potential evapotranspiration (ETo) was calculated using ten physically based and empirical methods. Actual evapotranspiration (AET) was calculated through a water balance computed with the SWAT model. The Penman-Monteith method was used as a benchmark against which the estimates from the various methods are compared. The calculated AET shows good agreement with the worldwide evapotranspiration curve for forests. Error estimates have been made with respect to the Penman-Monteith method. This study gives an idea of the errors involved whenever methods with limited data requirements are used, and shows the use of indirect methods for estimating evapotranspiration, which are more suitable for regional-scale studies.
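One widely used temperature-based empirical ETo method of the kind such comparisons include is the Hargreaves-Samani equation (shown in its FAO-56 form; whether it is among the paper's ten methods is not stated, and the input values below are invented):

```python
import math

def hargreaves_eto(tmax, tmin, ra):
    """Hargreaves-Samani reference evapotranspiration (mm/day).
    tmax, tmin: daily max/min air temperature (deg C);
    ra: extraterrestrial radiation expressed in equivalent mm/day."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * (tmean + 17.8) * math.sqrt(tmax - tmin) * ra

eto = hargreaves_eto(tmax=30.0, tmin=20.0, ra=15.0)  # invented example values
print(round(eto, 2))  # ~4.67 mm/day
```

Methods like this need only temperature and latitude (for Ra), which is exactly the "limited data" situation whose errors the paper quantifies against Penman-Monteith.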
Abstract:
Heavy haul railway lines are important and expensive items of infrastructure operating in an environment which is increasingly focussed on risk-based management and constrained profit margins. It is vital that costs are minimised but also that infrastructure satisfies failure criteria and standards of reliability which account for the random nature of wheel-rail forces and of the properties of the materials in the track. In Australia and the USA, concrete railway sleepers/ties are still designed using methods which the rest of the civil engineering world discarded decades ago in favour of the more rational, more economical and probabilistically based, limit states design (LSD) concept. This paper describes a LSD method for concrete sleepers which is based on (a) billions of measurements over many years of the real, random wheel-rail forces on heavy haul lines, and (b) the true capacity of sleepers. The essential principles on which the new method is based are similar to current, widely used LSD-based standards for concrete structures. The paper proposes and describes four limit states which a sleeper must satisfy, namely: strength; operations; serviceability; and fatigue. The method has been applied commercially to two new major heavy haul lines in Australia, where it has saved clients millions of dollars in capital expenditure.
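The limit states design format the paper applies has the generic structural-design form φR_n ≥ Σ γ_i Q_i. The sketch below is schematic, with invented capacity and load factors rather than the calibrated values in the paper:

```python
def lsd_check(nominal_capacity, phi, factored_loads):
    """Generic limit states design check: phi * R_n >= sum(gamma_i * Q_i).
    factored_loads: list of (load_factor, nominal_load_effect) pairs."""
    design_capacity = phi * nominal_capacity
    design_load = sum(g * q for g, q in factored_loads)
    return design_capacity >= design_load, design_capacity, design_load

# Hypothetical sleeper bending check (kN*m): quasi-static and dynamic
# wheel-load moment components with invented load factors.
ok, cap, load = lsd_check(nominal_capacity=70.0, phi=0.8,
                          factored_loads=[(1.2, 25.0), (1.5, 15.0)])
print(ok)  # reduced capacity 0.8*70 exceeds the factored load, so the check passes
```

In a calibrated standard, each of the four limit states named in the abstract (strength, operations, serviceability, fatigue) would get its own φ and γ values derived from the measured wheel-rail force statistics.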
Abstract:
Light interception is a major factor influencing plant development and biomass production. Several methods have been proposed to determine this variable, but its calculation remains difficult in artificial environments with heterogeneous light. We propose a method that uses 3D virtual plant modelling and directional light characterisation to estimate light interception in highly heterogeneous light environments such as growth chambers and glasshouses. Intercepted light was estimated by coupling an architectural model and a light model for different genotypes of the rosette species Arabidopsis thaliana (L.) Heynh and a sunflower crop. The model was applied to plants of contrasting architectures, cultivated in isolation or in canopy, in natural or artificial environments, and under contrasting light conditions. The model gave satisfactory results when compared with observed data and enabled calculation of light interception in situations where direct measurements or classical methods were inefficient, such as young crops, isolated plants or artificial conditions. Furthermore, the model revealed that A. thaliana increased its light interception efficiency when shaded. To conclude, the method can be used to calculate intercepted light at organ, plant and plot levels, in natural and artificial environments, and should be useful in the investigation of genotype-environment interactions for plant architecture and light interception efficiency. This paper originates from a presentation at the 5th International Workshop on Functional–Structural Plant Models, Napier, New Zealand, November 2007.
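For contrast, the classical approach for homogeneous light environments, which 3D models of this kind improve on, is the Beer-Lambert turbid-medium law, in which the intercepted fraction depends only on leaf area index and an extinction coefficient (the value k = 0.5 below is a common default, not a value from the paper):

```python
import math

def intercepted_fraction(lai, k=0.5):
    """Beer-Lambert law for canopy light interception.
    lai: leaf area index; k: canopy extinction coefficient."""
    return 1.0 - math.exp(-k * lai)

f = intercepted_fraction(lai=3.0)  # roughly 0.78 of incident light intercepted
```

This formula has no notion of plant architecture or directional, heterogeneous light, which is why it breaks down for young crops, isolated plants and growth chambers, the situations where the 3D approach is needed.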
Abstract:
This paper presents the site classification of the Bangalore Mahanagar Palike (BMP) area using geophysical data and the evaluation of spectral acceleration at ground level using a probabilistic approach. Site classification has been carried out using experimental data from the shallow geophysical method of Multichannel Analysis of Surface Waves (MASW). One-dimensional (1-D) MASW surveys have been carried out at 58 locations and the respective velocity profiles obtained. The average shear wave velocity to 30 m depth (Vs(30)) has been calculated and used to classify the BMP area as per NEHRP (National Earthquake Hazards Reduction Program). Based on the Vs(30) values, the major part of the BMP area can be classified as "site class D" and "site class C". A smaller portion of the study area, in and around Lalbagh Park, is classified as "site class B". Further, probabilistic seismic hazard analysis has been carried out to map the seismic hazard in terms of spectral acceleration (Sa) at rock and ground level, considering the site classes and the six seismogenic sources identified. The mean annual rate of exceedance and the cumulative probability hazard curve for Sa have been generated. The quantified hazard values, in terms of spectral acceleration for short and long periods, are mapped for rock and for site classes C and D, with 10% probability of exceedance in 50 years, on a grid of 0.5 km. In addition, the Uniform Hazard Response Spectrum (UHRS) at surface level has been developed for 5% damping and 10% probability of exceedance in 50 years for rock and for site classes C and D. These spectral accelerations and uniform hazard spectra can be used to assess the design force for important structures and to develop the design spectrum.
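The "10% probability of exceedance in 50 years" design criterion used here converts to an annual exceedance rate under the usual Poisson occurrence model:

```python
import math

def annual_rate(p_exceed, years):
    """Annual exceedance rate lambda from P = 1 - exp(-lambda * t)
    (Poisson assumption for earthquake occurrence)."""
    return -math.log(1.0 - p_exceed) / years

lam = annual_rate(0.10, 50)
return_period = 1.0 / lam
print(round(return_period))  # ~475 years, the conventional design-basis return period
```

Reading the hazard curve (mean annual rate of exceedance versus Sa) at λ ≈ 0.0021 per year gives the mapped spectral acceleration values for each site class.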