58 results for Revenue estimates
Abstract:
Wound healing and tumour growth involve collective cell spreading, which is driven by individual motility and proliferation events within a population of cells. Mathematical models are often used to interpret experimental data and to estimate the parameters so that predictions can be made. Existing methods for parameter estimation typically assume that these parameters are constants and often ignore any uncertainty in the estimated values. We use approximate Bayesian computation (ABC) to estimate the cell diffusivity, D, and the cell proliferation rate, λ, from a discrete model of collective cell spreading, and we quantify the uncertainty associated with these estimates using Bayesian inference. We use a detailed experimental data set describing the collective cell spreading of 3T3 fibroblast cells. The ABC analysis is conducted for different combinations of initial cell densities and experimental times in two separate scenarios: (i) where collective cell spreading is driven by cell motility alone, and (ii) where collective cell spreading is driven by combined cell motility and cell proliferation. We find that D can be estimated precisely, with a small coefficient of variation (CV) of 2–6%. Our results indicate that D appears to depend on the experimental time, which is a feature that has been previously overlooked. Assuming that the values of D are the same in both experimental scenarios, we use the information about D from the first experimental scenario to obtain reasonably precise estimates of λ, with a CV between 4 and 12%. Our estimates of D and λ are consistent with previously reported values; however, our method is based on a straightforward measurement of the position of the leading edge whereas previous approaches have involved expensive cell counting techniques. Additional insights gained using a fully Bayesian approach justify the computational cost, especially since it allows us to accommodate information from different experiments in a principled way.
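As a hedged illustration of the ABC approach described above, the sketch below runs plain rejection ABC for (D, λ) against a synthetic leading-edge summary. The forward model is a deliberately crude stand-in (a Fisher-KPP front speed of 2√(Dλ)), not the paper's discrete spreading model, and the priors, observation times and tolerance are all invented.

```python
# Plain rejection ABC for (D, lam) -- illustrative only.  The forward
# model is a toy stand-in (Fisher-KPP front speed ~ 2*sqrt(D*lam)),
# NOT the discrete spreading model used in the paper; priors, times
# and the tolerance are invented.
import numpy as np

rng = np.random.default_rng(1)

def leading_edge(D, lam, times):
    # Toy summary statistic: edge position grows at the Fisher wave speed.
    return 2.0 * np.sqrt(D * lam) * times

times = np.array([12.0, 24.0, 36.0, 48.0])        # hours (hypothetical)
observed = leading_edge(1000.0, 0.05, times)      # synthetic "data"

accepted = []
eps = 50.0                                        # ABC tolerance
for _ in range(100_000):
    D = rng.uniform(100.0, 3000.0)                # uniform priors
    lam = rng.uniform(0.001, 0.1)
    if np.linalg.norm(leading_edge(D, lam, times) - observed) < eps:
        accepted.append((D, lam))

post = np.array(accepted)
for name, col in zip(("D", "lam"), post.T):
    print(f"{name}: mean {col.mean():.3g}, CV {100 * col.std() / col.mean():.1f}%")
```

Because this toy summary depends on D and λ only through their product, D alone is poorly identified here; the paper avoids this by estimating D first from the motility-only scenario.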
Abstract:
This article describes research conducted for the Japanese government in the wake of the magnitude 9.0 earthquake and tsunami that struck eastern Japan on March 11, 2011. In this study, material stock analysis (MSA) is used to examine the losses of building and infrastructure materials after this disaster. Estimates of the magnitude of material stock that has lost its social function as a result of a disaster can indicate the quantities required for reconstruction, provide a better understanding of the volumes of waste flows generated by that disaster, and help in policy deliberations on the recovery of disaster-stricken areas. Calculations of the lost building and road materials in the five most affected prefectures were undertaken. Analysis in this study is based on the use of geographical information systems (GIS) databases and statistics; it aims to (1) describe in spatial terms what construction materials were lost, (2) estimate the amount of infrastructure material needed to rehabilitate disaster areas, and (3) indicate the amount of lost material stock that should be taken into consideration during government policy deliberations. Our analysis concludes that the material stock losses of buildings and road infrastructure are 31.8 and 2.1 million tonnes, respectively. This research approach and the use of spatial MSA can be useful for urban planners and may also provide more appropriate information for the disposal work of municipalities in disaster-afflicted areas.
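For readers unfamiliar with MSA, the core bottom-up calculation is simply activity data multiplied by a material-intensity coefficient. The sketch below is a minimal illustration with made-up floor areas and intensities, not the study's GIS inputs.

```python
# Bottom-up material stock accounting: lost stock = activity data x intensity.
# All figures below are made-up placeholders, not the study's GIS inputs.
lost_floor_area_m2 = {"wood": 5.0e6, "steel_rc": 2.0e6}   # by structure type
intensity_t_per_m2 = {"wood": 0.6, "steel_rc": 1.8}       # material intensity

lost_stock_t = sum(lost_floor_area_m2[k] * intensity_t_per_m2[k]
                   for k in lost_floor_area_m2)
print(f"Estimated lost building material: {lost_stock_t / 1e6:.1f} Mt")
```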
Abstract:
In this paper we provide estimates for the coverage of parameter space when using Latin Hypercube Sampling, which forms the basis of building so-called populations of models. The estimates are obtained using combinatorial counting arguments to determine how many trials, k, are needed in order to obtain specified parameter space coverage for a given value of the discretisation size n. In the case of two dimensions, we show that if the ratio (φ) of trials to discretisation size is greater than 1, then as n becomes moderately large the fractional coverage behaves as 1 − exp(−φ). We compare these estimates with simulation results obtained from an implementation of Latin Hypercube Sampling using MATLAB.
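A quick way to sanity-check the 1 − exp(−φ) estimate is to simulate it. The paper's implementation is in MATLAB; the sketch below is a Python analogue in which each trial is assumed to be one Latin Hypercube design of size n (one point per row and per column of an n × n grid), with φ = k/n.

```python
# Empirical 2-D coverage of an n x n grid after k Latin Hypercube trials,
# compared with the 1 - exp(-phi) estimate, phi = k/n.  A Python analogue
# of the MATLAB experiment described above; details are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def lhs_trial(n):
    # One LHS design of size n: one point per grid row and per column.
    return np.stack([rng.permutation(n), rng.permutation(n)], axis=1)

def coverage(n, k):
    hit = np.zeros((n, n), dtype=bool)
    for _ in range(k):
        pts = lhs_trial(n)
        hit[pts[:, 0], pts[:, 1]] = True
    return hit.mean()

n = 200
for k in (100, 200, 400, 800):
    phi = k / n
    print(f"phi={phi:.1f}: simulated {coverage(n, k):.3f}, "
          f"predicted {1 - np.exp(-phi):.3f}")
```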
Abstract:
Population size is crucial when estimating population-normalized drug consumption (PNDC) from wastewater-based drug epidemiology (WBDE). Three conceptually different population estimates can be used: de jure (common census, residence), de facto (all persons within a sewer catchment), and chemical loads (contributors to the sampled wastewater). De facto and chemical loads will be the same where all households contribute to a central sewer system without wastewater loss. This study explored the feasibility of determining a de facto population and its effect on estimating PNDC in an urban community over an extended period. Drugs and other chemicals were analyzed in 311 daily composite wastewater samples. The daily estimated de facto population (using chemical loads) was on average 32% higher than the de jure population. Consequently, using the latter would systematically overestimate PNDC by 22%. However, the relative day-to-day pattern of drug consumption was similar regardless of the type of normalization, as daily illicit drug loads appeared to vary substantially more than the population. Using the chemical loads population, we objectively quantified the total methodological uncertainty of PNDC and reduced it by a factor of 2. Our study illustrated the potential benefits of using the chemical loads population for obtaining more robust PNDC data in WBDE.
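The normalisation itself is a one-line calculation: PNDC is the measured daily load divided by the population served. The sketch below uses hypothetical numbers to show how a de jure denominator inflates the estimate relative to a chemical-loads (de facto) denominator; note that the study's 22% average bias differs from the 32% population gap because both quantities vary day to day and the bias is averaged over daily ratios.

```python
# Population-normalised drug consumption (PNDC): daily load / population.
# All numbers are hypothetical, chosen only to mirror a ~32% population gap.
daily_drug_load_mg = 6_600.0       # drug residue in a 24-h composite sample
de_jure_pop = 50_000               # census (de jure) population
de_facto_pop = 66_000              # inferred from chemical-load markers

pndc_de_jure = daily_drug_load_mg / (de_jure_pop / 1000)    # mg/day/1000 inh.
pndc_de_facto = daily_drug_load_mg / (de_facto_pop / 1000)

print(f"de jure normalisation:  {pndc_de_jure:.1f} mg/day/1000 inhabitants")
print(f"de facto normalisation: {pndc_de_facto:.1f} mg/day/1000 inhabitants")
print(f"single-day overestimate: {100 * (pndc_de_jure / pndc_de_facto - 1):.0f}%")
```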
Abstract:
Previous studies have shown that the external growth records of the posterior adductor muscle scar (PAMS) of the bivalve Pinna nobilis are incomplete and do not produce accurate age estimations. We have developed a new methodology to study age and growth using the inner record of the PAMS, which avoids the need for costly in situ shell measurements or isotopic studies. Using the inner record we identified the positions of PAMS previously obscured by nacre and estimated the number of missing records in adult specimens with strong abrasion of the calcite layer in the anterior portion of the shell. The study of the PAMS and inner record of two shells that were 6 years old when collected showed that only 2 and 3 PAMS were observed, while 6 inner records could be counted, thus confirming our working methodology. Growth parameters of a P. nobilis population located in Moraira, Spain (western Mediterranean) were estimated with the new methodology and compared to those obtained using PAMS data and in situ measurements. For the comparisons, we applied different models treating the data alternatively as length-at-age (LA) and tag-recapture (TR). Among all the methods we tested for fitting the von Bertalanffy growth model, fitting LA data from the inner record with non-linear mixed effects, estimating missing records from the calcite width, was the most appropriate. The equation obtained with this method, L = 573(1 − e^(−0.16(t − 0.02))), is very similar to that calculated previously from in situ measurements for the same population.
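As a hedged illustration, the sketch below fits the von Bertalanffy growth model L(t) = L∞(1 − e^(−k(t − t₀))) to synthetic length-at-age data generated from the parameters reported above; it uses ordinary non-linear least squares rather than the study's non-linear mixed-effects approach, and the age range and noise level are invented.

```python
# Fit the von Bertalanffy growth model L(t) = Linf*(1 - exp(-k*(t - t0)))
# to synthetic length-at-age data generated from the parameters reported
# above (Linf=573, k=0.16, t0=0.02).  Illustrative only: plain non-linear
# least squares, not the study's mixed-effects fit.
import numpy as np
from scipy.optimize import curve_fit

def vbgf(t, Linf, k, t0):
    return Linf * (1 - np.exp(-k * (t - t0)))

rng = np.random.default_rng(7)
ages = np.repeat(np.arange(1, 13), 5).astype(float)      # 5 shells per age
lengths = vbgf(ages, 573.0, 0.16, 0.02) + rng.normal(0, 15, ages.size)

popt, pcov = curve_fit(vbgf, ages, lengths, p0=(600.0, 0.1, 0.0))
perr = np.sqrt(np.diag(pcov))
for name, v, e in zip(("Linf", "k", "t0"), popt, perr):
    print(f"{name} = {v:.2f} +/- {e:.2f}")
```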
Abstract:
We propose a new model for estimating the size of a population from successive catches taken during a removal experiment. The data from these experiments often have excessive variation, known as overdispersion, compared with that predicted by the multinomial model. The new model allows catchability to vary randomly among samplings, which accounts for overdispersion. When the catchability is assumed to have a beta distribution, the likelihood function, which is referred to as beta-multinomial, is derived, and hence the maximum likelihood estimates can be evaluated. Simulations show that in the presence of extra variation in the data, the confidence intervals have been substantially underestimated in previous models (Leslie-DeLury, Moran) and that the new model provides more reliable confidence intervals. The performance of these methods was also demonstrated using two real data sets: one with overdispersion, from smallmouth bass (Micropterus dolomieu), and the other without overdispersion, from rat (Rattus rattus).
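For context, the sketch below fits the classic constant-catchability removal model by maximum likelihood, profiling the catchability p over candidate population sizes N; this is the baseline that the beta-multinomial model generalises by letting p vary as a Beta random variable between samplings. The catch series is fabricated.

```python
# Maximum-likelihood fit of the classic constant-catchability removal model
# (the baseline that the beta-multinomial generalises).  Catches are made up.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

catches = np.array([120, 78, 55, 31, 22])
T, total = len(catches), catches.sum()

def profile_nll(p, N):
    probs = p * (1 - p) ** np.arange(T)          # P(caught in pass i)
    rest = 1 - probs.sum()                       # P(never caught)
    ll = (gammaln(N + 1) - gammaln(N - total + 1) - gammaln(catches + 1).sum()
          + (catches * np.log(probs)).sum() + (N - total) * np.log(rest))
    return -ll

# Profile out p for each candidate N, then pick the (N, p) minimising the NLL.
best = min(((N, minimize_scalar(profile_nll, bounds=(1e-4, 0.999),
                                args=(N,), method="bounded"))
            for N in range(total, total + 400)),
           key=lambda t: t[1].fun)
N_hat, res = best
print(f"N_hat = {N_hat}, p_hat = {res.x:.3f}")
```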
Abstract:
Although subsampling is a common method for describing the composition of large and diverse trawl catches, the accuracy of these techniques is often unknown. We determined the sampling errors generated from estimating the percentage of the total number of species recorded in catches, as well as the abundance of each species, at each increase in the proportion of the sorted catch. We completely partitioned twenty prawn trawl catches from tropical northern Australia into subsamples of about 10 kg each. All subsamples were then sorted, and species numbers recorded. Catch weights ranged from 71 to 445 kg, and the number of fish species in trawls ranged from 60 to 138, and invertebrate species from 18 to 63. Almost 70% of the species recorded in catches were "rare" in subsamples (less than one individual per 10 kg subsample or less than one in every 389 individuals). A matrix was used to show the increase in the total number of species that were recorded in each catch as the percentage of the sorted catch increased. Simulation modelling showed that sorting small subsamples (about 10% of catch weights) identified about 50% of the total number of species caught in a trawl. Larger subsamples (50% of catch weight on average) identified about 80% of the total species caught in a trawl. The accuracy of estimating the abundance of each species also increased with increasing subsample size. For the "rare" species, sampling error was around 80% after sorting 10% of catch weight and was just less than 50% after 40% of catch weight had been sorted. For the "abundant" species (five or more individuals per 10 kg subsample or five or more in every 389 individuals), sampling error was around 25% after sorting 10% of catch weight, but was reduced to around 10% after 40% of catch weight had been sorted.
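The simulation logic is easy to reproduce in miniature: draw a community with many rare species, sort increasing fractions of the pooled catch, and count the species recovered. The sketch below uses a fabricated lognormal community, not the trawl data.

```python
# Simulate sorting increasing fractions of a trawl catch and count how many
# species are recovered -- a toy version of the subsampling analysis above.
# The simulated community (many rare species) is hypothetical.
import numpy as np

rng = np.random.default_rng(3)

n_species = 100
abundances = rng.lognormal(mean=2.0, sigma=1.5, size=n_species).astype(int) + 1
catch = rng.permutation(np.repeat(np.arange(n_species), abundances))

for frac in (0.1, 0.25, 0.5, 1.0):
    k = int(frac * catch.size)
    found = np.unique(catch[:k]).size
    print(f"sorted {frac:4.0%} of catch: {found:3d}/{n_species} species")
```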
Abstract:
Sugarcane is a major global agricultural crop that produces significant quantities of sugar and biomass in tropical and sub-tropical regions. Over many centuries, the crop has been grown primarily for its high sugar content which traditionally contributes over 95% of the revenue derived from the crop. While the production of renewable electricity from bagasse and rum from molasses has a long history, in more recent decades significant advances have been made in the production of cogeneration products and fuel ethanol at large scale. Sugarcane biorefineries producing fuels, green chemicals, biopolymers and bio-products offer great potential for improving the profitability of sugarcane production. This paper will address the opportunities available for sugarcane biorefineries to contribute to future profitability and sustainability of the sugarcane industry.
Abstract:
Purpose – Preliminary cost estimates for construction projects are often the basis of financial feasibility and budgeting decisions in the early stages of planning and for effective project control, monitoring and execution. The purpose of this paper is to identify and better understand the cost drivers and factors that contribute to the accuracy of estimates in residential construction projects from the developers' perspective. Design/methodology/approach – The paper uses a literature review to determine the drivers that affect the accuracy of developers' early stage cost estimates and the factors influencing the construction costs of residential construction projects. It uses cost variance data and other supporting documentation collected from two case study projects in South East Queensland, Australia, along with semi-structured interviews conducted with the practitioners involved. Findings – It is found that many cost drivers or factors of cost uncertainty identified in the literature for large-scale projects are not as apparent and relevant for developers' small-scale residential construction projects. Specifically, the certainty and completeness of project-specific information, the suitability of historical cost data, contingency allowances, methods of estimating and the estimator's level of experience significantly affect the accuracy of cost estimates. Developers of small-scale residential projects use pre-established and suitably priced bills of quantities as the prime estimating method, which is considered to be the most efficient and accurate method for standard house designs. However, this method needs to be backed with the expertise and experience of the estimator. Originality/value – There is a lack of research on the accuracy of developers' early stage cost estimates and the relevance and applicability of cost drivers and factors in residential construction projects. This research has practical significance for improving the accuracy of such preliminary cost estimates.
Abstract:
The number of genetic factors associated with common human traits and disease is increasing rapidly, and the general public is utilizing affordable, direct-to-consumer genetic tests. The results of these tests are often in the public domain. A combination of factors has increased the potential for the indirect estimation of an individual's risk for a particular trait. Here we explain the basic principles underlying risk estimation, which allowed us to test the ability to make an indirect risk estimation from genetic data by imputing Dr. James Watson's redacted apolipoprotein E gene (APOE) information. The principles underlying risk prediction from genetic data have been well known and applied for many decades; however, the recent increase in genomic knowledge, and advances in mathematical and statistical techniques and computational power, make it relatively easy to make an accurate but indirect estimation of risk. There is a current hazard of indirect risk estimation that is relevant not only to the subject but also to individuals related to the subject; this risk will likely increase as more detailed genomic data and better computational tools become available.
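The underlying inference is a textbook Bayesian update: a redacted genotype can be inferred from a linked, published marker. The sketch below is a toy single-marker version with invented frequencies; the actual APOE imputation exploited many surrounding markers.

```python
# Toy version of indirect genotype inference: update the probability of a
# redacted risk allele from a linked (correlated) marker via Bayes' rule.
# Allele frequencies and linkage probabilities are hypothetical.
prior_risk = 0.15                     # population frequency of risk allele
p_marker_given_risk = 0.95            # linked marker almost always co-occurs
p_marker_given_no_risk = 0.10

# Observed: the linked marker is present in the published genome.
evidence = (p_marker_given_risk * prior_risk
            + p_marker_given_no_risk * (1 - prior_risk))
posterior_risk = p_marker_given_risk * prior_risk / evidence
print(f"P(risk allele | linked marker) = {posterior_risk:.2f}")
```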
Abstract:
Background: Significant recent attention has focussed on the role of antibiotic prescribing and usage with the aim of combating antibiotic resistance, a growing worldwide health concern. A significant gap in this literature concerns the consumption patterns and beliefs of consumers about antibiotics and their effects. We seek to remedy this gap by exploring a range of questionable antibiotic practices and obtaining reliable estimates of their prevalence as well as their normative status. Methods: We conducted an online survey of over 100 consumers. We used a new incentive-compatible technique, the Bayesian Truth Serum (BTS), to elicit more truthful responding than standard self-report measures. We asked participants to indicate whether they had engaged in a number of practices, including taking antibiotics when they were out of date and storing antibiotics at home for later use. We then sought estimates of the percentage of other patients (like them) who had engaged in each behaviour and, among those patients who had, the percentage that would admit to having done so. We also asked about the social acceptability and responsibility of the practices. Results: These results will show, for each type of questionable practice, how prevalent it is and whether consumers view it as both socially acceptable and socially responsible. We will also obtain the relative prevalence of each of these practices. Conclusion: These findings are of paramount importance in gaining a better understanding of consumers' antibiotic consumption patterns and will be vital for better targeting educational campaigns to lower inappropriate antibiotic consumption.
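For readers unfamiliar with BTS, the sketch below implements its published scoring rule (Prelec, 2004): an information score, the log of the answer's actual frequency over the geometric mean of predicted frequencies, plus an α-weighted prediction score. The answers and predictions are fabricated, not the survey data described above.

```python
# A minimal implementation of Bayesian Truth Serum scoring (Prelec, 2004).
# Respondents give a binary answer plus a predicted distribution of others'
# answers; "surprisingly common" answers score highly.  Data is fabricated.
import numpy as np

answers = np.array([0, 0, 1, 0, 1, 1, 1, 0])      # 0 = "no", 1 = "yes"
predictions = np.array([                          # each row: predicted
    [0.7, 0.3], [0.6, 0.4], [0.4, 0.6], [0.8, 0.2],   # shares of no/yes
    [0.3, 0.7], [0.5, 0.5], [0.2, 0.8], [0.6, 0.4]])
alpha = 1.0

xbar = np.bincount(answers, minlength=2) / answers.size   # endorsement rates
ybar = np.exp(np.log(predictions).mean(axis=0))           # geometric mean

info = np.log(xbar[answers] / ybar[answers])              # information score
pred = alpha * (xbar * np.log(predictions / xbar)).sum(axis=1)
scores = info + pred
print("BTS scores:", np.round(scores, 3))
```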
Abstract:
Quantifying fluxes of nitrous oxide (N2O), a potent greenhouse gas, from soils is necessary to improve our knowledge of terrestrial N2O losses. Developing universal sampling frequencies for calculating annual N2O fluxes is difficult, as fluxes are renowned for their high temporal variability. We demonstrate that daily sampling was generally required to achieve annual N2O fluxes within 10% of the best estimate for 28 annual datasets collected from three continents: Australia, Europe and Asia. Decreasing the regularity of measurements either under- or overestimated annual N2O fluxes, with a maximum overestimation of 935%. Measurement frequency was lowered using a sampling strategy based on environmental factors known to affect temporal variability, but still required sampling more than once a week. Consequently, uncertainty in current global terrestrial N2O budgets associated with the upscaling of field-based datasets can be decreased significantly using adequate sampling frequencies.
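The effect of sampling frequency is easy to demonstrate on a synthetic series: when the annual flux is dominated by a handful of emission spikes, sparse sampling that misses (or happens to hit) a spike badly biases the reconstructed annual total. The sketch below uses a fabricated spiky daily series, not the study's datasets.

```python
# Toy demonstration of how sampling frequency biases annual N2O flux
# estimates: a spiky synthetic daily series is subsampled and the annual
# total is reconstructed from the subsample mean.  Series is fabricated.
import numpy as np

rng = np.random.default_rng(11)
base = rng.lognormal(mean=0.0, sigma=0.3, size=365)
spikes = np.zeros(365)
spikes[rng.choice(365, size=8, replace=False)] = rng.uniform(20, 60, 8)
flux = base + spikes                      # daily N2O flux (arbitrary units)
annual = flux.sum()

for step in (1, 7, 14, 28):
    est = flux[::step].mean() * 365       # annual total from sparse sampling
    print(f"every {step:2d} days: bias {100 * (est / annual - 1):+6.1f}%")
```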
Abstract:
The pulp and paper industry is very large, with annual revenues now well in excess of $200 billion (FAO 2009). Estimates for the amount of bagasse used in the production of pulp and paper products vary, but the general consensus is that it accounts for 2–5% of global production, making it one of the highest revenue earners for the global sugarcane industry.
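Taken at face value, those figures put bagasse-based pulp and paper at roughly 2–5% of $200 billion, i.e. on the order of $4–10 billion in annual revenue, though as noted the underlying share is uncertain.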