254 results for PRIMATE DENSITY ESTIMATES
Abstract:
Decentralised sensor networks typically consist of multiple processing nodes, each supporting one or more sensors and interconnected via wireless communication. Practical applications of Decentralised Data Fusion have generally been restricted to Gaussian-based approaches such as the Kalman or Information Filter. This paper proposes Parzen window estimates as an alternative representation for performing Decentralised Data Fusion. The common information between two nodes must be removed from any received estimates before local data fusion may occur; otherwise, estimates may become overconfident due to data incest. A closed-form approximation to the division of two estimates is described to enable conservative assimilation of incoming information at a node in a decentralised data fusion network. A simple example of tracking a moving particle with Parzen density estimates demonstrates how this algorithm allows conservative assimilation of network information.
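As a brief illustration of the representation involved, here is a minimal sketch of a Parzen window (kernel) density estimate with Gaussian kernels; the function name, bandwidth choice and data are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: Parzen window density estimate with Gaussian kernels.
# Bandwidth and data are invented for illustration.
import numpy as np

def parzen_estimate(samples, query_points, bandwidth=0.5):
    """Evaluate a 1-D Parzen window density estimate at query_points."""
    diffs = query_points[:, None] - samples[None, :]      # (m, n) pairwise offsets
    kernels = np.exp(-0.5 * (diffs / bandwidth) ** 2)     # Gaussian kernel values
    kernels /= bandwidth * np.sqrt(2.0 * np.pi)           # normalise each kernel
    return kernels.mean(axis=1)                           # average over samples

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=200)
xs = np.linspace(-6.0, 8.0, 101)
density = parzen_estimate(data, xs)
print(density.sum() * (xs[1] - xs[0]))   # Riemann sum ~ 1, sanity check
```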
Abstract:
The effects of ethanol fumigation on the inter-cycle variability of key in-cylinder pressure parameters in a modern common-rail diesel engine have been investigated. Specifically, the maximum rate of pressure rise, peak pressure, peak pressure timing and ignition delay were examined. A new methodology for determining the start of combustion was also proposed and demonstrated; it is particularly useful with noisy in-cylinder pressure data, which can significantly affect the calculation of an accurate net rate of heat release indicator diagram. Inter-cycle variability has traditionally been investigated using the coefficient of variation. However, deeper insight into engine operation is gained by presenting the results as kernel density estimates, allowing the investigation of otherwise unnoticed phenomena, including multi-modal and skewed behaviour. This study found that operating a common-rail diesel engine with high ethanol substitutions (>20% at full load, >30% at three-quarter load) results in a significant reduction in ignition delay. Furthermore, it concluded that if the engine is operated with an absolute air-to-fuel ratio (mole basis) of less than 80, inter-cycle variability increases substantially compared with normal operation.
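A hedged sketch of the point about kernel density estimates versus the coefficient of variation: two synthetic peak-pressure populations can share nearly the same CoV while a KDE exposes the bimodality of one of them. All values are invented, and scipy's `gaussian_kde` stands in for whatever estimator the study used.

```python
# Why a KDE can reveal multi-modal cycle-to-cycle behaviour that the
# coefficient of variation (CoV) alone hides. Synthetic data, not engine data.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
unimodal = rng.normal(80.0, 4.0, 500)                  # single peak-pressure mode
bimodal = np.concatenate([rng.normal(76.0, 1.5, 250),  # two modes, similar CoV
                          rng.normal(84.0, 1.5, 250)])

for name, cycles in [("unimodal", unimodal), ("bimodal", bimodal)]:
    cov = cycles.std() / cycles.mean()                 # nearly identical CoVs
    kde_vals = gaussian_kde(cycles)(np.linspace(65.0, 95.0, 200))
    n_modes = (np.diff(np.sign(np.diff(kde_vals))) < 0).sum()  # local maxima
    print(f"{name}: CoV={cov:.3f}, modes~{n_modes}")
```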
Abstract:
With the advent of alternative fuels, such as biodiesels and related blends, it is important to develop an understanding of their effects on inter-cycle variability, which in turn influences engine performance and emissions. Using four methanol-transesterified biomass fuels of differing carbon chain length and degree of unsaturation, this paper provides insight into the effect that alternative fuels have on inter-cycle variability. The experiments were conducted with a heavy-duty, turbo-charged, common-rail Cummins compression ignition engine. Combustion performance is reported in terms of the following key in-cylinder parameters: indicated mean effective pressure (IMEP), net heat release rate (NHRR), standard deviation of variability (StDev), coefficient of variation (CoV), peak pressure, peak pressure timing and maximum rate of pressure rise. A link is also established between the cyclic variability and the oxygen ratio, which is a good indicator of stoichiometry. The results show that the fatty acid structures did not have a significant effect on injection timing, injection duration, injection pressure, StDev of IMEP, or the timing of peak motoring and combustion pressures. However, a significant effect was noted on the premixed and diffusion combustion proportions, combustion peak pressure and maximum rate of pressure rise. Additionally, the boost pressure, IMEP and combustion peak pressure were found to be directly correlated with the oxygen ratio. Particle emissions correlate positively with the oxygen content in the fuel, as well as in the air-fuel mixture, resulting in a higher total number of particles per unit of mass.
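For readers unfamiliar with the oxygen ratio, the sketch below shows one common way such a stoichiometry indicator can be computed: oxygen atoms supplied (from air plus fuel-bound oxygen) over oxygen atoms required for complete combustion. The surrogate fuel composition and all numbers are assumptions for illustration, not the paper's definition or data.

```python
# Hedged sketch of an oxygen-ratio style stoichiometry indicator.
def oxygen_ratio(mol_o2_air, fuel_c, fuel_h, fuel_o, mol_fuel):
    o_supplied = 2.0 * mol_o2_air + fuel_o * mol_fuel
    # Complete combustion: each C needs 2 O (CO2), each pair of H needs 1 O (H2O)
    o_required = (2.0 * fuel_c + 0.5 * fuel_h) * mol_fuel
    return o_supplied / o_required

# Example: methyl-ester-like surrogate C19H36O2, 1 mol fuel, 27 mol O2 from air
print(oxygen_ratio(27.0, 19, 36, 2, 1.0))   # = 1.0, i.e. stoichiometric
```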
Abstract:
Much of our understanding and management of ecological processes requires knowledge of the distribution and abundance of species. Reliable abundance or density estimates are essential for managing both threatened and invasive populations, yet they are often challenging to obtain. Recent and emerging technological advances, particularly in unmanned aerial vehicles (UAVs), provide exciting opportunities to overcome these challenges in ecological surveillance. UAVs can provide automated, cost-effective surveillance and offer repeat surveys for pest incursions at an invasion front. They can capitalise on manoeuvrability and advanced imagery options to detect species that are cryptic owing to behaviour, life history or inaccessible habitat. UAVs may also cause less disturbance, in magnitude and duration, to sensitive fauna than other survey methods such as transect counting by humans or sniffer dogs. The surveillance approach depends upon the particular ecological context and the objective. For example, animal, plant and microbial target species differ in their movement, spread and observability. Lag times may exist between a pest species' arrival at a site and its detectability, prompting a need for repeat surveys. Operationally, however, the frequency and coverage of UAV surveys may be limited by financial and other constraints, leading to errors in estimating species occurrence or density. We use simulation modelling to investigate how movement ecology should influence fine-scale decisions regarding ecological surveillance using UAVs. Movement and dispersal parameter choices allow contrasts between locally mobile but slow-dispersing populations, and species that are locally more static but invasive at the landscape scale. We find that low and slow UAV flights may offer the best monitoring strategy for predicting local population densities in transects, but that the consequent reduction in overall area sampled may sacrifice the ability to reliably predict regional population density. Alternative flight plans may perform better, but this also depends on movement ecology and the magnitude of the relative detection errors for different flight choices. Simulated investigations such as this will become increasingly useful for revealing how the spatio-temporal extent and resolution of UAV monitoring should be adjusted to reduce observation errors and thus provide better population estimates, maximising the efficacy and efficiency of unmanned aerial surveys.
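The kind of simulation described above can be caricatured in a few lines: a hypothetical toy in which animals perform a random walk and a UAV transect detects individuals within an altitude-dependent swath. Every parameter (swath scaling, detection probability, arena size) is invented for illustration and is not taken from the study.

```python
# Toy UAV transect survey over a randomly walking population.
# Higher flights see a wider strip but detect each individual less reliably.
import numpy as np

rng = np.random.default_rng(4)
n_animals, n_steps, arena = 200, 50, 100.0          # true density = 0.02 / unit^2
pos = rng.uniform(0.0, arena, (n_animals, 2))
for _ in range(n_steps):                            # simple 2-D random walk
    pos = np.clip(pos + rng.normal(0.0, 1.0, pos.shape), 0.0, arena)

def transect_count(pos, x_line, altitude):
    swath = 0.2 * altitude                          # assumed swath-altitude scaling
    p_detect = max(0.0, 1.0 - altitude / 120.0)     # assumed detection falloff
    in_swath = np.abs(pos[:, 0] - x_line) < swath / 2.0
    return rng.binomial(in_swath.sum(), p_detect), swath

for alt in (20.0, 60.0, 100.0):
    counts, swath = zip(*[transect_count(pos, x, alt) for x in (25.0, 50.0, 75.0)])
    density = sum(counts) / (sum(swath) * arena)    # naive (uncorrected) estimate
    print(f"altitude {alt:5.1f}: estimated density {density:.4f}")
```

Note that the naive estimator ignores imperfect detection, so higher flights underestimate density, which is the kind of observation error the abstract refers to.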
Abstract:
Scratch assays are difficult to reproduce. Here we identify a previously overlooked source of variability which could partially explain this difficulty. We analyse a suite of scratch assays in which we vary the initial degree of confluence (initial cell density). Our results indicate that the rate of re-colonisation is very sensitive to the initial density. To quantify the relative roles of cell migration and proliferation, we calibrate the solution of the Fisher–Kolmogorov model to cell density profiles to provide estimates of the cell diffusivity, D, and the cell proliferation rate, λ. This procedure indicates that the estimates of D and λ are very sensitive to the initial density. This dependence suggests that the Fisher–Kolmogorov model does not accurately represent the details of the collective cell spreading process, since this model assumes that D and λ are constants that ought to be independent of the initial density. Since higher initial cell density leads to enhanced spreading, we also calibrate the solution of the Porous–Fisher model to the data as this model assumes that the cell flux is an increasing function of the cell density. Estimates of D and λ associated with the Porous–Fisher model are less sensitive to the initial density, suggesting that the Porous–Fisher model provides a better description of the experiments.
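To make the two models concrete, here is a minimal explicit finite-difference sketch of the Fisher-Kolmogorov equation, u_t = D u_xx + λu(1 - u/K), and the Porous-Fisher variant, in which the flux is density-dependent, D(u/K)u_x. The discretisation, parameter values and initial condition are illustrative assumptions only.

```python
# Minimal sketch (assumed discretisation) of the two models being compared:
# Fisher-Kolmogorov:  u_t = D u_xx + lam * u * (1 - u/K)
# Porous-Fisher:      u_t = D ((u/K) u_x)_x + lam * u * (1 - u/K)
import numpy as np

def step(u, dx, dt, D, lam, K=1.0, porous=False):
    ux = np.gradient(u, dx)
    flux = D * (u / K) * ux if porous else D * ux   # density-dependent flux
    return u + dt * (np.gradient(flux, dx) + lam * u * (1.0 - u / K))

x = np.linspace(0.0, 1.0, 201)
for porous in (False, True):
    u = np.where(np.abs(x - 0.5) < 0.1, 0.0, 1.0)   # scratched monolayer profile
    for _ in range(2000):                           # simulate to t = 2
        u = step(u, x[1] - x[0], 1e-3, D=5e-3, lam=1.0, porous=porous)
    print("Porous-Fisher" if porous else "Fisher-Kolmogorov",
          f"density at scratch centre: {u[100]:.3f}")
```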
Abstract:
Objective: Because studies of crowding in long-term care settings are lacking, the authors sought to: (1) generate initial estimates of crowding in nursing homes and assisted living facilities; and (2) evaluate two operational approaches to its measurement. Background: Reactions to density and proximity are complex. Greater density intensifies people's reaction to a situation in the direction (positive or negative) in which they would react if the situation occurred under less dense conditions. People with dementia are especially reactive to the environment. Methods: Using a cross-sectional correlational design in nursing homes and assisted living facilities involving 185 participants, multiple observations (N = 6,455) of crowding and other environmental variables were made. Crowding, location, and sound were measured three times per observation; ambiance was measured once. Data analyses consisted of descriptive statistics, t-tests, and one-way analysis of variance. Results: Crowding estimates were higher for nursing homes and in dining and activity rooms. Crowding also varied across settings and locations by time of day. Overall, the interaction of location and time affected crowding significantly (N = 5,559, df [47, 511], F = 105.69, p < .0001); effects were greater within location-by-hour than between location-by-hour, but the effect explained slightly less variance in Long-Term Care Crowding Index (LTC-CI) estimates (47.41%) than location alone. Crowding had small, direct, and highly significant correlations with sound and with the engaging subscale for ambiance; a similar, though inverse, correlation was seen with the soothing subscale for ambiance. Conclusions: Crowding fluctuates in step with routine activities, such as meals, in long-term care settings. Furthermore, a relationship between crowding and other physical characteristics of the environment was found. The LTC-CI is likely to be more sensitive than simple people counts when evaluating the effects of crowding on the behavior of elders, particularly those with dementia, in long-term care settings.
Time dependency of molecular rate estimates and systematic overestimation of recent divergence times
Abstract:
Studies of molecular evolutionary rates have yielded a wide range of rate estimates for various genes and taxa. Recent studies based on population-level and pedigree data have produced remarkably high estimates of mutation rate, which strongly contrast with substitution rates inferred in phylogenetic (species-level) studies. Using Bayesian analysis with a relaxed-clock model, we estimated rates for three groups of mitochondrial data: avian protein-coding genes, primate protein-coding genes, and primate d-loop sequences. In all three cases, we found a measurable transition between the high, short-term (<1–2 Myr) mutation rate and the low, long-term substitution rate. The relationship between the age of the calibration and the rate of change can be described by a vertically translated exponential decay curve, which may be used for correcting molecular date estimates. The phylogenetic substitution rates in mitochondria are approximately 0.5% per million years for avian protein-coding sequences and 1.5% per million years for primate protein-coding and d-loop sequences. Further analyses showed that purifying selection offers the most convincing explanation for the observed relationship between the estimated rate and the depth of the calibration. We rule out the possibility that it is a spurious result arising from sequence errors, and find it unlikely that the apparent decline in rates over time is caused by mutational saturation. Using a rate curve estimated from the d-loop data, several dates for last common ancestors were calculated: modern humans and Neandertals (354 ka; 222–705 ka), Neandertals (108 ka; 70–156 ka), and modern humans (76 ka; 47–110 ka). If the rate curve for a particular taxonomic group can be accurately estimated, it can be a useful tool for correcting divergence date estimates by taking the rate decay into account. Our results show that it is invalid to extrapolate molecular rates of change across different evolutionary timescales, which has important consequences for studies of populations, domestication, conservation genetics, and human evolution.
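The "vertically translated exponential decay curve" can be written down directly: the apparent rate approaches the long-term substitution rate as the calibration age grows, starting from the higher short-term mutation rate. The sketch below uses invented parameter values, not the paper's fitted curve.

```python
# Hedged sketch of a vertically translated exponential decay rate curve:
#   rate(t) = long_term + (short_term - long_term) * exp(-k * t)
# Parameter values are invented for illustration only.
import numpy as np

def apparent_rate(t_myr, short_term=0.10, long_term=0.015, k=2.0):
    return long_term + (short_term - long_term) * np.exp(-k * t_myr)

for t in (0.05, 0.5, 2.0, 10.0):
    print(f"calibration age {t:5.2f} Myr -> rate {apparent_rate(t):.4f} subs/site/Myr")
```

Dividing an uncorrected molecular distance by `apparent_rate` at the relevant timescale, rather than by a single phylogenetic rate, is the sense in which such a curve corrects recent divergence date estimates.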
Abstract:
X-ray microtomography (micro-CT) with micron resolution enables new ways of characterizing microstructures and opens pathways for forward calculations of multiscale rock properties. A quantitative characterization of the microstructure is the first step in this challenge. We developed a new approach to extract scale-dependent characteristics of porosity, percolation, and anisotropic permeability from 3-D microstructural models of rocks. The Hoshen-Kopelman algorithm of percolation theory is employed for a standard percolation analysis. The anisotropy of permeability is calculated by means of the star volume distribution approach. The local porosity distribution and local percolation probability are obtained using local porosity theory. Additionally, the local anisotropy distribution is defined and analyzed through two empirical probability density functions: the isotropy index and the elongation index. For such a high-resolution data set, the typical CT image sizes are on the order of gigabytes to tens of gigabytes, so an extremely large number of calculations is required. To address this memory demand, the code was parallelized with OpenMP to optimally harness the shared-memory infrastructure of cache-coherent Non-Uniform Memory Access (ccNUMA) machines such as the iVEC SGI Altix 3700Bx2 supercomputer. We regard adequate visualization of the results as an important element of this first, pioneering study.
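A minimal 2-D sketch of Hoshen-Kopelman style cluster labelling, using union-find with path compression on a binary pore map; a 3-D version would simply scan one more neighbour. It is written in Python for illustration (the study itself used OpenMP-parallelised code), with invented grid data.

```python
# Hoshen-Kopelman style cluster labelling on a 2-D binary pore map.
import numpy as np

def hoshen_kopelman(pore):
    labels = np.zeros(pore.shape, dtype=int)
    parent = [0]                               # index 0 = background sentinel

    def find(i):                               # union-find root with compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    next_label = 1
    ny, nx = pore.shape
    for y in range(ny):
        for x in range(nx):
            if not pore[y, x]:
                continue
            left = labels[y, x - 1] if x > 0 else 0
            up = labels[y - 1, x] if y > 0 else 0
            if not left and not up:            # start a new cluster
                parent.append(next_label)
                labels[y, x] = next_label
                next_label += 1
            elif left and up:                  # join the two neighbouring clusters
                a, b = find(left), find(up)
                parent[max(a, b)] = min(a, b)
                labels[y, x] = min(a, b)
            else:                              # extend the single neighbour
                labels[y, x] = find(left or up)
    flat = np.array([find(l) for l in range(next_label)])
    return flat[labels]                        # collapse labels to their roots

rng = np.random.default_rng(2)
grid = rng.random((50, 50)) < 0.55             # True = pore voxel
print("pore clusters:", len(np.unique(hoshen_kopelman(grid))) - 1)
```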
Abstract:
Cells respond to various biochemical and physical cues during wound healing and tumour progression. In vitro assays used to study these processes are typically conducted in one particular geometry, and it is unclear how the assay geometry affects the capacity of cell populations to spread, or whether the relevant mechanisms, such as cell motility and cell proliferation, are somehow sensitive to the geometry of the assay. In this work we use a circular barrier assay to characterise the spreading of cell populations in two different geometries. Assay 1 describes a tumour-like geometry where a cell population spreads outwards into an open space. Assay 2 describes a wound-like geometry where a cell population spreads inwards to close a void. We use a combination of discrete and continuum mathematical models and automated image processing methods to obtain independent estimates of the effective cell diffusivity, D, and the effective cell proliferation rate, λ. Using our parameterised mathematical model, we confirm that our estimates of D and λ accurately predict the time evolution of the leading-edge location and the cell density profiles for both assay 1 and assay 2. Our work suggests that the effective cell diffusivity is up to 50% lower for assay 2 than for assay 1, whereas the effective cell proliferation rate is up to 30% lower for assay 2 than for assay 1.
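One concrete ingredient of such an analysis is locating the leading edge from a density profile. The sketch below uses an assumed threshold convention (a small fraction of confluence), which may differ from the image-processing definition used in the paper.

```python
# Illustrative helper (assumed threshold convention): the leading edge is
# the outermost position where the density still exceeds a small threshold.
import numpy as np

def leading_edge(x, density, threshold=0.05):
    above = density >= threshold
    return x[above].max() if above.any() else np.nan

x = np.linspace(0.0, 2.0, 401)
density = 1.0 / (1.0 + np.exp(20.0 * (x - 0.8)))   # travelling-wave-like profile
print(f"edge at x ~ {leading_edge(x, density):.3f}")
```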
Abstract:
Quantifying the impact of biochemical compounds on collective cell spreading is an essential element of drug design, with applications including the development of treatments for chronic wounds and cancer. Scratch assays are a technically simple and inexpensive method used to study collective cell spreading; however, most previous interpretations of scratch assays are qualitative and do not provide estimates of the cell diffusivity, D, or the cell proliferation rate, λ. Estimating D and λ is important for investigating the efficacy of a potential treatment and provides insight into the mechanism through which the potential treatment acts. While a few methods for estimating D and λ have been proposed, these previous methods lead to point estimates of D and λ and provide no insight into the uncertainty in these estimates. Here, we compare various types of information that can be extracted from images of a scratch assay, and quantify D and λ using discrete computational simulations and approximate Bayesian computation. We show that it is possible to robustly recover estimates of D and λ from synthetic data, as well as from a new set of experimental data. For the first time, our approach also provides a method to estimate the uncertainty in our estimates of D and λ. We anticipate that our approach can be generalised to deal with more realistic experimental scenarios in which we are interested in estimating D and λ, as well as additional relevant parameters such as the strength of cell-to-cell adhesion or the strength of cell-to-substrate adhesion.
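A heavily simplified sketch of the approximate Bayesian computation (rejection) scheme: draw (D, λ) from priors, simulate summary statistics, and accept draws whose distance to the observed summaries is below a tolerance. The toy simulator, priors, tolerance and "observed" values below are all invented stand-ins for the paper's discrete simulations and data.

```python
# ABC rejection sampling for (D, lam) against a toy simulator.
import numpy as np

rng = np.random.default_rng(3)

def toy_summary(D, lam):
    # Stand-in for a discrete scratch-assay simulation's summary statistics
    return np.array([D * 100.0, lam * 10.0]) + rng.normal(0.0, 0.05, 2)

observed = np.array([2.0, 8.0])            # pretend observed summaries
accepted = []
for _ in range(20000):
    D = rng.uniform(0.0, 0.1)              # prior on cell diffusivity
    lam = rng.uniform(0.0, 2.0)            # prior on proliferation rate
    if np.linalg.norm(toy_summary(D, lam) - observed) < 0.5:
        accepted.append((D, lam))

post = np.array(accepted)
if post.size:                              # posterior mean and spread
    print(f"accepted {len(post)}:", post.mean(axis=0).round(4),
          "+/-", post.std(axis=0).round(4))
```

The spread of the accepted draws is what supplies the uncertainty estimate that point-estimation methods lack.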
Abstract:
Objective. To assess the cost-effectiveness of bone density screening programmes for osteoporosis. Study design. Using published and locally available data on fracture rates and treatment costs, the overall cost per fracture prevented, cost per quality-adjusted life year (QALY) saved and cost per year of life gained were estimated for different bone density screening and osteoporosis treatment programmes. Main outcome measures. Cost per fracture prevented, cost per QALY saved, and cost per year of life gained. Results. In women over the age of 50 years, the cost per fracture prevented of treating all women with hormone replacement therapy, or of treating only if osteoporosis is demonstrated on bone density screening, was £32,594 or £23,867 respectively. For alendronate therapy in the same groups, the costs were £171,067 and £14,067 respectively. Once the background rate of treatment with alendronate reaches 18%, bone density screening becomes cost-saving. Cost estimates per QALY saved ranged from £1,514 to £39,076 for osteoporosis treatment with alendronate following bone density screening. Conclusions. For relatively expensive medications such as alendronate, treatment programmes with prior bone density screening are far more cost-effective than those without, and in some circumstances become cost-saving. Costs per QALY saved and per year of life gained for osteoporosis treatment with prior bone density screening compare favourably with the treatment of hypertension and hypercholesterolemia.
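To unpack the headline metric, a toy version of the cost-per-fracture-prevented arithmetic is sketched below; every number (screening cost, drug cost, fracture risk, risk reduction) is invented and not taken from the study.

```python
# Toy illustration of cost per fracture prevented:
# programme cost divided by fractures averted relative to no programme.
def cost_per_fracture_prevented(n_women, screen_cost, treat_cost,
                                treated_fraction, baseline_risk, rel_risk_reduction):
    cost = n_women * screen_cost + n_women * treated_fraction * treat_cost
    fractures_averted = (n_women * treated_fraction
                         * baseline_risk * rel_risk_reduction)
    return cost / fractures_averted

# Screen 1,000 women, treat the ~30% with osteoporosis, halve their fracture risk
print(f"£{cost_per_fracture_prevented(1000, 40.0, 1500.0, 0.30, 0.15, 0.5):,.0f}")
```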
Rainfall, Mosquito Density and the Transmission of Ross River Virus: A Time-Series Forecasting Model