984 results for PRIMATE DENSITY ESTIMATES


Relevance: 30.00%

Abstract:

Objective: Because studies of crowding in long-term care settings are lacking, the authors sought to: (1) generate initial estimates of crowding in nursing homes and assisted living facilities; and (2) evaluate two operational approaches to its measurement.

Background: Reactions to density and proximity are complex. Greater density intensifies people's reaction to a situation in the direction (positive or negative) that they would react if the situation were to occur under less dense conditions. People with dementia are especially reactive to the environment.

Methods: Using a cross-sectional correlational design in nursing homes and assisted living facilities involving 185 participants, multiple observations (N = 6,455) of crowding and other environmental variables were made. Crowding, location, and sound were measured three times per observation; ambiance was measured once. Data analyses consisted of descriptive statistics, t-tests, and one-way analysis of variance.

Results: Crowding estimates were higher for nursing homes and in dining and activity rooms. Crowding also varied across settings and locations by time of day. Overall, the interaction of location and time affected crowding significantly (N = 5,559, df [47, 511], F = 105.69, p < .0001); effects were greater within location-by-hour than between location-by-hour, but the effect explained slightly less variance in Long-Term Care Crowding Index (LTC-CI) estimates (47.41%) than location alone. Crowding had small, direct, and highly significant correlations with sound and with the engaging subscale for ambiance; a similar, though inverse, correlation was seen with the soothing subscale for ambiance.

Conclusions: Crowding fluctuates consistent with routine activities such as meals in long-term care settings. Furthermore, a relationship between crowding and other physical characteristics of the environment was found. The LTC-CI is likely to be more sensitive than simple people counts when seeking to evaluate the effects of crowding on the behavior of elders, particularly those with dementia, in long-term care settings.

Keywords: aging in place.

Relevance: 30.00%

Abstract:

Studies of molecular evolutionary rates have yielded a wide range of rate estimates for various genes and taxa. Recent studies based on population-level and pedigree data have produced remarkably high estimates of mutation rate, which strongly contrast with substitution rates inferred in phylogenetic (species-level) studies. Using Bayesian analysis with a relaxed-clock model, we estimated rates for three groups of mitochondrial data: avian protein-coding genes, primate protein-coding genes, and primate d-loop sequences. In all three cases, we found a measurable transition between the high, short-term (<1–2 Myr) mutation rate and the low, long-term substitution rate. The relationship between the age of the calibration and the rate of change can be described by a vertically translated exponential decay curve, which may be used for correcting molecular date estimates. The phylogenetic substitution rates in mitochondria are approximately 0.5% per million years for avian protein-coding sequences and 1.5% per million years for primate protein-coding and d-loop sequences. Further analyses showed that purifying selection offers the most convincing explanation for the observed relationship between the estimated rate and the depth of the calibration. We rule out the possibility that it is a spurious result arising from sequence errors, and find it unlikely that the apparent decline in rates over time is caused by mutational saturation. Using a rate curve estimated from the d-loop data, several dates for last common ancestors were calculated: modern humans and Neandertals (354 ka; 222–705 ka), Neandertals (108 ka; 70–156 ka), and modern humans (76 ka; 47–110 ka). If the rate curve for a particular taxonomic group can be accurately estimated, it can be a useful tool for correcting divergence date estimates by taking the rate decay into account. 
Our results show that it is invalid to extrapolate molecular rates of change across different evolutionary timescales, which has important consequences for studies of populations, domestication, conservation genetics, and human evolution.
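The "vertically translated exponential decay curve" used here for rate correction can be sketched numerically. The functional form follows the abstract, but the parameter values below are purely hypothetical, not fitted values from the study:

```python
import math

def rate_curve(t, long_term_rate, amplitude, decay):
    """Vertically translated exponential decay: apparent rate as a
    function of calibration age t (Myr). The curve decays from a high
    short-term mutation rate toward the long-term substitution rate."""
    return long_term_rate + amplitude * math.exp(-decay * t)

# Hypothetical parameters: asymptotic (phylogenetic) rate 1.5%/Myr,
# with an 8%/Myr short-term excess decaying on a ~0.5 Myr timescale.
short_term = rate_curve(0.1, long_term_rate=1.5, amplitude=8.0, decay=2.0)
long_term = rate_curve(10.0, long_term_rate=1.5, amplitude=8.0, decay=2.0)
```

A calibration point of known age can then be corrected by reading the curve at that age rather than extrapolating a single rate across timescales.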

Relevance: 30.00%

Abstract:

X-ray microtomography (micro-CT) with micron resolution enables new ways of characterizing microstructures and opens pathways for forward calculations of multiscale rock properties. A quantitative characterization of the microstructure is the first step in this challenge. We developed a new approach to extract scale-dependent characteristics of porosity, percolation, and anisotropic permeability from 3-D microstructural models of rocks. The Hoshen-Kopelman algorithm of percolation theory is employed for a standard percolation analysis. The anisotropy of permeability is calculated by means of the star volume distribution approach. The local porosity distribution and local percolation probability are obtained by using the local porosity theory. Additionally, the local anisotropy distribution is defined and analyzed through two empirical probability density functions, the isotropy index and the elongation index. For such a high-resolution data set, the typical data sizes of the CT images are on the order of gigabytes to tens of gigabytes, so an extremely large number of calculations is required. To resolve this large memory problem, parallelization in OpenMP was used to optimally harness the shared-memory infrastructure of cache-coherent Non-Uniform Memory Access (ccNUMA) machines such as the iVEC SGI Altix 3700Bx2 supercomputer. We see adequate visualization of the results as an important element in this first pioneering study.
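The percolation-analysis step can be illustrated with a minimal cluster-labelling sketch. Hoshen-Kopelman is usually implemented as a single raster scan with label reuse; the version below uses an equivalent, simpler union-find formulation on a toy 2-D grid, not the authors' 3-D production code:

```python
def find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def label_clusters(grid):
    """Label connected clusters of occupied (pore) cells on a 2-D 0/1
    grid, in the spirit of the Hoshen-Kopelman algorithm: scan cells and
    union each occupied cell with its occupied left and upper neighbours."""
    rows, cols = len(grid), len(grid[0])
    parent = {(r, c): (r, c)
              for r in range(rows) for c in range(cols) if grid[r][c]}
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue
            for nr, nc in ((r - 1, c), (r, c - 1)):
                if (nr, nc) in parent:
                    ra, rb = find(parent, (r, c)), find(parent, (nr, nc))
                    if ra != rb:
                        parent[ra] = rb
    return {cell: find(parent, cell) for cell in parent}

def percolates(grid):
    """True if one pore cluster connects the top row to the bottom row."""
    labels = label_clusters(grid)
    top = {lab for (r, c), lab in labels.items() if r == 0}
    bottom = {lab for (r, c), lab in labels.items() if r == len(grid) - 1}
    return bool(top & bottom)

channel = [[1, 0], [1, 0], [1, 1]]  # a pore column spanning top to bottom
spans = percolates(channel)
```

The same labelling, run per subvolume, is what local percolation probability is built from in local porosity theory.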

Relevance: 30.00%

Abstract:

Cells respond to various biochemical and physical cues during wound healing and tumour progression. In vitro assays used to study these processes are typically conducted in one particular geometry and it is unclear how the assay geometry affects the capacity of cell populations to spread, or whether the relevant mechanisms, such as cell motility and cell proliferation, are somehow sensitive to the geometry of the assay. In this work we use a circular barrier assay to characterise the spreading of cell populations in two different geometries. Assay 1 describes a tumour-like geometry where a cell population spreads outwards into an open space. Assay 2 describes a wound-like geometry where a cell population spreads inwards to close a void. We use a combination of discrete and continuum mathematical models and automated image processing methods to obtain independent estimates of the effective cell diffusivity, D, and the effective cell proliferation rate, λ. Using our parameterised mathematical model we confirm that our estimates of D and λ accurately predict the time evolution of the location of the leading edge and the cell density profiles for both assay 1 and assay 2. Our work suggests that the effective cell diffusivity is up to 50% lower for assay 2 compared to assay 1, whereas the effective cell proliferation rate is up to 30% lower for assay 2 compared to assay 1.
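Continuum models of this kind typically couple linear diffusion (D) with logistic proliferation (λ), i.e. the Fisher-KPP equation. The sketch below is a generic 1-D explicit finite-difference solver for that equation with illustrative parameter values; it is not the authors' discrete model or their exact discretisation:

```python
def fisher_kpp_step(u, D, lam, dx, dt):
    """One explicit finite-difference step of du/dt = D d2u/dx2 +
    lam*u*(1-u) with zero-flux (reflecting) boundaries. Stable for
    D*dt/dx**2 <= 1/2."""
    n = len(u)
    new = u[:]
    for i in range(n):
        left = u[i - 1] if i > 0 else u[1]
        right = u[i + 1] if i < n - 1 else u[n - 2]
        lap = (left - 2.0 * u[i] + right) / dx ** 2
        new[i] = u[i] + dt * (D * lap + lam * u[i] * (1.0 - u[i]))
    return new

# Hypothetical assay-1-like initial condition in one dimension: a block
# of cells at carrying capacity spreading outwards into open space.
u = [1.0 if 40 <= i < 60 else 0.0 for i in range(100)]
for _ in range(200):
    u = fisher_kpp_step(u, D=1.0, lam=0.1, dx=1.0, dt=0.2)
```

Fitting D and λ then amounts to matching the simulated leading-edge position and density profile to the imaged ones.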

Relevance: 30.00%

Abstract:

Quantifying the impact of biochemical compounds on collective cell spreading is an essential element of drug design, with various applications including developing treatments for chronic wounds and cancer. Scratch assays are a technically simple and inexpensive method used to study collective cell spreading; however, most previous interpretations of scratch assays are qualitative and do not provide estimates of the cell diffusivity, D, or the cell proliferation rate, λ. Estimating D and λ is important for investigating the efficacy of a potential treatment and provides insight into the mechanism through which the potential treatment acts. While a few methods for estimating D and λ have been proposed, these previous methods lead to point estimates of D and λ, and provide no insight into the uncertainty in these estimates. Here, we compare various types of information that can be extracted from images of a scratch assay, and quantify D and λ using discrete computational simulations and approximate Bayesian computation. We show that it is possible to robustly recover estimates of D and λ from synthetic data, as well as a new set of experimental data. For the first time, our approach also provides a method to estimate the uncertainty in our estimates of D and λ. We anticipate that our approach can be generalized to deal with more realistic experimental scenarios in which we are interested in estimating D and λ, as well as additional relevant parameters such as the strength of cell-to-cell adhesion or the strength of cell-to-substrate adhesion.
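The rejection flavour of approximate Bayesian computation can be sketched as follows. The simulator below is a deliberately crude stand-in (a birth process whose summary statistic depends only on the proliferation rate); in the real method the discrete simulation would also let the diffusivity influence spatial summaries such as the leading-edge position:

```python
import random

def simulate_scratch(D, lam, steps=50, n0=20, seed=None):
    """Toy stochastic stand-in for the discrete scratch-assay model:
    each of n cells divides with probability lam per step. Returns the
    final cell count as the summary statistic. (A realistic simulator
    would also use the motility parameter D; it is unused here.)"""
    rng = random.Random(seed)
    n = n0
    for _ in range(steps):
        births = sum(1 for _ in range(n) if rng.random() < lam)
        n += births
    return n

def abc_rejection(observed, prior_draws, tol):
    """Rejection ABC: keep (D, lam) draws whose simulated summary lies
    within `tol` of the observed summary; the kept draws approximate
    the posterior and so quantify parameter uncertainty."""
    accepted = []
    for D, lam in prior_draws:
        if abs(simulate_scratch(D, lam) - observed) <= tol:
            accepted.append((D, lam))
    return accepted
```

The spread of the accepted draws, rather than a single point estimate, is what conveys the uncertainty in D and λ.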

Relevance: 30.00%

Abstract:

Objective. To assess the cost-effectiveness of bone density screening programmes for osteoporosis. Study design. Using published and locally available data regarding fracture rates and treatment costs, the overall cost per fracture prevented, cost per quality-adjusted life year (QALY) saved and cost per year of life gained were estimated for different bone density screening and osteoporosis treatment programmes. Main outcome measures. Cost per fracture prevented, cost per QALY saved, and cost per year of life gained. Results. In women over the age of 50 years, the costs per fracture prevented of treating all women with hormone replacement therapy, or treating only if osteoporosis is demonstrated on bone density screening, were £32,594 or £23,867 respectively. For alendronate therapy for the same groups, the costs were £171,067 and £14,067 respectively. Once the background rate of treatment with alendronate reaches 18%, bone density screening becomes cost-saving. Cost estimates per QALY saved ranged from £1,514 to £39,076 for osteoporosis treatment with alendronate following bone density screening. Conclusions. For relatively expensive medications such as alendronate, treatment programmes with prior bone density screening are far more cost-effective than those without, and in some circumstances become cost-saving. Costs per QALY saved and per year of life gained for osteoporosis treatment with prior bone density screening compare favourably with treatment of hypertension and hypercholesterolemia.
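The "cost per fracture prevented" outcome measure is simply total programme cost divided by fractures prevented. A sketch with entirely hypothetical numbers (not the study's data):

```python
def cost_per_fracture_prevented(treatment_cost, screening_cost,
                                n_screened, n_treated,
                                fractures_prevented):
    """Total programme cost (screening everyone plus treating those who
    screen positive) divided by fractures prevented. All inputs here are
    illustrative, not figures from the study."""
    total = screening_cost * n_screened + treatment_cost * n_treated
    return total / fractures_prevented

# Hypothetical programme: screen 1,000 women at £40 each, treat the 250
# with demonstrated osteoporosis at £1,500 each, preventing 10 fractures.
cpp = cost_per_fracture_prevented(treatment_cost=1500.0,
                                  screening_cost=40.0,
                                  n_screened=1000, n_treated=250,
                                  fractures_prevented=10)
```

Screening pays for itself when the cost of screening plus targeted treatment falls below the cost of treating everyone, which is why the result depends on the background treatment rate.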

Relevance: 30.00%

Abstract:

Urban encroachment on dense, coastal koala populations has ensured that their management has received increasing government and public attention. The recently developed National Koala Conservation Strategy calls for maintenance of viable populations in the wild. Yet the success of this, and other, conservation initiatives is hampered by a lack of reliable and generally accepted national and regional population estimates. In this paper we address this problem in a potentially large, but poorly studied, regional population in the State that is likely to have the largest wild populations. We draw on findings from previous reports in this series and apply the faecal standing-crop method (FSCM) to derive a regional estimate of more than 59 000 individuals. Validation trials in riverine communities showed that estimates of animal density obtained from the FSCM and direct observation were in close agreement. Bootstrapping and Monte Carlo simulations were used to obtain variance estimates for our population estimates in different vegetation associations across the region. The most favoured habitat was riverine vegetation, which covered only 0.9% of the region but supported 45% of the koalas. We also estimated that between 1969 and 1995 ~30% of the native vegetation associations that are considered as potential koala habitat were cleared, leading to a decline of perhaps 10% in koala numbers. Management of this large regional population has significant implications for the national conservation of the species: the continued viability of this population is critically dependent on the retention and management of riverine and residual vegetation communities, and future vegetation-management guidelines should be cognisant of the potential impacts of clearing even small areas of critical habitat. We also highlight eight management implications.
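The bootstrap step used to obtain variance estimates can be sketched as follows; the site densities below are hypothetical, and the resampling scheme is a generic nonparametric bootstrap rather than the authors' exact procedure:

```python
import random

def bootstrap_se(values, n_boot=2000, seed=0):
    """Bootstrap standard error of the mean density: resample the
    site-level density estimates with replacement many times and take
    the spread (sample SD) of the resampled means."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        sample = [rng.choice(values) for _ in values]
        means.append(sum(sample) / len(sample))
    grand = sum(means) / len(means)
    var = sum((m - grand) ** 2 for m in means) / (len(means) - 1)
    return var ** 0.5

# Hypothetical koala densities (animals/ha) at surveyed riverine sites.
densities = [1.2, 0.8, 2.1, 0.4, 1.7, 0.9, 1.3]
se = bootstrap_se(densities)
```

Scaling the mean density and its standard error by the mapped area of each vegetation association gives a population estimate with a variance attached.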

Relevance: 30.00%

Abstract:

The European wild rabbit has been considered Australia’s worst vertebrate pest and yet little effort appears to have gone into producing maps of rabbit distribution and density. Mapping the distribution and density of pests is an important step in effective management. A map is essential for estimating the extent of damage caused and for efficiently planning and monitoring the success of pest control operations. This paper describes the use of soil type and point data to prepare a map showing the distribution and density of rabbits in Australia. The potential for the method to be used for mapping other vertebrate pests is explored. The approach used to prepare the map is based on that used for rabbits in Queensland (Berman et al. 1998). An index of rabbit density was determined using the number of Spanish rabbit fleas released per square kilometre for each Soil Map Unit (Atlas of Australian Soils). Spanish rabbit fleas were released into active rabbit warrens at 1606 sites in the early 1990s as an additional vector for myxoma virus and the locations of the releases were recorded using a Global Positioning System (GPS). Releases were predominantly in arid areas but some fleas were released in south east Queensland and the New England Tablelands of New South Wales. The map produced appears to reflect well the distribution and density of rabbits, at least in the areas where Spanish fleas were released. Rabbit pellet counts conducted in 2007 at 54 sites across an area of south east South Australia, south eastern Queensland, and parts of New South Wales (New England Tablelands and south west) in soil Map Units where Spanish fleas were released, provided a preliminary means to ground truth the map. There was a good relationship between mean pellet count score and the index of abundance for soil Map Units. 
Rabbit pellet counts may allow extension of the map into other parts of Australia where no Spanish rabbit fleas were released and where there may be no other consistent information on rabbit location and density. The recent Equine Influenza outbreak provided a further test of the value of this mapping method. The distribution and density of domestic horses were mapped to provide estimates of the number of horses in various regions. These estimates were close to the actual numbers of horses subsequently determined from vaccination records and registrations. The soil Map Units are not simply soil types; they contain information on land use and vegetation, and the soil classification is relatively localised. These properties make this mapping method useful, not only for rabbits, but also for other species that are not so dependent on soil type for survival.

Relevance: 30.00%

Abstract:

Management of the commercial harvest of kangaroos relies on quotas set annually as a proportion of regular estimates of population size. Surveys to generate these estimates are expensive and, in the larger states, logistically difficult; a cheaper alternative is desirable. Rainfall is a disappointingly poor predictor of kangaroo rate of increase in many areas, but harvest statistics (sex ratio, carcass weight, skin size and animals shot per unit time) potentially offer cost-effective indirect monitoring of population abundance (and therefore trend) and status (i.e. under- or over-harvest). Furthermore, because harvest data are collected continuously and throughout the harvested areas, they offer the promise of more intensive and more representative coverage of harvest areas than aerial surveys do. To be useful, harvest statistics would need to have a close and known relationship with either population size or harvest rate. We assessed this using long-term (11–22 years) data for three kangaroo species (Macropus rufus, M. giganteus and M. fuliginosus) and common wallaroos (M. robustus) across South Australia, New South Wales and Queensland. Regional variation in kangaroo body size, population composition, shooter efficiency and selectivity required separate analyses in different regions. Two approaches were taken. First, monthly harvest statistics were modelled as a function of a number of explanatory variables, including kangaroo density, harvest rate and rainfall. Second, density and harvest rate were modelled as a function of harvest statistics. Both approaches incorporated a correlated error structure. Many but not all regions had relationships with sufficient precision to be useful for indirect monitoring. However, there was no single relationship that could be applied across an entire state or across species.
Combined with rainfall-driven population models and applied at a regional level, these relationships could be used to reduce the frequency of aerial surveys without compromising decisions about harvest management.

Relevance: 30.00%

Abstract:

Key message Eucalyptus pellita demonstrated good growth and wood quality traits in this study, with young plantation-grown timber being suitable for both solid and pulp wood products. All traits examined were under moderate levels of genetic control with little genotype-by-environment interaction when grown on two contrasting sites in Vietnam. Context Eucalyptus pellita currently has a significant role in reforestation in the tropics. Research to support expanded use of this species is needed: in particular, research to better understand the genetic control of key traits will facilitate the development of genetically improved planting stock. Aims This study aimed to provide estimates of the heritability of diameter at breast height over bark, wood basic density, Kraft pulp yield, modulus of elasticity and microfibril angle, and the genetic correlations among these traits, and to understand the importance of genotype-by-environment interactions in Vietnam. Methods Data for diameter and wood properties were collected from two 10-year-old, open-pollinated progeny trials of E. pellita in Vietnam that evaluated 104 families from six native-range and three orchard sources. Wood properties were estimated from wood samples using near-infrared (NIR) spectroscopy. Data were analysed using mixed linear models to estimate genetic parameters (heritability, proportion of variance between seed sources and genetic correlations). Results Variation among the nine sources was small compared to additive variance. Narrow-sense heritability and genetic correlation estimates indicated that simultaneous improvements in most traits could be achieved from selection among and within families, as the genetic correlations among traits were either favourable or close to zero. Type B genetic correlations approached one for all traits, suggesting that genotype-by-environment interactions were of little importance. These results support a breeding strategy utilizing a single breeding population advanced by selecting the best individuals across all seed sources. Conclusion Both growth and wood properties have been evaluated. Multi-trait selection for growth and wood property traits will lead to populations of E. pellita with both improved productivity and improved timber and pulp properties.

Relevance: 30.00%

Abstract:

The striking lack of observable variation of the volume fraction with height in the center of a granular flow down an inclined plane is analysed using constitutive relations obtained from kinetic theory. It is shown that the rate of conduction in the granular energy balance equation is O(δ²) smaller than the rate of production of energy due to mean shear and the rate of dissipation due to inelastic collisions, where the small parameter δ = [d/((1 − e_n)H)]^(1/2), d is the particle diameter, e_n is the normal coefficient of restitution and H is the thickness of the flowing layer. This implies that the volume fraction is a constant in the leading approximation in an asymptotic analysis in small δ. Numerical estimates of both the parameter δ and its pre-factor are obtained to show that the lack of observable variation of the volume fraction with height can be explained by constitutive relations obtained from kinetic theory.
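Reading the small parameter as δ = [d/((1 − e_n)H)]^(1/2) (a reconstruction of the garbled expression in the abstract), a quick numerical estimate with hypothetical grain-flow values:

```python
def small_parameter(d, e_n, H):
    """delta = (d / ((1 - e_n) * H)) ** 0.5; conduction in the granular
    energy balance is O(delta**2) smaller than shear production and
    inelastic dissipation. The formula is a reconstruction, and the
    values below are illustrative only."""
    return (d / ((1.0 - e_n) * H)) ** 0.5

# Hypothetical flow: 1 mm grains, e_n = 0.9, a 10 cm deep flowing layer.
delta = small_parameter(d=1e-3, e_n=0.9, H=0.1)
```

Here δ ≈ 0.32, so the conduction term is suppressed by δ² ≈ 0.1 relative to production and dissipation, consistent with treating the volume fraction as constant at leading order.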

Relevance: 30.00%

Abstract:

In terabit-density magnetic recording, several bits of data can be replaced by the values of their neighbors in the storage medium. As a result, errors in the medium are dependent on each other and also on the data written. We consider a simple 1-D combinatorial model of this medium. In our model, we assume a setting where binary data is sequentially written on the medium and a bit can erroneously change to the immediately preceding value. We derive several properties of codes that correct errors of this type, focusing on bounds on their cardinality. We also define a probabilistic finite-state channel model of the storage medium, and derive lower and upper estimates of its capacity. A lower bound is derived by evaluating the symmetric capacity of the channel, i.e., the maximum transmission rate under the assumption of a uniform input distribution of the channel. An upper bound is found by showing that the original channel is a stochastic degradation of another, related channel model whose capacity we can compute explicitly.
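The symmetric capacity used for the lower bound is the mutual information of the channel under a uniform input distribution. The sketch below computes it for a generic discrete memoryless channel and checks it on a binary symmetric channel; it is not the paper's finite-state storage-channel model:

```python
from math import log2

def symmetric_capacity(P):
    """Mutual information I(X;Y) in bits under a uniform input, for a
    memoryless channel given as rows P[x][y] = P(y | x). This equals the
    capacity only for symmetric channels; in general it is a lower bound."""
    nx, ny = len(P), len(P[0])
    py = [sum(P[x][y] for x in range(nx)) / nx for y in range(ny)]
    mi = 0.0
    for x in range(nx):
        for y in range(ny):
            if P[x][y] > 0:
                mi += (P[x][y] / nx) * log2(P[x][y] / py[y])
    return mi

# Binary symmetric channel with crossover 0.1: symmetric capacity equals
# the true capacity, 1 - H2(0.1) ≈ 0.531 bits per use.
bsc = [[0.9, 0.1], [0.1, 0.9]]
cap = symmetric_capacity(bsc)
```

For an asymmetric channel the same quantity still lower-bounds capacity, which is how it serves as the paper's lower estimate.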

Relevance: 30.00%

Abstract:

The solution of the forward equation that models the transport of light through a highly scattering tissue material in diffuse optical tomography (DOT) using the finite element method gives the flux density (Φ) at the nodal points of the mesh. The experimentally measured flux (U_measured) on the boundary over a finite surface area in a DOT system has to be corrected to account for the system transfer functions (R) of various building blocks of the measurement system. We present two methods to compensate for the perturbations caused by R and estimate the true flux density (Φ) from U_measured(cal). In the first approach, the measurement data with a homogeneous phantom (U_measured(homo)) is used to calibrate the measurement system. The second scheme estimates the homogeneous phantom measurement using only the measurement from a heterogeneous phantom, thereby eliminating the necessity of a homogeneous phantom. This is done by statistically averaging the data (U_measured(hetero)) and redistributing it to the corresponding detector positions. The experiments carried out on tissue-mimicking phantoms with single and multiple inhomogeneities, a human hand, and a pork tissue phantom demonstrate the robustness of the approach. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.JBO.18.2.026023]

Relevance: 30.00%

Abstract:

I. The binding of the intercalating dye ethidium bromide to closed circular SV 40 DNA causes an unwinding of the duplex structure and a simultaneous and quantitatively equivalent unwinding of the superhelices. The buoyant densities and sedimentation velocities of both intact (I) and singly nicked (II) SV 40 DNAs were measured as a function of free dye concentration. The buoyant density data were used to determine the binding isotherms over a dye concentration range extending from 0 to 600 µg/ml in 5.8 M CsCl. At high dye concentrations all of the binding sites in II, but not in I, are saturated. At free dye concentrations less than 5.4 µg/ml, I has a greater affinity for dye than II. At a critical amount of dye bound I and II have equal affinities, and at higher dye concentration I has a lower affinity than II. The number of superhelical turns, τ, present in I is calculated at each dye concentration using Fuller and Waring's (1964) estimate of the angle of duplex unwinding per intercalation. The results reveal that SV 40 DNA I contains about −13 superhelical turns in concentrated salt solutions.

The free energy of superhelix formation is calculated as a function of τ from a consideration of the effect of the superhelical turns upon the binding isotherm of ethidium bromide to SV 40 DNA I. The value of the free energy is about 100 kcal/mole DNA in the native molecule. The free energy estimates are used to calculate the pitch and radius of the superhelix as a function of the number of superhelical turns. The pitch and radius of the native I superhelix are 430 Å and 135 Å, respectively.

A buoyant density method for the isolation and detection of closed circular DNA is described. The method is based upon the reduced binding of the intercalating dye, ethidium bromide, by closed circular DNA. In an application of this method it is found that HeLa cells contain in addition to closed circular mitochondrial DNA of mean length 4.81 microns, a heterogeneous group of smaller DNA molecules which vary in size from 0.2 to 3.5 microns and a paucidisperse group of multiples of the mitochondrial length.

II. The general theory is presented for the sedimentation equilibrium of a macromolecule in a concentrated binary solvent in the presence of an additional reacting small molecule. Equations are derived for the calculation of the buoyant density of the complex and for the determination of the binding isotherm of the reagent to the macrospecies. The standard buoyant density, a thermodynamic function, is defined and the density gradients which characterize the four component system are derived. The theory is applied to the specific cases of the binding of ethidium bromide to SV 40 DNA and of the binding of mercury and silver to DNA.

Relevance: 30.00%

Abstract:

The experimental portion of this thesis tries to estimate the density of the power spectrum of very low frequency semiconductor noise, from 10^−6.3 cps to 1 cps, with a greater accuracy than that achieved in previous similar attempts: it is concluded that the spectrum is 1/f^α with α approximately 1.3 over most of the frequency range, but appearing to have a value of about 1 in the lowest decade. The noise sources are, among others, the first stage circuits of a grounded input silicon epitaxial operational amplifier. This thesis also investigates a peculiar form of stationarity which seems to distinguish flicker noise from other semiconductor noise.

In order to decrease by an order of magnitude the pernicious effects of temperature drifts, semiconductor "aging", and possible mechanical failures associated with prolonged periods of data taking, 10 independent noise sources were time-multiplexed and their spectral estimates were subsequently averaged. If the sources have similar spectra, it is demonstrated that this reduces the necessary data-taking time by a factor of 10 for a given accuracy.

In view of the measured high temperature sensitivity of the noise sources, it was necessary to combine the passive attenuation of a special-material container with active control. The noise sources were placed in a copper-epoxy container of high heat capacity and medium heat conductivity, and that container was immersed in a temperature controlled circulating ethylene-glycol bath.

Other spectra of interest, estimated from data taken concurrently with the semiconductor noise data were the spectra of the bath's controlled temperature, the semiconductor surface temperature, and the power supply voltage amplitude fluctuations. A brief description of the equipment constructed to obtain the aforementioned data is included.

The analytical portion of this work is concerned with the following questions: what is the best final spectral density estimate given 10 statistically independent ones of varying quality and magnitude? How can the Blackman and Tukey algorithm which is used for spectral estimation in this work be improved upon? How can non-equidistant sampling reduce data processing cost? Should one try to remove common trends shared by supposedly statistically independent noise sources and, if so, what are the mathematical difficulties involved? What is a physically plausible mathematical model that can account for flicker noise and what are the mathematical implications on its statistical properties? Finally, the variance of the spectral estimate obtained through the Blackman/Tukey algorithm is analyzed in greater detail; the variance is shown to diverge for α ≥ 1 in an assumed power spectrum of k/|f|^α, unless the assumed spectrum is "truncated".
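The Blackman and Tukey algorithm mentioned above estimates the power spectrum by windowing the sample autocovariance and cosine-transforming it. A minimal sketch (Bartlett window, direct cosine transform), not the thesis's implementation:

```python
import math

def blackman_tukey_psd(x, max_lag):
    """Blackman-Tukey spectral estimate: compute the sample autocovariance
    up to max_lag, taper it with a triangular (Bartlett) window, and take
    the cosine transform of the windowed autocovariance. Returns estimates
    at frequencies f_j = j / (2*(max_lag+1)) cycles/sample."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    acov = [sum(xc[t] * xc[t + k] for t in range(n - k)) / n
            for k in range(max_lag + 1)]
    win = [1.0 - k / (max_lag + 1) for k in range(max_lag + 1)]
    n_freq = max_lag + 1
    psd = []
    for j in range(n_freq):
        f = j / (2.0 * n_freq)  # frequency in cycles per sample
        s = acov[0] * win[0] + 2.0 * sum(
            acov[k] * win[k] * math.cos(2.0 * math.pi * f * k)
            for k in range(1, max_lag + 1))
        psd.append(s)
    return psd

# A pure sinusoid at 0.25 cycles/sample should yield a spectral peak
# near that frequency.
x = [math.cos(2.0 * math.pi * 0.25 * t) for t in range(256)]
psd = blackman_tukey_psd(x, max_lag=32)
```

The window length (max_lag) trades frequency resolution against the variance of the estimate, which is the quantity the thesis shows diverges for α ≥ 1 unless the assumed spectrum is truncated.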