941 results for Extinction Probability
Abstract:
1. We describe patterns of post-fledging care, dispersal and recruitment in four cohorts of brown thornbills Acanthiza pusilla. We examine what factors influence post-fledging survival and determine how post-fledging care and the timing of dispersal influence the probability of recruitment in this small, pair-breeding Australian passerine. 2. Fledgling thornbills were dependent on their parents for approximately 6 weeks. Male fledglings were more likely than female fledglings to survive until independence. For both sexes, the probability of reaching independence increased as nestling weight increased and was higher for nestlings that fledged later in the season. 3. The timing of dispersal by juvenile thornbills was bimodal. Juveniles either dispersed by the end of the breeding season or remained on their natal territory into the autumn and winter. Juveniles that delayed dispersal were four times more likely to recruit into the local breeding population than juveniles that dispersed early. 4. Delayed dispersal was advantageous because individuals that remained on their natal territory suffered little mortality and tended to disperse only when a local vacancy was available. Consequently, the risk of mortality associated with obtaining a breeding vacancy using this dispersal strategy was low. 5. Males, the more philopatric sex, were far more likely than females to delay dispersal. Despite the apparent advantages of prolonged natal philopatry, however, only 54% of pairs that raised male fledglings to independence had sons that postponed dispersal, and most of these philopatric sons gained vacancies before their parents bred again. Consequently, few sons have the opportunity to help their parents. Constraints on delayed dispersal therefore appear to play a major role in the evolution of pair-breeding in the brown thornbill.
Abstract:
Effluent water from shrimp ponds typically contains elevated concentrations of dissolved nutrients and suspended particulates compared to influent water. Attempts to improve effluent water quality using filter-feeding bivalves and macroalgae to reduce nutrients have previously been hampered by the high concentration of clay particles typically found in untreated pond effluent. These particles inhibit feeding in bivalves and reduce photosynthesis in macroalgae by increasing effluent turbidity. In a small-scale laboratory study, the effectiveness of a three-stage effluent treatment system was investigated. In the first stage, particle concentration was reduced through natural sedimentation. In the second stage, filtration by the Sydney rock oyster, Saccostrea commercialis (Iredale and Roughley), further reduced the concentration of suspended particulates, including inorganic particles, phytoplankton, bacteria, and their associated nutrients. In the final stage, the macroalga Gracilaria edulis (Gmelin) Silva absorbed dissolved nutrients. Pond effluent was collected from a commercial shrimp farm, taken to an indoor culture facility and left to settle for 24 h. Subsamples of water were then transferred into laboratory tanks stocked with oysters and maintained for 24 h, and then transferred to tanks containing macroalgae for another 24 h. Total suspended solids (TSS), chlorophyll a, total nitrogen (N), total phosphorus (P), NH4+, NO3-, PO43-, and bacterial numbers were compared before and after each treatment at: 0 h (initial); 24 h (after sedimentation); 48 h (after oyster filtration); and 72 h (after macroalgal absorption). The combined effect of the sequential treatments resulted in significant reductions in the concentrations of all parameters measured. High rates of nutrient regeneration were observed in the control tanks, which did not contain oysters or macroalgae.
In contrast, significant reductions in nutrients and suspended particulates were observed after sedimentation and biological treatment. Overall, improvements in water quality (final percentage of the initial concentration) were as follows: TSS (12%); total N (28%); total P (14%); NH4+ (76%); NO3- (30%); PO43- (35%); bacteria (30%); and chlorophyll a (0.7%). Despite the probability of considerable differences in sedimentation, filtration and nutrient uptake rates when scaled to farm size, these results demonstrate that integrated treatment has the potential to significantly improve the water quality of shrimp farm effluent. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Large (>1600 μm), ingestively masticated particles of bermuda grass (Cynodon dactylon L. Pers.) leaf and stem, labelled with Yb-169 and Ce-144 respectively, were inserted into the rumen digesta raft of heifers grazing bermuda grass. The concentration of markers in digesta sampled from the raft and ventral rumen was monitored at regular intervals over approximately 144 h. The data from the two sampling sites were simultaneously fitted to two-pool (raft and ventral rumen-reticulum) models with either reversible or sequential flow between the two pools. The sequential flow model fitted the data as well as the reversible flow model, but the reversible flow model was used because of its greater applicability. The reversible flow model, hereafter called the raft model, had the following features: a relatively slow age-dependent transfer rate from the raft (means for a gamma-2-distributed rate parameter of 0.0740 h^-1 for leaf v. 0.0478 h^-1 for stem), a very slow first-order reversible flow from the ventral rumen to the raft (mean for leaf and stem 0.010 h^-1), and a very rapid first-order exit from the ventral rumen (mean for leaf and stem 0.44 h^-1). The raft was calculated to occupy approximately 0.82 of the total rumen DM of the raft and ventral rumen pools. Fitting a sequential two-pool model or a single exponential model individually to values from each of the two sampling sites yielded similar parameter values for both sites, and faster rate parameters for leaf than for stem, in agreement with the raft model. These results were interpreted as indicating that the raft forms a large, relatively inert pool within the rumen. Particles generated within the raft have difficulty escaping, but once in the ventral rumen pool they escape quickly, with a low probability of return to the raft.
It was concluded that the raft model gave a good interpretation of the data and emphasized escape from and movement within the raft as important components of the residence time of leaf and stem particles within the rumen digesta of cattle.
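A minimal numerical sketch of the raft model described above, using the reported leaf means and approximating the age-dependent (gamma-2) raft escape by a first-order rate, so this is illustrative rather than a refit of the authors' model:

```python
# Two-pool reversible-flow ("raft") model: marker starts in the raft,
# leaks slowly to the ventral rumen, can return slowly, and exits the
# ventral rumen quickly. Rates are the leaf means from the abstract.

K_RAFT_OUT = 0.0740   # h^-1, raft -> ventral rumen (leaf mean)
K_RETURN   = 0.010    # h^-1, ventral rumen -> raft
K_EXIT     = 0.44     # h^-1, exit from the ventral rumen

def simulate(hours=144.0, dt=0.01):
    raft, ventral = 1.0, 0.0          # all marker initially in the raft
    t, trace = 0.0, []
    while t <= hours:
        trace.append((t, raft, ventral))
        d_raft = -K_RAFT_OUT * raft + K_RETURN * ventral
        d_vent = K_RAFT_OUT * raft - (K_RETURN + K_EXIT) * ventral
        raft += d_raft * dt           # simple Euler step
        ventral += d_vent * dt
        t += dt
    return trace

trace = simulate()
t_end, raft_end, vent_end = trace[-1]
# Slow raft escape is rate-limiting: the ventral pool stays in quasi-steady
# state at a small fraction (~k_out/(k_return+k_exit)) of the raft pool.
print(raft_end, vent_end)
```

The fast exit rate makes the ventral rumen pool nearly empty at all times, consistent with the abstract's conclusion that escape from the raft dominates residence time.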
Abstract:
1 The effect of chronic morphine treatment (CMT) on sympathetic innervation of the mouse vas deferens and on α2-adrenoceptor-mediated autoinhibition has been examined using intracellular recording of excitatory junction potentials (EJPs) and histochemistry. 2 In chronically saline-treated (CST) preparations, morphine (1 μM) and the α2-adrenoceptor agonist clonidine (1 μM) decreased the mean amplitude of EJPs evoked with 0.03 Hz stimulation by 81 ± 8% (n = 16) and 92 ± 6% (n = 7) respectively. In CMT preparations, morphine (1 μM) and clonidine (1 μM) decreased mean EJP amplitude by 68 ± 8% (n = 7) and 79 ± 8% (n = 7) respectively. 3 When stimulating the sympathetic axons at 0.03 Hz, the mean EJP amplitude recorded from smooth muscles acutely withdrawn from CMT was four times greater than for CST smooth muscles (40.7 ± 3.8 mV, n = 7 compared with 9.9 ± 0.3 mV, n = 7). 4 Part of the increase in mean EJP amplitude following CMT was produced by a 31% increase in the density of sympathetic axons and varicosities innervating the smooth muscle. 5 Results from the present study indicate that the effectiveness of α2-adrenoceptor-mediated autoinhibition is only slightly reduced in CMT preparations. Most of the cross-tolerance which develops between morphine, clonidine and α2-adrenoceptor-mediated autoinhibition occurs as a consequence of increased efficacy of neuromuscular transmission, which is produced by an increase in the probability of transmitter release and an increase in the density of sympathetic innervation.
Abstract:
A deterministic mathematical model which predicts the probability of developing a new drug-resistant parasite population within the human host is reported. The model incorporates the host's specific antibody response to PfEMP1, and also investigates the influence of chemotherapy on the probability of developing a viable drug-resistant parasite population within the host. Results indicate that early treatment, and a high antibody threshold coupled with a long lag time between antibody stimulation and activity, are risk factors which increase the likelihood of developing a viable drug-resistant parasite population. High parasite mutation rates and fast PfEMP1 var gene switching are also identified as risk factors. The model output allows the relative importance of the various risk factors, as well as the relationships between them, to be established, thereby increasing the understanding of the conditions which favour the development of a new drug-resistant parasite population.
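The abstract's model is not reproduced here, but the role of the mutation-rate risk factor can be illustrated with a standard back-of-envelope calculation; the rates and population size below are assumptions for illustration only:

```python
# Illustrative calculation (not the authors' model): the chance that at
# least one drug-resistant mutant arises among N parasite replications,
# each with per-replication mutation rate mu, is 1 - (1 - mu)^N. Higher
# mutation rates and larger parasite burdens both raise this risk.

def p_resistant_mutant(mu, n_replications):
    return 1.0 - (1.0 - mu) ** n_replications

low  = p_resistant_mutant(1e-9, 1e8)   # low mutation rate, 10^8 replications
high = p_resistant_mutant(1e-7, 1e8)   # 100-fold higher mutation rate
print(low, high)
```

Even this crude calculation shows the strong sensitivity to mutation rate that the model identifies as a risk factor.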
Abstract:
The purpose of this study was to review the experience with fallopian tube carcinoma in Queensland and to compare it with previously published data. Thirty-six patients with primary fallopian tube carcinoma treated at the Queensland Gynaecological Cancer Center from 1988 to 1999 were reviewed in a retrospective clinicopathologic study. All patients had primary surgery and 31/36 received chemotherapy postoperatively. Abnormal vaginal bleeding (15/36) and abdominal pain (14/36) were the most common presenting symptoms at the time of diagnosis. Median follow-up was 70.3 months and the median overall survival was 68.1 months. Surgical stage I disease (P = 0.02) and the absence of residual tumor after operation (P = 0.03) were the only factors associated with improved survival. Twenty of the 36 patients (55%) presented with stage I disease and survival was 62.7% at 5 years. No patient with postoperative residual tumor survived. The majority of the patients with fallopian tube carcinoma present with stage I disease at diagnosis, but their survival probability is low compared with that of other early stage gynecological malignancies. If primary surgical debulking cannot achieve macroscopic tumor clearance, the chance of survival is extremely low.
Abstract:
Penalizing line management for the occurrence of lost-time injuries has in some cases had unintended negative consequences. These are discussed. An alternative system is suggested that penalizes line management for accidents where the combination of the probability of recurrence and the maximum reasonable consequences of such a recurrence exceeds an agreed limit. A reward is given for prompt, effective control of the risk to below the agreed risk limit. The reward is smaller than the penalty. High-risk accidents require independent investigation by a safety officer using analytical techniques. Two case examples are given to illustrate the system. Continuous safety improvement is driven by a planned reduction in the agreed risk limit over time and by rewards for proactive risk assessment and control.
Abstract:
Two hazard risk assessment matrices for the ranking of occupational health risks are described. The qualitative matrix uses qualitative measures of probability and consequence to determine risk assessment codes for hazard-disease combinations. A walk-through survey of an underground metalliferous mine and concentrator is used to demonstrate how the qualitative matrix can be applied to determine priorities for the control of occupational health hazards. The semi-quantitative matrix uses attributable risk as a quantitative measure of probability and uses qualitative measures of consequence. A practical application of this matrix is the determination of occupational health priorities using existing epidemiological studies. Calculated attributable risks from epidemiological studies of hazard-disease combinations in mining and minerals processing are used as examples. These historic response data do not reflect the risks associated with current exposures. A method using current exposure data, known exposure-response relationships and the semi-quantitative matrix is proposed for more accurate and current risk rankings.
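A qualitative matrix of this kind can be sketched as a simple lookup; the category labels and code assignments below are illustrative assumptions, not the authors' published matrix:

```python
# Minimal qualitative risk-matrix lookup: combine a qualitative
# probability and consequence rating into a risk assessment code.
# Labels and cut-offs are illustrative assumptions only.

PROBABILITY = ["rare", "unlikely", "possible", "likely", "almost certain"]
CONSEQUENCE = ["negligible", "minor", "moderate", "major", "catastrophic"]

def risk_code(prob, cons):
    """Return a risk assessment code from 1 (highest priority) to 4 (lowest)."""
    score = PROBABILITY.index(prob) + CONSEQUENCE.index(cons)  # 0..8
    if score >= 6:
        return 1
    if score >= 4:
        return 2
    if score >= 2:
        return 3
    return 4

print(risk_code("likely", "major"))   # → 1 (highest priority)
print(risk_code("rare", "minor"))     # → 4 (lowest priority)
```

In a walk-through survey, each hazard-disease combination would be rated on both scales and the resulting codes used to order control priorities.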
Abstract:
This note gives a theory of state transition matrices for linear systems of fuzzy differential equations. This is used to give a fuzzy version of the classical variation-of-constants formula. A simple example of a time-independent control system is used to illustrate the methods. While time-dependent systems present problems similar to those of the crisp case, in time-independent cases the calculations reduce to elementary eigenvalue-eigenvector problems. In particular, for nonnegative or nonpositive matrices, the problems at each level set can easily be solved in MATLAB to give the level sets of the fuzzy solution. (C) 2002 Elsevier Science B.V. All rights reserved.
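For the time-independent case, the crisp computation underlying the variation-of-constants formula can be sketched as follows; the system matrix and input below are illustrative assumptions, and in the fuzzy setting the same computation would be repeated at the endpoints of each level-set interval:

```python
import numpy as np

# Variation of constants for x' = Ax + b with constant input b and
# invertible A:  x(t) = e^{At} x0 + A^{-1}(e^{At} - I) b.
# The matrix exponential is computed via the eigenvalue-eigenvector
# decomposition, mirroring the elementary calculations the note describes.

def expm_via_eig(A, t):
    """e^{At} for a diagonalizable matrix A."""
    vals, vecs = np.linalg.eig(A)
    return (vecs @ np.diag(np.exp(vals * t)) @ np.linalg.inv(vecs)).real

A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])      # example stable system matrix (assumption)
b = np.array([1.0, 1.0])         # constant control input (assumption)
x0 = np.array([0.0, 0.0])

def x(t):
    E = expm_via_eig(A, t)
    return E @ x0 + np.linalg.inv(A) @ (E - np.eye(2)) @ b

# As t grows, the state approaches the equilibrium -A^{-1} b.
print(x(10.0), -np.linalg.inv(A) @ b)
```

At each α-level of a fuzzy initial condition or input, the same formula is applied to the interval endpoints, which is why the sign structure of A (nonnegative or nonpositive) makes the level-set computations tractable.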
Abstract:
We introduce a model for the dynamics of a patchy population in a stochastic environment and derive a criterion for its persistence. This criterion is based on the geometric mean (GM) through time of the spatial-arithmetic mean of growth rates. For the population to persist, the GM has to be greater than or equal to 1. The GM increases with the number of patches (because the sampling error is reduced) and decreases with both the variance and the spatial covariance of growth rates. We derive analytical expressions for the minimum number of patches (and the maximum harvesting rate) required for the persistence of the population. As the magnitude of environmental fluctuations increases, the number of patches required for persistence increases, and the fraction of individuals that can be harvested decreases. The novelty of our approach is that we focus on Malthusian local population dynamics with high dispersal and strong environmental variability from year to year. Unlike previous models of patchy populations that assume an infinite number of patches, we focus specifically on the effect that the number of patches has on population persistence. Our work is therefore directly relevant to patchily distributed organisms that are restricted to a small number of habitat patches.
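The persistence criterion can be illustrated numerically; the lognormal growth-rate distribution below is an assumption for illustration, not taken from the paper:

```python
import numpy as np

# Sketch of the criterion described above: the geometric mean (GM) over
# time of the spatial arithmetic mean of patch growth rates must be >= 1.
# Growth rates are simulated as spatially uncorrelated lognormals.

rng = np.random.default_rng(0)

def gm_of_spatial_mean(n_patches, n_years=5000, sigma=0.5):
    # growth rate of each patch in each year
    r = rng.lognormal(mean=0.0, sigma=sigma, size=(n_years, n_patches))
    spatial_mean = r.mean(axis=1)               # arithmetic mean over patches
    return np.exp(np.log(spatial_mean).mean())  # geometric mean over time

gm_1 = gm_of_spatial_mean(1)
gm_16 = gm_of_spatial_mean(16)
# Averaging over more patches reduces the variance of the spatial mean,
# raising the GM toward the arithmetic-mean growth rate exp(sigma^2/2).
print(gm_1, gm_16)
```

With a single patch the GM hovers near 1 (marginal persistence), while sixteen patches lift it well above 1, matching the paper's point that the number of patches itself affects persistence.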
Abstract:
In many occupational safety interventions, the objective is to reduce the injury incidence as well as the mean claims cost once injury has occurred. The claims cost data within a period typically contain a large proportion of zero observations (no claim). The distribution thus comprises a point mass at 0 mixed with a non-degenerate parametric component. Essentially, the likelihood function can be factorized into two orthogonal components. These two components relate respectively to the effect of covariates on the incidence of claims and the magnitude of claims, given that claims are made. Furthermore, the longitudinal nature of the intervention inherently imposes some correlation among the observations. This paper introduces a zero-augmented gamma random effects model for analysing longitudinal data with many zeros. Adopting the generalized linear mixed model (GLMM) approach reduces the original problem to the fitting of two independent GLMMs. The method is applied to evaluate the effectiveness of a workplace risk assessment teams program, trialled within the cleaning services of a Western Australian public hospital.
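A minimal sketch of the two-part likelihood idea on simulated data, without the random effects (a full GLMM fit would add the longitudinal random intercepts the paper describes):

```python
import numpy as np

# Zero-augmented gamma structure: a Bernoulli part for whether a claim
# occurs, and a gamma part for its size given that it occurs. Because the
# likelihood factorizes into these orthogonal components, the two parts
# can be estimated independently. True parameters below are assumptions.

rng = np.random.default_rng(1)
n = 2000
claimed = rng.random(n) < 0.3                    # true claim probability 0.3
cost = np.where(claimed, rng.gamma(shape=2.0, scale=500.0, size=n), 0.0)

# Part 1: incidence of claims (Bernoulli MLE is the observed proportion).
p_hat = (cost > 0).mean()

# Part 2: claim size given a claim occurred, fitted on the positive
# observations only (method-of-moments estimates for the gamma part).
pos = cost[cost > 0]
shape_hat = pos.mean() ** 2 / pos.var()
scale_hat = pos.var() / pos.mean()

print(p_hat, shape_hat, scale_hat)
```

An intervention effect would enter as covariates in both parts, so its impact on claim incidence and on claim magnitude can be assessed separately.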
Abstract:
Motivation: This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic used in conjunction with a threshold on the size of a cluster allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to effectively reduce the dimension of the feature space of genes. Results: The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets.
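The gene-selection step can be sketched as follows; EMMIX-GENE fits mixtures of t distributions, but a univariate normal mixture on simulated data keeps the illustration short:

```python
import numpy as np

# For each gene, fit a one-component and a two-component normal model to
# its expression values across tissues, and rank genes by the likelihood
# ratio statistic 2*(ll_two - ll_one). A clearly bimodal gene should score
# much higher than a unimodal (uninformative) one.

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def loglik_one(x):
    return np.sum(np.log(norm_pdf(x, x.mean(), x.std())))

def loglik_two(x, n_iter=200):
    # EM for a two-component normal mixture, initialized by a median split.
    lo, hi = x < np.median(x), x >= np.median(x)
    pi, mu1, mu2 = 0.5, x[lo].mean(), x[hi].mean()
    sd1 = sd2 = x.std()
    for _ in range(n_iter):
        d1 = pi * norm_pdf(x, mu1, sd1)
        d2 = (1 - pi) * norm_pdf(x, mu2, sd2)
        w = d1 / (d1 + d2)                         # E-step: responsibilities
        pi = w.mean()                              # M-step: update parameters
        mu1 = np.sum(w * x) / np.sum(w)
        mu2 = np.sum((1 - w) * x) / np.sum(1 - w)
        sd1 = np.sqrt(np.sum(w * (x - mu1) ** 2) / np.sum(w))
        sd2 = np.sqrt(np.sum((1 - w) * (x - mu2) ** 2) / np.sum(1 - w))
    return np.sum(np.log(pi * norm_pdf(x, mu1, sd1)
                         + (1 - pi) * norm_pdf(x, mu2, sd2)))

rng = np.random.default_rng(2)
unimodal = rng.normal(0.0, 1.0, 50)                        # irrelevant gene
bimodal = np.concatenate([rng.normal(-2, 1, 25), rng.normal(2, 1, 25)])

lrt = {name: 2 * (loglik_two(x) - loglik_one(x))
       for name, x in [("gene_A", unimodal), ("gene_B", bimodal)]}
print(lrt)   # gene_B (bimodal) should show the larger statistic
```

Thresholding this statistic (together with a minimum cluster size, as the abstract notes) selects the genes retained for the tissue clustering stage.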
Abstract:
Motivation: A consensus sequence for a family of related sequences is, as the name suggests, a sequence that captures the features common to most members of the family. Consensus sequences are important in various DNA sequencing applications and are a convenient way to characterize a family of molecules. Results: This paper describes a new algorithm for finding a consensus sequence, using the popular optimization method known as simulated annealing. Unlike the conventional approach of finding a consensus sequence by first forming a multiple sequence alignment, this algorithm searches for a sequence that minimises the sum of pairwise distances to each of the input sequences. The resulting consensus sequence can then be used to induce a multiple sequence alignment. The time required by the algorithm scales linearly with the number of input sequences and quadratically with the length of the consensus sequence. We present results demonstrating the high quality of the consensus sequences and alignments produced by the new algorithm. For comparison, we also present similar results obtained using ClustalW. The new algorithm outperforms ClustalW in many cases.
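A minimal sketch of the annealing search described above; the move set and cooling schedule are illustrative choices, not the paper's exact algorithm:

```python
import math
import random

# Simulated annealing over candidate consensus strings, minimising the
# sum of edit distances to the input sequences (no multiple alignment is
# built first, matching the approach described in the abstract).

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def total_distance(cand, seqs):
    return sum(edit_distance(cand, s) for s in seqs)

def anneal_consensus(seqs, alphabet="ACGT", steps=2000, t0=2.0, seed=3):
    rng = random.Random(seed)
    cand = max(seqs, key=len)                  # start from the longest input
    cost = total_distance(cand, seqs)
    best, best_cost = cand, cost
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9
        i = rng.randrange(len(cand))
        move = rng.choice(["sub", "del", "ins"])
        if move == "sub":
            new = cand[:i] + rng.choice(alphabet) + cand[i + 1:]
        elif move == "del" and len(cand) > 1:
            new = cand[:i] + cand[i + 1:]
        else:
            new = cand[:i] + rng.choice(alphabet) + cand[i:]
        new_cost = total_distance(new, seqs)
        # Metropolis rule: always accept improvements, sometimes accept worse.
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cand, cost = new, new_cost
            if cost < best_cost:
                best, best_cost = cand, cost
    return best, best_cost

seqs = ["ACGTACGT", "ACGTTCGT", "ACCTACGT", "ACGAACGT"]
consensus, cost = anneal_consensus(seqs)
print(consensus, cost)
```

As in the paper, the cost of one candidate against k sequences scales linearly with k and quadratically with the candidate's length (via the edit-distance DP).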
Abstract:
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike changes of measures proposed and studied in recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independent of the level L; a new result which is not established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite buffer case. In this case, the relative error is shown to be bounded (independent of L) only when the second server is the bottleneck; a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method seem to suggest that the relative error is bounded linearly in L.
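The paper's change of measure depends on the first buffer's content; a much simpler instance of the same ingredients (an exponential change of measure plus a likelihood-ratio correction inside importance sampling) can be shown on a single birth-death queue, estimating the probability of reaching a high level L before emptying:

```python
import random

# Classical importance-sampling tilt for a single birth-death chain with
# up-probability LAM and down-probability MU (LAM < MU): swap the rates so
# the rare overflow path becomes typical, and multiply each step by the
# likelihood ratio (true prob / sampling prob) to keep the estimate unbiased.
# This is an illustration of the technique, not the paper's buffer-dependent
# change of measure for the tandem network.

LAM, MU = 0.3, 0.7     # arrival / service step probabilities
L = 15                 # overflow level

def one_sample(rng):
    x, lr = 1, 1.0                   # start at level 1
    while 0 < x < L:
        if rng.random() < MU:        # tilted dynamics: step up w.p. MU
            x += 1
            lr *= LAM / MU
        else:                        # step down w.p. LAM
            x -= 1
            lr *= MU / LAM
    return lr if x == L else 0.0     # indicator weighted by likelihood ratio

rng = random.Random(4)
n = 20000
estimate = sum(one_sample(rng) for _ in range(n)) / n

r = MU / LAM
exact = (r - 1) / (r ** L - 1)       # gambler's-ruin probability from state 1
print(estimate, exact)
```

Because every path that reaches L carries the same likelihood ratio, the estimator's relative error stays small even though the target probability is of order 10^-6; the bounded-relative-error results in the paper are the tandem-network analogue of this behaviour.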