992 results for SIZE MANIPULATION
Abstract:
We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length given an individual's previous length measurement and time at liberty are then derived. We moment match the true conditional distributions with skewed-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag-recapture data and tag-recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
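A minimal sketch of the final step described above: once a skew-normal distribution has been moment-matched to the conditional distribution of future length, each STM element is the probability mass that falls in a destination size class. All parameter values and size classes below are hypothetical, and `scipy.stats.skewnorm` simply stands in for the authors' parameterisation.

```python
import numpy as np
from scipy.stats import skewnorm

def stm_row(a, loc, scale, bin_edges):
    """One STM row: probability of landing in each destination size class,
    given a moment-matched skew-normal (shape a, location loc, scale scale)
    for the future length. All parameter values are illustrative only."""
    cdf = skewnorm.cdf(bin_edges, a, loc=loc, scale=scale)
    p = np.diff(cdf)
    # Fold the tail mass outside the size range into the end bins so the row sums to 1.
    p[0] += cdf[0]
    p[-1] += 1.0 - cdf[-1]
    return p

edges = np.arange(20.0, 41.0, 2.0)   # hypothetical size-class boundaries (mm)
row = stm_row(a=3.0, loc=26.0, scale=2.5, bin_edges=edges)
```

Because the skew-normal CDF is evaluated directly at the class boundaries, no probability of negative growth can leak into classes below the current one unless the fitted distribution itself places mass there.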
Resumo:
Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing mean values may become statistically inappropriate, and even invalid, when substantial proportions of the response values are below the detection limits or censored, because strong distributional assumptions must then be made about the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need to impute the censored values. As a demonstration, we applied the methods to a nutrient monitoring project, which is a part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is, in fact, smaller than that required by the traditional t-test, illustrating the merit of our method.
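The quantile idea can be sketched as a sign test: whether the p-th quantile exceeds a threshold depends only on how many observations lie above that threshold, so below-detection-limit values need no imputation as long as the threshold is at or above the detection limit. This is a generic illustration of the approach under that assumption, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import binomtest

def quantile_exceeds(data, threshold, p=0.5):
    """Sign-test sketch: one-sided p-value for H0 'the p-th quantile is at most
    threshold'. Censored values only need to be known to lie below the
    threshold, so no imputation is required when threshold >= detection limit."""
    n = len(data)
    k = int(np.sum(np.asarray(data) > threshold))  # censored values count as 'below'
    # If the p-th quantile exceeds the threshold, then P(X > threshold) > 1 - p.
    return binomtest(k, n, 1.0 - p, alternative="greater").pvalue

high = [5.0] * 20 + [0.5] * 5   # 0.5 is a hypothetical detection-limit placeholder
pv = quantile_exceeds(high, threshold=3.0, p=0.5)
```

The test statistic is just the exceedance count, which is why the procedure is insensitive to outliers and to the exact values of the censored observations.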
Abstract:
We propose a new model for estimating the size of a population from successive catches taken during a removal experiment. The data from these experiments often have excessive variation, known as overdispersion, as compared with that predicted by the multinomial model. The new model allows catchability to vary randomly among samplings, which accounts for overdispersion. When the catchability is assumed to have a beta distribution, the likelihood function, which is referred to as beta-multinomial, is derived, and hence the maximum likelihood estimates can be evaluated. Simulations show that in the presence of extra variation in the data, the confidence intervals have been substantially underestimated in previous models (Leslie-DeLury, Moran) and that the new model provides more reliable confidence intervals. The performance of these methods was also demonstrated using two real data sets: one with overdispersion, from smallmouth bass (Micropterus dolomieu), and the other without overdispersion, from rat (Rattus rattus).
Abstract:
Natural mortality of marine invertebrates is often very high in the early life history stages and decreases in later stages. The possible size-dependent mortality of juvenile banana prawns, Penaeus merguiensis (2-15 mm carapace length), in the Gulf of Carpentaria was investigated. The analysis was based on data collected at two-weekly intervals by beam trawls at four sites over a period of six years (between September 1986 and March 1992). It was assumed that mortality was a parametric function of size, rather than a constant. Another complication in estimating mortality for juvenile banana prawns is that a significant proportion of the population emigrates from the study area each year. This effect was accounted for by incorporating the size-frequency pattern of the emigrants in the analysis. Both the extra parameter in the model required to describe the size dependence of mortality and that used to account for emigration were found to be significantly different from zero, and the instantaneous mortality rate declined from 0.89 week⁻¹ for 2 mm prawns to 0.02 week⁻¹ for 15 mm prawns.
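The reported endpoint rates pin down a two-parameter decline. Assuming, for illustration only, a power-law form M(L) = a * L^(-b) (the paper's exact parametric form is not stated here), the two quoted rates determine both parameters:

```python
import numpy as np

# Two reported mortality rates (per week) at the size-range endpoints.
L1, M1 = 2.0, 0.89    # 0.89 per week at 2 mm carapace length
L2, M2 = 15.0, 0.02   # 0.02 per week at 15 mm carapace length

# Solving M1 = a*L1**(-b) and M2 = a*L2**(-b) simultaneously:
b = np.log(M1 / M2) / np.log(L2 / L1)   # exponent of the size-dependent decline
a = M1 * L1**b                          # scale coefficient

def M(L):
    """Instantaneous weekly mortality at carapace length L (mm), under the
    assumed power-law form."""
    return a * L**(-b)
```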
Abstract:
Although subsampling is a common method for describing the composition of large and diverse trawl catches, the accuracy of these techniques is often unknown. We determined the sampling errors generated from estimating the percentage of the total number of species recorded in catches, as well as the abundance of each species, at each increase in the proportion of the sorted catch. We completely partitioned twenty prawn trawl catches from tropical northern Australia into subsamples of about 10 kg each. All subsamples were then sorted, and species numbers recorded. Catch weights ranged from 71 to 445 kg; the number of fish species per trawl ranged from 60 to 138, and the number of invertebrate species from 18 to 63. Almost 70% of the species recorded in catches were "rare" in subsamples (less than one individual per 10 kg subsample, or less than one in every 389 individuals). A matrix was used to show the increase in the total number of species recorded in each catch as the percentage of the sorted catch increased. Simulation modelling showed that sorting small subsamples (about 10% of catch weight) identified about 50% of the total number of species caught in a trawl. Larger subsamples (50% of catch weight on average) identified about 80% of the total species caught in a trawl. The accuracy of estimating the abundance of each species also increased with increasing subsample size. For the "rare" species, sampling error was around 80% after sorting 10% of catch weight and was just less than 50% after 40% of catch weight had been sorted. For the "abundant" species (five or more individuals per 10 kg subsample, or five or more in every 389 individuals), sampling error was around 25% after sorting 10% of catch weight, but was reduced to around 10% after 40% of catch weight had been sorted.
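The simulation-modelling step can be illustrated with a toy species-accumulation calculation: when abundances are strongly skewed, small subsamples recover a disproportionately small share of the species list. All numbers below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy catch: 100 species with skewed abundances (many rare species),
# standing in for a fully sorted trawl catch.
abundances = rng.geometric(p=0.05, size=100)
catch = np.repeat(np.arange(100), abundances)  # one entry per individual
rng.shuffle(catch)                             # random sorting order

def species_found(fraction):
    """Fraction of all species recovered after sorting `fraction` of the catch."""
    k = int(len(catch) * fraction)
    return len(np.unique(catch[:k])) / 100.0
```

Comparing `species_found(0.1)` with `species_found(0.5)` reproduces the qualitative pattern in the abstract: the species count climbs steeply at first and then saturates, because the remaining unsorted catch holds mostly individuals of species already seen.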
Abstract:
Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gains in the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain or the total gain over a fixed length of time, because the rate of gain depends on the proportion of treatments forwarded to phase III study. We suggest maximizing the rate of gain, and the resulting optimal one-stage design is twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to phase III study.
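The distinction between gain per trial and rate of gain can be sketched with a toy calculation: a forwarded treatment occupies development time in phase III, so the expected duration of one cycle grows with the forwarding probability. All durations and gains below are hypothetical and are not taken from either paper.

```python
def rate_of_gain(expected_gain, p_forward, t_phase2=1.0, t_phase3=3.0):
    """Expected gain divided by the expected duration of one phase II cycle,
    where a forwarded treatment adds the phase III duration. All inputs are
    illustrative; the real designs optimise this quantity formally."""
    return expected_gain / (t_phase2 + p_forward * t_phase3)

# The forwarding probabilities echo the abstract (0.65 vs. 0.12); the gains
# and durations are invented purely to show the computation.
loose = rate_of_gain(expected_gain=10.0, p_forward=0.65)
tight = rate_of_gain(expected_gain=4.0, p_forward=0.12)
```

The point is only that the ranking of designs by `rate_of_gain` can differ from their ranking by `expected_gain` alone, depending on the relative phase durations.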
Abstract:
Multi-objective optimization is an active field of research with broad applicability in aeronautics. This report details a variant of the original NSGA-II software aimed at improving the performance of this widely used genetic algorithm in finding the optimal Pareto front of a multi-objective optimization problem, for use in UAV and aircraft design and optimisation. The original NSGA-II works on a population of predetermined constant size, and its computational cost to evaluate one generation is O(mn²), where m is the number of objective functions and n is the population size. The basic idea motivating this work is to reduce the computational cost of the NSGA-II algorithm by making it work on a population of variable size, in order to obtain better convergence towards the Pareto front in less time. Several test functions are run with both the original NSGA-II and the VPNSGA-II algorithms; each run is timed to measure the computational cost of each trial, and the results are compared.
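The O(mn²) cost quoted above comes from the non-dominated sorting step of NSGA-II. A minimal sketch of Deb et al.'s fast non-dominated sort, minimising every objective (the test points are arbitrary):

```python
def fast_nondominated_sort(points):
    """Return a list of Pareto fronts (lists of indices into `points`),
    minimising every objective. The pairwise dominance comparison over all
    n points and m objectives is the O(m n^2) step."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if all(a <= b for a, b in zip(points[i], points[j])) and \
               any(a < b for a, b in zip(points[i], points[j])):
                dominated_by[i].append(j)
            elif all(b <= a for a, b in zip(points[i], points[j])) and \
                 any(b < a for a, b in zip(points[i], points[j])):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
fronts = fast_nondominated_sort(pts)
```

Shrinking or growing n directly shrinks or grows the cost of this dominance comparison, which is why a variable population size can pay off.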
Abstract:
A thermodynamic model, first published in 1909, has been used extensively to understand the size-dependent melting of nanoparticles. Pawlow deduced an expression for the size-dependent melting temperature of small particles based on this thermodynamic model, which was subsequently modified and applied to different nanostructures such as nanowires and prism-shaped nanoparticles. The model has also been modified to describe the melting of supported nanoparticles and the superheating of embedded nanoparticles. In this article, we review the melting behaviour of nanostructures reported in the literature since 1909.
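The expressions reviewed in this line of work generally reduce to a melting-point depression linear in the inverse particle size. Schematically, for a particle of diameter d with bulk melting temperature T_m(∞):

```latex
T_m(d) \;=\; T_m(\infty)\left(1 - \frac{\beta}{d}\right)
```

where β is a material-dependent length collecting the solid and liquid surface energies, the densities of the two phases, and the latent heat of fusion; Pawlow-type derivations and their later modifications for nanowires, supported particles, and embedded particles differ chiefly in the form of β.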
Abstract:
In this paper the main features of ARDBID (A Relational Database for Interactive Design) have been described. An overview of the organization of the database has been presented and a detailed description of the data definition and manipulation languages has been given. These have been implemented on a DEC 1090 system.
Abstract:
Analyses of diffusion and dislocation creep in nanocrystals need to take into account the typically low temperatures, high stresses, and very fine grain sizes involved. In nanocrystals, diffusion creep may be associated with a nonlinear stress dependence, and dislocation creep may involve a grain size dependence.
Abstract:
Rail track undergoes complex loading patterns under moving traffic compared to roads because of its continuous and discontinuous multi-layered structure, comprising rail, sleepers, ballast layer, sub-ballast layer, and subgrade. Particle size distributions (PSDs) of the ballast, sub-ballast, and subgrade layers can be critical to the cyclic plastic deformation of rail track under moving traffic, and hence to frequent track degradation, especially at bridge transition zones. Conventional test approaches (static shear and cyclic single-point load tests) are, however, unable to replicate the actual loading patterns of a moving train. A multi-ring shear apparatus, a new type of torsional simple shear apparatus that can reproduce moving traffic conditions, was used in this study to investigate the influence of the particle size distribution of rail track layers on cyclic plastic deformation. Three particle size distributions, prepared using glass beads, were examined under different loading patterns (cyclic single-point load and cyclic moving wheel load) to evaluate the cyclic plastic deformation of rail track under the different loading methods. The results of these tests suggest that the particle size distributions of rail track structural layers have significant impacts on cyclic plastic deformation under moving train load. Further, the limitations of the conventional laboratory test methods lead to underestimation of the plastic deformation of rail tracks.
Abstract:
In this study, 120–144 commercial varieties and breeding lines were assessed for grain size attributes including plump grain (>2.8 mm) and retention (>2.5 mm + >2.8 mm). Grain samples were produced from replicated trials at 25 sites across four years. Climatic conditions varied between years as well as between sites. Several of the trial sites were irrigated, while the remainder were grown under dryland conditions. A number of the dryland sites suffered severe drought stress. The grain size data were analysed for genetic (G) and environmental (E) effects and genotype-by-environment (G×E) interactions. All analyses included maturity as a covariate. The genetic effect on grain size was greater than the environmental or maturity effects, despite some sites suffering terminal moisture stress. The model was used to calculate heritability values for each site used in the study. These values ranged from 89 to 98% for plump grain and 88 to 96% for retention. The results demonstrated that removing the sources of non-heritable variation, such as maturity and field effects, can improve genetic estimates of the retention and plump grain fractions. By partitioning all variance components, and thereby having more robust estimates of genetic differences, plant breeders can have greater confidence in selecting barley genotypes which maintain large, stable grain size across a range of environments.
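Site heritability values like those quoted above are typically computed on an entry-mean basis from the fitted variance components. A minimal sketch with the standard textbook formula; the variance components and replicate number are hypothetical, not the paper's estimates.

```python
def line_mean_heritability(var_g, var_e, n_reps):
    """Broad-sense heritability on an entry-mean basis:
    h2 = Vg / (Vg + Ve / r), where Vg is the genetic variance, Ve the
    residual variance, and r the number of replicates per entry."""
    return var_g / (var_g + var_e / n_reps)

# Hypothetical components: genetic variance 4.0, residual 1.0, 3 replicates.
h2 = line_mean_heritability(var_g=4.0, var_e=1.0, n_reps=3)
```

Replication shrinks the residual term, which is one reason well-replicated sites can report heritabilities in the 88-98% range even when environmental variation is substantial.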
Abstract:
This study compares estimates of the census size of the spawning population with genetic estimates of effective current and long-term population size for an abundant and commercially important marine invertebrate, the brown tiger prawn (Penaeus esculentus). Our aim was to focus on the relationship between genetic effective and census size that may provide a source of information for viability analyses of naturally occurring populations. Samples were taken in 2001, 2002 and 2003 from a population on the east coast of Australia and temporal allelic variation was measured at eight polymorphic microsatellite loci. Moments-based and maximum-likelihood estimates of current genetic effective population size ranged from 797 to 1304. The mean long-term genetic effective population size was 9968. Although small for a large population, the effective population size estimates were above the threshold where genetic diversity is lost at neutral alleles through drift or inbreeding. Simulation studies correctly predicted that under these experimental conditions the genetic estimates would have non-infinite upper confidence limits and revealed they might be overestimates of the true size. We also show that estimates of mortality and variance in family size may be derived from data on average fecundity, current genetic effective and census spawning population size, assuming effective population size is equivalent to the number of breeders. This work confirms that it is feasible to obtain accurate estimates of current genetic effective population size for abundant Type III species using existing genetic marker technology.
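The moments-based temporal method mentioned above can be sketched with the standard Waples-style estimator, which converts the standardised variance F in allele frequencies between two samples into an estimate of Ne after correcting for sampling noise. All numbers below are made up; note that a small corrected F yields an unbounded estimate, which is why non-infinite upper confidence limits are worth reporting.

```python
def temporal_ne(F, t, S0, St):
    """Moments-based temporal estimator sketch: Ne from the standardised
    variance F in allele frequencies between samples t generations apart,
    with sample sizes S0 and St. A textbook form of the approach, not the
    study's exact implementation."""
    denom = 2.0 * (F - 1.0 / (2 * S0) - 1.0 / (2 * St))
    # If sampling noise explains all the drift signal, Ne is unbounded.
    return t / denom if denom > 0 else float("inf")

ne = temporal_ne(F=0.006, t=2, S0=400, St=400)
```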
Abstract:
Evolutionarily stable sex ratios are determined for social hymenoptera under local mate competition (LMC) and when the brood size is finite. LMC is modelled by the parameter d. Of the reproductive progeny from a single foundress nest, a fraction d disperses (outbreeding), while (1-d) mate amongst themselves (sibmating). When the brood size is finite, d is taken to be the probability of an offspring dispersing and, similarly, r, the proportion of male offspring, is the probability of a haploid egg being laid. Under the joint influence of these two stochastic processes, there is a nonzero probability that some females remain unmated in the nest. As a result, the optimal proportion of males (corresponding to the evolutionarily stable strategy, ESS) is higher than that obtained when the brood size is infinite. When the queen controls the sex ratio, the ESS becomes more female biased under increased inbreeding (lower d). However, the ESS under worker control shows an unexpected pattern, including an increase in the proportion of males with increased inbreeding. This effect is traced to the complex interaction between inbreeding and local mate competition.
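The finite-brood effect driving this result can be illustrated with one binomial calculation: the probability that a brood contains no males at all, leaving non-dispersing females unmated, grows quickly as the sex ratio becomes more female biased. This minimal sketch ignores d and considers only the haploid-egg probability r.

```python
def p_no_males(r, n):
    """Probability that a finite brood of n offspring contains no males,
    when each egg is haploid (male) with probability r. This is the event
    that leaves all non-dispersing females in the nest unmated; d is
    ignored in this minimal sketch of the abstract's argument."""
    return (1.0 - r) ** n

# A more female-biased ratio raises the risk of an all-female brood:
risk_biased = p_no_males(r=0.1, n=10)
risk_even = p_no_males(r=0.5, n=10)
```

Selection against this all-female outcome is what pushes the finite-brood ESS toward a higher proportion of males than the infinite-brood ESS.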