938 results for Births number
Abstract:
Flow patterns and aerodynamic characteristics behind three side-by-side square cylinders are found to depend on the unequal gap spacings (g1 = s1/d and g2 = s2/d) between the three cylinders and on the Reynolds number (Re), studied here using the lattice Boltzmann method. The effect of the Reynolds number on the flow behind the three cylinders is numerically studied for 75 ≤ Re ≤ 175 at chosen unequal gap spacings (g1, g2) = (1.5, 1), (3, 4) and (7, 6). We also investigate the effect of g2 while keeping g1 fixed at Re = 150. It is found that the Reynolds number has a strong effect on the flow at the small unequal gap spacing (g1, g2) = (1.5, 1.0). It is also found that the secondary cylinder interaction frequency contributes significantly at this spacing for all chosen Reynolds numbers. At the intermediate unequal gap spacing (g1, g2) = (3, 4), the primary vortex shedding frequency plays the major role and the effect of the secondary cylinder interaction frequencies almost disappears. Some vortices merge near the exit and, as a result, a small modulation is found in the drag and lift coefficients. This means that increasing the Reynolds number and the unequal gap spacing weakens the wake interaction between the cylinders. At the large unequal gap spacing (g1, g2) = (7, 6) the flow is fully periodic and no such modulation is found in the drag and lift coefficient signals. The jet flows at unequal gap spacings strongly influence the wake interaction as the Reynolds number is varied. These unequal gap spacings separate the wake patterns for different Reynolds numbers: flip-flopping, in-phase and anti-phase modulated synchronized, and in-phase and anti-phase synchronized. It is also observed that with equal gap spacing between the cylinders the effect of gap spacing is stronger than that of the Reynolds number, whereas with unequal gap spacing the wake patterns depend strongly on both the unequal gap spacing and the Reynolds number. Vorticity contour visualization, time histories of the drag and lift coefficients, power spectra of the lift coefficient and force statistics are systematically discussed for all chosen unequal gap spacings and Reynolds numbers to fully understand this valuable and practical problem.
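The power-spectrum analysis of the lift coefficient mentioned above can be illustrated with a short sketch; a minimal example, assuming a uniformly sampled lift-coefficient series `cl` and a time step `dt` (both names are illustrative, not taken from the paper):

```python
import numpy as np

def shedding_spectrum(cl, dt):
    """Power spectrum of a lift-coefficient time series.

    cl : 1-D array of lift-coefficient samples
    dt : non-dimensional time step between samples
    Returns (frequencies, power); the tallest peak approximates the
    primary vortex-shedding frequency, while secondary peaks indicate
    cylinder-interaction frequencies.
    """
    cl = np.asarray(cl) - np.mean(cl)        # remove the mean offset
    power = np.abs(np.fft.rfft(cl)) ** 2     # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(cl), d=dt)   # matching frequency axis
    return freqs, power

# Usage: the dominant non-zero peak gives the primary shedding frequency.
# freqs, power = shedding_spectrum(cl_samples, dt=0.05)
# f_shed = freqs[np.argmax(power[1:]) + 1]   # skip the zero-frequency bin
```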
Abstract:
A computed tomography number to relative electron density (CT-RED) calibration is performed when commissioning a radiotherapy CT scanner by imaging a calibration phantom with inserts of specified RED and recording the CT number displayed. In this work, CT-RED calibrations were generated using several commercially available phantoms to observe the effect of phantom geometry on the conversion to electron density and, ultimately, on the dose calculation in a treatment planning system. Using an anthropomorphic phantom as a gold standard, the CT number of a material was found to depend strongly on the amount and type of scattering material surrounding the volume of interest, with the largest variation observed for the highest density material tested, cortical bone. Cortical bone gave a maximum CT number difference of 1,110 when a cylindrical insert of diameter 28 mm scanned free in air was compared with the same material in the form of a 30 × 30 cm² slab. The effect of using each CT-RED calibration on the planned dose to a patient was quantified using a commercially available treatment planning system. When all calibrations were compared to the anthropomorphic calibration, the largest percentage dose difference was 4.2 %, which occurred when the CT-RED calibration curve was acquired with the heterogeneity inserts removed from the phantom and scanned free in air. The maximum dose difference observed between two dedicated CT-RED phantoms was ±2.1 %. A phantom that is to be used for CT-RED calibration must have sufficient water-equivalent scattering material surrounding the heterogeneity inserts that are used for calibration.
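In a treatment planning system the CT-RED calibration is typically applied as a piecewise-linear lookup from CT number to relative electron density; a minimal sketch, assuming illustrative calibration points (the HU/RED pairs below are invented for the example, not measured values from this study):

```python
import numpy as np

# Hypothetical CT-RED calibration points: (CT number in HU, relative electron
# density). Real values come from scanning a phantom with inserts of known RED.
ct_numbers = np.array([-1000.0, -100.0, 0.0, 300.0, 1200.0])  # air ... bone
rel_e_dens = np.array([0.001, 0.93, 1.0, 1.28, 1.70])

def ct_to_red(hu):
    """Piecewise-linear interpolation of the CT-RED calibration curve.
    CT numbers beyond the table ends are clamped to the end values."""
    return np.interp(hu, ct_numbers, rel_e_dens)

# A 1,110 HU shift for cortical bone (slab vs. free-in-air insert) maps to a
# large RED difference, which then propagates into the planned dose.
print(ct_to_red(1200.0) - ct_to_red(90.0))
```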
Abstract:
Copy number variations (CNVs), as described in the healthy population, are purported to contribute significantly to genetic heterogeneity. Recent studies have described CNVs using lymphoblastoid cell lines or by applying specifically developed algorithms to previously described data. However, the full extent of CNVs remains unclear. Using a high-density SNP array, we have undertaken a comprehensive investigation of chromosome 18 for CNV discovery and for characterisation of their distribution and association with chromosome architecture. We identified 399 CNVs, of which losses represent 98%, 58% are less than 2.5 kb in size and 71% are intergenic. Intronic deletions account for the majority of copy number changes with gene involvement. Furthermore, one-third of CNVs do not have putative breakpoints within repetitive sequences. We conclude that replicative processes, mediated either by repetitive elements or by microhomology, account for the majority of CNVs in the healthy population. Genomic instability involving the formation of a non-B structure is demonstrated in one region.
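The classification of breakpoints by microhomology can be illustrated with a toy check; a minimal sketch, assuming the sequences flanking the two breakpoint junctions have already been extracted (function and argument names are hypothetical):

```python
def microhomology_length(proximal_flank, distal_flank, max_len=25):
    """Length of identical sequence shared at the two breakpoint junctions.

    proximal_flank : bases immediately 5' of the proximal breakpoint
    distal_flank   : bases immediately 5' of the distal breakpoint
    A run of shared terminal bases is the microhomology signature of
    replicative mechanisms; zero shared bases argues against
    microhomology-mediated formation of the CNV.
    """
    n = 0
    for a, b in zip(reversed(proximal_flank[-max_len:]),
                    reversed(distal_flank[-max_len:])):
        if a != b:
            break
        n += 1
    return n

# Example: 3 bp of microhomology ("TAG") at a deletion junction.
print(microhomology_length("ACGTTAG", "CCTATAG"))  # -> 3
```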
Abstract:
Identity crime is argued to be one of the most significant crime problems of today. This paper examines identity crime through the attitudes and practices of a group of seniors in Queensland, Australia. It examines their own actions towards the protection of their personal data in response to a fraudulent email request. Applying the concept of the prudential citizen (one who is responsible for self-regulating their behaviour to maintain the integrity of their identity), it is argued that seniors often expose identity information through their actions. However, this is shown to be the result of flawed assumptions and misguided beliefs about the perceived risk and likelihood of identity crime, rather than a deliberate act. The paper concludes that to protect seniors from identity crime, greater awareness of appropriate risk-management strategies towards the disclosure of personal details is required to reduce their inadvertent exposure to identity crime.
Abstract:
In order to understand the role of translational modes in orientational relaxation in dense dipolar liquids, we have carried out a computer "experiment" in which a random dipolar lattice was generated by quenching only the translational motion of the molecules of an equilibrated dipolar liquid. The lattice so generated was orientationally disordered and positionally random. A detailed study of orientational relaxation in this random dipolar lattice revealed interesting differences from the corresponding dipolar liquid. In particular, we found that the relaxation of the collective orientational correlation functions at intermediate wave numbers was markedly slower at long times for the random lattice than for the liquid. This verified the important role of the translational modes in this regime, as predicted recently by molecular theories. The single-particle orientational correlation functions of the random lattice also decayed significantly more slowly at long times than those of the dipolar liquid.
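The single-particle orientational correlation function discussed here is the time autocorrelation of each molecule's dipole unit vector; a minimal sketch, assuming a trajectory stored as an array of unit vectors (the array shape and names are illustrative):

```python
import numpy as np

def single_particle_ocf(u, max_lag):
    """First-rank single-particle orientational correlation function
    C1(t) = < u_i(0) . u_i(t) >, averaged over molecules and time origins.

    u       : array of shape (n_frames, n_molecules, 3), unit dipole vectors
    max_lag : number of lags to evaluate (must be < n_frames)
    """
    n_frames = u.shape[0]
    c = np.empty(max_lag)
    for lag in range(max_lag):
        # dot product u(t0) . u(t0 + lag) for every molecule and time origin
        dots = np.einsum('tmk,tmk->tm', u[:n_frames - lag], u[lag:])
        c[lag] = dots.mean()
    return c  # slow long-time decay here signals frozen translational modes
```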
Abstract:
Doppler weather radars with fast scanning rates must estimate spectral moments from a small number of echo samples. This paper concerns the estimation of mean Doppler velocity in a coherent radar using a short complex time series. Specific results are presented based on 16 samples. A wide range of signal-to-noise ratios is considered, and attention is given to ease of implementation. It is shown that FFT estimators fare poorly in low-SNR and/or high-spectrum-width situations. Several variants of a vector pulse-pair processor are postulated and an algorithm is developed for the resolution of phase-angle ambiguity. This processor is found to be better than conventional processors at very low SNR values. A feasible approximation to the maximum entropy estimator is derived, as well as a technique utilizing the maximization of the periodogram. It is found that a vector pulse-pair processor operating with four lags for clear-air observation and a single lag (pulse-pair mode) for storm observation may be a good way to estimate Doppler velocities over the entire gamut of weather phenomena.
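The conventional pulse-pair estimator referred to above derives mean velocity from the phase of the lag-one autocorrelation of the complex echo series; a minimal sketch of that standard estimator (the sign convention and variable names are illustrative, not from this paper):

```python
import numpy as np

def pulse_pair_velocity(z, prt, wavelength):
    """Pulse-pair mean Doppler velocity from a short complex time series.

    z          : complex echo samples at one range gate (e.g. 16 samples)
    prt        : pulse repetition time T in seconds
    wavelength : radar wavelength in metres
    Estimate: v = (lambda / (4*pi*T)) * arg(R(T)), where R(T) is the lag-one
    autocorrelation; it is unambiguous only within +/- lambda / (4*T).
    """
    z = np.asarray(z, dtype=complex)
    r1 = np.mean(np.conj(z[:-1]) * z[1:])  # lag-one autocorrelation estimate
    return wavelength / (4.0 * np.pi * prt) * np.angle(r1)

# Multi-lag ("vector pulse-pair") variants combine R(T), R(2T), ... to gain
# accuracy at low SNR, which motivates the phase-ambiguity resolution step.
```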
Abstract:
This study reports a diachronic corpus investigation of common-number pronouns used to convey unknown or otherwise unspecified reference. The study charts agreement patterns in these pronouns in various diachronic and synchronic corpora. The objective is to provide baseline data on variant frequencies and distributions in the history of English, as there are no previous systematic corpus-based observations on this topic. The study seeks to answer the questions of how pronoun use is linked with the overall typological development of English and how the pronouns' diachronic evolution is embedded in the linguistic and social structures in which they are used. The theoretical framework draws on corpus linguistics and historical sociolinguistics, grammaticalisation, diachronic typology, and multivariate analysis for modelling sociolinguistic variation. The method employs quantitative corpus analyses of two main electronic corpora, one from Modern English and the other from Present-day English. The Modern English material is the Corpus of Early English Correspondence, and the time frame covered is 1500-1800. The written component of the British National Corpus is used in the Present-day English investigations. In addition, the study draws supplementary data from other electronic corpora. The material is used to compare the frequencies and distributions of common-number pronouns between these two time periods. The study limits the common-number uses to two subsystems, one anaphoric to grammatically singular antecedents and one cataphoric, in which the pronoun is followed by a relative clause. Various statistical tools are used to process the data, ranging from cross-tabulations to multivariate VARBRUL analyses in which the effects of sociolinguistic and systemic parameters are assessed to model their impact on the dependent variable. The study shows how one pronoun type has extended its uses in both subsystems, an increase linked with grammaticalisation and with changes in other pronouns in English through the centuries. The variationist sociolinguistic analysis charts how grammaticalisation in the subsystems is embedded in the linguistic and social structures in which the pronouns are used. The study suggests a scale of two statistical generalisations of the various sociolinguistic factors which contribute to grammaticalisation and its embedding at various stages of the process.
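VARBRUL-style multivariate analysis is, in essence, a logistic regression of variant choice on sociolinguistic factor groups; a minimal sketch with statsmodels, assuming an illustrative token table (the column names and data are hypothetical, not drawn from the corpora used here):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical token table: one row per pronoun token, with a binary outcome
# (1 = common-number variant) and two categorical factor groups.
tokens = pd.DataFrame({
    "variant":  [1, 0, 1, 1, 0, 0, 1, 0],
    "period":   ["1500s", "1500s", "1600s", "1600s",
                 "1700s", "1700s", "1700s", "1600s"],
    "register": ["letter", "letter", "letter", "print",
                 "print", "letter", "print", "letter"],
})

# Logistic regression: the fitted coefficients play the role of VARBRUL
# factor weights; positive values favour the incoming variant.
model = smf.logit("variant ~ C(period) + C(register)", data=tokens).fit(disp=0)
print(model.params)
```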
Abstract:
This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov chain Monte Carlo (MCMC) sampling techniques, and the related label-switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results reflect uncertainty in the final model and report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
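The overfitting idea, in which a sparsity-inducing prior empties surplus components so their weights approach zero, can be illustrated with a generic tool; a minimal sketch using scikit-learn's Dirichlet-prior mixture, which is an analogous variational method, not the Zmix MCMC sampler itself:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Simulated univariate data from a 3-component Gaussian mixture.
x = np.concatenate([rng.normal(-4, 1, 300),
                    rng.normal(0, 0.5, 200),
                    rng.normal(5, 1, 500)]).reshape(-1, 1)

# Deliberately overfit with 10 components; a small Dirichlet concentration
# encourages the weights of surplus components to shrink towards zero.
gmm = BayesianGaussianMixture(n_components=10,
                              weight_concentration_prior=0.01,
                              max_iter=500, random_state=0).fit(x)

occupied = gmm.weights_ > 0.01  # effectively non-empty components
print("estimated number of components:", occupied.sum())  # typically 3
```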
Abstract:
The development of innovative methods of stock assessment is a priority for State and Commonwealth fisheries agencies. It is driven by the need to facilitate sustainable exploitation of naturally occurring fisheries resources for the current and future economic, social and environmental well-being of Australia. This project was initiated in this context and took advantage of considerable recent achievements in genomics that are shaping our comprehension of the DNA of humans and animals. The basic idea behind this project was that genetic estimates of effective population size, which can be made from empirical measurements of genetic drift, are equivalent to estimates of the successful number of spawners, an important parameter in the process of fisheries stock assessment. The broad objectives of this study were to: (1) critically evaluate a variety of mathematical methods of calculating effective spawner numbers (Ne) by (a) conducting comprehensive computer simulations and (b) analysing empirical data collected from the Moreton Bay population of tiger prawns (P. esculentus); (2) lay the groundwork for the application of the technology in the northern prawn fishery (NPF); and (3) produce software for the calculation of Ne and make it widely available. The project pulled together a range of mathematical models for estimating current effective population size from diverse sources. Some had been recently implemented with the latest statistical methods (e.g. the Bayesian framework of Berthier, Beaumont et al. 2002), while others had lower profiles (e.g. Pudovkin, Zaykin et al. 1996; Rousset and Raymond 1995). Computer code, and later software with a user-friendly interface (NeEstimator), was produced to implement the methods. This was used as a basis for simulation experiments to evaluate the performance of the methods with an individual-based model of a prawn population. Following the guidelines suggested by the computer simulations, the tiger prawn population in Moreton Bay (south-east Queensland) was sampled for genetic analysis with eight microsatellite loci in three successive spring spawning seasons in 2001, 2002 and 2003. As predicted by the simulations, the estimates had non-infinite upper confidence limits, a major achievement for the application of the method to a naturally occurring, short-generation, highly fecund invertebrate species. The genetic estimate of the number of successful spawners was around 1000 individuals in two consecutive years. This contrasts with about 500,000 prawns participating in spawning. It is not possible to distinguish successful from non-successful spawners, so we suggest a high level of protection for the entire spawning population. We interpret the difference in numbers between successful and non-successful spawners as a large variation in the number of offspring per family that survive: a large number of families have no surviving offspring, while a few have a large number. We explored various ways in which Ne can be useful in fisheries management. It can be a surrogate for spawning population size, assuming the ratio between Ne and spawning population size has been previously calculated for that species. Alternatively, it can be a surrogate for recruitment, again assuming that the ratio between Ne and recruitment has been previously determined. The number of species that can be analysed in this way, however, is likely to be small because of species-specific life-history requirements that need to be satisfied for accuracy.
The most universal approach would be to integrate Ne with spawning stock-recruitment models, so that these models are more accurate when applied to fisheries populations. A pathway to achieve this was established in this project, which we predict will significantly improve fisheries sustainability in the future. Regardless of the success of integrating Ne into spawning stock-recruitment models, Ne could be used as a fisheries monitoring tool. Declines in spawning stock size, or increases in natural or harvest mortality, would be reflected by a decline in Ne. This would be valuable for data-poor fisheries and provides fishery-independent information; however, we suggest a species-by-species approach, as some species may be too numerous, or experiencing too much migration, for the method to work. During the project, two important theoretical studies of the simultaneous estimation of effective population size and migration were published (Vitalis and Couvet 2001b; Wang and Whitlock 2003). These methods, combined with preliminary genetic data collected from the tiger prawn population in the southern Gulf of Carpentaria and a computer simulation study that evaluated the effect of differing reproductive strategies on genetic estimates, suggest that this technology could make an important contribution to the stock assessment process in the northern prawn fishery (NPF). Advances in the genomics world are rapid, and a cheaper, more reliable substitute for microsatellite loci in this technology is already available. Digital data from single nucleotide polymorphisms (SNPs) are likely to supersede 'analogue' microsatellite data, making the method cheaper and easier to apply to species with large population sizes.
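Estimating Ne from measured genetic drift can be illustrated with the classic temporal moment estimator; a minimal sketch based on Nei and Tajima's Fc with Waples' (1989) sampling correction, offered as an assumption-laden illustration rather than the exact estimators implemented in NeEstimator:

```python
import numpy as np

def temporal_ne(p0, pt, s0, st, t):
    """Moment estimate of effective population size from temporal allele
    frequency change (a sketch of the Waples 1989 temporal method).

    p0, pt : allele frequencies at generations 0 and t (1-D arrays over
             alleles; assumed polymorphic, 0 < p < 1)
    s0, st : number of individuals sampled at each time point
    t      : generations between samples
    """
    p0, pt = np.asarray(p0, float), np.asarray(pt, float)
    z = (p0 + pt) / 2.0
    fc = np.mean((p0 - pt) ** 2 / (z - p0 * pt))   # standardised variance
    denom = 2.0 * (fc - 1.0 / (2 * s0) - 1.0 / (2 * st))
    return t / denom if denom > 0 else np.inf     # drift below sampling noise

# When observed drift barely exceeds sampling noise, the estimate (and its
# upper confidence limit) runs to infinity, which is why the finite upper
# limits reported above are a notable result.
```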
Abstract:
The Queensland Great Barrier Reef line fishery in Australia is regulated via a range of input and output controls, including minimum size limits, daily catch limits and commercial catch quotas. As a result of these measures a substantial proportion of the catch is released or discarded. The fate of these released fish is uncertain, but hook-related mortality can potentially be decreased by using hooks that reduce the rates of injury, bleeding and deep hooking. There is also the potential to reduce the capture of non-target species through gear selectivity. A total of 1053 individual fish representing five target species and three non-target species were caught using six hook types comprising three hook patterns (non-offset circle, J and offset circle), each in two sizes (small, 4/0 or 5/0, and large, 8/0). Catch rates for each of the hook patterns and sizes varied between species, with no consistent results for target or non-target species. When data for all of the fish species were aggregated, there was a trend for larger hooks, J hooks and offset circle hooks to cause a greater number of injuries. Using larger hooks was more likely to result in bleeding, although this trend was not statistically significant. Larger hooks were also more likely to foul-hook fish or hook fish in the eye. There was a reduction in the rates of injury and bleeding for both target and non-target species when using the smaller hook sizes. For a number of species included in our study the incidence of deep hooking decreased when using non-offset circle hooks; however, these results were not consistent across all species. Our results highlight the variability in hook performance across a range of tropical demersal finfish species. The most obvious conservation benefits for both target and non-target species arise from using smaller hooks and non-offset circle hooks. Fishers should be encouraged to use these hook configurations to reduce the potential for post-release mortality of released fish.
Abstract:
The number of bidders, N, involved in a construction procurement auction is known to have an important effect on the value of the lowest bid and on the mark-up applied by bidders. In practice, for example, it is important for a bidder to have a good estimate of N when bidding for a current contract. One approach, instigated by Friedman in 1956, is to make such an estimate by statistical analysis and modelling. Since then, however, finding a suitable model for N has been an enduring problem for researchers and, despite intensive research activity in the subsequent thirty years, little progress has been made, due principally to the absence of new ideas and perspectives. This paper resumes the debate by checking old assumptions, providing new evidence relating to concomitant variables and proposing a new model. To assure universality, a novel approach is developed and tested using a unique set of twelve construction tender databases from four continents. This shows that the new model provides a significant advance on previous versions. Several new research questions are also posed and other approaches identified for future study.
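The statistical-modelling route to N can be illustrated by fitting a candidate count distribution to past auction data; a minimal sketch fitting a Poisson model by maximum likelihood (the bidder counts below are invented, and real tender data are often over-dispersed, favouring a negative binomial or Friedman-style model instead):

```python
import numpy as np
from scipy import stats

# Hypothetical numbers of bidders observed in past auctions of similar size.
n_bidders = np.array([3, 5, 4, 6, 5, 7, 4, 5, 6, 8, 5, 4])

# The Poisson MLE for the rate is simply the sample mean.
lam = n_bidders.mean()
loglik = stats.poisson.logpmf(n_bidders, lam).sum()
print(f"Poisson fit: lambda = {lam:.2f}, log-likelihood = {loglik:.2f}")

# Predicted probability that the next comparable auction draws 5 bidders:
print(f"P(N = 5) = {stats.poisson.pmf(5, lam):.3f}")
```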