135 results for "Small Sample"
Abstract:
We discuss the creation of entanglement between two two-level atoms in the dissipative process of spontaneous emission. It is shown that spontaneous emission can lead to a transient entanglement between the atoms even if the atoms were prepared initially in an unentangled state. The amount of entanglement created in the system is quantified by using two different measures: concurrence and negativity. We find analytical formulae for the evolution of concurrence and negativity in the system. We also find the analytical relation between the two measures of entanglement. The system consists of two two-level atoms which are separated by an arbitrary distance r(12) and interact with each other via the dipole-dipole interaction, and the antisymmetric state of the system is included throughout, even for small interatomic separations, in contrast to the small-sample model. It is shown that for sufficiently large values of the dipole-dipole interaction initially the entanglement exhibits oscillatory behaviour with considerable entanglement in the peaks. For longer times the amount of entanglement is directly related to the population of the slowly decaying antisymmetric state.
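Both entanglement measures named in the abstract are straightforward to evaluate numerically for a two-qubit state. A minimal sketch (function names and the negativity normalisation are my own, not the paper's; the negativity is scaled so that it equals 1 for a maximally entangled state, matching the concurrence on pure states, in the spirit of the analytical relation the abstract mentions):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy                      # spin-flipped state
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde)))
    lam = np.sort(lam)[::-1]                              # descending
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def negativity(rho):
    """Negativity: twice the summed magnitude of the negative eigenvalues
    of the partial transpose (normalised so a Bell state gives 1)."""
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    eig = np.linalg.eigvalsh(rho_pt)
    return float(2.0 * np.abs(eig[eig < 0]).sum())

# unentangled product state |00>
psi_prod = np.array([1.0, 0.0, 0.0, 0.0])
rho_prod = np.outer(psi_prod, psi_prod.conj())

# maximally entangled symmetric state (|01> + |10>)/sqrt(2)
psi_sym = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
rho_sym = np.outer(psi_sym, psi_sym.conj())
```

For the symmetric state both measures give 1; for the product state both give 0, consistent with the pure-state relation between the two measures.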
Abstract:
A new, fast, continuous flow technique is described for the simultaneous determination of delta(33)S and delta(34)S using SO masses 48, 49 and 50. Analysis time is approximately 5 min/sample with measurement precision and accuracy better than +/-0.3 parts per thousand. This technique, which has been set up using IAEA Ag2S standards S-1, S-2 and S-3, allows for the fast determination of mass-dependent or mass-independent fractionation (MIF) effects in sulfide, organic sulfur samples and possibly sulfate. Small sample sizes can be analysed directly, without chemical pre-treatment. Robustness of the technique for natural versus artificial standards was demonstrated by analysis of a Canon Diablo troilite, which gave a delta(33)S of 0.04 parts per thousand and a delta(34)S of -0.06 parts per thousand compared to the values obtained for S-1 of 0.07 parts per thousand and -0.20 parts per thousand, respectively. Two pyrite samples from a banded-iron formation from the 3710 Ma Isua Greenstone Belt were analysed using this technique and yielded MIF (Delta(33)S of 2.45 and 3.31 parts per thousand) comparable to pyrite previously analysed by secondary ion probe. Copyright (C) 2004 John Wiley & Sons, Ltd.
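The mass-independent anomaly Delta(33)S quoted above is conventionally derived from the two delta values. A sketch under the usual convention (the 0.515 exponent is the standard mass-dependent fractionation exponent, assumed here rather than taken from the paper):

```python
def cap_delta_33S(d33, d34, exponent=0.515):
    """Mass-independent sulfur anomaly Delta(33)S in parts per thousand,
    using the common exponential mass-fractionation law. The 0.515
    exponent is the usual mass-dependent value (an assumption; the
    paper's exact convention may differ)."""
    return d33 - 1000.0 * ((1.0 + d34 / 1000.0) ** exponent - 1.0)

# Mass-dependent samples plot on the fractionation line (Delta(33)S ~ 0),
# while MIF-bearing samples such as the Isua pyrites deviate from it.
delta_troilite = cap_delta_33S(0.04, -0.06)   # Canon Diablo values above
```

With the troilite values quoted in the abstract, Delta(33)S comes out close to zero, i.e. mass-dependent, in contrast to the 2.45 and 3.31 parts per thousand anomalies of the Isua pyrites.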
Abstract:
Resistance training has been shown to be the most effective exercise mode to induce anabolic adaptations in older men and women. Advances in imaging techniques and histochemistry have increased the ability to detect such changes, confirming the high level of adaptability that remains in aging skeletal muscle. This brief review presents a summary of the resistance-training studies that directly compare chronic anabolic responses to training in older (> 60 years) men and women. Sixteen studies are summarized, most of which indicate similar relative anabolic responses between older men and women after resistance training. The relatively small sample sizes in most of the interventions limited their ability to detect significant sex differences, a limitation that should be considered when interpreting these studies. Future research should incorporate larger sample sizes with multiple measurement time points for anabolic responses.
Abstract:
There are at least two reasons for a symmetric, unimodal, diffuse-tailed hyperbolic secant distribution to be interesting in real-life applications. It displays one of the common types of non-normality in natural data and is closely related to the logistic and Cauchy distributions that often arise in practice. To test the difference in location between two hyperbolic secant distributions, we develop a simple linear rank test with trigonometric scores. We investigate the small-sample and asymptotic properties of the test statistic and provide tables of the exact null distribution for small sample sizes. We compare the test to the Wilcoxon two-sample test and show that, although the asymptotic powers of the tests are comparable, the present test has certain practical advantages over the Wilcoxon test.
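The abstract does not give the paper's exact trigonometric scores, so the sketch below only illustrates the general recipe of a two-sample linear rank test, with a placeholder sine score standing in for the paper's scores and a permutation p-value standing in for the exact small-sample tables:

```python
import numpy as np

def linear_rank_stat(x, y, score):
    """Linear rank statistic: sum of score values at the ranks of
    sample x within the pooled sample (continuous data, no ties)."""
    pooled = np.concatenate([x, y])
    ranks = pooled.argsort().argsort() + 1        # 1-based ranks
    s = score(ranks / (len(pooled) + 1.0))        # scores on (0, 1)
    return s[:len(x)].sum()

def perm_pvalue(x, y, score, n_perm=2000, rng=None):
    """Two-sided permutation p-value for the linear rank statistic."""
    rng = np.random.default_rng(rng)
    obs = linear_rank_stat(x, y, score)
    pooled = np.concatenate([x, y])
    m = len(x)
    stats = np.array([linear_rank_stat(*np.split(rng.permutation(pooled), [m]),
                                       score) for _ in range(n_perm)])
    return float(np.mean(np.abs(stats - stats.mean()) >= abs(obs - stats.mean())))

# placeholder trigonometric score (NOT the paper's exact scores)
sine_score = lambda u: np.sin(np.pi * (u - 0.5))

rng = np.random.default_rng(0)
x = rng.standard_normal(30)          # two samples differing by a location shift
y = rng.standard_normal(30) + 2.0
p = perm_pvalue(x, y, sine_score, rng=1)
```

A clear location shift like the one above yields a small p-value; with identical distributions the statistic stays near the centre of its permutation distribution.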
Abstract:
Subsequent to the influential paper of [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of the [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054] J-test of over-identifying restrictions. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias. (c) 2006 Elsevier B.V. All rights reserved.
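The small-sample bias in the estimated speed of mean reversion that the paper documents for GMM can be illustrated with a much simpler stand-in: OLS on an Euler-discretised Vasicek process (all parameter values below are illustrative choices, not the paper's, and OLS here is a stand-in for GMM, not the paper's estimator):

```python
import numpy as np

def simulate_vasicek(kappa, theta, sigma, r0, n, dt, rng):
    """Euler discretisation of dr = kappa*(theta - r) dt + sigma dW."""
    r = np.empty(n + 1)
    r[0] = r0
    shocks = rng.standard_normal(n) * sigma * np.sqrt(dt)
    for t in range(n):
        r[t + 1] = r[t] + kappa * (theta - r[t]) * dt + shocks[t]
    return r

def estimate_kappa(r, dt):
    """Mean-reversion speed from the regression dr[t] = a + b*r[t] + e,
    so kappa_hat = -b/dt."""
    b, a = np.polyfit(r[:-1], np.diff(r), 1)
    return -b / dt

rng = np.random.default_rng(0)
kappa_true, dt = 0.2, 1.0 / 12.0
estimates = [estimate_kappa(simulate_vasicek(kappa_true, 0.05, 0.02, 0.05,
                                             500, dt, rng), dt)
             for _ in range(300)]
bias = float(np.mean(estimates)) - kappa_true   # positive: reversion overstated
```

Even with roughly forty years of monthly data, the estimated speed of mean reversion is clearly biased upward, echoing the paper's warning about drawing strong conclusions on mean reversion from finite samples.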
Abstract:
The discovery and interpretation of microscopic residues on stone artefacts is an expanding front within archaeological science, allowing reconstructions of the past use of specific tools. With notable exceptions, however, the field has seen little theoretical development, relying largely on a rationale in which either individual findings are widely generalized or the age of the site determines the importance of the results. Here an approach to residue interpretation is proposed that draws on notions of narrative, scale, action and agency as one means of expanding the theoretical scope and application of residue studies. It is suggested that the individual resonance of the findings of residue analyses with people in the present day can be used to provide a more nuanced understanding of past actions, which in turn allows both better integration and communication of those findings within and outside the archaeological community, and begins to overcome the problems associated with the typically small sample sizes analysed in stone-tool residue studies.
Abstract:
As part of a longitudinal study of the epidemiology of rabbit haemorrhagic disease virus (RHDV) in New Zealand, serum samples were obtained from trapped feral animals that may have consumed European rabbit (Oryctolagus cuniculus) carcasses (non-target species). During a 21-month period when RHDV infection was monitored in a defined wild rabbit population, 16 feral house cats (Felis catus), 11 stoats (Mustela erminea), four ferrets (Mustela furo) and 126 hedgehogs (Erinaceus europaeus) were incidentally captured in the rabbit traps. The proportions of samples that were seropositive to RHDV were 38% for cats, 18% for stoats, 25% for ferrets and 4% for hedgehogs. Seropositive non-target species were trapped in April 2000, in the absence of an overt epidemic of rabbit haemorrhagic disease (RHD) in the rabbit population, but evidence of recent infection in rabbits was shown. Seropositive non-target species were found up to 2.5 months before and 1 month after this RHDV activity in wild rabbits was detected. Seropositive predators were also trapped on the site between 1 and 4.5 months after a dramatic RHD epidemic in February 2001. This study has shown that high antibody titres can be found in non-target species when there is no overt evidence of RHDV infection in the rabbit population, although a temporal relationship could not be assessed statistically owing to the small sample sizes. Predators and scavengers might be able to contribute to localised spread of RHDV through their movements.
Abstract:
The Fornax Spectroscopic Survey will use the Two-degree Field spectrograph (2dF) of the Anglo-Australian Telescope to obtain spectra for a complete sample of all 14000 objects with 16.5 less than or equal to b(j) less than or equal to 19.7 in a 12 square degree area centred on the Fornax Cluster. The aims of this project include the study of dwarf galaxies in the cluster (both known low surface brightness objects and putative normal surface brightness dwarfs) and a comparison sample of background field galaxies. We will also measure quasars and other active galaxies, any previously unrecognised compact galaxies and a large sample of Galactic stars. By selecting all objects, both stars and galaxies, independent of morphology, we cover a much larger range of surface brightness and scale size than previous surveys. In this paper we first describe the design of the survey. Our targets are selected from UK Schmidt Telescope sky survey plates digitised by the Automated Plate Measuring (APM) facility. We then describe the photometric and astrometric calibration of these data and show that the APM astrometry is accurate enough for use with the 2dF. We also describe a general approach to object identification using cross-correlations which allows us to identify and classify both stellar and galaxy spectra. We present results from the first 2dF field. Redshift distributions and velocity structures are shown for all observed objects in the direction of Fornax, including Galactic stars, galaxies in and around the Fornax Cluster, and the background galaxy population. The velocity data for the stars show the contributions from the different Galactic components, plus a small tail to high velocities. We find no galaxies in the foreground to the cluster in our 2dF field. The Fornax Cluster is clearly defined kinematically. The mean velocity from the 26 cluster members having reliable redshifts is 1560 +/- 80 km s(-1). They show a velocity dispersion of 380 +/- 50 km s(-1). Large-scale structure can be traced behind the cluster to a redshift beyond z = 0.3. Background compact galaxies and low surface brightness galaxies are found to follow the general galaxy distribution.
Abstract:
Background: Twin and family studies have shown that genetic effects explain a relatively high amount of the phenotypic variation in blood pressure. However, many studies have not been able to replicate findings of association between specific polymorphisms and diastolic and systolic blood pressure. Methods: In a structural equation-modelling framework the authors investigated longitudinal changes in repeated measures of blood pressure in a sample of 298 like-sexed twin pairs from the population-based Swedish Twin Registry. Also examined was the association between blood pressure and polymorphisms in the angiotensin-I converting enzyme and the angiotensin II receptor type 1 with the 'Fulker' test. Both linkage and association were tested simultaneously, revealing whether the polymorphism is a Quantitative Trait Locus (QTL) or in linkage disequilibrium with the QTL. Results: Genetic influences explained up to 46% of the phenotypic variance in diastolic and 63% of the phenotypic variance in systolic blood pressure. Genetic influences were stable over time and contributed up to 78% of the phenotypic correlation in both diastolic and systolic blood pressure. Non-shared environmental effects were characterised by time-specific influences and little transmission from one time point to the next. There was no significant linkage and association between the polymorphisms and blood pressure. Conclusions: There is considerable genetic stability in both diastolic and systolic blood pressure over a 6-year period in adult life. Non-shared environmental influences have a small long-term effect. Although associations with the polymorphisms could not be replicated, the results should be interpreted with caution due to power considerations. (C) 2002 Lippincott Williams & Wilkins.
Abstract:
Reports of substantial evidence for genetic linkage of schizophrenia to chromosome 1q were evaluated by genotyping 16 DNA markers across 107 centimorgans of this chromosome in a multicenter sample of 779 informative schizophrenia pedigrees. No significant evidence was observed for such linkage, nor for heterogeneity in allele sharing among the eight individual samples. Separate analyses of European-origin families, recessive models of inheritance, and families with larger numbers of affected cases also failed to produce significant evidence for linkage. If schizophrenia susceptibility genes are present on chromosome 1q, their population-wide genetic effects are likely to be small.
Abstract:
Absolute calibration relates the measured (arbitrary) intensity to the differential scattering cross section of the sample, which contains all of the quantitative information specific to the material. The importance of absolute calibration in small-angle scattering experiments has long been recognized. This work details the absolute calibration procedure of a small-angle X-ray scattering instrument from Bruker AXS. The absolute calibration presented here was achieved by using a number of different types of primary and secondary standards. The samples were: a glassy carbon specimen, which had been independently calibrated from neutron radiation; a range of pure liquids, which can be used as primary standards as their differential scattering cross section is directly related to their isothermal compressibility; and a suspension of monodisperse silica particles for which the differential scattering cross section is obtained from Porod's law. Good agreement was obtained between the different standard samples, provided that care was taken to obtain significant signal averaging and all sources of background scattering were accounted for. The specimen best suited for routine calibration was the glassy carbon sample, due to its relatively intense scattering and stability over time; however, initial calibration from a primary source is necessary. Pure liquids can be used as primary calibration standards, but the measurements take significantly longer and are, therefore, less suited for frequent use.
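The pure-liquid primary standard mentioned above rests on the relation between a liquid's forward scattering and its isothermal compressibility. A sketch with approximate literature values for water (the numerical values and function names are my assumptions, not figures from the paper):

```python
# dSigma/dOmega = r_e^2 * rho_e^2 * k_B * T * kappa_T
# (X-ray cross section of a pure liquid from its isothermal compressibility)
r_e = 2.818e-13     # classical electron radius, cm
k_B = 1.381e-23     # Boltzmann constant, J/K

def liquid_cross_section(rho_e, T, kappa_T):
    """Absolute differential scattering cross section in cm^-1.
    rho_e: electron density (cm^-3), T (K), kappa_T (Pa^-1)."""
    return r_e**2 * rho_e**2 * k_B * T * kappa_T * 1e6   # m^3 -> cm^3

# approximate literature values for water near 20 degC (assumed):
# electron density ~3.34e23 cm^-3, compressibility ~4.58e-10 Pa^-1
dsdo_water = liquid_cross_section(3.34e23, 293.15, 4.58e-10)

def calibration_constant(I_measured_std, dsdo_std):
    """Scale factor K with I_abs = K * I_measured, from any standard
    (e.g. glassy carbon) whose absolute cross section is known."""
    return dsdo_std / I_measured_std
```

With these inputs the water cross section comes out around 0.016 cm^-1, the order of magnitude usually quoted for water, which is why such measurements need long counting times, as the abstract notes.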
Abstract:
In natural estuaries, contaminant transport is driven by turbulent momentum mixing. Scalar dispersion can rarely be predicted accurately because of a lack of fundamental understanding of the turbulence structure in estuaries. Herein, detailed turbulence field measurements were conducted at high frequency and continuously for up to 50 hours per investigation in a small subtropical estuary with semi-diurnal tides. Acoustic Doppler velocimetry was deemed the most appropriate measurement technique for such small estuarine systems with shallow water depths (less than 0.5 m at low tides), and a thorough post-processing technique was applied. The estuarine flow is always a fluctuating process. The bulk flow parameters fluctuated with periods comparable to tidal cycles and other large-scale processes, but turbulence properties depended upon the instantaneous local flow properties. They were little affected by the flow history, but their structure and temporal variability were influenced by a variety of mechanisms. This resulted in behaviour which deviated from that of an equilibrium turbulent boundary layer induced by velocity shear alone. A striking feature of the data sets is the large fluctuations in all turbulence characteristics during the tidal cycle. This feature has rarely been documented; an important difference between the data sets used in this study and earlier reported measurements is that the present data were collected continuously at high frequency over relatively long periods. The findings shed new light on the fluctuating nature of momentum exchange coefficients and integral time and length scales. These turbulent properties should not be assumed constant.
Abstract:
High-resolution measurements of velocity and physico-chemistry were conducted before, during and after the passage of a transient front in a small subtropical system about 2.1 km upstream of the river mouth. Detailed acoustic Doppler velocimetry measurements, conducted continuously at 25 Hz, showed the existence of transverse turbulent shear from 300 s prior to the front passage until 1300 s after it. This was associated with an increased level of suspended sediment concentration fluctuations, some transverse shear next to the bed and a surface temperature anomaly.
Abstract:
The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices and has never before been applied to eigenanalysis for power system small-signal stability. This paper analyzes the differences between the BR and QR algorithms, comparing performance in terms of CPU time (based on stopping criteria) and storage requirements. The BR algorithm uses accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce the computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis tasks for 39-, 68-, 115-, 300-, and 600-bus systems. The experimental results suggest that the BR algorithm is the more efficient algorithm for large-scale power system small-signal stability eigenanalysis.
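The BR algorithm itself is not available in standard numerical libraries, but the small-signal eigenanalysis task it accelerates can be sketched with NumPy's QR-based solver as a baseline (the state matrix below is a toy illustration, not one of the paper's bus systems):

```python
import numpy as np

def small_signal_modes(A):
    """Eigenanalysis of a linearised power-system state matrix A.
    Returns eigenvalues, damping ratios, and a stability flag.
    Uses NumPy's QR-based LAPACK solver as a stand-in for BR."""
    lam = np.linalg.eigvals(A)
    zeta = -lam.real / np.abs(lam)          # damping ratio of each mode
    stable = bool(np.all(lam.real < 0))     # all modes in the left half-plane
    return lam, zeta, stable

# toy oscillatory mode: eigenvalues -0.5 +/- 10j (illustrative only)
A = np.array([[ -0.5, 10.0],
              [-10.0, -0.5]])
lam, zeta, stable = small_signal_modes(A)
```

The real parts give the mode decay rates and the damping ratios flag poorly damped oscillations; a BR-based solver would produce the same spectrum with less work on the narrowly banded Hessenberg forms the paper targets.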