990 results for Classical Information
Abstract:
A quantum critical point (QCP) is a singularity in the phase diagram arising because of quantum mechanical fluctuations. The exotic properties of some of the most enigmatic physical systems, including unconventional metals and superconductors, quantum magnets and ultracold atomic condensates, have been related to the importance of critical quantum and thermal fluctuations near such a point. However, direct and continuous control of these fluctuations has been difficult to realize, and complete thermodynamic and spectroscopic information is required to disentangle the effects of quantum and classical physics around a QCP. Here we achieve this control in a high-pressure, high-resolution neutron scattering experiment on the quantum dimer material TlCuCl3. By measuring the magnetic excitation spectrum across the entire quantum critical phase diagram, we illustrate the similarities between quantum and thermal melting of magnetic order. We prove the critical nature of the unconventional longitudinal (Higgs) mode of the ordered phase by damping it thermally. We demonstrate the development of two types of criticality, quantum and classical, and use their static and dynamic scaling properties to conclude that quantum and thermal fluctuations can behave largely independently near a QCP.
Abstract:
The hindsight bias is the tendency of people to falsely believe that they would have predicted the outcome of an event once the outcome is known. Two experiments are presented that show a reduction, or even a reversal, of the hindsight bias when the outcome information is self-threatening for the participants. Participants read a report of an interaction between a man and a woman that ended with different outcomes: the woman was raped vs. the woman was not raped vs. no outcome information was given. Results of the first experiment indicated that especially female participants who did not accept rape myths showed a reversed hindsight bias when they received the rape outcome information. The more threatening the rape outcome had been, the lower was their estimated likelihood of rape. Results of the second experiment confirmed those of the first. Female participants who did not accept rape myths and perceived themselves as highly similar to the victim showed a strong reversed hindsight bias when threatened by the rape outcome, whereas female participants who did believe in rape myths and were not similar to the victim showed a classical hindsight bias. These effects are interpreted in terms of self-serving or in-group-serving functions of the hindsight bias: participants deny the foreseeability of a self-threatening outcome as a means of self-protection, even when they are not personally affected by the negative information but a member of their group is.
Abstract:
In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences in genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously creates a multiple-testing problem and yields false-positive results. Although this problem can be dealt with effectively through several approaches, such as Bonferroni correction, permutation testing and false discovery rates, patterns of the joint effects of several genes, each with a weak effect, may not be detectable. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. An exhaustive search of all SNP subsets is computationally infeasible for the millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset in large data sets where the number of feature SNPs far exceeds the number of observations.

In this study, we take two steps to achieve this goal. First, we selected 1000 SNPs through an effective filter method; then we performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis in terms of classification performance. Finally, we performed chi-square tests to examine the relationship between each SNP and disease from another point of view.

In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through linear discriminant analysis (LDA) is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that an exhaustive search of small subsets, with one SNP, two SNPs, or three-SNP subsets based on the best 100 composite 2-SNPs, can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of SNP subsets. Although sequential forward floating selection can be applied to avoid the nesting effect of forward selection, it does not always outperform the latter, owing to overfitting from observing more complex subset states.

Our results also indicate that HMSS, as a criterion for evaluating the classification ability of a function, can be used on imbalanced data without modifying the original dataset, in contrast to classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and that its ability to detect the target status is superior to traditional LDA in this study.

From our results, the best test probability-HMSS for predicting CVD, stroke, CAD and psoriasis through sIB is 0.59406, 0.641815, 0.645315 and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls reaches 0.708999, 0.863216, 0.639918 and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be not less than 0.4. On the other hand, the highest test accuracy of sIB for diagnosing disease among cases reaches 0.748644, 0.789916, 0.705701 and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.

A further genome-wide association study through chi-square testing shows that no significant SNPs are detected at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. Study results in WTCCC detect only two significant SNPs associated with CAD. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy are also significantly associated with the disease through the chi-square test at the cut-off value 1.11E-07.

Although our classification methods can achieve high accuracy in the study, complete descriptions of those classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or an efficient computing system, neither of which can currently be accomplished in our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability; SNPs with good discriminant power are not necessarily causal markers for the disease.
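The HMSS criterion used throughout the abstract above is simply the harmonic mean of sensitivity and specificity, which drops toward zero whenever either class is predicted poorly. A minimal sketch (the function name and the toy confusion-matrix counts are ours, not the thesis's):

```python
def hmss(tp, fn, tn, fp):
    """Harmonic mean of sensitivity and specificity (HMSS).

    Unlike plain accuracy, HMSS is low whenever either class is
    predicted poorly, so it stays informative on imbalanced data.
    """
    sensitivity = tp / (tp + fn)   # true-positive rate among cases
    specificity = tn / (tn + fp)   # true-negative rate among controls
    if sensitivity + specificity == 0.0:
        return 0.0
    return 2 * sensitivity * specificity / (sensitivity + specificity)

# A classifier that labels everything "control" on a 10/90 split:
# plain accuracy is 0.9, but HMSS exposes the failure.
print(hmss(tp=0, fn=10, tn=90, fp=0))   # 0.0
print(hmss(tp=8, fn=2, tn=80, fp=10))   # 16/19 ≈ 0.842
```

This is why, as the abstract notes, HMSS can be used on imbalanced data without resampling or otherwise modifying the original dataset.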
Abstract:
Management of certain populations requires the preservation of a pure genetic background. When, for different reasons, undesired alleles are introduced, the original genetic composition must be recovered. The present study tested, through computer simulations, the power of recovery (the ability to remove the foreign information) from genealogical data. Simulated scenarios comprised different numbers of exogenous individuals taking part in the founder population and different numbers of unmanaged generations before the removal program started. Strategies were based on variables arising from classical pedigree analyses, such as founders' contributions and partial coancestry. The efficiency of the different strategies was measured as the proportion of native genetic information remaining in the population. Consequences for the inbreeding and coancestry levels of the population were also evaluated. Minimisation of the exogenous founders' contributions was the most powerful method, removing the largest amount of foreign genetic information in just one generation. However, as a side effect, it led to the highest values of inbreeding. Scenarios with a large amount of initial exogenous alleles (i.e. a high percentage of non-native founders), or with many generations of mixing, became very difficult to recover, pointing out the importance of being careful about introgression events in populations.
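The founders'-contribution variable behind the most powerful strategy can be computed directly from genealogical data: each individual's expected contribution from a founder is the mean of its parents' contributions. A minimal sketch with a hypothetical toy pedigree (the study's actual simulations are far richer):

```python
def founder_contributions(pedigree):
    """Expected genetic contribution of each founder to each individual.

    `pedigree` maps individual -> (sire, dam); founders have (None, None).
    A founder contributes 1 to itself; otherwise the contribution of
    founder f to individual i is the mean of f's contributions to i's
    parents. Assumes the dict is ordered so parents precede offspring.
    """
    contrib = {}
    for ind, (sire, dam) in pedigree.items():
        if sire is None and dam is None:          # founder
            contrib[ind] = {ind: 1.0}
        else:
            c = {}
            for parent in (sire, dam):
                for f, v in contrib[parent].items():
                    c[f] = c.get(f, 0.0) + 0.5 * v
            contrib[ind] = c
    return contrib

# Toy pedigree: F1, F2 native founders, X1 exogenous. Preferring
# breeders with the smallest X1 entry is the "minimise exogenous
# founders' contributions" strategy described above.
ped = {"F1": (None, None), "F2": (None, None), "X1": (None, None),
       "A": ("F1", "X1"), "B": ("F2", "X1"), "C": ("A", "B")}
print(founder_contributions(ped)["C"])  # {'F1': 0.25, 'X1': 0.5, 'F2': 0.25}
```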
Abstract:
Classical Guitar Music in Printed Collections is a new, open-access, online index to the contents of published score collections for classical guitar. Its interlinked, alphabetized lists allow one to find a composition by title or composer, to discover what score collections include that piece, to see what other works are included in each collection identified, and to locate a copy in a library collection. Accuracy of identification is guaranteed by incipit images of each work. The article discusses how this index differs from existing bibliographies of the classical guitar literature, its structure and design, and technical details of its publication.
New Approaches for Teaching Soil and Rock Mechanics Using Information and Communication Technologies
Abstract:
Soil and rock mechanics are disciplines with a strong conceptual and methodological basis. When engineering students first study these subjects, they have to understand new theoretical phenomena, which are explained through mathematical and/or physical laws (e.g. the consolidation process, water flow through porous media). In addition to the study of these phenomena, students have to learn how to estimate soil and rock parameters in laboratories according to standard tests. Nowadays, information and communication technologies (ICTs) provide a unique opportunity to improve the learning process of students studying these subjects. In this paper, we describe our experience of incorporating ICTs into the classical teaching-learning process of soil and rock mechanics and explain in detail how we have successfully developed various initiatives, which, in summary, are: (a) implementation of an online social networking and microblogging service (using Twitter) for gradually sending key concepts to students throughout the semester (gradual learning); (b) detailed online virtual laboratory tests for a delocalized development of lab practices (self-learning); (c) integration of different complementary learning resources (e.g. videos, free software, technical regulations) on an open webpage. The use of these ICT resources as a complement to the classical teaching-learning process has been highly satisfactory for students, who have positively evaluated this new approach.
Abstract:
Context. Classical supergiant X-ray binaries (SGXBs) and supergiant fast X-ray transients (SFXTs) are two types of high-mass X-ray binaries (HMXBs) that have similar donors but, at the same time, show very different behavior in the X-rays. The reason for this dichotomy of wind-fed HMXBs is still a matter of debate. Among the several explanations that have been proposed, some invoke specific stellar wind properties of the donor stars. Only dedicated empirical analysis of the donors' stellar winds can provide the information required to adequately test these theories. However, such analyses are scarce. Aims. To close this gap, we perform a comparative analysis of the optical companion in two important systems: IGR J17544-2619 (SFXT) and Vela X-1 (SGXB). We analyze the spectra of each star in detail and derive their stellar and wind properties. As a next step, we compare the wind parameters, giving us an excellent chance of recognizing key differences between donor winds in SFXTs and SGXBs. Methods. We use archival infrared, optical and ultraviolet observations, and analyze them with the non-local thermodynamic equilibrium (NLTE) Potsdam Wolf-Rayet model atmosphere code. We derive the physical properties of the stars and their stellar winds, accounting for the influence of X-rays on the stellar winds. Results. We find that the stellar parameters derived from the analysis generally agree well with the spectral types of the two donors: O9I (IGR J17544-2619) and B0.5Iae (Vela X-1). The distances to the sources have been revised and also agree well with the estimates already available in the literature. In IGR J17544-2619 we are able to narrow the uncertainty to d = 3.0 ± 0.2 kpc. From the stellar radius of the donor and its X-ray behavior, the eccentricity of IGR J17544-2619 is constrained to e < 0.25. The derived chemical abundances point to some mixing during the lifetimes of the donors.
An important difference between the stellar winds of the two stars is their terminal velocities (ν∞ = 1500 km s⁻¹ in IGR J17544-2619 and ν∞ = 700 km s⁻¹ in Vela X-1), which have important consequences for the X-ray luminosity of these sources. Conclusions. The donors of IGR J17544-2619 and Vela X-1 have similar spectral types as well as similar parameters that physically characterize them and their spectra. In addition, the orbital parameters of the systems are similar too, with nearly circular orbits and short orbital periods. However, they show moderate differences in their stellar wind velocities and in the spin periods of their neutron stars, which have a strong impact on the X-ray luminosity of the sources. This specific combination of wind speed and pulsar spin favors an accretion regime with a persistently high luminosity in Vela X-1, while it favors an inhibiting accretion mechanism in IGR J17544-2619. Our study demonstrates that the relative wind velocity is critical in determining the class of HMXBs hosting a supergiant donor, given that, combined with other parameters, it may shift the accretion mechanism from direct accretion to propeller regimes.
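The strong leverage of the wind terminal velocity on X-ray luminosity follows from standard Bondi-Hoyle-Lyttleton wind accretion, in which the accretion rate scales as the inverse cube of the relative wind-neutron-star velocity. A rough illustrative sketch, not the paper's analysis: the wind density, orbital velocity, and neutron-star mass below are assumed round numbers, and only the two measured terminal velocities come from the abstract.

```python
import math

G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
M_NS = 1.4 * 1.989e30     # assumed canonical neutron-star mass, kg

def bondi_hoyle_rate(rho, v_wind, v_orb):
    """Bondi-Hoyle-Lyttleton wind accretion rate (kg/s):
    Mdot = 4*pi*(G*M)^2 * rho / v_rel^3, with the relative velocity
    combining wind and orbital motion: v_rel^2 = v_wind^2 + v_orb^2."""
    v_rel = math.hypot(v_wind, v_orb)
    return 4.0 * math.pi * (G * M_NS) ** 2 * rho / v_rel ** 3

# Same assumed wind density and a similar short-period orbit for both
# systems; only the terminal velocities differ (700 vs 1500 km/s).
rho = 1e-14        # kg/m^3, illustrative only
v_orb = 250e3      # m/s, illustrative only
ratio = (bondi_hoyle_rate(rho, 700e3, v_orb)      # Vela X-1
         / bondi_hoyle_rate(rho, 1500e3, v_orb))  # IGR J17544-2619
print(f"accretion-rate ratio (Vela X-1 / IGR J17544-2619) ~ {ratio:.1f}")
```

Under these assumptions the slower wind alone already boosts the captured mass rate by nearly an order of magnitude, before any spin/propeller effects are considered.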
Abstract:
We show that deterministic quantum computing with a single bit can determine whether the classical limit of a quantum system is chaotic or integrable using O(N) physical resources, where N is the dimension of the Hilbert space of the system under study. This is a square-root improvement over all known classical procedures. Our study relies strictly on the random matrix conjecture. We also present numerical results for the nonlinear kicked top.
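The random matrix conjecture invoked above ties chaos to Wigner-Dyson level-spacing statistics (level repulsion) and integrability to Poisson statistics. That classical signature can be illustrated by sampling the two nearest-neighbour spacing laws directly; this is a stdlib sketch of the diagnostic, not of the DQC1 algorithm itself:

```python
import math
import random

random.seed(0)
N = 100_000

def wigner_spacing():
    """Sample the Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4),
    the spacing law of chaotic (GOE-like) spectra, by inverting its
    CDF 1 - exp(-pi s^2 / 4)."""
    return math.sqrt(-4.0 * math.log(1.0 - random.random()) / math.pi)

def poisson_spacing():
    """Sample P(s) = exp(-s), the spacing law of integrable spectra."""
    return -math.log(1.0 - random.random())

def frac_small(draw, cut=0.1):
    """Empirical probability of a level spacing below `cut`."""
    return sum(draw() < cut for _ in range(N)) / N

# Level repulsion: a chaotic spectrum has almost no tiny gaps.
# Theory: Wigner CDF(0.1) = 1 - exp(-pi*0.01/4) ≈ 0.0078,
#         Poisson CDF(0.1) = 1 - exp(-0.1)      ≈ 0.0952.
print("chaotic   ", frac_small(wigner_spacing))
print("integrable", frac_small(poisson_spacing))
```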
Abstract:
We provide optimal measurement schemes for estimating relative parameters of the quantum state of a pair of spin systems. We prove that the optimal measurements are joint measurements on the pair of systems, meaning that they cannot be achieved by local operations and classical communication. We also demonstrate that in the limit where one of the spins becomes macroscopic, our results reproduce those that are obtained by treating that spin as a classical reference direction.
Abstract:
In this paper we apply a new method for determining the surface area of carbonaceous materials, using local surface excess isotherms obtained from Grand Canonical Monte Carlo (GCMC) simulation and a concept of area distribution in terms of the energy well-depth of the solid-fluid interaction. The range of well-depths considered in our GCMC simulations is from 10 to 100 K, which is wide enough to cover all the carbon surfaces we dealt with (for comparison, the well-depth for a perfect graphite surface is about 58 K). Given the set of local surface excess isotherms and the differential area distribution, the overall adsorption isotherm can be written in integral form. Thus, given experimental data for nitrogen or argon adsorption on a carbon material, the differential area distribution can be obtained by inversion, using the regularization method. The total surface area is then obtained as the area under this distribution. We test this approach on a number of data sets from the literature and compare our GCMC surface area with that obtained from the classical BET method. In general, we find that the difference between these two surface areas is about 10%, indicating the need for a very consistent method to determine the surface area reliably. We therefore suggest the approach of this paper as an alternative to the BET method, given the long-recognized unrealistic assumptions of BET theory. Besides the surface area, the method also provides the differential area distribution versus well-depth. This information could be used as a microscopic fingerprint of the carbon surface; samples prepared from different precursors and under different activation conditions are expected to have distinct fingerprints. We illustrate this with Cabot BP120, 280 and 460 samples: the differential area distributions obtained from argon adsorption at 77 K and from nitrogen adsorption at 77 K have exactly the same patterns, suggesting that they capture the characteristics of these carbons.
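The inversion step described above, recovering an area distribution from an overall isotherm, amounts to solving a discretized linear integral equation, which Tikhonov regularization stabilizes: minimizing ||Kf - b||² + λ||f||² gives (KᵀK + λI)f = Kᵀb. A self-contained sketch on synthetic Langmuir-like kernels; the paper's kernels come from GCMC, and everything below is illustrative:

```python
import math

def solve(M, y):
    """Gaussian elimination with partial pivoting for M x = y."""
    n = len(y)
    A = [row[:] + [y[i]] for i, row in enumerate(M)]   # augmented copy
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            k = A[r][c] / A[c][c]
            for j in range(c, n + 1):
                A[r][j] -= k * A[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][j] * x[j] for j in range(r + 1, n))) / A[r][r]
    return x

def tikhonov(K, b, lam):
    """Regularized least squares: solve (K^T K + lam I) f = K^T b."""
    m, n = len(K), len(K[0])
    KtK = [[sum(K[k][i] * K[k][j] for k in range(m)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Ktb = [sum(K[k][i] * b[k] for k in range(m)) for i in range(n)]
    return solve(KtK, Ktb)

# Synthetic forward problem: Langmuir-like "local isotherms" for four
# well-depths, a known area distribution f_true, and the resulting
# overall isotherm b = K f_true (noise-free, for illustration).
pressures = [0.01 * 2 ** i for i in range(12)]
depths = [10.0, 40.0, 70.0, 100.0]                     # well-depths, K
kernel = [[p / (p + math.exp(-e / 30.0)) for e in depths] for p in pressures]
f_true = [0.1, 0.5, 0.3, 0.1]
b = [sum(kernel[i][j] * f_true[j] for j in range(4)) for i in range(12)]

f_est = tikhonov(kernel, b, lam=1e-8)   # inverted area distribution
area = sum(f_est)                       # total "surface area" = area under f
print([round(x, 4) for x in f_est])
```

With noisy experimental data, λ must be chosen much larger and the inversion may need a non-negativity constraint; this sketch only shows the algebraic structure of the regularized inversion.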