929 results for scientific computation
Abstract:
It is shown that determining whether a quantum computation has a non-zero probability of accepting is at least as hard as the polynomial-time hierarchy. This hardness result also applies to determining in general whether a given quantum basis state appears with nonzero amplitude in a superposition, or whether a given quantum bit has positive expectation value at the end of a quantum computation. This result is achieved by showing that the complexity class NQP of Adleman, DeMarrais, and Huang, a quantum analog of NP, is equal to the counting class co-C=P.
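As a toy illustration of the quantity the abstract is concerned with (not of the hardness result itself), the sketch below builds a small two-qubit circuit with NumPy and checks exactly whether an "accepting" basis state carries nonzero amplitude; the circuit and the choice of |11> as the accepting state are arbitrary assumptions made for the example.

```python
# Illustrative sketch only: exact amplitude bookkeeping for a tiny circuit.
# The hardness result above concerns this question for general polynomial-size
# quantum computations; the circuit here is an arbitrary two-qubit example.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4); state[0] = 1.0               # start in |00>
state = CNOT @ (np.kron(H, I) @ state)            # Bell-state preparation

accepting_index = 0b11                            # suppose the machine accepts on |11>
amplitude = state[accepting_index]
print("amplitude of |11>:", amplitude)
print("accepts with nonzero probability:", not np.isclose(amplitude, 0.0))
```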
Abstract:
This article introduces ART 2-A, an efficient algorithm that emulates the self-organizing pattern recognition and hypothesis testing properties of the ART 2 neural network architecture, but at a speed two to three orders of magnitude faster. Analysis and simulations show how the ART 2-A systems correspond to ART 2 dynamics at both the fast-learn limit and at intermediate learning rates. Intermediate learning rates permit fast commitment of category nodes but slow recoding, analogous to properties of word frequency effects, encoding specificity effects, and episodic memory. Better noise tolerance is hereby achieved without a loss of learning stability. The ART 2 and ART 2-A systems are contrasted with the leader algorithm. The speed of ART 2-A makes practical the use of ART 2 modules in large-scale neural computation.
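A minimal sketch of the flavour of the dynamics described above: unit-normalised inputs, a dot-product choice function, a vigilance test, and a learning rate that interpolates between slow recoding and the fast-learn limit. The parameter names (rho, beta) and update rule are simplifications for illustration, not the published ART 2-A equations.

```python
# Simplified ART 2-A-style clustering sketch (illustrative, not the exact
# published algorithm). Inputs and category prototypes are unit-normalised;
# rho is the vigilance threshold and beta the learning rate (beta = 1 is the
# fast-learn limit, intermediate beta gives fast commitment but slow recoding).
import numpy as np

def normalize(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def art2a_train(inputs, rho=0.9, beta=0.5):
    prototypes = []
    for x in inputs:
        x = normalize(np.asarray(x, dtype=float))
        if prototypes:
            scores = np.array([p @ x for p in prototypes])
            j = int(np.argmax(scores))
            if scores[j] >= rho:                        # resonance: recode the winner
                prototypes[j] = normalize((1 - beta) * prototypes[j] + beta * x)
                continue
        prototypes.append(x.copy())                     # commit a new category node
    return prototypes

data = [[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0], [0, 0.95, 0.05]]
print(len(art2a_train(data)), "categories learned")
```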
Abstract:
This PhD thesis investigates the potential use of science communication models to engage a broader swathe of actors in decision making on scientific and technological innovation, in order to address possible democratic deficits in science and technology policy-making. A four-pronged research approach has been employed to examine different representations of the public(s) and different modes of engagement. The first case study investigates whether patient groups could represent an alternative, needs-driven approach to biomedical and health sciences R&D. This is followed by an enquiry into the potential for Science Shops to represent a bottom-up approach to promoting research and development of local relevance. The barriers and opportunities for the involvement of scientific researchers in science communication are then investigated via a national survey, comparable to a similar survey conducted in the UK. The final case study investigates to what extent opposition to or support for nanotechnology (as an emerging technology) is reflected amongst the YouTube user community, and the findings are considered in the context of how support for or opposition to new or emerging technologies can be addressed using conflict-resolution-based approaches to manage potential conflict trajectories. The research indicates that the majority of communication exercises of relevance to science policy and planning take the form of a one-way flow of information with little or no facility for public feedback. This thesis proposes that a more bottom-up approach to research and technology would help broaden acceptability of, and accountability for, decisions made about new or existing technological trajectories. Such an approach could be better integrated with, and complementary to, the activities of government, institutions (e.g. universities) and research funding agencies, and could help ensure that public needs and issues are addressed directly by the research community. It could also facilitate the empowerment of societal stakeholders with regard to scientific literacy and agenda-setting. One-way information relays could be adapted to facilitate feedback from representative groups, e.g. non-governmental organisations or civil society organisations (such as patient groups), in order to enhance the functioning and socio-economic relevance of knowledge-based societies to the betterment of human livelihoods.
Abstract:
We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user. © 2010 The American Physical Society.
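For reference, the trace distance used above between the real and ideal states is D(rho, sigma) = (1/2)||rho - sigma||_1, i.e. half the sum of the absolute eigenvalues of the Hermitian difference. A short NumPy sketch with two arbitrary single-qubit density matrices (not states from the paper):

```python
# Trace distance D(rho, sigma) = 0.5 * ||rho - sigma||_1 for density matrices.
# The two matrices below are arbitrary single-qubit examples for illustration.
import numpy as np

def trace_distance(rho, sigma):
    diff = rho - sigma
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(diff)))

ideal = np.array([[1.0, 0.0], [0.0, 0.0]])      # pure |0><0|
real = np.array([[0.95, 0.1], [0.1, 0.05]])     # slightly decohered state
print("trace distance:", trace_distance(ideal, real))
```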
Abstract:
BACKGROUND: The ability to write clearly and effectively is of central importance to the scientific enterprise. Encouraged by the success of simulation environments in other biomedical sciences, we developed WriteSim TCExam, an open-source, Web-based, textual simulation environment for teaching effective writing techniques to novice researchers. We shortlisted and modified an existing open-source application, TCExam, to serve as a textual simulation environment. After testing usability internally in our team, we conducted formal field usability studies with novice researchers. These were followed by formal surveys with researchers fitting the roles of administrators and users (novice researchers). RESULTS: The development process was guided by feedback from usability tests within our research team. Online surveys and formal studies, involving members of the Research on Research group and selected novice researchers, show that the application is user-friendly. Additionally, it has been used to train 25 novice researchers in scientific writing to date and has generated encouraging results. CONCLUSION: WriteSim TCExam is the first Web-based, open-source textual simulation environment designed to complement traditional scientific writing instruction. While initial reviews by students and educators have been positive, a formal study is needed to measure its benefits in comparison to standard instructional methods.
Abstract:
BACKGROUND: Writing plays a central role in the communication of scientific ideas and is therefore a key aspect of researcher education, ultimately determining the success and long-term sustainability of their careers. Despite the growing popularity of e-learning, we are not aware of any existing study comparing on-line vs. traditional classroom-based methods for teaching scientific writing. METHODS: Forty-eight participants from medical, nursing and physiotherapy backgrounds from the US and Brazil were randomly assigned to two groups (n = 24 per group): an on-line writing workshop group (on-line group), in which participants used virtual communication, Google Docs and standard writing templates, and a standard writing guidance training group (standard group), in which participants received standard instruction without the aid of virtual communication and writing templates. Two outcomes were evaluated: manuscript quality, assessed using the scores obtained on a six-subgroup analysis scale as the primary outcome measure, and satisfaction, measured on a Likert scale. To control for observer variability, inter-observer reliability was assessed using Fleiss's kappa. A post-hoc analysis comparing rates of communication between mentors and participants was performed. Nonparametric tests were used to assess intervention efficacy. RESULTS: Excellent inter-observer reliability among the three reviewers was found, with an Intraclass Correlation Coefficient (ICC) agreement of 0.931882 and ICC consistency of 0.932485. The on-line group had better overall manuscript quality (p = 0.0017; SSQSavg score 75.3 +/- 14.21, range 37 to 94) than the standard group (47.27 +/- 14.64, range 20 to 72). Participant satisfaction was higher in the on-line group (4.3 +/- 0.73) than in the standard group (3.09 +/- 1.11) (p = 0.001). The standard group also had fewer communication events than the on-line group (0.91 +/- 0.81 vs. 2.05 +/- 1.23; p = 0.0219). CONCLUSION: Our protocol for on-line scientific writing instruction is better than standard face-to-face instruction in terms of writing quality and student satisfaction. Future studies should evaluate the protocol's efficacy in larger longitudinal cohorts involving participants from different language backgrounds.
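The abstract reports that nonparametric tests were used to compare the two groups' scores. As an illustration of that kind of comparison (the specific test is not named here, so a Mann-Whitney U test is assumed, and the score arrays are invented placeholders rather than the study's data):

```python
# Hedged illustration: a Mann-Whitney U comparison of two groups' quality
# scores, the kind of nonparametric test described above. The arrays are
# made-up placeholder values, not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

online_scores = np.array([75, 82, 68, 90, 71, 88, 79, 85])
standard_scores = np.array([48, 55, 42, 60, 51, 39, 58, 45])

stat, p_value = mannwhitneyu(online_scores, standard_scores, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```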
Abstract:
This article describes advances in statistical computation for large-scale data analysis in structured Bayesian mixture models via graphics processing unit (GPU) programming. The developments are partly motivated by computational challenges arising in fitting models of increasing heterogeneity to increasingly large datasets. An example context concerns common biological studies using high-throughput technologies that generate many, very large datasets and require increasingly high-dimensional mixture models with large numbers of mixture components. We outline important strategies and processes for GPU computation in Bayesian simulation and optimization approaches, give examples of the benefits of GPU implementations in terms of processing speed and scale-up in the ability to analyze large datasets, and provide a detailed, tutorial-style exposition that will benefit readers interested in developing GPU-based approaches in other statistical models. Novel, GPU-oriented approaches to modifying existing algorithms and software design can lead to vast speed-ups and, critically, enable statistical analyses that presently will not be performed due to compute-time limitations in traditional computational environments. Supplemental materials are provided with all source code, example data, and details that will enable readers to implement and explore the GPU approach in this mixture modeling context. © 2010 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
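The computationally dominant step such GPU implementations target is the evaluation of per-observation, per-component densities and responsibilities, which is embarrassingly parallel over (observation, component) pairs. The following is a vectorised NumPy sketch of that step under the assumption of a simple Gaussian mixture with arbitrary illustrative parameters; on a GPU, each pair would map to a thread.

```python
# Sketch of the responsibility computation in a Gaussian mixture model, the
# step that GPU implementations of the kind described above parallelise over
# (observation, component) pairs. All values below are illustrative.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(100_000, 1))               # observations, shape (n, 1)
weights = np.array([0.5, 0.3, 0.2])             # mixture weights
means = np.array([-2.0, 0.0, 2.0])
variances = np.array([1.0, 0.5, 1.5])

# log N(x_i | mean_j, var_j) for every observation i and component j
log_dens = (-0.5 * np.log(2 * np.pi * variances)
            - 0.5 * (x - means) ** 2 / variances)        # shape (n, k) by broadcasting
log_weighted = np.log(weights) + log_dens
log_norm = np.logaddexp.reduce(log_weighted, axis=1, keepdims=True)
responsibilities = np.exp(log_weighted - log_norm)       # each row sums to 1
print(responsibilities[:3].round(3))
```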
Abstract:
Presentation at the Controlling Dangerous Pathogens Project Regional Workshop on Dual-Use Research, Teresopolis, Brazil
Abstract:
Technological advances in genotyping have given rise to hypothesis-based association studies of increasing scope. As a result, the scientific hypotheses addressed by these studies have become more complex and more difficult to address using existing analytic methodologies. Obstacles to analysis include inference in the face of multiple comparisons, complications arising from correlations among the SNPs (single nucleotide polymorphisms), choice of their genetic parametrization and missing data. In this paper we present an efficient Bayesian model search strategy that searches over the space of genetic markers and their genetic parametrization. The resulting method for Multilevel Inference of SNP Associations, MISA, allows computation of multilevel posterior probabilities and Bayes factors at the global, gene and SNP level, with the prior distribution on SNP inclusion in the model providing an intrinsic multiplicity correction. We use simulated data sets to characterize MISA's statistical power, and show that MISA has higher power to detect association than standard procedures. Using data from the North Carolina Ovarian Cancer Study (NCOCS), MISA identifies variants that were not identified by standard methods and have been externally "validated" in independent studies. We examine sensitivity of the NCOCS results to prior choice and method for imputing missing data. MISA is available in an R package on CRAN.
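A minimal sketch, not MISA itself, of the kind of computation involved: enumerate models over a handful of hypothetical SNPs, approximate each model's marginal likelihood (here with a BIC-style value from a least-squares fit rather than MISA's genetic parametrisations), place a Beta-Binomial prior on model size to supply the intrinsic multiplicity correction, and report SNP-level posterior inclusion probabilities. All data and parameter choices below are illustrative assumptions.

```python
# Hedged sketch of Bayesian model search over a few hypothetical SNPs (not the
# MISA package): BIC approximates each model's log marginal likelihood, and a
# Beta-Binomial prior on model size gives the multiplicity correction.
import itertools
import numpy as np
from scipy.special import betaln

def log_model_prior(k, p, a=1.0, b=1.0):
    # prior of a specific model with k SNPs under a Beta-Binomial(a, b) on size
    return betaln(k + a, p - k + b) - betaln(a, b)

def bic_log_marglik(y, X):
    # BIC-style approximation to the log marginal likelihood of a linear model
    n = len(y)
    Xd = np.column_stack([np.ones(n)] + ([X] if X.shape[1] else []))
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    return -0.5 * n * np.log(rss / n) - 0.5 * Xd.shape[1] * np.log(n)

rng = np.random.default_rng(0)
p, n = 5, 200
G = rng.integers(0, 3, size=(n, p)).astype(float)        # hypothetical SNP dosages
y = 0.8 * G[:, 1] + rng.normal(size=n)                   # SNP 1 truly associated

models = [m for r in range(p + 1) for m in itertools.combinations(range(p), r)]
log_post = np.array([bic_log_marglik(y, G[:, list(m)]) + log_model_prior(len(m), p)
                     for m in models])
post = np.exp(log_post - log_post.max())
post /= post.sum()

# SNP-level posterior inclusion probabilities
pip = [sum(w for m, w in zip(models, post) if j in m) for j in range(p)]
print("posterior inclusion probabilities:", np.round(pip, 3))
```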
Abstract:
p.131-140
Abstract:
For the numerical solution of the linearized Euler equations, an optimized computational scheme is considered. It is based on fully staggered (in space and time) regular meshes and on a simple mirroring procedure at the stepwise solid walls. There is no need to define ghost points inside the solid objects that reflect the sound waves. Test results demonstrate the accuracy of the method, which may be used for aeroacoustic problems with complex geometries.
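A minimal one-dimensional sketch of what "fully staggered in space and time" means for the linearized Euler (acoustic) equations p_t = -K u_x, u_t = -(1/rho) p_x: pressure and velocity live on grids offset by half a cell and half a time step, and the rigid-wall condition is imposed directly on the velocity nodes. This is a simplification for illustration, not the article's 2-D/3-D scheme with stepwise walls.

```python
# 1-D fully staggered leapfrog sketch for the linearised Euler / acoustic
# equations (illustrative simplification of the staggered-mesh idea above).
import numpy as np

nx, L = 200, 1.0
dx = L / nx
c, rho = 1.0, 1.0
K = rho * c * c
dt = 0.5 * dx / c                                  # CFL-limited time step

x_p = (np.arange(nx) + 0.5) * dx                   # pressure nodes (cell centres)
p = np.exp(-((x_p - 0.5 * L) / 0.05) ** 2)         # initial Gaussian pulse
u = np.zeros(nx + 1)                               # velocity nodes, staggered by dx/2

for _ in range(300):
    # velocity update from the pressure gradient (interior faces only)
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
    # rigid solid walls: normal velocity vanishes at the boundaries
    u[0] = 0.0
    u[-1] = 0.0
    # pressure update from the velocity divergence, half a step later in time
    p -= dt * K / dx * (u[1:] - u[:-1])

print("max |p| after 300 steps:", np.abs(p).max())
```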
Abstract:
A new finite volume method for solving the incompressible Navier--Stokes equations is presented. The main features of this method are the location of the velocity components and pressure on different staggered grids and a semi-Lagrangian method for the treatment of convection. An interpolation procedure based on area-weighting is used for the convection part of the computation. The method is applied to flow through a constricted channel, and results are obtained for Reynolds numbers, based on half the flow rate, up to 1000. The behavior of the vortex in the salient corner is investigated qualitatively and quantitatively, and excellent agreement is found with the numerical results of Dennis and Smith [Proc. Roy. Soc. London A, 372 (1980), pp. 393-414] and the asymptotic theory of Smith [J. Fluid Mech., 90 (1979), pp. 725-754].
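A minimal sketch, under simplifying assumptions and not the paper's full staggered-grid solver, of the semi-Lagrangian convection step with area-weighted (bilinear) interpolation: each grid value is replaced by the field interpolated at the departure point traced back along the local velocity.

```python
# Semi-Lagrangian advection step with area-weighted (bilinear) interpolation,
# illustrating the convection treatment described above on a single uniform
# grid (the paper's method uses staggered grids for velocity and pressure).
import numpy as np

def semi_lagrangian_step(q, u, v, dt, dx):
    ny, nx = q.shape
    j, i = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    # departure points in index units, clamped to the domain
    xd = np.clip(i - u * dt / dx, 0, nx - 1)
    yd = np.clip(j - v * dt / dx, 0, ny - 1)
    i0 = np.floor(xd).astype(int); j0 = np.floor(yd).astype(int)
    i1 = np.minimum(i0 + 1, nx - 1); j1 = np.minimum(j0 + 1, ny - 1)
    fx = xd - i0; fy = yd - j0
    # area weights of the four surrounding cells
    return ((1 - fx) * (1 - fy) * q[j0, i0] + fx * (1 - fy) * q[j0, i1]
            + (1 - fx) * fy * q[j1, i0] + fx * fy * q[j1, i1])

# usage: advect a square blob with a uniform velocity field
n = 64
q = np.zeros((n, n)); q[28:36, 28:36] = 1.0
u = np.full((n, n), 0.5); v = np.full((n, n), 0.2)
for _ in range(40):
    q = semi_lagrangian_step(q, u, v, dt=0.5, dx=1.0)
print("total tracer after advection:", q.sum())
```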
Abstract:
This work presents a computational analysis of the thermal and flow fields of levitated liquid with free-surface oscillations in AC and DC magnetic fields. The volume electromagnetic force distribution is continuously updated as the shape and position change. The oscillation frequency spectra are analysed for droplets levitated against gravity in various combinations of AC and DC magnetic fields. For larger-volume liquid metal confinement and melting, the semi-levitation induction skull melting process is simulated with the same numerical model. Applications are aimed at purely electromagnetic material-processing techniques and at the measurement of material properties in uncontaminated conditions.
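As a small illustration of the spectral analysis step mentioned above (not the article's full 3-D free-surface simulation), the sketch below extracts an oscillation frequency spectrum from a sampled droplet surface-position signal with the FFT; the signal itself is a hypothetical 40 Hz mode plus drift and noise.

```python
# Illustrative extraction of an oscillation frequency spectrum from a sampled
# surface-position signal; the signal is synthetic, not simulation output.
import numpy as np

fs = 1000.0                                    # sampling frequency, Hz
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(2)
# hypothetical radius signal: a 40 Hz surface mode plus a slow drift and noise
r = (0.02 * np.sin(2 * np.pi * 40.0 * t)
     + 0.005 * t
     + 0.002 * rng.normal(size=t.size))

r = r - np.polyval(np.polyfit(t, r, 1), t)     # remove the linear drift
spectrum = np.abs(np.fft.rfft(r * np.hanning(r.size)))
freqs = np.fft.rfftfreq(r.size, d=1.0 / fs)
print("dominant oscillation frequency:", freqs[np.argmax(spectrum)], "Hz")
```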