929 results for scientific computation


Relevance:

20.00%

Publisher:

Abstract:

Approximate Bayesian computation has become an essential tool for the analysis of complex stochastic models when the likelihood function is numerically unavailable. However, the well-established statistical method of empirical likelihood provides another route to such settings that bypasses simulations from the model and the choices of the approximate Bayesian computation parameters (summary statistics, distance, tolerance), while being convergent in the number of observations. Furthermore, bypassing model simulations may lead to significant time savings in complex models, for instance those found in population genetics. The Bayesian computation with empirical likelihood algorithm we develop in this paper also provides an evaluation of its own performance through an associated effective sample size. The method is illustrated using several examples, including estimation of standard distributions, time series, and population genetics models.
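
As an illustration of the approach described above, the following is a minimal sketch of Bayesian computation with empirical likelihood for a simple population-mean problem: parameters are drawn from the prior, each draw is weighted by the empirical likelihood of the observed data at that parameter value, and the effective sample size of the weights measures performance. The Gaussian toy data, the N(0, 5^2) prior and the mean constraint are illustrative assumptions, not the paper's examples.

```python
# A minimal sketch of Bayesian computation with empirical likelihood (BCel)
# for a population mean, under illustrative assumptions (toy Gaussian data,
# Gaussian prior).  Not the authors' implementation.
import numpy as np
from scipy.optimize import brentq

def log_empirical_likelihood(x, mu):
    """Log empirical likelihood ratio for the mean constraint E[X] = mu."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return -np.inf          # mu lies outside the convex hull of the data
    n = len(x)
    # Solve sum_i z_i / (1 + lam * z_i) = 0 for the Lagrange multiplier lam.
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    p = 1.0 / (n * (1 + lam * z))
    return np.sum(np.log(n * p))

rng = np.random.default_rng(0)
x_obs = rng.normal(loc=1.5, scale=1.0, size=100)    # observed data (toy example)

# Importance sampling from the prior, weighted by empirical likelihood.
M = 5000
mu_draws = rng.normal(0.0, 5.0, size=M)             # prior N(0, 5^2), an assumption
log_w = np.array([log_empirical_likelihood(x_obs, m) for m in mu_draws])
w = np.exp(log_w - log_w.max())
w /= w.sum()

ess = 1.0 / np.sum(w ** 2)                          # effective sample size
print("posterior mean of mu:", np.sum(w * mu_draws), "ESS:", ess)
```

In this importance-sampling form, a small effective sample size signals that the prior draws cover the region supported by the empirical likelihood poorly.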

Relevance:

20.00%

Publisher:

Abstract:

Computer experiments, consisting of a number of runs of a computer model with different inputs, are now commonplace in scientific research. Using a simple fire model for illustration, some guidelines are given for the size of a computer experiment. A graph is provided relating the error of prediction to the sample size, which should be of use when designing computer experiments. Methods for augmenting computer experiments with extra runs are also described and illustrated. The simplest method involves adding one point at a time, choosing the point with the maximum prediction variance. Another method that appears to work well is to choose points from a candidate set with the maximum determinant of the variance-covariance matrix of predictions.
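
A minimal sketch of the one-point-at-a-time augmentation rule described above, using a Gaussian process emulator and a toy one-dimensional function in place of the fire model; the kernel, candidate grid and number of added runs are illustrative assumptions.

```python
# A minimal sketch of augmenting a computer experiment one run at a time,
# adding the candidate input with the largest prediction variance.
# The "computer model" here is a toy function standing in for the fire model,
# and the Gaussian-process settings are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def computer_model(x):                      # stand-in for an expensive simulator
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(1)
X = rng.uniform(0, 2, size=(5, 1))          # initial design of 5 runs
y = computer_model(X).ravel()
candidates = np.linspace(0, 2, 200).reshape(-1, 1)

for _ in range(10):                         # add 10 extra runs
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                  alpha=1e-6, normalize_y=True)
    gp.fit(X, y)
    _, sd = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(sd)]       # point with maximum prediction variance
    X = np.vstack([X, x_new])
    y = np.append(y, computer_model(x_new)[0])

print("final design size:", len(X))
```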

Relevance:

20.00%

Publisher:

Abstract:

Deterministic computer simulations of physical experiments are now common techniques in science and engineering. Often, physical experiments are too time-consuming, expensive or impossible to conduct. Complex computer models or codes, rather than physical experiments, are therefore used to investigate many scientific phenomena of this nature, leading to the study of computer experiments. A computer experiment consists of a number of runs of the computer code with different input choices. The design and analysis of computer experiments is a rapidly growing area of statistical experimental design. This thesis investigates some practical issues in the design and analysis of computer experiments and attempts to answer some of the questions faced by experimenters using computer experiments. In particular, the question of how many runs a computer experiment requires and how it should be augmented is studied, and attention is given to the case where the response is a function over time.

Relevance:

20.00%

Publisher:

Abstract:

Through the use of critical discourse analysis, this thesis investigated the perceived importance of scientific literacy in the new Australian Curriculum: Science. It was found that the treatment of scientific literacy was ambiguous, and that the document did not provide detailed scope for intentional teaching for scientific literacy. To overcome this, recommendations on how to intentionally teach for scientific literacy were provided, so that Australian science teachers can focus on improving scientific literacy outcomes for all students within this new curriculum.

Relevance:

20.00%

Publisher:

Abstract:

This paper develops analytical distributions of the temperature indices on which temperature derivatives are written. If the deviations of daily temperatures from their expected values are modelled as an Ornstein-Uhlenbeck process with time-varying variance, then the distribution of the temperature index on which the derivative is written is the sum of truncated, correlated Gaussian deviates. The key result of this paper is to provide an analytical approximation to the distribution of this sum, thus allowing the accurate computation of payoffs without the need for any simulation. A data set comprising average daily temperatures spanning over a hundred years for four Australian cities is used to demonstrate the efficacy of this approach for estimating the payoffs to temperature derivatives. It is demonstrated that expected payoffs computed directly from historical records are a particularly poor approach to the problem when there are trends in the underlying average daily temperature. It is shown that the proposed analytical approach is superior to historical pricing.
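
For comparison with the analytical approach described above, the following is a minimal Monte Carlo sketch of such a temperature index: daily deviations from a seasonal mean follow a discretised Ornstein-Uhlenbeck process, and the index is a sum of truncated deviates (cooling degree days). All parameter values are illustrative assumptions rather than the paper's calibration.

```python
# A Monte Carlo sketch of the kind of temperature index the paper approximates
# analytically: daily deviations follow a discretised Ornstein-Uhlenbeck
# process around a seasonal mean, and the index sums truncated deviations
# (cooling degree days above an 18 C base).  All parameter values below are
# illustrative assumptions, not the paper's calibration.
import numpy as np

rng = np.random.default_rng(2)
n_days, n_paths = 90, 20000
kappa, sigma = 0.25, 2.0                       # mean reversion and volatility (assumed)
days = np.arange(n_days)
seasonal_mean = 22 + 4 * np.sin(2 * np.pi * days / 365)   # assumed seasonal cycle
base = 18.0                                    # degree-day base temperature

# Euler discretisation of dX_t = -kappa X_t dt + sigma dW_t (deviations from mean)
X = np.zeros((n_paths, n_days))
for t in range(1, n_days):
    X[:, t] = X[:, t-1] - kappa * X[:, t-1] + sigma * rng.standard_normal(n_paths)

temps = seasonal_mean + X
cdd_index = np.maximum(temps - base, 0.0).sum(axis=1)     # sum of truncated deviates

strike = np.quantile(cdd_index, 0.5)
payoff = np.maximum(cdd_index - strike, 0.0)               # call-style derivative payoff
print("expected payoff (Monte Carlo):", payoff.mean())
```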

Relevance:

20.00%

Publisher:

Abstract:

Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention from both software providers and users, and analyst firms predict a positive future market for it. This raises new challenges for SaaS providers, especially in large-scale data centres such as the Cloud. One of these challenges is managing Cloud resources for SaaS in a way that maintains SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses this gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are highly constrained, large-scale and complex combinatorial optimisation problems, and evolutionary algorithms are therefore adopted as the main technique for solving them.

The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers the placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms.

In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs); however, due to the dynamic Cloud environment, the current placement may need to be modified. Existing techniques have focused mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used and to maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS: the first GGA uses a repair-based method and the second a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs consistently produce a better reconfiguration placement plan than a common heuristic for clustering problems.

The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task, and the constraints and interdependencies between components make solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem, exploring the search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the constraints and achieve the objectives. The experimental results demonstrate that the HGA consistently outperforms a heuristic algorithm by achieving a low-cost scaling and placement plan.

This research has identified three significant new problems for composite SaaS in the Cloud. Several types of evolutionary algorithms have been developed to address these problems, contributing to the field of evolutionary computation. The algorithms provide solutions for the efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
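
The following is a minimal sketch, not the thesis's algorithms, of the penalty-based genetic-algorithm idea underlying the placement and clustering work: a chromosome assigns each SaaS component to a server, and the fitness combines inter-server communication between dependent components with a penalty for exceeding server capacity. The problem instance (component demands, dependency structure, capacities) is invented for illustration.

```python
# A minimal sketch of a penalty-based genetic algorithm for placing dependent
# SaaS components onto servers, in the spirit of (but much simpler than) the
# algorithms described above.  The problem instance is made up for illustration.
import random

random.seed(0)
N_COMP, N_SERV, CAP = 8, 3, 40
demand = [random.randint(5, 15) for _ in range(N_COMP)]          # CPU demand per component
deps = [(i, (i + 1) % N_COMP) for i in range(N_COMP)]            # ring of dependencies

def fitness(assign):
    """Lower is better: inter-server traffic plus a penalty for overloaded servers."""
    traffic = sum(1 for a, b in deps if assign[a] != assign[b])
    load = [0] * N_SERV
    for comp, srv in enumerate(assign):
        load[srv] += demand[comp]
    penalty = sum(max(0, l - CAP) for l in load)
    return traffic + 10 * penalty

def crossover(p1, p2):
    cut = random.randrange(1, N_COMP)
    return p1[:cut] + p2[cut:]

def mutate(assign, rate=0.1):
    return [random.randrange(N_SERV) if random.random() < rate else s for s in assign]

pop = [[random.randrange(N_SERV) for _ in range(N_COMP)] for _ in range(50)]
for _ in range(200):
    pop.sort(key=fitness)
    elite = pop[:10]                                              # elitist selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(40)]

best = min(pop, key=fitness)
print("best placement:", best, "fitness:", fitness(best))
```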

Relevance:

20.00%

Publisher:

Abstract:

This study is about young adolescents' engagement in learning science. The middle years of schooling are critical in the development of students' interest and engagement with learning. Successful school experiences enhance dispositions towards a career related to those experiences; poor experiences lead to negative attitudes and rejection of certain career pathways. At a time when students are becoming more aware, more independent and focused on peer relationships and social status, the high school environment in some circumstances offers a more content-centred curriculum that is less personally relevant to their lives than the social melee surrounding them. Science education can further exacerbate the situation by presenting abstract concepts that have limited contextual relevance and a seemingly difficult vocabulary that further alienates adolescents from the curriculum. In an attempt to reverse a perceived growing disinterest in science among students (Goodrum, Druhan & Abbs, 2011), a study was initiated based on a student-centred unit designed to enhance and sustain adolescent engagement in science. The premise of the study was that adolescent students are more responsive toward learning if they are given an appropriate learning environment that helps connect their learning with life beyond the school. The purpose of this study was to examine the experiences of young adolescents with the aim of transforming school learning in science into meaningful experiences that connected with their lives. Two areas were specifically canvassed and subsumed within the study to strengthen the design base. One area, that of middle schooling ideology, offered specific pedagogical approaches and a philosophical framework that could provide opportunities for reform. The other area, the construct of scientific literacy (OECD, 2007) as defined by Holbrook and Rannikmae (2009), appeared to provide a sense of purpose for students to aim toward and a value in becoming active citizens. The study reported here is a self-reflection of a teacher/researcher exploring practice and challenging existing approaches to the teaching of science in the middle years of schooling. The case study approach (Yin, 2003) was adopted to guide the design of the study. Over a 6-month period, the researcher, an experienced secondary science teacher, designed, implemented and documented a range of student-centred pedagogical practices with a Year 7 secondary science class. Data for this case study included video recordings, journals, interviews and surveys of students. Both quantitative and qualitative data sources were employed in a partially mixed methods research approach (Leech & Onwuegbuzie, 2009), dominated by qualitative data with the concurrent collection of quantitative data to corroborate interpretations, as a means of analysing and developing a model of the dynamic learning environment. The findings from the case study identified five propositions that became the basis for a model of a student-centred learning environment that was able to sustain student participation and thus engagement in science. The study suggested that adolescent student engagement can be promoted and sustained by providing a classroom climate that encourages and strengthens social interaction.
Engagement in science can be enhanced by presenting developmentally appropriate challenges that require rigorous exploration of contextually relevant learning environments, and by supporting students to develop connections with a curriculum that aligns with their own experiences. By setting an environment empathetic to adolescent needs and understandings, students were able to actively explore phenomena collaboratively through developmentally appropriate experiences. A significant outcome of this study was the transformative experience of an insider, the teacher as researcher, whose reflections provide an authentic model for reforming pedagogy. The model and theory presented became an adjunct to my repertoire for science teaching in the middle years of schooling. The study was rewarding in that it helped address a void in my understanding of the middle years of schooling by prompting me to re-think the notion of adolescence in the context of the science classroom. This study is timely given the report "The Status and Quality of Year 11 and 12 Science in Australian Schools" (Goodrum, Druhan & Abbs, 2011) and the national curricular changes being proposed for science (ACARA, 2009).

Relevance:

20.00%

Publisher:

Abstract:

Porn studies researchers in the humanities have tended to use different research methods from those in social sciences. There has been surprisingly little conversation between the groups about methodology. This article presents a basic introduction to textual analysis and statistical analysis, aiming to provide for all porn studies researchers a familiarity with these two quite distinct traditions of data analysis. Comparing these two approaches, the article suggests that social science approaches are often strongly reliable – but can sacrifice validity to this end. Textual analysis is much less reliable, but has the capacity to be strongly valid. Statistical methods tend to produce a picture of human beings as groups, in terms of what they have in common, whereas humanities approaches often seek out uniqueness. Social science approaches have asked a more limited range of questions than have the humanities. The article ends with a call to mix up the kinds of research methods that are applied to various objects of study.

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Diabetes in South Asia represents a different disease entity in terms of its onset, progression, and complications. In the present study, we systematically analyzed the medical research output on diabetes in South Asia. METHODS: The online SciVerse Scopus database was searched using the search terms "diabetes" and "diabetes mellitus" in the article Title, Abstract or Keywords fields, in conjunction with the names of each regional country in the Author Affiliation field. RESULTS: In total, 8478 research articles were identified. Most were from India (85.1%) and Pakistan (9.6%) and the contribution to the global diabetes research output was 2.1%. Publications from South Asia increased markedly after 2007, with 58.7% of papers published between 2000 and 2010 being published after 2007. Most papers were Research Articles (75.9%) and Reviews (12.9%), with only 90 (1.1%) clinical trials. Publications predominantly appeared in local national journals. Indian authors and institutions had the most number of articles and the highest h-index. There were 136 (1.6%) intraregional collaborative studies. Only 39 articles (0.46%) had >100 citations. CONCLUSIONS: Regional research output on diabetes mellitus is unsatisfactory, with only a minimal contribution to global diabetes research. Publications are not highly cited and only a few randomized controlled trials have been performed. In the coming decades, scientists in the region must collaborate and focus on practical and culturally acceptable interventional studies on diabetes mellitus.

Relevance:

20.00%

Publisher:

Abstract:

In this response to Tom G. K. Bryce and Stephen P. Day's (Cult Stud Sci Educ. doi:10.1007/s11422-013-9500-0, 2013) original article, I share their interest in the teaching of climate change in school science, but widen it to include other contemporary complex socio-scientific issues that also need to be discussed. I use an alternative view of the relationship between science, technology and society, supported by evidence from both science and society, to suggest science-informed citizens as a more realistic outcome image of school science than the authors' one of mini-scientists. The intellectual independence of students that Bryce and Day assume, and intend for school science, is countered with an active intellectual dependence. It is only in relation to emerging and uncertain scientific contexts that students should be taught about scepticism, but they also need to learn when, and why, to trust science as an antidote to expressions of doubt about it. Some suggestions for pedagogies that could lead to these new learnings are made. The very recent fifth report of the IPCC answers many of their concerns about climate change.

Relevance:

20.00%

Publisher:

Abstract:

Scientific visualisations such as computer-based animations and simulations are increasingly a feature of high school science instruction. Visualisations are adopted enthusiastically by teachers and embraced by students, and there is good evidence that they are popular and well received. There is limited evidence, however, of how effective they are in enabling students to learn key scientific concepts. This paper reports the results of a quantitative study conducted in Australian chemistry classrooms. The visualisations chosen were from free online sources, intended to model the ways in which classroom teachers use visualisations, but were found to have serious flaws for conceptual learning. There were also challenges in the degree of interactivity available to students using the visualisations. Within these limitations, no significant difference was found for teaching with and without these visualisations. Further study using better designed visualisations and with explicit attention to the pedagogy surrounding the visualisations will be required to gather high quality evidence of the effectiveness of visualisations for conceptual development.

Relevance:

20.00%

Publisher:

Abstract:

This is a discussion of the journal article "Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation". The article and discussion have appeared in the Journal of the Royal Statistical Society: Series B (Statistical Methodology).
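
For context, a minimal sketch of the semi-automatic construction being discussed: simulate (parameter, data) pairs, regress the parameter on candidate data features, and use the fitted linear predictor as the ABC summary statistic. The toy model, the feature set and the rejection tolerance below are illustrative assumptions, not the article's examples.

```python
# A minimal sketch of semi-automatic ABC: pilot simulations are used to regress
# the parameter on candidate data features, and the fitted linear predictor is
# then used as the summary statistic in a simple ABC rejection sampler.
# The toy model and feature set are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

def simulate(theta, n=50):
    return rng.normal(theta, 1.0 + 0.5 * abs(theta), size=n)

def features(y):                            # candidate summaries
    return [y.mean(), np.median(y), y.std(), np.mean(y ** 3)]

# Pilot simulations to train the regression.
thetas = rng.uniform(-3, 3, size=2000)
F = np.array([features(simulate(t)) for t in thetas])
reg = LinearRegression().fit(F, thetas)

# The fitted linear predictor is the (one-dimensional) summary statistic.
summary = lambda y: reg.predict(np.array(features(y)).reshape(1, -1))[0]

y_obs = simulate(1.2)
s_obs = summary(y_obs)

# Simple ABC rejection using the learned summary.
prior_draws = rng.uniform(-3, 3, size=5000)
dists = np.array([abs(summary(simulate(t)) - s_obs) for t in prior_draws])
accepted = prior_draws[dists < np.quantile(dists, 0.01)]
print("ABC posterior mean:", accepted.mean())
```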

Relevance:

20.00%

Publisher:

Abstract:

We present a novel approach for developing summary statistics for use in approximate Bayesian computation (ABC) algorithms using indirect inference. We embed this approach within a sequential Monte Carlo algorithm that is completely adaptive. This methodological development was motivated by an application involving data on macroparasite population evolution modelled with a trivariate Markov process. The main objective of the analysis is to compare inferences on the Markov process when considering two different indirect models. The two indirect models are based on a Beta-Binomial model and a three component mixture of Binomials, with the former providing a better fit to the observed data.
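
A minimal sketch of the indirect-inference idea: an auxiliary Beta-Binomial model is fitted to the observed data and to each simulated data set, and the distance between the fitted auxiliary parameters serves as the ABC discrepancy. The generative model below is a toy stand-in for the trivariate Markov process, and a plain rejection scheme replaces the adaptive sequential Monte Carlo algorithm of the paper.

```python
# A minimal sketch of indirect inference summary statistics for ABC rejection:
# the auxiliary model (here a Beta-Binomial, as in the paper) is fitted to the
# observed data and to each simulated data set, and the distance between the
# fitted auxiliary parameters is used as the ABC discrepancy.  The "true"
# generative model below is a toy stand-in, not the trivariate Markov process.
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_trials, n_obs = 20, 200

def simulate(p, rho, size):
    """Toy generative model: overdispersed binomial counts (stand-in)."""
    q = rng.beta(p * (1 - rho) / rho, (1 - p) * (1 - rho) / rho, size=size)
    return rng.binomial(n_trials, q)

def fit_auxiliary(counts):
    """MLE of the auxiliary Beta-Binomial model; returns (alpha, beta)."""
    nll = lambda t: -betabinom.logpmf(counts, n_trials, np.exp(t[0]), np.exp(t[1])).sum()
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return np.exp(res.x)

y_obs = simulate(0.3, 0.2, n_obs)
s_obs = fit_auxiliary(y_obs)

# ABC rejection: keep the prior draws whose auxiliary fit is closest to s_obs.
draws, dists = [], []
for _ in range(300):
    p, rho = rng.uniform(0.05, 0.95), rng.uniform(0.05, 0.8)
    s_sim = fit_auxiliary(simulate(p, rho, n_obs))
    draws.append((p, rho))
    dists.append(np.linalg.norm(np.log(s_sim) - np.log(s_obs)))

keep = np.argsort(dists)[:30]
print("posterior mean of (p, rho):", np.mean([draws[i] for i in keep], axis=0))
```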

Relevance:

20.00%

Publisher:

Abstract:

We study the natural problem of secure n-party computation (in the computationally unbounded attack model) of circuits over an arbitrary finite non-Abelian group (G,⋅), which we call G-circuits. Besides its intrinsic interest, this problem is also motivated by a completeness result of Barrington, which states that such protocols can be applied to the general secure computation of arbitrary functions. For flexibility, we are interested in protocols which only require black-box access to the group G (i.e. the only computations performed by players in the protocol are a group operation, a group inverse, or sampling a uniformly random group element). Our investigations focus on the passive adversarial model, where up to t of the n participating parties are corrupted.
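
To make the black-box requirement concrete, here is a minimal sketch of a group oracle exposing only the group operation, inversion and uniform random sampling, instantiated (purely for illustration) with the non-Abelian symmetric group S_5.

```python
# A minimal sketch of "black-box access" to a non-Abelian group: the only
# operations available are the group operation, inversion, and sampling a
# uniformly random element.  The symmetric group S_5 is used here purely as
# an illustrative concrete instance of G.
import random
from itertools import permutations

class BlackBoxGroup:
    def __init__(self, elements, op):
        self._elements = list(elements)
        self._op = op

    def operation(self, a, b):
        return self._op(a, b)

    def inverse(self, a):
        # Generic inverse by search; a real black box would supply this directly.
        for b in self._elements:
            if self._op(a, b) == self.identity():
                return b

    def identity(self):
        return tuple(range(len(self._elements[0])))

    def random_element(self):
        return random.choice(self._elements)

compose = lambda a, b: tuple(a[b[i]] for i in range(len(a)))   # permutation composition
S5 = BlackBoxGroup(permutations(range(5)), compose)

g, h = S5.random_element(), S5.random_element()
print(S5.operation(g, h) == S5.operation(h, g))    # generally False: S_5 is non-Abelian
assert S5.operation(g, S5.inverse(g)) == S5.identity()
```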

Relevance:

20.00%

Publisher:

Abstract:

Classical results in unconditionally secure multi-party computation (MPC) protocols with a passive adversary indicate that every n-variate function can be computed by n participants, such that no set of size t < n/2 participants learns any additional information other than what they could derive from their private inputs and the output of the protocol. We study unconditionally secure MPC protocols in the presence of a passive adversary in the trusted setup ('semi-ideal') model, in which the participants are supplied with some auxiliary information (which is random and independent from the participant inputs) ahead of the protocol execution (such information can be purchased as a "commodity" well before a run of the protocol). We present a new MPC protocol in the trusted setup model, which allows the adversary to corrupt an arbitrary number t < n of participants. Our protocol makes use of a novel subprotocol for converting an additive secret sharing over a field to a multiplicative secret sharing, and can be used to securely evaluate any n-variate polynomial G over a field F, with inputs restricted to non-zero elements of F. The communication complexity of our protocol is O(ℓ·n²) field elements, where ℓ is the number of non-linear monomials in G. Previous protocols in the trusted setup model require communication proportional to the number of multiplications in an arithmetic circuit for G; thus, our protocol may offer savings over previous protocols for functions with a small number of monomials but a large number of multiplications.
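
A minimal sketch, illustrative only, of the two sharing forms that the conversion subprotocol moves between: an additive sharing, where shares sum to the secret modulo a prime p, and a multiplicative sharing of a non-zero element, where shares multiply to the secret modulo p. The prime and the party count are arbitrary choices; the conversion protocol itself is not reproduced here.

```python
# A minimal sketch of the two secret-sharing forms that the paper's conversion
# subprotocol moves between: additive shares summing to the secret mod p, and
# multiplicative shares of a non-zero element multiplying to the secret mod p.
# This illustrates the sharing schemes only, not the conversion protocol itself.
import secrets

P = 2**61 - 1          # a prime field size (illustrative choice)

def additive_share(secret, n):
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)      # last share fixes the sum
    return shares

def multiplicative_share(secret, n):
    assert secret % P != 0, "multiplicative sharing needs a non-zero element"
    shares = [secrets.randbelow(P - 1) + 1 for _ in range(n - 1)]
    prod = 1
    for s in shares:
        prod = prod * s % P
    shares.append(secret * pow(prod, -1, P) % P)   # last share fixes the product
    return shares

n = 5
add = additive_share(42, n)
mul = multiplicative_share(42, n)
assert sum(add) % P == 42
prod = 1
for s in mul:
    prod = prod * s % P
assert prod == 42
print("additive and multiplicative sharings of 42 reconstruct correctly")
```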