54 results for spse model (situation, problem, solution, evaluation)
Abstract:
Randomising set index functions can reduce the number of conflict misses in data caches by spreading the cache blocks uniformly over all sets. Typically, the randomisation functions compute the exclusive ORs of several address bits. Not all randomising set index functions perform equally well, which calls for the evaluation of many set index functions. This paper discusses and improves a technique that tackles this problem by predicting the miss rate incurred by a randomisation function, based on profiling information. A new way of looking at randomisation functions is used, namely the null space of the randomisation function. The members of the null space describe pairs of cache blocks that are mapped to the same set. This paper presents an analytical model of the error made by the technique and uses this to propose several optimisations to the technique. The technique is then applied to generate a conflict-free randomisation function for the SPEC benchmarks. (C) 2003 Elsevier Science B.V. All rights reserved.
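To make the null-space view concrete, here is a minimal Python sketch (not the paper's code) of an XOR-based set index function: the index is the XOR of selected address-bit slices, so two block addresses map to the same set exactly when their bitwise XOR lies in the null space of the hash. The cache geometry and bit selections below are illustrative assumptions.

```python
# Illustrative XOR-based (randomising) set index function, not the paper's.
# Cache geometry and bit selections are arbitrary assumptions.

NUM_SETS = 64          # 2^6 sets, so the index is 6 bits wide
INDEX_BITS = 6

def xor_set_index(block_addr: int) -> int:
    """Fold two 6-bit address slices together with XOR to form the set index."""
    low = block_addr & (NUM_SETS - 1)                     # conventional index bits
    high = (block_addr >> INDEX_BITS) & (NUM_SETS - 1)    # tag bits folded in
    return low ^ high

def conflict(addr_a: int, addr_b: int) -> bool:
    """Two blocks map to the same set iff their XOR lies in the hash's null space."""
    return xor_set_index(addr_a ^ addr_b) == 0

if __name__ == "__main__":
    a = 0x1234
    b = a ^ (0b000011 | (0b000011 << INDEX_BITS))  # XOR of a and b is in the null space
    assert xor_set_index(a) == xor_set_index(b)    # same set
    assert conflict(a, b)                          # detected via the null-space test
```

Because the hash is linear over GF(2), checking membership of a single difference vector in the null space is enough to decide whether a pair of blocks conflicts.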
Abstract:
This paper addresses the analytical solution of the mixed-mode bending (MMB) problem. The first published solutions used a separation of the applied load into pure mode I and mode II components and were applied for crack lengths less than the beam half-span, a <= L. In later publications, the same mode separation was used to derive the analytical solution for crack lengths greater than the beam half-span, a > L. In this paper it is shown that this mode separation is not valid when a > L and in some cases may lead to highly erroneous results. The correct mode separation and the corresponding analytical solutions for a > L are presented. Force vs. displacement and force vs. crack length curves obtained with the existing formulation and with the corrected formulation are compared. A finite element solution, which does not use mode separation, is also presented.
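For orientation, the load partition commonly quoted for the MMB specimen when a <= L (lever arm length c, half-span L, applied lever load P) is reproduced below; the corrected a > L expressions derived in the paper are not repeated here.

```latex
% Commonly quoted MMB load partition for a <= L (lever arm c, half-span L,
% applied load P); shown for orientation only, since the corrected a > L
% expressions derived in the paper are not reproduced here.
P_{\mathrm{I}} = \frac{3c - L}{4L}\,P , \qquad
P_{\mathrm{II}} = \frac{c + L}{L}\,P
```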
Abstract:
BACKGROUND:
A novel online resource has been developed to aid OSCE examiner training comprising a series of videos of OSCE performances that allow inter-examiner comparison of global grade decisions.
AIMS:
To evaluate this training resource in terms of usefulness and ability to improve examiner confidence in awarding global grades in OSCEs.
METHOD:
Data collected from the first 200 users included global grades awarded, willingness to change grades following peer comparison and confidence in awarding grades before and after training.
RESULTS:
Most (86.5%) agreed that the resource was useful in developing global grade scoring ability in OSCEs, with a significant improvement in confidence in awarding grades after using the training package (p<0.001).
CONCLUSIONS:
This is a useful and effective online training package. As an adjunct to traditional training, it offers a practical solution to the problem of examiner availability.
Abstract:
Chitons are often referred to as "living fossils" in part because they are proposed as one of the earliest-diverging groups of living molluscs, but also because the gross morphology of the polyplacophoran shell has been conserved for hundreds of millions of years. As such, the analysis of evolution and radiation within polyplacophorans is of considerable interest not only for resolving the shape of pan-molluscan phylogeny but also as model organisms for the study of character evolution. This study presents a new, rigorous cladistic analysis of the morphological characters used in taxonomic descriptions for chitons in the living suborder Lepidopleurina Thiele, 1910 (the earliest-derived living group of chitons). Shell-based characters alone entirely fail to recover any recognized subdivisions within the group, which may raise serious questions about the application of fossil data (from isolated shell valves). New analysis including characters from girdle armature and gill arrangements recovers some genera within the group but also points to the lack of monophyly within the main genus Leptochiton Gray, 1847. Additional characters from molecular data and soft anatomy, used in combination, are clearly needed to resolve questions of chiton relationships. However, the data sets currently available already provide interesting insights into the analytical power of traditional morphology as well as some knowledge about the early evolution and radiation of this group.
Abstract:
NiTi alloys have been widely used in applications for micro-electro-mechanical systems (MEMS), which often involve precise and complex motion control. However, when using NiTi alloys in MEMS applications, the main problem to be considered is the degradation of functional properties during cyclic loading. This also stresses the importance of accurate prediction of the functional behavior of NiTi alloys. In the last two decades, a large number of constitutive models have been proposed to achieve this task. A portion of them focused on the deformation behavior of NiTi alloys under cyclic loading, which is a practical and non-negligible situation. Despite the scale of modeling studies in this field, two experimental observations under uniaxial tension loading have not received proper attention. First, a deviation from linearity well before the stress-induced martensitic transformation (SIMT) has not been modeled. Recent experiments confirmed that it is caused by the formation of stress-induced R phase. Second, the influence of the well-known localized Lüders-like SIMT on the macroscopic behavior of NiTi alloys, in particular the residual strain during cyclic loading, has not been addressed. In response, we develop a 1-D phenomenological constitutive model for NiTi alloys with two novel features: the formation of stress-induced R phase and explicit modeling of the localized Lüders-like SIMT. The derived constitutive relations are simple and, at the same time, sufficient to describe the behavior of NiTi alloys. The accumulation of residual strain caused by the R phase under different loading schemes is accurately described by the proposed model. In addition, the residual strain caused by irreversible SIMT at different maximum loading strains under cyclic tension loading in individual samples can be explained by, and fitted to, a single equation in the proposed model. These results show that the proposed model successfully captures the behavior of the R phase and the essence of localized SIMT.
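As a rough illustration of the two ingredients highlighted above (a pre-plateau nonlinearity and cycle-dependent residual strain), the following Python sketch evaluates a generic 1-D superelastic loading branch; all material constants and functional forms are invented placeholders and do not represent the authors' constitutive relations.

```python
# Generic, illustrative 1-D superelastic loading branch (NOT the paper's model).
# All constants and functional forms below are placeholder assumptions.
import numpy as np

E_A = 40e3      # austenite modulus, MPa
SIG_R = 350.0   # stress at which an R-phase-like nonlinearity becomes visible, MPa
SIG_MS = 450.0  # plateau (Lueders-like SIMT) stress, MPa

def loading_stress(eps: float) -> float:
    """Stress on monotonic loading: linear, gradual pre-plateau softening, then plateau."""
    eps_r, eps_ms = SIG_R / E_A, SIG_MS / E_A
    if eps <= eps_r:
        return E_A * eps                           # linear austenite
    if eps <= eps_ms:
        x = (eps - eps_r) / (eps_ms - eps_r)
        w = 3 * x**2 - 2 * x**3                    # smooth blending weight
        return (1 - w) * E_A * eps + w * SIG_MS    # deviation from linearity (R phase)
    return SIG_MS   # Lueders-like transformation plateau (post-plateau branch omitted)

def residual_strain(cycle: int, eps_sat: float = 0.004, tau: float = 8.0) -> float:
    """Purely empirical saturating accumulation of residual strain with cycle number."""
    return eps_sat * (1.0 - np.exp(-cycle / tau))
```

The saturating rule in residual_strain is included only to illustrate cycle-dependent accumulation; the paper fits the accumulated residual strain with its own single equation.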
Abstract:
This article reconstructs British constitutional policy in Northern Ireland after power-sharing collapsed in May 1974. Over the following two years, the British government publicly emphasised that Northern Ireland would decide its own future, but ministers secretly considered a range of options including withdrawal, integration and Dominion status. These discussions have been fundamentally misunderstood by previous authors, and this article shows that Harold Wilson did not seriously advocate withdrawal nor was policy as inconsistent as argued elsewhere. An historical approach, drawing from recently released archival material, shows that consociationalists such as Brendan O'Leary and Michael Kerr have neglected the proper context of government policy because of their commitment to a particular form of government, failing to recognise the constraints under which ministers operated. The British government remained committed to an internal devolved settlement including both communities but was unable to impose one.
Abstract:
We address the problem of designing distributed algorithms for large scale networks that are robust to Byzantine faults. We consider a message passing, full information model: the adversary is malicious, controls a constant fraction of processors, and can view all messages in a round before sending out its own messages for that round. Furthermore, each bad processor may send an unlimited number of messages. The only constraint on the adversary is that it must choose its corrupt processors at the start, without knowledge of the processors’ private random bits.
A good quorum is a set of O(log n) processors, which contains a majority of good processors. In this paper, we give a synchronous algorithm which uses polylogarithmic time and Õ(√n) bits of communication per processor to bring all processors to agreement on a collection of n good quorums, solving Byzantine agreement as well. The collection is balanced in that no processor is in more than O(log n) quorums. This yields the first solution to Byzantine agreement which is both scalable and load-balanced in the full information model.
The technique of going from a situation where slightly more than a 1/2 fraction of the processors are good and agree on a short string with a constant fraction of random bits, to a situation where all good processors agree on n good quorums, can also be carried out in a fully asynchronous model, providing an approach for extending the Byzantine agreement result to that model.
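As a back-of-the-envelope illustration of why O(log n) processors suffice for a good quorum (this is only the random-sampling intuition, not the paper's agreement protocol or quorum construction), the following Python snippet estimates how often a uniformly random set of c*log2(n) processors has a good majority when a 1/3 fraction of processors is bad; all parameters are arbitrary.

```python
import math
import random

# Monte Carlo check of the quorum-size intuition: a random set of c*log2(n)
# processors almost always contains a good majority when only a 1/3 fraction
# of processors is bad. Parameters are illustrative; this is NOT the paper's
# algorithm, which must also defeat an adaptive message-viewing adversary.
def good_majority_rate(n=100_000, bad_fraction=1/3, c=8, trials=20_000, seed=0):
    rng = random.Random(seed)
    quorum_size = int(c * math.log2(n))
    bad_cutoff = int(bad_fraction * n)      # processors 0 .. bad_cutoff-1 are bad
    ok = 0
    for _ in range(trials):
        sample = rng.sample(range(n), quorum_size)
        good = sum(1 for p in sample if p >= bad_cutoff)
        ok += good > quorum_size // 2
    return ok / trials

if __name__ == "__main__":
    print(f"fraction of random quorums with a good majority: {good_majority_rate():.4f}")
```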
Abstract:
Refined vegetable oils are widely used in the food industry as ingredients or components in many processed food products in the form of oil blends. To date, the generic term 'vegetable oil' has been used in the labelling of food containing oil blends. With the introduction of the new EU Regulation for Food Information (1169/2011), due to take effect in 2014, the oil species used must be clearly identified on the package, and there is a need to develop fit-for-purpose methodology for industry and regulators alike to verify the oil species present in a product. The available methodologies that may be employed to authenticate the botanical origin of a vegetable oil admixture were reviewed and evaluated. The majority of the sources, however, described techniques applied to crude vegetable oils such as olive oil, owing to the lack of studies focused on refined vegetable oils. Nevertheless, DNA-based typing methods and stable isotope procedures were found to be unsuitable for this particular purpose due to several issues. Only a small number of specific chromatographic and spectroscopic fingerprinting methods, in either targeted or untargeted mode, were found to be applicable in potentially providing a solution to this complex authenticity problem. Applied as a single method in isolation, these techniques would give only limited information on the oils' identity, as signals obtained for various oil types may well overlap. Therefore, more complex and combined approaches are likely to be needed to identify the oil species present in oil blends, employing a stepwise approach in combination with advanced chemometrics. Options to provide such a methodology are outlined in the current study.
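A minimal sketch of the kind of untargeted fingerprinting-plus-chemometrics step alluded to above, assuming a matrix X of spectral or chromatographic fingerprints and a vector y of known oil species; the synthetic data, feature count and classifier choice are illustrative assumptions, not those of the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X: rows = oil samples, columns = untargeted fingerprint variables
# (e.g. spectral absorbances or chromatographic peak areas); y: species labels.
# Synthetic stand-in data, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))
y = np.repeat(["sunflower", "rapeseed", "palm"], 20)

model = make_pipeline(
    StandardScaler(),                    # put all fingerprint variables on a common scale
    PCA(n_components=10),                # compress the highly correlated fingerprint
    LogisticRegression(max_iter=1000),   # classify the botanical origin
)
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

In practice such a model would be one step of the stepwise approach the review calls for, combined with targeted markers where single fingerprints overlap between oil types.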
Abstract:
This paper reports the detailed description and validation of a fully automated, computer controlled analytical method to spatially probe the gas composition and thermal characteristics in packed bed systems. This method has been designed to limit the invasiveness of the probe, a characteristic assessed using CFD. The thermocouple is aligned with the sampling holes to enable simultaneous recording of the gas composition and temperature profiles. This analysis technique has been validated by studying CO oxidation over a 1% Pt/Al2O3 catalyst. The resultant profiles have been compared with a micro-kinetic model, to further assess the strength of the technique.
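By way of illustration of how a measured axial composition profile can be compared against a kinetic model (the micro-kinetics used in the paper are not reproduced; a simple first-order plug-flow rate with invented parameters stands in here), consider:

```python
import numpy as np

# Toy plug-flow model of CO conversion along the bed axis, for comparison
# against a spatially resolved composition profile. The rate law and all
# parameters are invented placeholders, not the paper's micro-kinetics.
k = 25.0      # apparent first-order rate constant, 1/s
u = 0.10      # superficial gas velocity, m/s
L = 0.02      # bed length, m

z = np.linspace(0.0, L, 200)             # axial sampling positions, m
conversion = 1.0 - np.exp(-k * z / u)    # X(z) for first-order kinetics in a PFR

for zi, Xi in zip(z[::50], conversion[::50]):
    print(f"z = {zi*1000:5.2f} mm   CO conversion = {Xi:.3f}")
```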
Abstract:
A revised water model intended for use in condensed-phase simulations in the framework of the self-consistent polarizable ion tight binding theory is constructed. The model is applied to the water monomer, dimer, hexamers, ice, and liquid, where it demonstrates good agreement with theoretical results obtained by more accurate methods, such as DFT and CCSD(T), and with experiment. In particular, the temperature dependence of the self-diffusion coefficient in liquid water predicted by the model closely reproduces experimental curves in the temperature interval between 230 K and 350 K. In addition, and in contrast to standard DFT, the model properly orders the relative densities of liquid water and ice. A notable, but inevitable, shortcoming of the model is underestimation of the static dielectric constant by a factor of two. We demonstrate that the description of inter- and intramolecular forces embodied in the tight binding approximation in quantum mechanics leads to a number of valuable insights which can be missing from ab initio quantum chemistry and classical force fields. These include a discussion of the origin of the enhanced molecular electric dipole moment in the condensed phases, and a detailed explanation for the increase of the coordination number in liquid water as a function of temperature and compared with ice, leading to insights into the anomalous expansion on freezing. The theory holds out the prospect of an understanding of the currently unexplained density maximum of water near the freezing point.
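For readers wanting to reproduce the self-diffusion comparison from their own trajectories, here is a minimal sketch of the standard Einstein-relation estimate; this is the generic textbook route, not the authors' code, and the trajectory array is assumed to hold unwrapped coordinates (nm) sampled at a fixed time step (ps).

```python
import numpy as np

def self_diffusion(positions: np.ndarray, dt_ps: float) -> float:
    """Estimate D from the Einstein relation, MSD(t) ~ 6 D t.

    positions: array of shape (n_frames, n_molecules, 3), unwrapped coordinates in nm.
    dt_ps: time between frames in ps. Returns D in nm^2/ps.
    """
    disp = positions - positions[0]                 # displacement from the first frame
    msd = (disp ** 2).sum(axis=2).mean(axis=1)      # average over molecules
    t = np.arange(len(msd)) * dt_ps
    half = len(msd) // 2                            # fit only the long-time linear regime
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / 6.0

# Example with a synthetic random-walk trajectory standing in for MD output:
rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(scale=0.01, size=(2000, 64, 3)), axis=0)
print(f"D ~ {self_diffusion(traj, dt_ps=0.5):.4g} nm^2/ps")
```

Repeating the estimate at several simulation temperatures gives the D(T) curve that is compared against experiment in the abstract.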
Abstract:
We demonstrate a model for stoichiometric and reduced titanium dioxide intended for use in molecular dynamics and other atomistic simulations and based on the polarizable ion tight binding theory. This extends the model introduced in two previous papers from molecular and liquid applications into the solid state, thus completing the task of providing a comprehensive and unified scheme for studying chemical reactions, particularly aimed at problems in catalysis and electrochemistry. As before, experimental results are given priority over theoretical ones in selecting targets for model fitting, for which we used the crystal parameters and band gaps of the titania bulk polymorphs rutile and anatase. The model is applied to six low-index titania surfaces, with and without oxygen vacancies and adsorbed water molecules, in both dissociated and non-dissociated states. Finally, we present the results of a molecular dynamics simulation of an anatase cluster with a number of adsorbed water molecules and discuss the role of the edge and corner atoms of the cluster. (C) 2014 AIP Publishing LLC.
Abstract:
As is now well established, a first-order expansion of the Hohenberg-Kohn total energy density functional about a trial input density, namely the Harris-Foulkes functional, can be used to rationalize a non-self-consistent tight binding model. If the expansion is taken to second order, then the energy and electron density matrix need to be calculated self-consistently, and from this functional one can derive a charge self-consistent tight binding theory. In this paper we have used this to describe a polarizable ion tight binding model which has the benefit of treating charge transfer in point multipoles. This admits a ready description of ionic polarizability and crystal field splitting. It is necessary, in constructing such a model, to find a number of parameters that mimic their more exact counterparts in the density functional theory. We describe in detail how this is done using a combination of intuition, exact analytical fitting, and a genetic optimization algorithm. Having obtained model parameters, we show that this constitutes a transferable scheme that can be applied rather universally to small and medium-sized organic molecules. We have shown that the model gives a good account of the static structural and dynamic vibrational properties of a library of molecules, and finally we demonstrate the model's capability with a real-time simulation of an enolization reaction in aqueous solution. In two subsequent papers, we show that the model is a great deal more general, in that it will describe solvents and solid substrates, and that we have therefore created a self-consistent quantum mechanical scheme that may be applied to simulations in heterogeneous catalysis.
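For reference, the generic structure behind this kind of scheme can be written as the second-order expansion of the total energy about a trial input density, in standard DFTB-style notation; this is a reminder of the structure, not the paper's specific functional.

```latex
% Second-order expansion about a trial density rho_0 (generic form, not the
% paper's specific functional). Truncating after the first term recovers the
% non-self-consistent (Harris-Foulkes) tight binding model; keeping the
% second-order term, with the density fluctuation expanded in atom-centred
% point multipoles, yields the charge self-consistent, polarizable ion model.
E[\rho_0 + \delta\rho] \;\approx\;
  E_{\mathrm{HF}}[\rho_0]
  \;+\; \frac{1}{2}\iint
  \frac{\delta^{2} E_{\mathrm{Hxc}}[\rho]}
       {\delta\rho(\mathbf{r})\,\delta\rho(\mathbf{r}')}\bigg|_{\rho_0}
  \,\delta\rho(\mathbf{r})\,\delta\rho(\mathbf{r}')\,
  \mathrm{d}\mathbf{r}\,\mathrm{d}\mathbf{r}'
```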
Abstract:
The problem of learning from imbalanced data is of critical importance in a large number of application domains and can be a bottleneck in the performance of various conventional learning methods that assume the data distribution to be balanced. The class imbalance problem corresponds to the situation where one class massively outnumbers the other. The imbalance between the majority and minority classes will lead machine learning models to be biased and to produce unreliable outcomes if the imbalanced data are used directly. There has been increasing interest in this research area and a number of algorithms have been developed. However, independent evaluation of these algorithms is limited. This paper aims at evaluating the performance of five representative data sampling methods, namely SMOTE, ADASYN, BorderlineSMOTE, SMOTETomek and RUSBoost, that deal with class imbalance problems. A comparative study is conducted and the performance of each method is critically analysed in terms of assessment metrics. © 2013 Springer-Verlag.
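The five samplers named above are all available in the imbalanced-learn package; a minimal usage sketch follows (the synthetic dataset and default parameters are illustrative, and this is not the benchmarking code of the paper).

```python
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE, ADASYN, BorderlineSMOTE
from imblearn.combine import SMOTETomek
from imblearn.ensemble import RUSBoostClassifier

# Synthetic imbalanced dataset, purely for illustration (roughly 95% vs 5%).
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
print("original class counts:", Counter(y))

# The first four methods resample the data before any classifier is trained.
for sampler in (SMOTE(), ADASYN(), BorderlineSMOTE(), SMOTETomek()):
    X_res, y_res = sampler.fit_resample(X, y)
    print(type(sampler).__name__, Counter(y_res))

# RUSBoost is an ensemble method: random undersampling is built into boosting.
clf = RUSBoostClassifier(random_state=0).fit(X, y)
print("RUSBoost training accuracy:", clf.score(X, y))
```

For imbalanced problems, accuracy alone is a poor assessment metric; the comparative study's point is precisely that metrics such as recall, F-measure or AUC should be examined per method.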
Abstract:
In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of the observed data. Such distributions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this operation can be performed efficiently in an Empirical Bayes fashion by maximizing the posterior marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that each evaluation is usually computationally intensive and scales badly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We prove that the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
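To see the flavour of the speed-up, here is a sketch covering only the simplest case, retuning the noise variance with a fixed kernel matrix, which is narrower than the identities derived in the paper: after one O(N^3) eigendecomposition of K, the log marginal likelihood can be re-evaluated in O(N) for each new noise level.

```python
import numpy as np

# One-off O(N^3) work: eigendecompose the kernel matrix K = Q diag(lam) Q^T
# and project the targets once. Afterwards, the GP log marginal likelihood
# for any noise variance sigma2 costs only O(N). This illustrates the idea;
# the paper's identities also cover kernel hyperparameters and derivatives.
def precompute(K: np.ndarray, y: np.ndarray):
    lam, Q = np.linalg.eigh(K)      # O(N^3), done once
    alpha = Q.T @ y                 # O(N^2), done once
    return lam, alpha

def log_marginal_likelihood(lam, alpha, sigma2: float) -> float:
    d = lam + sigma2                # eigenvalues of K + sigma2 * I
    n = lam.shape[0]
    return -0.5 * (np.sum(alpha**2 / d) + np.sum(np.log(d)) + n * np.log(2 * np.pi))

# Usage sketch with a toy RBF kernel:
rng = np.random.default_rng(0)
x = rng.uniform(size=200)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1**2)
y = np.sin(8 * x) + 0.1 * rng.normal(size=200)
lam, alpha = precompute(K, y)
for s2 in (0.5, 0.1, 0.01):         # each evaluation here is O(N)
    print(s2, log_marginal_likelihood(lam, alpha, s2))
```

The identity used is that (K + sigma2*I)^{-1} and log det(K + sigma2*I) reduce to sums over the shifted eigenvalues once K has been diagonalized, so the per-evaluation cost no longer depends on matrix inversion.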