886 results for Optimal test set


Relevance: 30.00%

Abstract:

The rapid increase in the number of immigrants coming to Germany from outside the EU has become the paramount political issue. According to new estimates, 800,000 individuals are expected to arrive in Germany in 2015 and apply for asylum there, nearly twice as many as in earlier forecasts. Various administrative, financial and social problems related to the influx of migrants are becoming increasingly apparent. The problem of ‘refugees’ (in public debate, the terms ‘immigrants’, ‘refugees’, ‘illegal immigrants’ and ‘economic immigrants’ have not been clearly defined and are often used interchangeably) has been building for over a year. Despite this, it was disregarded by Angela Merkel’s government, which was preoccupied with debates on how to rescue Greece. It was only daily reports of refugee centres being set on fire that convinced Chancellor Merkel to speak out and to make the immigration problem a priority issue (Chefsache). Neither the ruling coalition nor the opposition parties have a consistent idea of how Germany should react to the growing number of refugees; in this matter, divisions run across parties. Various solutions have been proposed, from liberalising the laws on the right to stay in Germany to combating illegal immigration more effectively, which would be possible if asylum-granting procedures were accelerated. The proposed solutions have not been properly thought through; instead, they are reactive measures inspired by the results of opinion polls, which is why their assumptions are often contradictory. The situation is similar regarding the actions proposed by Chancellor Merkel, which involve faster procedures to expel individuals with no right to stay in Germany and a plan to convince other EU states to accept ‘refugees’. None of these ideas is new; all were already present in the German internal debate.

Relevance: 30.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance: 30.00%

Abstract:

What is the time-optimal way of using a set of control Hamiltonians to obtain a desired interaction? Vidal, Hammerer, and Cirac [Phys. Rev. Lett. 88, 237902 (2002)] have obtained a set of powerful results characterizing the time-optimal simulation of a two-qubit quantum gate using a fixed interaction Hamiltonian and fast local control over the individual qubits. How practically useful are these results? We prove that there are two-qubit Hamiltonians such that time-optimal simulation requires infinitely many steps of evolution, each infinitesimally small, and thus is physically impractical. A procedure is given to determine which two-qubit Hamiltonians have this property, and we show that almost all Hamiltonians do. Finally, we determine some bounds on the penalty that must be paid in the simulation time if the number of steps is fixed at a finite number, and show that the cost in simulation time is not too great.
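For context, the canonical form in which such time-optimality results are usually stated is standard background rather than something given in the abstract: up to fast local unitaries, the nonlocal part of any two-qubit Hamiltonian can be reduced to a diagonal Pauli form, sketched below in LaTeX.

```latex
% Canonical form of a two-qubit interaction Hamiltonian, up to local
% unitaries (standard background; not a formula quoted from the abstract):
\[
  H \;\cong\; \sum_{j \in \{x, y, z\}} c_j \, \sigma_j \otimes \sigma_j ,
  \qquad c_x \ge c_y \ge |c_z| .
\]
```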

Relevance: 30.00%

Abstract:

Recently, methods for computing D-optimal designs for population pharmacokinetic studies have become available, but few publications have prospectively evaluated the benefits of D-optimality in population or single-subject settings. This study compared a population optimal design with an empirical design for estimating the base pharmacokinetic model for enoxaparin in a stratified randomized setting. The population pharmacokinetic D-optimal design for enoxaparin was estimated using the PFIM function (MATLAB version 6.0.0.88). The optimal design was based on a one-compartment model with lognormal between-subject variability and proportional residual variability, and consisted of a single design with three sampling windows (0-30 min, 1.5-5 hr and 11-12 hr post-dose) for all patients. The empirical design consisted of three sample-time windows per patient from a total of nine windows that collectively represented the entire dose interval; each patient was assigned one blood sample from each of three different windows. Windows for blood sampling times were also provided for the optimal design. Ninety-six patients who were currently receiving enoxaparin therapy were recruited into the study. Patients were randomly assigned to either the optimal or the empirical sampling design, stratified by body mass index, and the exact times of blood samples and doses were recorded. Analysis was undertaken using NONMEM (version 5). The empirical design supported a one-compartment linear model with additive residual error, while the optimal design supported a two-compartment linear model with additive residual error, as did the model derived from the full data set. A posterior predictive check was performed in which the models arising from the empirical and optimal designs were used to predict into the full data set. This revealed that the model derived from the optimal design was superior to the empirical-design model in terms of precision, and was similar to the model developed from the full data set. This study suggests that optimal design techniques may be useful even when the optimized design is based on a model that is misspecified in terms of the structural and statistical models, and when the implementation of the optimally designed study deviates from the nominal design.
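As an illustration of the idea behind D-optimality (a minimal sketch assuming a one-compartment IV bolus model with hypothetical parameter values, not the PFIM computation used in the study), candidate sampling schedules can be ranked by the log-determinant of an approximate Fisher information matrix:

```python
import numpy as np

def conc(t, cl, v, dose=40.0):
    """One-compartment IV bolus model: C(t) = (dose / V) * exp(-(CL/V) * t)."""
    return (dose / v) * np.exp(-(cl / v) * t)

def log_det_fim(times, cl=1.0, v=5.0, sigma=0.1, eps=1e-5):
    """Approximate log det of the Fisher information for (CL, V).

    Sensitivities are taken by finite differences; FIM = J^T J / sigma^2
    (additive residual error). All parameter values are hypothetical.
    """
    t = np.asarray(times, dtype=float)
    base = conc(t, cl, v)
    d_cl = (conc(t, cl + eps, v) - base) / eps
    d_v = (conc(t, cl, v + eps) - base) / eps
    j = np.column_stack([d_cl, d_v])
    fim = j.T @ j / sigma**2
    return np.linalg.slogdet(fim)[1]

# Rank two candidate schedules (hours post-dose) by the D-criterion.
sparse = [0.25, 3.0, 11.5]   # roughly mirrors the optimal design's windows
early = [1.0, 2.0, 3.0]      # clustered early samples, for contrast
for name, sched in [("sparse", sparse), ("early", early)]:
    print(name, log_det_fim(sched))
```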

Relevance: 30.00%

Abstract:

In this paper we investigate the trade-off faced by regulators who must set a price for an intermediate good somewhere between the marginal cost and the monopoly price. We utilize a growth model with monopolistic suppliers of intermediate goods. Investment in innovation is required to produce a new intermediate good. Marginal cost pricing deters innovation, while monopoly pricing maximizes innovation and economic growth at the cost of some static inefficiency. We demonstrate the existence of a second-best price above the marginal cost but below the monopoly price, which maximizes consumer welfare. Simulation results suggest that substantial reductions in consumption, production, growth, and welfare occur where regulators focus on static efficiency issues by setting prices at or near marginal cost.
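The paper's simulation model is not reproduced here; the toy sketch below, in which every functional form and number is an assumption, only illustrates the qualitative trade-off: moving the price toward marginal cost removes deadweight loss but weakens the innovation incentive, so welfare peaks at an interior second-best price.

```python
import numpy as np

# Toy welfare trade-off (all functional forms and numbers are illustrative).
MC, P_MONOPOLY = 1.0, 2.0

def welfare(p):
    markup = p - MC
    static_surplus = -(p - MC) ** 2          # deadweight loss grows with the markup
    innovation_gain = 2.0 * np.sqrt(markup)  # innovation incentive rises with profits
    return static_surplus + innovation_gain

prices = np.linspace(MC, P_MONOPOLY, 1001)
best = prices[np.argmax([welfare(p) for p in prices])]
print(f"second-best price: {best:.3f} (MC = {MC}, monopoly = {P_MONOPOLY})")
```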

Relevance: 30.00%

Abstract:

The type 1 diabetes (T1D) susceptibility locus IDDM8 has been accurately mapped to 200 kilobases at the terminal end of chromosome 6q27, within a region harbouring a cluster of three genes encoding proteasome subunit beta 1 (PSMB1), TATA-box binding protein (TBP) and a homologue of the mouse programmed cell death 2 gene (PDCD2). In this study, we evaluated whether these genes contribute to T1D susceptibility using the transmission disequilibrium test on a data set from 114 affected Russian simplex families. The A allele of the G/A1180 single nucleotide polymorphism (SNP) in the PDCD2 gene was preferentially transmitted from parents to diabetic children (75 transmissions vs. 47 non-transmissions, χ² = 12.85, P corrected = 0.0038) and was therefore found to be associated with T1D. The G/A1180 dimorphism and two other SNPs, C/T771 TBP and G/T(-271) PDCD2, were shown to share three common haplotypes, two of which (A-T-G and A-T-T) were associated with a higher risk of developing T1D, while the third (G-T-G) was associated with a lower risk of disease. These findings suggest that PDCD2 is a likely susceptibility gene for T1D within IDDM8, although it was not possible to exclude TBP as another putative susceptibility gene in this region.
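For reference, the single-locus transmission disequilibrium test statistic is a McNemar-type chi-square on transmitted versus non-transmitted allele counts from heterozygous parents; a minimal sketch with hypothetical counts (this uncorrected form is not expected to reproduce the corrected value reported above):

```python
from scipy.stats import chi2

def tdt(transmitted: int, not_transmitted: int) -> tuple[float, float]:
    """Transmission disequilibrium test for one allele.

    chi2_1 = (b - c)^2 / (b + c), where b and c are transmission and
    non-transmission counts from heterozygous parents.
    """
    stat = (transmitted - not_transmitted) ** 2 / (transmitted + not_transmitted)
    return stat, chi2.sf(stat, df=1)

stat, p = tdt(80, 40)  # hypothetical counts, for illustration only
print(f"chi2 = {stat:.2f}, p = {p:.5f}")
```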

Relevance: 30.00%

Abstract:

We present Ehrenfest relations for the high temperature stochastic Gross-Pitaevskii equation description of a trapped Bose gas, including the effect of growth noise and the energy cutoff. A condition for neglecting the cutoff terms in the Ehrenfest relations is found which is more stringent than the usual validity condition of the truncated Wigner or classical field method-that all modes are highly occupied. The condition requires a small overlap of the nonlinear interaction term with the lowest energy single particle state of the noncondensate band, and gives a means to constrain dynamical artefacts arising from the energy cutoff in numerical simulations. We apply the formalism to two simple test problems: (i) simulation of the Kohn mode oscillation for a trapped Bose gas at zero temperature, and (ii) computing the equilibrium properties of a finite temperature Bose gas within the classical field method. The examples indicate ways to control the effects of the cutoff, and that there is an optimal choice of plane wave basis for a given cutoff energy. This basis gives the best reproduction of the single particle spectrum, the condensate fraction and the position and momentum densities.
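For background, the classical field in such methods obeys a Gross-Pitaevskii-type equation, and the usual validity condition referred to above can be stated schematically (standard textbook forms, not formulas quoted from the abstract):

```latex
% Gross-Pitaevskii equation for the classical field \psi (standard form):
\[
  i\hbar\,\frac{\partial \psi}{\partial t}
  = \left( -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r}) + g\,|\psi|^2 \right)\psi ,
\]
% with the usual classical-field validity condition that all modes below
% the energy cutoff are highly occupied, $n_k \gg 1$.
```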

Relevance: 30.00%

Abstract:

To account for a preponderance of zero counts and simultaneous correlation between observations, a class of zero-inflated Poisson mixed regression models can be used to accommodate within-cluster dependence. In this paper, a score test for zero-inflation is developed for assessing correlated count data with excess zeros. The sampling distribution and the power of the test statistic are evaluated by simulation studies, and the results show that the test statistic performs satisfactorily under a wide range of conditions. The test procedure is further illustrated using a data set on recurrent urinary tract infections.
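The paper's test is designed for correlated counts; as a simpler reference point, the sketch below implements the classic score test for zero-inflation in the i.i.d. Poisson case (van den Broek, 1995), which mixed-model versions generalise. The simulated data are synthetic.

```python
import numpy as np
from scipy.stats import chi2

def zip_score_test(y):
    """Score test for zero-inflation in i.i.d. Poisson data (van den Broek, 1995).

    Under H0 (no zero-inflation) the statistic is asymptotically chi-square
    with 1 degree of freedom. Not the correlated-data test from the paper.
    """
    y = np.asarray(y)
    n, ybar = len(y), y.mean()
    p0 = np.exp(-ybar)                 # Poisson P(Y = 0) at the MLE
    n0 = np.sum(y == 0)                # observed zeros
    stat = (n0 / p0 - n) ** 2 / (n * (1 - p0) / p0 - n * ybar)
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(0)
y = np.where(rng.random(500) < 0.2, 0, rng.poisson(2.0, 500))  # 20% excess zeros
print(zip_score_test(y))
```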

Relevance: 30.00%

Abstract:

Through a prospective study of 70 youths staying at homeless-youth shelters, the authors tested the utility of I. Ajzen's (1991) theory of planned behavior (TPB) by comparing the constructs of self-efficacy and perceived behavioral control (PBC) in predicting people's rule-following behavior during shelter stays. The first wave of data collection was a questionnaire assessing the standard TPB components of attitudes, subjective norms, PBC, and behavioral intentions in relation to following the set rules at youth shelters. Further, the authors distinguished between items assessing PBC (or perceived control) and those reflecting self-efficacy (or perceived difficulty). At the completion of each youth's stay at the shelter, shelter staff rated the rule adherence for that participant. Regression analyses revealed some support for the TPB in that subjective norm was a significant predictor of intentions. However, self-efficacy emerged as the strongest predictor of intentions and was the only significant predictor of rule-following behavior. Thus, the results of the present study indicate the possibility that self-efficacy is integral to predicting rule adherence within this context and reaffirm the importance of incorporating notions of people's perceived ease or difficulty in performing actions in models of attitude-behavior prediction.

Relevance: 30.00%

Abstract:

Women are under-represented at senior levels within organisations. They also fare less well than their male counterparts in reward and career opportunities. Attitudes toward women in the workplace are thought to underpin these disparities and more and more organisations are introducing attitude measures into diversity and inclusion initiatives to: 1) raise awareness amongst employees of implicit attitudes, 2) educate employees on how these attitudes can influence behaviour and 3) re-measure the attitude after an intervention to assess whether the attitude has changed. The Implicit Association Test (IAT: Greenwald, et al., 1998) is the most popular tool used to assess attitudes. However, questions over the predictive validity of the measure have been raised and the evidence for the real world impact of the implicit attitudes is limited (Blanton et al., 2009; Landy, 2008; Tetlock & Mitchell, 2009; Wax, 2010). Whilst there is growing research in the area of race, little research has explored the ability of the IAT to predict gender discrimination. This thesis addresses this important gap in the literature. Three empirical studies were conducted. The first study explored whether gender IATs were predictive of personnel decisions that favour men and whether affect- and cognition-based gender IATs were equally predictive of behaviour. The second two studies explored the predictive validity of the IAT in comparison to an explicit measure of one type of gender attitude, benevolent sexism. The results revealed implicit gender attitudes were strongly held. However, they did not consistently predict behaviour across the studies. Overall, the results suggest that the IAT may only predict workplace gender discrimination in a very select set of circumstances. The attitude component that an IAT assesses, the personnel decision and participant demographics all impact the predictive validity of the tool. The interplay between the IAT and behaviour therefore appears to be more complex than is assumed.

Relevance: 30.00%

Abstract:

This work reports the development of a mathematical model and distributed, multivariable computer control for a pilot-plant double-effect climbing-film evaporator. A distributed-parameter model of the plant has been developed and the time-domain model transformed into the Laplace domain. The model has been further transformed into an integral domain conforming to an algebraic ring of polynomials, to eliminate the transcendental terms which arise in the Laplace domain due to the distributed nature of the plant model. This has made possible the application of linear control theories to a set of linear partial differential equations. The models obtained tracked the experimental results of the plant well. A distributed computer network has been interfaced with the plant to implement digital controllers in a hierarchical structure. A modern multivariable Wiener-Hopf controller has been applied to the plant model. The application revealed a limiting condition: the plant matrix must be positive-definite along the infinite frequency axis. A new multivariable control theory has emerged from this study which avoids the above limitation. The controller has the structure of the modern Wiener-Hopf controller, but with a unique feature enabling a designer to specify the closed-loop poles in advance and to shape the sensitivity matrix as required. In this way, the method directly treats the interaction problems found in chemical processes, with good tracking and regulation performance. The ability of analytical design methods to determine once and for all whether a given set of specifications can be met is one of their chief advantages over conventional trial-and-error design procedures; one disadvantage that offsets these advantages to some degree is the relatively complicated algebra that must be employed in working out all but the simplest problems. Mathematical algorithms and computer software have therefore been developed to handle some of the mathematical operations defined over the integral domain, such as matrix fraction description, spectral factorization, the Bezout identity, and the general manipulation of polynomial matrices. Hence, the design problems of Wiener-Hopf-type controllers and other similar algebraic design methods can be easily solved.
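Of the polynomial operations listed, the Bezout identity is easiest to illustrate in the scalar case; a minimal sketch using sympy's extended Euclidean algorithm, with arbitrary example polynomials:

```python
from sympy import symbols, gcdex, simplify

s = symbols('s')
a = s**2 + 3*s + 2   # arbitrary example polynomials, assumed coprime
b = s + 3

# Extended Euclidean algorithm: returns (x, y, g) with a*x + b*y = g.
x, y, g = gcdex(a, b, s)
print(f"x = {x}, y = {y}, gcd = {g}")
assert simplify(a*x + b*y - g) == 0
```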

Relevance: 30.00%

Abstract:

There may be circumstances where it is necessary for microbiologists to compare variances rather than means, e.g., in analysing data from experiments to determine whether a particular treatment alters the degree of variability, or in testing the assumption of homogeneity of variance prior to other statistical tests. All of the tests described in this Statnote have their limitations: Bartlett's test may be too sensitive, but Levene's and the Brown-Forsythe tests also have problems. We would recommend the use of the variance-ratio test to compare two variances and the careful application of Bartlett's test if there are more than two groups. Considering that these tests are not particularly robust, it should be remembered that the homogeneity of variance assumption is usually the least important of those considered when carrying out an ANOVA. If there is concern about this assumption, and especially if the other assumptions of the analysis are also unlikely to be met, e.g., lack of normality or non-additivity of treatment effects, then it may be better either to transform the data or to carry out a non-parametric test on the data.
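A minimal sketch of the recommended tests using scipy, on synthetic data (the variance-ratio test is written out by hand; Bartlett's test and the Brown-Forsythe variant of Levene's test are library calls):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(10, 1.0, size=30)   # synthetic measurements
b = rng.normal(10, 2.5, size=30)
c = rng.normal(10, 1.2, size=30)

# Variance-ratio (F) test for two groups: larger variance in the numerator.
v1, v2 = np.var(a, ddof=1), np.var(b, ddof=1)
f = max(v1, v2) / min(v1, v2)
df1 = df2 = len(a) - 1
p = 2 * stats.f.sf(f, df1, df2)    # two-tailed
print(f"F = {f:.2f}, p = {p:.4f}")

# Bartlett's test for more than two groups (sensitive to non-normality).
print(stats.bartlett(a, b, c))

# Levene's test; center='median' gives the Brown-Forsythe variant.
print(stats.levene(a, b, c, center='median'))
```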

Relevance: 30.00%

Abstract:

Product reliability and environmental performance have become critical elements of a product's specification and design. To obtain a high level of confidence in the reliability of a design, it is customary to test it under realistic conditions in a laboratory. The objective of this work is to examine the feasibility of designing mechanical test rigs which exhibit prescribed dynamical characteristics; the design is then attached to the rig and excitation is applied to the rig, which transmits representative vibration levels into the product. The philosophical considerations made at the outset of the project are discussed, as they form the basis for the resulting design methodologies. An attempt is made to identify the parameters of a test rig directly from the spatial model derived during the system identification process; this is shown to be incapable of yielding a feasible test rig design. A finite-dimensional optimal design methodology is then developed which identifies the parameters of a discrete spring/mass system that is dynamically similar to a point coordinate on a continuous structure. This methodology is incorporated within a further procedure which derives a structure comprising a continuous element and a discrete system, and is used to obtain point-coordinate similarity for two planes of motion, which is validated by experimental tests. A limitation of this approach is that multi-coordinate similarity cannot be achieved, owing to an interaction between the discrete system and the continuous element at points away from the coordinate of interest. The importance of the continuous element is highlighted, and a design methodology is developed for continuous structures. This methodology is based upon distributed-parameter optimal design techniques and allows an initially poor design estimate to be moved in a feasible direction towards an acceptable design solution. Cumulative damage theory is used to provide a quantitative method of assessing the quality of dynamic similarity. It is shown that the combination of modal analysis techniques and cumulative damage theory provides a feasible design-synthesis methodology for representative test rigs.
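As a hedged illustration of point-coordinate similarity (the masses, stiffnesses and target values below are hypothetical, not taken from the thesis), the receptance of a small spring/mass system at one coordinate can be computed and compared with a target response over a frequency band:

```python
import numpy as np

def point_receptance(m, k, coord, freqs):
    """Receptance H(w) = [K - w^2 M]^-1 at one coordinate of a 2-DOF chain.

    m: masses, k: spring stiffnesses (ground-m1-m2 chain); values hypothetical.
    """
    M = np.diag(m)
    K = np.array([[k[0] + k[1], -k[1]],
                  [-k[1],        k[1]]])
    h = []
    for w in freqs:
        H = np.linalg.inv(K - w**2 * M)
        h.append(H[coord, coord])
    return np.array(h)

freqs = np.linspace(1.0, 50.0, 200)
rig = point_receptance(m=[1.0, 0.5], k=[2e3, 1e3], coord=0, freqs=freqs)
target = point_receptance(m=[1.2, 0.4], k=[2.2e3, 0.9e3], coord=0, freqs=freqs)
# A crude similarity metric over the band of interest:
print("rms mismatch:", np.sqrt(np.mean((np.abs(rig) - np.abs(target))**2)))
```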

Relevance: 30.00%

Abstract:

This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation was the lack of viable approaches for analysing High Throughput Screening datasets, which may include thousands of data points of high dimensionality. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. Since the development of new robotic technologies, the capacity to test the activities of compounds has increased considerably in recent years. Traditional methods of looking at tables and graphical plots to analyse relationships between measured activities and the structure of compounds are not feasible for a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially those of high dimensionality. A few visualisation techniques for drug design have been developed, but most of them cope with only a few properties of compounds at a time. We believe that a latent variable model with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model (LTM) can deal with either continuous or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, the distribution of the data can be inferred from magnification factor and curvature plots. Rather than extracting useful information from a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure: the whole data set is modelled with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive, top-down fashion: the user subjectively identifies interesting regions on a visualisation plot that they would like to model in greater detail, and at each stage of hierarchical LTM construction the EM algorithm alternates between the E-step and the M-step. Another problem that can occur when visualising a large data set is that there may be significant overlaps between data clusters, making it very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model. The thesis also demonstrates the applicability of the hierarchy of latent trait models to document data mining.
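The hierarchical latent trait model itself is too large for a short example, but the EM alternation it is trained with can be shown on a toy two-component Gaussian mixture; all data and starting values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])

# EM for a two-component 1-D Gaussian mixture: the same E-step/M-step
# alternation used (in far richer form) to train a hierarchy of LTMs.
pi, mu, var = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    # E-step: posterior responsibility of component 1 for each point
    # (the 1/sqrt(2*pi) constant cancels in the ratio).
    p0 = (1 - pi) * np.exp(-(x - mu[0])**2 / (2*var[0])) / np.sqrt(var[0])
    p1 = pi * np.exp(-(x - mu[1])**2 / (2*var[1])) / np.sqrt(var[1])
    r = p1 / (p0 + p1)
    # M-step: re-estimate weights, means and variances from responsibilities.
    pi = r.mean()
    mu = np.array([np.sum((1-r)*x)/np.sum(1-r), np.sum(r*x)/np.sum(r)])
    var = np.array([np.sum((1-r)*(x-mu[0])**2)/np.sum(1-r),
                    np.sum(r*(x-mu[1])**2)/np.sum(r)])
print(f"weights: {1-pi:.2f}/{pi:.2f}, means: {mu.round(2)}, vars: {var.round(2)}")
```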