52 results for Routh-Hurwitz criterion
in CentAUR: Central Archive University of Reading - UK
Abstract:
Most factorial experiments in industrial research form one stage in a sequence of experiments, and so considerable prior knowledge is often available from earlier stages. A Bayesian A-optimality criterion is proposed for choosing designs when each stage in experimentation consists of a small number of runs and the objective is to optimise a response. Simple formulae for the weights are developed, some examples of the use of the design criterion are given, and general recommendations are made.
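For orientation, a standard weighted Bayesian A-optimality criterion (not necessarily the exact formulation of this paper) can be stated as follows: with model matrix $X$, prior precision matrix $R$ summarising the knowledge from earlier stages, and weights $w_1,\dots,w_p$ on the parameters collected in $W = \mathrm{diag}(w_1,\dots,w_p)$, the chosen design minimises

\[
\psi_A(X) = \mathrm{tr}\!\left[\, W \left( X^{\top} X + R \right)^{-1} \right],
\]

so that prior information reduces the posterior variances being traded off, and the weights encode which effects matter most for optimising the response.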
Abstract:
In this paper, Cognitive Abilities Test scores are compared directly with moderated GCSE scores awarded to the same group of pupils. For ease of interpretation, the comparisons are presented in a graphical form. Whilst some provisional and tentative conclusions are drawn about the reliability of GCSE art, questions are raised about the general validity of criterion-referenced assessment in this area.
Abstract:
A hybridised and Knowledge-based Evolutionary Algorithm (KEA) is applied to the multi-criterion minimum spanning tree problem. Hybridisation is used across its three phases. In the first phase, a deterministic single-objective optimisation algorithm finds the extreme points of the Pareto front. In the second phase, a K-best approach finds the first neighbours of the extreme points, which serve as an elitist parent population for an evolutionary algorithm in the third phase. A knowledge-based mutation operator is applied in each generation to reproduce individuals that are at least as good as the unique parent. The advantages of KEA over previous algorithms include its speed (making it applicable to large real-world problems), its scalability to more than two criteria, and its ability to find both the supported and unsupported optimal solutions.
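As an illustration of the three-phase structure just described, below is a minimal, self-contained Python sketch for a bi-criterion minimum spanning tree instance. It is not the authors' KEA: the K-best neighbourhood step is replaced by simple edge-exchange perturbations, and the knowledge-based mutation is read loosely as "accept a child only if its parent does not dominate it". All names, sizes and parameters are illustrative.

import random
from itertools import combinations

random.seed(1)

# Small random complete graph with two costs per edge (the two criteria).
N = 12
edges = {(u, v): (random.uniform(1, 10), random.uniform(1, 10))
         for u, v in combinations(range(N), 2)}

def kruskal(weight):
    """Phase 1 helper: single-objective MST via Kruskal with union-find."""
    parent = list(range(N))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = set()
    for e in sorted(edges, key=weight):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.add(e)
    return frozenset(tree)

def costs(tree):
    return (sum(edges[e][0] for e in tree), sum(edges[e][1] for e in tree))

def dominates(a, b):
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def mutate(tree):
    """Edge exchange: drop one tree edge and reconnect the two components."""
    tree = set(tree)
    dropped = random.choice(tuple(tree))
    tree.remove(dropped)
    adj = {v: set() for v in range(N)}
    for u, v in tree:
        adj[u].add(v)
        adj[v].add(u)
    comp, stack = {dropped[0]}, [dropped[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in comp:
                comp.add(w)
                stack.append(w)
    crossing = [e for e in edges
                if (e[0] in comp) != (e[1] in comp) and e != dropped]
    tree.add(random.choice(crossing))
    return frozenset(tree)

# Phase 1: extreme points of the Pareto front (one MST per criterion).
extremes = [kruskal(lambda e: edges[e][0]), kruskal(lambda e: edges[e][1])]

# Phase 2 (simplified stand-in for K-best): perturb the extremes to seed
# an elitist parent population.
population = set(extremes)
for t in extremes:
    for _ in range(5):
        population.add(mutate(t))

# Phase 3: evolutionary loop; a child is kept only if its parent does not
# dominate it, and an archive of non-dominated trees is maintained.
archive = set(population)
for _ in range(200):
    parent_tree = random.choice(tuple(population))
    child = mutate(parent_tree)
    if not dominates(costs(parent_tree), costs(child)):
        population.add(child)
        if not any(dominates(costs(t), costs(child)) for t in archive):
            archive = {t for t in archive
                       if not dominates(costs(child), costs(t))}
            archive.add(child)

print("approximate Pareto front:", sorted({costs(t) for t in archive}))

The final archive approximates the Pareto front; in the actual KEA, phase 2 would use the K-best trees and the mutation operator would exploit problem knowledge more directly.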
Abstract:
The banded organization of clouds and zonal winds in the atmospheres of the outer planets has long fascinated observers. Several recent studies in the theory and idealized modeling of geostrophic turbulence have suggested possible explanations for the emergence of such organized patterns, typically involving highly anisotropic exchanges of kinetic energy and vorticity within the dissipationless inertial ranges of turbulent flows dominated (at least at large scales) by ensembles of propagating Rossby waves. The results from an attempt to reproduce such conditions in the laboratory are presented here. Achievement of a distinct inertial range turns out to require an experiment on the largest feasible scale. Deep, rotating convection on small horizontal scales was induced by gently and continuously spraying dense, salty water onto the free surface of the 13-m-diameter cylindrical tank on the Coriolis platform in Grenoble, France. A “planetary vorticity gradient” or “β effect” was obtained by use of a conically sloping bottom and the whole tank rotated at angular speeds up to 0.15 rad s$^{-1}$. Over a period of several hours, a highly barotropic, zonally banded large-scale flow pattern was seen to emerge with up to 5–6 narrow, alternating, zonally aligned jets across the tank, indicating the development of an anisotropic field of geostrophic turbulence. Using particle image velocimetry (PIV) techniques, zonal jets are shown to have arisen from nonlinear interactions between barotropic eddies on a scale comparable to either a Rhines or “frictional” wavelength, which scales roughly as $(\beta/U_{\mathrm{rms}})^{-1/2}$. This resulted in an anisotropic kinetic energy spectrum with a significantly steeper slope with wavenumber $k$ for the zonal flow than for the nonzonal eddies, which largely follows the classical Kolmogorov $k^{-5/3}$ inertial range. Potential vorticity fields show evidence of Rossby wave breaking and the presence of a “hyperstaircase” with radius, indicating instantaneous flows that are supercritical with respect to the Rayleigh–Kuo instability criterion and in a state of “barotropic adjustment.” The implications of these results are discussed in light of zonal jets observed in planetary atmospheres and, most recently, in the terrestrial oceans.
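For reference, the jet-spacing and spectral scalings mentioned above take the following commonly quoted forms (the frictional variant used in the paper may differ in detail):

\[
L_{\mathrm{Rh}} \sim \left( \frac{U_{\mathrm{rms}}}{\beta} \right)^{1/2}
= \left( \frac{\beta}{U_{\mathrm{rms}}} \right)^{-1/2},
\qquad
E_{\mathrm{nonzonal}}(k) \sim C_K\, \varepsilon^{2/3} k^{-5/3},
\qquad
E_{\mathrm{zonal}}(k) \sim C_Z\, \beta^{2} k^{-5},
\]

where $\varepsilon$ is the kinetic energy transfer rate and $C_K$, $C_Z$ are constants; the steeper $k^{-5}$ zonal form is the standard theoretical expectation for β-plane turbulence rather than a value quoted in the abstract.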
Abstract:
The no response test is a new scheme in inverse problems for partial differential equations which was recently proposed in [D. R. Luke and R. Potthast, SIAM J. Appl. Math., 63 (2003), pp. 1292–1312] in the framework of inverse acoustic scattering problems. The main idea of the scheme is to construct special probing waves which are small on some test domain. Then the response for these waves is constructed. If the response is small, the unknown object is assumed to be a subset of the test domain. The response is constructed from one, several, or many particular solutions of the problem under consideration. In this paper, we investigate the convergence of the no response test for the reconstruction of information about inclusions $D$ from the Cauchy values of solutions to the Helmholtz equation on an outer surface $\partial\Omega$ with $\overline{D} \subset \Omega$. We show that the one-wave no response test provides a criterion to test the analytic extensibility of a field. In particular, we investigate the construction of approximations for the set of singular points $N(u)$ of the total fields $u$ from one given pair of Cauchy data. Thus, the no response test solves a particular version of the classical Cauchy problem. Also, if an infinite number of fields is given, we prove that a multifield version of the no response test reconstructs the unknown inclusion $D$. This is the first convergence analysis that has been achieved for the no response test.
Abstract:
The shallow water equations are solved using a mesh of polygons on the sphere, which adapts infrequently to the predicted future solution. Infrequent mesh adaptation reduces the cost of adaptation and load-balancing and thus allows for more accurate mapping on adaptation. We simulate the growth of a barotropically unstable jet, adapting the mesh every 12 h. Using an adaptation criterion based largely on the gradient of the vorticity leads to a mesh with around 20 per cent of the cells of a uniform mesh that gives equivalent results. This is a similar proportion to previous studies of the same test case with mesh adaptation every 1–20 min. The prediction of the mesh density involves solving the shallow water equations on a coarse mesh in advance of the locally refined mesh, in order to estimate where features requiring higher resolution will grow, decay or move to. The adaptation criterion consists of two parts: that resolved on the coarse mesh, and that which is not resolved and so is passively advected on the coarse mesh. This combination leads to a balance between resolving features controlled by the large-scale dynamics and maintaining fine-scale features.
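A toy illustration of such a two-part criterion (a resolved vorticity-gradient part blended with a passively advected fine-scale part); the field names, blend weight and 20 per cent target below are stand-ins, not values from the paper's configuration:

import numpy as np

def refinement_indicator(zeta_coarse, fine_scale_advected, dx, blend=0.5):
    """Return a normalised [0, 1] field marking where to refine."""
    dzdy, dzdx = np.gradient(zeta_coarse, dx)
    grad_mag = np.hypot(dzdx, dzdy)        # resolved part: |grad(vorticity)|
    def norm(f):
        return f / f.max() if f.max() > 0 else f
    return blend * norm(grad_mag) + (1 - blend) * norm(fine_scale_advected)

# Example: flag the ~20% of cells with the largest indicator values.
zeta = np.random.randn(64, 64)          # stand-in coarse-mesh vorticity
fine = np.abs(np.random.randn(64, 64))  # stand-in advected fine-scale field
ind = refinement_indicator(zeta, fine, dx=1.0)
refine_mask = ind >= np.quantile(ind, 0.8)
print("cells flagged for refinement:", int(refine_mask.sum()))

In the scheme described above, such an indicator would be evaluated on the coarse-mesh forecast valid at the next adaptation time, so that refinement is placed where features are predicted to be rather than where they currently are.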
Abstract:
Severe wind storms are one of the major natural hazards in the extratropics and inflict substantial economic damage and even casualties. Insured storm-related losses depend on (i) the frequency, nature and dynamics of storms, (ii) the vulnerability of the values at risk, (iii) the geographical distribution of these values, and (iv) the particular conditions of the risk transfer. It is thus of great importance to assess the impact of climate change on future storm losses. To this end, the current study employs, to our knowledge for the first time, a coupled approach, using output from high-resolution regional climate model scenarios for the European sector to drive an operational insurance loss model. An ensemble of coupled climate-damage scenarios is used to provide an estimate of the inherent uncertainties. Output of two state-of-the-art global climate models (HadAM3, ECHAM5) is used for present (1961–1990) and future climates (2071–2100, SRES A2 scenario). These serve as boundary data for two nested regional climate models with sophisticated gust parametrizations (CLM, CHRM). For validation and calibration purposes, an additional simulation is undertaken with the CHRM driven by the ERA40 reanalysis. The operational insurance model (Swiss Re) uses a European-wide damage function, an average vulnerability curve for all risk types, and contains the actual value distribution of a complete European market portfolio. The coupling between climate and damage models is based on daily maxima of 10 m gust winds, and the strategy adopted consists of three main steps: (i) development and application of a pragmatic selection criterion to retrieve significant storm events, (ii) generation of a probabilistic event set using a Monte Carlo approach in the hazard module of the insurance model, and (iii) calibration of the simulated annual expected losses with a historical loss database. The climate models considered agree on an increase in the intensity of extreme storms in a band across central Europe (stretching from southern UK and northern France to Denmark and northern Germany and into eastern Europe). This effect increases with event strength, and rare storms show the largest climate change sensitivity, but are also beset with the largest uncertainties. Wind gusts decrease over northern Scandinavia and southern Europe. The highest intra-ensemble variability is simulated for Ireland, the UK, the Mediterranean, and parts of eastern Europe. The resulting changes in European-wide losses over the 110-year period are positive for all layers and all model runs considered and amount to 44% (annual expected loss), 23% (10-year loss), 50% (30-year loss), and 104% (100-year loss). There is a disproportionate increase in losses for rare high-impact events. The changes result from increases in both the severity and the frequency of wind gusts. Considerable geographical variability of the expected losses exists, with Denmark and Germany experiencing the largest loss increases (116% and 114%, respectively). All countries considered except Ireland (−22%) experience some loss increase. Some ramifications of these results for the socio-economic sector are discussed, and future avenues for research are highlighted. The technique introduced in this study and its application to realistic market portfolios offer exciting prospects for future research on the impact of climate change that is relevant for policy makers, scientists and economists.
Abstract:
The k-means cluster technique is used to examine 43 yr of daily winter Northern Hemisphere (NH) polar stratospheric data from the 40-yr ECMWF Re-Analysis (ERA-40). The results show that the NH winter stratosphere exists in two natural well-separated states. In total, 10% of the analyzed days exhibit a warm disturbed state that is typical of sudden stratospheric warming events. The remaining 90% of the days are in a state typical of a colder undisturbed vortex. These states are determined objectively, with no preconceived notion of the groups. The two stratospheric states are described and compared with alternative indicators of the polar winter flow, such as the northern annular mode. It is shown that the zonally averaged zonal winds in the polar upper stratosphere at 7 hPa can best distinguish between the two states, using a threshold value of 4 m s$^{-1}$, which is remarkably close to the standard WMO criterion for major warming events. The analysis also determines that there are no further divisions within the warm state, indicating that there is no well-designated threshold between major and minor warmings, nor between split and displaced vortex events. These different manifestations are simply members of a continuum of warming events.
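A hedged sketch of the clustering step, on synthetic stand-in data (the real analysis uses ERA-40 fields; the column playing the role of the 7 hPa zonally averaged zonal wind, the cluster sizes and the midpoint rule below are assumptions for illustration):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "daily fields": 90% cold/undisturbed days, 10% warm/disturbed days.
n_cold, n_warm, n_features = 900, 100, 20
cold = rng.normal(loc=25.0, scale=5.0, size=(n_cold, n_features))
warm = rng.normal(loc=-5.0, scale=8.0, size=(n_warm, n_features))
X = np.vstack([cold, warm])

# Column 0 plays the role of the zonally averaged zonal wind at 7 hPa (m/s).
u_7hpa = X[:, 0]

# Two-cluster k-means on the daily fields.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# How well does the single wind index separate the two clusters? The midpoint
# between the cluster means is a crude analogue of a separating threshold.
means = [u_7hpa[labels == k].mean() for k in (0, 1)]
threshold = 0.5 * (means[0] + means[1])
print(f"cluster means of u(7 hPa): {means[0]:.1f}, {means[1]:.1f} m/s")
print(f"separating threshold: {threshold:.1f} m/s")

With two well-separated states, such a threshold on the wind index plays the role of the value that the paper finds to be about 4 m s$^{-1}$.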
Abstract:
M. R. Banaji and A. G. Greenwald (1995) demonstrated a gender bias in fame judgments—that is, an increase in judged fame due to prior processing that was larger for male than for female names. They suggested that participants shift criteria between judging men and women, using the more liberal criterion for judging men. This "criterion-shift" account appeared problematic for a number of reasons. In this article, 3 experiments are reported that were designed to evaluate the criterion-shift account of the gender bias in the false-fame effect against a distribution-shift account. The results were consistent with the criterion-shift account, and they helped to define more precisely the situations in which people may be ready to shift their response criterion on an item-by-item basis. In addition, the results were incompatible with an interpretation of the criterion shift as an artifact of the experimental situation in the experiments reported by M. R. Banaji and A. G. Greenwald.
Abstract:
In this review, we consider three possible criteria by which knowledge might be regarded as implicit or inaccessible: It might be implicit only in the sense that it is difficult to articulate freely, or it might be implicit according to either an objective threshold or a subjective threshold. We evaluate evidence for these criteria in relation to artificial grammar learning, the control of complex systems, and sequence learning, respectively. We argue that the convincing evidence is not yet in, but construing the implicit nature of implicit learning in terms of a subjective threshold is most likely to prove fruitful for future research. Furthermore, the subjective threshold criterion may demarcate qualitatively different types of knowledge. We argue that (1) implicit, rather than explicit, knowledge is often relatively inflexible in transfer to different domains, (2) implicit, rather than explicit, learning occurs when attention is focused on specific items and not underlying rules, and (3) implicit learning and the resulting knowledge are often relatively robust.
Abstract:
Two experiments examined the learning of a set of Greek pronunciation rules through explicit and implicit modes of rule presentation. Experiment 1 compared the effectiveness of implicit and explicit modes of presentation in two modalities, visual and auditory. Subjects in the explicit or rule group were presented with the rule set, and those in the implicit or natural group were shown a set of Greek words, composed of letters from the rule set, linked to their pronunciations. Subjects learned the Greek words to criterion and were then given a series of tests which aimed to tap different types of knowledge. The results showed an advantage of explicit study of the rules. In addition, an interaction was found between mode of presentation and modality. Explicit instruction was more effective in the visual than in the auditory modality, whereas there was no modality effect for implicit instruction. Experiment 2 examined a possible reason for the advantage of the rule groups by comparing different combinations of explicit and implicit presentation in the study and learning phases. The results suggested that explicit presentation of the rules is only beneficial when it is followed by practice at applying them.
Abstract:
The ultimate criterion of success for interactive expert systems is that they will be used, and used to effect, by individuals other than the system developers. A key ingredient of success in most systems is involving users in the specification and development of systems as they are being built. However, until recently, system designers have paid little attention to ascertaining user needs and to developing systems with corresponding functionality and appropriate interfaces to match those requirements. Although the situation is beginning to change, many developers do not know how to go about involving users, or else tackle the problem in an inadequate way. This paper discusses the need for user involvement and considers why many developers are still not involving users in an optimal way. It looks at the different ways in which users can be involved in the development process and describes how to select appropriate techniques and methods for studying users. Finally, it discusses some of the problems inherent in involving users in expert system development, and recommends an approach which incorporates both ethnographic analysis and formal user testing.