971 results for order statistics



Relevance: 40.00%

Abstract:

This article draws attention to routinely collected administrative data as an important source for understanding the characteristics of the Northern Ireland child welfare system as it has developed since the Children (Northern Ireland) Order 1995 became its legislative base. The article argues that the availability of such data is a strength of the Northern Ireland child welfare system and urges local politicians, lobbyists, researchers, policy-makers, operational managers, practitioners and service user groups to make more use of them. The main sources of administrative data are identified. Illustration of how these can be used to understand and to ask questions about the system is provided by considering some of the trends since the Children Order was enacted. The "protection" principle of the Children Order provides the focus for the illustration. The statistical trends considered relate to child protection referrals, investigations and registrations, and to children and young people looked after under a range of court orders available to ensure their protection and well-being.

Relevance: 30.00%

Abstract:

We seek numerical methods for second‐order stochastic differential equations that reproduce the stationary density accurately for all values of damping. A complete analysis is possible for scalar linear second‐order equations (damped harmonic oscillators with additive noise), where the statistics are Gaussian and can be calculated exactly in the continuous‐time and discrete‐time cases. A matrix equation is given for the stationary variances and correlation for methods using one Gaussian random variable per timestep. The only Runge–Kutta method with a nonsingular tableau matrix that gives the exact steady state density for all values of damping is the implicit midpoint rule. Numerical experiments, comparing the implicit midpoint rule with Heun and leapfrog methods on nonlinear equations with additive or multiplicative noise, produce behavior similar to the linear case.
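The implicit midpoint rule singled out above can be checked numerically. The sketch below, with illustrative parameter values not taken from the article, simulates the damped harmonic oscillator with additive noise, dx = v dt, dv = (-γv - ω²x) dt + σ dW, using the closed-form implicit midpoint step for a linear system, and compares the sample stationary variances against the exact values σ²/(2γω²) and σ²/(2γ):

```python
import numpy as np

# Implicit midpoint rule for the damped harmonic oscillator with additive
# noise: dx = v dt, dv = (-gamma*v - omega**2*x) dt + sigma dW.
# For a linear system y' = A y the implicit step solves
#   (I - h/2 A) y_{n+1} = (I + h/2 A) y_n + G dW.
# Parameter values are illustrative, not from the paper.

def simulate(gamma=1.0, omega=1.0, sigma=1.0, h=0.1, n_steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    A = np.array([[0.0, 1.0], [-omega**2, -gamma]])
    I = np.eye(2)
    M = np.linalg.solve(I - 0.5 * h * A, I + 0.5 * h * A)  # one-step map
    G = np.linalg.solve(I - 0.5 * h * A, np.array([0.0, sigma]))
    y = np.zeros(2)
    traj = np.empty((n_steps, 2))
    for n in range(n_steps):
        y = M @ y + G * rng.normal(0.0, np.sqrt(h))
        traj[n] = y
    return traj

traj = simulate()
var_x, var_v = traj[1000:].var(axis=0)  # discard a short burn-in
# Exact stationary variances: sigma^2/(2*gamma*omega^2) and sigma^2/(2*gamma),
# here both 0.5; the implicit midpoint estimates should match closely.
print(var_x, var_v)
```

With an explicit method such as Euler-Maruyama, the same experiment overestimates the stationary variances by an O(h) amount, which is the discrepancy the article's analysis quantifies.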

Relevance: 30.00%

Abstract:

Statistics of the estimates of tricoherence are obtained analytically for nonlinear harmonic random processes with known true tricoherence. Expressions are presented for the bias, variance, and probability distributions of estimates of tricoherence as functions of the true tricoherence and the number of realizations averaged in the estimates. The expressions are applicable to arbitrary higher-order coherence and arbitrary degree of interaction between modes. Theoretical results are compared with those obtained from numerical simulations of nonlinear harmonic random processes. Estimation of true values of tricoherence given observed values is also discussed.
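A small simulation illustrates the kind of estimate being analysed. The sketch below, with illustrative frequencies and realization counts (not taken from the article), estimates the tricoherence of a harmonic process under the standard normalization t²(k,l,m) = |⟨X_k X_l X_m X*_{k+l+m}⟩|² / (⟨|X_k X_l X_m|²⟩⟨|X_{k+l+m}|²⟩), averaging over realizations; the estimate is near 1 when the phase at k+l+m is cubically coupled and near 1/R (the bias for zero true tricoherence) when it is independent:

```python
import numpy as np

# Tricoherence estimate averaged over R realizations, using the standard
# normalization; frequencies and sizes are illustrative choices.

def tricoherence(realizations, k, l, m):
    X = np.fft.fft(realizations, axis=1)
    prod = X[:, k] * X[:, l] * X[:, m] * np.conj(X[:, k + l + m])
    num = np.abs(prod.mean()) ** 2
    den = ((np.abs(X[:, k] * X[:, l] * X[:, m]) ** 2).mean()
           * (np.abs(X[:, k + l + m]) ** 2).mean())
    return num / den

rng = np.random.default_rng(1)
N, R = 256, 128
t = np.arange(N)
k, l, m = 5, 11, 19

def process(coupled):
    x = np.zeros((R, N))
    for r in range(R):
        p1, p2, p3 = rng.uniform(0, 2 * np.pi, 3)
        # Coupled: the phase at k+l+m is the sum of the component phases.
        p4 = p1 + p2 + p3 if coupled else rng.uniform(0, 2 * np.pi)
        x[r] = (np.cos(2 * np.pi * k * t / N + p1)
                + np.cos(2 * np.pi * l * t / N + p2)
                + np.cos(2 * np.pi * m * t / N + p3)
                + np.cos(2 * np.pi * (k + l + m) * t / N + p4)
                + 0.1 * rng.standard_normal(N))
    return x

t_coupled = tricoherence(process(True), k, l, m)     # near 1
t_uncoupled = tricoherence(process(False), k, l, m)  # near 1/R
print(t_coupled, t_uncoupled)
```

The nonzero value in the uncoupled case is exactly the finite-realization bias whose distribution the article characterizes.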

Relevance: 30.00%

Abstract:

Purpose – The purpose of this paper is to summarise a successfully defended doctoral thesis. It provides a summary of the scope and main issues raised in the thesis so that readers undertaking studies in the same or connected areas may be aware of current contributions to the topic. The secondary aims are to frame the completed thesis in the context of doctoral-level research in project management and to offer ideas for further investigation that would extend scientific knowledge on the topic. Design/methodology/approach – The research reported in this paper is based on a quantitative study using inferential statistics aimed at better understanding the actual and potential usage of earned value management (EVM) as applied to external projects under contract. Theories uncovered during the literature review were hypothesized and tested using experiential data collected from 145 EVM practitioners with direct experience on one or more external projects under contract that applied the methodology. Findings – The results of this research suggest that EVM is an effective project management methodology. The principles of EVM were shown to be significant positive predictors of project success on contracted efforts, and to be relatively greater positive predictors of project success when using fixed-price versus cost-plus (CP) type contracts. Moreover, EVM's work-breakdown structure (WBS) utility was shown to contribute positively to the formation of project contracts. This contribution did not differ significantly between fixed-price and CP contracted projects, with exceptions in the areas of schedule planning and payment planning. EVM's "S" curve benefited the administration of project contracts, and its contribution did not differ significantly between fixed-price and CP contracted projects. Furthermore, EVM metrics were shown to be important contributors to the administration of project contracts. The relative contribution of EVM metrics to projects under fixed-price versus CP contracts was not significantly different, with one exception in the area of evaluating and processing payment requests. Practical implications – These results have important implications for project practitioners, EVM advocates, and corporate and governmental policy makers. EVM should be considered for all projects, not only for its positive contribution to project contract development and administration but also for its contribution to project success, regardless of contract type. Contract type should not be the sole determining factor in the decision whether or not to use EVM. More particularly, the more fixed the contracted project cost, the more the principles of EVM explain the success of the project. EVM mechanics should likewise be applied in all projects regardless of contract type. Payment planning using a WBS should be emphasised in fixed-price contracts using EVM in order to help mitigate performance risk, and schedule planning using a WBS should be emphasised in CP contracts using EVM in order to help mitigate financial risk. Similarly, EVM metrics should be emphasised in fixed-price contracts in evaluating and processing payment requests. Originality/value – This paper provides a summary of cutting-edge research work and a link to the published thesis that researchers can use to understand how the research methodology was applied and how it can be extended.
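For readers unfamiliar with the EVM metrics the thesis evaluates, a minimal sketch of the standard definitions (PMBOK-style formulas, with invented project numbers; nothing here is taken from the thesis data) may help:

```python
# Core earned value management (EVM) metrics: standard definitions,
# illustrated with invented numbers.

def evm_metrics(bac, pct_complete, pv, ac):
    """bac: budget at completion; pv: planned value; ac: actual cost."""
    ev = bac * pct_complete          # earned value
    return {
        "EV": ev,
        "CV": ev - ac,               # cost variance (negative = over cost)
        "SV": ev - pv,               # schedule variance (negative = behind)
        "CPI": ev / ac,              # cost performance index
        "SPI": ev / pv,              # schedule performance index
        "EAC": bac / (ev / ac),      # estimate at completion
    }

# A project with a 100k budget, 40% complete, 50k planned, 45k spent:
m = evm_metrics(bac=100_000, pct_complete=0.40, pv=50_000, ac=45_000)
print(m)  # CPI < 1 and SPI < 1 flag cost and schedule trouble early
```

It is exactly these indices, tracked against the WBS and the cumulative "S" curve, whose contribution to contract administration the thesis tests across contract types.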

Relevance: 30.00%

Abstract:

Computer vision is increasingly becoming interested in the rapid estimation of object detectors. The canonical strategy of using Hard Negative Mining to train a Support Vector Machine is slow, since the large negative set must be traversed at least once per detector. Recent work has demonstrated that, with an assumption of signal stationarity, Linear Discriminant Analysis is able to learn comparable detectors without ever revisiting the negative set. Even with this insight, the time to learn a detector can still be on the order of minutes. Correlation filters, on the other hand, can produce a detector in under a second. However, this involves the unnatural assumption that the statistics are periodic, and requires the negative set to be re-sampled per detector size. These two methods differ chiefly in the structure which they impose on the covariance matrix of all examples. This paper is a comparative study which develops techniques (i) to assume periodic statistics without needing to revisit the negative set and (ii) to accelerate the estimation of detectors with aperiodic statistics. It is experimentally verified that periodicity is detrimental.
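The LDA idea referenced above can be sketched in a few lines: once the (assumed stationary) covariance of the negative class has been estimated, a new detector is just w = Σ⁻¹(μ₊ − μ₋), with no per-detector pass over the negative set. Below, synthetic two-class Gaussian data stands in for image features; this is an illustration of the technique, not the paper's pipeline:

```python
import numpy as np

# LDA detector from shared "stationary" statistics: estimate the
# covariance once (from negatives), reuse it for every new detector.

rng = np.random.default_rng(0)
d = 10
mu_neg = np.zeros(d)
mu_pos = np.full(d, 2.0)
cov = 0.5 * np.eye(d) + 0.5            # shared, correlated covariance
L = np.linalg.cholesky(cov)

neg = mu_neg + rng.standard_normal((5000, d)) @ L.T
pos = mu_pos + rng.standard_normal((500, d)) @ L.T

sigma = np.cov(neg.T)                   # estimated once, never revisited
w = np.linalg.solve(sigma, pos.mean(0) - neg.mean(0))
b = -w @ (pos.mean(0) + neg.mean(0)) / 2

scores = np.vstack([neg, pos]) @ w + b
labels = np.r_[np.zeros(len(neg)), np.ones(len(pos))]
acc = ((scores > 0) == labels).mean()
print(f"LDA detector accuracy: {acc:.3f}")
```

Replacing `sigma` with a circulant (periodic) approximation is essentially the correlation-filter variant the paper compares against; its point is that this periodicity assumption hurts detection quality.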

Relevance: 30.00%

Abstract:

Many statistical forecast systems are available to interested users. To be useful for decision-making, these systems must be based on evidence of underlying mechanisms. Once causal connections between the mechanism and its statistical manifestation have been firmly established, the forecasts must also provide some quantitative evidence of 'quality'. However, the quality of statistical climate forecast systems (forecast quality) is an ill-defined and frequently misunderstood property. Often, providers and users of such forecast systems are unclear about what 'quality' entails and how to measure it, leading to confusion and misinformation. Here we present a generic framework to quantify aspects of forecast quality using an inferential approach to calculate nominal significance levels (p-values), which can be obtained either by directly applying non-parametric statistical tests such as Kruskal-Wallis (KW) or Kolmogorov-Smirnov (KS) or by using Monte-Carlo methods (in the case of forecast skill scores). Once converted to p-values, these forecast quality measures provide a means to objectively evaluate and compare temporal and spatial patterns of forecast quality across datasets and forecast systems. Our analysis demonstrates the importance of providing p-values rather than adopting some arbitrarily chosen significance level such as p < 0.05 or p < 0.01, which is still common practice. This is illustrated by applying non-parametric tests (KW and KS) and skill scoring methods (LEPS and RPSS) to the 5-phase Southern Oscillation Index classification system using historical rainfall data from Australia, the Republic of South Africa and India. The selection of quality measures is based solely on their common use and does not constitute endorsement. We found that non-parametric statistical tests can be adequate proxies for skill measures such as LEPS or RPSS. The framework can be implemented anywhere, regardless of dataset, forecast system or quality measure. Eventually, such inferential evidence should be complemented by descriptive statistical methods in order to fully assist in operational risk management.
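The Monte-Carlo route to a p-value mentioned above can be sketched as a permutation test: the observed skill is compared with its distribution under random reassignment of years to forecast categories. The rainfall data and the skill measure (difference of category medians) below are invented for illustration; they are not the LEPS or RPSS scores of the abstract:

```python
import numpy as np

# Monte-Carlo (permutation) p-value for a simple skill measure:
# shuffle category labels many times and count how often the shuffled
# skill matches or exceeds the observed skill.

rng = np.random.default_rng(42)
# Rainfall (mm) in years forecast "wet" vs "dry" -- synthetic data.
wet = rng.gamma(shape=6.0, scale=20.0, size=40)
dry = rng.gamma(shape=4.0, scale=20.0, size=40)

def skill(a, b):
    return abs(np.median(a) - np.median(b))

observed = skill(wet, dry)
pooled = np.concatenate([wet, dry])
n_perm = 5000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    if skill(pooled[:40], pooled[40:]) >= observed:
        count += 1
p_value = (count + 1) / (n_perm + 1)   # add-one rule avoids p = 0
print(f"observed skill = {observed:.1f} mm, p = {p_value:.4f}")
```

Reporting this p-value directly, rather than only whether it clears an arbitrary threshold, is precisely the practice the abstract argues for.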

Relevance: 30.00%

Abstract:

Climate variability and change are risk factors for climate-sensitive activities such as agriculture. Managing these risks requires "climate knowledge", i.e. a sound understanding of causes and consequences of climate variability and knowledge of potential management options that are suitable in light of the climatic risks posed. Often such information about prognostic variables (e.g. yield, rainfall, run-off) is provided in probabilistic terms (e.g. via cumulative distribution functions, CDFs), whereby the quantitative assessment of these alternative management options is based on such CDFs. Sound statistical approaches are needed in order to assess whether differences between such CDFs are intrinsic features of system dynamics or chance events (i.e. quantifying evidence against an appropriate null hypothesis). Statistical procedures that rely on such a hypothesis-testing framework are referred to as "inferential statistics", in contrast to descriptive statistics (e.g. mean, median, variance of population samples, skill scores). Here we report on the extension of some of the existing inferential techniques that provides more relevant and adequate information for decision making under uncertainty.
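The question posed above, whether the gap between two CDFs is an intrinsic feature or a chance event, can be sketched with the two-sample Kolmogorov-Smirnov distance between empirical CDFs and its asymptotic p-value (the standard Kolmogorov series approximation). The yield samples are invented for illustration:

```python
import numpy as np

# Two-sample KS test on empirical CDFs: the distance D is the largest
# vertical gap between the two CDFs; the p-value uses the asymptotic
# Kolmogorov distribution with the usual small-sample correction.

def ks_2sample(a, b):
    a, b = np.sort(a), np.sort(b)
    data = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, data, side="right") / len(a)
    cdf_b = np.searchsorted(b, data, side="right") / len(b)
    d = np.max(np.abs(cdf_a - cdf_b))
    n_eff = len(a) * len(b) / (len(a) + len(b))
    lam = (np.sqrt(n_eff) + 0.12 + 0.11 / np.sqrt(n_eff)) * d
    k = np.arange(1, 101)
    p = 2 * np.sum((-1) ** (k - 1) * np.exp(-2 * (k * lam) ** 2))
    return d, float(min(max(p, 0.0), 1.0))

rng = np.random.default_rng(7)
yield_a = rng.normal(4.0, 1.0, 200)   # t/ha under management option A
yield_b = rng.normal(5.0, 1.0, 200)   # t/ha under management option B
d, p = ks_2sample(yield_a, yield_b)
print(f"KS distance = {d:.3f}, p = {p:.2g}")
```

A small p-value here quantifies the evidence that the two management options really do shift the yield distribution, which is the inferential step the abstract distinguishes from merely describing the two CDFs.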

Relevance: 30.00%

Abstract:

This thesis presents novel modelling applications for environmental geospatial data using remote sensing, GIS and statistical modelling techniques. The studies fall into four main themes: (i) developing advanced geospatial databases. Paper (I) demonstrates the creation of a geospatial database for the Glanville fritillary butterfly (Melitaea cinxia) in the Åland Islands, south-western Finland; (ii) analysing species diversity and distribution using GIS techniques. Paper (II) presents a diversity and geographical distribution analysis for Scopulini moths at a world-wide scale; (iii) studying spatiotemporal forest cover change. Paper (III) presents a study of exotic and indigenous tree cover change detection in the Taita Hills, Kenya, using airborne imagery and GIS analysis techniques; (iv) exploring predictive modelling techniques using geospatial data. In Paper (IV) human population occurrence and abundance in the Taita Hills highlands were predicted using the generalized additive modelling (GAM) technique. Paper (V) presents techniques to enhance fire prediction and burned area estimation at a regional scale in East Caprivi, Namibia. Paper (VI) compares eight state-of-the-art predictive modelling methods to improve fire prediction, burned area estimation and fire risk mapping in East Caprivi, Namibia. The results in Paper (I) showed that geospatial data can be managed effectively using advanced relational database management systems. Metapopulation data for the Melitaea cinxia butterfly were successfully combined with GPS-delimited habitat patch information and climatic data. Using the geospatial database, spatial analyses were successfully conducted at habitat patch level or at coarser analysis scales. Moreover, this study showed that, at a large scale, spatially correlated weather conditions are one of the primary causes of spatially correlated changes in Melitaea cinxia population sizes. In Paper (II) the spatiotemporal characteristics of Scopulini moth description, diversity and distribution were analysed at a world-wide scale, and for the first time GIS techniques were used for Scopulini moth geographical distribution analysis. This study revealed that Scopulini moths have a cosmopolitan distribution. The majority of the species have been described from the low latitudes, sub-Saharan Africa being the hot spot of species diversity. However, the taxonomical effort has been uneven among biogeographical regions. Paper (III) showed that forest cover change can be analysed in great detail using modern airborne imagery techniques and historical aerial photographs. However, when spatiotemporal forest cover change is studied, care has to be taken in co-registration and image interpretation when historical black-and-white aerial photography is used. In Paper (IV) human population distribution and abundance could be modelled with fairly good results using geospatial predictors and non-Gaussian predictive modelling techniques. Moreover, a land cover layer is not necessarily needed as a predictor, because first- and second-order image texture measurements derived from satellite imagery had more power to explain the variation in dwelling unit occurrence and abundance. Paper (V) showed that the generalized linear model (GLM) is a suitable technique for fire occurrence prediction and for burned area estimation. GLM-based burned area estimations were found to be superior to the existing MODIS burned area product (MCD45A1). However, spatial autocorrelation of fires has to be taken into account when using the GLM technique for fire occurrence prediction. Paper (VI) showed that novel statistical predictive modelling techniques can be used to improve fire prediction, burned area estimation and fire risk mapping at a regional scale. However, some noticeable variation between different predictive modelling techniques for fire occurrence prediction and burned area estimation existed.
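The GLM for binary fire occurrence mentioned for Paper (V) is, in its simplest form, a logistic regression fitted by iteratively reweighted least squares. The sketch below uses synthetic covariates (hypothetical stand-ins for real geospatial predictors, not the thesis data) and recovers the generating coefficients:

```python
import numpy as np

# Binomial GLM (logistic regression) fitted by IRLS -- the classical
# fitting algorithm for generalized linear models.

def fit_logistic_irls(X, y, n_iter=25):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))            # inverse logit link
        w = mu * (1.0 - mu)                        # GLM working weights
        z = eta + (y - mu) / np.maximum(w, 1e-10)  # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

rng = np.random.default_rng(3)
n = 2000
dryness = rng.normal(size=n)      # hypothetical predictor
dist_road = rng.normal(size=n)    # hypothetical predictor
X = np.column_stack([np.ones(n), dryness, dist_road])
true_beta = np.array([-1.0, 1.5, -0.5])
prob = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
y = (rng.uniform(size=n) < prob).astype(float)

beta_hat = fit_logistic_irls(X, y)
print(np.round(beta_hat, 2))   # should be near [-1.0, 1.5, -0.5]
```

The thesis's caveat about spatially autocorrelated fires amounts to saying that the independence assumed by this plain GLM likelihood is violated, so standard errors from it would be too optimistic without correction.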

Relevance: 30.00%

Abstract:

This thesis is a study of a rather new logic called dependence logic and its closure under classical negation, team logic. In this thesis, dependence logic is investigated from several aspects. Some rules are presented for quantifier swapping in dependence logic and team logic. Such rules are among the basic tools one must be familiar with in order to gain the required intuition for using the logic for practical purposes. The thesis compares Ehrenfeucht-Fraïssé (EF) games of first order logic and dependence logic and defines a third EF game that characterises a mixed case where first order formulas are measured in the formula rank of dependence logic. The thesis contains detailed proofs of several translations between dependence logic, team logic, second order logic and its existential fragment. Translations are useful for showing relationships between the expressive powers of logics. Also, by inspecting the form of the translated formulas, one can see how an aspect of one logic can be expressed in the other logic. The thesis makes preliminary investigations into proof theory of dependence logic. Attempts focus on finding a complete proof system for a modest yet nontrivial fragment of dependence logic. A key problem is identified and addressed in adapting a known proof system of classical propositional logic to become a proof system for the fragment, namely that the rule of contraction is needed but is unsound in its unrestricted form. A proof system is suggested for the fragment and its completeness conjectured. Finally, the thesis investigates the very foundation of dependence logic. An alternative semantics called 1-semantics is suggested for the syntax of dependence logic. There are several key differences between 1-semantics and other semantics of dependence logic. 1-semantics is derived from first order semantics by a natural type shift. Therefore 1-semantics reflects an established semantics in a coherent manner. 
Negation in 1-semantics is a semantic operation and satisfies the law of excluded middle. A translation is provided from unrestricted formulas of existential second order logic into 1-semantics. Game-theoretic semantics is also considered in the light of 1-semantics.
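For orientation, the central notion of team semantics that the thesis builds on can be stated concisely. The following is the standard satisfaction clause for the dependence atom (notation may differ slightly from the thesis):

```latex
% A team X (a set of assignments) satisfies the dependence atom =(x, y)
% iff the value of y is functionally determined by the value of x:
X \models \mathord{=}(x, y)
  \iff
  \forall s, s' \in X :\;
    s(x) = s'(x) \;\Rightarrow\; s(y) = s'(y)
```

It is this evaluation over sets of assignments, rather than single assignments, that makes classical negation non-trivial and motivates both team logic and the alternative 1-semantics investigated in the thesis.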

Relevance: 30.00%

Abstract:

We solve the Dynamic Ehrenfeucht-Fraïssé Game on linear orders for both players, yielding a normal form for quantifier-rank equivalence classes of linear orders in first-order logic, infinitary logic, and generalized-infinitary logics with linearly ordered clocks. We show that Scott Sentences can be manipulated quickly, classified into local information, and that consistency can be decided effectively in the length of the Scott Sentence. We describe a finite set of linked automata moving continuously on a linear order. Running them on ordinals, we compute the ordinal truth predicate and compute truth in the constructible universe of set theory. Among the corollaries are a study of semi-models as an efficient database of both model-theoretic and formulaic information, and a new proof of the atomicity of the Boolean algebra of sentences consistent with the theory of linear order -- i.e., that the finitely axiomatized theories of linear order are dense.

Relevance: 30.00%

Abstract:

In logic, a model is an abstraction for many kinds of mathematical objects. For example, graphs, groups and metric spaces are models. Finite model theory is the branch of logic that studies the expressive power of logics, i.e. formal languages, on models whose domains have finitely many elements. Restricting attention to finite models makes the results applicable in theoretical computer science, from whose perspective logical formulas can be thought of as programs and finite models as their inputs. Locality means the inability of a logic to distinguish models whose local features correspond to each other. The thesis examines several forms of locality and their preservation under combinations of logics. Using the tools developed, it is shown that between the variants known as Gaifman locality and Hanf locality there lies a hierarchy of locality notions, whose levels can be separated from one another in grids of increasing dimension. On the other hand, it is shown that the locality notions coincide when attention is restricted to finite trees. Order-invariant logics are languages equipped with a built-in order relation, which must, however, be used in such a way that the properties expressed by formulas do not depend on the chosen order. The definition can be motivated from a computing perspective: even if the order of the data in a program's input is irrelevant to the expected result, the input is always stored in the computer's memory in some order, which the program can exploit in its computation. The thesis investigates which forms of locality order-invariant extensions of first-order predicate logic with unary quantifiers can satisfy. The results are applied by examining when a built-in order increases the expressive power of a logic on finite trees.

Relevance: 30.00%

Abstract:

This is the Salmonid & Freshwater Fisheries Statistics for England & Wales 1996, produced by the Environment Agency in 1997. The principal aim of the Environment Agency in respect of fisheries is to maintain, improve and develop fish stocks, the basic fisheries resource, in order to optimise the social and economic benefits from their sustainable exploitation. This report is the second collation of salmon and migratory trout catch statistics for England and Wales produced by the Environment Agency. For the years 1989-94, these statistics were published by the National Rivers Authority (NRA), and for the years 1983-88 by the Ministry of Agriculture, Fisheries and Food, Directorate of Fisheries Research in its Data Report series. The 1996 data have been presented in a broadly similar format to those of 1995. The report provides a general review of catches by region: Northumbria, Yorkshire, Anglian, Thames, Southern, Wessex, South West, Severn-Trent, Welsh and North West.