895 results for Risk Analysis, Security Models, Counter Measures, Threat Networks
Abstract:
Background Burns and scalds are a significant cause of morbidity and mortality in children. Successful counter-measures to prevent burn and scald-related injury have been identified. However, evidence indicating the successful roll-out of these counter-measures into the wider community is lacking. Community-based interventions in the form of multi-strategy, multi-focused programmes are hypothesised to result in a reduction in population-wide injury rates. This review tests this hypothesis with regard to burn and scald injury in children. Objectives To assess the effects of community-based interventions, defined as coordinated, multi-strategy initiatives, for reducing burns and scalds in children aged 14 years and under. Search strategy We searched the Cochrane Injuries Group's specialised register, CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, National Research Register and the Web of Knowledge. We also handsearched selected journals and checked the reference lists of selected publications. The searches were last updated in May 2007. Selection criteria Included studies were those that reported changes in medically attended burn and scald-related injury rates in a paediatric population (aged 14 years and under), following the implementation of a controlled community-based intervention. Data collection and analysis Two authors independently assessed studies for eligibility and extracted data. Due to heterogeneity between the included studies, a pooled analysis was not appropriate. Main results Of 39 identified studies, four met the criteria for inclusion. Two of the included studies reported a significant decrease in paediatric burn and scald injury in the intervention compared with the control communities. The failure of the other two studies to show a positive result may have been due to a limited time-frame for the intervention and/or failure to adequately implement the counter-measures in the communities. Authors' conclusions There are a very limited number of research studies allowing conclusions to be drawn about the effectiveness of community-based injury prevention programmes to prevent burns and scalds in children. There is a pressing need to evaluate high-quality community-based intervention programmes based on efficacious counter-measures to reduce burns and scalds in children. It is important that a framework for considering the problem of burns and scalds in children from a prevention perspective be articulated, and that an evidence-based suite of interventions be combined to create programme guidelines suitable for implementation in communities throughout the world.
Abstract:
Five case study communities in both metropolitan and regional urban locations in Australia are used as test sites to develop measures of 'community strength' on four domains: Natural Capital; Produced Economic Capital; Human Capital; and Social and Institutional Capital. The paper focuses on the fourth domain. Sample surveys of households in the five case study communities used a survey instrument with scaled items to measure four aspects of social capital - formal norms, informal norms, formal structures and informal structures - that embrace the concepts of trust, reciprocity, bonds, bridges, links and networks inherent in the notion of social capital, as they arise in the interaction of individuals with their community. Exploratory principal components analysis is used to identify factors that measure those aspects of social and institutional capital, while a confirmatory analysis based on Cronbach's alpha explores the robustness of the measures. Four primary scales and 15 subscales are identified when defining the domain of social and institutional capital. Further analysis reveals that two measures - anomie, and perceived quality of life and wellbeing - relate to certain primary scales of social capital.
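A minimal sketch of the internal-consistency check mentioned above (Cronbach's alpha for a set of scaled survey items). The item data, latent factor, and item count are invented for illustration and are not the paper's survey instrument.

```python
# Illustrative sketch, assuming made-up item responses: Cronbach's alpha
# measures how consistently a set of scaled items taps one underlying construct.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of scaled responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                 # hypothetical latent "trust" factor
items = latent + 0.8 * rng.normal(size=(200, 4))   # four noisy indicator items
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```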
Abstract:
The paper presents a spreadsheet-based multiple account framework for cost-benefit analysis which incorporates all the usual concerns of cost-benefit analysts, such as shadow-pricing to account for market failure, distribution of net benefits, sensitivity and risk analysis, cost of public funds, and environmental effects. The approach is generalizable to a wide range of projects and situations and offers a number of advantages to both analysts and decision-makers, including transparency, a check on internal consistency, and a detailed summary of project net benefits disaggregated by stakeholder group. Of particular importance is the ease with which this framework allows for a project to be evaluated from alternative decision-making perspectives and under alternative policy scenarios where the trade-offs among the project's stakeholders can readily be identified and quantified.
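A hedged sketch of the multiple-account idea: the same stream of project effects is tallied separately per stakeholder group, so the total NPV and its distribution across groups are read off the same table. The account names, cash flows, and discount rate below are invented, not taken from the paper.

```python
# Minimal sketch of NPV disaggregated by stakeholder account (illustrative figures).
import numpy as np

DISCOUNT_RATE = 0.05
YEARS = np.arange(0, 6)

# Net benefit per year for each account (row = stakeholder group, col = year)
accounts = {
    "government":  np.array([-100.0, 10, 10, 10, 10, 10]),
    "consumers":   np.array([0.0, 25, 25, 25, 25, 25]),
    "environment": np.array([0.0, -5, -5, -5, -5, -5]),
}

discount = 1.0 / (1.0 + DISCOUNT_RATE) ** YEARS
npv_by_account = {name: float(flows @ discount) for name, flows in accounts.items()}

for name, npv in npv_by_account.items():
    print(f"{name:>12}: NPV = {npv:8.1f}")
print(f"{'TOTAL':>12}: NPV = {sum(npv_by_account.values()):8.1f}")
```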
Abstract:
In this paper we propose a range of dynamic data envelopment analysis (DEA) models which allow information on costs of adjustment to be incorporated into the DEA framework. We first specify a basic dynamic DEA model predicated on a number of simplifying assumptions. We then outline a number of extensions to this model to accommodate asymmetric adjustment costs, non-static output quantities, non-static input prices, non-static costs of adjustment, technological change, quasi-fixed inputs and investment budget constraints. The new dynamic DEA models provide valuable extra information relative to the standard static DEA models: they identify an optimal path of adjustment for the input quantities, and provide a measure of the potential cost savings that result from recognising the costs of adjusting input quantities towards the optimal point. The new models are illustrated using data relating to a chain of 35 retail department stores in Chile. The empirical results illustrate the wealth of information that can be derived from these models, and clearly show that static models overstate potential cost savings when adjustment costs are non-zero.
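For orientation, a sketch of the standard static, input-oriented, constant-returns DEA efficiency score that the dynamic models extend. The input/output data are invented (three stores, two inputs, one output); this is the textbook linear program, not the authors' dynamic formulation.

```python
# Static input-oriented CRS DEA: for each unit, find the smallest radial input
# contraction theta such that a convex combination of peers dominates it.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])   # inputs: 3 stores x 2 inputs
Y = np.array([[1.0],      [1.5],      [1.2]])        # outputs: 3 stores x 1 output

def dea_efficiency(o: int) -> float:
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1..lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints: sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (1 + n))
    return res.x[0]

for o in range(len(X)):
    print(f"store {o}: efficiency = {dea_efficiency(o):.3f}")
```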
Abstract:
Background There are no analytical studies of individual risks for Ross River virus (RRV) disease. Therefore, we set out to determine individual risk and protective factors for RRV disease in a high incidence area and to assess the utility of the case-control design applied for this purpose to an arbovirus disease. Methods We used a prospective matched case-control study of new community cases of RRV disease in the local government areas of Cairns, Mareeba, Douglas, and Atherton, in tropical Queensland, from January 1 to May 31, 1998. Results Protective measures against mosquitoes reduced the risk for disease. Mosquito coils, repellents, and citronella candles each decreased risk by at least 2-fold, with a dose-response for the number of protective measures used. Light-coloured clothing decreased risk 3-fold. Camping increased the risk 8-fold. Conclusions These risks were substantial and statistically significant, and provide a basis for educational programs on individual protection against RRV disease in Australia. Our study demonstrates the utility of the case-control method for investigating arbovirus risks. Such a risk analysis has not been done before for RRV infection, and is infrequently reported for other arbovirus infections.
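An illustrative sketch of the basic risk measure behind a case-control analysis: an odds ratio with a 95% confidence interval from a 2x2 table of exposure by case status. The counts are hypothetical, and the study itself used a matched design analysed with conditional methods; this is the simpler unmatched calculation.

```python
# Odds ratio with Wald 95% CI from a 2x2 table (hypothetical counts).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a=exposed cases, b=unexposed cases, c=exposed controls, d=unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: repellent use among 100 cases and 100 controls
or_, lo, hi = odds_ratio_ci(a=20, b=80, c=45, d=55)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")   # OR < 1 suggests a protective effect
```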
Abstract:
The role of mutualisms in contributing to species invasions is rarely considered, inhibiting effective risk analysis and management options. Potential ecological consequences of invasion of non-native pollinators include increased pollination and seed set of invasive plants, with subsequent impacts on population growth rates and rates of spread. We outline a quantitative approach for evaluating the impact of a proposed introduction of an invasive pollinator on existing weed population dynamics and demonstrate the use of this approach on a relatively data-rich case study: the impacts on Cytisus scoparius (Scotch broom) from proposed introduction of Bombus terrestris. Three models have been used to assess population growth (matrix model), spread speed (integrodifference equation), and equilibrium occupancy (lattice model) for C. scoparius. We use available demographic data for an Australian population to parameterize two of these models. Increased seed set due to more efficient pollination resulted in a higher population growth rate in the density-independent matrix model, whereas simulations of enhanced pollination scenarios had a negligible effect on equilibrium weed occupancy in the lattice model. This is attributed to strong microsite limitation of recruitment in invasive C. scoparius populations observed in Australia and incorporated in the lattice model. A lack of information regarding secondary ant dispersal of C. scoparius prevents us from parameterizing the integrodifference equation model for Australia, but studies of invasive populations in California suggest that spread speed will also increase with higher seed set. For microsite-limited C. scoparius populations, increased seed set has minimal effects on equilibrium site occupancy. However, for density-independent rapidly invading populations, increased seed set is likely to lead to higher growth rates and spread speeds. The impacts of introduced pollinators on native flora and fauna and the potential for promoting range expansion in pollinator-limited 'sleeper weeds' also remain substantial risks.
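A hedged sketch of the density-independent matrix-model step described above: the asymptotic population growth rate is the dominant eigenvalue of a stage-structured projection matrix, and increased seed set enters through the fecundity entry. The matrix entries and the 50% seed-set increase are invented, not the Cytisus scoparius parameterisation used in the paper.

```python
# Population growth rate (lambda) from a stage-structured projection matrix,
# comparing a baseline with an "enhanced pollination" scenario (made-up numbers).
import numpy as np

def growth_rate(A: np.ndarray) -> float:
    """Dominant eigenvalue (lambda) of the projection matrix A."""
    return max(abs(np.linalg.eigvals(A)))

# Stages: seed bank, seedling, adult (columns = current stage, rows = next stage)
fecundity = 50.0   # seeds per adult per year (hypothetical)
A_baseline = np.array([
    [0.10, 0.0, fecundity],   # seed production / seed-bank survival
    [0.05, 0.2, 0.0],         # germination and seedling survival
    [0.00, 0.3, 0.9],         # maturation and adult survival
])

A_enhanced = A_baseline.copy()
A_enhanced[0, 2] *= 1.5       # e.g. 50% higher seed set from extra pollination

print(f"lambda baseline: {growth_rate(A_baseline):.3f}")
print(f"lambda enhanced: {growth_rate(A_enhanced):.3f}")
```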
Abstract:
How can empirical evidence of adverse effects from exposure to noxious agents, which is often incomplete and uncertain, be used most appropriately to protect human health? We examine several important questions on the best uses of empirical evidence in regulatory risk management decision-making raised by the US Environmental Protection Agency (EPA)'s science-policy concerning uncertainty and variability in human health risk assessment. In our view, the US EPA (and other agencies that have adopted similar views of risk management) can often improve decision-making by decreasing reliance on default values and assumptions, particularly when causation is uncertain. This can be achieved by more fully exploiting decision-theoretic methods and criteria that explicitly account for uncertain, possibly conflicting scientific beliefs and that can be fully studied by advocates and adversaries of a policy choice, in administrative decision-making involving risk assessment. The substitution of decision-theoretic frameworks for default assumption-driven policies also allows stakeholder attitudes toward risk to be incorporated into policy debates, so that the public and risk managers can more explicitly identify the roles of risk-aversion or other attitudes toward risk and uncertainty in policy recommendations. Decision theory provides a sound scientific way explicitly to account for new knowledge and its effects on eventual policy choices. Although these improvements can complicate regulatory analyses, simplifying default assumptions can create substantial costs to society and can prematurely cut off consideration of new scientific insights (e.g., possible beneficial health effects from exposure to sufficiently low 'hormetic' doses of some agents). In many cases, the administrative burden of applying decision-analytic methods is likely to be more than offset by improved effectiveness of regulations in achieving desired goals. Because many foreign jurisdictions adopt US EPA reasoning and methods of risk analysis, it may be especially valuable to incorporate decision-theoretic principles that transcend local differences among jurisdictions.
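A minimal sketch of the decision-theoretic alternative to a single default assumption: each policy option is scored by its expected loss over competing scientific hypotheses, weighted by the probabilities currently assigned to them. The hypothesis names, probabilities, and loss figures are all hypothetical.

```python
# Expected loss of each policy option under uncertain, possibly conflicting
# scientific beliefs (illustrative numbers only).
beliefs = {"linear_no_threshold": 0.5, "threshold": 0.3, "hormetic": 0.2}

# loss[policy][hypothesis]: combined health + compliance cost (arbitrary units)
loss = {
    "strict_limit":   {"linear_no_threshold": 2.0, "threshold": 3.0, "hormetic": 4.0},
    "moderate_limit": {"linear_no_threshold": 3.5, "threshold": 2.0, "hormetic": 2.5},
    "no_limit":       {"linear_no_threshold": 9.0, "threshold": 4.0, "hormetic": 1.0},
}

for policy, losses in loss.items():
    expected = sum(beliefs[h] * losses[h] for h in beliefs)
    print(f"{policy:>15}: expected loss = {expected:.2f}")
```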
Abstract:
Aim of study: As part of a Cochrane review of viscosupplementation in knee OA, randomised controlled trials (RCT) were reviewed to evaluate evidence for the efficacy of viscosupplementation with Hylan G-F 20 compared to placebo. Methods: Electronic searches were conducted of MEDLINE, EMBASE, Premedline, Current Contents, and CENTRAL. Human RCT involving Hylan G-F 20 compared to placebo, published prior to 1Q2004, were included. Trials were selected and data extracted by two independent reviewers. Methodological quality was assessed with the Jadad criteria by two reviewers. Data on the OARSI and OMERACT core set clinical outcome measures were extracted where possible. Weighted mean difference (WMD), based on post-test scores, and 95% confidence intervals (CI) were calculated for continuous outcome measures and relative risk (RR) for dichotomous outcome measures. Results: Seven RCT met the inclusion criteria. Median methodological quality was 4 (range 1–5). A further two studies were only reported in abstract form (Jadad score = 1) and contained insufficient extractable data for inclusion in the analysis. Nine RCT, which compared Hylan G-F 20 to other interventions such as intra-articular corticosteroid, physiotherapy, NSAID, appropriate care, intra-articular gaseous oxygen and other hyaluronan, are not reported here. Twenty-three studies failed to meet inclusion criteria and were excluded. Hylan G-F 20 was more efficacious than placebo at 1–4 weeks post-injection for pain on weight-bearing, WMD (random effects [RE]) 13 mm on a 0–100 mm VAS (P = 0.002) based on 6 RCT. This difference was even greater at 5–13 weeks post-injection, 22 mm (RE) (P = 0.001) based on 5 RCT, and at 14–26 weeks post-injection, 21 mm (RE) (P = 0.006) based on 4 RCT. Hylan G-F 20 was more efficacious than placebo at 1–4 weeks post-injection for pain at night, WMD 7 mm on a 0–100 mm VAS (P = 0.003) based on 5 RCT. This difference was even greater at 5–13 weeks post-injection, 11 mm (P = 0.008) based on 4 RCT, and at 14–26 weeks post-injection, 17 mm (P < 0.00001) based on 3 RCT. There was no significant difference (WMD 8 mm) between Hylan G-F 20 + oral placebo and arthrocentesis + oral placebo at 5–13 weeks post-injection for WOMAC Pain, but Hylan G-F 20 + oral placebo was more efficacious than arthrocentesis + oral placebo for WOMAC Function, WMD 9 mm on a 0–100 mm VAS (P = 0.01) (Dickson, 2001). Hylan G-F 20 was more effective than placebo at 1–4 weeks post-injection for the variable designated treatment efficacy, WMD 22 mm on a 0–100 mm VAS (P < 0.00001) based on improvement in 4 RCT. This difference was even greater at 5–13 weeks post-injection, 35 mm (P < 0.00001). Conclusions: Evidence from this updated Cochrane review supports the superior efficacy of Hylan G-F 20 compared to placebo on weight-bearing pain, night pain, function and treatment efficacy in the treatment of knee OA.
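A sketch of the pooling step behind a weighted mean difference: each trial's mean difference is weighted by the inverse of its variance, giving a pooled estimate and 95% CI. The trial figures are invented, not the Hylan G-F 20 data, and the fixed-effect version is shown for brevity (the review used a random-effects model).

```python
# Fixed-effect inverse-variance pooling of mean differences (illustrative trials).
import math

# (mean difference in mm on a 0-100 VAS, standard error) for hypothetical trials
trials = [(15.0, 4.0), (10.0, 5.0), (18.0, 6.0), (12.0, 3.5)]

weights = [1.0 / se**2 for _, se in trials]
pooled = sum(w * md for (md, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled WMD = {pooled:.1f} mm (95% CI {lo:.1f} to {hi:.1f})")
```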
Abstract:
Boolean models of genetic regulatory networks (GRNs) have been shown to exhibit many of the characteristic dynamics of real GRNs, with gene expression patterns settling to point attractors or limit cycles, or displaying chaotic behaviour, depending upon the connectivity of the network and the relative proportions of excitatory and inhibitory interactions. This range of behaviours is only apparent, however, when the nodes of the GRN are updated synchronously, a biologically implausible state of affairs. In this paper we demonstrate that evolution can produce GRNs with interesting dynamics under an asynchronous update scheme. We use an Artificial Genome to generate networks which exhibit limit cycle dynamics when updated synchronously, but collapse to a point attractor when updated asynchronously. Using a hill climbing algorithm the networks are then evolved using a fitness function which rewards patterns of gene expression which revisit as many previously seen states as possible. The final networks exhibit “fuzzy limit cycle” dynamics when updated asynchronously.
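A hedged sketch (not the paper's Artificial Genome code) of the distinction the abstract turns on: a random Boolean network stepped synchronously (all nodes at once) versus asynchronously (one randomly chosen node per step). Network size, connectivity, and wiring are arbitrary.

```python
# Random Boolean network, synchronous vs asynchronous update (toy example).
import random

N, K, STEPS = 8, 2, 20
random.seed(1)
inputs = [[random.randrange(N) for _ in range(K)] for _ in range(N)]          # wiring
tables = [[random.randrange(2) for _ in range(2**K)] for _ in range(N)]       # Boolean rules
state = [random.randrange(2) for _ in range(N)]

def node_update(s, i):
    idx = sum(s[j] << b for b, j in enumerate(inputs[i]))
    return tables[i][idx]

def step_sync(s):
    return [node_update(s, i) for i in range(N)]          # all nodes updated at once

def step_async(s):
    s = s[:]
    i = random.randrange(N)                               # one randomly chosen node
    s[i] = node_update(s, i)
    return s

sync_traj, async_traj = state, state
for _ in range(STEPS):
    sync_traj, async_traj = step_sync(sync_traj), step_async(async_traj)
print("synchronous :", sync_traj)
print("asynchronous:", async_traj)
```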
Abstract:
Measuring Job Openings: Evidence from Swedish Plant Level Data. In modern macroeconomic models "job openings" are a key component. Thus, when taking these models to the data we need an empirical counterpart to the theoretical concept of job openings. To achieve this, the literature relies on job vacancies measured either in survey or register data. Insofar as this concept captures the concept of job openings well we should see a tight relationship between vacancies and subsequent hires on the micro level. To investigate this, I analyze a new data set of Swedish hires and job vacancies on the plant level covering the period 2001-2012. I find that vacancies contain little power in predicting hires over and above (i) whether the number of vacancies is positive and (ii) plant size. Building on this, I propose an alternative measure of job openings in the economy. This measure (i) better predicts hiring at the plant level and (ii) provides a better fitting aggregate matching function vis-à-vis the traditional vacancy measure. Firm Level Evidence from Two Vacancy Measures. Using firm level survey and register data for both Sweden and Denmark we show systematic mis-measurement in both vacancy measures. While the register-based measure on the aggregate constitutes a quarter of the survey-based measure, the latter is not a super-set of the former. To obtain the full set of unique vacancies in these two databases, the number of survey vacancies should be multiplied by approximately 1.2. Importantly, this adjustment factor varies over time and across firm characteristics. Our findings have implications for both the search-matching literature and policy analysis based on vacancy measures: observed changes in vacancies can be an outcome of changes in mis-measurement, and are not necessarily changes in the actual number of vacancies. Swedish Unemployment Dynamics. We study the contribution of different labor market flows to business cycle variations in unemployment in the context of a dual labor market. To this end, we develop a decomposition method that allows for a distinction between permanent and temporary employment. We also allow for slow convergence to steady state, which is characteristic of European labor markets. We apply the method to a new Swedish data set covering the period 1987-2012 and show that the relative contributions of inflows and outflows to/from unemployment are roughly 60/30. The remaining 10% are due to flows not involving unemployment. Even though temporary contracts only cover 9-11% of the working age population, variations in flows involving temporary contracts account for 44% of the variation in unemployment. We also show that the importance of flows involving temporary contracts is likely to be understated if one does not account for non-steady state dynamics. The New Keynesian Transmission Mechanism: A Heterogeneous-Agent Perspective. We argue that a 2-agent version of the standard New Keynesian model, where a "worker" receives only labor income and a "capitalist" only profit income, offers insights about how income inequality affects the monetary transmission mechanism. Under rigid prices, monetary policy affects the distribution of consumption, but it has no effect on output as workers choose not to change their hours worked in response to wage movements.
In the corresponding representative-agent model, in contrast, hours do rise after a monetary policy loosening due to a wealth effect on labor supply: profits fall, thus reducing the representative worker's income. If wages are rigid too, however, the monetary transmission mechanism is active and resembles that in the corresponding representative-agent model. Here, workers are not on their labor supply curve and hence respond passively to demand, and profits are procyclical.
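An illustrative sketch of the "aggregate matching function" comparison used to evaluate vacancy measures: a Cobb-Douglas matching function H = mu * V^alpha * U^(1-alpha) fitted by OLS in logs. The series below are simulated; the thesis uses Swedish plant- and firm-level data.

```python
# Fit a Cobb-Douglas matching function in logs from simulated aggregate series.
import numpy as np

rng = np.random.default_rng(0)
T = 120
U = rng.uniform(200, 400, T)                  # unemployed job seekers
V = rng.uniform(50, 150, T)                   # job openings (vacancies or an alternative measure)
true_alpha, true_mu = 0.4, 0.8
H = true_mu * V**true_alpha * U**(1 - true_alpha) * np.exp(0.05 * rng.normal(size=T))

# Regress log(H/U) on log(V/U): slope = alpha, intercept = log(mu)
x = np.log(V / U)
y = np.log(H / U)
alpha_hat, log_mu_hat = np.polyfit(x, y, 1)
print(f"alpha = {alpha_hat:.3f}, mu = {np.exp(log_mu_hat):.3f}")
```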
Abstract:
The judicial interest in ‘scientific’ evidence has driven recent work to quantify results for forensic linguistic authorship analysis. Through a methodological discussion and a worked example this paper examines the issues which complicate attempts to quantify results in such work. The solution suggested to some of the difficulties is a sampling and testing strategy which helps to identify potentially useful, valid and reliable markers of authorship. An important feature of the sampling strategy is that the markers identified as being generally valid and reliable are retested for use in specific authorship analysis cases. The suggested approach for drawing quantified conclusions combines discriminant function analysis and Bayesian likelihood measures. The worked example starts with twenty comparison texts for each of three potential authors and then uses a progressively smaller comparison corpus, reducing to fifteen, ten, five and finally three texts per author. This worked example demonstrates how reducing the amount of data affects the way conclusions can be drawn. With greater numbers of reference texts, quantified and safe attributions are shown to be possible, but as the number of reference texts reduces, the analysis shows that the conclusion which should be reached is that no attribution can be made. At no point does the testing process result in a misattribution.
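A hedged sketch of the quantification step: linear discriminant analysis over simple stylometric features, with the class posteriors read as support for each candidate author. The features, author profiles, and corpus sizes are simulated, not the markers validated in the paper, and the paper's Bayesian likelihood treatment is richer than raw posteriors.

```python
# Discriminant analysis over simulated stylometric features for three candidate authors.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
texts_per_author, n_features = 20, 5          # e.g. rates of candidate markers per text

# Simulate comparison corpora for three candidate authors with different profiles
profiles = rng.normal(size=(3, n_features))
X = np.vstack([profiles[a] + 0.5 * rng.normal(size=(texts_per_author, n_features))
               for a in range(3)])
y = np.repeat([0, 1, 2], texts_per_author)

lda = LinearDiscriminantAnalysis().fit(X, y)

# A disputed text actually drawn from author 1's profile
disputed = profiles[1] + 0.5 * rng.normal(size=n_features)
posterior = lda.predict_proba(disputed.reshape(1, -1))[0]
for author, p in enumerate(posterior):
    print(f"author {author}: posterior probability = {p:.3f}")
```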
Abstract:
Neural networks have often been motivated by superficial analogy with biological nervous systems. Recently, however, it has become widely recognised that the effective application of neural networks requires instead a deeper understanding of the theoretical foundations of these models. Insight into neural networks comes from a number of fields including statistical pattern recognition, computational learning theory, statistics, information geometry and statistical mechanics. As an illustration of the importance of understanding the theoretical basis for neural network models, we consider their application to the solution of multi-valued inverse problems. We show how a naive application of the standard least-squares approach can lead to very poor results, and how an appreciation of the underlying statistical goals of the modelling process allows the development of a more general and more powerful formalism which can tackle the problem of multi-modality.
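A small numeric illustration of the pitfall described above: for a multi-valued inverse problem (here the forward map x -> y = x^2, whose inverse has branches +sqrt(y) and -sqrt(y)), a least-squares fit of x given y converges to the conditional mean, which lies between the branches and is a valid solution for neither. The polynomial model and data are arbitrary stand-ins, not the paper's network.

```python
# Naive least-squares regression on a multi-valued inverse problem averages the branches.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
y = x**2 + 0.01 * rng.normal(size=2000)

# "Naive" least-squares inverse model: predict x from y with a cubic polynomial
coeffs = np.polyfit(y, x, deg=3)
y_test = np.array([0.25, 0.5, 0.75])
print("least-squares prediction:", np.polyval(coeffs, y_test))   # close to 0 everywhere
print("actual solution branches:", np.sqrt(y_test), -np.sqrt(y_test))
```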
Abstract:
It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time dependent variance for many financial time series. However, such models are essentially linear in form and we can ask whether a non-linear model for variance can improve results just as non-linear models (such as neural networks) for the mean have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic under-estimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models to provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
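A hedged sketch of the linear benchmark idea from the ARCH family: an ARCH(1) conditional variance, sigma_t^2 = a0 + a1 * e_{t-1}^2, estimated by maximising the Gaussian likelihood over a coarse grid. The return series is simulated, not real exchange-rate data, and this is not the paper's Mixture Density Network.

```python
# Simulate an ARCH(1) series and recover its parameters by grid-search maximum likelihood.
import numpy as np

rng = np.random.default_rng(0)
T, a0_true, a1_true = 2000, 0.1, 0.6
e = np.zeros(T)
for t in range(1, T):
    sigma2 = a0_true + a1_true * e[t - 1]**2
    e[t] = np.sqrt(sigma2) * rng.normal()

def neg_log_lik(a0, a1):
    sigma2 = a0 + a1 * np.r_[0.0, e[:-1]]**2      # conditional variance at each step
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + e**2 / sigma2)

grid = np.linspace(0.01, 1.0, 100)
a0_hat, a1_hat = min(((a0, a1) for a0 in grid for a1 in grid),
                     key=lambda p: neg_log_lik(*p))
print(f"estimated a0 = {a0_hat:.2f}, a1 = {a1_hat:.2f}")
```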
Abstract:
We study online approximations to Gaussian process models for spatially distributed systems. We apply our method to the prediction of wind fields over the ocean surface from scatterometer data. Our approach combines a sequential update of a Gaussian approximation to the posterior with a sparse representation that allows us to treat problems with a large number of observations.
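A hedged sketch of one simple flavour of the idea, not the authors' specific algorithm: a subset-of-regressors approximation in which the GP is represented by kernel basis functions centred on a small, fixed set of pseudo-inputs, with the Gaussian posterior over their weights updated one observation at a time. The kernel, pseudo-input locations, and target function are all made up.

```python
# Sequential (online) sparse GP-style regression via a fixed set of kernel basis centres.
import numpy as np

def rbf(a, b, lengthscale=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

rng = np.random.default_rng(0)
pseudo_inputs = np.linspace(0, 5, 10)          # sparse set of basis centres
noise_var = 0.1

# Gaussian prior over basis weights: zero mean, unit covariance (for simplicity)
precision = np.eye(len(pseudo_inputs))
b = np.zeros(len(pseudo_inputs))

for _ in range(200):                            # observations arrive one at a time
    x = rng.uniform(0, 5, 1)
    y = np.sin(2 * x) + np.sqrt(noise_var) * rng.normal()
    phi = rbf(x, pseudo_inputs)[0]              # features of the new point
    precision += np.outer(phi, phi) / noise_var # rank-1 posterior precision update
    b += phi * y[0] / noise_var

weights = np.linalg.solve(precision, b)
x_test = np.array([1.0, 2.5, 4.0])
print("predictive mean:", rbf(x_test, pseudo_inputs) @ weights)
print("true function  :", np.sin(2 * x_test))
```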