895 results for Risk Analysis, Security Models, Counter Measures, Threat Networks
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
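The contrast drawn above between single-value regression predictions and stochastic simulation can be illustrated with a minimal Monte Carlo sketch (all numbers hypothetical, and ignoring the spatial correlation a real sequential simulation would model): instead of reporting one estimate per grid cell, multiple realizations yield a per-cell probability of exceeding a decision threshold.

```python
import random

# Hypothetical kriging output per grid cell: (mean, std. dev.) of the
# contamination estimate in arbitrary units -- NOT real Chernobyl data.
cells = [(3.0, 0.8), (4.2, 1.1), (2.1, 0.5), (5.0, 1.4)]
THRESHOLD = 4.0          # hypothetical decision threshold
N_REALIZATIONS = 20000

random.seed(42)

def exceedance_probability(mean, sd):
    """Monte Carlo estimate of P(value > THRESHOLD) from many realizations."""
    hits = sum(1 for _ in range(N_REALIZATIONS)
               if random.gauss(mean, sd) > THRESHOLD)
    return hits / N_REALIZATIONS

# A regression map would stop at `mean`; the realizations additionally give
# the probability of exceeding the threshold, i.e. a risk map.
risk_map = [exceedance_probability(m, s) for m, s in cells]
```

A regression model accompanied only by an estimation error cannot directly answer "how likely is this cell to exceed the threshold?"; the realization-based estimate above does.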
Abstract:
Cyber security is one of the main topics discussed around the world today. The threat is real, and it is unlikely to diminish. People, businesses, governments, and even armed forces are networked in one way or another; thus, the cyber threat also faces military networks. At the same time, the concept of Network Centric Warfare sets high requirements for military tactical data communications and security. A challenging networking environment and cyber threats force us to consider new approaches to building security into military communication systems. The purpose of this thesis is to develop a cyber security architecture for military networks and to evaluate the designed architecture. The architecture is described in terms of technical functionality. As a new approach, the thesis introduces Cognitive Networks (CN), a theoretical concept for building more intelligent, dynamic, and more secure communication networks. Cognitive networks are capable of observing the networking environment, making decisions for optimal performance, and adapting their system parameters according to those decisions. As a result, the thesis presents a five-layer cyber security architecture consisting of security elements controlled by a cognitive process. The proposed architecture includes infrastructure, services, and application layers that are managed and controlled by the cognitive and management layers. The architecture defines the tasks of the security elements at a functional level without introducing any new protocols or algorithms. Two separate methods were used for evaluation. The first is based on the SABSA framework, which uses a layered approach to analyze the overall security of an organization. The second is a scenario-based method in which a risk severity level is calculated. The evaluation results show that the proposed architecture fulfills the security requirements, at least at a high level.
However, the evaluation of the proposed architecture proved to be very challenging, so the evaluation results must be considered critically. The thesis shows that cognitive networks are a promising approach and that they offer many benefits when designing a cyber security architecture for tactical military networks. However, many implementation problems remain, and several details must be considered and studied in future work.
Abstract:
A complex network is an abstract representation of an intricate system of interrelated elements where the patterns of connection hold significant meaning. One particular complex network is a social network whereby the vertices represent people and edges denote their daily interactions. Understanding social network dynamics can be vital to the mitigation of disease spread as these networks model the interactions, and thus avenues of spread, between individuals. To better understand complex networks, algorithms which generate graphs exhibiting observed properties of real-world networks, known as graph models, are often constructed. While various efforts to aid with the construction of graph models have been proposed using statistical and probabilistic methods, genetic programming (GP) has only recently been considered. However, determining that a graph model of a complex network accurately describes the target network(s) is not a trivial task as the graph models are often stochastic in nature and the notion of similarity is dependent upon the expected behavior of the network. This thesis examines a number of well-known network properties to determine which measures best allowed networks generated by different graph models, and thus the models themselves, to be distinguished. A proposed meta-analysis procedure was used to demonstrate how these network measures interact when used together as classifiers to determine network, and thus model, (dis)similarity. The analytical results form the basis of the fitness evaluation for a GP system used to automatically construct graph models for complex networks. The GP-based automatic inference system was used to reproduce existing, well-known graph models as well as a real-world network. Results indicated that the automatically inferred models exemplified functional similarity when compared to their respective target networks. This approach also showed promise when used to infer a model for a mammalian brain network.
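As a toy illustration of the kind of network measure such a meta-analysis might compare (this is not the thesis's GP system), the average clustering coefficient cleanly separates a triangle-rich graph from a hub-and-spoke one:

```python
def avg_clustering(adj):
    """Average local clustering coefficient of an undirected graph given
    as {vertex: set of neighbours} -- one classic measure for comparing
    generated networks against a target network."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue                      # coefficient undefined; count as 0
        # Count edges among v's neighbours (each pair once, via u < w).
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

# Triangle-rich graph (complete on 4 vertices) vs. a 4-vertex star.
complete4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
star4 = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
c_complete = avg_clustering(complete4)   # every neighbourhood fully linked
c_star = avg_clustering(star4)           # no triangles at all
```

Used alone, a single measure like this distinguishes only some model families; as the thesis argues, several measures combined as classifiers are needed for a robust notion of (dis)similarity.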
Abstract:
This thesis is divided into two parts. The first part presents and studies telegraph processes, Poisson processes with a telegraph compensator, and jump telegraph processes. The study in this first part includes the computation of the distribution of each process, their means and variances, and their moment generating functions, among other properties. Using these properties, the second part studies option pricing models based on jump telegraph processes. This part describes how to compute risk-neutral measures, establishes the no-arbitrage condition for this type of model, and finally computes the prices of European call and put options.
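For reference, the standard results for the symmetric Goldstein-Kac telegraph process, on which the first part of the thesis presumably builds (the notation below is assumed, as the thesis text itself is not shown):

```latex
% Goldstein--Kac telegraph process: the velocity switches sign at the
% events of a Poisson process N with rate \lambda, with speed c.
X(t) = \int_0^t V(s)\,ds, \qquad V(s) = V(0)\,(-1)^{N(s)}, \qquad V(0) = \pm c.
% Mean conditioned on the initial velocity, and variance for a
% symmetric initial velocity (P(V(0) = \pm c) = 1/2):
\mathbb{E}\!\left[X(t) \mid V(0) = +c\right]
    = \frac{c}{2\lambda}\bigl(1 - e^{-2\lambda t}\bigr),
\qquad
\operatorname{Var}[X(t)]
    = \frac{c^{2}}{2\lambda^{2}}\bigl(2\lambda t - 1 + e^{-2\lambda t}\bigr).
```

The jump telegraph processes studied in the thesis add jumps at the velocity switches, which is what makes the no-arbitrage pricing of the second part possible.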
Abstract:
Starting from the universal notion of the firm as a system that interacts with a given environment to achieve an objective, in a planned manner and with the aim of satisfying the demands of a market through economic activity, its viability, sustainability, and growth will depend, of course, on a set of strategies suited not only to those ends but also to confronting the variety of endogenous and exogenous agents that may affect the normal performance of its management. We are speaking of the importance of organizational resilience and of Psychological Capital. In a scenario as unpredictable as the world economy, where the constant is change in its behavior (some changes inherent to its dynamics and interdependence, natural to phenomena such as globalization, and others derived from disruptive events), it is now more necessary than ever to implement the model of the resilient enterprise: an entity capable of adapting to and recovering from a disturbance. At the same time, beyond its size, nature, or corporate purpose, it is essential to recognize that every organization is made up of people, which implies the importance of the human factor for its functioning, and hence the need to promote Psychological Capital and resilience at the organizational level through a corporate culture.
Abstract:
We focus on the comparison of three statistical models used to estimate the treatment effect in meta-analysis when individually pooled data are available. The models are two conventional models, namely a multi-level model and a model based on an approximate likelihood, and a newly developed model, the profile likelihood model, which might be viewed as an extension of the Mantel-Haenszel approach. To exemplify these methods, we use results from a meta-analysis of 22 trials to prevent respiratory tract infections. We show that with the multi-level approach, in the case of baseline heterogeneity, the number of clusters or components is considerably over-estimated. The approximate and profile likelihood methods showed nearly the same pattern for the treatment effect distribution. To provide more evidence, two simulation studies were conducted. The profile likelihood can be considered a clear alternative to the approximate likelihood model. In the case of strong baseline heterogeneity, the profile likelihood method shows superior behaviour compared with the multi-level model. Copyright (C) 2006 John Wiley & Sons, Ltd.
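Since the profile likelihood model is described as an extension of the Mantel-Haenszel approach, it may help to recall the classical Mantel-Haenszel pooled odds ratio itself. A sketch with hypothetical 2x2 trial tables (not the paper's 22-trial data, and not its new model):

```python
def mantel_haenszel_or(tables):
    """Classical Mantel-Haenszel pooled odds ratio over 2x2 tables,
    each given as (a, b, c, d) = (treated events, treated non-events,
    control events, control non-events)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two hypothetical trials of a preventive treatment.
trials = [(10, 90, 20, 80),    # per-trial OR = (10*80)/(90*20) ~ 0.44
          (5, 45, 12, 38)]     # per-trial OR = (5*38)/(45*12) ~ 0.35
pooled = mantel_haenszel_or(trials)
```

The pooled estimate weights each trial by size, so it falls between the per-trial odds ratios; the profile likelihood model extends this idea to a full likelihood treatment of heterogeneity.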
Abstract:
The UK has a target for an 80% reduction in CO2 emissions by 2050 from a 1990 base. Domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion due to their higher resolution. The weakness of existing bottom-up models at capturing individual green technology buying behaviour has been identified. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods to incorporate buying behaviour within a domestic energy forecast model. Among the three methods, agent-based models are found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested, which demonstrates the feasibility of an agent approach. This model shows that an agent-based approach is promising as a means to predict the effectiveness of various policy measures.
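A minimal sketch of the agent-based idea proposed above for buying behaviour (all names, thresholds, and the network are hypothetical; a real model would calibrate them from the large input datasets the paper mentions): households adopt a green technology once a threshold fraction of their neighbours has.

```python
class Household:
    """Toy agent: adopts a green technology once a threshold fraction of
    its neighbours has adopted (hypothetical decision rule)."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.adopted = False

def step(agents, neighbours):
    """Synchronous update: all decisions use the previous time step's state."""
    newly = [i for i, a in enumerate(agents)
             if not a.adopted
             and sum(agents[j].adopted for j in neighbours[i])
                 / len(neighbours[i]) >= a.threshold]
    for i in newly:
        agents[i].adopted = True
    return len(newly)

# Ten households on a ring; one early adopter seeds the diffusion.
n = 10
agents = [Household(threshold=0.5) for _ in range(n)]
agents[0].adopted = True
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

history = []
for _ in range(n):
    step(agents, neighbours)
    history.append(sum(a.adopted for a in agents))
```

Even this toy version exhibits the diffusion curve that top-down models cannot resolve: adoption spreads outward one neighbourhood at a time, so policy levers that change thresholds or seeding change the whole trajectory.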
Abstract:
The narrative of the United States is that of a "nation of immigrants", in which the language shift patterns of earlier ethnolinguistic groups have tended towards linguistic assimilation through English. In recent years, however, changes in the demographic landscape and language maintenance by non-English-speaking immigrants, particularly Hispanics, have been perceived as threats and have led to calls for an official English language policy. This thesis aims to contribute to the study of language policy making from a societal security perspective, as expressed in attitudes regarding language and identity originating in the daily interaction between language groups. The focus is on the role of language and American identity in relation to immigration. The study takes an interdisciplinary approach combining language policy studies, security theory, and critical discourse analysis. The material consists of articles collected from four newspapers, namely USA Today, The New York Times, Los Angeles Times, and San Francisco Chronicle, between April 2006 and December 2007. Two discourse types are evident from the analysis, namely Loyalty and Efficiency. The former is mainly marked by concerns of national identity and contains speech acts of security related to language shift, language choice, and English for unity. Immigrants are represented as dehumanised and harmful. Immigration is framed as sovereignty-related, as racial, and as war. The Efficiency discourse type is mainly instrumental and contains speech acts of security related to cost, provision of services, health and safety, and social mobility. Immigrants are further represented as a labour resource. These discourse types reflect how the construction of the linguistic 'we' is expected to be maintained. Loyalty is triggered by arguments that the collective identity is threatened, and is itself used in reproducing the collective 'we' through hegemonic expressions of monolingualism in the public and semi-public space.
The denigration of immigrants is used as a tool for enhancing societal security through solidarity, and as a possible justification for the denial of minority rights. Also, although language acquisition patterns still follow the historical trend of language shift, factors indicating cultural separateness, such as the appearance of speech communities or the use of minority languages in the public and semi-public space, have led to manifestations of intolerance. Examples of discrimination and prejudice towards minority groups indicate that the perceived worth of a shared language differs from the actual worth of dominant language acquisition for integration purposes. The study further indicates that the efficient working of the free market, which uses minority languages to sell services or buy labour, is perceived as conflicting with nation-building notions, since it may create separately functioning sub-communities with a new cultural capital recognised as legitimate competence. The discourse types mainly represent securitising moves constructing existential threats. The perception of threat and ideas of national belonging are primarily based on a zero-sum notion favouring monolingualism. Further, the identity of the immigrant individual is seen as dynamic and adaptable to assimilationist measures, whereas the identity of the state and its members is perceived as static. The study also shows that debates concerning language status are linked to extra-linguistic matters. To conclude, for proposed language intervention measures to be successful, policy makers in the US need to consider the relationship between four factors: societal security based on collective identity, individual/human security, human rights, and a changing linguistic demography.
Abstract:
The purpose of this paper is to present the application of a three-phase, time-domain harmonic propagation analysis tool that uses the Norton model to represent non-linear loads, making the simulated harmonic current flows more suitable for operation analysis and for analyzing the influence of mitigation elements. This software makes it possible to obtain results closer to the real distribution network, taking into account voltage and current unbalances and the application of mitigation elements for harmonic distortion. In this scenario, a real case study with network data and equipment connected to the network is presented, as well as the modeling of non-linear loads based on real data obtained from several PCCs (Points of Common Coupling) of interest to a distribution company.
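A Norton model represents a non-linear load, at each harmonic order, as an ideal current source in parallel with an admittance. A minimal single-bus sketch with hypothetical values (the paper's tool works on a full three-phase network; here the fundamental source is treated as shorted at harmonic frequencies, so the Norton current simply divides over the parallel admittances):

```python
# Norton equivalent of a non-linear load per harmonic order h:
# an ideal current source I_h [A] in parallel with admittance Y_load_h [S].
# All values below are hypothetical.
norton = {
    5: (complex(0.30, 0.10), complex(0.020, -0.050)),
    7: (complex(0.15, 0.05), complex(0.015, -0.030)),
}
# System (supply) admittance seen from the PCC at each harmonic.
y_sys = {5: complex(0.5, -2.0), 7: complex(0.5, -1.4)}

# With the fundamental source shorted at harmonic frequencies, the Norton
# current divides over the parallel admittances: V_h = I_h / (Y_sys + Y_load).
v_pcc = {h: i_h / (y_sys[h] + y_h) for h, (i_h, y_h) in norton.items()}
harmonic_voltage_magnitudes = {h: abs(v) for h, v in v_pcc.items()}
```

In the full tool, the same per-harmonic Norton injections drive a network solution per phase, which is what allows the unbalances and mitigation elements described above to be studied.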
Abstract:
Congenital heart disease (CHD) occurs in ~1% of newborns. CHD arises from many distinct etiologies, ranging from genetic or genomic variation to exposure to teratogens, which elicit diverse cell and molecular responses during cardiac development. To systematically explore the relationships between CHD risk factors and responses, we compiled and integrated comprehensive datasets from studies of CHD in humans and model organisms. We examined two alternative models of potential functional relationships between genes in these datasets: direct convergence, in which CHD risk factors significantly and directly impact the same genes and molecules, and functional convergence, in which risk factors significantly impact different molecules that participate in a discrete heart development network. We observed no evidence for direct convergence. In contrast, we show that CHD risk factors functionally converge in protein networks driving the development of specific anatomical structures (e.g., outflow tract, ventricular septum, and atrial septum) that are malformed in CHD. This integrative analysis of CHD risk factors and responses suggests a complex pattern of functional interactions between genomic variation and environmental exposures that modulate critical biological systems during heart development.
Abstract:
In Performance-Based Earthquake Engineering (PBEE), evaluating the seismic performance (or seismic risk) of a structure at a given site has gained major attention, especially in the past decade. One of the objectives of PBEE is to quantify the seismic reliability of a structure (under future random earthquakes) at a site. For that purpose, Probabilistic Seismic Demand Analysis (PSDA) is used as a tool to estimate the Mean Annual Frequency (MAF) of exceeding a specified value of a structural Engineering Demand Parameter (EDP). This dissertation focuses mainly on applying the average of a number of spectral acceleration ordinates over an interval of periods, Sa,avg(T1,…,Tn), as a scalar ground motion Intensity Measure (IM) when assessing the seismic performance of inelastic structures. Since the interval of periods over which Sa,avg is computed reflects the greater or lesser influence of higher vibration modes on the inelastic response, it is appropriate to speak of improved IMs. Results using these improved IMs are compared with conventional elastic-based scalar IMs (e.g., pseudo-spectral acceleration, Sa(T1), or peak ground acceleration, PGA) and with an advanced inelastic-based scalar IM (inelastic spectral displacement, Sdi). The advantages of applying improved IMs are: (i) "computability" of the seismic hazard according to traditional Probabilistic Seismic Hazard Analysis (PSHA), because ground motion prediction models are already available for Sa(Ti), and hence existing models can be employed to assess hazard in terms of Sa,avg; and (ii) "efficiency", i.e., smaller variability of structural response, which was minimized to identify the optimal range over which to compute Sa,avg. More work is needed to assess the desirable properties of "sufficiency" and "scaling robustness", which are not addressed in this dissertation.
However, for ordinary records (i.e., without pulse-like effects), using the improved IMs is found to be more accurate than using the elastic- and inelastic-based IMs. For structural demands dominated by the first mode of vibration, the advantage of Sa,avg over the conventionally used Sa(T1) and the advanced Sdi can be negligible. For structural demands with significant higher-mode contributions, an improved scalar IM that incorporates higher modes should be used. In order to fully understand the influence of the IM on the seismic risk, a simplified closed-form expression for the probability of exceeding a limit-state capacity was chosen as a reliability measure under seismic excitation and implemented for Reinforced Concrete (RC) frame structures. This closed-form expression is particularly useful for seismic assessment and design of structures, taking into account the uncertainty in the generic variables, structural "demand" and "capacity", as well as the uncertainty in seismic excitations. The framework employs nonlinear Incremental Dynamic Analysis (IDA) procedures to estimate the variability in the response of the structure (demand) to seismic excitation, conditioned on the IM. The estimate of seismic risk obtained from the simplified closed-form expression depends on the IM: the final seismic risk is not constant across IMs, although it remains of the same order of magnitude. Possible reasons include the assumed non-linear model or the insufficiency of the selected IM. Since it is impossible to state what the "real" probability of exceeding a limit state is by looking at the total risk alone, the only recourse is to optimize the desirable properties of an IM.
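Sa,avg(T1,…,Tn) is conventionally computed as the geometric mean of the spectral ordinates over the chosen period interval. A sketch with a hypothetical response spectrum (the period range below is illustrative, not the dissertation's optimized one):

```python
import math

def sa_avg(spectrum, periods):
    """Geometric mean of spectral ordinates Sa(T_i) -- the scalar IM
    Sa,avg(T1, ..., Tn)."""
    values = [spectrum[t] for t in periods]
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical response spectrum for one record: period [s] -> Sa [g].
spectrum = {0.5: 0.80, 1.0: 0.50, 1.5: 0.30, 2.0: 0.20}

# Period interval chosen to span higher-mode and period-lengthening
# effects (an illustrative choice, not the dissertation's calibrated range).
im = sa_avg(spectrum, [0.5, 1.0, 1.5, 2.0])
```

Because Sa,avg is a product of ordinates for which ground motion prediction models already exist, hazard in terms of Sa,avg remains "computable" via standard PSHA, which is advantage (i) above.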
Abstract:
Flood disasters are a major cause of fatalities and economic losses, and several studies indicate that global flood risk is currently increasing. In order to reduce and mitigate the impact of river flood disasters, the current trend is to integrate existing structural defences with non-structural measures. This calls for a wider application of advanced hydraulic models for flood hazard and risk mapping, engineering design, and flood forecasting systems. Within this framework, two hydraulic models for large-scale analysis of flood events have been developed. The two models, named CA2D and IFD-GGA, adopt an integrated approach based on the diffusive shallow water equations and a simplified finite volume scheme. The models are also designed for massive code parallelization, which is of key importance in reducing run times in large-scale, high-detail applications. The two models were first applied to several numerical test cases to assess the reliability and accuracy of different model versions. The most effective versions were then applied to different real flood events and flood scenarios. The IFD-GGA model showed serious problems that prevented further applications. The CA2D model, on the contrary, proved to be fast and robust, able to reproduce 1D and 2D flow processes in terms of water depth and velocity. In most applications the accuracy of the model results was good and adequate for large-scale analysis. Where complex flow processes occurred, local errors were observed due to the model approximations; however, they did not compromise the correct representation of the overall flow processes. In conclusion, the CA2D model can be a valuable tool for the simulation of a wide range of flood event types, including lowland and flash flood events.
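In the spirit of the models described (not their actual code), a single explicit finite-volume update of the 1D diffusive, zero-inertia shallow water equations with Manning friction can be sketched as below. The grid, time step, and roughness are illustrative; explicit schemes of this kind need a stability-limited time step, which is one reason massive parallelization matters at large scale.

```python
import math

def diffusive_step(h, z, dx, dt, n_mann):
    """One explicit finite-volume update of the 1D diffusive (zero-inertia)
    shallow water equations with Manning friction; closed boundaries."""
    eta = [zi + hi for zi, hi in zip(z, h)]            # water surface elevation
    q = [0.0] * (len(h) - 1)                           # unit discharge at faces
    for i in range(len(h) - 1):
        slope = (eta[i] - eta[i + 1]) / dx             # driving surface slope
        # Effective flow depth at the face (simple wet/dry treatment).
        hf = max(0.0, max(eta[i], eta[i + 1]) - max(z[i], z[i + 1]))
        if hf > 0.0 and slope != 0.0:
            q[i] = (hf ** (5.0 / 3.0) / n_mann) * math.sqrt(abs(slope))
            q[i] = math.copysign(q[i], slope)          # flow down the gradient
    # Mass balance: each depth changes by the net face flux, so total
    # volume is conserved exactly.
    return [h[i]
            + dt / dx * ((q[i - 1] if i > 0 else 0.0)
                         - (q[i] if i < len(q) else 0.0))
            for i in range(len(h))]

# Flat bed, a mound of water in the middle cell spreading symmetrically.
h0 = [0.0, 0.0, 1.0, 0.0, 0.0]
h1 = diffusive_step(h0, z=[0.0] * 5, dx=10.0, dt=0.05, n_mann=0.03)
```

Dropping the inertial terms is what makes this update so cheap per cell; velocities follow from the Manning relation rather than a full momentum solve, which is adequate for many lowland flood events.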
Abstract:
In recent years, radar sensor networks for localization and tracking in indoor environments have generated more and more interest, especially for anti-intrusion security systems. These networks often use Ultra Wide Band (UWB) technology, which consists of transmitting very short (a few nanoseconds) impulse signals. This approach guarantees high resolution and accuracy, as well as other advantages such as low cost, low power consumption, and robustness to narrow-band interference (jamming). In this thesis the overall data processing chain (implemented in the MATLAB environment) is discussed, starting from experimental measurements from the sensor devices and ending with the 2D visualization of target movements over time, focusing mainly on detection and localization algorithms. Moreover, two different scenarios and both single- and multiple-target tracking are analyzed.
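A minimal sketch of the localization step for such a network (time-of-arrival ranges to known sensor positions; this is a generic linearized multilateration, not necessarily the thesis's algorithm, and the positions below are hypothetical):

```python
import math

def locate(anchors, ranges):
    """2D target position from ranges to three known anchors (e.g. UWB
    time-of-arrival ranges), by linearizing the circle equations about
    the first anchor; assumes non-collinear anchors and noise-free ranges."""
    (x0, y0), r0 = anchors[0], ranges[0]
    rows = []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        # Subtracting anchor 0's circle equation removes the quadratic
        # terms, leaving a linear equation a*x + b*y = c.
        a = 2.0 * (xi - x0)
        b = 2.0 * (yi - y0)
        c = r0 ** 2 - ri ** 2 + xi ** 2 + yi ** 2 - x0 ** 2 - y0 ** 2
        rows.append((a, b, c))
    (a1, b1, c1), (a2, b2, c2) = rows
    det = a1 * b2 - a2 * b1                  # Cramer's rule for the 2x2 system
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three hypothetical sensor positions and noise-free ranges to a target.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [math.dist(true_pos, a) for a in anchors]
est = locate(anchors, ranges)
```

With noisy ranges and more than three sensors, the same linearized system is typically solved by least squares, and the per-frame estimates then feed the tracking stage.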