875 results for Linear Multi-step Formulae
Abstract:
The research presented in this thesis addresses inherent problems in signature-based intrusion detection systems (IDSs) operating in heterogeneous environments. The research proposes a solution to the difficulties associated with multi-step attack scenario specification and detection for such environments. The research has focused on two distinct problems: the representation of events derived from heterogeneous sources, and multi-step attack specification and detection. The first part of the research investigates the application of an event abstraction model to event logs collected from a heterogeneous environment. The event abstraction model comprises a hierarchy of events derived from different log sources such as system audit data, application logs, captured network traffic, and intrusion detection system alerts. Unlike existing event abstraction models, where low-level information may be discarded during the abstraction process, the event abstraction model presented in this work preserves all low-level information as well as providing high-level information in the form of abstract events. The event abstraction model presented in this work was designed independently of any particular IDS and thus may be used by any IDS, intrusion forensic tool, or monitoring tool. The second part of the research investigates the use of unification for multi-step attack scenario specification and detection. Multi-step attack scenarios are hard to specify and detect as they often involve the correlation of events from multiple sources which may be affected by time uncertainty. The unification algorithm provides a simple and straightforward scenario matching mechanism by using variable instantiation, where variables represent events as defined in the event abstraction model. The third part of the research looks into a solution for time uncertainty. Clock synchronisation is crucial for detecting multi-step attack scenarios which involve logs from multiple hosts, yet issues involving time uncertainty have been largely neglected by intrusion detection research. The system presented in this research introduces two techniques for addressing time uncertainty: clock skew compensation and clock drift modelling using linear regression. An off-line IDS prototype for detecting multi-step attacks has been implemented. The prototype comprises two modules: an implementation of the abstract event system architecture (AESA) and the scenario detection module. The scenario detection module implements our signature language, developed based on the Python programming language syntax, and the unification-based scenario detection engine. The prototype has been evaluated using a publicly available dataset of real attack traffic and event logs and a synthetic dataset. A distinct feature of the public dataset is that it contains multi-step attacks which involve multiple hosts with clock skew and clock drift. These features allow us to demonstrate the application and the advantages of the contributions of this research. All instances of multi-step attacks in the dataset have been correctly identified even though significant clock skew and drift exist in the dataset. Future work identified by this research would be to develop a refined unification algorithm suitable for processing streams of events, to enable on-line detection. In terms of time uncertainty, identified future work would be to develop mechanisms which allow automatic clock skew and clock drift identification and correction.
The immediate application of the research presented in this thesis is the framework of an off-line IDS that processes events from heterogeneous sources using abstraction and that can detect multi-step attack scenarios which may involve time uncertainty.
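As a rough illustration of the clock drift modelling described in this abstract: offsets between a host clock and a reference clock can be regressed against reference time, giving a skew (intercept) and drift (slope) that map local timestamps back onto the reference timeline. The sketch below is a minimal assumption-laden example in Python (NumPy); the data, function names, and the linear clock model are illustrative, not the thesis implementation.

```python
import numpy as np

def fit_clock_model(ref_times, local_times):
    """Fit local_time = offset + slope * ref_time by least squares.

    The offset captures clock skew; the slope's deviation from 1 captures drift.
    """
    slope, offset = np.polyfit(ref_times, local_times, deg=1)
    return offset, slope

def to_reference_time(local_ts, offset, slope):
    """Map a local timestamp back onto the reference timeline."""
    return (local_ts - offset) / slope

# Illustrative data: a host clock that starts 2.5 s ahead (skew)
# and gains 50 ppm per second of reference time (drift), plus jitter.
ref = np.linspace(0, 3600, 50)
local = 2.5 + (1 + 50e-6) * ref + np.random.normal(0, 0.01, ref.size)

offset, slope = fit_clock_model(ref, local)
corrected = to_reference_time(local, offset, slope)
print("max residual after correction: %.4f s" % np.abs(corrected - ref).max())
```

With timestamps corrected this way, events logged on different hosts can be placed on a common timeline before scenario matching.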
Abstract:
Unmanned Aerial Vehicles (UAVs) are emerging as an ideal platform for a wide range of civil applications such as disaster monitoring, atmospheric observation and outback delivery. However, the operation of UAVs is currently restricted to specially segregated regions of airspace outside of the National Airspace System (NAS). Mission Flight Planning (MFP) is an integral part of UAV operation that addresses some of the requirements (such as safety and the rules of the air) of integrating UAVs into the NAS. Automated MFP is a key enabler for a number of UAV operating scenarios as it aids in increasing the level of onboard autonomy. For example, onboard MFP is required to ensure continued conformance with the NAS integration requirements when there is an outage in the communications link. MFP is a motion planning task concerned with finding a path between a designated start waypoint and a goal waypoint. This path is described by a sequence of 4-Dimensional (4D) waypoints (three spatial dimensions and one time dimension) or, equivalently, by a sequence of trajectory segments (or tracks). It is necessary to consider the time dimension because the UAV operates in a dynamic environment. Existing methods for generic motion planning, UAV motion planning and general vehicle motion planning cannot adequately address the requirements of MFP. The flight plan needs to optimise for multiple decision objectives, including mission safety objectives, the rules of the air and mission efficiency objectives. Online (in-flight) replanning capability is needed as the UAV operates in a large, dynamic and uncertain outdoor environment. This thesis derives a multi-objective 4D search algorithm entitled Multi-Step A* (MSA*), based on the seminal A* search algorithm. MSA* is proven to find the optimal (least cost) path given a variable successor operator (which enables arbitrary track angle and track velocity resolution). Furthermore, it is shown to be of comparable complexity to multi-objective, vector neighbourhood-based A* (Vector A*, an extension of A*). A variable successor operator enables the imposition of a multi-resolution lattice structure on the search space (which results in fewer search nodes). Unlike cell decomposition based methods, soundness is guaranteed with multi-resolution MSA*. MSA* is demonstrated through Monte Carlo simulations to be computationally efficient. It is shown that multi-resolution, lattice-based MSA* finds paths of equivalent cost (less than 0.5% difference) to Vector A* (the benchmark) in a third of the computation time (on average). This is the first contribution of the research. The second contribution is the discovery of the additive consistency property for planning with multiple decision objectives. Additive consistency ensures that the planner is not biased (which would result in a suboptimal path) by ensuring that the cost of traversing a track using one step equals that of traversing the same track using multiple steps. MSA* mitigates uncertainty through online replanning, Multi-Criteria Decision Making (MCDM) and tolerance. Each trajectory segment is modelled with a cell sequence that completely encloses it. The tolerance, measured as the minimum distance between the track and cell boundaries, is the third major contribution. Even though MSA* is demonstrated for UAV MFP, it is extensible to other 4D vehicle motion planning applications. Finally, the research proposes a self-scheduling replanning architecture for MFP.
This architecture replicates the decision strategies of human experts to meet the time constraints of online replanning. Based on a feedback loop, the proposed architecture switches between fast, near-optimal planning and optimal planning to minimise the need for hold manoeuvres. The derived MFP framework is original and shown, through extensive verification and validation, to satisfy the requirements of UAV MFP. As MFP is an enabling factor for operation of UAVs in the NAS, the presented work is both original and significant.
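MSA* itself is not reproduced in this abstract, but the variable successor operator idea can be conveyed with a plain A* over a 2D lattice in which each expansion may step at several resolutions, so coarse steps are taken where space is open. The sketch below is an illustrative assumption, not the thesis algorithm: it is single-objective, 2D rather than 4D, and the per-step obstacle check is only a crude analogue of the cell sequences that enclose each trajectory segment in the thesis.

```python
import heapq
import itertools
import math

def move_clear(x, y, dx, dy, s, blocked):
    """Check every lattice point along a step of length s (a crude analogue
    of enclosing each segment in a cell sequence, so coarse steps cannot
    jump over obstacles)."""
    return all((x + dx * i // s, y + dy * i // s) not in blocked
               for i in range(1, s + 1))

def astar_variable_steps(start, goal, blocked, step_sizes=(1, 4)):
    """A* on a 2D lattice whose successor operator offers several step
    lengths; move cost is Euclidean length, so the straight-line heuristic
    remains admissible."""
    h = lambda p: math.dist(p, goal)
    tie = itertools.count()                      # heap tie-breaker
    open_set = [(h(start), next(tie), start, 0.0, None)]
    came, best_g = {}, {start: 0.0}
    while open_set:
        _, _, node, gc, parent = heapq.heappop(open_set)
        if node in came:
            continue
        came[node] = parent
        if node == goal:                         # rebuild path from parents
            path = [node]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        x, y = node
        for s in step_sizes:
            for dx, dy in ((s, 0), (-s, 0), (0, s), (0, -s),
                           (s, s), (-s, -s), (s, -s), (-s, s)):
                if not move_clear(x, y, dx, dy, s, blocked):
                    continue
                nxt = (x + dx, y + dy)
                ng = gc + math.hypot(dx, dy)
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), nxt, ng, node))
    return None

print(astar_variable_steps((0, 0), (12, 8), blocked={(6, 4), (6, 5), (6, 6)}))
```

The mixed step sizes are what impose the informal multi-resolution structure: long legs in open space expand far fewer nodes than a unit-step search would.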
Abstract:
Reliability analysis is crucial to reducing unexpected downtime, severe failures and the ever-tightening maintenance budgets of engineering assets. Hazard-based reliability methods are of particular interest as the hazard reflects the current health status of engineering assets and their imminent failure risks. Most existing hazard models were constructed using statistical methods. However, these methods were established largely on two assumptions: one is that the baseline failure distribution is accurate for the population concerned, and the other concerns the assumed effects of covariates on hazards. These two assumptions may be difficult to satisfy and therefore compromise the effectiveness of hazard models in application. To address this issue, a non-linear hazard modelling approach is developed in this research using neural networks (NNs), resulting in neural network hazard models (NNHMs), to deal with the limitations arising from the two assumptions of statistical models. With the success of failure prevention efforts, less failure history becomes available for reliability analysis. Involving condition data, or covariates, is a natural solution to this challenge. A critical issue in involving covariates in reliability analysis is that complete and consistent covariate data are often unavailable in reality, due to inconsistent measuring frequencies of multiple covariates, sensor failure, and sparse intrusive measurements. This problem has not been studied adequately in current reliability applications. This research therefore investigates the incomplete covariates problem in reliability analysis. Typical approaches to handling incomplete covariates have been studied to investigate their performance and their effects on the reliability analysis results. Since these existing approaches can underestimate the variance in regressions and introduce extra uncertainty into reliability analysis, the developed NNHMs are extended to include the handling of incomplete covariates as an integral part. The extended versions of the NNHMs have been validated using simulated bearing data and real data from a liquefied natural gas pump. The results demonstrate that the new approach outperforms the typical incomplete-covariate handling approaches. Another problem in reliability analysis is that future covariates of engineering assets are generally unavailable. In existing practice for multi-step reliability analysis, historical covariates are used to estimate future covariates. Covariates of engineering assets, however, are often subject to substantial fluctuation due to the influence of both engineering degradation and changes in environmental settings. The commonly used covariate extrapolation methods are thus unsuitable because of error accumulation and uncertainty propagation. To overcome this difficulty, instead of directly extrapolating covariate values, projection of covariate states is conducted in this research. The estimated covariate states and the unknown covariate values at future running steps of assets constitute an incomplete covariate set, which is then analysed by the extended NNHMs. A new assessment function is also proposed to evaluate the risks of underestimated and overestimated reliability analysis results. A case study using field data from a paper and pulp mill has been conducted, and it demonstrates that this new multi-step reliability analysis procedure is able to generate more accurate analysis results.
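The contrast this abstract draws can be sketched roughly as follows: a Cox-style statistical model fixes the functional form h(t|x) = h0(t) * exp(b'x), whereas an NNHM-like model lets a small network learn the hazard as an unconstrained non-linear function of age and covariates. The code below (Python with scikit-learn) is a hypothetical stand-in trained on simulated data; the network architecture, the simulated ground truth, and the Cox parameters are all invented for illustration and are not the NNHMs of the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Simulated training set: hazard depends non-linearly on age t and a
# vibration covariate x (purely illustrative ground truth).
t = rng.uniform(0, 10, 2000)
x = rng.uniform(0, 1, 2000)
true_hazard = 0.05 * np.exp(0.3 * t) * (1 + 4 * x**2)

# NNHM-style model: hazard as an unconstrained function of (t, x);
# fitting the log-hazard keeps the predicted hazard positive.
nn = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
nn.fit(np.column_stack([t, x]), np.log(true_hazard))

# Cox-style proportional-hazards form for comparison: h(t|x) = h0(t) * exp(b*x)
def cox_hazard(t, x, b=1.2, h0=lambda t: 0.05 * np.exp(0.3 * t)):
    return h0(t) * np.exp(b * x)

query = np.array([[5.0, 0.8]])
print("NN hazard:  %.4f" % np.exp(nn.predict(query)[0]))
print("Cox hazard: %.4f" % cox_hazard(5.0, 0.8))
```

The point of the NN form is that neither the baseline distribution nor the covariate effect needs to be assumed in advance, which is exactly the pair of assumptions the thesis seeks to avoid.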
Abstract:
The separation of independent sources from mixed observed data is a fundamental and challenging problem. In many practical situations, observations may be modelled as linear mixtures of a number of source signals, i.e. a linear multi-input multi-output system. A typical example is speech recordings made in an acoustic environment in the presence of background noise and/or competing speakers. Other examples include EEG signals, passive sonar applications and cross-talk in data communications. In this paper, we propose iterative algorithms to solve the n × n linear time-invariant system under two different constraints. Some existing solutions for 2 × 2 systems are reviewed and compared.
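The paper's own iterative algorithms are not given in the abstract, but the problem setting — recovering independent sources from an n × n linear instantaneous mixture — can be illustrated with scikit-learn's FastICA, a standard iterative solver for this model. The mixing matrix and source signals below are invented for the demo and are not from the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 5000)

# Two independent sources: a sinusoid and a sawtooth-like signal.
s1 = np.sin(2 * np.pi * 3 * t)
s2 = 2 * (t * 5 % 1) - 1
S = np.column_stack([s1, s2])

# Observed through an unknown 2x2 time-invariant mixing matrix A.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = S @ A.T

# Iteratively estimate an unmixing matrix (sources are recovered only
# up to scale and permutation, as in any blind separation method).
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)

# Correlate each recovered source with the originals to check separation.
for i in range(2):
    c = max(abs(np.corrcoef(S_hat[:, i], S[:, j])[0, 1]) for j in range(2))
    print("recovered source %d best |corr| with a true source: %.3f" % (i, c))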
Abstract:
Genetic algorithms (GAs) have been used successfully to tackle non-linear multi-objective optimization (MOO) problems, but their success is governed by key parameters which have been shown to be sensitive to the nature of the particular problem, including the numbers of objectives and variables and the size and topology of the search space, making it hard to determine the best settings in advance. This work describes a real-encoded multi-objective optimizing GA (MOGA) that uses self-adaptive mutation and crossover, and which is applied to the optimization of an airfoil for minimization of the drag coefficient and maximization of the lift coefficient. The MOGA is integrated with a Free-Form Deformation tool to manage the section geometry, and with XFoil, which evaluates each airfoil in terms of its aerodynamic efficiency. The performance is compared with those of two heuristic MOO algorithms, Multi-Objective Tabu Search (MOTS) and NSGA-II, showing that this GA achieves better convergence.
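The airfoil pipeline (Free-Form Deformation plus XFoil) cannot be reproduced here, but the self-adaptive mutation idea can be sketched on a toy two-objective problem: each individual carries its own mutation step size, which is itself mutated with a log-normal rule borrowed from evolution strategies, so useful step sizes survive along with good solutions. Everything below — the objectives, selection scheme, and parameters — is an illustrative assumption, not the paper's MOGA.

```python
import numpy as np

rng = np.random.default_rng(2)

def objectives(x):
    """Toy stand-ins for two conflicting objectives (both minimised)."""
    return np.array([np.sum((x - 1.0) ** 2), np.sum((x + 1.0) ** 2)])

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

dim, pop_size, gens = 4, 40, 60
tau = 1.0 / np.sqrt(2 * dim)          # classic evolution-strategy learning rate
# Each individual = (decision vector x, its own mutation step sigma).
pop = [(rng.uniform(-2, 2, dim), 0.3) for _ in range(pop_size)]

for _ in range(gens):
    children = []
    for x, sigma in pop:
        # Self-adaptation: mutate the step size first, then the genes with it.
        child_sigma = sigma * np.exp(tau * rng.normal())
        child_x = x + child_sigma * rng.normal(size=dim)
        children.append((child_x, child_sigma))
    merged = pop + children
    scores = [objectives(x) for x, _ in merged]
    # Keep non-dominated individuals, topped up at random if too few.
    front = [ind for ind, s in zip(merged, scores)
             if not any(dominates(o, s) for o in scores if o is not s)]
    while len(front) < pop_size:
        front.append(merged[rng.integers(len(merged))])
    pop = front[:pop_size]

sigmas = [s for _, s in pop]
print("final step sizes (min/median/max): %.3f %.3f %.3f" %
      (min(sigmas), float(np.median(sigmas)), max(sigmas)))
```

Because the step sizes are carried inside the genome, no external schedule for the mutation rate has to be tuned in advance, which is the sensitivity problem the abstract describes.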
Abstract:
A low-power, highly linear, multi-standard, active-RC filter with an accurate and novel tuning architecture is presented. It supports IEEE 802.11a/b/g (9.5 MHz) and DVB-H (3 MHz, 4 MHz) applications. The filter exploits digitally controlled polysilicon resistor banks and a phase-locked-loop-type automatic tuning system. The novel automatic frequency calibration scheme provides better than 4% corner frequency accuracy, and it can be powered down after calibration to save power and avoid digital signal interference. The filter achieves an OIP3 of 26 dBm, and the measured group delay variation of the receiver filter is 50 ns (WLAN mode). It draws 3.4 mA in RX mode and 2.3 mA (for one path only) in TX mode from a 2.85 V supply; the calibration circuit consumes 2 mA. The circuit has been fabricated in a 0.35 μm 47 GHz SiGe BiCMOS technology; the receiver and transmitter filters occupy 0.21 mm² and 0.11 mm² (calibration circuit excluded), respectively.
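The filter itself is hardware, but the role of a digitally controlled resistor bank in corner-frequency tuning can be shown numerically: for an active-RC pole the corner frequency is f_c = 1/(2πRC), so calibration amounts to choosing the bank code whose effective resistance best hits the target. The component values, bank granularity, and code mapping below are invented for illustration and bear no relation to the actual chip.

```python
import math

C = 5e-12                       # integrator capacitor (illustrative), 5 pF
R_UNIT = 250.0                  # unit resistor of the bank (illustrative), 250 ohm

def corner_freq(code):
    """Corner frequency of an active-RC pole with R = (code + 1) * R_UNIT."""
    r = (code + 1) * R_UNIT
    return 1.0 / (2 * math.pi * r * C)

def calibrate(target_hz, n_bits=6):
    """Pick the bank code (0 .. 2^n_bits - 1) minimising corner-frequency error."""
    return min(range(2 ** n_bits), key=lambda code: abs(corner_freq(code) - target_hz))

for target in (9.5e6, 4e6, 3e6):        # WLAN and DVB-H corner frequencies
    code = calibrate(target)
    err = 100 * (corner_freq(code) - target) / target
    print("target %.1f MHz -> code %2d, f_c %.2f MHz (%+.1f%%)" %
          (target / 1e6, code, corner_freq(code) / 1e6, err))
```

The residual error is set by the bank's resistor granularity, which is why the achievable corner-frequency accuracy is quoted as a tolerance rather than an exact value.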
Abstract:
This paper uses dynamic impulse response analysis to investigate the interrelationships among stock price volatility, trading volume, and the leverage effect. Dynamic impulse response analysis is a technique for analyzing the multi-step-ahead characteristics of a nonparametric estimate of the one-step conditional density of a strictly stationary process. The technique generalizes Sims-style impulse response analysis for linear models to nonlinear processes. In this paper, we refine the technique and apply it to a long panel of daily observations on the price and trading volume of four stocks actively traded on the NYSE: Boeing, Coca-Cola, IBM, and MMM.
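Setting the paper's nonparametric machinery aside, the core idea of a nonlinear impulse response — compare the multi-step-ahead conditional mean of a shocked path with that of a baseline path, averaging over simulated innovations — can be sketched for a simple nonlinear autoregression. The model, shock size, and settings below are invented for the demo and are not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

def step(y, eps):
    """One step of an illustrative nonlinear AR(1) with a saturating map."""
    return 0.9 * np.tanh(y) + eps

def impulse_response(y0, shock, horizon=20, n_paths=20000):
    """Average difference between shocked and baseline paths driven by the
    same innovations: a Monte Carlo analogue of a Sims-style IRF."""
    base = np.full(n_paths, float(y0))
    hit = np.full(n_paths, float(y0) + shock)
    irf = []
    for _ in range(horizon):
        eps = rng.normal(0, 0.5, n_paths)   # common random shocks
        base = step(base, eps)
        hit = step(hit, eps)
        irf.append((hit - base).mean())
    return np.array(irf)

# Nonlinearity shows up as state dependence: the same shock decays
# differently depending on the conditioning state.
print(np.round(impulse_response(0.0, 1.0)[:6], 3))
print(np.round(impulse_response(2.0, 1.0)[:6], 3))
```

For a linear model the two printed profiles would be identical; their divergence is precisely what a nonlinear impulse response analysis is designed to expose.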
Abstract:
Although the use of ball milling to induce reactions between solids (mechanochemical synthesis) can provide lower-waste routes to chemical products by avoiding solvent during the reaction, there are further potential advantages in using one-pot multi-step syntheses to avoid the use of bulk solvents for the purification of intermediates. We report here two-step syntheses involving formation of salen-type ligands from diamines and hydroxyaldehydes followed directly by reactions with metal salts to provide the corresponding metal complexes. Five salen-type ligands (2,2'-[1,2-ethanediylbis[(E)-nitrilomethylidyne]]bisphenol, 'salenH2', 1; 2,2'-[(+/-)-1,2-cyclohexanediylbis[(E)-nitrilomethylidyne]]bisphenol, 2; 2,2'-[1,2-phenylenebis(nitrilomethylidyne)]bisphenol, 'salphenH2', 3; 2-[[(2-aminophenyl)imino]methyl]phenol, 4; 2,2'-[(+/-)-1,2-cyclohexanediylbis[(E)-nitrilomethylidyne]]bis[4,6-bis(1,1-dimethylethyl)]phenol, 'Jacobsen ligand', 5) were found to form readily in a shaker-type ball mill at 0.5 to 3 g scale from their corresponding diamine and aldehyde precursors. Although in some cases both starting materials were liquids, ball milling was still necessary to drive those reactions to completion because precipitation of the product and/or intermediates rapidly gave thick pastes which could not be stirred conventionally. The only ligand which required the addition of solvent was the Jacobsen ligand 5, which required 1.75 mol equivalents of methanol to go to completion. Ligands 1-5 were thus obtained directly in 30-60 minutes in their hydrated forms, due to the presence of the water by-product, as free-flowing yellow powders which could be dried by heating to give analytically pure products. The one-armed salphen ligand 4 could also be obtained selectively by changing the reaction stoichiometry to 1:1. SalenH2 1 was explored for the one-pot two-step synthesis of metal complexes. In particular, after in situ formation of the ligand by ball milling, metal salts (ZnO, Ni(OAc)2·4H2O or Cu(OAc)2·H2O) were added directly to the jar and milling continued for a further 30 minutes. Small amounts of methanol (0.4-1.1 mol equivalents) were needed for these reactions to run to completion. The corresponding metal complexes [M(salen)] (M = Zn, 6; Ni, 7; or Cu, 8) were thus obtained quantitatively after 30 minutes in hydrated form, and could be heated briefly to give analytically pure dehydrated products. The all-at-once 'tandem' synthesis of [Zn(salen)] 6 was also explored by milling ZnO, ethylenediamine and salicylaldehyde together in the appropriate mole ratio for 60 minutes. This approach also gave the target complex selectively with no solvent needing to be added. Overall, these syntheses were found to be highly efficient in terms of time and in the avoidance of bulk solvent both during the reaction and for the isolation of intermediates. The work demonstrates the applicability of mechanochemical synthesis to one-pot multi-step strategies.
Abstract:
Methods of improving the coverage of Box–Jenkins prediction intervals for linear autoregressive models are explored. These methods use bootstrap techniques to allow for parameter estimation uncertainty and to reduce the small-sample bias in the estimator of the models' parameters. We also consider a method of bias-correcting the non-linear functions of the parameter estimates that are used to generate conditional multi-step predictions.
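A minimal sketch of the bootstrap scheme described, for an AR(1): re-estimate the model on residual-resampled series to capture parameter uncertainty, then simulate multi-step future paths and read off percentile intervals. The bias-correction step for the non-linear functions of the estimates is omitted, and the data and settings are illustrative, so this is the general recipe rather than the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_ar1(y):
    """OLS estimates of (c, phi) for y_t = c + phi * y_{t-1} + e_t."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    resid = y[1:] - c - phi * y[:-1]
    return c, phi, resid

# Illustrative data from a known AR(1).
y = np.zeros(120)
for t in range(1, 120):
    y[t] = 0.5 + 0.7 * y[t - 1] + rng.normal(0, 1.0)

c_hat, phi_hat, resid = fit_ar1(y)
H, B = 8, 2000
paths = np.empty((B, H))
for b in range(B):
    # Rebuild a bootstrap series from resampled residuals, refit the model
    # (parameter uncertainty), then simulate H steps conditional on y[-1].
    yb = np.empty_like(y)
    yb[0] = y[0]
    e = rng.choice(resid, size=len(y) - 1)
    for t in range(1, len(y)):
        yb[t] = c_hat + phi_hat * yb[t - 1] + e[t - 1]
    cb, pb, _ = fit_ar1(yb)
    yf = y[-1]
    for h in range(H):
        yf = cb + pb * yf + rng.choice(resid)
        paths[b, h] = yf

lo, hi = np.percentile(paths, [2.5, 97.5], axis=0)
print("95%% interval at h=1: [%.2f, %.2f]" % (lo[0], hi[0]))
print("95%% interval at h=8: [%.2f, %.2f]" % (lo[7], hi[7]))
```

Because each simulated path uses its own re-estimated parameters, the resulting intervals widen to reflect estimation uncertainty, which is the coverage improvement over plug-in Box–Jenkins intervals.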
Abstract:
We present an efficient numerical methodology for the 3D computation of incompressible multi-phase flows described by conservative phase-field models. We focus here on the case of density-matched fluids with different viscosity (Model H). The numerical method employs adaptive mesh refinement (AMR) in concert with an efficient semi-implicit time discretization strategy and a linear, multi-level multigrid solver to relax high-order stability constraints and to capture the flow's disparate scales at optimal cost. Only five linear solves are needed per time step. Moreover, all of the adaptive methodology is constructed from scratch to allow a systematic investigation of the key aspects of AMR in a conservative phase-field setting. We validate the method and demonstrate its capabilities and efficacy with important examples of drop deformation, Kelvin–Helmholtz instability, and flow-induced drop coalescence.
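The full AMR phase-field solver is far beyond an abstract, but the multigrid ingredient — smooth on the fine grid, correct on a coarse grid, prolong back — can be shown in a minimal two-grid V-cycle for the 1D Poisson problem. This is a generic textbook sketch (weighted Jacobi smoothing, injection restriction, exact coarse solve), not the paper's multi-level solver.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    """One V-cycle on two levels: pre-smooth, coarse correction, post-smooth."""
    u = jacobi(u, f, h)
    r = residual(u, f, h)
    rc = r[::2].copy()                      # injection onto the coarse grid
    n_c = rc.size
    # Solve the coarse problem exactly with the 1D Laplacian matrix (spacing 2h).
    A = (np.diag(2 * np.ones(n_c - 2)) - np.diag(np.ones(n_c - 3), 1)
         - np.diag(np.ones(n_c - 3), -1)) / (2 * h) ** 2
    ec = np.zeros(n_c)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    # Linear interpolation back to the fine grid (prolongation).
    u += np.interp(np.arange(u.size), np.arange(u.size)[::2], ec)
    return jacobi(u, f, h)

n = 65
x = np.linspace(0, 1, n)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)          # exact solution: sin(pi x)
u = np.zeros(n)
for cycle in range(10):
    u = two_grid(u, f, h)
print("max error vs sin(pi x): %.2e" % np.abs(u - np.sin(np.pi * x)).max())
```

Recursing on the coarse solve instead of inverting it directly turns this two-grid cycle into the multi-level multigrid referred to in the abstract.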
Abstract:
Electricity price prediction is an important issue for all market participants, enabling them to decide on the most suitable strategies and to establish bilateral contracts that maximise their profits and minimise their risks. Energy prices typically exhibit seasonality, high volatility, and spikes. In addition, the price of energy is influenced by many factors, such as energy demand, weather, and fuel prices. This work proposes a new hybrid approach for price prediction in the short-term energy market. The approach combines autoregressive integrated moving average (ARIMA) filters and neural network (ANN) models in a cascade structure and uses explanatory variables. A two-step process is applied: in the first step, the explanatory variables are predicted; in the second step, energy prices are predicted using the future values of the explanatory variables. The proposed model considers a prediction horizon of 12 steps (weeks) ahead and is applied to the Brazilian market, which has unique behavioural characteristics and adopts cost-based centralised dispatch. The results show a good capability for predicting price spikes and satisfactory accuracy according to error measures and tail-loss tests when compared with traditional techniques. As a complement, a classifier model composed of decision trees and ANNs is proposed, with the aim of making the price-formation rules explicit and, together with the prediction model, acting as an attractive tool for mitigating the risks of energy trading.
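A rough sketch of the cascade idea (not the thesis model): a linear autoregression captures the linear structure of the price series, and a small neural network is trained on the linear stage's residuals using the lagged price and an explanatory variable, here a hypothetical demand series; the hybrid forecast is the sum of the two stages. Python with NumPy and scikit-learn; all data and parameters are simulated for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Illustrative weekly series: price driven linearly by its own past plus a
# non-linear response to demand (the explanatory variable).
T = 300
demand = 50 + 10 * np.sin(2 * np.pi * np.arange(T) / 52) + rng.normal(0, 1, T)
price = np.zeros(T)
for t in range(1, T):
    price[t] = (20 + 0.6 * price[t - 1]
                + 0.002 * max(demand[t] - 55, 0) ** 2 + rng.normal(0, 0.5))

# Stage 1: linear AR(1) fitted by least squares.
X1 = np.column_stack([np.ones(T - 1), price[:-1]])
beta = np.linalg.lstsq(X1, price[1:], rcond=None)[0]
resid = price[1:] - X1 @ beta

# Stage 2: a neural network models the residual from lagged price and demand.
X2 = np.column_stack([price[:-1], demand[1:]])
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
nn.fit(X2, resid)

# One-step-ahead hybrid forecast; iterating this with *forecast* demand values
# would give the 12-step scheme described in the abstract.
forecast = np.array([1.0, price[-1]]) @ beta + nn.predict([[price[-1], demand[-1]]])[0]
print("hybrid one-step forecast: %.2f (last observed price %.2f)" %
      (forecast, price[-1]))
```

The cascade mirrors the two-step process above: first the explanatory variable (demand) would be forecast, then its predicted values feed the non-linear price stage.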
Abstract:
Graduate Program in Chemistry - IQ
Abstract:
Many engineering sectors are challenged by multi-objective optimization problems. Even though the idea behind these problems is simple and well established, implementing any procedure to solve them is not a trivial task. The use of evolutionary algorithms to find candidate solutions is widespread; usually they supply a discrete picture of the non-dominated solutions, a Pareto set. Although it is very interesting to know the non-dominated solutions, an additional criterion is needed to select the one solution to be deployed. To better support the design process, this paper presents a new method for solving non-linear multi-objective optimization problems by adding a control function that guides the optimization process over the Pareto set, which does not need to be found explicitly. The proposed methodology differs from the classical methods that combine the objective functions into a single scale, and is based on a single run of non-linear single-objective optimizers.
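For context, the classical scalarization approach that the authors contrast their method with can be sketched as follows: the objectives are combined into a single weighted scale and one single-objective problem is solved per weight vector, with varying weights tracing out (convex portions of) the Pareto set. The objective functions and weights below are illustrative, and this is the baseline technique, not the paper's control-function method.

```python
import numpy as np
from scipy.optimize import minimize

def f1(x):  # first objective (minimised)
    return (x[0] - 1) ** 2 + x[1] ** 2

def f2(x):  # second, conflicting objective (minimised)
    return x[0] ** 2 + (x[1] - 1) ** 2

# Classical weighted-sum scalarization: one single-objective run per weight.
for w in (0.1, 0.5, 0.9):
    res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=np.zeros(2))
    print("w=%.1f -> x=%s  f1=%.3f  f2=%.3f" %
          (w, np.round(res.x, 3), f1(res.x), f2(res.x)))
```

The repeated runs over a weight grid are exactly what the paper's single-run, control-function formulation is designed to avoid.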
Abstract:
The objective of this thesis is the refined estimation of source parameters. To this end we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analysed the P- and S-wave displacement spectra to estimate spectral parameters, that is, corner frequencies and low-frequency spectral amplitudes. We used a parametric modelling approach which is combined with a multi-step, non-linear inversion strategy and includes the correction for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11-10^14 N·m, recorded at the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when we are not able to model the propagation accurately. In this case the empirical Green function approach is a very useful tool for studying seismic source properties. In fact, Empirical Green Functions (EGFs) allow the contribution of propagation and site effects to the signal to be represented without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to calculate the relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to: 1) a large event, the Mw 6.3 2009 L'Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes ranging between 3 and 5.6; and 3) a small event, the Mw 2.9 Laviano mainshock (Southern Italy).
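A common parametric form for displacement spectra of the kind described here is the Brune-type model Omega(f) = Omega0 / (1 + (f/fc)^2), multiplied by a whole-path attenuation term exp(-pi f t*); fitting it to an observed spectrum yields the low-frequency level and the corner frequency. The sketch below fits synthetic data with SciPy and is a generic illustration under that assumed model, not the thesis's multi-step inversion with site-effect corrections.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_spectrum(f, omega0, fc, t_star):
    """Brune-type displacement spectrum with a whole-path attenuation term."""
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * t_star)

def log_model(f, omega0, fc, t_star):
    # Fitting in log amplitude is more robust over the spectrum's wide range.
    return np.log(brune_spectrum(f, omega0, fc, t_star))

rng = np.random.default_rng(6)
f = np.logspace(-1, 1.5, 80)                      # 0.1 to ~31.6 Hz
true = brune_spectrum(f, omega0=3e12, fc=4.0, t_star=0.02)
obs = true * rng.lognormal(0, 0.1, f.size)        # multiplicative noise

popt, _ = curve_fit(log_model, f, np.log(obs), p0=[1e12, 1.0, 0.01])
omega0, fc, t_star = popt
print("Omega0 = %.2e, fc = %.2f Hz, t* = %.3f s" % (omega0, fc, t_star))
```

The low-frequency level Omega0 scales with seismic moment, while fc constrains source size, which is why these two spectral parameters are the targets of the inversion described above.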
Abstract:
In the present work it was shown that the grafting-from method provides synthetic access to various protected polypeptide brushes based on L-glutamic acid, L-aspartic acid, L-lysine and L-ornithine. To realise this synthetic strategy, a macroinitiator based on N-methacrylamide-1,6-diaminohexane was prepared in a multi-step procedure; this macroinitiator can initiate the ring-opening polymerisation of Leuchs anhydrides to grow protected polypeptide side chains. Under strongly acidic or alkaline cleavage conditions it was possible to remove the protecting groups successfully for all protected brushes except one species. Further investigations of the positively and negatively charged polyelectrolyte brushes by static light scattering and capillary electrophoresis showed that only the Z-protected poly-L-lysine brushes could be deprotected without chain degradation. In all other cases, linear chain fragments were detected after removal of the protecting groups. By adding NaClO4 or methanol to aqueous solutions of the poly-L-lysine brushes, it could be shown by CD spectroscopy that the side chains undergo a transition from a disordered conformation to a helical conformation. In further experiments, static light scattering, dynamic light scattering, SAXS and AFM imaging in solution demonstrated that the helical conformation of the side chains leads to a marked decrease of the cylinder cross-section and of the cross-sectional radius of gyration, while the topology of the brush remains unchanged. Furthermore, capillary electrophoresis was used to determine the electrophoretic mobility of the poly-L-lysine brushes and their linear analogues. With these results, in combination with static light scattering experiments, it was possible to calculate the effective charge of linear and branched poly-L-lysine according to a theory of Muthukumar. The result of these calculations confirms earlier investigations by Peter Dziezok, who found in his dissertation, through conductivity and light scattering measurements on linear PVP and PVP brushes, that the effective charge of polymer brushes is at least a factor of 10 smaller than that of the corresponding linear analogues.