941 results for Computers -- Computer Science
Abstract:
Developing analytical models that can accurately describe the behavior of Internet-scale networks is difficult. This is due, in part, to the heterogeneous structure, immense size, and rapidly changing properties of today's networks. The lack of analytical models makes large-scale network simulation an indispensable tool for studying immense networks. However, large-scale network simulation has not been commonly used to study networks of Internet scale. This can be attributed to three factors: 1) current large-scale network simulators are geared towards simulation research rather than network research, 2) the memory required to execute an Internet-scale model is exorbitant, and 3) large-scale network models are difficult to validate. This dissertation tackles each of these problems. First, this work presents a method for automatically enabling real-time interaction, monitoring, and control of large-scale network models. Network researchers need tools that allow them to focus on creating realistic models and conducting experiments, but this should not increase the complexity of developing a large-scale network simulator. This work presents a systematic approach to separating the concerns of running large-scale network models on parallel computers from the user-facing concerns of configuring and interacting with them. Second, this work deals with reducing the memory consumption of network models. As network models grow larger, so does the amount of memory needed to simulate them. This work presents a comprehensive approach to exploiting structural duplication in network models to dramatically reduce the memory required to execute large-scale network experiments. Lastly, this work addresses the validation of large-scale simulations by integrating real protocols and applications into the simulation. With an emulation extension, a network simulator operating in real time can run together with real-world distributed applications and services. As such, real-time network simulation not only alleviates the burden of developing separate application models for simulation but, by including real systems in the network model, also increases the confidence level of network simulation. This work presents a scalable and flexible framework for integrating real-world applications with real-time simulation.
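As a rough illustration of the memory-reduction idea described above, the sketch below shows one standard way to exploit structural duplication: interning (hash-consing/flyweight) of structurally identical node configurations so that many simulated nodes share a handful of objects. All names here (NodeConfig, intern_config) are illustrative assumptions, not the dissertation's API.

```python
# Hypothetical sketch of exploiting structural duplication: structurally
# identical node configurations are interned so every duplicate shares one
# object. Names and fields are illustrative, not from the dissertation.

class NodeConfig:
    """Immutable per-node configuration that many nodes may share."""
    __slots__ = ("link_bandwidth", "queue_size", "routing_policy")

    def __init__(self, link_bandwidth, queue_size, routing_policy):
        self.link_bandwidth = link_bandwidth
        self.queue_size = queue_size
        self.routing_policy = routing_policy

    def key(self):
        return (self.link_bandwidth, self.queue_size, self.routing_policy)

_config_pool = {}

def intern_config(cfg):
    """Return a canonical shared instance for structurally equal configs."""
    return _config_pool.setdefault(cfg.key(), cfg)

# 100,000 nodes with one distinct configuration end up referencing a
# single shared object instead of 100,000 copies.
nodes = [intern_config(NodeConfig(100, 64, "ospf")) for _ in range(100_000)]
```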
Abstract:
Stealthy attackers move patiently through computer networks, taking days, weeks, or months to accomplish their objectives in order to avoid detection. As networks scale up in size and speed, monitoring for such attack attempts is increasingly challenging. This paper presents an efficient monitoring technique for stealthy attacks. It investigates the feasibility of the proposed method under a number of different test cases and examines how the design of the network affects detection. A methodical way of tracing anonymous stealthy activities to their approximate sources is also presented. Bayesian fusion along with traffic sampling is employed as a data reduction method. The proposed method is able to monitor stealthy activities using sampling rates of 10-20% without degrading the quality of detection.
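The abstract names Bayesian fusion over sampled traffic as the data-reduction core. The following is a minimal sketch, under assumed likelihoods, of how sampled per-record "anomalous or not" evidence could be fused into a posterior; the parameter values are placeholders, not the paper's.

```python
# A minimal sketch (not the paper's exact formulation) of Bayesian fusion
# over sampled traffic: each sampled record updates the posterior that a
# host is engaged in stealthy activity. Likelihoods below are assumptions.

import random

P_OBS_GIVEN_ATTACK = 0.30   # assumed: sampled record looks anomalous if attacking
P_OBS_GIVEN_BENIGN = 0.02   # assumed: false-alarm rate per sampled record

def fuse(prior, observations):
    """Sequentially fuse binary 'anomalous?' observations into a posterior."""
    p = prior
    for anomalous in observations:
        like_a = P_OBS_GIVEN_ATTACK if anomalous else 1 - P_OBS_GIVEN_ATTACK
        like_b = P_OBS_GIVEN_BENIGN if anomalous else 1 - P_OBS_GIVEN_BENIGN
        p = p * like_a / (p * like_a + (1 - p) * like_b)
    return p

def sample_traffic(records, rate=0.15):
    """Traffic sampling at a 10-20% rate, as in the paper's experiments."""
    return [r for r in records if random.random() < rate]

records = [True] * 5 + [False] * 95      # toy stream: 5% anomalous records
posterior = fuse(0.01, sample_traffic(records))
print(f"posterior probability of stealthy activity: {posterior:.3f}")
```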
Abstract:
The goal of this paper is to study and propose a new noise reduction technique for the reconstruction of speech signals, particularly for biomedical applications. The proposed method is based on Kalman filtering in the time domain combined with spectral subtraction. Comparison with a discrete Kalman filter in the frequency domain shows better performance for the proposed technique. Performance is evaluated using the segmental signal-to-noise ratio and the Itakura-Saito distance. Results show that the time-domain Kalman filter combined with spectral subtraction is more robust and efficient, improving the Itakura-Saito distance by up to a factor of four. (C) 2007 Elsevier Ltd. All rights reserved.
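To make the named ingredients concrete, here is a small sketch of magnitude spectral subtraction on a single frame and of the segmental SNR used as a quality measure. The frame length and spectral-floor factor are assumptions, not values from the paper.

```python
# Illustrative sketch of two ingredients named in the abstract: magnitude
# spectral subtraction (one frame) and segmental SNR. Parameters assumed.

import numpy as np

def spectral_subtraction(frame, noise_frame, floor=0.02):
    """Subtract an estimated noise magnitude spectrum from one signal frame."""
    spec = np.fft.rfft(frame)
    mag = np.abs(spec) - np.abs(np.fft.rfft(noise_frame, n=len(frame)))
    mag = np.maximum(mag, floor * np.abs(spec))          # spectral floor
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(frame))

def segmental_snr(clean, processed, frame=256):
    """Average per-frame SNR in dB, the paper's first quality measure."""
    snrs = []
    for i in range(0, len(clean) - frame, frame):
        s = clean[i:i + frame]
        e = s - processed[i:i + frame]
        snrs.append(10 * np.log10(np.sum(s**2) / max(np.sum(e**2), 1e-12)))
    return float(np.mean(snrs))
```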
Abstract:
The purpose of this paper is to propose a multiobjective optimization approach to the manufacturing cell formation problem that explicitly considers the performance of the resulting manufacturing system. Cells are formed so as to simultaneously minimize three conflicting objectives: the level of work-in-process, the number of intercell moves, and the total machinery investment. A genetic algorithm searches the design space in order to approximate the Pareto-optimal set. The objective values for each candidate solution in a population are assigned by running a discrete-event simulation, in which the model is automatically generated according to the number of machines and their distribution among cells implied by that solution. The potential of this approach is evaluated through its application to an illustrative example and to a case from the literature. Analysis of the results shows that the approach is capable of generating a set of alternative manufacturing cell configurations that optimize multiple performance measures, greatly improving the decision-making process involved in planning and designing cellular systems. (C) 2010 Elsevier Ltd. All rights reserved.
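A worked fragment of the multiobjective bookkeeping such an approach requires is sketched below: a Pareto-dominance test over the three (minimized) objectives, with the discrete-event simulation abstracted into an evaluate callback. This is an assumed skeleton, not the authors' implementation.

```python
# Minimal Pareto bookkeeping for the three objectives of the paper
# (work-in-process, intercell moves, machine investment), all minimized.
# evaluate(sol) stands in for the automatically generated simulation model.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population, evaluate):
    """Keep only the non-dominated cell configurations in a population."""
    scored = [(sol, evaluate(sol)) for sol in population]
    return [(s, f) for s, f in scored
            if not any(dominates(g, f) for _, g in scored if g != f)]

# evaluate(sol) would run the discrete-event simulation and return
# (work_in_process, intercell_moves, machine_investment) for that layout.
```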
Abstract:
This work extends a previously presented refined sandwich beam finite element (FE) model to vibration analysis, including dynamic piezoelectric actuation and sensing. The mechanical model is a refinement of the classical sandwich theory (CST), in which the core is modelled with a third-order shear deformation theory (TSDT). Along the beam length, the FE model assumes, electrically, a constant voltage in the piezoelectric layers and a quadratic third-order electric potential variable in the core and, mechanically, a linear axial displacement, a quadratic bending rotation of the core, and a cubic transverse displacement of the sandwich beam. Despite this refinement of the mechanical and electric behavior of the piezoelectric core, the model has the same number of degrees of freedom as the previous CST one, thanks to a two-step static condensation of the internal DOFs (the bending rotation and the core's third-order electric potential variable). The results obtained with the proposed FE model are compared with available numerical, analytical, and experimental ones. The results confirm that the TSDT and the induced cubic electric potential add extra stiffness to the sandwich beam. (C) 2007 Elsevier Ltd. All rights reserved.
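The static condensation mentioned above follows the standard partitioned form K_rr - K_ri K_ii^{-1} K_ir; a minimal numerical sketch (with toy matrices, not the element matrices of the paper) is given below.

```python
# Sketch of static condensation: internal degrees of freedom (here, the
# core bending rotation and third-order electric potential variable) are
# eliminated from the element matrix before assembly. Toy data only.

import numpy as np

def condense(K, retained, internal):
    """Return K_rr - K_ri K_ii^{-1} K_ir for the given index partitions."""
    K_rr = K[np.ix_(retained, retained)]
    K_ri = K[np.ix_(retained, internal)]
    K_ir = K[np.ix_(internal, retained)]
    K_ii = K[np.ix_(internal, internal)]
    return K_rr - K_ri @ np.linalg.solve(K_ii, K_ir)

K = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
K_condensed = condense(K, retained=[0, 1], internal=[2])
print(K_condensed)
```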
Abstract:
The general flowshop scheduling problem is a production problem in which a set of n jobs must be processed with an identical flow pattern on m machines. In permutation flowshops, the sequence of jobs is the same on all machines. A significant research effort has been devoted to sequencing jobs in a flowshop to minimize the makespan. This paper describes the application of a Constructive Genetic Algorithm (CGA) to makespan minimization in flowshop scheduling. The CGA was proposed recently as an alternative to traditional GA approaches, particularly for evaluating schemata directly. The population, initially formed only of schemata, evolves under recombination into a population of well-adapted structures (schema instantiation). The implemented CGA is based on the classic NEH heuristic, and a local search heuristic is used to define the fitness functions. The parameters of the CGA are calibrated using a Design of Experiments (DOE) approach. The computational results are compared against other successful algorithms from the literature on Taillard's well-known standard benchmark. The computational experience shows that this innovative CGA approach provides competitive results for flowshop scheduling problems. (C) 2007 Elsevier Ltd. All rights reserved.
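Since the CGA's fitness definition builds on the NEH heuristic, a compact sketch of NEH for the permutation flowshop is shown below: jobs are ordered by decreasing total processing time and inserted one at a time at the makespan-minimizing position. The instance data are toy values.

```python
# Compact sketch of the classic NEH heuristic for permutation flowshops.
# p[j][m] is the processing time of job j on machine m.

def makespan(seq, p):
    """Completion time of the last job on the last machine."""
    m = len(p[0])
    c = [0.0] * m
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for j in order[1:]:
        candidates = [seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, p))
    return seq

p = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]]   # 4 jobs x 3 machines
best = neh(p)
print(best, makespan(best, p))
```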
Abstract:
Product lifecycle management (PLM) innovates by defining both the product as a central element for aggregating enterprise information and the lifecycle as a new time dimension for information integration and analysis. Because of its potential to shorten innovation lead times and reduce costs, PLM has attracted a lot of attention in industry and in research. However, the current stage of PLM implementation at most organisations still does not apply lifecycle management concepts thoroughly. In order to close this realisation gap, this article presents a process-oriented framework to support effective PLM implementation. The framework's central point is a set of lifecycle-oriented business process reference models that link the fundamental concepts, enterprise knowledge, and software solutions necessary to deploy PLM effectively. (c) 2007 Elsevier B.V. All rights reserved.
Abstract:
This paper presents an accurate and efficient solution for the random transverse and angular displacement fields of uncertain Timoshenko beams. Approximate numerical solutions are obtained using the Galerkin method and chaos polynomials. The Chaos-Galerkin scheme is constructed by respecting the theoretical conditions for existence and uniqueness of the solution. Numerical results show fast convergence to the exact solution with excellent accuracy. The developed Chaos-Galerkin scheme accurately approximates the complete cumulative distribution function of the displacement responses, making it a theoretically sound and efficient method for solving stochastic problems in engineering. (C) 2011 Elsevier Ltd. All rights reserved.
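As a sketch of the chaos-polynomial machinery on the simplest possible problem, the fragment below performs a Galerkin projection onto a probabilists' Hermite basis for a scalar equation a(xi) u(xi) = f with a Gaussian coefficient. The beam problem in the paper is of course a boundary value problem; only the chaos/Galerkin steps are illustrated, with assumed parameters.

```python
# Chaos-Galerkin sketch for a scalar toy problem a(xi) u(xi) = f with
# a(xi) = mu + sigma*xi and xi ~ N(0, 1). Parameters are assumptions.

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

P = 5                                   # chaos order (number of basis terms)
x, w = hermegauss(40)                   # Gauss-Hermite (He) quadrature nodes
w = w / np.sqrt(2 * np.pi)              # normalize to E[.] under N(0, 1)

def psi(i, xi):
    """Probabilists' Hermite polynomial He_i evaluated at xi."""
    c = np.zeros(i + 1)
    c[i] = 1.0
    return hermeval(xi, c)

mu, sigma, f = 2.0, 0.4, 1.0
a = mu + sigma * x                      # random coefficient at the nodes

# Galerkin system: sum_j E[a psi_j psi_i] u_j = E[f psi_i]
A = np.array([[np.sum(w * a * psi(j, x) * psi(i, x)) for j in range(P)]
              for i in range(P)])
b = np.array([np.sum(w * f * psi(i, x)) for i in range(P)])
u = np.linalg.solve(A, b)               # chaos coefficients of the response

samples = hermeval(np.random.randn(100_000), u)   # sample u(xi) from expansion
print("chaos mean:", u[0], "| sampled mean:", samples.mean())
```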
Abstract:
This paper presents improvements to the model proposed by Machado et al. [Machado SL, Carvalho MF, Vilar OM. Constitutive model for municipal solid waste. J Geotech Geoenviron Eng ASCE 2002;128(11):940-51], now considering the influence of the biodegradation of organic matter on the mechanical behavior of municipal solid waste (MSW). The original framework treats waste as composed of two component groups: fibers and organic paste. Laws of behavior are assessed for each component group and then coupled to represent the behavior of the waste. The improvements introduced in this paper take into account the changes in fiber properties and the mass loss due to organic matter depletion over time. Mass loss is calculated indirectly from the MSW gas generation potential through a first-order decay model. It is shown that as biodegradation proceeds the proportion of fibers increases; however, the fibers also undergo a degradation process that tends to reduce their ultimate tensile stress and Young's modulus. The way these changes influence the behavior of MSW is incorporated into the final framework, which captures the main features of MSW stress-strain behavior under different loading conditions. (C) 2007 Elsevier Ltd. All rights reserved.
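The first-order decay relation the abstract relies on can be written as G(t) = L0 (1 - e^(-kt)); the sketch below shows how accumulated gas generation could be turned into an indirect mass-loss estimate. The proportionality between generated gas and degraded mass is an assumed simplification, not the paper's calibrated relation.

```python
# Sketch of the first-order decay model for MSW gas generation and the
# indirect mass-loss estimate it enables. All parameters are assumptions.

import math

def gas_generated(L0, k, t):
    """Accumulated gas per unit waste after t years: L0 * (1 - e^(-k t))."""
    return L0 * (1.0 - math.exp(-k * t))

def mass_loss_fraction(L0, k, t, degradable_fraction=0.5):
    """Fraction of initial mass lost, assumed proportional to gas generated."""
    return degradable_fraction * gas_generated(L0, k, t) / L0

def fiber_mass_fraction(f0, L0, k, t, degradable_fraction=0.5):
    """Fibers degrade more slowly than paste, so their mass fraction grows."""
    remaining = 1.0 - mass_loss_fraction(L0, k, t, degradable_fraction)
    return f0 / remaining

# Example: 10% initial fibers, k = 0.1 / year, after 10 years.
print(f"fiber fraction after 10 y: {fiber_mass_fraction(0.10, 100.0, 0.1, 10.0):.3f}")
```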
Abstract:
Swallowing dynamics involves the coordination and interaction of several muscles and nerves, which allow correct food transport from mouth to stomach without laryngotracheal penetration or aspiration. Clinical swallowing assessment depends on the evaluator's knowledge of the anatomical structures and neurophysiological processes involved in swallowing. Any alteration in those steps is termed oropharyngeal dysphagia, which may have many causes, such as neurological or mechanical disorders. Videofluoroscopy of swallowing is presently considered the best exam for objectively assessing the dynamics of swallowing, but it must be conducted under certain restrictions due to the patient's exposure to radiation, which limits periodic repetition for monitoring swallowing therapy. Another method, cervical auscultation, is a promising new diagnostic tool for the assessment of swallowing disorders. The potential to diagnose dysphagia noninvasively by assessing the sounds of swallowing is a highly attractive option for the dysphagia clinician. Even so, the captured sound contains noise, which can hamper the evaluator's decision. This paper therefore proposes the use of a filter to improve the quality of the audible sound and facilitate interpretation of the examination. A wavelet denoising approach is used to decompose the noisy signal. The signal-to-noise ratio was evaluated to demonstrate the quantitative results of the proposed methodology. (C) 2007 Elsevier Ltd. All rights reserved.
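A minimal wavelet-denoising sketch in the spirit of the abstract is shown below, using PyWavelets with soft universal thresholding; the wavelet family, decomposition level, and threshold rule are assumptions, as the paper may use different choices.

```python
# Minimal wavelet denoising sketch: decompose, soft-threshold the detail
# coefficients (universal threshold with a median noise estimate), rebuild.

import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # noise scale estimated from the finest detail band (median rule)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(signal)]

t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 12 * t)                      # toy "swallow sound"
noisy = clean + 0.3 * np.random.randn(t.size)
denoised = wavelet_denoise(noisy)
snr = 10 * np.log10(np.sum(clean**2) / np.sum((clean - denoised)**2))
print(f"output SNR: {snr:.1f} dB")
```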
Abstract:
Most post-processors for boundary element (BE) analysis use an auxiliary domain mesh to display domain results, undermining the modelling advantage of a pure boundary discretization. This paper introduces a novel visualization technique that preserves the basic properties of boundary element methods. The proposed algorithm does not require any domain discretization and is based on the direct and automatic identification of isolines. Another critical aspect of visualizing domain results in BE analysis is the effort required to evaluate results at interior points. To tackle this issue, the present article also compares the performance of two different BE formulations (conventional and hybrid). In addition, this paper presents an overview of the most common post-processing and visualization techniques in BE analysis, such as classical scan-line algorithms and interpolation over a domain discretization. The results presented herein show that the proposed algorithm offers very high performance compared with other visualization procedures.
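One way to identify isolines directly, without any domain mesh, is a predictor-corrector march over a field u(x, y) that a BE formulation can evaluate at arbitrary interior points. The sketch below is an assumed illustration of that general idea, not the authors' algorithm; step sizes and tolerances are placeholders.

```python
# Mesh-free isoline tracing sketch: march along u(x, y) = level using the
# direction perpendicular to the gradient, with a Newton corrector.

import numpy as np

def grad(u, p, h=1e-5):
    """Central-difference gradient of the scalar field u at point p."""
    return np.array([(u(p[0] + h, p[1]) - u(p[0] - h, p[1])) / (2 * h),
                     (u(p[0], p[1] + h) - u(p[0], p[1] - h)) / (2 * h)])

def trace_isoline(u, level, seed, step=0.02, n_steps=300):
    """Predictor-corrector march along u(x, y) = level starting at seed."""
    p = np.asarray(seed, dtype=float)
    pts = [p.copy()]
    for _ in range(n_steps):
        g = grad(u, p)
        tangent = np.array([-g[1], g[0]]) / (np.linalg.norm(g) + 1e-12)
        p = p + step * tangent                         # predictor step
        g = grad(u, p)
        p -= (u(*p) - level) * g / (g @ g + 1e-12)     # Newton corrector
        pts.append(p.copy())
    return np.array(pts)

# Example: the unit circle u = x^2 + y^2 = 1, seeded at (1, 0).
circle = trace_isoline(lambda x, y: x * x + y * y, 1.0, (1.0, 0.0))
```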
Abstract:
This paper focuses on the flexural behavior of RC beams externally strengthened with Carbon Fiber Reinforced Polymer (CFRP) fabric. A non-linear finite element (FE) analysis strategy is proposed to support the experimental analysis of the beams' flexural behavior. A development system (the QUEBRA2D/FEMOOP programs) has been used to carry out the numerical simulation. Appropriate constitutive models for concrete, rebars, CFRP, and bond-slip interfaces have been implemented and adjusted to represent the behavior of the composite system. Interface and truss finite elements have been implemented (in discrete and embedded approaches) for the numerical representation of rebars, interfaces, and composites.
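One of the constitutive ingredients named above is the bond-slip interface law; a deliberately simple bilinear version is sketched below for illustration. The shape and parameter values are assumptions; the paper's calibrated models are more elaborate.

```python
# Hedged sketch of a bond-slip interface law: bilinear shear stress vs.
# relative slip (linear rise, linear softening, then debonded). Parameter
# values are illustrative placeholders, not from the paper.

def bond_stress(slip, tau_max=6.0, s_peak=0.5, s_ult=3.0):
    """Bilinear bond-slip law (MPa, mm)."""
    s = abs(slip)
    if s <= s_peak:
        tau = tau_max * s / s_peak                        # ascending branch
    elif s <= s_ult:
        tau = tau_max * (s_ult - s) / (s_ult - s_peak)    # softening branch
    else:
        tau = 0.0                                         # fully debonded
    return tau if slip >= 0 else -tau

print(bond_stress(0.25), bond_stress(1.0), bond_stress(4.0))
```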
Abstract:
This work deals with the determination of crack openings in 2D reinforced concrete structures using the Finite Element Method with either a smeared rotating crack model or an embedded crack model. In the smeared crack model, the strong discontinuity associated with the crack is spread throughout the finite element. As is well known, the continuity of the displacement field assumed for these models is incompatible with the actual discontinuity. However, this type of model has been used extensively due to the relative computational simplicity it provides by treating cracks in a continuum framework, as well as its reportedly good predictions of the structural behavior of reinforced concrete members. On the other hand, by enriching the displacement field within each finite element crossed by the crack path, the embedded crack model is able to describe the effects of actual discontinuities (cracks). This paper presents a comparative study of the abilities of these 2D models to predict the mechanical behavior of reinforced concrete structures. Structural responses are compared with experimental results from the literature, including the crack patterns, crack openings, and rebar stresses predicted by both models.
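For concreteness, a smeared model typically recovers a crack opening in crack-band fashion, by integrating the inelastic strain normal to the crack over a band width tied to the element size, roughly w = (eps_total - eps_elastic) * h. The tiny sketch below illustrates this relation with assumed values, not results from the paper.

```python
# Sketch of recovering a crack opening from a smeared (crack band) model:
# the cracking strain is multiplied by an element-size band width h.
# Values are illustrative assumptions only.

def crack_opening(strain_total, strain_elastic, band_width):
    """w ~ (eps_total - eps_elastic) * h for a smeared crack band."""
    return max(strain_total - strain_elastic, 0.0) * band_width

w = crack_opening(strain_total=1.2e-3, strain_elastic=1.0e-4, band_width=50.0)
print(f"estimated crack opening: {w:.3f} mm")   # with h given in mm
```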
Abstract:
Here, we study the stable integration of real-time optimization (RTO) with model predictive control (MPC) in a three-layer structure. The intermediate layer is a quadratic programming problem whose objective is to compute reachable targets for the MPC layer that lie at the minimum distance from the optimum set points produced by the RTO layer. The lower layer is an infinite-horizon MPC with guaranteed stability, with additional constraints that force the feasibility and convergence of the target calculation layer. The case in which there is polytopic uncertainty in the steady-state model used in the target calculation is also considered. The dynamic part of the MPC model is likewise considered unknown, but is assumed to be represented by one model from a discrete set of models. The efficiency of the methods presented here is illustrated with the simulation of a low-order system. (C) 2010 Elsevier Ltd. All rights reserved.
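The intermediate target-calculation layer can be pictured as a small constrained least-squares/QP problem: project the RTO set point onto the outputs reachable through the plant's steady-state gain, subject to input bounds. The sketch below uses a toy gain matrix and bounds, which are assumptions, not the paper's system.

```python
# Sketch of a target-calculation layer: minimize the distance between the
# reachable steady-state output G u and the RTO set point, within bounds.

import numpy as np
from scipy.optimize import lsq_linear

G = np.array([[1.0, 0.4],
              [0.2, 0.9]])          # assumed steady-state gain: y_ss = G u_ss
y_rto = np.array([2.0, -1.0])       # optimum set point from the RTO layer

# minimize ||G u - y_rto||^2  subject to input bounds (reachability)
res = lsq_linear(G, y_rto, bounds=(-1.5, 1.5))
u_target = res.x
y_target = G @ u_target             # reachable target passed to the MPC layer
print("MPC targets:", u_target, y_target)
```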
Abstract:
In this study, the concept of cellular automata is applied in an innovative way to simulate the separation of phases in a water/oil emulsion. The velocity of the water droplets is calculated from the balance of forces acting on a pair of droplets in a group, and a cellular automaton is used to simulate the whole group of droplets. Thus, it is possible to solve the problem stochastically and to show the sequence of droplet collisions and coalescence phenomena. This methodology enables the calculation of the amount of water that can be separated from the emulsion under different operating conditions, enabling the process to be optimized. Comparisons are carried out between the results obtained from the developed model and the operational performance of an actual desalting unit. The accuracy observed shows that the developed model is a good representation of the actual process. (C) 2010 Published by Elsevier Ltd.
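A toy cellular-automaton sketch in the spirit of the abstract is given below: a 1-D column of cells holds water droplet volumes in oil, droplets settle one cell per step with a size-dependent probability, and droplets that land in the same cell coalesce. All rules and probabilities are illustrative assumptions, not the paper's force-balance model.

```python
# Toy CA for gravitational settling and coalescence of water droplets in
# oil. Rules and probabilities are illustrative assumptions.

import random

def step(column, p_base=0.3):
    """One synchronous CA update: settle downward, coalesce on contact."""
    new = [0.0] * len(column)
    for i, vol in enumerate(column):
        if vol == 0.0:
            continue
        p_move = min(1.0, p_base * vol)      # larger droplets settle faster
        j = i + 1 if i + 1 < len(column) and random.random() < p_move else i
        new[j] += vol                        # landing on a droplet = coalescence
    return new

column = [random.choice([0.0, 0.5, 1.0]) for _ in range(50)]   # top to bottom
for _ in range(100):
    column = step(column)
print("water collected in the bottom cell:", column[-1])
```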