895 results for Other Computer Engineering
Abstract:
Observability measures the extent to which computer systems support accurately capturing, analyzing, and presenting (collectively, observing) internal information about the systems. Observability frameworks play important roles in program understanding, troubleshooting, performance diagnosis, and optimization. However, traditional solutions are either expensive or coarse-grained, which compromises their utility for today's increasingly complex software systems. New solutions are emerging for VM-based languages due to the full control language VMs have over program executions. Such existing solutions, nonetheless, still lack flexibility, have high overhead, or provide limited context information for developing powerful dynamic analyses. In this thesis, we present a VM-based infrastructure, called the marker tracing framework (MTF), to address the deficiencies in the existing solutions and provide better observability for VM-based languages. MTF serves as a solid foundation for implementing fine-grained, low-overhead program instrumentation. Specifically, MTF allows analysis clients to: 1) define custom events with rich semantics; 2) specify precisely the program locations where the events should trigger; and 3) adaptively enable/disable the instrumentation at runtime. In addition, MTF-based analysis clients are more powerful by having access to all information available to the VM. To demonstrate the utility and effectiveness of MTF, we present two analysis clients: 1) dynamic typestate analysis with adaptive online program analysis (AOPA); and 2) selective probabilistic calling context analysis (SPCC). In addition, we evaluate the runtime performance of MTF and the typestate client with the DaCapo benchmarks. The results show that: 1) MTF has acceptable runtime overhead when tracing moderate numbers of marker events; 2) AOPA is highly effective in reducing the event frequency for the dynamic typestate analysis; and 3) language VMs can be exploited to offer greater observability.
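To make the three capabilities concrete, the following Python sketch mimics what an MTF-style analysis client could look like. All class and method names (MarkerEvent, TracingClient, attach, fire) are hypothetical stand-ins for illustration, not the thesis's actual VM-level API.

```python
# A minimal sketch of an MTF-style analysis client. Names are hypothetical;
# the real framework operates inside the language VM, not in user code.

class MarkerEvent:
    """A custom event with analysis-defined semantics (capability 1)."""
    def __init__(self, name, payload_fields):
        self.name = name
        self.payload_fields = payload_fields
        self.enabled = True
        self.handlers = []

class TracingClient:
    def __init__(self):
        self.sites = {}  # program location -> list of MarkerEvent

    def attach(self, location, event, handler):
        """Bind an event to a precise program location (capability 2)."""
        event.handlers.append(handler)
        self.sites.setdefault(location, []).append(event)

    def set_enabled(self, event, flag):
        """Adaptively enable/disable instrumentation at runtime (capability 3)."""
        event.enabled = flag

    def fire(self, location, **payload):
        """Called by the (simulated) VM when execution reaches `location`."""
        for ev in self.sites.get(location, []):
            if ev.enabled:
                for h in ev.handlers:
                    h(ev.name, payload)

# Usage: trace calls to File.close, then disable tracing at runtime.
client = TracingClient()
close_ev = MarkerEvent("file-close", ["receiver"])
client.attach("File.close:entry", close_ev, lambda name, p: print(name, p))
client.fire("File.close:entry", receiver="f1")   # traced
client.set_enabled(close_ev, False)
client.fire("File.close:entry", receiver="f2")   # ignored: disabled
```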
Abstract:
In this paper, we consider the problem of topology design for optical networks. We investigate the problem of selecting switching sites to minimize the total cost of the optical network. The cost of an optical network can be expressed as the sum of three main factors: the site cost, the link cost, and the switch cost. To the best of our knowledge, this problem has not been studied in the general form investigated in this paper. We present a mixed integer quadratic programming (MIQP) formulation of the problem to find the optimal value of the total network cost. We also present an efficient heuristic to approximate the solution in polynomial time. The experimental results show good performance of the heuristic: in experiments with 10-node network topologies, the total network cost computed by the heuristic is within 2% to 21% of the optimal value, and in 51% of these experiments it is within 8% of optimal. We also discuss the insight gained from our experiments.
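As a concrete illustration of this cost structure, the following sketch enumerates site subsets for a toy four-node instance and selects the cheapest. All costs are invented, and the real formulation (traffic routing, switch sizing, and the quadratic terms of the MIQP) is considerably richer.

```python
# Toy sketch of total network cost = site cost + switch cost + link cost.
# Brute force stands in for the paper's MIQP/heuristic; data is invented.
from itertools import combinations

nodes = range(4)
site_cost = [5, 4, 6, 3]            # cost of opening each switching site
switch_cost = 2                     # per-switch cost at an open site
link_cost = {(0, 1): 1, (0, 2): 3, (1, 2): 2, (1, 3): 4, (2, 3): 1}

def total_cost(open_sites):
    if len(open_sites) < 2:
        return float("inf")         # need at least two sites to connect
    sites = sum(site_cost[i] for i in open_sites)
    switches = switch_cost * len(open_sites)
    # pair up open sites with direct links (missing links cost 10 here)
    links = sum(link_cost.get((i, j), 10)
                for i, j in combinations(sorted(open_sites), 2))
    return sites + switches + links

candidates = [s for k in range(2, 5) for s in combinations(nodes, k)]
best = min(candidates, key=total_cost)
print("best sites:", best, "cost:", total_cost(best))
```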
Abstract:
The aim of this Final Degree Project is to offer a solution that helps people manage both their personal and business tasks more productively. Applications of this kind are currently very successful. We decided to build the application around the Getting Things Done (GTD) methodology, since it increases productivity and reduces work-related stress. Today there are not many applications that use this methodology, and those that do apply it only in a very basic way. Alongside GTD, and guided by the tutor's experience, we combined the methodology with time-tracking controls to further improve the productivity of the software's users. The result of this Final Degree Project is the foundation of a web application for task management. The software is fully functional, very easy to use, very intuitive, and follows the Getting Things Done philosophy. The main objectives achieved in this project were: user management; task and project management; application of the GTD methodology; and tracking of productive and unproductive time, interruptions, and timers. The application was developed as a Final Degree Project in Computer Engineering, covering all phases of software development in order to obtain a functional product approved by the tutor, who played the role of a potential client. The project followed the RUP methodology: use-case driven, iterative, and incremental. The process comprised the elaboration of a feature list, the specification of the use cases, and analysis, design, implementation, and testing phases. The main technologies used were Ruby On Rails, HTML5, CSS, AJAX, and JavaScript. The long-term goal is for this solution to serve as a base implementation that, with the necessary improvements, could bring to market a strong task management product built on the GTD methodology.
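The abstract does not detail the application's data model; as a rough illustration of the GTD workflow plus time tracking it describes, here is a minimal sketch, with all names invented and written in Python rather than the project's Ruby On Rails.

```python
# Minimal, illustrative GTD task model: tasks are captured into an inbox,
# organized onto lists, and timed. Names and states are placeholders.
from dataclasses import dataclass, field
import time

STATES = ("inbox", "next_action", "project", "waiting", "someday", "done")

@dataclass
class Task:
    title: str
    state: str = "inbox"                 # everything is captured here first
    seconds_worked: float = 0.0          # productive-time tracking
    started_at: float = field(default=0.0, repr=False)

    def organize(self, state: str) -> None:
        """GTD clarify/organize step: move the task to a list."""
        assert state in STATES
        self.state = state

    def start_timer(self) -> None:       # the abstract's time controls
        self.started_at = time.time()

    def stop_timer(self) -> None:
        self.seconds_worked += time.time() - self.started_at

task = Task("Write project report")
task.organize("next_action")
task.start_timer()
task.stop_timer()
task.organize("done")
print(task)
```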
Abstract:
Compared with other mature engineering disciplines, fracture mechanics of concrete is still a developing field, and it is very important for structures such as bridges that are subject to dynamic loading. A historical overview of work in the field is provided, and then the project is presented. The project applies the Digital Image Correlation (DIC) technique to the detection of cracks at the surface of concrete prisms (500 mm x 100 mm x 100 mm) subject to flexural loading conditions (four-point bending test). The technique provides displacement measurements of the region of interest; from this displacement field, information about the crack mouth opening displacement (CMOD) is obtained and related to the applied load. The evolution of the fracture process is shown through graphs and graphical maps of the displacement at selected steps of the loading process. The study shows that the DIC system makes it possible to detect the appearance and evolution of cracks, even before the cracks become visually detectable.
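As a generic illustration of subset-based DIC, not the project's specific pipeline, the following sketch tracks the subpixel displacement of one interrogation window between a reference and a synthetically deformed speckle image.

```python
# Rough DIC sketch: measure the shift of a small subset between a reference
# and a deformed image via subpixel cross-correlation (scikit-image).
import numpy as np
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
reference = rng.random((200, 200))                        # stand-in speckle image
deformed = np.roll(reference, shift=(0, 3), axis=(0, 1))  # 3 px horizontal shift

# Matching subsets around a point of interest (e.g. one crack flank).
y, x, half = 100, 100, 32
ref_sub = reference[y-half:y+half, x-half:x+half]
def_sub = deformed[y-half:y+half, x-half:x+half]

shift, error, _ = phase_cross_correlation(ref_sub, def_sub, upsample_factor=100)
print("(dy, dx) to register the deformed subset onto the reference:", shift)
# Repeating this over a grid of subsets yields the displacement field;
# differencing horizontal displacements across a crack gives a CMOD estimate.
```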
Abstract:
The aim of this study was to develop a model capable of capturing the different contributions that characterize the nonlinear behaviour of reinforced concrete structures. In particular, especially for non-slender structures, the contribution to the nonlinear deformation due to bending may not be sufficient to determine the structural response. Two different models characterized by a fibre beam-column element are proposed here. These models can reproduce the flexure-shear interaction in the nonlinear range, with the aim of improving the analysis of shear-critical structures. The first element discussed is based on a flexibility formulation associated with the Modified Compression Field Theory (MCFT) as the material constitutive law. The other model described in this thesis is based on a three-field variational formulation associated with a 3D generalized plastic-damage model as the constitutive relationship. The first model was developed by combining a fibre beam-column element based on the flexibility formulation with the MCFT as the constitutive relationship. The flexibility formulation, in fact, appears to be particularly effective for analysis in the nonlinear field. The coupling between the fibre element, which models the structure, and the shear panel, which models the individual fibres, makes it possible to describe the nonlinear response associated with flexure and shear, and especially their interaction in the nonlinear field. The model was implemented in an original MATLAB® computer code for describing the response of generic structures. The simulations carried out made it possible to verify the model's range of applicability. Comparisons with available experimental results for reinforced concrete shear walls were performed in order to validate the model; these results have the peculiarity of distinguishing the separate contributions due to flexure and shear. The presented simulations were carried out, in particular, for monotonic loading. The model was also tested through numerical comparisons with other computer programs. Finally, it was applied in a numerical study on the influence of the nonlinear shear response of non-slender reinforced concrete (RC) members. Another approach to the problem was studied during a period of research at the University of California, Berkeley. The beam formulation follows the assumptions of the Timoshenko shear beam theory for the displacement field, and uses a three-field variational formulation in the derivation of the element response. A generalized plasticity model is implemented for structural steel, and a 3D plastic-damage model is used for the simulation of concrete. The transverse normal stress is used to satisfy the transverse equilibrium equations at each control section; this criterion is also used for the condensation of degrees of freedom from the 3D constitutive material model to a beam element. This thesis presents the beam formulation and the constitutive relationships; further analyses and comparisons between the two proposed models are still being carried out.
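The following minimal sketch illustrates the fibre-section idea underlying such elements: discretize the cross section into fibres, impose a plane-section strain field, and integrate fibre stresses into section forces. The one-line concrete law is a placeholder, not the MCFT or plastic-damage constitutive models used in the thesis.

```python
# Fibre-section sketch: section forces from an imposed strain field.
import numpy as np

h, b, n_fib = 0.5, 0.3, 50                  # section depth, width, fibre count
y = np.linspace(-h/2, h/2, n_fib)           # fibre centroid positions
area = (h / n_fib) * b                      # tributary area per fibre

def concrete(eps):
    """Crude compression-only law (placeholder for a real constitutive model)."""
    E, fc = 30e9, 30e6
    return np.clip(E * eps, -fc, 0.0)

def section_forces(eps0, kappa):
    """Axial force N and moment M for axial strain eps0 and curvature kappa."""
    eps = eps0 - kappa * y                  # plane sections remain plane
    sigma = concrete(eps)
    N = np.sum(sigma * area)
    M = -np.sum(sigma * y * area)
    return N, M

for kappa in (0.001, 0.005, 0.01):          # crude moment-curvature trace
    print(kappa, section_forces(-1e-4, kappa))
```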
Abstract:
Electric power grids throughout the world suffer from serious inefficiencies associated with under-utilization due to the demand patterns, engineering design, and load-following approaches in use today. These grids consume much of the world's energy and represent a large carbon footprint. From a material-utilization perspective, significant hardware is manufactured and installed for this infrastructure, often to be used at less than 20-40% of its operational capacity for most of its lifetime. These inefficiencies lead engineers to require additional grid support and conventional generation capacity when renewable technologies (such as solar and wind) and electric vehicles are added to the utility demand/supply mix. Using actual data from the PJM [PJM 2009], the work shows that consumer load management, real-time price signals, sensors, and intelligent demand/supply control offer a compelling path toward increasing the efficient utilization of the world's grids and reducing their carbon footprint. Under-utilization factors from many distribution companies indicate that distribution feeders are often operated at only 70-80% of their peak capacity for a few hours per year, and on average are loaded to less than 30-40% of their capability. By creating strong societal connections between consumers and energy providers, technology can radically change this situation. With intelligent deployment of smart sensors, smart electric vehicles, and consumer-based load management technology, very high saturations of intermittent renewable energy supplies can be effectively controlled and dispatched to increase the utilization of existing utility distribution, substation, transmission, and generation equipment. Strengthening these technology, society, and consumer relationships requires rapid dissemination of knowledge (real-time prices, costs and benefit sharing, demand response requirements) in order to incentivize behaviors that increase the effective use of the technological equipment that represents one of the largest capital assets modern society has created.
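As a toy illustration of the price-signal idea, with invented numbers rather than PJM data or the dissertation's model, the following sketch shifts flexible load out of peak-price hours and compares feeder peak and load factor before and after.

```python
# Toy demand-response sketch: move flexible load from peak-price hours to
# cheap hours and compare the feeder's peak and load factor. All numbers
# are invented for illustration.
import numpy as np

load = np.array([30, 28, 27, 30, 45, 70, 95, 90,      # 24 hourly loads (MW)
                 80, 75, 70, 68, 66, 65, 70, 80,
                 92, 98, 85, 70, 55, 45, 38, 32], float)
price = load / load.max()                  # toy real-time price signal
flexible = 0.2 * load                      # assume 20% of load is shiftable

shifted = load - np.where(price > 0.8, flexible, 0.0)   # shed at peak prices
slack = np.where(price < 0.5, 1.0, 0.0)                 # cheap hours absorb it
shifted += slack * (load.sum() - shifted.sum()) / slack.sum()

for name, profile in (("before", load), ("after ", shifted)):
    print(name, "peak %.1f MW" % profile.max(),
          "load factor %.2f" % (profile.mean() / profile.max()))
```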
Abstract:
We describe a recent offering of a linear systems and signal processing course for third-year electrical and computer engineering students. This course is a pre-requisite for our first digital signal processing course. Students have traditionally viewed linear systems courses as mathematical and extremely difficult. Without compromising the rigor of the required concepts, we strived to make the course fun, with application-based hands-on laboratory projects. These projects can be modified easily to meet specific instructors' preferences. © 2011 IEEE.
Abstract:
A new 2-D hydrophone array for ultrasound therapy monitoring is presented, along with a novel algorithm for passive acoustic mapping using a sparse weighted aperture. The array is constructed using existing polyvinylidene fluoride (PVDF) ultrasound sensor technology, chosen for its broadband characteristics and high receive sensitivity. For most 2-D arrays, high-resolution imagery is desired, which requires a large aperture at the cost of a large number of elements. The proposed array's geometry is sparse, with elements only on the boundary of the rectangular aperture; the missing information from the interior is filled in using linear imaging techniques. After receiving acoustic emissions during ultrasound therapy, the algorithm applies an apodization to the sparse aperture to limit side lobes and then reconstructs acoustic activity with high spatiotemporal resolution. Experiments verify the theoretical point spread function, and cavitation maps in agar phantoms correspond closely to the predicted areas, demonstrating the validity of the array and methodology.
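The following compact sketch illustrates weighted delay-and-sum passive acoustic mapping with a boundary-only aperture. The geometry, uniform weights, and test source are invented; the paper's algorithm additionally designs the apodization to suppress side lobes for this layout.

```python
# Delay-and-sum passive acoustic mapping sketch with a sparse boundary array.
import numpy as np

c, fs = 1500.0, 20e6                          # sound speed (m/s), sample rate (Hz)
xs = np.linspace(-5e-3, 5e-3, 8)
# Elements only on the boundary of the rectangular aperture (z = 0 plane).
elems = np.array([(x, y) for x in xs for y in (xs[0], xs[-1])] +
                 [(x, y) for x in (xs[0], xs[-1]) for y in xs[1:-1]])
weights = np.ones(len(elems))                 # apodization (uniform placeholder)

t = np.arange(0, 40e-6, 1 / fs)
src = (0.0, 0.0, 10e-3)                       # true cavitation source position (m)
emission = np.sin(2e6 * np.pi * t) * np.exp(-((t - 10e-6) / 2e-6) ** 2)

def dist(p, e):
    return np.sqrt((p[0] - e[0])**2 + (p[1] - e[1])**2 + p[2]**2)

# Simulated received signals: the emission delayed by the travel time.
signals = [np.interp(t - dist(src, e) / c, t, emission, left=0, right=0)
           for e in elems]

def pam_energy(pixel):
    """Beamformed source energy at a candidate pixel (delay-and-sum)."""
    out = np.zeros_like(t)
    for w, e, s in zip(weights, elems, signals):
        out += w * np.interp(t + dist(pixel, e) / c, t, s, left=0, right=0)
    return np.sum((out / weights.sum()) ** 2)

for z in (8e-3, 10e-3, 12e-3):                # energy peaks at the true depth
    print("z = %.0f mm -> energy %.3g" % (z * 1e3, pam_energy((0.0, 0.0, z))))
```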
Abstract:
The main objective of this paper is to discuss various aspects of implementing a specific intrusion-detection scheme on a micro-computer system using fixed-point arithmetic. The proposed scheme is suitable for detecting intruder stimuli which are in the form of transient signals. It consists of two stages: an adaptive digital predictor and an adaptive threshold detection algorithm. Experimental results involving data acquired via field experiments are also included.
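The two-stage structure can be sketched generically as an adaptive linear predictor (LMS here) whose prediction error drives an adaptive threshold test. The parameters and detection rule below are illustrative assumptions, not the paper's exact fixed-point design.

```python
# Generic two-stage transient detector: LMS predictor + adaptive threshold.
import numpy as np

rng = np.random.default_rng(1)
n, p, mu = 2000, 4, 0.05                  # samples, predictor order, LMS step
x = rng.normal(0, 0.1, n)                 # background sensor signal
x[1200:1210] += 1.5                       # injected transient (intruder stimulus)

w = np.zeros(p)                           # adaptive predictor coefficients
alarms, noise_est = [], 0.1
for k in range(p, n):
    past = x[k-p:k][::-1]
    e = x[k] - w @ past                   # prediction error
    w += mu * e * past                    # LMS coefficient update
    noise_est = 0.99 * noise_est + 0.01 * abs(e)   # running error level
    if abs(e) > 6 * noise_est:            # adaptive threshold test
        alarms.append(k)

print("alarms near sample 1200:", alarms[:5])
```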
Abstract:
The performance of parallel vector implementations of the one- and two-dimensional orthogonal transforms is evaluated. The orthogonal transforms are computed using actual or modified fast Fourier transform (FFT) kernels. The factors considered in comparing the speed-up of these vectorized digital signal processing algorithms are discussed, and it is shown that the traditional way of comparing the execution speed of digital signal processing algorithms by the ratios of the numbers of multiplications and additions is no longer effective for vector implementation; the structure of the algorithm must also be considered as a factor when comparing the execution speed of vectorized digital signal processing algorithms. Simulation results on the Cray X-MP are presented for the following orthogonal transforms: discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete sine transform (DST), discrete Hartley transform (DHT), discrete Walsh transform (DWHT), and discrete Hadamard transform (DHDT). A comparison between the DHT and the fast Hartley transform is also included.
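As a small example of building one of these transforms on an FFT kernel: for a real input sequence, the discrete Hartley transform follows directly from the DFT, since H[k] = Re(X[k]) - Im(X[k]).

```python
# DHT via an FFT kernel, checked against the O(N^2) definition.
import numpy as np

def dht_via_fft(x):
    X = np.fft.fft(x)
    return X.real - X.imag        # H[k] = Re(X[k]) - Im(X[k]) for real x

def dht_direct(x):
    n = len(x)
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    cas = np.cos(2 * np.pi * k * m / n) + np.sin(2 * np.pi * k * m / n)
    return cas @ x                # H[k] = sum_m x[m] cas(2*pi*k*m/n)

x = np.random.default_rng(2).normal(size=64)
print(np.allclose(dht_via_fft(x), dht_direct(x)))   # True
```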
Abstract:
Simulation is an important resource for researchers in diverse fields. However, many researchers have found flaws in the methodology of published simulation studies and have described the state of the simulation community as a crisis of credibility. This work describes the Simulation Automation Framework for Experiments (SAFE) project, which addresses the issues that undermine credibility by automating the workflow in the execution of simulation studies. Automation reduces the number of opportunities for users to introduce error into the scientific process, thereby improving the credibility of the final results. Automation also eases the job of simulation users and allows them to focus on the design of models and the analysis of results rather than on the complexities of the workflow.
Abstract:
The Simulation Automation Framework for Experiments (SAFE) is a project created to raise the level of abstraction in network simulation tools and thereby address issues that undermine credibility. SAFE incorporates best practices in network simulation to automate the experimental process and to guide users in the development of sound scientific studies using the popular ns-3 network simulator. My contributions to the SAFE project are the design of two XML-based languages called NEDL (ns-3 Experiment Description Language) and NSTL (ns-3 Script Templating Language), which facilitate the description of experiments and network simulation models, respectively. The languages provide a foundation for the construction of better interfaces between the user and the ns-3 simulator. They also provide input to a mechanism that automates the execution of network simulation experiments. Additionally, this thesis demonstrates that one can develop tools to generate ns-3 scripts in Python or C++ automatically from NSTL model descriptions.
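As a toy illustration of the template-generation idea, a small script can turn an XML model description into ns-3 C++ code. The XML schema and the generated snippet below are hypothetical stand-ins, not the actual NSTL syntax or SAFE's generator; the emitted calls mirror the standard ns-3 point-to-point tutorial API.

```python
# Toy XML-to-ns-3 code generation sketch (schema invented for illustration).
import xml.etree.ElementTree as ET

model_xml = """
<model name="two-node-p2p">
  <nodes count="2"/>
  <link type="PointToPoint" dataRate="5Mbps" delay="2ms"/>
</model>
"""

model = ET.fromstring(model_xml)
count = model.find("nodes").get("count")
link = model.find("link")

script = f"""NodeContainer nodes;
nodes.Create ({count});
PointToPointHelper p2p;
p2p.SetDeviceAttribute ("DataRate", StringValue ("{link.get('dataRate')}"));
p2p.SetChannelAttribute ("Delay", StringValue ("{link.get('delay')}"));
NetDeviceContainer devices = p2p.Install (nodes);"""
print(script)
```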
Abstract:
For the past sixty years, waveguide slot radiator arrays have played a critical role in microwave radar and communication systems. They feature a well-characterized antenna element capable of direct integration into a low-loss feed structure, with highly developed and inexpensive manufacturing processes. Waveguide slot radiators comprise some of the highest-performance antenna arrays ever constructed in terms of side-lobe level, efficiency, and related measures. A wealth of information is available in the open literature regarding design procedures for linearly polarized waveguide slots. By contrast, despite their presence in some of the earliest published reports, little has been presented to date on array designs for circularly polarized (CP) waveguide slots, and that which has been presented features a classic traveling-wave, efficiency-reducing beam tilt. This work proposes a unique CP waveguide slot architecture that mitigates these problems, together with a thorough design procedure employing widely available, modern computational tools. The proposed array topology features simultaneous dual-CP operation with grating-lobe-free broadside radiation, high aperture efficiency, and good return loss. A traditional X-slot CP element is employed, with the inclusion of a slow-wave-structure passive phase shifter to ensure broadside radiation without the need for performance-limiting dielectric loading. It is anticipated that this technology will be advantageous for upcoming polarimetric radar and Ka-band SatCom systems. The presented design methodology represents a philosophical shift away from traditional waveguide slot radiator design practices. Rather than providing design curves and/or analytical expressions for equivalent circuit models, simple first-order design rules, generated via parametric studies, are presented with the understanding that device optimization and design will be carried out computationally. A unit-cell, S-parameter-based approach provides a sufficient reduction of complexity to permit efficient, accurate device design with attention to realistic, application-specific mechanical tolerances. A transparent, start-to-finish example of the design procedure for a linear sub-array at X-band is presented. Both unit-cell and array performance are calculated via finite element method simulations, and the results are confirmed by good agreement with finite-difference time-domain calculations. The resulting array exhibits grating-lobe-free, broadside-scanned, dual-CP radiation with better than 20 dB return loss and over 75% aperture efficiency.
Abstract:
In this dissertation, the problem of creating effective control algorithms for large-scale Adaptive Optics (AO) systems for the new generation of giant optical telescopes is addressed. The effectiveness of AO control algorithms is evaluated in several respects, such as computational complexity, compensation error rejection, and robustness, i.e. reasonable insensitivity to system imperfections. The results of this research are summarized as follows:

1. Robustness study of the Sparse Minimum Variance Pseudo Open Loop Controller (POLC) for multi-conjugate adaptive optics (MCAO). An AO system model that accounts for various system errors has been developed and applied to check the stability and performance of the POLC algorithm, one of the most promising approaches for future AO systems control. Numerous simulations show that, despite the initial assumption that exact system knowledge is necessary for the POLC algorithm to work, it is highly robust against various system errors.

2. Predictive Kalman Filter (KF) and Minimum Variance (MV) control algorithms for MCAO. The limiting performance of the non-dynamic Minimum Variance and dynamic KF-based phase estimation algorithms for MCAO has been evaluated via Monte Carlo simulations. The validity of a simple near-Markov autoregressive phase dynamics model has been tested, and its ability to predict the turbulence phase has been demonstrated for both single- and multi-conjugate AO. It has also been shown that the more complicated KF approach gains no performance improvement over the much simpler MV algorithm in the case of MCAO.

3. Sparse predictive Minimum Variance control algorithm for MCAO. A temporal prediction stage has been added to the non-dynamic MV control algorithm in such a way that no additional computational burden is introduced. Simulations confirm that the use of phase prediction makes it possible to significantly reduce the system sampling rate, and thus the overall computational complexity, while keeping the system stable and effectively compensating for the measurement and control latencies.
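As a toy, scalar illustration of the prediction idea in result 2 (real MCAO works with large sparse matrices, and all numbers here are invented), the following sketch models one phase mode as a near-Markov AR(1) process and uses a Kalman filter for one-step prediction from noisy measurements.

```python
# Scalar AR(1) turbulence-phase prediction with a Kalman filter.
import numpy as np

rng = np.random.default_rng(3)
a = 0.995                               # AR(1) coefficient (near-Markov phase)
q = 1 - a**2                            # process noise -> unit phase variance
r, n = 0.05, 2000                       # measurement noise variance, samples

phi = np.zeros(n)                       # true phase of one mode
for k in range(1, n):
    phi[k] = a * phi[k-1] + np.sqrt(q) * rng.normal()
y = phi + np.sqrt(r) * rng.normal(size=n)   # noisy WFS-like measurements

x, P, pred = 0.0, 1.0, np.zeros(n)      # KF state, covariance, predictions
for k in range(n - 1):
    K = P / (P + r)                     # Kalman gain
    x += K * (y[k] - x)                 # measurement update
    P *= (1 - K)
    x, P = a * x, a * a * P + q         # time update = one-step prediction
    pred[k+1] = x

raw = np.sqrt(np.mean((phi[1:] - y[:-1]) ** 2))   # last measurement as predictor
kf = np.sqrt(np.mean((phi[1:] - pred[1:]) ** 2))
print("rms prediction error: raw %.3f  KF %.3f" % (raw, kf))
```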