928 results for Computer input-output equipment


Relevance: 30.00%

Abstract:

Contemporary integrated circuits are designed and manufactured in a globalized environment, leading to concerns of piracy, overproduction and counterfeiting. One class of techniques to combat these threats is circuit obfuscation, which seeks to modify the gate-level (or structural) description of a circuit without affecting its functionality, in order to increase the complexity and cost of reverse engineering. Most existing circuit obfuscation methods are based on inserting additional logic (called "key gates") or camouflaging existing gates, so that a malicious user cannot obtain the complete layout information without extensive computation to determine the key-gate values. However, when the netlist or the circuit layout, although camouflaged, is available to the attacker, he or she can use advanced logic analysis, circuit simulation tools and Boolean SAT solvers to reveal the unknown gate-level information without exhaustively trying all input vectors, thus lowering the complexity of reverse engineering. To counter this problem, several 'provably secure' logic encryption algorithms that emphasize methodical selection of camouflaged gates have previously been proposed in the literature [1,2,3]. The contribution of this paper is the creation and simulation of a new layout obfuscation method that uses don't-care conditions. We also present a proof of concept of a new functional (or logic) obfuscation technique that not only conceals but modifies the circuit functionality in addition to the gate-level description, and that can be applied automatically during the design process. Our layout obfuscation technique uses don't-care conditions (namely, Observability and Satisfiability Don't Cares) inherent in the circuit to camouflage selected gates and modify sub-circuit functionality while meeting the overall circuit specification. Here, camouflaging or obfuscating a gate means replacing the candidate gate with a 4:1 multiplexer that can be configured to perform all possible 2-input/1-output functions, as proposed by Bao et al. [4]. It is important to emphasize that our approach not only obfuscates but alters sub-circuit-level functionality in an attempt to make IP piracy difficult. The choice of gates to obfuscate determines the effort required to reverse engineer or brute-force the design. We therefore propose a method of camouflaged-gate selection based on the intersection of output logic cones. By choosing these candidate gates methodically, the complexity of reverse engineering (RE) can be made exponential, making it computationally very expensive to determine the true circuit functionality. We propose several heuristic algorithms to maximize RE complexity based on don't-care-based obfuscation and methodical gate selection. Thus the goal of protecting the design IP from malicious end users is achieved, and it also becomes significantly harder for rogue elements in the supply chain to use, copy or replicate the design with a different logic. We analyze the reverse engineering complexity by applying our obfuscation algorithm to the ISCAS-85 benchmarks. Our experimental results indicate that significant reverse engineering complexity can be achieved at minimal design overhead (the average area overhead for the proposed layout obfuscation methods is 5.51% and the average delay overhead is about 7.732%). We discuss the strengths and limitations of our approach and suggest directions that may lead to improved logic encryption algorithms in the future.
References: [1] R. Chakraborty and S. Bhunia, "HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493–1502, 2009. [2] J. A. Roy, F. Koushanfar, and I. L. Markov, "EPIC: Ending Piracy of Integrated Circuits," in Design, Automation and Test in Europe (DATE), 2008, pp. 1069–1074. [3] J. Rajendran, M. Sam, O. Sinanoglu, and R. Karri, "Security Analysis of Integrated Circuit Camouflaging," in ACM Conference on Computer and Communications Security, 2013. [4] B. Liu and B. Wang, "Embedded Reconfigurable Logic for ASIC Design Obfuscation Against Supply Chain Attacks," in Design, Automation and Test in Europe (DATE), 2014, pp. 1–6.
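
To illustrate the gate-camouflaging primitive the abstract describes, here is a minimal Python sketch of ours (not the authors' implementation): a 2-input gate is replaced by a 4:1 multiplexer acting as a lookup table, where the two gate inputs select one of four configuration bits, so different 4-bit keys realize AND, XOR, NOR and every other 2-input function. The key encodings shown are the ordinary truth-table rows.

```python
# Minimal sketch: a camouflaged 2-input gate modeled as a 4:1 MUX (lookup table).
# The 4-bit key is the gate's truth table; inputs (a, b) select which key bit
# drives the output, so one physical cell can realize any 2-input function.

def mux_gate(key, a, b):
    """Return key[2*a + b] for bits a, b in {0, 1}."""
    return key[(a << 1) | b]

AND = (0, 0, 0, 1)   # truth table rows ordered ab = 00, 01, 10, 11
XOR = (0, 1, 1, 0)
NOR = (1, 0, 0, 0)

for a in (0, 1):
    for b in (0, 1):
        assert mux_gate(AND, a, b) == (a & b)
        assert mux_gate(XOR, a, b) == (a ^ b)
        assert mux_gate(NOR, a, b) == int(not (a | b))
print("All 16 two-input functions are reachable by choosing one of 16 keys.")
```

Without knowledge of the configuration, an attacker faces 16 candidate functions per camouflaged gate, which is why methodical selection of the camouflaged gates can multiply the reverse-engineering search space.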

Relevance: 30.00%

Abstract:

Interaction is increasingly a public affair, taking place in our theatres, galleries, museums, exhibitions and on the city streets. This raises a new design challenge for HCI: how is a performer's interaction with a computer experienced by spectators? We examine examples from art, performance and exhibition design, comparing them according to the extent to which they hide, partially reveal, transform, reveal or even amplify a performer's manipulations. We also examine the effects of these manipulations, including movements, gestures and utterances that take place around direct input and output. This comparison reveals four broad design strategies: 'secretive,' where manipulations and effects are largely hidden; 'expressive,' where they are revealed, enabling the spectator to fully appreciate the performer's interaction; 'magical,' where effects are revealed but the manipulations that caused them are hidden; and finally 'suspenseful,' where manipulations are apparent but effects are only revealed when the spectator takes their turn.

Relevance: 30.00%

Abstract:

Users need to be able to address in-air gesture systems: they must find where to perform gestures and how to direct them towards the intended system. This is necessary for input to be sensed correctly and without unintentionally affecting other systems. This thesis investigates novel interaction techniques which allow users to address gesture systems properly, helping them find where and how to gesture. It also investigates audio, tactile and interactive light displays for multimodal gesture feedback; these can be used by gesture systems with limited output capabilities (like mobile phones and small household controls), allowing the interaction techniques to be used by a variety of device types. Tactile and interactive light displays are investigated in greater detail, as these are not as well understood as audio displays. Experiments 1 and 2 explored tactile feedback for gesture systems, comparing an ultrasound haptic display to wearable tactile displays at different body locations and investigating feedback designs. These experiments found that tactile feedback improves the user experience of gesturing by reassuring users that their movements are being sensed. Experiment 3 investigated interactive light displays for gesture systems, finding this novel display type effective for giving feedback and presenting information; it also found that interactive light feedback is enhanced by audio and tactile feedback. These feedback modalities were then used alongside audio feedback in two interaction techniques for addressing gesture systems: sensor strength feedback and rhythmic gestures. Sensor strength feedback is multimodal feedback that tells users how well they can be sensed, encouraging them to find where to gesture through active exploration. Experiment 4 found that users can do this with 51 mm accuracy, with combinations of audio and interactive light feedback leading to the best performance. Rhythmic gestures are continuously repeated gesture movements which can be used to direct input. Experiment 5 investigated the usability of this technique, finding that users can match rhythmic gestures well and with ease. Finally, these interaction techniques were combined into a single new interaction for addressing gesture systems: users could direct their input with rhythmic gestures while using the sensor strength feedback to find a good location for addressing the system. Experiment 6 studied the effectiveness and usability of this combined technique, as well as the design space for combining the two types of feedback. It found that the interaction was successful, with users matching 99.9% of rhythmic gestures with 80 mm accuracy from target points. The findings show that gesture systems could successfully use this interaction technique to allow users to address them. Novel design recommendations for using rhythmic gestures and sensor strength feedback, informed by the experiment findings, are also presented.
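
As a hedged illustration of the sensor-strength-feedback idea, the following Python sketch is our own: the sensing model, its range, and the intensity mappings are invented for the example and are not taken from the thesis. It shows the core loop of mapping a scalar "how well can I be sensed" value to light and audio intensity so a user can home in on a good gesturing location.

```python
# Illustrative sketch of sensor strength feedback (hypothetical sensing model).
# A scalar strength in [0, 1] is mapped to feedback intensities, letting users
# actively explore space and find where their gestures are sensed best.

import math

def sensor_strength(hand_pos, sensor_pos, max_range=0.6):
    """Toy model: strength falls off linearly with distance (metres)."""
    d = math.dist(hand_pos, sensor_pos)
    return max(0.0, 1.0 - d / max_range)

def feedback_levels(strength):
    """Map strength to multimodal output: light brightness and audio volume."""
    brightness = strength          # interactive light: linear mapping
    volume = strength ** 2         # audio: compress the low end to reduce noise
    return brightness, volume

for x in (0.55, 0.35, 0.15, 0.05):
    s = sensor_strength((x, 0.0, 0.2), (0.0, 0.0, 0.2))
    b, v = feedback_levels(s)
    print(f"distance {x:.2f} m -> strength {s:.2f}, light {b:.2f}, audio {v:.2f}")
```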

Relevance: 30.00%

Abstract:

With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supported platforms, external dependencies, and versions plays an important role in expanding the cost of maintaining and repairing unforeseen software faults. To repair software faults, developers spend considerable time identifying the scenarios that lead to those faults and root-causing the problems. While software debugging remains largely manual, this is no longer the case for software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging which leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively the sequence covers, of the failing test cases are extracted. Commonalities between these test case sequence covers are then extracted, processed, analyzed, and presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared between a number of test cases failing for the same reason resemble the faulty execution path; hence the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select, among all possible common subsequences, those with a high likelihood of containing the root cause. A hybrid static/dynamic analysis approach is designed to trace the common subsequences back from the failure to the root cause. A debugging tool is created to enable developers to use the approach, and it is integrated with an existing Integrated Development Environment and with the environment's program editors, so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both algorithm running time and output subsequence length.
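
To make the core idea concrete, here is a minimal Python sketch of ours (not the thesis algorithm): approximate a common subsequence of several failing-test traces by folding a pairwise longest-common-subsequence (LCS) computation across the set. Trace elements are statement IDs; the traces below are invented. Note the hedge in the comments: pairwise folding always yields a common subsequence of all traces, but not necessarily the longest one, so it is a heuristic stand-in for the thesis's efficient algorithm.

```python
# Sketch: narrow the faulty-path search space by intersecting failing traces.
# lcs() is the classic dynamic-programming longest common subsequence; folding
# it pairwise over all traces is a heuristic approximation (see lead-in).

def lcs(a, b):
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of suffixes a[i:], b[j:]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            dp[i][j] = dp[i + 1][j + 1] + 1 if a[i] == b[j] else max(dp[i + 1][j], dp[i][j + 1])
    out, i, j = [], 0, 0
    while i < m and j < n:           # reconstruct one LCS
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif dp[i + 1][j] >= dp[i][j + 1]:
            i += 1
        else:
            j += 1
    return out

def common_subsequence(traces):
    common = traces[0]
    for t in traces[1:]:
        common = lcs(common, t)
    return common

failing = [["s1", "s4", "s7", "s9", "s12"],   # hypothetical statement-ID traces
           ["s1", "s2", "s7", "s9", "s12"],
           ["s1", "s7", "s8", "s9", "s12"]]
print(common_subsequence(failing))  # -> ['s1', 's7', 's9', 's12']: candidate faulty path
```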

Relevance: 30.00%

Abstract:

It is known that nowadays technology develops very fast. New architectures are created in order to provide solutions for different technology limitations and problems. Sometimes this evolution is smooth and requires no adaptation, but at other times it may imply change. Programming languages have always been the communication bridge between the programmer and the computer. New ones keep coming and existing ones keep improving in order to adapt to new concepts and paradigms. This requires an extra effort from the programmer, who always needs to be aware of these changes. Visual Programming may be a solution to this problem. Expressing functions as module boxes which receive a given input and return a given output may help programmers across the world by giving them the possibility to abstract away from specific low-level hardware issues. This thesis not only shows how the capabilities of the Cell/B.E. (which has a heterogeneous multi-core architecture) can be combined with OpenDX (which has a visual programming environment), but also demonstrates that this can be done without losing much performance.

Relevance: 30.00%

Abstract:

When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies, and a valuable aid in the strategic and tactical decision making process. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but is run, and the changes of system states can be observed at any point in time. This provides an insight into system dynamics, rather than just predicting the output of a system based on specific inputs. Simulation is not a decision making tool but a decision support tool, allowing better informed decisions to be made. Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification: only those characteristics that are important for the study and analysis of the target system should be included in the simulation model. The purpose of simulation is either to better understand the operation of a target system, or to make predictions about a target system's performance. It can be viewed as an artificial white room which allows one to gain insight, but also to test new theories and practices, without disrupting the daily routine of the focal organisation. What you can expect to gain from a simulation study is very well summarised by FIRMA (2000): if the theory that has been framed about the target system holds, and if this theory has been adequately translated into a computer model, then you can answer some of the following questions:

· Which kind of behaviour can be expected under arbitrarily given parameter combinations and initial conditions?
· Which kind of behaviour will a given target system display in the future?
· Which state will the target system reach in the future?

The required accuracy of the simulation model very much depends on the type of question one is trying to answer. In order to respond to the first question, the simulation model needs to be an explanatory model, which requires less data accuracy. In comparison, the simulation model required to answer the latter two questions has to be predictive in nature and therefore needs highly accurate input data to achieve credible outputs. These predictions involve showing trends, rather than giving precise and absolute predictions of the target system's performance. The numerical results of a simulation experiment on their own are most often not very useful and need to be rigorously analysed with statistical methods. These results then need to be considered in the context of the real system and interpreted in a qualitative way to make meaningful recommendations or compile best practice guidelines. One needs a good working knowledge of the behaviour of the real system to be able to fully exploit the understanding gained from simulation experiments. The goal of this chapter is to introduce the newcomer to what we think is a valuable addition to the toolset of analysts and decision makers. We give a summary of information gathered from the literature and of the first-hand experience we have gained during the last five years, whilst obtaining a better understanding of this exciting technology. We hope that this will help you to avoid some of the pitfalls that we unwittingly encountered.
Section 2 is an introduction to the different types of simulation used in Operational Research and Management Science, with a clear focus on agent-based simulation. In Section 3 we outline the theoretical background of multi-agent systems and their elements, to prepare you for Section 4, where we discuss how to develop a multi-agent simulation model. Section 5 outlines a simple example of a multi-agent system. Section 6 provides a collection of resources for further studies, and finally in Section 7 we conclude the chapter with a short summary.
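
To make the chapter's definition of a simulation model concrete, here is a minimal Python sketch of our own (not the example from Section 5; the shop scenario, staffing level and patience values are all invented): a set of rules that define how a system changes over time given its current state, written in an agent-based style.

```python
# Minimal sketch of "rules that define how a system changes over time, given
# its current state": a toy agent-based model of shoppers and service staff.

import random

random.seed(1)  # stochastic model: fix the seed for a reproducible run

class Shopper:
    def __init__(self):
        self.patience = random.randint(1, 5)   # time steps before walking out
        self.served = False

    def step(self, free_staff):
        """Rule: get served if a member of staff is free, else lose patience."""
        if free_staff > 0:
            self.served = True
            return 1                            # one member of staff now busy
        self.patience -= 1
        return 0

agents = [Shopper() for _ in range(20)]
for t in range(10):                             # run the model; observe any state
    staff = 3                                   # staff available this time step
    for a in agents:
        if not a.served and a.patience > 0:
            staff -= a.step(staff)
    served = sum(a.served for a in agents)
    lost = sum((not a.served) and a.patience <= 0 for a in agents)
    print(f"t={t}: served={served}, walked out={lost}")
```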

Relevance: 30.00%

Abstract:

In this contribution, a system identification procedure for a two-input Wiener model, suitable for analyzing the disturbance behavior of integrated nonlinear circuits, is presented. The identified block model comprises two linear dynamic blocks and one static nonlinear block, which are determined using a parameterized approach. In order to characterize the linear blocks, a correlation analysis using a white-noise input, in combination with a model reduction scheme, is adopted. Once the linear blocks have been characterized, a linear system of equations is set up from the output spectrum under single-tone excitation at each input; its solution gives the coefficients of the nonlinear block. With this data-based black-box approach, the distortion behavior of a nonlinear circuit under the influence of an interfering signal at an arbitrary input port can be determined. Such an interfering signal can be, for example, an electromagnetic interference signal which couples conductively into the port under consideration.
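
For the correlation step, here is a minimal Python sketch under simplifying assumptions of ours (a single FIR block with an invented impulse response, not the paper's parameterized two-input procedure): with zero-mean white-noise excitation, the input/output cross-correlation of a linear block is proportional to its impulse response, so an FIR estimate of the linear dynamics can be read off directly.

```python
# Sketch: estimate a linear block's impulse response by correlation analysis.
# With zero-mean white noise u of variance s^2: E[y(n) u(n-k)] = s^2 * h(k).

import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([1.0, 0.5, 0.25, 0.125])   # "unknown" FIR dynamics (assumed)

u = rng.standard_normal(200_000)             # white-noise excitation
y = np.convolve(u, h_true)[: len(u)]         # output of the linear block

# Cross-correlate: h_hat(k) = mean[y(n) u(n-k)] / var(u)
lags = len(h_true)
h_hat = np.array([np.mean(y[k:] * u[: len(u) - k]) for k in range(lags)]) / np.var(u)
print(np.round(h_hat, 3))                    # ~ [1.0, 0.5, 0.25, 0.125]
```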

Relevance: 30.00%

Abstract:

Image (video) retrieval is the problem of retrieving images (videos) similar to a query. Images (videos) are represented in an input (feature) space, and similar items are obtained by finding nearest neighbors in that representation space. Numerous input representations, in both real-valued and binary spaces, have been proposed for conducting faster retrieval. In this thesis, we present techniques that obtain improved input representations for retrieval in both supervised and unsupervised settings, for images and videos. Supervised retrieval is the well-known problem of retrieving images of the same class as the query. In the first part, we address the practical aspects of achieving faster retrieval with binary codes as input representations in the supervised setting, where binary codes are used as addresses into hash tables. In practice, using binary codes as addresses does not guarantee fast retrieval, as similar images are not mapped to the same binary code (address). We address this problem by presenting an efficient supervised hashing (binary encoding) method that aims to explicitly map all images of the same class, ideally, to a unique binary code. We refer to the binary codes of the images as 'Semantic Binary Codes' and to the unique code shared by all images of a class as the 'Class Binary Code'. We also propose a new class-based Hamming metric that dramatically reduces retrieval times for larger databases, where the Hamming distance is computed only to the class binary codes. We further propose a deep semantic binary code model, obtained by replacing the output layer of a popular Convolutional Neural Network (AlexNet) with the class binary codes, and show that the hashing functions learned in this way outperform the state of the art while providing fast retrieval times. In the second part, we also address supervised retrieval, this time taking the relationships between classes into account. For a given query image, we want to retrieve images that preserve the relative order, i.e., we want to retrieve all images of the same class first, then images of related classes, before images of different classes. We learn such relationship-aware binary codes by minimizing the difference between the inner product of the binary codes and the similarity between the classes. We calculate the similarity between classes using output embedding vectors, which are vector representations of the classes. Our method deviates from other supervised binary encoding schemes in that it is the first to use output embeddings for learning hashing functions. We also introduce new performance metrics that take related-class retrieval results into account, and show significant gains over the state of the art. High-dimensional descriptors such as Fisher Vectors or Vectors of Locally Aggregated Descriptors have been shown to improve the performance of many computer vision applications, including retrieval. In the third part, we discuss an unsupervised technique for compressing high-dimensional vectors into high-dimensional binary codes, in order to reduce storage complexity. In this approach, we deviate from the traditional hyperplane hashing functions and instead learn hyperspherical hashing functions. The proposed method overcomes the computational challenges of directly applying the spherical hashing algorithm, which is intractable for compressing high-dimensional vectors. A practical hierarchical model that uses divide-and-conquer techniques via the Random Select and Adjust (RSA) procedure to compress such high-dimensional vectors is presented. We show that the resulting high-dimensional binary codes outperform binary codes obtained using traditional hyperplane methods at higher compression ratios. In the last part of the thesis, we propose a retrieval-based solution to the zero-shot event classification problem, a setting in which no training videos are available for the event. To do this, we learn a generic set of concept detectors and represent both videos and query events in the concept space. We then compute the similarity between the query event and each video in the concept space, and videos similar to the query event are classified as belonging to the event. We show that we significantly boost performance by using concept features from other modalities.
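
A minimal Python sketch of ours of the class-based retrieval idea (the code length, class codes and image IDs are invented for illustration, not taken from the thesis): Hamming distances are computed only between the query's binary code and one code per class, rather than per database item, which is what makes lookup cheap for large databases.

```python
# Sketch: class-based Hamming retrieval. Distances are computed to one binary
# code per class (not per image); all images of the best class are returned.

import numpy as np

def hamming(a, b):
    return np.count_nonzero(a != b)

n_classes, n_bits = 10, 32
rng = np.random.default_rng(0)
class_codes = rng.integers(0, 2, (n_classes, n_bits))  # one 'Class Binary Code' each

# Database: image IDs bucketed by class, hash-table style.
buckets = {c: [f"img_{c}_{i}" for i in range(5)] for c in range(n_classes)}

def retrieve(query_code):
    dists = [hamming(query_code, class_codes[c]) for c in range(n_classes)]
    best = int(np.argmin(dists))        # n_classes distances, not n_images
    return buckets[best]

# Query: a noisy version of class 3's code (~5% of bits flipped).
query = (class_codes[3] ^ (rng.random(n_bits) < 0.05)).astype(int)
print(retrieve(query))                  # -> the images of class 3
```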

Relevance: 30.00%

Abstract:

The energy consumption of ICT (Information and Communication Technology) equipment is rapidly increasing, which causes significant economic and environmental problems. At present, the network infrastructure accounts for a large portion of the energy footprint of ICT, and the concept of energy-efficient or green networking has therefore been introduced. One of the main concerns of the network industry is now to minimize the energy consumption of network infrastructure, because of the potential economic benefits, ethical responsibility, and environmental impact. In this paper, energy management strategies to reduce the energy consumed by network switches in a LAN (Local Area Network) are developed. According to the lifecycle assessment of network switches, the highest amount of energy is consumed during the usage phase. The study considers bandwidth, link load and traffic matrices as input parameters, these being the largest contributors to the energy footprint of network switches during the usage phase, and energy consumption as the output. With the objective of reducing the energy usage of network infrastructure, the feasibility of putting Ethernet switches into hibernate or sleep mode was investigated. The network topology was then reorganized using a clustering method based on the spectral approach, putting network switches into hibernate or switched-off mode while taking into account time and the communications among them. Experimental results show the interest of this approach in terms of energy consumption.
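
As a minimal sketch of the spectral step (our own illustration with an invented traffic matrix; the paper's exact formulation may differ): build a weighted adjacency matrix from inter-switch traffic, form the graph Laplacian, and partition switches using its second eigenvector, so that lightly loaded switches within a cluster become candidates for sleep or switch-off.

```python
# Sketch: spectral 2-way clustering of switches from a toy traffic matrix.

import numpy as np

W = np.array([[0, 9, 8, 0, 0, 1],   # symmetric inter-switch traffic weights
              [9, 0, 7, 0, 1, 0],   # switches 0-2 talk mostly to each other,
              [8, 7, 0, 1, 0, 0],   # as do switches 3-5 (invented data)
              [0, 0, 1, 0, 9, 8],
              [0, 1, 0, 9, 0, 7],
              [1, 0, 0, 8, 7, 0]], dtype=float)

D = np.diag(W.sum(axis=1))
L = D - W                            # unnormalized graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]              # eigenvector of the 2nd-smallest eigenvalue
clusters = (fiedler > 0).astype(int) # 2-way partition by sign
print(clusters)                      # -> e.g. [0 0 0 1 1 1]: two switch groups
```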

Relevance: 30.00%

Abstract:

In our research we investigate the output accuracy of discrete event simulation models and agent-based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, as it is possible in both modelling approaches to implement human reactive behaviour in the model using standard methods. As a case study we have chosen the retail sector, in particular the operation of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation by modelling the reactive behaviour of staff and customers of the department. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step, a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall we have found that, for our case study example, both discrete event simulation and agent-based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.

Relevance: 30.00%

Abstract:

In modern power electronics equipment, it is desirable to design a converter with a low profile, high power density, and fast dynamic response. Increasing the switching frequency reduces the size of passive components such as transformers, inductors, and capacitors, which results in a compact size and a smaller energy storage requirement. In addition, a fast dynamic response can be achieved by operating at high frequency. However, achieving high-frequency operation while keeping the efficiency high requires new advanced devices, higher-performance magnetic components, and new circuit topologies. These are required to absorb and utilize the parasitic components and to mitigate the frequency-dependent losses, including switching loss, gating loss, and magnetic loss. The required performance improvements can be achieved through the use of Radio Frequency (RF) design techniques. To reduce switching losses, resonant converter topologies, such as resonant RF amplifiers (inverters) combined with a rectifier, are an effective way to maintain high efficiency at high switching frequencies through techniques such as device parasitic absorption, Zero Voltage Switching (ZVS), Zero Current Switching (ZCS), and resonant gating. Gallium Nitride (GaN) device technologies are widely used in RF amplifiers due to their lower on-resistance and device capacitances compared with silicon (Si) devices; this kind of semiconductor is therefore well suited to high-frequency power converters. The major problems involved with high-frequency magnetics are skin and proximity effects, increased core and copper losses, unbalanced magnetic flux distribution generating localized hot spots, and a reduced coupling coefficient. In order to eliminate the magnetic core losses, which play a crucial role at higher operating frequencies, a coreless PCB transformer can be used. Compared to a conventional wire-wound transformer, a planar PCB transformer, in which the windings are laid out on the Printed Circuit Board (PCB), has a low-profile structure, excellent thermal characteristics, and ease of manufacturing. The work in this thesis therefore demonstrates the design and analysis of an isolated low-profile class DE resonant converter operating at a 10 MHz switching frequency with a nominal output of 150 W. The power stage consists of a class DE inverter using GaN devices along with a sinusoidal gate drive circuit on the primary side, and a class DE rectifier on the secondary side. To meet the stringent height requirement, isolation is provided by a 10-layer coreless PCB transformer with a 1:20 turns ratio. It is designed and optimized using 3D Finite Element Method (FEM) tools and radio frequency (RF) circuit design software. Simulation and experimental results are presented for the 10-layer coreless PCB transformer operating at 10 MHz.
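
As a back-of-the-envelope illustration of the resonant sizing involved (a worked example of ours with invented component values, not the thesis design), a series LC tank placed near the 10 MHz switching frequency follows f0 = 1/(2π√(LC)):

```python
# Worked example: series resonant tank sized near a 10 MHz switching frequency.
# f0 = 1 / (2 * pi * sqrt(L * C)); all values are illustrative assumptions.

import math

L = 250e-9                        # resonant inductance: 250 nH (assumed)
C = 1.0e-9                        # resonant capacitance: 1 nF (assumed)

f0 = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"resonant frequency: {f0 / 1e6:.2f} MHz")          # ~10.07 MHz

R_load = 10.0                     # reflected load resistance (assumed)
Z0 = math.sqrt(L / C)             # characteristic impedance of the tank
print(f"characteristic impedance: {Z0:.1f} ohm, loaded Q: {Z0 / R_load:.2f}")
```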

Relevance: 30.00%

Abstract:

In the field of power electronics, several types of motor control systems have been developed using STM microcontrollers and power boards. Power electronic inverters are widely used in both industrial power applications and domestic appliances. In AC motor drives, inverters are used to control the torque, speed, and position of the rotor; in uninterruptible power sources, an inverter delivers constant-voltage, constant-frequency power. Because conventional inverter power supplies have high power consumption and a low transfer efficiency, a three-phase sine wave AC power supply was created using the embedded STM32 system, which has low power consumption and adequate speed, and delivers an output frequency of 50 Hz at the rated RMS line voltage. The STM32-based inverter integrates and optimizes the hardware, software, and application solution that power electronics applications require, including the power architecture, techniques, and tools. Power inverters are currently implemented in low-energy green power systems, together with sensors or microcontrollers, to operate motors and pumps; an STM-based power inverter is efficient, low-cost and reliable. My thesis work was based on STM motor drives and control systems, which can be implemented in a gas analyser for operating its pumps and motors, and which have been widely applied in various engineering sectors due to their ability to respond to adverse structural changes and their improved reliability. The present research first used the STM inverter toolchain on a low-power MCU such as the NUCLEO board, with some practical examples such as a blinking LED and PWM generation. We then implemented a three-phase inverter model with the STEVAL-IPM08B board, which converts a single-phase 230 V AC input to a three-phase 380 V AC output suitable for operating an induction motor.
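
As a hedged sketch of the sinusoidal PWM step mentioned above (our illustration: the carrier frequency, timer resolution and modulation index are invented, and register-level STM32 code is deliberately omitted), the three phase duty cycles are sampled each PWM period from sinusoids 120° apart, which is the usual way a microcontroller timer synthesizes a 50 Hz three-phase output:

```python
# Sketch: three-phase sinusoidal PWM duty cycles for a 50 Hz output.
# Each PWM period, a timer compare value would be updated from these samples;
# the three phases are shifted by 120 degrees. All constants are illustrative.

import math

F_OUT = 50.0        # desired output frequency (Hz)
F_PWM = 20_000.0    # PWM carrier frequency (Hz, assumed)
TOP = 1000          # timer auto-reload value, i.e. duty resolution (assumed)
M = 0.9             # modulation index (assumed)

def duty(phase_offset_deg, n):
    """Compare value for PWM period n: TOP * (0.5 + 0.5*M*sin(theta))."""
    theta = 2 * math.pi * F_OUT * n / F_PWM + math.radians(phase_offset_deg)
    return int(TOP * (0.5 + 0.5 * M * math.sin(theta)))

# Duty cycles for phases U, V, W over the first few PWM periods.
for n in range(4):
    print(n, duty(0, n), duty(-120, n), duty(-240, n))
```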

Relevance: 20.00%

Abstract:

This work presents the development of low-cost microprocessor-based equipment for generating a differential GPS correction signal in real time, and for configuring and supervising the GPS base. The developed equipment contains a dedicated microcontroller connected to the GPS receiver, an alphanumeric display, a multifunction keyboard for configuration and operation of the system, and communication interfaces. The electronic circuit receives the information from the GPS base and interprets it, converting the sentences to the RTCM SC-104 protocol. The microcontroller software performs the conversion of the signal received from the GPS base from its native format to the RTCM SC-104 protocol. The main processing board has two RS-232C standard serial interfaces: one is used for configuration and for receiving the information generated by the GPS base, while the other operates as an output, sending the differential correction signal to the transmission system. The development of the microprocessor-based equipment showed that it is possible to build a low-cost private station for real-time generation of a differential GPS correction signal.
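
A minimal sketch of the two-port data flow described above, written as a Python/pySerial illustration of ours rather than the actual microcontroller firmware: the port names, baud rate and the conversion stub are assumptions, and real RTCM SC-104 framing (30-bit words with parity) is deliberately left out.

```python
# Sketch of the dual-serial data flow: GPS base in, correction stream out.
# Port names and baud rates are assumptions; to_rtcm() is a placeholder and
# does NOT implement real RTCM SC-104 framing.

import serial  # pySerial

def to_rtcm(sentence: bytes) -> bytes:
    # Placeholder: real firmware would emit RTCM SC-104 words with parity bits.
    return b"CORR," + sentence.strip() + b"\r\n"

gps_in = serial.Serial("/dev/ttyS0", 9600, timeout=1)   # GPS base / config port
tx_out = serial.Serial("/dev/ttyS1", 9600, timeout=1)   # transmission system port

while True:                                   # forward loop, as in the firmware
    sentence = gps_in.readline()              # one sentence from the GPS base
    if sentence:
        tx_out.write(to_rtcm(sentence))       # send converted correction data
```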

Relevance: 20.00%

Abstract:

Attention deficit, impulsivity and hyperactivity are the cardinal features of attention deficit hyperactivity disorder (ADHD), but executive function (EF) disorders, such as problems with inhibitory control, working memory and reaction time, among other EFs, may underlie many of the disturbances associated with the disorder. OBJECTIVE: To examine reaction time in a computerized test in children with ADHD and normal controls. METHOD: Twenty-three boys (aged 9 to 12) with an ADHD diagnosis according to the clinical criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, 2000 (DSM-IV), without comorbidities, with an Intelligence Quotient (IQ) >89, and never treated with stimulants, together with fifteen age-matched normal controls, were investigated during performance of a voluntary attention psychophysical test. RESULTS: Children with ADHD showed longer reaction times than normal controls. CONCLUSION: A slower reaction time occurred in our patients with ADHD. These findings may be related to problems with the attentional system, which could not maintain an adequate capacity in perceptual input processes and/or motor output processes to respond consistently during continuous or repetitive activity.