105 results for correctness verification


Relevance:

10.00%

Publisher:

Abstract:

A nearly constant switching frequency current hysteresis controller for a 2-level inverter-fed induction motor drive is proposed in this paper. The salient features of this controller are fast current dynamics, inherent protection against overloads, and reduced switching frequency variation. The large variation of switching frequency seen in the conventional hysteresis controller is avoided by defining a current-error boundary obtained from the current-error trajectory of standard space vector PWM. The current-error boundary is computed at every sampling interval from the induction machine parameters and the estimated fundamental stator voltage. The stator currents are continuously monitored, and when the current error exceeds the boundary, the voltage space vector is switched to reduce the current error. The proposed boundary computation algorithm is applicable in both the linear and over-modulation regions and is simple to implement on any standard digital signal processor. Detailed experimental verification is carried out on a 7.5 kW induction motor, and results are presented to show the performance of the drive at various operating conditions and to validate the claimed advantages.
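The per-sample decision rule described above can be pictured as a small check against the precomputed boundary. The sketch below is illustrative only, not the authors' implementation; the boundary value and the `error_after` prediction model are assumed to be supplied by the rest of the controller.

```python
def hysteresis_step(i_error, boundary, active_vector, candidate_vectors, error_after):
    """One sampling interval of a boundary-based hysteresis controller (sketch).

    i_error: current space-vector error, boundary: limit computed this interval,
    candidate_vectors: available inverter voltage vectors,
    error_after(err, v): caller-supplied model predicting the error if vector v is applied.
    """
    if abs(i_error) <= boundary:
        return active_vector  # error inside the band: keep the present vector
    # Error exceeded the boundary: switch to the vector predicted to reduce it most.
    return min(candidate_vectors, key=lambda v: abs(error_after(i_error, v)))
```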

Relevance:

10.00%

Publisher:

Abstract:

In this paper, the problem of assigning cooperative Uninhabited Aerial Vehicles (UAVs) to perform multiple tasks on multiple targets is posed as a combinatorial optimization problem. The tasks of classifying, attacking, and verifying targets with UAVs are assigned using nature-inspired techniques, namely the Artificial Immune System (AIS), Particle Swarm Optimization (PSO), and the Virtual Bee Algorithm (VBA). These nature-inspired techniques avoid the prohibitive computational complexity that classical combinatorial optimization methods incur on this NP-hard problem. Using the algorithms, we find the best sequence in which to attack and destroy the targets while minimizing either the total distance traveled or the maximum distance traveled by a UAV. The performance of the UAVs in classifying, attacking, and verifying the targets is evaluated using AIS, PSO, and VBA.
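As a point of reference, the two objectives named in the abstract (total distance and min-max distance) can be evaluated for a candidate assignment as in the sketch below. The route encoding is an illustrative assumption, not the representation used by the paper.

```python
import math

def leg_length(a, b):
    """Euclidean distance between two 2-D waypoints."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def objectives(routes):
    """routes: {uav_id: [start, waypoint, waypoint, ...]} as (x, y) tuples."""
    per_uav = {
        uav: sum(leg_length(p, q) for p, q in zip(path, path[1:]))
        for uav, path in routes.items()
    }
    total_distance = sum(per_uav.values())  # objective for the total-distance formulation
    max_distance = max(per_uav.values())    # objective for the min-max formulation
    return total_distance, max_distance
```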

Relevance:

10.00%

Publisher:

Abstract:

Advanced bus-clamping switching sequences, which employ an active vector twice in a subcycle, are used to reduce line current distortion and switching loss in a space vector modulated voltage source converter. This study evaluates minimum switching loss pulse width modulation (MSLPWM), a combination of such sequences, for the static reactive power compensator (STATCOM) application. It is shown that MSLPWM results in a significant reduction in device loss over conventional space vector pulse width modulation. Experimental verification is presented at power levels of up to 150 kVA.

Relevance:

10.00%

Publisher:

Abstract:

Precise pointer analysis is a problem of interest to both the compiler and the program verification communities. Flow-sensitivity is an important dimension of pointer analysis that affects the precision of the computed result. Scaling flow-sensitive pointer analysis to millions of lines of code is a major challenge. Recently, staged flow-sensitive pointer analysis has been proposed, which exploits a sparse representation of program code created by a staged analysis. In this paper we formulate staged flow-sensitive pointer analysis as a graph-rewriting problem. Graph rewriting has already been used for flow-insensitive analysis; however, formulating flow-sensitive pointer analysis as a graph-rewriting problem poses additional challenges due to the nature of flow-sensitivity. We implement our parallel algorithm using Intel Threading Building Blocks and demonstrate considerable scaling (up to 2.6x) on 8 threads over a set of 10 benchmarks. Compared to the sequential implementation of staged flow-sensitive analysis, a single-threaded execution of our implementation performs better on 8 of the 10 benchmarks.
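To make the graph-rewriting view concrete, the sketch below shows the kind of worklist-driven fixed-point propagation such an analysis iterates; it is a simplified, sequential Python illustration under assumed node/edge and transfer-function interfaces, whereas the paper's actual implementation is a parallel C++ system built on Intel TBB.

```python
from collections import deque

def rewrite_to_fixpoint(nodes, successors, transfer):
    """Worklist fixed-point sketch.

    nodes: iterable of graph nodes; successors[n]: nodes downstream of n;
    transfer(n): recompute n's points-to facts, returning True if they changed.
    """
    worklist = deque(nodes)
    queued = set(worklist)
    while worklist:
        n = worklist.popleft()
        queued.discard(n)
        if transfer(n):                 # facts at n changed
            for s in successors[n]:     # propagate along graph edges
                if s not in queued:
                    worklist.append(s)
                    queued.add(s)
```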

Relevance:

10.00%

Publisher:

Abstract:

Given recent reports of novel optical properties of ultra-small quantum dots (QDs) (r < 2 nm), this nanomaterial is of relevance to both technology and science. However, it is well known that in these size regimes most chalcogenide QD dispersions are unstable. Since applications often require the use of QD dispersions (e.g. for deployment on a substrate), stabilizing these ultra-small particles is of practical relevance. In this work we demonstrate a facile, green, solution-based approach for the synthesis of stable, ultra-small ZnO QDs with radius less than 2 nm. The particle size is calculated using Brus' equation and confirmed by transmission electron micrographs. The ZnO QDs reported here remain stable for > 120 days in ethanol (at ~298-303 K). We report digestive ripening (DR) in TEA-capped ZnO QDs; this occurs rapidly, over a duration of only 5 min. To explain this observation we propose a mechanism based on Lee's theory, which correlates the tendency for DR with the observed zeta potentials of the dispersion medium. To the best of our knowledge this is (i) the first report of DR in oxide QDs, as well as the first direct experimental verification of Lee's theory, and (ii) the most rapid DR reported so far. The facile nature of the method presented here makes ultra-small ZnO readily accessible for fundamental exploration and technologically relevant applications. (C) 2014 Elsevier Ltd and Techna Group S.r.l. All rights reserved.
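For context, the size relation named above is assumed to be the standard Brus effective-mass expression, which back-calculates the radius R from the measured optical gap; the abstract only names the equation, so the form below is the textbook version, not a detail taken from the paper.

```latex
% Brus effective-mass approximation: bulk gap plus confinement blue-shift
% minus a Coulomb correction, for a QD of radius R.
E(R) \approx E_g^{\text{bulk}}
  + \frac{\hbar^2 \pi^2}{2R^2}\left(\frac{1}{m_e^*} + \frac{1}{m_h^*}\right)
  - \frac{1.8\, e^2}{4\pi \varepsilon \varepsilon_0 R}
```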

Relevance:

10.00%

Publisher:

Abstract:

Package-board co-design plays a crucial role in determining the performance of high-speed systems. Although several commercial solutions exist for electromagnetic analysis and verification, the lack of Computer Aided Design (CAD) tools for SI-aware design and synthesis leads to longer design cycles and non-optimal package-board interconnect geometries. In this work, the functional similarities between package-board design and radio-frequency (RF) imaging are explored. Consequently, qualitative methods common in the imaging community, such as Tikhonov Regularization (TR) and the Landweber method, are applied to solve multi-objective, multi-variable package design problems. In addition, a new hierarchical iterative piecewise-linear algorithm is developed as a wrapper over LBP for an efficient solution in the design space.
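As a reference point, the two imaging-style solvers named above have the standard textbook forms below (general forms only; how the paper maps package-design variables onto A, x, and b is not specified in the abstract).

```latex
% Tikhonov regularization: damped least-squares solution of Ax = b.
x_{\lambda} = \arg\min_{x} \; \|Ax - b\|_2^2 + \lambda \|x\|_2^2
            = (A^{\top}A + \lambda I)^{-1} A^{\top} b

% Landweber iteration: gradient-style update, convergent for 0 < \omega < 2/\|A\|_2^2.
x_{k+1} = x_k + \omega \, A^{\top} (b - A x_k)
```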

Relevance:

10.00%

Publisher:

Abstract:

The correctness of a hard real-time system depends on its ability to meet all its deadlines. Existing real-time systems use either a pure real-time scheduler or a real-time scheduler embedded as a real-time scheduling class in the scheduler of an operating system (OS). Existing schedulers in multicore systems that support both real-time and non-real-time tasks permit the execution of non-real-time tasks on all cores with priorities lower than those of real-time tasks, but interrupts and softirqs associated with these non-real-time tasks can execute on any core with priorities higher than those of real-time tasks. As a result, the execution overhead of real-time tasks is quite large in these systems, which in turn affects their runtime. So that hard real-time tasks can execute in such systems with minimal interference from other Linux tasks, we propose in this paper an integrated scheduler architecture, called SchedISA, which aims to considerably reduce the execution overhead of real-time tasks. To test the efficacy of the proposed scheduler, we implemented the partitioned earliest deadline first (P-EDF) scheduling algorithm in SchedISA on Linux kernel version 3.8 and conducted experiments on an Intel Core i7 processor with eight logical cores. We compared the execution overhead of real-time tasks in this implementation of SchedISA with that in SCHED_DEADLINE's P-EDF implementation, which concurrently executes real-time and non-real-time tasks on all cores under Linux. The experimental results show that the execution overhead of real-time tasks in SchedISA is considerably less than that in SCHED_DEADLINE. We believe that, with further refinement of SchedISA, the execution overhead of real-time tasks can be reduced to a predictable maximum, making SchedISA suitable for scheduling hard real-time tasks without affecting the CPU share of Linux tasks.
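For readers unfamiliar with P-EDF, the sketch below shows its two ingredients, first-fit partitioning of tasks onto cores by utilization and earliest-deadline selection on each core. This is a minimal user-level illustration, not SchedISA's kernel code, and the task tuple format is an assumption.

```python
def partition_first_fit(tasks, num_cores):
    """tasks: list of (name, wcet, period); returns a list of per-core task lists."""
    cores = [[] for _ in range(num_cores)]
    load = [0.0] * num_cores
    # Place heavier tasks first; each core's total utilization must stay <= 1 for EDF.
    for name, wcet, period in sorted(tasks, key=lambda t: t[1] / t[2], reverse=True):
        u = wcet / period
        for c in range(num_cores):
            if load[c] + u <= 1.0:
                cores[c].append((name, wcet, period))
                load[c] += u
                break
        else:
            raise ValueError(f"task {name} does not fit on any core")
    return cores

def edf_pick(ready_jobs):
    """ready_jobs: list of (name, absolute_deadline); run the earliest deadline first."""
    return min(ready_jobs, key=lambda job: job[1]) if ready_jobs else None
```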

Relevance:

10.00%

Publisher:

Abstract:

The transformation of flowing liquids into rigid glasses is thought to involve increasingly cooperative relaxation dynamics as the temperature approaches that of the glass transition. However, the precise nature of this motion is unclear, and a complete understanding of vitrification thus remains elusive. Of the numerous theoretical perspectives [1-4] devised to explain the process, random first-order theory (RFOT) [2,5] is a well-developed thermodynamic approach, which predicts a change in the shape of relaxing regions as the temperature is lowered. However, the existence of an underlying 'ideal' glass transition predicted by RFOT remains debatable, largely because the key microscopic predictions concerning the growth of amorphous order and the nature of dynamic correlations lack experimental verification. Here, using holographic optical tweezers, we freeze a wall of particles in a two-dimensional colloidal glass-forming liquid and provide direct evidence for growing amorphous order in the form of a static point-to-set length. We uncover the non-monotonic dependence of dynamic correlations on area fraction and show that this non-monotonicity follows directly from the change in morphology and internal structure of cooperatively rearranging regions [6,7]. Our findings support RFOT and thereby constitute a crucial step in distinguishing between competing theories of glass formation.

Relevance:

10.00%

Publisher:

Abstract:

Cache analysis plays a very important role in obtaining precise Worst Case Execution Time (WCET) estimates of programs for real-time systems. While Abstract Interpretation (AI) based approaches are almost universally used for cache analysis, they fail to take advantage of a unique feature of the problem: it is not necessary to find the guaranteed cache behavior that holds across all executions of a program; we only need the cache behavior along one particular program path, namely the path with the maximum execution time. In this work, we introduce the concept of cache miss paths, which allows us to use worst-case path information to improve the precision of AI-based cache analysis. We use Abstract Interpretation to determine the cache miss paths and then integrate them into the IPET formulation. An added advantage is that this further allows us to use infeasible-path information for cache analysis. Experimentally, our approach gives more precise WCETs than AI-based cache analysis alone, and we also provide techniques to trade off analysis time against precision for scalability.
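For context, the Implicit Path Enumeration Technique (IPET) mentioned above bounds the WCET by an integer linear program over basic-block execution counts; the standard formulation is sketched below, without the paper's additional miss-path constraints, which the abstract does not spell out.

```latex
% Standard IPET objective: maximize total execution time over basic blocks,
% where c_i is the cost and x_i the execution count of block i, with
% flow conservation on the control-flow graph edges x_e.
\text{WCET} \;=\; \max_{x} \; \sum_{i} c_i \, x_i
\quad \text{s.t.} \quad
\sum_{e \in \mathrm{in}(i)} x_e \;=\; x_i \;=\; \sum_{e \in \mathrm{out}(i)} x_e,
\qquad x_{\text{entry}} = 1, \quad x \ge 0 .
```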

Relevance:

10.00%

Publisher:

Abstract:

We revisit a problem studied by Padakandla and Sundaresan [SIAM J. Optim., August 2009] on the minimization of a separable convex function subject to linear ascending constraints. The problem arises as the core optimization in several resource allocation problems in wireless communication settings. It is also a special case of the optimization of a separable convex function over the bases of a specially structured polymatroid. We give an alternative proof of the correctness of the algorithm of Padakandla and Sundaresan. In the process we relax some of the restrictions they placed on the objective function.
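One common statement of this problem class is sketched below; it is a plausible reading of "linear ascending constraints" rather than a reproduction of the paper's exact constraint set.

```latex
% Separable convex minimization under linear ascending constraints:
% partial sums must stay above an increasing sequence
% \alpha_1 \le \alpha_2 \le \dots \le \alpha_n, with the total fixed.
\min_{x \ge 0} \;\; \sum_{i=1}^{n} f_i(x_i)
\quad \text{s.t.} \quad
\sum_{j=1}^{l} x_j \;\ge\; \alpha_l \;\; (1 \le l \le n-1),
\qquad
\sum_{j=1}^{n} x_j \;=\; \alpha_n .
```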

Relevance:

10.00%

Publisher:

Abstract:

Retransmission protocols such as HDLC and TCP are designed to ensure reliable communication over noisy channels (i.e., channels that can corrupt messages). Thakkar et al. [15] have recently presented an algorithmic verification technique for deterministic streaming string transducer (DSST) models of such protocols. The verification problem is posed as equivalence checking between the specification and protocol DSSTs. In this paper, we argue that more general models need to be obtained using non-deterministic streaming string transducers (NSSTs). However, equivalence checking is undecidable for NSSTs. We present two classes where the models belong to a sub-class of NSSTs for which equivalence checking is decidable. (C) 2015 Elsevier B.V. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

We solve the two-dimensional, planar Navier-Stokes equations to simulate a laminar, standing hydraulic jump using a Volume-of-Fluid method. The geometry downstream of the jump has been designed to be similar to experimental conditions by including a pit at the edge of the platform over which the liquid film flows. We obtain jumps with and without separation. Increasing the inlet Froude number pushes the jump downstream and makes its slope weaker, consistent with experimental observations of circular jumps, while decreasing the Reynolds number brings the jump upstream and makes it steeper. We study the effect of the length of the domain and of a downstream obstacle on the structure and location of the jump. The transient flow that leads to the final steady jump is described for the first time, to our knowledge. In the moderate-Reynolds-number regime, we obtain steady undular jumps with a separated bubble underneath the first few undulations. Interestingly, surface tension shortens the wavelength of these undulations. We show that the undulations can be explained using the inviscid theory of Benjamin and Lighthill (Proc. R. Soc. London, Ser. A, 1954). We hope this new finding will motivate experimental verification.
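For reference, the two controlling parameters mentioned above are usually defined from the inlet film velocity U, film depth h, gravitational acceleration g, and kinematic viscosity ν; these are the standard definitions, since the abstract does not state the paper's exact non-dimensionalization.

```latex
% Standard definitions for a thin liquid film of depth h and velocity U:
\mathrm{Fr} = \frac{U}{\sqrt{g h}}, \qquad \mathrm{Re} = \frac{U h}{\nu}
% The jump sits where the flow passes from supercritical (Fr > 1)
% to subcritical (Fr < 1) conditions.
```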

Relevance:

10.00%

Publisher:

Abstract:

Executing authenticated computation on outsourced data is currently an area of major interest in cryptology. Large databases are being outsourced to untrusted servers without appreciable verification mechanisms. As an adversarial server could produce erroneous output, clients should not blindly trust the server's response. Primitive set operations such as union, set difference, and intersection can be invoked on outsourced data in different concrete settings and should be verifiable by the client. One such interesting adaptation is authenticating email search results, where the untrusted mail server has to provide a proof along with the search result. Recently Ohrimenko et al. proposed a scheme for authenticating email search. We suggest significant improvements over their proposal in terms of client computation and communication resources by properly recasting it in a two-party setting. In contrast to Ohrimenko et al., we are able to make the number of bilinear pairing evaluations, the costliest operation in the verification procedure, independent of the result-set cardinality for the union operation. We also provide an analytical comparison of our scheme with their proposal, which is further corroborated through experiments.

Relevance:

10.00%

Publisher:

Abstract:

Following the authors' recent work on the development and numerical verification of a new kinematic limit-analysis approach for surface footings on non-associative materials, a practical procedure is proposed to utilize the theory. It is known that both the peak friction angle and the dilation angle depend on the sand density as well as the stress level, which was not a concern of the former work. In the current work, a practical procedure is established to provide a better estimate of the bearing capacity of surface footings on sand, which is often non-associative. This practical procedure is based on the results obtained theoretically and requires the density index and the critical state friction angle of the sand. The proposed procedure is a simple iterative computation which relates the density index of the sand, the stress level, the dilation angle, the peak friction angle, and eventually the bearing capacity. The procedure is described and verified against available footing load test data.
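The iterative structure described above might be organized as in the sketch below. Only the loop structure is taken from the abstract; the correlation and capacity functions are caller-supplied placeholders, since the paper's specific relations are not given here.

```python
def iterate_bearing_capacity(q0, stress_from_q, angles_from_stress, capacity_from_angles,
                             tol=1e-3, max_iter=50):
    """Fixed-point sketch: capacity -> stress level -> (phi_peak, psi) -> capacity.

    q0: initial capacity guess; stress_from_q(q): representative stress under the footing;
    angles_from_stress(p): density- and stress-dependent (phi_peak, psi);
    capacity_from_angles(phi_peak, psi): kinematic limit-analysis bearing capacity.
    All three functions are hypothetical placeholders for the paper's correlations.
    """
    q = q0
    for _ in range(max_iter):
        p_mean = stress_from_q(q)
        phi_peak, psi = angles_from_stress(p_mean)
        q_new = capacity_from_angles(phi_peak, psi)
        if abs(q_new - q) <= tol * max(q, 1.0):
            return q_new
        q = q_new
    return q
```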

Relevance:

10.00%

Publisher:

Abstract:

Computer Assisted Assessment (CAA) has existed for several years now. While some forms of CAA do not require sophisticated text understanding (e.g., multiple choice questions), there are also student answers that consist of free text and require analysis of the text in the answer. Research on the latter has to date concentrated on two main sub-tasks: (i) grading of essays, which is done mainly by checking the style, grammatical correctness, and coherence of the essay, and (ii) assessment of short free-text answers. In this paper, we present a structured view of relevant research in automated assessment techniques for short free-text answers. We review papers spanning the last 15 years of research, with emphasis on recent papers. Our main objectives are twofold. First, we present the survey in a structured way by segregating information on datasets, problem formulation, techniques, and evaluation measures. Second, we discuss some potential future directions in this domain, which we hope will be helpful for researchers.