224 results for Control and Systems Engineering


Relevance:

100.00%

Publisher:

Abstract:

Plug-in hybrid electric vehicles (PHEVs) show great promise for reducing greenhouse gas emissions and are therefore a focal point of research and development. Existing on-board charging approaches are effective but require several power conversion and switching devices, which reduces reliability and cost efficiency. This paper presents a novel three-phase switched reluctance (SR) motor drive with integrated charging functions (both internal combustion engine and grid charging). The electrical energy flow within the drivetrain is controlled by a power electronic converter with fewer power switching devices and magnetic components. It allows the desired energy conversion between the engine generator, the battery, and the SR motor under different operation modes. Battery-charging techniques are developed for both the motor-driving mode and the standstill-charging mode. During the magnetization mode, the machine's phase windings are energized by the dc-link voltage. The power converter and the machine phase windings are controlled with a three-phase relay to enable the use of the ac-dc rectifier. The power converter can operate as a buck-boost-type or a buck-type dc-dc converter for charging the battery. Simulation results in MATLAB/Simulink and experiments on a 3-kW SR motor validate the effectiveness of the proposed technologies, which may have significant economic implications and improve the market acceptance of PHEVs.
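The steady-state relations behind the two charging configurations can be illustrated with a minimal sketch. It assumes ideal, lossless converters in continuous conduction mode; the function names and example voltages are hypothetical and are not taken from the paper:

```python
def buck_duty(v_dc: float, v_batt: float) -> float:
    """Ideal buck stage: V_batt = D * V_dc (requires V_batt <= V_dc)."""
    assert 0 < v_batt <= v_dc
    return v_batt / v_dc

def buck_boost_duty(v_dc: float, v_batt: float) -> float:
    """Ideal buck-boost stage: |V_batt| = V_dc * D / (1 - D)."""
    return v_batt / (v_dc + v_batt)

# Hypothetical example: charging a 72 V pack from a 110 V dc link.
print(buck_duty(110.0, 72.0))        # ~0.655
print(buck_boost_duty(110.0, 72.0))  # ~0.396
```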

Relevance:

100.00%

Publisher:

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who the elected leader is. This article studies the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that not even Ω(n), where n is the number of nodes in the network, is a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was additionally required that nodes may not wake up spontaneously and that D and n are not known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm; an O(D)-time leader election algorithm is already known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be achieved simultaneously in the randomized setting for all graphs. The answer is known to be negative in the deterministic setting. We answer this question partially by presenting a randomized algorithm that matches both complexities in some cases, which already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages against time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
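For intuition only, here is a toy synchronous simulation of implicit leader election by rank flooding. It is not the paper's O(m)-message algorithm: flooding for D rounds costs O(Dm) messages, which is exactly the kind of overhead the paper's bounds and algorithms address. All names are illustrative:

```python
import random

def implicit_leader_election(adj, diameter):
    """Each node draws a random rank; the maximum rank is flooded for
    `diameter` synchronous rounds. Afterwards only the node holding the
    global maximum declares itself leader (implicit election: other nodes
    need not learn the leader's identity)."""
    rank = {v: (random.getrandbits(64), v) for v in adj}  # id breaks ties
    best = dict(rank)
    for _ in range(diameter):  # one message per edge per round
        best = {v: max([best[v]] + [best[u] for u in adj[v]]) for v in adj}
    return [v for v in adj if best[v] == rank[v]]  # the unique leader

ring = {v: [(v - 1) % 8, (v + 1) % 8] for v in range(8)}  # 8-node ring, D = 4
print(implicit_leader_election(ring, diameter=4))
```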

Relevance:

100.00%

Publisher:

Abstract:

In order to use virtual reality as a sport analysis tool, we need to be sure that an immersed athlete reacts realistically in a virtual environment. This has been validated for a real handball goalkeeper facing a virtual thrower. However, it is not yet known which visual variables induce a realistic motor behavior in the immersed handball goalkeeper. In this study, we used virtual reality to dissociate the visual information related to the movements of the player from the visual information related to the trajectory of the ball, with the aim of evaluating the relative influence of these different sources of visual information on the goalkeeper's motor behavior. We tested 10 handball goalkeepers who had to predict the final position of the virtual ball in the goal when facing: only the throwing action of the attacking player (TA condition), only the resulting ball trajectory (BA condition), or both the throwing action of the attacking player and the resulting ball trajectory (TB condition). We show that performance was better in the BA and TB conditions and, contrary to expectations, substantially worse in the TA condition. A significant effect of ball landing zone does, however, suggest that the relative importance of visual information from the player and from the ball depends on the targeted zone in the goal. In some cases, body-based cues embedded in the throwing action may have only a minor influence compared with the ball trajectory, and vice versa. Kinematic analysis was then combined with these results to determine why such differences occur depending on the ball landing zone, and consequently to clarify the role of different sources of visual information on the motor behavior of an athlete immersed in a virtual environment.

Relevance:

100.00%

Publisher:

Abstract:

A new approach to determining the local boundary of the voltage stability region in cut-set power space (CVSR) is presented. Power flow tracing is first used to determine the generator-load pair most sensitive to each branch in the interface. These generator-load pairs are then used to apply accurate small disturbances, controlling each branch power flow in the increasing and decreasing directions to obtain new equilibrium points around the initial equilibrium point. Continuation power flow is then run from these new points to obtain the corresponding critical points around the initial critical point on the CVSR boundary. A hyperplane through the initial critical point can then be calculated by solving a set of linear algebraic equations. Finally, the presented method is validated on several systems, including the New England 39-bus system, the IEEE 118-bus system, and the EPRI 1000-bus system. The results show that the method is computationally efficient with low approximation error, providing a useful approach for online voltage stability monitoring and assessment in power systems. This work is supported by the National Natural Science Foundation of China (No. 50707019), the Special Fund of the National Basic Research Program of China (No. 2009CB219701), the Foundation for the Author of National Excellent Doctoral Dissertation of PR China (No. 200439), the Tianjin Municipal Science and Technology Development Program (No. 09JCZDJC25000), and the National Major Project of Scientific and Technical Supporting Programs of China During the 11th Five-year Plan Period (No. 2006BAJ03A06). ©2009 State Grid Electric Power Research Institute Press.
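The final hyperplane step reduces to a linear solve. A minimal sketch, assuming d critical points in a d-dimensional cut-set power space, in general position and not containing the origin (function names are hypothetical, not from the paper):

```python
import numpy as np

def cvsr_local_hyperplane(critical_points):
    """critical_points: (d, d) array, one critical point per row (the initial
    critical point plus d-1 neighbours found by continuation power flow).
    Solving P @ n = 1 yields a normal n such that n . x = 1 for every point
    on the fitted hyperplane."""
    P = np.asarray(critical_points, dtype=float)
    return np.linalg.solve(P, np.ones(P.shape[0]))

# An operating point x on the secure side satisfies n @ x < 1; the margin
# 1 - n @ x can then serve as a simple online voltage stability index.
```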

Relevance:

100.00%

Publisher:

Abstract:

Inherently error-resilient applications in areas such as signal processing, machine learning, and data analytics provide opportunities for relaxing reliability requirements, and thereby reducing the overhead incurred by conventional error correction schemes. In this paper, we exploit the tolerable imprecision of such applications by designing an energy-efficient fault-mitigation scheme for unreliable data memories that meets a target yield. The proposed approach uses a bit-shuffling mechanism to isolate faults into bit locations of lower significance. This skews the bit-error distribution towards the low-order bits, substantially limiting the output error magnitude. By controlling the granularity of the shuffling, the proposed technique enables trading off quality for power, area, and timing overhead. Compared to error-correction codes, this can reduce the overhead by as much as 83% in read power, 77% in read access time, and 89% in area when applied to various data mining applications in a 28-nm process technology.
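The core idea can be sketched in a few lines. This is an illustrative software model, not the paper's memory circuit: given the set of faulty cell positions in a word (known, e.g., from test), a permutation stores the logical least significant bits in the faulty cells so that stuck bits corrupt only low-magnitude positions:

```python
WIDTH = 16

def shuffle_map(faulty_cells):
    """Logical bit index -> physical cell index: faulty cells receive the
    least significant logical bits, healthy cells the rest."""
    healthy = [p for p in range(WIDTH) if p not in faulty_cells]
    return sorted(faulty_cells) + healthy  # index by logical bit position

def store(word, perm):
    cell = 0
    for logical in range(WIDTH):
        cell |= ((word >> logical) & 1) << perm[logical]
    return cell

def load(cell, perm):
    word = 0
    for logical in range(WIDTH):
        word |= ((cell >> perm[logical]) & 1) << logical
    return word

# A stuck-at fault in cell 15 now maps to logical bit 0, so it causes an
# error of magnitude 1 instead of 32768: perm = shuffle_map({15}).
```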

Relevance:

100.00%

Publisher:

Abstract:

Wearable devices performing advanced bio-signal analysis algorithms promise to foster a revolution in healthcare provision for chronic cardiac diseases. In this context, energy efficiency is of paramount importance, as long-term monitoring must be ensured while relying on a tiny power source. Operating at a scaled supply voltage, just above the threshold voltage, saves substantial energy, but it makes circuits, and especially memories, more prone to errors, threatening the correct execution of algorithms. The use of error detection and correction codes may help protect the entire memory content; however, it incurs large area and energy overheads that may not be compatible with the tight energy budgets of wearable systems. To cope with this challenge, in this paper we propose to limit the overhead of traditional schemes by selectively detecting and correcting errors only in the data that most affect the end-to-end quality of service of ultra-low-power wearable electrocardiogram (ECG) devices. This partitioning protects either the significant words or the significant bits of each data element, according to the application characteristics (the statistical properties of the data in the application buffers) and their impact on the output. On real ECG signals, the proposed heterogeneous error protection scheme allows substantial energy savings (11% in wearable devices) compared to state-of-the-art approaches, such as ECC, in which the whole memory is protected against errors, while causing negligible output quality degradation in the evaluated power spectrum analysis application of ECG signals.
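A minimal sketch of the significant-bits variant follows. It is illustrative only: the paper's scheme also corrects errors, whereas a single parity bit here merely detects them, and the bit widths are assumptions:

```python
DATA_BITS, PROTECTED_BITS = 12, 4  # assumed ECG sample width and protected MSBs

def parity_of_msbs(sample: int) -> int:
    """Parity covering only the most significant bits of a sample; flips in
    the unprotected low-order bits pass through with small output impact."""
    msbs = sample >> (DATA_BITS - PROTECTED_BITS)
    return bin(msbs).count("1") & 1

def significant_error(stored: int, stored_parity: int) -> bool:
    """True if a bit flip hit one of the protected (high-impact) bits."""
    return parity_of_msbs(stored) != stored_parity
```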

Relevance:

100.00%

Publisher:

Abstract:

Research in the field of sports performance is constantly developing new technology to help extract meaningful data and aid understanding in a multitude of areas, such as improving technical or motor performance. Video playback has previously been used extensively for exploring anticipatory behaviour. However, when using such systems, perception is not active: key information is lost that only emerges from the dynamics of the action unfolding over time and the active perception of the observer. Virtual reality (VR) may be used to overcome these issues. This paper presents the architecture and initial implementation of a novel VR cricket simulator, utilising state-of-the-art motion capture technology (21 Vicon cameras capturing the kinematic profiles of elite bowlers) and emerging VR technology (Intersense IS-900 tracking combined with Qualisys motion capture cameras, with visual display via a Sony HMZ-T1 head-mounted display), applied in a cricket scenario to examine varying components of decision and action for cricket batters. This provided an experience with a high level of presence, allowing a real-time egocentric viewpoint to be presented to participants. Cyclical user testing was carried out, utilising both qualitative and quantitative approaches, with users reporting a positive experience in use of the system.

Relevance:

100.00%

Publisher:

Abstract:

This research paper presents a five-step algorithm to generate tool paths for machining free-form / irregular contoured surfaces (FICS) by adopting the STEP-NC (AP-238) format. In the first step, a parametrized CAD model with FICS is created or imported in the UG-NX6.0 CAD package. The second step recognizes the features and calculates a Closeness Index (CI) by comparing them with B-spline / Bézier surfaces. The third step utilizes the CI and extracts the necessary data to formulate the blending functions for the identified features. In the fourth step, Z-level 5-axis tool paths are generated using flat and ball-end mill cutters. Finally, in the fifth step, the tool paths are integrated into the STEP-NC format and validated. All these steps are discussed and explained through a validated industrial component.
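The abstract does not define the CI formula, so the following is only a plausible sketch: sample the recognized surface and a candidate Bézier patch on a common parameter grid, then map the RMS deviation into (0, 1], with 1 meaning a perfect match. All names and the normalization are assumptions:

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    return comb(n, i) * t**i * (1 - t) ** (n - i)

def bezier_patch(ctrl, u, v):
    """Evaluate a Bezier patch (ctrl: (m+1, n+1, 3) control net) at (u, v)
    via Bernstein blending functions."""
    m, n = ctrl.shape[0] - 1, ctrl.shape[1] - 1
    bu = np.array([bernstein(m, i, u) for i in range(m + 1)])
    bv = np.array([bernstein(n, j, v) for j in range(n + 1)])
    return np.einsum("i,j,ijk->k", bu, bv, ctrl)

def closeness_index(surface_pts, ctrl, samples=20):
    """surface_pts: (samples*samples, 3) points sampled row-major on the same
    (u, v) grid used below. CI = 1 / (1 + RMS deviation)."""
    grid = np.linspace(0.0, 1.0, samples)
    patch = np.array([bezier_patch(ctrl, u, v) for u in grid for v in grid])
    rms = np.sqrt(np.mean(np.sum((surface_pts - patch) ** 2, axis=1)))
    return 1.0 / (1.0 + rms)
```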

Relevance:

100.00%

Publisher:

Abstract:

This research paper presents work on feature recognition, tool path data generation, and integration with STEP-NC (AP-238 format) for features having free-form / irregular contoured surfaces (FICS). Initially, the FICS features are modelled or imported in the UG CAD package and a closeness index is generated by comparing the FICS features with basic B-spline / Bézier curves and surfaces. Blending functions are then calculated by adopting the convolution theorem. Based on the blending functions, contour offset tool paths are generated and simulated for a 5-axis milling environment. Finally, the tool path (CL) data is integrated with the STEP-NC (AP-238) format. The tool path algorithm and STEP-NC data are tested on various industrial parts through an automated UFUNC plugin.
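One standard construction consistent with the abstract's mention of the convolution theorem is building uniform B-spline blending functions by repeatedly convolving a unit box with itself (each convolution raises the degree by one). This is a textbook construction, not necessarily the authors' exact procedure:

```python
import numpy as np

def bspline_blending(degree, resolution=200):
    """Uniform B-spline basis of the given degree, sampled on a grid:
    degree 0 is the unit box, degree 1 the triangle (hat) function, and
    degree 3 the familiar cubic bell. Support is [0, degree + 1)."""
    dt = 1.0 / resolution
    box = np.ones(resolution)              # box function on [0, 1)
    basis = box.copy()
    for _ in range(degree):
        basis = np.convolve(basis, box) * dt
    return basis

cubic = bspline_blending(3)                # peaks at ~2/3 near t = 2
```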

Relevance:

100.00%

Publisher:

Abstract:

Stealthy attackers move patiently through computer networks, taking days, weeks, or months to accomplish their objectives in order to avoid detection. As networks scale up in size and speed, monitoring for such attack attempts is increasingly challenging. This paper presents an efficient monitoring technique for stealthy attacks. It investigates the feasibility of the proposed method under a number of different test cases and examines how the design of the network affects detection. A methodical way of tracing anonymous stealthy activities to their approximate sources is also presented. Bayesian fusion, together with traffic sampling, is employed as a data reduction method. The proposed method is able to monitor stealthy activities at sampling rates of 10-20% without degrading the quality of detection.
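A toy version of the Bayesian fusion step is sketched below. All probabilities and the sampling model are hypothetical, purely to show how sampled evidence is fused in odds form; this is not the paper's detector:

```python
import random

def bayes_update(prior, lr):
    """One fusion step in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1.0 - prior) * lr
    return odds / (1.0 + odds)

# Hypothetical model: a "suspicious" packet feature appears with probability
# 0.3 on an attack path and 0.05 on a benign one.
P_SUSP_ATTACK, P_SUSP_BENIGN = 0.3, 0.05

def likelihood_ratio(suspicious):
    return (P_SUSP_ATTACK / P_SUSP_BENIGN if suspicious
            else (1 - P_SUSP_ATTACK) / (1 - P_SUSP_BENIGN))

belief = 0.01                      # prior: host is part of a stealthy attack
for packet in range(1000):
    if random.random() < 0.10:     # 10% traffic sampling as data reduction
        suspicious = random.random() < P_SUSP_ATTACK  # simulate attack traffic
        belief = bayes_update(belief, likelihood_ratio(suspicious))
print(f"posterior belief after sampled evidence: {belief:.3f}")
```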

Relevance:

100.00%

Publisher:

Abstract:

Rotational moulding is a method for producing hollow plastic articles. Heating is normally carried out by placing the mould in a hot-air oven, where the plastic material inside the mould is heated; the most common cooling media are water and forced air. Due to the inefficient nature of conventional hot-air ovens, most of the energy supplied by the oven does not go into heating the plastic, and as a consequence the process has very long cycle times. Direct oil heating is an effective alternative for achieving better energy efficiency and shorter cycle times. This research work combines that technology with an innovative mould design that applies the advantages of electroforming and rapid prototyping. Complex cavity geometries are manufactured by electroforming from a rapid prototyping mandrel. The approach involves conformal heating and cooling channels, in which the oil flows through a channel parallel to the electroformed cavity (nickel or copper). The mould therefore achieves high temperature uniformity through direct heating and cooling of the electroformed shell. Uniform heating and cooling are important not only for good-quality parts but also for a uniform wall thickness distribution in the rotationally moulded part. Experimental work with the manufactured prototype mould enabled analysis of the thermal uniformity in the cavity under different temperatures. Copyright © 2008 by ASME.

Relevance:

100.00%

Publisher:

Abstract:

The measurement of fast-changing temperature fluctuations is a challenging problem due to the inherently limited bandwidth of temperature sensors, which results in a measured signal that is a lagged and attenuated version of the input. Compensation can be performed provided an accurate, parameterised sensor model is available. However, to account for the influence of the measurement environment and changing conditions such as gas velocity, the model must be estimated in situ. The cross-relation method of blind deconvolution is one approach to in-situ characterisation of sensors. A drawback of the method, however, is that it becomes positively biased and unstable at high noise levels. In this paper, the cross-relation method is cast in the discrete-time domain and a bias compensation approach is developed. It is shown that the proposed compensation scheme is robust and yields unbiased estimates with lower estimation variance than the uncompensated version. All results are verified using Monte Carlo simulations.
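For reference, the basic (uncompensated) discrete-time cross-relation estimate can be sketched as follows. Two sensors with FIR responses h1 and h2 observe the same input, so y1*h2 = y2*h1; stacking this identity as a linear system and taking the smallest right singular vector recovers the responses up to a common scale. This is the standard textbook form, without the paper's bias compensation:

```python
import numpy as np

def conv_matrix(y, L):
    """(N + L - 1) x L Toeplitz matrix such that conv_matrix(y, L) @ h = y * h."""
    A = np.zeros((len(y) + L - 1, L))
    for j in range(L):
        A[j:j + len(y), j] = y
    return A

def cross_relation(y1, y2, L):
    """Blind FIR estimation from the cross-relation y2 * h1 - y1 * h2 = 0.
    Returns (h1, h2) up to a common scale (a gain/sign ambiguity is inherent)."""
    A = np.hstack([conv_matrix(y2, L), -conv_matrix(y1, L)])
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]          # right singular vector with the smallest singular value
    return h[:L], h[L:]
```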

Relevance:

100.00%

Publisher:

Abstract:

To maintain the pace of development set by Moore's law, production processes in semiconductor manufacturing are becoming more and more complex, and the development of efficient and interpretable anomaly detection systems is fundamental to keeping production costs low. As the dimension of process monitoring data can become extremely high, anomaly detection systems are affected by the curse of dimensionality, and dimensionality reduction therefore plays an important role. Classical dimensionality reduction approaches, such as Principal Component Analysis, generally involve transformations that seek to maximize the explained variance. In datasets with several clusters of correlated variables, the contributions of isolated variables to the explained variance may be insignificant, with the result that they may not be included in the reduced data representation; it is then impossible to detect an anomaly that is reflected only in such isolated variables. In this paper we present a new dimensionality reduction technique that takes account of such isolated variables and demonstrate how it can be used to build an interpretable and robust anomaly detection system for Optical Emission Spectroscopy data.
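The failure mode, and one simple way around it, can be sketched as follows. This is an illustrative baseline, not the paper's technique: variables that correlate strongly with no other variable are passed through unreduced, while PCA is applied only to the correlated block, so anomalies confined to isolated variables survive the reduction:

```python
import numpy as np

def reduce_keep_isolated(X, corr_threshold=0.7, n_components=2):
    """Reduce (n_samples, n_vars) data X: keep isolated variables as-is and
    replace the block of mutually correlated variables by its leading
    principal component scores."""
    R = np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(R, 0.0)
    is_isolated = R.max(axis=0) < corr_threshold
    parts = [X[:, is_isolated]]                 # isolated variables untouched
    Xc = X[:, ~is_isolated]
    if Xc.shape[1]:
        Xc = Xc - Xc.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        parts.append(Xc @ Vt[:n_components].T)  # PCA scores of the block
    return np.hstack(parts)
```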