105 results for Input-Output Model
Abstract:
The configuration proposed in this paper aims to generate high voltage for pulsed power applications. The main idea is to charge two groups of capacitors in parallel through an inductor, taking advantage of the resonant phenomenon to charge each capacitor up to double the input voltage level. In each resonant half-cycle, one of the capacitor groups is charged; finally, the charged capacitors are connected in series and the sum of the capacitor voltages appears at the output of the topology. This topology can be considered a modified Marx generator that works on the resonant concept. Simulation models of this converter have been investigated in the Matlab/Simulink platform, and the results confirm the proper operation of the converter.
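The resonant doubling described above follows the textbook lossless-LC result: a capacitor charged from a DC source through an ideal inductor follows v_C(t) = V_in(1 - cos(w0*t)) and peaks at twice the input voltage after one resonant half-cycle. The component values below are illustrative, not taken from the paper.

```python
import math

def resonant_charge_voltage(v_in, L, C, t):
    """Capacitor voltage when charged from a DC source v_in
    through an ideal (lossless) inductor L: LC resonance."""
    w0 = 1.0 / math.sqrt(L * C)
    return v_in * (1.0 - math.cos(w0 * t))

# Peak occurs after one resonant half-cycle, t = pi * sqrt(L*C),
# where the capacitor reaches twice the input voltage.
V_IN, L, C = 100.0, 1e-3, 1e-6          # illustrative values
t_half = math.pi * math.sqrt(L * C)
v_peak = resonant_charge_voltage(V_IN, L, C, t_half)
print(v_peak)  # 200.0 = 2 * V_IN
```

In the Marx-style topology, each group reaches this doubled voltage on alternate half-cycles before the series reconnection sums them at the output.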
Abstract:
This correspondence paper addresses the problem of output feedback stabilization of control systems in networked environments with quality-of-service (QoS) constraints. The problem is investigated in discrete-time state space using Lyapunov stability theory and the linear matrix inequality (LMI) technique. A new discrete-time modeling approach is developed to describe a networked control system (NCS) with parameter uncertainties and nonideal network QoS; it integrates network-induced delay, packet dropout, and other network behaviors into a unified framework. With this model, an improved stability condition, dependent on the lower and upper bounds of the equivalent network-induced delay, is established for the NCS with norm-bounded parameter uncertainties, and is further extended to the output feedback stabilization of the NCS with nonideal QoS. Numerical examples are given to demonstrate the main results of the theoretical development.
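As a much-simplified illustration of the discrete-time modelling idea (not the paper's delay-dependent LMI condition), a one-step network-induced input delay can be absorbed into an augmented state and nominal stability checked via the spectral radius of the closed loop. The plant matrices and feedback gain below are hypothetical.

```python
import numpy as np

# Plant x_{k+1} = A x_k + B u_{k-1}, with u_k = K x_k applied one
# step late over the network (a toy stand-in for network delay).
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
K = np.array([[-2.0, -3.0]])            # illustrative feedback gain

# Augmented state z_k = [x_k; u_{k-1}] turns the delayed loop into
# an ordinary discrete-time system z_{k+1} = Az z_k.
Az = np.block([[A, B], [K, np.zeros((1, 1))]])
rho = np.max(np.abs(np.linalg.eigvals(Az)))
print(rho < 1.0)  # True -> the delayed closed loop is stable
```

The paper's unified framework handles time-varying delays, packet dropout and parameter uncertainty, which require the Lyapunov/LMI machinery rather than this eigenvalue check.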
Abstract:
Nonlinear filter generators are common components used in keystream generators for stream ciphers and, more recently, in authentication mechanisms. They consist of a Linear Feedback Shift Register (LFSR) and a nonlinear Boolean function that masks the linearity of the LFSR output. Properties of the output of a nonlinear filter are not well studied. Anderson noted that the m-tuple output of a nonlinear filter with consecutive taps to the filter function is unevenly distributed; current designs therefore use taps which are not consecutive. We examine m-tuple outputs from nonlinear filter generators constructed using various LFSRs and Boolean functions, for both consecutive and uneven (full positive difference sets where possible) tap positions. The investigation reveals that in both cases the m-tuple output is not uniform. However, consecutive tap positions result in a more biased distribution than uneven tap positions, with some m-tuples not occurring at all. These biased distributions indicate a potential flaw that could be exploited for cryptanalysis.
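A toy version of the experiment can be sketched as follows: run a small LFSR, apply a Boolean filter function to a chosen set of tap positions, and count the overlapping m-tuples of the resulting keystream. The LFSR polynomial, filter function, and tap sets below are illustrative choices, not those used in the paper.

```python
from collections import Counter

def lfsr_filter_keystream(fb_taps, filter_taps, f, state, n):
    """Generate n keystream bits from a Fibonacci LFSR (feedback
    from stages fb_taps) filtered by Boolean function f applied to
    the stages listed in filter_taps."""
    out, s = [], list(state)
    for _ in range(n):
        out.append(f(*(s[i] for i in filter_taps)))
        fb = 0
        for t in fb_taps:
            fb ^= s[t]
        s = s[1:] + [fb]
    return out

def mtuple_counts(bits, m):
    """Count overlapping m-tuples of the keystream."""
    return Counter(tuple(bits[i:i + m]) for i in range(len(bits) - m + 1))

# Toy 5-stage LFSR (primitive trinomial x^5 + x^2 + 1) with a simple
# nonlinear filter; compare consecutive taps (0,1,2) to spread taps (0,2,4).
f = lambda a, b, c: a ^ (b & c)
ks_consec = lfsr_filter_keystream([0, 2], (0, 1, 2), f, [1, 0, 0, 1, 1], 2000)
ks_spread = lfsr_filter_keystream([0, 2], (0, 2, 4), f, [1, 0, 0, 1, 1], 2000)
print(sorted(mtuple_counts(ks_consec, 3).items()))
print(sorted(mtuple_counts(ks_spread, 3).items()))
```

Comparing the two printed count tables against a uniform distribution is the kind of bias measurement the abstract describes, here at toy scale.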
Abstract:
We present several new observations on the SMS4 block cipher and discuss their cryptographic significance. The crucial observation is the existence of fixed points, and of simple linear relationships between the bits of the input and output words, for each component of the round functions for some input words. This implies that the non-linear function T of SMS4 does not appear random and that the linear transformation provides poor diffusion. Furthermore, the branch number of the linear transformation in the key scheduling algorithm is shown to be less than optimal. The main security implication of these observations is that the round function is not always non-linear. Due to this linearity, it is possible to reduce the number of effective rounds of SMS4 by four. We also investigate the susceptibility of SMS4 to further cryptanalysis. Finally, we demonstrate a successful differential attack on a slightly modified variant of SMS4. These findings raise serious questions about the security provided by SMS4.
Abstract:
Optimal scheduling of voltage regulators (VRs), fixed and switched capacitors, and the voltage on the customer side of the transformer (VCT), along with the optimal allocation of VRs and capacitors, is performed using a hybrid optimisation method based on discrete particle swarm optimisation and a genetic algorithm. Direct optimisation of the tap position is not appropriate since, in general, the high voltage (HV) side voltage is not known; instead, the tap setting can be determined from the optimal VCT once the HV side voltage is known. The objective function is composed of the distribution line loss cost, the peak power loss cost, and the capacitors' and VRs' capital, operation and maintenance costs. The constraints are limits on bus voltage and feeder current, along with VR taps. The bus voltage should be maintained within the standard level and the feeder current should not exceed the feeder-rated current. The taps adjust the output voltage of the VRs to between 90 and 110% of their input voltages. For validation of the proposed method, the 18-bus IEEE system is used. The results are compared with prior publications to illustrate the benefit of the employed technique. The results also show that the lowest-cost planning for the voltage profile is achieved if a combination of capacitors, VRs and VCTs is considered.
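The tap-setting step described above, choosing a ratio between 90 and 110% of the input voltage once the HV-side voltage is known, can be sketched as a nearest-ratio search. The tap count and per-unit voltages below are illustrative, not from the paper.

```python
def vr_tap_setting(v_in, v_target, n_taps=32, lo=0.90, hi=1.10):
    """Pick the VR tap ratio (between 90% and 110% of the input
    voltage) whose output is closest to the scheduled target."""
    ratios = [lo + i * (hi - lo) / n_taps for i in range(n_taps + 1)]
    return min(ratios, key=lambda r: abs(v_in * r - v_target))

# Once the HV-side voltage is known (0.97 pu here), the tap follows
# from the scheduled customer-side voltage (1.0 pu).
ratio = vr_tap_setting(v_in=0.97, v_target=1.0)
print(ratio)  # tap ratio nearest to 1.0/0.97, i.e. 1.03125
```

This is exactly why direct tap optimisation is avoided: the optimiser works with the VCT, and the tap is recovered afterwards from the known HV-side voltage.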
Abstract:
The variability of input parameters is the most important source of overall model uncertainty. Therefore, an in-depth understanding of this variability is essential for uncertainty analysis of stormwater quality model outputs. This paper presents the outcomes of a research study which investigated the variability of pollutant build-up characteristics on road surfaces in residential, commercial and industrial land uses. It was found that build-up characteristics vary considerably, even within the same land use. Additionally, industrial land use showed relatively higher variability in maximum build-up, build-up rate and particle size distribution, whilst commercial land use displayed relatively higher variability in the pollutant-solid ratio. Among the various build-up parameters analysed, D50 (volume median diameter) displayed the highest variability for all three land uses.
Abstract:
Pedestrian movement is known to cause significant effects on indoor MIMO channels. In this paper, a statistical characterization of the indoor MIMO-OFDM channel subject to pedestrian movement is reported. The experiment used 4 transmitting and 4 receiving antennas and 114 sub-carriers at 5.2 GHz. Measurement scenarios varied from zero to ten pedestrians walking randomly between the transmitter (Tx) and receiver (Rx) arrays. The empirical cumulative distribution function (CDF) of the received fading envelope fits the Ricean distribution, with K factors ranging from 7 dB to 15 dB for the ten-pedestrian and vacant scenarios respectively. In general, as the number of pedestrians increases, the CDF slope tends to decrease proportionally. Furthermore, as the number of pedestrians increases, increasing the multipath contribution, the dynamic range of the channel capacity increases proportionally. These results are consistent with measurement results obtained in controlled scenarios for a fixed narrowband Single-Input Single-Output (SISO) link at 5.2 GHz in previous work. The described empirical characterization provides insight into the prediction of human-body shadowing effects for indoor MIMO-OFDM channels at 5.2 GHz.
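A standard moment-based K-factor estimate often used for such envelope fits can be sketched as follows (this is a textbook estimator, not necessarily the exact fitting procedure of the paper): with g = 1 - Var(r^2)/E[r^2]^2, the Ricean K factor is K = sqrt(g)/(1 - sqrt(g)). The sample sizes and true K below are illustrative.

```python
import numpy as np

def rician_k_moment_estimate(env):
    """Moment-based Ricean K-factor estimate from envelope samples:
    g = 1 - Var(r^2)/E[r^2]^2, then K = sqrt(g) / (1 - sqrt(g))."""
    p = env ** 2
    g = 1.0 - p.var() / p.mean() ** 2
    sg = np.sqrt(max(g, 0.0))
    return sg / (1.0 - sg)

# Synthesize a Ricean envelope with known K = 10 (i.e. 10 dB):
# LOS amplitude A satisfies K = A^2 / (2 * sigma^2).
rng = np.random.default_rng(0)
K_true, sigma = 10.0, 1.0
A = np.sqrt(2 * K_true) * sigma
samples = (rng.normal(A, sigma, 200_000)
           + 1j * rng.normal(0.0, sigma, 200_000))
env = np.abs(samples)
K_hat = rician_k_moment_estimate(env)
print(10 * np.log10(K_hat))  # close to 10 dB
```

Applied per sub-carrier to measured envelopes, this kind of estimator yields the 7-15 dB K-factor range the abstract reports across pedestrian scenarios.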
Abstract:
A Simulink Matlab control system model of a heavy vehicle (HV) suspension has been developed. The objective facilitated by this outcome was a working model of a HV suspension that could be used for future research. A working computer model is easier and cheaper to re-configure than a HV axle group installed on a truck; it presents less risk should something go wrong and allows more scope for variation and sensitivity analysis before embarking on further "real-world" testing. Empirical data recorded as the input and output signals of a HV suspension were used to develop the parameters for computer simulation of a linear time invariant system described by a second-order differential equation (i.e. a "2nd-order" system). Using the empirical data as an input to the computer model allowed validation of its output against the empirical data. The errors ranged from less than 1% to approximately 3% for any parameter, when comparing like-for-like inputs and outputs. The model is presented along with the results of the validation. This model will be used in future research in the QUT/Main Roads project Heavy vehicle suspensions – testing and analysis, particularly for a theoretical model of a multi-axle HV suspension with varying values of dynamic load sharing. Allowance will need to be made for the errors noted when using the computer models in this future work.
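A minimal sketch of simulating such a second-order system is shown below, using a generic unit-gain form y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u and semi-implicit Euler integration. The natural frequency and damping ratio are illustrative, not the calibrated suspension parameters from the paper.

```python
import math

def simulate_second_order(wn, zeta, u, dt):
    """Response y(t) of a unit-gain 2nd-order LTI system
    y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u(t), semi-implicit Euler."""
    y, yd, out = 0.0, 0.0, []
    for uk in u:
        ydd = wn * wn * (uk - y) - 2.0 * zeta * wn * yd
        yd += ydd * dt          # update velocity first (stable scheme)
        y += yd * dt
        out.append(y)
    return out

# Step response of a lightly damped, suspension-like system:
# 1.5 Hz natural frequency, 30% damping (illustrative values).
wn, zeta, dt = 2 * math.pi * 1.5, 0.3, 0.001
y = simulate_second_order(wn, zeta, [1.0] * 5000, dt)
print(y[-1])  # settles near the unit step input
```

Feeding recorded axle inputs through such a model and comparing its output with the measured response is the validation loop the abstract describes.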
Abstract:
Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration. The models needed to be able to be calibrated using data acquired at these locations, and their output needed to be able to be validated with data acquired at these sites; the outputs should therefore be truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than the empiricism of the macroscopic models currently used. Finally, the models needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible to produce a stand-alone model applicable to all facilities and locations in this single study; however, the scene has been set for the application of the models to a much broader range of operating conditions.
Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models to a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled, as some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations, merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams. On-ramp and total upstream flow are required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections. Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995).
The minimum average minor stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows, and pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be smaller when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and smaller still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration is required of traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing. A general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and to provide further insight into the nature of operations.
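For orientation, the classical absolute-priority capacity formula for a minor stream facing Cowan M3 major-stream headways (after Troutbeck) can be evaluated as below; the study's net limited-priority model modifies this to account for the kerb lane's limited priority. The parameter values here are illustrative, chosen within the ranges quoted above.

```python
import math

def m3_capacity(q_major, alpha, delta, t_c, t_f):
    """Absolute-priority minor-stream capacity (veh/s) with Cowan M3
    major-stream headways: alpha = proportion of free headways,
    delta = minimum headway (s), t_c = critical gap (s),
    t_f = follow-on time (s)."""
    lam = alpha * q_major / (1.0 - delta * q_major)
    return (alpha * q_major * math.exp(-lam * (t_c - delta))
            / (1.0 - math.exp(-lam * t_f)))

# Kerb-lane flow 1200 veh/h, 75% of headways free, 1 s minimum
# headway, critical gap 2.0 s, follow-on time 1.1 s (illustrative,
# within the calibrated ranges described above).
q = 1200 / 3600.0
cap = m3_capacity(q, 0.75, 1.0, 2.0, 1.1) * 3600
print(round(cap))  # on-ramp capacity in veh/h
```

Sweeping t_c between the follow-on time and the follow-on time plus the 1 s minimum headway, as the calibration above suggests, bounds the achievable merge capacity.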
Abstract:
The new configuration proposed in this paper for the Marx Generator (MG) aims to generate high voltage for pulsed power applications with a reduced number of semiconductor components and a more efficient load-supplying process. The main idea is to charge two groups of capacitors in parallel through an inductor, taking advantage of the resonant phenomenon to charge each capacitor up to double the input voltage level. In each resonant half-cycle, one of the capacitor groups is charged; eventually the charged capacitors are connected in series and the sum of the capacitor voltages appears at the output of the topology. This topology can be considered a modified Marx generator that works on the resonant concept. Simulation models of this converter have been investigated in the Matlab/Simulink platform and a prototype setup has been implemented in the laboratory. The results of both fully confirm the proper operation of the converter.
Abstract:
The new configuration proposed in this paper for the Marx Generator (MG) aims to generate high voltage for pulsed power applications with a reduced number of semiconductor components and a more efficient load-supplying process. The main idea is to charge two groups of capacitors in parallel through an inductor, taking advantage of the resonant phenomenon to charge each capacitor up to double the input voltage level. In each resonant half-cycle, one of the capacitor groups is charged; eventually the charged capacitors are connected in series and the sum of the capacitor voltages appears at the output of the topology. This topology can be considered a modified Marx generator that works on the resonant concept. Simulation models of this converter have been investigated in the Matlab/Simulink platform and the results confirm the proper operation of the converter.
Abstract:
INTRODUCTION. Following anterior thoracoscopic instrumentation and fusion for the treatment of thoracic AIS, implant-related complications have been reported in as many as 20.8% of cases. Currently, the magnitudes of the forces applied to the spine during anterior scoliosis surgery are unknown. The aim of this study was to measure the segmental compressive forces applied during anterior single rod instrumentation in a series of adolescent idiopathic scoliosis patients. METHODS. A force transducer was designed, constructed and retrofitted to a surgical cable compression tool routinely used to apply segmental compression during anterior scoliosis correction. Transducer output was continuously logged during the compression of each spinal joint, and the output at completion was converted to an applied compression force using calibration data. The angle between adjacent vertebral body screws was also measured on intra-operative frontal plane fluoroscope images taken both before and after each joint compression; the difference in angle between the two images was calculated as an estimate of the achieved correction at each spinal joint. RESULTS. Force measurements were obtained for 15 scoliosis patients (aged 11-19 years) with single thoracic curves (Cobb angles 47°-67°). In total, 95 spinal joints were instrumented. The average force applied at a single joint was 540 N (±229 N), ranging between 88 N and 1018 N. Experimental error in the force measurement, determined from transducer calibration, was ±43 N. A trend towards higher forces applied at joints close to the apex of the scoliosis was observed. The average joint correction angle measured by fluoroscope imaging was 4.8° (±2.6°, range 0°-12.6°). CONCLUSION. This study has quantified, in vivo, the intra-operative correction forces applied by the surgeon during anterior single rod instrumentation. These data provide a useful contribution towards an improved understanding of the biomechanics of scoliosis correction.
In particular, these data will be used as input for developing patient-specific finite element simulations of scoliosis correction surgery.
Abstract:
There have been notable advances in learning to control complex robotic systems using methods such as Locally Weighted Regression (LWR). In this paper we explore some potential limits of LWR for robotic applications, particularly investigating its application to systems with a long horizon of temporal dependence. We define the horizon of temporal dependence as the delay from a control input to a desired change in output. LWR alone cannot be used in a temporally dependent system to find meaningful control values from only the current state variables and output, as the relationship between the input and the current state is under-constrained. By introducing a receding horizon of the future output states of the system, we show that sufficient constraint is applied to learn good solutions through LWR. The new method, Receding Horizon Locally Weighted Regression (RH-LWR), is demonstrated through one-shot learning on a real Series Elastic Actuator controlling a pendulum.
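A minimal sketch of the LWR building block (plain locally weighted linear regression, without the receding-horizon extension the paper introduces) might look like this; the kernel bandwidth and the test function are illustrative.

```python
import numpy as np

def lwr_predict(X, y, x_query, bandwidth):
    """Locally Weighted Regression: fit a linear model with Gaussian
    weights centred on x_query, then evaluate it at x_query."""
    Xb = np.hstack([X, np.ones((len(X), 1))])        # add bias column
    xq = np.append(x_query, 1.0)
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    W = np.diag(w)
    beta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return xq @ beta

# Learn y = sin(x) from noisy samples; each query is answered by a
# local linear fit dominated by nearby training points.
rng = np.random.default_rng(1)
X = rng.uniform(0, 2 * np.pi, (300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 300)
pred = lwr_predict(X, y, np.array([np.pi / 2]), 0.3)
print(pred)  # close to sin(pi/2) = 1
```

RH-LWR, as described above, adds a receding horizon of future output states to the regression inputs, supplying the constraint that a single current state cannot provide in long-horizon temporally dependent systems.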
Abstract:
This paper establishes a practical stability result for discrete-time output feedback control involving mismatch between the exact system to be stabilised and the approximating system used to design the controller. The practical stability is in the sense of an asymptotic bound on the amount of error bias introduced by the model approximation, and is established using local consistency properties of the systems. Importantly, the practical stability established here does not require the approximating system to be of the same model type as the exact system. Examples are presented to illustrate the nature of our practical stability result.