951 results for Robust Performance
Abstract:
The IEEE 802.15.4 has been adopted as a communication protocol standard for Low-Rate Wireless Personal Area Networks (LR-WPANs). While it appears as a promising candidate solution for Wireless Sensor Networks (WSNs), its adequacy must be carefully evaluated. In this paper, we analyze the performance limits of the slotted CSMA/CA medium access control (MAC) mechanism in the beacon-enabled mode for broadcast transmissions in WSNs. The motivation for evaluating the beacon-enabled mode is due to its flexibility and potential for WSN applications as compared to the non-beacon-enabled mode. Our analysis is based on an accurate simulation model of the slotted CSMA/CA mechanism on top of a realistic physical layer, with respect to the IEEE 802.15.4 standard specification. The performance of the slotted CSMA/CA is evaluated and analyzed for different network settings to understand the impact of the protocol attributes (superframe order, beacon order and backoff exponent), the number of nodes and the data frame size on the network performance, namely in terms of throughput (S), average delay (D) and probability of success (Ps). We also analytically evaluate the impact of the slotted CSMA/CA overheads on the saturation throughput. We introduce the concept of utility (U), as a combination of two or more metrics, to determine the best offered load range for an optimal behavior of the network. We show that the optimal network performance using slotted CSMA/CA occurs for offered loads in the range of 35% to 60%, with respect to a utility function proportional to the network throughput (S) divided by the average delay (D).
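A minimal sketch of how the utility metric described above could be evaluated, assuming hypothetical throughput and delay values measured at several offered loads (the abstract only states that U is proportional to S divided by D; the data and the 80% threshold below are illustrative assumptions, not the paper's results):

```python
# Illustrative sketch (not the authors' simulation): given throughput S and
# average delay D measured at several offered loads G, compute the utility
# U = S / D and locate the offered-load range where U stays near its maximum.
import numpy as np

G = np.array([0.1, 0.2, 0.35, 0.5, 0.6, 0.75, 0.9])       # offered load, hypothetical
S = np.array([0.09, 0.18, 0.30, 0.38, 0.40, 0.37, 0.33])  # throughput, hypothetical
D = np.array([0.02, 0.03, 0.05, 0.08, 0.12, 0.25, 0.60])  # average delay (s), hypothetical

U = S / D                      # utility: throughput per unit delay
best = U >= 0.8 * U.max()      # offered loads within 80% of the peak utility
print("Peak utility at G =", G[np.argmax(U)])
print("Near-optimal offered-load range:", G[best].min(), "to", G[best].max())
```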
Abstract:
Penalty and Barrier methods are normally used to solve constrained Nonlinear Optimization Problems. These problems appear in areas such as engineering and are often characterised by the fact that the functions involved (objective and constraints) are non-smooth and/or their derivatives are not known. This means that optimization methods based on derivatives cannot be used. A Java-based API was implemented, including only derivative-free optimization methods, to solve both constrained and unconstrained problems, which includes Penalty and Barrier methods. In this work a new penalty function, based on Fuzzy Logic, is presented. This function imposes a progressive penalization on solutions that violate the constraints: a low penalization when the violation of the constraints is low and a heavy penalization when the violation is high. The value of the penalization is not known beforehand; it is the outcome of a fuzzy inference engine. Numerical results comparing the proposed function with two of the classic penalty/barrier functions are presented. From the presented results one can conclude that the proposed penalty function, besides being very robust, also exhibits very good performance.
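The following sketch illustrates the general idea of a progressive, fuzzy-flavoured penalty; it is an assumption of ours for illustration (triangular membership grades and the weights 1/10/100), not the paper's actual inference engine or its Java API:

```python
# Minimal sketch: a penalty whose magnitude grows progressively with the
# constraint violation, obtained by weighting "low", "medium" and "high"
# membership grades of the violation and defuzzifying them into one factor.
import numpy as np

def memberships(v, v_max=1.0):
    """Triangular membership grades of the normalized violation v in [0, v_max]."""
    x = min(max(v / v_max, 0.0), 1.0)
    low = max(1.0 - 2.0 * x, 0.0)
    med = max(1.0 - abs(2.0 * x - 1.0), 0.0)
    high = max(2.0 * x - 1.0, 0.0)
    return low, med, high

def fuzzy_penalty(violation, weights=(1.0, 10.0, 100.0)):
    """Penalty factor inferred from the membership grades (centroid-style weighting)."""
    grades = memberships(violation)
    s = sum(grades)
    return sum(g * w for g, w in zip(grades, weights)) / s if s > 0 else 0.0

def penalized_objective(f, constraints, x):
    """f(x) plus a progressive penalty on the total constraint violation."""
    total_violation = sum(max(0.0, g(x)) for g in constraints)  # g(x) <= 0 is feasible
    return f(x) + fuzzy_penalty(total_violation) * total_violation

# Example: minimize f(x) = x^2 subject to x >= 0.5 (written as 0.5 - x <= 0)
f = lambda x: x ** 2
cons = [lambda x: 0.5 - x]
for x in (0.0, 0.4, 0.5, 1.0):
    print(x, penalized_objective(f, cons, x))
```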
Abstract:
Applications involving biosignals, such as Electrocardiography (ECG), are becoming more pervasive with the extension towards non-intrusive scenarios, targeting ambulatory healthcare monitoring, emotion assessment, among many others. In this study we introduce a new type of silver/silver chloride (Ag/AgCl) electrodes based on a paper substrate and produced using an inkjet printing technique. This type of electrode can increase the potential applications of biosignal acquisition technologies for everyday life use, given the several advantages, such as cost reduction and easier recycling, that result from the approach explored in our work. We performed a comparison study to assess the quality of this new electrode type, in which ECG data was collected with three types of Ag/AgCl electrodes: i) gelled; ii) dry; iii) paper-based inkjet-printed. We also compared the performance of each electrode when data was acquired using a professional-grade gold-standard device and a low-cost platform. Experimental results showed that data acquired using our proposed inkjet-printed electrode is highly correlated with data obtained through conventional electrodes. Moreover, the electrodes are robust across high-end and low-end data acquisition devices. Copyright © 2014 SCITEPRESS - Science and Technology Publications. All rights reserved.
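A hedged sketch of the kind of similarity check such a comparison study relies on; the signals, sampling rate and noise levels below are synthetic placeholders, not the study's recordings or its exact processing pipeline:

```python
# Compare an ECG segment recorded with a conventional gelled electrode against
# the same segment recorded with a paper-based inkjet-printed electrode using
# the Pearson correlation coefficient. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                                    # sampling rate (Hz), assumed
t = np.arange(0, 5, 1 / fs)
ecg_gelled = np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)
ecg_paper = np.sin(2 * np.pi * 1.2 * t) + 0.10 * rng.standard_normal(t.size)

def normalize(x):
    return (x - x.mean()) / x.std()

r = np.corrcoef(normalize(ecg_gelled), normalize(ecg_paper))[0, 1]
print(f"Correlation between electrode types: r = {r:.3f}")
```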
Abstract:
This paper addresses the use of multidimensional scaling in the evaluation of controller performance. Several nonlinear systems are analyzed based on the closed loop time response under the action of a reference step input signal. Three alternative performance indices, based on the time response, Fourier analysis, and mutual information, are tested. The numerical experiments demonstrate the feasibility of the proposed methodology and motivate its extension for other performance measures and new classes of nonlinearities.
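A rough sketch of the general workflow described above, under our own simplifying assumptions (synthetic first-order step responses and a plain Euclidean distance as the time-response index; the paper's actual indices and systems are not reproduced here):

```python
# Build a dissimilarity matrix between closed-loop step responses and embed it
# in 2-D with multidimensional scaling, so controllers with similar behaviour
# appear close together on the resulting map.
import numpy as np
from sklearn.manifold import MDS

t = np.linspace(0, 10, 500)
# Synthetic step responses of a few hypothetical closed loops
responses = [1 - np.exp(-t / tau) for tau in (0.5, 1.0, 2.0, 4.0)]

n = len(responses)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dist[i, j] = np.linalg.norm(responses[i] - responses[j])  # time-response index

embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
points = embedding.fit_transform(dist)
print(points)   # 2-D map of the controllers
```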
Abstract:
A noncoherent vector delay/frequency-locked loop (VDFLL) architecture for GNSS receivers is proposed. A bank of code and frequency discriminators feeds a central extended Kalman filter that estimates the receiver's position and velocity, as well as the clock error. The performance of the VDFLL architecture is compared with that of the classic scalar receiver, both for scintillation and multipath scenarios, in terms of position errors. We show that the proposed solution is superior to conventional scalar receivers, which tend to lose lock rapidly due to sudden drops in the received signal power.
Abstract:
We analyze the advantages and drawbacks of a vector delay/frequency-locked loop (VDFLL) architecture with respect to the conventional scalar and the vector delay-locked loop (VDLL) architectures for GNSS receivers in harsh scenarios that include ionospheric scintillation, multipath, and high-dynamics motion. The VDFLL is constituted by a bank of code and frequency discriminators feeding a central extended Kalman filter (EKF) that estimates the receiver's position, velocity, and clock bias. Both the code and frequency loops are closed vectorially through the EKF. The VDLL closes the code loop vectorially and the phase loops through individual PLLs, while the scalar receiver closes both loops by means of individual, independent PLLs and DLLs.
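A minimal sketch of the navigation EKF at the core of such a vectorized receiver, under our own simplifications (a constant-velocity model, a linearized update and placeholder noise values; the full VDFLL measurement model, satellite geometry and discriminator design are not reproduced here):

```python
# The state holds position, velocity, clock bias and clock drift; prediction
# uses a constant-velocity model, and the update is driven by the stacked
# code/frequency discriminator outputs of every tracked satellite (here a
# generic linearized update with Jacobian H).
import numpy as np

dt = 0.02                        # loop update interval (s), assumed
nx = 8                           # [x, y, z, vx, vy, vz, clock_bias, clock_drift]

F = np.eye(nx)
F[0:3, 3:6] = dt * np.eye(3)     # position integrates velocity
F[6, 7] = dt                     # clock bias integrates clock drift
Q = 1e-3 * np.eye(nx)            # process noise (placeholder values)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    """Generic EKF update; z stacks the code and frequency discriminator outputs."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)      # linearized innovation
    P = (np.eye(nx) - K @ H) @ P
    return x, P
```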
Abstract:
Floating-point computing with more than one TFLOP of peak performance is already a reality in recent Field-Programmable Gate Arrays (FPGA). General-Purpose Graphics Processing Units (GPGPU) and recent many-core CPUs have also taken advantage of recent technological innovations in integrated circuit (IC) design and have dramatically improved their peak performances. In this paper, we compare the trends of these computing architectures for high-performance computing and survey these platforms in the execution of algorithms belonging to different scientific application domains. Trends in peak performance, power consumption and sustained performance for particular applications show that the gap between FPGAs and GPUs or many-core CPUs is widening, moving FPGAs away from high-performance computing with intensive floating-point calculations. FPGAs become competitive for custom floating-point or fixed-point representations, for smaller input sizes of certain algorithms, for combinational logic problems and for parallel map-reduce problems. © 2014 Technical University of Munich (TUM).
Abstract:
A unified architecture for the fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. In contrast to other designs with similar functionality, the presented architecture is supported on a scalable, modular and completely configurable processing structure. This flexible structure not only allows the architecture to be easily reconfigured to support different transform kernels, but also permits its resizing to efficiently support transforms of different orders (e.g. order-4, order-8, order-16 and order-32). Consequently, it is not only highly suitable for realizing high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only the reduced subset of transforms used by a specific video standard. The experimental results obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above, capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 x 4,320 at 30 fps) in real time.
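A short sketch of the row-column decomposition that such unified 2-D transform structures exploit; this is a functional illustration only (the example kernel is the order-4 integer core transform used by HEVC), not a description of the proposed hardware datapath:

```python
# An order-N 2-D transform Y = C · X · C^T is computed as 1-D transforms over
# the columns followed by 1-D transforms over the rows, so one configurable
# 1-D structure can serve every supported transform order and kernel.
import numpy as np

C4 = np.array([[64,  64,  64,  64],
               [83,  36, -36, -83],
               [64, -64, -64,  64],
               [36, -83,  83, -36]])    # HEVC order-4 core transform kernel

def forward_2d(X, C):
    # C @ X applies the 1-D kernel along the columns of X;
    # multiplying the result by C.T applies it along the rows.
    return C @ X @ C.T

X = np.random.randint(-128, 128, size=(4, 4))   # a residual block, synthetic
print(forward_2d(X, C4))
```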
Abstract:
Many studies have demonstrated the relationship between alpha activity and central visual ability, in which visual ability is usually assessed through static stimuli. Beyond static conditions, however, the real environment often changes dynamically, and peripheral visual ability in a dynamic environment (i.e., dynamic peripheral visual ability) is important for everyone. So far, no work has reported whether there is a relationship between dynamic peripheral visual ability and alpha activity. Thus, the objective of this study was to investigate their relationship. Sixty-two soccer players performed a newly designed peripheral vision task in which the visual stimuli were dynamic, while their EEG signals were recorded from the Cz, O1, and O2 locations. The relationship between dynamic peripheral visual performance and alpha activity was examined by the percentage-bend correlation test. The results indicated no significant correlation between dynamic peripheral visual performance and alpha amplitudes in the eyes-open and eyes-closed resting conditions. However, this was not the case for the alpha activity during the peripheral vision task: dynamic peripheral visual performance showed significant positive inter-individual correlations with the amplitudes in the alpha band (8-12 Hz) and the individual alpha band (IAB) during the peripheral vision task. A potential application of this finding is to improve dynamic peripheral visual performance by up-regulating alpha activity using neuromodulation techniques.
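A rough sketch of the analysis idea, under clearly stated substitutions: the study uses the percentage-bend correlation (a robust statistic not shipped with SciPy), so ordinary Pearson correlation is used below purely as a stand-in, and the EEG data and scores are synthetic placeholders:

```python
# Estimate each participant's alpha-band (8-12 Hz) amplitude during the task
# and correlate it with the peripheral vision score across participants.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

fs = 256                                            # sampling rate (Hz), assumed
rng = np.random.default_rng(0)

def alpha_amplitude(eeg):
    f, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    band = (f >= 8) & (f <= 12)
    return np.sqrt(psd[band].mean())                # amplitude-like measure of alpha power

n_subjects = 20
eeg_task = rng.standard_normal((n_subjects, fs * 60))   # 1 min of task EEG each, synthetic
scores = rng.uniform(0, 100, n_subjects)                # peripheral vision scores, synthetic

alpha = np.array([alpha_amplitude(x) for x in eeg_task])
r, p = pearsonr(alpha, scores)
print(f"r = {r:.2f}, p = {p:.3f}")
```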
Abstract:
Health services
Abstract:
It is important to understand and forecast a typical or a particular household's daily consumption in order to design and size suitable renewable energy systems and energy storage. In this research on Short Term Load Forecasting (STLF), Artificial Neural Networks (ANN) were used and, despite the unpredictability of the consumption, it was shown that the electricity consumption of a household can be forecast with confidence. ANNs are recognized as a suitable methodology for modeling hourly and daily energy consumption and load forecasting. Input variables such as apartment area, number of occupants, electrical appliance consumption and Boolean inputs such as an hourly meter system were considered. Furthermore, the investigation carried out aims to define an ANN architecture and a training algorithm that yield a robust model for forecasting energy consumption in a typical household. It was observed that a feed-forward ANN trained with the Levenberg-Marquardt algorithm provided good performance. This research used a database of consumption records logged in 93 real households in Lisbon, Portugal, between February 2000 and July 2001, including both weekdays and weekends. The results show that the ANN approach provides a reliable model for forecasting household electric energy consumption and load profile. © 2014 The Author.
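A hedged sketch of the modelling setup described above: the study trains a feed-forward ANN with the Levenberg-Marquardt algorithm, which scikit-learn does not provide, so MLPRegressor with the L-BFGS solver is used here purely as a stand-in; the feature names mirror the inputs listed in the abstract, and all data below is synthetic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
area = rng.uniform(40, 150, n)                # apartment area (m^2)
occupants = rng.integers(1, 6, n)             # number of occupants
appliances = rng.uniform(0.1, 3.0, n)         # appliance consumption (kW)
hour = rng.integers(0, 24, n)
hour_onehot = np.eye(24)[hour]                # Boolean hour-of-day inputs

X = np.column_stack([area, occupants, appliances, hour_onehot])
# Synthetic hourly load (kWh), just so the example runs end to end
y = (0.002 * area + 0.3 * occupants + appliances
     + 0.2 * np.sin(2 * np.pi * hour / 24) + 0.1 * rng.standard_normal(n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out hours:", round(model.score(X_te, y_te), 3))
```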
Abstract:
Introduction: in a multimodality environment there is a requirement for greater understanding of the imaging technologies used, of the limitations of these technologies, and of how best to interpret the results, together with dose optimization, the introduction of new techniques, and the gap between current practice and best practice. Incidental findings in low-dose CT images obtained as part of the hybrid imaging process are an increasing phenomenon with advancing CT technology, with resultant ethical and medico-legal dilemmas; understanding the limitations of these procedures is important when reporting images and recommending follow-up. A free-response observer performance study was used to evaluate lesion detection in low-dose CT images obtained during attenuation correction acquisitions for myocardial perfusion imaging, on two hybrid imaging systems.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in fulfilment of the requirements for the degree of Master in Industrial Engineering.
Abstract:
A robot’s drive has to exert appropriate driving forces that can keep its arm and end effector at the proper position, velocity and acceleration, and simultaneously has to compensate for the effects of the contact forces arising between the tool and the workpiece depending on the needs of the actual technological operation. Balancing the effects of a priori unknown external disturbance forces and the inaccuracies of the available dynamic model of the robot is also important. Technological tasks requiring well prescribed end effector trajectories and contact forces simultaneously are challenging control problems that can be tackled in various manners.
Abstract:
ABSTRACT OBJECTIVE To assess the impact of implementing long-stay beds for patients of low complexity and high dependency in small hospitals on the performance of an emergency referral tertiary hospital. METHODS For this longitudinal study, we identified hospitals in three municipalities of a regional department of health covered by tertiary care that each supplied 10 long-stay beds. Patients were transferred to hospitals in those municipalities based on a specific protocol. The outcome of transferred patients was obtained by daily monitoring. Confounding factors were adjusted for by logistic and Cox semiparametric regression. RESULTS Between September 1, 2013 and September 30, 2014, 97 patients were transferred, 72.1% male, with a mean age of 60.5 years (SD = 1.9), for whom 108 transfers were performed. Of these patients, 41.7% died, 33.3% were discharged, 15.7% returned to tertiary care, and only 9.3% remained hospitalized until the end of the analysis period. We estimated the Charlson comorbidity index – 0 (n = 28 [25.9%]), 1 (n = 31 [56.5%]) and ≥ 2 (n = 19 [17.5%]) – the only variable that increased the chance of death or return to the tertiary hospital (odds ratio = 2.4; 95%CI 1.3–4.4). The length of stay in long-stay beds was 4,253 patient-days, which would represent 607 patients at the tertiary hospital, considering an average hospital stay of seven days. The tertiary hospital increased the number of patients treated by 50.0% in Intensive Care, 66.0% in Neurology and 9.3% overall. Patients stayed in long-stay beds mainly within the first 30 (50.0%) and 60 (75.0%) days. CONCLUSIONS Implementing long-stay beds increased the number of patients treated in tertiary care, both in general and in system bottleneck areas such as Neurology and Intensive Care. The Charlson comorbidity index is associated with the chance of patient death or return to tertiary care, even when adjusted for possible confounding factors.
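A sketch of the kind of confounder-adjusted survival analysis mentioned above, using the lifelines package; the data generated below is synthetic and only loosely mimics the reported distribution of the Charlson index, so neither the data nor the fitted coefficients correspond to the study's results:

```python
# Cox proportional-hazards model relating time to death or return to the
# tertiary hospital with the Charlson comorbidity index (synthetic data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 108
charlson = rng.choice([0, 1, 2], size=n, p=[0.26, 0.57, 0.17])
time_days = rng.exponential(scale=30 / (1 + charlson))   # shorter stays for higher index
event = (rng.random(n) < 0.6).astype(int)                # death or return observed

df = pd.DataFrame({"charlson": charlson, "T": time_days, "E": event})
cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
cph.print_summary()    # hazard ratio associated with the Charlson index
```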