955 results for logistics performance measuring


Relevance: 20.00%

Abstract:

The IEEE 802.15.4 standard has been adopted as a communication protocol for Low-Rate Wireless Personal Area Networks (LR-WPANs). While it appears to be a promising candidate for Wireless Sensor Networks (WSNs), its adequacy must be carefully evaluated. In this paper, we analyze the performance limits of the slotted CSMA/CA medium access control (MAC) mechanism in beacon-enabled mode for broadcast transmissions in WSNs. The motivation for evaluating the beacon-enabled mode is its flexibility and potential for WSN applications as compared to the non-beacon-enabled mode. Our analysis is based on an accurate simulation model of the slotted CSMA/CA mechanism on top of a realistic physical layer, in accordance with the IEEE 802.15.4 standard specification. The performance of slotted CSMA/CA is evaluated and analyzed for different network settings to understand the impact of the protocol attributes (superframe order, beacon order and backoff exponent), the number of nodes and the data frame size on network performance, namely in terms of throughput (S), average delay (D) and probability of success (Ps). We also analytically evaluate the impact of the slotted CSMA/CA overheads on the saturation throughput. We introduce the concept of utility (U), a combination of two or more metrics, to determine the offered load range for optimal network behavior. We show that optimal network performance using slotted CSMA/CA occurs for offered loads in the range of 35% to 60%, with respect to a utility function proportional to the network throughput (S) divided by the average delay (D).
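As a minimal illustration of how such a utility metric can be screened over the offered load, the sketch below uses placeholder throughput and delay curves (illustrative shapes only, not the paper's simulation results):

```python
import numpy as np

# Hypothetical throughput S(G) and average delay D(G) sampled over the
# offered load G; real curves would come from the slotted CSMA/CA
# simulations described above.
G = np.linspace(0.05, 1.0, 20)     # offered load (fraction of capacity)
S = 4.0 * G * np.exp(-2.0 * G)     # illustrative saturating throughput
D = 0.01 * np.exp(3.0 * G)         # illustrative growing average delay

U = S / D                          # utility proportional to S / D

# Best offered-load range: loads whose utility stays within 90% of the peak.
best = G[U >= 0.9 * U.max()]
print(f"optimal offered load between {best.min():.2f} and {best.max():.2f}")
```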

Relevance: 20.00%

Abstract:

Indoor location systems cannot rely on technologies such as GPS (Global Positioning System) to determine the position of a mobile terminal, because its signals are blocked by obstacles such as walls, ceilings and roofs. In such environments, alternative techniques, such as the use of wireless networks, should be considered. The location is estimated by measuring and analysing one of the parameters of the wireless signal, usually the received power. One of the techniques used to estimate locations using wireless networks is fingerprinting. This technique comprises two phases: in the first phase, data is collected from the scenario and stored in a database; the second phase consists in determining the location of the mobile node by comparing the data collected from the wireless transceiver with the data previously stored in the database. In this paper, an approach to localisation using fingerprinting based on Fuzzy Logic and pattern searching is presented. The performance of the proposed approach is compared with that of classic methods, showing an improvement between 10.24% and 49.43%, depending on the mobile node and the Fuzzy Logic parameters.
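For reference, a minimal sketch of the two fingerprinting phases using the classic nearest-neighbour matching that serves as the baseline; the radio map values and access-point count are hypothetical, and the paper's Fuzzy Logic / pattern-search variant is not reproduced here:

```python
import numpy as np

# Offline phase: hypothetical radio map of RSSI fingerprints (dBm) from
# three access points, measured at known (x, y) positions.
radio_map = {
    (0.0, 0.0): [-40, -70, -65],
    (5.0, 0.0): [-55, -52, -71],
    (0.0, 5.0): [-62, -68, -45],
    (5.0, 5.0): [-70, -50, -50],
}

def locate(sample):
    """Online phase: return the surveyed position whose stored
    fingerprint is closest to the live RSSI sample in signal space
    (classic Euclidean matching; the fuzzy variant replaces this)."""
    return min(radio_map,
               key=lambda pos: np.linalg.norm(np.array(radio_map[pos]) - sample))

print(locate(np.array([-54, -53, -70])))   # -> (5.0, 0.0)
```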

Relevance: 20.00%

Abstract:

A simple procedure to measure the cohesive laws of bonded joints under mode I loading using the double cantilever beam test is proposed. The method only requires recording the applied load–displacement data and measuring the crack opening displacement at the crack tip in the course of the test. The strain energy release rate is obtained by a procedure combining the Timoshenko beam theory, the specimen's compliance and the crack equivalent concept. With the proposed approach, the influence of the fracture process zone is taken into account, which is fundamental for an accurate estimation of the failure process details. The cohesive law is obtained by differentiating the strain energy release rate with respect to the crack opening displacement. The model was validated numerically considering three representative cohesive laws. Numerical simulations using finite element analysis including cohesive zone modeling were performed, and the good agreement between the input and resulting laws for all the cases considered validates the model. An experimental confirmation was also performed by comparing the numerical and experimental load–displacement curves. The numerical load–displacement curves were obtained by fitting typical cohesive laws to the ones measured experimentally with the proposed approach, using finite element analysis including cohesive zone modeling. Once again, good agreement was obtained, demonstrating the good performance of the proposed methodology.
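In compact form, the two relations at the core of such a procedure are the Irwin–Kies equation for the strain energy release rate and its differentiation with respect to the crack opening displacement; the notation below (P: applied load, C: specimen compliance, b: specimen width, a_eq: equivalent crack length, w: crack opening displacement at the tip) is introduced for illustration, and the Timoshenko-beam expression for C(a_eq) is omitted:

```latex
% Strain energy release rate via the Irwin--Kies relation, with the
% compliance written as a function of the equivalent crack length:
G_{I} = \frac{P^{2}}{2b}\,\frac{\mathrm{d}C}{\mathrm{d}a_{eq}}

% Cohesive law recovered by differentiation with respect to the crack
% opening displacement measured at the crack tip:
\sigma(w) = \frac{\partial G_{I}}{\partial w}
```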

Relevance: 20.00%

Abstract:

The performance of the Weather Research and Forecasting (WRF) model in wind simulation was evaluated under different numerical and physical options for an area of Portugal located in complex terrain and characterized by a significant wind energy resource. Grid nudging and the integration time of the simulations were the numerical options tested. Since the goal is to simulate the near-surface wind, the physical parameterization schemes under evaluation were those concerning the boundary layer. The influences of local terrain complexity and of the simulation domain resolution on the model results were also studied. Data from three wind measuring stations located within the chosen area were compared with the model results in terms of Root Mean Square Error, Standard Deviation Error and Bias. Wind speed histograms and occurrence and energy wind roses were also used for model evaluation. Globally, the model accurately reproduced the local wind regime, despite a significant underestimation of the wind speed. The wind direction is reasonably simulated by the model, especially in wind regimes with a clear dominant sector; in the presence of low wind speeds, however, the characterization of the wind direction (observed and simulated) is very subjective and led to higher deviations between simulations and observations. Within the tested options, results show that using grid nudging in simulations whose integration time does not exceed 2 days is the best numerical configuration, and that the parameterization set composed of the MM5, Yonsei University and Noah physical schemes is the most suitable for this site. Results were poorer at sites with higher terrain complexity, mainly due to limitations of the terrain data supplied to the model. Increasing the simulation domain resolution alone is not enough to significantly improve model performance. Results suggest that error minimization in wind simulation can be achieved by testing and choosing a suitable numerical and physical configuration for the region of interest, together with the use of high-resolution terrain data, if available.
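As a reference for how the three error metrics relate, a minimal sketch with placeholder data (the observed and simulated series below are illustrative, not the station data used in the study):

```python
import numpy as np

# Illustrative paired series of observed and WRF-simulated wind speed
# (m/s) at one station; the study uses data from three real stations.
obs = np.array([5.1, 6.3, 4.8, 7.2, 6.0, 5.5])
sim = np.array([4.4, 5.9, 4.1, 6.5, 5.2, 4.9])

err  = sim - obs
bias = err.mean()                    # Bias: systematic under/overestimation
rmse = np.sqrt((err ** 2).mean())    # Root Mean Square Error
sde  = err.std()                     # Standard Deviation Error
# The three are linked by the identity RMSE^2 = Bias^2 + SDE^2.
print(f"Bias={bias:+.2f}  RMSE={rmse:.2f}  SDE={sde:.2f} m/s")
```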

Relevance: 20.00%

Abstract:

Wind resource evaluation at two sites in Portugal was performed using the Weather Research and Forecasting (WRF) mesoscale modelling system and the wind resource analysis tool commonly used within the wind power industry, the Wind Atlas Analysis and Application Program (WAsP) microscale model. Wind measurement campaigns were conducted at the selected sites, allowing a comparison between in situ measurements and simulated wind in terms of flow characteristics and energy yield estimates. Three different methodologies were tested, aiming to provide an overview of their benefits and limitations for wind resource estimation. In the first methodology, the mesoscale model acts as a set of "virtual" wind measuring stations: wind data was computed by WRF for both sites and inserted directly as input into WAsP. In the second approach, the same procedure was followed, but the terrain influences induced by the mesoscale model's low-resolution terrain data were removed from the simulated wind data. In the third methodology, the simulated wind data was extracted at the top of the planetary boundary layer for both sites, to assess whether the use of geostrophic winds (which, by definition, are not influenced by the local terrain) can improve the models' performance. The results obtained with these methodologies were compared with those from in situ measurements in terms of mean wind speed, Weibull probability density function parameters and production estimates, considering the installation of one wind turbine at each site. Results showed that the second approach produces values closest to the measured ones, with fairly acceptable deviations in terms of estimated annual production. However, mesoscale output should not be used directly in wind farm siting projects, mainly due to the poor resolution of the mesoscale model terrain data. Instead, the use of mesoscale output in microscale models should be seen as a valid alternative to in situ data, mainly for preliminary wind resource assessments, although mesoscale–microscale coupling in areas with complex topography should be applied with extreme caution.
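A minimal sketch of the Weibull fit underlying such wind resource statistics; the speed series is a placeholder, not the campaign data:

```python
import numpy as np
from scipy import stats

# Placeholder series of wind speed observations (m/s); the measurement
# campaigns described above would supply the real data.
speeds = np.array([4.2, 6.8, 5.1, 9.3, 7.7, 3.9, 8.4, 6.1, 5.6, 7.0])

# Two-parameter Weibull fit (location fixed at zero), as commonly used
# in wind resource assessment.
shape_k, _, scale_A = stats.weibull_min.fit(speeds, floc=0)
print(f"Weibull k = {shape_k:.2f}, A = {scale_A:.2f} m/s")

# Mean wind speed implied by the fitted distribution, for comparison
# against the measured mean.
print(f"fitted mean = {stats.weibull_min.mean(shape_k, loc=0, scale=scale_A):.2f} m/s")
```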

Relevance: 20.00%

Abstract:

This paper addresses the use of multidimensional scaling in the evaluation of controller performance. Several nonlinear systems are analyzed based on the closed-loop time response under the action of a reference step input signal. Three alternative performance indices, based on the time response, Fourier analysis and mutual information, are tested. The numerical experiments demonstrate the feasibility of the proposed methodology and motivate its extension to other performance measures and new classes of nonlinearities.
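A minimal sketch of the multidimensional scaling step, under stated assumptions: the dissimilarity matrix below is a hypothetical set of distances between controller performance-index vectors, not the paper's data:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarities between four controllers, e.g.
# distances between index vectors computed from each closed-loop step
# response (time-response, Fourier or mutual-information based).
D = np.array([[0.0, 1.2, 3.1, 2.4],
              [1.2, 0.0, 2.8, 2.0],
              [3.1, 2.8, 0.0, 0.9],
              [2.4, 2.0, 0.9, 0.0]])

# Embed the controllers in 2-D so that map distances approximate D;
# nearby points then correspond to similarly behaving controllers.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print(coords)
```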

Relevance: 20.00%

Abstract:

A noncoherent vector delay/frequency-locked loop (VDFLL) architecture for GNSS receivers is proposed. A bank of code and frequency discriminators feeds a central extended Kalman filter that estimates the receiver's position and velocity, as well as the clock error. The performance of the VDFLL architecture is compared with that of the classic scalar receiver, for both scintillation and multipath scenarios, in terms of position errors. We show that the proposed solution is superior to conventional scalar receivers, which tend to lose lock rapidly due to sudden drops in the received signal power.
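A compact sketch of the central estimator's structure, assuming a generic discrete-time extended Kalman filter over position, velocity and clock states; the state layout, models and tuning below are illustrative, not the paper's design:

```python
import numpy as np

# Illustrative state: [position(3), velocity(3), clock bias, clock drift].
x = np.zeros(8)
P = np.eye(8)

def predict(x, P, F, Q):
    """Time update with state-transition matrix F and process noise Q."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, h, H, R):
    """Measurement update. z stacks the code/frequency discriminator
    outputs of all channels; h(x) gives the predicted residuals and H
    is its Jacobian at the current estimate. Closing every channel's
    loop through this single filter is what makes the tracking 'vector'."""
    y = z - h(x)                             # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P
```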

Relevance: 20.00%

Abstract:

We analyze the advantages and drawbacks of a vector delay/frequency-locked loop (VDFLL) architecture relative to the conventional scalar and vector delay-locked loop (VDLL) architectures for GNSS receivers in harsh scenarios that include ionospheric scintillation, multipath, and high-dynamics motion. The VDFLL consists of a bank of code and frequency discriminators feeding a central extended Kalman filter (EKF) that estimates the receiver's position, velocity, and clock bias. Both the code and frequency loops are closed vectorially through the EKF. The VDLL closes the code loop vectorially and the phase loops through individual PLLs, while the scalar receiver closes both loops by means of individual, independent PLLs and DLLs.

Relevance: 20.00%

Abstract:

Floating-point computing with more than one TFLOP of peak performance is already a reality in recent Field-Programmable Gate Arrays (FPGAs). General-Purpose Graphics Processing Units (GPGPUs) and recent many-core CPUs have also taken advantage of recent technological innovations in integrated circuit (IC) design and have dramatically improved their peak performances. In this paper, we compare the trends of these computing architectures for high-performance computing and survey these platforms in the execution of algorithms from different scientific application domains. Trends in peak performance, power consumption and sustained performance for particular applications show that the gap between FPGAs and GPUs or many-core CPUs is widening, moving FPGAs away from high-performance computing with intensive floating-point calculations. FPGAs remain competitive for custom floating-point or fixed-point representations, for smaller input sizes of certain algorithms, for combinational logic problems and for parallel map-reduce problems.

Relevance: 20.00%

Abstract:

A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. In contrast to other designs with similar functionality, the presented architecture is based on a scalable, modular and completely configurable processing structure. This flexible structure not only allows the architecture to be easily reconfigured to support different transform kernels, but also allows it to be resized to efficiently support transforms of different orders (e.g., order-4, order-8, order-16 and order-32). Consequently, it is highly suitable not only for realizing high-performance multi-standard transform cores, but also for highly efficient implementations of specialized processing structures addressing only the reduced subset of transforms used by a specific video standard. The experimental results obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency of the proposed unified architecture when implementing transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, the results demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above, capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 x 4,320 at 30 fps) in real time.
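The separable structure that such unified architectures exploit can be stated in a few lines: an order-N 2-D transform is two passes of the same 1-D kernel, Y = A·X·Aᵀ, which is why one configurable 1-D stage can be time-multiplexed in hardware. The sketch below uses the well-known order-4 integer kernel of H.264/AVC; other orders or standards would simply swap in a different matrix:

```python
import numpy as np

def transform_2d(block, A):
    """Separable 2-D transform: apply the 1-D kernel A to the columns
    and then to the rows of the input block, i.e. Y = A @ X @ A.T."""
    return A @ block @ A.T

# Order-4 integer core transform matrix of H.264/AVC.
A4 = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

X = np.arange(16).reshape(4, 4)   # hypothetical residual block
print(transform_2d(X, A4))
```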

Relevance: 20.00%

Abstract:

Many studies have demonstrated the relationship between alpha activity and central visual ability, with visual ability usually assessed through static stimuli. In the real environment, however, there are often dynamic changes, and peripheral visual ability in a dynamic environment (i.e., dynamic peripheral visual ability) is important for everyone. So far, no work has reported whether there is a relationship between dynamic peripheral visual ability and alpha activity; the objective of this study was therefore to investigate that relationship. Sixty-two soccer players performed a newly designed peripheral vision task with dynamic visual stimuli while their EEG signals were recorded from the Cz, O1, and O2 locations. The relationship between dynamic peripheral visual performance and alpha activity was examined with the percentage-bend correlation test. The results indicated no significant correlation between dynamic peripheral visual performance and alpha amplitudes in the eyes-open and eyes-closed resting conditions. This was not the case for alpha activity during the peripheral vision task: dynamic peripheral visual performance showed significant positive inter-individual correlations with the amplitudes in the alpha band (8-12 Hz) and the individual alpha band (IAB) during the task. A potential application of this finding is to improve dynamic peripheral visual performance by up-regulating alpha activity using neuromodulation techniques.
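As a pointer to how the alpha-band quantity can be extracted, a minimal sketch computing mean 8-12 Hz spectral power from a synthetic single-channel EEG segment; the study itself measures alpha amplitudes and uses the robust percentage-bend correlation, neither of which is reproduced here:

```python
import numpy as np
from scipy.signal import welch

# Synthetic single-channel EEG segment, 2 s at 256 Hz: a 10 Hz (alpha)
# oscillation plus noise; real signals would come from Cz/O1/O2.
fs = 256
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# Power spectral density, then mean power in the 8-12 Hz alpha band.
f, psd = welch(eeg, fs=fs, nperseg=fs)
alpha_power = psd[(f >= 8) & (f <= 12)].mean()
print(f"alpha-band power: {alpha_power:.3f}")
```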

Relevance: 20.00%

Abstract:

Introduction: hybrid imaging creates a multimodality environment, with a requirement for greater understanding of the imaging technologies used, their limitations, and how best to interpret the results; it also raises questions of dose optimization, the introduction of new techniques, and current practice versus best practice. Incidental findings in the low-dose CT images obtained as part of the hybrid imaging process are an increasing phenomenon with advancing CT technology, with resultant ethical and medico-legal dilemmas; understanding the limitations of these procedures is important when reporting images and recommending follow-up. A free-response observer performance study was used to evaluate lesion detection in low-dose CT images obtained during attenuation correction acquisitions for myocardial perfusion imaging, on two hybrid imaging systems.

Relevance: 20.00%

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in fulfilment of the requirements for the degree of Master in Industrial Engineering.

Relevance: 20.00%

Abstract:

ABSTRACT OBJECTIVE To assess the impact of implementing long-stay beds for patients of low complexity and high dependency in small hospitals on the performance of an emergency referral tertiary hospital. METHODS For this longitudinal study, we identified hospitals in three municipalities of a regional health department covered by tertiary care, which supplied 10 long-stay beds each. Patients were transferred to hospitals in those municipalities based on a specific protocol, and their outcomes were obtained by daily monitoring. Confounding factors were adjusted for by logistic regression and Cox semiparametric regression. RESULTS Between September 1, 2013 and September 30, 2014, 97 patients (72.1% male; mean age 60.5 years, SD = 1.9) underwent 108 transfers. Of these patients, 41.7% died, 33.3% were discharged, 15.7% returned to tertiary care, and only 9.3% remained hospitalized at the end of the analysis period. The Charlson comorbidity index – 0 (n = 28 [25.9%]), 1 (n = 31 [28.7%]) and ≥ 2 (n = 19 [17.5%]) – was the only variable that increased the chance of death or return to the tertiary hospital (odds ratio = 2.4; 95%CI 1.3–4.4). The length of stay in long-stay beds totalled 4,253 patient-days, which would represent 607 patients at the tertiary hospital, considering its average hospital stay of seven days. The tertiary hospital increased the number of patients treated by 50.0% in Intensive Care, 66.0% in Neurology and 9.3% overall. Patients stayed in long-stay beds mainly within the first 30 (50.0%) and 60 (75.0%) days. CONCLUSIONS Implementing long-stay beds increased the number of patients treated in tertiary care, both in general and in system bottleneck areas such as Neurology and Intensive Care. The Charlson comorbidity index is associated with the chance of patient death or return to tertiary care, even when adjusted for possible confounding factors.
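The capacity figure quoted in the results follows from simple arithmetic on numbers given in the abstract:

```python
# Equivalent tertiary-hospital admissions represented by the long-stay
# beds, using the figures quoted above.
patient_days = 4253        # total patient-days in long-stay beds
mean_stay = 7              # average tertiary hospital stay (days)
print(patient_days // mean_stay)   # -> 607
```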