893 results for performance evaluation tool
Abstract:
Impairment characterization and performance evaluation of Raman-amplified unrepeated DP-16QAM transmissions are conducted. Experimental results indicate that a small gain in the forward direction enhances the system signal-to-noise ratio for longer reach without introducing a noticeable penalty.
Abstract:
WiMAX has been introduced as a competitive alternative for metropolitan broadband wireless access technologies. It is connection oriented and can provide very high data rates, large service coverage, and flexible quality of service (QoS). Due to the large number of connections and the flexible QoS supported by WiMAX, uplink access in WiMAX networks is very challenging, since the medium access control (MAC) protocol must efficiently manage the bandwidth and related channel allocations. In this paper, we propose and investigate a cost-effective WiMAX bandwidth management scheme, named the WiMAX partial sharing scheme (WPSS), in order to provide good QoS while achieving better bandwidth utilization and network throughput. The proposed bandwidth management scheme is compared with a simple but inefficient scheme, named the WiMAX complete sharing scheme (WCPS). A maximum entropy (ME) based analytical model (MEAM) is proposed for the performance evaluation of the two bandwidth management schemes. The reason for using MEAM for the performance evaluation is that it can efficiently model a large-scale system in which the number of stations or connections is generally very high, whereas traditional simulation and analytical approaches (e.g., Markov models) cannot perform well due to their high computational complexity. We model the bandwidth management scheme as a queuing network model (QNM) that consists of interacting multiclass queues for different service classes. Closed-form expressions for the state and blocking probability distributions are derived for these schemes. Simulation results verify the MEAM numerical results and show that WPSS can significantly improve the network's performance compared to WCPS.
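As a rough illustration of the kind of closed-form blocking analysis such queueing models yield (not the paper's actual MEAM equations, which couple multiple interacting multiclass queues), here is a minimal sketch for a single finite-capacity M/M/1/K queue, where the state distribution has a geometric form and the blocking probability is simply the probability of a full buffer:

```python
# Minimal sketch: state and blocking probabilities for a single
# finite-capacity M/M/1/K queue. This illustrates only the basic
# closed-form structure (geometric state distribution, blocking = P(full));
# the paper's MEAM handles far more general multiclass networks.

def mm1k_state_probs(lam: float, mu: float, K: int) -> list[float]:
    """Stationary distribution p(0..K) for an M/M/1/K queue."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        return [1.0 / (K + 1)] * (K + 1)
    norm = (1 - rho ** (K + 1)) / (1 - rho)
    return [rho ** n / norm for n in range(K + 1)]

lam, mu, K = 8.0, 10.0, 16          # hypothetical arrival/service rates, buffer size
probs = mm1k_state_probs(lam, mu, K)
blocking = probs[K]                  # an arrival is lost when the buffer is full
throughput = lam * (1 - blocking)
print(f"blocking={blocking:.4e}, throughput={throughput:.3f}")
```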
Abstract:
With its low-power operation and flexible networking capabilities, IEEE 802.15.4 has been widely regarded as a strong candidate communication technology for wireless sensor networks (WSNs). It is expected that, with an increasing number of deployments of 802.15.4-based WSNs, multiple WSNs could coexist with full or partial overlap in residential or enterprise areas. As WSNs are usually deployed without coordination, communication can suffer significant degradation under the 802.15.4 channel access scheme, which has a large impact on system performance. In this thesis we are motivated to investigate the effectiveness of 802.15.4 networks supporting WSN applications in various environments, especially when hidden terminals are present due to the uncoordinated coexistence problem. Both analytical models and system-level simulators are developed to analyse the performance of the random access scheme specified by the IEEE 802.15.4 medium access control (MAC) standard for several network scenarios. The first part of the thesis investigates the effectiveness of a single 802.15.4 network supporting WSN applications. A Markov chain based analytical model is applied to model the MAC behaviour of the IEEE 802.15.4 standard, and a discrete event simulator is also developed to analyse the performance and verify the proposed analytical model. It is observed that 802.15.4 networks can adequately support most WSN applications with their various functionalities. After the investigation of a single network, the uncoordinated coexistence problem of multiple 802.15.4 networks deployed with fully or partially overlapping communication ranges is investigated in the next part of the thesis. Both non-sleep and sleep modes are investigated under different channel conditions by analytical and simulation methods to obtain a comprehensive performance evaluation. It is found that the uncoordinated coexistence problem can significantly degrade the performance of 802.15.4 networks, which is then unlikely to satisfy the QoS requirements of many WSN applications. The proposed analytical model is validated by simulations and could be used to obtain optimal parameter settings before WSN deployment to eliminate interference risks.
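The MAC model here is a Markov chain whose steady state yields the access behaviour. As a generic, hedged sketch (the actual 802.15.4 chain has many more states, covering backoff stages and clear-channel-assessment attempts), the stationary distribution of any finite chain can be computed from its transition matrix:

```python
import numpy as np

# Generic sketch: stationary distribution pi of a finite Markov chain,
# solving pi P = pi with sum(pi) = 1 as a linear system. The 802.15.4
# MAC models in the thesis are (much larger) chains of this kind.

def stationary_distribution(P: np.ndarray) -> np.ndarray:
    n = P.shape[0]
    # Stack the balance equations with the normalization constraint.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 3-state chain (hypothetical transition probabilities).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
print(stationary_distribution(P))   # long-run fraction of time in each state
```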
Abstract:
Indicators are widely used by organizations as a way of evaluating, measuring and classifying organizational performance. As part of performance evaluation systems, indicators are often shared or compared across internal sectors or with other organizations. However, indicators can be vague and imprecise, and can also lack semantics, making comparisons with other indicators difficult. Thus, this paper presents a knowledge model based on an ontology that may be used to represent indicators semantically and generically, dealing with the imprecision and vagueness and thus facilitating better comparison. Semantic technologies are shown to be suitable for this solution, as they can represent the complex data involved in indicator comparison.
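As a loose illustration of what an ontology-based indicator representation might look like, here is a sketch using the rdflib library; the vocabulary below (EX namespace, Indicator class, measures/unit/value properties) is entirely hypothetical and not the paper's actual ontology:

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Hypothetical vocabulary for illustration only; the paper defines its
# own ontology for indicators, which is not reproduced here.
EX = Namespace("http://example.org/indicators#")

g = Graph()
g.bind("ex", EX)

ind = EX.OnTimeDeliveryRate
g.add((ind, RDF.type, EX.Indicator))
g.add((ind, EX.measures, EX.DeliveryPerformance))
g.add((ind, EX.unit, Literal("percent")))
g.add((ind, EX.value, Literal(93.5, datatype=XSD.double)))

# Shared semantics make indicators queryable across organizations.
for s, _, o in g.triples((None, EX.measures, None)):
    print(f"{s.n3(g.namespace_manager)} measures {o.n3(g.namespace_manager)}")
```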
Abstract:
Background - It is well recognised that errors are more likely to occur during transitions of care, especially medicines errors. Clinic letters are used as a communication tool during a transition from hospital (outpatient clinics) to primary care (general practitioners). Little is known about medicines errors in clinic letters, as previous studies in this area have focused on medicines errors in inpatient or outpatient prescriptions. Published studies concerning clinic letters largely focus on the perceptions of patients or general practitioners in respect of overall quality. Purpose - To investigate medicines errors contained in outpatient clinic letters generated by prescribers within the Neurology Department of a specialist paediatric hospital in the UK. Materials and methods - Single-site, retrospective, cross-sectional review of 100 clinic letters generated during March–July 2013 in response to an outpatient consultation. Clinic letters were selected by convenience sampling from the most recent visit of each patient. An evaluation tool with a 10-point scale, where 10 was no error and 0 was significant error, was developed and refined throughout the study to facilitate the identification and characterisation of medicines errors. The tool was tested for a relationship between scores and the number of medicines errors using regression analysis. Results - Of 315 items related to neurology mentioned within the letters, 212 items were associated with 602 errors. Commonly missing information was allergy status (97%, n = 97), formulation (60.3%, n = 190), strength/concentration (59%, n = 186) and weight (53%, n = 53). Ninety-nine letters were associated with at least one error. Scores were in the range 4–10, with 42% of letters scoring 7. Statistically significant relationships were observed between scores and the number of medicines errors (R² = 0.4168, p < 0.05) as well as between the number of medicines and the number of drug-related errors (R² = 0.9719, p < 0.05). Conclusions - Nearly all clinic letters were associated with medicines errors. The 10-point evaluation tool may be a useful device to categorise clinic letter errors.
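The reported relationship between letter scores and error counts is an ordinary least-squares fit; a minimal sketch of that kind of check, with placeholder arrays rather than the study's data:

```python
from scipy.stats import linregress

# Placeholder arrays for illustration only -- not the study's data.
# x: number of medicines errors per letter, y: 10-point tool score.
errors_per_letter = [0, 1, 2, 3, 4, 5, 6, 8, 10, 12]
tool_scores       = [10, 9, 8, 8, 7, 7, 6, 5, 5, 4]

fit = linregress(errors_per_letter, tool_scores)
r_squared = fit.rvalue ** 2
print(f"R^2 = {r_squared:.4f}, p = {fit.pvalue:.4g}")  # cf. the paper's R² = 0.4168
```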
Abstract:
Data Envelopment Analysis (DEA) is a powerful analytical technique for measuring the relative efficiency of alternatives based on their inputs and outputs. The alternatives can be in the form of countries that attempt to enhance their productivity and environmental efficiencies concurrently. However, when desirable outputs such as productivity increase, undesirable outputs (e.g. carbon emissions) increase as well, making the performance evaluation questionable. In addition, traditional environmental efficiency has typically been measured with crisp input and output data (desirable and undesirable). However, the input and output data in real-world evaluation problems, such as CO2 emissions, are often imprecise or ambiguous. This paper proposes a DEA-based framework where the input and output data are characterized by symmetrical and asymmetrical fuzzy numbers. The proposed method allows the environmental evaluation to be assessed at different levels of certainty. The validity of the proposed model has been tested and its usefulness is illustrated using two numerical examples. An application to energy efficiency among 23 European Union (EU) member countries is further presented to show the applicability and efficacy of the proposed approach under asymmetric fuzzy numbers.
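In its crisp form, the efficiency score of a decision-making unit (DMU) in the classic CCR-DEA model comes from a small linear program; the following is a minimal sketch with scipy, covering only crisp inputs and desirable outputs, without the paper's fuzzy extension or undesirable-output treatment:

```python
import numpy as np
from scipy.optimize import linprog

# Minimal crisp CCR-DEA sketch (multiplier form): efficiency of DMU o is
#   max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0.
# The paper extends this with symmetrical/asymmetrical fuzzy data; that
# extension is not reproduced here.

def ccr_efficiency(X: np.ndarray, Y: np.ndarray, o: int) -> float:
    n, m = X.shape          # n DMUs, m inputs
    s = Y.shape[1]          # s outputs
    c = np.concatenate([-Y[o], np.zeros(m)])          # maximize u.y_o
    A_ub = np.hstack([Y, -X])                         # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None))
    return -res.fun

# Toy data: 4 DMUs, 2 inputs, 1 desirable output (hypothetical values).
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0], [5.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])
```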
Abstract:
The IEEE 802.11 standard is the dominant technology for wireless local area networks (WLANs). In the last two decades, the distributed coordination function (DCF) of the IEEE 802.11 standard has become one of the most important medium access control (MAC) protocols for mobile ad hoc networks (MANETs). The DCF protocol can also be combined with cognitive radio, giving rise to IEEE 802.11 cognitive radio ad hoc networks (CRAHNs). Several studies have focused on the modeling of IEEE 802.11 CRAHNs; however, there is still no thorough and scalable analytical model for IEEE 802.11 CRAHNs whose cognitive nodes (i.e., secondary users, SUs) perform spectrum sensing and a possible channel silence process before the MAC contention process. This paper develops a unified analytical model for IEEE 802.11 CRAHNs for comprehensive MAC-layer queuing analysis. In the proposed model, the SUs are modeled by a hyper generalized 2D Markov chain with an M/G/1/K queue, while the primary users (PUs) are modeled by a generalized 2D Markov chain and an M/G/1/K queue. The performance evaluation results show that the quality of service (QoS) of both the PUs and the SUs can be statistically guaranteed with suitable settings of the channel sensing and silence phase durations under light loading.
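For a flavour of this style of DCF analysis, here is Bianchi's classic two-equation fixed point for the per-slot transmission probability τ and conditional collision probability p. This is a saturated, single-network simplification, named plainly as such, and far simpler than the paper's hyper generalized 2D chain with M/G/1/K queues:

```python
# Bianchi-style fixed point for saturated IEEE 802.11 DCF: solve
#   tau = 2(1 - 2p) / ((1 - 2p)(W + 1) + p W (1 - (2p)^m))
#   p   = 1 - (1 - tau)^(n - 1)
# by damped iteration. A single-network saturation model only; the
# paper's CRAHN model adds sensing/silence phases and queueing.

def dcf_fixed_point(n: int, W: int = 32, m: int = 5,
                    iters: int = 10_000, tol: float = 1e-12):
    tau = 0.1
    for _ in range(iters):
        p = 1 - (1 - tau) ** (n - 1)
        tau_new = (2 * (1 - 2 * p)
                   / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m)))
        if abs(tau_new - tau) < tol:
            break
        tau = 0.5 * tau + 0.5 * tau_new   # damped update for stability
    return tau, p

tau, p = dcf_fixed_point(n=10)   # 10 contending stations
print(f"tau={tau:.4f}, collision prob p={p:.4f}")
```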
Abstract:
An assessment tool designed to measure customer service orientation among RNs and LPNs was developed using a content-oriented approach. Critical incidents were first developed by asking two samples of healthcare managers (n = 52 and 25) to identify various customer-contact situations. The critical incidents were then used to formulate a 121-item instrument. Patient-contact workers from 3 hospitals (n = 102) completed the instrument along with the NEO-FFI, a measure of the Big Five personality factors. Concurrently, managers completed a performance evaluation scale on the employees participating in the study in order to determine the predictive validity of the instrument. Through a criterion-keying approach, the instrument was scaled down to 38 items. The correlation between HealthServe and the supervisory performance evaluation ratings supported the instrument's criterion-related validity (r = .66, p < .0001). Incremental validity of HealthServe over the Big Five was found, with HealthServe accounting for 46% of the variance. The NEO-FFI was used to assess the correlation between personality traits and HealthServe. A factor analysis of HealthServe suggested 4 factors, which were correlated with the NEO-FFI scores. Results indicated that HealthServe was related to Extraversion, Openness to Experience, Agreeableness and Conscientiousness, and negatively related to Neuroticism. The benefits of the test construction procedure used here over broad-based measures of personality are discussed, as well as the limitations of using a concurrent validation strategy. Recommendations for future studies are provided.
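Incremental validity here is the gain in explained variance when the new scale is added on top of the Big Five predictors; a hedged sketch of that hierarchical-regression comparison with statsmodels, using random placeholder data rather than the study's sample:

```python
import numpy as np
import statsmodels.api as sm

# Hierarchical regression sketch of incremental validity: compare R^2 of
# (Big Five only) vs (Big Five + HealthServe) in predicting supervisor
# ratings. All data below are random placeholders, not the study's sample.
rng = np.random.default_rng(0)
n = 102
big_five = rng.normal(size=(n, 5))          # N, E, O, A, C scores
healthserve = rng.normal(size=n)
rating = 0.6 * healthserve + big_five @ np.full(5, 0.1) + rng.normal(size=n)

base = sm.OLS(rating, sm.add_constant(big_five)).fit()
full = sm.OLS(rating,
              sm.add_constant(np.column_stack([big_five, healthserve]))).fit()
print(f"R^2 change (incremental validity): {full.rsquared - base.rsquared:.3f}")
```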
Abstract:
The span of control is the single most discussed concept in classical and modern management theory. In specifying conditions for organizational effectiveness, the span of control has generally been regarded as a critical factor. Existing research has focused mainly on qualitative methods to analyze this concept, for example heuristic rules based on experience and/or intuition. This research takes a quantitative approach and formulates the problem as a binary integer model, which is used as a tool to study the organizational design issue. The model considers a range of requirements affecting the management and supervision of a given set of jobs in a company. The decision variables include the allocation of jobs to workers, considering the complexity and compatibility of each job with respect to each worker, and the management requirements for planning, execution, training, and control activities in a hierarchical organization. The objective of the model is to minimize operations cost, which is the sum of the supervision costs at each level of the hierarchy and the costs of the workers assigned to jobs. The model is intended for application in make-to-order industries as a design tool. It could also be applied to make-to-stock companies as an evaluation tool, to assess the optimality of their current organizational structure. Extensive experiments were conducted to validate the model, to study its behavior, and to evaluate the impact of changing parameters with practical problems. This research proposes a meta-heuristic approach to solving large-size problems, based on the concept of greedy algorithms and the Meta-RaPS algorithm. The proposed heuristic was evaluated with two measures of performance: solution quality and computational speed. Quality is assessed by comparing the obtained objective function value to that of the optimal solution. Computational efficiency is assessed by comparing the computer time used by the proposed heuristic to the time taken by a commercial software system. Test results show that the proposed heuristic generates good solutions in a time-efficient manner.
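Meta-RaPS builds solutions with a randomized greedy rule: at each step, with some probability take the best-priority element, otherwise pick randomly from a restricted candidate list, then keep the best solution over many restarts. A toy, hedged sketch of that construction style for a job-to-worker assignment cost problem (a drastic simplification of the thesis's binary integer model):

```python
import random

# Toy Meta-RaPS-flavoured construction: with probability `priority` take
# the cheapest feasible worker for a job, otherwise pick randomly among
# the `restriction` fraction of cheapest ones. Simplified sketch only;
# the thesis's model also covers supervision levels, compatibility, and
# hierarchy costs.

def construct(cost, capacity, priority=0.7, restriction=0.3):
    load = [0] * len(capacity)
    assign, total = [], 0.0
    for job, row in enumerate(cost):
        feas = [w for w in range(len(capacity)) if load[w] < capacity[w]]
        feas.sort(key=lambda w: row[w])
        k = max(1, int(len(feas) * restriction))
        w = feas[0] if random.random() < priority else random.choice(feas[:k])
        load[w] += 1
        assign.append(w)
        total += row[w]
    return assign, total

cost = [[4, 2, 6], [3, 5, 1], [8, 7, 2], [5, 4, 9]]   # hypothetical job x worker costs
best = min((construct(cost, capacity=[2, 2, 2]) for _ in range(200)),
           key=lambda s: s[1])
print(best)   # best (assignment, total cost) found over 200 restarts
```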
Abstract:
We investigated controls on the water chemistry of a South Ecuadorian cloud forest catchment which is partly pristine and partly converted to extensive pasture. From April 2007 to May 2008, water samples were taken weekly to biweekly at nine different subcatchments and screened for differences in electrical conductivity, pH, and anion and element composition. A principal component analysis was conducted to reduce the dimensionality of the data set and define the major factors explaining variation in the data. Three main factors were isolated by a subset of 10 elements (Ca2+, Ce, Gd, K+, Mg2+, Na+, Nd, Rb, Sr, Y), explaining around 90% of the data variation. Land use was the major factor controlling and changing the water chemistry of the subcatchments. A second factor was associated with the concentration of rare earth elements in water, presumably highlighting other anthropogenic influences such as gravel excavation or road construction. Around 12% of the variation was explained by the third component, which was defined by the occurrence of Rb and K and represents the influence of vegetation dynamics on element accumulation and wash-out. Comparison of base-flow and fast-flow concentrations led to the assumption that a significant portion of soil water from around 30 cm depth contributes to storm flow, as revealed by increased rare earth element concentrations in fast-flow samples. Our findings demonstrate the utility of multi-tracer principal component analysis for studying tropical headwater streams, and emphasize the need for effective land management in cloud forest catchments.
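A hedged sketch of the PCA workflow described (standardize element concentrations, extract components, inspect explained variance and loadings), using scikit-learn with a random placeholder matrix in place of the study's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Multi-tracer PCA sketch: standardize element concentrations, then
# extract principal components and inspect explained variance. The data
# matrix is a random placeholder; the study used weekly/biweekly samples
# of the 10 elements below across nine subcatchments.
elements = ["Ca", "Ce", "Gd", "K", "Mg", "Na", "Nd", "Rb", "Sr", "Y"]
X = np.random.default_rng(1).lognormal(size=(60, len(elements)))

Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=3)
scores = pca.fit_transform(Z)          # sample coordinates on PC1-PC3
print("explained variance ratios:", pca.explained_variance_ratio_)
print("loadings of PC1:", dict(zip(elements, pca.components_[0].round(2))))
```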
Abstract:
In this thesis, novel analog-to-digital and digital-to-analog generalized time-interleaved variable bandpass sigma-delta modulators are designed, analysed, evaluated and implemented that are suitable for high-performance data conversion in a broad spectrum of applications. These generalized time-interleaved variable bandpass sigma-delta modulators can perform noise shaping for any centre frequency from DC to Nyquist. The proposed topologies are well suited for Butterworth, Chebyshev, inverse-Chebyshev and elliptical filters, where designers have the flexibility of specifying the centre frequency, bandwidth, and passband and stopband attenuation parameters. The application of the time-interleaving approach, in combination with these bandpass loop-filters, not only overcomes the limitations associated with conventional and mid-band resonator-based bandpass sigma-delta modulators, but also offers an elegant means to increase the conversion bandwidth, thereby relaxing the need for faster or higher-order sigma-delta modulators. A step-by-step design technique has been developed for the design of time-interleaved variable bandpass sigma-delta modulators. Using this technique, an assortment of lower- and higher-order single- and multi-path generalized A/D variable bandpass sigma-delta modulators were designed, evaluated and compared in terms of their signal-to-noise ratios, hardware complexity, stability, tonality and sensitivity for ideal and non-ideal topologies. Extensive behavioural-level simulations verified that one of the proposed topologies not only used fewer coefficients but also exhibited greater robustness to non-idealities. Furthermore, second-, fourth- and sixth-order single- and multi-path digital variable bandpass sigma-delta modulators were designed using this technique. The mathematical modelling and evaluation of tones caused by the finite wordlengths of these digital multi-path sigma-delta modulators, when excited by sinusoidal input signals, are also derived from first principles and verified using simulation and experimental results. The fourth-order digital variable bandpass sigma-delta modulator topologies were implemented in VHDL and synthesized on a Xilinx® Spartan™-3 development kit using fixed-point arithmetic. Circuit outputs were taken via the RS232 connection provided on the FPGA board and evaluated using MATLAB routines developed by the author; these routines also included the decimation process. The experiments undertaken by the author further validated the design methodology presented in this work. In addition, a novel tunable and reconfigurable second-order variable bandpass sigma-delta modulator has been designed and evaluated at the behavioural level. This topology offers a flexible set of choices for designers and can operate in either single- or dual-mode, enabling multi-band implementations on a single digital variable bandpass sigma-delta modulator. This work is also supported by a novel user-friendly design and evaluation tool, developed in MATLAB/Simulink, that can speed up the design, evaluation and comparison of analog and digital single-stage and time-interleaved variable bandpass sigma-delta modulators. This tool enables the user to specify the conversion type, topology, loop-filter type, path number and oversampling ratio.
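For orientation, here is a behavioural-level sketch of a plain low-pass second-order sigma-delta modulator loop (two integrators plus a 1-bit quantizer, unit gains assumed); the thesis's variable bandpass, time-interleaved topologies generalize this basic structure to arbitrary centre frequencies and multiple paths:

```python
import numpy as np

# Behavioural sketch of a plain second-order low-pass sigma-delta
# modulator: two cascaded integrators with 1-bit feedback. This shows
# only the basic noise-shaping mechanism, not the thesis's generalized
# bandpass or time-interleaved topologies.

def sdm2(x: np.ndarray) -> np.ndarray:
    i1 = i2 = 0.0
    y = np.empty_like(x)
    for n, xn in enumerate(x):
        v = 1.0 if i2 >= 0 else -1.0   # 1-bit quantizer (delayed feedback)
        i1 += xn - v                   # first integrator
        i2 += i1 - v                   # second integrator
        y[n] = v
    return y

N = 1 << 14
t = np.arange(N)
x = 0.5 * np.sin(2 * np.pi * t / 1024)   # slow test tone, amplitude 0.5
bits = sdm2(x)
# Quantization noise is pushed to high frequencies (noise shaping);
# inspect e.g. np.abs(np.fft.rfft(bits * np.hanning(N))).
```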
Abstract:
This study considers a dual-hop cognitive inter-vehicular relay-assisted communication system where all communication links are non-line-of-sight and their fading is modelled by the double Rayleigh distribution. Road-side relays (or access points) implementing the decode-and-forward relaying protocol are employed, and one of them is selected according to a predetermined policy to enable communication between vehicles. The performance of the considered cognitive cooperative system is investigated for Kth-best partial and full relay selection (RS) as well as for two distinct fading scenarios. In the first scenario, all channels are double Rayleigh distributed. In the second scenario, only the secondary source-to-relay and relay-to-destination channels are subject to double Rayleigh fading, whereas the channels between the secondary transmitters and the primary user are modelled by the Rayleigh distribution. Exact and approximate expressions for the outage probability are presented for all considered RS policies and fading scenarios. In addition to the analytical results, complementary performance evaluation results have been obtained by means of Monte Carlo simulations. The perfect match between these two sets of results verifies the accuracy of the proposed mathematical analysis.
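A hedged Monte Carlo sketch of the basic quantity involved: the outage probability of a dual-hop decode-and-forward link whose hops experience double Rayleigh fading, without the cognitive interference constraint or the relay selection policies analysed in the paper (SNR values below are hypothetical):

```python
import numpy as np

# Monte Carlo sketch: outage of a dual-hop decode-and-forward link where
# each hop's power gain is double Rayleigh (product of two independent
# unit-mean exponentials). A DF link is limited by its weaker hop, so an
# outage occurs when min(snr1, snr2) falls below the threshold. The paper
# adds cognitive power constraints and Kth-best/partial/full RS on top.
rng = np.random.default_rng(42)
trials = 1_000_000
avg_snr = 10 ** (15 / 10)        # 15 dB average SNR per hop (hypothetical)
gamma_th = 10 ** (5 / 10)        # 5 dB outage threshold (hypothetical)

def double_rayleigh_gain(size):
    # |h1*h2|^2 for independent CN(0,1) h1, h2: product of exponentials.
    return rng.exponential(size=size) * rng.exponential(size=size)

snr1 = avg_snr * double_rayleigh_gain(trials)
snr2 = avg_snr * double_rayleigh_gain(trials)
outage = np.mean(np.minimum(snr1, snr2) < gamma_th)
print(f"outage probability ~ {outage:.4f}")
```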
Abstract:
We evaluate the impact of the Eurozone sovereign debt crisis on the performance and performance persistence of a survivorship bias-free sample of bond funds from a small market, identified as one of the most affected by this event, during the 2001–2012 period. Besides avoiding data mining, we also introduce a methodological innovation in assessing bond fund performance persistence. Our results show that bond funds underperform significantly during both crisis and non-crisis periods. In addition, we find strong evidence of performance persistence, over both short- and longer-term horizons, during non-crisis periods but not during the debt crisis. The persistence phenomenon in small markets thus seems to occur only during non-crisis periods, which is valuable information for bond fund investors to exploit.
Abstract:
This paper provides the first investigation of bond mutual fund performance during recession and expansion periods separately. Based on multi-factor performance evaluation models, the results show that bond funds significantly underperform the market during both phases of the business cycle. Nevertheless, unlike equity funds, bond funds exhibit considerably higher alphas during good economic states than during market downturns. These results, however, seem entirely driven by the global financial crisis subperiod. In contrast, during the recession associated with the Euro sovereign debt crisis, bond funds achieve neutral performance. This improved performance throughout the debt crisis seems to be related to more conservative investment strategies, which reflect an increase in managers' risk aversion.
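The alphas in multi-factor performance evaluation are intercepts from time-series regressions of fund excess returns on factor returns; a minimal sketch with statsmodels, using random placeholder series and unnamed hypothetical factors rather than any actual fund data:

```python
import numpy as np
import statsmodels.api as sm

# Multi-factor alpha sketch: regress a bond fund's excess returns on
# factor returns; the intercept is the fund's alpha. Series below are
# random placeholders; real applications would use e.g. term, default
# and market factors, estimated separately per business-cycle phase.
rng = np.random.default_rng(7)
T = 144                                   # 12 years of monthly observations
factors = rng.normal(scale=0.02, size=(T, 3))
excess_ret = (factors @ np.array([0.8, 0.3, 0.1])
              + rng.normal(scale=0.01, size=T) - 0.001)   # negative true alpha

model = sm.OLS(excess_ret, sm.add_constant(factors)).fit()
alpha, t_alpha = model.params[0], model.tvalues[0]
print(f"monthly alpha = {alpha:.4%} (t = {t_alpha:.2f})")
```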