967 results for two-round scheme
Abstract:
Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision-making units (DMUs) that use multiple inputs to produce multiple outputs. Crisp input and output data are fundamentally indispensable in conventional DEA. However, the observed values of the input and output data in real-world problems are sometimes imprecise or vague. Many researchers have proposed various fuzzy methods for dealing with imprecise and ambiguous data in DEA. In this study, we provide a taxonomy and review of the fuzzy DEA methods. We present a classification scheme with four primary categories, namely, the tolerance approach, the α-level based approach, the fuzzy ranking approach, and the possibility approach. We discuss each category and group the fuzzy DEA papers published in the literature over the past 20 years. To the best of our knowledge, this paper is the only review and complete source of references on fuzzy DEA. © 2011 Elsevier B.V. All rights reserved.
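The α-level based approach can be illustrated concretely: at a fixed α-cut the fuzzy data reduce to intervals, and bounds on each DMU's efficiency follow from ordinary linear programs. The sketch below, in the spirit of the Kao-Liu formulation, computes the upper-bound CCR efficiency with scipy; the data and the function name are hypothetical, not taken from this review.

```python
# Minimal sketch of the alpha-level based approach: at one alpha-cut,
# fuzzy inputs/outputs become intervals, and the upper bound of a DMU's
# CCR efficiency is an ordinary LP with the evaluated DMU placed at its
# most favourable interval bounds. All data below are hypothetical.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency_upper(x_lo, x_hi, y_lo, y_hi, o):
    """Upper-bound CCR efficiency of DMU o at one alpha-cut.

    x_lo, x_hi: (n_dmu, n_in) interval bounds on inputs at this alpha
    y_lo, y_hi: (n_dmu, n_out) interval bounds on outputs
    """
    n, m = x_lo.shape
    s = y_lo.shape[1]
    # Favourable scenario for DMU o: its inputs low, outputs high;
    # the other DMUs get the opposite bounds.
    X = x_hi.copy(); Y = y_lo.copy()
    X[o] = x_lo[o];  Y[o] = y_hi[o]
    # Variables z = [u (s output weights), v (m input weights)]
    c = np.concatenate([-Y[o], np.zeros(m)])     # linprog minimizes
    A_ub = np.hstack([Y, -X])                    # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]   # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# Tiny example: 3 DMUs, 1 fuzzy input, 1 fuzzy output at alpha = 0.5
x_lo = np.array([[1.8], [2.7], [3.6]]); x_hi = np.array([[2.2], [3.3], [4.4]])
y_lo = np.array([[0.9], [1.8], [2.3]]); y_hi = np.array([[1.1], [2.2], [2.7]])
print([round(ccr_efficiency_upper(x_lo, x_hi, y_lo, y_hi, o), 3) for o in range(3)])
```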
Abstract:
An international round robin study of the stability of fast pyrolysis bio-oil was undertaken, with fifteen laboratories in five countries contributing. Two bio-oil samples were distributed to the laboratories for stability testing and further analysis. The stability test was defined in a method provided with the bio-oil samples, with viscosity measurement as a key input: the change in viscosity of a sealed sample of bio-oil held for 24 h at 80 °C was the defining element of stability. Subsequent analyses included ultimate analysis, density, moisture, ash, filterable solids, TAN/pH determination, and gel permeation chromatography. The results showed that kinematic viscosity measurement was more widely used and more reproducibly performed than dynamic viscosity measurement. The variation in the results of the stability test was large, and a number of reasons for the variation were identified. The subsequent analyses proved to be at the level of reproducibility found in earlier round robins on bio-oil analysis. Clearly, the analyses were more straightforward and reproducible with a bio-oil sample low in filterable solids (0.2%) than with one with a higher (2%) solids loading. These results can be helpful in setting standards for the use of bio-oil, which is just coming into the marketplace. © 2012 American Chemical Society.
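As a worked illustration of the stability metric described above (the change in viscosity of a sealed sample after 24 h at 80 °C), a minimal sketch; the function name and the percentage form of the index are assumptions, not the method text.

```python
def viscosity_change(visc_initial_cst, visc_aged_cst):
    """Relative change in kinematic viscosity (cSt) of a sealed bio-oil
    sample after the 24 h / 80 C accelerated-aging test, in percent."""
    return 100.0 * (visc_aged_cst - visc_initial_cst) / visc_initial_cst

# Hypothetical example: 40 cSt fresh, 52 cSt after aging -> 30.0 % increase
print(viscosity_change(40.0, 52.0))
```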
Abstract:
An array of in-line curvature sensors on a garment is used to monitor the thoracic and abdominal movements of a human during respiration. The results are used to obtain volumetric changes of the human torso in agreement with a spirometer used simultaneously at the mouth. An array of 40 in-line fiber Bragg gratings is used to produce 20 curvature sensors at different locations, each sensor consisting of two fiber Bragg gratings. The 20 curvature sensors and the adjoining fiber are encapsulated in a low-temperature-cured synthetic silicone. The sensors are wavelength-interrogated by a commercially available system from Moog Insensys, and the wavelength changes are calibrated to recover curvature. A three-dimensional algorithm is used to generate shape changes during respiration, allowing the measurement of absolute volume changes at various sections of the torso. The sensing scheme is shown to yield a volumetric error of 6%. Comparing the volume data from the spirometer with the volume estimated from the synchronous shape-sensing array data yielded a Pearson correlation coefficient of 0.86 (p < 0.01).
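The validation step can be sketched as follows: correlate the volume trace recovered from the shape-sensing array with the synchronous spirometer trace. The traces below are synthetic stand-ins for the measured data, not the paper's recordings.

```python
import numpy as np
from scipy.stats import pearsonr

t = np.linspace(0.0, 60.0, 600)                    # 60 s at 10 Hz
v_spiro = 0.5 * np.sin(2 * np.pi * 0.25 * t)       # tidal volume, litres
rng = np.random.default_rng(0)
v_shape = v_spiro + rng.normal(0.0, 0.15, t.size)  # array estimate + noise

r, p = pearsonr(v_spiro, v_shape)                  # correlation and p-value
print(f"Pearson r = {r:.2f}, p = {p:.1e}")
```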
Abstract:
A synchronization scheme for a two-channel phase-sensitive amplifier is implemented based on the injection locking of a single InP quantum-dash mode-locked laser. Error-free performance with a penalty of <1 dB is demonstrated for both channels. © 2011 Optical Society of America.
Abstract:
Doychin Boyadjiev, Galena Pelovska - This paper proposes an optimized algorithm that is faster than the previously described accelerated (modified STS) difference scheme for an age-structured population model with diffusion. While preserving the approximation properties of the modified STS algorithm, the computation time is reduced by almost a factor of two. This makes the optimized method preferable for problems with nonlinearities or higher dimensionality.
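The abstract does not reproduce the model equations, so as a hedged illustration of what a difference scheme for such a model steps through, here is a minimal explicit scheme for a McKendrick-von Foerster equation with spatial diffusion, u_t + u_a = d u_xx - mu(a) u, with a birth (renewal) boundary condition. The grids and coefficients are placeholders, not the authors' STS construction.

```python
import numpy as np

# Grids: age a in [0, A_max], space x in [0, L]; explicit time stepping.
A_max, L, d = 5.0, 1.0, 0.01
na, nx = 50, 40
da, dx = A_max / na, L / (nx - 1)
dt = min(da, 0.4 * dx**2 / d)            # CFL-type stability restriction

a = np.linspace(0.0, A_max, na + 1)
mu = 0.1 + 0.02 * a                      # mortality rate (placeholder)
beta = np.where((a > 1.0) & (a < 4.0), 1.5, 0.0)   # fertility (placeholder)

u = np.ones((na + 1, nx))                # population density u[age, x]

def step(u):
    lap = np.zeros_like(u)               # Laplacian; edges left at zero
    lap[:, 1:-1] = (u[:, 2:] - 2.0 * u[:, 1:-1] + u[:, :-2]) / dx**2
    un = u.copy()
    # upwind transport in age + diffusion in space + mortality
    un[1:] = u[1:] + dt * (-(u[1:] - u[:-1]) / da
                           + d * lap[1:] - mu[1:, None] * u[1:])
    # renewal boundary: newborns u(t, a=0, x) = integral of beta(a) u da
    un[0] = (beta[:, None] * u).sum(axis=0) * da
    return un

for _ in range(200):
    u = step(u)
print("total population:", u.sum() * da * dx)
```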
Abstract:
Raman fibre lasers and converters using stimulated Raman scattering (SRS) in optical fibre waveguides are attractive for many applications, ranging from telecommunications to bio-medical applications [1]. Multiple-wavelength Raman laser sources emitting at two or more wavelengths have been proposed to broaden the amplification spectrum of Raman fibre amplifiers and to improve noise characteristics [2,3]. Typically, a single fibre waveguide is used in such devices, while multi-wavelength generation is achieved by employing a corresponding number of fibre Bragg grating (FBG) pairs forming the laser resonator. This approach, while practical, might not provide a good level of cross-coherence between the radiation generated at different wavelengths, owing to differences between the FBGs and random phase fluctuations between the two wavelengths. In this work we examine a scheme for a two-wavelength Raman fibre laser with a high-Q cavity based on intracavity spectral broadening [3]. We demonstrate the feasibility of such a configuration and perform a numerical analysis clarifying the laser operation, using an amplitude propagation equation model that accounts for all key physical effects in the nonlinear fibre: dispersion, Kerr nonlinearity, Raman gain, depletion of the Raman pump wave, and fibre losses. The key idea behind this scheme is to take advantage of the spectral broadening that occurs in optical fibre at high powers. Spectral broadening leads to an effective decrease of the FBG reflectivity and enables generation of two waves in a one-stage Raman laser. The output spectrum in the considered high-Q cavity scheme corresponds to two peaks separated by 0.2 - 1 nm. © 2011 IEEE.
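A hedged numerical sketch of the kind of amplitude propagation model mentioned above: one split-step Fourier propagation loop for the Stokes field, including dispersion, loss, Kerr nonlinearity, and Raman gain from the pump. For brevity the pump is treated as undepleted here, whereas the paper's model includes pump depletion; all parameter values and the sign convention are illustrative.

```python
import numpy as np

# Illustrative fibre parameters (not the paper's values)
beta2 = -20e-27     # group-velocity dispersion, s^2/m
gamma = 3e-3        # Kerr coefficient, 1/(W m)
g_r = 0.6e-3        # Raman gain coefficient, 1/(W m)
alpha = 0.2e-3 * np.log(10) / 10   # ~0.2 dB/km loss, in 1/m
P_pump = 1.0        # undepleted pump power, W (simplification)

n, T = 1024, 1e-9                      # grid points, time window (s)
t = (np.arange(n) - n // 2) * (T / n)
w = 2 * np.pi * np.fft.fftfreq(n, T / n)

A = np.sqrt(0.1) * np.exp(-(t / 2e-10) ** 2)   # initial Stokes field
dz, n_steps = 1.0, 1000                        # 1 km of fibre

lin = np.exp((0.5j * beta2 * w**2 - alpha / 2) * dz)  # dispersion + loss
for _ in range(n_steps):
    A = np.fft.ifft(np.fft.fft(A) * lin)              # linear part of step
    # nonlinear part: Kerr phase + Raman amplification by the pump
    A *= np.exp((1j * gamma * np.abs(A) ** 2 + 0.5 * g_r * P_pump) * dz)

print("output peak power (W):", np.abs(A).max() ** 2)
```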
Abstract:
Power converters are a key but vulnerable component in switched reluctance motor (SRM) drives. In this paper, a new fault diagnosis scheme for SRM converters is proposed based on wavelet packet decomposition (WPD) with a dc-link current sensor. Open- and short-circuit faults of the power switches in an asymmetrical half-bridge converter are analyzed in detail. In order to obtain the fault signature from the phase currents, two pulse-width modulation signals with a phase shift are injected into the lower switches of the converter to extract the excitation current, and the WPD algorithm is then applied to the detected currents for fault diagnosis. Moreover, a discrete degree of the wavelet packet node energy is chosen as the fault coefficient. The converter faults can be diagnosed and located directly by determining the changes in the discrete degree from the detected currents. The proposed scheme requires only one current sensor in the dc link, whereas conventional methods need one sensor for each phase or additional detection circuits. Experimental results on a 750-W three-phase SRM are presented to confirm the effectiveness of the proposed fault diagnosis scheme.
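The signal-processing core can be sketched with the PyWavelets package: decompose the detected dc-link current with a wavelet packet transform and compute node energies. The "discrete degree" fault coefficient below is a stand-in (squared deviation of the node-energy distribution from a healthy reference); the paper's exact definition may differ, and the currents are synthetic.

```python
import numpy as np
import pywt

def node_energies(signal, wavelet="db4", level=3):
    """Normalized energy of each wavelet packet node at the given level."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    e = np.array([np.sum(np.square(nd.data)) for nd in nodes])
    return e / e.sum()

def discrete_degree(e_test, e_healthy):
    """Stand-in fault coefficient: squared deviation of the node-energy
    distribution from a healthy reference (larger -> more abnormal)."""
    return float(np.sum((e_test - e_healthy) ** 2))

# Hypothetical currents: healthy vs. one with a missing excitation pulse
fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
i_healthy = np.abs(np.sin(2 * np.pi * 50 * t))
i_faulty = i_healthy.copy()
i_faulty[500:700] = 0.0                     # open-circuit-like gap
e_ref = node_energies(i_healthy)
print("discrete degree:", discrete_degree(node_energies(i_faulty), e_ref))
```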
Abstract:
Reliability of power converters is of crucial importance in switched reluctance motor drives used for safety-critical applications. Open-circuit faults in power converters cause the motor to run in unbalanced states and, if left untreated, lead to damage to the motor and power modules, and can even cause a catastrophic failure of the whole drive system. This study focuses on using a single current sensor to detect open-circuit faults accurately. An asymmetrical half-bridge converter is considered, and single-phase-open and two-phase-open faults are analysed. Three different bus positions are defined. On the basis of a fast Fourier transform algorithm with Blackman window interpolation, the bus current spectra before and after open-circuit faults are analysed in detail. The fault characteristics are extracted accurately by normalising the phase fundamental frequency component and the double phase fundamental frequency component, and the fault characteristics of the three bus detection schemes are compared. The open-circuit faults can be located by relating the bus current to the rotor position. The effectiveness of the proposed diagnosis method is validated by simulation results and experimental tests.
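A sketch of the spectral feature extraction described above: window the bus current with a Blackman window, take the FFT, and normalize the magnitudes at the phase fundamental frequency f1 and at 2·f1. Nearest-bin lookup and normalization by the dc component are simplifying assumptions standing in for the paper's full interpolation algorithm; the signal is synthetic.

```python
import numpy as np

def fault_features(i_bus, fs, f1):
    """Normalized magnitudes of the f1 and 2*f1 components of the bus
    current (nearest-bin lookup; the paper interpolates between bins)."""
    n = len(i_bus)
    spec = np.abs(np.fft.rfft(i_bus * np.blackman(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    def mag(f):
        return spec[np.argmin(np.abs(freqs - f))]
    dc = spec[0]
    return mag(f1) / dc, mag(2 * f1) / dc

# Synthetic bus current: dc level + fault-induced f1 and 2*f1 ripple
fs, f1 = 20_000, 100.0
t = np.arange(0, 0.5, 1 / fs)
i_bus = 5.0 + 0.8 * np.sin(2 * np.pi * f1 * t) \
            + 0.3 * np.sin(2 * np.pi * 2 * f1 * t)
print(fault_features(i_bus, fs, f1))
```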
Abstract:
PHAR-QA, funded by the European Commission, is producing a framework of competences for pharmacy practice. The framework is in line with the EU directive on sectoral professions and takes into account the diversity of the pharmacy profession, the ongoing changes in healthcare systems (with an increasingly important role for pharmacists), and those in the pharmaceutical industry. PHAR-QA is asking academia, students, and practising pharmacists to rank the competences required for practice. The results show that competences in the areas of drug interactions, need for drug treatment, and provision of information and service were ranked highest, whereas those in the areas of ability to design and conduct research, and of development and production of medicines, were ranked lower. For the latter two categories, industrial pharmacists ranked them higher than did the other five groups.
Abstract:
A correlation scheme (leading to a special equilibrium called “soft” correlated equilibrium) is applied to two-person finite games in extensive form with perfect information. Randomization by an umpire takes place over the leaves of the game tree. At every decision point, players have the choice either to follow the recommendation of the umpire blindly or to choose freely any other action except the one suggested. This scheme can lead to outcomes that Pareto-improve on those of other correlated equilibria. Computational issues of maximizing a linear function over the set of soft correlated equilibria are considered, and a linear-time algorithm, in terms of the number of edges in the game tree, is given for a special procedure called “subgame perfect optimization”.
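The optimization step can be sketched generically: a soft correlated equilibrium is a probability distribution over the leaves of the game tree satisfying linear incentive constraints, so maximizing a linear function over that set is a linear program. The constraint matrix below is a placeholder; the actual constraints come from the follow-or-deviate structure described in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical game with 4 leaves; p is a distribution over leaves.
payoff_p1 = np.array([3.0, 1.0, 0.0, 2.0])   # objective: player 1's value
# Placeholder incentive constraints A p <= 0, standing in for the
# follow-or-deviate conditions of the soft correlated equilibrium.
A_ineq = np.array([[1.0, -1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, -1.0]])
res = linprog(c=-payoff_p1,                  # linprog minimizes
              A_ub=A_ineq, b_ub=np.zeros(2),
              A_eq=np.ones((1, 4)), b_eq=[1.0],   # p sums to 1
              bounds=[(0.0, 1.0)] * 4)
print("optimal distribution over leaves:", res.x.round(3))
```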
Abstract:
The purpose of this study was to design a preventive scheme using directional antennas to improve the performance of mobile ad hoc networks. In this dissertation, a novel Directionality based Preventive Link Maintenance (DPLM) Scheme is proposed to characterize the performance gain [JaY06a, JaY06b, JCY06] obtained by extending link lifetime. To maintain the link and take preventive action, the signal strength of data packets is measured, and location or angle-of-arrival information collected during communication is saved in a table. When the measured signal strength falls below an orientation threshold, an orientation warning is generated towards the previous-hop node. Once the orientation warning is received by the previous-hop (adjacent) node, it verifies the correctness of the warning with a few hello pings, initiates a high-quality directional link (a link above the threshold), and immediately switches to it, avoiding a link break altogether. The location information is used to create the directional link by orienting the neighboring nodes' antennas towards each other. We call this operation an orientation handoff; it is similar to soft handoff in cellular networks.

Signal strength is the indicating factor that represents the health of the link and helps to predict link failure: link breakage happens when node movement progressively reduces the signal strength of received packets. The DPLM scheme helps ad hoc networks avoid or postpone the costly operation of route rediscovery in on-demand routing protocols by taking the preventive action described above.

This dissertation advocates close but simple collaboration between the routing, medium access control, and physical layers. In order to extend the link, the Dynamic Source Routing (DSR) and IEEE 802.11 MAC protocols were modified to use the ability of directional antennas to transmit over longer distances. A directional antenna module with two separate modes of operation, omnidirectional and directional, was implemented in the OPNET simulator, incorporated into the wireless node model, and used in simulations to characterize the performance improvement of mobile ad hoc networks. Extensive simulations have shown that, without noticeably affecting the behavior of the routing protocol, aggregate throughput, packet delivery ratio, end-to-end delay (latency), routing overhead, number of data packets dropped, and number of path breaks are improved considerably. Analysis of the results in different scenarios shows that the use of directional antennas with the proposed DPLM scheme is a promising way to improve the performance of mobile ad hoc networks.
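A condensed sketch of the DPLM decision logic described above. The threshold value, the hello-ping verification count, and the directional-gain figure are placeholders, and the tiny link model exists only to make the sketch runnable.

```python
import random
from dataclasses import dataclass

ORIENTATION_THRESHOLD = -75.0   # dBm, placeholder value
HELLO_PINGS = 3                 # placeholder verification count

@dataclass
class Link:
    rssi: float                 # measured signal strength, dBm
    mode: str = "omni"

def maintain_link(link: Link, ping_rssi) -> Link:
    """DPLM core: if the measured strength of incoming data packets drops
    below the orientation threshold, verify with a few hello pings and,
    if confirmed, switch to a directional link (orientation handoff)."""
    if link.rssi >= ORIENTATION_THRESHOLD:
        return link                              # link healthy, no action
    pings = [ping_rssi() for _ in range(HELLO_PINGS)]
    if sum(p < ORIENTATION_THRESHOLD for p in pings) >= 2:
        # Orient both nodes' antennas using the saved AoA/location info;
        # the directional gain lifts the link back above the threshold.
        return Link(rssi=link.rssi + 12.0, mode="directional")  # placeholder gain
    return link

weak = Link(rssi=-80.0)
print(maintain_link(weak, lambda: -80.0 + random.uniform(-3.0, 3.0)))
```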
Abstract:
In recent years, the internet has grown exponentially and become more complex. This increased complexity potentially introduces more network-level instability, yet for any end-to-end internet connection, maintaining throughput and reliability at a certain level is very important, because it directly affects the connection's normal operation. A challenging research task, therefore, is to improve a connection's performance by optimizing its throughput and reliability. This dissertation proposes an efficient and reliable transport-layer protocol, concurrent TCP (cTCP), an extension of the current TCP protocol, to optimize end-to-end connection throughput and enhance end-to-end fault tolerance. The proposed cTCP protocol aggregates the bandwidth of multiple paths by supporting concurrent data transfer (CDT) on a single connection, where concurrent data transfer is defined as the concurrent transfer of data from local hosts to foreign hosts via two or more end-to-end paths. An RTT-based CDT mechanism, which uses a path's RTT (round-trip time) to optimize CDT performance, was developed for the proposed cTCP protocol. This mechanism primarily includes an RTT-based load distribution and path management scheme, used to optimize connection throughput and reliability, together with an RTT-based congestion control and retransmission policy. Experimental results show that, under different network conditions, the RTT-based CDT mechanism achieves good CDT performance. Finally, a CWND-based CDT mechanism, which uses a path's CWND (congestion window) to optimize CDT performance, is introduced. This mechanism primarily includes: a CWND-based load allocation scheme, which assigns data to paths according to their CWND to achieve aggregate bandwidth; CWND-based path management, used to optimize connection fault tolerance; and a congestion control and retransmission management policy that, like regular TCP, handles each path separately. The corresponding experimental results show that this mechanism achieves near-optimal CDT performance under different network conditions.
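A minimal sketch of the two load-distribution ideas: assign traffic to paths in inverse proportion to RTT, or in direct proportion to CWND. The function names and the simple proportional rules are illustrative readings of the mechanisms, not the dissertation's exact algorithms.

```python
def rtt_weights(rtts_ms):
    """RTT-based load distribution: weight each path by 1/RTT, so
    faster paths carry proportionally more of the connection's data."""
    inv = [1.0 / r for r in rtts_ms]
    total = sum(inv)
    return [w / total for w in inv]

def cwnd_weights(cwnds):
    """CWND-based load allocation: weight each path by its congestion
    window, approximating its currently available share of bandwidth."""
    total = sum(cwnds)
    return [c / total for c in cwnds]

# Two paths: a fast one (20 ms, cwnd 30) and a slow one (80 ms, cwnd 10)
print(rtt_weights([20.0, 80.0]))   # -> [0.8, 0.2]
print(cwnd_weights([30, 10]))      # -> [0.75, 0.25]
```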
Abstract:
Computer networks produce tremendous amounts of event-based data that can be collected and managed to support an increasing number of new classes of pervasive applications, such as network monitoring and crisis management. Although the problem of distributed event-based management has been addressed in non-pervasive settings such as the Internet, the domain of pervasive networks has its own characteristics that make these results non-applicable. Many of these applications are based on time-series data in the form of time-ordered series of events. Such applications also need to handle large volumes of unexpected events, often modified on the fly and containing conflicting information, and to deal with rapidly changing contexts while producing results with low latency. Correlating events across contextual dimensions holds the key to expanding the capabilities and improving the performance of these applications, and this dissertation addresses that critical challenge. It establishes an effective scheme for complex-event semantic correlation that examines epistemic uncertainty in computer networks by fusing event synchronization concepts with belief theory. Because of the distributed nature of event detection, time delays are considered: events are no longer instantaneous, but have a duration associated with them. Existing algorithms for synchronizing time are split into two classes, one of which is asserted to provide a faster means of converging time and is hence better suited for pervasive network management. Besides the temporal dimension, the scheme considers imprecision and uncertainty when an event is detected. A belief value is therefore associated with the semantics and the detection of composite events; this belief value is generated by a consensus among the participating entities in a computer network. The scheme taps into the in-network processing capabilities of pervasive computer networks and can withstand missing or conflicting information gathered from multiple participating entities. Thus, this dissertation advances knowledge in the field of network management by facilitating the full utilization of the characteristics offered by pervasive, distributed, and wireless technologies in contemporary and future computer networks.
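The belief-fusion step can be illustrated with Dempster's rule of combination from Dempster-Shafer belief theory, one standard way of generating a consensus belief from multiple detecting entities; the event hypotheses and mass values below are hypothetical, and the dissertation's own fusion scheme may differ in detail.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with
    Dempster's rule, normalizing out the conflicting mass K."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two network entities report beliefs about a composite event E vs. not-E
E, NE = frozenset({"E"}), frozenset({"notE"})
TH = E | NE                              # the full frame (ignorance)
m1 = {E: 0.6, TH: 0.4}
m2 = {E: 0.7, NE: 0.1, TH: 0.2}
print(dempster_combine(m1, m2))
```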
Abstract:
This paper describes the implementation of a novel mitigation approach, and its subsequent adaptive management, designed to reduce the transfer of fine sediment in Glaisdale Beck, a small upland catchment in the UK. Hydro-meteorological and suspended sediment datasets were collected over a two-year period spanning the pre- and post-diversion periods in order to assess the impact of the channel reconfiguration scheme on the fluvial suspended sediment dynamics. Analysis of the river response demonstrates that the fluvial sediment system has become more restrictive, with reduced fine sediment transfer. This is characterised by a reduction in the flow-weighted mean suspended sediment concentration from 77.93 mg/l prior to mitigation to 74.36 mg/l following the diversion. A Mann-Whitney U test found statistically significant differences (p < 0.001) between the pre- and post-monitoring median SSCs, and one-way analysis of covariance (ANCOVA) on the coefficients of sediment rating curves developed before and after the diversion also found statistically significant differences (p < 0.001), with both the log a and b coefficients becoming smaller following the diversion. Non-parametric analysis indicates a reduction in residuals through time (p < 0.001), with the developed LOWESS model over-predicting sediment concentrations as the channel stabilises. However, the channel is continuing to adjust to the reconfigured morphology, with evidence of a headward-propagating knickpoint that has migrated 120 m, at an exponentially decreasing rate, over the 7 years since the diversion. The study demonstrates that channel reconfiguration can be effective in mitigating fine sediment flux in upland streams, but the full value of this may take many years to realise while the fluvial system slowly readjusts.
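The two statistical steps described above can be sketched as follows: fit log-log sediment rating curves (C = aQ^b) before and after the diversion, and compare the SSC distributions with a Mann-Whitney U test. The discharge and concentration data below are synthetic placeholders, not the Glaisdale Beck record.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

def rating_curve(q, c):
    """Fit log10(C) = log10(a) + b*log10(Q); return (log10 a, b)."""
    b, log_a = np.polyfit(np.log10(q), np.log10(c), 1)
    return log_a, b

# Synthetic discharge (m3/s) and SSC (mg/l) for pre- and post-diversion
q = rng.lognormal(0.0, 0.5, 200)
ssc_pre = 60 * q ** 0.9 * rng.lognormal(0.0, 0.2, 200)
ssc_post = 45 * q ** 0.8 * rng.lognormal(0.0, 0.2, 200)

print("pre :", rating_curve(q, ssc_pre))    # both coefficients larger
print("post:", rating_curve(q, ssc_post))   # both coefficients smaller
u, p = mannwhitneyu(ssc_pre, ssc_post)
print(f"Mann-Whitney U = {u:.0f}, p = {p:.2e}")
```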