411 results for frequency-doubling efficiency
Abstract:
Typical wireless power transfer systems use a series compensation circuit based on the magnetic coupling and resonance principles first developed by Tesla. However, changes in coupling caused by variations in gap distance, alignment and orientation can reduce both the power transfer efficiency and the transferred power level. This paper proposes an impedance-matched circuit to reduce the frequency bifurcation effect and to improve the transferred power level, efficiency and total harmonic distortion (THD) performance of the series compensation circuit. A comprehensive mathematical analysis is performed for both the series and the impedance-matched circuits to show the frequency bifurcation effects in terms of input impedance, variations in transferred power levels and efficiencies. Matlab/Simulink results validate the theoretical analysis and show the circuits' THD performance when the circuits are fed with power electronic converters.
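As a rough illustration of the input-impedance analysis this abstract describes, the sketch below computes the input impedance of a generic series-series compensated WPT link. The component values are hypothetical, not taken from the paper; at the shared resonant frequency the two reactances cancel, leaving only the primary resistance plus the reflected secondary resistance:

```python
import numpy as np

def series_series_input_impedance(f, Lp, Ls, Cp, Cs, M, Rp, Rs, RL):
    """Input impedance of a series-series compensated WPT link.

    The secondary loop is reflected into the primary as (wM)^2 / Zs.
    """
    w = 2 * np.pi * f
    Zp = Rp + 1j * (w * Lp - 1 / (w * Cp))       # primary self-impedance
    Zs = Rs + RL + 1j * (w * Ls - 1 / (w * Cs))  # secondary loop impedance
    return Zp + (w * M) ** 2 / Zs                # add reflected impedance

# Illustrative (hypothetical) component values, tuned to a 100 kHz resonance
Lp = Ls = 100e-6
f0 = 100e3
Cp = Cs = 1 / ((2 * np.pi * f0) ** 2 * Lp)
Rp = Rs = 0.5
RL = 10.0

# At resonance both reactances cancel, so Zin is purely resistive:
Z0 = series_series_input_impedance(f0, Lp, Ls, Cp, Cs, M=5e-6,
                                   Rp=Rp, Rs=Rs, RL=RL)
```

Sweeping `f` around `f0` for increasing `M` is the standard way to visualise bifurcation: above the critical coupling, the single resistive crossing splits into multiple operating points.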
Abstract:
Achieving high efficiency together with improved power transfer range and misalignment tolerance is the major design challenge in realizing Wireless Power Transfer (WPT) systems for industrial applications. Resonant coils must be carefully designed to achieve the highest possible system performance by fully utilizing the available space. High quality factor and enhanced electromagnetic coupling are the key indices that determine system performance. In this paper, design parameter extraction and quality factor optimization of multi-layered helical coils are presented using finite element analysis (FEA) simulations. In addition, a novel Toroidal Shaped Spiral (TSS) coil is proposed to increase power transfer range and misalignment tolerance. The proposed shapes and recommendations can be used to design high-efficiency WPT resonators in a limited space.
Abstract:
Most real-life data analysis problems are difficult to solve using exact methods, due to the size of the datasets and the nature of the underlying mechanisms of the system under investigation. As datasets grow even larger, finding the balance between the quality of the approximation and the computing time of the heuristic becomes non-trivial. One solution is to consider parallel methods, and to use the increased computational power to perform a deeper exploration of the solution space in a similar time. It is, however, difficult to estimate a priori whether parallelisation will provide the expected improvement. In this paper we consider a well-known method, genetic algorithms, and evaluate the behaviour of the classic and parallel implementations on two distinct problem types.
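The classic generational loop that such a comparison starts from can be sketched as follows. This is a minimal GA on the onemax problem; the population size, mutation rate and fitness function are illustrative choices, not the paper's. The per-individual fitness evaluations marked below are the step a parallel implementation would typically distribute across cores:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=60,
                      p_mut=0.02, seed=0):
    """Minimal generational GA: tournament selection, one-point crossover,
    bit-flip mutation. Returns the best individual of the final population."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Independent evaluations: the embarrassingly parallel step
        scores = [fitness(ind) for ind in pop]

        def tournament():
            a, b = rng.randrange(pop_size), rng.randrange(pop_size)
            return pop[a] if scores[a] >= scores[b] else pop[b]

        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(sum)  # onemax: fitness is the number of ones
```

Whether farming the `scores` loop out to workers pays off depends on how expensive `fitness` is relative to the synchronisation cost, which is exactly the a priori estimation problem the abstract raises.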
Abstract:
Background The requirement for dual screening of titles and abstracts to select papers to examine in full text can create a huge workload, not least when the topic is complex and a broad search strategy is required, resulting in a large number of results. An automated system to reduce this burden, while still assuring high accuracy, has the potential to provide huge efficiency savings within the review process. Objectives To undertake a direct comparison of manual screening with a semi‐automated process (priority screening) using a machine classifier. The research is being carried out as part of the current update of a population‐level public health review. Methods Authors have hand selected studies for the review update, in duplicate, using the standard Cochrane Handbook methodology. A retrospective analysis, simulating a quasi‐‘active learning’ process (whereby a classifier is repeatedly trained based on ‘manually’ labelled data) will be completed, using different starting parameters. Tests will be carried out to see how far different training sets, and the size of the training set, affect the classification performance; i.e. what percentage of papers would need to be manually screened to locate 100% of those papers included as a result of the traditional manual method. Results From a search retrieval set of 9555 papers, authors excluded 9494 papers at title/abstract and 52 at full text, leaving 9 papers for inclusion in the review update. The ability of the machine classifier to reduce the percentage of papers that need to be manually screened to identify all the included studies, under different training conditions, will be reported. Conclusions The findings of this study will be presented along with an estimate of any efficiency gains for the author team if the screening process can be semi‐automated using text mining methodology, along with a discussion of the implications for text mining in screening papers within complex health reviews.
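A toy version of the retrospective "quasi-active-learning" simulation described above might look like the following. The token-overlap scorer is a deliberately naive stand-in for the machine classifier, and the corpus is synthetic; the point is the loop structure: screen a batch, retrain on the labels seen so far, re-rank the remainder, and record how much had to be screened to find every include:

```python
import random

def simulate_priority_screening(docs, labels, batch=50, seed=0):
    """Retrospective priority-screening simulation.

    docs: list of token sets; labels: 1 = included, 0 = excluded.
    A naive relevance score (token counts from already-screened includes)
    stands in for the classifier. Returns the fraction of the corpus that
    had to be screened to locate 100% of the includes.
    """
    rng = random.Random(seed)
    n_includes = sum(labels)
    unscreened = list(range(len(docs)))
    rng.shuffle(unscreened)          # the first batch is effectively random
    include_tokens = {}
    found, screened = 0, 0
    while found < n_includes:
        # re-rank remaining docs by overlap with tokens from known includes
        unscreened.sort(
            key=lambda i: -sum(include_tokens.get(t, 0) for t in docs[i]))
        for i in unscreened[:batch]:
            screened += 1
            if labels[i] == 1:
                found += 1
                for t in docs[i]:    # "retrain" on the new labelled include
                    include_tokens[t] = include_tokens.get(t, 0) + 1
        unscreened = unscreened[batch:]
    return screened / len(docs)

# Synthetic corpus: 1% includes, which share the token "cohort"
docs = [{"cohort", "trial"} if i % 100 == 0 else {"news", "misc"}
        for i in range(1000)]
labels = [1 if i % 100 == 0 else 0 for i in range(1000)]
burden = simulate_priority_screening(docs, labels)
```

Varying `batch`, `seed` and the initial training set mirrors the study's tests of how starting parameters affect the percentage of papers that must be screened manually.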
Abstract:
Recent changes in the aviation industry and in the expectations of travellers have begun to alter the way we approach our understanding, and thus the segmentation, of airport passengers. The key to successful segmentation of any population lies in the selection of the criteria on which the partitions are based. Increasingly, the basic criteria used to segment passengers (purpose of trip and frequency of travel) no longer provide adequate insights into the passenger experience. In this paper, we propose a new model for passenger segmentation based on the passenger's core value: time. The results are based on qualitative research conducted in situ at Brisbane International Terminal during 2012-2013. Based on our research, a relationship between time sensitivity and degree of passenger engagement was identified. This relationship was used as the basis for a new passenger segmentation model, namely: Airport Enthusiast (engaged, not time-sensitive); Time Filler (not engaged, not time-sensitive); Efficiency Lover (not engaged, time-sensitive); and Efficient Enthusiast (engaged, time-sensitive). The outcomes of this research extend the theoretical knowledge about passenger experience in the terminal environment. These new insights can ultimately be used to optimise the allocation of space for future terminal planning and design.
Abstract:
Purpose To investigate the frequency of convergence and accommodation anomalies in an optometric clinical setting in Mashhad, Iran, and to determine tests with highest accuracy in diagnosing these anomalies. Methods From 261 patients who came to the optometric clinics of Mashhad University of Medical Sciences during a month, 83 of them were included in the study based on the inclusion criteria. Near point of convergence (NPC), near and distance heterophoria, monocular and binocular accommodative facility (MAF and BAF, respectively), lag of accommodation, positive and negative fusional vergences (PFV and NFV, respectively), AC/A ratio, relative accommodation, and amplitude of accommodation (AA) were measured to diagnose the convergence and accommodation anomalies. The results were also compared between symptomatic and asymptomatic patients. The accuracy of these tests was explored using sensitivity (S), specificity (Sp), and positive and negative likelihood ratios (LR+, LR−). Results Mean age of the patients was 21.3 ± 3.5 years and 14.5% of them had specific binocular and accommodative symptoms. Convergence and accommodative anomalies were found in 19.3% of the patients; accommodative excess (4.8%) and convergence insufficiency (3.6%) were the most common accommodative and convergence disorders, respectively. Symptomatic patients showed lower values for BAF (p = .003), MAF (p = .001), as well as AA (p = .001) compared with asymptomatic patients. Moreover, BAF (S = 75%, Sp = 62%) and MAF (S = 62%, Sp = 89%) were the most accurate tests for detecting accommodative and convergence disorders in terms of both sensitivity and specificity. Conclusions Convergence and accommodative anomalies are the most common binocular disorders in optometric patients. Including tests of monocular and binocular accommodative facility in routine eye examinations as accurate tests to diagnose these anomalies requires further investigation.
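The accuracy measures reported above all follow from a standard 2x2 table. The sketch below shows the arithmetic; the counts are hypothetical, chosen only so the result reproduces the S = 75%, Sp = 62% figures quoted for BAF:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table.

    tp/fn: diseased patients testing positive/negative;
    fp/tn: healthy patients testing positive/negative.
    """
    sens = tp / (tp + fn)            # S: true-positive rate
    spec = tn / (tn + fp)            # Sp: true-negative rate
    lr_pos = sens / (1 - spec)       # LR+: how much a positive test helps
    lr_neg = (1 - sens) / spec       # LR-: how much a negative test helps
    return sens, spec, lr_pos, lr_neg

# Hypothetical counts giving S = 75%, Sp = 62% (the values reported for BAF)
sens, spec, lr_pos, lr_neg = diagnostic_accuracy(tp=12, fp=19, fn=4, tn=31)
```

An LR+ well above 1 and an LR- well below 1 are what mark a test as clinically useful; values near 1 mean the test barely shifts the pre-test probability.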
Abstract:
Map-matching algorithms that utilise road segment connectivity along with other data (i.e. position, speed and heading) in the process of map-matching are normally suitable for high-frequency (1 Hz or higher) positioning data from GPS. When such map-matching algorithms are applied to low-frequency data (such as data from a fleet of private cars, buses or light-duty vehicles, or from smartphones), their performance drops to around 70% in terms of correct link identification, especially in urban and suburban road networks. This level of performance may be insufficient for some real-time Intelligent Transport System (ITS) applications and services, such as estimating link travel time and speed from low-frequency GPS data. Therefore, this paper develops a new weight-based shortest path and vehicle trajectory aided map-matching (stMM) algorithm that enhances the map-matching of low-frequency positioning data on a road map. The well-known A* search algorithm is employed to derive the shortest path between two points while taking into account both link connectivity and turn restrictions at junctions. In the developed stMM algorithm, two additional weights related to the shortest path and vehicle trajectory are considered: one shortest-path-based weight relates the distance along the shortest path to the distance along the vehicle trajectory, while the other is associated with the heading difference of the vehicle trajectory. The developed stMM algorithm is tested using a series of real-world datasets of varying frequencies (i.e. 1 s, 5 s, 30 s and 60 s sampling intervals). A high-accuracy integrated navigation system (a high-grade inertial navigation system and a carrier-phase GPS receiver) is used to measure the accuracy of the developed algorithm. The results suggest that the algorithm identifies 98.9% of the links correctly for 30 s GPS data. Omitting the information from the shortest path and vehicle trajectory reduces the accuracy to about 73% in terms of correct link identification. The algorithm can process on average 50 positioning fixes per second, making it suitable for real-time ITS applications and services.
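The two additional weights could take a form like the sketch below. These closed forms are assumptions for illustration only; the paper's exact weight functions, their scaling, and their combination with the A* candidate search are not reproduced here:

```python
def shortest_path_weight(d_shortest, d_trajectory, w_max=10.0):
    """Weight rewarding candidate paths whose shortest-path length between
    two consecutive GPS fixes matches the distance travelled along the
    vehicle trajectory (hypothetical linear form)."""
    longest = max(d_shortest, d_trajectory)
    if longest == 0:
        return w_max
    return w_max * (1 - abs(d_shortest - d_trajectory) / longest)

def heading_weight(link_bearing_deg, vehicle_bearing_deg, w_max=10.0):
    """Weight penalising the heading difference between the candidate link
    and the vehicle trajectory, wrapped to [0, 180] degrees."""
    diff = abs(link_bearing_deg - vehicle_bearing_deg) % 360
    diff = min(diff, 360 - diff)          # wrap-around: 350 vs 10 is 20 apart
    return w_max * (1 - diff / 180)

# Candidate link bearing 10 deg, vehicle heading 350 deg -> 20 deg apart
w_heading = heading_weight(10.0, 350.0)
# Shortest path 412 m between fixes vs 430 m travelled along the trajectory
w_path = shortest_path_weight(d_shortest=412.0, d_trajectory=430.0)
```

In a weight-based matcher these scores would be summed with the usual proximity and connectivity weights, and the candidate link with the highest total selected.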
Abstract:
Building energy-efficiency (BEE) is the key driver for promoting energy saving in the building sector. A large variety of building energy-efficiency policy instruments exist: some are mandatory, some are voluntary schemes, and some use economic incentives, varying from country to country. This paper presents the current state of BEE policy implementation by examining the practices of seven selected countries and regions. In the study, BEE policy instruments are classified into three groups: mandatory administrative control instruments, economic incentive instruments and voluntary scheme instruments. The study shows that different countries have adopted different instruments to achieve their energy-saving targets and have gained various kinds of experience. It is important that these experiences are shared.
Abstract:
This study models the joint production of desirable and undesirable outputs (that is, CO2 emissions) by airlines. The Malmquist–Luenberger productivity index is employed to measure productivity growth when undesirable output production is incorporated into the production model. The results show that the pollution abatement activities of airlines lower productivity growth, which suggests that the traditional approach to measuring productivity growth, which ignores CO2 emissions, overstates 'true' productivity growth. The reliability of the results is also tested and verified using confidence intervals based on bootstrapping.
Abstract:
Recent studies have shown that ultrasound transit time spectroscopy (UTTS) is an alternative method to describe ultrasound wave propagation through complex samples as an array of parallel sonic rays. This technique has the potential to characterize bone properties, including volume fraction, and may be implemented in clinical systems to predict osteoporotic fracture risk. In contrast to broadband ultrasound attenuation, which is highly frequency dependent, we hypothesise that UTTS is frequency independent. This study measured 1 MHz and 5 MHz broadband ultrasound signals through a set of acrylic step-wedge samples. Digital deconvolution of the signals through water and through each sample was applied to derive a transit time spectrum. The resulting spectra at both 1 MHz and 5 MHz were compared to the predicted transit time values. Linear regression analysis yields agreement (R²) of 99.23% and 99.74% at 1 MHz and 5 MHz respectively, indicating frequency independence of transit time spectra.
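The digital deconvolution step can be illustrated with a regularised frequency-domain sketch. The Wiener-style regularisation and the synthetic pulse below are assumptions for illustration, not the study's actual processing chain; the idea is that deconvolving the through-sample signal by the through-water reference yields an impulse response whose peaks sit at the sonic-ray transit times:

```python
import numpy as np

def transit_time_spectrum(sample_signal, reference_signal, eps=1e-3):
    """Frequency-domain deconvolution of a through-sample signal by the
    through-water reference. eps sets Wiener-style regularisation so that
    frequencies where the reference has little energy are suppressed."""
    S = np.fft.rfft(sample_signal)
    R = np.fft.rfft(reference_signal)
    H = S * np.conj(R) / (np.abs(R) ** 2 + eps * np.max(np.abs(R)) ** 2)
    return np.fft.irfft(H, n=len(sample_signal))

# Synthetic check: the "sample" is the reference pulse delayed by 40 samples,
# so the recovered spectrum should peak at index 40.
n = 256
t = np.arange(n)
ref = np.exp(-0.5 * ((t - 30) / 4.0) ** 2) * np.sin(0.8 * t)
sample = np.roll(ref, 40)
spectrum = transit_time_spectrum(sample, ref)
peak = int(np.argmax(np.abs(spectrum)))
```

With a real step-wedge sample, the spectrum would show one peak per distinct path length, and comparing peak positions against geometrically predicted transit times is the frequency-independence test the abstract describes.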
Abstract:
Improvements in the effectiveness and efficiency of supply-side waste management are necessary in many countries. In Japan, municipalities with limited budgets have delayed the introduction of new waste-management technologies. Thus, the central government has used subsidies to encourage municipalities to adopt certain new technologies to improve waste-management efficiency. In this study, we measure the efficiency of waste management and explore how technology is related to technical efficiency. We find that municipalities are likely to adopt less-efficient technologies and that the central government's policies are likely to promote inefficient technology adoption by local governments.
Abstract:
The objective of this study is to examine technical efficiency and productivity growth in the Indian banking sector over the period from 2004 to 2011. We apply an innovative methodological approach introduced by Chen et al. (2011) and Barros et al. (2012), who use a weighted Russell directional distance model to measure technical inefficiency. We further modify and extend that model to measure TFP change with NPLs. We find that inefficiency levels differ significantly among the three ownership structures of banks in India. Foreign banks have a strong market position in India and pull the production frontier in a more efficient direction. SPBs and domestic private banks show considerably higher inefficiency. We conclude that the restructuring policy applied in the late 1990s and early 2000s by the Indian government has not had a long-lasting effect.
Abstract:
This research examines the important emerging area of online customer experience (OCE) using data collected from an online survey of frequent and infrequent online shoppers. The study examines a model of antecedents for cognitive and affective experiential states and their influence on outcomes, such as online shopping satisfaction and repurchase intentions. The model also examines the relationships between perceived risk, trust, satisfaction and repurchase intentions. Theoretically, the study provides a broader understanding of OCE, through insights into two shopper segments identified as being important in e-retailing. For managers, the study highlights areas of OCE and their implications for ongoing management of the online channel.
Abstract:
Displacement of conventional synchronous generators by non-inertial units such as wind or solar generators will result in reduced system inertia, affecting under-frequency response. Frequency control is important to avoid equipment damage, load shedding and possible blackouts. Wind generators, along with energy storage systems, can be used to improve the frequency response of low-inertia power systems. This paper proposes a fuzzy-logic-based frequency controller (FFC) for wind farms augmented with energy storage systems (wind-storage systems) to improve the primary frequency response of future low-inertia hybrid power systems. The proposed controller provides bidirectional real power injection using system frequency deviations and the rate of change of frequency (RoCoF). Moreover, the FFC ensures optimal use of energy from wind farms and storage units by eliminating the inflexible de-loading of wind energy and minimizing the required storage capacity. The efficacy of the proposed FFC is verified on a low-inertia hybrid power system.
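As a simplified stand-in for the fuzzy rule base, the controller's two inputs and its bidirectional output can be sketched with a linear droop-plus-RoCoF law. The gains, the per-unit rating, and the linear form itself are hypothetical; the paper's fuzzy controller would replace this single expression with a membership-function rule set over the same two inputs:

```python
def storage_power_command(delta_f, rocof, k_f=20.0, k_df=5.0, p_max=1.0):
    """Bidirectional power command (per unit) from frequency deviation and
    rate of change of frequency. Under-frequency (delta_f < 0) or a falling
    frequency (rocof < 0) commands discharge (positive injection); the
    reverse commands charging (negative injection)."""
    p = -k_f * delta_f - k_df * rocof
    return max(-p_max, min(p_max, p))   # clip to the storage power rating

# Frequency dip: f is 0.1 Hz below nominal and still falling at 0.2 Hz/s,
# so the storage discharges at its full rating.
p_inject = storage_power_command(delta_f=-0.1, rocof=-0.2)
```

The RoCoF term is what lets the controller react before the frequency deviation itself becomes large, which is the main value of fast storage in a low-inertia system.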
Abstract:
BACKGROUND Ongoing shortages of blood products may be addressed through additional donations. However, donation frequency rates are typically lower than medically possible. This preliminary study aims to determine voluntary nonremunerated whole blood (WB) and plasmapheresis donors' willingness, and subsequent facilitators and barriers, to make additional donations of a different type. STUDY DESIGN AND METHODS Forty individual telephone interviews were conducted posing two additional donation pattern scenarios: first, making a single and, second, making multiple plasmapheresis donations between WB donations. Stratified purposive sampling was conducted for four samples varying in donation experience: no-plasma, new-to-both-WB-and-plasma, new-to-plasma, and plasma donors. Interviews were analyzed yielding excellent (κ values > 0.81) inter-rater reliability. RESULTS Facilitators were more endorsed than barriers for a single but not multiple plasmapheresis donation. More new-to-both donors (n = 5) were willing to make multiple plasma donations between WB donations than others (n = 1 each) and identified fewer barriers (n = 3) than those more experienced in donation (n = 8 no plasma, n = 10 new to both, n = 11 plasma). Donors in the plasma sample were concerned about the subsequent reduced time between plasma donations by adding WB donations (n = 3). The no-plasma and new-to-plasma donors were concerned about the time commitment required (n = 3). CONCLUSION Current donors are willing to add different product donations but donation history influences their willingness to change. Early introduction of multiple donation types, variation in inventory levels, and addressing barriers will provide blood collection agencies with a novel and cost-effective inventory management strategy.