886 results for "Optimal test set"
Abstract:
Switch-mode power supplies (SMPS) are subject to low power factor and high harmonic distortion. Active power-factor correction (APFC) is a technique to improve the power factor and reduce the harmonic distortion of SMPSs. However, this technique results in a double-frequency output voltage variation, which can be reduced by using a large output capacitance. Using large capacitors increases the cost and size of the converter. Furthermore, such capacitors are subject to frequent failures, mainly caused by evaporation of the electrolytic solution, which reduces converter reliability. This thesis presents an optimal control method for the input current of a boost converter to reduce the size of the output capacitor. The optimum current waveform as a function of a weighting factor is found using the Euler-Lagrange equation. A set of simulations is performed to determine the weighting factor that gives the lowest possible output voltage variation while the converter still meets the IEC 61000-3-2 class A harmonic requirements with a power factor of 0.8 or higher. The proposed method is verified experimentally. A boost converter is designed and run at three power levels: 100 W, 200 W, and 400 W. The desired output voltage ripple is 10 V peak to peak for an output voltage of 200 Vdc, corresponding to a ±2.5% output voltage ripple. The experimental and simulation results are in close agreement. A significant reduction in capacitor size, as high as 50%, is achieved with the proposed method.
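The abstract does not spell out the underlying formulation, but the trade-off it exploits can be illustrated numerically. The sketch below is not the thesis' Euler-Lagrange derivation; the line voltage, frequency, and the third-harmonic parameterization of the input current are assumptions. It shows how adding controlled current distortion lowers the double-frequency component of the instantaneous input power, the quantity that sizes the output capacitor, at the cost of a lower power factor.

```python
import numpy as np

# Sweep the relative third-harmonic content of the input current (a stand-in for
# the weighting factor) and observe the trade-off between power factor and the
# 2*f component of the instantaneous input power.
f = 50.0                                             # line frequency, Hz (assumed)
t = np.linspace(0, 1 / f, 2000, endpoint=False)      # one line period
v = np.sqrt(2) * 230 * np.sin(2 * np.pi * f * t)     # assumed 230 Vrms line voltage

for k in (0.0, 0.2, 0.4):                            # k = I3/I1
    i = np.sin(2 * np.pi * f * t) + k * np.sin(2 * np.pi * 3 * f * t)
    pf = np.mean(v * i) / (np.sqrt(np.mean(v**2)) * np.sqrt(np.mean(i**2)))
    p = v * i                                        # instantaneous input power
    spectrum = np.abs(np.fft.rfft(p)) / len(p) * 2   # one-sided amplitude spectrum
    ripple_2f = spectrum[2]                          # bin 2 = twice line frequency
    print(f"I3/I1 = {k:.1f}: PF = {pf:.3f}, 2f power ripple = {ripple_2f:.0f} W per A of fundamental")
```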
Abstract:
Physical fitness can be evaluated in competitive and school sports with different field tests under different conditions and goals. To produce valid results, a field test must be practical and reach high standards of test criteria (objectivity, reliability, validity). The purpose of this study was to investigate the test criteria and the practicability of a group of field tests called «SUISSE Sport Test Konzept Basis Feldtestbatterie». Test quality and practicability were evaluated for the 20-m sprint, ventral trunk muscle test, standing long jump, 2-kg medicine ball shot, obstacle course, and Cooper test. 221 children and adolescents from competitive sports and different school levels took part in the study. According to school level, they were divided into 3 groups (P: 7–11.5 y, S1: 11.6–15.5 y, S2: 15.6–21.8 y). Objectivity was tested for time or distance measurement in all tests as well as for error rating in the obstacle course. For reliability measurement, 162 subjects performed the field tests twice within a few weeks. For validity, results of the standing long jump were compared with countermovement jump performance on a force plate. Correlation analysis was performed and the level of significance was set at p < 0.05. For accuracy, the standard error was calculated. All tests achieved sufficient to excellent objectivity, with correlation coefficients (r) between 0.85 and 0.99. Reliability was very good (r = 0.84–0.97). In the Cooper and trunk tests, reliability was higher for athletes than for pupils (trunk test: r = 0.95 vs. r = 0.62; Cooper test: r = 0.90 vs. r = 0.78). In those tests, reliability decreased with increasing age (Cooper test: P: r = 0.84, S1: r = 0.69, S2: r = 0.52; trunk test: P: r = 0.69, S1: r = 0.71, S2: r = 0.39). Validity for the standing long jump was good (r = 0.75–0.86). The standard error of the mean was between 4% and 8%, with the exception of the Cooper test (athletes: 6%, pupils: 11%) and the trunk test (athletes: 14%, pupils: 46%). The results show that the evaluated group of field tests is a practicable, objective, and reliable tool to determine physical skills in young athletes as well as in a school setting over a broad age range. Most of the tests met the test criteria with grades of good to excellent. The lower reliability coefficients for the Cooper and trunk tests among the pupils could be explained by motivational problems in this setting. For up to 20 subjects, a tester can accomplish the tests within 3 h. Finally, age-dependent grades were elaborated.
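As a concrete illustration of the reliability and standard-error statistics reported above, here is a minimal sketch with made-up standing-long-jump data; the standard-error-of-measurement formula SD·√(1−r) is one common choice and an assumption, not necessarily the exact formula used in the study.

```python
import numpy as np
from scipy.stats import pearsonr

# Made-up standing-long-jump results (cm) for the same subjects in two sessions.
test1 = np.array([165, 172, 150, 188, 160, 175, 180, 158, 169, 190], dtype=float)
test2 = np.array([168, 170, 153, 185, 163, 174, 182, 160, 171, 187], dtype=float)

r, p_value = pearsonr(test1, test2)                 # test-retest reliability
both = np.concatenate([test1, test2])
sem = np.std(both, ddof=1) * np.sqrt(1 - r)         # standard error of measurement
sem_percent = 100 * sem / both.mean()

print(f"reliability r = {r:.2f} (p = {p_value:.3g}), SEM = {sem:.1f} cm ({sem_percent:.1f}% of mean)")
```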
Abstract:
The objective of this report is to study the distributed (decentralized) three-phase optimal power flow (OPF) problem in unbalanced power distribution networks. A full three-phase representation of the distribution network is used to account for the highly unbalanced state of such networks. All series/shunt network components and load types/combinations were modeled in the commercial version of the General Algebraic Modeling System (GAMS), a high-level modeling system for mathematical programming and optimization. The OPF problem was successfully implemented and solved with both a centralized and a distributed approach, with the objective of minimizing the active power losses in the entire system. The study was carried out on the IEEE 37-node test feeder. A detailed discussion of all aspects of the problem, starting from the basics, is provided, and full simulation results are given at the end of the report.
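The report's model is a full three-phase formulation in GAMS; the toy sketch below only shows the generic structure of a loss-minimizing OPF (quadratic loss objective, power-balance constraint, generation limits) on a made-up two-source, one-load system with a flat-voltage approximation, not the report's unbalanced three-phase equations.

```python
import numpy as np
from scipy.optimize import minimize

load = 1.0                          # p.u. demand at the load bus (assumed)
r = np.array([0.05, 0.08])          # p.u. resistance of the two feeders (assumed)

def losses(p):
    # With voltage near 1 p.u., line current ~ injected power, so I^2*R ~ p^2*r.
    return float(np.sum(r * p**2))

# Power balance: total generation minus losses must equal the load.
cons = {"type": "eq", "fun": lambda p: np.sum(p) - np.sum(r * p**2) - load}
bounds = [(0.0, 1.0), (0.0, 1.0)]   # generation limits

res = minimize(losses, x0=[0.5, 0.5], bounds=bounds, constraints=cons)
print("dispatch:", np.round(res.x, 3), "losses:", round(losses(res.x), 4))
```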
Abstract:
Two important and emerging technologies, microgrids and electricity generation from wind resources, are increasingly being combined. Various control strategies can be implemented, and droop control provides a simple option that does not require communication between microgrid components. Eliminating this potential single point of failure around the communication system is especially important in remote, islanded microgrids, which are considered in this work. However, traditional droop control does not allow the microgrid to utilize much of the power available from the wind. This dissertation presents a novel droop control strategy that implements a droop surface in a higher dimension than the traditional strategy. The droop control relationship then depends on two variables: the dc microgrid bus voltage and the wind speed at the current time. An approach for optimizing this droop control surface to meet a given objective, for example utilizing all of the power available from a wind resource, is proposed and demonstrated. Various cases are used to test the proposed optimal high-dimension droop control method and demonstrate its function. First, the use of linear multidimensional droop control without optimization is demonstrated through simulation. Next, an optimal high-dimension droop control surface is implemented with a simple dc microgrid containing two sources and one load. Various cases of changing load and wind speed are investigated using simulation and hardware-in-the-loop techniques. Optimal multidimensional droop control is then demonstrated with a wind resource in a full dc microgrid example containing an energy storage device as well as multiple sources and loads. Finally, the optimal high-dimension droop control method is applied with a solar resource and a load model developed for a military patrol-base application. The operation of the proposed control is again investigated using simulation and hardware-in-the-loop techniques.
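As an illustration of the idea (not the dissertation's optimized surface), a droop surface can be represented as a lookup table over the two variables named above; the grid values, voltages, and wind speeds below are made up.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# A droop *surface* maps (dc bus voltage, wind speed) to the wind source's power
# command, instead of the traditional one-dimensional P-V droop line.
v_bus = np.array([370.0, 380.0, 390.0, 400.0])     # V, assumed 380 V nominal bus
wind = np.array([4.0, 8.0, 12.0])                  # m/s
p_cmd = np.array([                                 # kW command, rows follow v_bus
    [2.0, 5.0, 8.0],
    [1.5, 4.0, 7.0],
    [1.0, 3.0, 5.5],
    [0.5, 1.5, 3.0],
])
droop_surface = RegularGridInterpolator((v_bus, wind), p_cmd)

# Controller query at the current operating point (383 V bus, 9.5 m/s wind):
print(droop_surface([[383.0, 9.5]]))               # interpolated power command, kW
```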
Abstract:
Experimental work and analysis were done to investigate engine startup robustness and emissions of a flex-fuel spark-ignition (SI) direct-injection (DI) engine. The vaporization and other characteristics of ethanol fuel blends present a challenge at engine startup. Strategies were investigated to reduce the enrichment requirement for the first engine startup cycle and the emissions of the second and third fired cycles at 25°C ± 1°C engine and intake air temperature. Research was conducted on a single-cylinder SIDI engine with gasoline and E85 fuels to study the effect on the first fired cycle of engine startup. Piston configurations that included a compression ratio change (11 vs. 15.5) and a piston geometry change (flat-top vs. bowl) were tested, along with changes in intake cam timing (95, 110, 125) and fuel pressure (0.4 MPa vs. 3 MPa). The goal was to replicate the engine speed, manifold pressure, fuel pressure, and testing temperature from an engine startup trace in order to investigate the first fired cycle. Results showed that the bowl piston enabled lower-equivalence-ratio starts with gasoline, while also showing lower IMEP at the same equivalence ratio than the flat-top piston. With E85, the bowl piston showed reduced IMEP as the compression ratio increased at the same equivalence ratio. A preference for constant intake valve timing across fuels seemed to indicate that the flat-top piston might be a good flex-fuel piston. Significant improvements were seen with the higher-compression-ratio bowl piston for high-fuel-pressure starts, but no improvement was seen at low fuel pressures. Simulation work was conducted in GT-POWER to analyze the initial three cycles of engine startup for the same set of hardware used in the experiments. A model validated at steady state was modified for startup conditions. The results allowed an understanding of the relative residual levels and IMEP at the test points in the cam-phasing space, which made it possible to select additional test points that use higher residual levels and to eliminate those whose smaller trapped mass is incapable of producing the IMEP required for proper engine turnover. The second phase of experimental testing, covering the second and third startup cycles, revealed that both E10 and E85 prefer the same SOI of 240° bTDC for the flat-top piston and high injection pressures. The optimal cam timing for E85 at startup showed that it tolerates more residuals than E10. Higher internal residual levels drive down the equivalence-ratio (Φ) requirement for both fuels up to their combustion-stability limit; this is thought to be a direct benefit to vaporization due to the increased cycle-start temperature. Benefits are shown for an advanced IMOP and retarded EMOP strategy at engine startup. Overall, the amount of residuals preferred by the engine with E10 fuel at startup is thought to be constant across engine speeds, which could enable easier selection of optimized cam positions across the startup speeds.
Abstract:
Writing unit tests for legacy systems is a key maintenance task. When writing tests for object-oriented programs, objects need to be set up and the expected effects of executing the unit under test need to be verified. If developers lack internal knowledge of a system, the task of writing tests is non-trivial. To address this problem, we propose an approach that exposes side effects detected in example runs of the system and uses these side effects to guide the developer when writing tests. We introduce a visualization called Test Blueprint, through which we identify what the required fixture is and which assertions are needed to verify the correct behavior of a unit under test. The dynamic analysis technique that underlies our approach is based both on tracing method executions and on tracking the flow of objects at runtime. To demonstrate the usefulness of our approach we present results from two case studies.
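The approach itself relies on full execution tracing; as a rough, much-simplified analogue in Python (the function and class names below are illustrative assumptions, not the paper's tooling), one can record an object's state before and after the unit under test runs and turn the observed side effects into candidate assertions.

```python
import copy

def suggest_assertions(obj, method_name, *args, **kwargs):
    # Capture state, run the unit under test, and report observed side effects.
    before = copy.deepcopy(vars(obj))
    result = getattr(obj, method_name)(*args, **kwargs)
    for attr, new_value in vars(obj).items():
        if attr not in before or before[attr] != new_value:
            print(f"assert obj.{attr} == {new_value!r}")
    if result is not None:
        print(f"assert result == {result!r}")

# Hypothetical unit under test:
class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
        return self.balance

suggest_assertions(Account(), "deposit", 100)
# -> assert obj.balance == 100
#    assert result == 100
```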
Abstract:
In this note, we show that an extension of a test for perfect ranking in a balanced ranked set sample given by Li and Balakrishnan (2008) to the multi-cycle case turns out to be equivalent to the test statistic proposed by Frey et al. (2007). This provides an alternative interpretation and motivation for their test statistic.
Abstract:
There has been significant interest in indirect measures of attitudes like the Implicit Association Test (IAT), presumably because of the possibility of uncovering implicit prejudices. The authors derived a set of qualitative predictions for people's performance in the IAT on the basis of random walk models. These were supported in 3 experiments comparing clearly positive or negative categories to nonwords. They also provided evidence that participants shift their response criterion when doing the IAT. Because of these criterion shifts, a response pattern in the IAT can have multiple causes. Thus, it is not possible to infer a single cause (such as prejudice) from IAT results. A surprising additional result was that nonwords were treated as though they were evaluated more negatively than obviously negative items like insects, suggesting that low familiarity items may generate the pattern of data previously interpreted as evidence for implicit prejudice.
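The specific models and fits belong to the paper, but the general mechanism, a noisy random walk to response boundaries with a shiftable criterion, can be sketched as follows; the drift, noise, boundary values, and trial counts are arbitrary assumptions.

```python
import numpy as np

# Evidence accumulates with drift until it hits an upper or lower response
# boundary. A criterion shift is modeled as moving the boundaries asymmetrically,
# which changes both response speed and the error pattern.
rng = np.random.default_rng(0)

def simulate(drift, upper, lower, n_trials=5000, noise=1.0):
    rts, errors = [], 0
    for _ in range(n_trials):
        x, steps = 0.0, 0
        while lower < x < upper:
            x += drift + noise * rng.standard_normal()
            steps += 1
        rts.append(steps)
        errors += x <= lower                      # lower boundary = wrong response
    return np.mean(rts), errors / n_trials

for upper, lower in [(10, -10), (14, -6)]:        # symmetric vs. shifted criterion
    rt, err = simulate(drift=0.3, upper=upper, lower=lower)
    print(f"boundaries ({upper},{lower}): mean RT ~ {rt:.1f} steps, error rate {err:.3f}")
```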
Abstract:
Dendrogeomorphology uses information sources recorded in the roots, trunks and branches of trees and bushes located in the fluvial system to complement (or sometimes even replace) systematic and palaeohydrological records of past floods. The application of dendrogeomorphic data sources and methods to palaeoflood analysis over nearly 40 years has allowed improvements to be made in frequency and magnitude estimations of past floods. Nevertheless, research carried out so far has shown that the dendrogeomorphic indicators traditionally used (mainly scar evidence), and their use to infer frequency and magnitude, have been restricted to a small, limited set of applications. New possibilities with enormous potential remain unexplored. New insights in future research of palaeoflood frequency and magnitude using dendrogeomorphic data sources should: (1) test the application of isotopic indicators (¹⁶O/¹⁸O ratio) to discover the meteorological origin of past floods; (2) use different dendrogeomorphic indicators to estimate peak flows with 2D (and 3D) hydraulic models and study how they relate to other palaeostage indicators; (3) investigate improved calibration of 2D hydraulic model parameters (roughness); and (4) apply statistics-based cost–benefit analysis to select optimal mitigation measures. This paper presents an overview of these innovative methodologies, with a focus on their capabilities and limitations in the reconstruction of recent floods and palaeofloods.
Abstract:
Equipped with state-of-the-art smartphones and mobile devices, today's highly interconnected urban population is increasingly dependent on these gadgets to organize and plan their daily lives. These applications often rely on the current (or preferred) locations of individual users or a group of users to provide the desired service, which jeopardizes their privacy; users do not necessarily want to reveal their current (or preferred) locations to the service provider or to other, possibly untrusted, users. In this paper, we propose privacy-preserving algorithms for determining an optimal meeting location for a group of users. We perform a thorough privacy evaluation by formally quantifying the privacy loss of the proposed approaches. In order to study the performance of our algorithms in a real deployment, we implement them and test their execution efficiency on Nokia smartphones. By means of a targeted user study, we attempt to gain insight into the privacy awareness of users of location-based services and the usability of the proposed solutions.
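The contribution above is computing the meeting point without revealing user locations; the non-private baseline optimization it protects can be sketched as follows (the coordinates, candidate venues, and the choice of total vs. maximum distance are illustrative assumptions, not the paper's exact objective).

```python
import numpy as np

# Among candidate venues, pick the one minimizing the total distance to all users,
# or alternatively the maximum distance (a fairness-oriented variant).
users = np.array([[46.52, 6.57], [46.53, 6.60], [46.50, 6.58]])        # made-up coords
candidates = np.array([[46.515, 6.575], [46.525, 6.59], [46.51, 6.60]])

dists = np.linalg.norm(candidates[:, None, :] - users[None, :, :], axis=2)
best_total = candidates[np.argmin(dists.sum(axis=1))]
best_fair = candidates[np.argmin(dists.max(axis=1))]
print("min-total-distance venue:", best_total, "min-max-distance venue:", best_fair)
```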
Abstract:
Stepwise uncertainty reduction (SUR) strategies aim at constructing a sequence of points for evaluating a function f in such a way that the residual uncertainty about a quantity of interest progressively decreases to zero. Using such strategies in the framework of Gaussian process modeling has been shown to be efficient for estimating the volume of excursion of f above a fixed threshold. However, SUR strategies remain cumbersome to use in practice because of their high computational complexity, and the fact that they deliver a single point at each iteration. In this article we introduce several multipoint sampling criteria, allowing the selection of batches of points at which f can be evaluated in parallel. Such criteria are of particular interest when f is costly to evaluate and several CPUs are simultaneously available. We also manage to drastically reduce the computational cost of these strategies through the use of closed form formulas. We illustrate their performances in various numerical experiments, including a nuclear safety test case. Basic notions about kriging, auxiliary problems, complexity calculations, R code, and data are available online as supplementary materials.
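For background (this is not the article's multipoint criteria), the quantity such strategies target can be sketched with an off-the-shelf Gaussian process: fit a GP to a few evaluations of f and estimate the expected excursion volume above a threshold from pointwise exceedance probabilities. The toy function, threshold, and kernel below are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x) + 0.5 * x                 # toy "expensive" function
T = 0.8                                               # excursion threshold

X = rng.uniform(0, 3, size=(8, 1))                    # initial design
y = f(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6).fit(X, y)

# Pointwise probability that f exceeds T, integrated over the domain [0, 3].
grid = np.linspace(0, 3, 400).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)
p_exceed = norm.sf((T - mean) / np.maximum(std, 1e-12))
print("expected excursion volume ~", round(3 * p_exceed.mean(), 3))
```

A SUR strategy would then pick the next evaluation point (or, in the article, a batch of points) so that the residual uncertainty on this volume shrinks fastest.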
Abstract:
OBJECTIVE To provide guidance on standards for reporting studies of diagnostic test accuracy for dementia disorders. METHODS An international consensus process on reporting standards in dementia and cognitive impairment (STARDdem) was established, focusing on studies presenting data from which sensitivity and specificity were reported or could be derived. A working group led the initiative through 4 rounds of consensus work, using a modified Delphi process and culminating in a face-to-face consensus meeting in October 2012. The aim of this process was to agree on how best to supplement the generic standards of the STARD statement to enhance their utility and encourage their use in dementia research. RESULTS More than 200 comments were received during the wider consultation rounds. The areas at most risk of inadequate reporting were identified and a set of dementia-specific recommendations to supplement the STARD guidance were developed, including better reporting of patient selection, the reference standard used, avoidance of circularity, and reporting of test-retest reliability. CONCLUSION STARDdem is an implementation of the STARD statement in which the original checklist is elaborated and supplemented with guidance pertinent to studies of cognitive disorders. Its adoption is expected to increase transparency, enable more effective evaluation of diagnostic tests in Alzheimer disease and dementia, contribute to greater adherence to methodologic standards, and advance the development of Alzheimer biomarkers.
Abstract:
We consider the problem of twenty questions with noisy answers, in which we seek to find a target by repeatedly choosing a set, asking an oracle whether the target lies in this set, and obtaining an answer corrupted by noise. Starting with a prior distribution on the target's location, we seek to minimize the expected entropy of the posterior distribution. We formulate this problem as a dynamic program and show that any policy optimizing the one-step expected reduction in entropy is also optimal over the full horizon. Two such Bayes optimal policies are presented: one generalizes the probabilistic bisection policy due to Horstein and the other asks a deterministic set of questions. We study the structural properties of the latter, and illustrate its use in a computer vision application.
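A minimal discretized sketch of the probabilistic bisection idea attributed to Horstein above: maintain a posterior over grid locations, query whether the target lies left of the posterior median, and apply Bayes' rule for an answer that is correct with probability q. The grid size, q, and the number of queries are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n, q = 1000, 0.8                          # grid size, oracle correctness probability
target = 637                              # unknown to the questioner
posterior = np.full(n, 1.0 / n)

for _ in range(60):
    median = np.searchsorted(np.cumsum(posterior), 0.5)      # query point
    truth = target <= median                                 # true answer
    answer = truth if rng.random() < q else not truth        # noise-corrupted answer
    left = np.arange(n) <= median
    likelihood = np.where(left == answer, q, 1 - q)          # Bayes update
    posterior = posterior * likelihood
    posterior /= posterior.sum()

print("posterior mode:", int(np.argmax(posterior)), "true target:", target)
```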
Abstract:
The consumption of immunoglobulins (Ig) is increasing due to better recognition of antibody deficiencies, an aging population, and new indications. This review aims to examine the various dosing regimens and research developments in the established and in some of the relevant off-label indications in Europe. The background to the current regulatory settings in Europe is provided as a backdrop for the latest developments in primary and secondary immunodeficiencies and in immunomodulatory indications. In these heterogeneous areas, clinical trials encompassing different routes of administration, varying intervals, and infusion rates are paving the way toward more individualized therapy regimens. In primary antibody deficiencies, adjustments in dosing and intervals will depend on the clinical presentation, effective IgG trough levels, and IgG metabolism. Ideally, individual pharmacokinetic profiles in conjunction with the clinical phenotype could lead to highly tailored treatment. In practice, incremental dosage increases are necessary to titrate the optimal dose for more severely ill patients. Higher intravenous doses in these patients also have beneficial immunomodulatory effects beyond mere IgG replacement. Better understanding of the pharmacokinetics of Ig therapy is leading to a move away from simplistic "per kg" dosing. Defective antibody production is common in many secondary immunodeficiencies irrespective of whether the causative factor was lymphoid malignancies (established indications), certain autoimmune disorders, immunosuppressive agents, or biologics. This antibody failure, as shown by test immunization, may be amenable to treatment with replacement Ig therapy. In certain immunomodulatory settings [e.g., idiopathic thrombocytopenic purpura (ITP)], selection of patients for Ig therapy may be enhanced by relevant biomarkers in order to exclude non-responders and thus obtain higher response rates. In this review, the developments in dosing of therapeutic immunoglobulins have been limited to high-priority and some medium-priority indications such as ITP, Kawasaki disease, Guillain-Barré syndrome, chronic inflammatory demyelinating polyradiculoneuropathy, myasthenia gravis, multifocal motor neuropathy, fetal alloimmune thrombocytopenia, fetal hemolytic anemia, and dermatological diseases.
Abstract:
The ability to determine what activity of daily living a person performs is of interest in many application domains. It is possible to determine the physical and cognitive capabilities of the elderly by inferring what activities they perform in their houses. Our primary aim was to establish a proof of concept that a wireless sensor system can monitor and record physical activity and that these data can be modeled to predict activities of daily living. The secondary aim was to determine the optimal placement of the sensor boxes for detecting activities in a room. A wireless sensor system was set up in a laboratory kitchen. Ten healthy participants were asked to make tea following a defined sequence of tasks. Data were collected from the eight wireless sensor boxes placed at specific places in the test kitchen and analyzed to detect the sequences of tasks performed by the participants. These task sequences were used to train and test a Markov model. Data analysis focused on the reliability of the system and the integrity of the collected data. The sequences of tasks were successfully recognized for all subjects, and the averaged patterns of task sequences across subjects were highly correlated. Analysis of the collected data indicates that sensors placed in different locations are capable of recognizing activities, with the movement-detection sensor contributing the most to the detection of tasks. The central top of the room, with no obstruction of view, was considered the best location to record data for activity detection. Wireless sensor systems show much promise as easily deployable tools to monitor and recognize activities of daily living.
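The study's Markov model is built over sequences of detected kitchen tasks; the sketch below illustrates that kind of model with made-up task labels and sequences (a first-order transition matrix with Laplace smoothing, scoring a sequence by its log-likelihood), not the study's actual data or parameters.

```python
import numpy as np

tasks = ["kettle", "cupboard", "teabag", "pour", "fridge", "milk"]
idx = {t: i for i, t in enumerate(tasks)}
train = [
    ["kettle", "cupboard", "teabag", "pour", "fridge", "milk"],
    ["kettle", "cupboard", "teabag", "fridge", "milk", "pour"],
    ["cupboard", "kettle", "teabag", "pour", "fridge", "milk"],
]

# Estimate the transition matrix from the training sequences (Laplace smoothing).
counts = np.ones((len(tasks), len(tasks)))
for seq in train:
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1
transition = counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq):
    # Score how well a detected task sequence fits the learned Markov chain.
    return sum(np.log(transition[idx[a], idx[b]]) for a, b in zip(seq, seq[1:]))

print(log_likelihood(["kettle", "cupboard", "teabag", "pour", "fridge", "milk"]))
print(log_likelihood(["milk", "fridge", "pour", "teabag", "cupboard", "kettle"]))
```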