844 results for System Performance Measures.
Abstract:
Recently there has been increasing interest in the development of new methods that use Pareto optimality to deal with multi-objective criteria (for example, accuracy and architectural complexity). Once a model has been learned with the devised method, the problem is then how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. Unfortunately, the standard tests used for this purpose cannot consider several performance measures jointly. The aim of this paper is to resolve this issue by developing statistical procedures that account for multiple competing measures at the same time. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend both by discovering conditional independences among measures to reduce the number of parameters of the models, since the number of studied cases in such comparisons is usually small. Real data from a comparison among general-purpose classifiers is used to show a practical application of our tests.
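The flavour of the multinomial-Dirichlet approach can be illustrated with a minimal sketch. The counts below are hypothetical, not the paper's data, and the paper's actual test is more elaborate; the sketch only shows the conjugate mechanics: record, for each data set, which classifier wins jointly on two measures, then sample the Dirichlet posterior over the joint-outcome probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical comparison of classifiers A and B across 30 data sets,
# jointly over two measures (accuracy, architectural complexity).
# Categories: A wins both, A wins accuracy only, A wins complexity only, B wins both.
counts = np.array([12, 7, 6, 5])

# Multinomial-Dirichlet conjugacy: a uniform Dirichlet(1,...,1) prior
# yields a Dirichlet(counts + 1) posterior over the category probabilities.
posterior = rng.dirichlet(counts + 1, size=100_000)

# Posterior probability that "A wins on both measures" is the modal joint outcome.
p_a_dominates = np.mean(posterior.argmax(axis=1) == 0)
print(f"P('A jointly better' is the most probable category) ~ {p_a_dominates:.2f}")
```

The appeal of the conjugate model is that the posterior is available in closed form, so such probabilities can be estimated by direct sampling without MCMC.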
Abstract:
Rapid developments in display technologies, digital printing, imaging sensors, image processing and image transmission are providing new possibilities for creating and conveying visual content. In an age in which images and video are ubiquitous and where mobile, satellite, and three-dimensional (3-D) imaging have become ordinary experiences, quantification of the performance of modern imaging systems requires appropriate approaches. At the end of the imaging chain, a human observer must decide whether images and video are of a satisfactory visual quality. Hence the measurement and modeling of perceived image quality is of crucial importance, not only in visual arts and commercial applications but also in scientific and entertainment environments. Advances in our understanding of the human visual system offer new possibilities for creating visually superior imaging systems and promise more accurate modeling of image quality. As a result, there is a profusion of new research on imaging performance and perceived quality.
Abstract:
The HIRDLS instrument contains 21 spectral channels spanning a wavelength range from 6 to 18 µm. For each of these channels the spectral bandwidth and position are isolated by an interference bandpass filter at 301 K placed at an intermediate focal plane of the instrument. A second filter cooled to 65 K, positioned at the same wavelength but designed with a wider bandwidth, is placed directly in front of each cooled detector element to reduce stray radiation from internally reflected in-band signals and to improve the out-of-band blocking. This paper describes the process of determining the spectral requirements for the two bandpass filters and the antireflection coatings used on the lenses and dewar window of the instrument. This process uses a system throughput performance approach taking the instrument spectral specification as a target. It takes into account the spectral characteristics of the transmissive optical materials, the relative spectral response of the detectors, thermal emission from the instrument, and the predicted atmospheric signal to determine the radiance profile for each channel. Using this design approach an optimal design for the filters can be achieved, minimising the number of layers to improve the in-band transmission and to aid manufacture. The use of this design method also permits the instrument spectral performance to be verified using the measured response from manufactured components. The spectral calculations for an example channel are discussed, together with the spreadsheet calculation method. All the contributions made by the spectrally active components to the resulting instrument channel throughput are identified and presented.
Abstract:
System aspects of filter radiometer optics used to sense planetary atmospheres are described. The lenses, dichroic beamsplitters and filters in the longwave channels of the Mars Observer PMIRR Pressure Modulator Infrared Radiometer instrument are assessed individually, and as systems, at wavelengths of 20.7 µm, 31.9 µm and 47.2 µm. A window filter and a longwave calibration filter of the SCARAB Earth observer instrument are assessed similarly.
Abstract:
Many different performance measures have been developed to evaluate field predictions in meteorology. However, a researcher or practitioner encountering a new or unfamiliar measure may have difficulty in interpreting its results, which may lead them to avoid new measures and rely on those that are familiar. In the context of evaluating forecasts of extreme events for hydrological applications, this article aims to promote the use of a range of performance measures. Several types of performance measures are introduced in order to demonstrate a six-step approach to tackling a new measure. Using the example of the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble precipitation predictions for the Danube floods of July and August 2002, it is shown how to use new performance measures with this approach and how to choose between different performance measures based on their suitability for the task at hand. Copyright © 2008 Royal Meteorological Society
Abstract:
Sensitivity and specificity are measures that allow us to evaluate the performance of a diagnostic test. In practice, it is common to have situations where the real disease state cannot be verified for a proportion of the selected individuals, since verification may require an invasive procedure, as occurs with biopsy. This happens, as a special case, in the diagnosis of prostate cancer, and in any other situation where verification involves risk, is impracticable or unethical, or has a high cost. In such cases, it is common to evaluate diagnostic tests using only the information from verified individuals. This procedure can lead to biased results, known as verification or workup bias. In this paper, we introduce a Bayesian approach to estimate the sensitivity and specificity of two diagnostic tests considering both verified and unverified individuals, a result that generalizes the usual situation based on only one diagnostic test.
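The Bayesian machinery underlying such estimates can be sketched for the simple special case where every individual is verified and there is a single test; the paper's contribution is precisely the generalization to two tests with unverified individuals, which this toy example (with invented counts) does not attempt.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fully verified data for ONE diagnostic test.
tp, fn = 45, 5    # diseased subjects: test positive / test negative
tn, fp = 80, 20   # healthy subjects: test negative / test positive

# With Beta(1, 1) priors, conjugacy gives Beta posteriors directly:
# sensitivity ~ Beta(tp + 1, fn + 1), specificity ~ Beta(tn + 1, fp + 1).
sens_post = rng.beta(tp + 1, fn + 1, size=50_000)
spec_post = rng.beta(tn + 1, fp + 1, size=50_000)

for name, post in [("sensitivity", sens_post), ("specificity", spec_post)]:
    lo, hi = np.quantile(post, [0.025, 0.975])
    print(f"{name}: posterior mean {post.mean():.3f}, 95% CrI [{lo:.3f}, {hi:.3f}]")
```

When verification is incomplete, this closed-form simplicity is lost, which is why the paper models verified and unverified individuals jointly rather than discarding the unverified ones.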
Abstract:
In Sweden, 90% of solar heating systems are combined solar domestic hot water and heating systems (SDHW&H), so-called combisystems. These generally supply most of the domestic hot water needs during the summer and have enough capacity to supply some energy to the heating system during spring and autumn. This paper describes a standard Swedish combisystem, how its output varies with heating load and climate within Sweden, and how it can be increased with improved system design. A base case is defined using the standard combisystem, a modern Swedish single-family house and the climate of Stockholm. Using the simulation program Trnsys, parametric studies have been performed on the base case and improved system designs. The solar fraction could be increased from 17.1% for the base case to 22.6% for the best system design, given the same system size, collector type and load. A short analysis of the costs of the changed system designs is given, showing that payback times for the additional investment range from 5 to 8 years. Measurements on system components in the laboratory have been used to verify the simulation models used. More work is being carried out in order to find even better system designs, and further improvements in system performance are expected.
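The solar fraction quoted in the abstract is simply the ratio of solar energy delivered to the total heat load. The sketch below uses a hypothetical annual load chosen so the figures reproduce the abstract's 17.1% and 22.6% cases; the actual loads and yields are not given in the abstract.

```python
# Solar fraction = solar energy delivered / total heat load.
def solar_fraction(q_solar_kwh: float, q_load_kwh: float) -> float:
    return q_solar_kwh / q_load_kwh

q_load = 10_000.0  # assumed annual heat load in kWh (hypothetical)

base = solar_fraction(1_710.0, q_load)      # base-case combisystem
improved = solar_fraction(2_260.0, q_load)  # best improved design

print(f"base case: {base:.1%}, improved design: {improved:.1%}")
```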
Abstract:
The role of judicial systems in determining economic performance has gained increasing attention in recent years. Nonetheless, the literature lacks a clearly articulated framework for examining how judicial systems influence the investment and production decisions of economic agents. This paper tries to fill this gap. It examines what constitutes a well-functioning judiciary, analyzes how dysfunctional judicial systems compromise economic growth, and reviews the relevant empirical literature. It concludes with some remarks about why, despite the widespread perception that well-functioning legal and judicial systems are key to the success of market-oriented reforms in developing and transition countries, judicial reform has lagged so far behind other reforms.
Abstract:
Much has been researched and discussed about the role played by knowledge in organizations. We are witnessing the establishment of the knowledge economy, and this "new economy" brings with it a whole complex system of metrics and evaluations from which it cannot be dissociated. Given their importance, knowledge management initiatives must be continually assessed to verify whether they are progressing towards their goals. Thus, good measurement practices should include not only how the organization quantifies its knowledge capital, but also how resources are allocated to support its growth. With these aspects in mind, this paper presents an approach to a model for knowledge extraction using an ERP system, suggesting the establishment of a set of indicators for assessing organizational performance. The objective is to evaluate the implementation of knowledge management projects and thus observe the general development of the organization.
Abstract:
Since the 1980s, huge efforts have been made to utilise renewable energy sources to generate electric power. One of the interesting issues concerning embedded generators is the question of their optimal placement and sizing. This paper reports an investigation of the impact of integrating embedded generators on the overall steady-state performance of distribution networks, using the theorem of superposition. A set of distribution system indices is proposed to observe the performance of distribution networks with embedded generators. Results obtained from a case study using an IEEE test network are presented and discussed.
Abstract:
Two systems of bus driver compensation exist in Santiago, Chile. The majority of drivers are paid per passenger transported, which leads to drivers trying to maximize the number of passengers each one conveys. Some of these effects are beneficial, such as a more active effort to minimize the problem of bus bunching, while others, such as aggressive driving, can be harmful. Drivers are said to "race" and the term "War for the Fare" is commonly used. Drivers also pay freelance workers called "sapos" to provide spacing information. Similar phenomena occur in other Latin American capitals. The other system, a fixed wage, is used by 2 companies holding recently awarded concessions for routes feeding metro stations. This paper discusses, quantitatively and qualitatively, the effects of these two compensation systems on accidents, quality of service, attitudes of both users and drivers, and average waiting times for passengers.
Abstract:
Transportation corridors in megaregions present a unique challenge for planners because of the high concentration of development, complex interjurisdictional issues, and history of independent development of core urban centers. The concept of resilience, as applied to megaregions, can be used to better understand the performance of these corridors. Resiliency is the ability to recover from or adjust easily to change. Resiliency performance measures can be expanded on for application to megaregions throughout the United States. When applied to transportation corridors in megaregions and represented by performance measures such as redundancy, continuity, connectivity, and travel time reliability, the concept of resiliency captures the spatial and temporal relationships between the attributes of a corridor, a network, and neighboring facilities over time at the regional and local levels. This paper focuses on the development of performance measures for evaluating corridor resiliency, as well as a plan for implementing analysis methods at the jurisdictional level. The transportation corridor between Boston, Massachusetts, and Washington, D.C., is used as a case study to demonstrate the applicability of these measures to megaregions throughout the country.
Abstract:
Patients with amnestic mild cognitive impairment are at high risk for developing Alzheimer's disease. Besides episodic memory dysfunction, they show deficits in accessing contextual knowledge that further specifies a general spatial navigation task or an executive function (EF) task such as virtual action planning. There has been only one previous work with virtual reality using a virtual action planning supermarket for the diagnosis of mild cognitive impairment. The authors of that study examined the feasibility and validity of the virtual action planning supermarket (VAP-S) for the diagnosis of patients with mild cognitive impairment (MCI) and found that the VAP-S is a viable tool to assess EF deficits. In our study we employed the in-house platform of a virtual action planning museum (VAP-M) and a sample of 25 MCI patients and 25 controls, in order to investigate deficits in spatial navigation, prospective memory and executive function. In addition, we used the morphology of late components in event-related potential (ERP) responses as a marker for cognitive dysfunction. The related measurements were fed to a common classification scheme, facilitating the direct comparison of both approaches. Our results indicate that both the VAP-M and ERP averages were able to differentiate between healthy elders and patients with amnestic mild cognitive impairment, and agree with the findings of the virtual action planning supermarket (VAP-S). The sensitivity (specificity) was 100% (98%) for the VAP-M data and 87% (90%) for the ERP responses. Considering that ERPs have proven to advance the early detection and diagnosis of "presymptomatic AD", the suggested VAP-M platform appears to be an appealing alternative.
Abstract:
BACKGROUND AND PURPOSE We report on workflow and process-based performance measures and their effect on clinical outcome in Solitaire FR Thrombectomy for Acute Revascularization (STAR), a multicenter, prospective, single-arm study of Solitaire FR thrombectomy in large vessel anterior circulation stroke patients. METHODS Two hundred two patients were enrolled across 14 centers in Europe, Canada, and Australia. The following time intervals were measured: stroke onset to hospital arrival, hospital arrival to baseline imaging, baseline imaging to groin puncture, groin puncture to first stent deployment, and first stent deployment to reperfusion. Effects of time of day, general anesthesia use, and multimodal imaging on workflow were evaluated. Patient characteristics and workflow processes associated with prolonged interval times and good clinical outcome (90-day modified Rankin score, 0-2) were analyzed. RESULTS Median times were onset of stroke to hospital arrival, 123 minutes (interquartile range, 163 minutes); hospital arrival to thrombolysis in cerebral infarction (TICI) 2b/3 or final digital subtraction angiography, 133 minutes (interquartile range, 99 minutes); and baseline imaging to groin puncture, 86 minutes (interquartile range, 24 minutes). Time from baseline imaging to puncture was prolonged in patients receiving intravenous tissue-type plasminogen activator (32-minute mean delay) and when magnetic resonance-based imaging at baseline was used (18-minute mean delay). Extracranial carotid disease delayed puncture to first stent deployment time on average by 25 minutes. For each 1-hour increase in stroke onset to final digital subtraction angiography (or TICI 2b/3) time, odds of good clinical outcome decreased by 38%. CONCLUSIONS Interval times in the STAR study reflect current intra-arterial therapy for patients with acute ischemic stroke. Improving workflow metrics can further improve clinical outcome. 
CLINICAL TRIAL REGISTRATION: URL http://www.clinicaltrials.gov. Unique identifier: NCT01327989.
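The abstract's key effect size, a 38% decrease in the odds of good outcome per hour of delay, corresponds to an odds ratio of 0.62 per hour. The small sketch below shows how such an odds ratio translates into probabilities; the 50% baseline probability is an assumed illustrative value, not a figure from the study.

```python
# Each 1-hour increase in onset-to-reperfusion time lowers the odds of a good
# outcome (90-day mRS 0-2) by 38%, i.e. an odds ratio of 0.62 per hour.
OR_PER_HOUR = 1.0 - 0.38

def adjusted_probability(p_baseline: float, extra_hours: float) -> float:
    """Scale the baseline odds by OR_PER_HOUR ** extra_hours; return a probability."""
    odds = p_baseline / (1.0 - p_baseline) * OR_PER_HOUR ** extra_hours
    return odds / (1.0 + odds)

# Assumed 50% baseline chance of good outcome:
print(f"+1 h delay: {adjusted_probability(0.5, 1):.1%} chance of good outcome")
print(f"+2 h delay: {adjusted_probability(0.5, 2):.1%} chance of good outcome")
```

Because odds ratios compound multiplicatively, a two-hour delay scales the odds by 0.62 squared, which is why workflow metrics such as imaging-to-puncture time get so much attention.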