863 results for Score metric
Abstract:
2000 Mathematics Subject Classification: 35B40, 35L15.
Abstract:
In this paper a full analytic model for pause intensity (PI), a no-reference metric for video quality assessment, is presented. The model is built upon the video playout buffer behavior at the client side and also encompasses the characteristics of a TCP network. Video streaming via TCP produces impairments in play continuity, which are not typically reflected in current objective metrics such as PSNR and SSIM. Recently the buffer underrun frequency/probability has been used to characterize the buffer behavior and as a measurement for performance optimization. But we show, using subjective testing, that underrun frequency cannot reflect the viewers' quality of experience for TCP-based streaming. We also demonstrate that PI is a comprehensive metric made up of a combination of phenomena observed in the playout buffer. The analytical model in this work is verified with simulations carried out on ns-2, showing that the two sets of results closely match. The effectiveness of the PI metric has also been demonstrated by subjective testing on a range of video clips, where PI values exhibit a good correlation with the viewers' opinion scores. © 2012 IEEE.
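The abstract defines PI only qualitatively. As a minimal sketch of the buffer mechanics it describes (the arrival model, all parameter values, and the definition of pause intensity as the fraction of session time spent stalled are assumptions for illustration, not the paper's analytic model):

```python
import random

def simulate_playout(duration_s=300.0, playback_rate=1.0, seed=42,
                     mean_fill_rate=1.05, startup_s=5.0, rebuffer_to=2.0):
    """Toy client-side playout-buffer simulation.

    The buffer holds seconds of video: it fills at a noisy rate
    (mimicking TCP throughput variation) and drains at playback_rate
    while playing. An empty buffer stalls playback until it refills.
    """
    random.seed(seed)
    buffered, playing = startup_s, True
    pause_time, underruns, dt, t = 0.0, 0, 0.1, 0.0
    while t < duration_s:
        buffered += random.expovariate(1.0 / mean_fill_rate) * dt
        if playing:
            buffered -= playback_rate * dt
            if buffered <= 0.0:          # buffer underrun: playback pauses
                buffered, playing = 0.0, False
                underruns += 1
        else:
            pause_time += dt
            if buffered >= rebuffer_to:  # resume once partially refilled
                playing = True
        t += dt
    return pause_time / duration_s, underruns / duration_s

pi, freq = simulate_playout()
print(f"pause intensity ~ {pi:.3f}, underrun frequency ~ {freq:.4f}/s")
```

Such a simulation makes the paper's point concrete: two sessions can show the same underrun frequency yet very different total stall time, which is what a PI-style metric captures.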
Abstract:
This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computer resources to or from the engine. Our solution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in the light of changes in the query workload, the ideal number of processors p to keep active at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined which groups of search data will be distributed across the processors. ROP is the problem of redistributing this data onto processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, the incremental algorithm speeds up index computation 2–10 times while maintaining similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine a new size for the search engine. When combined, these algorithms yield an adaptive algorithm that is able to adjust the search engine size under a variable workload.
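The thesis abstract does not spell out the CNP algorithm, so the following is only a sketch of a deterministic load-based resizing rule of the kind described (the utilisation thresholds, per-processor capacity and all names are assumptions):

```python
def target_processors(current_p, observed_qps, capacity_qps=50.0,
                      high_water=0.8, low_water=0.4, min_p=1, max_p=64):
    """Deterministic resizing rule (illustrative only).

    Scale out when utilisation exceeds high_water, scale in when it
    drops below low_water; otherwise keep the current size.
    """
    utilisation = observed_qps / (current_p * capacity_qps)
    if utilisation > high_water:
        # choose p so utilisation lands back at or below high_water
        needed = int(-(-observed_qps // (high_water * capacity_qps)))  # ceil
        return min(max_p, max(current_p + 1, needed))
    if utilisation < low_water and current_p > min_p:
        return current_p - 1   # shrink one step at a time to avoid thrashing
    return current_p

print(target_processors(current_p=8, observed_qps=420.0))  # -> 11 (scale out)
```

A deterministic rule of this shape keeps resizing decisions reproducible, which matters when each decision triggers an NGP/ROP regrouping step.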
Abstract:
Attempts at carrying out terrorist attacks have become more prevalent. As a result, an increasing number of countries have become particularly vigilant against the means by which terrorists raise funds to finance their violent acts against human life and property. Among the many counter-terrorism agencies in operation, governments have set up financial intelligence units (FIUs) within their borders for the purpose of tracking down terrorists' funds. By investigating reported suspicious transactions, FIUs attempt to weed out financial criminals who use these illegal funds to finance terrorist activity. The prominent role played by FIUs means that their performance is always under the spotlight. By interviewing experts and conducting surveys of those associated with the fight against financial crime, this study investigated perceptions of FIU performance on a comparative basis between American and non-American FIUs. The target group of experts included financial institution personnel, civilian agents, law enforcement personnel, academicians, and consultants. Questions for the interviews and surveys were based on Kaplan and Norton's Balanced Scorecard (BSC) methodology. One of the objectives of this study was to help determine the suitability of the BSC to this arena. While the FIUs in this study have concentrated on performance by measuring outputs such as the number of suspicious transaction reports investigated, this study calls for a focus on outcomes involving all the parties responsible for financial criminal investigations. It is only through such an integrated approach that these various entities will be able to improve performance in solving financial crime. Experts in financial intelligence strongly believed that the quality and timeliness of intelligence were more important than keeping track of the number of suspicious transaction reports. Finally, this study concluded that the BSC can be appropriately applied to the arena of financial crime prevention even though the emphasis is markedly different from that in the private sector. While priority in the private sector is given to financial outcomes, in this arena employee growth and internal processes were perceived as most important in achieving a satisfactory outcome.
Abstract:
The Comprehensive Everglades Restoration Plan (CERP) attempts to restore hydrology in the Northern and Southern Estuaries of Florida. Reefs of the Eastern oyster Crassostrea virginica are a dominant feature of the estuaries along the Southwest Florida coast. Oysters are benthic, sessile, filter-feeding organisms that provide ecosystem services by filtering the water column and providing food, shelter and habitat for associated organisms. As such, the species is an excellent sentinel organism for examining the impacts of restoration on estuarine ecosystems. The implementation of CERP attempts to improve the hydrology and the spatial and structural characteristics of oyster reefs, the recruitment and survivorship of C. virginica, and the reef-associated communities of organisms. This project links biological responses and environmental conditions relative to hydrological changes as a means of assessing positive or negative trends in oyster responses and population trends. Using oyster responses, we have developed a communication tool, the Stoplight Report Card, based on CERP performance measures, that can distinguish between responses to restoration and natural patterns. The Stoplight Report Card uses Monitoring and Assessment Program (MAP) performance measures to grade an estuary's response to changes brought about by anthropogenic input or restoration activities. It consists of both a suitability index score for each organism metric and a trend score (−: decreasing trend, +/−: no change in trend, +: increasing trend). Based on these two measures, a component score (e.g., living density) is calculated by averaging the suitability index score and the trend score. The final index score is obtained by taking the geometric mean of the component scores, which is then translated into a stoplight color for success (green), caution (yellow), or failure (red). Based on the data available for oyster populations and the responses of oysters in the Caloosahatchee Estuary, the system is currently at stage "caution." This communication tool instantly conveys the status of the indicator and its suitability, while trend curves provide information on progress toward reaching a target. Furthermore, the tool has the advantage of being applicable regionally, by species, and collectively, in concert with other species, system-wide.
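The abstract fully specifies the arithmetic of the report card (average of suitability and trend per component, geometric mean across components); the numeric scale, trend encoding and colour cut-offs in the sketch below are assumptions for illustration:

```python
from statistics import geometric_mean

def stoplight(components):
    """components: {metric: (suitability, trend)}, both scored on 0-1.

    Per the abstract: component score = average of the suitability
    index score and the trend score; final index = geometric mean of
    the component scores. The colour cut-offs are assumed.
    """
    scores = [(suit + trend) / 2 for suit, trend in components.values()]
    index = geometric_mean(scores)
    colour = ("green (success)" if index >= 0.7
              else "yellow (caution)" if index >= 0.4
              else "red (failure)")
    return index, colour

# Trend encoded as: 0.25 decreasing (-), 0.5 no change (+/-), 0.75 increasing (+)
print(stoplight({"living_density": (0.60, 0.50),
                 "recruitment":    (0.50, 0.50),
                 "survivorship":   (0.40, 0.25)}))   # -> ~0.45, "yellow (caution)"
```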
Abstract:
The sizing of nursing human resources is an essential management tool to meet the needs of patients and the institution. In the Intensive Care Unit, where the most critical patients are treated and the most advanced life-support equipment is used, requiring a large number of skilled workers, the use of specific indicators to measure the workload of the team becomes necessary. The Nursing Activities Score (NAS) is a validated instrument for measuring nursing workload in the Intensive Care Unit that has demonstrated effectiveness. This is a cross-sectional study with the primary objective of assessing the workload of the nursing staff in an adult Intensive Care Unit through the application of the Nursing Activities Score. The study was conducted in a private hospital specialized in the treatment of patients with cancer, located in the city of Natal (Rio Grande do Norte, Brazil). The study was approved by the Research Ethics Committee of the hospital (Protocol number 558.799; CAAE 24966013.7.0000.5293). For data collection, a form recording the sociodemographic characteristics of the patients was used; the Nursing Activities Score was used to identify the workload of the nursing staff; and the instrument of Perroca, which classifies patients and provides data on their need for nursing care, was also applied. The collected data were analyzed using a statistical package. Categorical variables were described by absolute and relative frequency, and numerical variables by median and interquartile range. For the inferential analysis, the Spearman test, the Wald chi-square test, the Kruskal-Wallis test and the Mann-Whitney test were used. Variables with p values <0.05 were considered statistically significant. The evaluation of the overall NAS averages over the first 15 days of hospitalization was performed by Generalized Estimating Equations (GEE), adjusted for length of hospitalization. The sample consisted of 40 patients, enrolled from June to August 2014. The results showed a mean age of 62.1 years (±23.4) with a female predominance (57.5%). The most frequent type of treatment was clinical (60.0%), with an average stay of 6.9 days (±6.5). Regarding origin, most patients (35%) came from the Surgical Center. The mortality rate was 27.5%. A total of 277 NAS and Perroca measurements were performed, yielding averages of 69.8% (±24.1) and 22.7% (±4.2), respectively. There was an association between clinical outcome and the value of the Nursing Activities Score in 24 hours (p<0.001), and between the degree of dependency of patients and nursing workload (rp 0.653, p<0.001). The nursing staff workload in the analyzed period was high, showing that hospitalized patients required a high level of care. These findings provide support for staff sizing and the allocation of human resources in the unit, in order to achieve greater safety and patient satisfaction as a result of intensive care, as well as an environment conducive to quality of life for the professionals.
Abstract:
Funding: The NNUH Stroke and TIA Register is maintained by the NNUH NHS Foundation Trust Stroke Services, and data management for this study is supported by the NNUH Research and Development Department through Research Capability Funds.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
OBJECTIVE: The Thrombolysis in Myocardial Infarction (TIMI) score is a validated tool for risk stratification of acute coronary syndrome. We hypothesized that the TIMI risk score would be able to risk stratify patients placed in an observation unit for acute coronary syndrome. METHODS: Retrospective cohort study of consecutive adult patients placed in the observation unit of an urban academic hospital emergency department with an average annual census of 65,000, between 2004 and 2007. Exclusion criteria included elevated initial cardiac biomarkers, ST segment changes on ECG, unstable vital signs, or unstable arrhythmias. A composite of significant coronary artery disease (CAD) indicators, including diagnosis of myocardial infarction, percutaneous coronary intervention, coronary artery bypass surgery, or death within 30 days and 1 year, was abstracted via chart review and financial record query. The entire cohort was stratified by TIMI risk score (0-7), and composite event rates with 95% confidence intervals were calculated. RESULTS: In total 2228 patients were analyzed. Average age was 54.5 years, and 42.0% were male. The overall median TIMI risk score was 1. Eighty (3.6%) patients had 30-day and 119 (5.3%) had 1-year CAD indicators. There was a trend toward an increasing rate of composite CAD indicators at 30 days and 1 year with increasing TIMI score, ranging from a 1.2% event rate at 30 days and 1.9% at 1 year for a TIMI score of 0 to 12.5% at 30 days and 21.4% at 1 year for TIMI ≥ 4. CONCLUSIONS: In an observation unit cohort, the TIMI risk score is able to stratify patients into low-, moderate-, and high-risk groups.
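The abstract reports stratified event rates with 95% confidence intervals; as an illustration of that computation (the per-stratum counts below are invented placeholders, not the study's data, and the study's exact interval method is not stated), a Wilson score interval per TIMI stratum can be computed as follows:

```python
from math import sqrt

def wilson_ci(events, n, z=1.96):
    """95% Wilson score interval for a binomial event rate."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

# Hypothetical 30-day counts per TIMI stratum (NOT the study's data)
strata = {0: (9, 750), 1: (15, 700), 2: (20, 450), 3: (16, 200), 4: (20, 160)}
for score, (events, n) in strata.items():
    rate, lo, hi = wilson_ci(events, n)
    print(f"TIMI {score}: {rate:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```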
Abstract:
Estimation of the absolute risk of cardiovascular disease (CVD), preferably with population-specific risk charts, has become a cornerstone of CVD primary prevention. Regular recalibration of risk charts may be necessary due to decreasing CVD rates and CVD risk factor levels. The SCORE risk charts for fatal CVD risk assessment were first calibrated for Germany with 1998 risk factor level data and 1999 mortality statistics. We present an update of these risk charts based on the SCORE methodology, including estimates of relative risks from SCORE, risk factor levels from the German Health Interview and Examination Survey for Adults 2008-11 (DEGS1) and official mortality statistics from 2012. Competing-risks methods were applied and estimates were independently validated. Updated risk charts were calculated based on cholesterol, smoking and systolic blood pressure risk factor levels, sex and 5-year age groups. The absolute 10-year risk estimates of fatal CVD were lower according to the updated risk charts compared to the first calibration for Germany. In a nationwide sample of 3062 adults aged 40-65 years free of major CVD from DEGS1, the mean 10-year risk of fatal CVD estimated by the updated charts was lower by 29%, and the estimated proportion of high-risk people (10-year risk ≥ 5%) by 50%, compared to the older risk charts. This recalibration shows a need for regular updates of risk charts according to changes in mortality and risk factor levels in order to sustain the identification of people with a high CVD risk.
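The recalibration follows the published SCORE methodology; the sketch below only schematises how one chart cell combines a baseline survival (recalibrated from national mortality statistics) with relative risks from an individual's risk factor profile. All coefficients, reference values and baseline survival figures are placeholders, not the German calibration:

```python
from math import exp

def ten_year_fatal_cvd_risk(s0_age, s0_age_plus10, chol_mmol, sbp, smoker,
                            b_chol=0.24, b_sbp=0.018, b_smoke=0.71,
                            chol_ref=6.0, sbp_ref=120.0):
    """Schematic SCORE-style risk for one chart cell.

    s0_age, s0_age_plus10: baseline survival to the current age and to
    age+10, derived from (recalibrated) national mortality statistics.
    The linear predictor below uses placeholder coefficients.
    """
    w = (b_chol * (chol_mmol - chol_ref)
         + b_sbp * (sbp - sbp_ref)
         + b_smoke * (1 if smoker else 0))
    # conditional 10-year survival under a proportional-hazards assumption
    s10 = (s0_age_plus10 / s0_age) ** exp(w)
    return 1.0 - s10

# e.g. a 60-year-old smoker, cholesterol 6.5 mmol/L, SBP 150 mmHg
print(f"{ten_year_fatal_cvd_risk(0.995, 0.970, 6.5, 150, True):.1%}")
```

On this view, updating the charts amounts to replacing the baseline survival terms with values derived from current mortality statistics and re-tabulating the cells over the risk factor grid.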