959 results for Average Case Complexity
Abstract:
OBJECTIVES: To document biopsychosocial profiles of patients with rheumatoid arthritis (RA) by means of the INTERMED and to correlate the results with conventional methods of disease assessment and health care utilization. METHODS: Patients with RA (n = 75) were evaluated with the INTERMED, an instrument for assessing case complexity and care needs. Based on their INTERMED scores, patients were compared with regard to severity of illness, functional status, and health care utilization. RESULTS: In cluster analysis, a 2-cluster solution emerged, with about half of the patients characterized as complex. Complex patients scoring especially high in the psychosocial domain of the INTERMED were disabled significantly more often and took more psychotropic drugs. Although the 2 patient groups did not differ in severity of illness and functional status, complex patients rated their illness as more severe on subjective measures and on most items of the Medical Outcomes Study Short Form 36. Complex patients showed increased health care utilization despite a similar biologic profile. CONCLUSIONS: The INTERMED identified complex patients with increased health care utilization, provided meaningful and comprehensive patient information, and proved to be easy to implement and advantageous compared with conventional methods of disease assessment. Intervention studies will have to demonstrate whether management strategies based on INTERMED profiles can improve treatment response and outcome of complex patients.
Abstract:
The aim of this study was to assess a population of patients with diabetes mellitus by means of the INTERMED, a classification system for case complexity integrating biological, psychosocial and health care-related aspects of disease. The main hypothesis was that the INTERMED would identify distinct clusters of patients with different degrees of case complexity and different clinical outcomes. Patients (n=61) referred to a tertiary reference care centre were evaluated with the INTERMED and followed for 9 months for HbA1c values and for 6 months for health care utilisation. Cluster analysis revealed two clusters: cluster 1 (62%) consisting of complex patients with high INTERMED scores and cluster 2 (38%) consisting of less complex patients with lower INTERMED scores. Cluster 1 patients showed significantly higher HbA1c values and a tendency toward increased health care utilisation. Total INTERMED scores were significantly related to HbA1c and explained 21% of its variance. In conclusion, different clusters of patients with different degrees of case complexity were identified by the INTERMED, allowing the detection of highly complex patients at risk for poor diabetes control. The INTERMED therefore provides an objective basis for clinical and scientific progress in diabetes mellitus. Ongoing intervention studies will have to confirm these preliminary data and to evaluate whether management strategies based on the INTERMED profiles will improve outcomes.
Abstract:
This study extends the standard econometric treatment of appellate court outcomes by 1) considering the role of decision-maker effort and case complexity, and 2) adopting a multi-categorical selection process of appealed cases. We find evidence of appellate courts being affected by both the effort made by first-stage decision makers and case complexity. This illustrates the value of widening the narrowly defined focus on heterogeneity in individual-specific preferences that characterises many applied studies on legal decision-making. Further, the majority of appealed cases represent non-random sub-samples and the multi-categorical selection process appears to offer advantages over the more commonly used dichotomous selection models.
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores to run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial in satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is “power estimation”. Having a fast and accurate method for processor power estimation at design time helps the designer explore a large space of design possibilities and make the optimal choices for developing a power-efficient processor. Likewise, understanding the processor power dissipation behaviour of a specific software application is the key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process, and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code that repeat during execution and then building the power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) for the 8051 microcontroller. The ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented to estimate the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and a more than 100-fold speedup compared with conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during the execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and an orders-of-magnitude speedup over the simulation-based method.
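The abstract above ties an average-case energy model to the expected number of comparisons performed by insertion sort. As a minimal illustration of that idea (not the thesis's MOQA-derived model), the sketch below uses the textbook approximation of roughly n(n-1)/4 comparisons on average for n distinct items in random order, combined with hypothetical fixed and per-comparison energy coefficients that would, in practice, be fitted from measurements on the target core (e.g. LEON3).

```python
def avg_comparisons_insertion_sort(n: int) -> float:
    """Textbook average-case comparison count for insertion sort on n
    distinct items in random order: roughly n*(n-1)/4, ignoring the
    lower-order terms of the exact expectation."""
    return n * (n - 1) / 4.0


def predicted_energy(n: int, e_fixed: float, e_per_cmp: float) -> float:
    """Hypothetical average-case energy estimate (in joules): a fixed
    overhead plus a per-comparison cost scaled by the expected number
    of comparisons. The coefficients are illustrative placeholders."""
    return e_fixed + e_per_cmp * avg_comparisons_insertion_sort(n)


# Example with made-up coefficients; real values would be measured.
if __name__ == "__main__":
    print(predicted_energy(1000, e_fixed=1e-6, e_per_cmp=2e-9))
```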
Abstract:
This thesis presents three important results in visual object recognition based on shape. (1) A new algorithm (RAST: Recognition by Adaptive Subdivisions of Transformation space) is presented that has lower average-case complexity than any known recognition algorithm. (2) It is shown, both theoretically and empirically, that representing 3D objects as collections of 2D views (the "View-Based Approximation") is feasible and affects the reliability of 3D recognition systems no more than other commonly made approximations. (3) The problem of recognition in cluttered scenes is considered from a Bayesian perspective; the commonly used "bounded-error error measure" is demonstrated to correspond to an independence assumption. It is shown that by modeling the statistical properties of real scenes better, objects can be recognized more reliably.
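Since the abstract names the RAST algorithm only briefly, here is a heavily simplified, translation-only sketch of the adaptive-subdivision idea: translation space is explored best-first, each box of translations gets an optimistic upper bound on how many model points could match, and boxes that cannot beat the best solution found so far are pruned. The function names, the 2D translation-only setting and the tolerance parameter are illustrative assumptions, not the thesis's actual formulation.

```python
import heapq
import math


def match_quality(t, model, image, eps):
    """Number of model points that land within eps of some image point
    under the translation t = (tx, ty)."""
    tx, ty = t
    return sum(
        1
        for (mx, my) in model
        if any(math.hypot(mx + tx - ix, my + ty - iy) <= eps for (ix, iy) in image)
    )


def upper_bound(box, model, image, eps):
    """Optimistic bound: count model points that could match some image
    point under at least one translation inside the box."""
    (xlo, xhi), (ylo, yhi) = box
    count = 0
    for (mx, my) in model:
        for (ix, iy) in image:
            tx, ty = ix - mx, iy - my           # translation mapping m onto i
            dx = max(xlo - tx, 0.0, tx - xhi)   # distance from (tx, ty) to box
            dy = max(ylo - ty, 0.0, ty - yhi)
            if math.hypot(dx, dy) <= eps:
                count += 1
                break
    return count


def rast_translation(model, image, search, eps=1.0, min_size=0.5):
    """Best-first adaptive subdivision of the translation search box."""
    best_t, best_q = None, -1
    heap = [(-upper_bound(search, model, image, eps), search)]
    while heap:
        neg_ub, box = heapq.heappop(heap)
        if -neg_ub <= best_q:                   # cannot improve: prune
            continue
        (xlo, xhi), (ylo, yhi) = box
        cx, cy = (xlo + xhi) / 2, (ylo + yhi) / 2
        q = match_quality((cx, cy), model, image, eps)  # concrete lower bound
        if q > best_q:
            best_t, best_q = (cx, cy), q
        if xhi - xlo <= min_size and yhi - ylo <= min_size:
            continue                            # box is small enough: stop splitting
        if xhi - xlo >= yhi - ylo:              # split the longer side
            children = [((xlo, cx), (ylo, yhi)), ((cx, xhi), (ylo, yhi))]
        else:
            children = [((xlo, xhi), (ylo, cy)), ((xlo, xhi), (cy, yhi))]
        for child in children:
            ub = upper_bound(child, model, image, eps)
            if ub > best_q:
                heapq.heappush(heap, (-ub, child))
    return best_t, best_q


# Example: a triangle model hidden in a cluttered image, shifted by (5, 3).
model = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
image = [(5.0, 3.0), (9.0, 3.0), (5.0, 6.0), (1.0, 7.0), (8.0, 1.0)]
print(rast_translation(model, image, ((-10.0, 10.0), (-10.0, 10.0))))
```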
Abstract:
We generalize the Strong Boneh-Boyen (SBB) signature scheme to sign vectors; we call this scheme GSBB. We show that if a particular (but most natural) average-case reduction from SBB to GSBB exists, then the Strong Diffie-Hellman (SDH) and Computational Diffie-Hellman (CDH) problems have the same worst-case complexity.
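For context, a common textbook presentation of the full (strongly unforgeable) Boneh-Boyen short signature in a bilinear group is sketched below; the paper's exact SBB formulation, and how GSBB extends it to message vectors, may differ from this.

```latex
% Setup: bilinear group G of prime order p with pairing e and generator g.
% Keys:  secret x, y \in \mathbb{Z}_p^*;  public u = g^x, \; v = g^y.
\text{Sign}(m):\quad \text{pick random } r \in \mathbb{Z}_p, \qquad
  \sigma = g^{1/(x + m + y\,r)}, \qquad \text{output } (\sigma, r).
\qquad
\text{Verify}(m, \sigma, r):\quad
  e\!\left(\sigma,\; u \cdot g^{m} \cdot v^{r}\right) \stackrel{?}{=} e(g, g).
```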
Abstract:
A preliminary version of this paper appeared in Proceedings of the 31st IEEE Real-Time Systems Symposium, 2010, pp. 239–248.
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising two different types of processors; such a platform is referred to as a two-type platform. We present two algorithms of low-degree polynomial time complexity, SA and SA-P, each providing the following guarantee. For a given two-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then (i) using SA, it is guaranteed to find such an assignment where the same restriction on task migration applies but given a platform in which processors are 1+α/2 times faster and (ii) SA-P succeeds in finding a task assignment where tasks are not allowed to migrate between processors (non-migrative) but given a platform in which processors are 1+α times faster. The parameter 0<α≤1 is a property of the task set; it is the maximum of all the task utilizations that are no greater than 1. We evaluate the average-case performance of both algorithms by generating task sets randomly and measuring how much faster the processors need to be (upper bounded by 1+α/2 for SA and 1+α for SA-P) for the algorithms to output a feasible task assignment (intra-migrative for SA and non-migrative for SA-P). In our evaluations, for the vast majority of task sets, these algorithms require a significantly smaller processor speedup than indicated by their theoretical bounds. Finally, we consider a special case where no task utilization in the given task set can exceed one, and for this case we (re-)prove the performance guarantees of SA and SA-P. We show, for both algorithms, that changing the adversary from intra-migrative to a more powerful one, namely fully-migrative, in which tasks can migrate between processors of any type, does not degrade the performance guarantees. For this special case, we compare the average-case performance of SA-P and a state-of-the-art algorithm by generating task sets randomly. In our evaluations, SA-P outperforms the state-of-the-art by requiring a much smaller processor speedup and by running orders of magnitude faster.
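As a small illustration of the quantities in this abstract (not of the SA or SA-P algorithms themselves), the sketch below computes α as the largest task utilization not exceeding 1 and the resulting theoretical speedup factors 1+α/2 and 1+α; the task set shown is arbitrary.

```python
def speedup_bounds(utilizations):
    """alpha, per the abstract, is the maximum of all task utilizations
    that are no greater than 1; the theoretical processor speedups are
    then 1 + alpha/2 for SA and 1 + alpha for SA-P."""
    eligible = [u for u in utilizations if u <= 1.0]
    if not eligible:
        raise ValueError("no task utilization is <= 1")
    alpha = max(eligible)
    return 1.0 + alpha / 2.0, 1.0 + alpha


# Arbitrary example task set (utilizations above 1 do not contribute to alpha).
print(speedup_bounds([0.2, 0.75, 0.4, 1.3]))   # -> (1.375, 1.75)
```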
Abstract:
The definition and measurement of output are central issues for hospital administration. When treated cases are considered, hospital output rests on two aspects: the definition of patient classification systems as a methodology for identifying products, and the creation of casemix indices for comparing those products. For their definition and implementation, characteristics related to case complexity (a supply attribute), to case severity (a demand attribute), or a mixture of both may be considered. In turn, the analysis of hospitals' admission profiles and policies becomes more relevant in the context of new experiments planned and under way in the SNS and of the renewed need for evaluation and regulation that follows from them. This study set out to discuss the methodology for computing hospitals' casemix indices, introducing the severity of treated cases as a relevant attribute for that purpose. A sample of 950,443 cases from the 2002 discharge summary database was analysed, with particular attention to the 31 hospitals subsequently constituted as SA hospitals. Three casemix indices were considered: a complexity index (based on the relative weights of the DRGs), a severity index (based on the expected-mortality scale of disease staging, recalibrated for Portugal) and a joint index (the mean of the previous two). The analysis of the complexity, severity and joint indices was found to provide distinct information about the admission profiles of the hospitals considered. The complexity and severity indices show distinct associations with the characteristics of the hospitals and of the patients treated. Moreover, there is a clear difference between medically and surgically treated cases. Nevertheless, across the hospitals analysed as a whole, the hospitals treating the most severe cases were also found to treat the most complex ones; some hospitals for which this does not hold were also identified and, where possible, likely reasons for that behaviour were suggested.
Abstract:
RATIONALE: This study was intended to document the frequency of care complexity in liver transplant candidates, and its association with mood disturbance and poor health-related quality of life (HRQoL). METHODS: Consecutive patients fulfilling inclusion criteria, recruited in three European hospitals, were assessed with INTERMED, a reliable and valid method for the early assessment of bio-psychosocial health risks and needs. Blind to the results, they were also assessed with the Hospital Anxiety and Depression Scale (HADS). HRQoL was documented with the EuroQol and the SF36. Statistical analysis included multivariate and multilevel techniques. RESULTS: Among patients fulfilling inclusion criteria, 60 patients (75.9%) completed the protocol and 38.3% of them were identified as "complex" by INTERMED, but significant between-center differences were found. In support of the working hypothesis, INTERMED scores were significantly associated with all measures of both the SF36 and the EuroQol, and also with the HADS. A one-point increase in the INTERMED score results in a 0.93-point reduction in the EuroQol score and a 20% increase in the HADS score. CONCLUSIONS: INTERMED-measured case complexity is frequent in liver transplant candidates but varies widely between centers. The use of this method captures in one instrument multiple domains of patient status, including mood disturbances and reduced HRQoL.
Abstract:
In this work, we study and compare two percolation algorithms, one developed by Elias and the other by Newman and Ziff, using theoretical tools of algorithm complexity analysis together with an additional algorithm that performs an experimental comparison. This work is divided into three chapters. The first covers the definitions and theorems needed for a more formal mathematical study of percolation. The second presents the techniques used to estimate the complexity of the algorithms, namely worst-case, best-case and average-case analysis. We use worst-case analysis to estimate the complexity of both algorithms so that they can be compared. The last chapter presents several characteristics of each algorithm and, through the theoretical complexity estimates and a comparison of the execution times of the most important part of each one, compares these two important algorithms for simulating percolation.
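Since the abstract compares the percolation algorithms only at a high level, the sketch below shows a union-find core in the spirit of the Newman-Ziff approach for site percolation on an L×L square lattice: sites are occupied one at a time in random order and clusters are merged incrementally. The details of the thesis's implementations (and of Elias's algorithm) may differ.

```python
import random


def newman_ziff_site(L, seed=0):
    """Occupy the sites of an L x L square lattice one at a time in random
    order, merge clusters with union-find as neighbours become occupied,
    and record the largest cluster size after each addition."""
    rng = random.Random(seed)
    n = L * L
    parent = list(range(n))
    size = [1] * n
    occupied = [False] * n

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return size[ra]
        if size[ra] < size[rb]:
            ra, rb = rb, ra
        parent[rb] = ra                     # union by size
        size[ra] += size[rb]
        return size[ra]

    order = list(range(n))
    rng.shuffle(order)
    largest, biggest = [], 1
    for s in order:
        occupied[s] = True
        x, y = s % L, s // L
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < L and 0 <= ny < L and occupied[ny * L + nx]:
                biggest = max(biggest, union(s, ny * L + nx))
        largest.append(biggest)
    return largest                          # largest cluster vs. occupied sites


# Example: fraction of the lattice covered by the largest cluster at the end.
sizes = newman_ziff_site(64)
print(sizes[-1] / (64 * 64))                # 1.0 once every site is occupied
```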
Abstract:
In the last few years, the European Union (EU) has become greatly concerned about the environmental costs of road transport in Europe as a result of the constant growth in the market share of trucks and the steady decline in the market share of railroads. In order to reverse this trend, the EU is promoting the implementation of additional charges for heavy goods vehicles (HGV) on the trunk roads of the EU countries. However, the EU policy is being criticised because it does not address the implementation of charges to internalise the external costs produced by automobiles and other transport modes such as railroad. In this paper, we first describe the evolution of the HGV charging policy in the EU, and then assess its practical implementation across different European countries. Second, and of greater significance, by using the case study of Spain, we evaluate to what extent the current fees on trucks and trains reflect their social marginal costs, and consequently lead to an allocative-efficient outcome. We found that for the average case in Spain the truck industry meets more of the marginal social cost produced by it than does the freight railroad industry. The reason for this lies in the large sums of money paid by truck companies in fuel taxes, and the subsidies that continue to be granted by the government to the railroads.
Abstract:
In this paper, we propose a resource allocation scheme to minimize transmit power for multicast orthogonal frequency division multiple access (OFDMA) systems. The proposed scheme allows users to have different symbol error rates (SER) across subcarriers and guarantees an average bit error rate and transmission rate for all users. We first provide an algorithm to determine the optimal bits and target SER on each subcarrier. Because the worst-case complexity of the optimal algorithm is exponential, we further propose a suboptimal algorithm that separately assigns bits and adjusts SER at lower complexity. Numerical results show that the proposed algorithm can effectively improve the performance of multicast OFDMA systems and that the performance of the suboptimal algorithm is close to that of the optimal one. Copyright © 2012 John Wiley & Sons, Ltd.
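To make the bit-allocation idea concrete, here is a generic greedy bit-loading sketch under the standard SNR-gap approximation P(b) = gamma * (2^b - 1) / gain: each additional bit goes to the subcarrier with the smallest incremental power. This is not the paper's optimal or suboptimal algorithm (which also optimizes per-subcarrier SER targets); the gap value and the channel gains below are illustrative.

```python
def incremental_power(b, gain, gamma):
    """Extra transmit power needed to go from b to b + 1 bits on a subcarrier,
    using the SNR-gap approximation P(b) = gamma * (2**b - 1) / gain."""
    return gamma * (2 ** (b + 1) - 2 ** b) / gain


def greedy_bit_loading(gains, total_bits, gamma=4.0, max_bits=8):
    """Assign total_bits one bit at a time, always to the subcarrier with the
    smallest incremental power, and return the bit vector and total power."""
    bits = [0] * len(gains)
    for _ in range(total_bits):
        candidates = [
            (incremental_power(bits[i], gains[i], gamma), i)
            for i in range(len(gains))
            if bits[i] < max_bits
        ]
        if not candidates:
            raise ValueError("rate target exceeds max_bits on every subcarrier")
        _, best = min(candidates)
        bits[best] += 1
    total_power = sum(gamma * (2 ** b - 1) / g for b, g in zip(bits, gains))
    return bits, total_power


# Example with arbitrary normalized channel gains.
print(greedy_bit_loading([1.0, 0.5, 2.0, 0.25], total_bits=10))
```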