Abstract:
This dissertation discusses structural-electrostatic modeling techniques, genetic-algorithm-based optimization and control design for electrostatic micro devices. First, an alternative modeling technique, the interpolated force model, for electrostatic micro devices is discussed. The method provides improved computational efficiency relative to a benchmark model, as well as improved accuracy for irregular electrode configurations relative to a common approximate model, the parallel plate approximation model. For the configuration most similar to two parallel plates, expected to be the best case scenario for the approximate model, both the parallel plate approximation model and the interpolated force model maintained less than 2.2% error in static deflection compared to the benchmark model. For the configuration expected to be the worst case scenario for the parallel plate approximation model, the interpolated force model maintained less than 2.9% error in static deflection, while the parallel plate approximation model was incapable of handling the configuration. Second, genetic-algorithm-based optimization is shown to improve the design of an electrostatic micro sensor. The design space is enlarged from published design spaces to include the configuration of both sensing and actuation electrodes, material distribution, actuation voltage and other geometric dimensions. For a small population, the design was improved by approximately a factor of 6 over 15 generations to a fitness value of 3.2 fF. For a larger population seeded with the best configurations of the previous optimization, the design was improved by another 7% in 5 generations to a fitness value of 3.0 fF. Third, a learning control algorithm is presented that reduces the closing time of a radiofrequency microelectromechanical systems (MEMS) switch by minimizing bounce while maintaining robustness to fabrication variability.
Electrostatic actuation of the plate causes pull-in with high impact velocities, which are difficult to control due to parameter variations from part to part. A single degree-of-freedom model was utilized to design a learning control algorithm that shapes the actuation voltage based on the open/closed state of the switch. Experiments on three test switches show that after 5-10 iterations, the learning algorithm lands the switch with an impact velocity not exceeding 0.2 m/s, eliminating bounce.
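The iterative learning idea can be illustrated with a toy sketch. The plant model, gains and voltage constants below are invented stand-ins, not the dissertation's single degree-of-freedom model: the controller lowers the actuation voltage while the switch still closes too hard, and raises it if the switch fails to close.

```python
# Toy sketch of iterative learning for soft switch landing. The plant
# below is a hypothetical stand-in: impact velocity grows linearly with
# overdrive voltage, and the switch fails to close below a pull-in
# threshold. All constants are assumptions for illustration only.

V_PULL_IN = 30.0   # volts needed to guarantee closing (assumed)
V_MAX = 60.0

def plant(v_approach):
    """Stand-in switch model: returns (closed, impact_velocity_m_per_s)."""
    if v_approach < V_PULL_IN:
        return False, 0.0
    return True, 0.02 * (v_approach - V_PULL_IN)  # velocity rises with overdrive

def learn(v0=60.0, v_target=0.2, gain=0.5, iterations=15):
    """Iteratively shape the approach voltage from the closed/bounce outcome."""
    v = v0
    history = []
    for _ in range(iterations):
        closed, impact = plant(v)
        if not closed:
            v = min(V_MAX, v * 1.1)  # too gentle: raise the voltage
        elif impact > v_target:
            # Too hard: back off in proportion to the excess impact velocity.
            v = max(V_PULL_IN, v - gain * (impact - v_target) / 0.02)
        history.append((v, closed, impact))
    return history

final_v, closed, impact = learn()[-1]
```

Under this toy model the voltage converges geometrically toward the value at which the impact velocity just meets the 0.2 m/s target, mirroring the 5-10 iteration convergence reported in the abstract.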
Abstract:
OBJECTIVE: The aim of this systematic review was to assess the survival rates of short-span implant-supported cantilever fixed dental prostheses (ICFDPs) and the incidence of technical and biological complications after an observation period of at least 5 years. MATERIAL AND METHODS: An electronic MEDLINE search supplemented by manual searching was conducted to identify prospective or retrospective cohort studies reporting data of at least 5 years on ICFDPs. Five- and 10-year estimates for failure and complication rates were calculated using standard or random-effect Poisson regression analysis. RESULTS: The five studies eligible for the meta-analysis yielded an estimated 5- and 10-year ICFDP cumulative survival rate of 94.3% [95 percent confidence interval (95% CI): 84.1-98%] and 88.9% (95% CI: 70.8-96.1%), respectively. Five-year estimates for peri-implantitis were 5.4% (95% CI: 2-14.2%) and 9.4% (95% CI: 3.3-25.4%) at implant and prosthesis levels, respectively. Veneer fracture (5-year estimate: 10.3%; 95% CI: 3.9-26.6%) and screw loosening (5-year estimate: 8.2%; 95% CI: 3.9-17%) represented the most common complications, followed by loss of retention (5-year estimate: 5.7%; 95% CI: 1.9-16.5%) and abutment/screw fracture (5-year estimate: 2.1%; 95% CI: 0.9-5.1%). Implant fracture was rare (5-year estimate: 1.3%; 95% CI: 0.2-8.3%); no framework fracture was reported. Radiographic bone level changes did not yield statistically significant differences either at the prosthesis or at the implant levels when comparing ICFDPs with short-span implant-supported end-abutment fixed dental prostheses. CONCLUSIONS: ICFDPs represent a valid treatment modality; no detrimental effects can be expected on bone levels due to the presence of a cantilever extension per se.
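The review's 5- and 10-year estimates come from Poisson regression on event rates. The standard conversion from a constant event rate to cumulative survival, S(t) = exp(-rate * t), can be sketched as follows; the failure counts and exposure time below are hypothetical, not the review's pooled data.

```python
import math

def event_rate(events, exposure_years):
    """Crude failure rate per prosthesis-year (Poisson point estimate)."""
    return events / exposure_years

def cumulative_survival(rate, years):
    """Cumulative survival under a constant event rate: S(t) = exp(-rate * t)."""
    return math.exp(-rate * years)

# Hypothetical pooled data: 6 failures over 1000 prosthesis-years.
rate = event_rate(6, 1000.0)         # 0.006 failures per year
s5 = cumulative_survival(rate, 5)    # 5-year cumulative survival
s10 = cumulative_survival(rate, 10)  # 10-year cumulative survival
```

The random-effects variant used in the meta-analysis additionally models between-study heterogeneity in the rate; the exponential conversion of the pooled rate is the same.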
Abstract:
Classic cystic fibrosis (CF) is caused by two loss-of-function mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) gene, whereas patients with nonclassic CF have at least one copy of a mutant gene that retains partial function of the CFTR protein. In addition, there are several other phenotypes associated with CFTR gene mutations, such as idiopathic chronic pancreatitis. In CFTR-associated disorders and in nonclassic CF, often only one CFTR mutation or no CFTR mutations can be detected. In this study, we screened 23 patients with CFTR-associated disorders for CFTR mutations by complete gene testing and quantitative transcript analysis. Mutations were found in 10 patients. In cells from the respiratory epithelium, we detected aberrant splicing of CFTR mRNA in all investigated individuals. We observed a highly significant association between the presence of coding single-nucleotide polymorphisms (coding SNPs, or cSNPs) and increased skipping of exons 9 and 12. This association was found both in patients and in normal individuals carrying the same cSNPs. The cSNPs c.1540A>G, c.2694T>G, and c.4521G>A may have affected pre-mRNA splicing by changing regulatory sequence motifs of exonic splice enhancers, leading to lower amounts of normal transcripts. The analysis of CFTR exons indicated that less frequent and weak exonic splicing enhancer (ESE) motifs make exon 12 vulnerable to skipping. The number of splice variants in individuals with cSNPs was similar to previously reported values for the T5 allele, suggesting that cSNPs may enhance susceptibility to CFTR-related diseases. In addition, cSNPs may be responsible for variation in the phenotypic expression of CFTR mutations. Quantitative approaches rather than conventional genomic analysis are required to interpret the role of cSNPs.
Abstract:
An algorithm based on ‘vertex priority values’ has been proposed to uniquely sequence and represent the connectivity matrix of chemical structures of cyclic/acyclic functionalized achiral hydrocarbons and their derivatives. In this method, ‘vertex priority values’ are assigned in terms of atomic weights, subgraph lengths, loops, and heteroatom contents. The terminal vertices are then considered once the sequencing of the core vertices is complete. This approach provides a multilayered connectivity graph, which can be used to compare two or more structures, or parts thereof, for any given purpose. Furthermore, the basic vertex connection tables generated here are useful in computing characteristic matrices/topological indices and automorphism groups, and in storing, sorting and retrieving chemical structures from databases.
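The general idea of priority-driven vertex sequencing can be sketched as follows. This is an illustrative simplification, not the paper's exact rules: each atom receives a priority tuple built from atomic weight, degree and ring membership, and core (non-terminal) vertices are sequenced before terminal ones.

```python
# Illustrative sketch of priority-based vertex sequencing (simplified,
# not the published algorithm): atoms are ordered by a priority tuple,
# with core vertices sequenced before terminal vertices.

ATOMIC_WEIGHT = {"C": 12.011, "N": 14.007, "O": 15.999, "H": 1.008}

def sequence_vertices(atoms, bonds, in_ring):
    """atoms: {id: element}; bonds: set of frozenset pairs; in_ring: set of ids."""
    degree = {a: sum(1 for b in bonds if a in b) for a in atoms}

    def sort_key(a):
        # Higher atomic weight, higher degree, ring membership first;
        # the vertex id is the final deterministic tie-breaker.
        return (-ATOMIC_WEIGHT[atoms[a]], -degree[a], -(a in in_ring), a)

    core = sorted((a for a in atoms if degree[a] > 1), key=sort_key)
    terminal = sorted((a for a in atoms if degree[a] <= 1), key=sort_key)
    return core + terminal

# Toy acetate-like fragment: C1 bonded to C2, which carries O3 and O4.
atoms = {1: "C", 2: "C", 3: "O", 4: "O"}
bonds = {frozenset({1, 2}), frozenset({2, 3}), frozenset({2, 4})}
order = sequence_vertices(atoms, bonds, in_ring=set())
```

The resulting sequence places the single core carbon first, then the heavier terminal oxygens, then the terminal carbon, giving a canonical order from which a connection table could be written row by row.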
Abstract:
Waterbirds are often observed to move between different wintering sites within the same winter—for example, in response to food availability or weather conditions. Within-winter movements may contribute to the spreading of diseases, such as avian influenza, outside the actual migration period. The Common Pochard Aythya ferina seems to be particularly sensitive to infection with the highly pathogenic avian influenza virus H5N1 and, consequently, could play an important role as vectors for the disease. We describe here the within-winter movements of Pochards in Europe in relation to topography, climate, sex and age. We analysed data provided by the Euring data bank on 201 individuals for which records from different locations from the same winter (December–February) were available. The distances and directions moved within the winter varied markedly between regions, which could be ascribed to the differing topography (coast lines, Alps). We found no significant differences in terms of distances and directions moved between the sexes and only weak indications of differences between the age classes. In Switzerland, juveniles moved in more westerly directions than adults. During relatively mild winters, winter harshness had no effect on the distances travelled, but in cold winters, a positive relationship was observed, a pattern possibly triggered by the freezing of lakes. Winter harshness did not influence the directions of the movement. About 41% (83/201) of the Pochards that were recovered at least 1 km from the ringing site had moved more than 200 km. A substantial number of birds moved between central/southern Europe and the north-western coast of mainland Europe, and between the north-western coast of mainland Europe and Great Britain, whereas no direct exchange between Great Britain and central/southern Europe was observed. 
Within-winter movements of Pochards seem to be a common phenomenon in all years and possibly occur as a response to the depletion of food resources. This high tendency to move could potentially contribute to the spread of bird-transmitted diseases outside the actual migration period.
Abstract:
OBJECTIVE The natural course of chronic hepatitis C varies widely. To improve the profiling of patients at risk of developing advanced liver disease, we assessed the relative contribution of factors for liver fibrosis progression in hepatitis C. DESIGN We analysed 1461 patients with chronic hepatitis C with an estimated date of infection and at least one liver biopsy. Risk factors for accelerated fibrosis progression rate (FPR), defined as ≥0.13 Metavir fibrosis units per year, were identified by logistic regression. Examined factors included age at infection, sex, route of infection, HCV genotype, body mass index (BMI), significant alcohol drinking (≥20 g/day for ≥5 years), HIV coinfection and diabetes. In a subgroup of 575 patients, we assessed the impact of single nucleotide polymorphisms previously associated with fibrosis progression in genome-wide association studies. Results were expressed as the attributable fraction (AF) of risk for accelerated FPR. RESULTS Age at infection (AF 28.7%), sex (AF 8.2%), route of infection (AF 16.5%) and HCV genotype (AF 7.9%) contributed to accelerated FPR in the Swiss Hepatitis C Cohort Study, whereas significant alcohol drinking, HIV coinfection, diabetes and BMI did not. In genotyped patients, variants at rs9380516 (TULP1), rs738409 (PNPLA3), rs4374383 (MERTK) (AF 19.2%) and rs910049 (major histocompatibility complex region) significantly added to the risk of accelerated FPR. Results were replicated in three additional independent cohorts, and a meta-analysis confirmed the role of age at infection, sex, route of infection, HCV genotype, rs738409, rs4374383 and rs910049 in accelerating FPR. CONCLUSIONS Most factors accelerating liver fibrosis progression in chronic hepatitis C are unmodifiable.
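The attributable-fraction summary used above can be illustrated with Levin's standard population attributable fraction, which combines exposure prevalence with relative risk. The prevalence and relative risk below are hypothetical, not the cohort's estimates.

```python
def attributable_fraction(prevalence, relative_risk):
    """Levin's population attributable fraction:
    AF = p * (RR - 1) / (1 + p * (RR - 1))."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical example: 40% of a cohort infected after age 30, with a
# relative risk of 2.5 for accelerated fibrosis progression.
af = attributable_fraction(0.40, 2.5)
```

The AF answers the question the abstract poses: what share of the accelerated-progression risk in the population would disappear if the factor were absent, which is why unmodifiable factors with large AFs dominate the conclusion.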
Abstract:
1. Positive interactions among plants can increase species richness by relaxing environmental filters and providing more heterogeneous environments. However, it is not known if facilitation could affect coexistence through other mechanisms. Most studies on plant coexistence focus on negative frequency-dependent mechanisms (decreasing the abundance of common species); here, we test if facilitation can enhance coexistence by giving species an advantage when rare. 2. To test our hypothesis, we used a global data set from drylands and alpine environments and measured the intensity of facilitation (based on co-occurrences with nurse plants) for 48 species present in at least 4 different sites and with a range of abundances in the field. We compared these results with the degree of facilitation experienced by species which are globally rare or common (according to the IUCN Red List), and with a larger data base including over 1200 co-occurrences of target species with their nurses. 3. Facilitation was stronger for rare species (i.e. those having lower local abundances or considered endangered by the IUCN) than for common species, and strongly decreased with the abundance of the facilitated species. These results hold after accounting for the distance of each species from its ecological optimum (i.e. the degree of functional stress it experiences). 4. Synthesis. Our results highlight that nurse plants not only increase the number of species able to colonize a given site, but may also promote species coexistence by preventing the local extinction of rare species. Our findings illustrate the role that nurse plants play in conserving endangered species and link the relationship between facilitation and diversity with coexistence theory. As such, they provide further mechanistic understanding on how facilitation maintains plant diversity.
Abstract:
The genetic structure and dynamics of hybrid zones provide crucial information for understanding the processes and mechanisms of evolutionary divergence and speciation. In general, higher levels of evolutionary divergence between taxa are more likely to be associated with reproductive isolation and may result in suppressed or strongly restricted hybridization. In this study, we examined two secondary contact zones between three deep evolutionary lineages in the common vole (Microtus arvalis). Differences in divergence times between the lineages can shed light on different stages of reproductive isolation and thus provide information on the ongoing speciation process in M. arvalis. We examined more than 800 individuals for mitochondrial (mtDNA), Y-chromosome and autosomal markers and used assignment and cline analysis methods to characterize the extent and direction of gene flow in the contact zones. Introgression of both autosomal and mtDNA markers in a relatively broad area of admixture indicates selectively neutral hybridization between the least-divergent lineages (Central and Eastern), without evidence for partial reproductive isolation. In contrast, a very narrow area of hybridization, shifts in marker clines and the quasi-absence of Y-chromosome introgression support a moving hybrid zone and unidirectional selection against male hybrids between the lineages with older divergence (Central and Western). Data from a replicate transect further support non-neutral processes in this hybrid zone and also suggest a role for landscape history in the movement and shaping of gene flow profiles.
Abstract:
BACKGROUND AND OBJECTIVE Connective tissue grafts are frequently applied, together with Emdogain®, for root coverage. However, it is unknown whether fibroblasts from the gingiva and from the palate respond similarly to Emdogain®. The aim of this study was therefore to evaluate the effect of Emdogain® on fibroblasts from palatal and gingival connective tissue using a genome-wide microarray approach. MATERIAL AND METHODS Human palatal and gingival fibroblasts were exposed to Emdogain® and RNA was subjected to microarray analysis, followed by gene ontology screening with Database for Annotation, Visualization and Integrated Discovery functional annotation clustering, Kyoto Encyclopedia of Genes and Genomes pathway analysis and the Search Tool for the Retrieval of Interacting Genes/Proteins functional protein association network. Microarray results were confirmed by quantitative RT-PCR analysis. RESULTS The transcription levels of 106 genes were up-/down-regulated by at least five-fold in both gingival and palatal fibroblasts upon exposure to Emdogain®. Gene ontology screening assigned the respective genes to 118 biological processes, six cellular components, eight molecular functions and five pathways. Among the striking patterns observed were the changing expression of ligands targeting the transforming growth factor-beta and gp130 receptor families, as well as the mesenchymal-epithelial transition. Moreover, Emdogain® caused changes in the expression of receptors for chemokines, lipids and hormones, and in that of transcription factors such as SMAD3, peroxisome proliferator-activated receptor gamma and those of the ETS family. CONCLUSION The present data suggest that Emdogain® causes substantial alterations in gene expression, with similar patterns observed in palatal and gingival fibroblasts.
Abstract:
AIMS We aimed to assess the prevalence and management of clinical familial hypercholesterolaemia (FH) among patients with acute coronary syndrome (ACS). METHODS AND RESULTS We studied 4778 patients with ACS from a multi-centre cohort study in Switzerland. Based on personal and familial history of premature cardiovascular disease and LDL-cholesterol levels, two validated algorithms for the diagnosis of clinical FH were used: the Dutch Lipid Clinic Network algorithm to assess possible (score 3-5 points) or probable/definite FH (>5 points), and the Simon Broome Register algorithm to assess possible FH. At the time of hospitalization for ACS, 1.6% had probable/definite FH [95% confidence interval (CI) 1.3-2.0%, n = 78] and 17.8% possible FH (95% CI 16.8-18.9%, n = 852), respectively, according to the Dutch Lipid Clinic algorithm. The Simon Broome algorithm identified 5.4% (95% CI 4.8-6.1%, n = 259) patients with possible FH. Among 1451 young patients with premature ACS, the Dutch Lipid Clinic algorithm identified 70 (4.8%, 95% CI 3.8-6.1%) patients with probable/definite FH, and 684 (47.1%, 95% CI 44.6-49.7%) patients with possible FH. Excluding patients with secondary causes of dyslipidaemia such as alcohol consumption, acute renal failure, or hyperglycaemia did not change the prevalence. One year after ACS, among 69 survivors with probable/definite FH and available follow-up information, 64.7% were using high-dose statins, 69.0% had decreased their LDL-cholesterol by at least 50%, and 4.6% had LDL-cholesterol ≤1.8 mmol/L. CONCLUSION A phenotypic diagnosis of possible FH is common in patients hospitalized with ACS, particularly among those with premature ACS. Optimizing long-term lipid treatment of patients with FH after ACS is required.
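The score-to-category mapping used above (possible FH at 3-5 points, probable/definite FH above 5) can be sketched as follows. Only the category cut-offs are taken from the abstract; the scoring function is an abbreviated sketch covering a few Dutch Lipid Clinic Network-style criteria, not the full published table.

```python
# Abbreviated sketch of a Dutch Lipid Clinic Network-style score.
# Category thresholds (3-5 = possible, >5 = probable/definite) come
# from the abstract; the item scoring below is a simplified subset.

def dlcn_category(score):
    if score > 5:
        return "probable/definite FH"
    if score >= 3:
        return "possible FH"
    return "unlikely FH"

def dlcn_score(ldl_mmol_l, premature_cad, family_history_premature_cvd):
    score = 0
    # LDL-cholesterol bands (simplified).
    if ldl_mmol_l >= 8.5:
        score += 8
    elif ldl_mmol_l >= 6.5:
        score += 5
    elif ldl_mmol_l >= 5.0:
        score += 3
    elif ldl_mmol_l >= 4.0:
        score += 1
    if premature_cad:
        score += 2   # personal history of premature coronary disease
    if family_history_premature_cvd:
        score += 1   # first-degree relative with premature CVD
    return score

# A patient with LDL 5.6 mmol/L, premature CAD and a positive family
# history scores 3 + 2 + 1 = 6 points.
category = dlcn_category(dlcn_score(5.6, True, True))
```

In the study, applying such point-based criteria at admission is what allows a phenotypic FH prevalence to be estimated without genetic testing.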
Abstract:
BACKGROUND AND AIMS The Barcelona Clinic Liver Cancer (BCLC) staging system is the algorithm most widely used to manage patients with hepatocellular carcinoma (HCC). We aimed to investigate the extent to which the BCLC recommendations effectively guide clinical practice and assess the reasons for any deviation from the recommendations. MATERIAL AND METHODS The first-line treatments assigned to patients included in the prospective Bern HCC cohort were analyzed. RESULTS Among 223 patients included in the cohort, 116 were not treated according to the BCLC algorithm. Eighty percent of the patients in BCLC stage 0 (very early HCC) and 60% of the patients in BCLC stage A (early HCC) received recommended curative treatment. Only 29% of the BCLC stage B patients (intermediate HCC) and 33% of the BCLC stage C patients (advanced HCC) were treated according to the algorithm. Eighty-nine percent of the BCLC stage D patients (terminal HCC) were treated with best supportive care, as recommended. In 98 patients (44%) the performance status was disregarded in the stage assignment. CONCLUSION The management of HCC in clinical practice frequently deviates from the BCLC recommendations. Most of the curative therapy options, which have well-defined selection criteria, were allocated according to the recommendations, while the majority of the palliative therapy options were assigned to patients with tumor stages not aligned with the recommendations. The only parameter which is subjective in the algorithm, the performance status, is also the least respected.
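The adherence analysis above amounts to comparing each patient's assigned treatment intent against a stage-indexed lookup. A minimal sketch follows; the treatment-intent mapping (curative for stages 0/A, palliative for B/C, best supportive care for D) reflects the abstract, while the specific modality names are the commonly cited ones and are illustrative here.

```python
# Minimal lookup of BCLC stage -> recommended treatment intent, as
# described in the abstract. Modality names are illustrative.

BCLC_RECOMMENDATION = {
    "0": ("curative", "resection/ablation/transplantation"),
    "A": ("curative", "resection/ablation/transplantation"),
    "B": ("palliative", "transarterial chemoembolization"),
    "C": ("palliative", "systemic therapy"),
    "D": ("supportive", "best supportive care"),
}

def adheres_to_bclc(stage, given_intent):
    """True if the treatment intent matches the stage recommendation."""
    recommended_intent, _ = BCLC_RECOMMENDATION[stage]
    return recommended_intent == given_intent

ok = adheres_to_bclc("A", "curative")          # early HCC treated curatively
deviation = not adheres_to_bclc("C", "curative")  # curative intent at stage C deviates
```

This mirrors the cohort's finding: curative allocations at stages 0/A mostly matched the lookup, whereas stage B/C assignments frequently did not.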
Abstract:
Many attempts have already been made to detect exomoons around transiting exoplanets, but the first confirmed discovery is still pending. The experience gathered so far allows us to better optimize future space telescopes for this challenge already during the development phase. In this paper we focus on the forthcoming CHaracterising ExOPlanet Satellite (CHEOPS), describing an optimized decision algorithm with step-by-step evaluation and calculating the number of transits required for an exomoon detection for various planet-moon configurations observable by CHEOPS. We explore the most efficient way for such an observation to minimize the cost in observing time. Our study is based on PTV (photocentric transit timing variation) observations in simulated CHEOPS data, but the recipe does not depend on the actual detection method and can be substituted with, e.g., the photodynamical method in later applications. Using state-of-the-art simulations of CHEOPS data, we analyzed transit observation sets for different star-planet-moon configurations and performed a bootstrap analysis to determine their detection statistics. We found that the detection limit is around an Earth-sized moon. In the case of favorable spatial configurations, i.e. systems with at least a large moon and a Neptune-sized planet, an 80% detection chance requires at least 5-6 transit observations on average. There is also a nonzero chance for smaller moons, but the detection statistics deteriorate rapidly while the number of necessary transit measurements increases quickly. After the CoRoT and Kepler spacecraft, CHEOPS will be the next dedicated space telescope to observe exoplanetary transits and characterize systems with known Doppler planets.
Although it has a smaller aperture than Kepler (the ratio of the mirror diameters is about 1/3) and carries a CCD similar to Kepler's, it will observe brighter stars and operate at a higher sampling rate; therefore, the detection limit for an exomoon can be the same or better, which will make CHEOPS a competitive instrument in the quest for exomoons.
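The bootstrap step above can be sketched in miniature: resample sets of n transits from simulated per-transit signal amplitudes and count how often the averaged signal clears a detection threshold. The amplitudes, threshold and units below are invented for illustration; the paper's actual statistic is the PTV fit, not a simple mean.

```python
import random

# Minimal sketch of a bootstrap detection-rate estimate over simulated
# per-transit signal amplitudes (hypothetical numbers, arbitrary units).

def detection_probability(samples, n_transits, threshold,
                          n_boot=2000, seed=42):
    """Fraction of resampled n-transit sets whose mean clears threshold."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = 0
    for _ in range(n_boot):
        draw = [rng.choice(samples) for _ in range(n_transits)]
        if sum(draw) / n_transits >= threshold:
            hits += 1
    return hits / n_boot

signal = [1.2, 0.8, 1.5, 0.9, 1.1, 0.3, 1.4, 0.7]  # per-transit amplitudes
p5 = detection_probability(signal, 5, threshold=0.8)  # averaging 5 transits
p1 = detection_probability(signal, 1, threshold=0.8)  # a single transit
```

Because averaging over more transits concentrates the statistic around its mean, the detection probability grows with the number of observed transits, which is exactly the trade-off the paper quantifies against observing-time cost.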
Abstract:
SNP genotyping arrays have been developed to characterize single-nucleotide polymorphisms (SNPs) and DNA copy number variations (CNVs). The quality of the inferences about copy number can be affected by many factors, including batch effects, DNA sample preparation, signal processing, and analytical approach. Nonparametric and model-based statistical algorithms have been developed to detect CNVs from SNP genotyping data. However, these algorithms lack specificity to detect small CNVs due to the high false positive rate when calling CNVs based on the intensity values. Association tests based on detected CNVs therefore lack power even if the CNVs affecting disease risk are common. In this research, by combining an existing Hidden Markov Model (HMM) and the logistic regression model, a new genome-wide logistic regression algorithm was developed to detect CNV associations with diseases. We showed that the new algorithm is more sensitive and can be more powerful in detecting CNV associations with diseases than an existing popular algorithm, especially when the CNV association signal is weak and a limited number of SNPs are located in the CNV.
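The association step, regressing disease status on inferred copy number, can be sketched as follows. This is a hand-rolled logistic regression on a tiny synthetic cohort, not the proposed genome-wide algorithm, and it assumes the HMM copy-number calls are already available.

```python
import math

# Minimal sketch of the association step: once copy-number states have
# been inferred (e.g., by an HMM), regress disease status on copy number
# with a tiny hand-rolled logistic regression. Data are synthetic.

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Gradient ascent on the log-likelihood of y ~ sigmoid(b0 + b1*x)."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

# Synthetic cohort: copy number (2 = normal, 1 = deletion) vs disease.
copy_number = [2, 2, 2, 2, 1, 1, 1, 2, 1, 1]
disease =     [0, 0, 0, 1, 1, 1, 0, 0, 1, 1]
b0, b1 = fit_logistic(copy_number, disease)
```

In this toy cohort deletions co-occur with disease, so the fitted slope on copy number is negative; a Wald or likelihood-ratio test on that slope would then give the per-locus association p-value.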
Abstract:
The effectiveness of the Anisotropic Analytical Algorithm (AAA) implemented in the Eclipse treatment planning system (TPS) was evaluated using the Radiological Physics Center anthropomorphic lung phantom, with both flattened and flattening-filter-free high-energy beams. Radiation treatment plans were developed following the Radiation Therapy Oncology Group and Radiological Physics Center guidelines for lung treatment using Stereotactic Body Radiation Therapy (SBRT). The tumor was covered such that at least 95% of the Planning Target Volume (PTV) received 100% of the prescribed dose, while ensuring that normal tissue constraints were met. Calculated doses were exported from the Eclipse TPS and compared with experimental data measured using thermoluminescent detectors (TLDs) and radiochromic films placed inside the phantom. The results demonstrate that the AAA superposition-convolution algorithm is able to calculate SBRT treatment plans with all clinically used photon beams in the range from 6 MV to 18 MV. The measured dose distribution showed good agreement with the calculated distribution using clinically acceptable criteria of ±5% dose or 3 mm distance to agreement. These results show that, in a heterogeneous environment, a 3D pencil-beam superposition-convolution algorithm with Monte Carlo pre-calculated scatter kernels, such as AAA, is able to reliably calculate dose, accounting for the increased lateral scattering due to the loss of electronic equilibrium in low-density media. The data for high-energy plans (15 MV and 18 MV) showed very good tumor coverage, in contrast to findings by other investigators for less sophisticated dose calculation algorithms, which demonstrated lower-than-expected tumor doses and generally worse tumor coverage for high-energy plans compared with 6 MV plans.
This demonstrates that the modern superposition-convolution AAA algorithm is a significant improvement over previous algorithms and is able to calculate doses accurately for SBRT treatment plans in the highly heterogeneous environment of the thorax, for both lower (≤12 MV) and higher (>12 MV) beam energies.
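The ±5% dose / 3 mm distance-to-agreement criterion can be sketched per point in one dimension: a point passes if the calculated dose at the same location agrees within 5%, or if some calculated point within 3 mm matches the measured dose. The profiles and grid spacing below are made up, and real comparisons interpolate spatially rather than checking grid points only.

```python
# Minimal 1-D sketch of the +/-5% dose or 3 mm distance-to-agreement
# (DTA) acceptance test for measured vs. calculated dose profiles.
# Profile values and grid spacing are hypothetical.

def point_passes(i, measured, calculated, spacing_mm,
                 dose_tol=0.05, dta_mm=3.0):
    m = measured[i]
    # Dose-difference test at the same location.
    if abs(calculated[i] - m) <= dose_tol * m:
        return True
    # DTA test: does any calculated point within dta_mm match the dose?
    reach = int(dta_mm // spacing_mm)
    lo, hi = max(0, i - reach), min(len(calculated) - 1, i + reach)
    return any(abs(calculated[j] - m) <= dose_tol * m
               for j in range(lo, hi + 1))

measured   = [1.00, 1.02, 0.95, 0.50, 0.10]   # relative dose, 1 mm grid
calculated = [1.01, 1.01, 1.01, 0.52, 0.25]
passes = [point_passes(i, measured, calculated, spacing_mm=1.0)
          for i in range(len(measured))]
```

Points in the steep penumbra can fail the dose test yet pass via DTA (a nearby calculated point has the right dose), which is exactly why the combined criterion is used in heterogeneous phantoms.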
Abstract:
Energy management has always been recognized as a challenge in mobile systems, especially in modern OS-based mobile systems where multitasking is widely supported. Nowadays, it is common for a mobile system user to run multiple applications simultaneously while having a target battery lifetime in mind for a specific application. Traditional OS-level power management (PM) policies make their best effort to save energy under performance constraints, but fail to guarantee a target lifetime, leaving the painful trade-off between the total performance of applications and the target lifetime to the user. This thesis provides a new way to deal with the problem. It is advocated that a strong energy-aware PM scheme should first guarantee a user-specified battery lifetime to a target application by restricting the average power of less important applications and, beyond that, maximize the total performance of applications without harming the lifetime guarantee. To support this, energy, instead of CPU time or transmission bandwidth, should be globally managed by the OS as a first-class resource. As the first stage of a complete PM scheme, this thesis presents energy-based fair queuing scheduling, a novel class of energy-aware scheduling algorithms which, in combination with a mechanism for restricting the battery discharge rate, systematically manage energy as a first-class resource with the objective of guaranteeing a user-specified battery lifetime for a target application in OS-based mobile systems. Energy-based fair queuing is a cross-application of traditional fair queuing to the energy management domain. It assigns a power share to each task and manages energy by serving energy to tasks in proportion to their assigned power shares. The proportional energy use establishes a proportional share of the system power among tasks, which guarantees a minimum power for each task and thus avoids energy starvation of any task.
Energy-based fair queuing treats all tasks equally as one type and supports periodic time-sensitive tasks by allocating each of them a share of the system power that is adequate to meet the highest energy demand in all periods. However, an overly conservative power share is usually required to guarantee that all time constraints are met. To provide more effective and flexible support for various types of time-sensitive tasks in general-purpose operating systems, an extra real-time-friendly mechanism is introduced to combine priority-based scheduling with energy-based fair queuing. Since a method is available to control the maximum time a time-sensitive task can run with priority, power control and time-constraint meeting can be flexibly traded off. A SystemC-based test bench was designed to assess the algorithms. Simulation results show the success of energy-based fair queuing in achieving proportional energy use, meeting time constraints, and striking a proper trade-off between them.
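The core of energy-based fair queuing can be sketched with the classic fair-queuing trick transposed to energy: always serve the task with the smallest normalized energy (energy served divided by its power share), the energy analogue of virtual finish time. The shares, quantum size and budget below are illustrative, not the thesis's implementation.

```python
import heapq

# Minimal sketch of energy-based fair queuing: each task holds a power
# share, and the scheduler serves energy quanta to the task with the
# smallest normalized energy (energy served / share). Shares, quantum
# and budget are illustrative.

def schedule(shares, energy_budget, quantum=1.0):
    """Serve energy in fixed quanta until the budget is spent.
    Returns the total energy served to each task."""
    served = {task: 0.0 for task in shares}
    heap = [(0.0, task) for task in shares]  # (normalized energy, task)
    heapq.heapify(heap)
    remaining = energy_budget
    while remaining >= quantum:
        _, task = heapq.heappop(heap)
        served[task] += quantum
        remaining -= quantum
        heapq.heappush(heap, (served[task] / shares[task], task))
    return served

# Three tasks with power shares 3:2:1 sharing a 60 J energy budget.
served = schedule({"video": 3.0, "sync": 2.0, "logger": 1.0}, 60.0)
```

Each task ends up with energy proportional to its share (30/20/10 J here), so even the smallest share receives a guaranteed minimum power rather than being starved by greedier tasks, which is the starvation-avoidance property the abstract claims.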