874 results for "Monitoring the quality and safety of the health system"
Abstract:
In 58 newborn infants, a new iridium oxide sensor was evaluated for transcutaneous carbon dioxide (tcPCO2) monitoring at 42 °C with a prolonged fixation time of 24 hours. The correlation of tcPCO2 (y; mm Hg) vs PaCO2 (x; mm Hg) for 586 paired values was y = 4.6 + 1.45x; r = .89; syx = 6.1 mm Hg. The correlation was not influenced by the duration of fixation. The transcutaneous sensor detected hypocapnia (PaCO2 less than 35 mm Hg) in 74% and hypercapnia (PaCO2 greater than 45 mm Hg) in 74% of all cases. After 24 hours, calibration shifts were less than 4 mm Hg in 90% of the measuring periods. In 86% of the infants no skin changes were observed; 12% showed transient skin erythema, and 2% a blister that disappeared without scarring. In newborn infants with normal blood pressure, continuous tcPCO2 monitoring at 42 °C can be extended for up to 24 hours without loss of reliability or increased risk of skin burns.
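For illustration (derived purely from the fitted line above, not additional study data), the regression can be inverted to back-estimate the arterial value from a transcutaneous reading, with syx = 6.1 mm Hg as the single-measurement scatter:
\[
\hat{x} = \frac{y - 4.6}{1.45}, \qquad y = 62.6\ \text{mm Hg} \;\Rightarrow\; \hat{x} = \frac{62.6 - 4.6}{1.45} = 40\ \text{mm Hg}.
\]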
Abstract:
Background/Purpose: The primary treatment goals for gouty arthritis (GA) are rapid relief of pain and inflammation during acute attacks, and long-term hyperuricemia management. A post-hoc analysis of 2 pivotal trials was performed to assess the efficacy and safety of canakinumab (CAN), a fully human monoclonal anti-IL-1β antibody, vs triamcinolone acetonide (TA) in GA patients unable to use NSAIDs and colchicine, and who were on stable urate-lowering therapy (ULT) or unable to use ULT. Methods: In these 12-week, randomized, multicenter, double-blind, double-dummy, active-controlled studies (β-RELIEVED and β-RELIEVED II), patients had to have frequent attacks (≥3 attacks in the previous year) meeting the preliminary 1977 ACR criteria for GA, and were unresponsive, intolerant, or contraindicated to NSAIDs and/or colchicine; if on ULT, ULT was stable. Patients were randomized during an acute attack to single-dose CAN 150 mg s.c. or TA 40 mg i.m. and were redosed "on demand" for each new attack. Patients completing the core studies were enrolled into blinded 12-week extension studies to further investigate on-demand use of CAN vs TA for new attacks. The subpopulation selected for this post-hoc analysis was (a) unable to use NSAIDs and colchicine due to contraindication, intolerance, or lack of efficacy for these drugs, and (b) currently on ULT, or with contraindication to or previous failure of ULT, as determined by investigators. The subpopulation comprised 101 patients (51 CAN; 50 TA) out of 454 total. Results: Co-morbidities, including hypertension (56%), obesity (56%), diabetes (18%), and ischemic heart disease (13%), were reported in 90% of this subpopulation. Pain intensity (VAS 100 mm scale) was comparable between the CAN and TA treatment groups at baseline (least-squares [LS] mean 74.6 and 74.4 mm, respectively). A significantly lower pain score was reported with CAN vs TA at 72 hours post dose (1st co-primary endpoint on baseline flare; LS mean, 23.5 vs 33.6 mm; difference −10.2 mm; 95% CI, −19.9 to −0.4; P=0.0208 [1-sided]). CAN significantly reduced the risk of a first new attack by 61% vs TA (HR 0.39; 95% CI, 0.17-0.91; P=0.0151 [1-sided]) over the first 12 weeks (2nd co-primary endpoint), and by 61% vs TA (HR 0.39; 95% CI, 0.19-0.79; P=0.0047 [1-sided]) over 24 weeks. Serum urate levels increased for CAN vs TA, with mean change from baseline reaching a maximum of +0.7 ± 2.0 vs −0.1 ± 1.8 mg/dL at 8 weeks, and +0.3 ± 2.0 vs −0.2 ± 1.4 mg/dL at end of study (all patients had a GA attack at baseline). Adverse events (AEs) were reported in 33 (66%) CAN and 24 (47.1%) TA patients. Infections and infestations were the most common AEs, reported in 10 (20%) and 5 (10%) patients treated with CAN and TA, respectively. The incidence of SAEs was comparable between the CAN (gastritis, gastroenteritis, chronic renal failure) and TA (aortic valve incompetence, cardiomyopathy, aortic stenosis, diarrhoea, nausea, vomiting, bicuspid aortic valve) groups (2 [4.0%] vs 2 [3.9%]). Conclusion: CAN provided superior pain relief and reduced the risk of new attacks in highly comorbid GA patients unable to use NSAIDs and colchicine who were on stable ULT or unable to use ULT. The safety profile in this post-hoc subpopulation was consistent with that of the overall β-RELIEVED and β-RELIEVED II population.
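The quoted 61% follows directly from the hazard ratio by simple arithmetic (no additional data involved):
\[
\text{relative risk reduction} = 1 - \text{HR} = 1 - 0.39 = 0.61 = 61\%.
\]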
Abstract:
OBJECTIVE: To assess the survival benefit and safety profile of low-dose (850 mg/kg) and high-dose (1350 mg/kg) phospholipid emulsion vs. placebo administered as a continuous 3-day infusion in patients with confirmed or suspected Gram-negative severe sepsis. Preclinical and ex vivo studies show that lipoproteins bind and neutralize endotoxin, and experimental animal studies demonstrate protection from septic death when lipoproteins are administered. Endotoxin neutralization correlates with the amount of phospholipid in the lipoprotein particles. DESIGN: A three-arm, randomized, blinded, placebo-controlled trial. SETTING: Conducted at 235 centers worldwide between September 2004 and April 2006. PATIENTS: A total of 1379 patients participated in the study: 598 received low-dose phospholipid emulsion and 599 received placebo. The high-dose phospholipid emulsion arm, which included 182 patients, was stopped on the recommendation of the Independent Data Monitoring Committee due to an increase in life-threatening serious adverse events at the fourth interim analysis. MEASUREMENTS AND MAIN RESULTS: The primary endpoints were 28-day all-cause mortality and new-onset organ failure. There was no significant treatment benefit for low- or high-dose phospholipid emulsion vs. placebo in 28-day all-cause mortality, with rates of 25.8% (p = .329), 31.3% (p = .879), and 26.9%, respectively. The rate of new-onset organ failure was not statistically different among groups, at 26.3%, 31.3%, and 20.4% with low-dose phospholipid emulsion, high-dose phospholipid emulsion, and placebo, respectively (one-sided p = .992, low vs. placebo; p = .999, high vs. placebo). Of the subjects treated, 45% had microbiologically confirmed Gram-negative infections. Maximal changes in mean hemoglobin levels were reached on day 10 (-1.04 g/dL) and day 5 (-1.36 g/dL) with low- and high-dose phospholipid emulsion, respectively, and on day 14 (-0.82 g/dL) with placebo. CONCLUSIONS: Treatment with phospholipid emulsion did not reduce 28-day all-cause mortality or the onset of new organ failure in patients with suspected or confirmed Gram-negative severe sepsis.
Abstract:
Background: The long-term efficacy and safety of aclidinium bromide, a novel long-acting muscarinic antagonist, were investigated in patients with moderate to severe chronic obstructive pulmonary disease (COPD). Methods: In two double-blind, 52-week studies, ACCLAIM/COPD I (n = 843) and II (n = 804), patients were randomised to inhaled aclidinium 200 μg or placebo once daily. Patients were required to have a post-bronchodilator forced expiratory volume in 1 second (FEV1)/forced vital capacity ratio of ≤70% and FEV1 <80% of the predicted value. The primary endpoint was trough FEV1 at 12 and 28 weeks. Secondary endpoints were health status measured by the St George's Respiratory Questionnaire (SGRQ) and time to first moderate or severe COPD exacerbation. Results: At 12 and 28 weeks, aclidinium improved trough FEV1 versus placebo in ACCLAIM/COPD I (by 61 and 67 mL; both p < 0.001) and ACCLAIM/COPD II (by 63 and 59 mL; both p < 0.001). More patients had an SGRQ improvement ≥4 units at 52 weeks with aclidinium versus placebo in ACCLAIM/COPD I (48.1% versus 39.5%; p = 0.025) and ACCLAIM/COPD II (39.0% versus 32.8%; p = 0.074). The time to first exacerbation was significantly delayed by aclidinium in ACCLAIM/COPD II (hazard ratio [HR] 0.7; 95% confidence interval [CI] 0.55 to 0.92; p = 0.01), but not in ACCLAIM/COPD I (HR 1.0; 95% CI 0.72 to 1.33; p = 0.9). Adverse events were minor in both studies. Conclusion: Aclidinium is effective and well tolerated in patients with moderate to severe COPD. Trial registration: ClinicalTrials.gov: NCT00363896 (ACCLAIM/COPD I) and NCT00358436 (ACCLAIM/COPD II).
Abstract:
The subject of this thesis is the development of a gas chromatography (GC) system for non-methane hydrocarbons (NMHCs) and the measurement of samples within the project CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container, www.caribic-atmospheric.com). Air samples collected at cruising altitude from the upper troposphere and lowermost stratosphere contain hydrocarbons at low levels (ppt range), which imposes substantial demands on detection limits. Full automation made it possible to maintain constant conditions during sample processing and analysis; it also allows overnight operation, thus saving time. Gas chromatography with flame ionization detection (FID) and a dual-column approach enables simultaneous detection with an almost equal per-carbon-atom response for all hydrocarbons except ethyne. The first part of this thesis presents technical descriptions of the individual parts of the analytical system; apart from the sample treatment and calibration procedures, the sample collector is described. The second part deals with the analytical performance of the GC system by discussing the tests that were made. Finally, results for the measurement flights are assessed in terms of data quality, and two flights are discussed in detail. Analytical performance is characterized by detection limits and uncertainties for each compound, by tests of calibration-mixture conditioning and of the carbon dioxide trap to determine their influence on the analyses, and finally by comparing the responses of calibrated substances over the period when the flight analyses were made. Comparison of both systems shows good agreement. However, because the capacity of the CO2 trap was insufficient, carbon dioxide broke through and suppressed the signal of one column so strongly that its results appeared unreliable. Plausibility tests for the internal consistency of the data sets are based on common patterns exhibited by tropospheric NMHCs. These tests show that samples from the first flights do not comply with the expected pattern. Additionally, detected alkene artefacts suggest potential problems with storage or contamination within all measurement flights. The last two flights, #130-133 and #166-169, comply with the tests and are therefore analyzed in detail. Samples were analyzed in terms of their origin (troposphere vs. stratosphere, backward trajectories) and their aging (NMHC ratios), and detected plumes were compared with the chemical signatures of Asian outflows. In the last chapter, future development of the presented system, with a focus on separation, is outlined. An extensive appendix documents all important aspects of the dissertation, from a theoretical introduction through illustration of the sample treatment to overview diagrams for the measured flights.
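For context, ratio-based NMHC aging analyses of this kind conventionally exploit the sequential removal of two hydrocarbons A and B by the OH radical at different rate constants k_A > k_B; a sketch of that standard relation (the thesis itself may use a different formulation):
\[
\ln\frac{[A]}{[B]} \;=\; \ln\frac{[A]_0}{[B]_0} \;-\; \left(k_A - k_B\right)[\mathrm{OH}]\,t
\]
The measured ratio then yields the OH exposure [OH]·t, a photochemical age, provided the emission ratio [A]_0/[B]_0 is known.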
Abstract:
BACKGROUND: Despite trials demonstrating its efficacy, many physicians harbor concerns regarding the use of natalizumab in the treatment of patients with refractory Crohn's disease (CD). The purpose of this study was to perform a descriptive analysis of a series of CD patients not currently enrolled in a clinical trial. METHODS: A retrospective case review of patients treated with natalizumab at 6 sites in Massachusetts: Boston Medical Center, Beth Israel Deaconess Medical Center, Brigham & Women's Hospital, Lahey Clinic, Massachusetts General Hospital, and UMass Medical Center. RESULTS: Data on 69 CD patients on natalizumab were collected. At the start of treatment, disease duration was 12 years. A high proportion of patients were women (68%); 65% presented with perianal disease and 14% with upper gastrointestinal tract involvement. Prior therapies included steroids (96%), thiopurines (94%), antibiotics (74%), and methotrexate (58%); 81% had failed at least two anti-tumor necrosis factor agents. Sixty-nine percent (44 of 64 patients) of those with available medical evaluation had a partial or complete clinical response. Loss of response was 13% after an average of 1 year of treatment. Adverse events were infusion reactions, headaches, fever, and infections. No case of progressive multifocal leukoencephalopathy was observed. CONCLUSIONS: In our clinical experience outside the context of a clinical trial, natalizumab is largely reserved for CD patients with extensive ileocolonic disease who have failed conventional immunosuppressants and at least 2 anti-tumor necrosis factor agents. The drug is, however, well tolerated and offers significant clinical improvement for more than a year in one-third of these difficult-to-treat CD patients.
Abstract:
INTRODUCTION: The pentasaccharide fondaparinux is widely approved for the prophylaxis and treatment of thromboembolic diseases and the therapy of acute coronary syndrome. It is also used off-label in patients with acute, suspected, or antecedent heparin-induced thrombocytopenia (HIT). The aim of this prospective observational cohort study was to document the prescription practice, tolerance, and therapeutic safety of fondaparinux in a representative mixed German single-centre patient cohort. PATIENTS AND METHODS: Between 09/2008 and 04/2009, 231 consecutive patients treated with fondaparinux were enrolled. Medical data were obtained from patients' records. The patients were clinically screened for thrombosis (Wells score), sequelae of HIT (4T's score), and bleeding complications (ISTH criteria) and subjected to further assessment (i.e., sonography, HIT diagnostics) if necessary. The mortality rate was assessed 30 days after the start of therapy. RESULTS: Overall, 153/231 patients had a prophylactic, 74/231 a therapeutic, and 4/231 a successive prophylactic/therapeutic indication. In 11/231 patients fondaparinux was used due to suspected/antecedent HIT, and in 5/231 patients due to a previous cutaneous delayed-type hypersensitivity to heparins. Other indications were rare. Three new/progressive thromboses were detected. No cases of HIT, major bleeding, or fatalities occurred. CONCLUSIONS: Fondaparinux was well tolerated and safe in prophylaxis and therapy; prescriptions mostly followed the current approval guidelines and were rarely related to HIT-associated indications (<5% of prescriptions), in contrast to previous study results from the U.S. (>94% of prescriptions HIT-associated). A trend towards individualised fondaparinux use based on the compound's inherent properties and the patients' risk profiles (i.e., antecedent HIT, bone fractures, heparin allergy) was observed.
Abstract:
OBJECTIVES: The purpose of this study was to compare the 2-year safety and effectiveness of new- versus early-generation drug-eluting stents (DES) according to the severity of coronary artery disease (CAD) as assessed by the SYNTAX (Synergy between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery) score. BACKGROUND: New-generation DES are considered the standard of care in patients with CAD undergoing percutaneous coronary intervention. However, there are few data investigating the effects of new- over early-generation DES according to the anatomic complexity of CAD. METHODS: Patient-level data from 4 contemporary, all-comers trials were pooled. The primary device-oriented clinical endpoint was the composite of cardiac death, myocardial infarction, or ischemia-driven target-lesion revascularization (TLR). The principal effectiveness and safety endpoints were TLR and definite stent thrombosis (ST), respectively. Adjusted hazard ratios (HRs) with 95% confidence intervals (CIs) were calculated at 2 years for overall comparisons, as well as stratified for patients with lower (SYNTAX score ≤11) and higher complexity (SYNTAX score >11). RESULTS: A total of 6,081 patients were included in the study. New-generation DES (n = 4,554) compared with early-generation DES (n = 1,527) reduced the primary endpoint (HR: 0.75 [95% CI: 0.63 to 0.89]; p = 0.001) without interaction (p = 0.219) between patients with lower (HR: 0.86 [95% CI: 0.64 to 1.16]; p = 0.322) versus higher CAD complexity (HR: 0.68 [95% CI: 0.54 to 0.85]; p = 0.001). In patients with SYNTAX score >11, new-generation DES significantly reduced TLR (HR: 0.36 [95% CI: 0.26 to 0.51]; p < 0.001) and definite ST (HR: 0.28 [95% CI: 0.15 to 0.55]; p < 0.001) to a greater extent than in the low-complexity group (TLR: p for interaction = 0.059; ST: p for interaction = 0.013). New-generation DES decreased the risk of cardiac mortality in patients with SYNTAX score >11 (HR: 0.45 [95% CI: 0.27 to 0.76]; p = 0.003) but not in patients with SYNTAX score ≤11 (p for interaction = 0.042). CONCLUSIONS: New-generation DES improve clinical outcomes compared with early-generation DES, with greater safety and effectiveness in patients with SYNTAX score >11.
Abstract:
Although recent guidelines recommend the combination of calcium channel blockers (CCBs) and thiazide(-like) diuretics, this combination is not widely used in clinical practice. The aim of this meta-analysis was to assess the efficacy and safety of this combination with regard to the following endpoints: all-cause and cardiovascular mortality, myocardial infarction, and stroke. Four studies with a total of 30,791 patients met the inclusion criteria. The CCB/thiazide(-like) diuretic combination was associated with a significant risk reduction for myocardial infarction (risk ratio [RR], 0.83; 95% confidence interval [CI], 0.73-0.95) and stroke (RR, 0.77; CI, 0.64-0.92) compared with other combinations, and was similarly effective to other combinations in reducing the risk of all-cause (RR, 0.89; CI, 0.75-1.06) and cardiovascular (RR, 0.89; CI, 0.71-1.10) mortality. Elderly patients with isolated systolic hypertension may particularly benefit from such a combination, since both drug classes have been shown to confer cerebrovascular protection.
Abstract:
OBJECTIVES: To prove non-inferiority of the first non-hormonal vaginal cream in Germany, Vagisan® Moisturising Cream (CREAM), compared to a non-hormonal vaginal gel (GEL) for relief of vulvovaginal atrophy (VVA) symptoms. METHOD: This was a 12-week, multicenter, open-label, prospective, randomized, two-period, cross-over phase III trial. The primary endpoint was the cumulative VVA subjective symptom score of the respective treatment period. Secondary endpoints were the assessment of single VVA subjective and objective symptoms, the VVA objective symptom score, vaginal pH, safety parameters, overall assessment of efficacy and tolerability, and evaluation of product properties. In total, 117 women were randomly allocated to one of the two treatments, each administered for 4 weeks; 92 women were included in the per-protocol analysis (primary analysis). RESULTS: Regarding VVA symptom relief, the results confirmed non-inferiority of CREAM compared to GEL and even indicated superiority of CREAM. Frequency and intensity of subjective symptoms and objective findings were clearly reduced, with CREAM showing better results than GEL. The mean VVA objective symptom score decreased significantly, and the improvement was significantly greater with CREAM. Vaginal pH decreased only following CREAM treatment. Tolerability was superior for CREAM: burning and itching, mostly rated as mild, occurred markedly less often with CREAM than with GEL. Overall satisfaction with treatment efficacy, tolerability, and most product properties was rated significantly superior for CREAM. CONCLUSIONS: Subjective and objective VVA symptoms were reliably and safely reduced by both non-hormonal topical products. However, the efficacy and tolerability of CREAM were shown to be superior to those of GEL.
Abstract:
Temperature is a first-class design concern in modern integrated circuits. The large increase in power densities associated with recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature negatively impacts several circuit parameters, such as gate delay, cooling budgets, reliability, and power consumption. To counteract these effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip based on run-time thermal information of the die surface provided by a monitoring system. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered by the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based on a mechanism that produces a pulse whose width varies with the temperature dependence of leakage currents. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold current of a transistor; the time the node takes to discharge is the width of the pulse. Since the pulse width depends exponentially on temperature, the conversion into a digital word is realized by a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The resulting structure is implemented in a 0.35 µm technology and is characterized by very small area, 10,250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous work at the time of first publication and, at the time of writing this thesis, still surpass all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration, with a 3σ error of 1.97 °C, adequate for DTM applications. The sensor is completely compatible with standard CMOS processes; this fact, together with its tiny area and power overhead, makes it especially suitable for integration into a DTM monitoring system with a collection of on-chip monitors distributed across the chip.
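To make the read-out chain concrete, here is a minimal behavioral sketch in Python. It is not the circuit: the exponential temperature coefficient, reference pulse width, and clock frequency are invented values, and the logarithmic counter is idealized as an exact log2. The point is only that the logarithm turns an exponentially temperature-dependent pulse width into a code that is linear in temperature.

```python
# Behavioral sketch of the leakage-based sensor read-out (illustrative only).
# Assumed model, not from the thesis: pulse width w(T) = W0*exp(-K*(T - T0)),
# i.e. leakage grows with temperature, so the discharge pulse shrinks.
import math

W0 = 1e-3      # pulse width at the reference temperature, seconds (assumed)
K = 0.05       # exponential temperature coefficient, 1/degC (assumed)
T0 = 25.0      # reference temperature, degC
F_CLK = 10e6   # clock feeding the counter, Hz (assumed)

def pulse_width(t_celsius):
    """Discharge time of the floating node: exponential in temperature."""
    return W0 * math.exp(-K * (t_celsius - T0))

def log_counter(width_s):
    """Idealized logarithmic counter: time-to-digital conversion plus
    linearization in a single step (hardware approximates the log2)."""
    return math.log2(width_s * F_CLK)

# log2(w(T)*F_CLK) = log2(W0*F_CLK) - K*(T - T0)/ln(2): linear in T.
slope = -K / math.log(2)
for t in range(0, 101, 10):
    code = log_counter(pulse_width(t))
    ideal = math.log2(W0 * F_CLK) + slope * (t - T0)
    assert abs(code - ideal) < 1e-9  # exact in this idealized model
    print(f"T = {t:3d} degC -> code = {code:7.3f}")
```

In hardware the log2 is realized by the counter structure itself rather than computed; the essential property is the same cancellation of the exponential dependence.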
The increased process variability of recent technology nodes jeopardizes the linearity of the first sensor. To overcome this problem, a new temperature-sensing technique is proposed. It again relies on the thermal dependence of the leakage currents used to discharge a floating node, but the result now comes from the ratio of two different measurements, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations and displays a linearity that comfortably meets DTM requirements: a 3σ error of 1.17 °C considering process variations and two-point calibration. The implementation of the sensing part of this new technique raises several design issues, such as the generation of a process-variation-independent voltage reference, which are analyzed in depth in the thesis. For the time-to-digital conversion, the same digitization structure as in the first sensor is employed. A completely new standard cell library targeting low area and power overhead was built from scratch to implement the digitization part. Putting all the pieces together yields a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; the latter figure outperforms all previous work. To support this claim, a thorough comparison with over 40 sensor proposals from the scientific literature is performed. Moving up to the system level, the third contribution centers on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous work in the literature targets maximizing the accuracy of the system with the minimum number of monitors. In contrast, new quality metrics are introduced apart from the number of sensors: power consumption, sampling frequency, interconnection costs, and the possibility of choosing among different monitor types. The model feeds a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power, and interconnection constraints, and a collection of monitor types; the algorithm yields the selected monitor type, the number of monitors, their positions, and the optimum sampling rate. The algorithm is validated with several case studies for the Alpha 21364 processor under different constraint configurations. Compared with previous work in the literature, the model presented here is the most complete. Finally, the last contribution targets the network level: given an allocated set of temperature monitors, the problem of connecting them in an area- and power-efficient way is addressed. The first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally just the extreme values) is of interest. To cover the new interconnection level, a single-wire monitoring network based on a time-domain signaling scheme is proposed; it significantly reduces both the switching activity on the wire and the power consumption of the network. The scheme codes the information in the time domain and directly delivers an ordered list of values, from maximum to minimum, to the controller. If the scheme is applied to monitors that employ time-to-digital conversion, digitization resources can be shared in both time and space, producing important savings in area and power. Two prototypes of complete monitoring systems are presented that significantly outperform previous work in terms of area and, especially, power consumption.
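The sensor-placement contribution also lends itself to a compact sketch. The following toy Python program keeps only the simulated-annealing skeleton described above (random moves over candidate sensor sets, Metropolis acceptance, geometric cooling) with a single quality metric; the thesis model additionally accounts for power, sampling frequency, monitor types, and interconnection costs, and the thermal map and all constants below are invented.

```python
# Toy sketch of monitor placement by simulated annealing (illustrative only).
# Cost = worst-case error of a nearest-sensor reconstruction of a synthetic
# thermal map, plus a penalty per sensor.
import math
import random

W, H = 16, 16  # die modeled as a 16x16 grid of thermal cells (assumed)

def thermal_map(x, y):
    """Synthetic steady-state map with two hot spots (illustrative only)."""
    t = 50.0
    for hx, hy, amp in [(3, 4, 30.0), (12, 11, 20.0)]:
        t += amp * math.exp(-((x - hx) ** 2 + (y - hy) ** 2) / 8.0)
    return t

TRUE = {(x, y): thermal_map(x, y) for x in range(W) for y in range(H)}

def cost(sensors, lam=2.0):
    """Max |error| when each cell is estimated by its nearest sensor's
    reading, plus lam per sensor to penalize large monitor counts."""
    def nearest(p):
        return min(sensors, key=lambda s: (s[0] - p[0]) ** 2 + (s[1] - p[1]) ** 2)
    err = max(abs(TRUE[p] - TRUE[nearest(p)]) for p in TRUE)
    return err + lam * len(sensors)

def neighbor(sensors):
    """Random move: relocate one sensor, add one, or drop one."""
    s = list(sensors)
    r = random.random()
    if r < 0.6 or len(s) == 1:
        s[random.randrange(len(s))] = (random.randrange(W), random.randrange(H))
    elif r < 0.8:
        s.append((random.randrange(W), random.randrange(H)))
    else:
        s.pop(random.randrange(len(s)))
    return s

random.seed(0)
state = [(random.randrange(W), random.randrange(H)) for _ in range(3)]
c, temp = cost(state), 20.0
for _ in range(5000):
    cand = neighbor(state)
    cc = cost(cand)
    # Metropolis rule: always accept improvements, sometimes uphill moves.
    if cc < c or random.random() < math.exp((c - cc) / temp):
        state, c = cand, cc
    temp *= 0.999  # geometric cooling schedule
print(f"{len(state)} sensors at {sorted(state)}, final cost = {c:.2f}")
```

Here the cost trades worst-case reconstruction error against sensor count; in a fuller model of the kind the thesis describes, each additional constraint simply contributes another term to the same cost function.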
Abstract:
Objective: To compare the effectiveness and safety of repeat treatment with hylan G-F 20 based on data from a randomized, controlled trial [Raynauld JP, Torrance GW, Band PA, Goldsmith CH, Tugwell P, Walker V, et al. A prospective, randomized, pragmatic, health outcomes trial evaluating the incorporation of hylan G-F 20 into the treatment paradigm for patients with knee osteoarthritis (Part 1 of 2): clinical results. Osteoarthritis Cartilage 2002;10:506-17]. The hypotheses tested were whether the single-course and repeat-course subgroups would be superior to appropriate care and not different from each other. Method: A total of 255 patients with knee osteoarthritis were randomized to appropriate care with hylan G-F 20 or appropriate care without hylan G-F 20. The hylan G-F 20 group was partitioned into two subgroups: (1) patients who received a single course of hylan G-F 20; and (2) patients who received two or more courses of hylan G-F 20. Results: For the primary effectiveness measure, change in Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) pain score as a percent of baseline, the single-course subgroup improved by 41%, the repeat-course subgroup by 35%, and the appropriate care group by 14%. Both subgroups improved significantly more than the appropriate care group (P < 0.05) and were not statistically significantly different from each other (70% power to detect a 20% difference). Secondary effectiveness measures showed similar results. In the repeat-course subgroup, no statistically significant differences were found in the number of local adverse events, the number of patients with local adverse events, or arthrocentesis rates between the first and repeat courses of treatment. Conclusions: Although the study was neither designed nor powered to examine repeat treatment, this a posteriori analysis supports a favorable effectiveness and safety profile of hylan G-F 20 in repeat-course patients.