88 results for Three-phase Integrated Inverter


Relevance: 30.00%

Abstract:

OBJECTIVE: Standard cardiopulmonary bypass (CPB) circuits, with their large surface area and volume, contribute to postoperative systemic inflammatory reaction and hemodilution. In order to minimize these problems, a new approach has been developed, resulting in a single disposable, compact arterio-venous loop with integral kinetic-assist pumping, oxygenation, air removal, and gross filtration capabilities (CardioVention Inc., Santa Clara, CA, USA). The impact of this system on gas exchange capacity, blood elements and hemolysis was compared with that of a conventional circuit in a model of prolonged perfusion. METHODS: Twelve calves (mean body weight: 72.2+/-3.7 kg) were placed on cardiopulmonary bypass for 6 h with a flow of 5 l/min, and randomly assigned to the CardioVention system (n=6) or a standard CPB circuit (n=6). A standard battery of blood samples was taken before and throughout bypass. Analysis of variance was used for comparison. RESULTS: The hematocrit remained stable throughout the experiment in the CardioVention group, whereas it dropped in the standard group in the early phase of perfusion. When normalized to prebypass values, the two profiles differed significantly (P<0.01). Both O2 and CO2 transfers were significantly improved in the CardioVention group (P=0.04 and P<0.001, respectively). There was a slightly higher pressure drop in the CardioVention group, but no single value exceeded 112 mmHg. No hemolysis could be detected in either group, with all free plasma Hb values below 15 mg/l. Thrombocyte count, when corrected for hematocrit and normalized to prebypass values, exhibited a greater drop in the standard group (P=0.03). CONCLUSION: The CardioVention system, with its limited priming volume and exposed foreign surface area, improves gas exchange, probably because of the absence of detectable hemodilution, and appears to limit the decrease in thrombocyte count, which may be ascribed to the reduced surface area. Despite the volume and surface constraints, no hemolysis could be detected throughout the 6 h full-flow perfusion period.
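The comparison described above (hematocrit normalized to each animal's prebypass value, then compared between circuits over time by analysis of variance) can be sketched as follows. The data frame, group sizes and two-way ANOVA layout are illustrative assumptions, not the study's actual analysis code.

```python
# Illustrative sketch: normalize hematocrit to each animal's prebypass value,
# then compare the two circuits over time with a two-way ANOVA.
# Data, column names and the ANOVA layout are invented for illustration.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "animal": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "group":  ["CardioVention"] * 6 + ["Standard"] * 6,
    "time_h": [0, 1, 6] * 4,
    "hct":    [33, 33, 32, 34, 34, 33, 32, 27, 28, 33, 28, 29],
})

# Normalize each animal's hematocrit to its prebypass (time 0) value.
baseline = df[df.time_h == 0].set_index("animal")["hct"]
df["hct_norm"] = df["hct"] / df["animal"].map(baseline)

# Two-way ANOVA: does the normalized profile differ between circuits over time?
model = ols("hct_norm ~ C(group) * C(time_h)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```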

Relevance: 30.00%

Abstract:

High systemic levels of IP-10 at onset of combination therapy for chronic hepatitis C mirror intrahepatic mRNA levels and predict a slower first-phase decline in HCV RNA as well as poor outcome. Recently, several genome-wide association studies have revealed that single nucleotide polymorphisms (SNPs) on chromosome 19 in proximity to IL28B predict spontaneous clearance of HCV infection as well as therapeutic outcome among patients infected with HCV genotype 1, with three such SNPs being highly predictive: rs12979860, rs12980275, and rs8099917. In the present study, we correlated genetic variations in these SNPs from 253 Caucasian patients with pretreatment plasma levels of IP-10 and HCV RNA throughout therapy within a phase III treatment trial (HCV-DITTO). The favorable genetic variations in all three SNPs (CC, AA, and TT, respectively) were significantly associated with lower baseline IP-10 (CC vs. CT/TT at rs12979860: median 189 vs. 258 pg/mL, P=0.02; AA vs. AG/GG at rs12980275: median 189 vs. 258 pg/mL, P=0.01; TT vs. TG/GG at rs8099917: median 224 vs. 288 pg/mL, P=0.04), were significantly less common among HCV genotype 1 infected patients than among those with genotype 2/3 (P<0.0001, P<0.0001, and P=0.01, respectively), and were associated with significantly higher baseline viral load than the unfavorable genotypes (6.3 vs. 5.9 log10 IU/mL, P=0.0012; 6.3 vs. 6.0 log10 IU/mL, P=0.026; and 6.3 vs. 5.8 log10 IU/mL, P=0.0003, respectively). Among HCV genotype 1 infected homozygous or heterozygous carriers of the favorable C, A, and T genotypes, lower baseline IP-10 was significantly associated with a greater decline in HCV RNA from day 0 to 4, which translated into increased rates of achieving SVR among homozygous patients with baseline IP-10 below 150 pg/mL (85%, 75%, and 75%, respectively). In a multivariate analysis among genotype 1 infected patients, both baseline IP-10 and the SNPs were significant independent predictors of SVR. Conclusion: Baseline plasma IP-10 is significantly associated with IL28B variations and augments the predictiveness of the first-phase decline in HCV RNA and final treatment outcome.
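The multivariate analysis mentioned in the conclusion (baseline IP-10 and IL28B genotype as independent predictors of SVR) could be sketched along these lines; the data frame, genotype coding and 150 pg/mL dichotomization below are invented for illustration only.

```python
# Illustrative sketch: logistic regression of SVR on baseline IP-10 and
# IL28B rs12979860 genotype. The data are invented, not the trial's data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "svr":     [1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1],
    "ip10":    [120, 340, 310, 95, 420, 180, 130, 260, 380, 110, 140, 160],  # pg/mL
    "cc_geno": [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1],  # 1 = favorable CC at rs12979860
})

model = smf.logit("svr ~ ip10 + cc_geno", data=df).fit(disp=False)
print(model.summary())

# Simple dichotomization mirroring the 150 pg/mL cut-off quoted above.
df["low_ip10"] = (df["ip10"] < 150).astype(int)
print(df.groupby(["cc_geno", "low_ip10"])["svr"].mean())
```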

Relevance: 30.00%

Abstract:

The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used for both industrial automation and business applications because they cost significantly less than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, industrial automation, embodied in PLC, DCS, SCADA and robot control systems. Today this industry employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and telecommunications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike with CISC processors, RISC processor architecture is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other kind of computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computing, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominant closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, together with the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed - all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact through the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability on a very narrow customer base thanks to strong technology-enabled customer lock-in and the high risks customers face, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately, the Internet of Things (IoT) and Weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will, in due course, face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition between the incumbents: first, through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development; second, through research on process re-engineering in the case of global software support for complex systems. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes along which industrial automation could advance, taking into account the looming rise of the Internet of Things (IoT) and Weightless networks. Industrial automation is an industry dominated by a handful of global players, each focused on maintaining its own proprietary solutions. The rise of de facto standards such as the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created new markets for personal computers, smartphones and tablets, and it will eventually also impact industrial automation through game-changing commoditization and related control-point and business-model changes. This trend will inevitably continue, but the transition to commoditized industrial automation will not happen in the near future.

Relevance: 30.00%

Abstract:

OBJECTIVE: To assess the survival benefit and safety profile of low-dose (850 mg/kg) and high-dose (1350 mg/kg) phospholipid emulsion vs. placebo administered as a continuous 3-day infusion in patients with confirmed or suspected Gram-negative severe sepsis. Preclinical and ex vivo studies show that lipoproteins bind and neutralize endotoxin, and experimental animal studies demonstrate protection from septic death when lipoproteins are administered. Endotoxin neutralization correlates with the amount of phospholipid in the lipoprotein particles. DESIGN: A three-arm, randomized, blinded, placebo-controlled trial. SETTING: Conducted at 235 centers worldwide between September 2004 and April 2006. PATIENTS: A total of 1379 patients participated in the study: 598 patients received low-dose phospholipid emulsion and 599 patients received placebo. The high-dose phospholipid emulsion arm, which included 182 patients, was stopped on the recommendation of the Independent Data Monitoring Committee due to an increase in life-threatening serious adverse events at the fourth interim analysis. MEASUREMENTS AND MAIN RESULTS: The primary endpoints were 28-day all-cause mortality and new-onset organ failure. There was no significant treatment benefit for low- or high-dose phospholipid emulsion vs. placebo for 28-day all-cause mortality, with rates of 25.8% (p = .329), 31.3% (p = .879), and 26.9%, respectively. The rate of new-onset organ failure was not statistically different among groups, at 26.3%, 31.3%, and 20.4% with low-dose phospholipid emulsion, high-dose phospholipid emulsion, and placebo, respectively (one-sided p = .992, low vs. placebo; p = .999, high vs. placebo). Of the subjects treated, 45% had microbiologically confirmed Gram-negative infections. Maximal changes in mean hemoglobin levels were reached on day 10 (-1.04 g/dL) and day 5 (-1.36 g/dL) with low- and high-dose phospholipid emulsion, respectively, and on day 14 (-0.82 g/dL) with placebo. CONCLUSIONS: Treatment with phospholipid emulsion did not reduce 28-day all-cause mortality or the onset of new organ failure in patients with suspected or confirmed Gram-negative severe sepsis.
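A rough sketch of the primary mortality comparison (low-dose phospholipid emulsion vs. placebo) is shown below; the event counts are back-calculated from the reported percentages, and the two-sided z-test for proportions is an illustrative stand-in for the trial's prespecified analysis.

```python
# Rough sketch of the 28-day mortality comparison (low-dose arm vs. placebo).
# Event counts are back-calculated from the reported rates and are approximate.
from statsmodels.stats.proportion import proportions_ztest

deaths = [round(0.258 * 598), round(0.269 * 599)]   # low-dose, placebo
n      = [598, 599]

stat, p = proportions_ztest(deaths, n)              # two-sided z-test
print(f"low-dose 28-day mortality: {deaths[0] / n[0]:.1%}")
print(f"placebo  28-day mortality: {deaths[1] / n[1]:.1%}")
print(f"two-sided p = {p:.3f}")
```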

Relevance: 30.00%

Abstract:

Correlates of immune-mediated protection to most viral and cancer vaccines are still unknown. This impedes the development of novel vaccines to incurable diseases such as HIV and cancer. In this study, we have used functional genomics and polychromatic flow cytometry to define the signature of the immune response to the yellow fever (YF) vaccine 17D (YF17D) in a cohort of 40 volunteers followed for up to 1 yr after vaccination. We show that immunization with YF17D leads to an integrated immune response that includes several effector arms of innate immunity, including complement, the inflammasome, and interferons, as well as adaptive immunity as shown by an early T cell response followed by a brisk and variable B cell response. Development of these responses is preceded, as demonstrated in three independent vaccination trials and in a novel in vitro system of primary immune responses (modular immune in vitro construct [MIMIC] system), by the coordinated up-regulation of transcripts for specific transcription factors, including STAT1, IRF7, and ETS2, which are upstream of the different effector arms of the immune response. These results clearly show that the immune response to a strong vaccine is preceded by coordinated induction of master transcription factors that lead to the development of a broad, polyfunctional, and persistent immune response that integrates all effector cells of the immune system.
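The "coordinated up-regulation of transcripts" for factors such as STAT1, IRF7 and ETS2 is the kind of signal a simple per-gene fold-change and paired-test screen would pick up; the sketch below uses an invented expression matrix and is not the study's actual transcriptomics pipeline.

```python
# Minimal sketch of screening for up-regulated transcripts after vaccination:
# per-gene log2 fold change plus a paired t-test. Expression values are invented;
# STAT1/IRF7/ETS2 are the transcription factors named in the abstract.
import numpy as np
from scipy import stats

genes = ["STAT1", "IRF7", "ETS2", "ACTB"]
pre  = np.array([[5.1, 6.0, 4.8, 9.9],    # rows: subjects, columns: genes (log2 units)
                 [5.3, 5.8, 5.0, 10.1],
                 [4.9, 6.2, 4.7, 10.0],
                 [5.2, 5.9, 4.9, 9.8]])
post = np.array([[6.4, 7.5, 5.9, 9.9],
                 [6.6, 7.2, 6.1, 10.0],
                 [6.2, 7.8, 5.8, 10.1],
                 [6.5, 7.4, 6.0, 9.9]])

for j, gene in enumerate(genes):
    log2_fc = (post[:, j] - pre[:, j]).mean()       # data are already log2-scaled
    t, p = stats.ttest_rel(post[:, j], pre[:, j])
    print(f"{gene:5s}  mean log2 FC = {log2_fc:+.2f}   paired t-test p = {p:.3f}")
```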

Relevance: 30.00%

Abstract:

BACKGROUND: Most patients with glioblastoma are older than 60 years, but treatment guidelines are based on trials in patients aged only up to 70 years. We did a randomised trial to assess the optimum palliative treatment in patients aged 60 years and older with glioblastoma. METHODS: Patients with newly diagnosed glioblastoma were recruited from Austria, Denmark, France, Norway, Sweden, Switzerland, and Turkey. They were assigned by a computer-generated randomisation schedule, stratified by centre, to receive temozolomide (200 mg/m(2) on days 1-5 of every 28 days for up to six cycles), hypofractionated radiotherapy (34·0 Gy administered in 3·4 Gy fractions over 2 weeks), or standard radiotherapy (60·0 Gy administered in 2·0 Gy fractions over 6 weeks). Patients and study staff were aware of treatment assignment. The primary endpoint was overall survival. Analyses were done by intention to treat. This trial is registered, number ISRCTN81470623. FINDINGS: 342 patients were enrolled, of whom 291 were randomised across three treatment groups (temozolomide n=93, hypofractionated radiotherapy n=98, standard radiotherapy n=100) and 51 of whom were randomised across only two groups (temozolomide n=26, hypofractionated radiotherapy n=25). In the three-group randomisation, in comparison with standard radiotherapy, median overall survival was significantly longer with temozolomide (8·3 months [95% CI 7·1-9·5; n=93] vs 6·0 months [95% CI 5·1-6·8; n=100], hazard ratio [HR] 0·70; 95% CI 0·52-0·93, p=0·01), but not with hypofractionated radiotherapy (7·5 months [6·5-8·6; n=98], HR 0·85 [0·64-1·12], p=0·24). For all patients who received temozolomide or hypofractionated radiotherapy (n=242) overall survival was similar (8·4 months [7·3-9·4; n=119] vs 7·4 months [6·4-8·4; n=123]; HR 0·82, 95% CI 0·63-1·06; p=0·12). For age older than 70 years, survival was better with temozolomide and with hypofractionated radiotherapy than with standard radiotherapy (HR for temozolomide vs standard radiotherapy 0·35 [0·21-0·56], p<0·0001; HR for hypofractionated vs standard radiotherapy 0·59 [95% CI 0·37-0·93], p=0·02). Patients treated with temozolomide who had tumour MGMT promoter methylation had significantly longer survival than those without MGMT promoter methylation (9·7 months [95% CI 8·0-11·4] vs 6·8 months [5·9-7·7]; HR 0·56 [95% CI 0·34-0·93], p=0·02), but no difference was noted between those with methylated and unmethylated MGMT promoter treated with radiotherapy (HR 0·97 [95% CI 0·69-1·38]; p=0·81). As expected, the most common grade 3-4 adverse events in the temozolomide group were neutropenia (n=12) and thrombocytopenia (n=18). Grade 3-5 infections in all randomisation groups were reported in 18 patients. Two patients had fatal infections (one in the temozolomide group and one in the standard radiotherapy group) and one in the temozolomide group with grade 2 thrombocytopenia died from complications after surgery for a gastrointestinal bleed. INTERPRETATION: Standard radiotherapy was associated with poor outcomes, especially in patients older than 70 years. Both temozolomide and hypofractionated radiotherapy should be considered as standard treatment options in elderly patients with glioblastoma. MGMT promoter methylation status might be a useful predictive marker for benefit from temozolomide. FUNDING: Merck, Lion's Cancer Research Foundation, University of Umeå, and the Swedish Cancer Society.
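The hazard ratios reported above come from a survival analysis of the treatment groups; a minimal sketch of such an analysis, on invented data and assuming the lifelines package is available, is shown below.

```python
# Sketch of a survival comparison of the kind reported above (hazard ratio for
# temozolomide vs. standard radiotherapy). The toy data frame is invented.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months": [8.1, 10.2, 6.5, 12.0, 5.4, 7.7, 9.3, 4.8, 6.1, 11.4],
    "died":   [1,   1,    1,   0,    1,   1,   1,   1,   1,   0],
    "tmz":    [1,   1,    0,   1,    0,   0,   1,   0,   0,   1],  # 1 = temozolomide arm
    "age":    [66,  72,   75,  63,   80,  71,  68,  77,  74,  65],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()        # exp(coef) for `tmz` is the estimated hazard ratio
```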

Relevance: 30.00%

Abstract:

OBJECTIVE: The objective of this study was to analyse the long-term mortality and morbidity of a group of patients undergoing thrombolysis during the acute phase of myocardial infarction and to determine the factors influencing the prognosis. One hundred and seventy-five patients (149 men and 26 women, mean age: 54 years) were included in a randomized study comparing the efficacy of two thrombolytic substances administered during the acute phase of myocardial infarction. A standard questionnaire was sent to the various attending physicians for the follow-up of these 175 patients. RESULTS: The hospital mortality was 5% (9 patients) and 14 patients (9%) died after a mean follow-up of 4.3 +/- 2.1 years. The 5-year actuarial survival was 81%. Fourteen patients (8%) were lost to follow-up and 49 patients (32%) underwent surgical or percutaneous revascularization during follow-up. Revascularized patients had significantly better survival than non-revascularized patients. The mean left ventricular ejection fraction of patients who died was lower (48% versus 71%) than that of survivors. Patients with an ejection fraction < 40% also had significantly lower survival (p = 0.01). Patency of the vessel after thrombolysis was associated with slightly better survival, but this difference was not significant. The ejection fraction at 6 months was also significantly higher (60 +/- 10% versus 49 +/- 11%) in patients with a patent artery. Three risk factors for death or reinfarction were identified: age > 65 years at the time of infarction, disease in more than one coronary vessel, and absence of angina pectoris before infarction. The probability of a coronary event varied from 2 to 88% according to the number of risk factors present. At the time of follow-up, 60% of patients presented hypercholesterolaemia versus only 7% before infarction; 73% of patients received anticoagulant or antiplatelet treatment and 81% of patients were asymptomatic. CONCLUSION: The mortality and the acute and long-term morbidity of myocardial infarction remain high, as only 34% of our patients did not develop any events during follow-up, despite careful medical management and follow-up. The ejection fraction has an important prognostic value. Patient management should take the above-mentioned risk factors into account.

Relevance: 30.00%

Abstract:

Les problèmes d'écoulements multiphasiques en milieux poreux sont d'un grand intérêt pour de nombreuses applications scientifiques et techniques, comme la séquestration de CO2, l'extraction de pétrole et la dépollution des aquifères. La complexité intrinsèque des systèmes multiphasiques et l'hétérogénéité des formations géologiques sur des échelles multiples représentent un challenge majeur pour comprendre et modéliser les déplacements immiscibles dans les milieux poreux. Les descriptions à l'échelle supérieure basées sur la généralisation de l'équation de Darcy sont largement utilisées, mais ces méthodes sont sujettes à des limitations pour les écoulements présentant de l'hystérèse. Les avancées récentes en termes de performances computationnelles et le développement de méthodes précises pour caractériser l'espace interstitiel ainsi que la distribution des phases ont favorisé l'utilisation de modèles qui permettent une résolution fine à l'échelle du pore. Ces modèles offrent un aperçu des caractéristiques de l'écoulement qui ne peuvent pas être facilement observées en laboratoire et peuvent être utilisés pour expliquer la différence entre les processus physiques et les modèles à l'échelle macroscopique existants. L'objet premier de la thèse porte sur la simulation numérique directe : les équations de Navier-Stokes sont résolues dans l'espace interstitiel et la méthode du volume de fluide (VOF) est employée pour suivre l'évolution de l'interface. Dans la méthode VOF, la distribution des phases est décrite par une fonction fluide pour l'ensemble du domaine et des conditions aux bords particulières permettent la prise en compte des propriétés de mouillage du milieu poreux. Dans la première partie de la thèse, nous simulons le drainage dans une cellule de Hele-Shaw 2D avec des obstacles cylindriques. Nous montrons que l'approche proposée est applicable même pour des ratios de densité et de viscosité très importants et permet de modéliser la transition entre déplacement stable et digitation visqueuse. Nous nous intéressons ensuite à l'interprétation de la pression capillaire à l'échelle macroscopique. Nous montrons que les techniques basées sur la moyenne spatiale de la pression présentent plusieurs limitations et sont imprécises en présence d'effets visqueux et de piégeage. Au contraire, une définition basée sur l'énergie permet de séparer les contributions capillaires des effets visqueux. La seconde partie de la thèse est consacrée à l'investigation des effets d'inertie associés aux reconfigurations irréversibles des ménisques causées par les instabilités de l'interface. Comme prototype pour ces phénomènes, nous étudions d'abord la dynamique d'un ménisque dans un pore angulaire. Nous montrons que, dans un réseau de pores cubiques, les sauts et reconfigurations sont si fréquents que les effets d'inertie mènent à différentes configurations des fluides. À cause de la non-linéarité du problème, la distribution des fluides influence le travail des forces de pression, qui est à son tour lié à la chute de pression dans la loi de Darcy. Cela suggère que ces phénomènes devraient être pris en compte lorsque l'on décrit l'écoulement multiphasique en milieux poreux à l'échelle macroscopique. La dernière partie de la thèse s'attache à démontrer la validité de notre approche par une comparaison avec des expériences en laboratoire : un drainage instable dans un milieu poreux quasi 2D (une cellule de Hele-Shaw avec des obstacles cylindriques).
Plusieurs simulations sont réalisées sous différentes conditions aux bords et en utilisant différents modèles (modèle intégré 2D et modèle 3D) afin de comparer certaines quantités macroscopiques avec les observations correspondantes en laboratoire. Malgré le challenge de modéliser des déplacements instables, où, par définition, de petites perturbations peuvent grandir sans fin, notre approche numérique apporte des résultats satisfaisants pour tous les cas étudiés. - Problems involving multiphase flow in porous media are of great interest in many scientific and engineering applications including Carbon Capture and Storage, oil recovery and groundwater remediation. The intrinsic complexity of multiphase systems and the multiscale heterogeneity of geological formations represent the major challenges to understanding and modeling immiscible displacement in porous media. Upscaled descriptions based on generalization of Darcy's law are widely used, but they are subject to several limitations for flows that exhibit hysteretic and history-dependent behaviors. Recent advances in high performance computing and the development of accurate methods to characterize pore space and phase distribution have fostered the use of models that allow sub-pore resolution. These models provide insight into flow characteristics that cannot be easily obtained by laboratory experiments and can be used to explain the gap between physical processes and existing macro-scale models. We focus on direct numerical simulations: we solve the Navier-Stokes equations for mass and momentum conservation in the pore space and employ the Volume Of Fluid (VOF) method to track the evolution of the interface. In the VOF method the distribution of the phases is described by a fluid function (whole-domain formulation) and special boundary conditions account for the wetting properties of the porous medium. In the first part of this thesis we simulate drainage in a 2-D Hele-Shaw cell filled with cylindrical obstacles. We show that the proposed approach can handle very large density and viscosity ratios and is able to model the transition from stable displacement to viscous fingering. We then focus on the interpretation of the macroscopic capillary pressure, showing that pressure-averaging techniques are subject to several limitations and are not accurate in the presence of viscous effects and trapping. On the contrary, an energy-based definition allows separating viscous and capillary contributions. In the second part of the thesis we investigate inertia effects associated with abrupt and irreversible reconfigurations of the menisci caused by interface instabilities. As a prototype of these phenomena we first consider the dynamics of a meniscus in an angular pore. We show that in a network of cubic pores, jumps and reconfigurations are so frequent that inertia effects lead to different fluid configurations. Due to the non-linearity of the problem, the distribution of the fluids influences the work done by pressure forces, which is in turn related to the pressure drop in Darcy's law. This suggests that these phenomena should be taken into account when upscaling multiphase flow in porous media. The last part of the thesis is devoted to proving the accuracy of the numerical approach by validation with experiments of unstable primary drainage in a quasi-2D porous medium (i.e., a Hele-Shaw cell filled with cylindrical obstacles).
We perform simulations under different boundary conditions and using different models (2-D integrated and full 3-D) and we compare several macroscopic quantities with the corresponding experiment. Despite the intrinsic challenges of modeling unstable displacement, where by definition small perturbations can grow without bounds, the numerical method gives satisfactory results for all the cases studied.
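As a purely illustrative aside, the bookkeeping idea behind the VOF method (a volume fraction advected with the flow to mark which fluid occupies each cell) can be reduced to a one-dimensional toy sketch; the thesis itself solves the full Navier-Stokes equations with interface reconstruction and wetting boundary conditions, which this sketch does not attempt.

```python
# Very reduced illustration of the VOF idea: a volume-fraction field alpha in [0, 1]
# is advected with the flow and marks which fluid occupies each cell. This 1-D,
# first-order upwind sketch only shows the bookkeeping, not the actual solver.
import numpy as np

nx, L = 100, 1.0
dx = L / nx
u = 0.5                      # uniform velocity, invading fluid moves to the right
dt = 0.5 * dx / u            # CFL number of 0.5
alpha = np.zeros(nx)
alpha[: nx // 5] = 1.0       # invading (non-wetting) fluid initially fills the left 20%

for _ in range(100):
    flux = u * alpha                          # upwind flux at cell faces (u > 0)
    alpha[1:] -= dt / dx * (flux[1:] - flux[:-1])
    alpha[0] = 1.0                            # inlet kept saturated with invading fluid
    alpha = np.clip(alpha, 0.0, 1.0)          # keep the volume fraction bounded

interface = np.argmax(alpha < 0.5) * dx
print(f"interface position after {100 * dt:.3f} s: x = {interface:.3f} m")
```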

Relevance: 30.00%

Abstract:

PURPOSE: We conducted a phase I multicenter trial in naïve metastatic castrate-resistant prostate cancer patients with escalating inecalcitol dosages combined with docetaxel-based chemotherapy. Inecalcitol is a novel vitamin D receptor agonist with higher antiproliferative effects and a 100-fold lower hypercalcemic activity than calcitriol. EXPERIMENTAL DESIGN: Safety and efficacy were evaluated in groups of three to six patients receiving inecalcitol during a 21-day cycle in combination with docetaxel (75 mg/m2 every 3 weeks) and oral prednisone (5 mg twice a day) for up to six cycles. The primary endpoint was dose-limiting toxicity (DLT), defined as grade 3 hypercalcemia within the first cycle. The efficacy endpoint was ≥30% PSA decline within 3 months. RESULTS: Eight dose levels (40-8,000 μg) were evaluated in 54 patients. DLT occurred in two of four patients receiving 8,000 μg/day after one and two weeks of inecalcitol. Calcemia normalized a few days after interruption of inecalcitol. Two other patients reached grade 2 hypercalcemia, and the dose level was reduced to 4,000 μg. After dose reduction, calcemia remained within the normal range or at grade 1 hypercalcemia. The maximum tolerated dose was 4,000 μg daily. Respectively, 85% and 76% of the patients had a ≥30% PSA decline within 3 months and a ≥50% PSA decline at any time during the study. Median time to PSA progression was 169 days. CONCLUSION: A high antiproliferative daily dose of inecalcitol has been safely used in combination with docetaxel and shows encouraging PSA responses (≥30% PSA response: 85%; ≥50% PSA response: 76%). A randomized phase II study is planned.
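The cohort sizes described (three to six patients per dose level, escalation driven by dose-limiting toxicities) suggest a classic 3+3 design; the decision rule below is a generic sketch of that scheme, not the trial's actual protocol logic.

```python
# Generic 3+3 dose-escalation decision rule, shown for illustration only.
from typing import Optional

def three_plus_three(dlt_first3: int, dlt_next3: Optional[int] = None) -> str:
    """Return the action after observing DLTs in a cohort at the current dose."""
    if dlt_first3 == 0:
        return "escalate to next dose level"
    if dlt_first3 == 1:
        if dlt_next3 is None:
            return "expand cohort to 6 patients at the same dose"
        if dlt_first3 + dlt_next3 <= 1:
            return "escalate to next dose level"
        return "stop: MTD exceeded, previous dose is the MTD"
    return "stop: MTD exceeded, previous dose is the MTD"

# Examples: 2 DLTs in the first cohort (as at 8,000 ug/day) means the dose is too high.
print(three_plus_three(dlt_first3=2))               # stop / MTD exceeded
print(three_plus_three(dlt_first3=1))               # expand cohort
print(three_plus_three(dlt_first3=1, dlt_next3=0))  # escalate
```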

Relevance: 30.00%

Abstract:

PURPOSE: To evaluate gadocoletic acid (B-22956), a gadolinium-based paramagnetic blood pool agent, for contrast-enhanced coronary magnetic resonance angiography (MRA) in a Phase I clinical trial, and to compare the findings with those obtained using a standard noncontrast T2 preparation sequence. MATERIALS AND METHODS: The left coronary system was imaged in 12 healthy volunteers before B-22956 application and 5 (N = 11) and 45 (N = 7) minutes after application of 0.075 mmol/kg of body weight (BW) of B-22956. Additionally, imaging of the right coronary system was performed 23 minutes after B-22956 application (N = 6). A three-dimensional gradient echo sequence with T2 preparation (precontrast) or inversion recovery (IR) pulse (postcontrast) with real-time navigator correction was used. Assessment of the left and right coronary systems was performed qualitatively (a 4-point visual score for image quality) and quantitatively in terms of signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), vessel sharpness, visible vessel length, maximal luminal diameter, and the number of visible side branches. RESULTS: Significant (P < 0.01) increases in SNR (+42%) and CNR (+86%) were noted five minutes after B-22956 application, compared to precontrast T2 preparation values. A significant increase in CNR (+40%, P < 0.05) was also noted 45 minutes postcontrast. Vessels (left anterior descending artery (LAD), left coronary circumflex (LCx), and right coronary artery (RCA)) were also significantly (P < 0.05) sharper on postcontrast images. Significant increases in vessel length were noted for the LAD (P < 0.05) and LCx and RCA (both P < 0.01), while significantly more side branches were noted for the LAD and RCA (both P < 0.05) when compared to precontrast T2 preparation values. CONCLUSION: The use of the intravascular contrast agent B-22956 substantially improves both objective and subjective parameters of image quality on high-resolution three-dimensional coronary MRA. The increase in SNR, CNR, and vessel sharpness minimizes current limitations of coronary artery visualization with high-resolution coronary MRA.
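The SNR and CNR figures quoted above follow from the usual region-of-interest definitions (mean signal divided by the standard deviation of background noise, and difference of mean signals divided by the same noise estimate); the sketch below applies those generic definitions to invented ROI values, since the study's exact measurement protocol is not given here.

```python
# Generic SNR/CNR calculation of the kind used to quantify contrast enhancement:
# SNR = mean ROI signal / SD of background noise, CNR = (blood - myocardium) / noise SD.
# ROI values are invented for illustration.
import numpy as np

blood_roi      = np.array([412., 398., 405., 420., 409.])     # coronary lumen signal
myocardium_roi = np.array([221., 230., 218., 225., 227.])
noise_roi      = np.array([3.1, -2.4, 1.8, -0.9, 2.2, -1.5])  # background (air) samples

noise_sd = noise_roi.std(ddof=1)
snr = blood_roi.mean() / noise_sd
cnr = (blood_roi.mean() - myocardium_roi.mean()) / noise_sd
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```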

Relevance: 30.00%

Abstract:

The aim was to investigate the efficacy of neoadjuvant docetaxel-cisplatin and identify prognostic factors for outcome in locally advanced stage IIIA (pN2 by mediastinoscopy) non-small-cell lung cancer (NSCLC) patients. In all, 75 patients (from 90 enrolled) underwent tumour resection after three 3-week cycles of docetaxel 85 mg m-2 (day 1) plus cisplatin 40 or 50 mg m-2 (days 1 and 2). Therapy was well tolerated (overall grade 3 toxicity occurred in 48% of patients; no grade 4 nonhaematological toxicity was reported), with no observed late toxicities. Median overall survival (OS) and event-free survival (EFS) times were 35 and 15 months, respectively, in the 75 patients who underwent surgery; corresponding figures for all 90 patients enrolled were 28 and 12 months. At 3 years after initiating trial therapy, 27 out of 75 patients (36%) were alive and tumour free. At 5-year follow-up, 60 and 65% of patients had local relapse and distant metastases, respectively. The most common sites of distant metastases were the lung (24%) and brain (17%). Factors associated with OS, EFS and risk of local relapse and distant metastases were complete tumour resection and chemotherapy activity (clinical response, pathologic response, mediastinal downstaging). Neoadjuvant docetaxel-cisplatin was effective and tolerable in stage IIIA pN2 NSCLC, with chemotherapy contributing significantly to outcomes.

Relevance: 30.00%

Abstract:

EXECUTIVE SUMMARY: Evaluating Information Security posture within an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately this is ineffective because it does not take into consideration the necessity of having a global and systemic multidimensional approach to Information Security evaluation. At the same time the overall security level is globally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One: Information Security Evaluation Issues consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. Then we introduce the baseline attributes of our model and set out the expected results of evaluations according to our model. Chapter 2 is focused on the definition of Information Security to be used as a reference point for our evaluation model. The inherent concepts of the contents of a holistic and baseline Information Security Program are defined. Based on this, the most common roots-of-trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. Then the operation of the model is discussed. Assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two: Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, Functional dimension, Human dimension, and Legal dimension. Each Information Security dimension is discussed in a separate chapter.
For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation: (i) identification of the key elements within the dimension; (ii) identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension; and (iii) identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing the security issues identified for that dimension. The second phase concerns the evaluation of each Information Security dimension by: (i) implementing the evaluation model, based on the elements identified for each dimension within the first phase, and identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; and (ii) applying the maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by every organization in order to define its own security requirements. Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. Supporting Resources comprise the bibliographic resources that were used to elaborate and justify our approach. Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security. RÉSUMÉ : Contexte général de la thèse L'évaluation de la sécurité en général, et plus particulièrement celle de la sécurité de l'information, est devenue pour les organisations non seulement une mission cruciale à réaliser, mais aussi de plus en plus complexe. À l'heure actuelle, cette évaluation se base principalement sur des méthodologies, des bonnes pratiques, des normes ou des standards qui appréhendent séparément les différents aspects qui composent la sécurité de l'information. Nous pensons que cette manière d'évaluer la sécurité est inefficiente, car elle ne tient pas compte de l'interaction des différentes dimensions et composantes de la sécurité entre elles, bien qu'il soit admis depuis longtemps que le niveau de sécurité globale d'une organisation est toujours celui du maillon le plus faible de la chaîne sécuritaire.
Nous avons identifié le besoin d'une approche globale, intégrée, systémique et multidimensionnelle de l'évaluation de la sécurité de l'information. En effet, et c'est le point de départ de notre thèse, nous démontrons que seule une prise en compte globale de la sécurité permettra de répondre aux exigences de sécurité optimale ainsi qu'aux besoins de protection spécifiques d'une organisation. Ainsi, notre thèse propose un nouveau paradigme d'évaluation de la sécurité afin de satisfaire aux besoins d'efficacité et d'efficience d'une organisation donnée. Nous proposons alors un modèle qui vise à évaluer d'une manière holistique toutes les dimensions de la sécurité, afin de minimiser la probabilité qu'une menace potentielle puisse exploiter des vulnérabilités et engendrer des dommages directs ou indirects. Ce modèle se base sur une structure formalisée qui prend en compte tous les éléments d'un système ou programme de sécurité. Ainsi, nous proposons un cadre méthodologique d'évaluation qui considère la sécurité de l'information à partir d'une perspective globale. Structure de la thèse et thèmes abordés Notre document est structuré en trois parties. La première, intitulée « La problématique de l'évaluation de la sécurité de l'information », est composée de quatre chapitres. Le chapitre 1 introduit l'objet de la recherche ainsi que les concepts de base du modèle d'évaluation proposé. La manière traditionnelle d'évaluer la sécurité fait l'objet d'une analyse critique pour identifier les éléments principaux et invariants à prendre en compte dans notre approche holistique. Les éléments de base de notre modèle d'évaluation ainsi que son fonctionnement attendu sont ensuite présentés pour pouvoir tracer les résultats attendus de ce modèle. Le chapitre 2 se focalise sur la définition de la notion de Sécurité de l'Information. Il ne s'agit pas d'une redéfinition de la notion de la sécurité, mais d'une mise en perspective des dimensions, critères et indicateurs à utiliser comme base de référence, afin de déterminer l'objet de l'évaluation qui sera utilisé tout au long de notre travail. Les concepts inhérents de ce qui constitue le caractère holistique de la sécurité ainsi que les éléments constitutifs d'un niveau de référence de sécurité sont définis en conséquence. Ceci permet d'identifier ceux que nous avons dénommés « les racines de confiance ». Le chapitre 3 présente et analyse la différence et les relations qui existent entre les processus de la Gestion des Risques et de la Gestion de la Sécurité, afin d'identifier les éléments constitutifs du cadre de protection à inclure dans notre modèle d'évaluation. Le chapitre 4 est consacré à la présentation de notre modèle d'évaluation Information Security Assurance Assessment Model (ISAAM) et à la manière dont il répond aux exigences de l'évaluation telles que nous les avons préalablement présentées. Dans ce chapitre, les concepts sous-jacents relatifs aux notions d'assurance et de confiance sont analysés. En se basant sur ces deux concepts, la structure du modèle d'évaluation est développée pour obtenir une plateforme qui offre un certain niveau de garantie en s'appuyant sur trois attributs d'évaluation, à savoir : « la structure de confiance », « la qualité du processus », et « la réalisation des exigences et des objectifs ».
Les problématiques liées à chacun de ces attributs d'évaluation sont analysées en se basant sur l'état de l'art de la recherche et de la littérature, sur les différentes méthodes existantes ainsi que sur les normes et les standards les plus courants dans le domaine de la sécurité. Sur cette base, trois différents niveaux d'évaluation sont construits, à savoir : le niveau d'assurance, le niveau de qualité et le niveau de maturité, qui constituent la base de l'évaluation de l'état global de la sécurité d'une organisation. La deuxième partie, « L'application du Modèle d'évaluation de l'assurance de la sécurité de l'information par domaine de sécurité », est elle aussi composée de quatre chapitres. Le modèle d'évaluation déjà construit et analysé est, dans cette partie, mis dans un contexte spécifique selon les quatre dimensions prédéfinies de sécurité qui sont : la dimension Organisationnelle, la dimension Fonctionnelle, la dimension Humaine, et la dimension Légale. Chacune de ces dimensions et son évaluation spécifique fait l'objet d'un chapitre distinct. Pour chacune des dimensions, une évaluation en deux phases est construite comme suit. La première phase concerne l'identification des éléments qui constituent la base de l'évaluation : (i) identification des éléments clés de l'évaluation ; (ii) identification des « Focus Areas » pour chaque dimension, qui représentent les problématiques se trouvant dans la dimension ; (iii) identification des « Specific Factors » pour chaque Focus Area, qui représentent les mesures de sécurité et de contrôle qui contribuent à résoudre ou à diminuer les impacts des risques. La deuxième phase concerne l'évaluation de chaque dimension précédemment présentée. Elle est constituée, d'une part, de l'implémentation du modèle général d'évaluation à la dimension concernée en : (i) se basant sur les éléments spécifiés lors de la première phase ; (ii) identifiant les tâches sécuritaires spécifiques, les processus et les procédures qui auraient dû être effectués pour atteindre le niveau de protection souhaité. D'autre part, l'évaluation de chaque dimension est complétée par la proposition d'un modèle de maturité spécifique à chaque dimension, qui est à considérer comme une base de référence pour le niveau global de sécurité. Pour chaque dimension nous proposons un modèle de maturité générique qui peut être utilisé par chaque organisation, afin de spécifier ses propres exigences en matière de sécurité. Cela constitue une innovation dans le domaine de l'évaluation, que nous justifions pour chaque dimension et dont nous mettons systématiquement en avant la plus-value apportée. La troisième partie de notre document est relative à la validation globale de notre proposition et contient, en guise de conclusion, une mise en perspective critique de notre travail ainsi que des remarques finales. Cette dernière partie est complétée par une bibliographie et des annexes. Notre modèle d'évaluation de la sécurité intègre et se base sur de nombreuses sources d'expertise, telles que les bonnes pratiques, les normes, les standards, les méthodes et l'expertise de la recherche scientifique du domaine. Notre proposition constructive répond à un véritable problème non encore résolu, auquel doivent faire face toutes les organisations, indépendamment de leur taille et de leur profil.
Cela permettrait à ces dernières de spécifier leurs exigences particulières en matière de niveau de sécurité à satisfaire et d'instancier un processus d'évaluation spécifique à leurs besoins, afin qu'elles puissent s'assurer que leur sécurité de l'information est gérée d'une manière appropriée, offrant ainsi un certain niveau de confiance dans le degré de protection fourni. Nous avons intégré dans notre modèle le meilleur du savoir-faire, de l'expérience et de l'expertise disponibles actuellement au niveau international, dans le but de fournir un modèle d'évaluation simple, générique et applicable à un grand nombre d'organisations publiques ou privées. La valeur ajoutée de notre modèle d'évaluation réside précisément dans le fait qu'il est suffisamment générique et facile à implémenter tout en apportant des réponses aux besoins concrets des organisations. Ainsi, notre proposition constitue un outil d'évaluation fiable, efficient et dynamique découlant d'une approche d'évaluation cohérente. De ce fait, notre système d'évaluation peut être implémenté à l'interne par l'entreprise elle-même, sans recourir à des ressources supplémentaires, ce qui lui donne également la possibilité de mieux gouverner sa sécurité de l'information.
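The dimension / focus area / specific factor hierarchy described above can be illustrated with a small, hypothetical data structure and a naive "weakest link" maturity roll-up; the focus areas, factors and scores below are invented and are not the thesis's actual catalogue.

```python
# Hypothetical sketch of the dimension -> focus area -> specific factor hierarchy,
# with a naive maturity roll-up where the minimum score is the weakest link.
# All names and scores are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class SpecificFactor:
    name: str
    maturity: int          # e.g. 0 (non-existent) .. 5 (optimized)

@dataclass
class FocusArea:
    name: str
    factors: list = field(default_factory=list)

    def maturity(self) -> int:
        return min(f.maturity for f in self.factors)

@dataclass
class Dimension:
    name: str
    focus_areas: list = field(default_factory=list)

    def maturity(self) -> int:
        # "weakest link" aggregation, echoing the thesis's premise
        return min(fa.maturity() for fa in self.focus_areas)

organizational = Dimension("Organizational", [
    FocusArea("Security governance", [SpecificFactor("Security policy", 3),
                                      SpecificFactor("Roles and responsibilities", 2)]),
    FocusArea("Awareness",           [SpecificFactor("Training programme", 4)]),
])
print(organizational.name, "maturity level:", organizational.maturity())
```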

Relevance: 30.00%

Abstract:

An assay for the simultaneous analysis of pharmaceutical compounds and their metabolites from micro whole-blood samples (i.e. 5 microL) was developed using an on-line dried blood spot (on-line DBS) device coupled with hydrophilic interaction/reversed-phase (HILIC/RP) LC/MS/MS. Filter paper is directly integrated into the LC device using a homemade stainless-steel (inox) desorption cell. Without any sample pretreatment, analytes are desorbed from the paper towards an automated system of valves linking a zwitterionic-HILIC column to an RP C18 column. In the same run, the polar fraction is separated by the zwitterionic-HILIC column while the non-polar fraction is eluted on the RP C18. Both fractions are detected by IT-MS using an ESI source, operating in full scan mode for the survey scan and in product ion mode for the dependent scan. The procedure was evaluated by the simultaneous qualitative analysis of four probe compounds and their respective phase I and II metabolites spiked in whole blood. In addition, the method was successfully applied to the in vivo monitoring of buprenorphine metabolism after an intraperitoneal injection of 30 mg/kg in adult female Wistar rats.

Relevance: 30.00%

Abstract:

PURPOSE: The European Organisation for Research and Treatment of Cancer and National Cancer Institute of Canada trial on temozolomide (TMZ) and radiotherapy (RT) in glioblastoma (GBM) has demonstrated that the combination of TMZ and RT conferred a significant and meaningful survival advantage compared with RT alone. We evaluated in this trial whether the recursive partitioning analysis (RPA) retains its overall prognostic value and what the benefit of the combined modality is in each RPA class. PATIENTS AND METHODS: Five hundred seventy-three patients with newly diagnosed GBM were randomly assigned to standard postoperative RT or to the same RT with concomitant TMZ followed by adjuvant TMZ. The primary end point was overall survival. The RPA used by the European Organisation for Research and Treatment of Cancer accounts for age, WHO performance status, extent of surgery, and the Mini-Mental Status Examination. RESULTS: Overall survival was statistically different among RPA classes III, IV, and V, with median survival times of 17, 15, and 10 months, respectively, and 2-year survival rates of 32%, 19%, and 11%, respectively (P < .0001). Survival with combined TMZ/RT was higher in RPA class III, with a median survival time of 21 months and a 43% 2-year survival rate, versus 15 months and 20% for RT alone (P = .006). In RPA class IV, the survival advantage remained significant, with median survival times of 16 v 13 months, respectively, and 2-year survival rates of 28% v 11%, respectively (P = .0001). In RPA class V, however, the survival advantage of RT/TMZ was of borderline significance (P = .054). CONCLUSION: RPA retains its prognostic significance overall as well as in patients receiving RT with or without TMZ for newly diagnosed GBM, particularly in classes III and IV.

Relevance: 30.00%

Abstract:

This phase II trial treated elderly or frail patients with acute myeloid leukemia (AML) with single-agent subcutaneous azacytidine at 100 mg/m(2) on 5 of 28 days for up to six cycles. Treatment was stopped for lack of response, or continued to progression in responders. The primary endpoint was response within 6 months. A response rate ≥ 34% was considered a positive trial outcome. From September 2008 to April 2010, 45 patients from 10 centers (median age 74 [range 55-86] years) were accrued. Patients received four (range 1-21) cycles. Best response was complete response/complete response with incomplete recovery of neutrophils and/or platelets (CR/CRi) in eight patients (18%; 95% confidence interval [CI]: 8-32%), partial response (PR) in none (0%), hematologic improvement in seven (16%), and stable disease in 17 (38%). Three non-responding patients stopped treatment after six cycles, 31 patients stopped early and 11 patients continued treatment for 8-21 cycles. Adverse events (grade ≥ III) were infections (n = 13), febrile neutropenia (n = 8), thrombocytopenia (n = 7), dyspnea (n = 6), bleeding (n = 5) and anemia (n = 4). Median overall survival was 6 months. Peripheral blood blast counts, grouped by a 30% cutoff, had a borderline significant association with response (p = 0.07). This modified azacytidine schedule is feasible for elderly or frail patients with AML in an outpatient setting, with moderate, mainly hematologic, toxicity and response in a proportion of patients, although the primary objective was not reached.
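The reported 95% confidence interval of 8-32% for the 8/45 response rate is consistent with an exact (Clopper-Pearson) interval; the sketch below reproduces that calculation and compares it against the 34% success threshold (the trial's exact design parameters beyond those quoted above are assumptions).

```python
# Sketch reproducing an exact (Clopper-Pearson) confidence interval for 8 responses
# out of 45 patients, then comparing it against the 34% success threshold.
from scipy.stats import beta

k, n, alpha = 8, 45, 0.05
lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0

print(f"response rate: {k / n:.1%}  (95% CI {lower:.1%} - {upper:.1%})")
if upper < 0.34:
    print("upper CI bound is below the 34% target -> primary objective not met")
else:
    print("CI is compatible with a response rate of at least 34%")
```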