984 results for design recovery
Abstract:
Background and Objectives: Patients who survive acute kidney injury (AKI), especially those with partial renal recovery, have a higher long-term mortality risk. However, there is no consensus on the best time to assess renal function after an episode of acute kidney injury, nor agreement on the definition of renal recovery. In addition, only limited data regarding predictors of recovery are available. Design, Setting, Participants, & Measurements: From 1984 to 2009, 84 adult survivors of acute kidney injury were followed by the same nephrologist (RCRMA) for a median time of 4.1 years. Patients were seen at least once each year after discharge until end-stage renal disease (ESRD) or death. At each consultation serum creatinine was measured and the glomerular filtration rate estimated. Renal recovery was defined as a glomerular filtration rate ≥ 60 mL/min/1.73 m². A multiple logistic regression was performed to evaluate factors independently associated with renal recovery. Results: The median length of follow-up was 50 months (30-90 months). All patients had stabilized their glomerular filtration rates by 18 months, and 83% of them had stabilized earlier, within 12 months. Renal recovery occurred in 16 patients (19%) at discharge and in 54 (64%) by 18 months. Six patients died and four progressed to ESRD during the follow-up period. Age (OR 1.09, p < 0.0001) and serum creatinine at hospital discharge (OR 2.48, p = 0.007) were independently associated with non-recovery of renal function. The severity of acute kidney injury, evaluated by peak serum creatinine and the need for dialysis, was not associated with non-recovery. Conclusions: Renal recovery must be evaluated no earlier than one year after an acute kidney injury episode. Nephrology referral should be considered mainly for older patients and those with elevated serum creatinine at hospital discharge.
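The factor analysis reported above boils down to fitting a multiple logistic regression and reading off odds ratios. The sketch below is a minimal illustration of that procedure on synthetic data; the variable names, numbers, and use of the statsmodels library are assumptions for illustration only, not the authors' dataset or code.

```python
# Illustrative sketch (not the authors' code): fitting a multiple logistic
# regression for non-recovery of renal function and reporting odds ratios.
# The data below are synthetic; variable names are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 84
age = rng.normal(60, 15, n)                      # years (synthetic)
creat_discharge = rng.lognormal(0.4, 0.4, n)     # mg/dL at hospital discharge (synthetic)

# Synthetic outcome: probability of non-recovery rises with age and creatinine.
logit = -8 + 0.09 * age + 0.9 * creat_discharge
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # 1 = non-recovery (eGFR < 60)

X = sm.add_constant(np.column_stack([age, creat_discharge]))
res = sm.Logit(y, X).fit(disp=False)

odds_ratios = np.exp(res.params)                 # OR per unit increase in each predictor
conf_int = np.exp(res.conf_int())                # 95% CI on the OR scale
print("OR (const, age, discharge creatinine):", odds_ratios)
print("p-values:", res.pvalues)
```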
Abstract:
In recent years an ever increasing degree of automation has been observed in most industrial automation processes. This increase is motivated by the demand for systems with high performance in terms of quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, or buy boxed products such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine companies are present in Italy, notably in the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the “packaging valley”. Usually, the various parts of an AMS interact among themselves in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often, this is the case in large-scale systems, organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent to the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing in real time information on diagnostics, as a support for the maintenance operations of the machine. The facilities that designers can find directly on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method for dealing organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very “unstructured”. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different “dynamical framework” of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by IEC standards such as IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, in complex systems fault occurrences increase.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS, together with reliable mechanical elements, an increasing number of electronic devices are also present, which are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, an important improvement to formal verification of logic control, fault diagnosis and fault tolerant control derives from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey of the state of the art of software engineering paradigms applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
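As a rough illustration of the Discrete Event Systems viewpoint invoked above for formal verification and fault tolerance, the sketch below models a plant component as a small finite automaton and checks two properties by exhaustive state-space exploration. The states, events, and properties are invented for the example; they are not the thesis' Generalized Actuator or Generalized Device models.

```python
# Illustrative sketch only: a tiny discrete-event model of an actuator with a
# fault and a recovery action, explored exhaustively. The states and events
# are hypothetical, not the thesis' component models.
from collections import deque

# Transition structure: state -> {event: next_state}
transitions = {
    "idle":       {"start": "working"},
    "working":    {"done": "idle", "fault": "faulty"},
    "faulty":     {"diagnose": "recovering"},
    "recovering": {"repaired": "idle"},
}

def reachable(initial):
    """Breadth-first exploration of the state space reachable from `initial`."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        for nxt in transitions.get(state, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Two simple properties a supervisor designer might check:
print("fault state reachable from idle:", "faulty" in reachable("idle"))
print("recovery path leads back to idle:", "idle" in reachable("faulty"))
```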
Abstract:
This thesis deals with the design of advanced OFDM systems. Both waveform and receiver design have been treated. The main scope of the thesis is to study, create, and propose ideas and novel design solutions able to cope with the weaknesses and crucial aspects of modern OFDM systems. Starting from the transmitter side, the problem represented by low resilience to non-linear distortion has been assessed. A novel technique that considerably reduces the Peak-to-Average Power Ratio (PAPR), yielding a quasi-constant signal envelope in the time domain (PAPR close to 1 dB), has been proposed. The proposed technique, named Rotation Invariant Subcarrier Mapping (RISM), is a novel scheme for subcarrier data mapping, where the symbols belonging to the modulation alphabet are not anchored, but maintain some degrees of freedom. In other words, a bit tuple is not mapped onto a single point; rather, it is mapped onto a geometrical locus which is totally or partially rotation invariant. The final positions of the transmitted complex symbols are chosen by an iterative optimization process in order to minimize the PAPR of the resulting OFDM symbol. Numerical results confirm that RISM makes OFDM usable even in severely non-linear channels. Another well-known problem which has been tackled is the vulnerability to synchronization errors. Indeed, in OFDM systems an accurate recovery of carrier frequency and symbol timing is crucial for the proper demodulation of the received packets. In general, timing and frequency synchronization is performed in two separate phases, called PRE-FFT and POST-FFT synchronization. Regarding the PRE-FFT phase, a novel joint symbol timing and carrier frequency synchronization algorithm has been presented. The proposed algorithm is characterized by a very low hardware complexity and, at the same time, guarantees very good performance in both AWGN and multipath channels. Regarding the POST-FFT phase, a novel approach for both pilot structure and receiver design has been presented. In particular, a novel pilot pattern has been introduced in order to minimize the occurrence of overlaps between two pattern-shifted replicas. This allows conventional pilots to be replaced with nulls in the frequency domain, introducing the so-called Silent Pilots. As a result, the optimal receiver turns out to be very robust against severe Rayleigh fading multipath and is characterized by low complexity. The performance of this approach has been analytically and numerically evaluated. Comparing the proposed approach with state-of-the-art alternatives, in both AWGN and multipath fading channels, considerable performance improvements have been obtained. The crucial problem of channel estimation has been thoroughly investigated, with particular emphasis on the decimation of the Channel Impulse Response (CIR) through the selection of the Most Significant Samples (MSSs). In this context our contribution is twofold: on the theoretical side, we derived lower bounds on the estimation mean-square error (MSE) performance for any MSS selection strategy; on the receiver design side, we proposed novel MSS selection strategies which have been shown to approach these MSE lower bounds and to outperform the state-of-the-art alternatives. Finally, the possibility of using Single Carrier Frequency Division Multiple Access (SC-FDMA) in the Broadband Satellite Return Channel has been assessed.
Notably, SC-FDMA is able to improve the physical-layer spectral efficiency with respect to the single carrier systems which have been used so far in the Return Channel Satellite (RCS) standards. However, it requires strict synchronization and is also sensitive to the phase noise of local radio frequency oscillators. For this reason, an effective pilot tone arrangement within the SC-FDMA frame, together with a novel Joint Multi-User (JMU) estimation method for SC-FDMA, has been proposed. As shown by numerical results, the proposed scheme manages to satisfy strict synchronization requirements and to guarantee proper demodulation of the received signal.
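For reference, the quantity that RISM is designed to reduce can be computed directly: the sketch below generates a plain OFDM symbol from QPSK-mapped subcarriers and measures its PAPR. The subcarrier count and oversampling factor are arbitrary choices, and the RISM optimization itself is not reproduced here.

```python
# Illustrative sketch: PAPR of a plain OFDM symbol with QPSK subcarriers.
# (The RISM mapping proposed in the thesis is not implemented; this only shows
# the quantity the technique aims to reduce.)
import numpy as np

rng = np.random.default_rng(1)
n_subcarriers = 256
oversampling = 4                                   # zero-padding for a finer time grid

# Random QPSK symbols on the subcarriers
qpsk = (rng.choice([-1, 1], n_subcarriers)
        + 1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)

# Oversampled time-domain OFDM symbol via zero-padded IFFT
spectrum = np.concatenate([qpsk[: n_subcarriers // 2],
                           np.zeros((oversampling - 1) * n_subcarriers, dtype=complex),
                           qpsk[n_subcarriers // 2:]])
x = np.fft.ifft(spectrum)

papr = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
print(f"PAPR = {10 * np.log10(papr):.2f} dB")      # typically around 10 dB for plain OFDM
```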
Abstract:
This PhD thesis reports on car fluff management, recycling and recovery. Car fluff is the residual waste produced by car recycling operations, particularly from hulk shredding. Car fluff is also known as Automotive Shredder Residue (ASR); it is made of plastics, rubbers, textiles, metals and other materials, and it is very heterogeneous in both its composition and its particle size. In fact, fines may amount to about 50%, making it difficult to sort out recyclable materials or to exploit the heat value of ASR by energy recovery. This three-year study started with the definition of the state of the art of Italian End-of-Life Vehicle (ELV) recycling. A national recycling trial revealed the Italian recycling rate to be around 81% in 2008, while the European Community recycling target is set at 85% by 2015. Consequently, in line with the Industrial Ecology framework, a life cycle assessment (LCA) was conducted, revealing that sorting and recycling the polymers and metals contained in car fluff, followed by recovering the residual energy, is the route with the best environmental perspective. These results guided the second-year investigation, which involved pyrolysis trials on pretreated ASR fractions aimed at determining which processes could be suitable for an industrial-scale ASR treatment plant. Sieving followed by flotation gave good results in the thermochemical conversion of polymers, with polyolefins yielding excellent conversion rates. This factor triggered ecodesign considerations. Ecodesign, together with LCA, is one of the pillars of Industrial Ecology; it consists of design for recycling and design for disassembly, both aimed at improving the dismantling speed of car components and at substituting non-recyclable materials. Finally, during the last year, innovative plants and technologies for metal recovery from car fluff were visited and tested worldwide in order to design a new car fluff treatment plant aimed at ASR energy and material recovery.
Abstract:
In a world focused on the need to produce energy for a growing population while reducing atmospheric emissions of carbon dioxide, organic Rankine cycles represent a solution to fulfil this goal. This study focuses on the design and optimization of axial-flow turbines for organic Rankine cycles. From the turbine designer's point of view, most of these fluids exhibit some peculiar characteristics, such as a small enthalpy drop, low speed of sound, and large expansion ratio. A computational model for the prediction of axial-flow turbine performance is developed and validated against experimental data. The model allows turbine performance to be calculated within an accuracy of ±3%. The design procedure is coupled with an optimization process, performed using a genetic algorithm where the turbine total-to-static efficiency represents the objective function. The computational model is integrated into a wider analysis of thermodynamic cycle units by providing the optimal turbine design. First, the calculation routine is applied in the context of the Draugen offshore platform, where three heat recovery systems are compared. The turbine performance is investigated for three competing bottoming cycles: organic Rankine cycle (operating with cyclopentane), steam Rankine cycle and air bottoming cycle. Findings indicate the air turbine as the most efficient solution (total-to-static efficiency = 0.89), while the cyclopentane turbine proves to be the most flexible and compact technology (2.45 ton/MW and 0.63 m3/MW). Furthermore, the study shows that, for the organic and steam Rankine cycles, the optimal design configurations for the expanders do not coincide with those of the thermodynamic cycles. This suggests the possibility of obtaining a more accurate analysis by including the computational model in the simulations of the thermodynamic cycles. Afterwards, the performance analysis is carried out by comparing three organic fluids: cyclopentane, MDM and R245fa. Results suggest MDM as the most effective fluid from the turbine performance viewpoint (total-to-total efficiency = 0.89). On the other hand, cyclopentane guarantees a greater net power output of the organic Rankine cycle (P = 5.35 MW), while R245fa represents the most compact solution (1.63 ton/MW and 0.20 m3/MW). Finally, the influence of the composition of an isopentane/isobutane mixture on both the thermodynamic cycle performance and the expander isentropic efficiency is investigated. Findings show how the mixture composition affects the turbine efficiency and thus the cycle performance. Moreover, the analysis demonstrates that the use of binary mixtures leads to an enhancement of the thermodynamic cycle performance.
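The coupling between a performance model and a genetic algorithm described above can be sketched in a few lines. In the toy example below the turbine total-to-static efficiency is the objective function, but the efficiency routine is a crude made-up surrogate over two non-dimensional design variables, not the thesis' validated mean-line model.

```python
# Illustrative sketch: a genetic algorithm maximizing turbine total-to-static
# efficiency over two design variables. The 'efficiency' function is a crude
# stand-in (a smooth surrogate peaking near phi=0.6, psi=1.0), NOT the
# validated mean-line model of the thesis.
import numpy as np

rng = np.random.default_rng(2)

def efficiency(phi, psi):
    """Hypothetical surrogate for total-to-static efficiency."""
    return 0.90 - 0.3 * (phi - 0.6) ** 2 - 0.15 * (psi - 1.0) ** 2

bounds = np.array([[0.2, 1.2],    # flow coefficient phi
                   [0.6, 2.5]])   # loading coefficient psi
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))

for _ in range(60):
    fitness = efficiency(pop[:, 0], pop[:, 1])
    # Tournament selection: each child keeps the fitter of two random parents
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Uniform crossover and Gaussian mutation, clipped to the design bounds
    mates = parents[rng.permutation(len(parents))]
    mask = rng.random(parents.shape) < 0.5
    children = np.where(mask, parents, mates) + rng.normal(0, 0.02, parents.shape)
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])

best = pop[np.argmax(efficiency(pop[:, 0], pop[:, 1]))]
print("best design (phi, psi):", best, "| eta_ts ~", efficiency(*best))
```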
Abstract:
The performance of microchannel heat exchangers was assessed in gas-to-liquid applications on the order of several tens of kWth. The technology is suitable for exhaust heat recovery systems based on the organic Rankine cycle. In order to design a light and compact microchannel heat exchanger, an optimization process is developed. The model employed in the procedure is validated through computational fluid-dynamics analysis with commercial software. It is shown that conjugate effects have a significant impact on the heat transfer performance of the device.
Abstract:
OBJECTIVE: To investigate causes of the lack of clinical improvement after thoracolumbar disc surgery. STUDY DESIGN: Case-control magnetic resonance imaging (MRI) study. ANIMALS: Chondrodystrophic dogs with acute thoracolumbar disc disease treated by hemilaminectomy: 10 that had no short-term clinical improvement and 12 with "normal" clinical improvement. METHODS: Dogs that had surgery for treatment of intervertebral disc extrusion (2003-2008), in which thoracolumbar disc disease was confirmed by MRI, were evaluated to identify dogs that lacked clinical improvement after surgery. Ten dogs with delayed recovery or clinical deterioration were reexamined with MRI and compared with 12 dogs with normal recovery and MRI reexamination after 6 weeks (control group). RESULTS: Of 173 dogs, 10 (5.8%) had clinical deterioration within 1-10 days after surgery. In 8 dogs, residual spinal cord compression was identified on MRI. Bleeding was present in 1 dog. In 3 dogs, the cause was an incorrect approach and insufficient disc material removal. In 3 dogs, recurrence occurred at the surgical site. In 1 dog, the centrally located extruded material was shifted to the contralateral side during surgery. These 8 dogs had repeat surgery and recovery was uneventful. In 2 dogs, deterioration could not be associated with a compressive disc lesion. Hemorrhagic myelomalacia was confirmed by pathologic examination in 1 dog. The other dog recovered after 6 months of conservative management. CONCLUSION: Delayed postsurgical recovery or deterioration is commonly associated with a newly developed and/or remaining compressive disc lesion. CLINICAL RELEVANCE: We recommend early MRI reexamination to assess the postsurgical spinal canal and cord, and to plan further therapeutic measures in chondrodystrophic dogs with delayed recovery after decompressive hemilaminectomy for thoracolumbar disc disease.
Abstract:
BACKGROUND: Antiretroviral therapy (ART) containing tenofovir disoproxil fumarate (TDF) and didanosine (ddI) has been associated with poor immune recovery despite virologic success. This effect might be related to ddI toxicity, since ddI exposure is substantially increased by TDF. OBJECTIVE: To analyze whether immune recovery during ART with TDF and ddI is ddI-dose dependent. DESIGN AND METHODS: A retrospective longitudinal analysis of immune recovery, measured by the CD4 T-cell slope, in 614 patients treated with ART containing TDF with or without ddI. Patients were stratified according to the tertiles of their weight-adjusted ddI dose: low dose (< 3.3 mg/kg), intermediate dose (3.3-4.1 mg/kg) and high dose (> 4.1 mg/kg). Cofactors modifying the degree of immune recovery after starting TDF-containing ART were identified by univariable and multivariable linear regression analyses. RESULTS: CD4 T-cell slopes were comparable between patients treated with TDF and a weight-adjusted ddI dose of < 4.1 mg/kg per day (n = 143) versus TDF without ddI (n = 393). In the multivariable model the slopes differed by -13 CD4 T cells/μl per year [95% confidence interval (CI), -42 to 17; P = 0.40]. In contrast, patients treated with TDF and a higher ddI dose (> 4.1 mg/kg per day, n = 78) experienced a significantly impaired immune recovery (-47 CD4 T cells/μl per year; 95% CI, -82 to -12; P = 0.009). The virologic response was comparable between the different treatment groups. CONCLUSIONS: Immune recovery is impaired when high doses of ddI (> 4.1 mg/kg) are given in combination with TDF. If the dose of ddI is adjusted to less than 4.1 mg/kg per day, immune recovery is similar to that with other TDF-containing ART regimens.
Abstract:
Self-stabilization is a property of a distributed system such that, regardless of the legitimacy of its current state, the system behavior shall eventually reach a legitimate state and shall remain legitimate thereafter. The elegance of self-stabilization stems from the fact that it distinguishes distributed systems by a strong fault tolerance property against arbitrary state perturbations. The difficulty of designing and reasoning about self-stabilization has been witnessed by many researchers; most of the existing techniques for the verification and design of self-stabilization are either brute-force, or adopt manual approaches non-amenable to automation. In this dissertation, we first investigate the possibility of automatically designing self-stabilization through global state space exploration. In particular, we develop a set of heuristics for automating the addition of recovery actions to distributed protocols on various network topologies. Our heuristics equally exploit the computational power of a single workstation and the available parallelism on computer clusters. We obtain existing and new stabilizing solutions for classical protocols like maximal matching, ring coloring, mutual exclusion, leader election and agreement. Second, we consider a foundation for local reasoning about self-stabilization; i.e., study the global behavior of the distributed system by exploring the state space of just one of its components. It turns out that local reasoning about deadlocks and livelocks is possible for an interesting class of protocols whose proof of stabilization is otherwise complex. In particular, we provide necessary and sufficient conditions – verifiable in the local state space of every process – for global deadlock- and livelock-freedom of protocols on ring topologies. Local reasoning potentially circumvents two fundamental problems that complicate the automated design and verification of distributed protocols: (1) state explosion and (2) partial state information. Moreover, local proofs of convergence are independent of the number of processes in the network, thereby enabling our assertions about deadlocks and livelocks to apply on rings of arbitrary sizes without worrying about state explosion.
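A concrete feel for stabilizing protocols on rings, of the kind analyzed above, is given by Dijkstra's classic K-state token ring for mutual exclusion. The sketch below is that textbook algorithm (not one of the dissertation's synthesized protocols) run under a randomized scheduler, showing convergence from an arbitrary initial state to a single circulating privilege.

```python
# Dijkstra's K-state self-stabilizing mutual exclusion on a unidirectional ring:
# a standard textbook example of ring protocols (not taken from the dissertation).
# From any initial state, the system converges to exactly one privileged process,
# and the privilege then circulates forever.
import random

N = 6            # number of processes on the ring
K = N + 1        # taking K = N + 1 is sufficient for convergence

def privileged(state, i):
    # Process 0 is privileged when it equals its left neighbour; others when they differ.
    return state[i] == state[i - 1] if i == 0 else state[i] != state[i - 1]

def step(state):
    """Pick one privileged process (arbitrary scheduler) and let it move."""
    movers = [i for i in range(N) if privileged(state, i)]   # never empty
    i = random.choice(movers)
    if i == 0:
        state[0] = (state[0] + 1) % K
    else:
        state[i] = state[i - 1]
    return state

random.seed(0)
state = [random.randrange(K) for _ in range(N)]   # arbitrary, possibly illegitimate start
for t in range(51):
    if t % 10 == 0:
        n_priv = sum(privileged(state, i) for i in range(N))
        print(f"step {t:2d}: state={state}, privileged processes={n_priv}")
    state = step(state)
```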
Abstract:
To mitigate greenhouse gas (GHG) emissions and reduce U.S. dependence on imported oil, the United States (U.S.) is pursuing several options to create biofuels from renewable woody biomass (hereafter referred to as “biomass”). Because of the distributed nature of biomass feedstock, the cost and complexity of biomass recovery operations pose significant challenges that hinder increased biomass utilization for energy production. To facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization and the tapping of unused forest residues, it is proposed to develop biofuel supply chain models based on optimization and simulation approaches. The biofuel supply chain is structured around four components: biofuel facility locations and sizes, biomass harvesting/forwarding, transportation, and storage. A Geographic Information System (GIS) based approach is proposed as a first step for selecting potential facility locations for biofuel production from forest biomass based on a set of evaluation criteria, such as accessibility to biomass, the railway/road transportation network, water bodies and workforce. The development of optimization and simulation models is also proposed. The results of the models will be used to determine (1) the number, location, and size of the biofuel facilities, and (2) the amounts of biomass to be transported between the harvesting areas and the biofuel facilities over a 20-year timeframe. The multi-criteria objective is to minimize the weighted sum of the delivered feedstock cost, energy consumption, and GHG emissions simultaneously. Finally, a series of sensitivity analyses will be conducted to identify the sensitivity of the decisions, such as the optimal site selected for the biofuel facility, to changes in influential parameters, such as biomass availability and transportation fuel price. Intellectual Merit: The proposed research will facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization in the renewable biofuel industry. The GIS-based facility location analysis considers a series of factors which have not been considered simultaneously in previous research. Location analysis is critical to the financial success of producing biofuel. The modeling of woody biomass supply chains using both optimization and simulation, combined with the GIS-based approach as a precursor, has not been done to date. The optimization and simulation models can help to ensure the economic and environmental viability and sustainability of the entire biofuel supply chain at both the strategic design level and the operational planning level. Broader Impacts: The proposed models for biorefineries can be applied to other types of manufacturing or processing operations using biomass. This is because the biomass feedstock supply chain is similar, if not the same, for biorefineries, biomass-fired or co-fired power plants, or torrefaction/pelletization operations. Additionally, the results of this research will continue to be disseminated internationally through publications in journals, such as Biomass and Bioenergy, and Renewable Energy, and through presentations at conferences, such as the 2011 Industrial Engineering Research Conference. For example, part of the research work related to biofuel facility identification has been published: Zhang, Johnson and Sutherland [2011] (see Appendix A).
There will also be opportunities for the Michigan Tech campus community to learn about the research through the Sustainable Future Institute.
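The facility-location decision at the heart of the proposed models can be illustrated with a deliberately tiny example: choose which candidate sites to open and assign each harvesting area to an open facility so as to minimize a weighted sum of cost, energy use, and emissions. All sites, supplies, and coefficients below are invented, and exhaustive enumeration stands in for the real optimization/simulation machinery.

```python
# Toy sketch of the facility-location decision: choose which candidate biofuel
# facility sites to open and where each harvesting area ships its biomass,
# minimizing a weighted sum of cost, energy use and GHG emissions.
# All numbers are invented; the real models are far larger and GIS-driven.
from itertools import combinations

sites = ["A", "B", "C"]                           # candidate facility locations (hypothetical)
open_cost = {"A": 120.0, "B": 95.0, "C": 150.0}   # annualized opening cost, arbitrary units
areas = {"h1": 40, "h2": 25, "h3": 60}            # harvest areas and biomass supply (kt/yr)

# Per-kt "impact" of shipping from each area to each site: a weighted sum of
# delivered cost, energy consumption and GHG emissions (weights already applied).
ship = {("h1", "A"): 1.0, ("h1", "B"): 2.2, ("h1", "C"): 3.0,
        ("h2", "A"): 2.5, ("h2", "B"): 0.9, ("h2", "C"): 2.0,
        ("h3", "A"): 3.1, ("h3", "B"): 2.4, ("h3", "C"): 1.1}

best = None
for r in range(1, len(sites) + 1):
    for opened in combinations(sites, r):
        # Each area ships all of its biomass to the cheapest open facility.
        transport = sum(supply * min(ship[(a, s)] for s in opened)
                        for a, supply in areas.items())
        total = sum(open_cost[s] for s in opened) + transport
        if best is None or total < best[0]:
            best = (total, opened)

print("open facilities:", best[1], "| total weighted impact:", round(best[0], 1))
```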
Abstract:
BACKGROUND: Psychological distress, poor disease-specific quality of life (QoL), and reduction in vagally mediated early heart rate recovery (HRR) after exercise have all previously predicted morbidity and mortality in patients with chronic heart failure (CHF). We hypothesized lower HRR with greater psychological distress and poorer QoL in CHF. DESIGN: All assessments were made at the beginning of a comprehensive cardiac outpatient rehabilitation intervention program. METHODS: Fifty-six CHF patients (mean 58 ± 12 years, 84% men) completed the Hospital Anxiety and Depression Scale and the Minnesota Living With Heart Failure Questionnaire. HRR was determined as the difference between HR at the end of exercise and 1 min after exercise termination (HRR-1). RESULTS: Elevated levels of anxiety symptoms (P = 0.005) as well as decreased levels of the Minnesota Living With Heart Failure Questionnaire total (P = 0.025), physical (P = 0.026), and emotional (P = 0.017) QoL were independently associated with blunted HRR-1. Anxiety, total, physical, and emotional QoL explained 11.4, 8.0, 7.8, and 9.0%, respectively, of the variance after controlling for covariates. Depressed mood was not associated with HRR-1 (P = 0.20). CONCLUSION: Increased psychological distress with regard to elevated anxiety symptoms and impaired QoL were independent correlates of reduced HRR-1 in patients with CHF. Reduced vagal tone might explain part of the adverse clinical outcome previously observed in CHF patients in relation to psychological distress and poor disease-specific QoL.
Abstract:
BACKGROUND: Patients with apparent complete recovery from thrombotic thrombocytopenic purpura (TTP) often complain of problems with memory, concentration, and fatigue. STUDY DESIGN AND METHODS: Twenty-four patients who were enrolled in the Oklahoma TTP-HUS Registry for their initial episode of TTP, 1995-2006, and who had ADAMTS13 activity of less than 10 percent were evaluated for a broad range of cognitive functions 0.1 to 10.6 years (median, 4.0 years) after their most recent episode. At the time of their evaluation, they had normal physical and Mini-Mental State Examinations and no evidence of TTP. RESULTS: The patients, as a group, performed significantly worse on 4 of the 11 cognitive domains tested than standardized US data from neurologically normal individuals adjusted for age, sex, and education (p < 0.05). These four domains measured complex attention and concentration skills, information processing speed, rapid language generation, and rote memorization. Twenty-one (88%) patients performed below expectations on at least 1 of the 11 domains. No clear patterns were observed between cognitive test results and patients' characteristics or features of the preceding TTP, including age, occurrence of severe neurologic abnormalities, multiple episodes, and interval from an acute episode. CONCLUSION: Patients who have recovered from TTP may have persistent cognitive abnormalities. The abnormalities observed in these patients are characteristic of disorders associated with diffuse subcortical microvascular disease. Studies of larger patient groups will be required to confirm these preliminary observations and to determine patient characteristics that may contribute to persistent cognitive abnormalities.
Abstract:
OBJECTIVES: Zidovudine (ZDV) is recommended for first-line antiretroviral therapy (ART) in resource-limited settings. ZDV may, however, lead to anemia and impaired immunological response. We compared CD4+ cell counts over 5 years between patients starting ART with and without ZDV in southern Africa. DESIGN: Cohort study. METHODS: Patients aged at least 16 years who started first-line ART in South Africa, Botswana, Zambia, or Lesotho were included. We used linear mixed-effect models to compare CD4+ cell count trajectories between patients on ZDV-containing regimens and patients on other regimens, censoring follow-up at first treatment change. Impaired immunological recovery, defined as a CD4+ cell count below 100 cells/μl at 1 year, was assessed by logistic regression. Analyses were adjusted for baseline CD4+ cell count and hemoglobin level, age, sex, type of regimen, viral load monitoring, and calendar year. RESULTS: A total of 72,597 patients starting ART, including 19,758 (27.2%) on ZDV, were analyzed. Patients on ZDV had higher CD4+ cell counts (150 vs. 128 cells/μl) and hemoglobin levels (12.0 vs. 11.0 g/dl) at baseline, and were less likely to be women than those on other regimens. Adjusted differences in CD4+ cell counts between regimens containing and not containing ZDV were -16 cells/μl [95% confidence interval (CI) -18 to -14] at 1 year and -56 cells/μl (95% CI -59 to -52) at 5 years. Impaired immunological recovery was more likely with ZDV compared to other regimens (odds ratio 1.40, 95% CI 1.22-1.61). CONCLUSION: In southern Africa, ZDV is associated with inferior immunological recovery compared to other backbones. Replacing ZDV with another nucleoside reverse transcriptase inhibitor could avoid unnecessary switches to second-line ART.
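The comparison of CD4+ trajectories described above rests on a linear mixed-effects model with patient-level random effects. The sketch below shows the general shape of such an analysis on synthetic data using statsmodels; the variable names, effect sizes, and random-intercept-only structure are illustrative assumptions, not the cohort data or the authors' model specification.

```python
# Illustrative sketch (synthetic data, not the cohort): comparing CD4 cell-count
# slopes between ZDV-containing and other regimens with a linear mixed model
# (random intercept per patient), analogous in spirit to the analysis above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for pid in range(300):
    zdv = int(rng.integers(0, 2))                    # 1 = ZDV-containing regimen (synthetic)
    baseline = rng.normal(140, 40)
    slope = 110 - 15 * zdv + rng.normal(0, 20)       # cells/μl per year (synthetic)
    for years in np.arange(0, 5.5, 0.5):
        cd4 = baseline + slope * years + rng.normal(0, 30)
        rows.append({"patient": pid, "zdv": zdv, "years": years, "cd4": cd4})
data = pd.DataFrame(rows)

# Random intercept per patient; the years:zdv interaction is the slope difference
# between regimens.
model = smf.mixedlm("cd4 ~ years * zdv", data, groups=data["patient"])
fit = model.fit()
print(fit.summary())                                 # inspect the 'years:zdv' row
```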
Abstract:
BACKGROUND We report on the design and implementation of a study protocol entitled Acupuncture randomised trial for post anaesthetic recovery and postoperative pain - a pilot study (ACUARP) designed to investigate the effectiveness of acupuncture therapy performed in the perioperative period on post anaesthetic recovery and postoperative pain. METHODS/DESIGN The study is designed as a randomised controlled pilot trial with three arms and partial double blinding. We will compare (a) press needle acupuncture, (b) no treatment and (c) press plaster acupressure in a standardised anaesthetic setting. Seventy-five patients scheduled for laparoscopic surgery to the uterus or ovaries will be allocated randomly to one of the three trial arms. The total observation period will begin one day before surgery and end on the second postoperative day. Twelve press needles and press plasters are to be administered preoperatively at seven acupuncture points. The primary outcome measure will be time from extubation to 'ready for discharge' from the post anaesthesia care unit (in minutes). The 'ready for discharge' end point will be assessed using three different scores: the Aldrete score, the Post Anaesthetic Discharge Scoring System and an In-House score. Secondary outcome measures will comprise pre-, intra- and postoperative variables (which are anxiety, pain, nausea and vomiting, concomitant medication). DISCUSSION The results of this study will provide information on whether acupuncture may improve patient post anaesthetic recovery. Comparing acupuncture with acupressure will provide insight into potential therapeutic differences between invasive and non-invasive acupuncture techniques. TRIAL REGISTRATION NCT01816386 (First received: 28 October 2012).
Abstract:
Purpose: To investigate the effect of topical glucose on visual parameters in individuals with primary open-angle glaucoma (POAG). Design: Double-blind, randomized, crossover study. Participants: Nondiabetic pseudophakic patients with definite POAG were recruited; 29 eyes of 16 individuals participated in study 1. A follow-up study (study 2) included 14 eyes of 7 individuals. Intervention: Eyes were randomly allocated to receive 50% glucose or saline eye drops every 5 minutes for 60 minutes. Main Outcome Measures: Contrast sensitivity and best-corrected visual acuity (logarithm of the minimum angle of resolution, logMAR). Results: The 50% glucose reached the vitreous in pseudophakic but not phakic individuals. Glucose significantly improved the mean contrast sensitivity at 12 cycles/degree compared with 0.9% saline, by 0.26 log units (95% confidence interval [CI], 0.13–0.38; P < 0.001) in study 1 and by 0.40 log units (95% CI, 0.17–0.60; P < 0.001) in the follow-up study. Intraocular pressure, refraction, and central corneal thickness were not affected by glucose; age was not a significant predictor of the response. Conclusions: Topical glucose temporarily improves psychophysical visual parameters in some individuals with POAG, suggesting that neuronal energy substrate delivery to the vitreous reservoir may restore function of “sick” retinal neurons.