Abstract:
Clinical experience and experimental data suggest that intradialytic hemodynamic profiles can be influenced by the characteristics of the dialysis membranes. Even within the widely used polysulfone family, intolerance to specific membranes has occasionally been reported. The aim of this study was to compare the hemodynamic behavior of some of the polysulfone dialyzers commonly used in Switzerland. We performed an open-label, randomized, cross-over trial including 25 hemodialysis patients. Four polysulfone dialyzers, A (Revaclear high-flux, Gambro, Stockholm, Sweden), B (Helixone high-flux, Fresenius), C (Xevonta high-flux, BBraun, Melsungen, Germany), and D (Helixone low-flux, Fresenius, Bad Homburg vor der Höhe, Germany), were compared. The hemodynamic profile was assessed and patients were asked to provide tolerance feedback. The mean score (±SD) subjectively assigned to dialysis quality on a 1-10 scale was A 8.4 ± 1.3, B 8.6 ± 1.3, C 8.5 ± 1.6, and D 8.5 ± 1.5. Kt/V was A 1.58 ± 0.30, B 1.67 ± 0.33, C 1.62 ± 0.32, and D 1.45 ± 0.31. Compared with the high-flux membranes, the low-flux membrane was associated with higher systolic (128.1 ± 13.1 vs. 125.6 ± 12.1 mmHg; P < 0.01) and diastolic (76.8 ± 8.7 vs. 75.3 ± 9.0 mmHg; P < 0.05) blood pressures, higher peripheral resistance (1.44 ± 0.19 vs. 1.40 ± 0.18 s × mmHg/mL; P < 0.05), and lower cardiac output (3.76 ± 0.62 vs. 3.82 ± 0.59 L/min; P < 0.05). Hypotension events (decrease in systolic blood pressure by >20 mmHg) numbered 70 with A, 87 with B, 73 with C, and 75 with D (P < 0.01 B vs. A, 0.05 B vs. C, and 0.07 B vs. D). The low-flux membrane was thus associated with higher blood pressure levels than the high-flux ones. The Helixone high-flux membrane provided the best efficiency; unfortunately, the very same dialyzer was associated with a higher incidence of hypotensive episodes.
Abstract:
Background In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments or observers) used to measure the same characteristic. In this study we propose a technical simplification for inference about the total deviation index (TDI) estimate to assess agreement between two devices for normally-distributed measurements, and describe its utility for evaluating inter- and intra-rater agreement when more than one reading per subject is available for each device. Methods We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices, and thereafter we derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds on the coverage probability. Results The approach is illustrated with a real case example in which the agreement between two instruments, a hand-held mercury sphygmomanometer and an OMRON 711 automatic device, is assessed in a sample of 384 subjects whose systolic blood pressure was measured twice by each device. A simulation study is implemented to evaluate the accuracy of the approach and compare it with two already established methods, showing that the TI approximation produces accurate empirical confidence levels which are reasonably close to the nominal confidence level. Conclusions The proposed method is straightforward since the TDI estimate is derived directly from a probability interval of a normally-distributed variable in its original scale, without further transformations. Thereafter, a natural way of making inferences about this estimate is to derive the appropriate TI. Constructions of TIs based on normal populations are implemented in most standard statistical packages, thus making it simple for any practitioner to implement our proposal to assess agreement.
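The TDI construction described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' exact procedure: it estimates the TDI at proportion p0 as the smallest bound k with P(-k < D < k) ≥ p0 for normally-distributed paired differences D, and derives an upper confidence bound from a two-sided normal tolerance interval using Howe's k-factor approximation. All function names are ours.

```python
import math
from statistics import NormalDist

def tdi_estimate(d_mean, d_sd, p0=0.90):
    # Smallest k such that P(-k < D < k) >= p0 for D ~ N(d_mean, d_sd),
    # found by bisection on the (monotone) coverage probability.
    nd = NormalDist(d_mean, d_sd)
    lo, hi = 0.0, abs(d_mean) + 10 * d_sd
    for _ in range(100):
        mid = (lo + hi) / 2
        if nd.cdf(mid) - nd.cdf(-mid) >= p0:
            hi = mid
        else:
            lo = mid
    return hi

def _chi2_inv(p, df):
    # Wilson-Hilferty approximation to the chi-square quantile.
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * math.sqrt(2 / (9 * df))) ** 3

def tdi_upper_bound(d_mean, d_sd, n, p0=0.90, conf=0.95):
    # Upper confidence bound for the TDI derived from a two-sided
    # normal tolerance interval d_mean +/- k * d_sd (Howe's k-factor).
    df = n - 1
    z = NormalDist().inv_cdf((1 + p0) / 2)
    k = z * math.sqrt(df * (1 + 1 / n) / _chi2_inv(1 - conf, df))
    return abs(d_mean) + k * d_sd
```

For standard normal differences the 90% TDI estimate is about 1.645, and the tolerance-interval bound lies slightly above it, shrinking toward the estimate as n grows.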
Abstract:
Reaching a consensus on the interchangeability and utility (i.e., disease detection/monitoring) of a medical device is the eventual aim of repeatability and agreement studies. The aim of the tolerance and relative utility indices described in this report is to provide a methodology to compare changes in clinical measurement noise between different populations (repeatability) or measurement methods (agreement), so as to highlight problematic areas. No longitudinal data are required to calculate these indices. Both indices establish a metric from least to most affected across all parameters to facilitate comparison. If validated, these indices may prove to be useful tools when combining reports and forming the consensus required in the validation process for software updates and new medical devices.
Abstract:
Aspergillus fumigatus is the primary etiologic agent of invasive aspergillosis (IA), a major cause of death among immunosuppressed patients. Echinocandins (e.g., caspofungin) are increasingly used as second-line therapy for IA, but their activity is only fungistatic. Heat shock protein 90 (Hsp90) was previously shown to trigger tolerance to caspofungin and the paradoxical effect (i.e., decreased efficacy of caspofungin at higher concentrations). Here, we demonstrate the key role of another molecular chaperone, Hsp70, in governing the stress response to caspofungin via Hsp90 and their cochaperone Hop/Sti1 (StiA in A. fumigatus). Mutation of the StiA-interacting domain of Hsp70 (C-terminal EELD motif) impaired thermal adaptation and caspofungin tolerance with loss of the caspofungin paradoxical effect. Impaired Hsp90 function and increased susceptibility to caspofungin were also observed following pharmacologic inhibition of the C-terminal domain of Hsp70 by pifithrin-μ or after stiA deletion, further supporting the links among Hsp70, StiA, and Hsp90 in governing caspofungin tolerance. StiA was not required for the physical interaction between Hsp70 and Hsp90 but had distinct roles in the regulation of their function in caspofungin and heat stress responses. In conclusion, this study deciphering the physical and functional interactions of the Hsp70-StiA-Hsp90 complex provided new insights into the mechanisms of tolerance to caspofungin in A. fumigatus and revealed a key C-terminal motif of Hsp70, which can be targeted by specific inhibitors, such as pifithrin-μ, to enhance the antifungal activity of caspofungin against A. fumigatus.
Abstract:
Bacterial programmed cell death and quorum sensing are direct examples of prokaryote group behaviors, wherein cells coordinate their actions to function cooperatively like one organism for the benefit of the whole culture. We demonstrate here that 2-n-heptyl-4-hydroxyquinoline-N-oxide (HQNO), a Pseudomonas aeruginosa quorum-sensing-regulated low-molecular-weight excreted molecule, triggers autolysis by self-perturbing the electron transfer reactions of the cytochrome bc1 complex. HQNO induces specific self-poisoning by disrupting the flow of electrons through the respiratory chain at the cytochrome bc1 complex, causing a leak of reducing equivalents to O2 whereby electrons that would normally be passed to cytochrome c are donated directly to O2. The subsequent mass production of reactive oxygen species (ROS) reduces membrane potential and disrupts membrane integrity, causing bacterial cell autolysis and DNA release. DNA subsequently promotes biofilm formation and increases antibiotic tolerance to beta-lactams, suggesting that HQNO-dependent cell autolysis is advantageous to the bacterial populations. These data identify both a new programmed cell death system and a novel role for HQNO as a critical inducer of biofilm formation and antibiotic tolerance. This newly identified pathway suggests intriguing mechanistic similarities with the initial mitochondrial-mediated steps of eukaryotic apoptosis.
Abstract:
This paper presents an experimental study of the effects of tow-drop gaps in Variable Stiffness Panels under drop-weight impact events. Two different configurations, with and without ply-staggering, were manufactured by Automated Fibre Placement and compared with their baseline counterpart without defects. For the study of damage resistance, three levels of low-velocity impact energy were generated with a drop-weight tower. The damage area was analysed by means of ultrasonic inspection. Results for the analysed defect configurations indicate that the influence of gap defects is only relevant at small impact energy values. In terms of damage tolerance, however, the residual compressive strength after impact does not differ significantly from that of conventional straight-fibre laminates. This indicates that the strength reduction is driven mainly by the damage caused by the impact event rather than by the influence of manufacturing-induced defects.
Abstract:
Automated Fiber Placement is being extensively used in the production of major composite components for the aircraft industry. This technology enables the production of tow-steered panels, which have been proven to greatly improve the structural efficiency of composites by means of in-plane stiffness variation and load redistribution. However, traditional straight-fiber architectures are still preferred. One of the reasons behind this is the uncertainty, resulting from process-induced defects, in the mechanical performance of the laminates. This experimental work investigates the effect of fiber angle discontinuities between different tow courses in a ply on the un-notched and open-hole tensile strength of the laminate. The influence of several manufacturing parameters is studied in detail. The results reveal that 'ply staggering' and '0% gap coverage' is an effective combination for reducing the influence of defects in these laminates.
Abstract:
The purpose of this work was to realize a high-speed digital data transfer system for the RPC muon chambers in the CMS experiment at CERN's new LHC accelerator. This large-scale system took many years and many stages of prototyping to develop, and required the participation of tens of people. The system interfaces to the Frontend Boards (FEB) at the 200,000-channel detector and to the trigger and readout electronics in the control room of the experiment. The distance between these two is about 80 metres, and the speed required for the optical links was pushing the limits of available technology when the project was started. Here, as in many other aspects of the design, it was assumed that the features of readily available commercial components would develop in the course of the design work, just as they did. By choosing a high speed, it was possible to multiplex the data from some of the chambers into the same fibres to reduce the number of links needed. Further reduction was achieved by employing zero suppression and data compression, and a total of only 660 optical links were needed. Another requirement, which conflicted somewhat with choosing the components as late as possible, was that the design needed to be radiation tolerant to an ionizing dose of 100 Gy and to have moderate tolerance to Single Event Effects (SEEs). This required some radiation test campaigns, and eventually led to ASICs being chosen for some of the critical parts. The system was made as reconfigurable as possible. The reconfiguration needs to be done from a distance, as the electronics is not accessible except during some short and rare service breaks once the accelerator starts running. Therefore reconfigurable logic is used extensively, and the firmware development for the FPGAs constituted a sizable part of the work. Some special techniques were needed there too, to achieve the required radiation tolerance.
The system has been demonstrated to work in several laboratory and beam tests, and we are now waiting to see it in action when the LHC starts running in autumn 2008.
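The zero suppression mentioned above can be illustrated with a toy sketch: only hit (non-zero) channels are transmitted as (address, value) pairs, which drastically reduces data volume for a sparsely occupied detector. This is a generic illustration of the idea, not the actual CMS RPC link protocol; the function names are ours.

```python
def zero_suppress(frame):
    # Keep only the hit channels, encoded as (channel address, value) pairs.
    return [(ch, v) for ch, v in enumerate(frame) if v != 0]

def expand(pairs, n_channels):
    # Reconstruct the full readout frame from the suppressed data.
    frame = [0] * n_channels
    for ch, v in pairs:
        frame[ch] = v
    return frame
```

With a typical occupancy of a few per cent, the suppressed representation is many times smaller than the raw frame, at the cost of having to transmit channel addresses alongside the values.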
Abstract:
Score-based biotic indices are widely used to evaluate the water quality of streams and rivers. Few adaptations of these indices have been made for South America because there is a lack of knowledge on macroinvertebrate taxonomy, distribution and tolerance to pollution in the region. Several areas in the Andes are densely populated and there is a need for methods to assess the impact of increasing human pressures on aquatic ecosystems. Considering the unique ecological and geographical features of the Andes, macroinvertebrate indices used in other regions must be adapted with caution. Here we present a review of the literature on macroinvertebrate distribution and tolerance to pollution in Andean areas above 2000 m a.s.l. Using these data, we propose an Andean Biotic Index (ABI), which is based on the BMWP index. In general, the ABI includes fewer macroinvertebrate families than in other regions of the world where the BMWP index has been applied, because altitude restricts the distribution of several families. Our review shows that in the high Andes, the tolerance of several macroinvertebrate families to pollution differs from that reported in other areas. We tested the ABI index in two basins in Ecuador and Peru, and compared it to other BMWP adaptations using the reference condition approach. The ABI index is extremely useful for detecting the general impairment of rivers, but class quality boundaries should be defined independently for each basin because reference conditions may differ. The ABI is widely used in Ecuador and Peru, with high correlations with land-use pressures in several studies. The ABI index is an integral part of the new multimetric index designed for high Andean streams (IMEERA). Rev. Biol. Trop. 62 (Suppl. 2): 249-273. Epub 2014 April 01.
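BMWP-style indices such as the ABI follow a simple scoring scheme: each macroinvertebrate family found at a site contributes a fixed tolerance score (sensitive families score high, pollution-tolerant ones low), and the site score is the sum over the families present. The sketch below illustrates the mechanism only; the score values shown are a hypothetical subset, not the ABI tables from the paper.

```python
# Hypothetical tolerance scores for illustration (high = pollution-sensitive,
# low = pollution-tolerant); the real ABI assigns scores to each family.
ABI_SCORES = {
    "Perlidae": 10,
    "Leptophlebiidae": 10,
    "Baetidae": 4,
    "Hirudinea": 3,
    "Chironomidae": 2,
}

def abi_score(families_present):
    # BMWP-style index: sum the scores of the distinct families found at a
    # site; each family counts once, regardless of abundance.
    return sum(ABI_SCORES[f] for f in set(families_present) if f in ABI_SCORES)
```

A site dominated by sensitive stonefly and mayfly families thus scores far higher than one where only tolerant midge larvae remain, which is what makes the summed score usable as a water-quality class boundary.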
Abstract:
Anthropogenic pollution of groundwater and surface water has become a very serious environmental problem around the world. A wide range of toxic pollutants is recalcitrant to the conventional treatment methods, thus there is much interest in the development of more efficient remediation processes. Degradation of organic pollutants by zero-valent iron is one of the most promising approaches for water treatment, mainly because it is of low cost, easy to obtain and effective. After a general introduction to water pollution and current treatments, this work highlights the advances, applications and future trends of water remediation by zero-valent iron. Special attention is given to degradation of organochloride and nitroaromatic compounds, which are commonly found in textile and paper mill effluents.
Abstract:
The influences of the spray-drying parameters and the type of nanoparticles (nanocapsules or nanospheres) on the characteristics of nanoparticle-coated diclofenac-loaded microparticles were investigated using a 3² factorial design. Gastrointestinal tolerance following oral administration in rats was evaluated. Formulations were selected considering the best yields, the best encapsulation efficiencies and the lowest water contents, presenting surfaces completely coated by nanostructures and a decrease in surface area relative to the uncoated core. In vitro drug release demonstrated the influence of the nanoparticle coating on the dissolution profiles of diclofenac. Nanocapsule-coated microparticles presented a protective effect on the gastrointestinal mucosa.
Abstract:
The function of lipids in human nutrition has been intensively debated in the last decade. This context reinforces the concern about controlling trans fat intake, due to its negative implications for health. Interesterification provides an important alternative for modifying the consistency of oils and fats without causing the formation of trans isomers. This article reports research done towards the production of zero-trans fats by chemical interesterification for different industrial purposes. Aspects related to the effect of trans fats in the diet, their impact on health and modifications to Brazilian legislation are also covered.
Abstract:
The immune system is responsible for body integrity and the prevention of external invasion. On the one hand, nanoparticles are not triggers that the immune system is prepared to detect; on the other hand, it is known that foreign bodies, not only bacteria, viruses and parasites but also inorganic matter, can cause various pathologies such as silicosis, asbestosis or inflammatory reactions. Therefore, nanoparticles entering the body will, after interacting with proteins, either be recognized as self-agents or be detected by the immune system, leading to immunostimulation or immunosuppression responses. The nature of these interactions seems to be dictated not so much by the composition of the material as by modifications of the NP coating (composition, surface charge and structure). Herein, we explore the use of gold nanoparticles as substrates carrying multifunctional ligands to manipulate the immune system in a controlled manner, from non-detection to immunostimulation. Murine bone marrow macrophages can be activated with artificial nanometric objects consisting of a gold nanoparticle functionalized with peptides. In the presence of some conjugates, macrophage proliferation was stopped and pro-inflammatory cytokines were induced. The biochemical type of response depended on the type of conjugated peptide and was correlated with the degree of ordering in the peptide coating. These findings help to illustrate the basic requirements involved in the design of medical NP conjugates intended either to activate the immune system or to hide from it, in order to reach their targets before being removed by phagocytes. Additionally, they open up the possibility of modulating the immune response, either to suppress unwanted responses resulting from autoimmunity or allergy, or to stimulate protective responses against pathogens.
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The decrease in reliability is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for the network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method at this abstraction level, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against these are presented. The introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated.
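As a concrete example of the kind of error control coding discussed for the data link layer, the classic Hamming(7,4) code adds three parity bits to four data bits and can correct any single flipped bit, which matches the transient-fault model well. This is a textbook illustration, not the specific code evaluated in the thesis.

```python
def hamming74_encode(d):
    # d: four data bits. Codeword layout (1-based positions):
    # [p1, p2, d1, p3, d2, d3, d4], with each parity bit covering the
    # positions whose index has the corresponding binary digit set.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    # Recompute the three parity checks; the syndrome value is the
    # 1-based position of the single erroneous bit (0 = no error).
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1  # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]
```

A single bit flip anywhere in the 7-bit codeword is corrected transparently; two flips, as an intermittent or permanent fault might cause, defeat the code, which is why the thesis pairs coding with spare wires and split transmissions.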
At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault tolerant network topologies and routing algorithms. Both of these approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.