Resumo:
In this report, an innovative satellite-based monitoring approach was designed and applied to the Iraqi Marshlands to survey the extent and distribution of marshland re-flooding and to assess the development of wetland vegetation cover. The study, conducted in collaboration with MEEO Srl, uses images collected by the (A)ATSR sensor onboard the ESA ENVISAT satellite at multi-temporal scales, and a multi-temporal analysis was adopted to observe the evolution of marshland re-flooding. The methodology uses a multi-temporal pixel-based approach built on classification maps produced by the classification tool SOIL MAPPER®. The catalogue of classification maps is available as a web service through the Service Support Environment Portal (SSE, supported by ESA). The inundation of the Iraqi marshlands, ongoing since April 2003, is characterized by a high degree of variability, ad-hoc interventions and uncertainty. Given the security constraints and vastness of the Iraqi marshlands, as well as cost-effectiveness considerations, satellite remote sensing was the only viable tool to observe the ongoing changes on a continuous basis. The proposed system (ALCS, AATSR Land Classification System) avoids the direct use of (A)ATSR images and instead applies LULCC (land use/land cover change) evolution models directly to a 'stock' of classified maps. This approach is made possible by the availability of a 13-year classified image database, conceived and implemented in the CARD project (http://earth.esa.int/rtd/Projects/#CARD). The approach presented here evolves toward an innovative, efficient and fast method to exploit the potential of multi-temporal LULCC analysis of (A)ATSR images.
The two main objectives of this work are both assessments: the first is to assess the feasibility of modelling with the web application ALCS using AATSR images classified with SOIL MAPPER®, and the second is to evaluate the magnitude, character and extent of wetland rehabilitation.
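The multi-temporal pixel-based idea, comparing classification maps rather than raw radiances, can be sketched as follows; the class codes and tiny maps are invented for illustration, not taken from the ALCS catalogue.

```python
import numpy as np

# Pixel-based change detection between two classification maps, the core
# idea of working on a "stock" of classified maps instead of raw imagery.
# Illustrative class codes: 0 = bare soil, 1 = water, 2 = vegetation.
map_2002 = np.array([[0, 0, 1],
                     [0, 0, 0],
                     [2, 0, 0]])
map_2005 = np.array([[1, 1, 1],
                     [1, 1, 0],
                     [2, 2, 0]])

# Flag pixels that changed from bare soil to water (re-flooding)
reflooded = (map_2002 == 0) & (map_2005 == 1)
print(int(reflooded.sum()), "pixels re-flooded")  # 4 pixels re-flooded
```

Repeating this per-pixel comparison over the full time series of classified maps gives the re-flooding evolution without touching the original (A)ATSR radiances.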
Resumo:
In recent years a considerable number of central nervous system (CNS) drugs have been approved and introduced on the market for the treatment of many psychiatric and neurological disorders, including psychosis, depression, Parkinson's disease and epilepsy. Despite the great advances in the treatment of CNS diseases and disorders, partial response to therapy or treatment failure is frequent, at least in part due to poor compliance, but also to genetic variability in the metabolism of psychotropic agents or to polypharmacy, which may lead to sub-therapeutic or toxic plasma levels of the drugs and, finally, to inefficacy of the treatment or to adverse/toxic effects. With the aim of improving treatment and reducing toxic/side effects and patient hospitalisation, Therapeutic Drug Monitoring (TDM) is certainly useful, allowing for personalisation of the therapy. Reliable analytical methods are required to determine the plasma levels of psychotropic drugs, which are often present at low concentrations (tens or hundreds of nanograms per millilitre). The present PhD thesis has focused on the development of analytical methods for the determination of CNS drugs in biological fluids, including antidepressants (sertraline and duloxetine), antipsychotics (aripiprazole), antiepileptics (vigabatrin and topiramate) and antiparkinsonian agents (pramipexole). Innovative methods based on liquid chromatography or capillary electrophoresis coupled to diode-array or laser-induced fluorescence (LIF) detectors have been developed, together with suitable sample pre-treatment for interference removal and, in the case of LIF detection, fluorescent labelling. All methods have been validated according to official guidelines and applied to the analysis of real samples obtained from patients, proving suitable for the TDM of psychotropic drugs.
Resumo:
In the last decade, the demand for structural health monitoring expertise has increased exponentially in the United States. The aging issues that most transportation structures are experiencing can put the economic system of a region, or even a country, in serious jeopardy. At the same time, the monitoring of structures is a central topic of discussion in Europe, where the preservation of historical buildings has been addressed over the last four centuries. More recently, various concerns have arisen about the safety performance of civil structures after tragic events such as the 9/11 attacks or the 2011 Japan earthquake: engineers look for designs able to resist exceptional loadings due to earthquakes, hurricanes and terrorist attacks. After events of this kind, the assessment of the remaining life of the structure is at least as important as the initial performance design. Consequently, it is very clear that the introduction of reliable and accessible damage assessment techniques is crucial for the localization of problems and for correct and immediate rehabilitation. System Identification is a branch of the more general Control Theory. In civil engineering, this field covers the techniques needed to estimate mechanical characteristics such as stiffness or mass from the signals captured by sensors. The objective of Dynamic Structural Identification (DSI) is to determine, from experimental measurements, the fundamental modal parameters of a generic structure in order to characterize its dynamic behavior via a mathematical model. Knowledge of these parameters is helpful in the Model Updating procedure, which allows corrected theoretical models to be defined through experimental validation. The main aim of this technique is to minimize the differences between the theoretical model results and in situ measurements of dynamic data.
Therefore, the new model becomes a very effective control practice when it comes to rehabilitation of structures or damage assessment. Instrumenting a whole structure is sometimes unfeasible, because of the high cost involved or because it is not physically possible to reach every point of the structure. Numerous scholars have tried to address this problem, and in general two main methods are involved. In the first case, given the limited number of sensors, it is possible to gather time histories for only some locations, then move the instruments to other locations and repeat the procedure. Otherwise, if the number of sensors is sufficient and the structure does not present a complicated geometry, it is usually enough to detect only the first principal modes. These two problems are well presented in the works of Balsamo [1], for the application to a simple system, and Jun [2], for the analysis of a system with a limited number of sensors. Once the system identification has been carried out, it is possible to access the actual system characteristics. A frequent practice is to create an updated FEM model and assess whether or not the structure fulfills the requested functions. The objective of this work is to present a general methodology to analyze large structures using a limited amount of instrumentation while, at the same time, obtaining the most information about an identified structure without recalling methodologies of difficult interpretation. A general framework for the state-space identification procedure via the OKID/ERA algorithm is developed and implemented in Matlab. Then, some simple examples are proposed to highlight the principal characteristics and advantages of this methodology. A new algebraic manipulation for a prolific use of substructuring results is developed and implemented.
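The ERA half of the OKID/ERA chain mentioned above can be sketched in a few lines (shown here in Python rather than the thesis' Matlab): Markov parameters fill a Hankel matrix, an SVD truncation yields a minimal state matrix, and its eigenvalues carry the modal frequencies. The single-degree-of-freedom test signal and all dimensions are illustrative.

```python
import numpy as np

def era(markov, n_modes, rows=20, cols=20):
    """Eigensystem Realization Algorithm (minimal sketch).
    markov: impulse-response (Markov parameter) sequence.
    Returns the eigenvalues of the identified discrete-time state matrix."""
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :n_modes], s[:n_modes], Vt[:n_modes, :]
    S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
    A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt  # identified state matrix
    return np.linalg.eigvals(A)

# Impulse response of a damped SDOF oscillator: f = 2 Hz, zeta = 2 %
dt, f, zeta = 0.01, 2.0, 0.02
wn = 2 * np.pi * f
wd = wn * np.sqrt(1 - zeta ** 2)
t = np.arange(1, 200) * dt
y = np.exp(-zeta * wn * t) * np.sin(wd * t)

lam = era(y, n_modes=2)
f_id = np.abs(np.log(lam[0])) / (2 * np.pi * dt)  # back to a natural frequency in Hz
print(round(f_id, 2))  # ≈ 2.0
```

Because the noise-free SDOF response has an exactly rank-2 Hankel matrix, the truncated SVD recovers the state matrix, and the 2 Hz natural frequency, essentially exactly; with measured data the singular-value spectrum is used to choose the model order.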
Resumo:
Hydrothermal fluids are a fundamental resource for understanding and monitoring volcanic and non-volcanic systems. This thesis focuses on the study of hydrothermal systems through numerical modeling with the geothermal simulator TOUGH2. Several simulations are presented, and the geophysical and geochemical observables arising from fluid circulation are analyzed in detail throughout the thesis. In a volcanic setting, the fluids feeding fumaroles and hot springs may play a key role in hazard evaluation. The evolution of fluid circulation is driven by a strong interaction between magmatic and hydrothermal systems. A simultaneous analysis of different geophysical and geochemical observables is a sound approach for interpreting monitoring data and inferring a consistent conceptual model. The analyzed observables are ground displacement, gravity changes, electrical conductivity, the amount, composition and temperature of the gases emitted at the surface, and the extent of the degassing area. Results highlight the different temporal responses of the considered observables, as well as their different radial patterns of variation. However, the magnitude, temporal response and radial pattern of these signals depend not only on the evolution of fluid circulation; a major role is also played by the assumed rock properties. Numerical simulations highlight the differences that arise from assuming different permeabilities, for both homogeneous and heterogeneous systems. Rock properties affect hydrothermal fluid circulation, controlling both the range of variation and the temporal evolution of the observable signals. Low-temperature fumaroles with low discharge rates may be affected by atmospheric conditions. Detailed parametric simulations were performed to understand the effects of system properties, such as permeability and gas reservoir overpressure, on diffuse degassing when air temperature and barometric pressure changes are applied at the ground surface.
Hydrothermal circulation, however, is not exclusive to volcanic systems. Hot fluids are involved in several problems of practical interest, such as geothermal engineering, nuclear waste propagation in porous media, and Geological Carbon Sequestration (GCS). The current concept for large-scale GCS is the direct injection of supercritical carbon dioxide into deep geological formations, which typically contain brine. Upward displacement of such brine from deep reservoirs, driven by the pressure increase resulting from carbon dioxide injection, may occur through abandoned wells, permeable faults or permeable channels. Brine intrusion into aquifers may degrade groundwater resources. Numerical results show that the pressure rise drives dense water up into the conduits but does not necessarily result in continuous flow. Rather, the overpressure leads to a new hydrostatic equilibrium if the fluids are initially density-stratified. If the warm, salty fluid does not cool while passing through the conduit, an oscillatory solution is possible. Parameter studies delineate the steady-state (static) and oscillatory solutions.
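The static outcome described above, where an injection-induced overpressure lifts dense brine in a conduit only until a new hydrostatic balance is reached, can be illustrated with a back-of-envelope estimate; all numbers are invented for illustration, not results from the TOUGH2 simulations.

```python
# Height to which an injection-induced overpressure can lift dense brine
# in a conduit before a new hydrostatic equilibrium is reached (static case).
# Values are illustrative, not taken from the thesis.
g = 9.81            # gravitational acceleration, m/s^2
rho_brine = 1100.0  # density of the reservoir brine, kg/m^3
delta_p = 0.5e6     # overpressure from CO2 injection, Pa

rise = delta_p / (rho_brine * g)  # hydrostatic balance: dh = dP / (rho * g)
print(round(rise, 1), "m")  # 46.3 m
```

If this rise is smaller than the conduit length up to a fresher aquifer, the flow stalls at a new equilibrium rather than producing continuous brine intrusion, which is the distinction the parameter studies above map out.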
Resumo:
Therapeutic drug monitoring (TDM) comprises the measurement of drug levels in blood and relates the results to the clinical presentation of the patient, on the assumption that blood concentrations correlate better with drug effect than the dose does. This also applies to antidepressants. Prerequisites for guiding therapy by TDM are the availability of valid laboratory assay methods and the correct application of the procedure in the clinic. The aim of this work was to analyse and improve the use of TDM in the treatment of depression. In a first step, a high-performance liquid chromatography (HPLC) method with column switching and spectrophotometric detection was established for the newly approved antidepressant duloxetine and applied to patients for TDM. Analysis of 280 patient samples showed that duloxetine concentrations of 60 to 120 ng/ml were associated with good clinical response and a low risk of side effects. Regarding its interaction potential, duloxetine proved to be a weak inhibitor of the cytochrome P450 (CYP) isoenzyme 2D6 compared with other antidepressants, with no indication of clinical relevance. In a second step, a method was to be developed capable of measuring as many different antidepressants as possible, including their metabolites. For this purpose, an HPLC method with ultraviolet (UV) detection was developed that allowed the quantitative analysis of ten antidepressant and two antipsychotic compounds within 25 minutes with sufficient precision and accuracy (both above 85%) and sensitivity. Column switching enabled the automated analysis of blood plasma or serum; interfering matrix components could be removed on a pre-column without prior sample preparation.
This cost- and time-effective procedure was a clear improvement for handling samples in routine laboratory work, and thus for the TDM of antidepressants. An analysis of the clinical use of TDM identified a number of application errors. An attempt was therefore made to improve the clinical application of TDM of antidepressants by switching from largely manual documentation to electronic processing, and this work examined what effect this intervention achieved. A laboratory information system was introduced through which the entire process, from sample receipt to the reporting of results to the wards, was handled electronically, and the use of TDM was examined before and after the changeover. The changeover was well accepted by the treating physicians. The laboratory system allowed cumulative retrieval of findings and a display of the treatment course of each individual patient, including previous hospital stays. However, the implementation of the system had only a minor influence on the quality of TDM use. Many requests remained erroneous before and after the introduction of the system; for example, measurements were frequently requested before steady state had been reached. The speed of sample processing was unchanged compared with the previous manual workflow, as was the analytical quality in terms of accuracy and precision. Recommendations issued regarding the dosing strategy of the requested substances were frequently not followed. However, the mean latency with which a dose adjustment was made after communication of the laboratory result was shortened. Overall, this work has contributed to improving the therapeutic drug monitoring of antidepressants. In clinical application, however, interventions are necessary to minimise errors in the use of TDM of antidepressants.
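The core TDM decision, comparing a measured trough level against a therapeutic reference range, is easy to mechanize; the helper below is a hypothetical illustration using the duloxetine range of 60-120 ng/ml reported above, not part of the laboratory system described.

```python
def tdm_flag(level_ng_ml, low=60.0, high=120.0):
    """Classify a trough plasma level against a therapeutic reference range
    (defaults: duloxetine, 60-120 ng/ml, as reported in the abstract)."""
    if level_ng_ml < low:
        return "below range"   # check adherence/metabolism, consider dose increase
    if level_ng_ml > high:
        return "above range"   # increased side-effect risk, consider dose decrease
    return "within range"

print(tdm_flag(85))   # within range
print(tdm_flag(140))  # above range
```

A real laboratory system would additionally check that the sample was drawn at steady state, which the abstract identifies as a frequent application error.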
Resumo:
Assessment of the integrity of structural components is of great importance for aerospace systems, land and marine transportation, civil infrastructure and other biological and mechanical applications. Guided-wave (GW) based inspections are an attractive means for structural health monitoring. In this thesis, the study and development of techniques for GW ultrasound signal analysis and compression in the context of non-destructive testing of structures are presented. In guided-wave inspections, the problem of dispersion compensation must be addressed. A signal processing approach based on frequency warping was adopted. This operator maps the frequency axis through a function derived from the group velocity of the test material, and is used to remove the dependence on the travelled distance from the acquired signals. This processing strategy was fruitfully applied to impact location and damage localization tasks in composite and aluminum panels. It has been shown that, based on this processing tool, low-power embedded systems for GW structural monitoring can be implemented. Finally, a new procedure based on Compressive Sensing has been developed and applied for data reduction. This procedure also has a beneficial effect in enhancing the accuracy of structural defect localization. The algorithm uses a convolutive model of the propagation of ultrasonic guided waves which takes advantage of a sparse signal representation in the warped frequency domain. Recovery from the compressed samples is based on an alternating minimization procedure which achieves both an accurate reconstruction of the ultrasonic signal and a precise estimation of the waves' time of flight. This information is used to feed hyperbolic or elliptic localization procedures for accurate impact or damage localization.
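The hyperbolic localization step mentioned at the end can be sketched as a grid search over candidate impact points, assuming dispersion has already been compensated so that a single group velocity applies; the sensor layout, velocity and grid are invented for illustration, not the thesis' setup.

```python
import numpy as np

# Hyperbolic (time-difference-of-arrival) impact localization on a plate,
# assuming a single group velocity after dispersion compensation.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # m
v = 1500.0                          # compensated group velocity, m/s
source_true = np.array([0.3, 0.7])  # synthetic impact point

d = np.linalg.norm(sensors - source_true, axis=1)
tdoa = (d - d[0]) / v               # arrival-time differences w.r.t. sensor 0

# Grid search: the point whose predicted TDOAs best match the measured ones
xs = np.linspace(0, 1, 101)
best, best_err = None, np.inf
for x in xs:
    for y in xs:
        dc = np.linalg.norm(sensors - np.array([x, y]), axis=1)
        err = np.sum(((dc - dc[0]) / v - tdoa) ** 2)
        if err < best_err:
            best, best_err = (x, y), err
print(round(best[0], 2), round(best[1], 2))  # 0.3 0.7
```

Each TDOA constrains the source to a hyperbola with two sensors as foci; the grid search simply finds the point closest to the intersection of all of them. Elliptic localization for pulse-echo damage detection follows the same pattern with sums, rather than differences, of travel times.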
Resumo:
This work investigated the suitability and benefit of the "Objective therapy Compliance Measurement" (OtCM™) system, an innovative further development in the field of electronic compliance measurement. Under experimental conditions, the functionality and reliability of the electronic OtCM™ blister packs were tested to demonstrate their suitability for clinical use. Functionality (≥90% readable blisters), accuracy (≤2% errors) and robustness were achieved with version 3 of the OtCM™ blisters, after the errors of versions 1 and 2 had been identified and eliminated in collaboration with the manufacturer TCG. The OtCM™ e-dispenser, developed as an alternative to the electronic blisters for packaging clinical trial supplies, was examined for functionality and user-friendliness in a pilot study, which identified a need for optimization. In a clinical study, the OtCM™ system was compared with MEMS®, regarded as the gold standard. Comparison criteria were data quality, acceptance and user-friendliness, the time required for providing the medication and evaluating the data, and validity. A total of 40 patients treated with Rekawan® retard 600 mg took part in the open, randomized, prospective study. The OtCM™ system proved comparable to MEMS® in terms of validity, acceptance and user-friendliness. The expected time saving with the OtCM™ system compared with MEMS® was not achieved. Advantages of the OtCM™ system are higher data quality and its potential use in telemedicine.
Resumo:
The monitoring of cognitive functions aims at gaining information about the current cognitive state of the user by decoding brain signals. In recent years, this approach has made it possible to acquire valuable information about the cognitive aspects of human interaction with the external world. From this consideration, researchers started to consider passive applications of brain-computer interfaces (BCIs) in order to provide a novel input modality for technical systems based solely on brain activity. The objective of this thesis is to demonstrate how passive BCI applications can be used to assess the mental states of users in order to improve human-machine interaction. Two main studies have been proposed. The first investigates whether morphological variations of Event Related Potentials (ERPs) can be used to predict the users' mental states (e.g. attentional resources, mental workload) during different reactive BCI tasks (e.g. P300-based BCIs), and whether this information can predict the subjects' performance in those tasks. In the second study, a passive BCI system able to estimate the mental workload of the user online, relying on the combination of EEG and ECG biosignals, has been proposed. The latter study was performed by simulating an operational scenario in which the occurrence of errors or lack of performance could have significant consequences. The results showed that the proposed system is able to estimate the mental workload of the subjects online, discriminating three different difficulty levels of the tasks with high reliability.
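As a toy illustration of EEG-based workload indices (not the thesis' actual EEG/ECG classifier), a theta/alpha band-power ratio, a commonly used workload proxy, can be computed from the spectrum of a synthetic signal; the bands and the ratio-as-index choice are assumptions for the sketch.

```python
import numpy as np

# Synthetic one-channel "EEG": a strong 6 Hz (theta) and a weaker 10 Hz
# (alpha) component, 4 s at 256 Hz.
fs = 256
t = np.arange(0, 4, 1 / fs)
eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + 1.0 * np.sin(2 * np.pi * 10 * t)

spec = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(len(eeg), 1 / fs)

def band_power(lo, hi):
    """Sum of spectral power in the band [lo, hi) Hz."""
    return spec[(freqs >= lo) & (freqs < hi)].sum()

workload_index = band_power(4, 8) / band_power(8, 13)  # theta / alpha
print(round(workload_index, 1))  # 4.0
```

A rising theta/alpha ratio is often read as increasing workload; an online system such as the one described above would compute features of this kind over sliding windows and feed them, together with ECG-derived features, to a trained classifier.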
Resumo:
What is the intracellular fate of nanoparticles (NPs) taken up by cells? This question has been investigated for polystyrene NPs of different sizes with a set of molecular-biological and biophysical techniques. Two sets of fluorescent NPs, cationic and non-ionic, were synthesized with three different polymerization techniques. Non-ionic particles (132-846 nm) were synthesized by dispersion polymerization in an ethanol/water solution. Cationic NPs of 120 nm were synthesized by miniemulsion polymerization. Particles of 208, 267 and 603 nm were produced by seeding the 120 nm particles obtained by miniemulsion polymerization with drop-wise added monomer and subsequent polymerization. The colloidal characterization of all particles showed a comparable amount of surface groups. In addition, the particles were characterized with regard to their size, morphology, solid content, amount of incorporated fluorescent dye and zeta potential. The fluorescence intensities of all particles were measured by fluorescence spectroscopy for calibration in the subsequent cellular experiments. The uptake of the NPs by HeLa cells after 1-24 h revealed a much higher uptake of cationic NPs in comparison to non-ionic NPs. If the same amount of NPs of different sizes is introduced to the cells, a different number of particles is present in the cell medium, which complicates a comparison of the uptake; the same holds for the particles' overall surface area. Therefore, HeLa cells were incubated with the same concentration, particle number and surface area of NPs. It was found that at the same concentration, the same amount of polymer is always taken up by the cells; however, the number of particles taken up decreases for the biggest particles. A correlation with the surface area could not be found. We conclude that particles are endocytosed by an excavator-shovel-like mechanism which does not distinguish between different sizes, but depends only on the volume that is taken up.
For the decreased number of large particles taken up, an overload of this mechanism was assumed, which leads to a decrease in uptake. The participation of specific endocytotic processes was determined by the use of pharmacological inhibitors, immunocytological staining and immunofluorescence. The uptake of NPs into the endo-lysosomal machinery is dominated by caveolin-mediated endocytosis. Other pathways, including macropinocytosis and a dynamin-dependent mechanism but excluding clathrin-mediated endocytosis, also occur as competing processes. All particles can be found to some extent in early endosomes, but only the bigger particles were proven to localize in late endosomes. No particles were found in lysosomes, at least not in lysosomes labeled with Lamp1 and cathepsin D; however, given the character of the performed experiment, a localization of particles in lysosomes cannot be excluded. During their ripening process, vesicles undergo a gradual acidification from early endosomes over late endosomes to lysosomes. It is hypothesized that NPs in endo-lysosomal compartments experience the same change in pH value. To probe the environmental pH of NPs after endocytosis, the pH-sensitive dye SNARF-4F was grafted onto amino-functionalized polystyrene NPs. The pH value is a function of the ratio of the two emission wavelengths of the protonated and deprotonated forms of the dye, and is hence independent of concentration changes. The particles were synthesized by the aforementioned miniemulsion polymerization with the addition of the amino-functionalized copolymer AEMH. The immobilization of SNARF-4F was performed by an EDC coupling reaction. The amount of physically adsorbed dye, in comparison to covalently bonded dye, was 15%, as determined by precipitation of the NPs in methanol, which is a very good solvent for SNARF-4F.
To determine the influence of cellular proteins on the fluorescence properties, an intracellular calibration fit was established with plate-reader measurements and cLSM imaging using the cell-penetrating SNARF-4F AM ester; ionophores were used to equilibrate the extracellular and intracellular pH. SNARF-4F NPs were taken up well by HeLa cells and showed no toxic effects. The pH environment of the SNARF-4F NPs was qualitatively imaged in pseudo-colors as a movie over periods of up to 1 h by a self-written automated batch program. Quantification revealed an acidification process down to a pH value of 4.5 over 24 h, which is much slower than the transport of nutrients to lysosomes. NPs are present in early endosomes after at least 1 h, in late endosomes at approximately 8 h, and end up in vesicles with a pH value typical for lysosomes after more than 24 h. We therefore assume that NPs follow a distinct endocytotic mechanism, at least with regard to the kinetics involved.
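The ratiometric readout described above, pH from the ratio of the two emission bands, follows a standard Henderson-Hasselbalch-type form for dual-emission dyes; the pKa and limiting ratios below are illustrative placeholders, not the calibrated values from the thesis.

```python
import math

def ratiometric_ph(R, pKa=6.4, R_acid=0.2, R_base=2.0):
    """Simplified ratiometric pH estimate for a SNARF-type dye.
    R: measured emission-intensity ratio I1/I2.
    R_acid, R_base: limiting ratios of the fully protonated and fully
    deprotonated dye (illustrative calibration values)."""
    return pKa + math.log10((R - R_acid) / (R_base - R))

# At the midpoint ratio the estimate returns the pKa itself
R_mid = (0.2 + 2.0) / 2
print(round(ratiometric_ph(R_mid), 2))  # 6.4
```

Because only a ratio enters the formula, the estimate is insensitive to dye concentration and photobleaching, which is what makes the per-vesicle acidification movie quantifiable.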
Resumo:
Because of the potentially irreversible impact of groundwater quality deterioration in the Ferrara coastal aquifer, answers concerning the assessment of the extent of the salinization problem, the understanding of the mechanisms governing salinization processes, and the sustainability of the current water resources management are urgent. In this light, the present thesis aims to achieve the following objectives:
- Characterization of the lowland coastal aquifer of Ferrara: hydrology, hydrochemistry and evolution of the system
- The importance of data acquisition techniques in saltwater intrusion monitoring
- Predicting salinization trends in the lowland coastal aquifer
- Ammonium occurrence in a salinized lowland coastal aquifer
- Trace elements mobility in a saline coastal aquifer
Resumo:
This doctoral thesis describes the extension of the resonance ionization laser ion source RILIS at CERN/ISOLDE by the addition of an all-solid-state tunable titanium:sapphire (Ti:Sa) laser system to complement the well-established system of dye lasers. Synchronous operation of the so-called Dual RILIS system of Ti:Sa and dye lasers was investigated, and the potential for increased ion beam intensity, reliability and reduced setup time was demonstrated. In-source resonance ionization spectroscopy was performed at the ISOLDE/CERN and ISAC/TRIUMF radioactive ion beam facilities to develop an efficient and selective three-colour ionization scheme for the purely radioactive element astatine. A LabVIEW-based monitoring, control and measurement system was conceived which enabled, in conjunction with Dual RILIS operation, the spectroscopy of high-lying Rydberg states, from which the ionization potential of the astatine atom was determined experimentally for the first time.
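Extracting an ionization potential from a Rydberg series rests on the level formula E_n = IP - R/(n - δ)², with quantum defect δ. A minimal fit, here by scanning δ until the series yields a constant IP, can be sketched as follows; the level energies are synthetic, not the actual astatine data.

```python
import numpy as np

# Rydberg series: E_n = IP - R / (n - delta)^2.
R = 109737.0                       # Rydberg constant, cm^-1 (approximate)
IP_true, delta_true = 75151.0, 3.2  # synthetic "truth" for the demo
n = np.arange(20, 40)
E = IP_true - R / (n - delta_true) ** 2  # synthetic measured levels

# Scan the quantum defect; the right delta makes E_n + R/(n-delta)^2
# the same constant (the IP) for every member of the series.
best_delta, best_var = None, np.inf
for d in np.linspace(2.5, 4.0, 1501):
    ip_est = E + R / (n - d) ** 2
    if ip_est.var() < best_var:
        best_var, best_delta = ip_est.var(), d

IP = (E + R / (n - best_delta) ** 2).mean()
print(round(IP, 1), "cm^-1")  # 75151.0 cm^-1
```

With real spectroscopic data the same idea is applied as a least-squares fit over IP and δ jointly, with the high-n members weighting the extrapolation to the series limit.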
Resumo:
Polymer-based colloids with sizes in the nanometer range are regarded as promising candidates for the encapsulation and transport of pharmaceutical agents. It is therefore important to better understand the physical processes that influence the formation, structure and kinetic stability of polymer-based colloids. However, investigating these processes for nanometer-sized objects is complicated and requires advanced techniques. In this work I describe investigations in which dual-colour fluorescence cross-correlation spectroscopy (DC FCCS) was used to obtain information about the interaction and exchange of dispersed, nanometer-sized colloids. First, I investigated the preparation of polymer nanoparticles from emulsion droplets, one of the most widely used nanoparticle formulation processes. I was able to show that DC FCCS can directly and unambiguously measure coalescence between emulsion droplets. This is of interest because coalescence is regarded as the main reason for the broad size distribution of the final nanoparticles. Furthermore, I investigated the exchange of micelle-forming molecules between amphiphilic diblock copolymer micelles. A linear-brush block copolymer, which forms micelles with a dense, short corona, served as the model system. Using DC FCCS, the exchange could be observed in different solvents and at different temperatures. I found that, depending on the quality of the solvent, the exchange time can be shifted by orders of magnitude, allowing wide-ranging tuning of the exchange kinetics. One property that all these colloids have in common is their polydispersity.
In the last part of my work, using polymers as a model system, I investigated the effect of polydispersity and of the type of fluorescence labelling on FCS experiments. An adaptation of the classical FCS model can describe the FCS correlation curves of these systems. I confirmed the correctness of my approach by comparison with gel permeation chromatography and Brownian molecular dynamics simulations.
Resumo:
PURPOSE OF REVIEW: Critical incident reporting alone does not necessarily improve patient safety or even patient outcomes. Substantial improvement has been made by focusing on the further two steps of critical incident monitoring, that is, the analysis of critical incidents and implementation of system changes. The system approach to patient safety had an impact on the view about the patient's role in safety. This review aims to analyse recent advances in the technique of reporting, the analysis of reported incidents, and the implementation of actual system improvements. It also explores how families should be approached about safety issues. RECENT FINDINGS: It is essential to make as many critical incidents as possible known to the intensive care team. Several factors have been shown to increase the reporting rate: anonymity, regular feedback about the errors reported, and the existence of a safety climate. Risk scoring of critical incident reports and root cause analysis may help in the analysis of incidents. Research suggests that patients can be successfully involved in safety. SUMMARY: A persisting high number of reported incidents is anticipated and regarded as continuing good safety culture. However, only the implementation of system changes, based on incident reports, and also involving the expertise of patients and their families, has the potential to improve patient outcome. Hard outcome criteria, such as standardized mortality ratio, have not yet been shown to improve as a result of critical incident monitoring.
Resumo:
Monitoring pathology/regeneration in experimental models of de-/remyelination requires an accurate measure not only of functional changes but also of the amount of myelin. We tested whether X-ray diffraction (XRD), which measures periodicity in unfixed myelin, can assess the structural integrity of myelin in fixed tissue. From laboratories involved in spinal cord injury research and in studying the aging primate brain, we solicited "blind" samples and used an electronic detector to record rapidly the diffraction patterns (30 min each pattern) from them. We assessed myelin integrity by measuring its periodicity and relative amount. Fixation of tissue itself introduced +/-10% variation in periodicity and +/-40% variation in relative amount of myelin. For samples having the most native-like periods, the relative amounts of myelin detected allowed distinctions to be made between normal and demyelinating segments, between motor and sensory tracts within the spinal cord, and between aged and young primate CNS. Different periodicities also allowed distinctions to be made between samples from spinal cord and nerve roots and between well-fixed and poorly fixed samples. Our findings suggest that, in addition to evaluating the effectiveness of different fixatives, XRD could also be used as a robust and rapid technique for quantitating the relative amount of myelin among spinal cords and other CNS tissue samples from experimental models of de- and remyelination.
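The periodicity measurement underlying this approach reduces to reading diffraction peak positions: for the h-th order lamellar reflection at scattering vector q_h, the period is d = 2πh/q_h. The peak values below are illustrative, chosen to give a typical CNS myelin period of about 15.6 nm.

```python
import math

# Myelin period from small-angle X-ray diffraction peak positions:
# d = 2 * pi * h / q_h for the h-th order reflection.
peaks = {2: 0.805, 4: 1.611}  # order h -> q_h in nm^-1 (illustrative)

periods = [2 * math.pi * h / q for h, q in peaks.items()]
d = sum(periods) / len(periods)  # average the estimates over orders
print(round(d, 1), "nm")  # 15.6 nm
```

Averaging over several orders, and comparing the integrated peak intensities, gives exactly the two quantities the study uses: the periodicity and the relative amount of myelin.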
Resumo:
The chemotherapeutic drug 5-fluorouracil (5-FU) is widely used for treating solid tumors. Response to 5-FU treatment is variable, with 10-30% of patients experiencing serious toxicity, partly explained by reduced activity of dihydropyrimidine dehydrogenase (DPD). DPD converts endogenous uracil (U) into 5,6-dihydrouracil (UH2) and, analogously, 5-FU into 5-fluoro-5,6-dihydrouracil (5-FUH2). Combined quantification of U and UH2 with 5-FU and 5-FUH2 may provide a pre-therapeutic assessment of DPD activity and further guide drug dosing during therapy. Here, we report the development of a liquid chromatography-tandem mass spectrometry assay for the simultaneous quantification of U, UH2, 5-FU and 5-FUH2 in human plasma. Samples were prepared by liquid-liquid extraction with 10:1 ethyl acetate-2-propanol (v/v). The evaporated samples were reconstituted in 0.1% formic acid and 10 μL aliquots were injected into the HPLC system. Analyte separation was achieved on an Atlantis dC18 column with a mobile phase consisting of 1.0 mM ammonium acetate, 0.5 mM formic acid and 3.3% methanol. Positively ionized analytes were detected by multiple reaction monitoring. The analytical response was linear in the range 0.01-10 μM for U, 0.1-10 μM for UH2, 0.1-75 μM for 5-FU and 0.75-75 μM for 5-FUH2, covering the expected concentration ranges in plasma. The method was validated following the FDA guidelines and applied to clinical samples obtained from ten 5-FU-treated colorectal cancer patients. The present method merges the analysis of 5-FU pharmacokinetics and DPD activity into a single assay, representing a valuable tool to improve the efficacy and safety of 5-FU-based chemotherapy.
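Quantification within the validated linear ranges reduces to an ordinary calibration line: fit the detector response against known standards, then back-calculate an unknown. The response factors below are made up for the sketch, not the assay's real calibration.

```python
import numpy as np

# Linear calibration within a validated range (illustrative numbers):
# standards span part of the 5-FU range reported above.
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0])  # uM standards
area = 2000.0 * conc + 50.0                  # idealized detector response

slope, intercept = np.polyfit(conc, area, 1)  # least-squares line

# Back-calculate the concentration of an unknown plasma sample
unknown_area = 6050.0
c_unknown = (unknown_area - intercept) / slope
print(round(c_unknown, 2), "uM")  # 3.0 uM
```

In practice each analyte (U, UH2, 5-FU, 5-FUH2) gets its own calibration line over its own validated range, and results falling outside the range are diluted and re-assayed rather than extrapolated.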