814 results for Performance comparison
Abstract:
The first part of this Co-Advisorship Ph.D. thesis project was aimed at selecting the Bifidobacterium longum strains best suited to form the basis of our study. We were looking for strains able to colonize the intestinal mucosa, with good adhesion capacities, so that we could test their ability to induce apoptosis in "damaged" intestinal cells. Adhesion and apoptosis are the two processes we want to study in order to better understand the role of an adhesion protein that we previously identified and that shows top-scoring homology with the serpin-encoding gene recently identified in B. longum by Nestlé researchers. Bifidobacterium longum is a probiotic known for its beneficial effects on the human gut and for its immunomodulatory and antitumor activities. Recently, many studies have stressed the intimate relation between probiotic bacteria and the GIT mucosa and their influence on human cellular homeostasis. We focused on the apoptotic deletion of cancer cells induced by B. longum. This was evaluated in vitro by incubating three B. longum strains with enterocyte-like Caco-2 cells to detect DNA fragmentation, a hallmark of apoptosis. The three strains were first characterized for their adhesion properties using adhesion and autoaggregation assays, features considered necessary for selecting a probiotic strain. The three strains, named B12, B18 and B2990, were classified respectively as "strongly adherent", "adherent" and "non-adherent". Bacteria were then incubated with Caco-2 cells to investigate apoptotic deletion. Co-cultures of Caco-2 cells with B. longum were positive in the DNA fragmentation test only when the adherent strains (B12 and B18) were used. These results indicate that interaction with adherent B. longum can induce apoptotic deletion of Caco-2 cells, suggesting a role in the cellular homeostasis of the gastrointestinal tract and in restoring the ecology of damaged colon tissues.
These results guided the next stage of the research: the tested strains were used as recipients in recombinant experiments aimed at generating new B. longum strains with an enhanced capacity to induce apoptosis in "damaged" intestinal cells. To achieve this goal we decided to clone the serpin-encoding gene of B. longum, in order to understand its role in adhesion and apoptosis induction. Bifidobacterium longum has an immunostimulant activity that in vitro can lead to an apoptotic response in the Caco-2 cell line. It secretes a hypothetical eukaryotic-type serpin protein, which could be involved in this kind of deletion of damaged cells. We had previously characterised a protein with homologies to the hypothetical serpin of B. longum (DD087853). In order to create Bifidobacterium serpin transformants, a B. longum cosmid library was screened with a PCR protocol using primers specific for the serpin gene. After fragment extraction, the insert, named S1, was sub-cloned into pRM2, an Escherichia coli - Bifidobacterium shuttle vector, to construct pRM3. Several protocols for B. longum transformation were tried, and the best efficiency was obtained using MRS medium and raffinose. Finally, bacterial cell supernatants were tested in a dot-blot assay to detect antigens recognized by an anti-antitrypsin polyclonal antibody. The best signal was produced by one strain, which was renamed B. longum BLKS 7. Our aim was to generate transformants able to over-express the serpin-encoding gene, providing the tools for a further study of bacterial apoptotic induction in the Caco-2 cell line. Having obtained new transformants, the next step was to test their abilities when exposed to an intestinal cell model. This part of the project was carried out in the Department of Biochemistry of the Medical Faculty of the University of Maribor, hosted by the foreign supervisor of the Co-Advisorship Doctoral Thesis, Prof. Avrelija Cencic.
In this study we examined the probiotic ability of several bacterial strains using intestinal cells from a six-year-old pig. The use of intestinal mammalian cells is essential to study this symbiosis, and a functional cell model mimics a polarised epithelium in which enterocytes are separated by tight junctions. The list of strains included the Bifidobacterium longum BKS7 transformant strain that we had previously generated, in order to compare its abilities. Wild-type B. longum B12, the B. longum BKS7 transformant and eight Lactobacillus strains from different sources were co-cultured with porcine small intestine epithelial cells (PSI C1) and porcine blood monocytes (PoM2) in Transwell filter inserts. The strains, including Lb. gasseri, Lb. fermentum, Lb. reuteri, Lb. plantarum and unidentified Lactobacillus isolates from Kenyan Maasai milk and Tanzanian coffee, were assayed for activation of the cell lines by measuring nitric oxide (Griess reaction), H2O2 (tetramethylbenzidine reaction) and superoxide (cytochrome C reduction). Cytotoxicity (crystal violet staining) and induction of metabolic activity (MTT cell proliferation assay) were also tested. The transepithelial electrical resistance (TER) of polarised PSI C1 was measured during 48 hours of co-culture. TER, a measure of epithelium permeability, decreases during pathogenesis as the tissue becomes permeable to passive ion flow, lowering the epithelial barrier function; probiotics can prevent or restore this increased permeability. Lastly, a dot-blot against Interleukin-6 was performed on supernatants of the treated cells. The metabolic activity of PoM2 and PSI C1 increased slightly after co-culture without affecting mitochondrial functions. No strain was cytotoxic to PSI C1 or PoM2, and no cell activation was observed, as measured by the release of NO2, H2O2 and superoxide by PoM2 and PSI C1. During co-culture the TER of polarised PSI C1 was two-fold higher than the constant TER (~3000) of untreated cells.
The TER rise generated by the bacteria maintains a low permeability of the epithelium. During treatment, Interleukin-6 was detected in cell supernatants at several time points, confirming an immunostimulant activity. All results were obtained using Lactobacillus paracasei Shirota and Carnobacterium divergens as controls. In conclusion, we can state that both the panel of putative probiotic bacteria and our new transformant strain of B. longum are not harmful when exposed to intestinal cells and could be selected as probiotics, because they can strengthen the epithelial barrier function and stimulate the nonspecific immunity of intestinal cells in a pig cell model. Indeed, we found that none of the tested strains with good adhesion abilities is cytotoxic to the intestinal cells, and that none of them induces the cell lines to produce high levels of ROS or NO2. Moreover, we also assayed the production of certain cytokines correlated with the immune response. Interleukin-6 was detected in all our samples, including the B. longum transformant BKS 7 strain, indicating that these bacteria can induce a nonspecific immune response in the intestinal cells. In fact, when we assayed the cell supernatants for Interferon-gamma after bacterial exposure, we obtained no positive signals, meaning that no specific immune response is activated and confirming that these bacteria are not recognized as pathogens by the intestinal cells and are certainly not harmful to them. The most important result is the measurement of transepithelial electrical resistance, which showed how the intestinal barrier function is strengthened when cells are exposed to the bacteria, owing to a reduction of the epithelium permeability. We now have a new strain of B. longum that will be used for further studies on the mechanism of apoptotic induction in "damaged" cells and on the process of "restoring ecology".
This strain will be the basis for generating new transformant strains for the serpin-encoding gene with better performance, which might one day even be used in clinical cases, as in "gene therapy" for cancer treatment and prevention.
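The reported TER effect (a roughly two-fold rise over the constant baseline of untreated cells) is naturally expressed as a fold-change over the control at each time point. A minimal Python sketch, with hypothetical readings chosen only to mirror the reported trend, not the study's actual data:

```python
import statistics

def ter_fold_change(treated, control):
    """Fold-change of transepithelial resistance (TER) vs. an untreated control.

    treated, control: TER readings (arbitrary units) taken at the same time
    points during co-culture. Values > 1 indicate a strengthened epithelial
    barrier (lower permeability).
    """
    return [t / c for t, c in zip(treated, control)]

# Hypothetical readings: the control stays roughly constant (~3000) while
# the bacteria-treated monolayer roughly doubles, as reported in the study.
control = [3000, 3050, 2980, 3010]
treated = [3100, 4500, 5900, 6100]
fold = ter_fold_change(treated, control)
mean_fold = statistics.mean(fold)
```

A mean fold-change well above 1 over the co-culture indicates barrier strengthening rather than the permeability loss seen in pathogenesis.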
Abstract:
The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. Classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects are Networks on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as STMicroelectronics, Samsung and Philips, and universities such as the University of Bologna, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the switch of design methodology and speed up the development of new NoC-based systems on chip. In this thesis we first give an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose: • a detailed, simulation-based analysis of the Spidergon NoC, an STMicroelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world; here we provide a detailed analysis of this NoC topology and its routing algorithms.
Furthermore, we propose a new routing algorithm designed to optimize the use of the network's resources while also increasing its performance; • a methodology flow, based on modified publicly available tools, that can be used to design, model and analyze any kind of System on Chip; • a detailed analysis of an STMicroelectronics-proprietary transport-level protocol that the author of this thesis helped to develop; • a comprehensive, simulation-based comparison of different network interface designs proposed by the author and the researchers at the AST lab, aimed at integrating shared-memory and message-passing components on a single System on Chip; • a powerful and flexible solution to the timing closure issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and allows the power and area demands of NoC interconnects to be reduced while also reducing their buffer needs; • a solution that simplifies the design of NoCs while increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual channel-based routers with multiple, flexible, small Multi Plane routers. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are scarce. This thesis was written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department of Columbia University in the City of New York.
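The Spidergon topology analyzed above places the N nodes on a ring and adds, at each node, a third "across" link to the diametrically opposite node; its baseline shortest-path routing takes the across link whenever the ring distance exceeds N/4 and finishes on the ring. A minimal sketch of that decision rule (the basic published scheme, not ST's implementation and not the new algorithm proposed in the thesis):

```python
def spidergon_next_hop(src, dst, n):
    """One step of shortest-path ("across-first") routing on a Spidergon NoC.

    n nodes (n even) sit on a ring; node i also has an 'across' link to
    (i + n/2) % n. Short clockwise distances go right, short counter-clockwise
    distances go left, anything else takes the across link first.
    """
    assert n % 2 == 0
    d = (dst - src) % n
    if d == 0:
        return src
    if d <= n // 4:
        return (src + 1) % n      # right (clockwise) ring link
    if d >= n - n // 4:
        return (src - 1) % n      # left (counter-clockwise) ring link
    return (src + n // 2) % n     # across link

def route(src, dst, n):
    """Full path from src to dst, applying the next-hop rule repeatedly."""
    path = [src]
    while path[-1] != dst:
        path.append(spidergon_next_hop(path[-1], dst, n))
    return path
```

For example, on an 8-node Spidergon a packet from node 0 to node 3 crosses to node 4 and then takes one ring hop, instead of three ring hops.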
Abstract:
[EN] This work presents a comparison among different focus measures used in the literature for autofocusing, in a previously unexplored application: face detection. This application has characteristics different from those, such as microscopy or depth-from-focus, where autofocus methods have traditionally been applied. The aim of the work is to find out whether the best focus measures in traditional autofocus applications perform equally well in face detection. To that end, six focus measures, from the oldest to the most recent, have been studied in four different settings.
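Two of the classic focus measures commonly included in such comparisons are the variance of the Laplacian and the Tenengrad (gradient-energy) measure; both should score a sharp image higher than a defocused one. A minimal numpy sketch (the six measures actually studied are not named in the abstract; these two are illustrative):

```python
import numpy as np

def variance_of_laplacian(img):
    """Focus measure: variance of a 4-neighbour Laplacian response.
    Sharp images have strong second derivatives, hence larger variance."""
    img = img.astype(float)
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def tenengrad(img):
    """Focus measure: mean squared gradient magnitude (Sobel gradients are
    standard; central differences are used here for brevity)."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(gx ** 2 + gy ** 2)

# Sanity check on synthetic data: a 3x3 box blur must lower both measures.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = sum(np.roll(np.roll(sharp, i, 0), j, 1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
```

In an autofocus loop the lens position maximizing the chosen measure is selected; the comparison in this work asks which measure ranks focus positions best on face images.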
Abstract:
Visual search and oculomotor behaviour are believed to be very relevant to athlete performance, especially in sports requiring refined visuo-motor coordination skills. Modern coaches believe that a correct visuo-motor strategy may be part of advanced training programs. In this thesis two experiments are reported in which the gaze behaviour of expert and novice athletes was investigated while they performed a real, sport-specific task. The experiments concern two different sports: judo and soccer. In each experiment, the number of fixations, the fixation locations and the mean fixation duration (ms) were considered. An observational analysis was carried out at the end to examine perceptual differences between near and far space. Purpose: The aim of the judo study was to delineate differences in gaze behaviour characteristics between a population of athletes and one of non-athletes. The aspects specifically investigated were search rate, search order and viewing time across different conditions in a real-world task. The second study aimed at identifying gaze behaviour in varsity soccer goalkeepers while facing a penalty kick executed with the instep and with the inside of the foot. An attempt was then made to compare the gaze strategies of expert judoka and soccer goalkeepers, in order to delineate possible differences related to reacting to events occurring in near (peripersonal) or far (extrapersonal) space. Judo methods: A sample of 9 judoka (black belt) and 11 near-judoka (white belt) was studied. Eye movements were recorded at 500 Hz using a video-based eye tracker (EyeLink II). Each subject participated in 40 sessions of about 40 minutes. Gaze behaviour was quantified as the average number of locations fixated per trial, the average number of fixations per trial, and the mean fixation duration. Soccer methods: Seven (n = 7) intermediate-level males volunteered for the experiment. The kickers and goalkeepers had at least varsity-level soccer experience.
The vision-in-action (VIA) system (Vickers 1996; Vickers 2007) was used to collect the coupled gaze and motor behaviours of the goalkeepers. This system integrated input from a mobile eye tracking system (Applied Science Laboratories) with an external video of the goalkeeper's saving actions. The goalkeepers took 30 penalty kicks on a synthetic pitch in accordance with FIFA (2008) laws. Judo results: The results indicate that the expert group differed significantly from the near-expert group in fixation duration and number of fixations per trial. The expert judokas used a less exhaustive search strategy, involving fewer fixations of longer duration than their novice counterparts, and focused on central regions of the body. The results also showed that in both defence and attack situations the expert group made a greater number of transitions than their novice counterparts. Soccer results: We found a significant main effect on the number of locations fixated across outcome (goal/save) but not across foot contact (instep/inside). Participants spent more time fixating the areas of interest in instep than in inside kicks, and in goal than in save situations. The mean and standard error of the search strategy as a function of foot contact and outcome indicate that most gaze behaviour starts and finishes on the ball areas of interest. Conclusions: Expert goalkeepers tend to spend more time in inside-save than in instep-save penalties, a difference that was reversed in scored penalty kicks. The judo results show that differences in visual behaviour related to the level of expertise appear mainly when the test presentation is continuous, lasts for a relatively long period of time and presents a high level of uncertainty with regard to the chronology and nature of events. Expert judo performers "anchor" the fovea on central regions of the scene (lapel and face) while using peripheral vision to monitor the opponent's limb movements.
The differences between judo and soccer gaze strategies are discussed in the light of the physiological and neuropsychological differences between near- and far-space perception.
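The gaze metrics used in both experiments (number of fixations and mean fixation duration) are derived from the tracker's sample stream: at 500 Hz each sample covers 2 ms, and a fixation can be treated as a maximal run of consecutive samples on the same location. A minimal sketch with hypothetical labels (not the study's data):

```python
def fixation_metrics(samples, sample_ms=2.0):
    """Number of fixations and mean fixation duration (ms) from labeled samples.

    samples: per-sample gaze labels, one entry every sample_ms milliseconds
    (500 Hz -> 2 ms). A fixation is a maximal run of identical consecutive
    labels; None marks saccade/blink samples and is never a fixation.
    """
    fixations = []                    # list of (label, duration_ms)
    run_label, run_len = None, 0
    for s in samples + [None]:        # trailing sentinel flushes the last run
        if s == run_label and s is not None:
            run_len += 1
        else:
            if run_label is not None:
                fixations.append((run_label, run_len * sample_ms))
            run_label, run_len = s, 1
    n = len(fixations)
    mean_ms = sum(d for _, d in fixations) / n if n else 0.0
    return n, mean_ms

# Hypothetical 500 Hz stream: face (200 ms), saccade, lapel (400 ms),
# saccade, face (100 ms).
stream = ["face"] * 100 + [None] * 10 + ["lapel"] * 200 + [None] * 5 + ["face"] * 50
n_fix, mean_ms = fixation_metrics(stream)
```

Fewer, longer fixations (as reported for the expert judokas) would show up here as a smaller n_fix with a larger mean_ms.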
Abstract:
The concept of "sustainability" relates to prolonging human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship, and practices that conserve resources in a manner that allows growth and development to be sustained for the long term without degrading the environment, are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe and in the U.S. are large. The asphalt industry is still developing technological improvements that reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures than hot mix asphalt (HMA), while aiming to maintain the desired post-construction properties of traditional HMA. Lowering the production temperature reduces fuel usage and emissions, thereby improving conditions for workers and supporting sustainable development. The crumb-rubber modifier (CRM), obtained from shredded automobile tires and used in the United States since the mid-1980s, has also proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is relevant not only from an environmental standpoint but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project aims to demonstrate the dual value of these asphalt mixes, in regard to both environmental and mechanical performance, and to suggest a low-environmental-impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design, but it cannot be the only step.
The eco-compatible approach should also be extended to the design method and material characterization, because only through these phases is it possible to exploit the maximum potential of the materials used. Appropriate asphalt concrete characterization is essential for realistic performance prediction of asphalt concrete pavements. Volumetric (mix design) and mechanical (permanent deformation and fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to use the material correctly. A Mechanistic-Empirical design method, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under different traffic and environmental conditions, was the application of choice. In particular, this study focuses on CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain, related to surface cracking and rutting respectively. It works in increments of time and, using the output from one increment recursively as input to the next, predicts the pavement condition in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure, in terms of fatigue and permanent deformation, under defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as a surface layer of 60 mm thickness. The performance of the pavement was compared with that of the same pavement structure with different kinds of asphalt concrete as the surface layer. Three eco-friendly materials, two warm mix asphalts and a rubberized asphalt concrete, were analyzed in comparison with a conventional asphalt concrete.
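The incremental-recursive idea described above, where each time increment computes damage from the current layer modulus and traffic, and the reduced modulus is fed back as input to the next increment, can be sketched as a simple loop. This is a toy illustration of the I-R structure only; the damage law and the parameters k and alpha below are invented for the example and are not CalME's calibrated models:

```python
def incremental_recursive(e0, increments, k=0.002, alpha=0.6):
    """Toy sketch of an incremental-recursive (I-R) damage simulation.

    e0: initial layer modulus; increments: traffic load repetitions per time
    increment. Each increment's output modulus is, recursively, the next
    increment's input. Returns a list of (modulus, damage) per increment.
    """
    e, damage, history = e0, 0.0, []
    for load_reps in increments:
        strain = load_reps / e                     # strain grows as layer softens
        damage = min(1.0, damage + k * strain ** alpha)
        e = e0 * (1.0 - damage)                    # output of this increment ...
        history.append((e, damage))                # ... becomes the next input
    return history

# Twelve equal traffic increments on a layer with an initial modulus of 3000.
moduli = incremental_recursive(3000.0, [1e5] * 12)
```

The recursion is what makes damage accumulation nonlinear: a softened layer sees higher strain, which accelerates further damage, mirroring how CalME updates layer moduli, cracking and rutting increment by increment.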
The first two chapters summarize the steps needed to satisfy the sustainable pavement design procedure. Chapter I introduces the problem of eco-compatible asphalt pavement design; the low-environmental-impact materials, Warm Mix Asphalt and Rubberized Asphalt Concrete, are described in detail, and the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a thorough laboratory characterization based on appropriate material selection and performance evaluation. In Chapter III, CalME is introduced through an explanation of the design approaches it provides, with specific attention to the I-R procedure. In Chapter IV, the experimental program is presented, with an explanation of the laboratory test devices adopted. The fatigue and rutting performances of the study mixes are shown in Chapters V and VI respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, Chapter VII reports the results of the simulations of asphalt pavement structures with different surface layers. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rut depth in each bound layer were analyzed.
Abstract:
The thematic and conceptual core of the thesis is the complex, paradoxical and often controversial relationship between theatre and performance (art), that is, the relationship between the concepts of "theatricality" and "performativity". Attention is focused on those currents in the contemporary performing arts that tend to dissolve the notions of genre, discipline, technique and authorship, and that question the very status of the performative work (the show) as an exclusively aesthetic, i.e. spectacular, product. Drawing respectively from the fields of theatre, dance and performance art, the practices of Jerzy Grotowski and Thomas Richards, Jérôme Bel and Marina Abramović are examined. What these very different practices have in common is not only the problem of the relationship between theatricality and performativity, but above all the particularly radical and persistent (and also paradoxical) character of their double effort, which consists in pushing their own discipline beyond every pre-established boundary while at the same time trying to re-define the founding codes and the ontological status that would distinguish it from the other performative disciplines. Various theorizations of performance are also examined, with particular attention to those contributions that highlight (and question) the delicate relationship between theatre and performance (art) through a (re)conceptualization and comparison of the terms theatricality and performativity. The thesis examines the evolution of the understanding of that relationship within the theoretical-historical and artistic field, which initially reflects a tendency to perceive it in terms of opposition, and even exclusion, and over time arrives at a more reconciling and complementary vision.
Do the radical contemporary practices between theatre and performance perhaps represent a new, specific and autonomous performative form-process, one that could be defined tout court as "performance", and with which the modernist theatrical project is definitively superseded?
Abstract:
The aim of this work is to link two aspects that have historically always been disconnected. The first is the long debate on the "beyond GDP" theme, which has continued uninterrupted for about half a century. The second concerns the use of performance measurement and evaluation systems in the Italian public sector. The evolution of the debate on GDP is illustrated through a historical excursus of the critical thought developed over about fifty years, analyzing the arguments adopted by scholars to refute the use of GDP as a universal measure of well-being. Taking up this suggestion, Istat, in collaboration with CNEL, launched a project to identify new indicators to place alongside GDP, capable of measuring not only economic growth but also social and sustainable well-being, through the analysis of indicators referring to 12 identified well-being domains. The Istat-CNEL project was joined by the UrBES project, promoted by Istat and by the coordination of metropolitan mayors of ANCI, which set up a network of metropolitan cities to experiment with measurement and comparison based on indicators of fair and sustainable urban well-being, adopting a project of the Municipality of Bologna and of Laboratorio Urbano (a centre for documentation, research and proposals on cities), which administered an online questionnaire to different targets. The results, with reference to the answers given to the open questions, were processed with Taltac, a text analysis software, in order to identify the "profiles" of the respondents, associating the processing results with the structural variables of the questionnaire.
In the last part, the services and projects delivered by the Municipality of Bologna were mapped onto the UrBES dimensions, in order to assess the impact of public policies on citizens' quality of life and well-being, pointing out the critical issues linked to the lack of adequate data.
Abstract:
Sustainable development is one of the biggest challenges of the twenty-first century. Various universities have begun the debate about the content of this concept and the ways in which to integrate it into their policy, organization and activities. Universities have a special responsibility to take a leading position by demonstrating best practices that sustain and educate a sustainable society. For that reason universities have the opportunity to create a culture of sustainability for today's students, and to set their expectations for how the world should be. This thesis aims at analyzing how Delft University of Technology and the University of Bologna face the challenge of becoming a sustainable campus. In this context, both universities have been studied and analyzed following the International Sustainable Campus Network (ISCN) methodology, which provides a common framework to formalize commitments and goals at campus level. In particular, this work aims to highlight which key performance indicators are essential to reach sustainability; as a consequence, the following aspects have been taken into consideration: energy use, water use, solid waste and recycling, and carbon emissions. Subsequently, in order to provide a better understanding of the current state of sustainability at the University of Bologna and Delft University of Technology, and of potential strategies to achieve the stated objective, a SWOT analysis has been undertaken. Strengths, weaknesses, opportunities and threats have been laid out to understand how the two universities can build a synergy to improve each other. To frame a "Sustainable SWOT", the model proposed by People & Planet was considered, so it was necessary to evaluate important matters such as policy, investment, management, education and engagement.
In this regard, it was fundamental to involve the main sustainability coordinators of the two universities; this was achieved through a brainstorming session. Partnerships are key to the achievement of sustainability. The creation of a bridge between the two universities aims to join forces and to create a new generation of talent. As a result, people become able to support universities in the exchange of information, ideas and best practices for achieving sustainable campus operations and integrating sustainability in research and teaching. For this purpose the "SUCCESS" project has been presented; it aims to create an interactive European campus network that can be considered a strategic key player for sustainable campus innovation in Europe. Specifically, the main key performance indicators have been analyzed, and their importance for the two universities and their strategic impact have been highlighted. For this reason, a survey was conducted among people who play crucial roles for sustainability within the two universities, who were asked to evaluate the KPIs of the project. This assessment was relevant because it represented the foundation for developing a strategy to create a true collaboration.
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems both in Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success both in science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely a kind of brute-force statistical approach and whether they can only work in the context of High Performance Computing with tons of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: Convolutional Neural Networks (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view within the big tent of deep learning and are good choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations like Google and Facebook to solve face recognition and image auto-tagging problems.
HTM, on the other hand, is a new, emerging, mainly unsupervised paradigm that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a lower quantity of data, HTM can outperform CNN.
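The low-data claim in the final sentence is the kind of statement normally checked with a learning-curve experiment: train both models on growing subsets and compare test accuracy at each size. A minimal harness with simple stand-in classifiers, nearest-centroid and 1-NN (the thesis compares CNN vs. HTM, which would plug into the same loop):

```python
import numpy as np

def nearest_centroid(x_train, y_train):
    """Stand-in model A: one prototype (class mean) per class."""
    cents = {c: x_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}
    return lambda x: min(cents, key=lambda c: np.linalg.norm(x - cents[c]))

def one_nn(x_train, y_train):
    """Stand-in model B: 1-nearest-neighbour classifier."""
    return lambda x: y_train[np.argmin(np.linalg.norm(x_train - x, axis=1))]

def learning_curve(make_model, sizes, x_tr, y_tr, x_te, y_te):
    """Test accuracy of the model trained on the first n examples, per size n."""
    out = []
    for n in sizes:
        model = make_model(x_tr[:n], y_tr[:n])
        out.append(np.mean([model(x) == y for x, y in zip(x_te, y_te)]))
    return out

# Synthetic two-class data: well separated Gaussian blobs, shuffled.
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(5.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
idx = rng.permutation(100)
x, y = x[idx], y[idx]
sizes = [4, 20, 60]
curve_nc = learning_curve(nearest_centroid, sizes, x[:60], y[:60], x[60:], y[60:])
curve_nn = learning_curve(one_nn, sizes, x[:60], y[:60], x[60:], y[60:])
```

A model "outperforming with less data" shows up as a higher curve at the small training sizes, even if both curves converge as the training set grows.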
Abstract:
To retrospectively analyze the performance of a commercial computer-aided diagnosis (CAD) software in the detection of pulmonary nodules in original and energy-subtracted (ES) chest radiographs.
Abstract:
Arterial pressure-based cardiac output monitors (APCOs) are increasingly used as alternatives to thermodilution. Validation of these evolving technologies in high-risk surgery is still ongoing. In liver transplantation, FloTrac-Vigileo (Edwards Lifesciences) has limited correlation with thermodilution, whereas LiDCO Plus (LiDCO Ltd.) has not been tested intraoperatively. Our goal was to directly compare the 2 proprietary APCO algorithms as alternatives to pulmonary artery catheter thermodilution in orthotopic liver transplantation (OLT). The cardiac index (CI) was measured simultaneously in 20 OLT patients at prospectively defined surgical landmarks with the LiDCO Plus monitor (CI(L)) and the FloTrac-Vigileo monitor (CI(V)). LiDCO Plus was calibrated according to the manufacturer's instructions. FloTrac-Vigileo did not require calibration. The reference CI was derived from pulmonary artery catheter intermittent thermodilution (CI(TD)). CI(V)-CI(TD) bias ranged from -1.38 (95% confidence interval = -2.02 to -0.75 L/minute/m(2), P = 0.02) to -2.51 L/minute/m(2) (95% confidence interval = -3.36 to -1.65 L/minute/m(2), P < 0.001), and CI(L)-CI(TD) bias ranged from -0.65 (95% confidence interval = -1.29 to -0.01 L/minute/m(2), P = 0.047) to -1.48 L/minute/m(2) (95% confidence interval = -2.37 to -0.60 L/minute/m(2), P < 0.01). For both APCOs, bias to CI(TD) was correlated with the systemic vascular resistance index, with a stronger dependence for FloTrac-Vigileo. The capability of the APCOs for tracking changes in CI(TD) was assessed with a 4-quadrant plot for directional changes and with receiver operating characteristic curves for specificity and sensitivity. The performance of both APCOs was poor in detecting increases and fair in detecting decreases in CI(TD). In conclusion, the calibrated and uncalibrated APCOs perform differently during OLT. 
Although the calibrated APCO is less influenced by changes in the systemic vascular resistance, neither device can be used interchangeably with thermodilution to monitor cardiac output during liver transplantation.
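The agreement statistics reported above (a mean bias with its 95% confidence interval) follow the standard Bland-Altman approach for comparing two measurement methods. A minimal sketch, using invented cardiac index values rather than the study's data:

```python
# Illustrative Bland-Altman agreement analysis of a pulse-contour cardiac
# index (ci_apco) against thermodilution (ci_td).
# All values are hypothetical, in L/minute/m^2 -- not the study's data.
import math

ci_apco = [2.1, 2.8, 3.0, 2.4, 3.5, 2.9, 3.2, 2.6]
ci_td   = [3.4, 4.1, 4.0, 3.9, 4.6, 4.2, 4.5, 3.8]

diffs = [a - t for a, t in zip(ci_apco, ci_td)]
n = len(diffs)
bias = sum(diffs) / n                                  # mean difference (the "bias")
sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
sem = sd / math.sqrt(n)
ci95 = (bias - 1.96 * sem, bias + 1.96 * sem)          # 95% CI of the mean bias
loa = (bias - 1.96 * sd, bias + 1.96 * sd)             # 95% limits of agreement

print(f"bias = {bias:.2f} L/minute/m^2 (95% CI {ci95[0]:.2f} to {ci95[1]:.2f})")
print(f"limits of agreement: {loa[0]:.2f} to {loa[1]:.2f}")
```

A negative bias, as in the abstract, means the pulse-contour device systematically underestimates the thermodilution reference.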
Abstract:
Induced mild hypothermia after cardiac arrest interferes with clinical assessment of the cardiovascular status of patients. In this situation, non-invasive cardiac output measurement could be useful. Unfortunately, arterial pulse contour is altered by temperature, and the performance of devices using arterial blood pressure contour analysis to derive cardiac output may be insufficient.
Abstract:
The aim of this study was to compare the in situ and in vitro performance of a laser fluorescence (LF) device (DIAGNOdent 2095) with that of visual inspection for the detection of occlusal caries in permanent teeth. Sixty-four sites were selected, and visual inspection and LF assessments were carried out in vitro three times by two independent examiners, with a 1-week interval between evaluations. Afterwards, the occlusal surfaces were mounted on the palatal portion of removable acrylic orthodontic appliances and placed in six volunteers. The assessments were then repeated and validated by histological analysis of the tooth sections under a stereomicroscope. For both examiners, the highest intra-examiner values were observed for visual inspection when the in vitro and in situ evaluations were compared. Inter-examiner reproducibility varied from 0.61 to 0.64, except for the in vitro assessment using LF, which presented a lower value (0.43). Both methods showed high specificity at the D1 threshold (considering both enamel and dentin caries as disease). The in vitro evaluations showed higher sensitivity for both methods than the in situ evaluations at the D1 and D2 (considering only dentinal caries as disease) thresholds. For both methods, sensitivity (at D1 and D2) and accuracy (at D1) differed significantly between the in vitro and in situ conditions; however, sensitivity (at D1 and D2), specificity, and accuracy (both at D1) did not differ significantly between the methods under the same condition. It can be concluded that visual inspection and LF performed better in vitro than in situ.
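The sensitivity, specificity, and accuracy figures above come from cross-tabulating each method's calls against the histological gold standard at a chosen threshold. A minimal sketch with hypothetical counts (not the study's 64 sites):

```python
# Diagnostic performance from a 2x2 table at a given threshold
# (e.g. D1 = enamel and dentin caries counted as disease).
# Counts are hypothetical, scored against a histological gold standard.
tp, fp, fn, tn = 28, 3, 6, 27

sensitivity = tp / (tp + fn)              # diseased sites correctly detected
specificity = tn / (tn + fp)              # sound sites correctly ruled out
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(f"sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f} accuracy={accuracy:.2f}")
```

Moving from the D1 to the D2 threshold reclassifies enamel-only lesions as "sound", so the same raw scores yield a different 2x2 table and different statistics.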
Abstract:
The SVWN, BVWN, BP86, BLYP, BPW91, B3P86, B3LYP, B3PW91, B1LYP, mPW1PW, and PBE1PBE density functionals, as implemented in Gaussian 98 and Gaussian 03, were used to calculate ΔG° and ΔH° values for 17 deprotonation reactions whose experimental values are accurately known. The PBE1PBE and B3P86 functionals are shown to give results with accuracy comparable to that of more computationally intensive compound model chemistries. A rationale for the relative performance of the various functionals is explored.
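Benchmarks of this kind typically rank functionals by the mean absolute deviation (MAD) of computed deprotonation energies from experiment. A minimal sketch of that scoring, with illustrative numbers that are not taken from the paper:

```python
# Rank functionals by mean absolute deviation (MAD) from experimental
# gas-phase deprotonation free energies. All numbers below are
# illustrative placeholders (kcal/mol), not values from the study.
expt = {"HCl": 328.1, "H2O": 383.7, "CH4": 408.6}   # approximate experimental values
calc = {                                             # hypothetical DFT results
    "B3P86":   {"HCl": 327.4, "H2O": 384.5, "CH4": 409.8},
    "PBE1PBE": {"HCl": 328.9, "H2O": 383.1, "CH4": 407.9},
}

def mad(values):
    """Mean absolute deviation from experiment over the test set."""
    errors = [abs(values[mol] - expt[mol]) for mol in expt]
    return sum(errors) / len(errors)

for functional in sorted(calc, key=lambda f: mad(calc[f])):
    print(f"{functional}: MAD = {mad(calc[functional]):.2f} kcal/mol")
```

With 17 reactions, as in the abstract, the same loop simply runs over a larger dictionary of species.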
Abstract:
The Pulmonary Embolism Severity Index (PESI) is a validated clinical prognostic model for patients with pulmonary embolism (PE). Recently, a simplified version of the PESI was developed. We sought to compare the prognostic performance of the original and simplified PESI. Using data from 15,531 patients with PE, we compared the proportions of patients classified as low versus higher risk by the original and simplified PESI and estimated 30-day mortality within each risk group. To assess the models' accuracy in predicting mortality, we calculated sensitivity, specificity, predictive values, and likelihood ratios for low- versus higher-risk patients. We also compared the models' discriminative power by calculating the area under the receiver-operating characteristic (ROC) curve. The overall 30-day mortality was 9.3%. The original PESI classified a significantly greater proportion of patients as low risk than the simplified PESI (40.9% vs. 36.8%; p < 0.001). Low-risk patients based on the original and simplified PESI had mortalities of 2.3% and 2.7%, respectively. The original and simplified PESI had similar sensitivities (90% vs. 89%), negative predictive values (98% vs. 97%), and negative likelihood ratios (0.23 vs. 0.28) for predicting mortality. The original PESI had significantly greater discriminatory power than the simplified PESI (area under the ROC curve 0.78 [95% CI: 0.77-0.79] vs. 0.72 [95% CI: 0.71-0.74]; p < 0.001). In conclusion, even though the simplified PESI accurately identified patients at low risk of adverse outcomes, the original PESI classified a higher proportion of patients as low risk and had greater discriminatory power.
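The sensitivity, negative predictive value, and negative likelihood ratio quoted above all derive from the same 2x2 cross-tabulation of risk class against 30-day mortality. A minimal sketch with invented counts (not the 15,531-patient registry data):

```python
# Prognostic accuracy of a binary risk classification (low vs. higher risk)
# against 30-day mortality. Counts are invented for illustration only.
died_high, died_low = 90, 10        # deaths classified higher-risk / low-risk
surv_high, surv_low = 500, 400      # survivors classified higher-risk / low-risk

sensitivity = died_high / (died_high + died_low)   # deaths flagged higher-risk
specificity = surv_low / (surv_low + surv_high)
npv = surv_low / (surv_low + died_low)             # survival among low-risk patients
neg_lr = (1 - sensitivity) / specificity           # negative likelihood ratio

print(f"sensitivity={sensitivity:.2f} NPV={npv:.3f} LR-={neg_lr:.2f}")
```

A high NPV and a small negative likelihood ratio are exactly what makes a "low risk" call useful for ruling out early mortality, which is the clinical purpose of both PESI versions.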