963 results for "Instrumentation and orchestration"
Abstract:
Introduction: This study compared the combined use of sodium hypochlorite (NaOCl) and chlorhexidine (CHX) with citric acid and CHX with respect to dentinal permeability and precipitate formation. Methods: Thirty-four upper anterior teeth were prepared by rotary instrumentation and NaOCl. The root canal surfaces were conditioned for smear layer removal using a 15% citric acid solution under ultrasonic activation and a final wash with distilled water. All teeth were dried, and 30 specimens were randomly divided into three equal groups as follows: positive control group (PC), no irrigation; 15% citric acid + 2% CHX group (CA + CHX); and 1% NaOCl + 2% CHX group (NaOCl + CHX). All roots were immersed in a 0.2% Rhodamine B solution for 24 hours. One-millimeter-thick slices from the cementum-enamel junction were scanned at 400 dpi and analyzed with the ImageLab software (LIDO-USP, Sao Paulo, Brazil) to assess leakage as a percentage. For scanning electron microscopy analysis, four teeth from the NaOCl + CHX group were split in half, and each third was evaluated at 1,000x and 5,000x (at the precipitate). Results: Using the analysis of variance test followed by the Bonferroni comparison method, no statistical differences between groups were found at the cervical and middle thirds. At the apical third, differences were found between PC and NaOCl + CHX (p < 0.05) and between CA + CHX and NaOCl + CHX (p < 0.05). Conclusion: The combination of 1% NaOCl and 2% CHX solutions results in the formation of a flocculate precipitate that acts as a chemical smear layer, reducing dentinal permeability in the apical third. (J Endod 2010;36:847-850)
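The statistical comparison described (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) can be sketched as follows. The leakage percentages below are invented for illustration and are not the study's data.

```python
# Sketch of ANOVA + Bonferroni pairwise comparisons across three groups.
# Group values are hypothetical, chosen only to illustrate the procedure.
from itertools import combinations
from scipy import stats

groups = {
    "PC":        [62.1, 58.4, 65.0, 60.2, 63.3],  # hypothetical leakage %
    "CA+CHX":    [59.8, 61.5, 57.9, 60.7, 62.0],
    "NaOCl+CHX": [41.2, 39.8, 44.5, 40.1, 42.7],
}

# One-way analysis of variance across all three groups.
f_stat, p_anova = stats.f_oneway(*groups.values())

# Bonferroni correction: multiply each pairwise p-value by the number
# of comparisons (capped at 1.0).
pairs = list(combinations(groups, 2))
corrected = {}
for a, b in pairs:
    _, p = stats.ttest_ind(groups[a], groups[b])
    corrected[(a, b)] = min(p * len(pairs), 1.0)

significant = {k: v for k, v in corrected.items() if v < 0.05}
```

With these made-up numbers, only the comparisons involving the NaOCl+CHX group come out significant, mirroring the pattern the abstract reports for the apical third.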
Abstract:
Dissertation submitted for the degree of Doctor in Engineering Physics
Abstract:
BACKGROUND Complicated pyelonephritis (cPN), a common cause of hospital admission, is still a poorly understood entity given the difficulty involved in its correct definition. The aim of this study was to analyze the main epidemiological, clinical, and microbiological characteristics of cPN and its prognosis in a large cohort of patients. METHODS We conducted a prospective, observational study including 1325 consecutive patients older than 14 years diagnosed with cPN and admitted to a tertiary university hospital between 1997 and 2013. After analyzing the main demographic, clinical and microbiological data, covariates found to be associated with attributable mortality in univariate analysis were included in a multivariate logistic regression model. RESULTS Of the 1325 patients, 689 (52%) were men and 636 (48%) women; the median age was 63 years (interquartile range [IQR] 46.5-73). Nine hundred and forty patients (70.9%) had functional or structural abnormalities of the urinary tract, 215 (16.2%) were immunocompromised, 152 (11.5%) had undergone a previous urinary tract instrumentation, and 196 (14.8%) had a long-term bladder catheter, nephrostomy tube or ureteral catheter. Urine culture was positive in 813 (67.7%) of the 1251 patients in whom it was done, and of the 1032 patients who had a blood culture, 366 (34%) had bacteraemia. Escherichia coli was the causative agent in 615 episodes (67%), Klebsiella spp in 73 (7.9%) and Proteus spp in 61 (6.6%). Of the Gram-negative bacilli isolates, 14.1% were ESBL producers. In total, 343 patients (25.9%) developed severe sepsis and 165 (12.5%) septic shock. Crude mortality was 6.5% and attributable mortality was 4.1%. Multivariate analysis showed that age >75 years (OR 2.77; 95% CI, 1.35-5.68), immunosuppression (OR 3.14; 95% CI, 1.47-6.70), and septic shock (OR 58.49; 95% CI, 26.6-128.5) were independently associated with attributable mortality. 
CONCLUSIONS cPN is associated with high morbidity and mortality and likely with substantial consumption of healthcare resources. This study highlights the factors directly associated with mortality, though further studies are needed to identify subgroups of low-risk patients amenable to outpatient management.
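The adjusted odds ratios quoted above come from a multivariate logistic regression, where each OR is exp(β) for its covariate. A minimal sketch of that relationship follows; the intercept is hypothetical, since the abstract does not report one.

```python
import math

# Odds ratios reported in the abstract; in a logistic model, OR = exp(beta),
# so the model coefficient for each covariate is log(OR).
odds_ratios = {"age_gt_75": 2.77, "immunosuppression": 3.14, "septic_shock": 58.49}
betas = {name: math.log(orr) for name, orr in odds_ratios.items()}

def predicted_probability(intercept, covariates):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(beta_i * x_i))))."""
    z = intercept + sum(betas[name] * x for name, x in covariates.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical intercept of -4.0 (NOT from the study), patient aged >75
# in septic shock without immunosuppression:
p = predicted_probability(-4.0, {"age_gt_75": 1, "immunosuppression": 0, "septic_shock": 1})
```

This only illustrates how the reported ORs map to model coefficients; absolute probabilities cannot be recovered from the abstract alone.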
Abstract:
The activity of radiopharmaceuticals in nuclear medicine is measured before patient injection with radionuclide calibrators. In Switzerland, the general requirements for quality controls are defined in a federal ordinance and a directive of the Federal Office of Metrology (METAS), which require each instrument to be verified. A set of three gamma sources (Co-57, Cs-137 and Co-60) is used to verify the response of radionuclide calibrators over the gamma energy range of their use. A beta source, a mixture of (90)Sr and (90)Y in secular equilibrium, is used as well. Manufacturers are responsible for the calibration factors. The main goal of the study was to monitor the validity of the calibration factors by using two sources: a (90)Sr/(90)Y source and a (18)F source. The three types of commercial radionuclide calibrators tested do not have a calibration factor for the mixture but only for (90)Y, so activity measurements of a (90)Sr/(90)Y source made with the (90)Y calibration factor must be corrected for the extra contribution of (90)Sr. The value of the correction factor was found to be 1.113, whereas Monte Carlo simulations of the radionuclide calibrators estimate it to be 1.117. Measurements with (18)F sources in a specific geometry were also performed. Since this radionuclide is widely used in Swiss hospitals equipped with PET and PET-CT, the metrology of (18)F is very important. The (18)F response normalized to the (137)Cs response shows that the difference from a reference value does not exceed 3% for the three types of radionuclide calibrators.
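A minimal sketch of how such a correction factor might be applied in practice. The indicated reading is hypothetical, and the direction of the correction (dividing the indicated activity by the factor to remove the (90)Sr contribution) is an assumption for illustration, not a statement of the study's convention.

```python
# Correction factors from the abstract: empirical vs. Monte Carlo estimate.
CORRECTION_MEASURED = 1.113
CORRECTION_MONTE_CARLO = 1.117

def corrected_activity(indicated_mbq, factor=CORRECTION_MEASURED):
    """Remove the assumed 90Sr extra-contribution from a reading taken
    with the 90Y calibration factor (direction of correction assumed)."""
    return indicated_mbq / factor

reading_mbq = 500.0  # hypothetical indicated activity
activity = corrected_activity(reading_mbq)

# The two estimates of the factor agree to well within 1%.
relative_disagreement = (
    abs(CORRECTION_MONTE_CARLO - CORRECTION_MEASURED) / CORRECTION_MEASURED
)
```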
Abstract:
The primary goal of this project is to demonstrate the accuracy and utility of a freezing drizzle algorithm that can be implemented on roadway environmental sensing systems (ESSs). Problems related to the occurrence of freezing precipitation range from simple traffic delays to major accidents involving fatalities. Freezing drizzle can also have economic impacts on communities through lost work hours, vehicular damage, and downed power lines. Transportation agencies have means to perform preventive and reactive roadway treatments, but freezing drizzle can be difficult to forecast accurately, or even to detect, because weather radar and surface observation networks observe these conditions poorly. The detection of freezing precipitation is problematic and requires special instrumentation and analysis. The Federal Aviation Administration's (FAA) development of aircraft anti-icing and deicing technologies has produced a freezing drizzle algorithm that utilizes air temperature data and a specialized sensor capable of detecting ice accretion. At present, however, roadway ESSs are not capable of reporting freezing drizzle. This study investigates the use of the methods developed for the FAA and the National Weather Service (NWS) within a roadway environment to detect the occurrence of freezing drizzle using a combination of icing detection equipment and available ESS sensors. The work performed in this study incorporated the algorithm initially developed, and further modified, for the FAA's aircraft-icing work, and applied it to data from standard roadway ESSs. 
The work performed in this study lays the foundation for addressing the central question of interest to winter maintenance professionals as to whether it is possible to use roadside freezing precipitation detection (e.g., icing detection) sensors to determine the occurrence of pavement icing during freezing precipitation events and the rates at which this occurs.
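The detection idea described above, combining an icing sensor with standard ESS observations, might be sketched as follows. The thresholds, field names, and precipitation classes are hypothetical illustrations, not the actual FAA/NWS algorithm.

```python
from dataclasses import dataclass

@dataclass
class EssObservation:
    air_temp_c: float        # ESS air temperature, degrees Celsius
    ice_accretion: bool      # output of a dedicated icing detection sensor
    precip_type: str         # ESS precipitation classifier: "none", "rain", "snow", ...
    precip_rate_mm_h: float  # precipitation intensity

def freezing_drizzle_flag(obs: EssObservation) -> bool:
    """Flag freezing drizzle when temperature is below freezing, ice is
    accreting, and the precipitation signature is light and not snow.
    All thresholds here are assumptions for illustration."""
    if obs.air_temp_c >= 0.0:
        return False
    if not obs.ice_accretion:
        return False
    # Drizzle is light by definition; heavier rates suggest freezing rain.
    if obs.precip_type == "snow" or obs.precip_rate_mm_h > 0.5:
        return False
    return True
```

The point of the sketch is the data-fusion structure: no single ESS sensor reports freezing drizzle, but a rule over several sensors plus an icing detector can.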
Abstract:
Prostate-specific antigen (PSA) is a marker that is commonly used in estimating prostate cancer risk. Prostate cancer is usually a slowly progressing disease, which might not cause any symptoms at all. Nevertheless, some cases of cancer are aggressive and need to be treated before they become life-threatening. However, the blood PSA concentration may also rise in benign prostate diseases, and using a single total PSA (tPSA) measurement to guide the decision on further examinations leads to many unnecessary biopsies, over-detection, and overtreatment of indolent cancers which would not require treatment. Therefore, there is a need for markers that would better separate cancer from benign disorders and would also predict cancer aggressiveness. The aim of this study was to evaluate whether intact and nicked forms of free PSA (fPSA-I and fPSA-N) or human kallikrein-related peptidase 2 (hK2) could serve as new tools in estimating prostate cancer risk. First, the immunoassays for fPSA-I and for free and total hK2 were optimized so that they would be less prone to interference caused by factors present in some blood samples. The optimized assays were shown to work well and were used to measure the marker concentrations in the clinical sample panels. The marker levels were measured in preoperative blood samples of prostate cancer patients scheduled for radical prostatectomy, and the association of the markers with cancer stage and grade was studied. It was found that, among all tested markers and their combinations, especially the ratio of fPSA-N to tPSA and the ratio of free PSA (fPSA) to tPSA were associated with both cancer stage and grade. They might be useful in predicting cancer aggressiveness, but further follow-up studies are necessary to fully evaluate the significance of the markers in this clinical setting. 
The markers tPSA, fPSA, fPSA-I and hK2 were combined in a statistical model which was previously shown to be able to reduce unnecessary biopsies when applied to large screening cohorts of men with elevated tPSA. The discriminative accuracy of this model was compared to models based on established clinical predictors, with biopsy outcome as the reference. The kallikrein model and the calculated fPSA-N concentrations (fPSA minus fPSA-I) correlated with prostate volume, and the model predicted prostate cancer at biopsy as well as the clinical models did. Hence, the measurement of kallikreins in a blood sample could replace the volume measurement, which is time-consuming, needs instrumentation and skilled personnel, and is an uncomfortable procedure. Overall, the model could simplify the estimation of prostate cancer risk. Finally, as fPSA-N seems to be an interesting new marker, a direct immunoassay for measuring fPSA-N concentrations was developed. The analytical performance was acceptable, but the rather complicated assay protocol needs to be improved before it can be used for measuring large sample panels.
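The marker arithmetic described above (fPSA-N computed as fPSA minus fPSA-I, with the fPSA-N/tPSA and fPSA/tPSA ratios) can be sketched directly; the concentrations below are hypothetical.

```python
def derived_markers(tpsa, fpsa, fpsa_i):
    """Compute fPSA-N and the ratio markers from the measured kallikreins.
    fPSA-N is defined in the text as fPSA minus fPSA-I."""
    fpsa_n = fpsa - fpsa_i
    return {
        "fPSA-N": fpsa_n,
        "fPSA-N/tPSA": fpsa_n / tpsa,
        "fPSA/tPSA": fpsa / tpsa,
    }

# Hypothetical concentrations in ng/mL, for illustration only.
m = derived_markers(tpsa=6.0, fpsa=1.2, fpsa_i=0.8)
```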
Abstract:
The aim of this dissertation is to bridge and synthesize the different streams of literature addressing ecosystem architecture through a multiple‐lens perspective. In addition, the structural properties of the architecture, and the processes to design and manage it, are examined. With this approach, the oft‐neglected actor‐structure duality is addressed, and position and structure as well as action and process come under scrutiny. Further, the developed framework and empirical evidence offer valuable insights into how firms collectively create value and individually appropriate value. The dissertation is divided into two parts. The first part comprises a literature review, as well as the conclusions of the whole study, and the second part includes six research publications. The dissertation is based on three different reasoning logics: abduction, induction and deduction; related qualitative and quantitative methodologies are utilized in the empirical examination of the phenomenon in the information and communication technology industry. The results suggest, firstly, that there are endogenous and exogenous structural properties of the ecosystem architecture. Of these, the former can be more easily influenced by a particular actor, whereas the latter are taken more or less for granted. Secondly, the exogenous ecosystem design properties influence the value creation potential of the ecosystem, whereas the endogenous ecosystem design properties influence the value appropriation potential of a particular actor in the ecosystem. Thirdly, the study suggests that there is a relationship between endogenous and exogenous structural properties, in that the endogenous properties can be leveraged to create and reconfigure the exogenous properties, whereas the exogenous properties pose opportunities and restrictions on the use of endogenous properties. 
In addition, the study suggests that there are different emergent and engineered processes to design and manage ecosystem architecture and to influence both its endogenous and exogenous structural properties. This study makes three main contributions. First, on the conceptual level, it brings coherence and direction to the fast-growing body of literature on novel inter‐organizational arrangements, such as ecosystems. It does this by bridging and synthesizing three different streams of literature, namely the boundary, design and orchestration conceptions. Second, it sets out a framework that enhances our understanding of the structural properties of ecosystem architecture; of the processes to design and manage ecosystem architecture; and of their influence on the value creation potential of the ecosystem and the value capture potential of a particular firm. Third, it offers empirical evidence of these structural properties and processes.
Abstract:
In the present paper we discuss the development of "wave-front", an instrument for determining the lower- and higher-order optical aberrations of the human eye. We also discuss the advantages that such instrumentation and techniques might bring to the ophthalmology professional of the 21st century. By shining a small light spot on the retina of subjects and observing the light that is reflected back from within the eye, we are able to quantitatively determine the amount of lower-order aberrations (astigmatism, myopia, hyperopia) and higher-order aberrations (coma, spherical aberration, etc.). We have measured artificial eyes with calibrated ametropia ranging from +5 to -5 D, with and without 2 D astigmatism with axis at 45° and 90°. We used a device known as the Hartmann-Shack (HS) sensor, originally developed for measuring the optical aberrations of optical instruments and general refracting surfaces in astronomical telescopes. The HS sensor sends information to computer software for decomposition of the wave-front aberrations into a set of Zernike polynomials. These polynomials have special mathematical properties and are more suitable in this case than the traditional Seidel polynomials. We have demonstrated that this technique is more precise than conventional autorefraction, with a root mean square error (RMSE) of less than 0.1 µm for a 4-mm diameter pupil. In terms of dioptric power this represents an RMSE of less than 0.04 D in power and 5° in axis. This precision is sufficient for customized corneal ablations, among other applications.
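One of the "special mathematical properties" that makes the Zernike basis convenient here: with orthonormal Zernike polynomials, the total RMS wavefront error is simply the root-sum-square of the coefficients, so a single RMSE figure summarizes the whole decomposition. A sketch with hypothetical coefficients:

```python
import math

# Hypothetical Zernike coefficients in micrometres (orthonormal convention),
# invented for illustration; not measurements from the instrument described.
coeffs_um = {
    "defocus": 0.05,
    "astigmatism_45": 0.03,
    "coma": 0.02,
    "spherical": 0.01,
}

# For an orthonormal basis, RMS wavefront error = sqrt(sum of squared coeffs).
rms_um = math.sqrt(sum(c * c for c in coeffs_um.values()))
```

With these example values the RMS comes out just above 0.06 µm, i.e. within the sub-0.1 µm precision regime the abstract reports.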
Abstract:
Increased awareness and evolving consumer habits have set more demanding standards for the quality and safety control of food products. The production of foodstuffs which fulfill these standards can be hampered by various low-molecular-weight contaminants, for example residues of antibiotics used in animals, or mycotoxins. The extremely small size of these compounds has hindered the development of analytical methods suitable for routine use, and the methods currently in use require expensive instrumentation and qualified personnel to operate them. There is a need for new, cost-efficient and simple assay concepts which can be used for field testing and are capable of processing large sample quantities rapidly. Immunoassays have been considered the gold standard for such rapid on-site screening methods. The introduction of directed antibody engineering and in vitro display technologies has facilitated the development of novel antibody-based methods for the detection of low-molecular-weight food contaminants. The primary aim of this study was to generate and engineer antibodies against low-molecular-weight compounds found in various foodstuffs. The three antigen groups selected as targets of antibody development cause food safety and quality defects in a wide range of products: 1) fluoroquinolones, a family of synthetic broad-spectrum antibacterial drugs used to treat a wide range of human and animal infections; 2) deoxynivalenol, a type B trichothecene mycotoxin and a widely recognized problem for crops and animal feeds globally; and 3) skatole, or 3-methylindole, one of the two compounds responsible for boar taint, found in the meat of monogastric animals. This study describes the generation and engineering of antibodies with versatile binding properties against low-molecular-weight food contaminants, and the subsequent development of immunoassays for the detection of the respective compounds.
Abstract:
The full version of this thesis is available only for individual consultation at the Music Library of the Université de Montréal (www.bib.umontreal.ca/MU).
Abstract:
In this article, we provide an initial insight into the study of machine intelligence (MI) and what it means for a machine to be intelligent. We discuss how MI has progressed to date and consider future scenarios as realistically and logically as possible. To do this, we unravel one of the major stumbling blocks to the study of MI, which is the field that has become widely known as "artificial intelligence".
Abstract:
Recent developments in instrumentation and facilities for sample preparation have resulted in sharply increased interest in the application of neutron diffraction. Of particular interest are combined approaches in which neutron methods are used in parallel with X-ray techniques. Two distinct examples are given. The first is a single-crystal study of an A-DNA structure formed by the oligonucleotide d(AGGGGCCCCT)2, showing evidence of unusual base protonation that is not visible by X-ray crystallography. The second is a solution scattering study of the interaction of a bisacridine derivative with the human telomeric sequence d(AGGGTTAGGGTTAGGGTTAGGG) and illustrates the differing effects of NaCl and KCl on this interaction.
Abstract:
Global climate change results from a small yet persistent imbalance between the amount of sunlight absorbed by Earth and the thermal radiation emitted back to space. An apparent inconsistency has been diagnosed between interannual variations in the net radiation imbalance inferred from satellite measurements and the upper-ocean heating rate from in situ measurements, and this inconsistency has been interpreted as 'missing energy' in the system. Here we present a revised analysis of net radiation at the top of the atmosphere from satellite data, and we estimate ocean heat content based on three independent sources. We find that the difference between the heat balance at the top of the atmosphere and the upper-ocean heat content change is not statistically significant when accounting for observational uncertainties in ocean measurements, given transitions in instrumentation and sampling. Furthermore, variability in Earth's energy imbalance relating to the El Niño-Southern Oscillation is found to be consistent within observational uncertainties among the satellite measurements, a reanalysis model simulation and one of the ocean heat content records. We combine satellite data with ocean measurements to depths of 1,800 m, and show that between January 2001 and December 2010, Earth has been steadily accumulating energy at a rate of 0.50 ± 0.43 W m−2 (uncertainties at the 90% confidence level). We conclude that energy storage is continuing to increase in the sub-surface ocean.
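As a back-of-envelope check on the quoted rate, 0.50 W m−2 integrated over Earth's whole surface for the decade 2001-2010 corresponds to roughly 8 × 10²² J; the surface-area figure below is an approximation introduced here, not a value from the abstract.

```python
# Integrate the reported flux imbalance over area and time:
# energy = flux (W m^-2) * area (m^2) * duration (s).
EARTH_SURFACE_M2 = 5.1e14   # approximate surface area of Earth
SECONDS_PER_YEAR = 3.156e7  # approximate
YEARS = 10                  # January 2001 - December 2010

rate_w_m2 = 0.50            # central estimate from the abstract
total_joules = rate_w_m2 * EARTH_SURFACE_M2 * YEARS * SECONDS_PER_YEAR
# -> on the order of 8e22 J accumulated over the decade
```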
Abstract:
In this paper we report coordinated multispacecraft and ground-based observations of a double substorm onset close to Scandinavia on November 17, 1996. The Wind and Geotail spacecraft, which were located in the solar wind and the subsolar magnetosheath, respectively, recorded two periods of southward-directed interplanetary magnetic field (IMF). These periods were separated by a short northward IMF excursion associated with a solar wind pressure pulse, which compressed the magnetosphere to such a degree that Geotail was for a short period located outside the bow shock. The first period of southward IMF initiated a substorm growth phase, which was clearly detected by an array of ground-based instrumentation and by Interball in the northern tail lobe. A first substorm onset occurred in close relation to the solar wind pressure pulse impinging on the magnetopause and almost simultaneously with the northward turning of the IMF. However, this substorm did not fully develop. In clear association with the expansion of the magnetosphere at the end of the pressure pulse, the auroral expansion stopped, and the northern sky cleared. We present evidence that the change in the solar wind dynamic pressure actively quenched the energy available for any further substorm expansion. Directly after this period, the magnetometer network detected signatures of a renewed substorm growth phase, which was initiated by the second southward turning of the IMF and which finally led to a second, and this time complete, substorm intensification. We have used our multipoint observations in order to understand the solar wind control of the substorm onset and substorm quenching. The relative timings between the observations on the various satellites and on the ground were used to infer a possible causal relationship between the solar wind pressure variations and the consequent substorm development. 
Furthermore, using a relatively simple algorithm to model the tail lobe field and the total tail flux, we show that there indeed exists a close relationship between the relaxation of a solar wind pressure pulse, the reduction of the tail lobe field, and the quenching of the initial substorm.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)