Abstract:
Past river run-off is an important measure for the continental hydrological cycle and the assessment of freshwater input into the ocean. However, paleosalinity reconstructions applying different proxies in parallel often show offsets between the respective methods. Here, we compare the established foraminiferal Ba/Ca and δ18Owater salinity proxies for their capability to record the highly seasonal Orinoco freshwater plume in the eastern Caribbean. For this purpose we obtained a data set comprising Ba/Ca and δ18Owater determined on multiple species of planktonic foraminifera from core tops distributed around the Orinoco river mouth. Our findings indicate that interpretations based on either proxy could lead to different conclusions. In particular, Ba/Ca and δ18Owater diverge in their spatial distribution due to different governing factors. Apparently, the Orinoco freshwater plume is best tracked by Ba/Ca ratios of G. ruber (pink and sensu lato morphotypes), while δ18Owater based on the same species is more closely related to the local precipitation-evaporation balance, which overprints the riverine freshwater contribution. Other shallow-dwelling species (G. sacculifer, O. universa) show a muted response to the freshwater discharge, most likely due to their ecological and habitat preferences. Extremely high Ba/Ca ratios recorded by G. ruber are attributed to Ba2+ desorption from suspended matter derived from the Orinoco. Samples taken most proximal to the freshwater source do not show pronounced Ba/Ca or δ18Owater anomalies. Here, the suspension-loaded freshwater lid that develops during maximum discharge suppresses foraminiferal populations. Both proxies are therefore biased towards dry season conditions at these sites, when surface salinity is only minimally reduced.
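To make the δ18Owater-salinity link concrete: reconstructions of this kind typically assume a linear mixing line between a marine and a riverine endmember. A minimal sketch, with placeholder endmember values rather than the study's regional calibration:

```python
# Hypothetical sketch: salinity from d18O_water via a two-endmember mixing line.
# Endmember values below are illustrative placeholders, not the study's calibration.

D18O_SEAWATER = 1.0   # per mil, open-Caribbean surface endmember (assumed)
D18O_RIVER = -4.0     # per mil, Orinoco freshwater endmember (assumed)
S_SEAWATER = 36.0     # practical salinity of the marine endmember (assumed)

def salinity_from_d18o(d18o_water: float) -> float:
    """Linear mixing: salinity scales with the seawater fraction."""
    # Fraction of river water in the mixture (0 = pure seawater, 1 = pure river).
    f_river = (D18O_SEAWATER - d18o_water) / (D18O_SEAWATER - D18O_RIVER)
    return S_SEAWATER * (1.0 - f_river)

print(salinity_from_d18o(0.5))  # ~32.4 for these placeholder endmembers
```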
Abstract:
This paper details a method of determining the uncertainty of dimensional measurement for a three-dimensional coordinate measurement machine. An experimental procedure was developed to compare three-dimensional coordinate measurements with calibrated reference points. The reference standard used to calibrate these reference points was a fringe-counting interferometer, with the multilateration technique employed to establish three-dimensional coordinates. This is an extension of the established technique of comparing measured lengths with calibrated lengths. Specifically, a distributed coordinate measurement device was tested consisting of a network of Rotary-Laser Automatic Theodolites (R-LATs); this system is known commercially as indoor GPS (iGPS). The method was found to be practical and established that the expanded uncertainty of the basic iGPS system was approximately 1 mm at a 95% confidence level.
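The multilateration step can be sketched as a least-squares fit of an unknown point to measured ranges from stations at known positions. A minimal illustration with invented geometry, not the actual iGPS network layout:

```python
# Illustrative sketch of multilateration: recover a 3-D point from distance
# measurements to stations at known positions (a least-squares fit).
# Station layout and distances are made up for the example.
import numpy as np
from scipy.optimize import least_squares

stations = np.array([[0.0, 0.0, 0.0],
                     [5.0, 0.0, 0.0],
                     [0.0, 5.0, 0.0],
                     [0.0, 0.0, 5.0]])
true_point = np.array([1.0, 2.0, 1.5])
distances = np.linalg.norm(stations - true_point, axis=1)  # "interferometer" ranges

def residuals(p):
    # Difference between modelled and measured ranges.
    return np.linalg.norm(stations - p, axis=1) - distances

fit = least_squares(residuals, x0=np.zeros(3))
print(fit.x)  # ~[1.0, 2.0, 1.5]
```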
Abstract:
Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast acquisition imaging device with higher spatial resolution and a higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU), where higher HU values represent higher density. High-density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This problem of increased HU values due to the presence of metal is referred to as metal artefact. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on severe artefacts in CT images, using the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.
Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), while the Dual-Energy imaging method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation of CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc and single-arc, using the Volumetric Modulated Arc Therapy (VMAT) technique were designed to avoid or minimize influences from high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning treatment volume (PTV) were compared and the homogeneity index (HI) calculated.
Results: (1) Without the GSI-based MAR application, a percent error between the mean dose and the absolute dose ranging from 3.4% to 5.7% per fraction was observed. In contrast, the error decreased to a range of 0.09-2.3% per fraction with the GSI-based MAR algorithm. There was a percent difference ranging from 1.7% to 4.2% per fraction between plans with and without the GSI-based MAR algorithm. (2) A difference of 0.1-3.2% was observed for the maximum dose values, 1.5-10.4% for the minimum doses, and 1.4-1.7% for the mean doses. Homogeneity indexes (HI) ranging from 0.065 to 0.068 for the dual-energy method and from 0.063 to 0.141 with the projection-based MAR algorithm were also calculated.
Conclusion: (1) The percent error without the GSI-based MAR algorithm may be as high as 5.7%. This error undermines the goal of radiation therapy to provide precise treatment. The GSI-based MAR algorithm is therefore desirable due to its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the dual-energy method almost achieved the desirable null value. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than plans with or without the GE MAR algorithm.
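For reference, the two reported metrics can be sketched as below, assuming HI = (Dmax - Dmin)/Dmean, one common convention that this study may not use exactly; the dose values are placeholders:

```python
# Sketch of the two quantities reported above, under assumed definitions:
# percent error between a calculated mean dose and a measured reference dose,
# and a homogeneity index of the form HI = (Dmax - Dmin) / Dmean (one common
# convention; the study may use a different one). Numbers are illustrative.

def percent_error(calculated: float, measured: float) -> float:
    return abs(calculated - measured) / measured * 100.0

def homogeneity_index(d_max: float, d_min: float, d_mean: float) -> float:
    # 0 would indicate a perfectly homogeneous dose to the PTV.
    return (d_max - d_min) / d_mean

print(percent_error(2.05, 2.00))             # 2.5 (% per fraction)
print(homogeneity_index(2.10, 1.96, 2.02))   # ~0.069
```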
Abstract:
The fast developing international trade of products based on traditional knowledge and their value chains has become an important aspect of the ethnopharmacological debate. The structure and diversity of value chains and their impact on the phytochemical composition of herbal medicinal products have been overlooked in the debate about quality problems in transnational trade. Different government policies and regulations governing trade in herbal medicinal products impact on such value chains. Medicinal Rhodiola species, including Rhodiola rosea L. and Rhodiola crenulata (Hook.f. & Thomson) H.Ohba, have been used widely in Europe and Asia as traditional herbal medicines with numerous claims for their therapeutic effects. Faced with resource depletion and environmental destruction, R. rosea and R. crenulata are becoming endangered, making them more economically valuable to collectors and middlemen, and also increasing the risk of adulteration and low quality. We compare the phytochemical differences among Rhodiola raw materials available on the market to provide a practical method for Rhodiola authentication and the detection of potential adulterant compounds. Samples were collected from Europe and Asia, and nuclear magnetic resonance spectroscopy coupled with multivariate analysis software and high performance thin layer chromatography techniques were used to analyse the samples. A method was developed to quantify the amount of adulterant species contained within mixtures. We compared the phytochemical composition of collected Rhodiola samples to authenticated samples. Rosavin and rosarin were mainly present in R. rosea, whereas crenulatin was only present in R. crenulata. 30% of the Rhodiola samples purchased from the Chinese market were adulterated by other Rhodiola spp., and 7% of the raw-material samples were not labelled satisfactorily. The utilisation of both 1H-NMR and HPTLC methods provided an integrated analysis of the phytochemical differences and a novel identification method for R. rosea and R. crenulata. Using 1H-NMR spectroscopy it was possible to quantify the presence of R. crenulata in admixtures with R. rosea. This quantitative technique could be used in the future to assess a variety of herbal drugs and products. This project also highlights the need to further study the links between producers and consumers in national and transnational trade.
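The quantification of admixtures from 1H-NMR data can be sketched as a linear calibration of a marker-signal integral against known adulterant fractions; the calibration points below are invented for illustration:

```python
# Hedged sketch of quantifying an adulterant from 1H-NMR marker signals:
# fit a line of marker-integral vs. known admixture fraction, then invert
# it for an unknown sample. Calibration values are invented for illustration.
import numpy as np

# Known w/w fractions of R. crenulata in calibration mixtures with R. rosea.
fractions = np.array([0.0, 0.25, 0.50, 0.75, 1.0])
# Integral of a crenulatin signal normalised to an internal standard (made up).
integrals = np.array([0.01, 0.26, 0.49, 0.77, 1.02])

slope, intercept = np.polyfit(fractions, integrals, 1)

def crenulata_fraction(integral: float) -> float:
    return (integral - intercept) / slope

print(crenulata_fraction(0.40))  # ~0.39 for this toy calibration
```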
Abstract:
Harmful algal blooms (HABs) are natural global phenomena that are increasing in severity and extent. Incidents have many economic, ecological and human health impacts. Monitoring and providing early warning of toxic HABs are critical for protecting public health. Current monitoring programmes include measuring the number of toxic phytoplankton cells in the water and biotoxin levels in shellfish tissue. As these efforts are demanding and labour intensive, methods which improve their efficiency are essential. This study compares the utilisation of a multitoxin surface plasmon resonance (multitoxin SPR) biosensor with enzyme-linked immunosorbent assay (ELISA) and analytical methods such as high performance liquid chromatography with fluorescence detection (HPLC-FLD) and liquid chromatography–tandem mass spectrometry (LC–MS/MS) for toxic HAB monitoring efforts in Europe. Seawater samples (n = 256) from European waters, collected 2009–2011, were analysed for the biotoxins saxitoxin and analogues, okadaic acid and dinophysistoxins 1/2 (DTX1/DTX2), and domoic acid, responsible for paralytic shellfish poisoning (PSP), diarrheic shellfish poisoning (DSP) and amnesic shellfish poisoning (ASP), respectively. Biotoxins were detected mainly in samples from Spain and Ireland; France and Norway appeared to have the lowest number of toxic samples. Both the multitoxin SPR biosensor and the RNA microarray were more sensitive at detecting toxic HABs than standard light-microscopy phytoplankton monitoring. Each pair of detection methods was compared, with overall agreement, based on statistical 2 × 2 comparison tables, ranging between 32% and 74% across the three toxin families, illustrating that no single testing method is an ideal solution. An efficient early warning monitoring system for the detection of toxic HABs could therefore be achieved by combining the multitoxin SPR biosensor and the RNA microarray.
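The "overall agreement" statistic from a 2 × 2 comparison table is simply the share of samples on which both methods agree. A minimal sketch with made-up counts:

```python
# Sketch of the "overall agreement" statistic from a 2x2 comparison table:
# two methods each call a sample toxic or non-toxic; agreement is the share
# of samples on the diagonal. Counts below are illustrative, not the study's.

def overall_agreement(both_pos: int, both_neg: int,
                      only_a_pos: int, only_b_pos: int) -> float:
    total = both_pos + both_neg + only_a_pos + only_b_pos
    return (both_pos + both_neg) / total * 100.0

# e.g. SPR biosensor vs. LC-MS/MS on 256 samples (made-up counts):
print(overall_agreement(both_pos=60, both_neg=130, only_a_pos=40, only_b_pos=26))
# -> ~74.2% agreement
```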
Abstract:
Investigating the variability of Agulhas leakage, the volume transport of water from the Indian Ocean to the South Atlantic Ocean, is highly relevant due to its potential contribution to the Atlantic Meridional Overturning Circulation as well as the global circulation of heat and salt, and hence global climate. Quantifying Agulhas leakage is challenging due to the non-linear nature of this process; current observations are insufficient to estimate its variability, and ocean models all have biases in this region, even at high resolution. An Eulerian threshold integration method is developed to examine the mechanisms of Agulhas leakage variability in six ocean model simulations of varying resolution. This intercomparison, based on the circulation and thermohaline structure at the Good Hope line, a transect to the southwest of the southern tip of Africa, is used to identify features that are robust regardless of the model used, and takes into account the thermohaline biases of each model. Compared with a passive tracer method, 60% of the magnitude of Agulhas leakage is captured and more than 80% of its temporal fluctuations, suggesting that the method is appropriate for investigating the variability of Agulhas leakage. In all simulations but one, the major driver of variability is associated with mesoscale features passing through the section. High resolution (<1/10 deg.) hindcast models agree on the temporal (2–4 cycles per year) and spatial (300–500 km) scales of these features, corresponding to observed Agulhas rings. Coarser resolution models (<1/4 deg.) reproduce similar time scales of variability of Agulhas leakage in spite of their difficulties in representing the properties of Agulhas rings. A coarse resolution climate model (2 deg.) does not resolve the spatio-temporal mechanism of variability of Agulhas leakage; hence it is expected to underestimate the contribution of the Agulhas Current System to climate variability.
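An Eulerian threshold integration of the kind described can be sketched as summing cross-section transport over cells that satisfy a water-mass criterion; the grid, velocities, and threshold below are placeholders, not the study's configuration:

```python
# Minimal sketch of an Eulerian threshold integration across a section:
# sum cross-section transport only over cells whose tracer value marks
# Indian-Ocean (Agulhas) water. Grid, velocities and the temperature
# threshold are invented placeholders.
import numpy as np

nx, nz = 50, 30
dx, dz = 10e3, 50.0                       # cell width (m) and thickness (m)
rng = np.random.default_rng(0)
v = rng.normal(0.0, 0.2, (nz, nx))        # cross-section velocity (m/s)
temp = rng.uniform(8.0, 22.0, (nz, nx))   # tracer, e.g. temperature (deg C)

THRESHOLD = 14.0                          # "Agulhas water" criterion (assumed)
mask = temp > THRESHOLD
transport_sv = np.sum(v[mask]) * dx * dz / 1e6  # Sverdrups (1 Sv = 1e6 m^3/s)
print(f"leakage-type transport: {transport_sv:.2f} Sv")
```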
Abstract:
In the context of products from certain regions or countries being banned because of an identified or non-identified hazard, proof of geographical origin is essential with regard to feed and food safety issues. Usually, the product labeling of an affected feed lot shows its origin, and the paper documentation shows traceability. Incorrect product labeling is common in embargo situations, however, and alternative analytical strategies for controlling feed authenticity are therefore needed. In this study, distillers' dried grains and solubles (DDGS) were chosen as the product on which to base a comparison of analytical strategies aimed at identifying the most appropriate one. Various analytical techniques were investigated for their ability to authenticate DDGS, including spectroscopic and spectrometric techniques combined with multivariate data analysis, as well as proven techniques for authenticating food, such as DNA analysis and stable isotope ratio analysis. An external validation procedure (called the system challenge) was used to analyze sample sets blind and to compare analytical techniques. All the techniques were adapted so as to be applicable to the DDGS matrix. They produced positive results in determining the botanical origin of DDGS (corn vs. wheat), and several of them were able to determine the geographical origin of the DDGS in the sample set. The maintenance and extension of the databanks generated in this study through the analysis of new authentic samples from a single location are essential in order to monitor developments and processing that could affect authentication.
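The "spectroscopic techniques combined with multivariate data analysis" step can be sketched, under strong simplification, as dimensionality reduction followed by classification; the spectra below are simulated stand-ins, not DDGS measurements:

```python
# Hedged sketch of a multivariate authentication step: reduce spectra with
# PCA and classify botanical origin (corn vs. wheat DDGS). The "spectra"
# are simulated; a real workflow would use measured NIR/MS profiles.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
corn = rng.normal(1.0, 0.1, (20, 200))    # 20 simulated corn DDGS "spectra"
wheat = rng.normal(1.2, 0.1, (20, 200))   # 20 simulated wheat DDGS "spectra"
X = np.vstack([corn, wheat])
y = ["corn"] * 20 + ["wheat"] * 20

model = make_pipeline(PCA(n_components=5), KNeighborsClassifier(3))
model.fit(X, y)
print(model.predict(rng.normal(1.2, 0.1, (1, 200))))  # -> ['wheat']
```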
Abstract:
BACKGROUND: KRAS mutation testing is required to select patients with metastatic colorectal cancer (CRC) to receive anti-epidermal growth factor receptor antibodies, but the optimal KRAS mutation test method is uncertain. METHODS: We conducted a two-site comparison of two commercial KRAS mutation kits - the cobas KRAS Mutation Test and the Qiagen therascreen KRAS Kit - and Sanger sequencing. A panel of 120 CRC specimens was tested with all three methods. The agreement between the cobas test and each of the other methods was assessed. Specimens with discordant results were subjected to quantitative massively parallel pyrosequencing (MPP). DNA blends were tested to determine detection rates at 5% mutant alleles. RESULTS: Reproducibility of the cobas test between sites was 98%. Six mutations were detected by cobas that were not detected by Sanger, and five were confirmed by MPP. The cobas test detected eight mutations which were not detected by the therascreen test, and seven were confirmed by MPP. Detection rates with 5% mutant DNA blends were 100% for the cobas and therascreen tests and 19% for Sanger. CONCLUSION: The cobas test was reproducible between sites, and detected several mutations that were not detected by the therascreen test or Sanger. Sanger sequencing had poor sensitivity for low levels of mutation.
Abstract:
The viscosity of ionic liquids (ILs) has been modeled as a function of temperature at atmospheric pressure using a new method based on the UNIFAC–VISCO method. This model extends the calculations previously reported by our group (see Zhao et al., J. Chem. Eng. Data 2016, 61, 2160–2169), which used 154 experimental viscosity data points of 25 ionic liquids for regression of a set of binary interaction parameters and ion Vogel–Fulcher–Tammann (VFT) parameters. Discrepancies in the experimental data for the same IL affect the quality of the correlation and thus the development of the predictive method. In this work, mathematical gnostics was used to analyze the experimental data from different sources and recommend one set of reliable data for each IL. These recommended data (819 data points in total) for 70 ILs were correlated using this model to obtain an extended set of binary interaction parameters and ion VFT parameters, with a regression accuracy of 1.4%. In addition, 966 experimental viscosity data points for 11 binary mixtures of ILs were collected from the literature to evaluate this model. The binary data consist of 128 training data points used for the optimization of binary interaction parameters and 838 test data points used for comparison with the predicted values. The relative average absolute deviation (RAAD) for training and test is 2.9% and 3.9%, respectively.
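For concreteness, the VFT correlation and the RAAD statistic used to score it can be sketched as follows; the parameter values are illustrative, not the paper's fitted ion parameters:

```python
# Sketch of the two building blocks named above: a Vogel-Fulcher-Tammann
# (VFT) viscosity correlation and the relative average absolute deviation
# (RAAD) used to score it. All numbers below are illustrative.
import numpy as np

def vft_viscosity(T, eta0, B, T0):
    """VFT equation: eta = eta0 * exp(B / (T - T0)), T in K, eta in mPa*s."""
    return eta0 * np.exp(B / (T - T0))

def raad(calc, exp):
    """Relative average absolute deviation, in percent."""
    return np.mean(np.abs((calc - exp) / exp)) * 100.0

T = np.array([298.15, 313.15, 333.15])
exp_eta = np.array([34.0, 18.5, 9.8])                    # made-up measurements
calc_eta = vft_viscosity(T, eta0=0.18, B=700.0, T0=165.0)  # made-up parameters
print(calc_eta, raad(calc_eta, exp_eta))
```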
Abstract:
Numerical predictions of the turbulent flow and heat transfer in a stationary duct with square ribs angled 45° to the main flow direction are presented. The ratio of rib height to channel hydraulic diameter is 0.1, and the ratio of rib pitch to rib height is 10. The calculations were carried out for a bulk Reynolds number of 50,000. The flows generated by ribs are dominated by separating and reattaching shear layers with vortex shedding and secondary flows in the cross-section. A hybrid RANS-LES approach is adopted to simulate such flows at a reasonable computational cost. The capabilities of various versions of the DES method, which depend on the underlying RANS model (DES-SA, DES-RKE, and DES-SST), have been compared and validated against experiment. A significant effect of the RANS model on the accuracy of the DES prediction is shown. The DES-SST method, which was able to reproduce the correct physics of flow and heat transfer in a ribbed duct, showed better performance than the others.
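The DES idea underlying all three variants can be sketched by its defining length-scale switch, l_DES = min(l_RANS, C_DES * Delta); the constant and grid values below are typical rather than those of the cited simulations:

```python
# Sketch of the core DES mechanism referenced above: the model length scale
# switches from the RANS scale near walls to a grid-based LES scale away
# from them, l_DES = min(l_RANS, C_DES * Delta). Values are illustrative.

C_DES = 0.65  # typical calibration constant (model dependent)

def des_length_scale(l_rans: float, dx: float, dy: float, dz: float) -> float:
    delta = max(dx, dy, dz)          # common LES grid scale used in DES
    return min(l_rans, C_DES * delta)

# Near a wall (small RANS scale) RANS wins; in the core LES takes over:
print(des_length_scale(l_rans=0.001, dx=0.01, dy=0.01, dz=0.01))  # 0.001
print(des_length_scale(l_rans=0.080, dx=0.01, dy=0.01, dz=0.01))  # 0.0065
```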
Abstract:
Therapists' process notes - written descriptions of a session produced shortly afterwards from memory - hold a significant role in child and adolescent psychoanalytic psychotherapy. They are central in training, in supervision, and in developing one's understanding through self-supervision and forms of psychotherapy research. This thesis examines such process notes through a comparison with audio recordings of the same sessions. In so doing, it aims to generate theory that might illuminate the causes of significantly patterned discrepancies between the notes and recordings, in order to understand more about the processes at work in psychoanalytic psychotherapy and to explore the nature of process notes, their values and limitations. The literature searches conducted revealed few relevant studies. All identified studies that compare process notes with recordings of sessions seek to quantify the differences between the two forms of recording. Unlike these, this thesis explores the meaning of the differences between process notes and recordings through qualitative data analysis. Using psychoanalytically informed grounded theory, nine sets of process notes and recordings from three different psychoanalytic psychotherapists are analysed in total. The analysis identifies eight core categories of findings. Initial theories are developed from these categories, most significantly concerning the role and influence of a 'core transference dynamic' between therapist and patient. Further theory is developed on the nature and function of process notes as a means for the therapist's conscious and unconscious processing of the session, as well as on the nature of the influence of the relationships – both internal and external – within which they are written. In the light of the findings, a proposal is made for a new approach to learning about the patient and clinical work, 'the comparison method' (supervision involving a comparison of process notes and recordings), and, in particular, for its inclusion within the training of psychoanalytic psychotherapists. Further recommendations for research are also made.
Abstract:
Highway bridges are of great value to a country because, in the event of a natural disaster, they may serve as lifelines. As they are vulnerable under significant seismic loads, different methods can be considered to design resistant highway bridges and to rehabilitate existing ones. In this study, base isolation is considered as one efficient method in this regard, which in some cases significantly reduces the seismic load effects on the structure. By reducing the ductility demand on the structure without a notable increase in strength, the structure is designed to remain elastic under seismic loads. The problem associated with isolated bridges, especially those with elastomeric bearings, can be their excessive displacements under service and seismic loads. This can defy the purpose of using elastomeric bearings for small- to medium-span typical bridges, where expansion joints and clearances may result in a significant increase in initial and maintenance costs. Thus, supplementing the structure with dampers that provide some stiffness can serve as a solution, which in turn, however, may increase the structure's base shear. The main objective of this thesis is to provide a simplified method for the evaluation of optimal parameters for dampers in isolated bridges. Firstly, through a parametric study, some directions are given for the use of simple isolation devices such as elastomeric bearings to rehabilitate existing bridges of high importance. Parameters such as the geometry of the bridge, code provisions, and the type of soil on which the structure is constructed were applied to a typical two-span bridge. It is concluded that the stiffness of the substructure, the soil type, and special provisions in the code can determine whether base isolation is suitable for retrofitting a bridge. Secondly, based on the elastic response coefficient of isolated bridges, a simplified design method of dampers for seismically isolated regular highway bridges is presented. By setting objectives for the reduction of displacement and the variation of base shear, the required stiffness and damping of a hysteretic damper can be determined. Numerical analyses of a model of a typical two-span bridge were then carried out to verify the effectiveness of the method. The method was used to identify equivalent linear parameters and, subsequently, nonlinear parameters of the hysteretic damper for various designated scenarios of displacement and base shear requirements. Comparison of the results of the nonlinear numerical model with and without the damper showed that the method is sufficiently accurate. Finally, an innovative and simple hysteretic steel damper was designed. Five specimens were fabricated from two steel grades and tested alongside a full-scale elastomeric isolator in the structural laboratory of the Université de Sherbrooke. The test procedure was to characterize the specimens by cyclic displacement-controlled tests and subsequently to test them by the real-time dynamic substructuring (RTDS) method. The test results were then used to establish a numerical model of the system, which was subjected to nonlinear time-history analyses under several earthquakes. The outcomes of the experimental and numerical studies showed acceptable conformity with the simplified method.
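The equivalent-linearization step mentioned above can be sketched for a bilinear hysteretic damper using the standard secant-stiffness and energy-dissipation formulas; the parameter values below are illustrative, not the thesis's design values:

```python
# Hedged sketch of equivalent linear parameters for a bilinear hysteretic
# damper at a target displacement: secant (effective) stiffness, and an
# equivalent viscous damping ratio xi = E_D / (2*pi*K_eff*d^2), where E_D
# is the energy dissipated per cycle. Parameter values are illustrative.
import math

def equivalent_linear(k1: float, k2: float, dy: float, d: float):
    """k1: initial stiffness (N/m), k2: post-yield stiffness (N/m),
    dy: yield displacement (m), d: target displacement (m)."""
    fy = k1 * dy                      # yield force
    f = fy + k2 * (d - dy)            # force at the target displacement
    k_eff = f / d                     # effective (secant) stiffness
    q = fy - k2 * dy                  # characteristic strength (force intercept)
    e_d = 4.0 * q * (d - dy)          # energy dissipated per cycle (loop area)
    xi_eq = e_d / (2.0 * math.pi * k_eff * d**2)
    return k_eff, xi_eq

k_eff, xi_eq = equivalent_linear(k1=20e6, k2=2e6, dy=0.005, d=0.05)
print(f"K_eff = {k_eff/1e6:.2f} MN/m, xi_eq = {xi_eq:.2f}")  # ~3.80 MN/m, ~0.27
```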
Abstract:
Software protection is an essential aspect of information security, needed to withstand malicious activities on software and to preserve software assets. However, software developers still lack a methodology for assessing deployed protections. To address this, we present a novel attack-simulation-based software protection assessment method to assess and compare various protection solutions. Our solution relies on Petri Nets to specify and visualize attack models, and we developed a Monte Carlo based approach to simulate attack processes and to deal with uncertainty. Based on this simulation and estimation, a novel protection comparison model is proposed to compare different protection solutions. Lastly, our attack-simulation-based software protection assessment method is presented. We illustrate our method by means of a software protection assessment process to demonstrate that our approach can provide a suitable software protection assessment for developers and software companies.
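The Monte Carlo idea can be sketched by sampling many attack runs over a simple chain of attack steps (a stand-in for the paper's Petri-net models), each with an uncertain success probability and effort; all step data below are invented:

```python
# Hedged sketch of Monte Carlo attack simulation: repeatedly sample a
# sequential attack, where each protection-bypassing step succeeds with
# some probability and costs some effort, then estimate the success rate
# and the mean effort of successful attacks. Step data are invented.
import random

# (success probability, effort in hours) for each attack step
steps = [(0.9, 2.0), (0.5, 8.0), (0.7, 4.0)]

def simulate_attack(rng: random.Random) -> tuple[bool, float]:
    effort = 0.0
    for p, hours in steps:
        effort += hours
        if rng.random() > p:
            return False, effort       # attacker fails at this step
    return True, effort

rng = random.Random(42)
runs = [simulate_attack(rng) for _ in range(100_000)]
successes = [e for ok, e in runs if ok]
print(f"success rate ~ {len(successes) / len(runs):.3f}")    # ~0.315 = 0.9*0.5*0.7
print(f"mean effort of successful runs ~ {sum(successes)/len(successes):.1f} h")
```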