926 results for Total Analysis Systems
The use of genetic correlations to evaluate associations between SNP markers and quantitative traits
Abstract:
Open-pollinated progeny of Corymbia citriodora established in replicated field trials were assessed for stem diameter, wood density, and pulp yield prior to genotyping single nucleotide polymorphisms (SNPs) and testing the significance of associations between markers and assessment traits. Multiple individuals within each family were genotyped and phenotyped, which facilitated a comparison of standard association testing methods and an alternative method developed to relate markers to additive genetic effects. Narrow-sense heritability estimates indicated there was significant additive genetic variance within this population for assessment traits (ĥ² = 0.28 to 0.44), and genetic correlations between the three traits were negligible to moderate (r_G = 0.08 to 0.50). The significance of association tests (p values) was compared for four different analyses based on two different approaches: (1) two software packages were used to fit standard univariate mixed models that include SNP fixed effects, and (2) bivariate and multivariate mixed models including each SNP as an additional selection trait were used. Within either the univariate or multivariate approach, correlations between the tests of significance approached +1; however, correspondence between the two approaches was less strong, although between-approach correlations remained significantly positive. Similar SNP markers would be selected using multivariate analyses and standard marker-trait association methods, where the former facilitates integration into the existing genetic analysis systems of applied breeding programs and may be used with either single markers or indices of markers created with genomic selection processes.
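The kind of comparison described above (agreement of SNP significance between two modelling routes applied to the same data) can be illustrated with a small simulation. This is a hedged sketch only: the genotypes, trait and family structure are simulated, and statsmodels' OLS and MixedLM stand in for the two analyses; the paper's pedigree-based bivariate/multivariate models are not reproduced here.

```python
# Sketch: compare SNP p-values from two analyses of the same simulated data
# (plain regression vs. a mixed model with a family random effect) and measure
# how well their significance rankings agree. Data and models are illustrative.
import numpy as np
import statsmodels.api as sm
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_fam, n_per_fam, n_snp = 40, 8, 200
family = np.repeat(np.arange(n_fam), n_per_fam)
geno = rng.binomial(2, 0.3, size=(n_fam * n_per_fam, n_snp)).astype(float)
fam_effect = rng.normal(0.0, 1.0, n_fam)[family]          # additive family effect
trait = 0.3 * geno[:, 0] + fam_effect + rng.normal(0.0, 1.0, len(family))

p_ols, p_mixed = [], []
for j in range(n_snp):
    X = sm.add_constant(geno[:, j])
    p_ols.append(np.asarray(sm.OLS(trait, X).fit().pvalues)[1])
    m = sm.MixedLM(trait, X, groups=family).fit(reml=True)
    p_mixed.append(np.asarray(m.pvalues)[1])

rho, _ = spearmanr(-np.log10(p_ols), -np.log10(p_mixed))
print(f"rank correlation of -log10(p) between the two analyses: {rho:.2f}")
```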
Developing standardized methods to assess cost of healthy and unhealthy (current) diets in Australia
Abstract:
Unhealthy diets contribute at least 14% to Australia's disease burden and are driven by ‘obesogenic’ food environments. Compliance with dietary recommendations is particularly poor amongst disadvantaged populations, including low socioeconomic groups, those living in rural/remote areas and Aboriginal and Torres Strait Islanders. The perception that healthy foods are expensive is a key barrier to healthy choices and a major determinant of diet-related health inequities. Available state/regional/local data (limited and non-comparable) suggest that, despite basic healthy foods not incurring GST, the cost of healthy food is higher and has increased more rapidly than unhealthy food over the last 15 years in Australia. However, there were no nationally standardised tools or protocols to benchmark, compare or monitor food prices and affordability in Australia. Globally, we are leading work to develop and test approaches to assess the price differential of healthy and less-healthy (current) diets under the food price module of the International Network for Food and Obesity/non-communicable diseases (NCDs) Research, Monitoring and Action Support (INFORMAS). This presentation describes contextualisation of the INFORMAS approach to develop standardised Australian tools, survey protocols and data collection and analysis systems. The ‘healthy diet basket’ was based on the Australian Foundation Diet,¹ while the ‘current diet basket’, and the specific items included in each basket, were based on recent national dietary survey data.² Data collection methods were piloted. The final tools and protocols were then applied to measure the price and affordability of healthy and less healthy (current) diets of different household groups in diverse communities across the nation. We have compared results for different geographical locations/population subgroups in Australia and assessed these against international INFORMAS benchmarks. The results inform the development of policy and practice, including those relevant to mooted changes to the GST base, to promote nutrition and healthy weight and prevent chronic disease in Australia.
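As a schematic of the kind of calculation such a pricing module supports, the sketch below totals a fortnightly basket cost and expresses affordability as a share of household income. All prices, quantities and the income figure are invented placeholders, not INFORMAS survey data or protocol values.

```python
# Toy diet-basket costing: fortnightly cost and affordability as a share of
# household disposable income. All figures are invented placeholders.
healthy_basket = {            # item: (quantity per fortnight, unit price in AUD)
    "wholegrain bread (loaf)": (4, 3.50),
    "vegetables (kg)":         (14, 4.20),
    "fruit (kg)":              (10, 3.80),
    "lean meat (kg)":          (3, 12.00),
    "milk (L)":                (12, 1.60),
}

def basket_cost(basket: dict) -> float:
    return sum(qty * price for qty, price in basket.values())

household_income = 1200.00    # hypothetical fortnightly disposable income (AUD)
cost = basket_cost(healthy_basket)
print(f"fortnightly basket cost: ${cost:.2f}")
print(f"affordability: {100 * cost / household_income:.1f}% of disposable income")
```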
Abstract:
Text segmentation and localization algorithms are proposed for the born-digital image dataset. Binarization and edge detection are carried out separately on the three colour planes of the image. Connected components (CCs) obtained from the binarized image are thresholded based on their area and aspect ratio, and CCs which contain sufficient edge pixels are retained. A novel approach is presented in which the text components are represented as nodes of a graph, with nodes corresponding to the centroids of the individual CCs. Long edges are broken from the minimum spanning tree of the graph. Pairwise height ratio is also used to remove likely non-text components. A new minimum spanning tree is created from the remaining nodes. Horizontal grouping is performed on the CCs to generate bounding boxes of text strings. Overlapping bounding boxes are removed using an overlap-area threshold. Non-overlapping and minimally overlapping bounding boxes are used for text segmentation. Vertical splitting is applied to generate bounding boxes at the word level. The proposed method is applied to all the images of the test dataset, and values of precision, recall and H-mean are obtained using different approaches.
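The core of this pipeline (per-plane binarization, component filtering by area and aspect ratio, and MST-based grouping with long-edge removal) can be sketched as follows. The thresholds, the input file name and the simplified grouping rule are assumptions for illustration, not the authors' tuned implementation.

```python
# Illustrative sketch of the described pipeline; requires opencv-python and scipy.
import cv2
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def candidate_components(gray, min_area=15, max_aspect=10.0):
    """Binarize one colour plane and keep CCs with plausible area/aspect ratio."""
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw)
    keep = []
    for i in range(1, n):                       # label 0 is the background
        x, y, w, h, area = stats[i]
        aspect = max(w, h) / max(1, min(w, h))
        if area >= min_area and aspect <= max_aspect:
            keep.append((centroids[i], (x, y, w, h)))
    return keep

def text_groups(components, edge_factor=2.5):
    """Build an MST over CC centroids and cut edges much longer than the median."""
    pts = np.array([c for c, _ in components])
    if len(pts) < 2:
        return [[0]] if len(pts) == 1 else []
    mst = minimum_spanning_tree(squareform(pdist(pts))).toarray()
    lengths = mst[mst > 0]
    cutoff = edge_factor * np.median(lengths)
    parent = list(range(len(pts)))              # union-find over surviving edges
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in zip(*np.nonzero((mst > 0) & (mst <= cutoff))):
        parent[find(i)] = find(j)
    groups = {}
    for k in range(len(pts)):
        groups.setdefault(find(k), []).append(k)
    return list(groups.values())

img = cv2.imread("born_digital_sample.png")     # hypothetical input image
comps = candidate_components(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
for group in text_groups(comps):
    boxes = [comps[k][1] for k in group]
    x0 = min(b[0] for b in boxes); y0 = min(b[1] for b in boxes)
    x1 = max(b[0] + b[2] for b in boxes); y1 = max(b[1] + b[3] for b in boxes)
    cv2.rectangle(img, (x0, y0), (x1, y1), (0, 255, 0), 1)
cv2.imwrite("text_boxes.png", img)
```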
Abstract:
Poly(dimethylsiloxane) (PDMS) is usually considered a dielectric material, and the PDMS microchannel wall can be treated as an electrically insulated boundary in an applied electric field. However, in certain layouts of microfluidic networks, electrical leakage through the PDMS microfluidic channel walls may not be negligible and must be carefully considered in the microfluidic circuit design. In this paper, we report on the experimental characterization of the electrical leakage current through PDMS microfluidic channel walls of different configurations. Our numerical and experimental studies indicate that for PDMS channel walls tens of micrometres thick, electrical leakage through the wall can significantly alter the electric field in the main channel. We further show that we can use the electrical leakage through the PDMS channel wall to control the electrolyte flow and manipulate particle motion inside the microfluidic channel. More specifically, we can trap individual particles at different locations inside the channel by balancing the electroosmotic flow against the electrophoretic migration of the particle.
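A toy one-dimensional balance illustrates the trapping idea: if wall leakage makes the axial field decay along the channel while continuity keeps the bulk flow roughly uniform, the net particle velocity can change sign at a specific position. The mobilities, field values and linear leakage profile below are assumptions for illustration, not the paper's measured parameters or full numerical model.

```python
# Toy 1-D model of particle trapping by balancing bulk (electroosmotic) flow
# against electrophoresis in a leaky channel; all numbers are illustrative.
from scipy.optimize import brentq

L = 5e-3                      # channel length [m]
E0, E_leak = 3e4, 2e4         # inlet field and field drop lost to leakage [V/m]
mu_eo = 4e-8                  # electroosmotic mobility [m^2/(V s)]
mu_ep = -3e-8                 # electrophoretic mobility of the particle [m^2/(V s)]

def E(x):
    """Local axial field, decaying linearly along the channel due to wall leakage."""
    return E0 - E_leak * (x / L)

# Incompressibility forces a single bulk velocity; approximate it with the
# EOF driven by the channel-averaged field.
u_bulk = mu_eo * (E0 - 0.5 * E_leak)

def particle_velocity(x):
    return u_bulk + mu_ep * E(x)

# Trapping position: where advection and electrophoresis cancel.
if particle_velocity(0.0) * particle_velocity(L) < 0:
    x_trap = brentq(particle_velocity, 0.0, L)
    print(f"particle trapped at x = {1e3 * x_trap:.2f} mm")
else:
    print("no trapping point for these parameters")
```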
Abstract:
This paper presents a stochastic implicit coupling method intended for use in Monte Carlo (MC) based reactor analysis systems that include burnup and thermal hydraulic (TH) feedbacks. Both feedbacks are essential for accurate modeling of advanced reactor designs and analyses of associated fuel cycles. In particular, we investigate the effect of different burnup-TH coupling schemes on the numerical stability and accuracy of coupled MC calculations. First, we present the beginning-of-time-step method, which is the most commonly used. The accuracy of this method depends on the time step length, and it is only conditionally stable. This work demonstrates that even for relatively short time steps, this method can be numerically unstable: the spatial distributions of neutronic and thermal hydraulic parameters, such as nuclide densities and temperatures, exhibit oscillatory behavior. To address the numerical stability issue, new implicit stochastic methods are proposed. The methods solve the depletion and TH problems simultaneously and use under-relaxation to speed up convergence. These methods are numerically stable and accurate even for relatively large time steps and require less computation time than the existing methods. © 2013 Elsevier Ltd. All rights reserved.
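The under-relaxed fixed-point idea behind such schemes can be sketched as below. The two "solvers" are cheap stand-ins (the neutronics one is deliberately noisy to mimic MC statistical uncertainty), and the diminishing relaxation factor and all numbers are illustrative assumptions, not the paper's coupling of an actual MC transport and TH code.

```python
# Sketch of an under-relaxed neutronics/thermal-hydraulics coupling iteration.
import numpy as np

def solve_neutronics(temps, rng):
    """Stand-in MC solve: power falls as local fuel temperature rises, plus noise."""
    power = 1.0 / (1.0 + 0.002 * (temps - 600.0))
    return power * (1.0 + 0.01 * rng.standard_normal(temps.shape))

def solve_thermal_hydraulics(power):
    """Stand-in TH solve: fuel temperature rises with local power."""
    return 600.0 + 300.0 * power

rng = np.random.default_rng(1)
temps = np.full(10, 600.0)                 # initial per-region temperature guess [K]
for n in range(1, 31):
    power = solve_neutronics(temps, rng)
    temps_new = solve_thermal_hydraulics(power)
    alpha = 1.0 / n                        # diminishing under-relaxation factor
    temps = (1.0 - alpha) * temps + alpha * temps_new   # damped update
print(f"converged mean fuel temperature: {temps.mean():.1f} K")
```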
Abstract:
In this paper, we present a novel covalent bonding process between two quartz wafers at 300 °C. High-quality wafer bonding was achieved by hydroxylation, aminosilylation and atom transfer radical polymerization (ATRP) of glycidyl methacrylate (GMA) on the quartz wafer surfaces. After the GMA-functionalized wafer and the aminosilylated wafer were brought into close contact, the epoxy ring-opening reaction was catalyzed by the amino groups and solidified to form the covalent bond between the quartz wafers. The shear strength between the two wafers in all bonded samples was higher than 1.5 MPa. Microfluidic chips bonded by the above procedure had high transparency, and the procedure avoided adhesive blocking or flowing into the channel.
Abstract:
Ceramic carbon materials were developed as new sorbents for the solid-phase extraction of organic compounds, using chlorpromazine as a representative analyte. The macroporosity and heterogeneity of ceramic carbon materials allow a large amount of chlorpromazine to be extracted in a short time. Thus, highly sensitive and selective determination of chlorpromazine in a urine sample was achieved by differential pulse voltammetry after only 1 min of extraction. The total analysis time was less than 3 min. In comparison with other electrochemical and electrochemiluminescent methods following 1-min extraction, the proposed method improved sensitivity by about two and one orders of magnitude, respectively. The fast extraction, diversity, and conductivity of ceramic carbon materials make them promising sorbents for various solid-phase extraction formats, such as solid-phase microextraction, thin-film microextraction, and electrochemically controlled solid-phase extraction. Preliminary applications of ceramic carbon materials in chromatography were also studied.
Abstract:
A new detection scheme for the determination of adsorbable coreactants of the Ru(bpy)₃²⁺ electrochemiluminescent reaction is presented. It is based on selective preconcentration of the coreactant onto an electrode, followed by Ru(bpy)₃²⁺ electrochemiluminescent detection. The coreactant employed is chlorpromazine, which was sensitively detected after 5-min preconcentration onto a lauric acid-modified carbon paste electrode. The linear concentration range extended from 1 × 10⁻⁸ to 3 × 10⁻⁶ mol L⁻¹, with a detection limit of 3.1 × 10⁻⁹ mol L⁻¹. The total analysis time is less than 10 min. As a result of selective preconcentration and medium exchange, such remarkable selectivity is achieved that reproducible quantitation of chlorpromazine in urine is possible.
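The reported linear range and detection limit follow the usual calibration logic, sketched below with the common 3σ/slope criterion. The concentrations, signals and blank replicates are synthetic placeholders, not the paper's data.

```python
# Schematic calibration of an ECL response and a 3*sigma/slope detection-limit
# estimate; all values below are synthetic.
import numpy as np

conc = np.array([1e-8, 5e-8, 1e-7, 5e-7, 1e-6, 3e-6])        # mol/L, assumed range
signal = np.array([2.1, 10.4, 20.8, 103.0, 209.0, 622.0])    # arbitrary ECL intensity
blank = np.array([0.9, 1.1, 1.0, 0.8, 1.2, 1.0, 0.9, 1.1, 1.0, 1.0])  # blank replicates

slope, intercept = np.polyfit(conc, signal, 1)
lod = 3.0 * blank.std(ddof=1) / slope                         # 3*sigma criterion
r = np.corrcoef(conc, signal)[0, 1]
print(f"slope = {slope:.3e}, r = {r:.4f}, LOD ~ {lod:.2e} mol/L")
```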
Abstract:
Security policies are increasingly being implemented by organisations. Policies are mapped to device configurations to enforce them, a task typically performed manually by network administrators. The development and management of these enforcement policies is a difficult and error-prone task. This thesis describes the development and evaluation of an off-line firewall policy parser and validation tool. This gives the system administrator a textual interface and the vendor-specific low-level languages they trust and are familiar with, together with the support of an off-line compiler tool. The tool was created using the Microsoft C#.NET language and the Microsoft Visual Studio Integrated Development Environment (IDE). This provided an object-oriented environment in which to create a flexible and extensible system, as well as simple Web and Windows prototyping facilities for creating GUI front-end applications for testing and evaluation. A CLI was provided with the tool for more experienced users, but it was also designed to be easily integrated into GUI-based applications for non-expert users. The evaluation of the system was performed from a custom-built GUI application, which can create test firewall rule sets containing synthetic rules to supply a variety of experimental conditions, as well as record various performance metrics. The validation tool was created with a pragmatic outlook on the needs of the network administrator. The modularity of the design was important, due to the fast-changing nature of the network device languages being processed. An object-oriented approach was taken for maximum changeability and extensibility, and a flexible tool was developed to accommodate the possible needs of different types of users. System administrators want low-level, CLI-based tools that they can trust and use easily from scripting languages, whereas inexperienced users may prefer a more abstract, high-level GUI or wizard with an easier learning curve. Built around these ideas, the tool was implemented and proved to be a usable and complementary addition to the many network policy-based systems currently available. The tool has a flexible design and comprehensive functionality, in contrast to some other tools that work across multiple vendor languages but do not implement a deep range of options for any of them. It complements existing systems, such as policy compliance tools and abstract policy analysis systems. Its validation algorithms were evaluated for both completeness and performance, and the tool was found to correctly process large firewall policies in just a few seconds. A framework for a policy-based management system, with which the tool would integrate, is also proposed. This is based around a vendor-independent XML-based repository of device configurations, which could be used to bring together existing policy management and analysis systems.
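One family of checks such a validator performs is anomaly detection between rules, for example finding rules that can never fire because an earlier rule already matches all of their traffic. The sketch below is a simplified Python illustration of a shadowing check under assumed rule semantics; it is not the C#.NET tool or the thesis's validation algorithms.

```python
# Illustrative rule-shadowing check of the kind a firewall policy validator
# might perform; simplified semantics, for illustration only.
from dataclasses import dataclass
from ipaddress import ip_network
from typing import Optional

@dataclass
class Rule:
    action: str             # "permit" or "deny"
    src: str                # source CIDR
    dst: str                # destination CIDR
    port: Optional[int]     # None means "any port"

def covers(earlier: Rule, later: Rule) -> bool:
    """True if `earlier` matches every packet that `later` would match."""
    return (ip_network(later.src).subnet_of(ip_network(earlier.src))
            and ip_network(later.dst).subnet_of(ip_network(earlier.dst))
            and (earlier.port is None or earlier.port == later.port))

def shadowed_rules(policy: list) -> list:
    """Indices of rules that can never fire because an earlier rule with a
    different action already matches all of their traffic."""
    hits = []
    for i, later in enumerate(policy):
        for earlier in policy[:i]:
            if covers(earlier, later) and earlier.action != later.action:
                hits.append(i)
                break
    return hits

policy = [
    Rule("deny",   "10.0.0.0/8",     "0.0.0.0/0", None),
    Rule("permit", "10.1.0.0/16",    "0.0.0.0/0", 80),    # shadowed by rule 0
    Rule("permit", "192.168.0.0/16", "0.0.0.0/0", 443),
]
print("shadowed rule indices:", shadowed_rules(policy))
```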
Abstract:
Ring-down absorption spectroscopy is an emerging ‘label-free’ detection method for analytical microdevices, such as micro-total analysis systems (µ-TAS). Developed from the related gas-phase cavity ring-down absorption spectroscopy, fiber-optic-based ring-down techniques for liquid samples offer low detection limits, high sensitivity and fast response. © 2006 Elsevier Ltd. All rights reserved.
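The measurement principle can be summarised with a back-of-envelope relation: the intensity decays as I(t) = I₀·exp(−t/τ), and for small losses the extra per-pass loss introduced by an absorbing sample follows from the change in ring-down time. The loop parameters and ring-down times below are illustrative assumptions, not values from the cited work.

```python
# Back-of-envelope fiber-loop ring-down calculation: added sample loss from the
# change in ring-down time. Numbers are illustrative assumptions.
import math

n_eff = 1.45                         # effective refractive index of the fiber loop
loop_length = 10.0                   # loop length [m]
t_rt = n_eff * loop_length / 3.0e8   # round-trip time [s]

tau_0 = 2.0e-6                       # ring-down time with a blank in the sensing region [s]
tau = 1.6e-6                         # ring-down time with analyte present [s]

# For small losses, 1/tau = (total per-round-trip loss) / t_rt, so the sample's
# added per-pass loss (base-e) is:
sample_loss = t_rt * (1.0 / tau - 1.0 / tau_0)
print(f"round-trip time = {t_rt * 1e9:.1f} ns")
print(f"added single-pass loss = {sample_loss:.2e} "
      f"(absorbance ~ {sample_loss / math.log(10):.2e} AU)")
```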
Abstract:
A method using L-cysteine for the determination of arsenous acid (As(III)), arsenic acid (As(V)), monomethylarsonic acid (MMAA), and dimethylarsinic acid (DMAA) by hydride generation was demonstrated. The instrument used was a d.c. plasma atomic emission spectrometer (DCP-AES). Complete recovery was reported for As(III), As(V), and DMAA, while 86% recovery was reported for MMAA. Detection limits, as arsenic for the species listed previously, were determined to be 1.2, 0.8, 1.1, and 1.0 ng mL⁻¹, respectively. Precision values, at 50 ng mL⁻¹ arsenic concentration, were 1.8%, 2.5%, 2.6% and 2.6% relative standard deviation, respectively. The L-cysteine reagent was compared directly with the conventional hydride generation technique, which uses a potassium iodide-hydrochloric acid medium. Compared with the conventional method, L-cysteine provided similar recoveries for As(III), slightly better recoveries for As(V) and MMAA, and significantly better recoveries for DMAA. In addition, tall and sharp peak shapes were observed for all four species when using L-cysteine. The arsenic speciation method involved separation by ion-exchange high-performance liquid chromatography (HPLC) with on-line hydride generation using the L-cysteine reagent and measurement by DCP-AES. Total analysis time per sample was 12 min, while the time between the start of subsequent runs was approximately 20 min. A binary gradient elution program was used during the separation by HPLC, incorporating two eluents, 0.01 and 0.5 mM trisodium citrate, both containing 5% methanol (v/v) and both at a pH of approximately 9. Recoveries of the four species, measured as peak area and normalized against As(III), were 88%, 29%, and 40% for DMAA, MMAA and As(V), respectively. Resolution factors between adjacent analyte peaks were 1.1 for As(III) and DMAA, 1.3 for DMAA and MMAA, and 8.6 for MMAA and As(V). During the arsenic speciation study, signals from the d.c. plasma optical system were measured using a new photon-signal integrating device. This photon integrator, developed and built in this laboratory, was based on a previously published design that was further modified to reflect currently available hardware. It was interfaced to a personal computer through an A/D converter and has adjustable threshold settings and an adjustable post-gain device.
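The quoted resolution factors follow the standard definition R_s = 2(t₂ − t₁)/(w₁ + w₂) for adjacent peaks with baseline widths w. The sketch below applies that formula; the retention times and widths are made-up examples, not the thesis chromatograms.

```python
# Chromatographic resolution between adjacent peaks, Rs = 2*(t2 - t1)/(w1 + w2),
# using baseline peak widths; times and widths below are made-up examples.
def resolution(t1: float, w1: float, t2: float, w2: float) -> float:
    """Resolution factor between two adjacent peaks (times and widths in minutes)."""
    return 2.0 * (t2 - t1) / (w1 + w2)

peaks = {                     # hypothetical retention time and baseline width [min]
    "As(III)": (2.0, 0.8),
    "DMAA":    (3.0, 0.9),
    "MMAA":    (4.3, 1.0),
    "As(V)":   (9.5, 1.2),
}
names = list(peaks)
for a, b in zip(names, names[1:]):
    (t1, w1), (t2, w2) = peaks[a], peaks[b]
    print(f"Rs({a}/{b}) = {resolution(t1, w1, t2, w2):.1f}")
```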
Abstract:
In this thesis, we present existence theorems for systems of third-order nonlinear differential equations, for systems of first-order nonlinear equations and inclusions on time scales, and for systems of second-order nonlinear equations on time scales, under certain boundary conditions. In Chapter three, we introduce a notion of solution-tube to obtain existence theorems for systems of third-order differential equations. This new notion generalizes to systems the notions of lower and upper solutions for the boundary value problem of the third-order differential equation studied in [34]. In the last section of that chapter, we treat third-order systems when f satisfies a Wintner-Nagumo-type growth condition. To establish the existence of solutions of such a system, we resort to the theory of differential inclusions. This existence result generalizes in several ways a theorem of Grossinho and Minhós [34]. The next chapter concerns the existence of solutions for two types of systems of first-order equations on time scales. The existence results for these two problems were obtained using notions of solution-tube adapted to these systems. The first theorem generalizes, among other things, to systems and to an arbitrary time scale a result obtained for finite-difference equations by Mawhin and Bereanu [9]. This result also yields the existence of solutions for new systems whose existence could not be obtained using the result of Dai and Tisdell [17]. The second theorem of this chapter generalizes, under certain conditions, results of [60]. Chapter five presents a new existence theorem for a system of first-order inclusions on time scales. To our knowledge, no earlier result dealt with the existence of solutions for systems of inclusions of this type; this chapter thus opens new possibilities in the field of inclusions on time scales. Our result was again obtained with the help of a solution-tube hypothesis adapted to the problem. In Chapter six, we treat the existence of solutions for systems of second-order equations on time scales. The first existence theorem we obtain generalizes the results of [36], since the hypothesis these authors use to obtain the a priori bound is a particular case of our solution-tube hypothesis for this type of system. Note also that our definition of solution-tube generalizes to systems the notions of lower and upper solutions introduced for second-order equations by [4] and [55]; we thereby also generalize results obtained for second-order equations on time scales. Finally, we propose a new existence result for a system in which the right-hand side of the equations depends on the ∆-derivative of the function.
Abstract:
Anti-infective agents are used to treat or prevent infections in humans, animals, insects and plants. The appearance of traces of these substances in wastewater, natural waters and even drinking water in several countries around the world raises concern in the scientific community, above all because of their biological activity. The goal of this research was to study the presence of anti-infectives in contaminated environmental waters (i.e., wastewater, natural waters and drinking water) and to develop new analytical methods able to quantify and confirm their presence in these matrices. A meta-analysis of the occurrence of anti-infectives in contaminated environmental waters showed that at least 68 compounds and 10 of their transformation products have been quantified to date. Environmental concentrations range between 0.1 ng/L and 1 mg/L, depending on the compound, the matrix and the source of contamination. According to this study, harmful effects of anti-infectives on aquatic biota are possible, and these substances may also have an indirect effect on human health through their possible contribution to the spread of anti-infective resistance in bacteria. The first preliminary tests in developing a method for the determination of anti-infectives in wastewater revealed the difficulties to be overcome during solid-phase extraction (SPE) as well as the importance of detector selectivity. We then described a new method for the quantification of anti-infectives using tandem SPE in manual mode and liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS). The six targeted anti-infectives (sulfamethoxazole, trimethoprim, ciprofloxacin, levofloxacin, clarithromycin and azithromycin) were quantified at concentrations between 39 and 276 ng/L in influent and effluent samples from a treatment plant applying primary and physico-chemical treatment. The concentrations found in the effluents indicate that the average total mass of these substances discharged weekly into the St. Lawrence River was about 2 kg. To reduce the total analysis time and simplify sample handling, we developed a new on-line SPE-LC-MS/MS method. This method used a column-switching technique to preconcentrate 1.00 mL of sample on an on-line SPE column. The analytical performance of the method allowed the quantification of the six anti-infectives in municipal wastewater, with detection limits of the same order of magnitude (13-60 ng/L) as methods based on manual SPE. Next, the application of turbulent-flow chromatography on-line SPE columns for the preconcentration of the six anti-infectives in wastewater was explored to reduce matrix effects. The results indicated that these columns are an interesting alternative to traditional on-line SPE columns. Finally, to enable the analysis of anti-infectives in surface water and drinking water, an on-line SPE-LC-MS/MS method using large-volume injections (10 mL) was developed. The breakthrough volume of several on-line SPE columns was estimated and the column with the best retention was chosen. The detection and confirmation limits of the method were between 1 and 6 ng/L. Analysis of real samples showed that the concentrations of the three targeted anti-infectives (sulfamethoxazole, trimethoprim and clarithromycin) were below the detection limit of the method. Exact mass measurement by time-of-flight mass spectrometry and product-ion spectra acquired with a reverse collision-energy ramp in a triple quadrupole mass spectrometer were explored as possible confirmation methods.
Abstract:
A new solid-phase extraction (SPE) method coupled to an ultra-fast analysis technique was developed for the simultaneous determination of nine emerging contaminants (atrazine, desethylatrazine, 17β-estradiol, ethinylestradiol, norethindrone, caffeine, carbamazepine, diclofenac and sulfamethoxazole) belonging to different therapeutic classes and present in wastewater. Sample preconcentration and clean-up were carried out with a mixed-mode SPE cartridge (Strata ABW) having both cation- and anion-exchange properties, followed by analysis using laser diode thermal desorption/atmospheric pressure chemical ionization coupled to tandem mass spectrometry (LDTD-APCI-MS/MS). LDTD is a new sample-introduction method that reduces the total analysis time to less than 15 seconds, compared with several minutes for traditional liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS). Several SPE parameters were evaluated to optimize the recovery of the analytes extracted from wastewater, such as the nature of the stationary phase, the loading flow rate, the extraction pH, the volume and composition of the washing solution, and the initial sample volume. This new method was successfully applied to real wastewater samples from a primary settling tank. Recoveries of the target compounds from wastewater ranged from 78 to 106%, the limits of detection were 30 to 122 ng L⁻¹, and the limits of quantification were 88 to 370 ng L⁻¹. Calibration curves in the wastewater matrices showed good linearity (R² > 0.991) for the target analytes, as well as precision with a coefficient of variation below 15%.