428 results for Computational Identification
Mental computation: the identification of associated cognitive, metacognitive and affective factors
Abstract:
Nuclear Factor Y (NF-Y) is a trimeric complex that binds to the CCAAT box, a ubiquitous eukaryotic promoter element. The three subunits NF-YA, NF-YB and NF-YC are represented by single genes in yeast and mammals. However, in model plant species (Arabidopsis and rice) multiple genes encode each subunit, providing the impetus for the investigation of the NF-Y transcription factor family in wheat. A total of 37 NF-Y and Dr1 genes (10 NF-YA, 11 NF-YB, 14 NF-YC and 2 Dr1) in Triticum aestivum were identified in global DNA databases by computational analysis in this study. Each of the wheat NF-Y subunit families could be further divided into 4-5 clades based on their conserved core region sequences. Several conserved motifs outside of the NF-Y core regions were also identified by comparison of NF-Y members from wheat, rice and Arabidopsis. Quantitative RT-PCR analysis revealed that some of the wheat NF-Y genes were expressed ubiquitously, while others were expressed in an organ-specific manner. In particular, each TaNF-Y subunit family had members that were expressed predominantly in the endosperm. The expression of nine NF-Y and two Dr1 genes in wheat leaves appeared to be responsive to drought stress. Three of these genes were up-regulated under drought conditions, indicating that these members of the NF-Y and Dr1 families are potentially involved in plant drought adaptation. The combined expression and phylogenetic analyses revealed that members within the same phylogenetic clade generally shared a similar expression profile. Organ-specific expression and differential response to drought indicate a plant-specific biological role for various members of this transcription factor family.
Abstract:
The hydrodynamic behaviour of a novel flat-plate photocatalytic reactor for water treatment is investigated using the CFD code FLUENT. The reactor consists of a reactive section that features negligible pressure drop and uniform illumination of the photocatalyst to ensure enhanced photocatalytic efficiency. The numerical simulations allowed the identification of several design issues in the original reactor, including extensive boundary-layer separation near the photocatalyst support and regions of flow recirculation that render a significant portion of the reactive area ineffective. The simulations reveal that these issues can be addressed by selecting appropriate inlet positions and configurations. This modification incurs minimal pressure drop across the reactive zone and achieves a significantly more uniform distribution of the tested pollutant on the photocatalyst surface. The influence of the type of roughness elements has also been studied, with a view to identifying their role in the distribution of pollutant concentration on the photocatalyst surface. The results presented here indicate that the flow and pollutant concentration fields depend strongly on the geometric parameters and flow conditions.
Abstract:
Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorized into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air). Methods: DOSXYZnrc phantoms are generated from CT data, using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the ‘CT-density ramp’ (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm3) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment) in that voxel. In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated.
Results & Discussion: Increasing the degree of simplification of the CT-density ramp results in an increasing effect on the resulting radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points, instead of 12, results in a maximum radiological thickness change of 0.2 cm, whereas defining the CT-density ramp using only 2 points results in a maximum radiological thickness change of 11.2 cm. Changing the definition of the materials comprising the phantom between water and plastic and tissue results in millimetre-scale changes to the resulting radiological thickness. When the entire phantom is defined as lead, this alteration changes the calculated radiological thickness by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types. Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient’s CT into a MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density, for the specific CT scanner used. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia. The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data, using the Pinnacle TPS. 
Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
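The per-voxel conversion described in this abstract (HU to mass density via linear interpolation on the CT-density ramp, then multiplication by the assigned material's intrinsic electron density) can be sketched as follows. The calibration points, HU ranges and electrons-per-gram values below are illustrative placeholders, not the scanner-specific values used in the study:

```python
import numpy as np

# Hypothetical calibration points on the "CT-density ramp": pairs of
# (Hounsfield unit, mass density in g/cm^3). Real ramps are scanner-specific.
HU_POINTS = np.array([-1000.0, -100.0, 0.0, 100.0, 1000.0, 3000.0])
DENSITY_POINTS = np.array([0.001, 0.93, 1.00, 1.07, 1.55, 2.8])

# Hypothetical material table: (HU low, HU high, name, electrons per gram).
MATERIALS = [
    (-1000, -950, "air",    3.006e23),
    (-950,   100, "tissue", 3.312e23),
    (100,   3000, "bone",   3.100e23),
]

def hu_to_mass_density(hu):
    """Mass density via linear interpolation between ramp calibration points."""
    return np.interp(hu, HU_POINTS, DENSITY_POINTS)

def assign_material(hu):
    """Assign a material (and its electrons/gram) from the voxel's HU value."""
    for lo, hi, name, e_per_gram in MATERIALS:
        if lo <= hu < hi:
            return name, e_per_gram
    return MATERIALS[-1][2], MATERIALS[-1][3]  # clamp to the last range

def electron_density(hu):
    """Electron density (electrons/cm^3) = mass density * electrons/gram."""
    _, e_per_gram = assign_material(hu)
    return hu_to_mass_density(hu) * e_per_gram
```

Simplifying the ramp, as in the study, amounts to shortening `HU_POINTS`/`DENSITY_POINTS`; coarsening the tissue assignment amounts to merging rows of the material table.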
Abstract:
tRNA-derived RNA fragments (tRFs) are 19-mer small RNAs that associate with Argonaute (AGO) proteins in humans. In plants, however, it is unknown whether tRFs bind AGO proteins. Here, using public deep sequencing libraries of immunoprecipitated Argonaute proteins (AGO-IP) and bioinformatics approaches, we identified the Arabidopsis thaliana AGO-IP tRFs. Moreover, using three degradome deep sequencing libraries, we identified four putative tRF targets. The expression pattern of tRFs, based on deep sequencing data, was also analyzed under abiotic and biotic stresses. The results obtained here represent a useful starting point for future studies on tRFs in plants. © 2013 Loss-Morais et al.; licensee BioMed Central Ltd.
Abstract:
The use of Mahalanobis squared distance–based novelty detection in statistical damage identification has become increasingly popular in recent years. The merit of the Mahalanobis squared distance–based method is that it is simple and requires low computational effort to enable the use of a higher-dimensional damage-sensitive feature, which is generally more sensitive to structural changes. Mahalanobis squared distance–based damage identification is also believed to be one of the most suitable methods for modern sensing systems such as wireless sensors. Although possessing such advantages, this method is rather strict with its input requirement, as it assumes the training data to be multivariate normal, which is not always available, particularly at an early monitoring stage. As a consequence, it may result in an ill-conditioned training model with erroneous novelty detection and damage identification outcomes. To date, there appears to be no study on how to systematically cope with such practical issues, especially in the context of a statistical damage identification problem. To address this need, this article proposes a controlled data generation scheme, which is based upon the Monte Carlo simulation methodology with the addition of several controlling and evaluation tools to assess the condition of output data. By evaluating the convergence of the data condition indices, the proposed scheme is able to determine the optimal setups for the data generation process and subsequently avoid unnecessarily excessive data. The efficacy of this scheme is demonstrated via applications to data from a benchmark structure in the field.
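The baseline detector this abstract builds on can be sketched as follows. The data are synthetic and the 99th-percentile threshold is an illustrative choice, not the article's procedure:

```python
import numpy as np

# Minimal sketch of Mahalanobis-squared-distance (MSD) novelty detection,
# assuming multivariate-normal baseline training data.
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 4))  # baseline (undamaged) features

mean = X_train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))

def msd(x):
    """Mahalanobis squared distance of a feature vector from the baseline."""
    d = x - mean
    return float(d @ cov_inv @ d)

# Threshold taken from the training distances themselves; an observation
# exceeding it is flagged as novel (potentially damaged).
threshold = np.percentile([msd(x) for x in X_train], 99)

x_shifted = np.full(4, 5.0)  # strongly shifted feature vector
is_novel = msd(x_shifted) > threshold
```

The ill-conditioning issue raised in the abstract appears here in `np.linalg.inv`: with too few (or non-normal) training vectors the sample covariance can be near-singular, and both the distances and the threshold become unreliable.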
Abstract:
The reaction of the aromatic distonic peroxyl radical cations N-methyl pyridinium-4-peroxyl (PyrOO•⁺) and 4-(N,N,N-trimethylammonium)phenyl peroxyl (AnOO•⁺) with symmetrical dialkyl alkynes 10 a–c was studied in the gas phase by mass spectrometry. PyrOO•⁺ and AnOO•⁺ were produced through reaction of the respective distonic aryl radical cations Pyr•⁺ and An•⁺ with oxygen, O₂. For the reaction of Pyr•⁺ with O₂, an absolute rate coefficient of k₁ = 7.1×10⁻¹² cm³ molecule⁻¹ s⁻¹ and a collision efficiency of 1.2 % were determined at 298 K. The strongly electrophilic PyrOO•⁺ reacts with 3-hexyne and 4-octyne with absolute rate coefficients of k(hexyne) = 1.5×10⁻¹⁰ cm³ molecule⁻¹ s⁻¹ and k(octyne) = 2.8×10⁻¹⁰ cm³ molecule⁻¹ s⁻¹, respectively, at 298 K. The reaction of both PyrOO•⁺ and AnOO•⁺ proceeds by radical addition to the alkyne, whereas propargylic hydrogen abstraction was observed as a very minor pathway only in the reactions involving PyrOO•⁺. A major reaction pathway of the vinyl radicals 11 formed upon PyrOO•⁺ addition to the alkynes involves γ-fragmentation of the peroxyl O–O bond and formation of PyrO•⁺. The PyrO•⁺ is rapidly trapped by intermolecular hydrogen abstraction, presumably from a propargylic methylene group in the alkyne. The reaction of the less electrophilic AnOO•⁺ with alkynes is considerably slower and resulted in formation of AnO•⁺ as the only charged product. These findings suggest that electrophilic aromatic peroxyl radicals act as oxygen atom donors, which can be used to generate α-oxo carbenes 13 (or isomeric species) from alkynes in a single step. Besides γ-fragmentation, a number of competing unimolecular dissociative reactions also occur in the vinyl radicals 11.
The potential energy diagrams of these reactions were explored with density functional theory and ab initio methods, which enabled identification of the chemical structures of the most important products.
Abstract:
This article presents the field applications and validations for the controlled Monte Carlo data generation scheme. This scheme was previously derived to assist the Mahalanobis squared distance–based damage identification method to cope with data-shortage problems, which often cause inadequate data multinormality and unreliable identification outcomes. To do so, real vibration datasets from two actual civil engineering structures with such data (and identification) problems are selected as the test objects, which are then shown to be in need of enhancement to consolidate their conditions. By utilizing the robust probability measures of the data condition indices in controlled Monte Carlo data generation and statistical sensitivity analysis of the Mahalanobis squared distance computational system, well-conditioned synthetic data generated by an optimal controlled Monte Carlo data generation configuration can be evaluated without bias against data generated by other set-ups and against the original data. The analysis results reconfirm that controlled Monte Carlo data generation is able to overcome the shortage of observations, improve the data multinormality and enhance the reliability of the Mahalanobis squared distance–based damage identification method, particularly with respect to false-positive errors. The results also highlight the dynamic structure of controlled Monte Carlo data generation, which makes this scheme well adapted to any type of input data with any (original) distributional condition.
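The core idea behind such a scheme (augmenting a data-short baseline with synthetic samples while monitoring a data condition index for convergence) can be sketched as follows. The choice of a multivariate-normal generator and of the covariance condition number as the index are illustrative assumptions, not the article's actual tools:

```python
import numpy as np

# Sketch: when too few baseline observations exist, draw synthetic samples
# from a multivariate normal fitted to the originals and monitor a
# data-condition index as the synthetic sample grows.
rng = np.random.default_rng(1)

true_cov = np.array([[2.0, 0.8],
                     [0.8, 1.0]])
X_short = rng.multivariate_normal([0.0, 0.0], true_cov, size=8)  # data-short set

mean_hat = X_short.mean(axis=0)
cov_hat = np.cov(X_short, rowvar=False)

def condition_index(n_synthetic):
    """Condition number of the covariance estimated from n synthetic samples."""
    X_syn = rng.multivariate_normal(mean_hat, cov_hat, size=n_synthetic)
    return np.linalg.cond(np.cov(X_syn, rowvar=False))

# Grow the synthetic sample and watch the index stabilise; once it converges,
# larger set-ups add no benefit and are avoided as "unnecessarily excessive".
indices = {n: condition_index(n) for n in (50, 500, 5000)}
```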
Abstract:
Systems-level identification and analysis of cellular circuits in the brain will require the development of whole-brain imaging with single-cell resolution. To this end, we performed comprehensive chemical screening to develop a whole-brain clearing and imaging method, termed CUBIC (clear, unobstructed brain imaging cocktails and computational analysis). CUBIC is a simple and efficient method involving the immersion of brain samples in chemical mixtures containing aminoalcohols, which enables rapid whole-brain imaging with single-photon excitation microscopy. CUBIC is applicable to multicolor imaging of fluorescent proteins or immunostained samples in adult brains and is scalable from a primate brain to subcellular structures. We also developed a whole-brain cell-nuclear counterstaining protocol and a computational image analysis pipeline that, together with CUBIC reagents, enable the visualization and quantification of neural activities induced by environmental stimulation. CUBIC enables time-course expression profiling of whole adult brains with single-cell resolution.
Abstract:
The problem of determining the script and language of a document image has a number of important applications in the field of document analysis, such as indexing and sorting of large collections of such images, or as a precursor to optical character recognition (OCR). In this paper, we investigate the use of texture as a tool for determining the script of a document image, based on the observation that text has a distinct visual texture. An experimental evaluation of a number of commonly used texture features is conducted on a newly created script database, providing a qualitative measure of which features are most appropriate for this task. Strategies for improving classification results in situations with limited training data and multiple font types are also proposed.
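One common family of texture descriptors for tasks like this is based on the grey-level co-occurrence matrix (GLCM); the sketch below is a generic illustration of that approach, not necessarily the feature set evaluated in the paper:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix for a single pixel offset (dx, dy)."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(img):
    """Energy and contrast of the GLCM: two classic texture descriptors."""
    p = glcm(img)
    i, j = np.indices(p.shape)
    energy = float(np.sum(p ** 2))
    contrast = float(np.sum((i - j) ** 2 * p))
    return energy, contrast
```

A script classifier along these lines would quantise binarised text blocks to a few grey levels, extract such features per block, and match them (e.g. by nearest neighbour) against reference features computed for each script.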