980 results for Computational Identification
Abstract:
The problem of structural system identification when measurements originate from multiple tests and multiple sensors is considered. An offline solution to this problem using bootstrap particle filtering is proposed. The central idea of the proposed method is the introduction of a dummy independent variable that allows for simultaneous assimilation of multiple measurements in a sequential manner. The method can treat linear/nonlinear structural models and allows for measurements of strains and displacements under static/dynamic loads. Illustrative examples consider measurement data from numerical models and also from laboratory experiments. The results from the proposed method are compared with those from a Kalman filter-based approach and the superior performance of the proposed method is demonstrated. Copyright (C) 2009 John Wiley & Sons, Ltd.
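As a hedged illustration of the general idea only (not the paper's exact formulation), the Python sketch below implements a generic offline bootstrap particle filter for parameter identification, where the loop over the pooled measurements plays the role of a dummy independent variable; predict_fn, prior_sampler, the Gaussian noise model and the spring example are assumptions introduced for this sketch.

```python
import numpy as np

def bootstrap_pf_identify(measurements, predict_fn, prior_sampler,
                          n_particles=2000, noise_std=0.05, jitter=1e-3,
                          rng=None):
    """Offline parameter identification with a bootstrap particle filter.

    measurements : iterable of scalar observations pooled from all tests/sensors;
                   the position in the loop acts as the dummy independent variable.
    predict_fn   : maps a parameter vector to the predicted observation.
    prior_sampler: draws the initial (N, d) particle cloud over the parameters.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = prior_sampler(n_particles, rng)                   # (N, d) parameter particles
    for y in measurements:                                    # sequential assimilation
        y_hat = np.array([predict_fn(t) for t in theta])
        log_w = -0.5 * ((y - y_hat) / noise_std) ** 2         # Gaussian log-likelihood
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)  # multinomial resampling
        theta = theta[idx] + jitter * rng.standard_normal(theta.shape)  # avoid degeneracy
    return theta.mean(axis=0), theta.std(axis=0)

# Toy usage: identify the stiffness k of a linear spring from noisy static
# displacements u = F / k under a known load F (illustrative values only).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k_true, F = 2.5, 10.0
    data = F / k_true + 0.05 * rng.standard_normal(30)
    k_mean, k_std = bootstrap_pf_identify(
        data,
        predict_fn=lambda th: F / th[0],
        prior_sampler=lambda n, r: r.uniform(0.5, 10.0, size=(n, 1)))
    print(f"identified stiffness: {k_mean[0]:.2f} +/- {k_std[0]:.2f}")
```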
Abstract:
"System identification deals with the problem of building mathematical models of dynamical systems based on observed data from the system" [1]. In the context of civil engineering, the system refers to a large-scale structure such as a building, bridge, or offshore structure, and identification mostly involves the determination of modal parameters (the natural frequencies, damping ratios, and mode shapes). This paper presents modal identification results obtained using a state-of-the-art time domain system identification method (data-driven stochastic subspace algorithms [2]) applied to output-only data measured on a steel arch bridge. First, a three-dimensional finite element model was developed for the numerical analysis of the structure using ANSYS. Modal analysis was carried out and modal parameters were extracted in the frequency range of interest, 0-10 Hz. The results obtained from the finite element modal analysis were used to determine the location of the sensors. After that, ambient vibration tests were conducted during April 23-24, 2009. The response of the structure was measured using eight accelerometers. Two stations of three sensors each were formed (triaxial stations); these sensors were held stationary for reference during the test. The two remaining sensors were placed at different measurement points along the bridge deck, where only vertical and transversal measurements were taken (biaxial stations). Point and interval estimation were carried out for the state-space model using these ambient vibration measurements. In the case of parametric models (such as state-space models), the dynamic behaviour of a system is described using mathematical models, and mathematical relationships can then be established between the modal parameters and the estimated model parameters (which is why experimental modal analysis is commonly used as a synonym for system identification). Stable modal parameters are found using a stabilization diagram. Furthermore, this paper proposes a method for assessing the precision of the estimated state-space model parameters (confidence intervals). This approach employs the nonparametric bootstrap procedure [3] and is applied to the subspace parameter estimation algorithm. From the bootstrap results, a plot similar to a stabilization diagram is developed; these plots differentiate system modes from spurious noise modes for a given system order. Additionally, using the modal assurance criterion, the experimental modes obtained were compared with those evaluated from the finite element analysis. Good agreement between numerical and experimental results is observed.
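Since the experimental modes are compared with the finite element modes via the modal assurance criterion, a minimal Python sketch of the standard MAC formula is given below for reference; the array names are placeholders, and identified mode shapes from the subspace algorithm may be complex-valued.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion matrix between two sets of mode shapes.

    phi_a, phi_b : arrays of shape (n_dof, n_modes); columns are mode shapes
    (e.g. identified vs. finite element modes).  Entry (i, j) close to 1
    indicates that mode i of phi_a matches mode j of phi_b.
    """
    num = np.abs(phi_a.conj().T @ phi_b) ** 2
    den = np.outer(np.sum(np.abs(phi_a) ** 2, axis=0),
                   np.sum(np.abs(phi_b) ** 2, axis=0))
    return num / den
```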
Mental computation: the identification of associated cognitive, metacognitive and affective factors
Abstract:
Nuclear Factor Y (NF-Y) is a trimeric complex that binds to the CCAAT box, a ubiquitous eukaryotic promoter element. The three subunits NF-YA, NF-YB and NF-YC are represented by single genes in yeast and mammals. However, in model plant species (Arabidopsis and rice) multiple genes encode each subunit providing the impetus for the investigation of the NF-Y transcription factor family in wheat. A total of 37 NF-Y and Dr1 genes (10 NF-YA, 11 NF-YB, 14 NF-YC and 2 Dr1) in Triticum aestivum were identified in the global DNA databases by computational analysis in this study. Each of the wheat NF-Y subunit families could be further divided into 4-5 clades based on their conserved core region sequences. Several conserved motifs outside of the NF-Y core regions were also identified by comparison of NF-Y members from wheat, rice and Arabidopsis. Quantitative RT-PCR analysis revealed that some of the wheat NF-Y genes were expressed ubiquitously, while others were expressed in an organ-specific manner. In particular, each TaNF-Y subunit family had members that were expressed predominantly in the endosperm. The expression of nine NF-Y and two Dr1 genes in wheat leaves appeared to be responsive to drought stress. Three of these genes were up-regulated under drought conditions, indicating that these members of the NF-Y and Dr1 families are potentially involved in plant drought adaptation. The combined expression and phylogenetic analyses revealed that members within the same phylogenetic clade generally shared a similar expression profile. Organ-specific expression and differential response to drought indicate a plant-specific biological role for various members of this transcription factor family.
Abstract:
The hydrodynamic behaviour of a novel flat plate photocatalytic reactor for water treatment is investigated using the CFD code FLUENT. The reactor consists of a reactive section that features negligible pressure drop and uniform illumination of the photocatalyst to ensure enhanced photocatalytic efficiency. The numerical simulations allowed the identification of several design issues in the original reactor, including extensive boundary layer separation near the photocatalyst support and regions of flow recirculation that render a significant portion of the reactive area ineffective. The simulations reveal that these issues could be addressed by selecting appropriate inlet positions and configurations. This modification incurs minimal pressure drop across the reactive zone and achieves a significantly more uniform distribution of the tested pollutant on the photocatalyst surface. The influence of the type of roughness elements has also been studied with a view to identifying their role in the distribution of pollutant concentration on the photocatalyst surface. The results presented here indicate that the flow and pollutant concentration fields depend strongly on the geometric parameters and flow conditions.
Abstract:
Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorized into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air).

Methods: DOSXYZnrc phantoms are generated from CT data, using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the 'CT-density ramp' (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm3) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment) in that voxel. In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated.

Results & Discussion: Increasing the degree of simplification of the CT-density ramp has an increasing effect on the radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points, instead of 12, results in a maximum radiological thickness change of 0.2 cm, whereas defining the CT-density ramp using only 2 points results in a maximum radiological thickness change of 11.2 cm. Changing the definition of the materials comprising the phantom between water, plastic and tissue results in millimetre-scale changes to the resulting radiological thickness. When the entire phantom is defined as lead, the calculated radiological thickness changes by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types.

Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient's CT into an MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density for the specific CT scanner used.

Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia.
The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data, using the Pinnacle TPS. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
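To make the voxel conversion described in the Methods concrete, the Python sketch below chains the two steps: a piecewise-linear CT-density ramp followed by an HU-range material assignment whose intrinsic electron density multiplies the mass density. The ramp points, HU thresholds and electrons-per-gram values are illustrative assumptions, not the calibration of the scanner used in this study.

```python
import numpy as np

# Illustrative CT-density ramp: (HU, g/cm^3) calibration points (assumed, not measured).
RAMP_HU  = np.array([-1000.0, -100.0,  0.0, 100.0, 1000.0, 3000.0])
RAMP_RHO = np.array([  0.001,   0.93, 1.00,  1.07,   1.55,   2.80])

# Illustrative intrinsic electron densities (electrons per gram) by material type.
ELECTRONS_PER_GRAM = {"air": 3.01e23, "tissue": 3.34e23, "bone": 3.10e23}

def assign_material(hu):
    """Toy HU-range material assignment (thresholds are assumptions)."""
    if hu < -950.0:
        return "air"
    if hu < 300.0:
        return "tissue"
    return "bone"

def voxel_electron_density(hu):
    """electrons/cm^3 = mass density (from the HU ramp) x electrons/gram (material)."""
    rho = np.interp(hu, RAMP_HU, RAMP_RHO)   # piecewise-linear CT-density ramp
    return rho * ELECTRONS_PER_GRAM[assign_material(hu)]
```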
Abstract:
tRNA-derived RNA fragments (tRFs) are 19-mer small RNAs that associate with Argonaute (AGO) proteins in humans. In plants, however, it was unknown whether tRFs bind AGO proteins. Here, using public deep sequencing libraries of immunoprecipitated Argonaute proteins (AGO-IP) and bioinformatics approaches, we identified the Arabidopsis thaliana AGO-IP tRFs. Moreover, using three degradome deep sequencing libraries, we identified four putative tRF targets. The expression pattern of tRFs, based on deep sequencing data, was also analyzed under abiotic and biotic stresses. The results obtained here represent a useful starting point for future studies on tRFs in plants. © 2013 Loss-Morais et al.; licensee BioMed Central Ltd.
Abstract:
The use of Mahalanobis squared distance–based novelty detection in statistical damage identification has become increasingly popular in recent years. The merit of the Mahalanobis squared distance–based method is that it is simple and requires low computational effort, which enables the use of a higher-dimensional damage-sensitive feature that is generally more sensitive to structural changes. Mahalanobis squared distance–based damage identification is also considered one of the most suitable methods for modern sensing systems such as wireless sensors. Despite these advantages, the method is rather strict in its input requirements, as it assumes the training data to be multivariate normal, which is not always achievable, particularly at an early monitoring stage. As a consequence, it may result in an ill-conditioned training model with erroneous novelty detection and damage identification outcomes. To date, there appears to be no study on how to systematically cope with such practical issues, especially in the context of a statistical damage identification problem. To address this need, this article proposes a controlled data generation scheme based upon the Monte Carlo simulation methodology with the addition of several controlling and evaluation tools to assess the condition of the output data. By evaluating the convergence of the data condition indices, the proposed scheme is able to determine the optimal setups for the data generation process and subsequently avoid unnecessarily excessive data. The efficacy of this scheme is demonstrated via application to data from a benchmark structure in the field.
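For reference, a minimal Python sketch of generic Mahalanobis-squared-distance novelty detection is shown below (this is not the article's controlled data generation scheme); the chi-squared threshold assumes multivariate-normal training features, which is precisely the assumption whose violation the article addresses.

```python
import numpy as np
from scipy.stats import chi2

def msd_novelty(train, test, alpha=0.01):
    """Flag test feature vectors as novel using the Mahalanobis squared distance.

    train : (n_train, d) baseline damage-sensitive features (assumed multivariate normal)
    test  : (n_test, d) feature vectors to classify
    Returns the MSD of each test point and a boolean novelty flag based on a
    chi-squared threshold with d degrees of freedom.
    """
    mu = train.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(train, rowvar=False))  # pinv guards against ill-conditioning
    diff = test - mu
    msd = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    threshold = chi2.ppf(1.0 - alpha, df=train.shape[1])
    return msd, msd > threshold
```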
Abstract:
The reaction of the aromatic distonic peroxyl radical cations N-methyl pyridinium-4-peroxyl (PyrOO•+) and 4-(N,N,N-trimethyl ammonium)-phenyl peroxyl (AnOO•+) with symmetrical dialkyl alkynes 10a-c was studied in the gas phase by mass spectrometry. PyrOO•+ and AnOO•+ were produced through reaction of the respective distonic aryl radical cations Pyr•+ and An•+ with oxygen, O2. For the reaction of Pyr•+ with O2, an absolute rate coefficient of k1 = 7.1 × 10^-12 cm^3 molecule^-1 s^-1 and a collision efficiency of 1.2 % were determined at 298 K. The strongly electrophilic PyrOO•+ reacts with 3-hexyne and 4-octyne with absolute rate coefficients of k_hexyne = 1.5 × 10^-10 cm^3 molecule^-1 s^-1 and k_octyne = 2.8 × 10^-10 cm^3 molecule^-1 s^-1, respectively, at 298 K. The reaction of both PyrOO•+ and AnOO•+ proceeds by radical addition to the alkyne, whereas propargylic hydrogen abstraction was observed as a very minor pathway only in the reactions involving PyrOO•+. A major reaction pathway of the vinyl radicals 11 formed upon PyrOO•+ addition to the alkynes involves gamma-fragmentation of the peroxy O-O bond and formation of PyrO•+. The PyrO•+ is rapidly trapped by intermolecular hydrogen abstraction, presumably from a propargylic methylene group in the alkyne. The reaction of the less electrophilic AnOO•+ with alkynes is considerably slower and resulted in formation of AnO•+ as the only charged product. These findings suggest that electrophilic aromatic peroxyl radicals act as oxygen atom donors, which can be used to generate alpha-oxo carbenes 13 (or isomeric species) from alkynes in a single step. Besides gamma-fragmentation, a number of competing unimolecular dissociative reactions also occur in vinyl radicals 11. The potential energy diagrams of these reactions were explored with density functional theory and ab initio methods, which enabled identification of the chemical structures of the most important products.
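As a small worked check of the reported kinetics: the collision efficiency is the ratio of the measured rate coefficient to the theoretical collision rate coefficient, so the abstract's values for the Pyr•+ + O2 reaction imply a collision rate coefficient of roughly 6 × 10^-10 cm^3 molecule^-1 s^-1. The snippet below performs only this arithmetic.

```python
# collision efficiency = k_measured / k_collision, rearranged for k_collision
k_measured = 7.1e-12      # cm^3 molecule^-1 s^-1 (from the abstract)
efficiency = 0.012        # 1.2 % (from the abstract)
k_collision = k_measured / efficiency
print(f"implied collision rate coefficient: {k_collision:.1e} cm^3 molecule^-1 s^-1")
# -> ~5.9e-10 cm^3 molecule^-1 s^-1
```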
Abstract:
This article presents field applications and validations of the controlled Monte Carlo data generation scheme. This scheme was previously derived to assist the Mahalanobis squared distance–based damage identification method in coping with data-shortage problems, which often cause inadequate data multinormality and unreliable identification outcomes. To do so, real vibration datasets from two actual civil engineering structures with such data (and identification) problems are selected as the test objects, which are then shown to be in need of enhancement to consolidate their conditions. By utilizing the robust probability measures of the data condition indices in controlled Monte Carlo data generation and a statistical sensitivity analysis of the Mahalanobis squared distance computational system, well-conditioned synthetic data generated by an optimal controlled Monte Carlo data generation configuration can be evaluated without bias against those generated by other set-ups and against the original data. The analysis results reconfirm that controlled Monte Carlo data generation is able to overcome the shortage of observations, improve the data multinormality and enhance the reliability of the Mahalanobis squared distance–based damage identification method, particularly with respect to false-positive errors. The results also highlight the dynamic structure of controlled Monte Carlo data generation, which makes this scheme well adapted to any type of input data with any (original) distributional condition.
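A hedged sketch of the flavour of such a scheme is given below: synthetic observations are drawn from a multivariate normal fitted to the original (small) dataset and generation stops once a simple data-condition index converges. The covariance condition number stands in for the article's actual condition indices, and the batch size and tolerance are assumptions.

```python
import numpy as np

def cmcdg_sketch(original, batch=200, max_batches=50, tol=1e-3, rng=None):
    """Generic sketch of controlled Monte Carlo data generation.

    original : (n_obs, d) array of measured features (n_obs may be small).
    Synthetic samples are appended batch by batch until the monitored
    data-condition index (here: covariance condition number) converges,
    avoiding unnecessarily excessive data.
    """
    rng = np.random.default_rng() if rng is None else rng
    mu, cov = original.mean(axis=0), np.cov(original, rowvar=False)
    data, prev_index = original.copy(), None
    for _ in range(max_batches):
        data = np.vstack([data, rng.multivariate_normal(mu, cov, size=batch)])
        index = np.linalg.cond(np.cov(data, rowvar=False))
        if prev_index is not None and abs(index - prev_index) / prev_index < tol:
            break                                  # condition index has converged
        prev_index = index
    return data
```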
Abstract:
Systems-level identification and analysis of cellular circuits in the brain will require the development of whole-brain imaging with single-cell resolution. To this end, we performed comprehensive chemical screening to develop a whole-brain clearing and imaging method, termed CUBIC (clear, unobstructed brain imaging cocktails and computational analysis). CUBIC is a simple and efficient method involving the immersion of brain samples in chemical mixtures containing aminoalcohols, which enables rapid whole-brain imaging with single-photon excitation microscopy. CUBIC is applicable to multicolor imaging of fluorescent proteins or immunostained samples in adult brains and is scalable from a primate brain to subcellular structures. We also developed a whole-brain cell-nuclear counterstaining protocol and a computational image analysis pipeline that, together with CUBIC reagents, enable the visualization and quantification of neural activities induced by environmental stimulation. CUBIC enables time-course expression profiling of whole adult brains with single-cell resolution.
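As an illustration of the kind of quantification step such an image analysis pipeline involves (a generic sketch, not the published CUBIC pipeline), the snippet below counts counterstained nuclei in a cleared-brain image volume using Otsu thresholding and connected-component labelling from scikit-image.

```python
from skimage import filters, measure

def count_nuclei(volume):
    """Count bright nuclei in a 2-D or 3-D intensity image (toy quantification step)."""
    mask = volume > filters.threshold_otsu(volume)   # global intensity threshold
    labels = measure.label(mask)                     # connected-component labelling
    return labels.max()                              # number of detected components
```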
Abstract:
Background: Tuberculosis still remains one of the deadliest infectious diseases, warranting the identification of newer targets and drugs. The identification and validation of appropriate targets for drug design are critical steps in drug discovery and are at present major bottlenecks. A majority of drugs in current clinical use for many diseases have been designed without knowledge of their targets, perhaps because standard methodologies to identify such targets in a high-throughput fashion do not really exist. With the different kinds of 'omics' data that are now available, computational approaches can be powerful means of obtaining shortlists of possible targets for further experimental validation.

Results: We report a comprehensive in silico target identification pipeline, targetTB, for Mycobacterium tuberculosis. The pipeline incorporates a network analysis of the protein-protein interactome, a flux balance analysis of the reactome, experimentally derived phenotype essentiality data, sequence analyses and a structural assessment of targetability, using novel algorithms recently developed by us. Using flux balance analysis and network analysis, proteins critical for the survival of M. tuberculosis are first identified, followed by comparative genomics with the host, finally incorporating a novel structural analysis of the binding sites to assess the feasibility of a protein as a target. Further analyses include correlation with expression data and non-similarity to gut flora proteins as well as to 'anti-targets' in the host, leading to the identification of 451 high-confidence targets. Through phylogenetic profiling against 228 pathogen genomes, the shortlisted targets have been further explored to identify broad-spectrum antibiotic targets, while also identifying those specific to tuberculosis. Targets that address mycobacterial persistence and drug resistance mechanisms are also analysed.

Conclusion: The pipeline developed provides a rational schema for drug target identification that is likely to have a high rate of success, which is expected to save enormous amounts of money, resources and time in the drug discovery process. A thorough comparison with previously suggested targets in the literature demonstrates the usefulness of the integrated approach used in our study, highlighting the importance of systems-level analyses in particular. The method has the potential to be used as a general strategy for target identification and validation and hence to significantly impact most drug discovery programmes.
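To illustrate the filtering logic of such a pipeline (a toy sketch only: the real targetTB pipeline derives these criteria from flux balance analysis, interactome metrics, sequence comparisons and structural assessment rather than from pre-computed sets), the snippet below combines hypothetical gene sets with simple set operations.

```python
def shortlist_targets(essential, high_centrality, host_homologs,
                      gut_flora_similar, antitarget_similar,
                      structurally_targetable):
    """Toy set-based shortlisting in the spirit of a target identification pipeline.

    Each argument is a set of gene/protein identifiers (hypothetical inputs that,
    in a real pipeline, would come from FBA, network analysis, BLAST comparisons
    and binding-site assessment).
    """
    candidates = essential & high_centrality                # critical for survival
    candidates -= host_homologs                             # comparative genomics vs. host
    candidates -= gut_flora_similar | antitarget_similar    # avoid off-target liabilities
    return candidates & structurally_targetable             # targetable binding sites

# Example with made-up identifiers:
# shortlist_targets({"Rv0001", "Rv0667"}, {"Rv0667"}, set(), set(), set(), {"Rv0667"})
# -> {"Rv0667"}
```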