913 results for MS-based methods
Abstract:
Background Magnetoencephalography (MEG) provides a direct measure of brain activity with high combined spatiotemporal resolution. Preprocessing is necessary to reduce contributions from environmental interference and biological noise. New method The effect of different preprocessing techniques on the signal-to-noise ratio is evaluated. The signal-to-noise ratio (SNR) was defined as the ratio between the mean signal amplitude (evoked field) and the standard error of the mean over trials. Results Recordings from 26 subjects obtained during an event-related visual paradigm with an Elekta MEG scanner were employed. Two methods were considered for first-step noise reduction: Signal Space Separation (SSS) and temporal Signal Space Separation (tSSS), which decompose the signal into components originating inside and outside the head. Both algorithms increased the SNR by approximately 100%. Epoch-based methods, aimed at identifying and rejecting epochs containing eye blinks, muscular artifacts and sensor jumps, provided an SNR improvement of 5–10%. The decomposition methods evaluated were independent component analysis (ICA) and second-order blind identification (SOBI). The increase in SNR was about 36% with ICA and 33% with SOBI. Comparison with existing methods No previous systematic evaluation of the effect of the typical preprocessing steps on the SNR of the MEG signal has been performed. Conclusions The application of either SSS or tSSS is mandatory in Elekta systems. No significant differences were found between the two. While epoch-based methods have been routinely applied, the less often considered decomposition methods were clearly superior, and therefore their use seems advisable.
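A minimal sketch of the SNR definition stated above, assuming a hypothetical trial-segmented array of shape (n_trials, n_channels, n_times):

```python
import numpy as np

def evoked_snr(epochs):
    """SNR as the ratio between the mean evoked amplitude and the
    standard error of the mean across trials (per channel and time point).

    epochs: array of shape (n_trials, n_channels, n_times) -- hypothetical layout.
    """
    n_trials = epochs.shape[0]
    evoked = epochs.mean(axis=0)                           # evoked field (trial average)
    sem = epochs.std(axis=0, ddof=1) / np.sqrt(n_trials)   # standard error of the mean
    return np.abs(evoked) / sem
```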
Abstract:
We present a novel general resource analysis for logic programs based on sized types. Sized types are representations that incorporate structural (shape) information and allow expressing both lower and upper bounds on the size of a set of terms and their subterms at any position and depth. They also allow relating the sizes of terms and subterms occurring at different argument positions in logic predicates. Using these sized types, the resource analysis can infer both lower and upper bounds on the resources used by all the procedures in a program as functions on input term (and subterm) sizes, overcoming limitations of existing analyses and enhancing their precision. Our new resource analysis has been developed within the abstract interpretation framework, as an extension of the sized types abstract domain, and has been integrated into the Ciao preprocessor, CiaoPP. The abstract domain operations are integrated with the setting up and solving of recurrence equations for inferring both size and resource usage functions. We show that the analysis is an improvement over the previous resource analysis present in CiaoPP and compares well in power to state-of-the-art systems.
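As an illustration only (not the paper's implementation), the kind of size recurrence such an analysis sets up, e.g. one unit of size per recursive call, can be solved symbolically; a sketch with SymPy:

```python
from sympy import Function, rsolve, symbols

n = symbols("n", integer=True)
S = Function("S")

# Hypothetical size recurrence for a predicate that processes one element per call:
#   S(n) = S(n-1) + 1,  S(0) = 0   ==>   closed form S(n) = n
recurrence = S(n) - S(n - 1) - 1
closed_form = rsolve(recurrence, S(n), {S(0): 0})
print(closed_form)   # n
```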
Abstract:
The cyclic compression of several granular systems has been simulated with a molecular dynamics code. All the samples consisted of bidimensional, soft, frictionless and equal-sized particles that were initially arranged on a square lattice and were compressed by randomly generated irregular walls. The compression protocols can be described by some control variables (volume or external force acting on the walls) and by some dimensionless factors that relate stiffness, density, diameter, damping ratio and water surface tension to the external forces, displacements and periods. Each protocol, which is associated with a dynamic process, results in an arrangement with its own macroscopic features: volume (or packing ratio), coordination number, and stress; the differences between packings can be highly significant. The statistical distribution of the force-moment state of the particles (i.e. the equivalent average stress multiplied by the volume) is analyzed. Despite the lack of a theoretical framework based on statistical mechanics specific to these protocols, the obtained distributions of mean and relative deviatoric force-moment are characterized, and their nature and their relation to the specific protocols are discussed.
Abstract:
The aim of this work is to develop an automated tool for the optimization of turbomachinery blades founded on an evolutionary strategy. This optimization scheme serves to deal with supersonic blade cascades for application to Organic Rankine Cycle (ORC) turbines. The blade geometry is defined using parameterization techniques based on B-spline curves, which allow local control of the shape. The locations in space of the control points of the B-spline curve define the design variables of the optimization problem. In the present work, the performance of the blade shape is assessed by means of fully-turbulent flow simulations performed with a CFD package, in which a look-up table method is applied to ensure an accurate thermodynamic treatment. The solver is coupled with the optimization tool to determine the optimal shape of the blade. As only blade-to-blade effects are of interest in this study, quasi-3D calculations are performed, and a single-objective evolutionary strategy is applied to the optimization. As a result, a non-intrusive tool, with no need for gradient definitions, is developed. The computational cost is reduced by the use of surrogate models. A Gaussian interpolation scheme (Kriging model) is applied to estimate the n-dimensional objective function, and a surrogate-based local optimization strategy proves to be an accurate way to perform the optimization. In particular, the present optimization scheme has been applied to the re-design of a supersonic stator cascade of an axial-flow turbine. In this design exercise, very strong shock waves are generated on the rear blade suction side and shock-boundary layer interaction mechanisms occur. A significant efficiency improvement is achieved as a consequence of a more uniform flow at the blade outlet section of the stator. This is also expected to provide beneficial effects on the design of a subsequent downstream rotor. The method provides an improvement over gradient-based methods, and an optimized blade geometry is easily achieved using the genetic algorithm.
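A minimal sketch of the surrogate step, assuming hypothetical control-point coordinates as design variables and a scalar loss from the CFD evaluation; scikit-learn's Gaussian process regressor stands in for the Kriging model described above:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training data: each row holds B-spline control-point coordinates
# (the design variables); y holds the corresponding CFD-evaluated loss.
X_train = np.random.rand(20, 6)
y_train = np.sin(X_train).sum(axis=1)   # placeholder for expensive CFD results

kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
kriging.fit(X_train, y_train)

# The surrogate predicts the loss (and its uncertainty) for new candidate shapes,
# so the evolutionary strategy can screen candidates without running the CFD solver.
X_candidates = np.random.rand(100, 6)
mean, std = kriging.predict(X_candidates, return_std=True)
best_candidate = X_candidates[np.argmin(mean)]
```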
Abstract:
Remote sensing information from spaceborne and airborne platforms continues to provide valuable data for different environmental monitoring applications. In this sense, high spatial resolution imagery is an important source of information for land cover mapping. For the processing of high spatial resolution images, the object-based methodology is one of the most commonly used strategies, since conventional pixel-based methods, which use only spectral information for land cover classification, are inadequate for this type of imagery. This research presents a methodology to characterise Mediterranean land covers in high resolution aerial images by means of an object-oriented approach. It uses a self-calibrating multi-band region growing approach optimised by pre-processing the image with a bilateral filtering. The obtained results show promise in terms of both segmentation quality and computational efficiency.
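A minimal sketch of the pre-processing step only, assuming OpenCV is available and using a synthetic 8-bit tile as a stand-in for an aerial image; the self-calibrating region growing itself is merely indicated by a toy growth criterion:

```python
import cv2
import numpy as np

# Synthetic stand-in for an 8-bit multi-band aerial tile.
image = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)

# Bilateral filtering smooths homogeneous regions while preserving edges,
# which reduces over-segmentation in the subsequent region growing.
filtered = cv2.bilateralFilter(image, d=9, sigmaColor=75, sigmaSpace=75)

# Hypothetical growth criterion for a region-growing pass: a candidate pixel
# joins the region if its spectral distance to the seed is below a threshold.
def similar(seed_pixel, candidate_pixel, threshold=20.0):
    return np.linalg.norm(seed_pixel.astype(float) - candidate_pixel.astype(float)) < threshold
```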
Abstract:
In the last decade, the research community has focused on new classification methods that rely on statistical characteristics of Internet traffic, instead of the previously popular port-number-based or payload-based methods, which face ever greater constraints. Some research works based on statistical characteristics generated large feature sets of Internet traffic; however, it is nowadays impossible to handle hundreds of features in big data scenarios, as this leads to unacceptable processing times and misleading classification results due to redundant and correlated data. As a consequence, a feature selection procedure is essential in the process of Internet traffic characterization. In this paper a survey of feature selection methods is presented: feature selection frameworks are introduced, and different categories of methods are briefly explained and compared; several proposals on feature selection in Internet traffic characterization are reviewed; finally, the future application of feature selection to a concrete project is proposed.
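As an illustration only (not taken from the survey), a filter-style feature selection step on a hypothetical traffic-flow feature matrix, using scikit-learn:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Hypothetical data: rows are traffic flows, columns are statistical features
# (packet sizes, inter-arrival times, ...); y holds application labels.
X = np.random.rand(500, 200)
y = np.random.randint(0, 5, size=500)

# Keep the 20 features carrying the most mutual information with the class label,
# discarding redundant or uninformative ones before training a classifier.
selector = SelectKBest(mutual_info_classif, k=20)
X_reduced = selector.fit_transform(X, y)
selected_idx = selector.get_support(indices=True)
```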
Abstract:
We examine the occurrence of the ≈300 known protein folds in different groups of organisms. To do this, we characterize a large fraction of the currently known protein sequences (≈140,000) in structural terms, by matching them to known structures via sequence comparison (or by secondary-structure class prediction for those without structural homologues). Overall, we find that an appreciable fraction of the known folds are present in each of the major groups of organisms (e.g., bacteria and eukaryotes share 156 of 275 folds), and most of the common folds are associated with many families of nonhomologous sequences (i.e., >10 sequence families for each common fold). However, different groups of organisms have characteristically distinct distributions of folds. So, for instance, some of the most common folds in vertebrates, such as globins or zinc fingers, are rare or absent in bacteria. Many of these differences in fold usage are biologically reasonable, such as the folds of metabolic enzymes being common in bacteria and those associated with extracellular transport and communication being common in animals. They also have important implications for database-based methods for fold recognition, suggesting that an unknown sequence from a plant is more likely to have a certain fold (e.g., a TIM barrel) than an unknown sequence from an animal.
Abstract:
Gene expression profiling provides powerful analyses of transcriptional responses to cellular perturbation. In contrast to DNA array-based methods, reporter gene technology has been underused for this application. Here we describe a genomewide, genome-registered collection of Escherichia coli bioluminescent reporter gene fusions. DNA sequences from plasmid-borne, random fusions of E. coli chromosomal DNA to a Photorhabdus luminescens luxCDABE reporter allowed precise mapping of each fusion. The utility of this collection, covering about 30% of the transcriptional units, was tested by analyzing individual fusions representative of heat shock, SOS, OxyR, SoxRS, and cya/crp stress-responsive regulons. Each fusion strain responded as anticipated to environmental conditions known to activate the corresponding regulatory circuit. Thus, the collection mirrors E. coli's transcriptional wiring diagram. This genomewide collection of gene fusions provides an independent test of results from other gene expression analyses. Accordingly, a DNA microarray-based analysis of mitomycin C-treated E. coli indicated elevated expression of expected and unanticipated genes. Selected luxCDABE fusions corresponding to these up-regulated genes were used to confirm or contradict the DNA microarray results. The power of partnering gene fusion and DNA microarray technology to discover promoters and define operons was demonstrated when data from both suggested that a cluster of 20 genes encoding production of type I extracellular polysaccharide in E. coli forms a single operon.
Abstract:
Phyllosphere microbial communities were evaluated on leaves of field-grown plant species by culture-dependent and -independent methods. Denaturing gradient gel electrophoresis (DGGE) with 16S rDNA primers generally indicated that microbial community structures were similar on different individuals of the same plant species, but unique to different plant species. Phyllosphere bacteria were identified from Citrus sinensis (cv. Valencia) by using DGGE analysis followed by cloning and sequencing of the dominant rDNA bands. Of the 17 unique sequences obtained, database queries showed only four strains that had been described previously as phyllosphere bacteria. Five of the 17 sequences had 16S similarities lower than 90% to database entries, suggesting that they represent previously undescribed species. In addition, three fungal species were also identified. Very different 16S rDNA DGGE banding profiles were obtained when replicate cv. Valencia leaf samples were cultured in BIOLOG EcoPlates for 4.5 days. All of these rDNA sequences had 97–100% similarity to those of known phyllosphere bacteria, but only two of them matched those identified by the culture-independent DGGE analysis. As in other studied ecosystems, microbial phyllosphere communities are therefore more complex than previously thought on the basis of conventional culture-based methods.
Abstract:
The field of natural language processing (NLP) has seen a dramatic shift in both research direction and methodology in the past several years. In the past, most work in computational linguistics tended to focus on purely symbolic methods. Recently, more and more work is shifting toward hybrid methods that combine new empirical corpus-based methods, including the use of probabilistic and information-theoretic techniques, with traditional symbolic methods. This work is made possible by the recent availability of linguistic databases that add rich linguistic annotation to corpora of natural language text. Already, these methods have led to a dramatic improvement in the performance of a variety of NLP systems with similar improvement likely in the coming years. This paper focuses on these trends, surveying in particular three areas of recent progress: part-of-speech tagging, stochastic parsing, and lexical semantics.
Abstract:
Papillomavirus infection is the main cause of cervical intraepithelial neoplasia (CIN) and cervical cancer (CC). Epidemiological studies have shown that persistence of the viral genome is associated with specific molecular variants of high-risk human papillomavirus (HPV). HLA class II molecules play an important role in the immune response. Associations between HLA and cervical cancer or HPV infection have been demonstrated in different populations. Our objective was to verify whether HLA-DRB1 and DQB1 variability was associated with cervical cancer and CIN III in women from Belém, a population formed by the three main human ethnic groups and a high-risk area for cervical cancer in northern Brazil. We investigated whether the distribution of HLA alleles differed between women with cervical cancer or CIN III carrying different HPV-16 variants and cytologically normal women. The HLA DQB1 and DRB1 genes were typed by the PCR-SSO method in 95 cases and 287 controls consisting of women with normal cytology attending a cervical cancer prevention centre in the same city. HPV-16 variants were typed by sequencing a fragment of the long control region (LCR) of the viral genome. The polymorphism at position 350 of the E6 gene was typed with a dot-blot hybridization protocol to identify the 350T→G change. The magnitude of the associations was estimated by odds ratios (OR) and the respective confidence intervals (CI), adjusted for potential confounders. A positive association was observed between cervical cancer and the haplotypes DRB1*1501-DQB1*0602, DRB1*04-DQB1*0301 and DRB1*1602-DQB1*0301. In contrast, DRB1*01-DQB1*0501 showed a protective effect. The DRB1*0804 and DQB1*0402 alleles showed a protective effect against HPV positivity, whereas the DQB1*0502 allele and the DRB1*15 group were positively associated. Our results show that the positive associations of DRB1*1501 and DRB1*1602 can be attributed to Asian-American variants when compared with European variants. The risk conferred by DRB1*1501 was associated with both E6-350G and E6-350T variants, although the larger effect was due to the E6-350T variants. The positive association of DRB1*1602 was significant only in the group of women positive for E6-350G. These results are consistent with the ethnic composition of the studied population as well as with a higher oncogenic potential of certain variants. Our data suggest that the contribution of HLA alleles to genetic susceptibility to cervical cancer differs according to the distribution of HPV variants in a given geographic region or ethnic group.
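For reference only (not the study's adjusted estimates), a minimal sketch of an unadjusted odds ratio with its Wald 95% confidence interval from a 2x2 exposure-by-outcome table:

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = np.exp(np.log(or_) - z * se_log_or)
    upper = np.exp(np.log(or_) + z * se_log_or)
    return or_, (lower, upper)

# Hypothetical counts: 30 of 95 cases vs. 40 of 287 controls carry a given allele.
print(odds_ratio_ci(30, 40, 65, 247))
```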
Abstract:
The increasing economic competition drives industry to implement tools that improve process efficiency. Process automation is one of these tools, and Real Time Optimization (RTO) is an automation methodology that considers economic aspects to update the process control in accordance with market prices and disturbances. Basically, RTO uses a steady-state phenomenological model to predict the process behavior and then optimizes an economic objective function subject to this model. Although largely implemented in industry, there is no general agreement about the benefits of implementing RTO, due to some limitations discussed in the present work: structural plant/model mismatch, identifiability issues and low frequency of set-point updates. Some alternative RTO approaches have been proposed in the literature to handle the problem of structural plant/model mismatch. However, there is no comprehensive comparison evaluating the scope and limitations of these RTO approaches under different aspects. For this reason, the classical two-step method is compared to more recent derivative-based methods (Modifier Adaptation, Integrated System Optimization and Parameter estimation, and Sufficient Conditions of Feasibility and Optimality) using a Monte Carlo methodology. The results of this comparison show that the classical RTO method is consistent, provided that the model is flexible enough to represent the process topology, the parameter estimation method is appropriate to handle the measurement noise characteristics, and a method to improve the sample information quality is applied. At each iteration, the RTO methodology updates some key parameters of the model; identifiability issues caused by lack of measurements and measurement noise may arise at this stage, resulting in poor prediction ability. Therefore, four different parameter estimation approaches (Rotational Discrimination, Automatic Selection and Parameter estimation, Reparametrization via Differential Geometry and classical nonlinear Least Squares) are evaluated with respect to their prediction accuracy, robustness and speed. The results show that the Rotational Discrimination method is the most suitable to be implemented in an RTO framework, since it requires less a priori information, is simple to implement and avoids the overfitting caused by the Least Squares method. The third RTO drawback discussed in the present thesis is the low frequency of set-point updates, which increases the period in which the process operates at suboptimal conditions. An alternative to handle this problem is proposed in this thesis, integrating classic RTO and Self-Optimizing Control (SOC) using a new Model Predictive Control strategy. The new approach demonstrates that it is possible to reduce the problem of infrequent set-point updates, improving the economic performance. Finally, the practical aspects of the RTO implementation are examined in an industrial case study, a Vapor Recompression Distillation (VRD) process located at the Paulínia refinery of Petrobras. The conclusions of this study suggest that the model parameters are successfully estimated by the Rotational Discrimination method; that the RTO is able to improve the process profit by about 3%, equivalent to 2 million dollars per year; and that the integration of SOC and RTO may be an interesting control alternative for the VRD process.
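A minimal sketch of the classical two-step RTO cycle described above, with a hypothetical toy model, measurements and economic function; SciPy stands in for the estimation and optimization solvers:

```python
import numpy as np
from scipy.optimize import least_squares, minimize

def model(u, theta):
    """Hypothetical steady-state model: inputs u, parameters theta -> predicted outputs."""
    return np.array([theta[0] * u[0] + theta[1] * u[1] ** 2,
                     theta[1] * u[0] * u[1]])

def profit(u, theta):
    """Hypothetical economic objective (to be maximized)."""
    y = model(u, theta)
    return 5.0 * y[0] - 2.0 * y[1] - 1.0 * np.sum(u)

def rto_step(u_current, y_measured, theta_guess):
    # Step 1: parameter estimation -- fit the model parameters to plant measurements.
    fit = least_squares(lambda th: model(u_current, th) - y_measured, theta_guess)
    theta_hat = fit.x
    # Step 2: economic optimization -- find the set points that maximize profit
    # subject to the updated model (bounds stand in for process constraints).
    opt = minimize(lambda u: -profit(u, theta_hat), u_current,
                   bounds=[(0.1, 10.0), (0.1, 10.0)])
    return opt.x, theta_hat
```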
Abstract:
One of the main challenges to be addressed in text summarization concerns the detection of redundant information. This paper presents a detailed analysis of three methods for achieving this goal. The proposed methods rely on different levels of language analysis: lexical, syntactic and semantic. Moreover, they are also analyzed for detecting relevance in texts. The results show that semantic-based methods are able to detect up to 90% of the redundancy, compared to only 19% for lexical-based ones. This is also reflected in the quality of the generated summaries, with better summaries being obtained when syntactic- or semantic-based approaches are employed to remove redundancy.
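As an illustration of the lexical level only (not the paper's exact method), pairwise sentence similarity over TF-IDF vectors can flag redundant sentences:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The committee approved the new budget on Monday.",
    "On Monday, the new budget was approved by the committee.",
    "Heavy rain is expected over the weekend.",
]

# Lexical redundancy: sentences whose TF-IDF vectors are nearly parallel
# are likely to convey the same content.
tfidf = TfidfVectorizer().fit_transform(sentences)
sim = cosine_similarity(tfidf)

redundant_pairs = [(i, j) for i in range(len(sentences))
                   for j in range(i + 1, len(sentences)) if sim[i, j] > 0.5]
print(redundant_pairs)   # expected to contain (0, 1)
```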
Abstract:
A MATLAB-based computer code has been developed for the simultaneous wavelet analysis and filtering of several environmental time series, particularly focused on the analysis of cave monitoring data. The continuous wavelet transform, the discrete wavelet transform and the discrete wavelet packet transform have been implemented to provide a fast and precise time–period examination of the time series at different period bands. Moreover, statistical methods to examine the relation between two signals have been included. Finally, entropy-of-curves and spline-based methods have also been developed for segmenting and modeling the analyzed time series. All these methods together provide a user-friendly and fast program for environmental signal analysis, with useful, practical and understandable results.
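A minimal Python sketch of the kind of discrete wavelet decomposition and period-band filtering described (PyWavelets here, not the MATLAB code itself; the monitoring series is synthetic):

```python
import numpy as np
import pywt

# Synthetic stand-in for a monitoring series, e.g. hourly cave air temperature
# over one year, with a daily cycle plus noise.
t = np.arange(24 * 365)
signal = 0.5 * np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(t.size)

# Multi-level discrete wavelet transform separates the series into period bands;
# reconstructing from selected coefficients acts as a band filter.
coeffs = pywt.wavedec(signal, "db4", level=5)

# Keep only the coarsest approximation (periods longer than ~32 h here),
# zeroing the detail bands, to recover the slow trend without the daily cycle.
approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
trend = pywt.waverec(approx_only, "db4")
```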
Abstract:
Since the beginning of 3D computer vision, it has been necessary to use techniques that reduce the data to make it tractable while preserving the important aspects of the scene. Currently, with the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second, this has become even more relevant. Many applications make use of these sensors and need a preprocessing step to downsample the data in order to either reduce the processing time or improve the data (e.g., reducing noise or enhancing the important features). In this paper, we present a comparison of different downsampling techniques that are based on different principles. Concretely, five downsampling methods are included: a bilinear-based method, a normal-based one, a color-based one, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of the downsampling in a real application, a 3D non-rigid registration is performed with the sampled data. From the experimentation we can conclude that, depending on the purpose of the application, some kernels of the sampling methods can drastically improve the results. Bilinear- and GNG-based methods provide homogeneous point clouds, whereas color-based and normal-based methods provide datasets with a higher density of points in areas with specific features. In the non-rigid application, if a color-based sampled point cloud is used, it is possible to properly register two datasets in cases where intensity data are relevant in the model, outperforming the results obtained when only a homogeneous sampling is used.
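A minimal sketch (hypothetical arrays, plain NumPy, not the paper's kernels) of the idea behind feature-weighted downsampling: a homogeneous sampling draws points uniformly, whereas a color-based variant keeps more points where the color deviates from the bulk of the cloud:

```python
import numpy as np

# Hypothetical RGB-D data: N points with xyz coordinates and RGB colors in [0, 1].
points = np.random.rand(10000, 3)
colors = np.random.rand(10000, 3)

def homogeneous_sample(n_points, k):
    # Uniform random subsampling: every point has the same chance of being kept.
    return np.random.choice(n_points, size=k, replace=False)

def color_weighted_sample(colors, k):
    # Weight each point by how far its color lies from the global mean color --
    # a crude proxy for points belonging to distinctive, "colorful" feature areas.
    deviation = np.linalg.norm(colors - colors.mean(axis=0), axis=1)
    weights = deviation / deviation.sum()
    return np.random.choice(len(colors), size=k, replace=False, p=weights)

sparse_uniform = points[homogeneous_sample(len(points), 1000)]
sparse_color = points[color_weighted_sample(colors, 1000)]
```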