951 results for "vector auto-regressive model"
Abstract:
This paper presents a model of the Stokes emission vector from the ocean surface. The ocean surface is described as an ensemble of facets with Cox and Munk's (1954) Gram-Charlier slope distribution. The study discusses the impact of different upwind and crosswind RMS slopes, skewness, peakedness, foam-cover models and atmospheric effects on the azimuthal variation of the Stokes vector, as well as the limitations of the model. Simulation results compare favorably, in both mean value and azimuthal dependence, with SSM/I data at a 53° incidence angle and with JPL's WINDRAD measurements at incidence angles from 30° to 65° and wind speeds from 2.5 to 11 m/s.
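The facet-slope statistics underlying such a model can be sketched as follows. This is a minimal illustration using the widely quoted Cox and Munk (1954) clean-surface fit for the total mean-square slope; the isotropic Gaussian density shown here deliberately omits the Gram-Charlier skewness and peakedness corrections and the upwind/crosswind split discussed in the abstract.

```python
import math

def cox_munk_mss(wind_speed):
    """Total mean-square slope for a clean surface, Cox & Munk (1954) fit:
    sigma^2 = 0.003 + 5.12e-3 * U, with U the wind speed in m/s."""
    return 0.003 + 5.12e-3 * wind_speed

def gaussian_slope_pdf(sx, sy, wind_speed):
    """Isotropic Gaussian facet-slope density as a baseline; the full
    model adds Gram-Charlier skewness/peakedness terms and distinct
    upwind and crosswind variances."""
    mss = cox_munk_mss(wind_speed)
    return math.exp(-(sx ** 2 + sy ** 2) / mss) / (math.pi * mss)
```

For example, at 10 m/s the clean-surface fit gives a total mean-square slope of 0.0542, and the slope density peaks at zero slope, as expected for a wind-roughened but unskewed surface.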
Abstract:
In this study we use a kernel-based regression technique, Support Vector Machine Regression (SVMReg), to predict the melting point of drug-like compounds from topological descriptors, topological charge indices, connectivity indices and 2D autocorrelations. The machine learning model was designed, trained and tested on a dataset of 100 compounds, and an SVMReg model with an RBF kernel was found to predict the melting point with a mean absolute error of 15.5854 and a root mean squared error of 19.7576.
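The modeling setup described above can be sketched with scikit-learn's `SVR`. The 100-compound dataset and the actual descriptor values are not reproduced here, so synthetic stand-in features are used; kernel and regularization settings are illustrative assumptions, not the authors' tuned values.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Synthetic stand-in: 100 "compounds" with 8 hypothetical descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = 150 + (X @ rng.normal(size=8)) * 10 + rng.normal(scale=5, size=100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVR as in the abstract; descriptors are standardized first,
# which is standard practice for kernel methods.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

mae = mean_absolute_error(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
```

On the real descriptor set one would additionally cross-validate `C`, `epsilon` and the RBF width before quoting errors comparable to those reported.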
Abstract:
Support Vector Machine Regression (SVMR) is a regression technique recently introduced by V. Vapnik and his collaborators (Vapnik, 1995; Vapnik, Golowich and Smola, 1996). In SVMR the goodness of fit is measured not by the usual quadratic loss function (the mean square error) but by a different loss function, Vapnik's $\epsilon$-insensitive loss function, which is similar to the "robust" loss functions introduced by Huber (Huber, 1981). The quadratic loss function is well justified under the assumption of Gaussian additive noise. However, the noise model underlying the choice of Vapnik's loss function is less clear. In this paper the use of Vapnik's loss function is shown to be equivalent to a model of additive Gaussian noise in which the variance and mean of the Gaussian are themselves random variables. The probability distributions of the variance and mean are stated explicitly. While this work is presented in the framework of SVMR, it can be extended to justify non-quadratic loss functions in any Maximum Likelihood or Maximum A Posteriori approach. It applies not only to Vapnik's loss function but to a much broader class of loss functions.
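The $\epsilon$-insensitive loss contrasted with the quadratic loss above can be written down directly; this is the standard textbook definition, not code from the paper.

```python
def eps_insensitive(residual, eps=0.1):
    """Vapnik's epsilon-insensitive loss: zero inside the tube |r| <= eps,
    then linear in |r|, like Huber's robust loss far from the origin."""
    return max(0.0, abs(residual) - eps)

def quadratic(residual):
    """The usual quadratic loss, justified by Gaussian additive noise."""
    return residual ** 2
```

Residuals inside the tube incur no penalty at all, which is the feature whose probabilistic interpretation (a Gaussian with random mean and variance) the paper derives.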
Abstract:
This article belongs to a section of the journal devoted to social psychology.
Abstract:
This paper proposes a novel way to combine different observation models in a particle filter framework. This so-called auto-adjustable observation model enhances particle filter accuracy when the tracked objects overlap, without imposing a large runtime penalty on the whole tracking system. The approach has been tested in two important real-world situations related to animal behavior: mice and larvae tracking. The proposal was compared with several state-of-the-art approaches, and the results show that, on the datasets tested, a good trade-off between accuracy and runtime can be achieved using an auto-adjustable observation model. (C) 2009 Elsevier B.V. All rights reserved.
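One simple way such a combination of observation models can look is sketched below. The paper's exact weighting scheme is not reproduced here; this is only an assumed convex blend of two hypothetical likelihood functions, with a made-up rule that shifts weight toward the more occlusion-robust model as target overlap grows.

```python
def blended_likelihood(lik_appearance, lik_shape, alpha):
    """Convex combination of two observation models; an auto-adjustable
    scheme raises alpha (weight of the occlusion-robust model) when the
    tracker detects overlapping targets. Both inputs are per-particle
    likelihoods in [0, 1]."""
    return alpha * lik_shape + (1.0 - alpha) * lik_appearance

def adjust_alpha(overlap_ratio, base=0.2, gain=0.8):
    """Hypothetical adjustment rule: more bounding-box overlap -> more
    weight on the shape model, clipped to [0, 1]."""
    return min(1.0, base + gain * overlap_ratio)
```

In a particle filter, `blended_likelihood` would replace the single observation likelihood in the weight-update step, with `alpha` recomputed once per frame.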
Abstract:
Recently, two international standards organizations, ISO and OGC, have carried out standardization work for GIS. Current standardization work for providing interoperability among GIS databases focuses on the design of open interfaces, but it has not considered procedures and methods for designing river geospatial data. Consequently, river geospatial data has its own model, and when the data are shared through open interfaces among heterogeneous GIS databases, differences between models result in the loss of information. In this study a plan was suggested both to respond to these changes in the information environment and to provide a future Smart River-based river information service by understanding the current state of the river geospatial data model and by improving and redesigning the database. Primary and foreign keys, which distinguish attribute information and entity linkages, were therefore redefined to increase usability. The construction of the attribute-information tables and the entity-relationship diagram were redefined to redesign the linkages among tables from the perspective of a river standard database. In addition, this study sought to expand the current supplier-oriented operating system into a demand-oriented one by establishing efficient management of river-related information and a utilization system capable of adapting to changes in the river management paradigm.
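The primary/foreign-key linkage described above can be illustrated with an in-memory SQLite schema. The table and column names here are hypothetical stand-ins, not the study's actual river-standard schema; the point is only how a redefined key pair ties an attribute table to its river entity.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only with this pragma

# Hypothetical river-standard tables: a river entity table and an
# attribute table linked by a primary/foreign key pair.
conn.execute("""
    CREATE TABLE river (
        river_id TEXT PRIMARY KEY,   -- standardized river identifier
        name     TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE river_facility (
        facility_id TEXT PRIMARY KEY,
        river_id    TEXT NOT NULL REFERENCES river(river_id),
        kind        TEXT
    )""")
conn.execute("INSERT INTO river VALUES ('R001', 'Han River')")
conn.execute("INSERT INTO river_facility VALUES ('F001', 'R001', 'levee')")
```

With the foreign key in place, attribute rows that reference a nonexistent river are rejected, which is exactly the kind of linkage integrity a redesigned river standard database relies on.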
Abstract:
The behavior of the non-perturbative parts of the isovector-vector and isovector and isosinglet axial-vector correlators at Euclidean momenta is studied in the framework of a covariant chiral quark model with non-local quark-quark interactions. The gauge covariance is ensured with the help of the P-exponents, with the corresponding modification of the quark-current interaction vertices taken into account. The low- and high-momentum behavior of the correlators is compared with chiral perturbation theory and with the QCD operator product expansion, respectively. The V−A combination of the correlators obtained in the model reproduces quantitatively the ALEPH and OPAL data on hadronic tau decays, transformed into the Euclidean domain via dispersion relations. The predictions for the electromagnetic π± − π⁰ mass difference and for the pion electric polarizability are also in agreement with the experimental values. The topological susceptibility of the vacuum is evaluated as a function of the momentum, and its first moment is predicted to be χ′(0) ≈ (50 MeV)². In addition, the fulfillment of the Crewther theorem is demonstrated.
Abstract:
If the electroweak symmetry breaking is originated from a strongly coupled sector, as for instance in composite Higgs models, the Higgs boson couplings can deviate from their Standard Model values. In such cases, at sufficiently high energies there could occur an onset of multiple Higgs boson and longitudinally polarised electroweak gauge boson (V_L) production. We study the sensitivity to anomalous Higgs couplings in inelastic processes with 3 and 4 particles (either Higgs bosons or V_L's) in the final state. We show that, due to the more severe cancellations in the corresponding amplitudes as compared to the usual 2 → 2 processes, large enhancements with respect to the Standard Model can arise even for small modifications of the Higgs couplings. In particular, we find that triple Higgs production provides the best multiparticle channel to look for these deviations. We briefly explore the consequences of multiparticle production at the LHC. © 2013 SISSA.
Abstract:
High-throughput sequencing (HTS) provides new research opportunities for work on non-model organisms, such as differential expression studies between populations exposed to different environmental conditions. However, such transcriptomic studies first require the production of a reference assembly, and the choice of sampling procedure, sequencing strategy and assembly workflow is crucial. To develop a reliable reference transcriptome for Triatoma brasiliensis, the major Chagas disease vector in Northeastern Brazil, different de novo assembly protocols were generated using various datasets and software. Both 454 and Illumina sequencing technologies were applied to RNA extracted from antennae and mouthparts of single or pooled individuals. The 454 library yielded 278 Mb. Fifteen Illumina libraries were constructed and yielded nearly 360 million RNA-seq single reads and 46 million RNA-seq paired-end reads, for nearly 45 Gb. For the 454 reads we used three assemblers, Newbler, CAP3 and/or MIRA, and for the Illumina reads, the Trinity assembler. Ten assembly workflows were compared using these programs separately or in combination. To compare the assemblies obtained, quantitative and qualitative criteria were used, including contig length, N50, contig number and the percentage of chimeric contigs. Completeness of the assemblies was estimated using the CEGMA pipeline. The best assembly (57,657 contigs, completeness of 80 %, < 1 % chimeric contigs) was a hybrid assembly, leading us to recommend (1) a single individual with large representation of biological tissues, (2) merging both long reads and short paired-end Illumina reads, and (3) several assemblers, in order to combine the specific advantages of each.
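N50, one of the comparison criteria listed above, has a short standard definition that is worth stating explicitly; this is the textbook computation, not the assemblers' own code.

```python
def n50(contig_lengths):
    """N50: the length L such that contigs of length >= L together cover
    at least half of the total assembly length."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0
```

For instance, for contigs of lengths 2, 3, 4, 5 and 6 (total 20), the 6 kb and 5 kb contigs already cover 11 of 20, so the N50 is 5.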
Abstract:
Background & aims: The boundaries between the categories of body composition provided by vectorial analysis of bioimpedance are not well defined. In this paper, fuzzy set theory was used to model this uncertainty. Methods: An Italian database with 179 cases aged 18-70 years was divided randomly into developing (n = 20) and testing (n = 159) samples. Of the 159 registries in the testing sample, 99 contributed an unequivocal diagnosis. Resistance/height and reactance/height were the input variables in the model, and the output variables were the seven categories of body composition of vectorial analysis. For each case the linguistic model estimated the membership degree of each impedance category. To compare these results with the previously established diagnoses, the Kappa statistic was used. This required singling out one category from the output set of seven membership degrees; this procedure (defuzzification rule) established that the category with the highest membership degree should be taken as the most likely category for the case. Results: The fuzzy model showed a good fit to the development sample. Excellent agreement was achieved between the defuzzified impedance diagnoses and the clinical diagnoses in the testing sample (Kappa = 0.85, p < 0.001). Conclusions: The fuzzy linguistic model was found to be in good agreement with clinical diagnoses. If the whole model output is considered, information on the extent to which each BIVA category is present can better advise clinical practice, with an enlarged nosological framework and diverse therapeutic strategies. (C) 2012 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
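The defuzzification rule described above (take the category with the highest membership degree) is a one-liner. The category names and membership degrees below are hypothetical illustrations, not values from the Italian database.

```python
def defuzzify(memberships):
    """Defuzzification rule from the study: the BIVA category with the
    highest membership degree is taken as the diagnosis for the case."""
    return max(memberships, key=memberships.get)

# Hypothetical membership degrees for one case across seven categories.
degrees = {
    "normal": 0.10, "obese": 0.05, "athletic": 0.02, "lean": 0.08,
    "cachectic": 0.70, "dehydration": 0.03, "edema": 0.02,
}
```

Keeping the full `degrees` vector, rather than only the argmax, is exactly the "whole model output" the conclusions argue is clinically informative.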
Abstract:
Abstract Background To understand the molecular mechanisms underlying important biological processes, a detailed description of the networks of gene products involved is required. In order to define and understand such molecular networks, several statistical methods have been proposed in the literature to estimate gene regulatory networks from time-series microarray data. However, several problems still need to be overcome. Firstly, information flow needs to be inferred, in addition to the correlation between genes. Secondly, we usually try to identify large networks from a large number of genes (parameters) using a smaller number of microarray experiments (samples). Due to this situation, which is rather frequent in Bioinformatics, it is difficult to perform statistical tests using methods that model large gene-gene networks. In addition, most of the models are based on dimension reduction using clustering techniques, so the resulting network is not a gene-gene network but a module-module network. Here, we present the Sparse Vector Autoregressive (SVAR) model as a solution to these problems. Results We have applied the Sparse Vector Autoregressive model to estimate gene regulatory networks based on gene expression profiles obtained from time-series microarray experiments. Through extensive simulations, by applying the SVAR method to artificial regulatory networks, we show that SVAR can infer true positive edges even under conditions in which the number of samples is smaller than the number of genes. Moreover, it is possible to control for false positives, a significant advantage compared to other methods described in the literature, which are based on ranks or score functions. By applying SVAR to actual HeLa cell cycle gene expression data, we were able to identify well-known transcription factor targets.
Conclusion The proposed SVAR method is able to model gene regulatory networks in frequent situations in which the number of samples is lower than the number of genes, making it possible to naturally infer partial Granger causalities without any a priori information. In addition, we present a statistical test to control the false discovery rate, which was not previously possible using other gene regulatory network models.
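The idea of a sparse VAR fitted with fewer samples than genes can be sketched with an L1-penalized regression per gene. This is an assumption-laden illustration using scikit-learn's `Lasso` on simulated data with one planted regulatory edge, not the authors' exact SVAR estimator or false-discovery-rate test.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
p, T = 60, 50                        # more genes (p) than time points (T)
A_true = np.zeros((p, p))
A_true[0, 1] = 0.9                   # one planted edge: gene 1 -> gene 0

# Simulate a VAR(1) process: X[t] = A_true X[t-1] + noise.
X = np.zeros((T, p))
X[0] = rng.normal(size=p)
for t in range(1, T):
    X[t] = X[t - 1] @ A_true.T + rng.normal(size=p)

# Estimate the coefficient matrix row by row with an L1 penalty,
# regressing each gene's next value on all genes' current values.
A_hat = np.zeros((p, p))
for i in range(p):
    lasso = Lasso(alpha=0.2, max_iter=10000)
    lasso.fit(X[:-1], X[1:, i])
    A_hat[i] = lasso.coef_
```

A nonzero entry `A_hat[i, j]` is read as a directed (Granger-style) edge from gene `j` to gene `i`; the L1 penalty is what keeps the estimate sparse despite T < p.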