877 results for Acceleration data structure


Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

We examine the problem of combining Mexican inflation predictions provided by a biweekly survey of professional forecasters; consumer price inflation in Mexico is measured twice a month. We consider several combining methods and advocate the use of dimension reduction techniques, whose performance is compared with different benchmark methods, including the simplest average prediction. Missing values in the database are imputed by two different data-based methods, and the results obtained are essentially robust to the choice of imputation method. A preliminary analysis of the data, based on its panel structure, showed the potential usefulness of dimension reduction techniques for combining the experts' predictions. The main findings are: the first monthly predictions are best combined by means of the first principal component of the available predictions; the best second monthly prediction is obtained by taking the median prediction and is more accurate than the first one.
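
As an illustration of the two combination rules singled out in this abstract, here is a minimal sketch in Python; the data layout (one column per forecaster, missing values already imputed) and the normalization of the loadings are assumptions for illustration, not the authors' code.

```python
import numpy as np

def combine_first_pc(forecasts):
    """Combine expert forecasts via the first principal component.

    `forecasts` is a (n_periods, n_experts) array with missing values
    already imputed; this is an illustrative sketch only.
    """
    centered = forecasts - forecasts.mean(axis=0)
    # First right-singular vector = loadings of the first principal component.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    weights = np.abs(vt[0])
    weights = weights / weights.sum()      # normalize into a weighted average
    return forecasts @ weights

def combine_median(forecasts):
    """Benchmark: the cross-sectional median of the experts' forecasts."""
    return np.median(forecasts, axis=1)
```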

Relevance:

80.00%

Publisher:

Abstract:

Graduate Program in Agronomy (Energy in Agriculture) - FCA

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

In this work, a program capable of automatically counting vehicles on roads was developed. Vehicle counting currently relies on expensive techniques, which often involve manual counting or degradation of the pavement. The main motivation for this work was the importance of vehicle counting for traffic engineering: it is essential for analyzing road performance, making it possible to assess the need for traffic lights, roundabouts, access ways, and other measures capable of ensuring a continuous and safe flow of vehicles. The main objective was to apply a recently developed statistical segmentation technique, based on a nonparametric linear regression model, to solve the segmentation problem of the counting program. The program was built from three major modules: one for segmentation, one for tracking, and one for recognition. In the segmentation module, the statistical technique was combined with background-difference segmentation in order to optimize the process. The tracking module was based on Kalman filters and simple concepts of analytical geometry. The recognition module used Fourier descriptors and a multilayer perceptron neural network trained by backpropagation. In addition to the modules, a control logic was developed to interconnect them, based mainly on a data structure called state. The analysis of the results covered both the counting program and its component modules, and the individual analyses served to establish the parameter values of the techniques used. The final result was positive, since the statistical segmentation technique proved to be very useful and the developed program was able to count the vehicles belonging to the three goal..
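
The tracking module described above combines Kalman filtering with simple analytical geometry. The following minimal sketch shows a constant-velocity Kalman filter tracking a vehicle centroid between frames; the motion model and the noise covariances are illustrative assumptions, not the values used in the program.

```python
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter for one vehicle centroid (sketch)."""

    def __init__(self, x, y, dt=1.0):
        self.state = np.array([x, y, 0.0, 0.0])          # [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                        # state covariance
        self.F = np.array([[1, 0, dt, 0],                # constant-velocity model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # only the centroid is observed
        self.Q = np.eye(4) * 0.01                        # process noise (assumed)
        self.R = np.eye(2) * 1.0                         # measurement noise (assumed)

    def predict(self):
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]                            # predicted centroid

    def update(self, zx, zy):
        z = np.array([zx, zy])
        y = z - self.H @ self.state                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.state = self.state + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```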

Relevance:

80.00%

Publisher:

Abstract:

A total of 3,035 lactations of Holstein cows from four farms in the Southeast were used to assess the influence of the milk-yield data structure on genetic parameters. Four datasets with different structures were tested: weekly test-day records (CW) with 122,842 records, monthly (CM) with 30,883, bimonthly (CB) with 15,837, and quarterly (CQ) with 12,702. A random regression model was used, with additive genetic and permanent environment effects treated as random, and contemporary group (herd-year-month of test day) and age of cow (linear and quadratic) as fixed effects. Heritability estimates showed similar trends among the datasets analyzed, with the greatest similarity among CW, CM, and CB. The CB dataset produced estimates of genetic parameters with the same trend and similar magnitude as the CW and CM datasets, supporting the claim that the data structure did not influence the estimates of covariance components for CW, CM, and CB. Thus, milk recording could be carried out with a bimonthly (CB) structure.
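
A minimal sketch of how the coarser data structures can be derived from the weekly test-day records is shown below; the column names (cow, lactation, test_week) and the thinning factors are hypothetical, since the abstract does not describe the sampling procedure in detail.

```python
import pandas as pd

def thin_test_days(weekly: pd.DataFrame, every_n_weeks: int) -> pd.DataFrame:
    """Keep one test-day record every `every_n_weeks` weeks per lactation."""
    weekly = weekly.sort_values(["cow", "lactation", "test_week"])
    keep = weekly.groupby(["cow", "lactation"]).cumcount() % every_n_weeks == 0
    return weekly[keep]

# cw = weekly records already in hand; the others are approximate thinnings:
# cm = thin_test_days(cw, 4)    # ~monthly
# cb = thin_test_days(cw, 8)    # ~bimonthly
# cq = thin_test_days(cw, 13)   # ~quarterly
```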

Relevance:

80.00%

Publisher:

Abstract:

The linearity assumption in structural dynamics analysis is a severe practical limitation. Furthermore, when investigating mechanisms present in fighter aircraft, such as aeroelastic nonlinearity, friction, or gaps in wing-to-payload mounting interfaces, it is mandatory to use a nonlinear analysis technique. Among the different approaches that can be applied to this problem, the Volterra theory is an interesting strategy, since it is a generalization of the linear convolution: it represents the response of a nonlinear system as a sum of linear and nonlinear components. Thus, this paper uses the discrete-time version of the Volterra series, expanded with Kautz filters, to characterize the nonlinear dynamics of an F-16 aircraft. To illustrate the approach, a non-parametric model is identified and characterized using data obtained during a ground vibration test performed on an F-16 wing-to-payload mounting interface. Inputs of several amplitudes, applied through two shakers, are used to reveal softening nonlinearities present in the acceleration data. The results of the analysis show the capability of the Volterra series to give some insight into the nonlinear dynamics of the F-16 mounting interfaces. The main advantage of this approach is that it separates the linear and nonlinear contributions through the multiple convolutions with the Volterra kernels.
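
To make the separation of linear and nonlinear contributions concrete, here is a minimal sketch of a discrete-time Volterra series truncated at second order; the kernels `h1` and `h2` are placeholders, whereas in the paper they are expanded on Kautz filter bases and identified from the ground vibration test data.

```python
import numpy as np

def volterra_response(u, h1, h2):
    """Truncated discrete-time Volterra series (sketch).

    u  : input signal (N,)
    h1 : first-order kernel (M,)   -> linear convolution term
    h2 : second-order kernel (M, M) -> quadratic term
    """
    N, M = len(u), len(h1)
    y = np.zeros(N)
    for n in range(N):
        # Past M input samples, zero-padded at the start of the record.
        past = np.array([u[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        y_lin = h1 @ past                  # linear (first-order) contribution
        y_quad = past @ h2 @ past          # nonlinear (second-order) contribution
        y[n] = y_lin + y_quad
    return y
```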

Relevance:

80.00%

Publisher:

Abstract:

SPARQL Interpreter is one of the five components of the Triskel Architecture, a software architecture for a NoSQL database that aims to provide a solution to the Big Data problem in the semantic web. This component solves the problem of communication between the language and the engine, interpreting the queries issued against the storage in the SPARQL language and generating a data structure that the lower-level components can read and execute.
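
As a toy illustration of what "interpreting a SPARQL query into a data structure for the lower components" can look like (this is not the Triskel code; the class and function names are invented), consider:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TriplePattern:
    subject: str
    predicate: str
    obj: str

@dataclass
class QueryPlan:
    projection: List[str]          # variables to return
    patterns: List[TriplePattern]  # basic graph pattern for the engine

def interpret_basic_select(query: str) -> QueryPlan:
    """Very naive parser for a single-BGP SELECT query, for illustration only."""
    head, body = query.split("WHERE", 1)
    projection = [tok for tok in head.split() if tok.startswith("?")]
    body = body.strip().strip("{}").strip()
    patterns = []
    for part in filter(None, (p.strip() for p in body.split(" ."))):
        s, p, o = part.split(None, 2)
        patterns.append(TriplePattern(s, p, o))
    return QueryPlan(projection, patterns)

# Example:
# interpret_basic_select("SELECT ?name WHERE { ?person foaf:name ?name . }")
```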

Relevance:

80.00%

Publisher:

Abstract:

Nuclear Magnetic Resonance (NMR) is a branch of spectroscopy based on the fact that many atomic nuclei may be oriented by a strong magnetic field and will absorb radiofrequency radiation at characteristic frequencies. The parameters that can be measured on the resulting spectral lines (line positions, intensities, line widths, multiplicities, and transients in time-dependent experiments) can be interpreted in terms of molecular structure, conformation, molecular motion, and other rate processes. In this way, high-resolution (HR) NMR allows qualitative and quantitative analysis of samples in solution, in order to determine the structure of molecules in solution, among other purposes. In the past, high-field NMR spectroscopy was mainly concerned with the elucidation of chemical structure in solution, but today it is emerging as a powerful exploratory tool for probing biochemical and physical processes, and it represents a versatile tool for the analysis of foods. Many NMR studies have been reported in the literature on different types of food, such as wine, olive oil, coffee, fruit juices, milk, meat, egg, starch granules, and flour, using different NMR techniques. Traditionally, univariate analytical methods have been used to explore spectroscopic data. Such methods measure or select a single descriptive variable from the whole spectrum, and in the end only that variable is analyzed. Applied to HR-NMR data, this univariate approach leads to several problems, due especially to the complexity of an NMR spectrum: the spectrum is composed of different signals belonging to different molecules, and the same molecule can be represented by several signals, which are generally strongly correlated. Univariate methods, in this case, take into account only one or a few variables, causing a loss of information. Thus, when dealing with complex samples like foodstuffs, univariate analysis of spectral data is not powerful enough. Spectra need to be considered as a whole, and analyzing them requires taking the entire data matrix into account: chemometric methods are designed to treat such multivariate data. Multivariate data analysis is used for a number of distinct purposes, and the aims can be divided into three main groups:
• data description (explorative modelling of the data structure of any generic n-dimensional data matrix, e.g. PCA);
• regression and prediction (PLS);
• classification and prediction of class membership for new samples (LDA, PLS-DA, and ECVA).
The aim of this PhD thesis was to verify the possibility of identifying and classifying plants or foodstuffs into different classes, based on the concerted variation in metabolite levels detected by NMR spectra, using multivariate data analysis as a tool to interpret the NMR information. It is important to underline that the results obtained are useful for pointing out the metabolic consequences of a specific modification of foodstuffs, avoiding the need for a targeted analysis of the different metabolites. The data analysis is performed by applying chemometric multivariate techniques to the acquired NMR spectra. The research work presented in this thesis is the result of a three-year PhD study.
This thesis reports the main results obtained from two main activities: A1) evaluation of a data pre-processing system in order to minimize unwanted sources of variation due to different instrumental set-ups, manual spectra processing, and sample preparation artefacts; A2) application of multivariate chemometric models in data analysis.
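
A minimal sketch of the exploratory step mentioned above, PCA on the full matrix of NMR spectra, is given below; the data matrix, the number of buckets, and the mean-centering-only preprocessing are placeholders rather than the thesis workflow.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = samples, columns = chemical-shift buckets of the 1H-NMR spectra.
# `spectra` is a stand-in for the real data matrix.
spectra = np.random.rand(40, 250)

X = StandardScaler(with_std=False).fit_transform(spectra)   # mean-centering only
pca = PCA(n_components=3)
scores = pca.fit_transform(X)        # sample scores used for exploration
loadings = pca.components_           # which buckets drive each principal component
print(pca.explained_variance_ratio_)
```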

Relevance:

80.00%

Publisher:

Abstract:

The Aerodyne Time-of-Flight Aerosol Mass Spectrometer (ToF-AMS) is a further development of the Aerodyne Aerosol Mass Spectrometer (Q-AMS), which is well characterized and deployed worldwide. Both instruments use an aerodynamic lens, aerodynamic particle sizing, thermal vaporization, and electron-impact ionization. In contrast to the Q-AMS, where a quadrupole mass spectrometer is used to analyze the ions, the ToF-AMS employs a time-of-flight mass spectrometer. In this work, laboratory experiments and field campaigns show that the ToF-AMS is suitable for the quantitative measurement of the chemical composition of aerosol particles with high time and size resolution. In addition, a complete scheme for ToF-AMS data analysis is presented, developed to obtain quantitative and meaningful results from the raw data recorded in both field campaigns and laboratory experiments. This scheme is based on the characterization experiments carried out within this work; it comprises the corrections that must be applied and the calibrations that must be performed to extract reliable results from the raw data. Considerable effort was also invested in the development of a reliable and user-friendly data analysis program, which can be used for automatic and systematic ToF-AMS data analysis and correction.

Relevance:

80.00%

Publisher:

Abstract:

The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with the implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution with other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been extended first to asymmetric multiprocessors, which are subject to major restrictions such as the lack of support for task migration, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges, one of which was timekeeping. In this regard, a further contribution is a novel data structure called the addressable binary heap (ABH). The ABH, conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
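
To illustrate the addressability idea behind the ABH, the following sketch implements a binary min-heap in which each inserted timer deadline receives a handle that tracks its position, so the timer can be cancelled in O(log n) without searching. This is an array-based illustration; the dissertation's ABH is pointer-based, and its exact interface is not reproduced here.

```python
class Handle:
    """Returned to the caller on insertion; tracks the element's heap position."""
    __slots__ = ("key", "pos")
    def __init__(self, key, pos):
        self.key, self.pos = key, pos

class AddressableHeap:
    def __init__(self):
        self.heap = []                          # list of Handle objects

    def _swap(self, i, j):
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.heap[i].pos, self.heap[j].pos = i, j

    def _sift_up(self, i):
        while i > 0 and self.heap[i].key < self.heap[(i - 1) // 2].key:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            smallest = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < n and self.heap[c].key < self.heap[smallest].key:
                    smallest = c
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest

    def push(self, key):
        h = Handle(key, len(self.heap))
        self.heap.append(h)
        self._sift_up(h.pos)
        return h                                # caller keeps the handle

    def remove(self, h):
        """Cancel the timer referenced by handle `h` in O(log n)."""
        i = h.pos
        self._swap(i, len(self.heap) - 1)
        self.heap.pop()
        if i < len(self.heap):
            self._sift_up(i)
            self._sift_down(i)
```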

Relevance:

80.00%

Publisher:

Abstract:

This dissertation has three separate parts: the first part deals with general pedigree association testing incorporating continuous covariates; the second part deals with association tests under population stratification using conditional likelihood tests; the third part deals with genome-wide association studies based on the real rheumatoid arthritis (RA) data sets from Genetic Analysis Workshop 16 (GAW16) Problem 1. Many statistical tests have been developed to test linkage and association using either case-control status or phenotype covariates in family data, separately. Such univariate analyses may not use all the information coming from the family members in practical studies. Moreover, complex human diseases do not have a clear inheritance pattern: genes may interact or act independently. In Part I, the newly proposed approach, MPDT, focuses on using both the case-control information and the phenotype covariates. This approach can be applied to detect multiple marker effects. Built on two existing popular statistics in family studies, for case-control and quantitative traits respectively, the new approach can be used on data sets with a simple family structure as well as on general pedigrees. The combined statistic is calculated from the two statistics; a permutation procedure is applied to assess the p-value, with Bonferroni adjustment for multiple markers. We use simulation studies to evaluate the type I error rates and the power of the proposed approach. Our results show that the combined test, using both case-control information and phenotype covariates, not only has the correct type I error rates but is also more powerful than the existing methods. For multiple marker interactions, our proposed method is also very powerful. Selective genotyping is an economical strategy for detecting and mapping quantitative trait loci in the genetic dissection of complex disease. When the samples arise from different ethnic groups or an admixed population, all existing selective genotyping methods may result in spurious association due to different ancestry distributions. The problem can be more serious when the sample size is large, a general requirement for obtaining sufficient power to detect modest genetic effects for most complex traits. In Part II, I describe a useful strategy for selective genotyping in the presence of population stratification. Our procedure uses a principal-component-based approach to eliminate any effect of population stratification. We evaluate the performance of the procedure using both simulated data from an earlier study and HapMap data sets, under a variety of population admixture models generated from empirical data. The rheumatoid arthritis data set of GAW16 Problem 1 contains one binary trait and two continuous traits: RA status, AntiCCP, and IgM. To allow for multiple traits, we propose a set of SNP-level F statistics, based on the concept of multiple correlation, to measure the genetic association between multiple trait values and SNP-specific genotypic scores, and we obtain their null distributions. We then perform six genome-wide association analyses using the novel one- and two-stage approaches, based on single, double, and triple traits.
Incorporating all six analyses, we successfully validate the SNPs already identified in the literature as responsible for rheumatoid arthritis and detect additional disease-susceptibility SNPs for future follow-up studies. Except for chromosomes 13 and 18, every chromosome is found to harbour genetic regions susceptible for rheumatoid arthritis or related diseases, e.g., lupus erythematosus. This topic is discussed in Part III.
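
A minimal sketch of an SNP-level F statistic built from the squared multiple correlation between a SNP's genotypic score and several traits is given below; the regression direction, variable names, and degrees of freedom are assumptions for illustration, not the thesis formulas.

```python
import numpy as np

def snp_f_statistic(genotype, traits):
    """F statistic from the multiple correlation of traits with a SNP (sketch).

    genotype : (n,) genotypic scores (e.g. 0/1/2 allele counts)
    traits   : (n, q) matrix of trait values for the same n subjects
    """
    n, q = traits.shape
    X = np.column_stack([np.ones(n), traits])          # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, genotype, rcond=None)
    fitted = X @ beta
    ss_res = np.sum((genotype - fitted) ** 2)
    ss_tot = np.sum((genotype - genotype.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                          # squared multiple correlation
    return (r2 / q) / ((1.0 - r2) / (n - q - 1))        # F with (q, n-q-1) d.o.f.
```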

Relevance:

80.00%

Publisher:

Abstract:

Long-term surface ECG is routinely used to diagnose paroxysmal arrhythmias. However, this method only provides information about the heart's electrical activity. To address this limitation, we investigated a novel esophageal catheter featuring synchronous esophageal ECG and acceleration measurements, the latter being a record of the heart's mechanical activity. The acceleration data were quantified in a small study and successfully linked to the activity sequences of the heart in all subjects. The acceleration signals were additionally transformed into motion, and the extracted cardiac motion proved to be a valid reference input for an adaptive filter capable of removing relevant baseline wander from the recorded esophageal ECGs. Taking both capabilities into account, the proposed recorder might be a promising tool for future long-term heart monitoring.
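
A minimal sketch of the adaptive cancellation scheme hinted at here follows: a motion-derived reference is filtered with an LMS adaptive filter to estimate the baseline wander, which is then subtracted from the esophageal ECG. The filter order and step size are assumptions, and the paper's exact configuration may differ.

```python
import numpy as np

def lms_cancel(ecg, reference, order=16, mu=0.01):
    """LMS adaptive noise cancellation of baseline wander (sketch).

    ecg       : esophageal ECG contaminated by baseline wander
    reference : motion-derived signal correlated with the wander
    """
    w = np.zeros(order)                     # adaptive filter weights
    cleaned = np.zeros(len(ecg))
    for n in range(order, len(ecg)):
        x = reference[n - order:n][::-1]    # most recent reference samples
        wander_est = w @ x                  # estimated baseline wander
        e = ecg[n] - wander_est             # error = wander-reduced ECG sample
        w += 2 * mu * e * x                 # LMS weight update
        cleaned[n] = e
    return cleaned
```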