963 results for "Processor power estimation"


Relevance: 20.00%

Abstract:

Precise estimation of propagation parameters in precipitation media is of interest for improving the performance of communications systems and in remote sensing applications. In this paper, we present maximum-likelihood estimators of specific attenuation and specific differential phase in rain. The model used for obtaining these estimators assumes coherent propagation, reflection symmetry of the medium, and Gaussian statistics of the scattering matrix measurements. No assumptions about the microphysical properties of the medium are needed. The performance of the estimators is evaluated on simulated data. Results show negligible estimator bias and variances close to the Cramér–Rao bounds.
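The closing claim — negligible bias and variance close to the Cramér–Rao bound — can be illustrated with a far simpler stand-in problem. The sketch below is a hypothetical Gaussian-mean example, not the paper's scattering model: the ML estimator of the mean of N(mu, sigma²) is the sample mean, and a Monte Carlo run compares its bias and variance with the bound sigma²/n.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in problem (NOT the paper's propagation model): the ML
# estimator of the mean of N(mu, sigma^2) is the sample mean; its variance
# should approach the Cramér–Rao bound sigma^2 / n.
mu, sigma, n, trials = 2.0, 1.5, 200, 5000

estimates = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)

bias = estimates.mean() - mu
variance = estimates.var()
crb = sigma ** 2 / n     # Cramér–Rao bound for the Gaussian mean

print(f"bias = {bias:+.4f}, variance = {variance:.5f}, CRB = {crb:.5f}")
```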

Relevance: 20.00%

Abstract:

Analysis of variance is commonly used in morphometry to ascertain differences in parameters between several populations. Failure to detect significant differences between populations (a type II error) may be due to suboptimal sampling and lead to erroneous conclusions; the concept of statistical power allows one to avoid such failures by means of adequate sampling. Several examples are given from the morphometry of the nervous system, showing the use of the power of a hierarchical analysis-of-variance test for choosing appropriate sample and subsample sizes. In the first case, neuronal densities in the human visual cortex, we find the number of observations to have little effect. For dendritic spine densities in the visual cortex of mice and humans, the effect is somewhat larger. A substantial effect is shown in our last example, dendritic segmental lengths in the monkey lateral geniculate nucleus. It is in the nature of the hierarchical model that sample size is always more important than subsample size. The relative weight to be attributed to subsample size thus depends on the magnitude of the between-observation variance relative to the between-individual variance.
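The hierarchical point — individuals matter more than observations per individual — can be sketched with a Monte Carlo power calculation. Everything below (effect size, variance components, and the approximate z-test on individual means) is an illustrative assumption, not the morphometric data of the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def power(n_ind, n_sub, delta, var_between=1.0, var_within=1.0, sims=2000):
    """Monte Carlo power of a two-group comparison in a hierarchical design
    with n_ind individuals per group and n_sub subsamples per individual,
    using an approximate z-test on individual means at alpha = 0.05."""
    # Each individual mean carries the between-individual variance plus the
    # within-individual variance shrunk by the number of subsamples.
    var_mean = var_between + var_within / n_sub
    se = np.sqrt(2.0 * var_mean / n_ind)
    hits = 0
    for _ in range(sims):
        g1 = rng.normal(0.0, np.sqrt(var_mean), n_ind).mean()
        g2 = rng.normal(delta, np.sqrt(var_mean), n_ind).mean()
        if abs(g2 - g1) / se > 1.96:
            hits += 1
    return hits / sims

# Same total of 80 observations per group, split two ways.
p_more_individuals = power(n_ind=20, n_sub=4, delta=0.8)
p_more_subsamples = power(n_ind=10, n_sub=8, delta=0.8)
print(p_more_individuals, p_more_subsamples)
```

With the same total number of observations, spending them on more individuals rather than more subsamples gives higher power, matching the hierarchical-model point above.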

Relevance: 20.00%

Abstract:

We evaluated the accuracy of skinfold thicknesses, BMI and waist circumference for the prediction of percentage body fat (PBF) in a representative sample of 372 Swiss children aged 6-13 years. PBF was measured using dual-energy X-ray absorptiometry. On the basis of a preliminary bootstrap selection of predictors, seven regression models were evaluated. All models included sex, age and pubertal stage plus one of the following predictors: (1) log-transformed triceps skinfold (logTSF); (2) logTSF and waist circumference; (3) log-transformed sum of triceps and subscapular skinfolds (logSF2); (4) log-transformed sum of triceps, biceps, subscapular and supra-iliac skinfolds (logSF4); (5) BMI; (6) waist circumference; (7) BMI and waist circumference. The adjusted determination coefficient (R² adj) and the root mean squared error (RMSE; kg) were calculated for each model. LogSF4 (R² adj 0.85; RMSE 2.35) and logSF2 (R² adj 0.82; RMSE 2.54) were similarly accurate at predicting PBF and superior to logTSF (R² adj 0.75; RMSE 3.02), logTSF combined with waist circumference (R² adj 0.78; RMSE 2.85), BMI (R² adj 0.62; RMSE 3.73), waist circumference (R² adj 0.58; RMSE 3.89), and BMI combined with waist circumference (R² adj 0.63; RMSE 3.66) (P < 0.001 for all values of R² adj). The finding that logSF4 was only modestly superior to logSF2 and that logTSF was better than BMI and waist circumference at predicting PBF has important implications for paediatric epidemiological studies aimed at disentangling the effect of body fat on health outcomes.
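The two reported metrics, adjusted R² and RMSE, are straightforward to compute from a fitted linear model. The sketch below uses synthetic data — the sample size matches the study, but the predictors, coefficients and noise level are invented, not the Swiss cohort:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the study data (values are NOT the Swiss cohort):
# predict percentage body fat from sex, age and a log-transformed skinfold.
n = 372
sex = rng.integers(0, 2, n)
age = rng.uniform(6, 13, n)
log_tsf = np.log(rng.uniform(5, 25, n))
pbf = 4.0 + 3.0 * sex - 0.2 * age + 8.0 * log_tsf + rng.normal(0, 3, n)

X = np.column_stack([np.ones(n), sex, age, log_tsf])
beta, *_ = np.linalg.lstsq(X, pbf, rcond=None)
resid = pbf - X @ beta

p = X.shape[1] - 1                               # predictors, excluding intercept
rss = (resid ** 2).sum()
tss = ((pbf - pbf.mean()) ** 2).sum()
r2 = 1 - rss / tss
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)    # adjusted R^2
rmse = np.sqrt(rss / n)                          # root mean squared error

print(f"R2_adj = {r2_adj:.3f}, RMSE = {rmse:.2f}")
```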

Relevance: 20.00%

Abstract:

This work proposes novel network analysis techniques for multivariate time series. We define the network of a multivariate time series as a graph where vertices denote the components of the process and edges denote nonzero long-run partial correlations. We then introduce a two-step LASSO procedure, called NETS, to estimate high-dimensional sparse long-run partial correlation networks. This approach is based on a VAR approximation of the process and allows us to decompose the long-run linkages into the contributions of the dynamic and contemporaneous dependence relations of the system. The large-sample properties of the estimator are analysed, and we establish conditions for consistent selection and estimation of the nonzero long-run partial correlations. The methodology is illustrated with an application to a panel of U.S. blue chips.
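The full two-step NETS procedure is beyond a short sketch, but its core ingredient — putting an edge wherever a LASSO coefficient survives the penalty — can be shown with plain neighbourhood selection on simulated data. This is a simplified, contemporaneous-only stand-in, not the VAR-based estimator; the data, penalty level and lasso solver are all illustrative assumptions:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent lasso: argmin_b (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(3)

# Simulated 4-variable system with one true linkage: x1 loads on x0.
n = 500
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + 0.3 * rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x0, x1, x2, x3])

# Neighbourhood selection: lasso-regress each series on the others and put
# an (undirected) edge wherever a coefficient survives the penalty.
p = X.shape[1]
adj = np.zeros((p, p), dtype=bool)
for i in range(p):
    others = [j for j in range(p) if j != i]
    b = lasso_cd(X[:, others], X[:, i], lam=0.15)
    for j, bj in zip(others, b):
        if abs(bj) > 1e-8:
            adj[i, j] = adj[j, i] = True

print(adj.astype(int))
```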

Relevance: 20.00%

Abstract:

Today a typical embedded system (e.g., a mobile phone) must deliver high performance to carry out tasks such as real-time encoding/decoding; it must consume little energy so it can run for hours or days on lightweight batteries; it must be flexible enough to integrate multiple applications and standards in a single device; and it must be designed and verified in a short time despite increasing complexity. Designers struggle against these adversities, which demand new innovations in architectures and design methodologies. Coarse-grained reconfigurable architectures (CGRAs) are emerging as potential candidates for overcoming all these difficulties. Various architectures of this kind have been presented in recent years. Their coarse granularity greatly reduces delay, area, power consumption and configuration time compared with FPGAs. On the other hand, compared with traditional coarse-grained programmable processors, their abundant computational resources allow them to achieve a high degree of parallelism and efficiency. Nevertheless, existing CGRAs have seen little adoption, mainly because of the great difficulty of programming such complex architectures. ADRES is a new CGRA designed by the Interuniversity Micro-Electronics Center (IMEC). It combines a very-long-instruction-word (VLIW) processor and a coarse-grained array, offering two different options in the same physical device. Among its advantages are high performance, little communication overhead and ease of programming. Finally, ADRES is a template rather than a concrete architecture: with the help of the DRESC (Dynamically Reconfigurable Embedded System Compiler) compiler, it is possible to find better or application-specific architectures. This work presents the implementation of an MPEG-4 encoder for ADRES, showing how the code evolves towards a good implementation for a given architecture.
The main features of ADRES and its compiler (DRESC) are also presented. The goals are to minimise the number of cycles (time) needed to run the MPEG-4 encoder and to examine the various difficulties of working in the ADRES environment. The results show that cycles are reduced by 67% between the initial and final code in VLIW mode, and by 84% between the initial code in VLIW mode and the final code in CGA mode.

Relevance: 20.00%

Abstract:

BACKGROUND: With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. FOCUS: The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. DATA: We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. METHODS: We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, are performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data.
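The nested cross-validation scheme described above (an inner loop for tuning, an outer loop for the performance estimate) can be sketched as follows. The data, the minimal kNN implementation and the candidate k values are all illustrative assumptions, and batch effects are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(4)

def knn_predict(Xtr, ytr, Xte, k):
    """Majority-vote k-nearest-neighbours by Euclidean distance."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return (ytr[idx].mean(axis=1) > 0.5).astype(int)

# Toy two-group data: a mean shift in the first 5 of 20 features.
n = 120
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, 20))
X[y == 1, :5] += 1.5
perm = rng.permutation(n)
X, y = X[perm], y[perm]

outer_folds = np.array_split(np.arange(n), 5)
outer_acc = []
for i, test_idx in enumerate(outer_folds):
    train_idx = np.concatenate([f for j, f in enumerate(outer_folds) if j != i])
    # Inner CV on the training part only: tune k without touching test data.
    best_k, best_acc = 1, -1.0
    for k in (1, 3, 5, 7):
        inner_folds = np.array_split(train_idx, 4)
        accs = []
        for m, val_idx in enumerate(inner_folds):
            tr = np.concatenate([f for q, f in enumerate(inner_folds) if q != m])
            pred = knn_predict(X[tr], y[tr], X[val_idx], k)
            accs.append((pred == y[val_idx]).mean())
        if np.mean(accs) > best_acc:
            best_acc, best_k = np.mean(accs), k
    # Outer estimate with the tuned k, on data never seen during tuning.
    pred = knn_predict(X[train_idx], y[train_idx], X[test_idx], best_k)
    outer_acc.append((pred == y[test_idx]).mean())

print(f"nested-CV accuracy estimate: {np.mean(outer_acc):.2f}")
```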

Relevance: 20.00%

Abstract:

The relief of the seafloor is an important source of data for many scientists. In this paper we present an optical system for underwater 3D reconstruction. The system is formed by three cameras that take images synchronously at a constant frame rate. We use the images taken by these cameras to compute dense 3D reconstructions, and we use bundle adjustment to estimate the motion of the trinocular rig. Given the path followed by the system, we obtain a dense map of the observed scene by registering the different dense local reconstructions into a single, larger one.
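The final registration step — mapping each dense local reconstruction into a common frame using the poses recovered by bundle adjustment — amounts to applying one rigid transform per view. A minimal sketch with synthetic data (the pose, point cloud and frame convention are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

def to_global(points, R, t):
    """Map a local 3-D point cloud into the global frame given the camera
    pose (rotation R, translation t), e.g. as estimated by bundle adjustment."""
    return points @ R.T + t

# One hypothetical seafloor patch observed from two poses along the path.
patch = rng.uniform(-1.0, 1.0, size=(100, 3))

theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.5, -0.2, 0.1])

local_a = patch                      # frame A coincides with the global frame
local_b = (patch - t) @ R            # the same patch seen from pose (R, t)

merged = np.vstack([to_global(local_a, np.eye(3), np.zeros(3)),
                    to_global(local_b, R, t)])
# After registration the two copies of the patch should coincide.
err = np.abs(merged[:100] - merged[100:]).max()
print(f"max registration error: {err:.2e}")
```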

Relevance: 20.00%

Abstract:

Every year, flash floods cause economic losses and major disruption to daily activity in the Catalonia region (NE Spain); sometimes catastrophic damage and casualties occur. When a long-term analysis of floods is undertaken, a question arises about the changing roles of vulnerability and hazard in the evolution of risk. This paper sets out to provide information to address this question, on the basis of an analysis of all the floods that have occurred in Barcelona county (Catalonia) since the 14th century, as well as the flooded area, urban evolution, impacts and the weather conditions for some of the most severe events. With this objective, historical floods have been identified and classified, and the flash floods among them characterised. Besides this, the main meteorological factors associated with recent flash floods in this city and neighbouring regions are well known. Rainfall trends that could explain the historical evolution of flood-hazard occurrence in this city have also been analysed. Finally, the influence of urban development on flood vulnerability has been examined. Barcelona was selected thanks to its long, continuous data series (daily rainfall since 1854, and one of the longest rainfall-rate series in Europe, since 1921) and the accurate historical archive information that is available (covering urban evolution since the Roman Empire). The evolution of flood occurrence shows oscillations in the early and late modern-age periods that can be attributed to climatic variability, evolution of the perception threshold and changes in vulnerability. A great increase in vulnerability can be assumed for the period 1850–1900.
Analysis of the time evolution of the Barcelona rainfall series (1854–2000) shows that no trend exists, although, owing to changes in urban planning, the impact of flash floods has altered over this time: the number of catastrophic flash floods has diminished, while the extraordinary ones have increased.
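The "no trend" conclusion for the 1854-2000 rainfall series corresponds to a standard trend test on an annual series. The sketch below applies a permutation test on the least-squares slope to synthetic trendless data; the mean, variability and test are illustrative assumptions, not the Barcelona series:

```python
import numpy as np

rng = np.random.default_rng(6)

def trend_pvalue(series, n_perm=2000):
    """Two-sided permutation test on the least-squares slope of a series:
    the p-value is the fraction of shuffled series whose slope is at least
    as extreme as the observed one."""
    t = np.arange(len(series))
    obs = abs(np.polyfit(t, series, 1)[0])
    null = np.array([abs(np.polyfit(t, rng.permutation(series), 1)[0])
                     for _ in range(n_perm)])
    return float((null >= obs).mean())

# Synthetic stand-in for an annual rainfall series (NOT the Barcelona data):
# interannual variability around a constant mean, i.e. no trend by construction.
years = 2000 - 1854 + 1
rain = rng.normal(600.0, 120.0, size=years)

p = trend_pvalue(rain)
print(f"permutation p-value for a linear trend: {p:.3f}")
```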

Relevance: 20.00%

Abstract:

Tripping is considered a major cause of falls in older people. Foot clearance (i.e., the height of the foot above the ground during the swing phase) could therefore be a key factor in better understanding the complex relationship between gait and falls. This paper presents a new method to estimate clearance using a foot-worn, wireless inertial sensor system. The method relies on computing foot orientation and trajectory by fusing the sensor signals, combined with the temporal detection of toe-off and heel-strike events. Based on a kinematic model that automatically estimates the sensor position relative to the foot, heel and toe trajectories are estimated. 2-D and 3-D models are presented with different solving approaches and validated against an optical motion-capture system on 12 healthy adults performing short walking trials at self-selected, slow, and fast speeds. Parameters corresponding to local minima and maxima of heel and toe clearance were extracted and showed accuracy ± precision of 4.1 ± 2.3 cm for maximal heel clearance and 1.3 ± 0.9 cm for minimal toe clearance compared to the reference. The system is lightweight, wireless, easy to wear and use, and provides a new and useful tool for routine clinical assessment of gait outside a dedicated laboratory.
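The core of a clearance estimate of this kind — double-integrating acceleration between toe-off and heel-strike, with zero-velocity updates at the gait events to cancel drift — can be sketched in one dimension. The sampling rate, swing duration and trajectory below are invented for illustration, not the paper's 3-D fusion pipeline:

```python
import numpy as np

fs = 500.0                     # sampling rate in Hz (assumed)
T = 0.5                        # swing-phase duration in s (assumed)
h = 0.05                       # true peak clearance: 5 cm (assumed)
t = np.arange(0.0, T, 1.0 / fs)

# Synthetic vertical motion of the swing phase, with zero velocity at both
# toe-off (t = 0) and heel-strike (t = T): z(t) = h * sin^2(pi * t / T).
acc = 2.0 * h * (np.pi / T) ** 2 * np.cos(2.0 * np.pi * t / T)  # z''(t)

# Strapdown integration: acceleration -> velocity -> height.
vel = np.cumsum(acc) / fs
# Zero-velocity update: force v = 0 at both gait events to remove drift.
vel -= np.linspace(vel[0], vel[-1], len(vel))
z = np.cumsum(vel) / fs
z -= z[0]

clearance = z.max()
print(f"estimated peak clearance: {100 * clearance:.2f} cm "
      f"(true {100 * h:.2f} cm)")
```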