19 results for Null Hypothesis
at Universidad Politécnica de Madrid
Abstract:
Dendritic spines establish most excitatory synapses in the brain and are located along helical paths in Purkinje cell dendrites, perhaps maximizing the probability of contacting different axons. To test whether spine helices also occur in the neocortex, we reconstructed >500 dendritic segments from adult human cortex obtained from autopsies. Using Fourier analysis and spatial statistics, we analyzed spine positions along apical and basal dendrites of layer 3 pyramidal neurons from frontal, temporal, and cingulate cortex. Although we occasionally detected helical positioning, for the great majority of dendrites we could not reject the null hypothesis of spatial randomness in spine locations, whether in apical or basal dendrites, in neurons of different cortical areas, or among spines of different volumes and lengths. We conclude that spine positions in the adult human neocortex are mostly random. We discuss the relevance of these results for spine formation and plasticity and their functional impact on cortical circuits.
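The null-hypothesis test at the core of this abstract can be sketched in a few lines: under complete spatial randomness, spine positions along a one-dimensional dendritic segment are uniformly distributed, and a Kolmogorov-Smirnov statistic measures the departure from that model. The segment length and positions below are invented placeholders, not the paper's reconstructions, and the uniform-CDF test is only a simple stand-in for the Fourier and spatial-statistics machinery the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented spine positions (microns) along a 50-micron dendritic segment;
# a real analysis would use the reconstructed spine coordinates instead.
segment_length = 50.0
positions = np.sort(rng.uniform(0.0, segment_length, size=120))
n = len(positions)

# Null hypothesis of spatial randomness: positions are uniform on the
# segment.  The Kolmogorov-Smirnov statistic is the largest gap between
# the empirical CDF and the uniform CDF.
uniform_cdf = positions / segment_length
ks_stat = max(np.max(np.arange(1, n + 1) / n - uniform_cdf),
              np.max(uniform_cdf - np.arange(0, n) / n))

# Asymptotic 5% critical value for the two-sided KS test.
critical = 1.36 / np.sqrt(n)
reject_randomness = ks_stat > critical
```

"Could not reject the null hypothesis" in the abstract corresponds to statistics like `ks_stat` staying below the critical value for most dendrites.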
Abstract:
Belief propagation (BP) is a technique for distributed inference in wireless networks and is often used even when the underlying graphical model contains cycles. In this paper, we propose a uniformly reweighted BP scheme that reduces the impact of cycles by weighting messages by a constant "edge appearance probability" ρ ≤ 1. We apply this algorithm to distributed binary hypothesis testing problems (e.g., distributed detection) in wireless networks with Markov random field models. We demonstrate that in the considered setting the proposed method outperforms standard BP while maintaining similar complexity. We then show that the optimal ρ can be approximated as a simple function of the average node degree, and can hence be computed in a distributed fashion through a consensus algorithm.
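The last step of the abstract — computing the average node degree in a distributed fashion through a consensus algorithm — can be sketched as follows. The small graph and the Metropolis weighting rule are illustrative assumptions; the paper's exact mapping from average degree to the optimal ρ is not reproduced here.

```python
import numpy as np

# Illustrative 5-node network (contains cycles) given by its adjacency matrix.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

degrees = A.sum(axis=1)   # each node knows only its own degree
n = len(degrees)

# Metropolis weights: symmetric and doubly stochastic, so repeated local
# averaging converges to the network-wide mean at every node.
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if A[i, j]:
            W[i, j] = 1.0 / (1.0 + max(degrees[i], degrees[j]))
    W[i, i] = 1.0 - W[i].sum()

x = degrees.copy()
for _ in range(300):      # one matrix product = one round of neighbour exchanges
    x = W @ x

avg_degree = x[0]         # every node now holds (approximately) mean(degrees)
```

Each node could then plug its local estimate of the average degree into the paper's expression for the optimal ρ.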
Abstract:
The aim of this study is to apply an integrated methodological approach in which dendrochronology and documentary analysis allow us to reconstruct the historical flood record of the Segovia Mint. Our hypothesis is that differences between the dendrochronological data of the wooden decking pieces can be related to historical floods and could therefore serve as proxy-source data in future palaeoflood research.
Abstract:
Self-organized InGaAs QDs are intensively studied for optoelectronic applications. Several approaches are under study to reach the emission wavelengths needed for these applications; the use of antimony (Sb) in either the capping layer or the dots themselves is one example. However, these studies normally focus on buried QDs (BQDs), for which there are still conflicting theories concerning the role of Sb. Some suggest that Sb incorporates into the dot [1], while others support the hypothesis that Sb occupies positions surrounding the dot [2], thus helping to preserve its shape during capping growth.
Abstract:
The aim of this study is to evaluate the effects of applying two active learning methodologies (cooperative learning and project-based learning) to the achievement of the problem-solving competence. This study was carried out at the Technical University of Madrid, where these methodologies were applied in two Operating Systems courses. The first hypothesis tested was whether the implementation of active learning methodologies favours the achievement of "problem solving". The second hypothesis focused on testing whether students with higher scores in the problem-solving competence obtain better academic performance. The results indicated that active learning methodologies did not produce any significant change in the generic competence "problem solving" during the period analysed. We therefore consider that students should work with these methodologies for a longer period, in addition to receiving specific training. Nevertheless, a close correlation between problem-solving self-appraisal and academic performance was detected.
Abstract:
This article presents the proposal of the Computer Vision Group for the first phase of the international competition "Concurso de Ingeniería de Control 2012, Control Autónomo del seguimiento de trayectorias de un vehículo cuatrirrotor". This phase consists mainly of two parts: identifying a model and designing a trajectory controller for the AR Drone quadrotor. For the identification task, two models are proposed: a simplified model that captures only the main dynamics of the quadrotor, and a second model based on the physical laws underlying the AR Drone's behavior. The trajectory controller design is based on the simplified model, whereas the physical model is used to tune the controller to attain a certain level of robust stability against model uncertainties. The controller design is simplified by the hypothesis that accurate position sensors will be available to implement a feedback controller.
Abstract:
This paper analyzes the singularities of the financial industry relative to other businesses, and their implications for financial crises throughout history. The efficient markets hypothesis is questioned, and its impact on the deregulation of the financial system is analyzed. Finally, the causes of the current crisis are investigated, and general lines to be addressed in redesigning the financial system to achieve an efficient and equitable capitalism are suggested.
Abstract:
A novel methodology based on instrumented indentation was developed to characterize the mechanical properties of amorphous materials. The approach rests on a universal postulate that assumes the existence of a characteristic indentation pressure proportional to the hardness. This hypothesis was numerically validated. The method overcomes the limitations of conventional indentation models (pile-up effects and pressure-sensitive materials).
Abstract:
In this paper we investigated differences in the language use of speakers with different verbal intelligence when describing the same event. The work is based on a corpus containing descriptions of a short film together with the speakers' verbal intelligence scores. To analyze the monologues and the film transcript, the number of reused words, lemmas, n-grams, cosine similarity, and other features were calculated and compared across verbal intelligence groups. The results showed that the similarity of monologues of higher verbal intelligence speakers was greater than that of lower and average verbal intelligence participants. A possible explanation of this phenomenon is that speakers with higher verbal intelligence have better short-term memory. In this paper we also tested the hypothesis that differences in the vocabulary of speakers with different verbal intelligence are sufficient for good classification results. To test this hypothesis, a Nearest Neighbor classifier was trained using TF-IDF vocabulary measures. The maximum achieved accuracy was 92.86%.
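A minimal sketch of the classification experiment described above — a 1-nearest-neighbour classifier over TF-IDF vocabulary measures. The toy monologues, labels, and query below are invented stand-ins for the corpus, so the point is the mechanics, not the 92.86% figure.

```python
import math
from collections import Counter

# Invented toy "monologues" and verbal-intelligence group labels.
docs = [
    "the film shows a man carrying a ladder through the town",
    "a man walks with a ladder and people keep bumping into him",
    "guy ladder town walking stuff happens",
    "there is a ladder and a man and things happen",
]
labels = ["high", "high", "low", "low"]

tokenized = [d.split() for d in docs]
vocab = sorted({w for doc in tokenized for w in doc})
df = Counter(w for doc in tokenized for w in set(doc))  # document frequency
n_docs = len(docs)

def tfidf(tokens):
    """TF-IDF vector over the corpus vocabulary (out-of-vocabulary
    query words are simply ignored)."""
    tf = Counter(tokens)
    return [tf[w] * math.log(n_docs / df[w]) for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms if norms else 0.0

vectors = [tfidf(doc) for doc in tokenized]

def nearest_neighbor(text):
    """Classify by the label of the most similar training monologue."""
    query = tfidf(text.split())
    sims = [cosine(query, v) for v in vectors]
    return labels[sims.index(max(sims))]

pred = nearest_neighbor("a man carries a ladder through the streets of the town")
```

A query that reuses the distinctive vocabulary of one group lands in that group, which is exactly the vocabulary-difference hypothesis the study tests.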
Abstract:
Evaluating the seismic hazard requires establishing a distribution of the seismic activity rate, irrespective of the methodology used in the evaluation. In practice, how that activity rate is established tends to be the main difference between the various evaluation methods. The traditional procedure relies on a seismogenic zonation and the Gutenberg-Richter (GR) hypothesis. Competing zonations are often compared looking only at the geometry of the zones, but the resulting activity rate is affected by both the geometry and the values assigned to the GR parameters. Contour plots can be used to conduct more meaningful comparisons, provided that the GR parameters are suitably normalised. More recent approaches for establishing the seismic activity rate forego the use of zones and GR statistics, and special attention is paid here to such procedures. The paper presents comparisons between the local activity rates that result for the complete Iberian Peninsula using kernel estimators as well as two seismogenic zonations. It is concluded that the smooth variation of the seismic activity rate produced by zoneless methods is more realistic than the stepwise changes associated with zoned approaches; moreover, the choice of zonation often has a stronger influence on the results than its fairly subjective origin would warrant. It is also observed that the activity rate derived from the kernel approach, related to the GR parameter "a", is qualitatively consistent with the epicentres in the catalogue. Finally, when comparing alternative zonations it is not just their geometry but also the distribution of activity rate that should be compared.
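The Gutenberg-Richter hypothesis referenced above states that the number of events N with magnitude at least M follows log10 N = a - b*M. As a minimal illustration of how the GR parameters can be obtained from a catalogue (the magnitudes below are invented, and Aki's maximum-likelihood estimator is one standard choice, not necessarily the one used in the paper):

```python
import math

# Invented catalogue magnitudes above a completeness threshold m_min.
magnitudes = [3.1, 3.4, 3.2, 4.0, 3.6, 3.3, 3.8, 4.5, 3.2, 3.5]
m_min = 3.0

# Aki's maximum-likelihood estimator for the GR b-value:
# b = log10(e) / (mean magnitude - completeness magnitude).
mean_m = sum(magnitudes) / len(magnitudes)
b = math.log10(math.e) / (mean_m - m_min)

# With b fixed, "a" follows from the total event count at M >= m_min,
# since log10 N(>= m_min) = a - b * m_min.
a = math.log10(len(magnitudes)) + b * m_min
```

Normalising parameters like "a" (e.g. per unit area and time) is what makes the contour-plot comparisons between zonations mentioned in the abstract meaningful.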
Abstract:
In this paper we propose a novel fast random search clustering (RSC) algorithm for mixing matrix identification in multiple input multiple output (MIMO) linear blind inverse problems with sparse inputs. The proposed approach is based on the clustering of the observations around the directions given by the columns of the mixing matrix, which occurs typically for sparse inputs. Exploiting this fact, the RSC algorithm proceeds by parameterizing the mixing matrix using hyperspherical coordinates, randomly selecting candidate basis vectors (i.e. clustering directions) from the observations, and accepting or rejecting them according to a binary hypothesis test based on the Neyman–Pearson criterion. The RSC algorithm is not tailored to any specific distribution of the sources, can deal with an arbitrary number of inputs and outputs (thus solving the difficult under-determined problem), and is applicable to both instantaneous and convolutive mixtures. Extensive simulations on synthetic and real data with different numbers of inputs and outputs, data sizes, input sparsity factors, and signal-to-noise ratios confirm the good performance of the proposed approach under moderate-to-high signal-to-noise ratios. ABSTRACT: Blind source separation method for sparse signals based on identification of the mixing matrix by means of random "clustering" techniques.
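A rough sketch of the clustering idea behind the RSC algorithm on a synthetic 2x2 instantaneous mixture. The cone angle and fixed acceptance fraction below are crude stand-ins for the Neyman–Pearson hypothesis test actually used in the paper; the mixing matrix, sparsity level, and thresholds are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2x2 mixing matrix with unit-norm columns; sparse inputs make
# the observations cluster along those column directions.
M = np.array([[1.0, 0.2],
              [0.3, 1.0]])
M /= np.linalg.norm(M, axis=0)
S = rng.laplace(size=(2, 500)) * (rng.random((2, 500)) < 0.15)  # sparse sources
X = M @ S + 0.01 * rng.normal(size=(2, 500))

# Discard near-zero observations, project onto the unit sphere, and fold
# signs so that opposite directions coincide.
keep = np.linalg.norm(X, axis=0) > 0.1
U = X[:, keep] / np.linalg.norm(X[:, keep], axis=0)
U[:, U[0] < 0] *= -1

# Random search: draw candidate directions from the observations and
# accept those with enough observations inside a small angular cone,
# skipping candidates too close to an already-accepted direction.
accepted = []
for _ in range(200):
    cand = U[:, rng.integers(U.shape[1])]
    inside = np.abs(cand @ U) > np.cos(np.deg2rad(5.0))
    novel = all(abs(cand @ a) < 0.95 for a in accepted)
    if inside.mean() > 0.25 and novel:
        accepted.append(cand)
```

With sufficiently sparse sources, the accepted directions approximate the columns of the mixing matrix up to sign and permutation.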
Abstract:
In recent years, the importance of managing eco-innovations has been growing, more in practice than in academia. However, although the literature already contains some evidence on the management of eco-innovations, there is no comprehensive review of the knowledge base on the diffusion of eco-innovations. This paper provides a current overview of the existing body of literature, identifying the most active scholars and the most relevant publications in this field, and delving into the major disciplines and research streams. Results show that the theory of diffusion of innovations, which provided the philosophical underpinnings of how innovations are diffused, is not the main knowledge base used to explain the diffusion of eco-innovations. The lead market hypothesis, sustainable transitions, and ecological modernization appear as the initial base of the cognitive platform that can contribute to the understanding of the diffusion of eco-innovations.
Abstract:
In recent decades, neuropsychological theories have tended to consider cognitive functions as the result of the brain working as a whole rather than of individual local areas of its cortex. Studies based on neuroimaging techniques have increased in recent years, promoting exponential growth in the body of knowledge about relations between cognitive functions and brain structures [1]. However, such rapid evolution makes it complicated to integrate these findings into verifiable theories and, even more so, to translate them into cognitive rehabilitation. The aim of this research work is to develop a cognitive process-modeling tool. The purpose of this system is, first, to represent multidimensional data from structural and functional connectivity, neuroimaging, lesion studies, and clinical interventions [2][3]. This will make it possible to identify consolidated knowledge, hypotheses, experimental designs, new data from ongoing studies, and emerging results from clinical interventions. Second, we aim to use Artificial Intelligence to assist in decision making, allowing us to advance towards evidence-based and personalized treatments in cognitive rehabilitation. This work presents the knowledge base design of the knowledge representation tool. It comprises two different taxonomies (structure and function) and a set of tags linking both taxonomies at different levels of structural and functional organization. The remainder of the abstract is organized as follows: Section 2 presents the web application used to gather the information needed to generate the knowledge base, Section 3 describes the knowledge base structure, and Section 4 presents the conclusions reached.
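A minimal sketch of the knowledge-base design just described — two taxonomies (structure and function) plus a set of tags linking nodes of both. All node names and evidence tags below are invented placeholders, not content of the actual tool.

```python
# Structural taxonomy: nested dicts represent levels of organisation.
structure = {
    "frontal_lobe": {"prefrontal_cortex": {}},
    "temporal_lobe": {"hippocampus": {}},
}

# Functional taxonomy, organised the same way.
function = {
    "executive_functions": {"working_memory": {}, "planning": {}},
    "memory": {"episodic_memory": {}},
}

# Tags linking a structural node to a functional node, labelled with the
# kind of evidence supporting the link (lesion study, neuroimaging, ...).
links = [
    ("prefrontal_cortex", "working_memory", "lesion_study"),
    ("hippocampus", "episodic_memory", "neuroimaging"),
]

def functions_of(region):
    """All functional nodes linked to a given structural node."""
    return [f for s, f, _ in links if s == region]

def evidence_for(region, func):
    """Evidence tags supporting a particular structure-function link."""
    return [tag for s, f, tag in links if s == region and f == func]
```

Queries like these are the kind of consolidated-knowledge lookups the representation tool is meant to support.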
Abstract:
In this paper, multiple regression analysis is used to model the top of descent (TOD) location of user-preferred descent trajectories computed by the flight management system (FMS) on over 1000 commercial flights into Melbourne, Australia. In addition to recording TOD, the cruise altitude, final altitude, cruise Mach, descent speed, wind, and engine type were also identified for use as the independent variables in the regression analysis. Both first-order and second-order models are considered, and cross-validation, hypothesis testing, and additional analysis are used to compare models. This identifies the models that should give the smallest errors if used to predict TOD location for new data in the future. A model that is linear in TOD altitude, final altitude, descent speed, and wind gives an estimated standard deviation of 3.9 nmi for TOD location given the trajectory parameters, which means about 80% of predictions would have an error of less than 5 nmi in absolute value. This accuracy is better than that demonstrated by other ground automation predictions using kinetic models. Furthermore, this approach would enable online learning of the model. Additional data or further knowledge of the algorithms is necessary to conclude definitively that no second-order terms are appropriate. Possible applications of the linear model are described, including enabling arriving aircraft to fly optimized descents computed by the FMS even in congested airspace.
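A hedged sketch of the kind of first-order model the abstract settles on — TOD location linear in TOD altitude, final altitude, descent speed, and wind — fitted by ordinary least squares. The data and coefficients below are synthetic stand-ins; only the 3.9 nmi noise level and the roughly-80%-within-5-nmi property mirror the reported figures.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the 1000+ flights; units in comments are assumptions.
n = 1000
tod_alt = rng.uniform(30000, 39000, n)    # TOD altitude, ft
final_alt = rng.uniform(4000, 10000, n)   # final altitude, ft
speed = rng.uniform(260, 310, n)          # descent speed, kt
wind = rng.uniform(-50, 50, n)            # wind, kt (tailwind positive)
noise = rng.normal(0, 3.9, n)             # nmi, matching the reported sigma

# Invented "true" linear model generating TOD distance from the runway (nmi).
tod_dist = (0.003 * tod_alt - 0.003 * final_alt
            + 0.1 * speed + 0.15 * wind + noise)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), tod_alt, final_alt, speed, wind])
coef, *_ = np.linalg.lstsq(X, tod_dist, rcond=None)

residuals = tod_dist - X @ coef
sigma = residuals.std()                    # should sit near 3.9 nmi
within_5 = np.mean(np.abs(residuals) < 5)  # fraction of errors under 5 nmi
```

With a residual sigma near 3.9 nmi, roughly 80% of predictions fall within 5 nmi, which is the property the abstract highlights; online learning would amount to updating `coef` as new flights arrive.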