936 results for Vector Space Model
Abstract:
We thank Orkney Islands Council for access to Eynhallow, and Talisman Energy (UK) Ltd and Marine Scotland for fieldwork and equipment support. Handling and tagging of fulmars were conducted under licences from the British Trust for Ornithology and the UK Home Office. EE was funded by a Marine Alliance for Science and Technology for Scotland/University of Aberdeen College of Life Sciences and Medicine studentship, and LQ was supported by a NERC Studentship. Thanks also to the many colleagues who assisted with fieldwork during the project, and to Helen Bailey and Arliss Winship for advice on implementing the state-space model.
Abstract:
The problem of social diffusion has animated sociological thinking on topics ranging from the spread of an idea, an innovation, or a disease to the foundations of collective behavior and political polarization. While network diffusion has been a productive metaphor, the reality of diffusion processes is often muddier. Ideas and innovations diffuse differently from diseases, but, with a few exceptions, the diffusion of ideas and innovations has been modeled under the same assumptions as the diffusion of disease. In this dissertation, I develop two new diffusion models for "socially meaningful" contagions that address two of the most significant problems with current diffusion models: (1) that contagions can only spread along observed ties, and (2) that contagions do not change as they spread between people. I augment insights from these statistical and simulation models with an analysis of an empirical case of diffusion - the use of enterprise collaboration software in a large technology company. I focus the empirical study on when people abandon innovations, a crucial and understudied aspect of the diffusion of innovations. Using timestamped posts, I analyze when people abandon the software at a fine level of detail.
To address the first problem, I suggest a latent space diffusion model. Rather than treating ties as stable conduits for information, the latent space diffusion model treats ties as random draws from an underlying social space and simulates diffusion over that space. To address the second problem, I suggest a diffusion model with schemas. Rather than treating information as though it spreads unchanged, the schema diffusion model allows people to modify the information they receive to fit an underlying mental model before passing it on to others. Theoretically, the social space model integrates actor ties and attributes simultaneously in a single social plane, while incorporating schemas into diffusion processes gives explicit form to the reciprocal influences that cognition and the social environment have on each other. Practically, the latent space diffusion model produces statistically consistent diffusion estimates where using the network alone does not, and the diffusion-with-schemas model shows that introducing some cognitive processing into diffusion changes the rate and ultimate distribution of the spreading information. Combining the latent space models with a schema notion for actors thus improves our models of social diffusion both theoretically and practically.
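As an illustration of the latent-space idea, the following minimal Python sketch simulates a contagion spreading over a latent social space rather than over a fixed set of observed ties. The exponential distance kernel, dimensions, and all parameter values are hypothetical, not the dissertation's estimated model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n actors placed in a 2-d latent social space.
n, dim = 200, 2
positions = rng.normal(size=(n, dim))

def tie_probability(i, j, scale=1.0):
    """Ties are random draws: closer actors are more likely to interact."""
    d = np.linalg.norm(positions[i] - positions[j])
    return np.exp(-d / scale)

def simulate_diffusion(seed_actor=0, steps=50, contacts_per_step=5):
    """Spread over the latent space itself, not over a fixed observed network."""
    adopted = np.zeros(n, dtype=bool)
    adopted[seed_actor] = True
    for _ in range(steps):
        for i in np.flatnonzero(adopted):
            # Each adopter draws fresh contacts each step from the latent space.
            js = rng.integers(0, n, size=contacts_per_step)
            for j in js:
                if not adopted[j] and rng.random() < tie_probability(i, j):
                    adopted[j] = True
    return adopted.sum()

print(simulate_diffusion())  # number of adopters after 50 steps
```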
The empirical case study focuses on how the changing value of an innovation, introduced by the innovation's network externalities, influences when people abandon it. I find that people are least likely to abandon an innovation when other people in their neighborhood currently use the software as well. The effect is particularly pronounced for supervisors' current use and for the number of supervisory team members who currently use the software. This case study not only points to an important process in the diffusion of innovation, but also suggests a new approach -- computerized collaboration systems -- to collecting and analyzing data on organizational processes.
Abstract:
Social interactions have been the focus of social science research for a century, but their study has recently been revolutionized by novel data sources and by methods from computer science, network science, and complex systems science. The study of social interactions is crucial for understanding complex societal behaviours. Social interactions are naturally represented as networks, which have emerged as a unifying mathematical language for understanding the structural and dynamical aspects of socio-technical systems. Networks are, however, high-dimensional objects, especially at the scales of real-world systems and when the temporal dimension must be modelled. Hence the study of empirical data from social systems is challenging both from a conceptual and a computational standpoint. A possible approach to tackling this challenge is to use dimensionality reduction techniques that represent network entities in a low-dimensional feature space, preserving some desired properties of the original data. Low-dimensional vector space representations, also known as network embeddings, have been studied extensively, in part as a way to feed network data to machine learning algorithms. Network embeddings were initially developed for static networks and were then extended to incorporate temporal network data. We focus on dimensionality reduction techniques for time-resolved social interaction data modelled as temporal networks. We introduce a novel embedding technique that models the temporal and structural similarities of events rather than nodes. Using empirical data on social interactions, we show that this representation captures information relevant to the study of dynamical processes unfolding over the network, such as epidemic spreading. We then turn to another large-scale dataset on social interactions: a popular Web-based crowdfunding platform. We show that tensor-based representations of the data, together with dimensionality reduction techniques such as tensor factorization, allow us to uncover structural and temporal aspects of the system and to relate them to geographic and temporal activity patterns.
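The following toy sketch only gestures at the event-level idea: it embeds timestamped interaction events, rather than nodes, by combining structural overlap with temporal proximity in a spectral construction. The similarity kernel, the decay constant tau, and the event list are all invented for illustration and are not the thesis's actual technique:

```python
import numpy as np

# Hypothetical event stream: (node_u, node_v, timestamp)
events = [(0, 1, 0.0), (1, 2, 1.0), (0, 1, 5.0), (3, 4, 5.5), (2, 3, 6.0)]

def event_similarity(e1, e2, tau=2.0):
    """Structural overlap (shared nodes) damped by temporal distance."""
    shared = len({e1[0], e1[1]} & {e2[0], e2[1]})
    return shared * np.exp(-abs(e1[2] - e2[2]) / tau)

S = np.array([[event_similarity(a, b) for b in events] for a in events])

# Spectral embedding: top-k eigenvectors of the similarity matrix give
# low-dimensional coordinates for events (not nodes).
k = 2
vals, vecs = np.linalg.eigh(S)
embedding = vecs[:, -k:] * np.sqrt(np.maximum(vals[-k:], 0))
print(embedding.shape)  # (num_events, k)
```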
Abstract:
Multi-phase electrical drives are potential candidates for employment in innovative electric vehicle powertrains, in response to the demand for high efficiency and reliability in this type of application. In addition to multi-phase technology, multilevel technology has also been developed over recent decades. These two technologies are somewhat complementary, since both allow the power rating of the system to be increased without increasing the current and voltage ratings of the individual power switches of the inverter. In this thesis, several topics concerning the inverter, the motor, and the fault diagnosis of an electric vehicle powertrain are addressed. In particular, attention is focused on multi-phase and multilevel technologies and their potential advantages over traditional technologies. First of all, the mathematical models of two multi-phase machines, a five-phase induction machine and an asymmetrical six-phase permanent magnet synchronous machine, are developed using the Vector Space Decomposition approach. Then, a new modulation technique for multi-phase multilevel T-type inverters is developed; it solves the voltage-balancing problem of the DC-link capacitors while ensuring flexible management of the capacitor voltages. The technique is based on the proper selection of the zero-sequence component of the modulating signals, as sketched below. Subsequently, a diagnostic technique for detecting the state of health of the rotor magnets in a six-phase permanent magnet synchronous machine is established. The technique is based on analysing the electromotive force induced in the stator windings by the rotor magnets. Furthermore, an innovative algorithm able to extend the linear modulation region of five-phase inverters, taking advantage of the multiple degrees of freedom available in multi-phase systems, is presented. Finally, the mathematical model of an eighteen-phase squirrel cage induction motor is defined. This activity aims to develop a motor drive able to change the number of poles of the machine during operation.
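As a minimal illustration of the degree of freedom involved, the sketch below applies generic min-max zero-sequence injection to a set of five-phase modulating signals. The thesis's actual balancing law for the T-type DC-link capacitors is more elaborate; this only shows the shared underlying idea, with illustrative parameters:

```python
import numpy as np

def inject_zero_sequence(references):
    """Add a common-mode offset to all phase references.

    A zero-sequence term shifts every phase equally, so line-to-line
    voltages are unchanged; the offset is a free degree of freedom that
    can be used, e.g., to centre the references (min-max injection) or,
    in more elaborate schemes, to balance DC-link capacitor voltages.
    """
    v0 = -0.5 * (references.max(axis=0) + references.min(axis=0))
    return references + v0

t = np.linspace(0, 0.02, 1000)          # one 50 Hz period
phases = 2 * np.pi * np.arange(5) / 5   # five-phase system
refs = 0.9 * np.sin(2 * np.pi * 50 * t[None, :] + phases[:, None])
balanced = inject_zero_sequence(refs)
print(balanced.max(), balanced.min())   # modulating signals after injection
```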
Abstract:
The study of random probability measures is a lively research topic that has attracted interest from different fields in recent years. In this thesis, we consider random probability measures in the context of Bayesian nonparametrics, where the law of a random probability measure is used as a prior distribution, and in the context of distributional data analysis, where the goal is to perform inference given a sample from the law of a random probability measure. The contributions of this thesis can be subdivided into three topics: (i) the use of almost surely discrete repulsive random measures (i.e., measures whose support points are well separated) for Bayesian model-based clustering; (ii) the proposal of new laws for collections of random probability measures for Bayesian density estimation of partially exchangeable data subdivided into different groups; and (iii) the study of principal component analysis and regression models for probability distributions seen as elements of the 2-Wasserstein space. Specifically, for point (i) we propose an efficient Markov chain Monte Carlo algorithm for posterior inference, which sidesteps the need for split-merge reversible jump moves that are typically associated with poor performance; we propose a model for clustering high-dimensional data by introducing a novel class of anisotropic determinantal point processes; and we study the distributional properties of the repulsive measures, shedding light on theoretical results that enable more principled prior elicitation and more efficient posterior simulation algorithms. For point (ii), we consider several models suitable for clustering homogeneous populations, inducing spatial dependence across groups of data, and extracting the characteristic traits common to all the data groups, and we propose a novel vector autoregressive model to study the growth curves of Singaporean children. Finally, for point (iii), we propose a novel class of projected statistical methods for distributional data analysis for measures on the real line and on the unit circle.
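For point (iii), a minimal sketch of the idea for measures on the real line: the 2-Wasserstein distance between distributions equals the L2 distance between their quantile functions, so PCA on discretized quantile functions gives a first approximation of Wasserstein principal component analysis. A full projected method would additionally enforce monotonicity of the reconstructed quantile functions; all data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sample: each "data point" is a probability distribution on R,
# represented by its quantile function evaluated on a fixed grid.
grid = np.linspace(0.01, 0.99, 99)
samples = [rng.normal(loc=mu, scale=s, size=500)
           for mu, s in zip(rng.normal(0, 1, 30), rng.uniform(0.5, 2, 30))]
Q = np.array([np.quantile(x, grid) for x in samples])  # (30, 99)

# In W2 on the real line, distances between measures equal L2 distances
# between quantile functions, so PCA on Q approximates Wasserstein PCA.
Qc = Q - Q.mean(axis=0)
U, svals, Vt = np.linalg.svd(Qc, full_matrices=False)
scores = U[:, :2] * svals[:2]   # first two principal scores
modes = Vt[:2]                  # principal modes of variation
print(scores.shape, modes.shape)
```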
Abstract:
The study of ancient, undeciphered scripts presents unique challenges that depend both on the nature of the problem and on the peculiarities of each writing system. In this thesis, I present two computational approaches that are tailored to two different tasks and writing systems. The first method is aimed at the decipherment of the Linear A fraction signs, in order to discover their numerical values. This is achieved with a combination of constraint programming, ad-hoc metrics, and paleographic considerations. The second main contribution of this thesis regards the creation of an unsupervised deep learning model which uses drawings of signs from an ancient writing system to learn to distinguish different graphemes in a vector space. This system, which is based on techniques used in the field of computer vision, is adapted to the study of ancient writing systems by incorporating information about sign sequences into the model, mirroring what is often done in natural language processing. To develop this model, the Cypriot Greek Syllabary is used as a target, since it is a deciphered writing system. Finally, this unsupervised model is adapted to the undeciphered Cypro-Minoan script and used to answer open questions about it. In particular, by reconstructing multiple allographs on which paleographers do not agree, it supports the idea that Cypro-Minoan is a single script and not a collection of three scripts, as has been proposed in the literature. These results on two different tasks show that computational methods can be applied to undeciphered scripts, despite the relatively small amount of available data, paving the way for further advances in paleography using these methods.
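The following toy sketch conveys only the flavor of the constraint-programming approach to the fraction signs; the sign names, candidate values, and constraints are entirely hypothetical and do not reflect the actual Linear A evidence. The idea is to assign unit-fraction values to signs so that attested sign combinations sum consistently:

```python
from itertools import product
from fractions import Fraction

# Hypothetical setup: three fraction signs A, B, C with unknown values,
# and two invented constraints of the kind one might extract from
# tablets (e.g., sign sequences that must sum to a whole unit).
candidates = [Fraction(1, d) for d in (2, 3, 4, 5, 6, 8)]
constraints = [
    lambda v: v["A"] + 2 * v["B"] == 1,  # "A B B" appears as a full unit
    lambda v: v["B"] == 2 * v["C"],      # "B" appears equal to "C C"
]

solutions = []
for a, b, c in product(candidates, repeat=3):
    v = {"A": a, "B": b, "C": c}
    if all(con(v) for con in constraints):
        solutions.append(v)

print(solutions)  # consistent assignments of numerical values to signs
```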
Abstract:
Due to the widespread, multipurpose use of document images and the current availability of a large number of document image repositories, robust information retrieval mechanisms and systems are increasingly in demand. This paper presents an approach to support the automatic generation of relationships among document images by exploiting Latent Semantic Indexing (LSI) and Optical Character Recognition (OCR). We developed the LinkDI (Linking of Document Images) service, which extracts and indexes document image content, computes its latent semantics, and defines relationships among images as hyperlinks. LinkDI was tested on document image repositories, and its performance was evaluated by comparing the quality of the relationships created among textual documents with those created among their respective document images. Considering the same document images, we ran further experiments to compare the performance of LinkDI with and without the LSI technique. Experimental results showed that LSI can mitigate the effects of common OCR misrecognition, which reinforces the feasibility of LinkDI even when relating highly degraded OCR output.
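A minimal sketch of the LSI core that an approach like LinkDI builds on; the toy texts, the threshold, and the code itself are illustrative, not LinkDI's actual implementation. A term-document matrix is factored with a truncated SVD, and documents whose latent-space cosine similarity exceeds a threshold are linked:

```python
import numpy as np

# Toy OCR output for four "document images" (hypothetical text).
docs = [
    "retrieval of document images with latent semantics",
    "optical character recognition errors degrade retrieval",
    "hyperlinks relate document images in a repository",
    "latent semantic indexing mitigates recognition errors",
]

vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

# LSI: rank-k truncated SVD of the term-document matrix.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # documents in the latent space

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Link documents whose latent-space similarity exceeds a threshold.
links = [(i, j) for i in range(len(docs)) for j in range(i + 1, len(docs))
         if cosine(doc_vecs[i], doc_vecs[j]) > 0.7]
print(links)
```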
Abstract:
Converting aeroelastic vibrations into electricity for low power generation has received growing attention over the past few years. In addition to potential applications for aerospace structures, the goal is to develop alternative and scalable configurations for wind energy harvesting for use in wireless electronic systems. This paper presents modeling and experiments of aeroelastic energy harvesting using piezoelectric transduction with a focus on exploiting combined nonlinearities. An airfoil with plunge and pitch degrees of freedom (DOF) is investigated. Piezoelectric coupling is introduced to the plunge DOF while nonlinearities are introduced through the pitch DOF. A state-space model is presented and employed for the simulations of the piezoaeroelastic generator. A two-state approximation to Theodorsen aerodynamics is used to determine the unsteady aerodynamic loads. Three case studies are presented. First, the interaction between piezoelectric power generation and the linear aeroelastic behavior of a typical section is investigated for a set of resistive loads. Model predictions are compared to experimental data obtained from wind tunnel tests at the flutter boundary. In the second case study, free play nonlinearity is added to the pitch DOF and it is shown that nonlinear limit-cycle oscillations can be obtained not only above but also below the linear flutter speed. The experimental results are successfully predicted by the model simulations. Finally, the combination of cubic hardening stiffness and free play nonlinearities is considered in the pitch DOF. The nonlinear piezoaeroelastic response is investigated for different values of the nonlinear-to-linear stiffness ratio. The free play nonlinearity reduces the cut-in speed while the hardening stiffness helps in obtaining persistent oscillations of acceptable amplitude over a wider range of airflow speeds. Such nonlinearities can be introduced to aeroelastic energy harvesters (exploiting piezoelectric or other transduction mechanisms) for performance enhancement.
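A minimal sketch of the combined pitch nonlinearities (free play plus cubic hardening) in a single-degree-of-freedom surrogate; all parameter values are illustrative and this is not the paper's identified piezoaeroelastic model:

```python
import numpy as np
from scipy.integrate import solve_ivp

def restoring_moment(alpha, k=1.0, k3=20.0, delta=0.01):
    """Free play of half-width delta combined with cubic hardening."""
    if abs(alpha) <= delta:
        return 0.0                      # no stiffness inside the gap
    a = alpha - np.sign(alpha) * delta  # effective deflection past the gap
    return k * a + k3 * a**3

def pitch_dof(t, y, I=1e-3, c=2e-4, forcing=5e-4, omega=12.0):
    """Single-DOF pitch oscillator: I*a'' + c*a' + M(a) = F sin(wt)."""
    alpha, alpha_dot = y
    alpha_ddot = (forcing * np.sin(omega * t) - c * alpha_dot
                  - restoring_moment(alpha)) / I
    return [alpha_dot, alpha_ddot]

sol = solve_ivp(pitch_dof, (0, 20), [0.0, 0.0], max_step=1e-3)
print(sol.y[0].max())  # peak pitch amplitude with combined nonlinearities
```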
Abstract:
Starting from the Durbin algorithm in polynomial space with an inner product defined by the signal autocorrelation matrix, an isometric transformation is defined that maps this vector space into another one where the Levinson algorithm is performed. Alternatively, for iterative algorithms such as discrete all-pole (DAP) modelling, an efficient implementation of a Gohberg-Semencul (GS) relation is developed for the inversion of the autocorrelation matrix which exploits its centrosymmetry. In the solution of the autocorrelation equations, the Levinson algorithm is found to be operationally less complex than the procedures based on GS inversion for up to a minimum of five iterations at various linear prediction (LP) orders.
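For reference, a textbook sketch of the Levinson(-Durbin) recursion that solves the autocorrelation normal equations in O(p^2) operations, which is the algorithm entering the complexity comparison above (the test signal is arbitrary):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz autocorrelation normal equations for LP coefficients.

    r : autocorrelation sequence, r[0..order]
    Returns (a, e): prediction polynomial a (a[0] = 1) and residual energy.
    Runs in O(order^2) instead of O(order^3) for generic solvers.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    for m in range(1, order + 1):
        k = -np.dot(a[:m], r[m:0:-1]) / e  # reflection coefficient
        a[:m + 1] += k * a[:m + 1][::-1]   # a <- a + k * reversed(a)
        e *= (1 - k * k)                   # residual energy shrinks
    return a, e

# Example: autocorrelation of a short test signal.
x = np.sin(0.3 * np.arange(64)) + 0.1 * np.random.default_rng(0).normal(size=64)
r = np.correlate(x, x, mode="full")[len(x) - 1:]
a, e = levinson_durbin(r, order=4)
print(a, e)
```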
Abstract:
The tissue distribution kinetics of a highly bound solute, propranolol, was investigated in a heterogeneous organ, the isolated perfused limb, using the impulse-response technique and destructive sampling. The propranolol concentration in muscle, skin, and fat, as well as in the outflow perfusate, was measured up to 30 min after injection. The resulting data were analysed (1) assuming that the vascular, muscle, skin, and fat compartments are well mixed (compartmental model) and (2) using a distributed-in-space model which accounts for the noninstantaneous intravascular mixing and tissue distribution processes but consists only of a vascular and an extravascular phase (two-phase model). The compartmental model adequately described the propranolol concentration-time data in the three tissue compartments and the outflow concentration-time curve (except for the early mixing phase). In contrast, the two-phase model better described the outflow concentration-time curve but is limited in accounting only for the distribution kinetics in the dominant tissue, the muscle. The two-phase model described the time course of propranolol concentration in muscle tissue well, with parameter estimates similar to those obtained with the compartmental model. The results suggest, first, that the uptake kinetics of propranolol into skin and fat cannot be analysed on the basis of outflow data alone and, second, that the assumption of well-mixed compartments is a valid approximation from a practical point of view (as, e.g., in physiologically based pharmacokinetic modelling). The steady-state distribution volumes of skin and fat were only 16% and 4%, respectively, of that of muscle tissue (16.7 ml), with a higher partition coefficient in fat (6.36) than in skin (2.64) and muscle (2.79). (C) 2000 Elsevier Science B.V. All rights reserved.
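A minimal sketch of the well-mixed compartmental variant, reusing the reported volumes and partition coefficients (muscle 16.7 ml, with skin and fat at 16% and 4% of that; partition coefficients 2.79, 2.64, and 6.36). The perfusate flow and permeability-surface products are invented for illustration:

```python
from scipy.integrate import solve_ivp

# Illustrative well-mixed compartmental model of the perfused limb:
# a vascular space exchanging with muscle, skin, and fat, with flow Q.
Q, Vv = 1.0, 2.0                                 # flow (ml/min), vascular volume (ml)
V = {"muscle": 16.7, "skin": 2.7, "fat": 0.7}    # tissue volumes (ml)
K = {"muscle": 2.79, "skin": 2.64, "fat": 6.36}  # partition coefficients
PS = {"muscle": 3.0, "skin": 0.5, "fat": 0.2}    # permeability-surface products

def model(t, y):
    cv, cm, cs, cf = y
    dcv = (-Q * cv                               # washout into the outflow
           - sum(PS[k] * (cv - c / K[k])
                 for k, c in zip(("muscle", "skin", "fat"), (cm, cs, cf)))) / Vv
    dcm = PS["muscle"] * (cv - cm / K["muscle"]) / V["muscle"]
    dcs = PS["skin"] * (cv - cs / K["skin"]) / V["skin"]
    dcf = PS["fat"] * (cv - cf / K["fat"]) / V["fat"]
    return [dcv, dcm, dcs, dcf]

# Impulse dose: all drug initially in the vascular space.
sol = solve_ivp(model, (0, 30), [1.0, 0.0, 0.0, 0.0])
print(sol.y[:, -1])  # vascular and tissue concentrations at t = 30 min
```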
Abstract:
When linear equality constraints are invariant through time they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
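A minimal sketch of the augmentation idea under simplifying assumptions (random-walk coefficients, a single linear constraint, invented data): the time-varying constraint is appended to the observation equation as an extra, essentially noise-free observation before running the Kalman filter:

```python
import numpy as np

def kalman_step(x, P, z, H, R, F, Q):
    """One predict-update step of a linear Kalman filter."""
    x = F @ x                               # time update
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                     # innovation covariance
    Kg = P @ H.T @ np.linalg.inv(S)
    x = x + Kg @ (z - H @ x)                # measurement update
    P = (np.eye(len(x)) - Kg @ H) @ P
    return x, P

# State: two time-varying regression coefficients following random walks.
x, P = np.zeros(2), np.eye(2)
F, Q = np.eye(2), 0.01 * np.eye(2)
rng = np.random.default_rng(0)

for t in range(100):
    h = rng.normal(size=2)                  # regressors at time t
    y = h @ np.array([1.0, 2.0]) + 0.1 * rng.normal()
    c_t = np.array([1.0, 1.0])              # constraint: beta1 + beta2 = 3
    # Augment the observation with the (possibly time-varying) constraint
    # row, giving it essentially zero observation noise so that it binds.
    H = np.vstack([h, c_t])
    z = np.array([y, 3.0])
    R = np.diag([0.01, 1e-10])
    x, P = kalman_step(x, P, z, H, R, F, Q)

print(x)  # filtered coefficients, (approximately) satisfying the constraint
```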
Abstract:
In the context of the conduct of monetary policy, the reaction functions estimated in empirical studies, for the Brazilian economy as well as for other economies, have shown a good fit to the data. However, studies show that the explanatory power of the estimates increases considerably when an interest rate smoothing component, represented by the lagged interest rate, is included. According to Clarida et al. (1998), the coefficient on the lagged interest rate (lying between 0.0 and 1.0) represents the degree of inertia of monetary policy: the larger this coefficient, the smaller and slower the response of the interest rate to the relevant information set. Moreover, the international empirical literature shows that this component carries considerable weight in reaction functions, revealing that central banks adjust their instrument slowly and parsimoniously. The Brazilian case is of particular interest because the most recent studies have documented a rise in the inertial component, suggesting that the Banco Central do Brasil (BCB) has been increasing the degree of interest rate smoothing in recent years. In this context, beyond estimating a forward-looking reaction function to capture the average overall behavior of the BCB from January 2005 to May 2013, this work seeks evidence of a possible dynamic causal relationship between the trajectory of the inertia coefficient and the relevant macroeconomic variables. The method applies the Kalman filter to extract the trajectory of the inertia coefficient and then estimates a Vector Autoregression (VAR) model that includes this trajectory together with the relevant macroeconomic variables. Overall, both the regressions and the Kalman filter show an extremely high inertia coefficient throughout the period analyzed, and very small overall response coefficients, inconsistent with what theory predicts. From the VAR, the most interesting result is that positive shocks to the inertia variable produced persistent deviations in the output gap and, consequently, in the deviations of inflation and inflation expectations from the central target.
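A minimal sketch of the VAR step under invented dynamics (the coefficient matrix and all series are synthetic, not the estimated Brazilian system): estimate a VAR(1) by OLS and trace the output-gap response to a shock in the inertia variable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly series: [inertia coefficient, output gap, inflation gap]
T, k = 200, 3
Y = np.zeros((T, k))
A_true = np.array([[0.9, 0.0, 0.0],    # invented persistence structure
                   [0.3, 0.7, 0.0],
                   [0.0, 0.4, 0.6]])
for t in range(1, T):
    Y[t] = A_true @ Y[t - 1] + 0.1 * rng.normal(size=k)

# VAR(1) by OLS: regress Y_t on Y_{t-1}.
X, Z = Y[:-1], Y[1:]
A_hat = np.linalg.lstsq(X, Z, rcond=None)[0].T

# Impulse response to a one-off unit shock in the inertia coefficient.
irf = [np.array([1.0, 0.0, 0.0])]
for _ in range(23):
    irf.append(A_hat @ irf[-1])
print(np.array(irf)[:, 1])  # response of the output gap over 24 months
```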
Abstract:
This paper studies the evolution of default risk premia for European firms during the years surrounding the recent credit crisis. We employ the information embedded in Credit Default Swaps (CDS) and Moody's KMV EDF default probabilities to analyze the common factors driving these risk premia. The risk premium is characterized in several directions. Firstly, we perform a panel data analysis to capture the relationship between CDS spreads and actual default probabilities. Secondly, we employ the intensity framework of Jarrow et al. (2005) to measure the theoretical effect of the risk premium on expected bond returns. Thirdly, we carry out a dynamic panel data analysis to identify the macroeconomic sources of the risk premium. Finally, a vector autoregressive model analyzes which proportion of the co-movement is attributable to financial or macro variables. Our estimates report risk premium coefficients substantially higher than those previously reported for US firms, as well as time-varying behavior. A dominant factor explains around 60% of the common movements in risk premia. Additionally, the empirical evidence suggests a public-to-private risk transfer between sovereign CDS spreads and corporate risk premia.
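A minimal sketch of extracting a dominant common factor from a panel of risk premia via principal components; the panel here is synthetic, and the 60% figure in the text refers to the paper's actual data, not to this toy example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: weekly risk premia for 50 firms with one common driver.
T, N = 260, 50
factor = np.cumsum(rng.normal(size=T))   # common factor (random walk)
loadings = rng.uniform(0.5, 1.5, size=N)
panel = np.outer(factor, loadings) + rng.normal(scale=2.0, size=(T, N))

# Share of co-movement explained by the first principal component
# of the standardized panel (the "dominant factor").
Z = (panel - panel.mean(0)) / panel.std(0)
vals = np.linalg.eigvalsh(np.cov(Z.T))
share = vals[-1] / vals.sum()
print(f"dominant factor explains {share:.0%} of common movements")
```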
Abstract:
Doctoral thesis in Marine Sciences, specialty in Marine Ecology.
Abstract:
In this paper, a modified version of the classical Van der Pol oscillator is proposed, introducing fractional-order time derivatives into the state-space model. The resulting fractional-order Van der Pol oscillator is analyzed in the time and frequency domains, using phase portraits, spectral analysis, and bifurcation diagrams. The fractional-order dynamics are illustrated through numerical simulations of the proposed schemes using approximations to fractional-order operators. Finally, the analysis is extended to the forced Van der Pol oscillator.
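A minimal sketch of one common numerical scheme for such systems, a Grünwald-Letnikov approximation of the fractional derivatives; the order q, step size, and mu are illustrative, and the paper's exact approximation scheme may differ:

```python
import numpy as np

def gl_coeffs(q, n):
    """Grunwald-Letnikov coefficients c_j = (-1)^j * binom(q, j)."""
    c = np.ones(n)
    for j in range(1, n):
        c[j] = c[j - 1] * (1 - (1 + q) / j)
    return c

# Fractional Van der Pol in state-space form:
#   D^q x1 = x2,   D^q x2 = mu * (1 - x1^2) * x2 - x1
mu, q, h, n = 1.0, 0.9, 0.01, 5000
c = gl_coeffs(q, n)
x1, x2 = np.zeros(n), np.zeros(n)
x1[0], x2[0] = 0.2, 0.0

for k in range(1, n):
    # Memory terms: fractional derivatives depend on the whole history.
    m1 = c[1:k + 1] @ x1[k - 1::-1]
    m2 = c[1:k + 1] @ x2[k - 1::-1]
    f1 = x2[k - 1]
    f2 = mu * (1 - x1[k - 1] ** 2) * x2[k - 1] - x1[k - 1]
    x1[k] = f1 * h**q - m1
    x2[k] = f2 * h**q - m2

print(x1[-5:])  # tail of the trajectory (oscillatory regime)
```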