964 results for Computer input-output equipment.
Abstract:
The experimental portion of this thesis estimates the power spectral density of very-low-frequency semiconductor noise, from 10^-6.3 cps to 1 cps, with greater accuracy than that achieved in previous similar attempts. It is concluded that the spectrum is 1/f^α with α approximately 1.3 over most of the frequency range, but appearing to have a value of about 1 in the lowest decade. The noise sources are, among others, the first-stage circuits of a grounded-input silicon epitaxial operational amplifier. This thesis also investigates a peculiar form of stationarity which seems to distinguish flicker noise from other semiconductor noise.
In order to decrease by an order of magnitude the pernicious effects of temperature drifts, semiconductor "aging", and possible mechanical failures associated with prolonged periods of data taking, 10 independent noise sources were time-multiplexed and their spectral estimates were subsequently averaged. If the sources have similar spectra, it is demonstrated that this reduces the necessary data-taking time by a factor of 10 for a given accuracy.
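The factor-of-10 saving follows from averaging independent estimates: the variance of the mean of M independent, identically distributed estimates is 1/M times the variance of a single one. A minimal numerical sketch of that claim, using a hypothetical Gaussian stand-in for a single-source spectral estimate (not the thesis's actual data):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

M = 10          # number of multiplexed noise sources
TRIALS = 2000   # repetitions used to measure the variance

def noisy_estimate():
    # hypothetical single-source spectral estimate: true value 1.0
    # plus measurement noise (a stand-in for a real periodogram value)
    return random.gauss(1.0, 0.3)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

single = [noisy_estimate() for _ in range(TRIALS)]
averaged = [sum(noisy_estimate() for _ in range(M)) / M for _ in range(TRIALS)]

# averaging M independent estimates divides the variance by roughly M,
# which is why one tenth of the data-taking time yields the same accuracy
ratio = variance(single) / variance(averaged)
```

The measured ratio comes out close to M = 10, matching the stated time saving.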
In view of the measured high temperature sensitivity of the noise sources, it was necessary to combine the passive attenuation of a special-material container with active control. The noise sources were placed in a copper-epoxy container of high heat capacity and medium heat conductivity, and that container was immersed in a temperature controlled circulating ethylene-glycol bath.
Other spectra of interest, estimated from data taken concurrently with the semiconductor noise data, were the spectra of the bath's controlled temperature, the semiconductor surface temperature, and the power-supply voltage amplitude fluctuations. A brief description of the equipment constructed to obtain these data is included.
The analytical portion of this work is concerned with the following questions: what is the best final spectral density estimate given 10 statistically independent ones of varying quality and magnitude? How can the Blackman and Tukey algorithm, which is used for spectral estimation in this work, be improved upon? How can non-equidistant sampling reduce data processing cost? Should one try to remove common trends shared by supposedly statistically independent noise sources and, if so, what are the mathematical difficulties involved? What is a physically plausible mathematical model that can account for flicker noise, and what are its implications for the noise's statistical properties? Finally, the variance of the spectral estimate obtained through the Blackman/Tukey algorithm is analyzed in greater detail; the variance is shown to diverge for α ≥ 1 in an assumed power spectrum of k/|f|^α, unless the assumed spectrum is "truncated".
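The Blackman-Tukey procedure referred to throughout is: estimate the autocorrelation up to some maximum lag, taper it with a lag window, and Fourier-transform the result. A minimal sketch of that textbook procedure (not the thesis's actual implementation; the Hann lag window and parameter choices here are illustrative assumptions):

```python
import numpy as np

def blackman_tukey_psd(x, max_lag, fs=1.0):
    """Blackman-Tukey spectral estimate: window the sample
    autocorrelation, then Fourier-transform it."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # biased sample autocorrelation for lags 0..max_lag
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
    # Hann lag window: tapers the noisier high-lag estimates
    w = 0.5 * (1.0 + np.cos(np.pi * np.arange(max_lag + 1) / max_lag))
    rw = r * w
    # even extension of the windowed autocorrelation gives a real spectrum
    ext = np.concatenate([rw, rw[-2:0:-1]])
    freqs = np.fft.rfftfreq(len(ext), d=1.0 / fs)
    psd = np.fft.rfft(ext).real / fs
    return freqs, psd
```

For white noise the estimate is flat at the noise variance; for flicker noise it would trace the k/|f|^α shape discussed above, with the divergence problem appearing as max_lag grows.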
Abstract:
This thesis is an investigation into the nature of data analysis and computer software systems which support this activity.
The first chapter develops the notion of data analysis as an experimental science which has two major components: data-gathering and theory-building. The basic role of language in determining the meaningfulness of theory is stressed, and the informativeness of a language and data base pair is studied. The static and dynamic aspects of data analysis are then considered from this conceptual vantage point. The second chapter surveys the available types of computer systems which may be useful for data analysis. Particular attention is paid to the questions raised in the first chapter about the language restrictions imposed by the computer system and its dynamic properties.
The third chapter discusses the REL data analysis system, which was designed to satisfy the needs of the data analyzer in an operational relational data system. The major limitation on the use of such systems is the amount of access to data stored on a relatively slow secondary memory. This problem of the paging of data is investigated and two classes of data structure representations are found, each of which has desirable paging characteristics for certain types of queries. One representation is used by most of the generalized data base management systems in existence today, but the other is clearly preferred in the data analysis environment, as conceptualized in Chapter I.
This data representation has strong implications for a fundamental process of data analysis -- the quantification of variables. Since quantification is one of the few means of summarizing and abstracting, data analysis systems are under strong pressure to facilitate the process. Two implementations of quantification are studied: one analogous to the form of the lower predicate calculus and another more closely attuned to the data representation. A comparison of these indicates that the use of the "label class" method results in orders-of-magnitude improvement over the lower predicate calculus technique.
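The contrast between the two implementations can be illustrated schematically. This is only a modern analogy, not the REL system's design: the predicate-calculus style tests every record against the predicate, while a "label class" style structure groups record identifiers by value in advance, so a quantifying query touches only the relevant group. The record layout and field names below are invented for the example:

```python
# predicate-calculus style: scan every record and test the predicate
def select_scan(records, field, value):
    return [r["id"] for r in records if r[field] == value]

# label-class style: precompute a map from each value to the ids
# of the records carrying it, so later queries are direct lookups
def build_label_classes(records, field):
    index = {}
    for r in records:
        index.setdefault(r[field], []).append(r["id"])
    return index
```

The scan costs a pass over all records (and, on a slow secondary memory, a page fetch per record visited), whereas the label-class lookup touches only the matching group, which is the kind of gap that can reach orders of magnitude.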
Abstract:
This research focuses on a hydrological study that exploits the potential of geotechnologies for modelling runoff in the Bengalas river basin, whose main river of the same name crosses the municipality of Nova Friburgo, RJ, from south to north. This municipality, one of the most important in the mountain region, frequently suffers from floods and landslides; historical data and recent events show that inadequately occupied hillsides and river channels are the most severely affected areas. The methodology is supported by the use of a GIS, which extracts information that in turn serves as input data for the modelling phase and reinforces the presentation of the simulation results through maps. It is divided into three basic stages: "GIS", "Modelling" and "Decision Support/Simulation". This first study made it possible to assemble a geographic database with the physiographic characteristics of the basin; to make a careful selection of a mathematical model and link its parameters to the components of the hydrological cycle; to calibrate the Soil Conservation Service (CN) rainfall-runoff model; and to simulate the routing of the volumes generated by the effective rainfall through the channel of the Bengalas river, with the objective of identifying and analysing the flood-prone areas in the central portion of the city of Nova Friburgo. Modelling of this kind has been employed mainly in water resources management, where decision-making based on the results of computational simulations helps to avoid material and financial losses, and even the loss of human lives, in at-risk areas, in this case those susceptible to flooding.
Analysing the results, the area susceptible to flooding for a rainfall with a 50-year return period, the most critical case studied, is approximately 1.0 km, distributed along the 8.5 km in the central region of Nova Friburgo-RJ, and this area, as delimited here, is predominantly built up.
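The calibrated rainfall-runoff transformation named above is the standard SCS Curve Number relation: potential retention S is derived from the CN, an initial abstraction Ia = 0.2 S is subtracted, and the remainder of the precipitation P produces runoff Q = (P - Ia)^2 / (P - Ia + S). A minimal sketch of that published formula (the CN and rainfall values below are illustrative, not the basin's calibrated parameters):

```python
def scs_cn_runoff(p_mm, cn):
    """Direct runoff depth (mm) from rainfall depth p_mm (mm)
    via the SCS Curve Number method, metric form."""
    s = 25400.0 / cn - 254.0   # potential maximum retention S (mm)
    ia = 0.2 * s               # initial abstraction Ia (mm)
    if p_mm <= ia:
        return 0.0             # all rainfall absorbed before runoff begins
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

For example, 100 mm of rain on a surface with CN = 80 yields roughly 50 mm of direct runoff, the kind of effective-rainfall volume routed through the Bengalas channel in the simulations.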
Abstract:
This report is an introduction to the concept of treewidth, a property of graphs that has important implications in algorithms. Some basic concepts of graph theory are presented in the first chapter for those readers who are not familiar with the notation. In Chapter 2, the definition of treewidth and some different ways of characterizing it are explained. The last two chapters focus on the algorithmic implications of treewidth, which are very relevant in Computer Science. An algorithm to compute the treewidth of a graph is presented, and its result can later be applied to many other problems in graph theory, like those introduced in the last chapter.
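One elementary way to compute treewidth (not necessarily the algorithm the report presents) uses the characterization via elimination orderings: eliminating a vertex turns its neighbourhood into a clique, and the treewidth is the minimum over all orderings of the largest neighbourhood met during elimination. A brute-force sketch, feasible only for very small graphs since it tries all n! orderings:

```python
from itertools import permutations

def treewidth_bruteforce(vertices, edges):
    """Exact treewidth via exhaustive search over elimination orderings."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = len(vertices) - 1          # trivial upper bound
    for order in permutations(vertices):
        g = {v: set(adj[v]) for v in vertices}  # fresh working copy
        width = 0
        for v in order:
            nbrs = g[v]
            width = max(width, len(nbrs))
            # make the neighbourhood a clique, then remove v
            for a in nbrs:
                g[a] |= nbrs - {a}
                g[a].discard(v)
            del g[v]
        best = min(best, width)
    return best
```

A path has treewidth 1, a cycle 2, and the complete graph on n vertices n - 1, which this sketch reproduces on small instances.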
Abstract:
The study of emotions in human-computer interaction is a growing research area. This paper describes an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish using different methods for feature selection. The RekEmozio database was used as the experimental data set. Several machine-learning paradigms were applied to the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied in each phase in order to search for the most relevant feature subset. The three-phase approach was chosen to check the validity of the proposed approach. The results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best machine-learning paradigm for automatic emotion recognition across all the feature sets, obtaining a mean emotion recognition rate of 80.05% in Basque and 74.82% in Spanish. In order to check the quality of the proposed process, a greedy search approach (FSS-Forward) was also applied and a comparison between the two is provided. Based on the results, a set of the most relevant speaker-independent features is proposed for both languages, and new perspectives are suggested.
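The FSS-Forward baseline mentioned above is greedy forward selection: start from the empty feature set and repeatedly add the single feature that most improves the evaluation score, stopping when no addition helps. A minimal sketch of that generic procedure (the feature names and the toy scoring function below are invented for illustration; the paper's actual score would be a classifier's validation accuracy):

```python
def forward_selection(features, score):
    """FSS-Forward: greedily add the feature that most improves score()."""
    selected = []
    best_score = score(selected)
    improved = True
    while improved:
        improved = False
        best_f = None
        for f in features:
            if f in selected:
                continue
            s = score(selected + [f])
            if s > best_score:
                best_score, best_f = s, f
        if best_f is not None:
            selected.append(best_f)
            improved = True
    return selected, best_score

# toy score: reward two hypothetical "relevant" prosodic features,
# penalize everything else (stand-in for cross-validated accuracy)
RELEVANT = {"pitch_mean", "energy"}
def toy_score(subset):
    return sum(1.0 if f in RELEVANT else -0.5 for f in subset)
```

Run on a candidate list containing both relevant and irrelevant features, the procedure keeps exactly the relevant pair, which is the behaviour the paper compares against its evolutionary search.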