998 results for Master data
Abstract:
We study theoretical and empirical aspects of the mean exit time (MET) of financial time series. The theoretical modelling is done within the framework of the continuous-time random walk. We verify empirically that the mean exit time follows a quadratic scaling law with a prefactor that is specific to the analyzed stock. We perform a series of statistical tests to determine which kinds of correlations are responsible for this specificity; the main contribution comes from the autocorrelation of stock returns. We introduce and solve analytically both two-state and three-state Markov chain models. The analytical results obtained with the two-state Markov chain model allow us to collapse the 20 measured MET profiles onto a single master curve.
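The quadratic scaling law can be illustrated with a toy simulation; note this is a plain uncorrelated random walk, not the authors' continuous-time random walk model or their stock data. For a symmetric walk, the mean exit time from an interval of half-width L grows as L², so doubling L should roughly quadruple the MET.

```python
import random

def mean_exit_time(half_width, n_walks=2000, seed=1):
    """Average number of unit steps a symmetric random walk starting at 0
    needs to leave the interval (-half_width, half_width)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walks):
        x, steps = 0, 0
        while abs(x) < half_width:
            x += rng.choice((-1, 1))
            steps += 1
        total += steps
    return total / n_walks

# Quadratic scaling: MET(2L) / MET(L) should be close to 4.
ratio = mean_exit_time(10) / mean_exit_time(5)
```

For the uncorrelated walk the prefactor is universal; the abstract's point is precisely that real stocks share the quadratic law but differ in the prefactor.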
Abstract:
This communication presents the results of an exploratory, comparative study whose objective is to analyze the influence of the current labour situation on the demand for official master's programmes in the field of education. The study was carried out in two countries with very different labour situations: Brazil, a country in economic expansion, and Spain, in recession due to the current economic crisis. The study thus provides data for reflection on how the contraction or expansion of employment influences the behaviour and demands of students entering master's programmes, and on how their previous educational and work trajectories shape their expectations, demands and future plans. The methodology is qualitative, with the focus group as the data-collection strategy. As a first approach, two discussion groups of master's students were formed: one with students from the Universidad de Barcelona (Spain) and another with students from the Universidade do Vale do Itajaí (Brazil). A mixed discussion group was then constituted to analyze differences and similarities.
Abstract:
The asphalt concrete (AC) dynamic modulus (|E*|) is a key design parameter in mechanistic-based pavement design methodologies such as the American Association of State Highway and Transportation Officials (AASHTO) MEPDG/Pavement-ME Design. The objective of this feasibility study was to develop frameworks for predicting the AC |E*| master curve from falling weight deflectometer (FWD) deflection-time history data collected by the Iowa Department of Transportation (Iowa DOT). A neural networks (NN) methodology was developed based on a synthetically generated viscoelastic forward solutions database to predict AC relaxation modulus (E(t)) master curve coefficients from FWD deflection-time history data. According to the theory of viscoelasticity, if AC relaxation modulus, E(t), is known, |E*| can be calculated (and vice versa) through numerical inter-conversion procedures. Several case studies focusing on full-depth AC pavements were conducted to isolate potential backcalculation issues that are only related to the modulus master curve of the AC layer. For the proof-of-concept demonstration, a comprehensive full-depth AC analysis was carried out through 10,000 batch simulations using a viscoelastic forward analysis program. Anomalies were detected in the comprehensive raw synthetic database and were eliminated through imposition of certain constraints involving the sigmoid master curve coefficients. The surrogate forward modeling results showed that NNs are able to predict deflection-time histories from E(t) master curve coefficients and other layer properties very well. The NN inverse modeling results demonstrated the potential of NNs to backcalculate the E(t) master curve coefficients from single-drop FWD deflection-time history data, although the current prediction accuracies are not sufficient to recommend these models for practical implementation. 
Considering the complex nature of the problem and the many uncertainties involved, including possible dynamics during FWD testing (related to the presence and depth of a stiff layer, inertial and wave-propagation effects, etc.), the limitations of current FWD technology (integration errors, truncation issues, etc.), and the need for a rapid and simplified approach suitable for routine implementation, recommendations for future research are provided, making a strong case for an expanded research study.
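For context, the sigmoid master curve referred to above is commonly written in MEPDG practice as log10|E*| = δ + α / (1 + exp(β + γ·log10 t_r)), where t_r is the reduced time and δ and δ + α are the lower and upper logarithmic asymptotes. The sketch below evaluates this form with illustrative, uncalibrated coefficients (not values from the study).

```python
import math

def sigmoid_master_curve(log_tr, delta, alpha, beta, gamma):
    """MEPDG-style sigmoid master curve:
    log10|E*| = delta + alpha / (1 + exp(beta + gamma * log_tr)),
    where log_tr is log10 of the reduced time."""
    return delta + alpha / (1.0 + math.exp(beta + gamma * log_tr))

# Illustrative coefficients only: asymptotes at 10**0.5 and 10**4.0.
delta, alpha, beta, gamma = 0.5, 3.5, -1.0, 0.6
soft = sigmoid_master_curve(20.0, delta, alpha, beta, gamma)    # long reduced time
stiff = sigmoid_master_curve(-20.0, delta, alpha, beta, gamma)  # short reduced time
```

Constraints of the kind mentioned in the abstract (e.g. a positive α and a monotone curve) are what rule out anomalous coefficient sets in a synthetic database.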
Abstract:
This paper presents steganalytic techniques designed to detect messages hidden using histogram shifting methods. First, techniques to identify specific histogram shifting methods are suggested, based on visible marks in the histogram or abnormal statistical distributions. We then present a general technique capable of detecting all of the histogram shifting methods analyzed. It is based on the effect histogram shifting has on the "volatility" of the histogram of differences, and on the reduction of that volatility whenever new data are hidden.
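A minimal sketch of the idea, not the paper's actual detector: naive histogram shifting empties bins next to the shifted region, so counting empty bins flanked by populated ones in the histogram of differences gives a crude indicator of such visible marks.

```python
from collections import Counter

def gap_score(differences, window=5):
    """Count empty histogram bins flanked by populated ones near zero in
    the histogram of differences -- a crude marker of the gaps that naive
    histogram shifting leaves behind (toy indicator, not the paper's test)."""
    hist = Counter(differences)
    gaps = 0
    for d in range(-window, window + 1):
        if hist[d] == 0 and hist[d - 1] > 0 and hist[d + 1] > 0:
            gaps += 1
    return gaps

# Cover-like differences cluster smoothly around 0 ...
cover = [-2, -1, -1, 0, 0, 0, 0, 1, 1, 2]
# ... while shifting every value >= 1 one step right empties bin 1.
stego = [-2, -1, -1, 0, 0, 0, 0, 2, 2, 3]
```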
Abstract:
This work is devoted to the problem of reconstructing the basis-weight structure of the paper web with black-box techniques. The data analyzed come from a real paper machine and are collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. Coupled with the Discrete Fourier Transform (DFT), it gives a flexible and interesting tool for analyzing properties of the paper web. Both ARMA and the DFT are used independently to represent the given signal in a simplified version of our algorithm, but the final goal is to combine the two. The Ljung-Box Q-statistic lack-of-fit test, combined with the root-mean-squared-error coefficient, provides a tool for separating significant signals from noise.
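A minimal sketch of the two ingredients, using a synthetic scan signal rather than real scanner data: a naive DFT locates a periodic streak in the web, and a Yule-Walker estimate gives the coefficient of an AR(1) model (the simplest member of the ARMA family).

```python
import cmath
import math

def dft(signal):
    """Naive O(N^2) discrete Fourier transform; adequate for short scans."""
    n_samples = len(signal)
    return [sum(signal[n] * cmath.exp(-2j * math.pi * k * n / n_samples)
                for n in range(n_samples)) for k in range(n_samples)]

def ar1_coefficient(signal):
    """Yule-Walker estimate of an AR(1) coefficient: the lag-1
    autocorrelation of the mean-removed signal."""
    mean = sum(signal) / len(signal)
    x = [v - mean for v in signal]
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(v * v for v in x)
    return num / den

# A synthetic scan with a periodic streak: 3 cycles across 64 samples.
N = 64
scan = [math.sin(2 * math.pi * 3 * n / N) for n in range(N)]
spectrum = [abs(c) for c in dft(scan)]
peak_bin = max(range(1, N // 2), key=spectrum.__getitem__)  # bin 3 dominates
```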
Abstract:
Nowadays the variety of fuels used in power boilers is widening, and new boiler constructions and operating models have to be developed. This research and development is done in small pilot plants, where a faster analysis of the boiler's mass and heat balance is needed so that the right decisions can be made already during a test run. The barrier to determining the boiler balance during test runs is the long process of chemically analyzing the collected input- and output-matter samples. The present work concentrates on finding a way to determine the boiler balance without chemical analyses and on optimizing the test rig to obtain the best possible accuracy for the boiler's heat and mass balance. The purpose of this work was to create an automatic boiler-balance calculation method for the 4 MW CFB/BFB pilot boiler of Kvaerner Pulping Oy located in Messukylä, Tampere. The calculation was implemented on the data-management computer of the pilot plant's automation system. It is made in the Microsoft Excel environment, which provides a good basis and functions for handling large databases and calculations without any delicate programming. The automation system of the pilot plant was reconstructed and updated by Metso Automation Oy during 2001, and the new system, MetsoDNA, has good data-management properties, which is necessary for large calculations such as the boiler-balance calculation. Two possible methods for calculating the boiler balance during a test run were found: either the fuel flow is determined and used to calculate the boiler's mass balance, or the unburned carbon loss is estimated and the mass balance is calculated on the basis of the boiler's heat balance. Both methods have their own weaknesses, so they were implemented in parallel in the calculation and the choice of method was left to the user. The user also needs to define the fuels used and some solid mass flows that are not measured automatically by the automation system.
Sensitivity analysis showed that the most essential values for accurate boiler-balance determination are the flue-gas oxygen content, the boiler's measured heat output and the lower heating value of the fuel. The theoretical part of this work concentrates on the error management of these measurements and analyses, and on measurement accuracy and boiler-balance calculation in theory. The empirical part concentrates on the creation of the balance calculation for the boiler in question and on describing the working environment.
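As a caricature of the heat-balance route only (the thesis's Excel calculation balances many more measured streams), with an assumed overall efficiency the fuel mass flow follows directly from the measured heat output and the fuel's lower heating value:

```python
def fuel_flow_from_heat_balance(heat_output_kw, lhv_kj_per_kg, efficiency):
    """Fuel mass flow (kg/s) implied by the boiler heat output, the fuel's
    lower heating value and an assumed overall efficiency. Illustrative
    only: a real balance also tracks flue-gas losses, unburned carbon, etc."""
    return heat_output_kw / (lhv_kj_per_kg * efficiency)

# 4 MW pilot boiler, an assumed LHV of 19 MJ/kg and 90 % efficiency.
flow = fuel_flow_from_heat_balance(4000.0, 19000.0, 0.90)  # roughly 0.23 kg/s
```

The sensitivity result above is visible in this formula: the estimate is directly proportional to the heat output and inversely proportional to the heating value.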
Abstract:
The main objective of this study was to perform a statistical analysis of ecological types from optical satellite data, using Tipping's sparse Bayesian algorithm. The thesis uses the Relevance Vector Machine algorithm for ecological classification between forestland and wetland. This binary classification technique was further used to classify several other tree species, producing a hierarchical classification of the subclasses of a given target class. We also attempted to use an airborne image of the same forest area: combining it with image analysis and different image-processing operations, we tried to extract good features and then used them to classify forestland and wetland.
Abstract:
Intensifying competition has confronted companies with difficult challenges. Products should reach the market faster, new products should be better than the old ones and, above all, better than competitors' corresponding products. In addition, design, manufacturing and other costs should not be high. Product data, its management and its exchange are often used to help meet these challenges. Andritz, like other companies, must take these issues into account to succeed in the competition. This thesis was done for Andritz, one of the world's leading manufacturers of pulp and paper production equipment and providers of maintenance services. Andritz is deploying an ERP system at all of its sites. The company wants to exploit the system as effectively as possible, so product data covering the whole life cycle are wanted in the system. Part of the product data is created by Andritz's partners and subcontractors, so the data exchange between partners should also be handled in such a way that the data go directly into the ERP system. The goal of this thesis is to find a solution for handling the data exchange between Andritz and its partners. The thesis presents the purpose and importance of product data, its management and its exchange. Various alternatives for implementing the data-exchange system are presented, some based on general and industry-specific standards; two commercial products are also introduced. The following standards are examined: PaperIXI, papiNet, X-OSCO, the PSK standards and RosettaNet. In addition, the data-exchange solutions of SAP, the ERP system vendor, are examined. The best of these alternatives are studied in more detail, and finally the different solutions are compared with each other to find the best option for Andritz's needs.
Abstract:
We provide an incremental quantile estimator for non-stationary streaming data, proposing a method for the simultaneous estimation of multiple quantiles corresponding to given probability levels. Because of memory limitations it is not feasible to compute the quantiles by storing the data, so estimating the quantiles as the data pass by is the only possibility. This can be effective in network measurement. To minimize the mean-squared error of the estimation we use a parabolic approximation, and for comparison we simulate the results for different numbers of runs using both linear and parabolic approximations.
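The incremental idea can be sketched with the classical stochastic-approximation update; note that this sketch is only the linear baseline, whereas the paper adds a parabolic approximation and simultaneous multi-quantile estimation. The estimate moves up by step·p when a sample exceeds it and down by step·(1−p) otherwise, so it settles where a fraction p of the data lies below it.

```python
import random

def update_quantile(q, x, p, step=0.01):
    """One stochastic-approximation update of the running p-quantile
    estimate q after observing sample x; no data are stored."""
    if x <= q:
        return q - step * (1.0 - p)
    return q + step * p

# Track the 0.9-quantile of a Uniform(0, 1) stream (true value: 0.9).
rng = random.Random(0)
q = 0.0
for _ in range(20000):
    q = update_quantile(q, rng.uniform(0.0, 1.0), p=0.9)
```

With a constant step the estimate hovers around the true quantile and can also track slow drift, which is what makes this family of estimators attractive for non-stationary streams.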
Abstract:
To enable a mathematically and physically sound execution of fatigue tests and a correct interpretation of their results, statistical evaluation methods are used to assist in the analysis of fatigue-testing data. The main objective of this work is to develop step-by-step instructions for the statistical analysis of laboratory fatigue data. The scope of this project is to provide practical cases that answer the various questions raised in the treatment of test data, applying the methods and formulae in document IIW-XIII-2138-06 (Best Practice Guide on the Statistical Analysis of Fatigue Data). Generally, the questions in the data sheets involve several aspects: estimation of the necessary sample size, verification of the statistical equivalence of the collated data sets, and determination of characteristic curves in different cases. The comprehensive examples given in this thesis demonstrate the various statistical methods and develop a sound procedure for creating reliable calculation rules for fatigue analysis.
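As a taste of one listed aspect, at a single stress level a characteristic fatigue value is typically the mean of log life minus k standard deviations; the k used below is a placeholder, since the guide derives k from the sample size and the chosen survival probability.

```python
import math

def characteristic_value(log_lives, k=2.0):
    """Characteristic (lower-bound) value of log fatigue life as
    mean - k * sample standard deviation; k=2 is a placeholder, not
    the tabulated value for this sample size."""
    n = len(log_lives)
    mean = sum(log_lives) / n
    var = sum((v - mean) ** 2 for v in log_lives) / (n - 1)
    return mean - k * math.sqrt(var)

# log10 of cycles to failure for five hypothetical specimens
logs = [5.1, 5.3, 5.0, 5.2, 5.4]
char = characteristic_value(logs)  # below the mean of 5.2
```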
Abstract:
This thesis examines the suitability of the TETRA network for a telemetry application intended for public-safety authorities, in which various measurement data and alarm information travel over the network as SDS messages. The purpose of the thesis is to implement two embedded software programs and one PC program, to be used in a demonstration system that will be built. In addition, the functionality and limitations of the TETRA network in this application are investigated under different conditions and load situations. The theoretical part of the thesis covers the specification of the work and the software development process in its different areas. The final part describes the implemented software, separately and as a whole, from design to implementation, as well as the testing of the final system.
Abstract:
In this work we have tried to provide a current overview of linked open data in the field of education. We have reviewed both the applications aimed at implementing these technologies in existing data repositories (web pages, repositories of learning objects, repositories of courses and educational programmes) and their role in supporting new paradigms in education.
Abstract:
In this thesis the author approaches the problem of automated text classification, one of the basic tasks in building an intelligent Internet search agent. The work discusses various approaches to the sub-problems of automated text classification, such as feature extraction and machine learning on text sources. The author also describes her own multiword approach to feature extraction and presents the results of testing this approach with a classifier based on linear discriminant analysis, and with a classifier that combines unsupervised learning for etalon extraction with supervised learning using the common backpropagation algorithm for a multilayer perceptron.
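A crude sketch of what multiword feature extraction can look like (the author's exact method is not detailed in the abstract): collect the most frequent word n-grams of a document as candidate features.

```python
from collections import Counter

def multiword_features(text, n=2, top=5):
    """Return the `top` most frequent word n-grams ('multiwords') in a
    text -- a toy stand-in for the thesis's feature-extraction step."""
    words = text.lower().split()
    grams = Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return [gram for gram, _ in grams.most_common(top)]

doc = ("support vector machines and neural networks are machine learning "
       "methods ; support vector machines scale well")
feats = multiword_features(doc)  # repeated bigrams rank first
```

Features like these would then feed the classifiers mentioned above, e.g. as binary or frequency-valued inputs to the multilayer perceptron.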
Abstract:
This thesis assumes that the fourth-generation mobile network is a seamless combination of the existing second- and third-generation wireless networks and the short-range WLAN and Bluetooth radio technologies. These technologies are also assumed to be so interoperable that the user does not notice a change of access network. The thesis presents the architecture and basic operating principles of the most important wireless technologies related to fourth-generation mobile networks. It describes different techniques and practices for measuring and collecting data. The obtained transaction measurements can be used to offer differentiated service levels and to optimize network and service capacity. In addition, the thesis presents the Internet Business Information Manager, a software framework for distributed data collection; the measurement data it gathers can be used for service-level monitoring, reporting and billing. The practical part of the work was to develop an agent that monitors wireless-network traffic in order to observe quality of service. The agent would reside in a mobile phone, measuring network traffic. The agent could not be implemented, however, because the software environment proved insufficient. In any case, the work showed that there is a real need for agents that collect data from the user's perspective.