934 results for Specification
Abstract:
The aim of this work was to weld 25 mm thick Ruukki E500 TMCP steel with tandem MAG equipment. The intention was to reduce the groove volume as much as possible and to carry out test welds at heat inputs of 0.8 kJ/mm and 2.5 kJ/mm. The theoretical part dealt with tandem MAG welding and its productivity and quality issues, and examined the use of high-strength steels in welding and in shipbuilding. The experimental part examined the advantages observed in welding, the problems encountered, and possible solutions to them. The mechanical properties of the welded joint were examined by non-destructive and destructive methods. Preliminary welding procedure specifications were created for both heat inputs. Testing began with a 30° groove angle, reducing the angle where possible; welding was not successful with groove angles below 30°. During the welding tests, the effect of arc blow on the welding process was observed. The shielding gas flow rate had to be within a certain range for the bead layers to succeed without porosity problems; at the lower heat input, the gas flow rates were more critical when welding the upper bead layers. Tilting the welding torch 7-10 degrees in the lateral direction helped prevent undercut. Of the destructive tests, the results were acceptable for all except the side-bend test of the butt weld.
Abstract:
Reports of uterine cancer deaths that do not specify the subsite of the tumor threaten the quality of the epidemiologic appraisal of corpus and cervix uteri cancer mortality. The present study assessed the impact of correcting the estimated corpus and cervix uteri cancer mortality in the city of São Paulo, Brazil. The epidemiologic assessment of death rates comprised the estimation of magnitudes, trends (1980-2003), and area-level distribution based on three strategies: i) using uncorrected death certificate information; ii) correcting estimates of corpus and cervix uteri mortality by fully reallocating unspecified deaths to either one of these categories, and iii) partially correcting specified estimates by maintaining as unspecified a fraction of deaths certified as due to cancer of "uterus not otherwise specified". The proportion of uterine cancer deaths without subsite specification decreased from 42.9% in 1984 to 20.8% in 2003. Partial and full corrections resulted in considerable increases of cervix (31.3 and 48.8%, respectively) and corpus uteri (34.4 and 55.2%) cancer mortality. Partial correction did not change trends for subsite-specific uterine cancer mortality, whereas full correction did, thus representing an early indication of decrease for cervical neoplasms and stability for tumors of the corpus uteri in this population. Ecologic correlations between mortality and socioeconomic indices were unchanged for both strategies of correcting estimates. Reallocating unspecified uterine cancer mortality in contexts with a high proportion of these deaths has a considerable impact on the epidemiologic profile of mortality and provides more reliable estimates of cervix and corpus uteri cancer death rates and trends.
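The full-correction strategy described above can be illustrated with a small pro-rata reallocation sketch; the counts below are invented for illustration, and the assumption that unspecified deaths are redistributed in proportion to the observed cervix/corpus split is ours, not a quotation of the paper's exact weighting scheme.

    # Hypothetical counts for one year (not the study's data).
    cervix, corpus, unspecified = 500, 300, 200

    # Full correction: reallocate all unspecified uterine cancer deaths
    # pro rata to the cervix/corpus split among specified deaths.
    specified = cervix + corpus
    cervix_corrected = cervix + unspecified * cervix / specified
    corpus_corrected = corpus + unspecified * corpus / specified
    print(cervix_corrected, corpus_corrected)  # 625.0 375.0

A partial correction, in the paper's sense, would redistribute only a fraction of the unspecified deaths and keep the remainder in the unspecified category.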
Abstract:
The purpose of this Master's thesis is to examine how the development of transfer pricing is reflected in professional publications over a 20-year period and in which direction transfer pricing is developing; what the risks and benefits of transfer pricing are according to the research material; and from what perspectives transfer pricing has been written about during those 20 years. The articles are also examined for differences between Finnish and international writing. This is a qualitative study using content analysis as the research method, including classification, thematization and comparison. The research material consists of three journals: Verotus, Tilintarkastus (nowadays Balanssi) and The Accounting Review, from which articles on transfer pricing were collected over the 20-year period. The results show the development of transfer pricing and how the topic has become increasingly central. Transfer pricing now plays a significant role in international taxation; setting the arm's-length price at the correct value is seen as a challenge. The development of transfer pricing emerged as an important category, as did the threat perceived by states that tax revenues shift to other states through transfer pricing. As a conclusion, transfer pricing is a problematic area of taxation, because a clear, direct transfer price cannot always be given for a product, service or financing; the price is a line drawn in water. Transfer pricing is regarded as a threat of erosion of states' tax bases if it is used to shift assets to states with lower taxation. Corporate groups operate ever more widely across different states, so their aim is to plan the results of the different parts of the group. There are many points of development and needs in the area of transfer pricing in the future. The development is affected by states' corporate tax policy decisions, their own legislation, and their commitment to the OECD regulations that aim at uniform practices. Future research could be extended internationally and could consider the directions of development and their effects in more depth.
Abstract:
ICT contributes about 0.83 GtCO2 of emissions, of which roughly 37% comes from telecom infrastructure. At the same time, the increasing cost of energy has been hindering the industry from providing more affordable services to users. One source of these problems is said to be the rigidity of current network infrastructures, which limits innovation in the network. SDN (Software Defined Networking) has emerged as one of the prominent solutions with its ideas of abstraction, visibility, and programmability in the network. Nevertheless, significant effort is still needed to actually use it to create a more energy- and environmentally friendly network. In this paper, we propose and develop a platform for building ecology-related SDN applications. Our main approach to realizing this goal is to maximize the abstractions provided by OpenFlow and to expose RESTful interfaces to modules that enable energy saving in the network. While OpenFlow is intended to be the standard SDN protocol, some mechanisms are still not defined in its specification, such as settings related to Quality of Service (QoS). To address this, we created REST interfaces for configuring QoS on the switches, which can maximize network utilization. We also created a module for minimizing the network resources required to deliver packets across the network; this is achieved by utilizing redundant links when they are needed and disabling them when the load in the network decreases. The use of multiple paths in a network is also evaluated for its benefits in transfer rate and energy savings. We hope the developed framework can help developers create applications that support environmentally friendly network infrastructures.
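As a rough illustration of the kind of RESTful QoS interface described above, the following sketch posts a queue definition and a steering rule to an SDN controller. The endpoint paths, payload fields and controller address are hypothetical (loosely in the style of Ryu/Floodlight-type controller APIs), not the paper's actual interface.

    import requests

    CONTROLLER = "http://127.0.0.1:8080"  # hypothetical controller address

    # Create a rate-limited queue on a switch port (illustrative payload).
    queue = {"switch_id": "00:00:00:00:00:00:00:01",
             "port": 2,
             "max_rate_bps": 10_000_000}
    requests.post(f"{CONTROLLER}/qos/queue", json=queue, timeout=5)

    # Steer matching traffic into that queue (illustrative match fields).
    rule = {"match": {"ipv4_dst": "10.0.0.5", "tcp_dst": 80},
            "actions": {"queue": 0}}
    requests.post(f"{CONTROLLER}/qos/rule", json=rule, timeout=5)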
Abstract:
Energy efficiency is an important topic in the electric motor drive market. Although more efficient electric motor types are available, the induction motor remains the most common industrial motor type. IEC methods for determining the losses and efficiency of converter-fed induction motors were introduced recently with the release of technical specification IEC/TS 60034-2-3. The main interests of this study are determining induction motor losses with IEC/TS 60034-2-3 method 2-3-A and assessing the practical applicability of the method. Method 2-3-A introduces a specific test converter waveform to be used in the measurements. Differences between the induction motor losses with a test converter supply and with a DTC converter supply are investigated. In the IEC methods, the tests are run at the motor's rated fundamental voltage, which in practice requires the frequency converter to be fed with a raised input voltage. In this study, the tests are run on both frequency converters with artificially raised converter input voltage, resulting in the rated motor fundamental input voltage as required by IEC. For comparison, the tests are also run with both converters on normal grid input voltage, which results in a lower motor fundamental voltage and a reduced flux level but should be more relevant from a practical point of view. According to IEC method 2-3-A, tests are run at rated motor load; to ensure comparability of the results, the rated load is also used in the grid-fed converter measurements, although the motor is then overloaded while producing the rated torque at the reduced flux level. The IEC 2-3-A method also requires sinusoidal supply test results obtained with IEC method 2-1-1B. Therefore, the induction motor losses are determined with the recently updated IEC 60034-2-1 method 2-1-1B at the motor's rated voltage, but also at two lower motor voltages corresponding to the output fundamental voltages of the two grid-supplied converters. Method 2-3-A was found to be complex to apply, but the results were stable. According to the results, method 2-3-A and the test converter supply are usable for comparing the losses and efficiency of different induction motors at the operating point of rated voltage, rated frequency and rated load, but the measurements give no prediction of the motor losses in the final application. One might therefore strongly criticize the method's main principles. It seems that releasing IEC 60034-2-3 as a technical specification instead of a final standard was, for now, justified, since the practical relevance of the main method is questionable.
Abstract:
Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was done through human interpretation, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon: first as an analogue recording onto a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century, technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, usage indicates high interest in their future enhancement and enrichment. Looking at these digital media distribution systems, we have identified three main areas of possible improvement: network structure and coordination, transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency and availability can be made in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software that let us prove its correctness, together with other desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, such that the modelling of one node is separated from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this can be modelled in Event-B. Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms and show that having such a language can make these algorithms easier to understand. We also show how generating Event-B code from this language can result in less complexity than creating the models from written specifications. We also consider the decoding part of a media distribution system by showing how video decoding can be done in parallel, based on formally defined dependencies between frames and blocks in a video sequence; we have shown that this step, too, can be performed in a way that is mathematically proven correct. The modelling and proving in this thesis is, for the most part, tool-based. This demonstrates the advance of formal methods as well as their increased reliability, and thus advocates their more widespread use in the future.
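The BitTorrent piece-selection adaptation mentioned above can be sketched as a windowed picker: pieces near the playback position are preferred so playback stays smooth, and within that window the rarest piece in the swarm is chosen. This is a common streaming variant offered as a plain-code illustration of the general idea, not a rendering of the thesis's Event-B model.

    import random

    def pick_piece(playback_pos, have, availability, window=16):
        """Choose the next piece to request for on-demand streaming.

        Pieces in a window ahead of the playback position are preferred;
        within the window the rarest piece (lowest swarm availability)
        is chosen, breaking ties randomly.
        """
        n = len(availability)
        in_window = [i for i in range(playback_pos, min(playback_pos + window, n))
                     if i not in have]
        candidates = in_window or [i for i in range(n) if i not in have]
        if not candidates:
            return None  # nothing left to download
        rarest = min(availability[i] for i in candidates)
        return random.choice([i for i in candidates if availability[i] == rarest])

    # Example: 32 pieces, the first four already downloaded.
    avail = [random.randint(1, 8) for _ in range(32)]
    print(pick_piece(playback_pos=4, have={0, 1, 2, 3}, availability=avail))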
Abstract:
This Master's thesis evaluates the competitors in the welding quality management software market. The competitive field is new, and there is no exact knowledge of what kinds of competitors are on the market. Welding quality management software helps companies guarantee high quality; it does so by ensuring that the welder is qualified and follows the welding procedure specifications and the given parameters. In addition, the software collects all data from the welding process and generates the required documents from it. The theoretical part of the thesis consists of a literature review of solution business, the theory of competitor analysis and competitive forces, and welding quality management. The empirical part is a qualitative study in which competing welding quality management software products are examined and their users interviewed. The result of the thesis is a new competitor analysis model for welding quality management software, with which the products can be rated on the basis of the primary and secondary features they offer. Secondly, the thesis analyses the current competitive situation using the newly developed model.
Abstract:
Accurate knowledge of an electrical network is a prerequisite for its efficient planning and operation. In Lappeenranta, the documentation of the outdoor lighting network was initially very fragmented. The main contribution of this study was to provide the city of Lappeenranta with a documentation solution that brings the outdoor lighting network's documentation and topology into a modern electronic form. The study was divided into four main parts: a study of Lappeenranta's outdoor lighting network, a study of network information systems, the definition of the documentation process, and the definition of the maintenance process for the network information systems. Expert statements were used to a significant extent, especially concerning the structure of the outdoor lighting network and the features and use of the different network information systems. Based on these, a picture was formed of the requirements for the new network information system and the baseline of the documentation, which were used to define the documentation process itself and its potential problems, and to estimate the workload. The definition of the maintenance processes was based on the current maintenance processes and methods of Lappeenrannan Energiaverkot Oy. The study resulted in specifications for three different network information systems that solve the problem; two of them fully met the requirements: Trimble NIS and Keypro's KeyLight.
Abstract:
China: long-run economic growth. The paper aims to understand, on theoretical and empirical grounds, the main determinants of China's long-run growth. The econometric analysis suggests the exchange rate is the most important variable in explaining China's economic growth; in a different model specification, using growth rates of exports instead of trade openness, the exchange rate remains the main variable, but export performance has almost the same relevance. Exchange rate policy thus seems a direct route to explaining economic growth in China, and there is no clear sign that China will increase exchange rate flexibility at the pattern and pace suggested by most trade partners; this can hardly be criticized given China's own interest in sustaining its export performance and economic growth.
Abstract:
This work was a literature study of the current state of battery technology and its markets in consumer electronics, together with a review of potential future battery technologies. It was found that the only battery types widely used in consumer electronics are nickel-metal hydride (NiMH) and lithium-ion (Li-ion) batteries. Capacity is usually considered the most important battery property in consumer electronics, and there Li-ion batteries are clearly better, with up to twice the energy density. Depending on the cathode material used, Li-ion batteries can also achieve a service life several times longer in charge cycles and a discharge current several times higher. The advantages of NiMH batteries are mainly lower price and better safety; their low voltage can also be counted as an advantage, because NiMH batteries can replace disposable alkaline primary cells. In 2012, measured in capacity, up to eight times more Li-ion than NiMH batteries were sold, and sales volumes are predicted to grow further. Most Li-ion sales went to consumer electronics applications, and as much as two thirds were batteries for laptops and mobile phones. Many new battery technologies and improvements to Li-ion batteries are under development, but lithium-air batteries have the greatest potential, along with major obstacles to commercialization. In the shorter term, promising technologies include lithium-sulfur batteries and new anode materials for current Li-ion batteries, such as silicon and aluminium/titanium, whose problems have found solutions in nanotechnology.
Abstract:
Layout planning is a process of sizing and placing rooms (e.g. in a house) while attempting to optimize various criteria. Often there are conflicting criteria such as construction cost, minimizing the distance between related activities, and meeting the area requirements for these activities. The process of layout planning has mostly been done by hand, with a handful of attempts to automate the process. This thesis explores some of these past attempts and describes several new techniques for automating the layout planning process using evolutionary computation. These techniques are inspired by the existing methods, while adding some of their own innovations. Additional experiments test the possibility of allowing polygonal exteriors with rectilinear interior walls. Several multi-objective approaches are used to evaluate and compare fitness. The evolutionary representation and requirements specification used provide great flexibility in problem scope and depth and are worth considering in future layout and design attempts. The system outlined in this thesis is capable of evolving a variety of floor plans conforming to functional and geometric specifications. Many of the resulting plans look reasonable even when compared to a professional floor plan. Additionally, polygonal and multi-floor buildings were also generated.
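A minimal sketch of the kind of multi-objective fitness evaluation such a system relies on is given below; the two objectives (area error and inter-room distance) and the data layout are illustrative assumptions, not the thesis's actual criteria.

    def fitness(plan, required_area):
        """Score a candidate floor plan on two conflicting objectives.

        plan maps room -> (x, y, width, height); required_area maps
        room -> target area. Returns a tuple to be compared by Pareto
        dominance in a multi-objective evolutionary algorithm.
        """
        # Objective 1: total deviation from the required room areas.
        area_error = sum(abs(w * h - required_area[r])
                         for r, (x, y, w, h) in plan.items())
        # Objective 2: total Manhattan distance between room origins
        # (a stand-in for distance between related activities).
        rooms = list(plan.values())
        distance = sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
                       for i, a in enumerate(rooms) for b in rooms[i + 1:])
        return (area_error, distance)

    plan = {"kitchen": (0, 0, 4, 3), "living": (4, 0, 6, 5)}
    print(fitness(plan, {"kitchen": 12, "living": 28}))  # (2, 4)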
Abstract:
Printed blank by which John Brown Cullen solemnly declares that he is experienced in the art of measuring and culling timber. He states that he is entering into the service of Burton and Bro. of Barrie. He will make out the specification of the timber in berths 192 and 198 and submit his findings to Burton and Brother, Oct. 22, 1877.
Abstract:
Latent variable models in finance originate both from asset pricing theory and time series analysis. These two strands of literature appeal to two different concepts of latent structures, both useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns given a small number of state variables. In this paper, we use the concept of Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vectorial space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only the factorial risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets given a small number of factors, as in standard Factor Analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
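For concreteness, the SDF unification sketched above rests on the standard pricing relation and the beta representation it implies; these are textbook forms rather than equations quoted from the paper.

    % Pricing kernel (SDF) relation for gross returns R_{i,t+1}:
    \[
      E_t\!\left[\, m_{t+1} R_{i,t+1} \,\right] = 1 .
    \]
    % Writing m_{t+1} = a_t + b_t' F_{t+1} for factors F_{t+1}, with a_t, b_t
    % deterministic functions of the state variables, yields the conditional
    % beta pricing relation:
    \[
      E_t\!\left[ R_{i,t+1} \right] - R_{f,t} = \beta_{i,t}' \lambda_t ,
      \qquad
      \beta_{i,t} = \operatorname{Var}_t(F_{t+1})^{-1}
                    \operatorname{Cov}_t\!\left(F_{t+1},\, R_{i,t+1}\right).
    \]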
Abstract:
This paper proposes finite-sample procedures for testing the SURE specification in multi-equation regression models, i.e. whether the disturbances in different equations are contemporaneously uncorrelated or not. We apply the technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] to obtain exact tests based on standard LR and LM zero correlation tests. We also suggest a MC quasi-LR (QLR) test based on feasible generalized least squares (FGLS). We show that the latter statistics are pivotal under the null, which provides the justification for applying MC tests. Furthermore, we extend the exact independence test proposed by Harvey and Phillips (1982) to the multi-equation framework. Specifically, we introduce several induced tests based on a set of simultaneous Harvey/Phillips-type tests and suggest a simulation-based solution to the associated combination problem. The properties of the proposed tests are studied in a Monte Carlo experiment which shows that standard asymptotic tests exhibit important size distortions, while MC tests achieve complete size control and display good power. Moreover, MC-QLR tests performed best in terms of power, a result of interest from the point of view of simulation-based tests. The power of the MC induced tests improves appreciably in comparison to standard Bonferroni tests and, in certain cases, outperforms the likelihood-based MC tests. The tests are applied to data used by Fischer (1993) to analyze the macroeconomic determinants of growth.
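The Monte Carlo test technique of Dwass (1957) and Barnard (1963) applied above can be sketched generically: when a statistic is pivotal under the null, an exact p-value follows from ranking the observed value among simulated ones. The sketch below is a generic illustration, not the paper's SURE-specific implementation.

    import numpy as np

    def mc_pvalue(observed, simulate, n_rep=99, rng=None):
        """Exact Monte Carlo p-value for a pivotal test statistic.

        simulate(rng) draws data under the null and returns the statistic.
        With N replications, (1 + #{simulated >= observed}) / (N + 1) is an
        exactly sized p-value when the statistic is pivotal under the null.
        """
        rng = rng or np.random.default_rng()
        sims = np.array([simulate(rng) for _ in range(n_rep)])
        return (1 + np.sum(sims >= observed)) / (n_rep + 1)

    # Toy example: zero-correlation test between two series of length 25.
    def sim(rng, n=25):
        x, y = rng.standard_normal(n), rng.standard_normal(n)
        return abs(np.corrcoef(x, y)[0, 1])

    print(mc_pvalue(observed=0.45, simulate=sim, rng=np.random.default_rng(0)))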
Abstract:
A wide range of tests for heteroskedasticity have been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. There have been a number of recent studies that seek to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves the intractable null distribution problem, in particular the problems raised by the sup-type and combined test statistics as well as (when relevant) unidentified nuisance parameter problems under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
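As one concrete instance of wrapping a classical criterion in a Monte Carlo test, the sketch below simulates the Goldfeld-Quandt variance-ratio statistic under i.i.d. Gaussian errors; given the regressors, the statistic is scale-invariant and hence pivotal under the null. This is a generic illustration under those assumptions, not the paper's code.

    import numpy as np

    def goldfeld_quandt(resid, split):
        """Ratio of residual sums of squares: last `split` over first `split`."""
        return np.sum(resid[-split:] ** 2) / np.sum(resid[:split] ** 2)

    def mc_gq_pvalue(y, X, split, n_rep=99, seed=0):
        """Monte Carlo p-value for the Goldfeld-Quandt statistic."""
        rng = np.random.default_rng(seed)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        observed = goldfeld_quandt(y - X @ beta, split)
        sims = []
        for _ in range(n_rep):
            e = rng.standard_normal(len(y))     # errors drawn under H0
            b, *_ = np.linalg.lstsq(X, e, rcond=None)
            sims.append(goldfeld_quandt(e - X @ b, split))
        return (1 + sum(s >= observed for s in sims)) / (n_rep + 1)

    # Example: 60 observations, intercept-plus-trend design, split of 20.
    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(60), np.arange(60.0)])
    y = X @ np.array([1.0, 0.5]) + rng.standard_normal(60)
    print(mc_gq_pvalue(y, X, split=20))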