951 results for Significance-driven computing
Abstract:
The research problem of this thesis was to identify the alternative ways of implementing EDI connections and their costs. To estimate the costs arising from the use of EDI services, an MS Office Excel calculation model was constructed, with which the initial acquisition and operating costs as well as the profitability of an EDI project can be evaluated. The annual costs arising from order processing, as well as the cost savings, were estimated using time-driven activity-based costing. The applied theory was limited to activity-based costing and investment appraisal. Prices, service quality and other value-added services were surveyed from operators offering EDI services. The cost analysis was limited to identifying the initial acquisition costs and the costs incurred during use. The research approach is constructive in nature, since the goal was to create a calculation model that supports management decision-making. Based on the survey, the factors affecting the operating costs of EDI fall into three groups: the number of customers and suppliers covered by electronic order messages, electronic invoices, and EDI messages. For EDI messages, the determining factors are the format of the outgoing message and the basis of charging.
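The time-driven activity-based costing logic mentioned above can be sketched in a few lines. The following Python illustration compares manual and EDI-based order handling; all cost figures, capacity numbers and time estimates are invented for the example, not values from the thesis or its Excel model.

```python
# Hedged sketch of time-driven activity-based costing (TDABC) for order
# handling. All figures below are illustrative assumptions.

def capacity_cost_rate(total_cost_eur, practical_capacity_min):
    """Cost per minute of supplying order-handling capacity."""
    return total_cost_eur / practical_capacity_min

def annual_order_cost(orders_per_year, minutes_per_order, rate_eur_per_min):
    """Annual cost assigned to order handling at the given time driver."""
    return orders_per_year * minutes_per_order * rate_eur_per_min

# Example: manual order entry (12 min/order) vs. EDI (2 min/order).
rate = capacity_cost_rate(total_cost_eur=60_000, practical_capacity_min=100_000)
manual = annual_order_cost(orders_per_year=5_000, minutes_per_order=12,
                           rate_eur_per_min=rate)
edi = annual_order_cost(orders_per_year=5_000, minutes_per_order=2,
                        rate_eur_per_min=rate)
annual_saving = manual - edi
```

The annual saving from the sketch would then be weighed against the EDI project's acquisition costs in an investment calculation, which is the structure the thesis's model follows.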
Abstract:
Direct-driven permanent magnet synchronous generators are one of the most promising topologies for megawatt-range wind power applications. The rotational speed of a direct-driven generator is very low compared with traditional electrical machines, and this low rotational speed requires high torque to produce megawatt-range power. The special features of direct-driven generators caused by the low speed and high torque are discussed in this doctoral thesis. Low speed and high torque set high demands on torque quality: the cogging torque and the load torque ripple must be as low as possible to prevent mechanical failures. In this doctoral thesis, various methods to improve the torque quality are compared with each other. Rotor surface shaping, magnet skew, magnet shaping, and the asymmetrical placement of magnets and stator slots are studied not only in terms of torque quality but also in terms of their effects on the electromagnetic performance and manufacturability of the machine. The heat transfer of the direct-driven generator must be designed to handle the copper losses of the stator winding, which carries a high current density, and to keep the temperature of the magnets low enough. The cooling system of the direct-driven generator, applying doubly radial air cooling with numerous radial cooling ducts, was modeled with a lumped-parameter thermal network. The performance of the cooling system was discussed in steady and transient states, and the effect of the number and width of the radial cooling ducts was explored. A large number of radial cooling ducts drastically increases the impact of stack end-area effects, because the stator stack consists of numerous substacks. The effects of the radial cooling ducts on the effective axial length of the machine were studied by analyzing the cross-section of the machine in the axial direction. A method to compensate for the magnet end-area leakage was considered.
The effects of the cooling ducts and the stack end-area effects on the no-load voltages and inductances of the machine were explored using numerical analysis tools based on the three-dimensional finite element method. The electrical efficiency of the permanent magnet machine with different control methods was estimated analytically over the whole speed and torque range, and the electrical efficiencies achieved with the most common control methods were compared with each other. The stator voltage increase caused by the armature reaction was analyzed, and the effect of inductance saturation as a function of load current was incorporated into the analytical efficiency calculation.
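As a toy illustration of the lumped-parameter thermal network approach mentioned above, the following Python sketch solves a two-node winding/iron network in steady state. The losses, thermal resistances and cooling-air temperature are invented values; the actual generator model uses many more nodes (one per substack and duct) and a transient solver.

```python
# Two-node lumped-parameter thermal network: copper winding -> stator iron
# -> cooling air. All parameter values are illustrative assumptions.

def steady_state_temps(p_cu, p_fe, r_cu_fe, r_fe_air, t_air):
    """Solve the two-node network analytically for steady state.

    Heat balance: the winding loss p_cu flows through r_cu_fe into the
    iron node, and the total loss (p_cu + p_fe) flows through r_fe_air
    into the cooling air.
    """
    t_fe = t_air + (p_cu + p_fe) * r_fe_air   # iron temperature (degC)
    t_cu = t_fe + p_cu * r_cu_fe              # winding temperature (degC)
    return t_cu, t_fe

# 10 kW copper loss, 4 kW iron loss, resistances in K/W, 40 degC air.
t_cu, t_fe = steady_state_temps(10_000, 4_000, 0.002, 0.003, 40.0)
```

Refining the duct count in the real model effectively changes `r_fe_air`, which is how the abstract's study of duct number and width maps onto a network like this.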
Abstract:
The present study shows the development, simulation and actual implementation of a closed-loop controller based on fuzzy logic that is able to regulate and standardize the mass flow of a helical fertilizer applicator. The control algorithm was developed using MATLAB's Fuzzy Logic Toolbox. Both open- and closed-loop simulations of the controller were performed in MATLAB's Simulink environment. The instantaneous deviation of the mass flow from the set point (SP), its derivative, and the equipment's translation velocity and acceleration were all used as input signals for the controller, whereas the voltage of the applicator's DC electric motor (DCEM) was driven by the controller as the output signal. Calibration and validation of the rules and membership functions of the fuzzy logic were accomplished in the computer simulation phase, taking into account the system's response to SP changes. The mass flow variation coefficient, measured in experimental tests, ranged from 6.32 to 13.18%. The steady-state error fell between -0.72 and 0.13 g s⁻¹, and the recorded average rise time of the system was 0.38 s. The implemented controller was able both to damp the oscillations in mass flow that are characteristic of helical fertilizer applicators and to respond effectively to SP variations.
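A minimal, single-input sketch of the Mamdani-style fuzzy control idea described above. The actual controller used MATLAB's Fuzzy Logic Toolbox with four inputs; the membership functions, rule base, ranges and output actions below are illustrative assumptions only.

```python
# Toy fuzzy controller: mass-flow error -> DC-motor voltage correction.
# Convention here: error = set point minus measured flow (g/s), so a
# positive error (flow deficit) should raise the motor voltage.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_voltage_step(error):
    """Map mass-flow error (g/s) to a voltage correction (V)."""
    # Fuzzify the error into three linguistic terms.
    neg = tri(error, -2.0, -1.0, 0.0)
    zero = tri(error, -1.0, 0.0, 1.0)
    pos = tri(error, 0.0, 1.0, 2.0)
    # Rules: negative error -> lower voltage, positive -> raise it.
    # Defuzzify as a weighted average of singleton output actions.
    weights = [neg, zero, pos]
    actions = [-1.0, 0.0, 1.0]          # volts
    total = sum(weights)
    if total == 0.0:
        return 0.0                      # outside the modeled range
    return sum(w * a for w, a in zip(weights, actions)) / total

step = fuzzy_voltage_step(0.5)          # small flow deficit -> small raise
```

The real controller's additional inputs (error derivative, velocity, acceleration) would each get their own term sets, and the rule base would combine them, but the fuzzify/infer/defuzzify pipeline is the same.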
Abstract:
Increased emissions of greenhouse gases into the atmosphere are causing anthropogenic climate change. The resulting global warming challenges the ability of organisms to adapt to the new temperature conditions. However, warming is not the only major threat. In marine environments, dissolution of carbon dioxide from the atmosphere causes a decrease in surface water pH, the so-called ocean acidification. The temperature and acidification effects can interact and create even larger problems for marine flora and fauna than either effect would cause alone. I have used Baltic calanoid copepods (crustacean zooplankton) as my research object and studied their growth and stress responses using climate predictions projected for the next century. I have studied both direct temperature and pH effects on copepods and indirect effects via their food: the changing phytoplankton spring bloom composition and a toxic cyanobacterium. The main aims of my thesis were: 1) to find out how warming and acidification combined with a toxic cyanobacterium affect copepod reproductive success (egg production, egg viability, egg hatching success, offspring development) and oxidative balance (antioxidant capacity, oxidative damage), and 2) to reveal the possible food quality effects of a spring phytoplankton bloom dominated by diatoms or dinoflagellates on reproducing copepods (egg production, egg hatching, RNA:DNA ratio). The two copepod genera used, Acartia sp. and Eurytemora affinis, are the dominant mesozooplankton taxa (0.2–2 mm) in my study area, the Gulf of Finland. A temperature of 20°C seems to be within the tolerance limits of Acartia spp., because the copepods can adapt to the temperature phenotypically by adjusting their body size. Copepods are also able to tolerate a pH decrease of 0.4 from present values, but the combination of warm water and decreased pH causes problems for them.
In my studies, the copepod oxidative balance was negatively influenced by the interaction of these two environmental factors, and egg and nauplii production were lower at 20°C and lowered pH than at 20°C and ambient pH. However, the presence of the toxic cyanobacterium Nodularia spumigena improved the copepod oxidative balance and helped them resist the environmental stress in question. In addition, adaptive maternal effects seem to be an important adaptation mechanism in a changing environment, but how much a female copepod can invest in her offspring depends on her condition and diet. I did not find a systematic food quality difference between diatoms and dinoflagellates; there are both good and bad diatom and dinoflagellate species. Instead, the dominant species in the phytoplankton bloom has a central role in determining food quality, although copepods aim at obtaining as balanced a diet as possible by foraging on several species. If the dominant species is of poor quality, it can cause stress when ingested or lead to non-optimal foraging if rejected. My thesis demonstrates that climate-change-induced water temperature and pH changes can cause problems for Baltic Sea copepod communities. However, their resilience depends substantially on their diet, and therefore on the response of phytoplankton to the environmental changes. As copepods are an important link in pelagic food webs, their future success can have far-reaching consequences, for example for fish stocks.
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor — indeed, a memory resistor — whose resistance, or memristance to be precise, is changed by applying a voltage across, or current through, the device. Memristive computing is a new area of research, and many of its fundamental questions still remain open. For example, it is yet unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that significantly benefit from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors.
The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
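The single-device behaviour underlying these architectures is often introduced through the HP linear ion-drift model. The following Python sketch drives such a memristor with a sinusoidal voltage and integrates its internal state, producing the characteristic pinched hysteresis; the parameter values are illustrative assumptions, not device data from the thesis.

```python
# HP linear ion-drift memristor model, integrated with explicit Euler.
# Parameter values are illustrative, chosen only to make the state swing
# visible within one drive period.
import math

R_ON, R_OFF = 100.0, 16_000.0      # fully doped / undoped resistance (ohm)
D = 10e-9                          # device thickness (m)
MU = 1e-14                         # dopant mobility (m^2 V^-1 s^-1)

def simulate(steps=2000, v_amp=1.0, freq=1.0):
    """Drive the memristor with one sine period and record (v, i, m)."""
    w = 0.1 * D                    # initial doped-region width
    dt = 1.0 / (freq * steps)
    trace = []
    for k in range(steps):
        v = v_amp * math.sin(2 * math.pi * freq * k * dt)
        m = R_ON * (w / D) + R_OFF * (1 - w / D)   # memristance
        i = v / m
        w += MU * (R_ON / D) * i * dt              # linear drift of w
        w = min(max(w, 0.0), D)                    # hard state bounds
        trace.append((v, i, m))
    return trace

trace = simulate()
# The i-v curve is pinched: i is zero exactly when v is zero.
```

Because the state depends on the integral of the current (charge), the up-sweep and down-sweep follow different resistances, which is the nonlinear memory effect the thesis exploits for stateful logic and programming.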
Abstract:
Presentation at the Kirjastoverkkopäivät (Library Network Days), 23 October 2012, Helsinki
Abstract:
Technological developments in microprocessors and the ICT landscape have brought about a new era in which computing power is embedded in numerous small distributed objects and devices in our everyday lives. These small computing devices are fine-tuned to perform a particular task and are increasingly reaching our society at every level. For example, home appliances such as programmable washing machines and microwave ovens employ several sensors to improve performance and convenience. Similarly, cars have on-board computers that use information from many different sensors to control things such as fuel injectors and spark plugs, performing their tasks efficiently. These individual devices make life easy by helping in making decisions and removing the burden from their users. All these objects and devices obtain some piece of information about the physical environment, yet each device is an island with no proper connectivity or information sharing with the others. Sharing information between these heterogeneous devices could enable a whole new universe of innovative and intelligent applications, but it is a difficult task due to the heterogeneity and interoperability of the devices. The Smart Space vision is to overcome these issues of heterogeneity and interoperability so that devices can understand each other and utilize each other's services through information sharing. This enables innovative local mashup applications based on data shared between heterogeneous devices. Smart homes are one such example of Smart Spaces; they make it possible to bring the health care system to the patient, through intelligent interconnection of resources and their collective behavior, as opposed to bringing the patient into the health system. In addition, the use of mobile handheld devices has risen at a tremendous rate during the last few years, and they have become an essential part of everyday life.
Mobile phones offer a wide range of services to their users, including text and multimedia messages, Internet, audio, video, email applications and, most recently, TV services. Interactive TV provides a variety of applications for viewers. The combination of interactive TV and Smart Spaces could yield innovative applications that are personalized, context-aware, ubiquitous and intelligent, by enabling heterogeneous systems to collaborate with each other through information sharing. There are many challenges in designing the frameworks and application development tools for rapid and easy development of such applications. The research work presented in this thesis addresses these issues. The original publications presented in the second part of this thesis propose architectures and methodologies for interactive and context-aware applications, and tools for the development of these applications. We demonstrated the suitability of our ontology-driven application development tools and rule-based approach for the development of dynamic, context-aware, ubiquitous iTV applications.
Abstract:
As manufacturing technologies advance, ever more transistors can be fitted on ICs. More complex circuits make it possible to perform more calculations per unit of time. As circuit activity increases, so does energy consumption, which in turn increases the heat produced by the circuit. Excessive heat limits circuit operation, so techniques are needed to reduce the energy consumption of circuits. A new research area is small devices that monitor, for example, the functioning of the human body, buildings, or bridges. Such devices must have low energy consumption so that they can operate for long periods without recharging their batteries. Near-Threshold Computing is a technique that aims to reduce the energy consumption of integrated circuits. The principle is to operate circuits at a lower supply voltage than the one the manufacturer originally designed them for. This slows down and impairs circuit operation. However, if lower computing performance and reduced reliability can be accepted in the operation of the device, savings in energy consumption can be achieved. This thesis examines Near-Threshold Computing from several perspectives: first on the basis of previous studies found in the literature, and then by examining the application of Near-Threshold Computing through two case studies. The case studies examine an FO4 inverter and a 6T SRAM cell by means of circuit simulations. The behaviour of these components at near-threshold voltages can be taken to give a comprehensive picture of a large share of the area and energy consumption of a typical IC. The case studies use a 130 nm technology and model real products of a circuit manufacturing process by running numerous Monte Carlo simulations.
This inexpensive technology, combined with Near-Threshold Computing, makes it possible to manufacture low-energy circuits at a reasonable price. The results of this thesis show that Near-Threshold Computing reduces circuit energy consumption significantly. On the other hand, circuit speed decreases, and the commonly used 6T SRAM memory cell becomes unreliable. Longer paths in logic circuits and larger transistors in memory cells are shown to be effective countermeasures against the drawbacks of Near-Threshold Computing. For low-energy IC design, the results provide grounds for deciding whether to use the normal supply voltage or to lower it, in which case the resulting slowdown and less reliable behaviour must be taken into account.
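The core energy/delay trade-off of near-threshold operation can be illustrated with two textbook scaling relations: dynamic switching energy E = C·Vdd² and an alpha-power-law delay model that diverges as Vdd approaches the threshold voltage. The constants in the Python sketch below are illustrative assumptions, not 130 nm process data from the case studies.

```python
# Why near-threshold operation saves energy but costs speed.
# All constants (Vth, alpha, capacitance, delay scale) are illustrative.

def dynamic_energy(c_eff, vdd):
    """Switching energy per operation, E = C * Vdd^2."""
    return c_eff * vdd ** 2

def gate_delay(vdd, vth=0.35, alpha=1.3, k=1e-10):
    """Alpha-power-law delay model, t ~ Vdd / (Vdd - Vth)^alpha."""
    return k * vdd / (vdd - vth) ** alpha

nominal, ntc = 1.2, 0.5            # nominal vs. near-threshold Vdd (V)
energy_ratio = dynamic_energy(1e-15, ntc) / dynamic_energy(1e-15, nominal)
delay_ratio = gate_delay(ntc) / gate_delay(nominal)
# Energy drops quadratically with Vdd while delay grows several-fold,
# which matches the thesis finding: large energy savings, lower speed.
```

The reliability loss seen in the 6T SRAM case study is not captured by these averaged models; it comes from process variation, which is why the thesis relies on Monte Carlo circuit simulation instead.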
Abstract:
Hystricognathi represent a monophyletic taxon within Rodentia. Since phylogenetically analyzed morphological systems are essential for revealing evolutionary processes, this study identifies evolutionary character transformations on the stem lineage of Hystricognathi as derived from the author's own work and the literature. Data so far indicate that evolutionary transformations in the rostral head region, the loss of tactile ability in the outer nasal skin and the mobile arrangement of the associated cartilage, were allied with a switch from omnivorous to herbivorous and fiber-rich nutrition. Additional character transformations in the skull assist in digesting such food. Structures associated with reproduction and placentation show a remarkable proportion of derived character conditions: the chorioallantoic placenta has a ring-shaped organization and growth structure which optimizes the capacity for passive diffusion; a subplacenta occurred as a specialized region responsible for placental invasion; and the inverted yolk sac facilitates substance exchange with the main placenta. Finally, precocial newborns evolved as a derived condition within Rodentia. All things considered, the indicated mode of reproduction does not demand excessive additional energy intake by the mother and is in accordance with her low-energy diet. Hystricognathi possess major character transformations that represent prerequisites for their successful radiation at the time when more open ecosystems and grasslands evolved during Earth history. The analysis resulted in the reconstruction of a lifelike picture of the hystricognath stem species pattern with high explanatory power in terms of changes in space and time and their interdependence with biodiversity.
Abstract:
Although the literature on mergers and acquisitions is extensive, relatively little effort has been made to examine the relationship between acquiring firms' financial slack and short-term post-takeover-announcement abnormal stock returns. In this study, the case is made that the financial slack of a firm is not only an outcome of past business and financing activities but may also affect the quality of acquisition decisions. We hypothesize that the level of financial slack in a firm is negatively associated with the abnormal returns following acquisition announcements, because slack reduces managerial discipline over the use of corporate funds and may also give rise to managerial self-serving behavior. In this study, financial slack is measured in terms of three financial statement ratios: the leverage ratio, the cash and equivalents to total assets ratio, and the free cash flow to total assets ratio. The data used in this paper are collected from two main sources: a list comprising 90 European acquisition announcements is retrieved from the Thomson One Banker database, and the stock price data and financial statement information for the respective firms are collected using Datastream. Our empirical analysis is twofold. First, we conduct a two-sample t-test and find that the most slack-rich firms experience lower abnormal returns than the most slack-poor firms in the event window [-1, +1], significant at the 5% level. Second, we perform a cross-sectional regression for the sample firms using the three financial statement ratios to explain cumulative abnormal returns (CAR). We find that leverage shows a statistically significant positive relationship with cumulative abnormal returns in the event window [-1, +1] (significant at 5%). Moreover, the cash to total assets ratio shows a weak negative relationship with CAR (significant at 10%) in the same window.
We conclude that our hypothesis of an inverse relationship between slack and abnormal returns receives empirical support. Based on the results of the event study, we find support for the hypothesis that capital markets expect acquisitions undertaken by slack-rich firms to be more likely driven by managerial self-serving behavior and hubris than those undertaken by slack-poor firms, signaling possible agency problems and behavioral biases.
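The event-study mechanics described above can be sketched in a few lines: cumulative abnormal returns (CAR) are summed over the [-1, +1] window, and the two groups are compared with a two-sample t-statistic. The Python sketch below uses a Welch (unequal-variance) t-statistic and invented daily abnormal returns, not the thesis data.

```python
# Toy event-study comparison of slack-rich vs. slack-poor acquirers.
# The daily abnormal-return windows below are made-up numbers.
from statistics import mean, variance

def car(abnormal_returns):
    """CAR = sum of daily abnormal returns over the event window."""
    return sum(abnormal_returns)

def welch_t(sample_a, sample_b):
    """Two-sample t-statistic with unequal variances (Welch)."""
    na, nb = len(sample_a), len(sample_b)
    se = (variance(sample_a) / na + variance(sample_b) / nb) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / se

# CARs over [-1, +1] for two hypothetical firms per group.
slack_poor = [car(w) for w in [[0.01, 0.02, 0.00], [0.00, 0.03, -0.01]]]
slack_rich = [car(w) for w in [[-0.01, 0.00, -0.01], [0.00, -0.02, 0.00]]]
t_stat = welch_t(slack_poor, slack_rich)   # positive: slack-poor earn more
```

In the study itself the abnormal returns would first be estimated against a benchmark model (the abstract does not specify which), and the cross-sectional regression step then replaces the group comparison with the three financial statement ratios as regressors.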
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proven to be an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with a special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint and heterogeneous integration. Moreover, 3D technology can significantly improve network communication and effectively avoid long wirings, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach where agents are on-chip components which monitor and control system parameters such as supply voltage and operating frequency. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes.
A Honeycomb NoC architecture is proposed in this thesis with turn-model-based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance and energy efficiency.
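The dynamic voltage and frequency scaling trade-off exploited by the agent-based approach above can be sketched with the standard switched-capacitance power model P = C·Vdd²·f: running a fixed workload slower at a lower voltage takes longer but consumes less energy per task. The voltage/frequency operating points and constants below are illustrative assumptions, not NoC measurements from the thesis.

```python
# DVFS energy-per-task model. All operating points are illustrative.

def dynamic_power(c_eff, vdd, freq):
    """Switched-capacitance dynamic power, P = C * Vdd^2 * f."""
    return c_eff * vdd ** 2 * freq

def energy_per_task(c_eff, vdd, freq, cycles):
    """Energy = power * execution time = C * Vdd^2 * cycles.

    Note that frequency cancels out: for dynamic energy, what DVFS
    buys is the *lower voltage* that the lower frequency permits.
    """
    return dynamic_power(c_eff, vdd, freq) * (cycles / freq)

C_EFF, CYCLES = 1e-9, 1_000_000
fast = energy_per_task(C_EFF, vdd=1.1, freq=1.0e9, cycles=CYCLES)
slow = energy_per_task(C_EFF, vdd=0.8, freq=0.5e9, cycles=CYCLES)
saving = 1 - slow / fast   # fraction of dynamic energy saved per task
```

This model covers only dynamic energy; the leakage component that the thesis attacks with power gating grows with execution time, which is exactly why the granularity of gating matters at the slower operating points.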
Abstract:
Interconnected domains are attracting interest from industry and academia, although this phenomenon, called ‘convergence’, is not new. Organizational research has indeed focused on uncovering co-creation for manufacturing and the industrial organization, with limited implications for entrepreneurship. Although convergence has been characterized as a process connecting seemingly disparate disciplines, it is argued that these studies tend to leave the creative industries unnoticed. With the art market boom and new forms of collaboration riding past the institution-focused arts marketing literature, this thesis takes a leap to uncover the processes of entrepreneurship in the emergence of a cultural product. As a symbolic work of synergism itself, the thesis combines organizational theory with literature from the natural sciences and the arts. Assuming nonlinearity, a framework is created for analysing aesthetic experience in an empirical event where network actors are connected to multiple contexts. As the focal case of the study, the empirical analysis is performed on a music festival organized at a skiing resort in the French Alps in March. The researcher attends the festival and models its co-creation process by enquiring from an artist, festival organisers, and a festival visitor. The findings contribute mainly to the fields of entrepreneurship, aesthetics and marketing. It is found that the network actors engage in intimate and creative interaction where activity patterns are interrupted and cultural elements combined. This process is considered both to create and to destroy value, through identity building, legitimisation, learning, and access to larger audiences, and it is considered particularly useful for domains where resources are too restricted for conventional marketing practices. This thesis uncovered the role of artists as informants and proposes that, particularly through experience design, this type of skilled individual be regarded more often as a research informant.
Future research is encouraged to engage in convergence by experimenting with different fields and research designs, and it is suggested that future studies could arrive at different descriptive results.
Abstract:
The objective of this thesis is to examine the importance of a strong brand and the reasons why companies use content marketing in brand building, as well as to identify the goals, expected consequences, and perceived benefits and results of companies' content marketing investments. The theoretical part presents the academic background needed for the empirical part. The empirical study was conducted through semi-structured one-on-one telephone interviews. The interviewees are five customers of the commissioning company, Calcus Kustannus Oy. The revenue of these companies ranges between EUR 23 and 75 million, and the companies operate mainly in the Finnish B2B market. The interviewees are responsible for marketing decisions in their companies. The results show that a strong brand is extremely important for a company's growth and success, and that companies use content marketing in brand building because it builds brand equity effectively. Through content marketing, companies aim to increase their customers' brand awareness and brand loyalty, as well as the credibility of their brand, by sharing genuinely interesting and value-adding content. In addition, strengthening the company's role as an expert and opinion leader was considered very important. Content marketing is seen as more effective in brand building than traditional marketing, which is why companies will increase its share in their marketing plans in the future.