14 results for Distributed computer-controlled systems
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Market segmentation first emerged in the 1950s and has since been one of the fundamental concepts of marketing. Most segmentation research has, however, focused on segmenting consumer markets, while the segmentation of business and industrial markets has received less attention. The objective of this study is to create a segmentation model for industrial markets from the perspective of a provider of information technology products and services. The aim is to determine whether the case company's current customer databases enable effective segmentation, to identify suitable segmentation criteria, and to assess whether and how the databases should be developed to enable more effective segmentation. The intent is to create a single model shared by the different business units; the objectives of the different units must therefore be taken into account to avoid conflicts of interest. The research methodology is a case study. Both secondary sources and primary sources, such as the case company's own databases and interviews, were used. The starting point of the study was the research problem: can database-driven segmentation be used for profitable customer relationship management in the SME sector? The goal is to create a segmentation model that exploits the data available in the databases without compromising the requirements of effective and profitable segmentation. The theoretical part examines segmentation in general, with an emphasis on industrial market segmentation, aiming to give a clear picture of the different approaches to the subject and to deepen the view of the most important theories. The analysis of the databases revealed clear deficiencies in the customer data. Basic contact information is available, but data usable for segmentation is very limited. The flow of information from resellers and wholesalers should be improved in order to obtain end-customer data. Segmentation based on the current data relies mainly on secondary information such as industry and company size, and even this information is not available for all the companies in the databases.
Abstract:
-
Abstract:
Nowadays, computer-based systems tend to become more complex and control increasingly critical functions affecting different areas of human activity. Failures of such systems might result in loss of human lives as well as significant damage to the environment. Therefore, their safety needs to be ensured. However, the development of safety-critical systems is not a trivial exercise. Hence, to preclude design faults and guarantee the desired behaviour, different industrial standards prescribe the use of rigorous techniques for the development and verification of such systems. The more critical the system is, the more rigorous the approach that should be undertaken. To ensure the safety of a critical computer-based system, satisfaction of the safety requirements imposed on this system should be demonstrated. This task involves a number of activities. In particular, a set of safety requirements is usually derived by conducting various safety analysis techniques. Strong assurance that the system satisfies the safety requirements can be provided by formal methods, i.e., mathematically based techniques. At the same time, the evidence that the system under consideration meets the imposed safety requirements might be demonstrated by constructing safety cases. However, the overall safety assurance process of critical computer-based systems remains insufficiently defined for the following reasons. Firstly, there are semantic differences between safety requirements and formal models: informally represented safety requirements should be translated into the underlying formal language to enable further verification. Secondly, the development of formal models of complex systems can be labour-intensive and time-consuming. Thirdly, there are only a few well-defined methods for the integration of formal verification results into safety cases. This thesis proposes an integrated approach to the rigorous development and verification of safety-critical systems that (1) facilitates elicitation of safety requirements and their incorporation into formal models, (2) simplifies formal modelling and verification by proposing specification and refinement patterns, and (3) assists in the construction of safety cases from the artefacts generated by formal reasoning. Our chosen formal framework is Event-B. It allows us to tackle the complexity of safety-critical systems as well as to structure safety requirements by applying abstraction and stepwise refinement. The Rodin platform, a tool supporting Event-B, assists in automatic model transformations and proof-based verification of the desired system properties. The proposed approach has been validated by several case studies from different application domains.
Abstract:
The main computer science collection is located in the main library (Linna), where the printed general and reference collection comprises about 4,000 monograph titles (including volumes of printed monograph series). Four sub-areas of the computer science collection were surveyed. Of these, the clearest focus area proved to be programming, programming languages & software, which accounted for about 33% of the titles (1,314). The shares of the other groups were smaller: information systems, data management & information security about 18% (727 titles); artificial intelligence, knowledge engineering & pattern recognition about 16% (629 titles). The pruned reference collection of about 100 titles contains numerous dictionaries and reference works, such as a nearly complete (44/45) Encyclopedia of computer science and technology and the three-volume Handbook of information security, which is also available in electronic form. There were 6 printed journal titles (IEEE Pervasive Computing, MikroPC, Social Science Computer Review, which is also available electronically, Tekniikan näköalat, Tietokone and Tietoyhteys). The collection included 466 e-book titles in the Ebrary: Information technology database, 24 titles in the NetLibrary database, 3 titles in the Taylor & Francis eBooks online database and 2 titles as electronic reference works (Encyclopedia of gender and information technology and Encyclopedia of information science and technology), as well as the 4,964-volume Lecture notes in computer science monograph series. There were about 300 electronic journal titles. There were 4 full-text databases (ACM - Association for Computing Machinery, EBSCOhost Academic Search Premier, Elsevier ScienceDirect and SpringerLink) and 2 reference databases (Computer + Info Systems (CSA) and Web of Science (ISI)).
Abstract:
This thesis mainly deals with database management systems based on the relational model. A database management system generally manages the creation, use and modification of a database, and database management systems based on the relational model have been the dominant trend in the database market since the 1970s. The thesis considers four types of database management systems: centralized, distributed, data warehouse and operational. The thesis examines how these database management systems can be compared and which selection criteria affect their choice.
Abstract:
The smart TV market is currently fragmented, as different manufacturers develop their own smart TV platforms, which makes application development very laborious because the development work has to be done separately for each platform. The Smart TV Alliance, founded by LG and Philips, aims to simplify the work of application developers and, at the same time, to attract more developers to the field. The thesis reviews product platforms, open and closed innovation, and alliances. In addition, it covers smart TVs and, of course, the Smart TV Alliance itself. The current market situation is also examined, and the position of individual actors and their possible actions are assessed. The focus of the thesis is on the perspective of the physical device manufacturer and of the developer/maintainer of the operating system. Software-based product platforms play a key role in this work. In particular, the thesis treats smart TV software as a product platform, but a good alternative and, for many, more familiar example is a computer operating system such as Microsoft Windows or one of the many Linux-based operating systems. A central feature of all of these is that the operating system itself serves as a common base on top of which other functionality, such as games and office applications, can be built.
Abstract:
Resilience is the property of a system to remain trustworthy despite changes. Changes of a different nature, whether due to failures of system components or varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes. Moreover, powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error detection capabilities to recognise changes as well as sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. Design, verification and assessment of the system reconfiguration mechanisms is a challenging and error-prone engineering task. In this thesis, we propose and validate a formal framework for the development and assessment of resilient systems. Such a framework provides us with the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Due to the variety of properties to be analysed, such a framework should have an integrated nature. To ensure the system's functional correctness, it should rely on formal modelling and verification, while, to assess the impact of changes on such properties as performance and reliability, it should be combined with quantitative analysis. To ensure scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness. Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving. Event-B has mature industrial-strength tool support, the Rodin platform. Proof-based verification as well as the reliance on abstraction and decomposition adopted in Event-B provides the designers with powerful support for the development of complex systems. Moreover, top-down system development by refinement allows the developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, to achieve resilience we also need to analyse a number of non-functional characteristics, such as reliability and performance. Therefore, in this thesis we also demonstrate how formal development in Event-B can be combined with quantitative analysis. Namely, we experiment with the integration of such techniques as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect the overall system resilience. The approach proposed in this thesis is validated by a number of case studies from such areas as the robotics, space, healthcare and cloud domains.
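The abstract names SimPy as one of the quantitative tools integrated with Event-B. Purely as a rough illustration of the kind of discrete-event analysis involved, and not the thesis's actual models, the following minimal SimPy sketch estimates the availability of a small redundant system whose components fail and are then repaired or reconfigured; the redundancy level, failure rates and repair times are invented for the example.

    import random
    import simpy

    def component(env, mttf, repair_time, state):
        """A component that alternates between working and being repaired/reconfigured."""
        while True:
            yield env.timeout(random.expovariate(1.0 / mttf))  # time to next failure
            state['up'] -= 1
            yield env.timeout(repair_time)                      # reconfiguration delay
            state['up'] += 1

    def monitor(env, state, samples, period=1.0):
        """Periodically sample whether enough components are up to meet the system goal."""
        while True:
            samples.append(state['up'] >= state['required'])
            yield env.timeout(period)

    random.seed(1)
    env = simpy.Environment()
    state = {'up': 3, 'required': 2}   # 3 replicas, 2 needed to stay operational
    samples = []
    for _ in range(3):
        env.process(component(env, mttf=100.0, repair_time=5.0, state=state))
    env.process(monitor(env, state, samples))
    env.run(until=10_000)
    print("estimated availability:", sum(samples) / len(samples))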
Abstract:
The objective of this master's thesis is to investigate the loss behaviour of a three-level ANPC inverter and to compare it with a conventional NPC inverter. Both inverters are controlled with a mature space vector modulation (SVM) strategy. In order to make the comparison, sufficiently accurate and detailed NPC and ANPC simulation models had to be obtained. The same SVM control model is utilized for both the NPC and ANPC inverter models. The principles of the control algorithms and the structure and description of the models are clarified. The power loss calculation model is based on practical calculation approaches with certain assumptions. The comparison between the NPC and ANPC topologies is presented on the basis of the results obtained for each semiconductor device, their switching and conduction losses and the efficiency of the inverters. The alternative switching states of the ANPC topology allow losses to be distributed among the switches more evenly than in the NPC inverter. Naturally, the losses of a switching device depend on its position in the topology. The loss distribution among the components in the ANPC topology reduces the stress on certain switches, so losses are spread more equally among the semiconductors; the overall efficiency of the two inverters, however, is the same. As a new contribution to earlier studies, models of the SVM control and of the NPC and ANPC inverters have been built. Thus, this thesis can be used in further, more elaborate modelling of full-power converters for modern multi-megawatt wind energy conversion systems.
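As a hedged illustration of the kind of per-device loss calculation the abstract refers to, the sketch below applies the common textbook approximations: conduction loss from an on-state threshold voltage and slope resistance, and switching loss scaled linearly from datasheet switching energies. It is not the thesis's exact calculation model, and every device parameter and operating point below is invented for the example.

    def conduction_loss(v0, r_on, i_avg, i_rms):
        """Average conduction loss [W] of a device from its on-state threshold
        voltage v0 [V], slope resistance r_on [Ohm], and the average and RMS
        current it carries [A]."""
        return v0 * i_avg + r_on * i_rms ** 2

    def switching_loss(f_sw, e_on, e_off, i_load, v_dc, i_ref, v_ref):
        """Switching loss [W] scaled linearly from datasheet energies e_on/e_off [J]
        given at a reference current i_ref and reference voltage v_ref."""
        return f_sw * (e_on + e_off) * (i_load / i_ref) * (v_dc / v_ref)

    # Hypothetical device and operating-point values, for illustration only.
    p_cond = conduction_loss(v0=0.9, r_on=2.2e-3, i_avg=120.0, i_rms=180.0)
    p_sw = switching_loss(f_sw=2500.0, e_on=45e-3, e_off=38e-3,
                          i_load=180.0, v_dc=550.0, i_ref=300.0, v_ref=600.0)
    print(f"conduction: {p_cond:.1f} W, switching: {p_sw:.1f} W, total: {p_cond + p_sw:.1f} W")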
Abstract:
This bachelor's thesis, written for Lappeenranta University of Technology and implemented in a medium-sized enterprise (SME), examines a distributed document migration system. The system was created to migrate a large number of electronic documents, along with their metadata, from one document management system to another, so as to enable a rapid switchover of an enterprise resource planning system inside the company. The paper examines, through theoretical analysis, messaging as a possible enabler of distributed applications and how it naturally fits an event-based model, whereby system transitions and states are expressed through recorded behaviours. This is put into practice by analysing the implemented migration system and how the core components, MassTransit, RabbitMQ and MongoDB, were orchestrated together to realize such a system. As a result, the paper presents an architecture for a scalable and distributed system that could migrate hundreds of thousands of documents over a weekend, serving its goal of enabling a rapid system switchover.
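The implemented system is built on MassTransit (a .NET messaging library) over RabbitMQ; purely as an illustrative sketch of the message-driven worker pattern the abstract describes, the following Python consumer pulls hypothetical migration messages from a RabbitMQ queue with the pika client. The queue name, message fields and the migrate_document helper are assumptions for the example, not taken from the thesis.

    import json
    import pika

    def migrate_document(msg):
        """Hypothetical stand-in: fetch the document from the source DMS,
        transform its metadata and store it in the target system."""
        print("migrating", msg["document_id"])

    # Connect to a local RabbitMQ broker and declare a durable work queue.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="document-migration", durable=True)
    channel.basic_qos(prefetch_count=1)  # hand each worker one unacked message at a time

    def handle(ch, method, properties, body):
        msg = json.loads(body)                     # e.g. {"document_id": "...", "source": "..."}
        migrate_document(msg)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="document-migration", on_message_callback=handle)
    channel.start_consuming()

Running several such consumers against the same queue is what makes the scheme scale out: the broker distributes messages across workers, and acknowledgements give the recorded, event-based behaviour the abstract mentions.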
Abstract:
Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of a chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications. With their computational power, these platforms are likely to be used in various application domains: from home-use electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce a faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
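As a loose illustration of the dynamic reconfiguration mechanisms mentioned, and not of the formally derived Event-B or VHDL models of the thesis, the Python sketch below shows an agent that reacts to a detected core fault by migrating that core's tasks to the least-loaded healthy core; all names and the platform structure are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class Core:
        cid: int
        healthy: bool = True
        tasks: list = field(default_factory=list)

    class ReconfigurationAgent:
        """Monitors one core and, on a detected fault, remaps its tasks."""
        def __init__(self, core, platform):
            self.core, self.platform = core, platform

        def on_fault(self):
            self.core.healthy = False
            targets = [c for c in self.platform if c.healthy]  # assumes at least one healthy core remains
            for task in self.core.tasks:
                target = min(targets, key=lambda c: len(c.tasks))
                target.tasks.append(task)
            self.core.tasks.clear()

    platform = [Core(cid=i) for i in range(4)]
    platform[0].tasks = ["fft", "filter"]
    ReconfigurationAgent(platform[0], platform).on_fault()
    print([(c.cid, c.tasks) for c in platform])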
Abstract:
With the new age of the Internet of Things (IoT), everyday objects such as mobile smart devices are starting to be equipped with cheap sensors and low-energy wireless communication capability. Nowadays mobile smart devices (phones, tablets) have become ubiquitous, with everyone having access to at least one device. There is an opportunity to build innovative applications and services by exploiting these devices' untapped rechargeable energy, sensing and processing capabilities. In this thesis, we propose, develop, implement and evaluate LoadIoT, a peer-to-peer load-balancing scheme that can distribute tasks among a plethora of mobile smart devices in the IoT world. We develop and demonstrate an Android-based proof-of-concept load-balancing application. We also present a model of the system which is used to validate the efficiency of the load-balancing approach under varying application scenarios. Load-balancing concepts can be applied to IoT scenarios involving smart devices, reducing the traffic sent to the cloud and the energy consumption of the devices. The data acquired from the experimental outcomes enable us to determine the feasibility and cost-effectiveness of load-balanced P2P smartphone-based applications.
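The proof-of-concept application itself is Android-based; the following Python sketch only illustrates, under invented names and thresholds, a simple peer-selection rule of the kind a P2P load balancer might use: run a task on the least-loaded device whose battery is above a floor, otherwise hand it to the cloud.

    from dataclasses import dataclass

    @dataclass
    class Peer:
        name: str
        battery: float      # fraction of charge remaining, 0..1
        queued_tasks: int

    def choose_executor(local: Peer, peers: list[Peer], battery_floor: float = 0.3):
        """Pick where to run the next task: prefer the least-loaded device whose
        battery is above the floor; return None to signal cloud offloading."""
        candidates = [p for p in [local, *peers] if p.battery >= battery_floor]
        if not candidates:
            return None  # every device is low on charge: offload to the cloud
        return min(candidates, key=lambda p: (p.queued_tasks, -p.battery))

    local = Peer("phone-A", battery=0.25, queued_tasks=4)
    peers = [Peer("tablet-B", battery=0.80, queued_tasks=1),
             Peer("phone-C", battery=0.55, queued_tasks=3)]
    print(choose_executor(local, peers).name)   # -> tablet-B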
Abstract:
The goal of this thesis is to define and validate a software engineering approach for the development of a distributed system for the modeling of composite materials, based on an analysis of various existing software development methods. We reviewed the main features of: (1) software engineering methodologies; (2) distributed system characteristics and their effect on software development; (3) composite materials modeling activities and the requirements for the software development. Using design science as the research methodology, the distributed system for creating models of composite materials was created and evaluated. The empirical experiments we conducted showed good convergence between the modeled and real processes. During the study, we paid attention to the complexity and importance of the distributed system and to a deep understanding of modern software engineering methods and tools.
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still revolve around many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load-balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, these few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values. This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, in modern many-core systems the cores have support for dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis a general-purpose software-based auto-calibration approach is also proposed for calibrating thermal sensors over a range of voltages.
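As an illustration of dynamic, pull-based load balancing in general, and not of the Single-Chip Cloud Computer implementation described in the thesis, the Python sketch below hands fault-simulation jobs to workers on demand so that faster workers keep pulling work instead of idling behind a statically assigned share; the workload function is a placeholder.

    import multiprocessing as mp

    def simulate_fault(fault_id):
        """Hypothetical stand-in for simulating one injected fault; the real
        cost varies per fault, which is what motivates dynamic balancing."""
        return fault_id, sum(i * i for i in range(1000 + (fault_id % 7) * 5000))

    if __name__ == "__main__":
        faults = range(10_000)      # fault list, partitioned at runtime rather than up front
        results = {}
        # imap_unordered dispenses small chunks on demand, a simple form of
        # dynamic (self-scheduling) load balancing across the worker pool.
        with mp.Pool(processes=8) as pool:
            for fid, value in pool.imap_unordered(simulate_fault, faults, chunksize=16):
                results[fid] = value
        print(len(results), "faults simulated")

The design point the sketch makes is only that the fault list is divided at runtime; the thesis's approach additionally has to cope with congested on-chip networks and cores running at different speeds.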
Abstract:
In this bachelor's thesis a relay card for capacitance measurements was designed, built and tested. The study was made for the research and development laboratory of VTI Technologies, which manufactures capacitive silicon micro-electro-mechanical accelerometers and pressure sensors. As the size of the sensors decreases, the capacitance value of the sensors also decreases. The decreased capacitance creates a need for new and more accurate measurement systems. The technology used in the instrument measuring the capacitance dictates a framework for how the relay card should be designed; thus the operating principle of the instrument must be known. To achieve accurate results, the measurement instrument and its functions needed to be used correctly. The relay card was designed using printed circuit board design methods that minimize interference coupling to the measurement. The relay card designed in this study is modular. It consists of a separate CPU card, which was used to control the add-on cards connected to it. The CPU card was controlled from a computer through a serial bus. Two add-on cards for the CPU card were designed in this study. The first one was the measurement card, which could be used to measure 32 capacitive sensors. The second add-on card was the MUX card, which could be used to switch between two measurement cards. The capacitance measurements carried out through the MUX card and the measurement cards were characterized with a series of test measurements, and the measurement data was then analysed. The relay card design was confirmed to work and to offer accurate measurement results up to a measurement frequency of 10 MHz. The length of the measurement cables limited the measurement frequency.
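The abstract does not specify the serial command set of the CPU card; as a purely hypothetical sketch of controlling such a card from a computer over a serial bus, the following Python snippet uses pyserial to select each of the 32 channels of a measurement card in turn. The port name, baud rate and ASCII command syntax are invented for the example.

    import serial  # pyserial

    # Open the serial port the CPU card is attached to; all settings below are
    # hypothetical, for illustration only.
    with serial.Serial(port="COM3", baudrate=9600, timeout=1.0) as bus:

        def select_channel(card: int, channel: int) -> str:
            """Ask the CPU card to close the relay for one sensor channel and
            return the card's acknowledgement line."""
            bus.write(f"SEL {card} {channel}\r\n".encode("ascii"))
            return bus.readline().decode("ascii").strip()

        # Step through the 32 channels of measurement card 1 before triggering
        # the capacitance reading on the measurement instrument.
        for ch in range(32):
            ack = select_channel(card=1, channel=ch)
            print(f"channel {ch:2d}: {ack}")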