23 results for Web sites-design
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The purpose of this study was to examine, using content and discourse analysis, how companies communicate customer references on their web sites. The study focused on the themes and discourses of the companies' reference descriptions, and on how the reference relationship is discursively constructed in them. Three Finnish ICT companies were selected for the study: Nokia, TietoEnator and F-Secure. The data consists of 140 reference descriptions collected from the companies' web sites. The content analysis showed that reference descriptions concentrate on describing individual product or project deliveries to reference customers in the light of those customer relationships. The analysis identified three discourses: a discourse of benefits, a discourse of commitment, and a discourse of technological expertise. The discourses reveal the rhetorical means of the reference descriptions and construct the reference relationship and the supplier's subject position from different perspectives. The main emphasis in the reference descriptions is on the benefits brought by the supplier's solution. The discourses portray the reference relationship as a beneficial and close customer relationship that provides access to external capabilities and technologies. Depending on the discourse, the supplier is presented as a provider of benefits, a reliable partner, and an experienced expert. The reference customer, by contrast, is presented from only one perspective, stereotypically as an important and satisfied customer.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users who rely on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters that a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of English-language deep web sites. One can therefore expect that their findings may be biased, especially given the steady growth of non-English web content. Surveying national segments of the deep Web is thus of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most such approaches assume that search interfaces to the web databases of interest have already been discovered and are known to the query systems. These assumptions do not hold in practice, mainly because of the sheer scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep web characterization studies and for constructing directories of deep web resources. Unlike almost all other existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are themselves web forms. At present, a user needs to provide input values to search interfaces manually and then extract the required data from the result pages. Filling out forms manually is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
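To give a flavour of the kind of automation discussed above, the following is a minimal TypeScript sketch of issuing a form query against a web database and extracting data from the dynamic result pages. The form URL, field names and result markup are hypothetical assumptions; a real system would first derive the field labels and a result-page model, as the thesis describes.

```typescript
// Minimal sketch: querying a web database through its search interface.
// The form URL, field names and result markup below are hypothetical.

interface FormQuery {
  action: string;                  // the form's submit URL
  fields: Record<string, string>;  // field name -> user-supplied value
}

async function queryWebDatabase(q: FormQuery): Promise<string[]> {
  // Submit the form the way a browser would (GET with URL-encoded values).
  const url = `${q.action}?${new URLSearchParams(q.fields)}`;
  const html = await (await fetch(url)).text();

  // Extract result snippets from the dynamic result page. A real system
  // would use an HTML parser and a learned result-page representation
  // rather than this illustrative regular expression.
  const rows = html.match(/<li class="result">(.*?)<\/li>/gs) ?? [];
  return rows.map(r => r.replace(/<[^>]+>/g, '').trim());
}

// Usage: fill two hypothetical search fields and print the extracted rows.
queryWebDatabase({
  action: 'https://example.org/search',
  fields: { title: 'deep web', year: '2008' },
}).then(rows => console.log(rows));
```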
Abstract:
The objective is to study content customization on the Internet. The amount of content that companies offer on their web sites has grown explosively. Customization allows customers to receive exactly the content they want and need. Customization requires customer profiling, and collecting customer data raises concerns about loss of privacy. The research is carried out as a case study of five companies that act as content providers. It is based on existing material and on participant observation of the target companies. Four basic approaches to content customization can be identified. Profiling is implemented mainly either from information the customer provides or by observing the customer's behaviour on the web site. In the future, clear rules will be needed for collecting and using customer data. Customers want customized content, but content providers must earn their trust regarding privacy protection. The importance of trust grows further as customization is developed further.
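To make the two profiling approaches concrete, here is a small hypothetical TypeScript sketch: a profile is built either from data the customer supplies explicitly or from observed browsing behaviour, and content is then ranked against that profile. The types and the scoring rule are illustrative assumptions, not drawn from the case companies.

```typescript
// Illustrative sketch of the two basic profiling approaches:
// explicit (customer-supplied) data and implicit (observed) behaviour.

interface Profile {
  interests: Set<string>;        // declared by the customer (explicit)
  visited: Map<string, number>;  // page topic -> visit count (implicit)
}

interface ContentItem { id: string; topic: string; }

// Record a page view: implicit profiling by observing behaviour.
function observeVisit(profile: Profile, topic: string): void {
  profile.visited.set(topic, (profile.visited.get(topic) ?? 0) + 1);
}

// Rank content: declared interests first, then frequently visited topics.
function customize(profile: Profile, items: ContentItem[]): ContentItem[] {
  const score = (item: ContentItem) =>
    (profile.interests.has(item.topic) ? 100 : 0) +
    (profile.visited.get(item.topic) ?? 0);
  return [...items].sort((a, b) => score(b) - score(a));
}

// Usage with a toy profile and content list.
const profile: Profile = { interests: new Set(['news']), visited: new Map() };
observeVisit(profile, 'sports');
console.log(customize(profile, [
  { id: 'a', topic: 'sports' },
  { id: 'b', topic: 'news' },
]));
```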
Abstract:
Careful design is an essential part of implementing web sites. In user interface design for web sites, sketches and various prototypes are used as tools; with these, the site plan is clarified in cooperation with the customer and the future users. In this Master's thesis, a component library for web site design is implemented for a company called Uoma Oy. The components to be included in the library are identified by analysing completed projects. The thesis also establishes quality criteria for the components and implements the components of the library. The quality and efficiency of the library are evaluated by implementing a sample site. The work shows that using the library yields both quality benefits and improved efficiency compared with the company's previous way of working. The library can be used flexibly for the needs of different design phases.
Abstract:
Browsing the web has become one of the most important features of high-end mobile phones, and in the future more and more people will use their mobile phones for web browsing. Large touchscreens improve the browsing experience, but many web sites are designed to be used with a mouse. A touchscreen differs substantially from a mouse as a pointing device, and therefore mouse emulation logic is required in browsers to make more web sites usable. This Master's thesis lists the most significant cases where the differences between a mouse and a touchscreen affect web browsing. Five touchscreen mobile phones and their web browsers were evaluated to find out whether and how these cases are handled. As part of this thesis, a simple QtWebKit-based mobile web browser with an advanced mouse emulation model was also implemented, aiming to solve all the problematic cases. The conclusion of this work is that it is feasible to emulate a mouse with a touchscreen and thus deliver a good user experience in mobile web browsing. However, current high-end touchscreen mobile phones have relatively underdeveloped mouse emulation in their web browsers, and there is much to improve.
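One recurring problem case is that mouse-oriented web sites depend on hover (mouseover) events, which a touchscreen cannot produce directly. The TypeScript sketch below illustrates one common emulation strategy: the first tap on an element is translated into a synthetic mouseover, and only a second tap on the same element delivers the click. This is a hypothetical illustration of the general technique, not the QtWebKit emulation model implemented in the thesis (browser engines implement such logic natively rather than in page script).

```typescript
// Sketch of first-tap-hovers, second-tap-clicks mouse emulation for
// hover-dependent pages. Illustrative only.

let hoveredElement: Element | null = null;

function dispatchMouse(type: string, target: Element, touch: Touch): void {
  target.dispatchEvent(new MouseEvent(type, {
    bubbles: true,
    clientX: touch.clientX,
    clientY: touch.clientY,
  }));
}

document.addEventListener('touchend', (event: TouchEvent) => {
  const touch = event.changedTouches[0];
  const target = document.elementFromPoint(touch.clientX, touch.clientY);
  if (!target) return;

  if (target !== hoveredElement) {
    // First tap: emulate hover so that :hover menus and mouseover
    // handlers fire, and suppress the default click.
    if (hoveredElement) dispatchMouse('mouseout', hoveredElement, touch);
    dispatchMouse('mouseover', target, touch);
    hoveredElement = target;
    event.preventDefault();
  } else {
    // Second tap on the same element: deliver the click sequence.
    dispatchMouse('mousedown', target, touch);
    dispatchMouse('mouseup', target, touch);
    dispatchMouse('click', target, touch);
  }
});
```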
Abstract:
The aim of this study was to determine whether an international student's cultural background affects the formation of the student's expected and experienced university image. To study the effects of culture on university image, the study identified the factors that essentially contribute to the formation of university image. The role of culture in the formation of organizational image has not been examined in previous academic research, so this study can be seen as advancing current image research. The target university was Lappeenranta University of Technology (LUT). The empirical part was carried out as a quantitative Internet-based survey analysed with statistical methods. The sample (N=179) consisted of all international students studying at Lappeenranta University of Technology in the academic year 2005-2006, and 68.7% of the students responded to the survey. The conclusion is that culture has no significant effect on the formation of university image. The study found that the quality of the university's web pages has a positive effect on the formation of the expected university image, whereas the experienced university image is positively affected by the expected university image, pedagogical quality and the teaching environment. From a marketing perspective, the results can be summarized by stating that universities would not need to tailor the image factors identified in the study for students from different cultures.
Abstract:
The objective of this thesis was to evaluate the electronic procurement of Oy International Business Machines Ab through benchmarking. The study examined how IBM's electronic procurement is implemented compared with the electronic procurement functions of three other companies. A further aim was to find new opportunities for developing IBM's electronic procurement. The theoretical part first discusses procurement and its role in the supply chain, and then examines the use of the Internet in procurement. It also presents IBM's electronic procurement and benchmarking as a tool for evaluating operations. In the applied part, a benchmarking process was constructed and used to carry out the benchmarking study. The research material was collected through benchmarking visits and literature sources. The results of the benchmarking study showed that all the companies have experience of using electronic methods in their procurement and have found them useful. However, the companies' procurement was still based on both traditional practices and electronic methods, although developing procurement to be fully electronic was an important goal for the companies in the coming years. To develop IBM's procurement, the use of several different technologies in implementing electronic procurement is proposed. Using suppliers' web sites for purchasing certain products is also suggested as a development opportunity.
Abstract:
This Master's thesis was written at the Department of Industrial Engineering and Management of Lappeenranta University of Technology. The work is part of the project "Cost- and time-efficient identification of customer needs in international markets" (Kannasta). The purpose of the thesis was to study the use of the Internet in customer need assessment. The goal was to map the sources of customer need information available to product development via the Internet, and to present a procedure for collecting customer need information on the Internet together with tools for the data collection. The Internet can be used to collect customer need information from a company's business environment. The data collection can be done as desk research or as a quantitative-qualitative study. In Internet-based desk research, information about customers, their needs and competitors can be collected from World Wide Web pages or databanks. In a quantitative-qualitative study, information about customers' needs and requirements is collected with Internet tools from the company's customers and other stakeholders. The thesis presents three different Internet tools for customer need assessment. The Internet improves communication between a company and its customers and makes the collection of customer need information more efficient.
Abstract:
Today's commercial web sites are under heavy user load and are expected to be operational and available at all times. Distributed system architectures have been developed to provide a scalable and failure-tolerant high-availability platform for such web-based services. The focus of this thesis was to specify and implement a resilient and scalable, locally distributed high-availability system architecture for a web-based service. The theory part concentrates on the fundamental characteristics of distributed systems and presents common scalable high-availability server architectures used in web-based services. The practical part explains the new system architecture that was implemented and includes two test cases used to measure the system's performance capacity.
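A core building block of such locally distributed architectures is a load balancer that spreads requests over replicated servers and removes failed replicas from rotation. The TypeScript sketch below shows this idea as round-robin dispatch with periodic health checks; the server addresses and the /health endpoint are assumptions for illustration, not the architecture implemented in the thesis.

```typescript
// Minimal sketch of a round-robin load balancer with failover, the
// basic mechanism behind scalable high-availability web architectures.
// Server addresses and the /health endpoint are hypothetical.

const servers = ['http://10.0.0.1:8080', 'http://10.0.0.2:8080'];
const healthy = new Set(servers);
let next = 0;

// Periodic health check: drop servers that stop responding and
// re-admit them when they recover.
setInterval(async () => {
  for (const server of servers) {
    try {
      const res = await fetch(`${server}/health`);
      if (res.ok) healthy.add(server); else healthy.delete(server);
    } catch {
      healthy.delete(server);
    }
  }
}, 5000);

// Pick the next healthy server in round-robin order.
function pickServer(): string {
  const pool = servers.filter(s => healthy.has(s));
  if (pool.length === 0) throw new Error('no healthy servers');
  next = (next + 1) % pool.length;
  return pool[next];
}

// A request is forwarded to whichever replica is up; the service
// stays available as long as at least one server is healthy.
async function handleRequest(path: string): Promise<Response> {
  return fetch(pickServer() + path);
}
```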
Abstract:
The objective of this thesis is to provide a business model framework that connects customer value to firm resources and explains the change logic of the business model. With strategic supply management, and especially dynamic value network management, as its scope, the dissertation builds on basic economic theories, transaction cost economics and the resource-based view. The main research question is how changing customer values should be taken into account when planning business in a networked environment. The main question is divided into sub-questions that form the basic research problems of the separate case studies presented in the five publications. The research adopts the case study strategy, and the constructive research approach within it. The material consists of data from several Delphi panels and expert workshops, software pilot documents, company financial statements and investor relations information on the companies' web sites. The cases used in this study are a mobile multi-player game value network, smart phone and "Skype mobile" services, the business models of AOL, eBay, Google, Amazon and a telecom operator, a virtual city portal business system and a multi-play offering. The main contribution of this dissertation is bridging the gap between firm resources and customer value. This has been done by theorizing the business model concept and connecting it to both the resource-based view and customer value. The thesis contributes to the resource-based view, which deals with customer value and the firm resources needed to deliver that value but leaves a gap in explaining how changes in customer value should be connected to changes in key resources. The dissertation also provides tools and processes for analyzing the customer value preferences of ICT services, for constructing and analyzing business models and business concept innovation, and for conducting resource analysis.
Abstract:
This study focuses on the phenomenon of customer reference marketing in a business-to-business (B2B) context. Although customer references are generally considered an important marketing and sales tool, the academic literature has paid surprisingly little attention to the phenomenon. The study suggests that customer references can be viewed as important marketing assets for industrial suppliers, and that the ability to build, manage and leverage customer reference portfolios systematically constitutes a relevant marketing capability. The role of customer references is examined in the context of industrial suppliers' shift towards a solution and project orientation and in the light of the ongoing changes in the project business. Suppliers in several industry sectors are moving from traditional equipment manufacturing towards project- and solution-oriented business. It is argued in this thesis that the high complexity, the project-oriented nature and the intangible service elements that characterise many contemporary B2B offerings further increase the role of customer references. The study proposes three mechanisms of customer reference marketing: status transfer, validation through testimonials, and demonstration of experience and prior performance. The study was conducted in the context of Finnish B2B process technology and information technology companies. The empirical data comprise 38 interviews with managers of four case companies, 165 customer reference descriptions gathered from six case companies' web sites, and company-internal material. The findings from the case studies show that customer references have various external and internal functions that contribute to the growth and performance of B2B firms. Externally, customer references bring status transfer effects from reputable customers, concretise and demonstrate complex solutions, and provide indirect evidence of experience, previous performance, technological functionality and delivered customer value. They can also be leveraged internally to facilitate organisational learning and training, advance offering development, and motivate personnel. Major reference projects create new business opportunities and can be used as a vehicle for strategic change. The findings shed light on the ongoing changes in orientation in the project business environment, increase understanding of the variety of ways in which customer references can be deployed as marketing assets, and provide a framework of the relevant tasks and activities related to building, managing and leveraging a firm's customer reference portfolio. The findings contribute to industrial marketing research, to the literature on marketing assets and capabilities, and to the literature on projects and solutions. The proposed functions and mechanisms of customer reference marketing provide a more thorough and structured understanding of the essence and characteristics of the phenomenon and give a wide-ranging view of the role of customer references as marketing assets for B2B firms. The study suggests several managerial implications to help industrial suppliers systematise their customer reference marketing efforts.
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over a network using different Internet protocols. Web services are increasingly used in industry to automate tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke the CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource be independent of previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services in which a certain sequence of requests must be followed to fulfill the service goals. Designing and developing such services under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology for designing behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature, continuously evolving tools. We use the UML class diagram and the UML state machine diagram, with additional design constraints, to provide the resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The design models also contain information about the time and domain requirements of the service, which supports requirement traceability, an important part of our approach. Requirement traceability helps capture faults in the design models and in other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is needed for authenticating service requests by authorized actors, since not all types of users have access to all resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is consistency analysis of the behavioral REST interfaces. To overcome inconsistency problems and design errors in our service models, we use semantic technologies. The REST interfaces are represented in the Web Ontology Language, OWL 2, and can thus be part of the semantic web. These interfaces are checked with OWL 2 reasoners for unsatisfiable concepts, which would result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners. The third contribution of this thesis is the verification and validation of REST web services. We use model checking techniques with the UPPAAL model checker for this purpose. Timed automata are generated from the UML-based service design models with our transformation tool and verified for basic properties such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach: test cases are generated from the UPPAAL timed automata, and using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace unfulfilled service goals back to faults in the design models. A final contribution of the thesis is an implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The preconditions of the methods constrain the user to invoke the stateful REST service under the right conditions, and the postconditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
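To give a flavour of what such skeletons with pre- and post-conditions might look like, here is a hand-written hypothetical TypeScript sketch of a stateful booking resource: the precondition guards the state transition allowed by the behavioral interface, and the postcondition checks that the implemented body performs it. The resource, states and route are illustrative; the actual tool derives such skeletons from the UML models.

```typescript
// Hand-written sketch mirroring the idea of generated skeletons: a
// stateful REST resource whose method carries a pre- and a postcondition
// derived from a behavioral (state machine) interface. Names are illustrative.

type BookingState = 'created' | 'confirmed' | 'cancelled';

class BookingResource {
  private state: BookingState = 'created';

  // PUT /bookings/{id}/confirmation
  confirm(): void {
    // Precondition: the state machine only allows confirming a freshly
    // created booking; a violation would map to an HTTP 409 in the
    // running service.
    if (this.state !== 'created') {
      throw new Error(`precondition failed: cannot confirm in state ${this.state}`);
    }

    this.state = 'confirmed'; // developer-implemented method body

    // Postcondition: constrains the developer to implement the
    // transition that the behavioral interface promises.
    if (this.state !== 'confirmed') {
      throw new Error('postcondition failed: state must be confirmed');
    }
  }
}
```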
Abstract:
Technological innovations, the development of the internet, and globalization have increased the number and complexity of web applications. As a result, keeping web user interfaces understandable and usable (in terms of ease of use, effectiveness, and satisfaction) is a challenge. Part of this challenge is designing user-intuitive interface signs, i.e., the small elements of a web user interface such as navigational links, command buttons, icons, small images and thumbnails. Interface signs are key elements of web user interfaces because they act as communication artefacts that convey web content and system functionality, and because users interact with systems by means of interface signs. In this light, applying concepts from semiotics (the study of signs) to web interface signs can uncover new and important perspectives on web user interface design and evaluation. The thesis focuses mainly on web interface signs and uses semiotic theory as its background theory. The underlying aim of this thesis is to provide valuable insights for designing and evaluating web user interfaces from a semiotic perspective in order to improve overall web usability. The fundamental research question is formulated as: What do practitioners and researchers need to be aware of, from a semiotic perspective, when designing or evaluating web user interfaces to improve web usability? Methodologically, the thesis follows a design science research (DSR) approach. A systematic literature review and six empirical studies were carried out, the empirical studies with a total of 74 participants in Finland. The studies were designed and conducted following the steps of the design science research process: (a) problem identification and motivation, (b) definition of the objectives of a solution, (c) design and development, (d) demonstration, (e) evaluation, and (f) communication. The data were collected through observations in a usability testing lab, analytical (expert) inspection, questionnaires, and structured and semi-structured interviews. User behaviour analysis, qualitative analysis and statistics were used to analyze the study data. The results are summarized as follows and have led to the following contributions. Firstly, the results present the current status of semiotic research in UI design and evaluation and highlight the importance of considering semiotic concepts in UI design and evaluation. Secondly, the thesis explores interface sign ontologies (i.e., the sets of concepts and skills that a user should know to interpret the meaning of interface signs) by providing a set of ontologies used to interpret the meaning of interface signs and a set of features related to ontology mapping in interpreting the meaning of interface signs. Thirdly, the thesis explores the value of integrating semiotic concepts in usability testing. Fourthly, the thesis proposes a semiotic framework (Semiotic Interface sign Design and Evaluation, SIDE) for interface sign design and evaluation that aims to make signs intuitive for end users and to improve web usability. The SIDE framework includes a set of determinants and attributes of user-intuitive interface signs and a set of semiotic heuristics for designing and evaluating interface signs. Finally, the thesis assesses (a) the quality of the SIDE framework in terms of performance metrics (e.g., thoroughness, validity, effectiveness, and reliability) and (b) the contributions of the SIDE framework from the evaluators' perspective.
Abstract:
In the modern high-tech industry, Advanced Planning and Scheduling (APS) systems provide the basis for e-business solutions towards suppliers and customers. One objective of this thesis was to clarify modern supply chain management with APS systems, concentrating especially on the area of Collaborative Planning. For Advanced Planning and Scheduling systems to be complete and usable, user interfaces are needed. The current Visual Basic user interfaces have drawn many complaints from the users as well as from the development team. This thesis analyzes the reasons and causes of the encountered problems and provides ways to overcome them. The decision was made to build the new user interfaces to be Web-enabled. Another objective of this thesis was therefore to research and find suitable technologies for building Web-based user interfaces for Advanced Planning and Scheduling systems in the Nokia Demand/Supply Planning business area. A comparison of the most suitable technologies is made. Usability issues of Web-enabled user interfaces are also covered. The empirical part of the thesis includes the design and implementation of a Web-based user interface with the chosen technology for a particular APS module that enables Collaborative Planning with suppliers.
Abstract:
Industrial applications increasingly require real-time data processing. Reliability is one of the most important properties of a system capable of real-time data processing, and achieving it requires testing both the hardware and the software. The main focus of this thesis is hardware testing and hardware testability, because a reliable hardware platform is the foundation for future real-time systems. The thesis presents the design of a processor board suitable for digital signal processing, intended for predictive condition monitoring of electrical machines. The latest DFT (Design for Testability) methods are introduced and applied in the design of the processor board together with older methods. Experiences and observations on the applicability of the methods are reported at the end of the thesis. The goal of the work is to develop a component for a web-based condition monitoring system that has been developed at the Department of Electrical Engineering at Lappeenranta University of Technology.