86 results for Information interfaces and presentation: Miscellaneous.

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Publisher:

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

100.00%

Publisher:

Abstract:

In this study we used market settlement prices of European call options on stock index futures to extract the implied probability density function (PDF). The method produces a PDF of the returns of the underlying asset at the expiration date from the implied volatility smile. With this method, the lognormal distribution assumption of the Black-Scholes model is tested. The market's view of the asset price dynamics can then be used for various purposes (hedging, speculation). We used the so-called smoothing approach for implied PDF extraction presented by Shimko (1993). In our analysis we obtained implied volatility smiles from index futures markets (the S&P 500 and DAX indices) and standardized them. The method introduced by Breeden and Litzenberger (1978) was then used for PDF extraction. The results show significant deviations from the assumption of lognormal returns for S&P 500 options, while DAX options mostly fit the lognormal distribution. A subjective view of the PDF that deviates from the market's can be used to form a trading strategy, as discussed in the last section.
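
The Breeden and Litzenberger result used above states that the risk-neutral density is the discounted second derivative of the call price with respect to the strike, f(K) = exp(rT) * d2C/dK2. The following is a minimal sketch of that step only, not the study's actual code, and it assumes a smoothed call-price curve on an evenly spaced strike grid has already been built from the fitted volatility smile:

    import numpy as np

    def implied_pdf(strikes, call_prices, r, T):
        """Breeden-Litzenberger (1978): f(K) = exp(r*T) * d2C/dK2.

        strikes     -- evenly spaced strike grid (illustrative input)
        call_prices -- smoothed call prices on that grid, e.g. from a
                       Shimko-style fitted volatility smile
        r, T        -- risk-free rate and time to expiration
        """
        h = strikes[1] - strikes[0]                          # grid spacing
        second_deriv = np.gradient(np.gradient(call_prices, h), h)
        return np.exp(r * T) * second_deriv

    # Illustrative usage (numbers are made up):
    # K = np.linspace(1500.0, 2500.0, 201)
    # pdf = implied_pdf(K, smoothed_call_prices_on_grid(K), r=0.02, T=0.25)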

Relevance:

100.00%

Publisher:

Abstract:

This Master's thesis explores how a global industrial corporation's after-sales service department should arrange its installed base management practices in order to maintain and utilize installed base information effectively. The case company has product-related records, such as product lifecycle information, service history and information about product performance. The information is often collected and organized case by case; therefore, systematic and effective use of installed base information is difficult and an overview of the installed base is missing. The goal of the thesis was to find out how the case company can improve its installed base maintenance and management practices and improve the availability and reliability of installed base information. Installed base information management practices were first examined through the literature. The empirical research was conducted through interviews and a questionnaire survey targeted at the case company's service department. The purpose of the research was to find out the challenges related to the service department's information management practices. The study also identified the installed base information needs and the improvement potential in the availability of information. Based on the empirical research findings, recommendations for improving installed base management practices and information availability were created. On the grounds of these recommendations, the following actions are proposed for the case company: developing the service reports, improving the change management process, ensuring the quality of product documentation in the early stages of the product life cycle, and deciding to improve installed base management practices.

Relevance:

100.00%

Publisher:

Abstract:

Supply chains are becoming increasingly dependent on information exchange in today's world, and any disruption can cause severe repercussions to the flow of materials in the chain. The speed, accuracy and amount of information are key factors. The aim in this thesis is to address a gap in the research by focusing on information exchange and the risks related to it in a multimodal wood supply chain operating between the Baltic States and Finland. The study involved interviewing people engaged in logistics management in the supply chain in question. The main risk the interviewees identified arose from the sea logistics system, which held a lot of different kinds of information. The threat of a breakdown in the Internet connection was also found to hinder operations significantly. A vulnerability analysis was carried out in order to identify the main actors and channels of information flow in the supply chain. The analysis revealed that the most important and therefore most vulnerable information-exchange channels were those linking the terminal superintendent, the operative managers and the mill managers. The study gives a holistic picture of the investigated supply chain. Information-exchange-related risks varied greatly. One of the most frequently mentioned was the risk of information inaccuracy, which was usually due to the fact that those in charge of the various functions did not fully understand the consequences for the entire chain.

Relevance:

100.00%

Publisher:

Abstract:

A fast-changing environment puts pressure on firms to share large amounts of information with their customers and suppliers. Information integration and information sharing are essential for facilitating a smooth flow of information throughout the supply chain, and the two terms are used interchangeably in the research literature. By integrating and sharing information, firms aim to improve their logistics performance. Firms share information with their suppliers and customers by using traditional communication methods (telephone, fax, e-mail, written and face-to-face contacts) and by using advanced or modern communication methods such as electronic data interchange (EDI), enterprise resource planning (ERP), web-based procurement systems, electronic trading systems and web portals. Adopting new ways of using IT is one important resource for staying competitive in a rapidly changing market (Saeed et al. 2005, 387), and an information system that provides people with the information they need for performing their work will support company performance (Boddy et al. 2005, 26). The purpose of this research has been to test and understand the relationship between information integration with key suppliers and/or customers and a firm's logistics performance, especially when information technology (IT) and information systems (IS) are used for integrating information. Quantitative and qualitative research methods have been used to perform the research. Special attention has been paid to the scope, level and direction of information integration (Van Donk & van der Vaart 2005a). In addition, the four elements of integration (Jahre & Fabbe-Costes 2008) are closely tied to the frame of reference. The elements are integration of flows, integration of processes and activities, integration of information technologies and systems, and integration of actors. The study found that information integration has a low positive relationship to operational performance and a medium positive relationship to strategic performance. The potential performance improvements found in this study vary from efficiency, delivery and quality improvements (operational) to profit, profitability or customer satisfaction improvements (strategic). The results indicate that although information integration has an impact on a firm's logistics performance, not all of the potential performance improvements have been achieved. This study also found that the use of IT and IS has a moderate positive relationship to information integration. Almost all case companies agreed that the use of IT and IS could facilitate information integration and improve their logistics performance. The case companies felt that an implementation of a web portal or a data bank would benefit them by enhancing their performance and increasing information integration.

Relevance:

100.00%

Publisher:

Abstract:

With the growth of new technologies, using online tools has become part of everyday life. This has had a great impact on researchers, as the data obtained from various experiments needs to be analyzed and knowledge of programming has become mandatory even for pure biologists. Hence, VTT developed a new tool, R Executables (REX), a web application designed to provide a graphical interface for biological data functions such as image analysis, gene expression data analysis, plotting, and disease and control studies, which employs R functions to produce the results. REX provides an interactive application in which biologists can directly enter values and run the required analysis with a single click. The program processes the given data in the background and returns results rapidly. Due to the growth of data and the load on the server, the interface developed problems concerning processing time, a poor GUI, data storage, security, limited interactivity and crashes with large amounts of data. This thesis describes the methods by which these problems were resolved to make REX a better application for the future. The old REX was developed using Python Django; it has now been reimplemented with Vaadin, a Java framework for developing web applications in which the code is written in Java using a rich set of new components. Vaadin provides better security, better speed and a good, interactive interface. In this thesis, a subset of REX's functionality, comprising IST bulk plotting and image segmentation, was selected and implemented using Vaadin. I programmed 662 lines of code, with Vaadin as the front-end handler while the R language was used for back-end data retrieval, computation and plotting. The application is optimized so that further functionality can be migrated with ease from the old REX. Future development is focused on including high-throughput screening functions along with gene expression database handling.
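
As a language-neutral sketch of the hand-off described above between the web front end and R (the actual implementation uses Vaadin and Java; the script name and file layout here are hypothetical), the back end essentially serializes the user's form input, invokes an R script for the computation or plot, and returns the result:

    import json
    import subprocess
    import tempfile

    def run_r_analysis(form_values, r_script="bulk_plot.R"):
        """Pass user-entered values to R and return the path of the plot it writes."""
        with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
            json.dump(form_values, f)          # serialize the form input for R
            input_path = f.name
        output_path = input_path + ".png"
        # Rscript reads the JSON, runs the plotting routine and writes the PNG.
        subprocess.run(["Rscript", r_script, input_path, output_path], check=True)
        return output_path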

Relevance:

100.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information from the non-indexable part of the Web. Specifically, dynamic pages generated based on parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web existing so far are predominantly based on the study of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold true, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web existing so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and not feasible for complex queries, but such queries are essential for many web searches, especially in the area of e-commerce. In this way, the automation of querying and retrieving data behind search interfaces is desirable and essential for such tasks as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. Besides, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
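
One of the steps mentioned above, extracting the fields of an HTML search form so that a query can later be filled in automatically, can be sketched with the Python standard library alone. This is an illustration of the idea, not the I-Crawler's code, and it does not handle the JavaScript-rich or non-HTML forms that the I-Crawler targets:

    from html.parser import HTMLParser

    class SearchFormParser(HTMLParser):
        """Collect each form's action URL and the names/types of its fields."""
        def __init__(self):
            super().__init__()
            self.forms = []        # list of {"action": ..., "fields": [...]}
            self._current = None

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "form":
                self._current = {"action": attrs.get("action", ""), "fields": []}
                self.forms.append(self._current)
            elif tag in ("input", "select", "textarea") and self._current is not None:
                self._current["fields"].append(
                    {"name": attrs.get("name"), "type": attrs.get("type", tag)})

        def handle_endtag(self, tag):
            if tag == "form":
                self._current = None

    # Usage: feed the parser the HTML of a candidate page, then inspect parser.forms.
    # parser = SearchFormParser()
    # parser.feed(html_text)
    # for form in parser.forms:
    #     print(form["action"], [field["name"] for field in form["fields"]])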

Relevance:

100.00%

Publisher:

Abstract:

Efficient problem solving in cellular networks is important when enhancing network performance and reliability. Analysis of calls and packet-switched sessions at the protocol level between network elements is an important part of this process, since it can provide very detailed information about error situations which would otherwise be difficult to recognise. In this thesis we seek solutions for monitoring GPRS/EDGE sessions on two specific interfaces simultaneously in such a manner that all information important to the users is provided in an easily understandable form. The thesis focuses on the Abis and AGPRS interfaces of the GSM radio network and introduces a solution for managing the correlation between these interfaces by using signalling messages and common parameters as linking elements. Finally, the thesis presents an implementation of a GPRS/EDGE session monitoring application for the Abis and AGPRS interfaces and evaluates its benefits to the end users. The application is implemented as part of a Windows-based 3G/GSM network analyser.
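
The correlation between the two interfaces rests on using signalling messages and common parameters as linking elements. As a minimal sketch of the general idea only (the record layout and the key name "tlli" are assumptions for illustration, not the thesis's actual linking parameters), decoded messages from both captures can be grouped into per-session timelines by a shared identifier:

    from collections import defaultdict

    def correlate(abis_msgs, agprs_msgs, key="tlli"):
        """Group decoded messages from both interfaces into per-session timelines."""
        sessions = defaultdict(list)
        for source, messages in (("Abis", abis_msgs), ("AGPRS", agprs_msgs)):
            for msg in messages:               # msg: dict of decoded fields
                if key in msg:
                    sessions[msg[key]].append((msg["timestamp"], source, msg))
        for timeline in sessions.values():
            timeline.sort(key=lambda entry: entry[0])   # chronological order
        return sessions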

Relevance:

100.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014