863 results for Integration of information systems
Abstract:
The main objective of this study is to assess the potential of the information technology industry in the Saint Petersburg area to become one of the new key industries in the Russian economy. To this end, the study analyzes in particular the international competitiveness of the industry and the conditions for clustering. Russia is currently heavily dependent on its natural resources, which are the main source of its recent economic growth. In order to achieve good long-term economic performance, Russia needs to diversify into other well-performing industries beyond those based on natural resources. The Russian government has acknowledged this and launched special initiatives to promote industries such as information technology and nanotechnology. Information technology is a particularly interesting industry, being less than 20 years old in Russia and growing fast. Information technology activities and markets are mainly concentrated in Russia’s two biggest cities, Moscow and Saint Petersburg, and the areas around them. The information technology industry in the Saint Petersburg area, although smaller than Moscow’s, is especially dynamic and is attracting a growing foreign company presence. However, the industry is not yet internationally competitive, as it lacks substantial and sustainable competitive advantages. The industry is also only a potential global information technology cluster, since it lacks a competitive edge as well as a broad supplier and manufacturing base and other related parts of the information technology value system. On its own, the industry will not become a key industry in Russia, but it will play an important supporting role in the development of other industries. The information technology market in the Saint Petersburg area is already large, and if integrated more tightly with the Moscow market, the two will together form a huge and still growing market sufficient for most companies operating in Russia now and in the future. The potential of information technology within Russia is therefore immense.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another concern is that the surveys of the deep Web conducted so far are predominantly based on studies of deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially given the steady increase in non-English web content. Thus, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions mostly do not hold, because of the large scale of the deep Web – indeed, for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep web characterization studies and for constructing directories of deep web resources.
Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are themselves web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and not feasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Thus, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
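As a rough illustration of the form-querying and result-extraction task described above, the following Python sketch submits a query term through a hypothetical HTML search form and pulls the result rows out of the dynamically generated page. The URL, field name and result markup are invented for the example, and the sketch deliberately ignores the hard parts the thesis addresses (field-label extraction, client-side scripts, non-HTML forms); it is not the prototype system described in the thesis.

    # Sketch only: hypothetical form URL, field name and result markup.
    import requests
    from bs4 import BeautifulSoup

    def query_web_database(form_url: str, field: str, term: str) -> list[str]:
        """Submit a query term through a search form (simulated as a GET request)
        and return the text of result rows embedded in the returned page."""
        response = requests.get(form_url, params={field: term}, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        # Assume each result is a row of a results table; real result pages need
        # a learned or hand-written extraction template per interface.
        return [row.get_text(" ", strip=True) for row in soup.select("table.results tr")]

    if __name__ == "__main__":
        for hit in query_web_database("https://example.org/search", "q", "information integration"):
            print(hit)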
Abstract:
Drying is a major step in the manufacturing process in the pharmaceutical industry, and the selection of the dryer and its operating conditions is sometimes a bottleneck. Despite the difficulties, these bottlenecks are handled with utmost care because of good manufacturing practice (GMP) requirements and the industry's image in the global market. The purpose of this work is to study how existing knowledge can be used to select a dryer and its operating conditions for drying pharmaceutical materials, with the help of methods such as case-based reasoning and decision trees, in order to reduce the time and expenditure spent on research. The work consisted of two major parts: a literature survey on the theories of spray drying, case-based reasoning and decision trees; and an experimental part comprising data acquisition and testing of the models on existing and updated data. Testing resulted in a combination of the two models, case-based reasoning and decision trees, which gave more specific results than conventional methods.
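To make the decision-tree part of the approach concrete, here is a minimal Python sketch that trains a decision tree on a small hand-made toy dataset and uses it to suggest a dryer type. The features, records and dryer labels are invented for illustration and do not come from the thesis, which worked with real pharmaceutical drying cases and combined the tree with case-based reasoning.

    # Toy example only: the features, data and labels are hypothetical.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Features per case: [feed moisture %, heat sensitivity (0 = low, 1 = high)]
    X = [[70, 0], [65, 0], [30, 0], [25, 0], [80, 1], [75, 1]]
    y = ["spray dryer", "spray dryer", "tray dryer", "tray dryer",
         "freeze dryer", "freeze dryer"]

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # Print the learned rules and query the tree for a new, unseen case.
    print(export_text(tree, feature_names=["moisture_pct", "heat_sensitive"]))
    print(tree.predict([[68, 0]]))  # -> ['spray dryer'] for a moist, non-heat-sensitive feed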
Abstract:
Adult neurogenesis is regulated by the neurogenic niche, through mechanisms that remain poorly defined. Here, we investigated whether niche-constituting astrocytes influence the maturation of adult-born hippocampal neurons using two independent transgenic approaches to block vesicular release from astrocytes. In these models, adult-born neurons, but not mature neurons, showed reduced glutamatergic synaptic input and dendritic spine density, which were accompanied by lower functional integration and cell survival. By taking advantage of the mosaic expression of transgenes in astrocytes, we found that spine density was reduced exclusively in segments intersecting blocked astrocytes, revealing an extrinsic, local control of spine formation. Defects in NMDA receptor (NMDAR)-mediated synaptic transmission and dendrite maturation were partially restored by exogenous D-serine, whose extracellular level was decreased in the transgenic models. Together, these results reveal a critical role for adult astrocytes in local dendritic spine maturation, which is necessary for the NMDAR-dependent functional integration of newborn neurons.
Abstract:
Objective To construct a Portuguese-language index of information on the practice of diagnostic radiology in order to improve the standardization of medical language and terminology. Materials and Methods A total of 61,461 definitive reports were collected from the database of the Radiology Information System at Hospital das Clínicas – Faculdade de Medicina de Ribeirão Preto (RIS/HCFMRP), as follows: 30,000 chest x-ray reports; 27,000 mammography reports; and 4,461 thyroid ultrasonography reports. The text mining technique was applied for the selection of terms, and the ANSI/NISO Z39.19-2005 standard was utilized to construct the index based on a thesaurus structure. The system was created as HTML pages. Results The text mining resulted in a set of 358,236 (n = 100%) words. Out of this total, 76,347 (n = 21%) terms were selected to form the index. Such terms refer to anatomical pathology descriptions, imaging techniques, equipment, types of study and some other composite terms. The index system was developed with 78,538 HTML web pages. Conclusion The utilization of text mining on a database of radiological reports has allowed the construction of a lexical system in the Portuguese language consistent with clinical practice in radiology.
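As a highly simplified illustration of the term-selection step, the following Python sketch tokenises a few invented Portuguese report snippets and keeps the terms that recur as candidate index entries. The actual study processed 61,461 reports and organised the selected terms into a thesaurus structure following ANSI/NISO Z39.19-2005; the sample texts and the frequency threshold below are assumptions for the example.

    # Toy term-frequency extraction; the report snippets are invented examples.
    import re
    from collections import Counter

    reports = [
        "Radiografia de tórax sem alterações pleuropulmonares.",
        "Mamografia com nódulo denso no quadrante superior externo.",
        "Ultrassonografia de tireoide com nódulo hipoecoico no lobo direito.",
    ]

    def tokenize(text: str) -> list[str]:
        return re.findall(r"[a-záâãàéêíóôõúç]+", text.lower())

    term_counts = Counter(term for report in reports for term in tokenize(report))
    candidates = [term for term, count in term_counts.most_common() if count >= 2]
    # Prints the terms occurring at least twice; a real pipeline would also
    # filter stopwords such as "de" and "com" before building the thesaurus.
    print(candidates)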
Abstract:
A rotating machine usually consists of a rotor and the bearings that support it. Non-idealities in these components may excite vibration of the rotating system. Uncontrolled vibrations may lead to excessive wear of the components of the rotating machine or reduce process quality. Vibrations may be harmful even when the amplitudes are seemingly low, as is usually the case in superharmonic vibration, which takes place below the first critical speed of the rotating machine. Superharmonic vibration is excited when the rotational velocity of the machine is a fraction of the natural frequency of the system. In such a situation, a part of the machine’s rotational energy is transformed into vibration energy. The amount of vibration energy should be minimised in the design of rotating machines. The superharmonic vibration phenomena can be studied by analysing the coupled rotor-bearing system using a multibody simulation approach. This research is focused on the modelling of hydrodynamic journal bearings and rotor-bearing systems supported by journal bearings. In particular, the non-idealities affecting the rotor-bearing system and their effect on the superharmonic vibration of the rotating system are analysed. A comparison of computationally efficient journal bearing models is carried out in order to validate one model for further development. The selected bearing model is improved in order to take the waviness of the shaft journal into account. The improved model is implemented and analysed in a multibody simulation code. A rotor-bearing system that consists of a flexible tube roll, two journal bearings and a supporting structure is analysed employing the multibody simulation technique. The modelled non-idealities are the shell thickness variation of the tube roll and the waviness of the shaft journal in the bearing assembly. Both modelled non-idealities may cause subharmonic resonance in the system. In multibody simulation, the coupled effect of the non-idealities can be captured in the analysis. Additionally, one non-ideality is presented that does not itself excite vibrations but affects the response of the rotor-bearing system, namely the waviness of the bearing bushing, the non-rotating part of the bearing system. The modelled system is verified with measurements performed on a test rig. The waviness of the bearing bushing was not measured, and therefore its effect on the response could not be verified. In conclusion, the selected modelling approach is an appropriate method for analysing the response of the rotor-bearing system. When the simulated results are compared with the measured ones, the overall agreement between the results is good.
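As a back-of-the-envelope illustration of the excitation condition stated above (rotational speed equal to an integer fraction of the system's natural frequency), the short Python sketch below lists the rotational speeds at which superharmonic vibration could be excited for an assumed first natural frequency of 24 Hz. The frequency value is an example assumption, not a result of the thesis.

    # Illustrative only: 24 Hz is an assumed natural frequency, not a thesis result.
    natural_frequency_hz = 24.0

    for k in range(2, 6):
        speed_hz = natural_frequency_hz / k
        print(f"Running at 1/{k} of the natural frequency ({speed_hz:.1f} Hz, "
              f"{speed_hz * 60:.0f} rpm) may excite the {k}x superharmonic response")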
Abstract:
Nurses' acceptance and use of information technology in psychiatric hospitals. The use of information technology (IT) has not played a very significant role in psychiatric nursing, even though IT applications have been found to have radically affected healthcare services and the work processes of nursing staff in recent years. The aim of this study is to describe the acceptance and use of information technology among nursing staff working in psychiatric care and to create a recommendation that can be used to support these in psychiatric hospitals. The study consists of five sub-studies employing both statistical and qualitative research methods. The research data were collected among the nursing staff of nine acute psychiatric wards during 2003-2006. The Technology Acceptance Model (TAM) was used to structure the research process and to deepen the understanding of the results obtained. The study identified eight key factors that may support the acceptance and use of information technology applications among nurses working in psychiatric hospitals when these factors are taken into account during the introduction of new applications. The factors fall into two groups: external factors (allocation of resources, collaboration, computer skills, IT education, application-specific training, the patient-nurse relationship), and ease of use and application usability (guidance for use, ensuring usability). The TAM theory proved useful in interpreting the results. The recommendation developed includes the measures that make it possible to support the commitment of both organizational management and nursing staff and thereby ensure the acceptance and use of a new application in nursing work. The recommendation can be applied in practice when new information systems are implemented in psychiatric hospitals.
Abstract:
This master's thesis examined the integration of wood chip prehydrolysis and logging residue hydrolysis processes into a pulp mill for the production of bioethanol. A simulation model of such a biorefinery was created with the WinGEMS simulation program and used to study the effect of the bioethanol process on the mass and energy balances of the pulp mill, as well as the preliminary profitability of the biorefinery. Three cases were examined in the simulation, in which pine pulp production was assumed to be 1000 tonnes per day and logging residue was used at 10% of the required pulpwood amount: 1) prehydrolysis of wood chips and hydrolysis of logging residue for ethanol production; 2) prehydrolysis of wood chips, with the logging residue burned in the bark boiler; 3) no prehydrolysis, with the logging residue burned in the bark boiler. Compared with case 3, wood consumption increases by 16% when the wood chips are prehydrolysed before cooking in cases 1 and 2. With this increased wood consumption, case 1 produces 149 tonnes of ethanol and 240 MWh more surplus electricity per day, and case 2 produces 68 tonnes of ethanol and 460 MWh more surplus electricity per day. This would generate additional annual cash flow of EUR 18.8 million in case 1 and EUR 9.4 million in case 2. Evaporation of the hydrolysis product solution (the hydrolysate) and evaporation and combustion of the organic residues of the hydrolysis processes increase the load on the evaporation plant and the recovery boiler. Compared with case 3, in cases 1 and 2 the number of evaporation stages must be increased from five to seven and the required heat transfer areas nearly doubled. The recovery boiler load increases by 39% in case 1 and 26% in case 2.
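To show the structure of the kind of cash-flow arithmetic summarised above, the Python sketch below turns the reported daily yields of case 1 into an annual figure using assumed ethanol and electricity prices and an assumed number of operating days. Because the prices and operating days are illustrative guesses and no additional costs (e.g., for the extra wood) are subtracted, the result is a gross estimate and will not reproduce the thesis's reported EUR 18.8 million.

    # Daily yields from the abstract (case 1); prices and operating days are assumed.
    ETHANOL_T_PER_DAY = 149
    EXTRA_POWER_MWH_PER_DAY = 240

    ETHANOL_PRICE_EUR_PER_T = 400   # assumed for illustration
    POWER_PRICE_EUR_PER_MWH = 50    # assumed for illustration
    OPERATING_DAYS_PER_YEAR = 350   # assumed for illustration

    gross_annual_cash_flow = OPERATING_DAYS_PER_YEAR * (
        ETHANOL_T_PER_DAY * ETHANOL_PRICE_EUR_PER_T
        + EXTRA_POWER_MWH_PER_DAY * POWER_PRICE_EUR_PER_MWH
    )
    print(f"Gross additional annual cash flow, case 1: EUR {gross_annual_cash_flow / 1e6:.1f} million")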
Abstract:
In the present paper we characterize the optimal use of Poisson signals to establish incentives in the "bad" and "good" news models of Abreu et al. [1]. In the former, for small time intervals the signals' quality is high and we observe a "selective" use of information; otherwise there is a "mass" use. In the latter, for small time intervals the signals' quality is low and we observe a "fine" use of information; otherwise there is a "non-selective" use. JEL: C73, D82, D86. KEYWORDS: Repeated Games, Frequent Monitoring, Public Monitoring, Information Characteristics.
Abstract:
This thesis consists of three main theoretical themes: quality of data, success of information systems, and metadata in data warehousing. Loosely defined, metadata is descriptive data about data; in this thesis, master data means reference data about customers, products, etc. The objective of the thesis is to contribute to the implementation of a metadata management solution for an industrial enterprise. The metadata system incorporates a repository, integration, delivery and access tools, as well as semantic rules and procedures for master data maintenance. It aims to improve the maintenance processes and quality of hierarchical master data in the case company’s information systems. This should benefit the whole organization through improved information quality, especially cross-system data consistency, and through more efficient and effective data management processes. As the result of this thesis, the requirements for the metadata management solution were compiled, and the success of the new information system and the implementation project was evaluated.
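As a minimal sketch of the cross-system data consistency that such a solution targets, the Python snippet below compares customer master records held in two hypothetical systems and reports mismatches and missing entries. The system names, fields and records are invented for illustration; the actual solution additionally covers the repository, integration, delivery and access tools and the semantic maintenance rules mentioned above.

    # Hypothetical master-data records from two systems; a mismatch and a gap are planted.
    erp_customers = {
        "C001": {"name": "Acme Oy", "country": "FI"},
        "C002": {"name": "Beta GmbH", "country": "DE"},
    }
    crm_customers = {
        "C001": {"name": "Acme Oy", "country": "FI"},
        "C002": {"name": "Beta GmbH", "country": "AT"},  # country disagrees with the ERP
        "C003": {"name": "Gamma AB", "country": "SE"},   # missing from the ERP
    }

    def consistency_report(master: dict, other: dict, other_name: str) -> list[str]:
        """List keys that are missing from the master data or whose attributes differ."""
        issues = []
        for key, record in other.items():
            if key not in master:
                issues.append(f"{key}: present in {other_name} but missing from the master data")
            elif record != master[key]:
                issues.append(f"{key}: attributes differ between the master data and {other_name}")
        return issues

    for issue in consistency_report(erp_customers, crm_customers, "CRM"):
        print(issue)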
Abstract:
The purpose of this thesis is to investigate projects funded under the European 7th Framework Programme's Information and Communication Technology work programme. The research is limited to the challenge "Pervasive and trusted network and service infrastructure", and the aim is to find out which are the most important topics on which research will concentrate in the future. The thesis provides important information for the Department of Information Technology at Lappeenranta University of Technology. First, the thesis examines the requirements for the projects funded under the "Pervasive and trusted network and service infrastructure" programme in 2007. Second, the projects funded under that programme are listed in tables and the most important keywords are gathered. Finally, based on the keyword occurrences, a vision of the most important future topics is defined. According to the keyword analysis, wireless networks will play an important role in the future, and core networks will be implemented with fiber technology to ensure fast data transfer. Software development favors Service Oriented Architecture (SOA) and open source solutions. Interoperability and ensuring privacy play a key role in the future. 3D in all its forms and content delivery are important topics as well. When all the projects were compared, the most important issue turned out to be SOA, which leads the way to cloud computing.
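A simplified Python sketch of the keyword analysis described above: count how often each keyword occurs across the funded projects and rank the topics by frequency. The project keyword lists here are invented placeholders, not the actual FP7 project data compiled in the thesis.

    # Placeholder keyword lists; the real analysis used the funded FP7 projects.
    from collections import Counter

    project_keywords = [
        ["SOA", "cloud computing", "interoperability"],
        ["wireless networks", "privacy", "SOA"],
        ["fiber", "core networks", "3D"],
        ["SOA", "open source", "content delivery"],
    ]

    keyword_counts = Counter(kw for keywords in project_keywords for kw in keywords)
    for keyword, count in keyword_counts.most_common(5):
        print(f"{keyword}: {count}")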