42 results for Very large scale integration
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users relying on search engines alone are unable to discover and access a large amount of information from the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago by the standards of any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. Thus, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions do not hold, mainly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep web characterization studies and for constructing directories of deep web resources.
Unlike almost all existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are themselves web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Thus, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
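To make the form-querying task concrete, the sketch below shows what automated interaction with a single search interface involves. It is a minimal sketch, not the thesis's I-Crawler or its form query language; the URL, form field names and result-page markup are invented placeholders.

```python
# Minimal sketch of automated form querying, the task the thesis automates.
# SEARCH_URL, the field names and the CSS selectors are hypothetical.
import requests
from bs4 import BeautifulSoup

SEARCH_URL = "https://example.org/books/search"   # hypothetical search interface

def query_web_database(terms: dict) -> list[dict]:
    """Submit a form query and extract structured rows from the result page."""
    response = requests.post(SEARCH_URL, data=terms, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    results = []
    for row in soup.select("table.results tr"):    # hypothetical result markup
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        if len(cells) >= 2:
            results.append({"title": cells[0], "price": cells[1]})
    return results

if __name__ == "__main__":
    for hit in query_web_database({"author": "Tolstoy", "format": "hardcover"}):
        print(hit)
```

A real deep web crawler must additionally discover the form, infer field labels and handle client-side scripts, which is exactly where the manual effort the thesis removes would otherwise go.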
Abstract:
In the field of molecular biology, scientists have for decades adopted a reductionist perspective in their inquiries, being predominantly concerned with the intricate mechanistic details of subcellular regulatory systems. Integrative thinking had nevertheless been applied at a smaller scale in molecular biology to understand the underlying processes of cellular behaviour for at least half a century. It was not until the genomic revolution at the end of the previous century that model building was required to account for systemic properties of cellular activity. Our system-level understanding of cellular function is to this day hindered by drastic limitations in our capability to predict cellular behaviour in a way that reflects system dynamics and system structure. To this end, systems biology aims for a system-level understanding of functional intra- and inter-cellular activity. Modern biology produces a high volume of data, whose comprehension we cannot even aim for in the absence of computational support. Computational modelling thus bridges modern biology to computer science, enabling a number of assets that prove invaluable in the analysis of complex biological systems, such as a rigorous characterization of the system structure, simulation techniques, and perturbation analysis. Computational biomodels have grown considerably in size in the past years, with major contributions made towards the simulation and analysis of large-scale models, starting with signalling pathways and culminating with whole-cell models, tissue-level models, organ models and full-scale patient models. The simulation and analysis of models of such complexity very often requires the integration of various sub-models, entwined at different levels of resolution and whose organization spans several levels of hierarchy. This thesis revolves around the concept of quantitative model refinement in relation to the process of model building in computational systems biology. The thesis proposes a sound computational framework for the stepwise augmentation of a biomodel. One starts with an abstract, high-level representation of a biological phenomenon, which is materialised into an initial model that is validated against a set of existing data. Subsequently, the model is refined to include more details regarding its species and/or reactions. The framework is employed in the development of two models, one for the heat shock response in eukaryotes and the second for the ErbB signalling pathway. The thesis spans several inherently quantitative formalisms used in computational systems biology (reaction-network models, rule-based models and Petri net models), as well as a recent, intrinsically qualitative formalism (reaction systems). The choice of modelling formalism is, however, determined by the nature of the question the modeller aims to answer. Quantitative model refinement turns out to be not only essential in the model development cycle, but also beneficial for the compilation of large-scale models, whose development requires the integration of several sub-models across various levels of resolution and underlying formal representations.
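As a toy illustration of the refinement idea (not of the thesis's actual heat shock or ErbB models), the sketch below refines one species into two variants in a first-order mass-action model and checks that the aggregate dynamics are preserved. Species names, rates, initial values and the choice of equal rate constants for the variants are assumptions of the example.

```python
# Toy fit-preserving refinement under first-order mass-action kinetics only.
# All names and numbers are invented for illustration.
import numpy as np

def simulate_decay(initial: dict, k: float, t_end=10.0, dt=0.01) -> dict:
    """Euler-integrate first-order degradation X -> 0 for each species."""
    state = dict(initial)
    for _ in np.arange(0.0, t_end, dt):
        for name in state:
            state[name] += dt * (-k * state[name])
    return state

# Original model: one species A degrading at rate k.
original = simulate_decay({"A": 100.0}, k=0.5)

# Refined model: A is split into two variants (e.g. phosphorylated and not).
# Keeping the same rate constant for both variants preserves the aggregate
# dynamics exactly: d(A1+A2)/dt = -k*(A1+A2), matching dA/dt = -k*A.
refined = simulate_decay({"A1": 30.0, "A2": 70.0}, k=0.5)

assert abs(original["A"] - (refined["A1"] + refined["A2"])) < 1e-9
print(original, refined)
```

For higher-order reactions the refined rate constants must be chosen more carefully to keep the fit, which is the crux of the quantitative refinement problem the thesis addresses.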
Abstract:
Enterprise resource planning (ERP) software combines all the functions taking place inside an organization within a single software system. All data is centralized, which makes it easy to manage information for all participants. The ERP literature thoroughly covers the whole process from adoption through implementation to final evaluation, but studies focusing on small and medium-sized enterprises (SMEs) are limited in number compared to those on large-scale enterprises. In the case of Pakistan, research is very limited. In this thesis, the author analyzes the current status of ERP system usage among Pakistani SMEs. The benefits obtained and the challenges faced by the SMEs of Pakistan are studied. The framework presented by Shang and Seddon (2000) is used to understand the benefits obtained by SMEs in Pakistan. This is a comprehensive framework that classifies the benefits of ERP adoption into five categories: operational benefits, managerial benefits, strategic benefits, IT benefits, and organizational benefits. The results show that the SMEs of Pakistan also obtain many benefits from ERP adoption. Most of the firms had implemented SAP software. Operational benefits were mentioned by all the firms; the most important were report generation, quick access to critical information, and better product and cost planning. Respondents also mentioned that they had reduced corruption as a result of ERP implementation, an important benefit considering the high corruption rate in Pakistan. Along with the benefits, the challenges faced by Pakistani SMEs included infrastructure problems such as electricity, difficulties in integrating one module with another, the costs of adoption, and a lack of skilled ERP consultants. Further studies in this area could be conducted on cloud-based ERP, which is growing fast all around the world.
Abstract:
The objective of my thesis was to find out how the mobile TV service will influence the TV consumption behaviour of Finns. In particular, the study focuses on the consumption behaviour of well-educated urban people. For my thesis, I provide a detailed analysis of the results of FinPilot, a large-scale questionnaire study from 2005 commissioned by Nokia Ltd. To deepen the study results, I focused on the above-mentioned group of well-educated young people. The goal of the FinPilot research was to answer the following questions: what kinds of programs are watched when using the mobile television service, in what kinds of circumstances, and for what reasons. The results of the research consisted mainly of data such as figures and graphics. The data was explained from a helicopter perspective, as this gave additional value to the research and consequently to my own thesis. My study offered complementary, unique information about the group's needs, as it was based on questionnaires supplemented by individual interviews of the group members, their free comments, and group discussions. The study results showed that the mobile TV service did not increase total TV consumption time. The time used for watching mobile TV was significantly shorter than the time spent watching traditional TV. According to my study, well-educated young urban people are more willing to adopt the mobile TV service than the average Finn. Being eager to utilize the added value offered by mobile TVs, they are a potential target group in launching and marketing processes. On the basis of the outcome of the thesis, the future of the mobile TV service seems very promising. The content and the pricing, however, have to match the users' needs and expectations. All the study results indicate that there is a social demand for the mobile TV service.
Abstract:
Firms operating in a changing environment need structures and practices that provide flexibility and enable rapid response to changes. Given the challenges they face in attempting to keep up with market needs, they have to continuously improve their processes and products, and develop new products to match market requirements. Success in changing markets depends on the firm's ability to convert knowledge into innovations, and consequently their internal structures and capabilities have an important role in innovation activities. According to the dynamic capability view of the firm, firms thus need dynamic capabilities in the form of assets, processes and structures that enable strategic flexibility and support entrepreneurial opportunity sensing and exploitation. Dynamic capabilities are also needed in conditions of rapid change in the operating environment, and in activities such as new product development and expansion to new markets. Despite the growing interest in these issues and the theoretical developments in the field of strategy research, there are still only very few empirical studies, and large-scale empirical studies in particular, that provide evidence that firms' dynamic capabilities are reflected in performance differences. This thesis represents an attempt to advance the research by providing empirical evidence of the linkages between the firm's dynamic capabilities and performance in internationalization and innovation activities. The aim is thus to increase knowledge and enhance understanding of the organizational factors that explain interfirm performance differences. The study is in two parts. The first part is the introduction, and the second part comprises five research publications covering the theoretical foundations of the dynamic capability view and subsequent empirical analyses. Quantitative research methodology is used throughout. The thesis contributes to the literature in several ways. While much prior research on dynamic capabilities is conceptual in nature, or conducted through case studies, this thesis introduces empirical measures for assessing the different aspects, and uses large-scale sampling to investigate the relationships between them and performance indicators. The dynamic capability view is further developed by integrating theoretical frameworks and research traditions from several disciplines. The results of the study provide support for the basic tenets of the dynamic capability view. The empirical findings demonstrate that the firm's ability to renew its knowledge base and other intangible assets, its proactive, entrepreneurial behavior, and the structures and practices that support operational flexibility are positively related to performance indicators.
Abstract:
This work examines different uses for the end product of a composting plant. Based on the general applications, a model was created for utilising the compost from the Himanka composting plant supplied by Vapo Oy Biotech. The plant is located in the municipality of Himanka in Central Ostrobothnia, and it mainly composts sewage sludge and fur animal manure from Himanka and the surrounding municipalities. Waste management in Finland and throughout the EU is undergoing a period of strong change. The aim is to phase out the landfilling of organic waste entirely, which has increased its treatment by, among other things, composting. The end product of composting is a nutrient-rich, humus-containing material. One of the goals of composting is to produce a product suitable for beneficial use, so finding an application for the compost is extremely important for the operation of the plant. The usability of compost is affected by legislation, the properties of the compost and local conditions. In this work, the applications are divided into non-energy use and energy use. Non-energy uses include soil improvement, fertiliser use, use in growing media, landscaping and public construction, as well as some special applications. On the energy side, the suitability of compost for different combustion technologies is examined, and compost is compared with other solid fuels. The suitability of composts for different applications has been assessed on the basis of compost analyses and cultivation experiments. Promising results have been obtained in various studies on the non-energy use of compost. Compost has been found to be very well suited, for example, to potato cultivation, the cultivation of cereals and grasses, various landscaping applications and home garden use. The use of compost as a fuel has not yet been tried anywhere. The biggest problems in burning compost are its high ash, sulphur and nitrogen content and its uneven quality.
Abstract:
The targets set by the European Union for increasing the share of renewable energy sources in electricity production are leading to significant growth in wind power generation. Large wind farms consisting of several dozen wind turbines are being planned in Finland as well, both onshore and offshore. The design of a wind farm is, as a whole, a long process which, in addition to electrical engineering design, also includes an environmental impact assessment programme and various permit matters. The goal of this Master's thesis is to develop methods that facilitate the techno-economic design of the medium-voltage network of large wind farms. Finding the best techno-economic solution for a wind farm's medium-voltage network depends on several variables. In this work, a calculation model template was developed that makes it easy and fast to examine the effect of different solutions on the whole. The objective is to optimise, in a techno-economic sense, the entire internal medium-voltage network of the wind farm. The developed calculation model was applied to the design of the medium-voltage network of a wind farm serving as an example project. With the model, the network costs over the whole service life can be calculated quickly. The cost calculations take into account investment, loss and interruption costs.
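To illustrate the kind of cost comparison such a calculation model performs, here is a minimal sketch of lifetime network cost as investment plus discounted loss and interruption costs. All figures, the discount rate and the 20-year horizon are invented examples, not values from the thesis.

```python
# Lifetime cost of a network option: investment + present value of annual
# loss and interruption costs. All numbers below are illustrative only.

def present_value(annual_cost: float, rate: float, years: int) -> float:
    """Discount a constant annual cost to present value."""
    return sum(annual_cost / (1.0 + rate) ** t for t in range(1, years + 1))

def lifetime_cost(investment, annual_losses, annual_interruptions,
                  rate=0.05, years=20):
    return (investment
            + present_value(annual_losses, rate, years)
            + present_value(annual_interruptions, rate, years))

# Compare two hypothetical cable cross-sections: the larger conductor costs
# more up front but has lower resistive losses over the service life.
option_small = lifetime_cost(investment=400_000, annual_losses=35_000,
                             annual_interruptions=5_000)
option_large = lifetime_cost(investment=520_000, annual_losses=18_000,
                             annual_interruptions=5_000)
print(f"small: {option_small:,.0f} EUR, large: {option_large:,.0f} EUR")
```

Sweeping such a calculation over cable sizes, topologies and component choices is what lets the designer find the techno-economically optimal internal network.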
Abstract:
The results shown in this thesis are based on selected publications from the 2000s. The work was carried out in several national and EC-funded public research projects and in close cooperation with industrial partners. The main objective of the thesis was to study and quantify the most important phenomena of circulating fluidized bed (CFB) combustors by developing and applying proper experimental and modelling methods using laboratory-scale equipment. An understanding of these phenomena plays an essential role in the development of combustion and emission performance, and in the availability and controls of CFB boilers. Experimental procedures to study fuel combustion behaviour under CFB conditions are presented in the thesis. Steady-state and dynamic measurements under well-controlled conditions were carried out to produce the data needed for the development of high-efficiency, utility-scale CFB technology. The importance of combustion control and furnace dynamics is emphasized when CFB boilers are scaled up with a once-through steam cycle. Qualitative information on fuel combustion characteristics was obtained directly by comparing flue gas oxygen responses during impulse-change experiments with the fuel feed. A one-dimensional, time-dependent model was developed to analyse the measurement data. Emission formation was studied in combination with fuel combustion behaviour. Correlations were developed for NO, N2O, CO and char loading as functions of temperature and oxygen concentration in the bed area. An online method to characterize char loading under CFB conditions was developed and validated with pilot-scale CFB tests. Finally, a new method to control the air and fuel feeds in CFB combustion was introduced. The method is based on models and an analysis of the fluctuation of the flue gas oxygen concentration. The effect of high oxygen concentrations on fuel combustion behaviour was also studied to evaluate the potential of CFB boilers for applying oxygen-firing technology to CCS. In future studies, it will be necessary to go through the whole scale-up chain, from laboratory phenomena devices through pilot-scale test rigs to large-scale commercial boilers, in order to validate the applicability and scalability of the results. This thesis shows the chain between the laboratory-scale phenomena test rig (bench scale) and the CFB process test rig (pilot). CFB technology has been scaled up successfully from an industrial scale to a utility scale during the last decade. The work shown in the thesis has, for its part, supported this development by producing new detailed information on combustion under CFB conditions.
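The impulse-response idea above can be sketched qualitatively: after an impulse in the fuel feed, volatiles burn quickly while char burns out slowly, so the flue gas oxygen depletion can be pictured as the sum of a fast and a slow first-order decay. This is a hedged illustration under those simplifying assumptions; the time constants and magnitudes are invented, not fitted to the thesis data or its one-dimensional model.

```python
# Illustrative two-time-constant sketch of flue gas O2 depletion after a
# fuel feed impulse at t = 0. All parameters are invented placeholders.
import numpy as np

def o2_response(t, dO2=1.0, f_volatiles=0.4, tau_fast=5.0, tau_slow=60.0):
    """O2 depletion (vol-%) as fast volatile burn plus slow char burnout."""
    fast = f_volatiles * np.exp(-t / tau_fast)          # volatile combustion
    slow = (1.0 - f_volatiles) * np.exp(-t / tau_slow)  # char burnout
    return dO2 * (fast + slow)

t = np.linspace(0.0, 300.0, 7)
for ti, o2 in zip(t, o2_response(t)):
    print(f"t = {ti:5.0f} s  ->  O2 depletion {o2:.3f} %")
```

Fitting the fast and slow components of measured O2 responses is one way such experiments yield qualitative information on the combustion characteristics of different fuels.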
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find them difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program in steps of adding code, proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for the total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically, and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that are not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. With the aid of the tool, the programmer reduces a large verification problem into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula.
Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early, practically oriented programming courses. Our hypothesis is that verification could be introduced early in the CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
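To convey the invariant-first discipline in plain code, here is a textual analogue of the workflow: the loop invariant is written down before the loop body and re-checked after each extension of the program. This is a minimal Python sketch, not the Socos invariant diagram notation or its PVS/Yices-backed proofs; runtime assertions merely test the invariant, whereas the tool proves it.

```python
# Invariant-first insertion sort: the invariant predicate comes first and is
# checked at establishment and after every step that extends the program.

def is_sorted(xs, hi):
    """Invariant predicate: xs[0:hi] is in non-decreasing order."""
    return all(xs[i] <= xs[i + 1] for i in range(hi - 1))

def insertion_sort(xs: list) -> list:
    # Establish the invariant: an empty or one-element prefix is sorted.
    i = 1 if xs else 0
    assert is_sorted(xs, i)
    while i < len(xs):
        # Extend the sorted prefix by one element.
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
        i += 1
        # Re-check the invariant, mirroring the prove-after-each-addition
        # discipline of invariant-based programming.
        assert is_sorted(xs, i)
    return xs

print(insertion_sort([5, 2, 4, 6, 1, 3]))
```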
Abstract:
The purpose of this dissertation is to examine the dynamics of the socio-technical system in the field of ageing. The study stems from the notion that the ageing of the population, as a powerful megatrend, has wide societal effects and is not just a matter for the social and health sector. The central topic in the study is change: not only are the age structures and structures of society changing, but at the same time there is constant development in, for instance, technologies, infrastructures and cultural perceptions. The changing concept of innovation has widened the understanding of innovations related to ageing from medical and assistive technological innovations to service and social innovations, as well as systemic innovations at different levels, meaning the intertwined and co-evolutionary change in technologies, structures, services and thinking models. By the same token, perceptions of older people and old age are becoming more multi-faceted: old age is no longer equated with illnesses and decline, but visions of active ageing and a third age have emerged, framed by choices, opportunities, resources and consumption in later life. The research task in this study is to open up the processes and mechanisms of change in the field of ageing, which is studied as a complex, multi-level and interrelated socio-technical system. The question is about co-effective elements consisting of macro-level landscape changes, the existing socio-technical regime (the rule system, practices and structures) and bottom-up niche-innovations. Societal transitions are not explained by what happens inside the regime alone, by long-term changes in the landscape, or by radical innovations, but by the interplay between all these levels. The research problem is studied through five research articles, which offer micro-level case studies of a macro-level phenomenon. Each of the articles focuses on different aspects related to ageing and change, and utilises various datasets. The framework of this study leans on studies of socio-technical systems and the multi-level perspective on transitions, mainly developed by Frank Geels. Essential in the transition from one socio-technical regime to another are the co-evolutionary processes between landscape changes, the regime level and experimental niches. Landscape-level changes, like the ageing of the population, destabilise the regime in the form of mounting pressures. This destabilisation opens windows of opportunity for niche-innovations outside or at the fringe of the regime, which, through their breakthrough, accelerate the transition process. However, the change is not easy because of various kinds of lock-ins and inertia, which tend to maintain the stability of the regime. In this dissertation, a constructionist approach to society is applied, leaning mainly on the ideas of Anthony Giddens' theory of structuration, with its dual nature of structures. Change takes place in the interplay between actors and structures: structures shape people's practices, but at the same time these practices constitute and reproduce social systems. Technology and other material aspects, as parts of socio-technical systems, and the use of them, also take part in the structuration process.
The findings of the study point out that the co-evolutionary and co-effective relationships between the economic, cultural, technological and institutional fields, as well as the relationships between landscape changes and changes in local and regime-level practices and rule systems, form a very complex and multi-level dynamic socio-technical phenomenon. At the landscape level of ageing, which creates the pressures and triggers for regime change, there are three remarkable megatrends: demographic change, changes in the global economy and the development of technologies. These exert pressures on the socio-technical regime, which as a rule system is experiencing changes in the form of new markets and consumer habits, new ways of perceiving ageing, new models of organising health care and other services, and new ways of considering innovation and innovativeness. There are also inner dynamics in the relationships between these aspects within the regime. They are interrelated and co-constructed: the prevailing perceptions of ageing and innovation, for instance, reflect the ageing policies, innovation policies, societal structures, organising models, technology and scientific discussion, and vice versa. Technology is part of the inner dynamics of the socio-technological regime. The physical properties of artefacts set limitations and opportunities with regard to their functions and uses. The use of, and discussion about, technology contributes to producing and reproducing the perceptions of old age. For societal transition, micro-level changes are also needed in the form of niche-innovations, for instance new services, organisational models or new technologies. Regimes, as stability-striven systems, tend to generate incremental innovations, whereas radically new innovations are generated in experimental niches protected from 'normal' market selection. The windows of opportunity for radical novelties may be opened if the circumstances are favourable, for instance through tensions in the socio-technical regime caused by landscape-level changes. This dissertation indicates that change is taking place, firstly, in the dynamic interaction between levels, to some extent as a result of purposive action and governance. Breaking the inertia and using the window of opportunity for change and innovation offered by the dynamics between levels presupposes special capabilities and actions from the actors, such as dynamic capabilities and distance management. Secondly, change is taking place in the socio-technological negotiations inside the regime: the interaction between the technological and the social, which is embodied in the use of technology. The use of technology includes small-scale contextual scripts that also participate in forming broader societal scripts (for instance, defining old age at the level of society), which in turn affect the formation of policies for innovation and ageing. Thirdly, change is taking place by means of the active formation of multi-actor innovation networks, where the role of distance management is crucial in facilitating communication between actors coming from different backgrounds, as well as in helping the niches born outside the regime to utilise the window of opportunity offered by regime destabilisation. This dissertation makes both theoretical and practical contributions. The study participates in the discussion of the action-oriented view on transition by opening up the socio-technological, co-evolutionary processes of the multi-faceted phenomenon of ageing, which has lacked systematic analysis.
The focus of this study, however, is not on large-scale coordination and governance, but rather on opening up the incremental elements and structuration processes, which contribute to the transition little by little and which can be influenced. This increases the practical importance of this dissertation by highlighting the importance of very small, everyday elements in long-run change processes.
Abstract:
In this dissertation, active galactic nuclei (AGN) are discussed as they are seen with the high-resolution radio-astronomical technique called Very Long Baseline Interferometry (VLBI). This observational technique provides very high angular resolution (~10⁻³ arcsec = 1 milliarcsecond). VLBI observations performed at different radio frequencies (multi-frequency VLBI) allow one to penetrate deep into the core of an AGN to reveal an otherwise obscured inner part of the jet and the vicinity of the AGN's central engine. Multi-frequency VLBI data are used to scrutinize the structure and evolution of the jet, as well as the distribution of the polarized emission. These data can help to derive the properties of the plasma and the magnetic field, and to provide constraints on the jet composition and the parameters of the emission mechanisms. VLBI data can also be used to test possible physical processes in the jet by comparing observational results with the results of numerical simulations. The work presented in this thesis contributes to different aspects of AGN physics studies, as well as to the methodology of VLBI data reduction. In particular, Paper I reports evidence of the optical and radio emission of AGN coming from the same region in the inner jet. This result was obtained via simultaneous observations of linear polarization in the optical and in the radio (using the VLBI technique) for a sample of AGN. Papers II and III describe in detail the jet kinematics of the blazar 0716+714, based on multi-frequency data, and reveal a peculiar kinematic pattern: plasma in the inner jet appears to move substantially faster than that in the large-scale jet. This peculiarity is explained by jet bending in Paper III. Paper III also presents a test of a new imaging technique for VLBI data, the Generalized Maximum Entropy Method (GMEM), on observed (not simulated) data, and compares its results with conventional imaging. Papers IV and V report the results of observations of circularly polarized (CP) emission in AGN at small spatial scales. In particular, Paper IV presents values of the core CP for 41 AGN at 15, 22 and 43 GHz, obtained with the help of the standard gain transfer (GT) method, previously developed by D. Homan and J. Wardle for the calibration of multi-source VLBI observations. This method was developed for long multi-source observations, in which many AGN are observed in a single VLBI run. In contrast, in Paper V an attempt is made to apply the GT method to single-source VLBI observations. In such observations, the object list includes only a few sources, a target source and two or three calibrators, and the observation lasts much less time than a multi-source experiment. For the CP calibration of a single-source observation, it is necessary to have a source with zero or known CP as one of the calibrators. If archival observations included such a source in their list of calibrators, GT could also be used for the archival data, enlarging the list of AGN with known CP at small spatial scales. Paper V also contains a calculation of the contributions of different sources of error to the uncertainty of the final result, and presents the first results for the blazar 0716+714.
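As a quick sanity check of the milliarcsecond resolution quoted above, the snippet below evaluates the interferometer diffraction limit θ ≈ λ/B. The 15 GHz observing frequency and 8000 km baseline are typical VLBI-array values chosen for illustration, not parameters taken from the papers.

```python
# Back-of-the-envelope VLBI angular resolution: theta ≈ lambda / B.
import math

C = 299_792_458.0                          # speed of light, m/s
RAD_TO_MAS = 180 / math.pi * 3600 * 1000   # radians -> milliarcseconds

def resolution_mas(freq_ghz: float, baseline_km: float) -> float:
    wavelength = C / (freq_ghz * 1e9)              # metres
    return (wavelength / (baseline_km * 1e3)) * RAD_TO_MAS

print(f"{resolution_mas(15.0, 8000.0):.2f} mas")   # ~0.5 mas at 15 GHz
```

An Earth-sized baseline at centimetre wavelengths thus lands naturally at sub-milliarcsecond scales, consistent with the ~1 mas figure in the abstract.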
Abstract:
In today's manufacturing industry, the role of various robots and automated production stages is very significant. With current systems, precisely planned motions and operation phases can be timed accurately with respect to each other, so that even in various fault situations the system can act as the situation requires. A further advantage of automation is that production can be adapted to manufacturing different products with small changes, keeping production costs low even for small production batches. In machines with several axes, so-called multi-axis drives, the operating accuracy of the machine depends on the accuracy of every motion axis. Motion control has traditionally used a feedforward-compensated cascaded position control loop, whose tuning takes into account the different dynamic states of the axis and the references used. In many current distributed systems, i.e. multi-axis drives in which each axis has its own controller, the position error of an individual axis is not taken into account in the control of the other axes. This work studies different control methods for multi-axis systems, and the performance of the feedforward-compensated position cascade in multi-axis use is improved by introducing, alongside the position controller, a second controller whose input is the position difference between the axes.
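The cross-coupling idea can be sketched in a few lines: each axis keeps its own position controller, and an additional controller acts on the position difference (synchronisation error) between the axes. This is a minimal sketch under assumed first-order axis dynamics with invented gains, not the cascade structure or tuning from the thesis.

```python
# Two-axis tracking with a cross-coupling controller on the inter-axis error.
# Axis dynamics, gains and the reference ramp are invented for illustration.

def simulate(kp=4.0, k_sync=3.0, dt=0.001, steps=2000, ref_speed=0.1):
    x1 = x2 = 0.0
    for n in range(steps):
        ref = ref_speed * n * dt                 # common ramp reference
        e1, e2 = ref - x1, ref - x2              # per-axis tracking errors
        e_sync = x1 - x2                         # inter-axis position error
        # Per-axis P control plus cross-coupling correction:
        v1 = kp * e1 - k_sync * e_sync
        v2 = kp * e2 + k_sync * e_sync
        # Unequal axis dynamics (axis 2 is "slower") create the mismatch
        # that the cross-coupling term works to remove.
        x1 += dt * v1
        x2 += dt * 0.7 * v2
    return ref, x1, x2

ref, x1, x2 = simulate()
print(f"ref={ref:.4f}  x1={x1:.4f}  x2={x2:.4f}  sync error={x1 - x2:.5f}")
```

With k_sync set to zero the axes drift apart according to their differing dynamics; the coupling term trades a little individual tracking accuracy for much smaller inter-axis error, which is what matters for contouring accuracy.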
Abstract:
According to the participant role approach (Salmivalli, Lagerspetz, Björkqvist, Österman, & Kaukiainen, 1996), bullying is a group phenomenon that is largely enabled and maintained by classmates taking on different participant roles (e.g., reinforcers or assistants of the bully). There is, however, very little evidence on whether bystander behaviors actually affect the risk of victimization. Furthermore, the participant role approach implies that bystanders should be mobilized in putting an end to bullying. This view has been put into practice in the KiVa antibullying program, but whether the program is effective had not yet been investigated. Four studies were conducted to investigate (a) whether the behaviors of bystanders have an effect on the risk of victimization (Study I) and (b) whether the KiVa program reduces bullying and victimization and has other beneficial effects as well (Studies II–IV). The participants included large samples of elementary and lower secondary school students (Grades 1–9) from Finland. The assessments were done with web-based questionnaires including questions about bullying and victimization (both self- and peer reports) and about several bullying-related constructs. The results of this thesis suggest that bystander behaviors in bullying situations may influence the risk of victimization of vulnerable students. Moreover, the results indicate that the KiVa antibullying program is effective in reducing victimization and bullying. The program effects are larger in elementary schools than in lower secondary schools, and in Grades 8 and 9 they are larger for boys than for girls on some peer-reported outcomes. The magnitude of the overall effects can be considered practically significant, given that they were obtained in a large-scale dissemination of the program.
Abstract:
Energy efficiency is one of the major objectives that must be achieved in order to use the world's limited energy resources in a sustainable way. Since radiative heat transfer is the dominant heat transfer mechanism in most fossil fuel combustion systems, more accurate insight and models can improve the energy efficiency of newly designed combustion systems. The radiative properties of combustion gases are highly wavelength dependent, so better models for calculating them are much needed in the modeling of large-scale industrial combustion systems. With detailed knowledge of the spectral radiative properties of gases, the modeling of combustion processes in different applications can be more accurate. In order to propose a new method for effective non-gray modeling of radiative heat transfer in combustion systems, different models for the spectral properties of gases, including the SNBM, EWBM, and WSGGM, were studied in this research. Using this detailed analysis of different approaches, the thesis presents new methods for gray and non-gray radiative heat transfer modeling in homogeneous and inhomogeneous H2O–CO2 mixtures at atmospheric pressure. The proposed method is able to support the modeling of a wide range of combustion systems, including the oxy-fired combustion scenario. The new methods are based on implementing pre-obtained correlations for the total emissivity and band absorption coefficient of H2O–CO2 mixtures at different temperatures, gas compositions, and optical path lengths. They can easily be used within any commercial CFD software for radiative heat transfer modeling, resulting in more accurate, simple, and fast calculations. The new methods were successfully used in CFD modeling by applying them to an industrial-scale backpass channel under oxy-fired conditions. The developed approaches are more accurate than other methods; moreover, they can provide a complete explanation and detailed analysis of radiative heat transfer in different systems under different combustion conditions. The methods were verified by applying them to several benchmarks, where they showed a good level of accuracy and computational speed compared to other methods. Furthermore, the implementation of the suggested banded approach in CFD software is easy and straightforward.
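To make the weighted-sum-of-gray-gases (WSGG) idea concrete, the sketch below evaluates the standard form eps = sum_i a_i(T) * (1 - exp(-k_i * pL)). The weighting polynomials and absorption coefficients are invented placeholders; real WSGG coefficient sets, like those the thesis builds its correlations from, are fitted to spectral databases for specific H2O/CO2 ratios.

```python
# Minimal WSGG total emissivity evaluation with hypothetical coefficients.
import math

# Three hypothetical gray gases: (k in 1/(atm*m), weight a(T) linear in T [K]).
GRAY_GASES = [
    (0.4,   lambda T: 0.30 + 0.05e-3 * T),
    (6.0,   lambda T: 0.25 - 0.02e-3 * T),
    (120.0, lambda T: 0.10 + 0.01e-3 * T),
]

def total_emissivity(T: float, pL: float) -> float:
    """Total emissivity of an H2O-CO2 mixture at temperature T [K] and
    partial-pressure path length pL [atm*m]."""
    return sum(a(T) * (1.0 - math.exp(-k * pL)) for k, a in GRAY_GASES)

print(f"eps = {total_emissivity(1400.0, 1.0):.3f}")
```

Because each gray gas is spectrally flat, the sum can be carried through a CFD radiation solver band by band, which is what makes the approach cheap enough for large-scale furnace simulations.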
Abstract:
The main aims of the present report are to describe the current state of railway transport in Russia and to gather the standpoints of the Russian private transportation logistics sector on the development of a new railway connection called the Rail Baltica Growth Corridor, connecting North-West Russia with Germany through the Baltic States and Poland. North-West Russia plays an important role not only in Russian logistics but also in wider European markets, as container handling at its sea ports is approx. 2.5 million TEU p.a. and the handling volume in all terminals is above 190 million tons p.a. The whole transportation logistics sector is briefly described as an operational environment for railways, from both technical and economic angles. Transportation development always goes hand in hand with a country's economy, so an analysis of economic development is also presented. The logistics integration of the country is strongly influenced by its engagement in international trade. Although raw material handling at sea ports and container transport (imports) are blossoming, the domestic transportation market is barely growing in a long-term perspective. Thus, Russia's recent entry into the World Trade Organization (WTO) is also analyzed in this research, as the WTO is an important regulator of foreign trade and an enabler of volume growth in trade-related transportation logistics. However, WTO membership can negatively influence the development of Russia's own industry and its volumes (these have been uncompetitive in global markets for decades). Data for the empirical part was gathered through semi-structured case study interviews among private North-West Russian logistics sector actors. These were conducted during 2012-2013, and the research compiles findings from ten case company interviews. Although no sea port was involved in the study, most of the interviewed companies relied, for significant parts of their European logistics, on combined short sea shipping and truck transportation chains (on the Russian side also using railways). As a result of the study, it can be concluded that Rail Baltica is seen as a possible transport corridor by most of the interviewed companies, provided there is enough cargo available. However, the interviewees are somewhat sceptical, because major, large-scale infrastructural improvements are needed. Delivery time, frequency and price level are the three main factors influencing the attractiveness of the Rail Baltica route. Price level is the most important feature, but if Rail Baltica can offer other advantages, such as higher frequency, shorter lead times or a more developed set of value-added services, then some flexibility in the price level is possible. Environmental issues are not the main criteria today, but they are recognized and discussed among customers. Great uncertainty exists among respondents, e.g. about the forthcoming sulphur oxide limits on Baltic Sea shipping (and whether or not they will be implemented in Russia). Rather surprisingly, transportation routes to Eastern Europe and the Mediterranean area carry higher value and more price headroom than those to Germany/Central Europe. Border-crossing operations (the rail traction monopoly and customs), gauge widths, and unclear decision-making processes (in Russia) are named as hindering factors. Among Russian logistics sector representatives, performance standards for European-connected logistics are less demanding than in the neighbouring EU countries.