19 results for Linear Attention, Conditional Language Model, Natural Language Generation, FLAX, Rare diseases
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Concentrated solar power (CSP) is a renewable energy technology that could contribute to overcoming global problems related to pollution emissions and increasing energy demand. CSP utilizes solar irradiation, which is a variable source of energy. To utilize CSP technology in energy production and to operate a solar field with a thermal energy storage system reliably, dynamic simulation tools are needed to study the dynamics of the solar field, optimize production and develop control systems. The objective of this Master’s Thesis is to compare different concentrated solar power technologies and to configure a dynamic solar field model of one selected CSP field design in the dynamic simulation program Apros, owned by VTT and Fortum. The configured model is based on the German company Novatec Solar’s linear Fresnel reflector design. Solar collector components, including dimensions and performance calculation, were developed, as well as a simple solar field control system. The preliminary simulation results of two simulation cases under clear-sky conditions were good: the desired, stable superheated steam conditions were maintained in both cases, while, as expected, the amount of steam produced was reduced in the case with lower irradiation. As a result of the model development process, it can be concluded that the configured model works successfully and that Apros is a very capable and flexible tool for configuring new solar field models and control systems and for simulating solar field dynamic behaviour.
Abstract:
The subject of the thesis is automatic sentence compression with machine learning, such that the compressed sentences remain grammatical and retain their essential meaning. There are multiple possible uses for the compression of natural language sentences. In this thesis the focus is the generation of television programme subtitles, which are often a compressed version of the original script of the programme. The main part of the thesis consists of machine learning experiments for automatic sentence compression using different approaches to the problem. The machine learning methods used for this work are linear-chain conditional random fields (CRFs) and support vector machines. We also examine which automatic text analysis methods provide useful features for the task. The data used for machine learning was supplied by Lingsoft Inc. and consists of subtitles in both compressed and uncompressed form. The models are compared to a baseline system, and comparisons are made both automatically and by human evaluation, because of the potentially subjective nature of the output. The best result is achieved using a CRF sequence classifier with a rich feature set. All text analysis methods help classification, and the most useful method is morphological analysis. The subject of this thesis is the automatic compression of Finnish-language sentences, such that the shortened sentences retain their essential information and remain grammatical. Compressing natural language sentences has many uses, but in this thesis the topic is approached through television subtitling, which in practice involves shortening the original text so that it fits better on the television screen. The thesis experiments with different machine learning methods for automatic text compression and examines how well different natural language analysis methods produce information that helps these methods shorten sentences.
In addition, we examine what kind of approach produces the best result. The machine learning methods used are a support vector machine and a linear-chain CRF. The machine learning is based on subtitles at different stages of processing, obtained from Lingsoft Inc. Finally, the outputs of the created models are evaluated automatically and, because the resulting text is to some degree subjective, also by human evaluation. A method from the literature serves as the point of comparison. As the result of the thesis, the best outcome is achieved using a CRF sequence classifier with a rich feature set. All of the text analysis methods tested help classification, with morphological analysis providing the most important contribution.
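The framing described above can be sketched in a few lines: sentence compression becomes per-token sequence labeling, where each token gets a KEEP or DROP label and the compressed sentence keeps only the KEEP tokens. In the thesis the labels are predicted by a trained linear-chain CRF over rich morphological features; in this illustrative sketch a hypothetical hand-picked drop set stands in for the learned model.

```python
# Sentence compression as sequence labeling: each token receives a
# KEEP or DROP label. A real system (as in the thesis) would infer the
# labels with a linear-chain CRF; the DROPPABLE set below is a
# hypothetical stand-in for that learned model.

DROPPABLE = {"very", "really", "quite", "just"}  # assumed, not learned

def label_tokens(tokens):
    """Assign KEEP/DROP labels to each token (stand-in for CRF inference)."""
    return ["DROP" if t.lower() in DROPPABLE else "KEEP" for t in tokens]

def compress(sentence):
    """Compress a sentence by removing tokens labeled DROP."""
    tokens = sentence.split()
    labels = label_tokens(tokens)
    return " ".join(t for t, l in zip(tokens, labels) if l == "KEEP")

print(compress("The subtitles are really just a compressed version"))
# -> "The subtitles are a compressed version"
```

The same decomposition (labeling, then filtering) applies unchanged when the labeler is a CRF or an SVM over per-token features.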
Abstract:
The last two decades have seen a rapid change in the global economic and financial situation: the economic conditions in many small and large underdeveloped countries started to improve, and they became recognized as emerging markets. This led to growth in the amount of global investment in these countries, partly spurred by expectations of higher returns, favorable risk-return opportunities, and better diversification alternatives for global investors. This process, however, has not been without problems, and it has emphasized the need for more information on these markets. In particular, the liberalization of financial markets around the world, the globalization of trade and companies, the recent formation of economic and regional blocks, and the rapid development of underdeveloped countries during the last two decades have brought a major challenge to the financial world and researchers alike. This doctoral dissertation studies one of the largest emerging markets, namely Russia. The motivation for investigating the Russian equity market includes, among other factors, its sheer size, rapid and robust economic growth since the turn of the millennium, its future prospects for international investors, and a number of important financial reforms implemented since the early 1990s. Another interesting feature of the Russian economy, which gives further motivation to study the Russian market, is Russia’s 1998 financial crisis, considered one of the worst crises in recent times, affecting both developed and developing economies. Therefore, special attention has been paid to Russia’s 1998 financial crisis throughout this dissertation. This thesis covers the period from the birth of the modern Russian financial markets to the present day. Special attention is given to international linkages and the 1998 financial crisis. This study first identifies the risks associated with the Russian market and then deals with their pricing issues.
Finally, some insights into portfolio construction within the Russian market are presented. The first research paper of this dissertation considers the linkage of the Russian equity market to the world equity market by examining the international transmission of Russia’s 1998 financial crisis, utilizing the GARCH-BEKK model proposed by Engle and Kroner. Empirical results show evidence of a direct linkage between the Russian equity market and the world market in terms of both returns and volatility. However, the weakness of the linkage suggests that the Russian equity market was only partially integrated into the world market, even though contagion can be clearly seen during the crisis period. The second and third papers, co-authored with Mika Vaihekoski, investigate whether global, local and currency risks are priced in the Russian stock market from a US investor’s point of view. Furthermore, the dynamics of these sources of risk are studied, i.e., whether the prices of the global and local risk factors are constant or time-varying. We utilize the multivariate GARCH-M framework of De Santis and Gérard (1998). Similar to them, we find the price of global market risk to be time-varying. Currency risk is also found to be priced and highly time-varying in the Russian market. Moreover, our results suggest that the Russian market is partially segmented and that local risk is also priced in the market. The model also implies that the biggest impact on the US market risk premium comes from the world risk component, whereas the Russian risk premium is on average caused mostly by the local and currency components. The purpose of the fourth paper is to look at the relationship between the stock and bond markets of Russia. The objective is to examine whether the correlations between the two asset classes are time-varying, using multivariate conditional volatility models.
The Constant Conditional Correlation model by Bollerslev (1990), the Dynamic Conditional Correlation model by Engle (2002), and an asymmetric version of the Dynamic Conditional Correlation model by Cappiello et al. (2006) are used in the analysis. The empirical results do not support the assumption of constant conditional correlation: there is clear evidence of time-varying correlations between the Russian stock and bond markets, and both asset markets exhibit positive asymmetries. The implications of the results in this dissertation are useful for both companies and international investors who are interested in investing in Russia. Our results give useful insights to those involved in minimising or managing financial risk exposures, such as portfolio managers, international investors, risk analysts and financial researchers. When portfolio managers aim to optimize the risk-return relationship, the results indicate that, at least in the case of Russia, one should account for the local market as well as currency risk when calculating the key inputs for the optimization. In addition, the pricing of exchange rate risk implies that exchange rate exposure is partly non-diversifiable and investors are compensated for bearing the risk. Likewise, the international transmission of stock market volatility can profoundly influence corporate capital budgeting decisions, investors’ investment decisions, and other business cycle variables. Finally, the weak integration of the Russian market and the low correlations between the Russian stock and bond markets offer international investors good opportunities to diversify their portfolios.
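The core output of the conditional correlation models above is a correlation series that changes over time. As a hedged sketch of that idea (not the actual DCC estimator, which requires GARCH likelihood estimation), an exponentially weighted moving average of the second moments produces a time-varying conditional correlation from two return series; the smoothing parameter and the toy return series are illustrative assumptions.

```python
# EWMA conditional correlation between two return series -- a simplified
# stand-in for the DCC model of Engle (2002) discussed in the abstract.
# lam is the smoothing parameter (0.94 is the classic RiskMetrics value).

def ewma_correlation(x, y, lam=0.94):
    """Return the series of EWMA conditional correlations between x and y."""
    # initialize the second moments from the first observation
    vx, vy, cxy = x[0] ** 2, y[0] ** 2, x[0] * y[0]
    corrs = []
    for xt, yt in zip(x, y):
        vx = lam * vx + (1 - lam) * xt ** 2     # conditional variance of x
        vy = lam * vy + (1 - lam) * yt ** 2     # conditional variance of y
        cxy = lam * cxy + (1 - lam) * xt * yt   # conditional covariance
        corrs.append(cxy / (vx ** 0.5 * vy ** 0.5))
    return corrs

# illustrative stock and bond return series (not real data)
stock = [0.01, -0.02, 0.015, -0.01, 0.02]
bond = [0.005, -0.01, 0.004, -0.002, 0.01]
rho = ewma_correlation(stock, bond)
```

A constant-correlation model would output a single number instead of a series; the dissertation's tests reject exactly that restriction for Russian data.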
Abstract:
This Master’s thesis was commissioned by Patria Vehicles Oy, whose production includes military vehicles designed for demanding off-road conditions. The purpose of the study was to develop procedural guidelines for how FEM analysis can be used during product development to study the vibration characteristics and dynamic behaviour of the steel structures of a vehicle body. The Ideas FEM software was used in the study. Solving dynamic problems requires an understanding of the dynamic behaviour of structures, and the behaviour and deformations of structures must be studied at critical natural frequencies. The study clarifies how eigenfrequency and response analyses can be performed with a finite element model of the vehicle. Eigenfrequency analysis determines the natural modes and frequencies of the structure. Response analysis is used to study the effects of different excitations on the dynamic behaviour of the vehicle, and to determine the responses caused by the excitations and the transmissibility of the excitations within the structure. In addition, the actual stresses and displacements caused by the excitations are studied in order to determine the true loading of the structure. The analyses can be used to study how the vehicle body structure should be stiffened and damped so that harmful noise and vibration do not occur.
Abstract:
This thesis investigates the effectiveness of time-varying hedging during the financial crisis of 2007 and the European debt crisis of 2010. The seven test economies are members of the European Monetary Union and are in different economic conditions. The time-varying hedge ratio was constructed using conditional variances and correlations, which were estimated using multivariate GARCH models. We have used three different underlying portfolios: national equity markets, government bond markets and the combination of these two. These underlying portfolios were hedged using credit default swaps. The empirical part includes in-sample and out-of-sample analyses, constructed using constant and dynamic models. In almost every case, the dynamic models outperform the constant ones in the determination of the hedge ratio. We could not find any statistically significant evidence to support the use of the asymmetric dynamic conditional correlation model. Our findings are in line with prior literature and support the use of a time-varying hedge ratio. Finally, we found that in some cases credit default swaps are not suitable instruments for hedging and act more as speculative instruments.
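Given the conditional variances and correlations from a multivariate GARCH model, the time-varying minimum-variance hedge ratio follows from a simple formula each period. The sketch below assumes illustrative conditional moments (they are not estimated from data); the formula itself is the standard one, h_t = Cov_t(r_p, r_c) / Var_t(r_c) = rho_t * sigma_p,t / sigma_c,t, where r_p is the portfolio return and r_c the hedge-instrument (here CDS) return.

```python
# Minimum-variance hedge ratio from conditional moments per period.
# Inputs are illustrative placeholders for multivariate GARCH output.

def hedge_ratio(rho_t, sigma_p_t, sigma_c_t):
    """h_t = rho_t * sigma_p_t / sigma_c_t (time-varying hedge ratio)."""
    return rho_t * sigma_p_t / sigma_c_t

# one (correlation, portfolio vol, CDS vol) triple per period -- assumed values
moments = [(-0.4, 0.02, 0.05), (-0.6, 0.025, 0.04), (-0.5, 0.018, 0.045)]
ratios = [hedge_ratio(*m) for m in moments]
print(ratios[0])  # approximately -0.16
```

A constant model would fix rho and the volatilities once over the whole sample; the thesis finds that letting them vary period by period usually yields better hedges.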
Abstract:
CHARGE syndrome, Sotos syndrome and 3p deletion syndrome are examples of rare inherited syndromes that have been recognized for decades but for which molecular diagnostics has only been made possible by recent advances in genomic research. Despite these advances, the development of diagnostic tests for rare syndromes has been hindered by diagnostic laboratories having limited funds for test development, and by their prioritization of tests for which a (relatively) high demand can be expected. In this study, molecular diagnostic tests for CHARGE syndrome and Sotos syndrome were developed, resulting in their successful translation into routine diagnostic testing in the laboratory of Medical Genetics (UTUlab). In the CHARGE syndrome group, a mutation was identified in 40.5% of the patients, and in the Sotos syndrome group in 34%, reflecting the use of the tests in routine differential diagnostics. In CHARGE syndrome, the low prevalence of structural aberrations was also confirmed. In 3p deletion syndrome, it was shown that small terminal deletions are not causative for the syndrome, and that array-based analysis provides a reliable estimate of the deletion size, although benign copy number variants complicate result interpretation. During the development of the tests, it was discovered that finding an optimal molecular diagnostic strategy for a given syndrome is always a compromise between the sensitivity, specificity and feasibility of applying a new method. In addition, the clinical utility of the test should be considered prior to test development: sometimes a test performing well in a laboratory has limited utility for the patient, whereas a test performing poorly in the laboratory may have a great impact on the patient and their family. At present, the development of next-generation sequencing methods is changing the concept of molecular diagnostics of rare diseases from single tests towards whole-genome analysis.
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem: learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and the automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how these techniques can be implemented efficiently. The contributions of this thesis are as follows.
First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the best-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
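The leave-pair-out procedure mentioned above can be sketched concretely: every (positive, negative) pair is held out in turn, a model is trained on the remaining data, and the AUC estimate is the fraction of held-out pairs the model orders correctly. The "model" below is a deliberately trivial stand-in (it scores a point by its distances to the training class means), not RankRLS; the data is a toy one-dimensional sample.

```python
# Leave-pair-out cross-validation for AUC estimation.

def train(xs, ys):
    """Toy scorer: distance to mean of negatives minus distance to mean of
    positives, so higher scores indicate 'more positive'."""
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    mp, mn = sum(pos) / len(pos), sum(neg) / len(neg)
    return lambda x: abs(x - mn) - abs(x - mp)

def leave_pair_out_auc(xs, ys):
    """Hold out each (positive, negative) pair; AUC = fraction ordered right."""
    idx_pos = [i for i, y in enumerate(ys) if y == 1]
    idx_neg = [i for i, y in enumerate(ys) if y == 0]
    correct, total = 0, 0
    for i in idx_pos:
        for j in idx_neg:
            rest = [k for k in range(len(xs)) if k not in (i, j)]
            f = train([xs[k] for k in rest], [ys[k] for k in rest])
            total += 1
            if f(xs[i]) > f(xs[j]):
                correct += 1
    return correct / total

xs = [0.9, 0.8, 0.7, 0.2, 0.3, 0.1]   # toy scores, linearly separable
ys = [1, 1, 1, 0, 0, 0]
print(leave_pair_out_auc(xs, ys))  # 1.0 on this separable toy data
```

The computational shortcuts developed in the thesis matter precisely because the naive version above retrains the model once per held-out pair.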
Abstract:
Linguistic modelling is a rather new branch of mathematics that is still undergoing rapid development. It is closely related to fuzzy set theory and fuzzy logic, but knowledge and experience from other fields of mathematics, as well as other fields of science including linguistics and behavioral sciences, is also necessary to build appropriate mathematical models. This topic has received considerable attention as it provides tools for mathematical representation of the most common means of human communication - natural language. Adding a natural language level to mathematical models can provide an interface between the mathematical representation of the modelled system and the user of the model - one that is sufficiently easy to use and understand, but yet conveys all the information necessary to avoid misinterpretations. It is, however, not a trivial task and the link between the linguistic and computational level of such models has to be established and maintained properly during the whole modelling process. In this thesis, we focus on the relationship between the linguistic and the mathematical level of decision support models. We discuss several important issues concerning the mathematical representation of meaning of linguistic expressions, their transformation into the language of mathematics and the retranslation of mathematical outputs back into natural language. In the first part of the thesis, our view of the linguistic modelling for decision support is presented and the main guidelines for building linguistic models for real-life decision support that are the basis of our modeling methodology are outlined. From the theoretical point of view, the issues of representation of meaning of linguistic terms, computations with these representations and the retranslation process back into the linguistic level (linguistic approximation) are studied in this part of the thesis. 
We focus on the reasonability of operations with the meanings of linguistic terms, the correspondence of the linguistic and mathematical levels of the models, and the proper presentation of appropriate outputs. We also discuss several issues concerning the ethical aspects of decision support – particularly the loss of meaning due to the transformation of mathematical outputs into natural language and the issue of responsibility for the final decisions. In the second part, several case studies of real-life problems are presented. These provide background, necessary context and motivation for the mathematical results and models presented in this part. A linguistic decision support model for disaster management is presented here – formulated as a fuzzy linear programming problem – and a heuristic solution to it is proposed. Uncertainty of outputs, expert knowledge concerning disaster response practice and the necessity of obtaining outputs that are easy to interpret (and available in a very short time) are reflected in the design of the model. Saaty’s analytic hierarchy process (AHP) is considered in two case studies – first in the context of the evaluation of works of art, where a weak consistency condition is introduced and an adaptation of AHP for large matrices of preference intensities is presented. The second AHP case study deals with the fuzzified version of AHP and its use for evaluation purposes – particularly the integration of peer review into the evaluation of R&D outputs. In the context of HR management, we present a fuzzy rule-based evaluation model (academic faculty evaluation is considered) constructed to provide outputs that do not require linguistic approximation and are easily transformed into graphical information. This is achieved by designing a specific form of fuzzy inference.
Finally, the last case study is from the area of the humanities: psychological diagnostics is considered, and a linguistic fuzzy model for the interpretation of the outputs of multidimensional questionnaires is suggested. The issue of the quality of data in mathematical classification models is also studied here. A modification of the receiver operating characteristic (ROC) method is presented to reflect the variable quality of data instances in the validation set during classifier performance assessment. Twelve publications on which the author participated are appended as the third part of this thesis. These summarize the mathematical results and provide a closer insight into the practical applications considered in the second part of the thesis.
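The general mechanism behind a quality-aware ROC assessment can be sketched as follows: each validation instance carries a quality weight in [0, 1], and each (positive, negative) pair contributes to the AUC in proportion to the product of its weights, so unreliable instances influence the estimate less. The exact weighting scheme in the thesis may differ; this is only an illustration of the idea, with assumed scores, labels and weights.

```python
# Quality-weighted AUC: pairs of unreliable instances count for less.

def weighted_auc(scores, labels, quality):
    """AUC over (positive, negative) pairs weighted by instance quality."""
    num, den = 0.0, 0.0
    for si, yi, qi in zip(scores, labels, quality):
        for sj, yj, qj in zip(scores, labels, quality):
            if yi == 1 and yj == 0:      # one positive-negative pair
                w = qi * qj              # pair weight from data quality
                den += w
                if si > sj:
                    num += w
                elif si == sj:
                    num += 0.5 * w       # ties count half, as in plain AUC
    return num / den

scores = [0.9, 0.6, 0.4, 0.2]
labels = [1, 1, 0, 0]
quality = [1.0, 0.2, 1.0, 0.5]   # the second positive is a dubious instance
print(weighted_auc(scores, labels, quality))  # 1.0: ordering is still perfect
```

With all weights equal to 1 the formula reduces to the ordinary AUC, which is the sanity check one would apply to any such modification.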
Abstract:
A rotating machine usually consists of a rotor and the bearings that support it. Non-idealities in these components may excite vibration of the rotating system. Uncontrolled vibrations may lead to excessive wear of the components of the rotating machine or reduce process quality. Vibrations may be harmful even when amplitudes are seemingly low, as is usually the case in superharmonic vibration, which takes place below the first critical speed of the rotating machine. Superharmonic vibration is excited when the rotational velocity of the machine is a fraction of the natural frequency of the system. In such a situation, a part of the machine’s rotational energy is transformed into vibration energy. The amount of vibration energy should be minimised in the design of rotating machines. Superharmonic vibration phenomena can be studied by analysing the coupled rotor-bearing system employing a multibody simulation approach. This research is focused on the modelling of hydrodynamic journal bearings and rotor-bearing systems supported by journal bearings. In particular, the non-idealities affecting the rotor-bearing system and their effect on the superharmonic vibration of the rotating system are analysed. A comparison of computationally efficient journal bearing models is carried out in order to validate one model for further development. The selected bearing model is improved in order to take the waviness of the shaft journal into account. The improved model is implemented and analyzed in a multibody simulation code. A rotor-bearing system that consists of a flexible tube roll, two journal bearings and a supporting structure is analysed employing the multibody simulation technique. The modelled non-idealities are the shell thickness variation in the tube roll and the waviness of the shaft journal in the bearing assembly. Both modelled non-idealities may cause subharmonic resonance in the system.
In multibody simulation, the coupled effect of the non-idealities can be captured in the analysis. Additionally, one non-ideality is presented that does not itself excite vibrations but affects the response of the rotor-bearing system, namely the waviness of the bearing bushing, which is the non-rotating part of the bearing system. The modelled system is verified with measurements performed on a test rig. The waviness of the bearing bushing was not measured, and therefore its effect on the response was not verified. In conclusion, the selected modelling approach is an appropriate method for analysing the response of the rotor-bearing system. When comparing the simulated results to the measured ones, the overall agreement between the results is concluded to be good.
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves the prediction of an ordering of the data points rather than the prediction of a single numerical value, as in regression, or of a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from a vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in the bioinformatics domain.
The training of kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the problem of efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
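One classical example of a kernel for non-vectorial (string) data, of the kind used for tasks such as remote homology detection, is the k-spectrum kernel: K(s, t) is the inner product of the k-mer count vectors of the two strings. This is a generic textbook kernel chosen for illustration, not necessarily one of the kernels proposed in the thesis.

```python
# k-spectrum kernel: similarity of two strings via shared length-k substrings.
from collections import Counter

def spectrum_kernel(s, t, k=3):
    """Inner product of the k-mer count vectors of s and t."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[m] * ct[m] for m in cs)

print(spectrum_kernel("GATTACA", "ATTAC", k=3))  # 3 shared 3-mer products
```

Because the kernel is an inner product in an explicit feature space (k-mer counts), it is positive semi-definite and can be plugged directly into any kernel method, including the regularized least-squares rankers described above.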
Abstract:
The ability to recognize potential knowledge and convert it into business opportunities is one of the key factors of renewal in uncertain environments. This thesis examines absorptive capacity in the context of non-research and development innovation, with a primary focus on the social interaction that facilitates the absorption of knowledge. It proposes that everyone is and should be entitled to take part in the social interaction that shapes individual observations into innovations. Both innovation and absorptive capacity have been traditionally related to research and development departments and institutions. These innovations need to be adopted and adapted by others. This so-called waterfall model of innovations is only one aspect of new knowledge generation and innovation. In addition to this Science–Technology–Innovation perspective, more attention has been recently paid to the Doing–Using–Interacting mode of generating new knowledge and innovations. The amount of literature on absorptive capacity is vast, yet the concept is reified. The greater part of the literature links absorptive capacity to research and development departments. Some publications have focused on the nature of absorptive capacity in practice and the role of social interaction in enhancing it. Recent literature on absorptive capacity calls for studies that shed light on the relationship between individual absorptive capacity and organisational absorptive capacity. There has also been a call to examine absorptive capacity in non-research and development environments. Drawing on the literature on employee-driven innovation and social capital, this thesis looks at how individual observations and ideas are converted into something that an organisation can use. The critical phases of absorptive capacity, during which the ideas of individuals are incorporated into a group context, are assimilation and transformation. 
These two phases are seen as complementary: whereas assimilation is the application of easy-to-accept knowledge, transformation challenges the current way of thinking. The two require distinct kinds of social interaction and practices. The results of this study can be crystallised thus: “Enhancing absorptive capacity in a practice-based, non-research and development context means organising the optimal circumstances for social interaction. Every individual is a potential source of signals leading to innovations. The individual, thus, recognises opportunities and acquires signals. Through the social interaction processes of assimilation and transformation, these signals are processed into the organisation’s reality and language. The conditions of creative social capital facilitate the interplay between assimilation and transformation. An organisation that strives for employee-driven innovation gains the benefits of a broader surface for opportunity recognition and faster absorption.” If organisations and managers become more aware of the benefits of enhancing absorptive capacity in practice, they have reason to assign resources to those practices that facilitate the creation of absorptive capacity. By recognising the underlying social mechanisms and structural features that lead either to assimilation or transformation, it is easier to balance between renewal and effective operations.
Abstract:
This dissertation takes as its starting point a questioning of the effectiveness of the EU’s conditionality policy regarding minority rights. Based on the rationalist theoretical model, the External Incentives Model of Governance, this hypothesis-testing dissertation aims to explain whether the temporal distance to potential EU membership affects the level of legislation on minority language rights. The measurement of the level of legislation on minority language rights is limited to non-discrimination, the use of minority languages in official contexts, and the linguistic rights of minorities in education. Methodologically, a comparative approach is used both with regard to the time frame of the study, which extends from 2003 to 2010, and with regard to the selection of states. On the basis of the "most similar systems" design, the states are categorized into three groups according to their different temporal distances from potential EU membership. The hypothesis tested is the following: the shorter the temporal distance to potential EU membership, the greater the likelihood that the states’ level of legislation in the three areas studied has developed to a high level. The study shows that the hypothesis is only partially confirmed. The results regarding non-discrimination show that the relationship between temporal distance and the level of legislation grew markedly stronger during the period studied. However, this relationship was strengthened only between the category of states furthest in time from potential EU membership and the two categories closer and closest to potential EU membership. The results regarding the use of minority languages in official contexts and the linguistic rights of minorities in education show no relationship, and almost no relationship respectively, between temporal distance and the development of legislation between 2003 and 2010.
Resumo:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
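The firing discipline described above, nodes that communicate only through queues and fire once sufficient inputs are available, can be sketched in a few lines. This is a minimal hypothetical illustration (the `Node` class and the tiny graph are not part of RVC-CAL or the thesis):

```python
from collections import deque

class Node:
    """A dataflow node: may fire when every input queue holds a token."""
    def __init__(self, fn, arity):
        self.fn = fn                                    # the calculation this node performs
        self.inputs = [deque() for _ in range(arity)]   # one token queue per incoming edge
        self.outputs = []                               # downstream input queues to push into

    def can_fire(self):
        # A node fires independently once sufficient inputs are available.
        return all(q for q in self.inputs)

    def fire(self):
        args = [q.popleft() for q in self.inputs]       # consume inputs
        result = self.fn(*args)                         # perform the calculation
        for q in self.outputs:                          # produce outputs
            q.append(result)

# Build a tiny graph computing (a + b) * 2; the edge is a shared queue.
add = Node(lambda x, y: x + y, arity=2)
dbl = Node(lambda x: x * 2, arity=1)
add.outputs.append(dbl.inputs[0])
sink = deque()
dbl.outputs.append(sink)

add.inputs[0].append(3)
add.inputs[1].append(4)

# Naive scheduler: repeatedly fire any node that is ready.
for node in (add, dbl):
    while node.can_fire():
        node.fire()

print(sink[0])  # → 14
```

Note that the queues make each data dependency explicit, so the scheduler needs no knowledge of what the nodes compute, which is exactly what exposes the parallelism.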
The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which are able to produce quasi-static schedulers for a wide range of applications.
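The quasi-static idea, a few run-time decisions selecting among pre-computed static schedules, can be sketched as follows. The actor names and the mode guard here are entirely hypothetical, standing in for whatever the model checker would actually derive:

```python
# Quasi-static scheduling sketch (hypothetical actor names): most firing
# decisions are pre-computed into static sequences; only one guard is
# evaluated at run time to pick which sequence to execute.

static_schedules = {
    # mode -> pre-computed firing order; no scheduling tests inside a sequence
    "intra": ["parse", "predict", "transform", "write"],
    "inter": ["parse", "motion", "transform", "write"],
}

def run_time_guard(token):
    # The single dynamic decision: inspect one token to choose a mode.
    return "intra" if token["type"] == "I" else "inter"

def schedule(token, fire):
    mode = run_time_guard(token)           # dynamic part, evaluated once
    for actor in static_schedules[mode]:   # static part: fixed firing order
        fire(actor, token)

fired = []
schedule({"type": "I"}, lambda actor, tok: fired.append(actor))
print(fired)  # → ['parse', 'predict', 'transform', 'write']
```

The pay-off is that the per-firing rule evaluation of fully dynamic scheduling is replaced by one guard per sequence; the model checker's job, in this framing, is to prove that each static sequence is always executable once its guard holds.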
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
Resumo:
As the national language of the PRC, the world's growing economic power and the sovereign of Hong Kong, Putonghua is a language with multiple facets of relevance for the current Special Administrative Region. This paper seeks to explore and explain different representations of Putonghua in Hong Kong's leading English-language newspaper, the South China Morning Post, in articles published between January 2012 and February 2013. The representations are studied in the context of the different discourses in which they appear, some of which feature language(s) as a central theme, others more marginally. An overview is first presented of the scholarly research on the most important developments in Hong Kong's complex language scene from the beginnings of the colony until the present day, with the aim of detecting developments and attitudes with potential relevance or parallels to the context of Putonghua today. The paper then reflects on the media and its role in producing and perpetuating discourses in society, before turning to more practical considerations on Hong Kong's English- and Chinese-language media and the role of the South China Morning Post in it. The methods used in analysing the discourses are those of discourse analysis, with textual analysis as its starting point, in which close attention is paid to linguistic forms as the concrete representations of meanings in a text. In particular, the immediate contexts of the appearances of the word “Putonghua” in the articles were studied carefully to detect vocabulary, grammar and semantic choices as signs of different discourses, potentially also revealing fundamental underlying assumptions and other “hidden meanings” in the text.
Some of the most distinctive discourses in which different representations of Putonghua appeared were Instrumental value for the individual (in which Putonghua was represented as a form of social capital); Othering of the mainlanders (in which Putonghua served as a concrete marker of distinction); Belonging to China (Putonghua as a symbol of unity); and Cultural distinctiveness of Hong Kong (Putonghua as a threat to Hong Kong's history and culture, as embodied in Cantonese). Some of these discourses were more prominent than others; for example, the discourse of Belonging to China was relatively rarely enacted in Hongkongers' voices. In general, the findings were not surprising in light of the history, but showed a fair degree of consistency with what has been written earlier about the languages, and attitudes towards them, in Hong Kong. It has often been noted that Putonghua, and its relation to Cantonese, is a matter linked with the social identity of the colony and its citizens. While it appeared that there were no strict taboos in the representations of Putonghua in the societal context, the possibility of self-censorship cannot be ruled out as a factor toning down political discourses in the representations.