56 results for Languages, Modern.
Abstract:
The purpose of this study is to examine how well risk parity works in terms of risk, return and diversification relative to the more traditional minimum variance, 1/N and 60/40 portfolios. The risk parity portfolios were constructed from five risk sources: three common asset classes and two alternative beta investment strategies. The three common asset classes were equities, bonds and commodities, and the alternative beta investment strategies were carry trade and trend following. The risk parity portfolios were constructed using five different risk measures, of which four were tail risk measures. The risk measures were standard deviation, Value-at-Risk, Expected Shortfall, modified Value-at-Risk and modified Expected Shortfall. We also studied how sensitive risk parity is to the choice of risk measure. The hypothesis is that risk parity portfolios provide better return for the same amount of risk and are better diversified than the benchmark portfolios. We used two data sets: monthly data from the years 1989-2011 and weekly data from the years 2000-2011. The empirical studies showed that risk parity portfolios provide better diversification, since the diversification is made at the level of risk. The risk-based portfolios provided superior returns compared to the asset-based portfolios. Using tail risk measures in risk parity portfolios does not necessarily provide a better hedge against tail events than standard deviation.
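To make the construction concrete, below is a minimal sketch of equal-risk-contribution weighting under the volatility (standard deviation) risk measure; the fixed-point iteration and the toy covariance numbers are illustrative assumptions, not the thesis's exact data or procedure.

```python
# A minimal sketch of risk parity (equal risk contribution) weights under
# the volatility risk measure. The iteration scheme and the toy numbers
# are illustrative assumptions, not the data or method used in the thesis.
import numpy as np

def risk_parity_weights(cov, tol=1e-10, max_iter=10_000):
    """Fixed-point iteration towards weights whose risk contributions
    w_i * (cov @ w)_i / sigma are equal across all assets."""
    n = cov.shape[0]
    w = np.ones(n) / n                    # start from the 1/N portfolio
    for _ in range(max_iter):
        sigma = np.sqrt(w @ cov @ w)      # portfolio volatility
        marginal = cov @ w / sigma        # marginal risk of each asset
        w_new = (sigma / n) / marginal    # enforce w_i * marginal_i = sigma/n
        w_new /= w_new.sum()              # renormalize to full investment
        if np.max(np.abs(w_new - w)) < tol:
            break
        w = w_new
    return w

# Toy example with the five risk sources of the study:
# equities, bonds, commodities, carry trade, trend following.
vols = np.array([0.15, 0.05, 0.20, 0.10, 0.12])
corr = 0.2 + 0.8 * np.eye(5)             # uniform 0.2 off-diagonal correlation
cov = np.outer(vols, vols) * corr
w = risk_parity_weights(cov)
print(np.round(w, 3))                    # low-volatility assets get more weight
print(np.round(w * (cov @ w) / (w @ cov @ w), 3))  # equal risk contributions
```

The same weighting scheme would apply with any of the other risk measures by replacing the volatility-based marginal risk with the corresponding measure's marginal contribution.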
Abstract:
In this study, I examine the manifestations of multilingualism in the plays performed by pupils of the Canterbury cathedral school in the latter half of the seventeenth century, found in manuscript Lit.Ms.E41 (Canterbury Cathedral Archives). This manuscript contains speeches and plays whose languages are English, Latin and, to a lesser extent, Greek. Many of the plays feature code-switching between these languages, and in my thesis I investigate the syntactic forms and pragmatic meanings that the code-switching takes. My theoretical framework combines philological and linguistic approaches. In addition to earlier research on code-switching, I have incorporated Brown and Levinson's politeness theory, which makes it possible to classify in particular those functions of code-switching that relate to the social relationships between speakers. Since historical code-switching is still a rather new research topic, I discuss various methodological choices in detail. The method I have chosen combines traditional philological close reading with pragmatic analysis, through which concepts such as rationality and empathy inform the work. The analysis showed that an especially common function of code-switching is to enable intertextuality, which in turn can be used, for example, to express solidarity, i.e. social closeness, or to insult the addressee. Solidarity was also a common function of code-switching without intertextuality. Other functions of code-switching included face-threatening acts, euphemisms, stylistic effects and the facilitation of discourse. As regards the syntactic manifestations, the central finding was that distinguishing between code-switching and borrowing is not necessary, or even worthwhile, in all situations. It could also be concluded that the chosen method was well suited to analysing the material, and that it should, where possible, be applied to larger bodies of material and to the study of other pragmatic phenomena.
Abstract:
This thesis is part of the Arctic Materials Technologies Development project, which aims to research and develop manufacturing techniques, especially welding, for Arctic areas. The main target of this paper is to clarify what kinds of European metallic materials are used, or can be used, in the Arctic. These materials mainly include carbon steels, but also stainless steels and aluminium and its alloys. Standardized materials, their properties and some recent developments are introduced. Based on this thesis it can be said that carbon steels (shipbuilding and pipeline steels) have been developed according to the needs of industry, and steels exist which can be used in Arctic areas. Still, these steels cannot yet be fully exploited, because the relevant rules and standards are under development. The fracture behavior of the new ultra-high-strength steels is also not yet well enough understood, which means that the research methods (destructive and non-destructive) need to be developed too. Most of the new nickel-free austenitic and austenitic-ferritic stainless steels can be used in cold environments. Ferritic and martensitic stainless steels are being developed for better weldability, mainly in the nuclear industry. Aluminium alloys are well suited to subzero environments, and high-strength aluminium alloys are nowadays available also as thick sheets. Nanotechnology makes it possible to manufacture steels, stainless steels and aluminium alloys with even higher strength. Joining techniques need to be developed and examined properly to achieve an economical and safe way of joining these modern alloys.
Abstract:
Formal software development processes and well-defined development methodologies are nowadays seen as the definitive way to produce high-quality software within time limits and budgets. The variety of such high-level methodologies is huge, ranging from rigorous process frameworks like CMMI and RUP to more lightweight agile methodologies. The need for managing this variety, and the fact that practically every software development organization has its own unique set of development processes and methods, have created a profession of software process engineers. Different kinds of informal and formal software process modeling languages are essential tools for process engineers. These are used to define processes in a way which allows easy management of processes, for example process dissemination, process tailoring and process enactment. The process modeling languages are usually used as a tool for process engineering, where the main focus is on the processes themselves. This dissertation has a different emphasis: it analyses modern software development process modeling from the software developers' point of view. The goal of the dissertation is to investigate whether software process modeling and software process models aid software developers in their day-to-day work, and what the main mechanisms for this are. The focus of the work is on the Software Process Engineering Metamodel (SPEM) framework, which is currently one of the most influential process modeling notations in software engineering. The research theme is elaborated through six scientific articles which represent the dissertation research done on process modeling during an approximately five-year period. The research follows the classical engineering research discipline, where the current situation is analyzed, a potentially better solution is developed and finally its implications are analyzed. The research applies a variety of research techniques, ranging from literature surveys to qualitative studies done amongst software practitioners. The key finding of the dissertation is that software process modeling notations and techniques are usually developed in process engineering terms. As a consequence, the connection between the process models and actual development work is loose. In addition, modeling standards like SPEM are partially incomplete when it comes to pragmatic process modeling needs, such as lightweight modeling and combining pre-defined process components. This leads to a situation where the full potential of process modeling techniques for aiding daily development activities cannot be achieved. Despite these difficulties, the dissertation shows that it is possible to use modeling standards like SPEM to aid software developers in their work. The dissertation presents a lightweight modeling technique which software development teams can use to quickly analyze their work practices in a more objective manner. The dissertation also shows how process modeling can be used to compare different software development situations more easily and to analyze their differences in a systematic way. Models also help to share this knowledge with others. A qualitative study done amongst Finnish software practitioners verifies the conclusions of the other studies in the dissertation: although processes and development methodologies are seen as an essential part of software development, process modeling techniques are rarely used during daily development work.
However, the potential of these techniques intrigues the practitioners. In conclusion, the dissertation shows that process modeling techniques, most commonly used as tools for process engineers, can also be used as tools for organizing daily software development work. The work presents theoretical solutions for bringing process modeling closer to ground-level software development activities. These theories are shown to be feasible through several case studies, in which the modeling techniques are used, for example, to find differences in the work methods of the members of a software team and to share process knowledge with a wider audience.
Abstract:
ill., 14 x 22 cm
Abstract:
The thesis presents results obtained during the author's PhD studies. First, systems of language equations of a simple form, consisting of just two equations, are proved to be computationally universal. These are systems over a unary alphabet, which can be seen as systems of equations over the natural numbers. The systems contain only an equation X+A=B and an equation X+X+C=X+X+D, where A, B, C and D are eventually periodic constants. It is proved that for every recursive set S there exist natural numbers p and d, and eventually periodic sets A, B, C and D, such that a number n is in S if and only if np+d is in the unique solution of the above system of two equations; thus all recursive sets can be represented in an encoded form. It is also proved that not all recursive sets can be represented as they are, so the encoding is really needed. Furthermore, it is proved that the family of languages generated by Boolean grammars is closed under injective gsm-mappings and inverse gsm-mappings. The arguments apply also to the families of unambiguous Boolean languages, conjunctive languages and unambiguous languages. Finally, characterizations of the morphisms preserving subfamilies of the context-free languages are presented. It is shown that the families of deterministic and LL context-free languages are closed under codes if and only if the codes are of bounded deciphering delay. These families are also closed under those non-codes that map every letter into a submonoid generated by a single word. The family of unambiguous context-free languages is closed under all codes and under the same non-codes as the families of deterministic and LL context-free languages.
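Restated in standard notation (with $+$ denoting elementwise addition of sets of natural numbers, $S + T = \{\, s + t : s \in S,\ t \in T \,\}$), the system from the abstract reads:

$$X + A = B, \qquad X + X + C = X + X + D,$$

where $A$, $B$, $C$ and $D$ are eventually periodic constants, and the representation result states that $n \in S \iff np + d \in X$ for the unique solution $X$.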
Abstract:
The threat of global warming and its consequences are widely recognized, and the question of how to proceed with the long transition towards fossil-fuel-neutral economies concerns many nations and people. At the same time, the world's primary energy use is predicted to increase significantly during the next decades as a result of growth in the global population and welfare. Improved energy efficiency and the increased use of renewable energy sources in the world's energy mix play important roles in future energy production and consumption. The objective of this thesis is to study how novel renewable energy technologies, such as distributed small-scale bio-fueled combined heat and power production and wind power technologies, could be commercialized efficiently. A wide array of attributes may contribute to the diffusion of new products. In general, bioenergy and wind power technologies are in emerging phases, and the stage of diffusion varies from country to country. The effects of firms' technology choices, collaboration and alliances are studied in this thesis. Furthermore, the roles of national energy infrastructure and energy support schemes in the commercialization of new renewable energy products are explored. The empirical data is based on energy expert interviews, financial and patent data, and literature reviews of different case studies. The thesis comprises two parts: the first part provides an overview of the study, and the second part includes six research publications. The results reveal that small-scale bio-fueled combined heat and power production and wind power technologies are still in the emerging phases of their life cycles, and that energy support schemes are crucial for market diffusion. The study contributes to earlier findings in the literature and industry by confirming that adequate energy policies and energy infrastructure are fundamental to the commercialization of novel renewable energy technologies. Firm-specific issues, including business relationships and new business models, and market-related issues will play a more significant role in market penetration in the future, when the technologies mature and become competitive without political support schemes.
Abstract:
Finland's rural landscape has gone through remarkable changes since the 1950s due to agricultural developments. Changed farming practices have influenced traditional landscape management in particular, and the modifications in arable land structure and the transitions of grasslands are notable. A review of previous studies reveals the importance of rural landscape composition and structure for species and landscape diversity, including the relevance of the presence of open ditches, the size of field and meadow patches, and the topology of the natural and agricultural landscape. This land-change study applies remotely sensed data from two time series and empirical geospatial analysis in a Geographic Information System (GIS). The aim of this retrospective research is to detect the dynamics of agricultural land use and land cover change (LULCC) and to discuss the consequences of agricultural intensification for landscape structure from the perspective of landscape ecology. Measurements of LULC are derived directly from pre-processed aerial images by a variety of analytical procedures, including statistical methods and image interpretation. The methodological challenges are confronted in the process of landscape classification and in combining change detection approaches with landscape indices. Particular attention is paid to detecting agricultural landscape features at a small scale, which demands a comprehensive understanding of such agroecosystems. The topological properties of the classified arable land and valley are determined in order to provide insight into, and to emphasize, the role of field edges in the agricultural landscape as important habitat. The change detection dynamics are presented with a change matrix, and additional calculations of gain, loss, swap, net change, change rate and tendencies are made. Transition probabilities are computed following Markov's probability model and are likewise presented as a matrix. The spatial aspect of the thesis is conveyed with illustrative maps showing the locations of the classified landscape categories and of the changes that have occurred. The analysis confirmed that remarkable changes have occurred in the landscape of the Rekijoki valley. Landscape diversity has been strongly influenced by modern agricultural landscape change, as both the number of patches (NP) of open ditches and the mean patch size (MPS) of arable plots have decreased. The overall change in the diversity of the landscape is indicated by a decrease in the Shannon diversity index (SHDI). The valley landscape, considered a traditional land use area, has experienced major transitions, as the meadow class has lost almost one third of its area to afforestation. Remarkable transitions have also occurred from forest to meadow and from arable land to built area. The measurement of boundaries between the modern and traditional landscape indicated a noticeable proportional increase in the arable land-forest edge type and a decrease in the arable land-meadow edge type. The probability calculations predict greater future change for the traditional landscape, but also for arable land turning into built area.
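As an illustration of the change matrix and the Markov transition probabilities mentioned above, the following is a minimal sketch assuming two co-registered classified rasters held as integer NumPy arrays; the class list and the random test data are placeholders, not the thesis's material.

```python
# A minimal sketch of a change (cross-tabulation) matrix and first-order
# Markov transition probabilities between two classified land cover rasters.
# The classes and the random rasters are placeholders, not the study's data.
import numpy as np

np.random.seed(0)
classes = ["arable", "meadow", "forest", "built"]
k = len(classes)

# Two co-registered classifications (time 1 and time 2), same shape
t1 = np.random.randint(0, k, size=(100, 100))
t2 = np.random.randint(0, k, size=(100, 100))

# Change matrix: rows = class at t1, columns = class at t2 (pixel counts)
change = np.zeros((k, k), dtype=int)
np.add.at(change, (t1.ravel(), t2.ravel()), 1)

gain = change.sum(axis=0) - np.diag(change)   # pixels gained per class
loss = change.sum(axis=1) - np.diag(change)   # pixels lost per class
net = gain - loss                             # net change per class
swap = 2 * np.minimum(gain, loss)             # simultaneous gain and loss

# Markov transition probabilities: normalize each row of the change matrix
P = change / change.sum(axis=1, keepdims=True)
print(np.round(P, 3))   # P[i, j] = probability of class i turning into j
```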
Abstract:
Jussi-Pekka Hakkarainen's presentation at the Lower Saxony State and University Library in Göttingen, 28 May 2013.
Abstract:
Jussi-Pekka Hakkarainen's presentation at the Institute of the Estonian Language (Eesti keele instituut) in Tallinn, 23 October 2013.
Abstract:
Can crowdsourcing solutions serve many masters? Can they benefit both the layman and native speakers of minority languages on the one hand, and serious linguistic research on the other? How did an infrastructure that was designed to support linguistics turn out to be a solution for raising awareness of native languages? Since 2012, the National Library of Finland has been developing the Digitisation Project for Kindred Languages, whose key objective is to support a culture of openness and interaction in linguistic research, but also to promote crowdsourcing as a tool for the participation of the language community in research. In the course of the project, over 1,200 monographs and nearly 111,000 pages of newspapers in Finno-Ugric languages will be digitised and made available in the Fenno-Ugrica digital collection. This material was published in the Soviet Union in the 1920s and 1930s, and users have previously had only sporadic access to it. The publication of open-access and searchable materials from this period is a goldmine for researchers. Historians, social scientists and laymen with an interest in specific local publications can now find text materials pertinent to their studies. The linguistically oriented population can also find writings to delight them: (1) lexical items specific to a given publication, and (2) orthographically documented specifics of phonetics. In addition to the open-access collection, we developed an open-source OCR editor that enables the editing of machine-encoded text for the benefit of linguistic research. This tool was necessary since these rare and peripheral prints often include archaic characters which are neglected by modern OCR software developers but belong to the historical context of the kindred languages, and are thus an essential part of the linguistic heritage. When modelling the OCR editor, it was essential to consider both the needs of researchers and the capabilities of lay citizens, and to have them participate in the planning and execution of the project from the very beginning. By implementing the feedback from both groups iteratively, it was possible to turn the requested changes into tools for research that not only supported the work of linguists but also encouraged the citizen scientists to take up the challenge and work with the crowdsourcing tools for the benefit of research. This presentation deals not only with the technical aspects, developments and achievements of the infrastructure, but also highlights the way in which the user groups, researchers and lay citizens, were engaged in the process as an active and communicative group of users, and how their contributions were made to mutual benefit.
Abstract:
In this work, the feasibility of floating-gate technology in analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to get worse, because the process parameters are optimized for digital transistors and the scaling involves the reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth and accuracy, are interrelated. Furthermore, poor flexibility, i.e. the lack of reconfigurability, reuse of IP etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot be easily achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and also features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using "double"-oxide transistors, which are intended for providing devices that operate with higher supply voltages than the general-purpose devices. In practice, however, the technology scaling poses several challenges, which are addressed in this thesis. To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer SNN, in which the adaptive properties of the FGT are taken advantage of. A compact realization of spike-timing-dependent plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability to analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
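As background for the learning rule mentioned above, here is a minimal behavioral sketch of the pair-based STDP weight update; the time constants and amplitudes are illustrative assumptions, and this is a software model, not the thesis's FGT circuit realization.

```python
# A minimal sketch of pair-based spike-timing-dependent plasticity (STDP),
# the learning rule whose compact FGT realization the thesis contributes.
# Time constants and amplitudes are illustrative, not taken from the thesis.
import numpy as np

tau_plus, tau_minus = 20.0, 20.0   # trace decay constants (ms)
a_plus, a_minus = 0.01, 0.012      # potentiation / depression amplitudes

def stdp_dw(delta_t):
    """Weight change for a spike-time difference
    delta_t = t_post - t_pre (ms)."""
    if delta_t > 0:                # pre fires before post: potentiate
        return a_plus * np.exp(-delta_t / tau_plus)
    else:                          # post fires before pre: depress
        return -a_minus * np.exp(delta_t / tau_minus)

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"dt = {dt:+4d} ms -> dw = {stdp_dw(dt):+.5f}")
```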