Abstract:
The central theme of this thesis is the emancipation and further development of learning activity in higher education in the context of the ongoing digital transformation of our societies. It was developed in response to the highly problematic mainstream approach to the digital re-instrumentation of teaching and studying practices in contemporary higher education. The mainstream approach is largely based on centralisation, standardisation, commoditisation, and commercialisation, while reproducing the general patterns of control, responsibility, and dependence that are characteristic of activity systems of schooling. Whereas much of educational research and development focuses on the optimisation and fine-tuning of schooling, the overall inquiry underlying this thesis has been carried out from an explicitly critical position and within a framework of action science. It thus conceptualises learning activity in higher education not only as an object of inquiry but also as an object to engage with and to intervene in from a perspective of intentional change. The knowledge-constituting interest of this type of inquiry can be tentatively described as a combination of heuristic-instrumental (guidelines for contextualised action and intervention), practical-phronetic (deliberation of value-rational aspects of means and ends), and developmental-emancipatory (deliberation of issues of power, self-determination, and growth) aspects. Its goal is the production of orientation knowledge for educational practice. The thesis provides an analysis, an argumentation, and a normative claim on why the development of learning activity should be turned into an object of individual|collective inquiry and intentional change in higher education, and why the current state of affairs in higher education actually impedes such a development. It argues for a decisive shift of attention to the intentional emancipation and further development of learning activity as an important cultural instrument for human (self-)production within the digital transformation. The thesis also attempts an in-depth exploration of what type of methodological rationale can actually be applied to an object of inquiry (developing learning activity) that is at the same time conceptualised as an object of intentional change within the ongoing digital transformation. The result of this retrospective reflection is the formulation of “optimally incomplete” guidelines for educational R&D practice that share the practical-phronetic (value-related) and developmental-emancipatory (power-related) orientations that had been driving the overall inquiry. In addition, the thesis formulates the instrumental-heuristic knowledge claim that the conceptual instruments that were adapted and validated in the context of a series of intervention studies provide means to effectively intervene in existing practice in higher education to support the necessary development of (increasingly emancipated) networked learning activity. It suggests that digital networked instruments (tools and services) should generally be considered and treated as transient elements within critical systemic intervention research in higher education. It further argues for the predominant use of loosely coupled, digital networked instruments that allow for individual|collective ownership, control, (co-)production, and re-use in other contexts and for other purposes.
Since the range of digital instrumentation options is continuously expanding and currently shows no signs of an imminent slow-down or consolidation, individual and collective exploration and experimentation in this realm need to be systematically incorporated into higher education practice.
Abstract:
Technological developments in microprocessors and the ICT landscape have brought about a shift to a new era in which computing power is embedded in numerous small distributed objects and devices in our everyday lives. These small computing devices are fine-tuned to perform a particular task and are increasingly reaching our society at every level. For example, home appliances such as programmable washing machines, microwave ovens, etc., employ several sensors to improve performance and convenience. Similarly, cars have on-board computers that use information from many different sensors to control things such as fuel injectors, spark plugs, etc., to perform their tasks efficiently. These individual devices make life easier by helping to take decisions and removing that burden from their users. All of these objects and devices obtain some piece of information about the physical environment, yet each of them is an island, with no proper connectivity or information sharing between them. Sharing information between these heterogeneous devices could enable a whole new universe of innovative and intelligent applications. Such information sharing is a difficult task, however, due to the heterogeneity of the devices and their lack of interoperability. The Smart Space vision is to overcome these issues of heterogeneity and interoperability so that devices can understand each other and utilize each other's services through information sharing. This enables innovative local mashup applications based on data shared between heterogeneous devices. Smart homes are one example of Smart Spaces: through the intelligent interconnection of resources and their collective behavior, they make it possible to bring the health care system to the patient, as opposed to bringing the patient into the health system. In addition, the use of mobile handheld devices has risen at a tremendous rate during the last few years, and they have become an essential part of everyday life. Mobile phones offer a wide range of different services to their users, including text and multimedia messages, Internet, audio, video, and email applications and, most recently, TV services. Interactive TV provides a variety of applications for viewers. The combination of interactive TV and Smart Spaces could yield innovative applications that are personalized, context-aware, ubiquitous, and intelligent by enabling heterogeneous systems to collaborate with each other by sharing information. There are many challenges in designing the frameworks and application development tools for the rapid and easy development of such applications. The research work presented in this thesis addresses these issues. The original publications presented in the second part of this thesis propose architectures and methodologies for interactive and context-aware applications, and tools for the development of these applications. We demonstrated the suitability of our ontology-driven application development tools and rule-based approach for the development of dynamic, context-aware, ubiquitous iTV applications.
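To make the rule-based approach concrete, here is a minimal Python sketch. All keys and the single rule are hypothetical, and the flat key-value store is only a stand-in for a Smart Space information broker; the tools described in the thesis are ontology-driven and operate on shared semantic data.

```python
# Sketch: devices publish observations into a shared context store;
# rules pair a condition on that context with an action.

context = {}

rules = [
    # Hypothetical rule: pause the iTV stream when the living room empties.
    (lambda c: c.get("presence:livingroom") == "empty"
               and c.get("tv:state") == "playing",
     lambda c: c.update({"tv:state": "paused"})),
]

def publish(key, value):
    """A device publishes an observation; every rule whose condition
    now holds fires its action on the shared context."""
    context[key] = value
    for condition, action in rules:
        if condition(context):
            action(context)

publish("tv:state", "playing")
publish("presence:livingroom", "empty")
print(context["tv:state"])  # -> paused
```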
Abstract:
This doctoral dissertation investigates the adult education policy of the European Union (EU) within the framework of the Lisbon agenda 2000–2010, with a particular focus on the changes of policy orientation that occurred during this reference decade. The year 2006 can in fact be considered a turning point for EU policy-making in the adult learning sector: a radical shift from a wide-ranging and comprehensive conception of educating adults towards a vocationally oriented understanding of this field and policy area has been observed, in particular in the second half of the so-called ‘Lisbon decade’. In this light, one of the principal objectives of the mainstream policy set by the Lisbon Strategy, that of fostering all forms of participation of adults in lifelong learning paths, appears to have changed its political background and vision in a very short period of time, reflecting an underlying polarisation and progressive transformation of European policy orientations. Hence, by means of content analysis and process tracing, it is shown that the target of EU adult education policy has shifted from citizens to workers, and that the competence development model, borrowed from the corporate sector, has been established as the reference for the new policy road maps. This study draws on the theory of governance architectures and applies a post-ontological perspective to discuss whether the above trends are intrinsically due to the nature of the Lisbon Strategy, which encompasses education policies, and to what extent supranational actors and phenomena such as globalisation influence European governance and decision-making. Moreover, it is shown that the way in which the EU is shaping the upgrading of skills and competences of adult learners is modelled around the needs of the ‘knowledge economy’, thus according a great deal of importance to ‘new skills for new jobs’ and perhaps not enough to life skills in their broader sense, which include, for example, social and civic competences: these are often promoted but rarely implemented in depth in the EU policy documents. In this framework, it is shown how different EU policy areas are intertwined and interrelated with global phenomena, and it is emphasised how far the building of the EU education systems should play a crucial role in the formation of critical thinking, civic competences, and skills for a sustainable democratic citizenship, on which a truly cohesive and inclusive society fundamentally depends; a model of environmental and cosmopolitan adult education is proposed in order to address the challenges of the new millennium. In conclusion, an appraisal of the EU's public policy, along with some personal thoughts on how progress might be pursued and actualised, is outlined.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an as-small-as-possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
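The firing discipline described above can be made concrete with a small sketch. Actor and scheduler names here are hypothetical illustrations; RVC-CAL actors are far richer, and the thesis replaces exactly this kind of fully dynamic polling with quasi-static schedules derived by model checking.

```python
from collections import deque

class Actor:
    """A dataflow node: queues are its only communication, and it may
    fire independently once its firing rule (enough input tokens) holds."""
    def __init__(self, needed, action, inputs, outputs):
        self.needed = needed    # tokens consumed per input queue per firing
        self.action = action    # maps consumed tokens to produced tokens
        self.inputs = inputs    # list of deques feeding this node
        self.outputs = outputs  # list of deques this node feeds

    def can_fire(self):
        return all(len(q) >= n for q, n in zip(self.inputs, self.needed))

    def fire(self):
        consumed = [[q.popleft() for _ in range(n)]
                    for q, n in zip(self.inputs, self.needed)]
        for q, token in zip(self.outputs, self.action(consumed)):
            q.append(token)

def run_dynamic(actors):
    """Fully dynamic scheduling: every firing rule is re-evaluated at
    run time -- the overhead that quasi-static scheduling removes."""
    fired = True
    while fired:
        fired = False
        for actor in actors:
            if actor.can_fire():
                actor.fire()
                fired = True

# Pipeline: source queue -> doubler -> adder of consecutive pairs -> sink.
src, mid, sink = deque(range(8)), deque(), deque()
double = Actor([1], lambda c: [2 * c[0][0]], [src], [mid])
pairsum = Actor([2], lambda c: [sum(c[0])], [mid], [sink])
run_dynamic([double, pairsum])
print(list(sink))  # -> [2, 10, 18, 26]
```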
Abstract:
The Sun is a crucial benchmark for how we see the universe. Especially in the visible range of the spectrum, stars are commonly compared to the Sun, as it is the most thoroughly studied star. In this work I have focused on two aspects of the Sun and how it is used in modern astronomy. Firstly, I try to answer the question of how similar another star can be to the Sun. Given the limits of observations, we call a star a solar twin if it has the same observed parameters as the Sun within the errors. These stars can be used as stand-in suns when doing observations, as normal night-time telescopes are not built to be pointed at the Sun. There have been many searches for these twins, and every one of them provided not only information on how close to the Sun another star can be, but also helped us to understand the Sun itself. In my work I have selected ~300 stars that are both photometrically and spectroscopically close to the Sun and found 22 solar twins, of which 17 were previously unknown and can therefore help complete the emerging picture of solar twins. In my second research project I used my full sample of 300 solar analogue stars to check the temperature and metallicity scales of stellar catalogue calibrations. My photometric sample was originally drawn from the Geneva-Copenhagen Survey (Nordström et al. 2004; Holmberg et al. 2007, 2009), for which two alternative calibrations exist, i.e. GCS-III (Holmberg et al. 2009) and C11 (Casagrande et al. 2011). I used very high resolution spectra of solar analogues and a new approach to test the two calibrations. I found a zero-point shift of the order of +75 K and +0.10 dex in effective temperature and metallicity, respectively, in the GCS-III, and therefore favour the C11 calibration, which found similar offsets. I then performed a spectroscopic analysis of the stars to derive effective temperatures and metallicities, and verified that they are well centred around the solar values.
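The zero-point test itself is simple arithmetic: for each star, difference the spectroscopic value against the catalogue calibration and average over the sample. A minimal sketch on clearly synthetic stand-in numbers (the real test used the measured spectra of the ~300 analogues; the offsets below are built into the fake data purely to show the computation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300  # size of the solar-analogue sample

# Synthetic stand-in values only: spectroscopic Teff/[Fe/H] per star, and a
# catalogue calibration that runs systematically cool and metal-poor.
teff_spec = 5777 + rng.normal(0, 50, n)           # K
teff_cat = teff_spec - 75 + rng.normal(0, 60, n)
feh_spec = rng.normal(0.00, 0.04, n)              # dex
feh_cat = feh_spec - 0.10 + rng.normal(0, 0.05, n)

dT, dF = teff_spec - teff_cat, feh_spec - feh_cat
print(f"Teff zero point: {dT.mean():+.0f} +/- {dT.std(ddof=1)/np.sqrt(n):.0f} K")
print(f"[Fe/H] zero point: {dF.mean():+.2f} +/- {dF.std(ddof=1)/np.sqrt(n):.2f} dex")
```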
Abstract:
University of Turku, Faculty of Medicine, Department of Clinical Medicine, Department of Physical Activity and Health, Paavo Nurmi Centre, Doctoral Programme of Clinical Investigation, University of Turku, Turku, Finland. Annales Universitatis Turkuensis. Medica – Odontologica, Turku, Finland, 2014. Background: Atherosclerosis progression spans an entire lifetime and has a wide pool of risk factors. Oxidized LDL (oxLDL) is a crucial element in the progression of atherosclerosis. As a rather new member of the atherosclerosis risk factor family, its interaction with the traditional pro-atherogenic contributors that occur at different ages is poorly known. Aims: The aim of this study was to investigate oxLDL and its relation to major contributing risk factors in estimating atherosclerosis risk in data consisting mostly of adult men. The study subjects came from four different sets of data, one of which also contained women. The age range of the participants was 18-100 years, and they totaled 2337 in all (of whom 69% were men). Data on anthropometric and hormonal parameters, laboratory measures, and medical records were assessed during 1998-2009. Results: Obesity was paralleled by high concentrations of oxLDL, which were consequently reduced by weight reduction. Importantly, successful weight maintenance preserved this benefit. A shift from insulin sensitivity to insulin resistance increased oxLDL. Smokers had more oxLDL than non-smokers. A combination of obesity and smoking, or smoking and low serum total testosterone, resulted in even higher levels of oxLDL than any of the three conditions alone. The ratio of oxLDL to HDL-c or apoA1 stood out as a risk factor for all-cause mortality in the elderly. Conclusions: OxLDL was associated with aging, androgens, smoking, obesity, insulin metabolism, weight balance, and other circulating lipid classes. Across this variety of metabolic environments, comprising both constant conditions (aging and gender) and lifestyle factors, these findings supported an essential and multidimensional role for oxLDL in atherosclerosis pathogenesis.
Abstract:
The emergence of the idea of multiculturalism in Swedish public discourse and social science in the latter half of the 1960s and the introduction of official multiculturalism in 1975 constituted a major intellectual and political shift in the post-war history of Sweden. The ambition of the 1975 immigrant and minority policy to enable the preservation of ethno-cultural minorities and to create a positive attitude towards the new multicultural society among the majority population was also incorporated into Swedish cultural, educational, and media policies. The rejection of assimilationism and the new commitment to ethno-cultural diversity, the multicultural moment, has earned Sweden a place on the list of the early adopters of official multiculturalism, together with Canada and Australia. This compilation thesis examines the origins and early post-war history of the idea of multiculturalism as well as the interplay between idea and politics in the shift from a public ideal of homogeneity to an ideal of multiculturalism in Sweden. It does so from a range of conceptual, comparative, transnational, and biographical perspectives. The thesis consists of an introduction (Part I) and four previously published studies (Part II). The primary research result of the thesis concerns the agency involved in the breakthrough and formal establishment of the idea of multiculturalism in Sweden. Actors such as ethnic activists, experts, and officials were instrumental in the introduction and establishment of multiculturalism in Sweden, as they had also been in Canada and Australia. These actors have, however, not previously been recognized and analysed as significant idea-makers and political agents in the case of Sweden. The intertwined connections between activists, social scientists, linguists, and officials facilitated the transfer of the idea of multiculturalism from a publicly contested idea to public policy by way of the Swedish Trade Union Confederation, academia, and the Royal Commission of Immigration. The thesis furthermore shows that the political success of the idea of multiculturalism, such as it was within the limits of the universalist social democratic welfare state, depended on who the claims-makers were, the status and positions they held, and the way the idea of multiculturalism was conceptualised and used. It also depended on the migratory context of labour immigration in the 1960s and 1970s and on whose behalf the advocates of multiculturalism made their claims. The majority of the labour immigrants were Finnish citizens from the former eastern half of the kingdom of Sweden who were net contributors to the Swedish welfare state. This facilitated the recognition of their ethno-cultural difference and, following the logic of universalism, the ethno-cultural difference of other minority groups in Sweden. The historical significance of the multicultural moment is still evident in the contemporary immigration and integration policies of Sweden. The affirmation of diversity continues to set Sweden apart from the rest of Europe, now more so than in the 1970s, even though the migratory context has changed radically in the last 40 years.
Abstract:
Meandering rivers have been perceived to evolve rather similarly around the world, independently of the location or size of the river. Despite many consistent processes and characteristics, they have also been noted to show complex and unique sets of fluvio-morphological processes in which local factors play an important role. These complex interactions of flow and morphology notably affect the development of the river. Comprehensive and fundamental field, flume, and theoretical studies of fluvio-morphological processes in meandering rivers were carried out especially during the latter part of the 20th century. However, as these studies relied on traditional field measurement techniques, their spatial and temporal resolution does not match the level achievable today. The hypothesis of this study is that the increased spatial and temporal resolution of data, achieved by combining conventional field measurements with a range of modern technologies, will provide new insights into the spatial patterns of flow-sediment interaction in meandering streams, which have been perceived to show notable variation in space and time. This thesis shows how modern technologies can be combined to derive data of very high spatial and temporal resolution on fluvio-morphological processes over meander bends. The flow structure over the bends is recorded in situ using an acoustic Doppler current profiler (ADCP), and the spatial and temporal resolution of the flow data is enhanced using 2D and 3D CFD over various meander bends. The CFD models are also exploited to simulate sediment transport. Multi-temporal terrestrial laser scanning (TLS), mobile laser scanning (MLS), and echo sounding data are used to measure the flow-induced changes and formations over meander bends and to build the computational models. The spatial patterns of erosion and deposition over meander bends are analysed relative to the measured and modelled flow field and sediment transport, and the results are compared with the classic theories of the processes in meander bends. For the most part, the results of this study agree well with existing theories and the results of previous studies. However, some new insights are provided regarding the spatial and temporal patterns of flow-sediment interaction in a natural sand-bed meander bend. The results of this study show the advantages of rapid and detailed measurement techniques and of the spatial and temporal resolution provided by CFD, which is unachievable with field measurements alone. The thesis also discusses the limitations that remain in the measurement and modelling methods and in the understanding of the fluvial geomorphology of meander bends. Further, the sensitivity of the hydro- and morphodynamic models to user-defined parameters is tested, and the modelling results are assessed against detailed field measurements. The study is implemented on the meandering sub-Arctic Pulmanki River in Finland. The river is unregulated and sand-bedded, and major morphological changes occur annually on the meander point bars, which are inundated only during the snow-melt-induced spring floods. The outcome of this study applies to sand-bed meandering rivers in regions where normally one significant flood event occurs annually, such as Arctic areas with snow-melt-induced spring floods, and where the point bars of the meander bends are inundated only during the flood events.
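On the change-detection step: erosion and deposition patterns from multi-temporal surveys of this kind are typically obtained by differencing successive elevation models. A minimal sketch, with hypothetical grids and a hypothetical level-of-detection threshold (the thesis's TLS/MLS processing is of course far more involved):

```python
import numpy as np

def dem_of_difference(dem_before, dem_after, lod=0.05):
    """Difference two elevation grids [m]: positive cells are deposition,
    negative cells erosion; changes below the level of detection are masked."""
    diff = dem_after - dem_before
    return np.where(np.abs(diff) >= lod, diff, np.nan)

# Tiny hypothetical point-bar grids surveyed before/after a spring flood.
before = np.array([[1.00, 1.10], [1.20, 1.30]])
after = np.array([[0.90, 1.12], [1.45, 1.30]])
dod = dem_of_difference(before, after)
print(dod)                           # [[-0.10  nan] [ 0.25  nan]]
print("net change [m]:", np.nansum(dod))
```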
Abstract:
The shift towards a knowledge-based economy has inevitably prompted the evolution of patent exploitation. Nowadays, a patent is more than just a preventive tool for a company to block its competitors from developing rival technologies; it lies at the very heart of the company's strategy for value creation and is therefore strategically exploited for economic profit and competitive advantage. Along with the evolution of patent exploitation, the demand for reliable and systematic patent valuation has also reached an unprecedented level. However, most of the quantitative approaches in use to assess patents, which arguably fall into four categories, are based solely on conventional discounted cash flow analysis, whose usability and reliability in the context of patent valuation are greatly limited by five practical issues: market illiquidity, poor data availability, discriminatory cash-flow estimations, and its inability to account for changing risk and for managerial flexibility. This dissertation attempts to overcome these impeding barriers by rationalizing the use of two techniques, namely fuzzy set theory (aimed at the first three issues) and real option analysis (aimed at the last two). It commences with an investigation into the nature of the uncertainties inherent in patent cash flow estimation and argues that two levels of uncertainty must be properly accounted for. Further investigation reveals that both levels of uncertainty fall under the categorization of subjective uncertainty, which differs from objective uncertainty originating in inherent randomness: uncertainties labelled as subjective are closely related to the behavioural aspects of decision making and are usually witnessed whenever human judgement, evaluation, or reasoning is crucial to the system under consideration and there is a lack of complete knowledge of its variables. Having clarified their nature, the application of fuzzy set theory to modelling patent-related uncertain quantities is readily justified. The application of real option analysis to patent valuation is prompted by the fact that both the patent application process and the subsequent patent exploitation (or commercialization) are subject to a wide range of decisions at multiple successive stages. In other words, both patent applicants and patentees are faced with a large variety of courses of action as to how their patent applications and granted patents can be managed. Since they have the right to run their projects actively, this flexibility has value and thus must be properly accounted for. Accordingly, this dissertation provides an explicit identification of the types of managerial flexibility inherent in patent-related decision-making problems and in patent valuation, and a discussion of how they can be interpreted in terms of real options. Additionally, the use of the proposed techniques in practical applications is demonstrated by three models based on fuzzy real option analysis. In particular, the pay-off method and the extended fuzzy Black-Scholes model are employed to investigate the profitability of a patent application project for a new process for the preparation of a gypsum-fibre composite and to justify the subsequent patent commercialization decision, respectively; a fuzzy binomial model is designed to reveal the economic potential of a patent licensing opportunity.
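For flavour, here is a minimal numeric sketch in the spirit of the fuzzy pay-off idea: a triangular fuzzy NPV, the share of its membership mass lying above zero, and a success-scenario value. The NPV scenarios are hypothetical, and the centroid of the positive side is used here as a simple stand-in for the possibilistic mean employed in the pay-off method literature.

```python
import numpy as np

def triangular(x, low, peak, high):
    """Membership function of a triangular fuzzy NPV."""
    up = (x - low) / (peak - low)
    down = (high - x) / (high - peak)
    return np.clip(np.minimum(up, down), 0.0, None)

# Hypothetical pessimistic / base / optimistic NPVs (e.g. kEUR) for a
# patent project; a uniform grid makes plain sums act as Riemann integrals.
x = np.linspace(-40.0, 100.0, 140001)
mu = triangular(x, low=-20.0, peak=15.0, high=80.0)

pos = x > 0.0
weight = mu[pos].sum() / mu.sum()                         # mass above zero
success_value = (x[pos] * mu[pos]).sum() / mu[pos].sum()  # centroid, x > 0
option_value = weight * success_value
print(f"weight={weight:.2f}, value-if-positive={success_value:.1f}, "
      f"real option value~{option_value:.1f}")
```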
Abstract:
The negotiations between the EU and the US over the Transatlantic Trade and Investment Partnership (TTIP) have generated a great deal of discussion about investor-state dispute settlement (ISDS). This discussion provided the inspiration for this thesis, with the TTIP in the background setting the scene. In this thesis I study the nature of ISDS and the principle of transparency within investor-state arbitration. I aim to determine whether the use of ISDS is restricted to international arbitration and whether ISDS can be considered to constitute a system or regime. Furthermore, I consider whether the introduction of the UNCITRAL Rules on Transparency in Treaty-based Investor-State Arbitration (2014, the UNCITRAL Transparency Rules) changes investor-state arbitration in relation to transparency. To achieve this, I examine ISDS provisions in several different international investment agreements (IIAs) and evaluate the ways in which transparency is incorporated into investment law. Moreover, I compare the provisions on transparency and confidentiality in institutional arbitration rules with the UNCITRAL Transparency Rules. I have reached several conclusions, including that ISDS provisions may contain methods other than international arbitration and that ISDS does not constitute a system. Furthermore, the UNCITRAL Transparency Rules do, theoretically at least, make investor-state arbitration more transparent. Whether they will make investor-state arbitration fully transparent depends on the actions of the contracting state parties when negotiating new IIAs and on whether they choose to incorporate the UNCITRAL Transparency Rules into the IIAs already concluded.