17 results for Space–time block coding
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The objective of this thesis is to study wavelets and their role in turbulence applications. Under scrutiny in the thesis is the intermittency produced by turbulence models. Wavelets are used as a mathematical tool to study the intermittent activities that turbulence models produce. The first section generally introduces wavelets and wavelet transforms as a mathematical tool. Moreover, the basic properties of turbulence are discussed and classical methods for modeling turbulent flows are explained. Wavelets are employed both to model the turbulence and to analyze turbulent signals. The model studied here is the GOY (Gledzer 1973, Ohkitani & Yamada 1989) shell model of turbulence, which is a popular model for explaining intermittency based on the cascade of kinetic energy. The goal is to introduce a better quantification method for the intermittency obtained in a shell model. Wavelets are localized in both space (time) and scale; they are therefore suitable candidates for studying the singular bursts that interrupt the calm periods of the energy flow through the various scales. The study concerns two questions, namely the frequency of occurrence and the intensity of the singular bursts at various Reynolds numbers. The results indicate that singularities become more local as the Reynolds number increases. The singularities also become more local when the shell number is increased at a given Reynolds number. The study revealed that the singular bursts are more frequent at Re ~ 10^7 than in the other cases with lower Re. The intermittency of bursts for the cases with Re ~ 10^6 and Re ~ 10^5 was similar, but for the case with Re ~ 10^4 the bursts occurred after long waiting times and in a different fashion, so that this case could not be scaled with the higher-Re ones.
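The wavelet-based burst detection described above can be sketched roughly as follows. This is an illustrative example only: the synthetic signal, the Morlet wavelet and the threshold rule are assumptions for demonstration, not the thesis's actual GOY-model analysis.

```python
import numpy as np

def morlet(t, scale, omega0=6.0):
    """Morlet wavelet evaluated at times t for a given scale."""
    x = t / scale
    return np.exp(1j * omega0 * x) * np.exp(-0.5 * x**2) / np.sqrt(scale)

def cwt(signal, dt, scales):
    """Continuous wavelet transform by direct convolution (illustrative, O(N^2))."""
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        w = np.conj(morlet(t, s))[::-1]
        coeffs[i] = np.convolve(signal, w, mode="same") * dt
    return coeffs

# Synthetic intermittent signal: calm background interrupted by a few sharp bursts.
dt = 0.01
t = np.arange(0, 20, dt)
signal = 0.1 * np.random.randn(len(t))
for t0 in (5.0, 12.0, 17.5):
    signal += np.exp(-((t - t0) / 0.05) ** 2)

scales = np.geomspace(0.02, 2.0, 30)
W = np.abs(cwt(signal, dt, scales))

# A burst is flagged where the small-scale wavelet energy exceeds a threshold.
small_scale_energy = W[:5].mean(axis=0)
bursts = t[small_scale_energy > 5 * small_scale_energy.mean()]
print("burst locations (s):", np.round(bursts, 2))
```

Because the wavelet coefficients are localized in both time and scale, the frequency and intensity of such flagged events can be compared across cases, which is the spirit of the quantification discussed in the abstract.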
Abstract:
My presupposition, that learning at some level deals with life praxis, is expressed in four metaphors: space, time, fable and figure. Relations between learning, knowledge building and meaning making are linked to the concept of personal knowledge. I present a two-part study of learning as text in a drama-pedagogically rooted reading where learning is framed as the ongoing event, and knowledge, as the product of previous processes, is framed as culturally formed utterances. A frame analysis model is constructed as a topological guide for relations between the two concepts of learning and knowledge. It visualises an aesthetic understanding, rooted in drama pedagogical comprehension. Insight and perception are linked in an inner relationship that is neither external nor identical. This understanding expresses the movement "in between" connecting asymmetrical and nonlinear features of human endeavour and societal issues. The performability of bodily and oral participation in the learning event in a socio-cultural setting is analysed as a dialogised text. In an ethnographical case study I have gathered material with an interest in the particular. The empirical material is based on three problem-based learning situations in a Polytechnic setting. The act of transformation in the polyphony of the event is considered as a turning point in the narrative employment. Negotiation and figuration in the situation form patterns of the space for improvisation (flow) and tensions at the boundaries (thresholds), which imply the logical structure of transformation. Learning as a dialogised text of "yes" and "no", of structure and play for the improvised, interrelates in that movement. It is related to both the syntagmatic and the paradigmatic forms of thinking. In the philosophical study, forms of understanding are linked to the logical structure of transformation as a cultural issue. The classical rhetorical concepts of Logos, Pathos, Ethos and Mythos are connected to the multidimensional rationality of the human being. In the Aristotelian form of knowledge, phronesis, a logical structure of inquiry is recognised. The shifting of perspectives between approaches, the construction of knowledge as context and the human project of meaning making as a subtext illuminate multiple layers of the learning text. In arguing that a post-modern apprehension of knowledge, emphasising contextual and situational values, has an empowering impact on learning, I find pedagogical benefits. The dialogical perspective has opened lenses that manage to hold, in aesthetic doubling, the individual action of inquiry and the stage with its cultural tools in a three-dimensional reading.
Abstract:
The thesis consists of four studies (articles I–IV) and a comprehensive summary. The aim is to deepen understanding and knowledge of newly qualified teachers' (NQTs') experiences of their induction practices. The research interest thus reflects the ambition to strengthen the research-based platform for support measures. The aim can be specified in the following four sub-areas: to scrutinise NQTs' experiences of the profession in the transition from education to work (study I), to describe and analyse NQTs' experiences of their first encounters with school and classroom (study II), to explore NQTs' experiences of their relationships within the school community (study III), and to view NQTs' experiences of support through peer-group mentoring as part of the wider aim of collaboration and assessment (study IV). The overall theoretical perspective constitutes teachers' professional development. Induction forms an essential part of this continuum and can primarily be seen as a socialisation process into the profession and the social working environment of schools, as a unique phase of teachers' development contributing to certain experiences, and as a formal programme designed to support new teachers. These lines of research are initiated in the separate studies (I–IV) and deepened in the theoretical part of the comprehensive summary. In order to appropriately understand induction as a specific practice, the lines of research are finally united and discussed with the help of practice theory. More precisely, the theory of practice architectures, including semantic space, physical space-time and social space, is used. The methodological approach to integrating the four studies is above all represented by abduction and meta-synthesis. Data were collected through a questionnaire survey, with mainly open-ended questions, and altogether ten focus group meetings with newly qualified primary school teachers in 2007–2008. The teachers (n=88 in the questionnaire, n=17 in the focus groups) had between one and three years of teaching experience. Qualitative content analysis and narrative analysis were used when analysing the data. What, then, is the collected picture of induction, or of the first years in the profession, when scrutinising the results presented in the articles? Four dimensions especially seem to permeate the studies and emerge when they are put together. The first dimension, the relational-emotional, captures the social nature of induction and of a teacher's work, and the emotional character intimately intertwined with it. The second dimension, the tensional-mutable, illustrates the intense pace of induction, together with the diffuse and unclear character of a teacher's job. The third dimension, the instructive-developmental, depicts induction as a unique and intensive phase of learning, maturity and professional development. Finally, the fourth dimension, the reciprocal-professional, stresses the importance of reciprocity and collaboration in induction, both formally and informally. The outlined four dimensions, or integration of results, describing induction from the experiences of new teachers, constitute part of a new synthesis, induction practice. This synthesis was generated from viewing the integrated results through the theoretical lens of practice architecture and the three spaces: semantic space, physical space-time and social space. In this way, a more comprehensive, refined and partially new architecture of teachers' induction practices is presented and discussed.
Abstract:
Highly dynamic systems, often considered resilient systems, are characterised by abiotic and biotic processes under continuous and strong changes in space and time. Because of this variability, the detection of overlapping anthropogenic stress is challenging. Coastal areas harbour dynamic ecosystems in the form of open sandy beaches, which cover the vast majority of the world's ice-free coastline. These ecosystems are currently threatened by increasing human-induced pressure, among which is the mass development of opportunistic macroalgae (mainly composed of Chlorophyta, the so-called green tides) resulting from the eutrophication of coastal waters. The ecological impact of opportunistic macroalgal blooms (green tides, and blooms formed by other opportunistic taxa) has long been evaluated within sheltered and non-tidal ecosystems. Little is known, however, about how more dynamic ecosystems, such as open macrotidal sandy beaches, respond to such stress. This thesis assesses the effects of anthropogenic stress on the structure and the functioning of highly dynamic ecosystems, using sandy beaches impacted by green tides as a study case. The thesis is based on four field studies, which analyse natural sandy sediment benthic community dynamics over several temporal (from month to multi-year) and spatial (from local to regional) scales. In this thesis, I report long-lasting responses of sandy beach benthic invertebrate communities to green tides, across thousands of kilometres and over seven years, and highlight more pronounced responses of zoobenthos living in exposed sandy beaches compared to semi-exposed sands. Within exposed sandy sediments, and across a vertical scale (from inshore to nearshore sandy habitats), I also demonstrate that the effect of the presence of algal mats on intertidal benthic invertebrate communities is more pronounced than that on subtidal benthic invertebrate assemblages and on flatfish communities. Focussing on small-scale variations in the most affected faunal group (i.e. benthic invertebrates living at low shore), this thesis reveals a decrease in overall beta-diversity along a eutrophication gradient manifested in the form of green tides, as well as the increasing importance of biological variables in explaining the ecological variability of sandy beach macrobenthic assemblages along the same gradient. To illustrate the processes associated with the structural shifts observed where green tides occurred, I investigated the effects of high biomasses of opportunistic macroalgae (Ulva spp.) on the trophic structure and functioning of sandy beaches. This work reveals a progressive simplification of sandy beach food web structure and a modification of energy pathways over time, through direct and indirect effects of Ulva mats on several trophic levels. Through this thesis I demonstrate that highly dynamic systems respond differently (e.g. a shift in δ13C, but not in δ15N) and more subtly (e.g. no mass mortality in the benthos was found) to anthropogenic stress than what has previously been shown within more sheltered and non-tidal systems. Obtaining these results would not have been possible without the approach used throughout this work; I thus present a framework coupling field investigations with analytical approaches to describe shifts in highly variable ecosystems under human-induced stress.
Abstract:
A well-known drawback of IP networks is that they cannot guarantee a specific quality of service (QoS) for the transmitted packets. The following two techniques are considered the most promising for providing quality of service: Differentiated Services (DiffServ) and QoS routing. DiffServ is a fairly new QoS mechanism for the Internet defined by the IETF. DiffServ offers scalable service differentiation without per-hop signalling and per-flow state management. DiffServ is a good example of decentralized network design. The goal of this QoS mechanism is to simplify the design of communication systems: a network node can be built from a small, well-defined set of building blocks. QoS routing is a routing mechanism in which traffic routes are determined on the basis of the available network resources. This work investigates a new QoS routing approach called Simple Multipath Routing. The purpose of this work is to design a QoS controller for DiffServ. The QoS controller proposed in this work is an attempt to combine DiffServ and QoS routing mechanisms. The experimental part of the work focuses in particular on QoS routing algorithms.
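As a rough illustration of the kind of building blocks the abstract refers to, the sketch below classifies packets by their DSCP value into per-hop-behaviour queues and drains them with weighted round-robin. The DSCP-to-class mapping uses the standard code points, but the scheduling weights and the scheduler itself are illustrative assumptions, not the controller proposed in the thesis.

```python
from collections import deque

# Mapping of standard DSCP code points to DiffServ traffic classes.
DSCP_TO_CLASS = {46: "EF", 34: "AF41", 26: "AF31", 0: "BE"}
WEIGHTS = {"EF": 4, "AF41": 3, "AF31": 2, "BE": 1}   # assumed scheduling weights

queues = {cls: deque() for cls in WEIGHTS}

def classify(packet):
    """Place an incoming packet into the queue of its DiffServ class."""
    cls = DSCP_TO_CLASS.get(packet["dscp"], "BE")
    queues[cls].append(packet)

def weighted_round_robin():
    """Drain the queues, sending up to 'weight' packets per class per round."""
    sent = []
    while any(queues.values()):
        for cls, weight in WEIGHTS.items():
            for _ in range(weight):
                if queues[cls]:
                    sent.append(queues[cls].popleft())
    return sent

# Example: a burst of mixed-class packets.
for i, dscp in enumerate([0, 46, 34, 0, 46, 26, 0, 0]):
    classify({"id": i, "dscp": dscp})
print([p["id"] for p in weighted_round_robin()])
```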
Abstract:
As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the powerful expression of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the same tight constraints on, e.g., size, power consumption and price as embedded systems, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc. (Tech) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hw/sw codesign and simulation and an extendable library of automatically configured reusable hardware blocks. Other topics that are covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. A simulation model for a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
Abstract:
The amount of installed wind power has been growing exponentially during the past ten years. As wind turbines have become a significant source of electrical energy, the interactions between the turbines and the electric power network need to be studied more thoroughly than before. Especially the behavior of the turbines in fault situations is of prime importance; simply disconnecting all wind turbines from the network during a voltage drop is no longer acceptable, since this would contribute to a total network collapse. These requirements have contributed to the increased role of simulations in the study and design of the electric drive train of a wind turbine. When planning a wind power investment, the selection of the site and the turbine is crucial for the economic feasibility of the installation. Economic feasibility, on the other hand, is the factor that determines whether or not investment in wind power will continue, contributing to green electricity production and reduction of emissions. In the selection of the installation site and the turbine (siting and site matching), the properties of the electric drive train of the planned turbine have so far generally not been taken into account. Additionally, although the loss minimization of some of the individual components of the drive train has been studied, the drive train as a whole has received less attention. Furthermore, as a wind turbine will typically operate at a power level lower than the nominal most of the time, efficiency analysis at the nominal operating point is not sufficient. This doctoral dissertation attempts to combine the two aforementioned areas of interest by studying the applicability of time domain simulations in the analysis of the economic feasibility of a wind turbine. The utilization of a general-purpose time domain simulator, otherwise applied to the study of network interactions and control systems, in the economic analysis of the wind energy conversion system is studied. The main benefits of the simulation-based method over traditional methods based on analytic calculation of losses include the ability to reuse and recombine existing models, the ability to analyze interactions between the components and subsystems in the electric drive train (something which is impossible when considering different subsystems as independent blocks, as is commonly done in the analytical calculation of efficiencies), the ability to analyze in a rather straightforward manner the effect of selections other than physical components, for example control algorithms, and the ability to verify assumptions of the effects of a particular design change on the efficiency of the whole system. Based on the work, it can be concluded that differences between two configurations can be seen in the economic performance with only minor modifications to the simulation models used in the network interaction and control method studies. This eliminates the need for developing analytic expressions for losses and enables the study of the system as a whole instead of modeling it as a series connection of independent blocks with no loss interdependencies. Three example cases (site matching, component selection, control principle selection) are provided to illustrate the usage of the approach and to analyze its performance.
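A very simplified illustration of why part-load behaviour matters for economic feasibility: weight the drive-train efficiency over the site's wind-speed distribution instead of evaluating it only at the nominal point. The power curve, Weibull parameters and efficiency curves below are made-up assumptions, not results or models from the dissertation.

```python
import numpy as np

def weibull_pdf(v, k=2.0, c=8.0):
    """Weibull wind-speed distribution (shape k, scale c in m/s) - assumed site data."""
    return (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))

def turbine_power(v, rated_kw=2000.0, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0):
    """Idealized power curve: cubic rise between cut-in and rated wind speed."""
    p = np.where((v >= v_cut_in) & (v < v_rated),
                 rated_kw * ((v - v_cut_in) / (v_rated - v_cut_in)) ** 3, 0.0)
    return np.where((v >= v_rated) & (v <= v_cut_out), rated_kw, p)

def drivetrain_efficiency(p_frac, eta_nom, part_load_penalty):
    """Efficiency falls off at part load; two configurations differ in this penalty."""
    return eta_nom - part_load_penalty * (1.0 - np.clip(p_frac, 0.05, 1.0))

v = np.linspace(0.0, 30.0, 601)
dv = v[1] - v[0]
p_mech = turbine_power(v)
p_frac = p_mech / p_mech.max()

# Annual electrical energy (MWh) for two assumed drive-train configurations.
hours_per_year = 8760.0
for name, eta_nom, penalty in [("config A", 0.95, 0.10), ("config B", 0.96, 0.20)]:
    p_el = p_mech * drivetrain_efficiency(p_frac, eta_nom, penalty)
    aep = np.sum(p_el * weibull_pdf(v) * dv) * hours_per_year / 1000.0
    print(f"{name}: annual energy ~ {aep:.0f} MWh")
```

The time domain simulation approach described in the dissertation replaces the hand-written efficiency curves above with full drive-train models, which is what makes the interactions between components visible in the economic result.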
Abstract:
This thesis presents an approach for formulating and validating a space averaged drag model for coarse mesh simulations of gas-solid flows in fluidized beds using the two-fluid model. Proper modeling of the fluid dynamics is central to understanding any industrial multiphase flow. The gas-solid flows in fluidized beds are heterogeneous and usually simulated with the Eulerian description of phases. Such a description requires the use of fine meshes and small time steps for the proper prediction of the hydrodynamics. This constraint on the mesh and time step size results in a large number of control volumes and long computational times, which are unaffordable for simulations of large scale fluidized beds. If proper closure models are not included, coarse mesh simulations of fluidized beds do not give reasonable results. The coarse mesh simulation fails to resolve the mesoscale structures and results in uniform solids concentration profiles. For a circulating fluidized bed riser, such predicted profiles result in a higher drag force between the gas and solid phases and an overestimated solids mass flux at the outlet. Thus, there is a need to formulate closure correlations which can accurately predict the hydrodynamics using coarse meshes. This thesis uses the space averaging modeling approach in the formulation of closure models for coarse mesh simulations of the gas-solid flow in fluidized beds with Geldart group B particles. In the analysis of formulating the closure correlation for the space averaged drag model, the main parameters were found to be the averaging size, the solid volume fraction, and the distance from the wall. The closure model for the gas-solid drag force was formulated and validated for coarse mesh simulations of the riser, which verified this modeling approach. Coarse mesh simulations using the corrected drag model resulted in lowered values of the solids mass flux. Such an approach is a promising tool in the formulation of appropriate closure models which can be used in coarse mesh simulations of large scale fluidized beds.
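To make the idea of a drag closure concrete, the sketch below evaluates the standard Wen & Yu microscopic drag coefficient and scales it with a heterogeneity correction factor. The correction function here is a made-up placeholder with the qualitative dependence on averaging size, solids fraction and wall distance mentioned in the abstract; it is not the correlation actually derived in the thesis.

```python
import numpy as np

def wen_yu_drag(eps_s, rho_g, mu_g, d_p, u_slip):
    """Microscopic Wen & Yu gas-solid drag coefficient beta [kg/(m^3 s)]."""
    eps_g = 1.0 - eps_s
    re = eps_g * rho_g * abs(u_slip) * d_p / mu_g
    if re < 1000.0:
        c_d = 24.0 / max(re, 1e-12) * (1.0 + 0.15 * re**0.687)
    else:
        c_d = 0.44
    return 0.75 * c_d * eps_s * eps_g * rho_g * abs(u_slip) / d_p * eps_g**(-2.65)

def heterogeneity_correction(filter_size, eps_s, wall_distance):
    """Placeholder correction factor H <= 1: coarser averaging sizes and intermediate
    solids fractions hide more mesoscale structure, so the resolved drag is reduced.
    The functional form, including the wall-distance term, is an assumption made
    purely for illustration."""
    h = 1.0 / (1.0 + 10.0 * filter_size) * (1.0 - 3.0 * eps_s * (1.0 - eps_s))
    h *= min(1.0, 0.5 + wall_distance)   # assumed wall-distance dependence
    return float(np.clip(h, 0.05, 1.0))

beta_micro = wen_yu_drag(eps_s=0.05, rho_g=1.2, mu_g=1.8e-5, d_p=150e-6, u_slip=1.0)
beta_coarse = beta_micro * heterogeneity_correction(filter_size=0.02, eps_s=0.05,
                                                    wall_distance=0.1)
print(f"microscopic drag: {beta_micro:.1f}, filtered drag: {beta_coarse:.1f}")
```

Reducing the drag in this way on a coarse mesh mimics the unresolved clustering and is what lowers the predicted solids mass flux at the riser outlet.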
Abstract:
Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient MPSoCs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing high communication bandwidth for computation-intensive but not data-intensive applications is often infeasible in a practical implementation. This thesis aims to perform architecture-level design space exploration towards efficient and scalable resource utilization for MPSoC communication architectures. In order to meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform and resource-aware partitioning and mapping of the application play an important role. To enhance the utilization of communication resources, a variety of techniques such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing can be used. For implementation, these techniques should be customized according to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels, namely the Segmented bus (SegBus), Network-on-Chip (NoC) and Three-Dimensional NoC (3D-NoC), are selected. Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault in a component makes the connected fault-free components inoperative. A resource sharing approach can utilize the fault-free components to retain the system performance by reducing the impact of faults. Design space exploration also helps to narrow down the selection of the MPSoC architecture that can meet the performance requirements within the design constraints.
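As a back-of-the-envelope illustration of one of the evaluation parameters, the sketch below estimates the zero-load average packet latency for XY routing on a 2D mesh NoC from the average hop count, per-router delay and serialization delay. The cycle counts and packet size are assumed values, not figures from the thesis.

```python
def mesh_avg_hops(n):
    """Average Manhattan hop count between two uniformly random nodes on an n x n mesh."""
    # Average one-dimensional distance between two uniform nodes in {0, ..., n-1}.
    avg_1d = (n * n - 1) / (3.0 * n)
    return 2.0 * avg_1d

def zero_load_latency(n, flits_per_packet, router_cycles=3, link_cycles=1):
    """Zero-load latency in cycles: per-hop router and link delay plus serialization."""
    hops = mesh_avg_hops(n)
    return hops * (router_cycles + link_cycles) + flits_per_packet

for n in (4, 8):
    lat = zero_load_latency(n, flits_per_packet=5)
    print(f"{n}x{n} mesh: avg hops = {mesh_avg_hops(n):.2f}, zero-load latency ~ {lat:.1f} cycles")
```

Contention, routing algorithm and topology (SegBus, NoC, 3D-NoC) change this picture substantially, which is precisely why the thesis relies on design space exploration rather than closed-form estimates.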
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications are, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group has standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is, in the context of design space exploration, optimized by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
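A minimal sketch of the dataflow idea described above: actors connected by FIFO queues, each actor firing whenever its firing rule (enough input tokens) is satisfied, driven by a naive dynamic scheduler. The actors and the round-robin scheduler are illustrative assumptions, not RVC-CAL code or the quasi-static schedulers developed in the thesis.

```python
from collections import deque

class Queue:
    """FIFO edge of the dataflow graph; the only communication between actors."""
    def __init__(self):
        self.tokens = deque()
    def push(self, *vals):
        self.tokens.extend(vals)
    def pop(self, n):
        return [self.tokens.popleft() for _ in range(n)]
    def count(self):
        return len(self.tokens)

class AddPairs:
    """Actor: fires when two tokens are available, producing their sum."""
    def __init__(self, inp, out):
        self.inp, self.out = inp, out
    def can_fire(self):
        return self.inp.count() >= 2
    def fire(self):
        a, b = self.inp.pop(2)
        self.out.push(a + b)

class Printer:
    """Sink actor: fires on a single token."""
    def __init__(self, inp):
        self.inp = inp
    def can_fire(self):
        return self.inp.count() >= 1
    def fire(self):
        print("result:", self.inp.pop(1)[0])

# Build the graph: source queue -> AddPairs -> Printer.
q_in, q_mid = Queue(), Queue()
actors = [AddPairs(q_in, q_mid), Printer(q_mid)]
q_in.push(1, 2, 3, 4, 5, 6)

# Naive dynamic scheduler: evaluate each actor's firing rule round-robin.
fired = True
while fired:
    fired = False
    for actor in actors:
        if actor.can_fire():
            actor.fire()
            fired = True
```

Quasi-static scheduling, as discussed in the abstract, aims to replace most of these run-time firing-rule evaluations with pre-computed static sequences, leaving only the genuinely data-dependent decisions dynamic.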
Abstract:
Measuring protein biomarkers from a sample matrix, such as plasma, is one of the basic tasks in clinical diagnostics. Bioanalytical assays used for the measurement should be able to measure proteins with high sensitivity and specificity. Furthermore, multiplexing capability would also be advantageous. To ensure the utility of a diagnostic test in a point-of-care setting, additional requirements such as short turn-around times, ease-of-use and low costs need to be met. On the other hand, enhancement of assay sensitivity could enable exploiting novel biomarkers, which are present in very low concentrations and which current immunoassays are unable to measure. Furthermore, highly sensitive assays could enable the use of minimally invasive sampling. In the development of high-sensitivity assays the label technology and affinity binders play a pivotal role. Additionally, innovative assay designs contribute to the obtained sensitivity and other characteristics of the assay as well as its applicability. The aim of this thesis was to study the impact of assay components on the performance of both homogeneous and heterogeneous assays. The applicability of two different lanthanide-based label technologies, upconverting nanoparticles and switchable lanthanide luminescence, to protein detection was explored. Moreover, the potential of recombinant antibodies and aptamers as alternative affinity binders was evaluated. Additionally, alternative conjugation chemistries for the production of labeled binders were studied. Different assay concepts were also evaluated with respect to their applicability to point-of-care testing, which requires simple yet sensitive methods. The applicability of upconverting nanoparticles to the simultaneous quantitative measurement of multiple analytes using imaging-based detection was demonstrated. Additionally, the required instrumentation was relatively simple and inexpensive compared to other luminescent lanthanide-based labels requiring time-resolved measurement. The developed homogeneous assays exploiting switchable lanthanide luminescence were rapid and simple to perform and thus applicable even to point-of-care testing. The sensitivities of the homogeneous assays were in the picomolar range, which is still inadequate for some analytes, such as cardiac troponins, requiring ultra-low limits of detection. For most analytes, however, the obtained limits of detection were sufficient. The use of recombinant antibody fragments and aptamers as binders allowed site-specific and controlled covalent conjugation to construct labeled binders reproducibly, either by using chemical modification or recombinant technology. Luminescent lanthanide labels were shown to be widely applicable for protein detection in various assay setups and to contribute to assay sensitivity.
Abstract:
Time series analysis can be categorized into three different approaches: classical, Box-Jenkins, and state space. The classical approach provides a foundation for the analysis; the Box-Jenkins approach is an improvement of the classical approach and deals with stationary time series. The state space approach allows time-variant factors and covers a broader area of time series analysis. This thesis focuses on the parameter identifiability of different parameter estimation methods, such as LSQ, Yule-Walker and MLE, which are used in the above time series analysis approaches. In addition, the Kalman filter method and smoothing techniques are integrated with the state space approach and the MLE method to estimate parameters that are allowed to change over time. Parameter estimation is carried out by repeated estimation integrated with MCMC, and it is inspected how well the different estimation methods can identify the optimal model parameters. Identification is performed in both a probabilistic and a general sense, and the results are compared in order to study and represent identifiability in a more informative way.
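As an illustration of how a state space model, the Kalman filter and MLE fit together, the sketch below computes the Kalman-filter log-likelihood of a local level model and maximizes it over the two noise variances by a crude grid search. The model, data and grid are illustrative assumptions, not the cases analysed in the thesis.

```python
import numpy as np

def kalman_loglik(y, q, r, m0=0.0, p0=1e6):
    """Log-likelihood of a local level model  x_t = x_{t-1} + w_t,  y_t = x_t + v_t,
    with w_t ~ N(0, q) and v_t ~ N(0, r), via the Kalman filter prediction errors."""
    m, p, ll = m0, p0, 0.0
    for obs in y:
        # Prediction step.
        p = p + q
        # Innovation and its variance.
        v = obs - m
        s = p + r
        ll += -0.5 * (np.log(2.0 * np.pi * s) + v * v / s)
        # Update step.
        k = p / s
        m = m + k * v
        p = (1.0 - k) * p
    return ll

# Simulated data from a local level model (illustrative true values q=0.1, r=1.0).
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0.0, np.sqrt(0.1), 300))
y = x + rng.normal(0.0, 1.0, 300)

# Crude MLE by grid search over the two variance parameters.
grid = np.linspace(0.01, 2.0, 40)
best = max((kalman_loglik(y, q, r), q, r) for q in grid for r in grid)
print(f"estimated q ~ {best[1]:.2f}, r ~ {best[2]:.2f} (log-lik {best[0]:.1f})")
```

Repeating such an estimation, or replacing the grid search with MCMC sampling of (q, r), gives a picture of how sharply the likelihood identifies the parameters, which is the identifiability question the abstract describes.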