933 results for Hierarchical dynamic models
Abstract:
Many educators and educational institutions have yet to integrate web-based practices into their classrooms and curricula. As a result, it can be difficult to prototype and evaluate approaches to transforming classrooms from static endpoints to dynamic, content-creating nodes in the online information ecosystem. But many scholastic journalism programs have already embraced the capabilities of the Internet for virtual collaboration, dissemination, and reader participation. Because of this, scholastic journalism can act as a test-bed for integrating web-based sharing and collaboration practices into classrooms. Student Journalism 2.0 was a research project to integrate open copyright licenses into two scholastic journalism programs, to document outcomes, and to identify recommendations and remaining challenges for similar integrations. Video and audio recordings of two participating high school journalism programs informed the research. In describing the steps of our integration process, we note some important legal, technical, and social challenges. Legal worries such as uncertainty over copyright ownership could lead districts and administrators to disallow open licensing of student work. Publication platforms among journalism classrooms are far from standardized, making any integration of new technologies and practices difficult to achieve at scale. And teachers and students face challenges re-conceptualizing the role their class work can play online.
Abstract:
In general, models of ecological systems can be broadly categorized as 'top-down' or 'bottom-up' models, based on the hierarchical level at which the model processes are formulated. The structure of a top-down, also known as phenomenological, population model can be interpreted in terms of population characteristics, but it typically lacks an interpretation on a more basic level. In contrast, bottom-up, also known as mechanistic, population models are derived from assumptions and processes on a more basic level, which allows interpretation of the model parameters in terms of individual behavior. Both approaches, phenomenological and mechanistic modelling, have their advantages and disadvantages in different situations. However, mechanistically derived models may be better at capturing the properties of the system at hand, and thus give more accurate predictions. In particular, when models are used for evolutionary studies, mechanistic models are more appropriate, since natural selection takes place on the individual level, and in mechanistic models the direct connection between model parameters and individual properties has already been established. The purpose of this thesis is twofold. Firstly, a systematic way to derive mechanistic discrete-time population models is presented. The derivation is based on combining explicitly modelled, continuous processes on the individual level within a reproductive period with a discrete-time maturation process between reproductive periods. Secondly, as an example of how evolutionary studies can be carried out in mechanistic models, the evolution of the timing of reproduction is investigated. Thus, these two lines of research, the derivation of mechanistic population models and evolutionary studies, complement each other.
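To make the derivation style concrete, here is a minimal sketch (not the thesis' actual model) of a mechanistically derived discrete-time map: within a reproductive period, mortality and competition act continuously as dN/dt = -(mu + gamma*N)*N, whose closed-form solution gives the survivors at the end of the season, and reproduction is a discrete event between periods. All parameter values (b, mu, gamma, T) are hypothetical.

```python
import numpy as np

def within_season_survivors(n0, mu, gamma, T):
    """Survivors after a season of length T under dN/dt = -(mu + gamma*N)*N.

    The ODE integrates in closed form to a Beverton-Holt-type expression,
    which is what makes the resulting discrete-time map 'mechanistic'.
    """
    decay = np.exp(-mu * T)
    return mu * n0 * decay / (mu + gamma * n0 * (1.0 - decay))

def step(n, b=8.0, mu=0.5, gamma=0.01, T=1.0):
    """One reproductive period: continuous survival within the season,
    then b offspring per survivor at the discrete reproduction event."""
    return b * within_season_survivors(n, mu, gamma, T)

n = 10.0
for t in range(20):
    n = step(n)
print(f"population approaches a fixed point: {n:.1f}")
```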
Abstract:
Genetic diversity is one of the levels of biodiversity that the World Conservation Union (IUCN) has recognized as being important to preserve. This is because genetic diversity is fundamental to the future evolution and to the adaptive flexibility of a species to respond to the inherently dynamic nature of the natural world. Therefore, the key to maintaining biodiversity and healthy ecosystems is to identify, monitor and maintain locally-adapted populations, along with their unique gene pools, upon which future adaptation depends. Thus, conservation genetics deals with the genetic factors that affect extinction risk and the genetic management regimes required to minimize the risk. The conservation of exploited species, such as salmonid fishes, is particularly challenging due to the conflicts between different interest groups. In this thesis, I conduct a series of conservation genetic studies on primarily Finnish populations of two salmonid fish species (European grayling, Thymallus thymallus, and lake-run brown trout, Salmo trutta), which are popular recreational game fishes in Finland. The general aim of these studies was to apply and develop population genetic approaches to assist conservation and sustainable harvest of these populations. The approaches applied included: i) the characterization of population genetic structure at national and local scales; ii) the identification of management units and the prioritization of populations for conservation based on the evolutionary forces shaping indigenous gene pools; iii) the detection of population declines and the testing of the assumptions underlying these tests; and iv) the evaluation of the contribution of natural populations to a mixed stock fishery. Based on microsatellite analyses, clear genetic structuring of exploited Finnish grayling and brown trout populations was detected at both national and local scales. Finnish grayling clustered into three genetically distinct groups, corresponding to northern, Baltic and south-eastern geographic areas of Finland. The genetic differentiation among and within population groups of grayling ranged from moderate to high levels. Such strong genetic structuring, combined with low genetic diversity, strongly indicates that genetic drift plays a major role in the evolution of grayling populations. Further analyses of European grayling covering the majority of the species’ distribution range indicated a strong global footprint of population decline. Using a coalescent approach, the beginning of the population reduction was dated to 1 000-10 000 years ago (ca. 200-2 000 generations). Forward simulations demonstrated that the bottleneck footprints measured using the M ratio can persist within small populations much longer than previously anticipated in the face of low levels of gene flow. In contrast to the M ratio, two alternative methods for genetic bottleneck detection identified recent bottlenecks in six grayling populations that warrant future monitoring. Consistent with the predominant role of random genetic drift, the effective population size (Ne) estimates of all grayling populations were very low, with the majority of Ne estimates below 50. Taken together, the highly structured local populations, limited gene flow and small Ne of grayling populations indicate that grayling populations are vulnerable to overexploitation and, hence, monitoring and careful management using precautionary principles are required not only in Finland but throughout Europe.
Population genetic analyses of lake-run brown trout populations in the Inari basin (northernmost Finland) revealed a hierarchical population structure in which individual populations clustered into three population groups largely corresponding to different geographic regions of the basin. As in my earlier work with European grayling, the genetic differentiation among and within population groups of lake-run brown trout was relatively high. Such strong differentiation indicated that the power to determine the relative contributions of populations in mixed fisheries should be relatively high. Consistent with these expectations, high accuracy and precision were observed in mixed stock analysis (MSA) simulations. Application of MSA to indigenous fish caught in the Inari basin identified altogether twelve populations that contributed significantly to mixed stock fisheries, with the Ivalojoki river system being the major contributor (70%) to the total catch. When the contribution of wild trout populations to the fisheries was evaluated regionally, geographically nearby populations were the main contributors to the local catches. MSA also revealed a clear separation between the lower and upper reaches of the Ivalojoki river system: in contrast to the lower reaches of the Ivalojoki river, which contributed considerably to the catch, populations from the upper reaches of the river system (>140 km from the river mouth) did not contribute significantly to the fishery. This could be related to the available habitat size, but could also be associated with a resident-type life history and an increased cost of migration. The studies in my thesis highlight the importance of dense sampling and wide population coverage at the scale being studied, and also demonstrate the importance of critically evaluating the underlying assumptions of the population genetic models and methods used. These results have important implications for the conservation and sustainable fisheries management of Finnish populations of European grayling and brown trout in the Inari basin.
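For illustration, a minimal sketch of the Garza-Williamson M ratio mentioned above, computed for a single microsatellite locus as M = k/(r + 1), where k is the number of distinct alleles and r is the allele size range in repeat units; the genotypes and the commonly cited ~0.68 threshold are illustrative, not taken from the thesis.

```python
import numpy as np

def m_ratio(allele_sizes):
    """Garza-Williamson M ratio for one microsatellite locus.

    M = k / (r + 1); values well below ~0.68 are commonly read as a
    footprint of a past population bottleneck, since bottlenecks tend to
    remove alleles faster than they shrink the allele size range.
    """
    sizes = np.unique(allele_sizes)
    k = sizes.size
    r = sizes.max() - sizes.min()
    return k / (r + 1)

# toy allele sizes (in repeat units) observed at one locus -- hypothetical
print(m_ratio([10, 10, 12, 13, 16, 16, 17]))  # 5 alleles over range 7 -> 0.625
```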
Abstract:
Systems biology is a new, emerging and rapidly developing, multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology, which is to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Systems biology, unlike “traditional” biology, focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is “foreign” to “traditional” biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods originating in the fields of computer science and mathematics for the construction and analysis of computational models in systems biology. In particular, the performed research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques as well as model analysis methodologies constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussions concerning their potentials and limitations point to the difficulties and challenges one encounters in computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
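As a flavour of the kind of computational model involved, here is a deliberately simplified, hypothetical mass-action ODE loosely inspired by the heat shock response case study (it is not the thesis' model): misfolded proteins accumulate, induce chaperone synthesis, and are refolded by the chaperones. All rate constants are made up.

```python
from scipy.integrate import solve_ivp

# Toy negative-feedback loop: M = misfolded protein, C = chaperone.
# dM/dt = k_mis - k_fold*M*C   (misfolding vs chaperone-mediated refolding)
# dC/dt = k_syn*M - k_deg*C    (M-induced synthesis vs degradation)
def rhs(t, y, k_mis=1.0, k_syn=0.5, k_fold=2.0, k_deg=0.1):
    m, c = y
    return [k_mis - k_fold * m * c, k_syn * m - k_deg * c]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 1.0])
m_end, c_end = sol.y[:, -1]
print(f"steady state: M={m_end:.2f}, C={c_end:.2f}")  # ~M=0.32, C=1.58
```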
Abstract:
In this thesis, a general approach is devised to model electrolyte sorption from aqueous solutions on solid materials. Electrolyte sorption is often considered an unwanted phenomenon in ion exchange, and its potential as an independent separation method has not been fully explored. The solid sorbents studied here are porous and non-porous organic or inorganic materials with or without specific functional groups attached to the solid matrix. Accordingly, the sorption mechanisms include physical adsorption, chemisorption on the functional groups, and partition restricted by electrostatic or steric factors. The model is tested in four Case Studies dealing with chelating adsorption of transition metal mixtures, physical adsorption of metal and metalloid complexes from chloride solutions, size exclusion of electrolytes in nano-porous materials, and electrolyte exclusion of electrolyte/non-electrolyte mixtures. The model parameters are estimated using experimental data from equilibrium and batch kinetic measurements, and they are used to simulate actual single-column fixed-bed separations. Phase equilibrium between the solution and solid phases is described using the thermodynamic Gibbs-Donnan model and various adsorption models, depending on the properties of the sorbent. The 3-dimensional thermodynamic approach is used for volume sorption in gel-type ion exchangers and in nano-porous adsorbents, and satisfactory correlation is obtained provided that both mixing and exclusion effects are adequately taken into account. 2-dimensional surface adsorption models are successfully applied to physical adsorption of complex species and to chelating adsorption of transition metal salts. In the latter case, a comparison is also made with complex formation models. Results of the mass transport studies show that uptake rates even in a competitive high-affinity system can be described by constant diffusion coefficients, when the adsorbent structure and the phase equilibrium conditions are adequately included in the model. Furthermore, a simplified solution based on the linear driving force approximation and the shrinking-core model is developed for very non-linear adsorption systems. In each Case Study, the actual separation is carried out batch-wise in fixed beds, and the experimental data are simulated/correlated using the parameters derived from equilibrium and kinetic data. Good agreement between the calculated and experimental breakthrough curves is usually obtained, indicating that the proposed approach is useful in systems which at first sight are very different. For example, the important improvement in copper separation from concentrated zinc sulfate solution at elevated temperatures can be correctly predicted by the model. In some cases, however, re-adjustment of the model parameters is needed due to, e.g., high solution viscosity.
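For illustration, a minimal sketch of a fixed-bed breakthrough simulation using the linear driving force (LDF) approximation mentioned above, with plug flow discretized into a series of tanks and local Langmuir equilibrium. All parameter values are hypothetical, and the thesis' Gibbs-Donnan equilibrium models are not reproduced here.

```python
import numpy as np

# Tanks-in-series fixed bed: Langmuir equilibrium q* = qm*K*c/(1+K*c)
# and LDF uptake dq/dt = k_ldf*(q* - q). Explicit Euler time stepping.
n_tanks, dt, n_steps = 50, 0.1, 20000
c_feed, u = 1.0, 0.5              # feed concentration, tank exchange rate
qm, K, k_ldf, rho = 2.0, 3.0, 0.05, 1.0   # hypothetical parameters

c = np.zeros(n_tanks)             # liquid-phase concentrations
q = np.zeros(n_tanks)             # solid-phase loadings
for _ in range(n_steps):
    q_star = qm * K * c / (1.0 + K * c)        # local Langmuir equilibrium
    dq = k_ldf * (q_star - q)                  # LDF mass-transfer rate
    c_in = np.concatenate(([c_feed], c[:-1]))  # plug flow between tanks
    c += dt * (u * (c_in - c) - rho * dq)
    q += dt * dq

# outlet concentration approaches the feed once the bed is saturated
print(f"outlet/feed after {n_steps * dt:.0f} time units: {c[-1] / c_feed:.2f}")
```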
Abstract:
Cells of epithelial origin, e.g. from breast and prostate cancers, effectively differentiate into complex multicellular structures when cultured in three dimensions (3D) instead of on conventional two-dimensional (2D) adherent surfaces. The spectrum of different organotypic morphologies is highly dependent on the culture environment, which can be either non-adherent or scaffold-based. When embedded in physiological extracellular matrices (ECMs), such as laminin-rich basement membrane extracts, normal epithelial cells differentiate into acinar spheroids reminiscent of glandular ductal structures. Transformed cancer cells, in contrast, typically fail to undergo acinar morphogenic patterns, forming poorly differentiated or invasive multicellular structures. The 3D cancer spheroids are widely accepted to better recapitulate various tumorigenic processes and drug responses. So far, however, 3D models have been employed predominantly in academia, whereas the pharmaceutical industry has yet to adopt them for wider and routine use. This is mainly due to poor characterisation of cell models, a lack of standardised workflows and high-throughput cell culture platforms, and the limited availability of proper readout and quantification tools. In this thesis, a complete workflow has been established entailing well-characterised 3D cell culture models for prostate cancer, a standardised 3D cell culture routine based on a high-throughput-ready platform, automated image acquisition with concomitant morphometric image analysis, and data visualisation, in order to enable large-scale high-content screens. Our integrated suite of software and statistical analysis tools was optimised and validated using a comprehensive panel of prostate cancer cell lines and 3D models. The tools quantify multiple key cancer-relevant morphological features, ranging from cancer cell invasion through multicellular differentiation to growth, and detect dynamic changes both in morphology and function, such as cell death and apoptosis, in response to experimental perturbations including RNA interference and small molecule inhibitors. Our panel of cell lines included many non-transformed and most currently available classic prostate cancer cell lines, which were characterised for their morphogenetic properties in 3D laminin-rich ECM. The phenotypes and gene expression profiles were evaluated concerning their relevance for pre-clinical drug discovery, disease modelling and basic research. In addition, a spontaneous model for invasive transformation was discovered, displaying a high degree of epithelial plasticity. This plasticity is mediated by an abundant bioactive serum lipid, lysophosphatidic acid (LPA), and its receptor LPAR1. The invasive transformation was caused by abrupt cytoskeletal rearrangement through impaired G protein alpha 12/13 and RhoA/ROCK signalling, and mediated by upregulated adenylyl cyclase/cyclic AMP (cAMP)/protein kinase A and Rac/PAK pathways. The spontaneous invasion model tangibly exemplifies the biological relevance of organotypic cell culture models. Overall, this thesis work underlines the power of novel morphometric screening tools in drug discovery.
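As a small illustration of the morphometric readout step, the sketch below labels a synthetic segmented mask and quantifies simple shape features of the kind used to separate round, acinus-like spheroids from invasive structures. A real pipeline would start from microscopy images and a segmentation step; the mask, shapes and threshold here are hypothetical stand-ins.

```python
import numpy as np
from skimage.draw import disk
from skimage.measure import label, regionprops

# Build a synthetic binary mask: one round spheroid and one spheroid
# with a protrusion that mimics an invasive process.
mask = np.zeros((200, 200), dtype=np.uint8)
rr, cc = disk((60, 60), 20)
mask[rr, cc] = 1
rr, cc = disk((140, 140), 15)
mask[rr, cc] = 1
mask[140:145, 100:140] = 1           # protrusion attached to second spheroid

# Quantify each labelled structure: area and a circularity-style
# roundness score, 4*pi*A/P^2 (1.0 for a perfect circle).
for region in regionprops(label(mask)):
    roundness = 4 * np.pi * region.area / region.perimeter ** 2
    print(f"area={region.area}, roundness={roundness:.2f}")
```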
Abstract:
Modern machine structures are often fabricated by welding. From a fatigue point of view, the structural details, and especially the welded details, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure’s critical details and the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already an established method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and speed up the product development process. Also, to take operator influence into account, real-time models, i.e. simplified and computationally efficient models, are required. This, in turn, requires stress computation to be efficient if it is to be performed during dynamic simulation. The research looks back at the theoretical background of multibody dynamic simulation and the finite element method to find suitable parts to form a new approach for efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the relevant fatigue assessment stress history for a structural detail during or after dynamic simulation. In this work, numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
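To make the idea concrete, here is a minimal sketch of modal stress recovery under modal superposition: a detail's stress history is reconstructed as sigma(t) = S q(t), where the columns of S are precomputed modal stress fields at the detail (e.g. from a sub-model) and q(t) are the modal coordinates logged during the dynamic simulation. The numbers below are hypothetical placeholders for FE-derived data, not output of the thesis' examples.

```python
import numpy as np

n_modes, n_points, n_time = 4, 3, 1000

# Stand-ins for FE-derived data: modal stress fields at the detail
# (one column per mode) and modal coordinate histories from the
# floating-frame-of-reference simulation.
rng = np.random.default_rng(0)
S = rng.normal(size=(n_points, n_modes))      # modal stresses [stress/unit coordinate]
t = np.linspace(0.0, 10.0, n_time)
q = np.vstack([np.sin((i + 1) * t) for i in range(n_modes)])  # q_i(t)

# Stress recovery is a cheap matrix product per time step, which is why
# it can run alongside (or directly after) the dynamic simulation.
sigma = S @ q
print(sigma.shape)   # (3, 1000): one stress history per detail point
```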
Abstract:
Abstract—This paper discusses existing military capability models and proposes a comprehensive capability meta-model (CCMM) which unites the existing capability models into an integrated and hierarchical whole. The Zachman Framework for Enterprise Architecture is used as a structure for the CCMM. The CCMM takes into account the abstraction level, the primary area of application, stakeholders, intrinsic process, and life cycle considerations of each existing capability model, and shows how the models relate to each other. The validity of the CCMM was verified through a survey of subject matter experts. The results suggest that the CCMM is of practical value to various capability stakeholders in many ways, such as helping to improve communication between the different capability communities.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows, also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
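A minimal sketch of the dataflow idea described above: actors communicate only through FIFO queues and may fire whenever sufficient inputs are available. The loop below repeatedly tries the actors in order until none can fire, which is a naive dynamic scheduler; quasi-static scheduling, as studied in the thesis, would replace most of these run-time checks with precomputed firing sequences. The two actors are hypothetical toy examples, not RVC-CAL.

```python
from collections import deque

q_in, q_mid, q_out = deque(range(8)), deque(), deque()

def double():          # fires when at least one token is available
    if q_in:
        q_mid.append(2 * q_in.popleft())
        return True
    return False

def pair_sum():        # fires only when two tokens are available
    if len(q_mid) >= 2:
        q_out.append(q_mid.popleft() + q_mid.popleft())
        return True
    return False

actors = [double, pair_sum]
while any(actor() for actor in actors):   # naive dynamic schedule
    pass
print(list(q_out))     # [2, 10, 18, 26]
```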
Abstract:
This thesis considers optimization problems arising in printed circuit board assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, the use of collect-and-place-type gantry machines is discussed because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature. This dividing technique is called hierarchical decomposition. All the subproblems of the one-PCB, one-machine context are described, classified and reviewed. The derived subproblems are then either solved with exact methods or new heuristic algorithms are developed and applied. The exact methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts, while others utilize local search or are based on frequency calculations. Comprehensive experimental tests establish that the heuristics are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. In the experimental tests, artificially generated data from Markov models and data from real-world PCB production are used. The thesis consists of an introduction and five publications where the developed and used solution methods are described in full detail. For all the problems stated in this thesis, the proposed methods are efficient enough to be used in practical PCB assembly production and are readily applicable in the PCB manufacturing industry.
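As an illustration of one constructive heuristic of the kind used for a placement-sequencing subproblem, the sketch below orders placement points with a greedy nearest-neighbour rule so the gantry head travels a short (though not optimal) route; the coordinates are hypothetical board positions, and this is not one of the thesis' published algorithms.

```python
import math

# Hypothetical placement coordinates on the board, in millimetres.
points = [(0, 0), (30, 5), (5, 25), (28, 22), (12, 10)]

def nearest_neighbour_route(points, start=0):
    """Greedy constructive heuristic: always visit the closest
    unvisited placement point next."""
    route, remaining = [start], set(range(len(points))) - {start}
    while remaining:
        last = points[route[-1]]
        nxt = min(remaining, key=lambda i: math.dist(last, points[i]))
        route.append(nxt)
        remaining.remove(nxt)
    return route

print(nearest_neighbour_route(points))  # [0, 4, 2, 3, 1]
```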
Abstract:
The objective of the present study was to characterize the heart rate (HR) patterns of healthy males using the autoregressive integrated moving average (ARIMA) model over a power range assumed to correspond to the anaerobic threshold (AT) during discontinuous dynamic exercise tests (DDET). Nine young (22.3 ± 1.57 years) and nine middle-aged (MA) volunteers (43.2 ± 3.53 years) performed three DDET on a cycle ergometer. Protocol I: DDET in steps with progressive power increases of 10 W; protocol II: DDET using the same power values as protocol I, but applied randomly; protocol III: continuous dynamic exercise protocol with ventilatory and metabolic measurements (10 W/min ramp power), for the measurement of ventilatory AT. HR was recorded and stored beat-to-beat during DDET and analyzed using the ARIMA model (protocols I and II). The DDET experiments showed that the median physical exercise workloads at which AT occurred were similar for protocols I and II, i.e., AT occurred between 75 W (116 bpm) and 85 W (116 bpm) for the young group and between 60 W (96 bpm) and 75 W (107 bpm) for the MA group in protocols I and II, respectively; in two MA volunteers the ventilatory AT occurred at 90 W (108 bpm) and 95 W (111 bpm). This corresponded to the same power values as the positive trend in HR responses. The change in HR response using ARIMA models at submaximal dynamic exercise powers proved to be a promising approach for detecting AT in normal volunteers.
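For illustration, a minimal sketch of the kind of analysis described: fitting an ARIMA-family model with a deterministic trend term to a synthetic beat-to-beat HR series from one workload step; a persistent positive trend emerging across workload steps is the kind of signature read as an AT marker. The data, model order and trend specification here are hypothetical, not those of the study.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic HR series for one workload step: a slow positive drift
# (0.05 bpm per beat) plus beat-to-beat noise.
rng = np.random.default_rng(1)
hr = 100 + 0.05 * np.arange(300) + rng.normal(0, 1.5, 300)

# ARMA(1,1) with a linear deterministic trend ("ct"); the fitted trend
# coefficient exposes the positive drift in the HR response.
fit = ARIMA(hr, order=(1, 0, 1), trend="ct").fit()
print(fit.summary().tables[1])  # inspect the trend-slope coefficient
```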
Abstract:
Time series analysis can be categorized into three different approaches: classical, Box-Jenkins, and state space. The classical approach provides a foundation for the analysis; the Box-Jenkins approach improves on the classical approach and deals with stationary time series; and the state space approach allows time-varying factors and covers a broader area of time series analysis. This thesis focuses on the parameter identifiability of different parameter estimation methods, such as LSQ, Yule-Walker and MLE, which are used in the above time series analysis approaches. The Kalman filter method and smoothing techniques are also integrated with the state space approach and the MLE method to estimate parameters that are allowed to change over time. Parameter estimation is carried out by repeated estimation integrated with MCMC, inspecting how well the different estimation methods can identify the optimal model parameters. Identification is performed in both probabilistic and general senses, and the results are compared in order to study and present identifiability in a more informative way.
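As a concrete example of the state space machinery discussed, here is a minimal local-level (random walk plus noise) Kalman filter that returns the filtered states and the Gaussian log-likelihood, which an optimizer or MCMC sampler can then explore over the variance parameters (q, r). All values are illustrative.

```python
import numpy as np

def kalman_loglik(y, q, r, m0=0.0, p0=1e6):
    """Local-level model: x_t = x_{t-1} + w (w ~ N(0, q)),
    y_t = x_t + v (v ~ N(0, r)). Returns filtered means and log-likelihood."""
    m, p, loglik, states = m0, p0, 0.0, []
    for obs in y:
        p = p + q                      # predict step
        s = p + r                      # innovation variance
        k = p / s                      # Kalman gain
        loglik += -0.5 * (np.log(2 * np.pi * s) + (obs - m) ** 2 / s)
        m = m + k * (obs - m)          # update step
        p = (1 - k) * p
        states.append(m)
    return np.array(states), loglik

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(0, 0.3, 200))   # latent random walk
y = x + rng.normal(0, 1.0, 200)          # noisy observations
states, ll = kalman_loglik(y, q=0.09, r=1.0)
print(f"log-likelihood: {ll:.1f}")
```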
Abstract:
Traditionally, real estate has been seen as a good diversification tool for a stock portfolio due to the lower return and volatility characteristics of real estate investments. However, the diversification benefits of a multi-asset portfolio depend on how the different asset classes co-move in the short and long run. As the asset classes are affected by the same macroeconomic factors, interrelationships limiting the diversification benefits could exist. This master’s thesis aims to identify such dynamic linkages in the Finnish real estate and stock markets. The results are beneficial for portfolio optimization tasks as well as for policy-making. The real estate industry can be divided into direct and securitized markets. In this thesis, the direct market is represented by the Finnish housing market index. The securitized market is proxied by the Finnish all-sectors securitized real estate index and by a European residential Real Estate Investment Trust index. The stock market is represented by the OMX Helsinki Cap index. Several macroeconomic variables are incorporated as well. The methodology of this thesis is based on Vector Autoregressive (VAR) models. The long-run dynamic linkages are studied with Johansen’s cointegration tests, and the short-run interrelationships are examined with Granger-causality tests. In addition, impulse response functions and forecast error variance decomposition analyses are used as robustness checks. The results show that long-run co-movement, or cointegration, did not exist between the housing and stock markets during the sample period. This indicates diversification benefits in the long run. However, cointegration between the stock and securitized real estate markets was identified. This indicates limited diversification benefits and shows that the listed real estate market in Finland has not matured enough to be considered a separate market from the general stock market. Moreover, while securitized real estate was shown to cointegrate with the housing market in the long run, the two markets are still too different in their characteristics to be used as substitutes in a multi-asset portfolio. This implies that the capital intensiveness of housing investments cannot be circumvented by investing in securitized real estate.
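For illustration, a minimal sketch of the toolchain on synthetic data: a Johansen cointegration test on two series sharing a common stochastic trend, followed by a Granger-causality test within a VAR on the differenced series. The series are hypothetical stand-ins for the housing and stock indices used in the thesis.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Two synthetic series driven by a common stochastic trend, so they
# should be cointegrated by construction.
rng = np.random.default_rng(3)
trend = np.cumsum(rng.normal(size=500))
data = pd.DataFrame({
    "housing": trend + rng.normal(scale=0.5, size=500),
    "stocks": 0.8 * trend + rng.normal(scale=0.5, size=500),
})

# Johansen trace test: compare trace statistics with 95% critical values.
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace stats:", jres.lr1, "95% critical:", jres.cvt[:, 1])

# Granger causality within a VAR on first differences.
var_res = VAR(data.diff().dropna()).fit(2)
print(var_res.test_causality("housing", ["stocks"]).summary())
```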
Abstract:
This thesis studies the impact of the latest Russian crisis on global markets, and especially on Central and Eastern Europe. The results are compared with other shocks and crises over the last twenty years to see how significant they have been. The cointegration process of Central and Eastern European financial markets is also reviewed and updated. Using three separate conditional correlation GARCH models, the latest crisis is not found to have initiated surges in conditional correlations similar to those of previous crises over the last two decades. Market cointegration for Central and Eastern Europe is found to have stalled somewhat after the initial correlation increases that followed EU accession.
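As an illustration of the conditional-correlation GARCH family, here is a minimal constant conditional correlation (CCC) sketch: univariate GARCH(1,1) models are fitted to two synthetic return series and their standardized residuals are correlated. The DCC-type models used in the thesis extend this by letting the correlation itself follow a dynamic recursion; the data below are synthetic.

```python
import numpy as np
from arch import arch_model

# Two synthetic return series with a common fat-tailed factor.
rng = np.random.default_rng(4)
common = rng.standard_t(df=6, size=1000)
r1 = 0.7 * common + rng.normal(size=1000)
r2 = 0.5 * common + rng.normal(size=1000)

# Fit univariate GARCH(1,1) to each series and keep the standardized
# residuals; their plain correlation is the CCC estimate.
std_resid = []
for r in (r1, r2):
    res = arch_model(r, vol="GARCH", p=1, q=1).fit(disp="off")
    std_resid.append(res.std_resid)

print("constant conditional correlation:",
      np.corrcoef(std_resid[0], std_resid[1])[0, 1])
```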
Abstract:
Guided by the social-ecological conceptualization of bullying, this thesis examines the implications of classroom and school contexts—that is, students’ shared microsystems—for peer-to-peer bullying and antibullying practices. Included are four original publications, three of which are empirical studies utilizing data from a large Finnish sample of students in the upper grade levels of elementary school. Both self- and peer reports of bullying and victimization are utilized, and the hierarchical nature of the data collected from students nested within school ecologies is accounted for by multilevel modeling techniques. The first objective of the thesis is to simultaneously examine risk factors for victimization at individual, classroom, and school levels (Study I). The second objective is to uncover the individual- and classroom-level working mechanisms of the KiVa antibullying program which has been shown to be effective in reducing bullying problems in Finnish schools (Study II). Thirdly, an overview of the extant literature on classroom- and school-level contributions to bullying and victimization is provided (Study III). Finally, attention is paid to the assessment of victimization and, more specifically, to how the classroom context influences the concordance between self- and peer reports of victimization (Study IV). Findings demonstrate the multiple ways in which contextual factors, and importantly students’ perceptions thereof, contribute to the bullying dynamic and efforts to counteract it. Whereas certain popular beliefs regarding the implications of classroom and school contexts do not receive support, the role of peer contextual factors and the significance of students’ perceptions of teachers’ attitudes toward bullying are highlighted. Directions for future research and school-based antibullying practices are suggested.
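For illustration, a minimal multilevel-model sketch of the kind described above: students nested within classrooms, a random intercept per classroom, and victimization regressed on an individual-level predictor. The data and all variable names are synthetic, hypothetical placeholders, not the KiVa study variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate 40 classrooms of 25 students with a classroom-level random
# intercept, so part of the variance in victimization sits at level 2.
rng = np.random.default_rng(5)
n_class, n_per = 40, 25
classroom = np.repeat(np.arange(n_class), n_per)
class_effect = rng.normal(0, 0.5, n_class)[classroom]
rejection = rng.normal(size=n_class * n_per)          # individual predictor
victimization = 0.4 * rejection + class_effect + rng.normal(size=n_class * n_per)

df = pd.DataFrame({"victimization": victimization,
                   "rejection": rejection,
                   "classroom": classroom})

# Linear mixed model with a random intercept for classroom.
model = smf.mixedlm("victimization ~ rejection", df, groups=df["classroom"])
print(model.fit().summary())
```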