959 results for Modeling information
Abstract:
Advancements in high-throughput technologies to measure increasingly complex biological phenomena at the genomic level are rapidly changing the face of biological research from the single-gene single-protein experimental approach to studying the behavior of a gene in the context of the entire genome (and proteome). This shift in research methodologies has resulted in a new field of network biology that deals with modeling cellular behavior in terms of network structures such as signaling pathways and gene regulatory networks. In these networks, different biological entities such as genes, proteins, and metabolites interact with each other, giving rise to a dynamical system. Even though there exists a mature field of dynamical systems theory to model such network structures, some technical challenges are unique to biology such as the inability to measure precise kinetic information on gene-gene or gene-protein interactions and the need to model increasingly large networks comprising thousands of nodes. These challenges have renewed interest in developing new computational techniques for modeling complex biological systems. This chapter presents a modeling framework based on Boolean algebra and finite-state machines that are reminiscent of the approach used for digital circuit synthesis and simulation in the field of very-large-scale integration (VLSI). The proposed formalism enables a common mathematical framework to develop computational techniques for modeling different aspects of the regulatory networks such as steady-state behavior, stochasticity, and gene perturbation experiments.
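The Boolean/finite-state-machine idea can be sketched on a hypothetical three-gene circuit (the update rules below are illustrative, not from the chapter): each gene's next state is a Boolean function of the current states, and repeated synchronous updates settle into a fixed point or limit cycle, the model's "steady-state behavior".

```python
# Minimal sketch of a synchronous Boolean gene regulatory network.
# The three-gene circuit and its rules are hypothetical examples.

def step(state):
    """Synchronous update: returns the next (a, b, c) state."""
    a, b, c = state
    return (
        not c,        # gene A is repressed by C
        a,            # gene B is activated by A
        a and b,      # gene C requires both A and B
    )

def find_attractor(state, max_steps=64):
    """Iterate until a previously seen state recurs; return the cycle."""
    seen = []
    while state not in seen and len(seen) < max_steps:
        seen.append(state)
        state = step(state)
    start = seen.index(state)
    return seen[start:]  # fixed point (length 1) or limit cycle

cycle = find_attractor((True, False, False))
print(cycle)
```

For this particular rule set, the trajectory from (True, False, False) cycles through five states; changing a single rule (a simulated gene perturbation) generally changes the attractor, which is how such models mimic perturbation experiments.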
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, combined with sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult to handle with the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of the ML algorithms by analyzing the quality and quantity of the spatially structured information they extract from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study on the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process. (C) 2003 Elsevier Ltd. All rights reserved.
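The trend-plus-residuals decomposition at the heart of MLRSS can be sketched as follows, with synthetic coordinates and values standing in for the Chernobyl measurements and a small multilayer perceptron capturing the long-range trend (all data and parameters here are illustrative, not from the paper):

```python
# Sketch of the ML-trend / residual split that precedes sequential
# simulation in MLRSS. Synthetic data; the real study uses fallout data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(300, 2))            # sample coordinates
trend = np.sin(xy[:, 0]) + 0.5 * xy[:, 1]         # smooth large-scale trend
z = trend + rng.normal(0, 0.1, size=300)          # observed values

mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=0).fit(xy, z)
residuals = z - mlp.predict(xy)                   # input to variography and
                                                  # sequential simulation
print(z.var(), residuals.var())
```

If the network has captured the non-stationary trend, the residual variance is a small fraction of the raw data variance, and variography on the residuals then characterizes the remaining spatially structured information.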
Abstract:
This study concerns capacity-demand forecasting for the Fenix information system developed by TietoEnator Oy. The goal of the work is to become familiar with the different subareas of the Fenix system, to find a way to isolate and model the load each subarea places on the system, and to determine on a preliminary basis which parameters affect the load generated by those subareas. Part of this work is to examine different simulation alternatives and to assess their suitability for modeling complex systems. Based on the collected information, a simulation model describing the load on the system's data warehouse is created. Using information obtained from the model together with measurements from the production system, the model is refined to correspond ever more closely to the behavior of the real system. The model is used to examine, for example, the simulated system load and queue behavior. In the production system, changes in the behavior of the different load sources are measured, for example, as a function of the number of users and the time of day. The results of this work are intended to serve as a basis for later follow-up research, in which the parameterization of the subareas will be refined further, the model's ability to describe the real system will be improved, and the scope of the model will be extended.
Abstract:
The objective of this work was to develop and validate a prognosis system for volume yield and basal area of intensively managed loblolly pine (Pinus taeda) stands, using stand and diameter class models that are compatible in their basal area estimates. The data used in the study were obtained from plantations located in northern Uruguay. For model validation without data loss, a three-phase validation scheme was applied: first, the equations were fitted without the validation database; then, model validation was carried out; and, finally, the database was regrouped to recalibrate the parameter values. After the validation and final parameterization of the models, a simulation of the first commercial thinning was carried out. The developed prognosis system was precise and accurate in estimating basal area production per hectare and per diameter class. There was compatibility in basal area estimates between the diameter class and whole-stand models, with a mean difference of -0.01 m² ha⁻¹. The validation scheme applied is logical and consistent, since information on the accuracy and precision of the models is obtained without losing any information in the estimation of the models' parameters.
Abstract:
Static process simulation has traditionally been used to model complex processes for various purposes. However, using static process simulators to prepare holistic examinations aimed at improving profit-making capability requires a lot of work, because producing results requires assessing the applicability of detailed data that may be irrelevant to the objective. The data relevant to the total assessment gets buried by irrelevant data. Furthermore, the models include no examination of maintenance or risk management, and economic examination is often an extra property added to them that could equally be performed with a spreadsheet program. In this work, a process model applicable to holistic economic examinations has been developed. The model is based on the life cycle profit philosophy developed by Hagberg and Henriksson in 1996. The construction of the model has utilized life cycle assessment and life cycle costing methodologies, with a view to developing, above all, a model applicable to the economic examination of complete wholes and requiring only information focused on the aspects essential to the objectives. Life cycle assessment and costing differ from each other in their modeling principles, but features of both methodologies can be used in the development of economic process modeling. Methods applicable to the modeling of complex processes can be examined from the viewpoint of life cycle methodologies, because they too involve collecting and managing large bodies of information and producing information for the needs of decision-makers. The results of the study show that, on the basis of the principles of life cycle modeling, a process model can be created which may be used to produce holistic efficiency examinations of the profit-making capability of a production line with fewer resources than with traditional methods.
The calculations of the model are based, to the greatest possible extent, on the information system of the factory, which means that the accuracy of the results can be improved by developing the information systems so that they provide the best possible information for this kind of examination.
Abstract:
Globalization and new information technologies mean that organizations have to face worldwide competition in rapidly transforming, unpredictable environments, and thus the ability to constantly generate novel and improved products, services and processes has become quintessential for organizational success. Performance in turbulent environments is, above all, influenced by the organization's capability for renewal. Renewal capability consists of the ability of the organization to replicate, adapt, develop and change its assets, capabilities and strategies. An organization with a high renewal capability can sustain its current success factors while at the same time building new strengths for the future. This capability means not only that the organization is able to respond to today's challenges and keep up with the changes in its environment, but also that it can act as a forerunner by creating innovations, at both the tactical and strategic levels of operation, and thereby change the rules of the market. However, even though it is widely agreed that the dynamic capability for continuous learning, development and renewal is a major source of competitive advantage, there is no widely shared view on how organizational renewal capability should be defined, and the field is characterized by a plethora of concepts and definitions. Furthermore, there is a lack of methods for systematically assessing organizational renewal capability. The dissertation aims to bridge these gaps in the existing research by constructing an integrative theoretical framework for organizational renewal capability and by presenting a method for modeling and measuring this capability. The viability of the measurement tool is demonstrated in several contexts, and the framework is also applied to assess renewal in inter-organizational networks.
In this dissertation, organizational renewal capability is examined by drawing on three complementary theoretical perspectives: knowledge management, strategic management and intellectual capital. The knowledge management perspective considers knowledge as inherently social and activity-based, and focuses on the organizational processes associated with its application and development. Within this framework, organizational renewal capability is understood as the capacity for flexible knowledge integration and creation. The strategic management perspective, on the other hand, approaches knowledge in organizations from the standpoint of its implications for the creation of competitive advantage. In this approach, organizational renewal is framed as the dynamic capability of firms. The intellectual capital perspective is focused on exploring how intangible assets can be measured, reported and communicated. From this vantage point, renewal capability is comprehended as the dynamic dimension of intellectual capital, which consists of the capability to maintain, modify and create knowledge assets. Each of the perspectives contributes significantly to the understanding of organizational renewal capability, and the integrative approach presented in this dissertation contributes to the individual perspectives as well as to the understanding of organizational renewal capability as a whole.
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
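The proxy-to-exact mapping can be sketched as follows, with synthetic curves standing in for the geostatistical realizations. Plain PCA on discretized curves is used here as a stand-in for FPCA, and a linear map between the two score spaces plays the role of the learned error model; all data and dimensions are illustrative, not from the paper.

```python
# Sketch of an FPCA-style error model: learn a map from proxy-response
# scores to exact-response scores on a learning set, then predict the
# exact response of a new realization from its proxy response alone.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
w = rng.uniform(0.5, 1.5, size=(40, 2))            # 40 learning realizations
exact = w @ np.vstack([np.sin(2 * np.pi * t),      # "exact" responses built
                       np.cos(2 * np.pi * t)])     # from two functional modes
proxy = 0.8 * exact + 0.1                          # biased, cheap approximation

pca_p, pca_e = PCA(n_components=2), PCA(n_components=2)
scores_p = pca_p.fit_transform(proxy)              # proxy "FPCA" scores
scores_e = pca_e.fit_transform(exact)              # exact "FPCA" scores
error_model = LinearRegression().fit(scores_p, scores_e)

# Predict the exact response of a new realization from its proxy alone.
new_exact = 1.2 * np.sin(2 * np.pi * t) + 0.7 * np.cos(2 * np.pi * t)
new_proxy = (0.8 * new_exact + 0.1)[None, :]
pred = pca_e.inverse_transform(error_model.predict(pca_p.transform(new_proxy)))[0]
print(np.abs(pred - new_exact).max())
```

Because the toy proxy bias is exactly affine, the recovered response matches the exact one to numerical precision; for a realistic proxy the residual of this map is precisely what the dimensionality-reduction diagnostics described above are meant to assess.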
Abstract:
Children who sustain a prenatal or perinatal brain injury in the form of a stroke develop remarkably normal cognitive functions in certain areas, with a particular strength in language skills. A dominant explanation for this is that brain regions from the contralesional hemisphere "take over" their functions, whereas the damaged areas and other ipsilesional regions play much less of a role. However, it is difficult to tease apart whether changes in neural activity after early brain injury are due to damage caused by the lesion or by processes related to postinjury reorganization. We sought to differentiate between these two causes by investigating the functional connectivity (FC) of brain areas during the resting state in human children with early brain injury using a computational model. We simulated a large-scale network consisting of realistic models of local brain areas coupled through anatomical connectivity information of healthy and injured participants. We then compared the resulting simulated FC values of healthy and injured participants with the empirical ones. We found that the empirical connectivity values, especially of the damaged areas, correlated better with simulated values of a healthy brain than those of an injured brain. This result indicates that the structural damage caused by an early brain injury is unlikely to have an adverse and sustained impact on the functional connections, albeit during the resting state, of damaged areas. Therefore, these areas could continue to play a role in the development of near-normal function in certain domains such as language in these children.
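The model-versus-data comparison described above reduces to correlating functional connectivity (FC) matrices entry-wise over their off-diagonal entries. A sketch of that comparison step, with random matrices standing in for the simulated and empirical FC (all values illustrative):

```python
# Sketch of entry-wise FC comparison: correlate the upper triangles of
# two connectivity matrices. Random stand-ins replace real FC data.
import numpy as np

def fc_similarity(fc_a, fc_b):
    iu = np.triu_indices_from(fc_a, k=1)   # off-diagonal upper triangle
    return np.corrcoef(fc_a[iu], fc_b[iu])[0, 1]

rng = np.random.default_rng(3)
empirical = rng.uniform(-1, 1, size=(10, 10))
empirical = (empirical + empirical.T) / 2                 # symmetric FC
healthy_sim = empirical + rng.normal(0, 0.1, (10, 10))    # close to empirical
injured_sim = rng.uniform(-1, 1, size=(10, 10))           # unrelated

print(fc_similarity(empirical, healthy_sim),
      fc_similarity(empirical, injured_sim))
```

In the study's logic, a higher correlation between the empirical FC and the healthy-anatomy simulation than between the empirical FC and the injured-anatomy simulation is what supports the reorganization interpretation.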
Abstract:
The goal of requirements specification is to produce a complete, consistent list of requirements for the desired system at a conceptual level. Business process modeling is quite useful in the early phases of requirements specification. This work studies business process modeling for the development of information systems. A variety of techniques for business process modeling exist today. This work reviews the principles and aspects of business process modeling as well as different modeling techniques. A new method, designed specifically for small and medium-sized software projects, has been developed on the basis of process aspects and UML diagrams.
Abstract:
The purpose of our project is to contribute to earlier diagnosis of AD and better estimates of its severity through automatic analysis based on new biomarkers extracted with non-invasive intelligent methods. The methods selected here are speech biomarkers oriented to Spontaneous Speech and Emotional Response Analysis. The main goal of the present work is therefore feature search in Spontaneous Speech, oriented to pre-clinical evaluation, for the definition of a one-class-classifier test for AD diagnosis. One-class classification differs from multi-class classification in one essential aspect: in one-class classification, it is assumed that only information about one of the classes, the target class, is available. In this work we explore the problem of imbalanced datasets, which is particularly crucial in applications where the goal is to maximize recognition of the minority class, as in medical diagnosis. The use of information about outliers and Fractal Dimension features improves system performance.
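The one-class setting described above can be sketched with scikit-learn's OneClassSVM: the model is fitted only on target-class samples, and new samples are labeled as target (+1) or outlier (-1). Synthetic 2-D features stand in for the speech and Fractal Dimension features; all names, data, and parameter values here are illustrative.

```python
# Minimal one-class classification sketch: train only on the target
# class, then flag samples that do not resemble it. Synthetic features.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
target = rng.normal(0.0, 1.0, size=(200, 2))        # target-class features
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(target)

inliers = clf.predict(rng.normal(0.0, 1.0, size=(50, 2)))   # same distribution
outliers = clf.predict(rng.normal(6.0, 1.0, size=(50, 2)))  # far from target
print(inliers.mean(), outliers.mean())
```

The `nu` parameter bounds the fraction of training samples treated as outliers, which is the knob that matters when, as in medical screening, missing the minority class is costlier than a false alarm.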
Abstract:
This study examines the possibilities for financial modeling in a business unit of the forest industry. The goal is to design and build a financial model for the business unit that makes it possible to analyze and forecast its results. The study follows a constructive research approach. The theoretical framework examines the shaping of existing information, focusing on the needs of information refinement, the requirements imposed by decision-making, and modeling. Second, the theory presents requirements for information from the perspective of organizational control. The empirical data are collected through participant observation, drawing on informal discussions, information systems, and accounting documents. The results show that forecasting operating profit with the model is difficult, because the number of underlying variables is large. For this reason, the model must be built to examine operating profit at as detailed a level as possible. In testing, the model proved more accurate the more detailed the level at which forecasting was done. Testing also showed that the model is usable for short-term business control; in this way it also creates a basis for long-term forecasting.
Abstract:
How a stimulus or a task alters the spontaneous dynamics of the brain remains a fundamental open question in neuroscience. One of the most robust hallmarks of task/stimulus-driven brain dynamics is the decrease of variability with respect to the spontaneous level, an effect seen across multiple experimental conditions and in brain signals observed at different spatiotemporal scales. Recently, it was observed that the trial-to-trial variability and temporal variance of functional magnetic resonance imaging (fMRI) signals decrease in the task-driven activity. Here we examined the dynamics of a large-scale model of the human cortex to provide a mechanistic understanding of these observations. The model allows computing the statistics of synaptic activity in the spontaneous condition and in putative tasks determined by external inputs to a given subset of brain regions. We demonstrated that external inputs decrease the variance, increase the covariances, and decrease the autocovariance of synaptic activity as a consequence of single node and large-scale network dynamics. Altogether, these changes in network statistics imply a reduction of entropy, meaning that the spontaneous synaptic activity outlines a larger multidimensional activity space than does the task-driven activity. We tested this model's prediction on fMRI signals from healthy humans acquired during rest and task conditions and found a significant decrease of entropy in the stimulus-driven activity. Altogether, our study proposes a mechanism for increasing the information capacity of brain networks by enlarging the volume of possible activity configurations at rest and reliably settling into a confined stimulus-driven state to allow better transmission of stimulus-related information.
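The entropy argument can be checked on a toy example: if the fluctuations are approximately Gaussian, differential entropy is H = ½ ln((2πe)^N det Σ), so decreasing the variances in the covariance matrix Σ decreases the entropy even when the covariances grow. The covariance values below are illustrative, not taken from the study.

```python
# Back-of-the-envelope check of the entropy reduction under task-driven
# statistics, assuming approximately Gaussian activity fluctuations.
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of N(0, cov): 0.5*ln((2*pi*e)^n * det(cov))."""
    n = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (n * np.log(2 * np.pi * np.e) + logdet)

rest = np.array([[1.0, 0.2],
                 [0.2, 1.0]])   # spontaneous: larger variances
task = np.array([[0.6, 0.3],
                 [0.3, 0.6]])   # task: smaller variances, larger covariance

print(gaussian_entropy(rest), gaussian_entropy(task))
```

The rest-state entropy exceeds the task-state entropy here because the determinant of the covariance shrinks, mirroring the paper's picture of spontaneous activity outlining a larger activity space than the confined task-driven state.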
Abstract:
BACKGROUND: Variations in physical activity (PA) across nations may be driven by socioeconomic position. As national incomes increase, car ownership comes within reach of more individuals. This report characterizes associations between car ownership and PA in African-origin populations across 5 sites at different levels of economic development and with different transportation infrastructures: the US, Seychelles, Jamaica, South Africa, and Ghana. METHODS: Twenty-five hundred adults, ages 25-45, were enrolled in the study. A total of 2,101 subjects had valid accelerometer-based PA measures (reported as average daily duration of moderate to vigorous PA, MVPA) and complete socioeconomic information. Our primary exposure of interest was whether the household owned a car. We adjusted for socioeconomic position using household income and ownership of common goods. RESULTS: Overall, PA levels did not vary greatly between sites, with the highest levels in South Africa and the lowest in the US. Across all sites, greater PA was consistently associated with male gender, fewer years of education, manual occupations, lower income, and owning fewer material goods. We found heterogeneity across sites in car ownership: after adjustment for confounders, car owners in the US had 24.3 fewer minutes of MVPA than non-car owners in the US (20.7 vs. 45.1 minutes/day of MVPA); in the non-US sites, car owners had an average of 9.7 fewer minutes of MVPA than non-car owners (24.9 vs. 34.6 minutes/day of MVPA). CONCLUSIONS: PA levels are similar across all study sites except Jamaica, despite very different levels of socioeconomic development. Not owning a car in the US is associated with especially high levels of MVPA. As car ownership becomes prevalent in the developing world, strategies to promote alternative forms of active transit may become important.
Abstract:
Recent technology has provided us with new information about the internal structures and properties of biomolecules. This has led to the design of applications based on the underlying biological processes. Proposed applications for biomolecules include, for example, future computers and different types of sensors. One potential biomolecule to be incorporated in such applications is bacteriorhodopsin. Bacteriorhodopsin is a light-sensitive biomolecule that works in a similar way to the light-sensitive cells of the human eye. It reacts to light by undergoing a complicated series of chemical and thermal transitions, during which a proton translocation occurs inside the molecule. When a vast number of molecules is immobilized in a thin film, the photovoltage caused by the proton translocations can be measured, as can changes in the light absorption of the film. This work aimed to develop the electronics needed for the voltage measurements of bacteriorhodopsin-based optoelectronic sensors, in order to obtain more accurate information about the structure and functionality of these sensors. The sensors used in this work contain a thick film of bacteriorhodopsin immobilized in polyvinyl alcohol, placed between two transparent electrodes. The result of this work is an instrumentation amplifier that can be placed in a small space very close to the sensor. Using this amplifier, the original photovoltage can be measured in more detail; the response measured with it revealed two different components that could not be distinguished earlier. Another result of this work is a model for the photoelectric response in dry polymer films.