999 results for THEORETIC MODELS
Abstract:
Development of research methods requires a systematic review of their status. This study focuses on the use of Hierarchical Linear Modeling methods in psychiatric research. The evaluation covers 207 documents published up to 2007 and indexed in the ISI Web of Knowledge databases; the analyses focus on the 194 articles in the sample. Bibliometric methods are used to describe publication patterns. Results indicate a growing interest in applying the models and a consolidation of the methods after 2000. Both Lotka's and Bradford's distributions are fitted to the data.
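The bibliometric fit mentioned above can be made concrete. As a minimal sketch (not the authors' code), Lotka's classical inverse-square law predicts the fraction of authors with exactly n publications; the exponent and truncation point below are illustrative assumptions:

```python
def lotka_fraction(n, exponent=2.0, n_max=1000):
    """Fraction of authors with exactly n publications under Lotka's law.

    exponent=2.0 is the classical inverse-square form; n_max truncates
    the normalising sum. Both values are illustrative assumptions.
    """
    norm = sum(1.0 / k ** exponent for k in range(1, n_max + 1))
    return (1.0 / n ** exponent) / norm
```

Under the inverse-square form, roughly 60% of authors are predicted to contribute a single paper, and the fraction with n papers falls off as 1/n².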
Abstract:
Alpine tree-line ecotones are characterized by marked changes at small spatial scales that may result in a variety of physiognomies. A set of alternative individual-based models was tested with data from four contrasting Pinus uncinata ecotones in the central Spanish Pyrenees to reveal the minimal subset of processes required for tree-line formation. A Bayesian approach combined with Markov chain Monte Carlo methods was employed to obtain the posterior distribution of model parameters, allowing the use of model selection procedures. The main features of real tree lines emerged only in models considering nonlinear responses in individual rates of growth or mortality with respect to the altitudinal gradient. Variation in tree-line physiognomy reflected mainly changes in the relative importance of these nonlinear responses, while other processes, such as dispersal limitation and facilitation, played a secondary role. Different nonlinear responses also determined the presence or absence of krummholz, in agreement with recent findings highlighting the different responses of diffuse versus abrupt or krummholz tree lines to climate change. The method presented here can be widely applied in individual-based simulation models and will turn model selection and evaluation in this type of model into a more transparent, effective, and efficient exercise.
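As a toy illustration of why a nonlinear response matters (this is not the study's model; the midpoint, steepness, and growth values are arbitrary assumptions), a sigmoidal mortality response to altitude is enough to produce an abrupt tree line from an otherwise uniform growth rate:

```python
import math

def equilibrium_density(alt, growth=0.5, midpoint=2200.0, steepness=0.05):
    """Toy equilibrium tree density along an altitudinal gradient.

    Mortality rises nonlinearly (sigmoidally) with altitude; density is
    the surplus of a constant growth rate over mortality. All parameter
    values are illustrative assumptions.
    """
    mortality = 1.0 / (1.0 + math.exp(-steepness * (alt - midpoint)))
    return max(0.0, growth - mortality)
```

With these values, density is nearly constant below ~2100 m and collapses to zero within a few tens of metres around the midpoint, i.e. an abrupt rather than diffuse tree line.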
Abstract:
The aim of the thesis was to examine the factors affecting the forecasting accuracy of innovation diffusion models. Using a logistic model, the thesis forecast the diffusion of mobile phone subscriptions in three European countries: Finland, France, and Greece. The theoretical part focused on forecasting the diffusion of innovations with diffusion models, with particular emphasis on the models' predictive ability and their usability in different situations. The empirical part concentrated on forecasting with a logistic diffusion model calibrated with time series aggregated in different ways; the resulting forecasts were examined to determine the effects of the level of data aggregation. The research design was empirical, involving the study of the forecasting accuracy of the logistic diffusion model while varying the aggregation level of the sample data. The data fed into the diffusion model can be collected monthly and per operator without affecting forecasting accuracy. The data must, however, include the inflection point of the diffusion curve, that is, the long-term peak-demand point.
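The logistic diffusion model referred to above has a standard closed form; here is a minimal sketch (the parameter values are illustrative, not calibrated results from the thesis):

```python
import math

def logistic_diffusion(t, m=100.0, a=-4.0, b=0.5):
    """Cumulative adoption at time t under a logistic diffusion model.

    m is the saturation level; a and b control timing and speed.
    The inflection point (peak demand) occurs where adoption reaches
    m/2, i.e. at t = -a/b, which is why the calibration data must span
    it. All parameter values here are illustrative assumptions.
    """
    return m / (1.0 + math.exp(-(a + b * t)))
```

With these values the inflection falls at t = 8: adoption is about 1.8 at t = 0, exactly m/2 at t = 8, and approaches the saturation level m for large t.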
Abstract:
The aim of the study was to analyse theories related to business models and, on the basis of different models, to construct a clear framework that companies can use when defining and analysing their business models. The companies studied could be divided into internally focused and externally oriented ones; based on this division, it was possible to draw conclusions about the potential of their business models. The study was qualitative in nature. Its result is a tool suitable for constructing and analysing business models, which can be used in a company's strategic planning.
Abstract:
Recent literature evidences differential associations of personal and general just-world beliefs with constructs in the interpersonal domain. In line with this research, we examine the respective relationships of each just-world belief with the Five-Factor and HEXACO models of personality in a representative sample of the working population of Switzerland and a sample of the general US population, respectively. One suppressor effect was observed in both samples: neuroticism and emotionality were positively associated with general just-world belief, but only after controlling for personal just-world belief. In addition, agreeableness was positively, and honesty-humility negatively, associated with general just-world belief but unrelated to personal just-world belief. Conscientiousness was consistently unrelated to either just-world belief, and extraversion and openness to experience showed unstable coefficients across samples. We discuss these points in light of just-world theory and their implications for future research taking both dimensions into account.
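The suppressor effect reported above can be illustrated with a synthetic sketch (all data and coefficients below are fabricated for illustration only, not the study's): a predictor nearly uncorrelated with the outcome at zero order can show a clearly positive coefficient once the suppressor is partialled out.

```python
import random

random.seed(42)
n = 5000
# Fabricated toy variables: N ~ "neuroticism", P ~ "personal just-world
# belief" (the suppressor), G ~ "general just-world belief".
N = [random.gauss(0, 1) for _ in range(n)]
P = [-N[i] + random.gauss(0, 1) for i in range(n)]          # P suppresses N
G = [P[i] + N[i] + random.gauss(0, 0.5) for i in range(n)]  # true partial effects: +1, +1

def mean(v):
    return sum(v) / len(v)

def cov(x, y):
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

# Zero-order correlation of N with G: near zero by construction.
r_NG = cov(N, G) / (cov(N, N) * cov(G, G)) ** 0.5

# Two-predictor OLS of G on P and N via the 2x2 normal equations.
spp, snn, spn = cov(P, P), cov(N, N), cov(P, N)
spg, sng = cov(P, G), cov(N, G)
det = spp * snn - spn * spn
b_P = (snn * spg - spn * sng) / det
b_N = (spp * sng - spn * spg) / det
# b_N is clearly positive even though r_NG is ~0: a suppressor effect.
```

The construction mirrors the reported pattern: N predicts G only after P is controlled for, because P carries the variance that masks N's contribution at zero order.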
Abstract:
Ingvaldsen et al. comment on our study assessing global fish interchanges between the North Atlantic and Pacific oceans for more than 500 species over the entire 21st century. They propose that discrepancies between our model projections and observed data for cod in the Barents Sea result from the choice of Atmosphere-Ocean General Circulation Models (AOGCMs). We address this assertion here, re-running the cod model with additional observation data from the Barents Sea, and show that the lack of open-access, archived data for the Barents Sea was the primary cause of the local prediction mismatch. This finding underscores the importance of systematically depositing biodiversity data in global databases.
Abstract:
Biotic interactions are known to affect the composition of species assemblages via several mechanisms, such as competition and facilitation. However, most spatial models of species richness do not explicitly consider inter-specific interactions. Here, we test whether incorporating biotic interactions into high-resolution models alters predictions of species richness as hypothesised. We included key biotic variables (the cover of three dominant arctic-alpine plant species) in two methodologically divergent species richness modelling frameworks, stacked species distribution models (SSDM) and macroecological models (MEM), for three ecologically and evolutionarily distinct taxonomic groups (vascular plants, bryophytes and lichens). Predictions from models including biotic interactions were compared with those of models based on climatic and abiotic data only. Including plant-plant interactions consistently and significantly lowered bias in species richness predictions and increased predictive power for independent evaluation data when compared with the conventional models based on climatic and abiotic data. The improvements in predictions held irrespective of the modelling framework or taxonomic group used. The global biodiversity crisis necessitates accurate predictions of how changes in biotic and abiotic conditions will affect species richness patterns. Here, we demonstrate that models of the spatial distribution of species richness can be improved by incorporating biotic interactions, and thus that these key predictors must be accounted for in biodiversity forecasts.
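The SSDM framework named above can be sketched generically (this is an illustration of stacking, not the authors' implementation): per-species occurrence probabilities from individual distribution models are stacked, and the expected richness of each grid cell is their sum.

```python
def stacked_richness(prob_maps):
    """Expected species richness per grid cell from stacked SDMs.

    prob_maps: one list per species of occurrence probabilities, all
    defined over the same sequence of grid cells. The expected richness
    of a cell is the sum of the per-species occurrence probabilities.
    """
    return [sum(cell_probs) for cell_probs in zip(*prob_maps)]
```

For example, three species with probabilities 0.9/0.8/0.5 in one cell and 0.1/0.2/0.5 in another give expected richness of roughly 2.2 and 0.8, respectively.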
Abstract:
The aim of the thesis was to examine the business models of telecommunications equipment manufacturers. The thesis is divided into a theoretical and an empirical part. The theoretical part focuses mainly on defining the concept of a business model; based on existing definitions and on terms closely related to the concept, a new business model framework was created. The empirical part focuses on defining the business model of the case company, Cisco Systems, and on describing its development. The development of the business model was followed over a two-year period, mainly through the company's press releases, articles, and other public material. In addition to Cisco, the empirical part examined the development of the business models of eight other equipment manufacturers. The main goal of the empirical part was to determine how the business models of telecommunications equipment manufacturers are developing now and will develop in the future.
Abstract:
This symposium presents research from different contexts to improve our collective understanding of a variety of aspects of mixed forms of service delivery, be they mixed contracting at the level of the market (more common in the U.S.) or mixed management and ownership at the level of the firm (more common in Europe). The articles included in this special symposium examine the factors that give rise to mixed forms of service delivery (e.g., economic and fiscal stress, regulatory flexibility, geography, management) and how these factors shape their design and operation. The articles also explore the performance of mixed forms of service delivery relative to more conventional arrangements such as contracted or direct service delivery. Together, they contribute to a better theoretical and conceptual understanding of mixed/hybrid forms of service delivery.
Abstract:
The main objective of this study was to determine what kinds of business models are suitable for conducting mobile internet business in emerging markets. A further aim was to identify the factors that affect the diffusion of the mobile internet. The study used both quantitative and qualitative research methods. Using cluster analysis, 40 European countries were grouped into internally homogeneous country clusters; with these clusters it was possible to design business models suited to different types of markets. Interviews elicited expert views on the factors affecting the diffusion of the mobile internet in emerging markets. The study found that the most important business model elements in emerging markets are pricing, the value proposition, and the value network. A deficient fixed-line network was found to be one of the most important factors promoting the diffusion of the mobile internet in emerging markets.
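The country-clustering step can be sketched generically with Lloyd's k-means (a minimal stdlib sketch; the feature pairs in the usage below are invented placeholders, nothing here comes from the thesis):

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's k-means on tuples of features.

    Returns the final cluster centres and one label per point. The
    initialisation (random sample of points) and iteration count are
    simple illustrative choices.
    """
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre.
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centres[c])),
            )
        # Update step: each centre moves to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centres[c] = tuple(
                    sum(m[d] for m in members) / len(members)
                    for d in range(len(members[0]))
                )
    return centres, labels
```

In this setting each point would be a country described by features such as, hypothetically, mobile penetration and income level, and each resulting cluster a market type for which a business model can be designed.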
Abstract:
A scheme to generate long-range spin-spin interactions between three-level ions in a chain is presented, providing a feasible experimental route to the rich physics of well-known SU(3) models. In particular, we demonstrate different signatures of quantum chaos which can be controlled and observed in experiments with trapped ions.
Abstract:
This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When using MCMC methods, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have been the subject of intensive research lately, as they open the way to an essentially easier use of the methodology; the lack of user-friendly computer programs has been a major obstacle to wider adoption of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension of DRAM for model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models to the same observations simultaneously. The methods were developed with the needs of modelling applications typical of the environmental sciences in mind, and the development work was pursued in several application projects. The applications presented in this work are: a wintertime oxygen concentration model for Lake Tuusulanjärvi and adaptive control of the aerator; a nutrition model for Lake Pyhäjärvi and lake management planning; and validation of the algorithms of the GOMOS ozone remote sensing instrument on board the European Space Agency's Envisat satellite, together with a study of the effects of aerosol model selection on the GOMOS algorithm.
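The adaptation idea underlying methods like DRAM can be sketched in one dimension (a generic adaptive Metropolis sketch, not the thesis's DRAM code; the initial scale, burn-in length, and 2.4 scaling constant are standard but illustrative choices):

```python
import math
import random

def adaptive_metropolis(log_post, x0, n=20000, seed=1):
    """1-D adaptive Metropolis sampler.

    The Gaussian proposal scale is tuned from the running variance of
    the chain (Welford update), so the proposal distribution learns
    from the target as the simulation proceeds.
    """
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = [x]
    scale = 1.0              # initial proposal scale (assumption)
    mean, m2 = x, 0.0        # running moments of the chain
    for i in range(1, n):
        prop = x + rng.gauss(0.0, scale)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
        # Welford update of the chain mean and variance.
        delta = x - mean
        mean += delta / (i + 1)
        m2 += delta * (x - mean)
        if i > 100:          # start adapting after a short burn-in
            scale = 2.4 * math.sqrt(m2 / i) + 1e-6
    return chain
```

Run on a standard-normal log-posterior, `adaptive_metropolis(lambda x: -0.5 * x * x, 0.0)`, the chain settles near the commonly cited 1-D proposal scale of about 2.4 standard deviations and reproduces the target's mean and variance.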
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost of performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine-learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and increases the accuracy and robustness of the uncertainty propagation.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. Moreover, the individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is useful not only in the context of uncertainty propagation but also, and maybe even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of each proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply the methodology to a saline-intrusion problem in a coastal aquifer.
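The two-stage (delayed-acceptance) mechanism can be sketched with scalar stand-ins for the flow models (a generic sketch only; the Gaussian "exact" target and deliberately biased "proxy" used below are placeholders, not the thesis's simulators):

```python
import math
import random

def two_stage_mcmc(log_exact, log_proxy, x0, n=20000, step=1.0, seed=3):
    """Delayed-acceptance (two-stage) MCMC with a symmetric proposal.

    A cheap proxy screens proposals in stage 1, so the expensive exact
    model is evaluated only for promising ones; the stage-2 ratio
    corrects the proxy bias, so the chain still targets log_exact.
    """
    rng = random.Random(seed)
    x = x0
    le, lp = log_exact(x), log_proxy(x)
    chain, exact_calls = [x], 1
    for _ in range(n - 1):
        y = x + rng.gauss(0.0, step)
        lpy = log_proxy(y)
        # Stage 1: accept/reject with the proxy only (no exact-model cost).
        if math.log(rng.random()) < lpy - lp:
            ley = log_exact(y)
            exact_calls += 1
            # Stage 2: correct for the proxy so the exact target is kept.
            if math.log(rng.random()) < (ley - le) - (lpy - lp):
                x, le, lp = y, ley, lpy
        chain.append(x)
    return chain, exact_calls
```

With a standard-normal exact target and a proxy shifted by 0.3, the stage-2 correction removes the proxy's bias, and the count of exact-model calls stays well below the number of proposals, which is the whole point of the scheme.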