925 results for Finite model generation
Abstract:
This paper presents a general expression to predict breeding values using animal models when the base population is selected, i.e. the means and variances of breeding values in the base generation differ among individuals. Rules for forming the mixed model equations are also presented. A numerical example illustrates the procedure.
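The mixed model equations mentioned above can be made concrete with a small numerical sketch. The following example solves Henderson's mixed model equations for a toy animal model with a single fixed effect; the variance ratio, relationship matrix and records are invented for illustration, and the sketch does not implement the paper's selected-base extension.

```python
import numpy as np

# Henderson's mixed model equations (MME) for an animal model
# y = Xb + Zu + e, with var(u) = A*sigma_u^2 and var(e) = I*sigma_e^2.
# All data below are illustrative, not from the paper.

def solve_mme(X, Z, A_inv, y, lam):
    """Solve the MME for fixed effects b and breeding values u.
    lam = sigma_e^2 / sigma_u^2 (variance ratio)."""
    lhs = np.block([[X.T @ X, X.T @ Z],
                    [Z.T @ X, Z.T @ Z + lam * A_inv]])
    rhs = np.concatenate([X.T @ y, Z.T @ y])
    sol = np.linalg.solve(lhs, rhs)
    nb = X.shape[1]
    return sol[:nb], sol[nb:]

# Toy example: 4 animals, one overall mean as the only fixed effect.
X = np.ones((4, 1))
Z = np.eye(4)
A_inv = np.eye(4)                       # unrelated animals for simplicity
y = np.array([2.0, 3.0, 1.5, 2.5])
b, u = solve_mme(X, Z, A_inv, y, lam=2.0)
```

With unrelated animals and lam = 2, each predicted breeding value is simply one third of the animal's deviation from the estimated mean, so the u sum to zero.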
Abstract:
Shallow upland drains, known as grips, have been hypothesized as responsible for increased downstream flow magnitudes. Observations provide counterfactual evidence, often relating to the difficulty of inferring conclusions from statistical correlation and paired catchment comparisons, and the complexity of designing field experiments to test grip impacts at the catchment scale. Drainage should provide drier antecedent moisture conditions, providing more storage at the start of an event; however, grips have higher flow velocities than overland flow, thus potentially delivering flow more rapidly to the drainage network. We develop and apply a model for assessing the impacts of grips on flow hydrographs. The model was calibrated on the gripped case, and then the gripped case was compared with the intact case by removing all grips. This comparison showed that even given parameter uncertainty, the intact case had significantly higher flood peaks and lower baseflows, mirroring field observations of the hydrological response of intact peat. The simulations suggest that this is because delivery effects may not translate into catchment-scale impacts for three reasons. First, in our case, the proportions of flow path lengths that were hillslope were not changed significantly by gripping. Second, the structure of the grip network as compared with the structure of the drainage basin militated against grip-related increases in the concentration of runoff in the drainage network, although it did marginally reduce the mean timing of that concentration at the catchment outlet. Third, the effect of the latter upon downstream flow magnitudes can only be assessed by reference to the peak timing of other tributary basins, emphasizing that drain effects are both relative and scale dependent. However, given the importance of hillslope flow paths, we show that if upland drainage causes significant changes in surface roughness on hillslopes, then critical feedbacks may impact upon the speed of hydrological response. Copyright (c) 2012 John Wiley & Sons, Ltd.
Abstract:
The advent of new advances in mobile computing has changed the way we do our daily work, even enabling us to perform collaborative activities. However, current groupware approaches do not offer an integrated and efficient solution that jointly tackles the flexibility and heterogeneity inherent to mobility as well as the awareness aspects intrinsic to collaborative environments. Issues related to the diversity of contexts of use are collected under the term plasticity. A great number of tools have emerged offering solutions to some of these issues, although always focused on individual scenarios. We are working on reusing and specializing some already existing plasticity tools for groupware design. The aim is to offer the benefits of plasticity and awareness jointly, aiming at real collaboration and a deeper understanding of multi-environment groupware scenarios. In particular, this paper presents a conceptual framework intended as a reference for the generation of plastic User Interfaces for collaborative environments in a systematic and comprehensive way. Starting from a previous conceptual framework for individual environments, inspired by the model-based approach, we introduce specific components and considerations related to groupware.
Abstract:
Electricity is a strategic service in modern societies. Thus, it is extremely important for governments to be able to guarantee an affordable and reliable supply, which depends to a great extent on an adequate expansion of the generation and transmission capacities. Cross-border integration of electricity markets creates new challenges for the regulators, since the evolution of the market is now influenced by the characteristics and policies of neighbouring countries. There is still no agreement on why and how regions should integrate their electricity markets. The aim of this thesis is to improve the understanding of integrated electricity markets and how their behaviour depends on the prevailing characteristics of the national markets and the policies implemented in each country. We developed a simulation model to analyse under what circumstances integration is desirable. This model is used to study three cases of interconnection between two countries. Several policies regarding interconnection expansion and operation, combined with different generation capacity adequacy mechanisms, are evaluated. The thesis is composed of three papers. The first paper presents a detailed description of the model and an analysis of the case of Colombia and Ecuador. It shows that market coupling can bring important benefits, but the relative size of the countries can lead to import dependency issues in the smaller country. The second paper compares the case of Colombia and Ecuador with the case of Great Britain and France. These countries are significantly different in terms of electricity sources, hydro-storage capacity, complementarity and demand growth. We show that complementarity is essential in order to obtain benefits from integration, while higher demand growth and hydro-storage capacity can lead to counterintuitive outcomes, thus complicating policy design.
In the third paper, an extended version of the model presented in the first paper is used to analyse the case of Finland and its interconnection with Russia. Different trading arrangements are considered. We conclude that unless interconnection capacity is expanded, the current trading arrangement, where a single trader owns the transmission rights and limits the flow during peak hours, is beneficial for Finland. In case of interconnection expansion, market coupling would be preferable. We also show that the costs of maintaining a strategic reserve in Finland are justified in order to limit import dependency, while still reaping the benefits of interconnection. In general, we conclude that electricity market integration can bring benefits if the right policies are implemented. However, a large interconnection capacity is only desirable if the countries exhibit significant complementarity and trust each other. The outcomes of policies aimed at guaranteeing security of supply at a national level can be quite counterintuitive due to the interactions between neighbouring countries and their effects on interconnection and generation investments. Thus, it is important for regulators to understand these interactions and coordinate their decisions in order to take advantage of the interconnection without putting security of supply at risk. But it must be taken into account that even when integration brings benefits to the region, some market participants lose and might try to hinder the integration process.
Abstract:
The analysis of the shape of excitation-emission matrices (EEMs) is a relevant tool for exploring the origin, transport and fate of dissolved organic matter (DOM) in aquatic ecosystems. Within this context, the decomposition of EEMs is acquiring notable relevance. A simple mathematical algorithm that automatically deconvolves individual EEMs is described, creating new possibilities for the comparison of DOM fluorescence properties and of EEMs that are very different from each other. A mixture model approach is adopted to decompose complex surfaces into sub-peaks. The Laplacian operator and the Nelder-Mead optimisation algorithm are implemented to identify and automatically locate potential peaks in the EEM landscape. The EEMs of a simple artificial mixture of fluorophores and of DOM samples collected in a Mediterranean river are used to describe the model application and to illustrate a strategy that optimises the search for the best output.
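As a rough illustration of the mixture-model idea (not the paper's algorithm), the sketch below fits a single 2-D Gaussian sub-peak to a synthetic EEM surface with the Nelder-Mead algorithm via SciPy; the peak shape, wavelength grid, noise level and starting values are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# One Gaussian sub-peak on an excitation/emission grid (nm).
def gauss2d(ex, em, amp, ex0, em0, sx, sy):
    return amp * np.exp(-((ex - ex0)**2 / (2*sx**2)
                          + (em - em0)**2 / (2*sy**2)))

ex, em = np.meshgrid(np.linspace(250, 450, 41),
                     np.linspace(300, 550, 51))
true = gauss2d(ex, em, 1.0, 320.0, 420.0, 25.0, 35.0)   # synthetic "EEM"
noisy = true + 0.01 * np.random.default_rng(0).standard_normal(true.shape)

# Sum of squared residuals between the surface and the candidate peak.
def sse(p):
    return np.sum((noisy - gauss2d(ex, em, *p))**2)

# Nelder-Mead (derivative-free) refinement from a rough initial guess.
fit = minimize(sse, x0=[0.8, 310.0, 410.0, 20.0, 30.0],
               method='Nelder-Mead',
               options={'xatol': 1e-4, 'fatol': 1e-8, 'maxiter': 5000})
```

In the full approach described in the abstract, candidate peak locations would first be flagged (e.g. via the Laplacian of the surface) and several such sub-peaks fitted jointly.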
Abstract:
Planarian flatworms are an exception among bilaterians in that they possess a large pool of adult stem cells that enables them to promptly regenerate any part of their body, including the brain. Although known for two centuries for their remarkable regenerative capabilities, planarians have only recently emerged as an attractive model for studying regeneration and stem cell biology. This revival is due in part to the availability of a sequenced genome and the development of new technologies, such as RNA interference and next-generation sequencing, which facilitate studies of planarian regeneration at the molecular level. Here, we highlight why planarians are an exciting tool in the study of regeneration and its underlying stem cell biology in vivo, and discuss the potential promises and current limitations of this model organism for stem cell research and regenerative medicine.
Abstract:
We analyze the neutron skin thickness in finite nuclei with the droplet model and effective nuclear interactions. The ratio of the bulk symmetry energy J to the so-called surface stiffness coefficient Q has in the droplet model a prominent role in driving the size of neutron skins. We present a correlation between the density derivative of the nuclear symmetry energy at saturation and the J/Q ratio. We emphasize the role of the surface widths of the neutron and proton density profiles in the calculation of the neutron skin thickness when one uses realistic mean-field effective interactions. Next, taking as experimental baseline the neutron skin sizes measured in 26 antiprotonic atoms along the mass table, we explore constraints arising from neutron skins on the value of the J/Q ratio. The results favor a relatively soft symmetry energy at subsaturation densities. Our predictions are compared with the recent constraints derived from other experimental observables. Though the various extractions predict different ranges of values, one finds a narrow window L∼45-75 MeV for the coefficient L that characterizes the density derivative of the symmetry energy that is compatible with all the different empirical indications.
Abstract:
In the last decade, an important debate has arisen about the characteristics of today's students due to their intensive experience as users of ICT. The main belief is that frequent use of technologies in everyday life implies that competent users are able to transfer their digital skills to learning activities. However, empirical studies developed in different countries reveal similar results suggesting that the "digital native" label does not provide evidence of a better use of technology to support learning. The debate has to go beyond the characteristics of the new generation and focus on the implications of being a learner in a digitalised world. This paper is based on the hypothesis that the use of technology to support learning is not related to whether a student belongs to the Net Generation, but that it is mainly influenced by the teaching model. The study compares behaviour and preferences towards ICT use in two groups of university students: face-to-face students and online students. A questionnaire was applied to a sample of students from five universities with different characteristics (one offers online education and four offer face-to-face education with LMS teaching support). Findings suggest that although access to and use of ICT is widespread, the influence of teaching methodology is very decisive. For academic purposes, students seem to respond to the requirements of their courses, programmes, and universities. There is a clear relationship between students' perception of usefulness regarding certain ICT resources and their teachers' suggested uses of technologies. The most highly rated technologies correspond with those proposed by teachers. The study shows that the educational model (face-to-face or online) has a stronger influence on students' perception of usefulness regarding ICT support for learning than the fact of being a digital native.
Abstract:
Fourth-generation mobile networks gather all telecommunication networks and services around the Internet. This shift turns the old vertical telecommunication networks, in which the services of a given network are available only to that network's terminals, into a horizontal model in which terminals use their own network as an access path to Internet services. This thesis presents the idea of local services in a fourth-generation mobile network. A fourth-generation mobile network combines traditional telecommunication services with Internet services and enables the creation of new types of services. The evolution of the TCP/IP protocols and the Internet is presented. Broadband, short-range radio technologies used as wireless access to the Internet are discussed. The evolution towards fourth-generation mobile networks is described by presenting past, current and future mobile networks and their services. Forecasts of future developments in services and markets are discussed. The architecture of the fourth-generation mobile network enables local services that are available only within a single local 4G network. Local services can be tailored to each user using profile information and location data. The feasibility and potential of local services are assessed using results from the service pilot of the 4G project at Lappeenranta University of Technology.
Abstract:
Maximum entropy modeling (Maxent) is a widely used algorithm for predicting species distributions across space and time. Properly assessing the uncertainty in such predictions is non-trivial and requires validation with independent datasets. Notably, model complexity (number of model parameters) remains a major concern in relation to overfitting and, hence, transferability of Maxent models. An emerging approach is to validate the cross-temporal transferability of model predictions using paleoecological data. In this study, we assess the effect of model complexity on the performance of Maxent projections across time using two European plant species (Alnus glutinosa (L.) Gaertn. and Corylus avellana L.) with an extensive late Quaternary fossil record in Spain as a case study. We fit 110 models with different levels of complexity under present-day conditions and tested model performance using AUC (area under the receiver operating characteristic curve) and AICc (corrected Akaike Information Criterion) through the standard procedure of randomly partitioning current occurrence data. We then compared these results to an independent validation by projecting the models to mid-Holocene (6000 years before present) climatic conditions in Spain to assess their ability to predict fossil pollen presence-absence and abundance. We find that calibrating Maxent models with default settings results in overly complex models. While model performance increased with model complexity when predicting current distributions, it was highest at intermediate complexity when predicting mid-Holocene distributions. Hence, models of intermediate complexity provided the best trade-off for predicting species distributions across time. Reliable temporal model transferability is especially relevant for forecasting species distributions under future climate change. Consequently, species-specific model tuning should be used to find the best modeling settings to control for complexity, notably with paleoecological data to independently validate model projections. For cross-temporal projections of species distributions for which paleoecological data are not available, models of intermediate complexity should be selected.
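The complexity penalty behind AICc can be illustrated directly. The sketch below implements the standard formula, AICc = 2k − 2 ln L + 2k(k+1)/(n − k − 1), with invented log-likelihood values rather than Maxent output: a more complex model must improve the likelihood enough to offset its penalty.

```python
# Corrected Akaike Information Criterion: k = number of model
# parameters, n = number of occurrence records. Lower is better.
def aicc(log_likelihood, k, n):
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Illustrative comparison on n = 50 records: the complex model (k = 20)
# fits better (higher log-likelihood) yet is penalised more heavily.
simple = aicc(-120.0, 5, 50)     # ~251.4
complex_ = aicc(-110.0, 20, 50)  # ~289.0
```

Here the simpler model wins despite its worse likelihood, which mirrors why default-complexity Maxent models can transfer poorly across time.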
Abstract:
We consider robust parametric procedures for univariate discrete distributions, focusing on the negative binomial model. The procedures are based on three steps: first, a very robust, but possibly inefficient, estimate of the model parameters is computed; second, this initial model is used to identify outliers, which are then removed from the sample; third, a corrected maximum likelihood estimator is computed with the remaining observations. The final estimate inherits the breakdown point (bdp) of the initial one and its efficiency can be significantly higher. Analogous procedures were proposed in [1], [2], [5] for the continuous case. A comparison of the asymptotic bias of various estimates under point contamination points to the minimum Neyman's chi-squared disparity estimate as a good choice for the initial step. Various minimum disparity estimators were explored by Lindsay [4], who showed that the minimum Neyman's chi-squared estimate has a 50% bdp under point contamination; in addition, it is asymptotically fully efficient at the model. However, the finite sample efficiency of this estimate under the uncontaminated negative binomial model is usually much lower than 100% and the bias can be strong. We show that its performance can then be greatly improved using the three-step procedure outlined above. In addition, we compare the final estimate with the procedure described in
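The three-step structure can be sketched on a simpler model. The code below uses a Poisson model instead of the negative binomial, and a crude median-based initial estimate standing in for the minimum Neyman's chi-squared disparity estimate; the data, contamination and cutoff are all invented for illustration.

```python
import numpy as np
from scipy import stats

# Contaminated sample: 95 Poisson(3) observations plus 5 gross outliers.
rng = np.random.default_rng(1)
sample = np.concatenate([rng.poisson(3.0, 95), np.full(5, 40)])

# Step 1: very robust (but inefficient) initial estimate of the rate.
lam0 = np.median(sample)

# Step 2: flag observations implausible under the initial model.
cutoff = stats.poisson.ppf(0.999, lam0)
clean = sample[sample <= cutoff]

# Step 3: maximum likelihood on the cleaned sample
# (for Poisson, the MLE of the rate is the sample mean).
lam_hat = clean.mean()
```

The cleaned estimate stays near the true rate of 3, while the naive mean of the contaminated sample is pulled far upward by the outliers.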
Abstract:
Coexisting workloads from professional, household and family, and caregiving activities for frail parents expose middle-aged individuals, the so-called "Sandwich Generation" (SG), to potential health risks. Current trends suggest that this situation will continue or increase. Thus, SG health promotion has become a nursing concern. Most existing research considers coexisting workloads a priori pathogenic. Most studies have examined the association of one, versus two, of these three activities with health. Few studies have used a nursing perspective. This article presents the development of a framework based on a nursing model. We integrated Siegrist's Effort-Reward Imbalance middle-range theory into the Neuman Systems Model. The latter was chosen for its salutogenic orientation, its attention to preventive nursing interventions and the opportunity it provides to simultaneously consider positive and negative perceptions of SG health and SG coexisting workloads. Finally, it facilitated a theoretical identification of health-protective factors.
Abstract:
Among unidentified gamma-ray sources in the galactic plane, there are some that present significant variability and have been proposed to be high-mass microquasars. To deepen the study of the possible association between variable low galactic latitude gamma-ray sources and microquasars, we have applied a leptonic jet model based on the microquasar scenario that reproduces the gamma-ray spectrum of three unidentified gamma-ray sources, 3EG J1735-1500, 3EG J1828+0142 and GRO J1411-64, and is consistent with the observational constraints at lower energies. We conclude that if these sources were generated by microquasars, the particle acceleration processes could not be as efficient as in other objects of this type that present harder gamma-ray spectra. Moreover, the dominant mechanism of high-energy emission should be synchrotron self-Compton (SSC) scattering, and the radio jets may only be observed at low frequencies. For each particular case, further predictions of jet physical conditions and variability generation mechanisms have been made in the context of the model. Although there might be other candidates able to explain the emission coming from these sources, microquasars cannot be excluded as counterparts. Observations performed by the next generation of gamma-ray instruments, like GLAST, are required to test the proposed model.
Abstract:
Quartz Tuning Fork (QTF)-based Scanning Probe Microscopy (SPM) is an important field of research. A suitable model for the QTF is important to obtain quantitative measurements with these devices. Analytical models have the limitation of being based on the double-cantilever configuration. In this paper, we present an electromechanical finite element model of the QTF electrically excited with two free prongs. The model goes beyond the state of the art of numerical simulations currently found in the literature for this QTF configuration. We present the first numerical analysis of both the electrical and mechanical behavior of QTF devices. Experimental measurements obtained with 10 units of the same QTF model validate the finite element model with good agreement.
Abstract:
Context. The understanding of Galaxy evolution can be facilitated by the use of population synthesis models, which allow one to test hypotheses on the star formation history, star evolution, as well as the chemical and dynamical evolution of the Galaxy. Aims. The new version of the Besançon Galaxy Model (hereafter BGM) aims to provide a more flexible and powerful tool to investigate the Initial Mass Function (IMF) and Star Formation Rate (SFR) of the Galactic disc. Methods. We present a new strategy for the generation of thin disc stars which treats the IMF, SFR and evolutionary tracks as free parameters. We have updated most of the ingredients for the star count production and, for the first time, binary stars are generated in a consistent way. We keep in this new scheme the local dynamical self-consistency as in Bienaymé et al. (1987). We then compare simulations from the new model with Tycho-2 data and the local luminosity function, as a first test to verify and constrain the new ingredients. The effects of changing thirteen different ingredients of the model are systematically studied. Results. For the first time, a full-sky comparison is performed between the BGM and data. This strategy allows us to constrain the IMF slope at high masses, which is found to be close to 3.0, excluding a shallower slope such as Salpeter's. The SFR is found to be decreasing whatever IMF is assumed. The model is compatible with a local dark matter density of 0.011 M⊙ pc−3, implying that there is no compelling evidence for a significant amount of dark matter in the disc. While the model is fitted to Tycho-2 data, a magnitude-limited sample with V < 11, we check that it is still consistent with fainter stars. Conclusions. The new model constitutes a new basis for further comparisons with large-scale surveys and is being prepared to become a powerful tool for the analysis of the Gaia mission data.
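The quoted high-mass IMF slope can be illustrated with a short sampling sketch. The code below draws stellar masses from a single power law dN/dm ∝ m^(−α) with α = 3.0 by inverse-transform sampling; the mass range is an illustrative assumption, not a BGM ingredient.

```python
import numpy as np

def sample_imf(n, alpha=3.0, m_lo=1.0, m_hi=120.0, seed=0):
    """Draw n stellar masses from dN/dm ∝ m^(-alpha) on [m_lo, m_hi]
    via inverse-transform sampling of the power-law CDF."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    a = 1.0 - alpha                    # exponent after integrating the IMF
    lo, hi = m_lo**a, m_hi**a
    return (lo + u * (hi - lo)) ** (1.0 / a)

masses = sample_imf(100_000)
```

A slope as steep as 3.0 concentrates the mass function near the low-mass end: for this range the median drawn mass sits close to sqrt(2) ≈ 1.41 solar masses, which is why full-sky star counts are sensitive to the slope.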