965 results for capture-recapture models
Abstract:
How do brain mechanisms carry out the motion integration and segmentation processes that compute unambiguous global motion percepts from ambiguous local motion signals? Consider, for example, a deer running at variable speeds behind forest cover. The forest cover is an occluder that creates apertures through which fragments of the deer's motion signals are intermittently experienced. The brain coherently groups these fragments into a trackable percept of the deer along its trajectory. Form and motion processes are needed to accomplish this, using feedforward and feedback interactions both within and across cortical processing streams. Cortical areas V1, V2, MT, and MST are all involved in these interactions. Figure-ground processes in the form stream through V2, such as the separation of the occluding boundaries of the forest cover from the boundaries of the deer, select the motion signals that determine global object motion percepts in the motion stream through MT. Sparse but unambiguous feature-tracking signals are amplified before they propagate across position and are integrated with far more numerous ambiguous motion signals. Figure-ground and integration processes together determine the global percept. A neural model predicts the processing stages that embody these form and motion interactions. Model concepts and data are summarized about motion grouping across apertures in response to a wide variety of displays, and probabilistic decision making in parietal cortex in response to random dot displays.
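To make the "ambiguous local versus unambiguous global" distinction concrete, here is a minimal sketch of the aperture problem and an intersection-of-constraints solve; the velocity and aperture orientations are invented, and this illustrates the geometry only, not the neural model itself.

```python
import numpy as np

# Hedged sketch of the aperture problem: each local measurement through an
# aperture only constrains the velocity component normal to the viewed edge
# (n . v = s). Combining differently oriented measurements (an
# intersection-of-constraints solve) recovers the global motion.

true_v = np.array([2.0, 1.0])                       # hypothetical global motion, deg/s

angles = np.deg2rad([20.0, 110.0, 155.0])           # edge-normal orientations at 3 apertures
normals = np.column_stack([np.cos(angles), np.sin(angles)])
normal_speeds = normals @ true_v                    # what each aperture actually measures

# Any single constraint admits a whole line of candidate velocities, but a
# least-squares intersection of several constraints pins down v:
v_hat, *_ = np.linalg.lstsq(normals, normal_speeds, rcond=None)
print("recovered global velocity:", np.round(v_hat, 3))   # ~ [2.0, 1.0]
```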
Abstract:
European badgers (Meles meles) are an important part of the Irish ecosystem; they are a component of Ireland’s native fauna and are afforded protection by national and international laws. The species is also a reservoir host for bovine tuberculosis (bTB) and implicated in the epidemiology of bTB in cattle. Due to this latter point, badgers have been culled in the Republic of Ireland (ROI) in areas where persistent cattle bTB outbreaks exist. The population dynamics of badgers are therefore of great pure and applied interest. The studies within this thesis used large datasets and a number of analytical approaches to uncover essential elements of badger populations in the ROI. Furthermore, a review and meta-analysis of all available data on Irish badgers was completed to give a framework from which key knowledge gaps and future directions could be identified (Chapter 1). One main finding suggested that badger densities are significantly reduced in areas of repeated culling, as revealed through declining trends in signs of activity (Chapter 2) and capture numbers (Chapter 2 and Chapter 3). Despite this, the trappability of badgers was shown to be lower than previously thought. This indicates that management programmes would require repeated long-term efforts to be effective (Chapter 4). Mark-recapture modelling of a population (sample area: 755 km²) suggested that mean badger density was typical of continental European populations, but substantially lower than British populations (Chapter 4). Badger movement patterns indicated that most of the population exhibited site fidelity. Long-distance movements were also recorded, the longest of which (20.1 km) was the greatest displacement of an Irish badger currently known (Chapter 5). The studies presented in this thesis allow for the development of more robust models of the badger population at national scales (see Future Directions). Through the use of large-scale datasets, future models will facilitate informed, sustainable planning for disease control.
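The mark-recapture modelling mentioned above builds on capture histories of trapped and tagged badgers. As a hedged, minimal illustration of the underlying idea only (the thesis uses far richer models), the Chapman-corrected Lincoln-Petersen estimator below infers abundance from two trapping sessions; all counts are hypothetical.

```python
# Minimal sketch of two-session abundance estimation (Chapman estimator).
# Illustrative only: the counts below are invented, not data from the Irish
# badger study, and the thesis's mark-recapture models are far richer.

def chapman_estimate(n1, n2, m2):
    """Abundance estimate from two capture sessions.

    n1: animals caught and marked in session 1
    n2: animals caught in session 2
    m2: marked animals among the session-2 catch
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1          # Chapman-corrected estimate
    var_hat = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
               / ((m2 + 1) ** 2 * (m2 + 2)))             # approximate variance
    return n_hat, var_hat ** 0.5

if __name__ == "__main__":
    n_hat, se = chapman_estimate(n1=60, n2=55, m2=18)    # hypothetical trap counts
    print(f"estimated abundance: {n_hat:.0f} (SE {se:.0f})")
```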
Abstract:
Flip-chip assembly, developed in the early 1960s, is now being positioned as a key joining technology for achieving high-density mounting of electronic components onto printed circuit boards for high-volume, low-cost products. Computer models are now being used early in the product design stage to ensure that optimal process conditions are used. These models capture the governing physics taking place during the assembly process, and they can also predict relevant defects that may occur. This paper describes the application of computational modelling techniques that can predict a range of interacting physical phenomena associated with the manufacturing process; for example, the flip-chip assembly process involves solder paste deposition, solder joint shape formation, heat transfer, solidification and thermal stress. It also illustrates the application of this modelling technology as part of a larger UK study aiming to establish a process route for high-volume, low-cost, sub-100-micron pitch flip-chip assembly.
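As a much-reduced, hedged sketch of one ingredient of the coupled physics listed above, the explicit finite-difference loop below marches one-dimensional transient heat conduction through a solder-scale domain; the material values, geometry and boundary conditions are placeholder assumptions, not parameters from the study.

```python
import numpy as np

# Hedged sketch: 1-D explicit finite-difference heat conduction, one small
# piece of the coupled physics a flip-chip assembly model would resolve.
# Material properties and geometry are illustrative assumptions only.

alpha = 2.4e-5            # assumed thermal diffusivity of solder, m^2/s
L, n = 1.0e-3, 51         # 1 mm domain, 51 nodes
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha  # respect the explicit stability limit dt <= dx^2/(2*alpha)

T = np.full(n, 25.0)      # start at ambient temperature, deg C
T[0] = 220.0              # reflow-side boundary held at a peak-like temperature

for _ in range(2000):     # march in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]         # insulated far boundary (zero-gradient)

print(f"mid-plane temperature after {2000 * dt * 1e3:.1f} ms: {T[n // 2]:.1f} C")
```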
Abstract:
Heat is extracted from an electronic package by convection, conduction, and/or radiation. The amount of heat extracted by forced convection using air is highly dependent on the characteristics of the airflow around the package, including its velocity and direction. Turbulence in the air is also important and must be modeled accurately in thermal design codes that use computational fluid dynamics (CFD). During air cooling the flow can be classified as laminar, transitional, or turbulent. In electronics systems, the flow around the packages is usually in the transition region, which lies between laminar and turbulent flow. This requires a low-Reynolds-number numerical model to fully capture the impact of turbulence on the fluid flow calculations. This paper compares a number of turbulence models with experimental data. These models include the LVEL model (based on the distance from the nearest wall and the local velocity), and the Wolfshtein, Norris and Reynolds, k-ε, k-ω, shear-stress transport (SST), and kε/kl models. Results show that, in terms of the fluid flow calculations, most of the models capture the difficult wake recirculation region behind the package reasonably well, although for packages whose heights cause a high degree of recirculation behind the package the SST model appears to struggle. The paper also demonstrates the sensitivity of the models to changes in mesh density; this study is aimed specifically at thermal design engineers, as mesh-independent simulations are rarely conducted in an industrial environment.
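As a hedged illustration of why most board-level flows fall in the transition region, the snippet below computes a channel Reynolds number from assumed dimensions and air properties and classifies the regime using the usual textbook thresholds; none of these values come from the cited experiments.

```python
# Hedged sketch: classify the flow regime in a board-level air channel.
# Channel size, velocities and the regime thresholds are illustrative
# textbook-style assumptions, not the experimental conditions of the paper.

NU_AIR = 1.6e-5                      # kinematic viscosity of air near 300 K, m^2/s

def hydraulic_diameter(width, height):
    return 2.0 * width * height / (width + height)

def reynolds(velocity, d_h, nu=NU_AIR):
    return velocity * d_h / nu

def regime(re, lam=2300.0, turb=4000.0):
    if re < lam:
        return "laminar"
    if re < turb:
        return "transitional"        # the regime most board-level flows sit in
    return "turbulent"

d_h = hydraulic_diameter(width=0.10, height=0.02)   # 100 mm x 20 mm channel
for v in (0.5, 1.5, 3.0):                           # typical fan velocities, m/s
    re = reynolds(v, d_h)
    print(f"U = {v:.1f} m/s -> Re = {re:,.0f} ({regime(re)})")
```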
Abstract:
Ecosystems consist of complex dynamic interactions among species and the environment, the understanding of which has implications for predicting the environmental response to changes in climate and biodiversity. With the recent adoption of more explorative tools, such as Bayesian networks, in predictive ecology, few assumptions need to be made about the data, and complex, spatially varying interactions can be recovered from collected field data. In this study, we compare Bayesian network modelling approaches that account for latent effects to reveal species dynamics for 7 geographically and temporally varied areas within the North Sea. We also apply structure learning techniques to identify functional relationships, such as prey-predator relationships, between trophic groups of species that vary across space and time. We examine whether the use of a general hidden variable can reflect overall changes in the trophic dynamics of each spatial system and whether the inclusion of a specific hidden variable can model an unmeasured group of species. The general hidden variable appears to capture changes in the variance of the biomass of different groups of species. Models that include both general and specific hidden variables identified similarities with the underlying food web dynamics and modelled unmeasured spatial effects. We predict the biomass of the trophic groups and find that predictive accuracy varies with the models' features and across the different spatial areas; we therefore propose a model that allows for spatial autocorrelation and two hidden variables. Our proposed model was able to produce novel insights into this ecosystem's dynamics and ecological interactions, mainly because we account for the heterogeneous nature of the driving factors within each area and their changes over time. Our findings demonstrate that accounting for additional sources of variation, by combining structure learning from data and experts' knowledge in the model architecture, has the potential to yield deeper insights into the structure and stability of ecosystems. Finally, we were able to discover meaningful functional networks that were spatially and temporally differentiated, with the particular mechanisms varying from trophic associations to interactions with climate and commercial fisheries.
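As a hedged, much-simplified sketch of what a "general hidden variable" represents, the code below simulates trophic-group biomasses whose shared variability is driven by one unobserved state and recovers that state with a principal component; it illustrates the idea only and is not the Bayesian-network structure-learning pipeline used in the study.

```python
import numpy as np

# Hedged sketch: one unobserved "general" state drives shared variability in
# the log-biomass of several trophic groups; a leading principal component
# then recovers it. All values are simulated, not North Sea data.

rng = np.random.default_rng(1)
n_years, n_groups = 40, 5

hidden = np.cumsum(rng.normal(size=n_years))          # slowly varying latent driver
loadings = rng.uniform(0.5, 1.5, size=n_groups)       # how strongly each group responds
noise = rng.normal(scale=0.8, size=(n_years, n_groups))
log_biomass = hidden[:, None] * loadings + noise      # observed log-biomass matrix

# Recover the latent driver as the first principal component of the observations.
centred = log_biomass - log_biomass.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
recovered = centred @ vt[0]

corr = np.corrcoef(hidden, recovered)[0, 1]
print(f"correlation between true and recovered hidden state: {abs(corr):.2f}")
```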
Abstract:
A growing number of respected commentators now argue that regulatory capture of public agencies and public policy by leading banks was one of the main causal factors behind the financial crisis of 2007–2009, resulting in a permissive regulatory environment. This regulatory environment placed faith in banks' own internal risk models, contributed to pro-cyclical behaviour and turned a blind eye to excessive risk taking. The article argues that a form of ‘multi-level regulatory capture’ characterized the global financial architecture prior to the crisis. Simultaneously, regulatory capture fed off, but also nourished, the financial boom, in a fashion that mirrored the life cycle of the boom itself. Minimizing future financial booms and crises will require continuous, conscious and explicit efforts to restrain financial regulatory capture now and into the future. The article assesses the extent to which this has been achieved in current global financial governance reform efforts and highlights some of the persistent difficulties that will continue to hamper efforts to restrain regulatory capture. The evidence concerning the extent to which regulatory capture is being effectively restrained is somewhat mixed, and where restraint is happening it is largely unintentional and accidental. Recent reforms have overlooked the political causes of the crisis and have failed to focus explicitly or systematically on regulatory capture.
Abstract:
We present mid-infrared (5.2-15.2 μm) spectra of the Type Ia supernovae (SNe Ia) 2003hv and 2005df observed with the Spitzer Space Telescope. These are the first observed mid-infrared spectra of thermonuclear supernovae, and they show strong emission from fine-structure lines of Ni, Co, S, and Ar. The detection of Ni emission in SN 2005df 135 days after the explosion provides direct observational evidence of high-density nuclear burning forming a significant amount of stable Ni in a SN Ia. The SN 2005df Ar lines also exhibit a two-pronged emission profile, implying that the Ar emission deviates significantly from spherical symmetry. The spectrum of SN 2003hv also shows signs of asymmetry, exhibiting blueshifted [Co III], which matches the blueshift of [Fe II] lines in nearly coeval near-infrared spectra. Finally, local thermodynamic equilibrium abundance estimates for the yield of radioactive 56Ni give M(56Ni) ≈ 0.5 M☉ for SN 2003hv, but only M(56Ni) ≈ 0.13-0.22 M☉ for the apparently subluminous SN 2005df, supporting the notion that the luminosity of SNe Ia is primarily a function of the radioactive 56Ni yield. The observed emission-line profiles in the SN 2005df spectrum indicate a chemically stratified ejecta structure, which matches the predictions of delayed detonation (DD) models but is entirely incompatible with current three-dimensional deflagration models. Furthermore, the degree to which this layering persists into the innermost regions of the supernova is difficult to explain even in a DD scenario, where the innermost ejecta are still the product of deflagration burning. Thus, while these results are roughly consistent with a delayed detonation, it is clear that a key piece of physics is still missing from our understanding of the earliest phases of SN Ia explosions.
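The link asserted above between luminosity and the radioactive 56Ni yield follows from the 56Ni → 56Co → 56Fe decay chain. As a hedged back-of-the-envelope sketch (assuming standard half-lives and full trapping of the decay energy, simplifications made only for illustration), the code below scales the instantaneous radioactive heating rate with the assumed 56Ni mass.

```python
import numpy as np

# Hedged back-of-the-envelope: radioactive heating from the 56Ni -> 56Co -> 56Fe
# chain, assuming standard half-lives and full energy trapping.  This
# illustrates why luminosity scales with the 56Ni mass; it is not the paper's model.

M_SUN = 1.989e33                     # g
DAY = 86400.0                        # s
LAM_NI = np.log(2) / (6.08 * DAY)    # 56Ni decay constant (t_1/2 ~ 6.1 d)
LAM_CO = np.log(2) / (77.2 * DAY)    # 56Co decay constant (t_1/2 ~ 77 d)
Q_NI = 1.72 * 1.602e-6               # approximate energy per 56Ni decay, erg
Q_CO = 3.6 * 1.602e-6                # approximate energy per 56Co decay, erg
M_NUC = 56 * 1.66e-24                # mass of one 56Ni nucleus, g

def heating_rate(m_ni_msun, t_days):
    """Instantaneous decay-energy release (erg/s) at time t after the explosion."""
    n0 = m_ni_msun * M_SUN / M_NUC
    t = t_days * DAY
    n_ni = n0 * np.exp(-LAM_NI * t)
    n_co = n0 * LAM_NI / (LAM_CO - LAM_NI) * (np.exp(-LAM_NI * t) - np.exp(-LAM_CO * t))
    return LAM_NI * n_ni * Q_NI + LAM_CO * n_co * Q_CO

for m in (0.5, 0.2):                 # roughly the 2003hv and 2005df estimates above
    print(f"M(56Ni) = {m:.2f} Msun -> heating at day 135 ~ {heating_rate(m, 135):.2e} erg/s")
```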
Abstract:
Different classes of constitutive models have been proposed to capture the time-dependent behaviour of soft soil (creep, stress relaxation, rate dependency). This paper critically reviews many of the models developed from an understanding of the time-dependent stress-strain-stress rate-strain rate behaviour of soils and of viscoplasticity, in terms of their strengths and weaknesses. Some discussion is also provided of the numerical implementation aspects of these models. Typical findings from numerical analyses of geotechnical structures constructed on soft soils are also discussed. The general elastic viscoplastic (EVP) models can roughly be divided into two categories: models based on the concept of overstress and models based on non-stationary flow surface theory. Although general in structure, both categories have their own strengths and shortcomings. This review indicates that EVP analysis is yet to be widely used by geotechnical engineers, apparently because of the mathematical complexity involved in formulating the constitutive models, the unconvincing benefit in terms of the accuracy of performance prediction, the requirement for additional soil parameter(s) and the difficulty of determining them, and the need for substantial computing resources and time. © 2013 Taylor & Francis.
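As a hedged sketch of the first (overstress) category, a Perzyna-type formulation writes the viscoplastic strain rate in terms of how far the stress state lies outside a static yield surface F; the fluidity parameter γ, overstress function Φ and plastic potential g differ between the models reviewed.

```latex
% Generic Perzyna-type overstress law (illustrative form only; individual EVP
% models differ in the choice of \Phi, the yield function F and the plastic
% potential g).
\dot{\boldsymbol{\varepsilon}}^{vp}
  = \gamma \,\bigl\langle \Phi(F) \bigr\rangle\,
    \frac{\partial g}{\partial \boldsymbol{\sigma}},
\qquad
\bigl\langle \Phi(F) \bigr\rangle =
\begin{cases}
  \Phi(F), & F > 0 \quad \text{(stress outside the static yield surface)}\\
  0,       & F \le 0 \quad \text{(no viscoplastic flow)}
\end{cases}
```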
Abstract:
Product line software engineering depends on capturing the commonality and variability within a family of products, typically using feature modeling, and on using this information to evolve a generic reference architecture for the family. For embedded systems, possible variability in hardware and operating system platforms is an added complication. The design process can be facilitated by first exploring the behavior associated with features. In this paper we outline a bidirectional feature modeling scheme that supports the capture of commonality and variability in the platform environment as well as within the required software. Additionally, 'behavior' associated with features can be included in the overall model. This is achieved by integrating the UCM path notation in a way that exploits UCM's static and dynamic stubs to capture behavioral variability and link it to the feature model structure. The resulting model is a richer source of information to support the architecture development process.
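As a hedged, minimal sketch of commonality/variability capture (not the paper's bidirectional, UCM-linked notation), the code below encodes a tiny feature tree with mandatory, optional and alternative features and checks product configurations against it; the feature names are invented.

```python
# Hedged sketch: a tiny feature model with mandatory/optional/alternative
# features and a configuration check. Feature names are invented; the paper's
# bidirectional, UCM-linked notation is considerably richer than this.

from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    kind: str = "mandatory"                 # "mandatory" | "optional" | "alternative"
    children: list["Feature"] = field(default_factory=list)

def valid(feature, selection, parent_selected=True):
    """Check that a set of selected feature names respects the tree."""
    selected = feature.name in selection
    if parent_selected and feature.kind == "mandatory" and not selected:
        return False
    if selected:
        alts = [c for c in feature.children if c.kind == "alternative"]
        if alts and sum(c.name in selection for c in alts) != 1:
            return False                    # exactly one alternative child allowed
    return all(valid(c, selection, selected) for c in feature.children)

# Hypothetical embedded-product family: the platform varies, logging is optional.
root = Feature("Controller", children=[
    Feature("Platform", children=[
        Feature("RTOS_A", "alternative"),
        Feature("RTOS_B", "alternative"),
    ]),
    Feature("Logging", "optional"),
])

print(valid(root, {"Controller", "Platform", "RTOS_A", "Logging"}))  # True
print(valid(root, {"Controller", "Platform", "RTOS_A", "RTOS_B"}))   # False: two alternatives
```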
Abstract:
In recent years, the issue of life expectancy has become of utmost importance to pension providers, insurance companies and government bodies in the developed world. Significant and consistent improvements in mortality rates and, hence, life expectancy have led to unprecedented increases in the cost of providing for older ages. This has resulted in an explosion of stochastic mortality models forecasting trends in mortality data in order to anticipate future life expectancy and, hence, quantify the costs of providing for future aging populations. Many stochastic models of mortality rates identify linear trends in mortality rates by time, age and cohort, and forecast these trends into the future using standard statistical methods. The modeling approaches used have failed to capture the effects of any structural change in the trend and, thus, have potentially produced incorrect forecasts of future mortality rates. In this paper, we examine a range of leading stochastic models of mortality and test for structural breaks in the trend time series.
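As a hedged sketch of a structural-break diagnostic of the kind applied to a mortality trend (the paper's data and tests differ), the code below fits a linear trend to a simulated period index, splits it at a candidate break year and computes a Chow F-statistic.

```python
import numpy as np

# Hedged sketch: Chow-type test for a break in a linear time trend, applied to
# a simulated mortality index. The index, break point and noise level are
# invented; the paper's models and tests are more elaborate.

def sse(y, X):
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

rng = np.random.default_rng(0)
t = np.arange(60, dtype=float)                               # 60 "years"
kappa = np.where(t < 35, -0.5 * t, -17.5 - 1.2 * (t - 35))   # slope change at year 35
kappa += rng.normal(scale=0.8, size=t.size)

X = np.column_stack([np.ones_like(t), t])
k = X.shape[1]
split = 35
sse_pooled = sse(kappa, X)
sse_split = sse(kappa[:split], X[:split]) + sse(kappa[split:], X[split:])

f_stat = ((sse_pooled - sse_split) / k) / (sse_split / (t.size - 2 * k))
print(f"Chow F-statistic at candidate break year {split}: {f_stat:.1f}")
# Compare against an F(k, n - 2k) critical value, e.g. scipy.stats.f.ppf(0.95, k, t.size - 2*k).
```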
Abstract:
The area of mortality modelling has received significant attention over the last 20 years owing to the need to quantify and forecast improving mortality rates. This need is driven primarily by the concern of governments, insurance and actuarial professionals, and individuals to be able to fund their old age. In particular, to quantify the costs of increasing longevity we need suitable models of mortality rates that capture the dynamics of the data and forecast them with sufficient accuracy to be useful. In this paper we test several of these models by considering their fitting quality and, in particular, by testing the residuals of the models for normality. In a wide-ranging study considering 30 countries, we find that in almost all cases the residuals do not demonstrate normality. Further, Hurst tests of the residuals provide evidence that structure remains that is not captured by the models.
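As a hedged sketch of the two residual diagnostics mentioned, applied to simulated residuals rather than to the fitted mortality models of the paper, the snippet below runs a Jarque-Bera normality test and a simple rescaled-range (R/S) estimate of the Hurst exponent.

```python
import numpy as np
from scipy import stats

# Hedged sketch: normality and long-memory diagnostics on synthetic residuals.
# The paper applies such tests to the residuals of fitted stochastic mortality
# models across 30 countries; everything here is simulated.

def hurst_rs(x, min_chunk=8):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent."""
    x = np.asarray(x, dtype=float)
    sizes, rs_vals = [], []
    n = min_chunk
    while n <= len(x) // 2:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())
            s = w.std(ddof=1)
            if s > 0:
                rs.append((z.max() - z.min()) / s)
        sizes.append(n)
        rs_vals.append(np.mean(rs))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

resid = np.random.default_rng(2).standard_t(df=4, size=512)   # heavy-tailed residuals
jb_stat, jb_p = stats.jarque_bera(resid)
print(f"Jarque-Bera p-value: {jb_p:.3f}  (small p => reject normality)")
print(f"R/S Hurst estimate:  {hurst_rs(resid):.2f}  (~0.5 when no persistent structure remains)")
```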
Abstract:
Thermal comfort is defined as “that condition of mind which expresses satisfaction with the thermal environment” [1][2]. Field studies have been completed in order to establish the governing conditions for thermal comfort [3]. These studies showed that the internal climate of a room was the strongest factor in establishing thermal comfort. Direct manipulation of the internal climate is necessary to retain an acceptable level of thermal comfort. For Building Energy Management System (BEMS) strategies to be utilised efficiently, it is necessary to be able to predict the effect that activating a heating/cooling source (radiators, windows and doors) will have on the room. Numerical modelling of the domain can be challenging owing to the need to capture temperature stratification and/or different heat sources (radiators, computers and human beings). Computational Fluid Dynamics (CFD) models are usually used for this purpose because they provide the level of detail required. Although they provide the necessary level of accuracy, these models tend to be highly computationally expensive, especially when transient behaviour needs to be analysed; consequently, they cannot be integrated into BEMS. This paper presents and validates a CFD-ROM method for real-time simulation of building thermal performance. The CFD-ROM method involves the automatic extraction and solution of reduced order models (ROMs) from validated CFD simulations. The test case used in this work is a room of the Environmental Research Institute (ERI) Building at University College Cork (UCC). The ROMs have been shown to be sufficiently accurate, with a total error of less than 1%, and to retain a satisfactory representation of the phenomena modelled. The number of zones in a ROM defines the size and complexity of that ROM, and ROMs with a higher number of zones have been observed to produce more accurate results. As each ROM has a time to solution of less than 20 seconds, the ROMs can be integrated into the BEMS of a building, which opens up the potential for real-time, physics-based building energy modelling.
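As a hedged illustration of why a zonal reduced-order model solves in well under 20 seconds (the actual ROMs are extracted automatically from the validated CFD and are richer than this), the code below time-steps a small lumped-capacitance zone network driven by a radiator; all capacitances, conductances and inputs are invented.

```python
import numpy as np

# Hedged sketch: a three-zone lumped-capacitance (RC) network as a stand-in for
# a reduced-order thermal model. Capacitances, conductances and the radiator
# input are illustrative values, not those extracted from the ERI-building CFD.

n_zones = 3
C = np.array([4.0e5, 6.0e5, 5.0e5])        # zone heat capacities, J/K
G = np.array([[0.0, 35.0, 0.0],            # inter-zone conductances, W/K
              [35.0, 0.0, 40.0],
              [0.0, 40.0, 0.0]])
G_ext = np.array([10.0, 5.0, 12.0])        # conductance to outside, W/K
T_out, T = 5.0, np.full(n_zones, 18.0)     # outdoor and initial zone temperatures, deg C
Q = np.array([1500.0, 0.0, 0.0])           # radiator in zone 0, W

dt, hours = 60.0, 6                        # 1-minute steps over 6 hours
for _ in range(int(hours * 3600 / dt)):
    exchange = G @ T - G.sum(axis=1) * T   # net heat flow in from neighbouring zones
    dT = (Q + exchange + G_ext * (T_out - T)) / C
    T = T + dt * dT

print("zone temperatures after 6 h:", np.round(T, 1))
```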
Abstract:
In this paper, we test a version of the conditional CAPM with respect to a local market portfolio, proxied by the Brazilian stock index during the 1976-1992 period. We also test a conditional APT model by using the difference between the 30-day rate (Cdb) and the overnight rate as a second factor in addition to the market portfolio in order to capture the large inflation risk present during this period. The conditional CAPM and APT models are estimated by the Generalized Method of Moments (GMM) and tested on a set of size portfolios created from a total of 25 securities exchanged on the Brazilian markets. The inclusion of this second factor proves to be crucial for the appropriate pricing of the portfolios.
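As a hedged sketch of the form such an estimation takes (the exact instruments and conditioning information used in the paper may differ), the conditional two-factor pricing errors interacted with lagged instruments give GMM moment conditions of the type:

```latex
% Illustrative GMM moment conditions for a conditional two-factor model:
% excess return of portfolio i, the market factor R_{m,t}, and the second
% factor SPREAD_t (30-day CDB rate minus the overnight rate), interacted
% with lagged instruments z_{t-1}.
E\!\left[\bigl(R_{i,t} - \beta_{i,m}\,R_{m,t}
        - \beta_{i,s}\,\mathrm{SPREAD}_{t}\bigr)\otimes z_{t-1}\right] = 0,
\qquad i = 1,\dots,N .
```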
Abstract:
In ecology, for example in studies of the services provided by ecosystems, descriptive, explanatory and predictive modelling each have their own distinct place. Specific situations call for one or the other of these types of modelling, and the right choice must be made so that the model can be used in a manner consistent with the objectives of the study. In this work, we first explore the explanatory power of the multivariate regression tree (MRT). This modelling method is based on a recursive binary-partitioning algorithm and a resampling method used to prune the final model, which is a tree, in order to obtain the model producing the best predictions. This asymmetric two-table analysis yields homogeneous groups of objects of the response table, with the divisions between groups corresponding to cut-points of the variables of the explanatory table that mark the most abrupt changes in the response. We show that, in order to compute the explanatory power of the MRT, one must define an adjusted coefficient of determination in which the degrees of freedom of the model are estimated by means of an algorithm. This estimate of the population coefficient of determination is practically unbiased. Since the MRT rests on assumptions of discontinuity whereas canonical redundancy analysis (RDA) models continuous linear gradients, comparing their respective explanatory powers makes it possible, among other things, to distinguish which type of pattern the response follows as a function of the explanatory variables. The comparison of explanatory power between RDA and MRT was motivated by the extensive use of RDA to study beta diversity. Still with an explanatory aim, we define a new procedure called the cascade multivariate regression tree (CMRT), which makes it possible to build a model while imposing a hierarchical order on the hypotheses under study. This new procedure allows the study of the hierarchical effect of two sets of explanatory variables, a main set and a subordinate set, and the computation of their explanatory power. The final model is interpreted as in a nested MANOVA. The results of this analysis can provide additional information about the links between the response and the explanatory variables, for example interactions between the two explanatory sets that were not revealed by the usual MRT analysis. Finally, we study the predictive power of generalized linear models by modelling the biomass of different tropical tree species as a function of some of their allometric measurements. More specifically, we examine the ability of Gaussian and gamma error structures to provide the most accurate predictions. We show that, for one species in particular, the predictive power of a model using the gamma error structure is superior. This study is set in a practical context and is intended as an example for managers wishing to accurately estimate carbon capture by tropical tree plantations. Our conclusions could form an integral part of a programme for reducing carbon emissions through land-use changes.
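As a hedged sketch of the Gaussian-versus-gamma comparison described above, using simulated allometric data rather than the thesis's measurements and assuming statsmodels' GLM interface, the code below fits both error structures to log-diameter and compares their AIC.

```python
import numpy as np
import statsmodels.api as sm

# Hedged sketch: compare Gaussian and gamma error structures for a simple
# allometric biomass model. The data are simulated with multiplicative noise;
# the allometric coefficients and sample are invented, not the thesis's data.

rng = np.random.default_rng(3)
dbh = rng.uniform(5, 60, size=200)                    # simulated diameters, cm
mu = np.exp(-2.0 + 2.4 * np.log(dbh))                 # hypothetical allometric mean
biomass = rng.gamma(shape=5.0, scale=mu / 5.0)        # multiplicative (gamma-like) noise

X = sm.add_constant(np.log(dbh))
gauss = sm.GLM(biomass, X, family=sm.families.Gaussian()).fit()
# Log link spelled links.Log() in recent statsmodels; older versions use links.log().
gamma = sm.GLM(biomass, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print(f"Gaussian AIC: {gauss.aic:.1f}")
print(f"Gamma AIC:    {gamma.aic:.1f}   (lower AIC => better-supported error structure)")
```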