940 results for one-boson-exchange models


Relevance: 30.00%

Abstract:

In recent years, Artificial Intelligence has helped to solve problems encountered in the performance of computing tasks, whether the computers are distributed so as to interact with one another or operate in any other environment (Distributed Artificial Intelligence). Information Technologies allow novel solutions to specific problems to be created by applying findings from diverse research areas. Our work is directed at the creation of user models through a multidisciplinary approach that draws on principles from psychology, distributed artificial intelligence and machine learning to build user models for open environments; one such environment is Ambient Intelligence based on user models with incremental and distributed learning capabilities (known as Smart User Models). Building on these user models, this research addresses the acquisition of the user characteristics that matter most and that determine the user's dominant scale of values in the topics of greatest interest to them, developing a methodology to obtain the user's Human Values Scale from their objective, subjective and emotional characteristics (particularly in Recommender Systems). One area that has received little attention is the inclusion of the human values scale in information systems. Recommender systems, user models and information systems generally take into account only the user's preferences and emotions [Velásquez, 1996, 1997; Goldspink, 2000; Conte and Paolucci, 2001; Urban and Schmidt, 2001; Dal Forno and Merlone, 2001, 2002; Berkovsky et al., 2007c]. The main focus of our research is therefore the creation of a methodology that generates a human values scale for the user from the user model. We present results from a case study using objective, subjective and emotional characteristics in the banking and restaurant service domains, in which the methodology proposed in this research was put to the test. The main contributions of this thesis are: the development of a methodology that, given a user model with objective, subjective and emotional attributes, yields the user's Human Values Scale. The proposed methodology builds on existing applications, in which all the connections between users, agents and domains are already characterised by these particularities and attributes; no extra effort is therefore required from the user.

Relevance: 30.00%

Abstract:

A new technique is described for the analysis of cloud-resolving model simulations, which allows one to investigate the statistics of the lifecycles of cumulus clouds. Clouds are tracked from timestep to timestep within the model run. This allows for a very simple method of tracking, but one which is both comprehensive and robust. An approach for handling cloud splits and mergers is described which allows clouds with simple and complicated time histories to be compared within a single framework. This is found to be important for the analysis of an idealized simulation of radiative-convective equilibrium, in which the moist, buoyant updrafts (i.e., the convective cores) were tracked. Around half of all such cores were subject to splits and mergers during their lifecycles. For cores without any such events, the average lifetime is 30 min, but events can lengthen the typical lifetime considerably.
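As a rough illustration of the overlap-based tracking idea (the paper's own implementation details are not given in the abstract), a minimal Python sketch linking labelled cores between two consecutive timesteps might look like the following; the array names and the criteria used to build the masks are assumptions, not the study's code:

```python
# Sketch: overlap-based tracking of convective cores between two timesteps.
# Assumes 2-D boolean masks of "core" points (e.g. moist, buoyant updraft
# criteria already applied); names and thresholds are illustrative only.
import numpy as np
from scipy import ndimage


def track_step(mask_t0, mask_t1):
    """Link labelled cores at t0 to overlapping cores at t1.

    Returns a dict {label_t0: [labels_t1, ...]}; an empty list means the
    core has terminated, more than one entry records a split.  Mergers
    show up as the same t1 label appearing under several t0 labels.
    """
    labels0, n0 = ndimage.label(mask_t0)
    labels1, _ = ndimage.label(mask_t1)
    links = {}
    for lab in range(1, n0 + 1):
        overlap = labels1[labels0 == lab]          # t1 labels under this core
        links[lab] = sorted(set(overlap[overlap > 0]))
    return links
```

Chaining such links over all timesteps gives each core a lifecycle, with splits appearing as one-to-many links and mergers as many-to-one links.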

Relevance: 30.00%

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al., 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours on 40 processors and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al., 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al., 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, which is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun"; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
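The abstract does not document G-Rex's actual endpoints or the GRexRun command syntax, so the minimal sketch below only illustrates the general REST pattern it describes: a lightweight client launches a remote run over HTTP and then polls for output while the run is in progress. Every URL, field name and host in the sketch is hypothetical and is not the real G-Rex API:

```python
# Illustration only of the REST interaction pattern described above (submit a
# remote run, then poll and download output files as they appear).  The URLs,
# JSON fields and service name are invented for illustration; they are NOT
# G-Rex's real interface.
import time
import requests

BASE = "http://example-cluster:8080/jobservice"     # hypothetical service


def run_remote(input_files):
    job = requests.post(f"{BASE}/jobs", files=input_files).json()      # submit
    while True:
        status = requests.get(f"{BASE}/jobs/{job['id']}").json()
        for name in status.get("new_outputs", []):                     # stream back
            data = requests.get(f"{BASE}/jobs/{job['id']}/outputs/{name}")
            with open(name, "wb") as f:
                f.write(data.content)                                  # keeps remote disk clear
        if status["state"] == "finished":
            return
        time.sleep(30)
```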

Relevance: 30.00%

Abstract:

A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1 in 5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. A result of this was that there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may lead to an increased onus being placed on the model developer in the production of a valid model.
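A minimal sketch of the two kinds of performance measure being contrasted, assuming hypothetical input arrays; the exact formulations used in the paper may differ:

```python
# Sketch of the two performance-measure families discussed above:
# (a) areal pattern-matching of wet/dry maps, (b) height differences along
# the waterline.  Inputs are hypothetical arrays, not the study's data.
import numpy as np


def areal_fit(observed_wet, modelled_wet):
    """Overlap score between observed and modelled wet/dry maps
    (intersection over union of the wet areas, a common choice)."""
    both = np.logical_and(observed_wet, modelled_wet).sum()
    either = np.logical_or(observed_wet, modelled_wet).sum()
    return both / either


def waterline_rmse(obs_heights, mod_heights):
    """RMS difference (m) of water surface elevation at corresponding
    points along the observed and modelled waterlines."""
    d = np.asarray(mod_heights, float) - np.asarray(obs_heights, float)
    return np.sqrt(np.mean(d ** 2))
```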

Relevance: 30.00%

Abstract:

Diffuse reflectance spectroscopy (DRS) is increasingly being used to predict numerous soil physical, chemical and biochemical properties. However, soil properties and processes vary at different scales and, as a result, relationships between soil properties often depend on scale. In this paper we report on how the relationship between one such property, cation exchange capacity (CEC), and the DRS of the soil depends on spatial scale. We show this by means of a nested analysis of covariance of soils sampled on a balanced nested design in a 16 km × 16 km area in eastern England. We used principal components analysis on the DRS to obtain a reduced number of variables while retaining key variation. The first principal component accounted for 99.8% of the total variance, the second for 0.14%. Nested analysis of the variation in the CEC and the two principal components showed that the substantial variance components are at the > 2000-m scale. This is probably the result of differences in soil composition due to parent material. We then developed a model to predict CEC from the DRS, using partial least squares (PLS) regression to do so. Leave-one-out cross-validation results suggested a reasonable predictive capability (R2 = 0.71 and RMSE = 0.048 molc kg−1). However, the results from the independent validation were not as good, with R2 = 0.27, RMSE = 0.056 molc kg−1 and an overall correlation of 0.52. This would indicate that DRS may not be useful for predictions of CEC. When we applied the analysis of covariance between predicted and observed values we found significant scale-dependent correlations at scales of 50 and 500 m (0.82 and 0.73 respectively). DRS measurements can therefore be useful to predict CEC if predictions are required, for example, at the field scale (50 m). This study illustrates that the relationship between DRS and soil properties is scale-dependent and that this scale dependency has important consequences for the prediction of soil properties from DRS data.
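A minimal sketch of the reported prediction workflow (PCA compression of the reflectance spectra followed by PLS regression with leave-one-out cross-validation), using scikit-learn; the array names and component counts are assumptions, not those of the study:

```python
# Sketch of the DRS -> CEC workflow described above: PCA for dimension
# reduction, then PLS regression evaluated by leave-one-out cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict


def predict_cec(spectra, cec, n_pls_components=5):
    """spectra: (n_samples, n_wavelengths) reflectance matrix; cec: (n_samples,)."""
    scores = PCA(n_components=10).fit_transform(spectra)        # reduced variables
    pls = PLSRegression(n_components=n_pls_components)
    pred = cross_val_predict(pls, scores, cec, cv=LeaveOneOut())
    rmse = np.sqrt(np.mean((pred.ravel() - cec) ** 2))
    r2 = np.corrcoef(pred.ravel(), cec)[0, 1] ** 2
    return pred, rmse, r2
```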

Relevance: 30.00%

Abstract:

Land use and land cover changes in the Brazilian Amazon have major implications for regional and global carbon (C) cycling. Cattle pasture represents the largest single use (about 70%) of this once-forested land in most of the region. The main objective of this study was to evaluate the accuracy of the RothC and Century models at estimating soil organic C (SOC) changes under forest-to-pasture conditions in the Brazilian Amazon. We used data from 11 site-specific 'forest to pasture' chronosequences with the Century Ecosystem Model (Century 4.0) and the Rothamsted C Model (RothC 26.3). The models predicted that forest clearance and conversion to well managed pasture would cause an initial decline in soil C stocks (0-20 cm depth), followed in the majority of cases by a slow rise to levels exceeding those under native forest. One exception to this pattern was a chronosequence in Suia-Missu, which is under degraded pasture. In three other chronosequences the recovery of soil C under pasture appeared to be only to about the same level as under the previous forest. Statistical tests were applied to determine levels of agreement between simulated SOC stocks and observed stocks for all the sites within the 11 chronosequences. The models also provided reasonable estimates (coefficient of correlation = 0.8) of the microbial biomass C in the 0-10 cm soil layer for three chronosequences, when compared with available measured data. The Century model adequately predicted the magnitude and the overall trend in delta C-13 for the six chronosequences where measured delta C-13 data were available. This study gave independent tests of model performance, as no adjustments were made to the models to generate outputs. Our results suggest that modelling techniques can be successfully used for monitoring soil C stocks and changes, allowing both the identification of current patterns in the soil and the projection of future conditions. Results were used and discussed not only to evaluate soil C dynamics but also to indicate soil C sequestration opportunities for the Brazilian Amazon region. Moreover, modelling studies in these 'forest to pasture' systems have important applications, for example, the calculation of CO2 emissions from land use change in national greenhouse gas inventories. (C) 2007 Elsevier B.V. All rights reserved.
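A minimal sketch of the kind of agreement statistics used to compare simulated and observed soil organic C stocks (correlation, RMSE and bias); the abstract does not specify the exact statistical tests applied, so this is illustrative only:

```python
# Sketch: simple agreement statistics between simulated and observed SOC
# stocks along a chronosequence.  Array contents are illustrative.
import numpy as np


def agreement(observed, simulated):
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    r = np.corrcoef(obs, sim)[0, 1]               # correlation coefficient
    rmse = np.sqrt(np.mean((sim - obs) ** 2))     # root mean square error
    bias = np.mean(sim - obs)                     # mean over/under-prediction
    return r, rmse, bias
```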

Relevance: 30.00%

Abstract:

In this paper, a review is undertaken of the major models currently in use for describing water quality in freshwater river systems. The number of existing models is large because the various studies of water quality in rivers around the world have often resulted in the construction of new 'bespoke' models designed for the particular situation of that study. However, it is worth considering models that are already available, since an existing model, suitable for the purposes of the study, will save a great deal of work and may already have been established within regulatory and legal frameworks. The models chosen here are SIMCAT, TOMCAT, QUAL2E, QUASAR, MIKE-11 and ISIS, and the potential for each model is examined in relation to the issue of simulating dissolved oxygen (DO) in lowland rivers. These models have been developed for particular purposes and this review shows that no one model can provide all of the functionality required. Furthermore, all of the models contain assumptions and limitations that need to be understood if meaningful interpretations of the model simulations are to be made. The work is concluded with the view that it is unfair to set one model against another in terms of broad applicability, but that a model of intermediate complexity, such as QUASAR, is generally well suited to simulate DO in river systems. (C) 2003 Elsevier Science B.V. All rights reserved.
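None of the reviewed models' equations are reproduced in the abstract; as a generic illustration of the dissolved-oxygen balance such river models solve, the sketch below uses the classical Streeter-Phelps oxygen-sag relation for a single reach, which is not taken from SIMCAT, TOMCAT, QUAL2E, QUASAR, MIKE-11 or ISIS:

```python
# Sketch: classical Streeter-Phelps oxygen-sag relation, shown only as a
# generic example of a dissolved-oxygen (DO) balance in a river reach.
# kd = first-order BOD decay rate, ka = reaeration rate (both per day).
import math


def do_deficit(t_days, l0, d0, kd, ka):
    """Oxygen deficit (mg/l) at travel time t_days downstream, given initial
    BOD l0 (mg/l), initial deficit d0 (mg/l), and rate constants kd != ka."""
    return (kd * l0 / (ka - kd)) * (math.exp(-kd * t_days) - math.exp(-ka * t_days)) \
        + d0 * math.exp(-ka * t_days)


# DO concentration is saturation DO minus the deficit (values illustrative):
do_sat = 9.1    # mg/l at roughly 20 degrees C
print(do_sat - do_deficit(t_days=2.0, l0=10.0, d0=1.0, kd=0.35, ka=0.7))
```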

Relevance: 30.00%

Abstract:

The Rio Tinto river in SW Spain is a classic example of acid mine drainage and the focus of an increasing amount of research including environmental geochemistry, extremophile microbiology and Mars-analogue studies. Its 5000-year mining legacy has resulted in a wide range of point inputs including spoil heaps and tunnels draining underground workings. The variety of inputs and importance of the river as a research site make it an ideal location for investigating sulphide oxidation mechanisms at the field scale. Mass balance calculations showed that pyrite oxidation accounts for over 93% of the dissolved sulphate derived from sulphide oxidation in the Rio Tinto point inputs. Oxygen isotopes in water and sulphate were analysed from a variety of drainage sources and displayed delta O-18((SO4-H2O)) values from 3.9 to 13.6 parts per thousand, indicating that different oxidation pathways occurred at different sites within the catchment. The most commonly used approach to interpreting field oxygen isotope data applies water and oxygen fractionation factors derived from laboratory experiments. We demonstrate that this approach cannot explain high delta O-18((SO4-H2O)) values in a manner that is consistent with recent models of pyrite and sulphoxyanion oxidation. In the Rio Tinto, high delta O-18((SO4-H2O)) values (11.2-13.6 parts per thousand) occur in concentrated (Fe = 172-829 mM), low pH (0.88-1.4), ferrous iron (68-91% of total Fe) waters and are most simply explained by a mechanism involving a dissolved sulphite intermediate, sulphite-water oxygen equilibrium exchange and finally sulphite oxidation to sulphate with O-2. In contrast, drainage from large waste blocks of acid volcanic tuff with pyritiferous veins also had low pH (1.7), but had a low delta O-18((SO4-H2O)) value of 4.0 parts per thousand and high concentrations of ferric iron (Fe(III) = 185 mM, total Fe = 186 mM), suggesting a pathway where ferric iron is the primary oxidant, water is the primary source of oxygen in the sulphate and where sulphate is released directly from the pyrite surface. However, problems remain with the sulphite-water oxygen exchange model and recommendations are therefore made for future experiments to refine our understanding of oxygen isotopes in pyrite oxidation. (C) 2009 Elsevier B.V. All rights reserved.
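The "most commonly used approach" referred to above is a two-source oxygen-isotope mass balance between water-derived and O2-derived oxygen in the sulphate. A minimal sketch follows; the fractionation factors and the atmospheric-O2 value below are typical literature figures used purely for illustration, not those adopted in this study:

```python
# Sketch of the standard two-source oxygen-isotope mass balance for sulphate.
# x_water is the fraction of sulphate oxygen derived from water; the rest is
# assumed to come from dissolved O2.  Epsilon values and the atmospheric O2
# composition are illustrative literature figures, not the study's values.
def d18o_sulphate(x_water, d18o_water,
                  d18o_o2=23.5,      # atmospheric O2 (per mil), illustrative
                  eps_water=3.5,     # water-to-sulphate fractionation (per mil)
                  eps_o2=-11.4):     # O2-to-sulphate fractionation (per mil)
    """delta 18-O of sulphate for a given water-derived oxygen fraction."""
    return (x_water * (d18o_water + eps_water)
            + (1.0 - x_water) * (d18o_o2 + eps_o2))
```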

Relevance: 30.00%

Abstract:

The Covered Catchment Experiment at Gårdsjön is a large-scale forest ecosystem manipulation, in which acid precipitation was intercepted by a 7000 m2 plastic roof and replaced by 'clean precipitation' sprinkled below the roof for ten years between 1991 and 2001. The treatment resulted in a strong positive response in runoff quality. Runoff sulphate, inorganic aluminium and base cations decreased, while there was a strong increase in runoff ANC and a moderate increase in pH. The runoff continued to improve over the whole duration of the experiment. The quality achieved after ten years was, however, still considerably worse than the estimated pre-industrial runoff at the site. Stable isotopes of sulphur were analysed to study the soil sulphur cycling. In the initial years of the experiment, desorption of SO4 from the mineral soil appeared to control the runoff SO4 concentration. However, as the experiment proceeded, there was growing evidence that net mineralisation of soil organic sulphur in the humus layer was an additional source of SO4 in runoff. This might provide a challenge to current acidification models. The experiment convincingly demonstrated, on a catchment scale, that a reduction in acid deposition causes an immediate improvement of surface water quality even at heavily acidified sites. The improvement of the runoff appeared to be largely a result of cation exchange processes in the soil due to decreasing concentrations of the soil solution, while any potential change in soil base saturation seemed to be less important for the runoff chemistry over the short time period of one decade. These findings should be considered when interpreting and extrapolating regional trends in surface water chemistry to the terrestrial parts of ecosystems.

Relevance: 30.00%

Abstract:

General circulation models (GCMs) use the laws of physics and an understanding of past geography to simulate climatic responses. They are objective in character. However, they tend to require powerful computers to handle vast numbers of calculations. Nevertheless, it is now possible to compare results from different GCMs for a range of times and over a wide range of parameterisations for the past, present and future (e.g. in terms of predictions of surface air temperature, surface moisture, precipitation, etc.). GCMs are currently producing simulated climate predictions for the Mesozoic, which compare favourably with the distributions of climatically sensitive facies (e.g. coals, evaporites and palaeosols). They can be used effectively in the prediction of oceanic upwelling sites and the distribution of petroleum source rocks and phosphorites. Models also produce evaluations of other parameters that do not leave a geological record (e.g. cloud cover, snow cover) and equivocal phenomena such as storminess. Parameterisation of sub-grid scale processes is the main weakness in GCMs (e.g. land surfaces, convection, cloud behaviour) and model output for continental interiors is still too cold in winter by comparison with palaeontological data. The sedimentary and palaeontological record provides an important way that GCMs may themselves be evaluated and this is important because the same GCMs are being used currently to predict possible changes in future climate. The Mesozoic Earth was, by comparison with the present, an alien world, as we illustrate here by reference to late Triassic, late Jurassic and late Cretaceous simulations. Dense forests grew close to both poles but experienced months-long daylight in warm summers and months-long darkness in cold snowy winters. Ocean depths were warm (8 degrees C or more to the ocean floor) and reefs, with corals, grew 10 degrees of latitude further north and south than at the present time. The whole Earth was warmer than now by 6 degrees C or more, giving more atmospheric humidity and a greatly enhanced hydrological cycle. Much of the rainfall was predominantly convective in character, often focused over the oceans and leaving major desert expanses on the continental areas. Polar ice sheets are unlikely to have been present because of the high summer temperatures achieved. The model indicates extensive sea ice in the nearly enclosed Arctic seaway through a large portion of the year during the late Cretaceous, and the possibility of sea ice in adjacent parts of the Midwest Seaway over North America. The Triassic world was a predominantly warm world, the model output for evaporation and precipitation conforming well with the known distributions of evaporites, calcretes and other climatically sensitive facies for that time. The message from the geological record is clear. Through the Phanerozoic, Earth's climate has changed significantly, both on a variety of time scales and over a range of climatic states, usually baldly referred to as "greenhouse" and "icehouse", although these terms disguise more subtle states between these extremes. Any notion that the climate can remain constant for the convenience of one species of anthropoid is a delusion (although the recent rate of climatic change is exceptional). (c) 2006 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

The uptake and storage of anthropogenic carbon in the North Atlantic is investigated using different configurations of ocean general circulation/carbon cycle models. We investigate how different representations of the ocean physics in the models, which represent the range of models currently in use, affect the evolution of CO2 uptake in the North Atlantic. The buffer effect of the ocean carbon system would be expected to reduce ocean CO2 uptake as the ocean absorbs increasing amounts of CO2. We find that the strength of the buffer effect is very dependent on the model ocean state, as it affects both the magnitude and timing of the changes in uptake. The timescale over which uptake of CO2 in the North Atlantic drops to below preindustrial levels is particularly sensitive to the ocean state which sets the degree of buffering; it is less sensitive to the choice of atmospheric CO2 forcing scenario. Neglecting physical climate change effects, North Atlantic CO2 uptake drops below preindustrial levels between 50 and 300 years after stabilisation of atmospheric CO2 in different model configurations. Storage of anthropogenic carbon in the North Atlantic varies much less among the different model configurations, as differences in ocean transport of dissolved inorganic carbon and uptake of CO2 compensate each other. This supports the idea that measured inventories of anthropogenic carbon in the real ocean cannot be used to constrain the surface uptake. Including physical climate change effects reduces anthropogenic CO2 uptake and storage in the North Atlantic further, due to the combined effects of surface warming, increased freshwater input, and a slowdown of the meridional overturning circulation. The timescale over which North Atlantic CO2 uptake drops to below preindustrial levels is reduced by about one-third, leading to an estimate of this timescale for the real world of about 50 years after the stabilisation of atmospheric CO2. In the climate change experiment, a shallowing of the mixed layer depths in the North Atlantic results in a significant reduction in primary production, reducing the potential role for biology in drawing down anthropogenic CO2.
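A minimal sketch of the buffer (Revelle) factor idea referred to above, i.e. the relative change in surface-ocean dissolved inorganic carbon (DIC) per relative change in pCO2; the numbers are illustrative and are not taken from the models in the study:

```python
# Sketch of the ocean carbon buffer effect: for a given Revelle factor
# R = (dpCO2/pCO2) / (dDIC/DIC), the extra DIC taken up for a pCO2 increment
# shrinks as R rises (i.e. as more CO2 has already been absorbed).
def dic_increment(dic, pco2, dpco2, revelle_factor):
    """Additional DIC (same units as dic) absorbed for a pCO2 rise of dpco2."""
    return dic * (dpco2 / pco2) / revelle_factor


# Illustrative values only: a higher Revelle factor means weaker uptake for
# the same atmospheric CO2 increment.
print(dic_increment(dic=2000.0, pco2=380.0, dpco2=10.0, revelle_factor=10.0))
print(dic_increment(dic=2000.0, pco2=380.0, dpco2=10.0, revelle_factor=14.0))
```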

Relevance: 30.00%

Abstract:

Thirty‐three snowpack models of varying complexity and purpose were evaluated across a wide range of hydrometeorological and forest canopy conditions at five Northern Hemisphere locations, for up to two winter snow seasons. Modeled estimates of snow water equivalent (SWE) or depth were compared to observations at forest and open sites at each location. Precipitation phase and duration of above‐freezing air temperatures are shown to be major influences on divergence and convergence of modeled estimates of the subcanopy snowpack. When models are considered collectively at all locations, comparisons with observations show that it is harder to model SWE at forested sites than open sites. There is no universal “best” model for all sites or locations, but comparison of the consistency of individual model performances relative to one another at different sites shows that there is less consistency at forest sites than open sites, and even less consistency between forest and open sites in the same year. A good performance by a model at a forest site is therefore unlikely to mean a good model performance by the same model at an open site (and vice versa). Calibration of models at forest sites provides lower errors than uncalibrated models at three out of four locations. However, benefits of calibration do not translate to subsequent years, and benefits gained by models calibrated for forest snow processes are not translated to open conditions.

Relevance: 30.00%

Abstract:

Temperate-zone crops require a period of winter chilling to terminate dormancy and ensure adequate bud break the following spring. The exact chilling requirement of blackcurrant (Ribes nigrum), a commercially important crop in northern Europe, is relatively unknown. Chill unit models have been successfully utilized to determine the optimum chilling temperature of a range of crops, with one chill unit equating to 1 h exposure to the optimum temperature for chill satisfaction. Two-year-old R. nigrum plants of the cultivars 'Ben Gairn', 'Ben Hope' and 'Ben Tirran' were exposed to temperatures of -10.1 degrees C, -3.4 degrees C, 0.1 degrees C, 1.5 degrees C, 2.1 degrees C, 3.4 degrees C or 8.9 degrees C (+/- 0.7 degrees C) for durations of 0, 2, 4, 6, 8 or 10 weeks, and multiple regression analyses were used to determine the optimum temperature for chill satisfaction. (C) 2009 Elsevier B.V. All rights reserved.
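A minimal sketch of chill-unit accumulation as described above (one unit per hour at the optimum chilling temperature, with other temperatures weighted by an effectiveness between 0 and 1); the triangular weighting function is an assumption for illustration only, whereas the study estimates the optimum by multiple regression:

```python
# Sketch: accumulate chill units from an hourly temperature record.
# One full unit is credited per hour at the optimum temperature; the
# triangular fall-off away from the optimum is purely illustrative.
def chill_units(hourly_temps_c, t_opt=2.0, width=4.0):
    total = 0.0
    for t in hourly_temps_c:
        effectiveness = max(0.0, 1.0 - abs(t - t_opt) / width)  # 0..1 weight
        total += effectiveness                                  # 1 unit/h at t_opt
    return total
```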

Relevance: 30.00%

Abstract:

A comparison of the models of Vitti et al. (2000, J. Anim. Sci. 78, 2706-2712) and Fernández (1995c, Livest. Prod. Sci. 41, 255-261) was carried out using two data sets on growing pigs as input. The two models compared were based on similar basic principles, although their aims and calculations differed. The Vitti model employs the rate:state formalism and describes phosphorus (P) flow between four pools representing P content in gut, blood, bone and soft tissue in growing goats. The Fernández model describes flow and fractional recirculation between P pools in gut, blood and bone in growing pigs. The results from both models showed similar trends for P absorption from gut to blood and net retention in bone with increasing P intake, with the exception of the 65 kg results from Data Set 2 calculated using the Fernández model. Endogenous loss from blood back to gut increased faster with increasing P intake in the Fernández than in the Vitti model for Data Set 1. However, for Data Set 2, endogenous loss increased with increasing P intake using the Vitti model, but decreased when calculated using the Fernández model. Incorporation of P into bone was not influenced by intake in the Fernández model, while in the Vitti model there was an increasing trend. The Fernández model produced a pattern of decreasing resorption in bone with increasing P intake with one of the data sets, which was not observed when using the Vitti model. The pigs maintained their P homeostasis in blood by regulation of P excretion in urine. (c) 2005 Elsevier Ltd. All rights reserved.
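A generic sketch of the rate:state formalism mentioned above, with first-order flows linking pools of P in gut, blood, bone and soft tissue; the pool structure follows the description, but the flows and rate constants are illustrative and are not the Vitti et al. or Fernández parameterisations:

```python
# Generic sketch of a four-pool, first-order (rate:state) P-flow model.
# Pool names follow the description above; rate constants are hypothetical.
def step_p_pools(pools, intake, k, dt=1.0):
    """Advance pool masses (g P) by one time step of length dt (days).
    pools = {'gut': ..., 'blood': ..., 'bone': ..., 'soft': ...}
    k = first-order rate constants (per day) for each flow."""
    absorption   = k['gut_blood']  * pools['gut']      # gut -> blood
    endogenous   = k['blood_gut']  * pools['blood']    # blood -> gut (endogenous loss)
    faecal_loss  = k['gut_out']    * pools['gut']      # gut -> faeces
    bone_uptake  = k['blood_bone'] * pools['blood']    # blood -> bone
    resorption   = k['bone_blood'] * pools['bone']     # bone -> blood
    soft_uptake  = k['blood_soft'] * pools['blood']    # blood -> soft tissue
    soft_return  = k['soft_blood'] * pools['soft']     # soft tissue -> blood
    urinary_loss = k['blood_out']  * pools['blood']    # blood -> urine (homeostasis)
    pools['gut']   += dt * (intake + endogenous - absorption - faecal_loss)
    pools['blood'] += dt * (absorption + resorption + soft_return
                            - endogenous - bone_uptake - soft_uptake - urinary_loss)
    pools['bone']  += dt * (bone_uptake - resorption)
    pools['soft']  += dt * (soft_uptake - soft_return)
    return pools
```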

Relevance: 30.00%

Abstract:

The contribution investigates the problem of estimating the size of a population, also known as the missing cases problem. Suppose a registration system aims to identify all cases having a certain characteristic such as a specific disease (cancer, heart disease, ...), a disease-related condition (HIV, heroin use, ...) or a specific behavior (driving a car without a license). Every case in such a registration system has a certain notification history in that it might have been identified several times (at least once), which can be understood as a particular capture-recapture situation. Typically, cases which have never been listed on any occasion are left out, and it is this frequency that one wants to estimate. In this paper, modelling concentrates on the counting distribution, i.e. the distribution of the variable that counts how often a given case has been identified by the registration system. Besides very simple models like the binomial or Poisson distribution, finite (nonparametric) mixtures of these are considered, providing rather flexible modelling tools. Estimation is done using maximum likelihood by means of the EM algorithm. A case study on heroin users in Bangkok in the year 2001 completes the contribution.
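A minimal sketch of the simplest version of the estimation problem described above: a zero-truncated Poisson model for the counting distribution, used to estimate the number of cases never listed. The finite-mixture/EM extension discussed in the paper is not shown:

```python
# Sketch: zero-truncated Poisson estimate of the missing (never-listed) cases.
# counts holds, for each observed case, how often it was identified (all >= 1).
import math


def estimate_missing(counts, iters=200):
    """Return (lambda_hat, estimated number of unobserved cases)."""
    n = len(counts)
    mean = sum(counts) / n
    lam = mean                           # start from the untruncated mean
    for _ in range(iters):               # fixed point: mean = lam / (1 - exp(-lam))
        lam = mean * (1.0 - math.exp(-lam))
    p0 = math.exp(-lam)                  # probability of never being listed
    hidden = n * p0 / (1.0 - p0)         # Horvitz-Thompson-style correction
    return lam, hidden


# Illustrative data: most observed cases listed once or twice.
print(estimate_missing([1, 1, 1, 2, 1, 3, 2, 1, 1, 2]))
```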